Dataset columns: text (string, 234 to 589k chars) | id (string, 47 chars) | dump (62 classes) | url (string, 16 to 734 chars) | date (string, 20 chars, nullable) | file_path (string, 109 to 155 chars) | language (1 class) | language_score (float64, 0.65 to 1) | token_count (int64, 57 to 124k) | score (float64, 2.52 to 4.91) | int_score (int64, 3 to 5)
A video game development company maintains a library of tools and content for each project. Programmers and artists in multiple international offices must exchange content and coordinate production.
The entire library is several gigabytes, consisting of thousands of software, music, and video files. Less than 5% changes on a typical day, but the changes are often scattered across hundreds of disparate files.
The company had been using rsync to compare each developer's copy of the library to a central repository and update only the changed files. Scanning the entire library for changes took up to six hours for some locations, plus hours more to transfer changed data. Large updates sometimes took longer than overnight, with the central file server saturated during the process, delaying work the next day.
The company deployed SyncDat software with a single server and clients on each developer's workstation. Scan time was reduced to less than five minutes. Changes that used to take up to three hours to transfer now complete in less than half an hour.
It is no longer necessary to wait overnight for developers to synchronize their work. Faster turnaround and lighter server loads mean different offices can coordinate their work in near real-time.
Faster collaboration results in better quality and reduces the time to market for each project by weeks.
| <urn:uuid:b07dbf2a-0e1e-4612-8008-dc23c34fb277> | CC-MAIN-2017-09 | http://www.dataexpedition.com/solutions/collaboration.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00254-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953437 | 260 | 2.609375 | 3 |
Fox-IT has encountered various ways in which ransomware is spread and activated. Many infections happen through spam e-mails that lure the receiver into opening an infected attachment. Another method is impersonating a well-known company in a spam e-mail stating that an invoice or track&trace information is ready for download. By following the link provided in the e-mail, the receiver downloads the file containing the malware from a convincing-looking website. Distributing ransomware through malvertising (an exploit kit served via an advertisement network) is also a common way for criminals to infect systems.
In the past few months, Fox-IT’s incident response team, FoxCERT, was involved in several investigations where a different technique surfaced: activating ransomware from a compromised remote desktop server.
Before we get to why this might be lucrative for the criminals, how do they get access in the first place? RDP, or Remote Desktop Protocol, is a proprietary protocol developed by Microsoft to provide remote access to a system over the network. This can be the local network, but also the Internet. When a user successfully connects over RDP to a system running remote desktop services (formerly known as terminal services), the user is presented with a graphical interface similar to the one shown when working on the system itself. This is widely used by system administrators for managing various systems in the organization, by users working with thin clients, or for working remotely. Attackers mostly tend to abuse remote desktop services for lateral movement after gaining a foothold in the network. In this case, however, RDP is their point of entry into the network.
Entries in the log files show the attackers got access to the servers by brute forcing usernames and passwords on remote desktop servers that are accessible from the internet. Day in, day out, failed login attempts are recorded coming from hundreds of unique IP-addresses trying hundreds of unique usernames. Connecting remote desktop servers directly to the internet is not recommended and brute forcing remote desktop services is nothing new. But without the proper controls in place to prevent or at least detect and respond to successful compromises, brute force RDP attacks are still relevant. And now with a ransomware twist as well.
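To make that detection concrete, here is a minimal, illustrative sketch (not Fox-IT's tooling) of how failed-logon records exported from a Windows event log could be grouped by source IP to flag brute-force attempts against RDP. The record field names, the threshold and the time window are assumptions chosen for the example.

```python
from collections import defaultdict
from datetime import timedelta

def flag_bruteforce(records, threshold=50, window=timedelta(hours=1)):
    """Return source IPs that produced more than `threshold` failed RDP
    logons (Event ID 4625, logon type 10) within any one `window`."""
    per_ip = defaultdict(list)
    for r in records:
        if r["event_id"] == 4625 and r["logon_type"] == 10:
            per_ip[r["source_ip"]].append(r["timestamp"])

    suspects = {}
    for ip, times in per_ip.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            # Slide the window forward so it spans at most `window`.
            while t - times[start] > window:
                start += 1
            hits = end - start + 1
            if hits > threshold:
                suspects[ip] = max(suspects.get(ip, 0), hits)
    return suspects
```

Feeding a function like this with parsed security-log events, and alerting on the IPs it returns, is one simple way to notice a brute-force campaign before the attackers succeed.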
Image 1: Example network with compromised RDP server and attacker deploying ransomware.
After brute forcing credentials to gain access to a remote desktop server, the attackers can do whatever the compromised user account has permission to do on the server and network. So how could an attacker capitalize on this? Underground markets exist where RDP credentials can be sold for an easy cash-out. A more creative attacker could attempt all kinds of privilege escalation techniques to ultimately become domain administrator (if not already), but most of the time this is not even necessary, as the compromised user account might already have access to all kinds of network shares with sensitive data, for example personally identifiable information (PII) or intellectual property (IP), which can in turn be exfiltrated and sold on underground markets. The compromised user account and system could be added to a botnet, used as a proxy server, or used for sending out spam e-mail messages. There are plenty of possibilities, including taking the company's data hostage by executing ransomware.
Depending on the segmentation and segregation of the network, the impact of ransomware being executed from a workstation in a client LAN might be limited to the network segments and file shares the workstation and affected user account can reach. From a server though, an attacker might be able to find and reach other servers and encrypt more critical company data to increase the impact.
The power lies in the amount of time the attackers can spend on reconnaissance if no proper detection controls are in place. For example, the attackers have time to analyze how and when back-ups of critical company data are created before executing the ransomware. This helps them make sure the back-ups are useless for restoring the encrypted data, which in turn increases the chances of the company actually paying the ransom. In the cases where Fox-IT investigated the breaches, the attackers spent weeks actively exploring the network through scanning and lateral movement. As soon as the ransomware was activated, no fixed ransom was demanded; instead, negotiation by e-mail was required. Because the attackers have a lot of knowledge of the compromised network and company, their position in the negotiation is stronger than when infection takes place through a drive-by download or infected e-mail attachment. The demanded ransom reflects this and can be significantly higher.
Image 2: Example ransomware wallpaper.
Prevention, detection, response
Connecting Remote Desktop Services to the Internet is a risk. Such services, when not essential, should be disabled. If remote access is necessary, user accounts with remote access should have hard-to-guess passwords and preferably a second factor of authentication (2FA) or a second step in verification (2SV). To prevent eavesdropping on the remote connection, a strongly encrypted channel is recommended. Brute force attacks on remote desktop servers and ransomware infections can be prevented. Fox-IT can help improve your company's security posture and prevent attacks, for example through an architecture review, security audit or training.
If prevention fails, swift detection will reduce the impact. With verbose logging securely stored and analyzed, accompanied by 24/7 network and endpoint monitoring, an ongoing breach or malware infection will be detected and remediated. The Cyber Threat Management platform can assist in detecting and preventing attacks. And if business continuity and reputation are at stake, our emergency response team is available 24/7.
Wouter Jansen, Senior Forensic IT Expert at Fox-IT
| <urn:uuid:98ac941f-8e7f-4bc1-8095-dddbff11230f> | CC-MAIN-2017-09 | https://blog.fox-it.com/author/lindagerrits/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00482-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940505 | 1,134 | 2.90625 | 3 |
Researchers at Sandia National Laboratories are working on a computer that can tackle real-world situations in real-time and can run on the same power as a 20-watt light bulb.
Right now the only "machine" that can handle those functions is the human brain.
That's why scientists are trying to build a computer system that works more like a brain than a conventional computer.
"Today's computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they're horrible at just using common sense, seeing new patterns, dealing with ambiguity and making smart decisions," said John Wagner, cognitive sciences manager at Sandia National Laboratories, in a statement.
Scientists at Sandia, one of the U.S. Department of Energy's major research and development laboratories, are working on neuro-inspired computing as part of a long-term research project on future computing systems.
"We're evaluating what the benefits would be of a system like this and considering what types of devices and architectures would be needed to enable it," said Sandia microsystems researcher Murat Okandan. ""If you do conventional computing, you are doing exact computations and exact computations only.
"If you're looking at neurocomputation, you are looking at history, or memories in your sort of innate way of looking at them, then making predictions on what's going to happen next," he added. "That's a very different realm."
Neuro-computing systems are expected to be much better-suited to taking on big data problems, which the U.S. government, along with major enterprises, are working on. The systems also should be better at handling remote autonomous and semiautonomous systems that need greater, and different, computational power, as well as better energy efficiency.
Computers that function more like the human brain could better operate unmanned drones, robots and remote sensors. They also are expected to be ideal for handling complex analysis.
The systems would be able to detect patterns and anomalies, sensing what fits and what doesn't, according to Okandan.
A neuro-inspired computer would vary in its basic functioning from today's computer systems, which largely are calculating machines with a central processing unit and memory that stores a program and data. Today's machines take a command from the program and data from the memory to execute the command, one step at a time. Of course, parallel and multicore computers can do more than one thing at a time but still use the same basic approach.
However, the architecture of neuro-inspired computers is expected to be fundamentally different.
These future machines would be designed to unite processing and storage in a single network architecture "so the pieces that are processing the data are the same pieces that are storing the data, and the data will be processed with all nodes functioning concurrently," Wagner said. "It won't be a serial step-by-step process. It'll be this network processing everything all at the same time. So it will be very efficient and very quick."
A neural-based computer architecture also would have far more working connections.
Each neuron in a neural structure can have connections coming in from about 10,000 neurons, Sandia Labs explained. However, conventional computer transistors connect, on average, to four other transistors in a static pattern.
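To make the contrast concrete, here is a toy associative-memory sketch in the Hopfield style, where the same weight matrix both stores the patterns and performs the recall computation. It is a textbook illustration of the "processing and storage united in one network" idea, not Sandia's design.

```python
import numpy as np

def train(patterns):
    """Hebbian storage: the patterns are written into the connection
    weights themselves; there is no separate memory bank."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, probe, steps=10):
    """All nodes update together, using only the stored weights."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
w = train(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1, 1, -1])  # corrupted copy of pattern 0
print(recall(w, noisy))  # settles back to the stored pattern
```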
Computer scientists have focused on mimicking the brain's neural connections before but there's much more excitement about this project because of the advances being made in the field.
Even with these advances, Sandia Labs noted, more complex systems may still be "decades" away, though researchers should be able to create the new architecture, in simple forms, in the next few years.
This article, Researchers mimic human brain to build a better computer, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
| <urn:uuid:b6a1def5-4bb9-4131-9a47-f3ab542ab854> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2177009/data-center/researchers-mimic-human-brain-to-build-a-better-computer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00426-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954879 | 876 | 3.703125 | 4 |
Computer-related crimes may cause as much as $400 billion in losses annually, according to a new study that acknowledges the difficulty in estimating damages from such acts, most of which go unreported.
The study is the second to come from Intel's McAfee security unit in partnership with the Center for Strategic and International Studies (CSIS), a Washington, D.C.-based think tank.
It drew on publicly available data collected by government organizations and universities worldwide, including institutions in Germany, the Netherlands, China, Australia and Malaysia, as well as interviews with experts.
The low-end estimate of cyberattack-related losses is $375 billion, while the upper limit is $575 billion, it said.
"Even the smallest of these figures is more than the national income of most countries and governments," the report said.
In 2009, a McAfee study estimated global cybercrime costs at $1 trillion, a figure that was criticized and one that the company later said was flawed. In partnership with CSIS, McAfee released a study in May 2013 that said global cybercrime likely didn't exceed $600 billion, which is the estimated cost of the global drug trade.
The latest report acknowledges that most cybercrime incidents are unreported, few companies disclose attacks and that collecting consistent data is difficult since countries haven't agreed on a standard definition of what constitutes cybercrime.
"A few nations have made serious efforts to calculate their losses from cybercrime, but most have not," it said.
The study's authors found aggregate data for 51 countries across all regions of the world, together accounting for some 80 percent of the world's income. To extrapolate from that data to a global figure, the study "assumes that the cost of cybercrime is a constant share of national income, adjusted for level of development," according to the report.
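That extrapolation is, at heart, simple arithmetic. The sketch below illustrates the shape of the calculation with entirely made-up regional income figures and loss rates; these are not the numbers from the CSIS/McAfee report.

```python
# Hypothetical regional income (trillions of USD) and assumed cybercrime
# loss rates expressed as a share of national income. Illustrative only.
regional_income = {"North America": 20.0, "Europe": 22.0, "Asia": 25.0,
                   "Latin America": 6.0, "Africa/Middle East": 5.0}
loss_rate = {"North America": 0.0060, "Europe": 0.0040, "Asia": 0.0035,
             "Latin America": 0.0020, "Africa/Middle East": 0.0015}

global_loss = sum(income * 1e12 * loss_rate[region]
                  for region, income in regional_income.items())
print(f"Estimated global cybercrime cost: ${global_loss / 1e9:,.0f} billion")
```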
The study looked at direct and indirect costs of cyberattacks, such as the loss of intellectual property, business information, the cost of securing networks, reputational damage and the costs of recovering.
The growth of the Internet and its use for business means "the cost of cybercrime will continue to increase as more business functions move online," the report said.
U.S. companies suffered the highest losses. In general, "there are strong correlations between national income levels and losses from cybercrime," it said.
"Explaining these variations lies beyond the scope of this report, but one possibility is that cybercriminals decide where to commit their crimes based on an assessment of the value of the target and the ease of entry," the report said.
| <urn:uuid:6d5d919c-9282-4983-808a-aea3d4ace0ed> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2490566/security0/cybercrime-losses-top--400-billion-worldwide.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00002-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967955 | 560 | 2.859375 | 3 |
Proliferation of connected devices, systems and services has immense opportunities and benefits for our society. The connected devices, enabling seamless connections among people, devices and networks, play an essential role in our day-to-day life from fitness tracker, to cars, to health monitoring devices, to control systems and delivering utilities to our homes.
A new IoT device is coming online everyday with new features and functionalities. Through connected devices, health care is improving patient care such as a diabetic patient’s blood sugar level can now be monitored and analyzed by doctors remotely enabling quick treatment to a possible life-threatening situation.
A recent research study released by Juniper made predictions about the future of human interaction with technology. The report indicates that gesture and motion control will become vital for human-computer interaction in the coming years. The study found that by the end of 2016 there would be 168 million devices that utilize motion or gesture tracking, such as wearables, virtual reality devices and more. At this adoption rate, the study suggests there will be 492 million motion- and gesture-tracking devices by 2020.
However, IoT security has not kept up with the rapid pace of innovation and adoption, creating substantial safety, privacy and economic risks. The recent attack on the Dyn network exploited security flaws in inexpensive connected DVRs, webcams and surveillance cameras, and interrupted some of the biggest sites, including Twitter, Spotify and parts of Amazon. Moreover, with connected cars, airplanes, household appliances and industrial systems on the internet, there is a real risk of damage to life and property.
While the benefits of IoT are unlimited, the reality is that security is not keeping up with the pace of innovation and adoption. The IoT ecosystem introduces risks that include malicious actors manipulating the flow of information to and from network-connected devices or tampering with the devices themselves. This can lead to the theft of sensitive data and loss of consumer privacy, interruption of business operations, slowdown of the internet, and potential disruptions to critical infrastructure, ultimately impacting the economy. As IoT devices become crucial for keeping up with evolving markets, business and technology leaders need to be mindful of the security implications of this new technology.
Why are IoT devices susceptible to compromise?
Studies indicate that three-quarters of IoT devices today are susceptible to being compromised or hacked. Many of the vulnerabilities are due to weak or poorly protected passwords. Many IoT devices are low-margin products with little to no security built into them. Open vulnerabilities often cannot be patched: consumers have no way to know their devices are compromised, and in many cases even the manufacturers have no mechanism for fixing them.
It is high time for everyone, including device manufacturers, suppliers, system integrators, network owners and consumers, to get prepared and work in collaboration to secure and protect the IoT ecosystem.
How to address IoT security challenges
Many IoT vulnerabilities can be mitigated by following security best practices; the exception is low-cost devices that do not incorporate even basic security measures, which need to be removed from any critical locations and replaced. At the same time, there is a need to develop a comprehensive international standard and framework for IoT security. Security needs to be built in at the beginning of product design to reduce the cost of fixing bugs or vulnerabilities later in the product lifecycle.
Moreover, cybersecurity is a never-ending journey and should constantly evolve along with innovation. Security should be evaluated as an integral component of any connected device. By treating security as a feature of connected devices, manufacturers and service providers have an opportunity for market differentiation in IoT security management.
Even if security is included at the design stage of the product development lifecycle, vulnerabilities can still be discovered after products are deployed. It is therefore imperative to develop strong vulnerability management programs and to continually scan deployed devices and patch any new vulnerabilities that are found.
It is also critical to know and monitor the network of connected devices. If there is a clear inventory of the connected devices on the network, and the inventory database is updated whenever a device is added or removed, it becomes far easier to secure those devices and prevent someone from exploiting them.
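As a minimal illustration of that inventory idea (an assumption-laden sketch, not a product), the check below compares the devices currently seen on a network against a stored inventory and reports anything unknown or missing. The file name, MAC addresses and data source are invented for the example.

```python
import json

def load_inventory(path="known_devices.json"):
    # Stored inventory: MAC address -> human-readable description.
    with open(path) as f:
        return json.load(f)

def audit(known, seen_macs):
    """Compare devices currently seen on the network (e.g. from an ARP
    or DHCP listing) against the approved inventory."""
    seen = set(seen_macs)
    unknown = sorted(seen - known.keys())   # on the wire, not approved
    missing = sorted(known.keys() - seen)   # approved, but not seen
    return unknown, missing

if __name__ == "__main__":
    known = load_inventory()
    # In a real deployment this list would come from a network scan.
    currently_seen = ["a4:77:33:01:02:03", "10:bf:48:aa:bb:cc"]
    unknown, missing = audit(known, currently_seen)
    print("Unknown devices (investigate):", unknown)
    print("Expected but absent:", missing)
```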
| <urn:uuid:d502984b-7467-4cb8-9e3b-2e9d8d10bd05> | CC-MAIN-2017-09 | http://www.csoonline.com/article/3142624/internet-of-things/the-unlimited-potential-of-iot-and-security-challenges.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00178-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947876 | 885 | 2.796875 | 3 |
What Is an XFP Transceiver Module?
The XFP (10 Gigabit Small Form Factor Pluggable) is a standard for transceivers used in high-speed computer network and telecommunication links over optical fiber. It was defined by an industry group in 2002, along with its electrical interface to other components, which is called XFI. An XFP module is a device comprising both a transmitter and a receiver, combined so that they share common circuitry and a single housing.
The Development of the XFP Transceiver Module:
The XFP specification was developed by the XFP Multi Source Agreement Group. It is an informal agreement of an industry group, not officially endorsed by any standards body. The first preliminary specification was published on March 27, 2002. The first public release was on July 19, 2002. It was adopted on March 3, 2003, and updated with minor updates through August 31, 2005.
XFP Transceiver Module Details:
1. XFP modules are hot-swappable and protocol-independent.
2. They typically operate at optical wavelengths (colors) of 850 nm, 1310 nm or 1550 nm.
3. Principal applications include 10 Gigabit Ethernet, 10 Gbit/s Fibre Channel, Synchronous Optical Networking (SONET) at OC-192 rates, STM-64, 10 Gbit/s Optical Transport Network (OTN) OTU-2, and parallel optics links.
4. They can operate over a single wavelength or use dense wavelength-division multiplexing techniques.
5. They include digital diagnostics that provide management functions similar to those defined in the SFF-8472 standard.
6. XFP modules use an LC fiber connector type to achieve high density.
XFP Transceiver Module Accessories:
The XFP product line is designed to support 10 Gigabit Fibre Channel, 10 Gigabit Ethernet and OC192/STM-64. Constructed from a metal frame, the cage assembly is designed to be bezel-mounted to an I/O panel with compliant pins for pressing onto the host PCB. The cage assembly features four EMI gaskets, which block any EMI emissions emanating from the transceiver when installed. The front flange provides a flat surface to contact the EMI gasket (attached to the perimeter of the bezel cutout) and stabilizes the cage assembly during insertion and extraction of the transceiver. Heat sinks are optional for applications requiring increased heat dissipation and are attached to the cage assembly using a clip.
How the XFP Transceiver Module Works:
XFP 10 Gb/s transceivers are compliant with the current XFP Multi-Source Agreement (MSA) specification. XFP modules typically operate at near-infrared wavelengths (colors) of 850 nm, 1310 nm or 1550 nm. Principal applications include 10 Gigabit Ethernet, 10 Gbit/s Fibre Channel, OC-192 rates, STM-64, and 10 Gbit/s Optical Transport Network (OTN) OTU-2. FiberStore provides a full range of XFP modules, including DWDM XFP, CWDM XFP, BiDi XFP and 10G XFP, as well as compatible Cisco XFP, Juniper XFP, Brocade XFP, etc. FiberStore XFP modules use an LC fiber connector type to achieve high density.
For more information on XFP transceiver modules, please visit FiberStore. We also offer other modules, such as Cisco-compatible SFP (or Mini GBIC) transceivers.
| <urn:uuid:13126f40-a1ce-4721-a373-444fe1b24c03> | CC-MAIN-2017-09 | http://www.fs.com/blog/different-angles-to-understand-xfp-module.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00122-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.906303 | 746 | 2.78125 | 3 |
Among the bigger concerns about cloud computing are data governance and regulatory issues about where data can legally reside. For example, a lot of countries in Europe and Asia prohibit storing data created inside their borders on servers that are beyond their judicial reach.
For this reason, along with improved performance from being closer to users, Rackspace has become the latest cloud computing provider to deploy its platform in Europe. It already has data centers in the UK, so opening up a cloud computing service really only means deploying its cloud computing-management software on servers that are already installed.
Over time, these distributed services will be federated to give customers that do business internationally a cloud computing platform that can support their global operations, notes Pat Matthews, vice president and general manager for Cloud at Rackspace.
Technically, providers could offer services from anywhere on the globe. But this signals the globalization of cloud computing in a way that requires each provider to set up shop in the market it wants to serve. But local governments are becoming increasingly savvy about the economic implications of cloud computing, which just goes to show that like politics, all data is local.
| <urn:uuid:e8695a0d-9e9a-490a-bd8c-1c56e31bc287> | CC-MAIN-2017-09 | http://www.itbusinessedge.com/cm/blogs/vizard/the-globalization-of-cloud-computing/?cs=41244 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00298-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954711 | 226 | 2.609375 | 3 |
Swedish car manufacturer Koenigsegg holds many speed records, including breaking the 0-300 km/h record in just 14.53 seconds in 2011, holding the Nürburgring speed record of 311 km/h in 2006 and also winning the title of fastest production car in 2005 for a 389 km/h top speed. To continue on this winning trajectory, Koenigsegg’s Technical Director, Jon Gunner, turned to another ultra-fast technology: computers.
The company had an aggressive goal of developing a market-leading hyper car in just six months. As Gunner explains, they were on a mission to deliver a vehicle that would “outperform all production cars on the track and, without modification, also be able to reach a top speed of 440 km/h.”
Smaller than many of its competitors, the 50-person company needed a competitive edge that would enable it to outdesign rivals. To facilitate the development of its latest hyper car, Koenigsegg partnered with High Performance Computing (HPC) Wales, ICON Technology & Process Consulting Ltd. (ICON) and Fujitsu.
With ICON’s assistance, Koenigsegg’s engineers are accessing HPC Wales’ advanced computing infrastructure to simulate the aerodynamics of the car. It’s the type of experiment that would previously require the use of expensive physical prototyping using wind tunnels, but Koenigsegg was committed to an all-virtual design process.
David Green, Commercial Director of ICON, discusses the partnership, “440 km/h is far and away faster than any wind tunnel can ever reproduce. Koenigsegg also don’t have the resources to build lots of prototypes. So, wherever possible they use virtual design for the structure, aerodynamics and fluid dynamics. This makes Koenigsegg unique; they are entirely committed to virtual design. We saw an opportunity for ICON to help Koenigsegg develop the car by utilising our relationship with Intel, Fujitsu and HPC Wales. We worked out a deal that gave them rapid access to the multiple cores on the HPC Wales system, required for highly computationally intensive CFD simulation.”
Because the team would not be verifying the results with physical testing, the simulations had to be accurate. The proof would come on racing day.
The project took a couple months to complete and during that time Koenigsegg’s engineers carried out more than thirty different simulations of the aerodynamics. Some runs used about 128 cores and ran for twenty-four hours to capture one second of air flow over the car. That was the kind of detail they were after.
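A rough back-of-the-envelope calculation, using only the figures quoted above and assuming the thirty-plus runs were of broadly similar size, shows the scale of compute involved and why an in-house cluster was never going to be enough:

```python
cores_per_run = 128              # "about 128 cores"
hours_per_simulated_second = 24  # 24 hours for one second of airflow
runs = 30                        # "more than thirty different simulations"

core_hours_per_run = cores_per_run * hours_per_simulated_second
total_core_hours = core_hours_per_run * runs

print(core_hours_per_run)   # 3,072 core-hours per second of simulated airflow
print(total_core_hours)     # roughly 92,000 core-hours across the campaign

# On a 32-core in-house cluster, a single such run would monopolize
# the whole machine for about four days:
print(core_hours_per_run / 32 / 24, "days")  # 4.0 days
```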
Says Green: “We used a highly accurate method called Detached Eddy Simulation, where we make very few assumptions to simplify the simulation, but we can describe very accurately what happens to the air surrounding the car. There are cheaper ways of doing CFD simulations but, in a case like this, where Koenigsegg are not going to be able to do a lot of fine tuning with prototypes, we wanted the simulation to be as close to real life as possible. HPC Wales has enabled this prestigious project with Koenigsegg.”
The project is a perfect example of using HPC as a service to augment existing computational resources. The Swedish supercar maker already used iconCFD software in-house, but its cluster topped out at 32 cores. The system was sufficient for constructing models in-house and performing simple jobs, but it did not meet the requirements of this extravagant mission.
The payoff of all this computational work is a new supercar called One:1, which debuted at the 2014 Geneva Motor Show. In simulations, the "world's first megacar," as it's being called, hit its aggressive 440 km-per-hour (273 miles-per-hour) target. That's a few clicks higher than reigning speed champs the Hennessey Venom GT and the Bugatti Veyron Super Sport. The One:1 is also expected to go from 0-400 km/h (0-250 mph) in just 20 seconds. If the One:1 can match the simulated speed on the track, the company will have achieved its goal of creating the fastest production car in the world.
| <urn:uuid:04b3af7f-ac0b-4132-986f-e1b84df3a9f9> | CC-MAIN-2017-09 | https://www.hpcwire.com/2014/05/15/hpc-propels-fastest-production-car/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00474-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952667 | 891 | 2.734375 | 3 |
This week, NASA announced it would soon be launching a new HPC and data facility that will give earth scientists access to four decades of satellite imagery and other datasets. Known as the NASA Earth Exchange (NEX), the facility is being promoted as a “virtual laboratory” for researchers interested in applying supercomputing resources to studying areas like climate change, soil and vegetation patterns, and other environmental topics.
Much of the work will be based on high-resolution images of Earth that NASA has been accumulating since the early 70s, when the agency began collecting the data in earnest. Originally known as the Earth Resources Technology Satellite (ERTS) program, and later renamed Landsat, its mission was to serve up images of the earth, allowing scientists to observe changes to our planet over time. This includes tracking forest fires, urban sprawl, climate change and a host of other valuable information. Data generated by these satellites has been extremely popular in the global science community. In the last 10 years, more than 500 universities around the globe have used Landsat data to support their research.
Over time though, the program’s growth created a logistical problem. Multiple datasets eventually spanned facilities around the US, which presented challenges for researchers looking to retrieve satellite imagery. Recognizing the issue, NASA created the NEX program with the goal of increasing access to the three-petabyte library of Landsat data.
NEX will house all data generated by Landsat satellites and related datasets, as well as offer analysis tools powered by the agency's HPC resources. We spoke with NASA AMES Earth scientist Ramakrishna Nemani, who explained the purpose behind the NEX facility and how it has been implemented. "The main driver is really big data," he told HPCwire. "Over the past 25 years we have accumulated so much data about the Earth, but the access to all this data hasn't been that easy."
Prior to NEX, he said, researchers would be tasked with locating, ordering and downloading relevant data. The process could be time consuming because the satellite imagery they wanted could be housed at one or more locations. Even after locating the desired images, data transfer times would often be prohibitive.
NASA set out to solve the problem, leveraging one of their strongest assets: supercomputing. The agency decided to take all of the disparate datasets and migrate them to the AMES research center. “We said ‘let’s do an experiment.’ We already have a supercomputer here at AMES, so we can bring all these datasets together and locate them next to the supercomputer,” said Nemani.
That system, known as Pleiades, is the world’s largest SGI Altix ICE cluster and the agency’s most powerful supercomputer. Pleiades has been upgraded over time accumulating several generations of Intel Xeon processors: Harpertown, Nehalem, Westmere, and, most recently Sandy Bridge. For extra computational horsepower, the Westmere nodes are equipped with NVIDIA Tesla GPUs. Linpack performance is 1.24 petaflops, which earned it the number 11 spot on the June 2012 TOP500 list.
The system also includes 9.3 petabytes of DataDirect storage. Given that, AMES is now able to host the three petabytes of image data at a single location. But NEX was created to do more than hold all the satellite imagery under one roof. A collection of tools was developed to help researchers analyze the data using the Pleiades cluster.
For example, a scientist could create vegetation patterns with the toolset, piecing together images like a jigsaw puzzle. The program estimates that processing time for a scene containing 500 billion pixels would take under 10 hours. Without the NEX toolset, scientists would have to create their own computational methods to perform similar research.
While making Pleiades’ compute resources available was beneficial for researchers, it posed somewhat of a challenge for the NEX project team, since a certain level of virtualization is required to support concurrent access. The marriage of virtualization and supercomputing can be “tricky business,” according to Nemani, but the program had a unique plan in this regard.
“We have two sandboxes that sit outside of the supercomputing enclave,” he said. “We bring in people and have them do all the testing on the sandboxes. After they get the kinks worked out and they’re ready to deploy, we send them inside.”
Eventually, the program would like to have scientists run their own sandbox program and upload it to the supercomputer as a virtual machine.
While NEX has some cloud elements to it, NASA could not feasibly run the project on a public cloud infrastructure. "We are trying to collocate the computing and the data together, just like clouds are doing. I would not say this is typical cloud because we have a lot of data. I cannot do this on Amazon because it would cost me a lot of money," said Nemani.
The NEX program also features a unique social networking element, which allows researchers to share their findings. It’s not uncommon for scientists to move on after working a particular topic. However, this reduces access to codes and algorithms utilized in their research. These social media tools provided by NEX allow peers to go back and verify the results of previous experiments. Combined with access to HPC and the legacy datasets, the facility provides what may be the most complete set of resources of its kind in the world.
“Basically, we are trying to create a one-stop shop for earth sciences,” said Nemani.
| <urn:uuid:51c3975c-9431-4d84-9a17-71ed9b949be2> | CC-MAIN-2017-09 | https://www.hpcwire.com/2012/07/25/nasa_builds_supercomputing_lab_for_earth_scientists/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00174-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955226 | 1,180 | 3.671875 | 4 |
Running a Task Outside AutoMate (Command-line Operation)
Occasionally users may have a need to run a task created in AutoMate from outside of AutoMate. Usually the user wishes to run the task from one of the following:
- A batch file
- An external program
- The command line
- A desktop icon
The files AMTask.exe and AMTaskCm.exe (collectively AMTask) exist for this purpose and can be found in the AutoMate folder which is installed (by default) at "C:\Program Files\AutoMate 6\".
How to use AMTask
To use AMTask.exe or AMTaskCm.exe, enter the full path of AMTask.exe or AMTaskCm.exe enclosed in quotation marks, followed by the path of the .AML file associated with the task, also enclosed in quotation marks (see the syntax examples below). Once started, AMTask does not end until the task specified on the command line has finished processing.
Command Line Options
AMTask accepts several command-line parameters to control its operation:
- taskname
The filename of the task to run. If the task file name includes spaces it must be surrounded in quotes or improper operation will result. The first parameter must be "taskname".
- /v
Specifies variable(s)/value(s) to pass to the task. The format is semi-colon delimited name=value pairs. For example: /v:varname1=value1;varname2=value2;varname3=value3.
- /password <password>
If the task is password protected, use <password> to decrypt the task before execution.
Prompt for the password if the task is password protected and the /password parameter has not been specified. NOTE: This parameter is only available in AMTask.exe.
- /?
Displays the usage and syntax help in a message box (or writes it to standard output).
Since the variable list is semi-colon delimited, semi-colons are not allowed in the variable name or value to pass. This can be worked around by replacing a semi-colon with another special character before passing it to AMTask and configuring the task to replace it back to a semi-colon at run time using an embedded expression in the task. For example, if an exclamation point were used as a replacement character for a semi-colon, a Set Variable action at the beginning of your task using the expression Replace$(var1, "!", ";") as the new variable data would convert the exclamation points back to semi-colons.
The difference between AMTask.exe and AMTaskCm.exe
The two files AMTask.exe and AMTaskCm.exe work exactly the same with the exception of one characteristic. AMTask.exe is a pure Windows application and is designed to run a task and return to a Windows application when the task specified has been completed, whereas AMTaskCm.exe is a "console application" and is designed to be run from a command prompt or batch file. Additionally, AMTaskCm.exe emits proper return codes: 0 for task success, 1 for task failure and 2 if the task stops.
Why two files?
True windows applications will return immediately when run from the command prompt regardless of when they actually finish running; thus, using the original AMTask.exe one would not be able to determine when the launched task finished or retrieve a return code to determine its success or failure. To avoid this behavior, use AMTaskCm.exe which is designed for use in a console (command line or batch-file) environment.
Why not always use AMTaskCm.exe then?
When AMTaskCm.exe (a console application) is invoked from a true Windows application (not from the command prompt) it causes a command prompt (AKA DOS box) to appear if one was not already open. This is not visually appealing and can confuse users. The rule to remember is:
- Use AMTask.exe when launching from a Windows application, macro/script, or windows itself.
- Use AMTaskCm.exe when launching from the command prompt, a DOS based application or most importantly, a batch file.
The following are syntax examples that can be used when running a task outside of the Task Administrator.
"C:\Program Files\AutoMate 6\AMTask.exe" /?
Run a task
"C:\Program Files\AutoMate 6\AMTask.exe" "C:\Documents and Settings\All Users\Documents\My AutoMate Tasks\check email.aml"
Run a task and pass variables
"C:\Program Files\AutoMate 6\AMTask.exe" "C:\Documents and Settings\All Users \Documents\My AutoMate Tasks\check email.aml" /v:MyName=MrJones;Phone=213-738-1700
| <urn:uuid:8ee6d7c2-499e-45b4-b3e2-e56ef1dc10bf> | CC-MAIN-2017-09 | http://www.networkautomation.com/urc/knowledgebase/running-a-task-outside-automate-command-line-operation/7E2FECDF-A644-B1BE-0E0E9520C83F05AC/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00470-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.799138 | 1,042 | 2.53125 | 3 |
You've heard the term and probably read stories about smart homes where the toaster talks to the smoke detector. But what makes it all connect? When will it become mainstream, and will it work? These frequently asked questions help explain it all.
What is the Internet of Things?
There is no agreed-upon definition, but there is a test for determining whether something is part of the IoT: Does one vendor's product work with another's? Does a door lock by one vendor communicate with a light switch by another vendor, and do you want the thermostat to be part of the conversation?
Here's the scenario: As you approach the front door of your house, a remote control built into your key unlocks the door. The door's wireless radio messages the network, which prompts the hall light to turn on. The house thermostat, which was lowered after you left for work, returns to a comfort zone. Everything is acting in concert, which brings us to the elegant definition of IoT by Paul Williamson, director of low power wireless for semiconductor maker CSR: "A true Internet of Things is coordination between multiple devices."
What makes the Internet of Things almost human?
In a word: sensors. Many IoT devices have sensors that can register changes in temperature, light, pressure, sound and motion. They are your eyes and ears to what's going on in the world. Before we talk about what they do, let's describe them. These sensors are part of a device category called a microelectromechanical system (MEMS) and are manufactured in much the same way microprocessors are manufactured, through a lithography process. These sensors can be paired with an application-specific integrated circuit, or ASIC. This is a circuit with a limited degree of programming capability and is hardwired to do something specific. A sensor can also be paired with a microprocessor and will likely be attached to a wireless radio for communications.
Can you give an example of how IoT sensors work?
Here's the scene: You are away on vacation and the house is empty. A moisture sensor detects water on the basement floor. That sensor finding is processed by an app, which has received another report from a temperature sensor that detects the flow of water in the main water pipe. (When water flows, it takes away heat and lowers the temperature).
That both sensors are detecting anomalies is cause for concern. A high rate of flowing water may signal a burst pipe, triggering an automated valve shutoff; a slight water flow might be a running toilet, and the water on the basement floor by routine leakage from a heavy rain. In either case, you get a machine-generated message describing the findings.
Here's how you investigate. Via a mobile app, you get two one-time codes to unlock your front door, one for your neighbor and another for a plumber. When the door is unlocked, a text alert tells you who entered. Having knowledge of the condition of your home may be a big driver of IoT adoption.
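The decision logic in that scenario is simple enough to sketch in a few lines. The thresholds, field names and messages below are invented purely for illustration:

```python
def assess_basement(moisture_detected: bool, water_flow_lpm: float) -> str:
    """Combine two sensor readings into one verdict, roughly mirroring
    the burst-pipe scenario above. Threshold values are made up."""
    if moisture_detected and water_flow_lpm > 20:
        # High flow plus water on the floor: treat as a burst pipe.
        return "ALERT: possible burst pipe - close the main shutoff valve"
    if water_flow_lpm > 0.5 and not moisture_detected:
        return "NOTICE: slow continuous flow - check for a running toilet"
    if moisture_detected:
        return "NOTICE: water on basement floor - possible rain seepage"
    return "OK: no anomaly detected"

print(assess_basement(moisture_detected=True, water_flow_lpm=35.0))
print(assess_basement(moisture_detected=True, water_flow_lpm=0.0))
```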
How will IoT sensors work in public spaces?
Take parking. Cities are embedding sensors in on-street parking spaces from a company called Streetline that can detect if a car is parked in one. Drivers looking for a parking space use the company's mobile app, which lets them know when a space becomes available. Streetline has also added sound level and surface temperature sensors to help cities determine the best times to apply salt and use noise sensors to ensure compliance with ordinances.
In the public arena, a smartphone can double as a sensor. In Boston, as people drive down a road, the phone's accelerometer sensor will keep track of bumps. An accelerometer can tell up from down, but more precisely it measures acceleration. All it took to turn a smartphone into a road condition monitoring tool, was an app that used its existing sensor in a new way.
Do you want your bathroom scale to talk to your refrigerator?
The IoT opens up a lot of opportunity for creative app writers. Let's start with a smart refrigerator. You buy your groceries online and have them delivered to your home. It has now become advantageous for grocers and food product makers to add RFID tags to their products. The refrigerator knows what is inside via weight-sensitive shelves and expiration dates. It can also help you keep a grocery list, automate orders and provide nutritional information.
For instance, let's say you decide to take a pint of Ben and Jerry's ice cream out of the freezer. When that happens, a connected wireless speaker announces, loudly: "Please reconsider this selection. As requested, here is your most recent weight and BMI." The wireless speaker is reporting data collected from your bathroom scale. The scale was never designed to communicate with a refrigerator, but an app writer made it so by linking data from the scale and fridge. This scale-fridge-speaker combination may seem silly, but here's the point: In the IoT, app writers now have the ability to connect seemingly disparate things to create new types of functionality.
How do IoT devices communicate?
An IoT device will have a radio that can send and receive wireless communications. IoT wireless protocols are designed to accomplish some basic services: Operate on low power, use low bandwidth and work on a mesh network. Some work on the 2.4 GHz band, which is also used by Wi-Fi and Bluetooth, and the sub-GHz range. The sub-GHz frequencies, including 868 and 915 MHz bands, may have the advantage of less interference.
Why is low power and low bandwidth important in IoT?
Some IoT devices will get power from electrical systems, but many, such as door locks and standalone sensors, will use batteries. These devices send and receive small amounts of information intermittently or periodically. Consequently, the battery life of an IoT device can range from 1.5 years to a decade, if the battery lasts that long. One IoT maker, Insteon, uses both radio and powerline communication, which can send data over existing electrical wiring as well as via a radio, which it says will offer an increased measure of reliability.
What is a mesh network?
Devices in a mesh network connect directly with one another, and pass signals like runners in a relay race. It is the opposite of a centralized network. The transmission range of an IoT device on a mesh network is anywhere from 30 feet to more than 300 feet.
Since mesh network devices can hand-off signals, they have an ability to connect thousands of sensors over a wide area, such as a city, and operate in concert. Mesh networks have the added ability of working around the failure of any individual device. Wireless mesh IoT protocols include the Z-Wave Alliance, the Zigbee Alliance, and Insteon, which also has an alliance of vendors. These protocols aren't directly interoperable, although there are workarounds via hubs (more on this later).
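To see why the relay-race analogy matters, here is a toy forwarding sketch in which each node can only reach its immediate neighbors, yet a message still crosses the whole network by hopping, and delivery survives as long as some path exists. This is a conceptual illustration only, not the routing actually used by Z-Wave, ZigBee or Insteon.

```python
from collections import deque

# Toy mesh: each node can only "hear" its direct neighbors.
links = {
    "door_lock":       ["hall_light"],
    "hall_light":      ["door_lock", "thermostat", "hub"],
    "thermostat":      ["hall_light", "basement_sensor"],
    "basement_sensor": ["thermostat"],
    "hub":             ["hall_light"],
}

def deliver(src, dst, topology):
    """Hop-by-hop delivery, like runners in a relay race (breadth-first)."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route: the mesh is partitioned

print(deliver("door_lock", "basement_sensor", links))
# ['door_lock', 'hall_light', 'thermostat', 'basement_sensor']
```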
ZigBee is an open protocol, but its critics say that not all of its implementations are necessarily the same. ZigBee runs a certification to ensure standard deployments. Insteon and Z-Wave are proprietary, which may ensure standardization of implementation.
What's the best wireless network for the IoT?
Today, no wireless technology has a dominant market share in IoT applications. Nick Jones, an analyst at research firm Gartner, said more than 10 IoT wireless technologies will "get significant traction" in IoT applications. These wireless technologies include cellular, satellites and new communications such as Weightless, which uses "white space," or unoccupied TV channels. More importantly, no one wireless technology will meet every need and circumstance. A connected car, for instance, will use a cellular network to contact your home network.
Will I need a gateway or hub in the IoT?
A gateway, bridge or hub provides a connection point between your home network and other devices. The hub works with your home router and provides communications to the machines, devices and sensors that are part of your IoT universe. You will want, by default, your Zigbee smart meter to communicate with your Z-Wave or Insteon thermostat. This will also be true for the washing machine that is connected to a smart metering system and starts a wash only when electric rates are at their lowest point. These connections will be established through hubs that support multiple wireless technologies.
SmartThings, for instance, makes a hub that supports both Zigbee and Z-Wave, as well as a platform to build connecting applications. Eventually, these wireless technologies may be included in home routers, set-top boxes from your cable companies, or even devices such as a Google Chromecast.
Won't Bluetooth win in the end?
Bluetooth Low Energy was originally aimed at wearable technology, not the broad IoT market. But in early 2014, CSR, a semiconductor maker, announced a mesh network for Bluetooth, meaning it could now connect to thousands of things.
Bluetooth's ubiquity in mobile devices means that a Bluetooth mesh network as a broad IoT platform will have some advantages. Because Bluetooth is already a feature on smartphones, a smartphone could act as a management hub inside a home. But it's not perfect. A hub will be needed if someone wants to connect with the home network remotely, such as from work.
Do the big consumer product vendors really want an Internet of Things?
Skeptics say it's unlikely that all the big vendors will embrace open standards. A more likely outcome for the IoT are technological islands defined by proprietary data interchanges.
Without open standards or open communication protocols, devices on the network won't be able to share data and work in concert. Will Apple develop products that can connect with Samsung products? Will Bosch products communicate with those from Samsung or Sears? Maybe not.
Consumers will be frustrated and will be told that they need to buy into a particular vendor's product partner network to get a full IoT experience.
Can open source force the big vendors to play nice?
Open source advocates are hoping they can avert a fracturing of the IoT. The Linux Foundation, a nonprofit consortium, created the AllSeen Alliance and released a code stack in late 2013 that can be used by any electronics or appliance maker to connect one product to another. The alliance hopes that the sheer weight of adoption of this stack, called AllJoyn, will help push the IoT toward open standards. AllJoyn is agnostic about wireless protocols, and support for Bluetooth LE, ZigBee and Z-Wave can be added easily by the community.
Will the IoT destroy what little privacy you have left?
Privacy advocates are plenty worried about the IoT's impact on consumers. Part of this is due to the arrival of IPv6 addresses, the next generation Internet protocol. It replaces IPv4, which assigned 32-bit addresses, with a total limit of 4.3 billion; IPv6 is 128-bit, and allows for 340 trillion trillion trillion addresses or 340,000,000,000,000,000,000,000,000,000,000,000,000. This makes it possible to assign a unique identifier to anything that's part of the IoT (although not everything needs to be IP addressable, such as light switches). This may enable deep insights into a home. Smart metering systems, for instance, will be able to track individual appliance use.
"Information about a power consumer's schedule can reveal intimate, personal details about their lives, such as their medical needs, interactions with others, and personal habits," warned the Electronic Privacy Information Center, in testimony in late 2013 at a Federal Trade Commission workshop. This is information that may be shared with third parties. At this same FTC workshop, another leading privacy group, the Center for Democracy and Technology, outlined its nightmare scenario.
Light sensors in a home can tell how often certain rooms are occupied, and temperature sensors may be able to tell when one bathes, exercises or leaves the house; microphones can easily pick up the content of conversations. The message is clear: Courts, regulators and lawmakers will be fighting over IoT privacy safeguards for years to come.
Will my smart washer attack me?
Security experts are worried that consumers won't be able to tell the difference between secure and insecure devices on their home network. It will be a threat to enterprise networks as well. These devices, many of which will be cheap and junky and made by who-knows-who overseas, may not have any security of their own.
Security researchers imagine problems, such as the connected toilet, demonstrated at a recent Black Hat conference, which flushed and closed its lid repeatedly. Hackers could create havoc by turning appliances and HVAC systems on and off. Baby monitors have been successfully taken over by outsiders. One advantage that IoT security may have is it's still in its early stages, and the security community has a chance to build IoT systems with a strong measure of protection. Cisco is fishing around for ideas. The company is running a contest (with a June 17 submission deadline) with $300,000 in prize money for ideas for securing the IoT.
When will the Internet of Things be ready for prime time?
Vendors will be sorting out the various protocols and technologies for years. Consumers are curious, perhaps, but sensors and hubs for the home aren't flying off the shelves. There are real IoT uses today, especially for home monitoring and security. For now, the big users of sensor networks and remote intelligence gathering are businesses and governments.
| <urn:uuid:a3cb2fba-9c40-4b1b-bf26-12d361537d74> | CC-MAIN-2017-09 | http://www.cio.com/article/2376518/internet/the-abcs-of-the-internet-of-things7.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00170-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947497 | 2,748 | 3.21875 | 3 |
UICC stands for Universal Integrated Circuit Card. It is a new generation SIM (Subscriber Identification Module) included in cell phones or notebooks used in some high speed wireless 3G networks such as those of AT&T or T-Mobile. The UICC identifies you to your wireless carrier so they know your plan and services. It can store your contacts and enables a secure and reliable voice and multi-media data connection, global roaming and remotely adding new applications and services. The UICC is the best and only universal application delivery platform that works with any 3G or 4G device. For example, subscribers will be able to easily transfer their phone book, contacts and preferences from one handset to another.
Technically, the UICC works in all mobile telecom networks. It is a type of smart card technology. Smaller in size than a full card, it contains a computer, or microprocessor, its own data storage and software. It is an evolution of the SIM used to identify subscribers in GSM networks. GSM, or Global System for Mobile Communications, is the most popular standard for wireless technology in the world with 3 billion users (41 percent of the population).
Like the SIM, the UICC has an application that stores your contacts and another that can help you optimize your call costs when roaming globally by maintaining a list of preferred networks in the UICC.
A big advantage of the UICC over the SIM, however, is it can have multiple applications on it. One of these, the USIM application, is what identifies you and your phone plan to your wireless service provider using one of these standards: Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA) or Long Term Evolution (LTE). Another application, the CDMA SIM (CSIM), enables access to CDMA networks, which are different from GSM or UMTS networks. Other possible applications include ISIM, to secure mobile access to multimedia services, and non-telecom applications such as payment. In the United States for example, many subscribers have a UICC with USIM and ISIM applications for phone service and multimedia respectively.
Another advancement is that the UICC can communicate using Internet Protocol (IP), the same standard used in the Internet and the new generation of wireless networks. It also can support multiple PIN codes, which can better protect your digital life by preventing anyone from misusing personal information.
|
<urn:uuid:3d8cc292-16bc-441e-b62e-7b7f909822e1>
|
CC-MAIN-2017-09
|
https://www.justaskgemalto.com/us/what-uicc-and-how-it-different-sim-card/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00170-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.90719 | 498 | 2.546875 | 3 |
One of the education world's newest trends is using technology to help schools function with the sort of efficiencies normally found in the business world. Enterprise Resource Planning (ERP) packages were among the first tools to move from the business world to the education world. ERP
systems, with their combination of financial, human-resources and procurement functions, are now being used by many educational institutions to streamline complex administrative functions.
Schools are also looking to technology as a means of measuring accountability. The Ohio Department of Education (ODE) is one of the pioneers in this area.
In 1998, the Ohio Legislature enacted SB 55 to improve school accountability. Specifically, SB 55 required the ODE to generate "report cards" measuring district and building-level performance in 18 different areas for all 611 school districts in the state. In doing so, the department and state lawmakers hoped to more easily compare and contrast schools with state performance standards and to identify which institutions were excelling and which were falling behind.
The assignment was a difficult one, admitted the department's chief information officer, Rob Luikart. "The report cards required us to collect data from schools and from proficiency-testing companies. But even after all the data was collected, giving it meaning within the confines of a paper report was not easy. You could see the data, but making sense of it and giving it real value was going to require a lot more."
When Luikart began working for the department earlier in 1998, the organization stressed to him its goal of developing a technology plan and data architecture. Luikart, therefore, decided that a technical approach to the report-card dilemma might be the perfect solution.
"Like many Fortune 500 companies, much of our financial and management information was locked up in legacy systems that didn't talk to one another," he said. "We needed to take a more enterprise view of that information, so we undertook a project to build a data warehouse. That was sort of a watershed event for the agency."
The first function of the new data warehouse was to compile the ODE's school-report-card information. Not only would the system efficiently compile all the components from the districts, it would also allow the department to build an interactive version of the report card to be placed on the Internet.
"We wanted to create an e-government environment, meaning we could make this information -- which is important to constituents, the public, legislators, school boards, administrators, parents, teachers and sometimes even kids -- easily accessible to all of them. We also wanted to add value to it," Luikart said.
Putting it in Place
Once the department decided what it wanted, it went looking for strategic partners and best-of-breed practices in warehouse design and implementation. The ODE utilized the expertise of several consulting groups, chose partners and began building the system.
Today, the department's report cards are compiled electronically and available to anyone at its Web site . There, visitors can view a report card from any Ohio school, examine proficiency-test information, attendance and enrollment data, student achievement statistics, teacher qualifications, graduation rates or annual spending per pupil. They can also instruct the system to compile sophisticated reports. "It allows people to look at trend information. It reveals trends on a given district's three-year performance and shows how well it performed in comparison to similar districts and the overall state average," said Luikart. "It provides data that wouldn't be easy to display on the paper report card because it would take up too much space. Things you can't do with paper can be done easily on the Web, using decision-support software and other tools."
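The trend comparison Luikart describes, a district's multi-year performance set against similar districts and the statewide average, is at heart a simple aggregation. The sketch below illustrates the idea with a made-up table of pass rates; the column names and numbers are invented for illustration and are not the ODE's actual schema or data.

```python
import pandas as pd

# Hypothetical report-card data: one row per district per school year.
scores = pd.DataFrame({
    "district":  ["Maple", "Maple", "Maple", "Oak", "Oak", "Oak"],
    "year":      [1998, 1999, 2000, 1998, 1999, 2000],
    "pass_rate": [71.0, 74.5, 78.2, 65.0, 66.1, 69.3],
})

# Three-year trend for one district.
district_trend = scores[scores["district"] == "Maple"].sort_values("year")

# Statewide average for each year, for side-by-side comparison.
state_avg = scores.groupby("year")["pass_rate"].mean().rename("state_avg")

comparison = district_trend.set_index("year")[["pass_rate"]].join(state_avg)
print(comparison)
```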
According to Luikart, access to student data at the individual school level allows everyone involved in education to make better decisions for Ohio's
students. From the legislative point of view, the department's Web site will improve accountability by showing the state what it's getting -- or not getting -- for its investment. "We'll be able to see what methods of teaching and curriculum are most effective, if programs are having an impact, etc. That's data that wouldn't be easily understood, or even available, prior to the report cards."
Meanwhile, school administrators will use the system to formulate long-term strategies and spot downward trends before they become serious problems. If one school in the state performs excellently, other schools can easily emulate the methods. Parents can use the system to choose the best school for their children. If a school is performing poorly, communities can organize to make changes.
"Few factors have greater impact on student performance than parent and community involvement," said Luikart. "For this reason, providing information to parents and community members in an interactive environment is important. We want them to understand what questions to ask of teachers, administrators and students; to look for areas of strength and weakness and to understand them so they can make informed decisions. Ohio is a local-control state, so there is a great emphasis on local decision-making. Having the information on the Web gives people a basis on which to ask those kinds of questions."
Rob Silverman of MicroStrategy Inc., one of the ODE's partners, said the ODE is one of the few organizations to realize that providing interactive information over the Web can help build a bond with constituents and deliver better service. "This is an organization that's using technology in a way that really adds value to the public," Silverman said. "It is one of the pioneers in doing so."
With their interactive report cards in place, the department is already looking to take the data warehouse to the next level. Plans include making financial and program information available electronically. "It will be interesting to see how providing this kind of information and this kind of tool unfolds and how it might influence activities in the future," said Luikart. "This is just the beginning of a trend in government and education of moving information into a forum where the public can actually use it."
Justine Kavanaugh-Brown is editor in chief of California Computer News, a Government Technology sister publication. E-mail
|
<urn:uuid:d27a5b0c-c27a-4daf-9b95-fd7a69717102>
|
CC-MAIN-2017-09
|
http://www.govtech.com/magazines/gt/An-ODE-to-Accountability.html?page=2
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00346-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.962813 | 1,247 | 2.71875 | 3 |
Sci-fi movies have warned us again and again: Sooner or later, our technology will destroy us.
The moment will come when machines become so smart, they will become a force for destruction rather than the engine of our general betterment.
Will it be the smart energy grid that pushes us over the line? There is growing concern that the automated, intelligent interplay between elements of the power grid could produce new and deeply hazardous vulnerabilities.
Consider first the upside. Smart grid technology enables two-way communication between disparate elements of the power generation, transmission and distribution chain. Constant feedback allows the system to detect and respond to local changes. As a result, lost power can be restored more quickly; systems can respond to peak demands; renewable sources can be integrated into conventional systems more easily.
“The downside is that with this higher degree of coordination comes a higher degree of vulnerability. If bad actors understand the new control paradigm, they can herd the grid to certain places, they can trick the grid operators or the automated equipment to respond in certain ways,” said Battelle Memorial Institute Research Leader Jason Black.
Within the complexity of the smart grid structure, these threats can take any number of forms.
Vulnerability begins at a mundane level, with the plain old physical attack, explosive or otherwise, targeted at the computing centers that run the smart grid. While power plants may have some level of physical security, data centers often do not have such protections. “Some of these major control centers are located in standard office buildings. They are not even located behind concrete barriers,” said Thomas Popik, chairman of the nonprofit Foundation for Resilient Societies, which conducts research into the U.S. power grid.
In addition to physical threats, more complicated attacks also are possible — attacks that seem to mirror Hollywood scenarios. Specifically, computer-based control systems also may be vulnerable to electromagnetic attack, the kind of mass shock wave that disrupts digital transmissions, as depicted in the movie Ocean’s Eleven. In that case, con artists use such a pulse to take a city’s grid offline for a few crucial moments.
“The same thing can happen to any vulnerable electronic component of the smart grid,” Popik said. Nor would the attackers be particularly noticeable. Such a disruptive device could easily fit into a standard van.
It’s a solvable problem, but the solutions have to be implemented early. One solution is a Faraday cage, an enclosure of conductive material that shields equipment from the pulse. You can cage individual pieces of equipment or enclose a whole room. Defense against electromagnetic attack also can be built into new energy facilities, adding 5 percent to the overall cost. Those who add such defenses as a retrofit typically find the costs to be about four times more.
The most widely recognized vulnerability in the smart grid lies in the software itself, the programming that directs the actions of the system. “We’ve tested around 30 different products from over 20 different vendors since April 2013 and we found 85 percent of those have low-hanging vulnerabilities,” said Adam Crain, a partner at Automatak. In examining energy industry software, Crain’s team has found a range of issues that may lead to possible exploitation.
While standards for security may be adequate, implementation is far less certain. Even with the security standards in hand, “now the coders have to take this complex standard — it’s 1,000 pages long — and translate it into software, and that is no easy task,” Crain said. While that software is then tested for functionality, it is seldom tested for security. Bad actors can slip in through security gaps and spread mass damage relatively easily.
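The gap Crain describes between functional testing and security testing is easy to illustrate with a toy example. The parser below is a stand-in, not any real grid protocol or vendor product; the point is simply that feeding malformed frames to a parser and checking that it fails safely is a different exercise from checking that well-formed frames work.

```python
import os

def parse_frame(frame: bytes) -> bytes:
    """Toy parser: a 2-byte big-endian length prefix followed by a payload."""
    if len(frame) < 2:
        raise ValueError("frame too short")
    declared = int.from_bytes(frame[:2], "big")
    payload = frame[2:]
    if declared != len(payload):        # bounds check: reject inconsistent frames
        raise ValueError("length field does not match payload size")
    return payload

# Functional test: one well-formed frame parses correctly.
assert parse_frame(b"\x00\x03abc") == b"abc"

# Crude fuzz test: random junk must raise a clean error, never crash or hang.
for _ in range(10_000):
    junk = os.urandom(os.urandom(1)[0])   # random length 0-255, random bytes
    try:
        parse_frame(junk)
    except ValueError:
        pass                              # rejecting bad input is the desired outcome
```

Real assessments of control-system software use far more sophisticated fuzzers and protocol models, but the principle is the same: the security question is how code behaves on input it was never meant to see.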
“One of the things we found was that the master stations, the control centers, were vulnerable,” Crain said. All it takes is one unsecured power pole to get to the control center: Because everything is interconnected, even a small gap opens the door back to the master control system, giving a bad actor access to literally the entire system.
Where are the weak links? Virtually everywhere. Power poles, capacitors, voltage regulators, power quality meters, smart readers in people’s homes, electrical vehicle charging stations. Name it. Anything that isn’t locked down is a potential source for exploitation, an open window into the beating heart of the network.
All these cyberthreats are in a sense built into the very nature of the smart grid. “There is a huge culture gap, especially in the electrical power space,” Crain said. “The grid was designed for physical reliability, resistance to storms. It was not designed for resistance to cyberattacks. When you mention this issue to people in the field, the best they can say is, ‘This is why we have redundancy.’ But redundancy doesn’t help you if that redundant asset has the same software and the same vulnerability.”
Now let’s start to think about the impact of all of this on the emergency management community. There’s the obvious threat of mass blackouts, and we’ll come to that. But consider first the “smartness” of the smart grid and all that it implies for people on the front lines of emergency response.
The smart grid depends on the intelligence of the devices to which it is connected. Diverse elements within the power chain must have some degree of awareness, as it were, if they are to communicate effectively up and down the line. This native hardware intelligence poses real risk as the power system becomes increasingly smart. The more intelligent the devices, the more widespread the risk.
At the Northern California Regional Intelligence Center, Cyber Intelligence Analyst Donovan Miguel McKendrick points to the innocuous-seeming Philips Hue light bulb. The bulb’s color can be adjusted to meet a range of settings, a nice feature for changing the mood in your living room. As the manufacturer describes it, users can “[e]xperiment with shades of white, from invigorating blue to soothing yellow. Or play with all the colors in the hue spectrum. … Relive your favorite memories. Even improve your mood.”
Hue is controlled by a smartphone app. Plug it into the smart grid, however, and it becomes theoretically possible to control the light from outside the app via software hack. Then the system’s own intelligence becomes a point of entry for destructive players. “Now suppose that light bulb is installed in an emergency room and someone shuts it off during a procedure,” McKendrick said. “That’s a worst-case scenario.”
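To make the scenario concrete, many consumer hubs and bulbs expose a simple local HTTP interface to their paired apps. The request below is a generic illustration of that pattern; the address, path and token are invented and are not the actual Philips Hue API. The point is that any device that can reach the endpoint, legitimate app or not, can issue the same command.

```python
import requests

# Hypothetical local endpoint for a connected light; real products differ.
BRIDGE = "http://192.168.1.50"
TOKEN = "example-app-token"   # whatever credential the hub hands to a paired app

# Turn the light off. If the hub does not check *who* is asking beyond this
# token -- or accepts requests with no token at all -- anything on the same
# network, including a compromised device, can send the identical request.
resp = requests.put(
    f"{BRIDGE}/api/{TOKEN}/lights/1/state",
    json={"on": False},
    timeout=5,
)
print(resp.status_code, resp.text)
```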
Knowing that such things could happen, one moves quickly to considering the possibility that they will.
“The main concern is that somebody is going to get into the system with the motivation and the skill set to do serious damage and to cause loss of life,” McKendrick said. He points to the example of DarkSeoul, a hacking organization credited with successful cyberattacks on South Korea’s banks, television broadcasters, financial companies and government websites.
“That is exactly the concern,” McKendrick said. “That somebody could use a piece of malware like that and target critical infrastructure in America. With the smart grid, it would not be very difficult.”
The same techniques could of course be used to turn off all the lights. As emergency managers begin to contemplate these unpleasant scenarios, it seems reasonable to ask just how big a threat we’re talking about. While the smart grid is by no means ubiquitous, it is rapidly gaining a place among the most prominent energy management models.
More than 26 percent of public utilities and 28 percent of investor-owned utilities (IOUs) are in the early planning stages of developing a smart grid, according to the latest Strategic Directions in the U.S. Electric Industry report from engineering and consulting firm Black & Veatch.
More than 36 percent of public utilities and 23 percent of IOUs are actively deploying physical infrastructure updates, while about 13 percent of public and 17 percent of IOUs are deploying IT infrastructure. Clearly there’s momentum here.
Let’s back up and consider the big picture. The problems are apparent: buggy software, physical vulnerability, the inherent interconnectedness of intelligent devices. Yet the technology exists to remediate the worst of these. So what’s the problem?
As is so often the case, one of the main problems is the human element.
While the expertise exists to address the technological challenges of the smart grid, it does not always exist in the right places, said Adam Cahn, CEO of Clear Creek Networks, a company that builds computer networks for utilities.
“The problem is that the power engineers who are responsible for the management of the physical electric grid are not experts in the data networks that support the grid. They lack the skills to properly configure network devices,” he said. Data network professionals on the other hand may have a solid grasp on the technology, but may not understand the subtleties of power generation.
For emergency management, there are other aspects of the human element that may be more directly controlled.
When it comes to the prevention of crises, emergency managers often assume the role of educator, whether it comes in the form of a smoke alarm campaign or hurricane response instructions. In the case of the smart grid, that same up-front effort could help prevent disaster or at least speed remediation.
“The holes are generally in the human component. Humans are the weakest link every time,” McKendrick said. “So a large part of the job is not just responding to the cyberthreat but also educating the humans.”
Emergency managers likely will face significant hurdles in trying to get their voices heard. “There’s no awareness about how serious it is,” McKendrick said. “People hear about Anonymous, they hear about these malicious threats. But every time it comes out in the media, they say it is the end of the world. Then when the world doesn’t end, people just tend to write it off. Then you have the majority of the populace that just ignores it altogether. ‘My job isn’t in IT, so what do I care?’”
To help raise awareness, emergency managers should build alliances well in advance of a crisis, Black said. It’s important to forge ties with local utilities, to understand the vulnerabilities, to share response plans. If first responders know in advance that a certain school is going to become an ad hoc shelter, it makes sense to tell the utilities that, so that the school can become a priority for the return of power.
While emergency managers might not be able to control malicious actions on the grid, they can play an active role in shaping the policies that are meant to safeguard the system, Popik said. In particular, he recommends leaders join the North American Electric Reliability Corp., which sets policy for the energy industry.
“They need to become involved in the political process and demand the regulation, security and reliability of the electric grid,” he said.
This story was originally published by Emergency Management.
|
<urn:uuid:0b831dc0-8c59-4da7-a05a-775af482fbb4>
|
CC-MAIN-2017-09
|
http://www.govtech.com/public-safety/Securing-the-Smart-Energy-Grid-a-National-Concern.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00346-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.94308 | 2,337 | 3.015625 | 3 |
1:1 computing device programs continue to gain popularity in schools worldwide. E-learning computing devices such as iPads, Chromebooks, tablets, and laptops have helped to engage students, improve technology skills and collaboration, encourage blended learning, and assert cost savings in other areas such as textbooks and paperwork.
Seeking to provide their students with the skills they need for the 21st Century, Bethel Park School District began their 1:1 technology initiative with the 2014-2015 school year. The District’s four-year implementation will provide 1:1 computing devices for all students in grades K-12. Students will benefit from online access to many of their online textbooks and a variety of educational programs already in use in the District, such as Compass Odyssey, Study Island, Spell City, and Math Counts. The devices will also facilitate collaborative activities such as group writing and peer editing.
The District selected Chromebooks for their students because of their cost effectiveness and compatibility with the Google Suite of educational applications. They needed affordable, well-built Chromebook carts to support their technology initiative for students in grades K-6 where students are too young to take the computers home. The District needed to store the Chromebooks safely and securely in every classroom and have them readily accessible to students.
Bethel Park School District chose Black Box, a Pittsburgh-based global provider of charging and storage solutions for e-learning devices, as its solution for Chromebook storage and security. "Working with a local, high-quality, and affordable vendor operating in the District just made sense," said Director of Technology Services Ron Reyer.
Since the District's plan is to purchase computer items through 2018, a functional cart that could grow with them into the future was a must. The one-size-fits-all shelving of the Standard Charging Carts from Black Box allow the District to be ready for what tomorrow’s technology may bring. Each cart accommodates up to 30 devices, including Chromebooks, laptops, iPads, and other e-learning devices, and is backed by a lifetime warranty.
To protect their investment, the District needed to securely store the Chromebooks in every classroom for students in grades K-6. The Standard Charging Cart’s integrated, heavy-duty locks ensure that the devices are kept secure from unwanted access. Plus, the locking wheels keep the carts in place for maximum safety, but allow for mobility when needed.
Working with Black Box resulted in a successful implementation with budget numbers kept in balance. The District had to stick with a specific budget without much room for flexibility, and Black Box delivered.
Regarding the District's decision to go with Black Box, Reyer commented, "A site visit to Black Box's assembly plant in Lawrence, PA convinced us that this was the right decision and six months later we were convinced that our decision to choose Black Box was the best we could have made."
Over the summer of 2014, K-6 teachers learned about the cart's rapid wiring system. The system manages the cables in a neat and secure way from the back of the cart. With the wiring in place, each student can now easily slide his or her Chromebook in its respective slot from the front of the cart and connect the device to its charging cable. "It's efficient and works well," said Third Grade Teacher at Bethel Park School District Bethani Bombich.
Bombich added, "We love, love, love them [Chromebooks]. Use them every day. The kids love using them and it's amazing how quickly they get up to speed and can do different things. Just brings a new dimension to the classroom."
|
<urn:uuid:71befe85-c114-485c-9005-2fd162e12531>
|
CC-MAIN-2017-09
|
https://www.blackbox.com/en-pr/resources/case-studies/technology-solutions/bethel-park-school-district
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00522-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.959919 | 739 | 2.640625 | 3 |
In this provocative talk from TEDGlobal, MIT Media Lab professor Kevin Slavin paints a picture of a world that is increasingly being shaped by algorithms. Slavin explores how these complex computer programs evolved from their espionage roots to become the dominant mechanism for financial services. But as the 2008 financial crash exemplified, allowing algorithms to replace human-based decision-making can have unintended and even dire consequences.
Slavin’s tale is one of contemporary math, not just financial math. He calls on the audience to rethink the role of math as it transitions from being something that we extract and derive from the world to something that actually starts to shape it. Consider that there are some 2,000 physicists working on Wall Street. Algorithmic trading evolved in part because institutional traders needed to move a million shares of something through the market, so they sought to find a way to break up that big thing into a million little transactions, and they use algorithms to do this.
“The magic and the horror of that,” says Slavin, “is that the same math that is used to break up the big thing into a million little things can be used to find a million little things and sew them back together to figure out what’s actually happening in the market.”
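A cartoon version of the splitting half of that math, breaking one large parent order into many small, irregular child orders so it does not move the market all at once, might look like the following. The sizes and randomization here are purely illustrative, not any firm's actual execution logic.

```python
import random

def slice_order(total_shares: int, max_child: int = 500) -> list:
    """Split a large parent order into small, irregularly sized child orders."""
    children = []
    remaining = total_shares
    while remaining > 0:
        child = min(remaining, random.randint(100, max_child))
        children.append(child)
        remaining -= child
    return children

children = slice_order(1_000_000)
print(len(children), "child orders totalling", sum(children), "shares")
```

The reassembly Slavin mentions is the mirror image: watching the tape for suspiciously regular streams of small orders and stitching them back together to infer the large order behind them.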
The less mathematically inclined can imagine this scenario as a bunch of algorithms that are programmed to hide and a bunch of algorithms that are programmed to go find them and act. "This represents 70 percent of the United States stock market, 70 percent of the operating system formerly known as your pension, your mortgage," says Slavin, drawing pained chuckles from the audience.
What could go wrong?
What could go wrong is that on May 6, 2010, 9 percent of the stock market disappeared in five minutes. This was the flash crash of 2:45, and it was the biggest one-day point decline in the history of the stock market. To this day, no one can agree on what caused this to occur. No one ordered it. They just saw the results of something they did not ask for.
“We’re writing these things that we can no longer read…and we’ve lost the sense of what’s actually happening in this world that we’ve made,” cautions Slavin.
When trying to make sense of uncharted territory, or new environs, scientists will start with a name and description of the new thing or place. A company called Nanex has begun this work for the 21st century stock market. It’s identified some of these algorithms and given them names like the Knife, Twilight, the Carnival, and the Boston Shuffler.
These kinds of algorithms aren’t just operating in the stock market, they are being used to set prices in online marketplaces, like eBay and Amazon. Algorithms locked in loops without human supervision caused a hundred dollar book about fly genetics to climb to $23.6 million (plus shipping and handling) on Amazon.
Netflix is using algorithms, like "pragmatic chaos," to power its recommendation engine. This particular code was responsible for 60 percent of movies rented. Another company uses "story algorithms" to predict which kinds of movies will be successful before they are written. Slavin calls this process the physics of culture.
Algorithms are even affecting architecture. There is a new style of elevator, known as destination-control elevators, that employs a bin-packing algorithm to optimize elevator traffic. The buttons for these systems are pressed before stepping into the elevator. The initial data show that people are not comfortable riding in an elevator where the only internal button option is "stop."
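The grouping idea behind destination-control elevators can be sketched in a few lines: riders enter their destination floor in the lobby, and the dispatcher packs them into cars so that each car makes as few stops as possible. The greedy toy below only illustrates the concept and is not a vendor's actual dispatch algorithm.

```python
from collections import defaultdict

def assign_cars(requests, car_capacity=8):
    """Group riders by destination floor, then pack the floors into cars."""
    by_floor = defaultdict(list)
    for rider, floor in requests:
        by_floor[floor].append(rider)

    cars, current, load = [], [], 0
    for floor in sorted(by_floor):                  # visit floors in order
        group = by_floor[floor]
        if current and load + len(group) > car_capacity:
            cars.append(current)                    # close out the full car
            current, load = [], 0
        current.append((floor, group))
        load += len(group)
    if current:
        cars.append(current)
    return cars

requests = [("r1", 4), ("r2", 9), ("r3", 4), ("r4", 12), ("r5", 9), ("r6", 4)]
for i, car in enumerate(assign_cars(requests, car_capacity=4), start=1):
    print(f"car {i}:", car)
```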
Algorithmic trading, aka high-frequency trading, is also responsible for the repurposing of building space near major financial centers, where algorithms have become VIP tenants. The big traders want to be as close to the action as possible because in high-frequency trading milliseconds matter. Buildings near the stock exchange – like the Carrier Hotel in Manhattan – are being hollowed out to make room for datacenters.
Telecommunications provider Spread Networks dynamited through mountains to create a 825 mile trench in order to run a fiber-optic cable between Chicago and New York City to be able to traffic one signal nearly 40 times faster than the click of a mouse. Spread’s flagship Ultra Low Latency Chicago-New York Dark Fiber service is now operating with a roundtrip latency of 12.98 milliseconds.
“When you think about this, that we’re running through the United States with dynamite and rock saws so that an algorithm can close the deal 3 microseconds faster all for a communications framework that no human will ever know, that’s a kind of manifest destiny – and we’ll always look for a new frontier,” observes Slavin.
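The 12.98-millisecond figure is roughly what physics allows. Light in optical fiber travels at about two-thirds of its speed in a vacuum, so a back-of-the-envelope estimate for a round trip over roughly 825 miles of cable lands close to the quoted number.

```python
route_km = 825 * 1.609            # ~1,327 km of fiber each way
fiber_speed_km_s = 200_000        # roughly 2/3 of the speed of light in vacuum

one_way_ms = route_km / fiber_speed_km_s * 1000
round_trip_ms = 2 * one_way_ms
print(f"one way ~ {one_way_ms:.2f} ms, round trip ~ {round_trip_ms:.2f} ms")
# round trip ~ 13.3 ms -- the same ballpark as Spread's quoted 12.98 ms,
# with the remaining difference down to exact route length and equipment
```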
The acceleration and automation go hand in hand, but there is danger in making decisions based solely on computational insight. When an algorithm on Amazon goes out of bounds, it leads to a humorous tale about an overpriced biology textbook, but when the market crashes, lives are affected around the world. The flash crash could have served as a warning, but little has changed since that time. The vast majority of stock trades are executed in less than half a millionth of a second – more than a million times faster than the human mind can make a decision.
As Slavin says: “It’s a bright future if you’re an algorithm.”
|
<urn:uuid:3d8c3a90-eb0a-46a3-ae12-0123f98d1f46>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2013/10/14/algorithms-shape-world/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00046-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.94527 | 1,145 | 2.546875 | 3 |
Multi-Step Authentication and Why You Should Use It
Authentication is one of the essential components of security. It is one part of the concept known as authentication, authorization, and accounting (AAA). Authentication is the process of claiming an identity and then proving that you are that claimed identity. Authorization is the mechanism to control what you can access or do. Accounting is the recording of events into a log to review the activities against the rules and policies in order to detect violations or confirm compliance. All three of these should be addressed when constructing a system in order to have a reasonable foundation for reliable security.
As users of online sites and services, we have no control over the security policies and technologies implemented on those sites and services. At best, we may be offered a few authentication options. If any authentication mechanisms are available in addition to a standard password, you need to take full advantage of those benefits.
When a site or service offers authentication options, those options are usually one of the following:
- Certificate authentication
- OAuth single sign-on
- Multi-factor or multi-step authentication
Certificate authentication is the process of verifying identity through the use of a digital certificate. A digital certificate is produced by a certificate authority (CA) using asymmetric public key cryptography. The digital certificate itself is the subject's public key signed (i.e., encrypted) with the CA's private key. A digital certificate is a form of trusted third-party authentication. Its most common use is by servers (i.e., web sites) on the Internet. A web site with a digital certificate is the first party. The second party is the visiting end-user. The third party is the CA that issued the certificate to the web site. If the end-user already knows and trusts the CA, then the end-user can trust in the identity of the web site by validating the digital certificate.
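This trusted-third-party check is what happens, for example, every time a browser or client library opens an HTTPS connection: the client validates the server's certificate against the CA roots it already trusts. A minimal illustration using Python's standard library, with example.com standing in for any site:

```python
import socket
import ssl

ctx = ssl.create_default_context()    # loads the system's trusted CA certificates

with socket.create_connection(("example.com", 443)) as sock:
    # The TLS handshake fails here if the certificate is expired, untrusted,
    # or issued for a different hostname.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()
        print("issued to:", cert["subject"])
        print("issued by:", cert["issuer"])
        print("expires:  ", cert["notAfter"])
```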
Unfortunately, most end-users do not have a digital certificate. And, even if users obtained a digital certificate from a public and respected certificate authority, most online sites and services are not configured to accept client-side certificates. When it becomes common or standard for servers to accept client-side certificates, this will be the most secure authentication option. Until then, you will likely have to use one of the other options (assuming one of them is offered/supported by a particular site).
OAuth Single Sign-On
OAuth is a type of single sign-on solution that is gaining popularity online. Single sign-on is an authentication concept in which a single logon event grants access to a collection of systems. This differs from traditional authentication, where each system requires its own unique and local authentication. Single sign-on has been a standard element in company networks for decades. There have been many attempts to duplicate the concept on the Internet, but only now, with the adoption of OAuth, is it actually becoming a reality.
OAuth is a way to share or borrow the authentication from one site to grant access to another site. Let's call the first site a primary site. The primary site must support OAuth and allow its authentication to be shared by other secondary sites. Secondary sites must also support OAuth and then select which primary site's authentications they will accept. The way OAuth works is:
1. You visit a secondary site and click on an offering to use a primary site's authentication to access the secondary site.
2. This takes you to the primary site. If you do not have a current active session with the primary site, you are prompted to authenticate to the primary site.
3. With an active session to the primary site, you are prompted to confirm or accept the secondary site's request to link to your account on the primary site.
4. Clicking to confirm this returns you to the secondary site where you now have access to that site.
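Under the hood, those four steps usually map onto OAuth's authorization-code pattern: the secondary site redirects your browser to the primary site's authorization endpoint, receives a short-lived code when you approve the link, and exchanges that code for an access token. The sketch below uses made-up endpoint URLs and credentials purely to show the shape of the exchange; every real provider documents its own URLs and parameters.

```python
import secrets
from urllib.parse import urlencode

import requests

# Illustrative values only -- not any real provider's endpoints or credentials.
AUTHORIZE_URL = "https://primary.example.com/oauth/authorize"
TOKEN_URL = "https://primary.example.com/oauth/token"
CLIENT_ID = "secondary-site-id"
CLIENT_SECRET = "secondary-site-secret"
REDIRECT_URI = "https://secondary.example.com/oauth/callback"

# Steps 1-3: the secondary site sends your browser to the primary site, where
# you log in (if needed) and approve the link.
state = secrets.token_urlsafe(16)      # guards against cross-site request forgery
login_redirect = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "state": state,
})

# Step 4: the primary site redirects back with a one-time code, which the
# secondary site exchanges (server to server) for an access token.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()                 # contains the access token
```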
Once OAuth has been confirmed on a secondary site, all future visits to that site will automatically log you in as long as you have a current active session with the primary site. The three most common or popular sites used as primaries are Facebook, Twitter, and Google, but there are dozens of other potential primary sites as well, including Amazon, Dropbox, Evernote, Flickr, LinkedIn, Microsoft, Netflix, PayPal, Tumblr, and Yahoo. Plus, there are numerous sites supporting OAuth to function as secondary sites.
OAuth is a huge convenience for users as it reduces the number of unique logon credential sets that you must keep track of. However, this is not necessarily a good security option. If the primary site's authentication is a basic password only, then when your account is compromised on the primary site, the intruder automatically gains access to all the linked secondary sites as well.
By the way, the primary site will maintain a list of secondary sites that have been linked. This list is for your convenience when you want to disconnect an OAuth link, but an intruder can use it to follow your links to those secondary sites.
ONLY use OAuth to link sites back to a primary site if you have configured multi-factor or multi-step authentication on the primary site. Otherwise, you would be better served setting a long and complicated password for each site and putting up with the hassle of managing multiple difficult credential sets (see my whitepaper Ten Steps to Better, Stronger Passwords for guidance on this).
|
<urn:uuid:5b071b5a-8a50-4322-a828-7746ceb144b6>
|
CC-MAIN-2017-09
|
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/multi-step-authentication-and-why-you-should-use-it/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00166-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.931455 | 1,091 | 3.546875 | 4 |
Smart grids are a fundamental component of the European critical infrastructure. They are rooted on communication networks that have become essential elements allowing the leveraging of the “smart” features of power grids.
Smart grids provide real-time information on the grid, perform actions when required without any noticeable lag, and support gathering customer consumption information. On the downside, however, smart grids present an increased attack surface for criminals.
For instance, smart meters can be hacked to cut power bills, as happened in Spain in 2014. Likewise, a DDoS attack or malware infection could cause communications and control of the network to be lost, halting energy production and affecting several systems across borders.
To protect networks and devices from cyber threats, a new ENISA study focuses on the evaluation of interdependencies to determine their importance, risks, mitigation factors and possible security measures to implement.
Smart grid devices are highly exposed, which makes it essential to harmonize the current situation by establishing common interconnection protocols. It has also become imperative to align policies, standards and regulations across the EU to ensure the overall security of smart grids.
These aspects have currently grown in importance due to the risk that cascading failures could result since smart grid communication networks are no longer limited by physical or geographical barriers, and an attack on one country could transgress physical and virtual borders.
The recommendations of this report are addressed to operators, vendors, manufacturers and security tools providers in the EU and they include the following:
- Foster intercommunication protocol compatibility between devices originating from different manufacturers and vendors
- Develop a set of minimum security requirements to be applied in all communication interdependencies in smart grids
- Implement security measures on all devices and protocols that are part, or make use of the smart grid communication network.
|
<urn:uuid:cec9d802-7ce8-4bfe-8a97-303cc84351a2>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2016/02/01/defending-the-smart-grid-what-security-measures-to-implement/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00166-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.930438 | 358 | 2.984375 | 3 |
What is Community Policing?
“Community policing is a philosophy that promotes organizational strategies that support the systematic use of partnerships and problem-solving techniques to proactively address the immediate conditions that give rise to public safety issues such as crime, social disorder, and fear of crime.”
Community Policing Defined Report from Community Oriented Policing Services (COPS) from the US Department of Justice
History Of Community Policing
The concept of community policing has been around for a long time, and in the US it can be traced as far back as the 19th century. The primary purpose for its inception was to have police engaging with communities to build strong relationships between community members and law enforcement. One of the earliest and major tactics of community policing involved officers going on foot patrols through the neighborhoods they serve. In the modern era, this has evolved to departments incorporating social media and/or community engagement systems to share relevant local information with residents. It has been an integral strategy for cities that have looked to combat violence, drugs and other criminal activities.
Only 17% of US residents age 16 and up have had a face-to-face interaction with a police officer
Only 40% of property crimes and 47% of violent crimes are reported to police
Implementing Community Policing
According to Strategies for Community Policing, common implementations of community policing include:
- Relying on community-based crime prevention by utilizing civilian education, neighborhood watch, and a variety of other techniques, as opposed to relying solely on police patrols.
- Re-structuralizing of patrol from an emergency response based system to emphasizing proactive techniques such as foot patrol.
- Increased officer accountability to civilians they are supposed to serve.
- Decentralizing the police authority, allowing more discretion amongst lower-ranking officers, and more initiative expected from them.
“Applying community policing techniques backed by the principles of ethical policing will produce a notable correlation between the collaborative relationship that will be fostered and a palatable decline in crime.”
Jon Gaskins, PoliceOne
Community Policing Analysis
A 2014 study published in the Journal of Experimental Criminology, “Community-Oriented Policing to Reduce Crime, Disorder and Fear and Increase Satisfaction and Legitimacy among Citizens: A Systematic Review,” systematically reviewed and synthesized the existing research on community-oriented policing to identify its effects on crime, disorder, fear, citizen satisfaction, and police legitimacy.
The study found:
- Community-policing strategies reduce individuals’ perception of disorderly conduct and increase citizen satisfaction.
- In studying 65 independent assessments that measured outcomes before and after community-oriented policing strategies were introduced, they found 27 instances where community-oriented policing was associated with 5% to 10% greater odds of reduced crime.
- 16 of the 65 comparisons showed community-oriented policing was associated with a 24% increase in the odds of citizens perceiving improvements in disorderly conduct.
- 23 comparisons measured citizen satisfaction with police, and found that community-oriented programs were effective in almost 80% of the cases, and that citizens were almost 40% more likely to be satisfied with the work of the police.
Although this study was not definitive, it provides important evidence for the benefits of community policing for improving perceptions of the police. The overall findings are ambiguous, and show there is a need to explicate and test a logic model that explains how short-term benefits of community policing, like improved citizen satisfaction, relate to longer-term crime prevention effects, and to identify the policing strategies that benefit most from community participation.
Why Everbridge for Improved Community Policing
Control Public Information Dissemination
Maintain complete power and control to author messages and disseminate information to the public at will.
Easy Resident Text Opt-In
Easily increase resident opt-ins at an exponential rate. Maintain a robust database of resident contact information to foster a community dialogue or provide effective emergency notifications.
A Force Multiplier
Publish and distribute public information at scale, with the push of one button, via social media, websites, email, text, mobile app, and Google Alerts. Leverage residents to act as force multiplier to assist in preventing and solving crime. Ideal when internal resources are limited.
Precise Neighborhood Targeting
The most precise neighborhood-level geographic targeting system available. Send messages to specific communities or neighborhoods.
Focus on Public Safety
The most trusted public safety product on the market, as used by over 8,000 public safety agencies. Completely focused on helping agencies keep residents safe and informed.
|
<urn:uuid:3bd51f37-7e24-48f7-bbfe-e903d79f2ce1>
|
CC-MAIN-2017-09
|
https://www.everbridge.com/community-policing/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00342-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.9404 | 944 | 3.375 | 3 |
The Helmholtz Association of German Research Centres is the largest scientific organisation in Germany. It is a union of 18 scientific-technical and biological-medical research centers. The official mission of the Association is "solving the grand challenges of science, society and industry". Scientists at Helmholtz therefore focus research on complex systems which affect human life and the environment. The namesake of the association is the German physiologist and physicist Hermann von Helmholtz. The annual budget of the Helmholtz Association amounts to more than 3.4 billion euros, of which about 70% is raised from public funds. The remaining 30% of the budget is acquired by the 18 individual Helmholtz Centres in the form of contract funding. The public funds are provided by the federal government and the rest by the States of Germany. Wikipedia.
Ahrens L.,Helmholtz Center Geesthacht
Journal of Environmental Monitoring | Year: 2011
The occurrence and fate of polyfluoroalkyl compounds (PFCs) in the aquatic environment has been recognized as one of the emerging issues in environmental chemistry. PFCs comprise a diverse group of chemicals that are widely used as processing additives during fluoropolymer production and as surfactants in consumer applications for over 50 years. PFCs are known to be persistent, bioaccumulative and have possible adverse effects on humans and wildlife. As a result, perfluorooctane sulfonate (PFOS) has been added to the persistent organic pollutants (POPs) list of the Stockholm Convention in May 2009. However, their homologues, neutral precursor compounds and new PFCs classes continue to be produced. In general, several PFCs from different classes have been detected ubiquitously in the aqueous environment while the concentrations usually range between pg and ng per litre for individual compounds. Sources of PFCs into the aqueous environment are both point sources (e.g., wastewater treatment plant effluents) and nonpoint sources (e.g., surface runoff). The detected congener composition in environmental samples depends on their physicochemical characteristics and may provide information to their sources and transport pathways. However, the dominant transport pathways of individual PFCs to remote regions have not been conclusively characterised to date. The objective of this article is to give an overview on existing knowledge of the occurrence, fate and processes of PFCs in the aquatic environment. Finally, this article identifies knowledge gaps, presents conclusions and recommendations for future work. © 2011 The Royal Society of Chemistry. Source
Mosler J.,Helmholtz Center Geesthacht
Computer Methods in Applied Mechanics and Engineering | Year: 2010
Variational constitutive updates provide a physically and mathematically sound framework for the numerical implementation of material models. In contrast to conventional schemes such as the return-mapping algorithm, they are directly and naturally based on the underlying variational principle. Hence, the resulting numerical scheme inherits all properties of that principle. In the present paper, focus is on a certain class of those variational methods which relies on energy minimization. Consequently, the algorithmic formulation is governed by energy minimization as well. Accordingly, standard optimization algorithms can be applied to solve the numerical problem. A further advantage compared to conventional approaches is the existence of a natural distance (semi metric) induced by the minimization principle. Such a distance is the foundation for error estimation and as a result, for adaptive finite elements methods. Though variational constitutive updates are relatively well developed for so-called standard dissipative solids, i.e., solids characterized by the normality rule, the more general case, i.e., generalized standard materials, is far from being understood. More precisely, (Int. J. Sol. Struct. 2009, 46:1676-1684) represents the first step towards this goal. In the present paper, a variational constitutive update suitable for a class of nonlinear kinematic hardening models at finite strains is presented. Two different prototypes of Armstrong-Frederick-type are re-formulated into the aforementioned variationally consistent framework. Numerical tests demonstrate the consistency of the resulting implementation. © 2010 Elsevier B.V. Source
Lilleodden E.,Helmholtz Center Geesthacht
Scripta Materialia | Year: 2010
The stress-strain response, slip mechanisms and size effect in Mg (0 0 0 1) single crystal was investigated by microcompression testing. It is found that plasticity occurs relatively homogeneously up to a critical stress, at which point a massive deformation occurs. While the yield stress increases with decreasing diameter, the qualitative behavior is independent of column size. Cross-sectional electron back-scattered diffraction measurements show that twinning is not the predominant deformation mechanism. © 2009 Acta Materialia Inc. Source
Lendlein A.,Helmholtz Center Geesthacht
Journal of Materials Chemistry | Year: 2010
Actively moving polymers that are stimuli-sensitive materials, which are able to shift their shape in response to suitable stimuli, have witnessed significant developments. The incorporation of magnetic nanoparticles into triple-shape polymer networks enables the non-contact activation of the triple-shape effect in an alternating magnetic field. The temperature dependence of water vapor permeability through a material related to the thermal transition of the switching domains has been used in textiles. Biomedical applications are an emerging application field for shape-memory polymers that has a significant role in minimally invasive surgery. The incorporation of inorganic nanoparticles or nano-fibers into shape-memory polymer matrices improves mechanical properties by reinforcement and retarding relaxation processes especially in thermoplastic polymers. The incorporation of nano-layered graphene in epoxy-based shape-memory polymers enhances scratching resistance and the thermal heating capability of the material. Source
Agency: Cordis | Branch: H2020 | Program: ERA-NET-Cofund | Phase: SC5-15-2015 | Award Amount: 50.73M | Year: 2016
In the last decade a significant number of projects and programmes in different domains of environmental monitoring and Earth observation have generated a substantial amount of data and knowledge on different aspects related to environmental quality and sustainability. Big data generated by in-situ or satellite platforms are being collected and archived with a plethora of systems and instruments making difficult the sharing of data and knowledge to stakeholders and policy makers for supporting key economic and societal sectors. The overarching goal of ERA-PLANET is to strengthen the European Research Area in the domain of Earth Observation in coherence with the European participation to Group on Earth Observation (GEO) and the Copernicus. The expected impact is to strengthen the European leadership within the forthcoming GEO 2015-2025 Work Plan. ERA-PLANET will reinforce the interface with user communities, whose needs the Global Earth Observation System of Systems (GEOSS) intends to address. It will provide more accurate, comprehensive and authoritative information to policy and decision-makers in key societal benefit areas, such as Smart cities and Resilient societies; Resource efficiency and Environmental management; Global changes and Environmental treaties; Polar areas and Natural resources. ERA-PLANET will provide advanced decision support tools and technologies aimed to better monitor our global environment and share the information and knowledge in different domain of Earth Observation.
|
<urn:uuid:323c07e1-eeae-49ab-b5de-967feee22e82>
|
CC-MAIN-2017-09
|
https://www.linknovate.com/affiliation/helmholtz-center-geesthacht-24412/all/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00042-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.907567 | 1,512 | 2.875 | 3 |
Snapchat has demonstrated again its lack of understanding in building strong security to protect users of its popular mobile app for sharing photos.
The company introduced last week a CAPTCHA verification method for checking whether a new subscriber is human or a computer program. Cybercriminals will use the latter to set up fake accounts in order to distribute spam or to find ways to steal the personal information of users of the service.
CAPTCHA methods can help reduce the number of fake accounts, but Snapchat's implementation was easily hacked by Steven Hickson, a graduate research assistant at the Georgia Institute of Technology.
In fact, Snapchat's CAPTCHA was so weak, Hickson spent less than an hour building a computer program that could fool the mobile app maker's system with "100 percent accuracy."
"They're a very, very new company and I think they're just lacking the personnel to do this kind of thing," Hickson told CSOonline Monday.
To ensure the would-be user is human, the Snapchat system asks the registrant to choose out of nine illustrations the ones containing Snapchat's white ghost mascot. The problem with the system is that the mascot image varies only in size and angle, making it easy for a computer to find.
To avoid hacking a CAPTCHA system, "you want something that has a lot of variety in the answer," Hickson said. "Basically, one right answer, but a very, very large amount of wrong answers. You want something that's very, very hard for a computer to solve."
Hickson provides the technical details of the hack on his blog. In general, he used Intel's Open Source Computer Vision Library (OpenCV) and a couple of other supporting technologies to build a program capable of identifying the Snapchat mascot in the illustrations. OpenCV is a library of programming functions aimed at computer vision, giving software the ability to detect and recognize objects in images.
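Hickson's write-up has the full details, but the core of the attack, finding a nearly identical ghost image inside each tile, is a textbook template-matching task. The stripped-down sketch below shows the idea with OpenCV; the file names are placeholders and this is an illustration of the technique, not Hickson's actual code, which also had to cope with the ghost's varying size and angle.

```python
import cv2

# Placeholder file names: one candidate CAPTCHA tile and the ghost template.
tile = cv2.imread("captcha_tile.png", cv2.IMREAD_GRAYSCALE)
ghost = cv2.imread("ghost_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the tile and score the match at every position.
scores = cv2.matchTemplate(tile, ghost, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

# A high normalized score means the ghost appears in this tile.
if best_score > 0.8:
    print("ghost found at", best_location, "score", round(best_score, 3))
else:
    print("no ghost in this tile")
```

A robust CAPTCHA would force matching at many scales, rotations and distortions; the near-total lack of variation in Snapchat's mascot is what made the job so easy.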
Zach Lanier, senior security researcher for mobile authentication specialist Duo Security, said Hickson's CAPTCHA bypass is "totally legitimate."
"In my opinion, if Snapchat is really concerned about improving security, they should take some lessons from Hickson's findings," Lanier said.
Chris Grayson, senior security analyst for consultancy Bishop Fox, agreed, saying "the CAPTCHA mechanism that they implemented is decidedly weak, as demonstrated by Steven Hicksons proof-of-concept, and offers little additional security to Snapchat users."
Snapchat did not respond to a request for comment.
Mobile app developers have become notoriously weak in building adequate security to protect users' personal information. Recent studies have shown serious weaknesses in data protection in mobile apps built by small vendors, as well as airlines, retail outlets, entertainment companies, insurance companies and financial institutions.
Mobile app security is often given a lower priority than rolling out features, because there has not been a major breach where valuable financial data has been stolen from a smartphone. However, the risk of such a breach will rise as the number of purchases made with a smartphone increases, along with the value of the data stored on the devices.
While security will slow down the app development process, "it's extremely necessary," Hickson said.
Hickson's work follows on the heels of another incident in which hackers exploited a weakness in Snapchat's feature for finding friends by displaying the usernames of people whose phone numbers match those in other users' address books. Hackers used the vulnerability to steal the usernames and phone numbers of more than 4 million users.
Snapchat updated the app to let users opt out of having their phone numbers linked to their usernames. In addition, people are now required to verify their phone number before using the service called "Find Friends."
This story, "Snapchat Falters on Security Again, Experts Say" was originally published by CSO.
|
<urn:uuid:d224fd79-b13e-4c7f-9c70-c988cd47586d>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2379226/mobile-security/snapchat-falters-on-security-again--experts-say.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00038-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.95282 | 778 | 2.6875 | 3 |
TUCSON, Ariz. -- Augmented reality and 3D printing are the hottest emerging technologies to watch, according to Tom Soderstrom, chief technology officer for NASA Jet Propulsion Laboratory.
The impact of both technologies is already being felt with some new products, including a free augmented reality app in Apple's App Store called Spacecraft 3D. The app was built by JPL designers as an outgrowth of the Jet Propulsion Lab's "petting zoo" -- a development sandbox concept that JPL has had in five locations for four years.
In an interview at the Premier 100 conference here, Soderstrom showed how the Voyager and 10 other spacecraft look in the 3D app. With the app loaded on his iPhone, he focused its camera on a special card, and the 3D image of Voyager appeared on the smartphone's display, and could be rotated to look at all sides of the spacecraft. The image on the card was like the rough surface of a planet, and that same image can be downloaded and printed out -- eliminating the need for the card. The app will soon be available on Google Play for Android, Soderstrom said.
Soderstrom also demonstrated functional plastic tools, including a wrench and a gear that were produced with inexpensive 3D printers. Such printers can now cost $1,500 to $2,500, using a variety of materials, including sheets of highly durable manufactured sapphire, he said. Low-cost 3D printers are being made by MakerBot, Cubify and a number of companies that showed products at the International CES show in January, he said.
JPL engineers helped further develop 3D printing in JPL's five petting zoos, Soderstrom said.
Augmented reality allows computer-generated content to be superimposed over a live camera view of the real world.
The first uses of augmented reality and 3D printing could be effective in education settings, Soderstrom said. Students could learn how tools and devices look and feel and how to design them with software.
He showed a hand-sized 3D model of the surface of the moon, which blind students can touch to gain an appreciation of the moon's rugged surface. "This model cost just 30 cents to make," he said.
Likewise, augmented reality can be used to enrich the learning experience about spacecraft and other technologies -- all from a student's smartphone or other portable device, Soderstrom said.
JPL engineers are also relying on the virtual world of Second Life to design JPL conference rooms, showing potential users and investors how they would look or could be modified. "Blueprints didn't work, but in Second Life they see how the lighting works, where the walls will be," Soderstrom said.
Soderstrom, a science evangelist of sorts for JPL, spends much of his time in schools explaining the significance of technology to coming generations. This is the second year he has attended Premier 100. He attracted a crowd of CIOs interested in innovations and ways they can promote innovation in their companies.
Soderstrom's purpose in going to schools is to preach about the need for more scientists and engineers. "We're going to need more engineers for all this new technology," he said.
Matt Hamblen covers mobile and wireless, smartphones and other handhelds, and wireless networking for Computerworld. Follow Matt on Twitter at @matthamblen or subscribe to Matt's RSS feed. His email address is [email protected].
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Augmented Reality and 3D Printing are Hot Technologies to Watch" was originally published by Computerworld.
|
<urn:uuid:651ad5a2-0667-4302-810b-fc10cf5a28e3>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2387821/enterprise-software/augmented-reality-and-3d-printing-are-hot-technologies-to-watch.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00214-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.957392 | 761 | 2.71875 | 3 |
Here are more tips to ensure your Wi-Fi network is safe.
Change the default name of your wireless network (called the SSID)
Change the default username and password to login to your wireless Access Point; if you don’t, someone else might login and change your security setting.
Do not let the Access Point broadcast the name of your wireless network (SSID).
The last option means your wireless network name (SSID) will not appear to any wireless PCs when they are "searching for available networks." For your own PCs to see your network, you have to enter the name of your network (SSID) when you set up the wireless security on your PC. Many specialists, including leading makers of wireless equipment and the National Security Agency, recommend you do not broadcast the SSID and do use MAC address filtering. The latter is another security feature that only lets devices you identify by their MAC address onto your network. The MAC address is a unique identifier assigned to most network equipment by the manufacturer for identification. It is printed on a tag on the bottom of your Access Point and on the setup page for your wireless cards.
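For readers who run a Linux-based access point with hostapd, the settings discussed above map onto a handful of configuration options. The excerpt below is a minimal sketch, not a complete configuration: the interface name, SSID, passphrase, and accept-file path are placeholders, and most consumer routers expose the same settings through their web interface instead.

```
# /etc/hostapd/hostapd.conf (excerpt)
interface=wlan0

# Rename the network away from the factory default
ssid=MyRenamedNetwork

# Do not broadcast the SSID; clients must know the name to connect
ignore_broadcast_ssid=1

# Only accept clients whose MAC addresses are listed in the accept file
macaddr_acl=1
accept_mac_file=/etc/hostapd/hostapd.accept

# WPA2-PSK authentication (the setting everyone agrees matters most)
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=ChangeThisPassphrase
```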
Others, notably Microsoft, argue that these two measures give you a false sense of security because an attacker monitoring your wireless network can easily see SSIDs and MAC addresses as your devices communicate with each other.
What everyone does agree on is that the two most important things to do are to implement the authentication security capabilities built into your Access Point and Wi-Fi adapters, and to change the default password of your Access Point. (See also, How do I make my home or small business Wi-Fi network safe?)
|
<urn:uuid:309883cc-4ea2-4c87-8165-f01fe84e5b38>
|
CC-MAIN-2017-09
|
https://www.justaskgemalto.com/us/what-else-should-i-do-wi-fi-network-security/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00390-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.921031 | 343 | 2.78125 | 3 |
Douglas Engelbart, a Silicon Valley engineer who invented the computer mouse and is credited with many of the concepts that underpin modern computing and the Internet, died on Tuesday at his home in Atherton, California. He was 88.
Born in 1925, Engelbart was coming of age as World War II raged in Europe. He joined the U.S. Navy as an electronic and radar technician, and after the war studied electrical engineering at Oregon State University. He went on to complete a master's degree and Ph.D. at the University of California at Berkeley, where he was also an assistant professor.
About a year later, in 1957, he joined the Stanford Research Institute (today called SRI International), which was just over a decade old. From 1959 until 1977 he led the organization's Augmentation Research Center, and in 1963 came up with the concept of the computer mouse.
The mouse would go on to revolutionize personal computing, but the public didn't get their first look at it until several years later.
In a presentation at the Fall Joint Computer Conference in San Francisco on Dec. 9, 1968, he introduced the concepts of hypertext linking, real-time text editing, the use of multiple windows, and teleconferencing. He also showed a set of three devices that worked together to control a computer.
"We have a pointing device called a mouse, a standard keyboard and a special key set," he told the audience. The demonstration can be seen in archive footage on YouTube.
In a world of mainframe computers controlled by keyboards, the mouse was a new idea.
"I don't know why we call it a mouse. Sometimes I apologize. It started that way and we never did change it," he said, explaining the name to his audience.
A year later, the Augmentation Research Center underscored its importance in computing by becoming the second node of the ARPANET, the predecessor to today's Internet.
"Doug was a giant who made the world a much better place and who deeply touched those of us who knew him," Curtis Carlson, president and CEO of SRI, said in a statement. "SRI was very privileged and honored to have him as one of our 'family.' He brought tremendous value to society. We will miss his genius, warmth and charm. Doug's legacy is immense -- anyone in the world who uses a mouse or enjoys the productive benefits of a personal computer is indebted to him."
Engelbart received numerous awards for this work through the latter years of his life. They included the National Medal of Technology in 2000, the Lemelson-MIT Prize in 1997 and the Turing Award, also in 1997.
|
<urn:uuid:a6299165-b756-4c06-8794-9ed4d1f9306c>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2167877/smb/douglas-engelbart--inventor-of-the-computer-mouse--has-died.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00442-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.981321 | 547 | 3.09375 | 3 |
In a DNS zone, every record carries its own time-to-live, so that it can be cached, yet still changed if necessary.
This information is originally served by authoritative servers for the related zone. The TTL is represented as an integer number of seconds.
At first sight, the mechanism looks straightforward: if the www.example.com record has a TTL of 30, it's only valid for up to 30 seconds, and caches must fetch it again if it is requested after this delay.
DNS caches can be chained: instead of directly querying authoritative servers, a cache can forward queries to another server, and cache the result.
Having 3 or 4 chained caches is actually very common. Web browsers, operating systems and routers can cache and forward a DNS query. Eventually, this query will be sent to an upstream cache, like the ISP cache or a third-party service like OpenDNS. And these upstream caches can themselves hide multiple chained caches.
In order to respect the original TTL, caches modify records as they forward them to clients. The response to a query that has been sitting in a cache for 10 seconds will be served with a TTL reduced by 10 seconds. That way, if the original TTL was 30 seconds, the whole chain is guaranteed to consider this record as expired at the same time: the original meaning of the TTL is retained no matter how many resolvers there are in the way.
Well, not exactly. A TTL is just a time interval, not an absolute date. Unlike an HTTP response, a DNS response doesn't contain any timestamp. Thus, request processing and network latency cause caches to keep a record longer than they should if the original TTL were strictly respected.
A TTL being an integer value makes things even worse: a chain of N caches can introduce an N-second bias.
In practice, this is rarely an issue: TTLs as served by authoritative servers are considered indicative, and not as something to depend on when accurate timing is required.
How does the TTL of a record served by a DNS cache decay over time?
Surprisingly, different implementations exhibit different behaviors.
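One way to see this for yourself is to query the same resolver repeatedly and watch the TTL it reports count down. Here is a minimal sketch; it assumes the dig utility is installed, and www.example.com and the 8.8.8.8 resolver are only examples to be replaced with whatever you want to observe.

```python
import subprocess
import time

# Ask the same upstream resolver for the same name once per second and print
# the answer line; the TTL is the second field and should count down, then reset.
for _ in range(10):
    answer = subprocess.run(
        ["dig", "+noall", "+answer", "www.example.com", "@8.8.8.8"],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    print(time.strftime("%H:%M:%S"), answer)
    time.sleep(1)
```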
For a record initially served with a TTL equal to N by authoritative servers:
Hold on… Does it mean that how frequently an entry will actually be refreshed depends on what software resolvers are running?
Sadly, yes. Given a record with a TTL equal to N:
Although a TTL of zero can cause interoperability issues, most DNS caches are considering records with a TTL of zero as records that should not be cached.
This perfectly makes sense when the TTL of zero is the original TTL, as served by authoritative servers.
However, when a cache artificially changes the TTL to zero, it changes a record that had been designed for being cached to an uncachable record that contaminates the rest of the chain.
To illustrate this, a Linux box has been setup with a local DNS cache. Unbound has been chosen, but the last component of the chain actually makes little difference. Even web browsers caches have a very similar behavior.
Queries are forwarded to an upstream cache on the same LAN, running dnscache, and outgoing queries are recorded with ngrep. The same query, whose response has a TTL of 1, is made at a 10 queries per second rate.
The X axis represents outgoing queries, whereas the Y axis is the time elapsed since the previous query.
Even though a TTL of 1 is served by the authoritative servers for this record, a large number of responses are cached and served for 2 seconds. This is due to the local Unbound resolver.
However, there are also quite a lot of responses that haven't been cached at all. This happens when dnscache serves a response with a TTL of 0. Since it happens one third of the time, this is suboptimal and not in line with the intent of the authoritative record, which is served with a non-zero TTL.
And although dnscache and Unbound are handling TTLs the same way, we can’t expect their caches to be perfectly synchronized. When the local cache considers a record as expired and issues an outgoing query, the upstream server can consider it as not expired yet, just in the middle of the last second. What we get is a constant race between caches, causing jitter and outgoing queries sent after 1 second.
The following experiment has been made by observing outgoing queries when sending 10,000 queries for a record with a TTL of 2, at an average 10 qps rate, to a local cache, using Google, OpenDNS and Level 3 as upstream resolvers.
Here is the number of outgoing queries that were required to complete the 10,000 local queries using different services:
When using Google DNS (blue dots), queries are effectively never cached more than 2 second, even locally. This is due to the max TTL returned by Google DNS being the initial TTL minus one.
When Google DNS returns a TTL of zero, we observe the same jitter and the same slew of queries that couldn’t be served from the local cache.
OpenDNS (orange dots) has the same behavior as dnscache and Unbound, with TTLs in the [0, N] interval. Our initial TTL of 2 actually causes 3-, 2-, and 1-second delays between requests, plus a significant number of consecutive outgoing queries due to a null TTL.
Note: Since the original post OpenDNS resolvers have been updated to behave like Bind: TTLs are in the [1, N] interval.
Level 3 (green dots) is running Bind, which returns a TTL in the [1, N] interval. Bind caches records one second less than dnscache and Unbound. Because our local resolver is Unbound, responses received with a TTL of N are actually cached for N + 1 seconds. But as expected, the interval between required outgoing queries very rarely exceeds 3 seconds.
What the correct behavior is, is out of scope of this article. All major implementations are probably correct.
But from a user perspective, with only 2 caches in the chain, and for a given record, the same set of queries can require up to 3.5 times as many outgoing queries to get resolved, depending on what software the remote cache is running.
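To get a feel for how much the clamping interval alone changes cache behavior, here is a deliberately crude simulation of the setup above: a [0, N]-style local cache forwarding to a shared upstream cache whose copy of the record sits at a random point of its lifetime whenever it needs refreshing. The two styles, the random-phase assumption, and the "keep for N + 1 seconds" rule are simplifications for illustration only, not the actual logic of Unbound, dnscache, BIND, or Google DNS.

```python
import random

def reported_ttl(seconds_remaining: float, style: str) -> int:
    """TTL an upstream cache reports downstream for a record with time left.
    'zero_floor' mimics the [0, N] behaviour described for dnscache/Unbound,
    'one_floor' the [1, N] behaviour of BIND."""
    ttl = int(seconds_remaining)
    return max(ttl, 1) if style == "one_floor" else max(ttl, 0)

def outgoing_queries(style: str, auth_ttl: float = 2.0, qps: int = 10,
                     total: int = 10_000) -> int:
    """Count the queries a [0, N]-style local cache must forward upstream."""
    local_expiry = upstream_expiry = -1.0
    sent = 0
    for n in range(total):
        now = n / qps
        if now < local_expiry:
            continue                                # answered from the local cache
        sent += 1
        if now >= upstream_expiry:
            # The shared upstream cache is refreshed by other clients at unrelated
            # moments, so its copy is at a random point of its lifetime.
            upstream_expiry = now + random.uniform(0.0, auth_ttl)
        ttl = reported_ttl(upstream_expiry - now, style)
        # A TTL of 0 means "do not cache"; otherwise a [0, N] cache serves the
        # record with TTLs N..0, i.e. keeps it for roughly N + 1 seconds.
        local_expiry = now if ttl == 0 else now + ttl + 1
    return sent

random.seed(42)
for style in ("zero_floor", "one_floor"):
    print(f"{style}: {outgoing_queries(style)} outgoing queries for 10,000 lookups")
```

Running it shows the [0, N] upstream forcing noticeably more forwarded queries, mostly in short bursts whenever it hands back a TTL of zero, which is the same pattern as in the measurements above.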
With CDNs and popular web sites having records with a very low TTL (Facebook has a 30 second TTL, Skyrock has a 10 second TTL), the way a cache handles TTLs can have a significant impact on performance. That said, some resolvers can be configured to pre-fetch records before they expire, effectively mitigating this problem.
MacOS X provides a system-wide name cache, which is enabled by default.
Its behavior is quite surprising, though. It seems to enforce a minimum TTL of 12.5 seconds, while still requiring some outgoing queries delayed by the initial TTL value, and some consecutive ones. Resolving the 10,000 queries from the previous test took only an average of 232 outgoing queries.
Using Google DNS, OpenDNS and Level 3 as the remote resolver produces the same result, with the exception of Level 3 (Bind), which avoids frequent (less than 1 second apart) consecutive queries during the periods when the others return a TTL of zero.
|
<urn:uuid:849ea89b-54a0-4e20-aa75-91e6539b4c5b>
|
CC-MAIN-2017-09
|
http://blog.catchpoint.com/2012/04/04/dns-records-and-ttl-how-long-does-a-second-actually-last/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00386-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.942982 | 1,504 | 3.265625 | 3 |
The data center creates a lot of garbage and waste products. From paper to retired hardware, the data center has to deal with the recycling or safe disposal of trash. Paper printouts, old servers and so on are all part of the data center waste management system, which gets more complex depending on the product that needs to be disposed of. Security of information remains a critical concern for most data center managers.
Confidential information can be retrieved by dumpster diving, and going through the discarded material from a data center can yield plenty of information to the trained eye. Paper normally gets recycled, and excess IT equipment ends up on online auction sites or is bought as scrap by other vendors.
Shredding the paper before giving it up for recycling is a good idea. For hardware, policies and operating procedures, such as zeroing out storage, are essential steps before equipment can be given to scrap dealers.
Read More About Data Center Waste Management
|
<urn:uuid:b63d08f4-2d31-452f-8b7d-04e3da6ea64c>
|
CC-MAIN-2017-09
|
http://www.datacenterjournal.com/data-center-waste-management-system/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00258-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.924442 | 193 | 2.9375 | 3 |
Astronauts have been drinking recycled urine for some time, and according to Wired magazine, on the long trip to Mars, they’ll be shielded from radiation by astronaut poop. That, in microcosm, is what’s happening back here on Earth.
According to the “One Water Vision,” all water is just the same H2O recycled over and over. Some of the water in your coffee, for example, might have been excreted by a Neanderthal, or been part of the iceberg that sank the Titanic — or both, although the chances of that seem fairly slim.
Following the Industrial Revolution, the human population spiked — from about 1 billion to an estimated 7 billion in 2013, hammering freshwater supplies. The results? More waste entering the water supply, more incentive to recover potable water from that waste, and the discovery that pharmaceuticals and street drugs are entering the water supply. And if you think marijuana in the drinking water is “far out,” you could be part of the problem.
Some of this waste has bad effects, like death. Canadian researchers suspect that birth control pill estrogen in drinking water is causing a spike in prostate cancer deaths. Traditional water-treatment technology does little to remove illegal or pharmaceutical drugs that have been excreted or flushed, and thus “what goes around comes around.”
Global Water Senior Vice President Graham Symmonds suggests a “three-pipe” system, comprising a potable water pipe, a nonpotable water pipe for irrigation or industrial use, and a sewer pipe. The system reduces demand for potable water by 40 percent, he said. And while removing pharmaceuticals from water is expensive, only the potable system would need such treatment. “Your grass doesn’t care if there is aspirin in the water,” he said.
Advanced metering infrastructure (AMI), said Symmonds, can help struggling municipalities and utilities by letting customers monitor their consumption and reduce waste. AMI can also help recover “nonrevenue” water, he said, from missing or ineffective meters, leaks and errors. “You can find a lot of revenue by cleaning up your system.”
What about ocean water? Desalination is expensive compared to traditional water treatment. Texas, for example — which already desalinates brackish groundwater — estimates that desalinating sea water would cost $800-$1,400 per acre foot, with each acre foot being equivalent to 326,000 gallons or roughly the amount used by an average household in a year.
As for recovering water from waste? “The technologies exist to produce high-quality water from sewage,” Symmonds said, “and the regulatory framework is under construction, but some places do it already.”
Luckily new ideas are flooding in for better desalination techniques, protecting fish from medication, extracting useful chemicals from sewage, and stopping the formation of hydrogen sulfide, a poisonous, explosive and smelly material that corrodes sewer pipes, costing the U.S. $14 billion annually. Sewage can even be used to generate electricity.
|
<urn:uuid:3a3af087-1dce-4793-b9b2-a0b27e035b59>
|
CC-MAIN-2017-09
|
http://www.govtech.com/transportation/209248441.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00258-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.946464 | 649 | 3.046875 | 3 |
Primer: Geospatial Analysis
By Kevin Fogarty | Posted 2004-08-04
Mapping data yields more than just good directions; it can yield customers. A look at the benefits of geospatial analysis.
What is it? A way to determine where your customers live or work by correlating their street addresses with their physical location. It's done by adding data from mapping software or the Global Positioning System to the customer information you already have, such as purchasing history, creditworthiness and income. By combining demographic and geographic information, marketers can, for example, draw a line around a particular region and ask the database for the names and income ranges of customers who live within that area. Existing databases can track street addresses and ZIP codes, but can't usually tell, say, whether East Main Street and West Main Street are next to each other or miles apart.
Where's the benefit? By knowing a customer's physical location, you can gain tremendous insight into that person's needs, says Fred Limp, director of the Center for Advanced Spatial Technologies (CAST) at the University of Arkansas. If, for example, a customer at 123 Main St. just bought a riding mower, a neighbor at 321 Elm might, too. A normal database query would tell you the two addresses are in the same ZIP code, but not that they are around the corner from each other in a development with unusually large lawns.
How would I use it? The most-cited example, according to Limp, is to help select retail locations by analyzing neighborhood demographics surrounding each potential spot. Without searchable location data, you'd have to rely on ZIP codes to identify the area to be examined. "But then, once you put in the data, you can also ask: How many customers make more than $100,000 and live within two miles of the store?" Limp says. Making those kinds of connections can also help a wholesaler identify where it's losing sales because there aren't enough distributors, or allow an insurance company to set rates according to the disaster risk for a particular house, not just a neighborhood, says Henry Morris, group vice president for applications and information access at IDC.
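As a toy illustration of that kind of query, the sketch below filters a made-up list of geocoded customer records by income and by great-circle distance from a candidate store location. The names, coordinates, and incomes are invented; in practice this logic would typically run inside a GIS or a spatially enabled database rather than in application code.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))   # 3958.8 = Earth's mean radius in miles

# Hypothetical geocoded customer records: (name, latitude, longitude, income)
customers = [
    ("A. Smith", 36.08, -94.16, 120_000),
    ("B. Jones", 36.05, -94.20, 85_000),
    ("C. Brown", 36.07, -94.17, 150_000),
]
store = (36.06, -94.16)   # candidate retail location

nearby_affluent = [
    name for name, lat, lon, income in customers
    if income > 100_000 and miles_between(lat, lon, *store) <= 2.0
]
print(nearby_affluent)   # -> ['A. Smith', 'C. Brown']
```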
Where do I get this information? Most detailed electronic maps are created by federal, state or local governments to maintain roads, bridges and other infrastructure, but they're available free to the public. The problem is, you have to piece it together. For a price, service bureaus will do the work for you, using interoperability standards created by the Open GIS Consortium, which represents vendors of geospatial-analysis products.
What's the downside? Not all entities want to give up their information. Utilities, for example, build detailed maps that include customer locations, pipes and underground lines. They share some information to help keep backhoes from digging in the wrong places, but are reluctant to share details such as the locations of vulnerable central switching stations or other critical elements, according to Bob Samborski, executive director of the Geospatial Information & Technology Association. Issues like this limit the detail and effectiveness of geographic data. Various government agencies are negotiating with private-sector companies, Samborski says, but no wide-ranging agreement has been reached.
|
<urn:uuid:8909f418-879d-427a-a283-939c90822f2f>
|
CC-MAIN-2017-09
|
http://www.baselinemag.com/crm/Primer-Geospatial-Analysis
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00307-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.941537 | 667 | 2.625 | 3 |
It lasted only two seconds, but a paralyzed man made history Thursday when he kicked off the World Cup in Brazil with the help of a robotic suit.
The feat was backed by Walk Again Project, a nonprofit group involving U.S. and European universities and other organizations as well as more than 100 scientists. It posted the video on Twitter.
"We did it!!!!" tweeted neuroscientist Miguel Nicolelis of Duke University Medical Center, who led the construction of the exoskeleton.
Electrodes under a cap worn by the user detect brain signals and transmit them to the exoskeleton that translates them into steps or kicks, according to Colorado State University (CSU), whose vice president for research Alan Rudolph is a project manager for Walk Again.
CSU researchers developed a 3D-printed polymer liner in the cap to keep the electrodes in place and also contributed special brackets on the cap that hold LED sensors providing feedback to the user showing how well he controls the exoskeleton.
The user also receives tactile feedback from artificial skin in the suit developed by Gordon Cheng at Technical University of Munich (TUM), a partner in the Walk Again Project. The artificial skin, consisting of flexible printed circuit boards, sends signals to small motors that vibrate against the user's arms, helping guide the robotic legs.
The project's origin, according to TUM, lies in an experiment in which Nicolelis had a monkey walking on a treadmill in North Carolina while Cheng had a humanoid robot in Kyoto, Japan, follow the signal generated by the monkey's movements.
"After the Kyoto experiment, we felt certain that the brain could also liberate a paralyzed person to walk using an external body," Cheng said in a statement.
The World Cup event is "the beginning of a future in which the robotic garment will evolve to the point of becoming accessible and enabling anyone with paralysis to walk freely," Walk Again said in a statement.
But improvements to the bulky, heavy mechanisms will be necessary before the technology can take off.
Strength-boosting exoskeletons developed for people who can move their limbs or have partial movement, such as Cyberdyne's HAL suit, tend to be lighter.
"Robotic exoskeletons remain in the very earliest stages of development," Francis Collins, director of the U.S. National Institutes of Health, wrote in a blog post after seeing a demo of the machine in Brazil.
"Scientists need to refine their designs and test them on more people, and they need to analyze and publish the enormous amount of data they've already gathered."
|
<urn:uuid:0e95492c-00a7-459d-b539-76eb49aedb5f>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2375508/hardware/paralyzed-man-in-robot-suit-kicks-off-world-cup.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00475-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.943288 | 525 | 2.640625 | 3 |
Network News Transfer Protocol (NNTP) is the set of rules used by both clients and servers to manage the articles posted to Usenet newsgroups. NNTP was introduced as a replacement for the original UUCP-based distribution of news. Today, NNTP servers carry the collective international network of Usenet newsgroups, and such a server is typically provided by your ISP (Internet service provider). An NNTP client is built into browser suites such as Netscape and Opera, or a separate newsreader program can be used instead.
The NNTP protocol is well suited for transferring Usenet news either between servers or between a news server and a newsreader client. This simple protocol is in some respects similar to POP3 and SMTP.
The current specification of NNTP is RFC 3977, published in October 2006. That RFC is the work of the IETF NNTP working group and obsoletes RFC 977, issued in 1986. RFC 3977 also established a registry of capability labels to be used for any future extension of the protocol; so far, the only extensions defined are those in RFC 3977 itself. To register a new capability, an extension must be published as a standards-track or experimental RFC, and extension labels beginning with X are reserved for private use.
Below are the commands recognized by an NNTP server and the responses it returns. The commands are: ARTICLE, BODY, HEAD, HELP, LAST, SLAVE, LIST, NEWNEWS, NEXT, POST, QUIT, STAT and GROUP.
NNTP reply codes are grouped as follows:
Code: 1yz Description: Informative message.
Code: 2yz Description: Command ok.
Code: 3yz Description: Command OK so far; send the rest of it.
Code: 4yz Description: Command was acceptable, but couldn’t be executed for certain reason.
Code: 5yz Description: Command unimplemented, incorrect, or a serious program error occurred.
Code: x9z Description: Debugging output.
In addition, specific NNTP reply codes are defined for particular tasks. These include:
Code: 100 Description: Help content follow.
Code: 199 Description: Debug output
Code: 200 Description: Server all set – posting acceptable.
Code: 201 Description: Server all set – no posting permitted.
Code: 202 Description: Slave status noted.
Code: 240 Description: Article post ok.
Code: 400 Description: Service discontinued etc.
In short, NNTP specifies a protocol for the distribution, inquiry, retrieval, and posting of news articles using a reliable stream-based (TCP) client-server model. It is deliberately designed so that news articles need only be stored on one, presumably central, host, while subscribers on other hosts connected to the network can read the articles over stream connections to that news host.
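To see the commands and reply codes above in action, here is a minimal sketch of a raw NNTP session over a TCP socket. The server name news.example.com is a placeholder for a real Usenet server reachable on port 119, and comp.lang.python is just an example newsgroup.

```python
import socket

HOST = "news.example.com"   # placeholder; substitute a server you are allowed to use
PORT = 119                  # standard NNTP port

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    reader = sock.makefile("rb")

    def reply():
        # Every NNTP response line starts with a 3-digit code, e.g. 200, 211, 205.
        return reader.readline().decode("utf-8", "replace").rstrip()

    print(reply())                           # greeting: 200 (posting allowed) or 201 (read-only)

    sock.sendall(b"GROUP comp.lang.python\r\n")
    print(reply())                           # 211 <count> <first> <last> <group> on success

    sock.sendall(b"QUIT\r\n")
    print(reply())                           # 205 closing connection
```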
|
<urn:uuid:0028d394-7818-464e-b864-f9a13a6bebfe>
|
CC-MAIN-2017-09
|
https://howdoesinternetwork.com/2012/nntp
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00527-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.922522 | 697 | 2.515625 | 3 |
Technology Primer: Cloud and Virtualization
- Course Length:
- 0.5 Day Instructor Led
This half-day Technology Primer introduces the audience to the concepts of Cloud Computing and Virtualization. Cloud Computing is generally characterized by its Service Model types. The course first introduces the audience to the idea of Virtualization, Virtual Machines, Hypervisors and Containers. These are the first building blocks of Cloud Computing. Then the course introduces the audience to second set of building blocks of Cloud Computing - the Cloud Computing Service Models - and presents a high level comparison of the three primary Service Models and where they may fit into a wireless networking environment. The final building block introduces the audience to OpenStack and wraps up the discussion with a simple example of Cloud Computing implemented using OpenStack.
This technology primer is designed for a wide range of audiences including operations, engineering, and performance personnel, as well as other personnel interested in understanding the basics of Cloud Computing and Virtualization in the context of a wireless service provider’s network.
After completing this course, the student will be able to:
• Describe Virtualization
• Describe Virtual Machines
• List the role and tasks of a Hypervisor
• Describe Containers
• Describe Cloud Computing
• Explain Cloud Computing in the context of a Wireless Network
• Describe OpenStack
• Illustrate an example implementation of the Cloud using OpenStack
1. Virtualization
1.1. What is Virtualization?
1.2. Types of Virtualization
1.3. Physical Network Functions
1.4. Virtual Network Functions
2. Virtualization Technology
2.1. Virtual Machine
3. Cloud Computing
3.1. What is the Cloud?
3.2. Cloud Computing
3.3. Applicability to the wireless domain
4. Cloud Computing Technology
4.1. What is OpenStack?
4.2. OpenStack architecture
4.3. OpenStack as a Cloud enabler
|
<urn:uuid:95b05156-9f09-4619-b510-67b2cc30be19>
|
CC-MAIN-2017-09
|
https://www.awardsolutions.com/portal/ilt/technology-primer-cloud-and-virtualization?destination=ilt-courses
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00647-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.798587 | 404 | 3.265625 | 3 |
In recent years, in order to reduce the cost of bringing fiber access to end users, well-known universities, research institutions, and manufacturers at home and abroad have been racing to develop cheap, easy-to-connect, reliable plastic optical fiber and the communication systems built on it. This article briefly describes progress in plastic optical fiber structure, materials, manufacturing methods, performance, and communication systems.
Plastic optical fiber structure: Plastic optical fiber used for short-distance communication can be divided into two types by refractive index profile: step-index plastic optical fiber and graded-index plastic optical fiber. In step-index plastic optical fiber, light entering the fiber is repeatedly reflected at different angles as it propagates, so modal dispersion broadens the output waveform relative to the input and limits the transmission bandwidth to tens or hundreds of MHz·km. Graded-index plastic optical fiber optimizes the refractive index distribution to suppress modal dispersion and also reduces material dispersion, achieving bandwidths from hundreds of MHz·km up to several GHz·km.
Plastic optical fiber materials: When selecting materials, the main considerations are the material's transparency and refractive index. The core material should offer good transparency, optical uniformity, and an appropriate refractive index; attention should also be paid to mechanical strength, chemical stability, thermal stability, processability, and cost.
Commonly selected core materials include poly(methyl methacrylate) (PMMA), polystyrene (PS), polycarbonate (PC), fluorinated poly(methyl methacrylate) (FPMMA), and perfluorinated resins. Commonly selected cladding materials include poly(methyl methacrylate), fluoropolymers, and silicone resins.
Manufacture of plastic optical fiber: The manufacturing methods differ completely from those used for silica glass optical fiber. The two main methods for producing communication-grade plastic optical fiber are the extrusion method and the interfacial gel method.
The extrusion method is mainly used to manufacture step-index plastic optical fiber. The process steps are as follows: first, the methyl methacrylate monomer that will form the poly(methyl methacrylate) core is purified by distillation under reduced pressure and fed into a polymerization vessel together with a polymerization initiator and a chain-transfer agent. The vessel is then heated in an electric oven and held for a set time so that the monomer polymerizes completely. Finally, the vessel containing the fully polymerized poly(methyl methacrylate) is heated to the drawing temperature, and dry pressurized nitrogen forces the molten polymer out through a nozzle at the bottom of the vessel, extruding the fiber core; at the same time the extruded core is coated with a layer of low-refractive-index polymer, yielding step-index plastic optical fiber.
Graded-index plastic optical fiber is manufactured by the interfacial gel method. The process steps are as follows: a high-refractive-index dopant is mixed with the core monomer to prepare the core solution, and an initiator and a chain-transfer agent are added to control the polymerization rate and the size of the polymer molecules. The solution is then poured into a hollow tube of poly(methyl methacrylate) (PMMA), which serves as the cladding material, and the filled tube is placed in an oven to polymerize under controlled temperature and time. During polymerization the PMMA tube gradually swells in the solution and a gel phase forms on its inner wall. Molecular motion slows in the gel phase, so the reaction accelerates there due to the gel effect and the polymer layer gradually thickens until polymerization terminates at the center of the tube. The result is a preform whose refractive index decreases gradually along the radius; finally, the preform is fed into a furnace and drawn into graded-index plastic optical fiber.
|
<urn:uuid:51e969c3-4589-4c70-9652-01014e0f2598>
|
CC-MAIN-2017-09
|
http://www.fs.com/blog/principle-of-plastic-optical-fiber-transmission-system.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00399-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.900845 | 868 | 2.859375 | 3 |
The popular mobile messaging application WhatsApp Messenger has a major design flaw in its cryptographic implementation that could allow attackers to decrypt intercepted messages, according to a Dutch developer.
The problem is that the same key is used to encrypt both outgoing and incoming streams between the client and the WhatsApp server, said Thijs Alkemade, a computer science and mathematics student at Utrecht University in the Netherlands and lead developer of the open-source Adium instant messaging client for Mac OS X.
"RC4 is a PRNG [pseudo-random number generator] that generates a stream of bytes, which are xored [a crypto operation] with the plaintext that is to be encrypted. By xoring the ciphertext with the same stream, the plaintext is recovered," Alkemade said Tuesday in a blog post that describes the issue in detail.
Because of this, if two messages are encrypted with the same key and an attacker can intercept them, like on an open wireless network, he can analyze them to cancel out the key and eventually recover the original plaintext information.
Reusing the key in this manner is a basic crypto implementation error that the WhatsApp developers should have been aware of, Alkemade said Wednesday. It's a mistake made by the Soviets in the 1950s and by Microsoft in its VPN software in 1995, he said.
Alkemade released proof-of-concept exploit code for the vulnerability, but initially tested it on the WhatsPoke open-source library, not on the official WhatsApp client. Since then he has confirmed that the issue exists in the WhatsApp clients for Nokia Series 40 and Android devices.
"I don't think the situation will be different with the iOS client," he said.
WhatsApp also uses the same RC4 encryption key for HMAC (hash-based message authentication code) operations to authenticate messages.
This allows an attacker to intercept a message sent by a user to the server and resend it back to the user as if it came from the WhatsApp server, but this is not something that can be easily exploited, Alkemade said.
The Dutch developer didn't attempt to contact WhatsApp before disclosing the issue publicly. "I thought that it's important for people to know that WhatsApp is not secure and I didn't expect them to fix it rapidly," he said.
Fixing this doesn't require rethinking the entire encryption implementation, Alkemade said. If they add a method to generate different keys for encryption in both directions, as well as for message authentication, then the problem is solved, he said.
According to Alkemade, users for now should assume that anyone who can intercept their WhatsApp connections can also decrypt their messages and should consider their previous WhatsApp conversations compromised.
Until the issue is fixed the only thing that users can do to protect themselves is to stop using the application, Alkemade said.
"WhatsApp takes security seriously and is continually thinking of ways to improve our product," WhatsApp said in an emailed statement. "While we appreciate feedback, we're concerned that the blogger's story describes a scenario that is more theoretical in nature. Also stating that all conversations should be considered compromised is inaccurate," the company said.
The company did not respond to a request seeking further clarification on why it considers the scenario theoretical and that statement inaccurate.
According to Matthew Green, a cryptography professor at Johns Hopkins University in Baltimore, using the same key for both sent and received messages is a problem because WhatsApp uses the RC4 stream cipher.
"A known feature of XOR-based ciphers is that if you have two messages encrypted with the same stream of (pseudo)random bytes, you can XOR the ciphertexts together," he said. "What happens is the RC4 bytes cancel out and you get the XOR of the two messages."
"Now in some cases this is gibberish," Green said. "However, if you know what some fields in at least one received message are, you can easily cancel them out and, yes, recover the message bits from the other message."
There are also other tricks that can be used, Green said. "It's a really bad thing and you should assume worst case that the messages are now cleartext."
|
<urn:uuid:9c85a20f-b5fd-4b7b-affe-8ce3bce9feb9>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2170694/byod/d--39-oh--basic-flaw-in-whatsapp-could-allow-attackers-to-decrypt-messages.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00515-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953895 | 867 | 2.78125 | 3 |
There has been a lot of news on quantum computing recently, from reports on NPR to Nature to the usual technical trade publications. The hype is enormous given the potential for changing the landscape, but I think there are a number of outstanding questions.
One of the biggest issues I see is that we have around 60 years of knowledge and understanding of programming languages. We have FORTRAN, COBOL, PL1, C, Python, Java and a whole myriad of languages all revolving around programming as we know it today. If quantum computing is to become successful there has to be an interface that allows it to be programmed easily. What will the interface be?
There will always be instances where people will, for the sake of performance, write machine code. Yes, it still happens today, but for quantum computing, given what I have read, there will have to be a paradigm shift in how things are programmed.
Quantum computing, given the cryogenics involved, is at best going to be relegated to large customers that have the proper facilities. That is not a great deal different than what happened in the 1950s when only the largest organizations had computers.
I think the key to success of this completely disruptive technology (assuming that the technology matches the marketing spec sheet) is going to be the interface and training the right people to use the programming method to utilize the machine. In the beginning, this will be very basic, but the things to watch for, I think, will be how quickly the infrastructure is developed.
Of course, you are going to need to get data in and out of the machine, communicate with the machine and all the things that we have today. How fast these things come together will determine the success or failure of the technology, I believe.
Labels: programming languages,quantum computing
posted by: Henry Newman
|
<urn:uuid:db4629ef-36cc-425b-acf8-d759415ce8ca>
|
CC-MAIN-2017-09
|
http://www.infostor.com/index/blogs_new/Henry-Newman-Blog/blogs/infostor/Henry-Newman-Blog/post987_163151010.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00567-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.952669 | 376 | 2.5625 | 3 |
Last May, Google announced that it was using a herd of goats to mow the lawn at its Mountain View, Calif., headquarters. This year, HP has taken the lead in innovative uses for livestock in business. Its researchers have come up with a way to power data centers with cow manure.
A farm of 10,000 dairy cows could fulfill the power requirements of a 1-megawatt data center - a medium-sized data center - with power left over to support other needs on the farm, say the HP Labs researchers.
In this process, the heat generated by the data center can be used to increase the efficiency of the anaerobic digestion of animal waste. This results in the production of methane, which can be used to generate power for the data center. This symbiotic relationship allows the waste problems faced by dairy farms and the energy demands of the modern data center to be addressed in a sustainable manner.
HP offers some fun facts about cows and manure that you might not already know:
* The average dairy cow produces about 55 kg (120 pounds) of manure per day, and approximately 20 metric tons per year - roughly equivalent to the weight of four adult elephants.
* The manure that one dairy cow produces in a day can generate 3.0 kilowatt-hours (kWh) of electrical energy, which is enough to power television usage in three U.S. households per day.
* A medium-sized dairy farm with 10,000 cows produces about 200,000 metric tons of manure per year. Approximately 70 percent of the energy in the methane generated via anaerobic digestion could be used for data center power and cooling, thus reducing the impact on natural resources.
* Pollutants from unmanaged livestock waste degrade the environment and can lead to groundwater contamination and air pollution. Methane is 21 times more damaging to the environment than carbon dioxide, which means that in addition to being an inefficient use of energy, disposal of manure through flaring can result in steep greenhouse gas emission taxes.
* In addition to benefiting the environment, using manure to generate power for data centers could provide financial benefit to farmers. HP researchers estimate that dairy farmers would break even in costs within the first two years of using a system like this and then earn roughly $2 million annually in revenue from selling waste-derived power to data center customers.
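A back-of-the-envelope check of the figures above lands in the right ballpark for a 1-megawatt facility. Treating the 3 kWh value as electrical output and applying the 70 percent usable-energy factor to it is a simplification of HP's numbers, so the result is only illustrative.

```python
# Inputs are the figures quoted above; the calculation itself is illustrative.
cows = 10_000
kwh_per_cow_per_day = 3.0     # electricity from one cow's daily manure
usable_fraction = 0.70        # share of the energy available for power and cooling

daily_kwh = cows * kwh_per_cow_per_day * usable_fraction
average_mw = daily_kwh / 24 / 1000    # kWh per day -> average megawatts

print(f"{daily_kwh:,.0f} kWh/day, about {average_mw:.2f} MW of continuous power")
# -> 21,000 kWh/day, about 0.88 MW of continuous power
```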
HP did not say when dung-powered data centers will be generally available, but you can read more about its cow power theories in this paper.
|
<urn:uuid:19dd960b-9583-4de3-bab8-ccc00ea3c96b>
|
CC-MAIN-2017-09
|
http://www.banktech.com/infrastructure/hp-develops-manure-powered-data-centers/d/d-id/1293828
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00263-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.935198 | 507 | 3.171875 | 3 |
The Nuclear Regulatory Commission has rejected the notion that it is not ready to decide whether aging atomic power plants need to make upgrades intended to limit radiation releases during a major crisis, but its ultimate action on the matter is not yet clear.
In the wake of the 2011 Fukushima Daiichi nuclear plant meltdowns in Japan, the commission has identified a host of potential ways to improve the security and safety of U.S. reactors. It has divided those potential measures into tiers, identifying some as requiring action sooner and others further down the road.
Some Democrats and watchdog groups have suggested that the agency has not moved quickly enough to act on potential improvements. In a Friday letter, Senate Environment and Public Works Committee Chairwoman Barbara Boxer (D-Calif.) noted the two-year anniversary of the onset of the Japanese crisis and that NRC staff issued a report on post-Fukushima recommendations in July 2011. Boxer asked commission Chairwoman Allison Macfarlane to provide a status update in advance of a hearing next month.
Republican lawmakers have suggested, however, that the commission should not address its post-Fukushima response in a piecemeal fashion. Nor should it issue any major new requirements before conducting a thorough comparison of the Japanese and U.S. regulatory systems and devising a comprehensive plan for how to fill any potential gaps that could exacerbate a terrorist attack or natural disaster affecting a nuclear reactor.
During a recent hearing, Representative Edward Whitfield (R-Ky.) suggested that various safety and security issues that have arisen since the Fukushima incident “seem so interdependent." He questioned why the commission appeared to be making efforts to address them “independently and separate” from one another.
Specifically, Whitfield and other House Republicans have suggested that the commission should not require aging nuclear plants to install filtered vents until it completes the regulatory comparison and post-Fukushima reactor safety plan. Agency staff, along with many Democrats and watchdog groups, say filtered vents would limit radiation releases in the event of a terrorist attack or natural disaster. Should a facility lose power, vents relieve pressure building inside a heating reactor core while filters would reduce the amount of radiation that passes through the vents.
In response to Whitfield’s concerns, all five presidentially appointed commissioners said they were taking a comprehensive look at how requiring filtered vents and other possible post-Fukushima actions could impact one another, but suggested it would not be prudent to delay certain actions deemed to be the most pressing.
Republican Commissioner Kristine Svinicki suggested that if the agency did not dispose of some issues, it would create a state of perpetual uncertainty for the industry.
“We’re trying to strike a balance,” Svinicki said during the Feb. 28 hearing. “We’re attempting to integrate as well as we can.”
Commissioner William Ostendorff, also a Republican, said at the meeting that there “has been significant consideration of interlapping” between all issues the commission has addressed recently, suggesting that it was not making individual regulatory decisions in a vacuum.
In a recent letter to House Republicans, the commission also noted that it has already “conducted a regulatory comparison of the station blackout regulations that existed in Japan at the time of the” Fukushima incident and that it “continues to evaluate the various technical and regulatory factors in Japan that contributed to the accident.”
The commission in the letter also defended its staff’s estimates on the cost to install filtered vents.
While NRC staff projected about $16 million per plant, industry officials put the figure closer to $45 million. They argue that NRC evaluators are only accounting for the cost of filter components purchased from outside vendors and are not including the expense of additional modifications operators might have to make on-site to make the filters viable.
The commission’s letter, though, says the expense estimates “were intended to cover not only the equipment costs, but also the site specific engineering and plant modification costs.” It adds that the “estimate used in the NRC’s staff’s assessment was based upon discussions with vendors, regulators, and plant operators who have had experience with the installation of filtering systems at foreign nuclear power plants.”
Republicans have asserted that the agency should only pursue post-Fukushima regulatory actions if the anticipated safety and security gains outweigh the costs of compliance. In their letter, the commissioners responded by noting that they considered but took no action on several post-Fukushima requirements.
“Examples of items considered but not acted upon or implemented include the immediate shutdown of operating plants, the installation of various systems, structures, and components (beyond ongoing actions), the staging of robots to provide access to contaminated areas, adding multiple and diverse instruments to measure parameters such as spent fuel pool level and requiring all plants to install dedicated bunkers with independent power supplies and coolant systems,” the commissioners said in the Feb. 15 letter.
It remains unclear, however, whether the commissioners will decide to go forward with a filtered vent requirement. Spokesman Scott Burnell told Global Security Newswire that the agency continues to deliberate on its course of action.
In their letter to House Republicans, the commissioners suggested that some already-established NRC requirements could help mitigate radioactive releases from a terrorist attack or Fukushima-style event.
“The addition of backup equipment to supplement current safety systems and development of mitigating strategies, such as those implemented in the U.S. following Sept. 11, 2001, to address such external hazards and plant conditions might have supported the efforts of plant operators to mitigate the event at Fukushima Daiichi,” the commissioners said. “These measures would provide additional protection for the existing barriers; including the reactor fuel, coolant systems, and containments.”
At least one NRC panel member, Svinicki, has previously expressed opposition to a filtered vent requirement. She argued last year that existing protective measures should prevent them from being necessary.
In January, the commission rejected a watchdog group’s legal bid to have it require filtered vents without deciding whether to issue a similar mandate on its own terms.
|
<urn:uuid:dd6b2be7-ac69-4631-b4df-0299d0318bf7>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/defense/2013/03/nrc-new-nuclear-plant-safety-measures-not-premature-final-decision-pending/61803/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00439-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953791 | 1,275 | 2.546875 | 3 |
MIT researchers uncovered a critical difference between the flu viruses that infect birds and humans. This discovery could help scientists monitor the evolution of avian flu strains and develop vaccines against a deadly flu pandemic.
The researchers found that a virus's ability to infect humans depends on whether it can bind to one specific shape of receptor on the surface of human respiratory cells.
In addition to helping researchers develop a better way to track the evolution of avian flu that leads to human adaptation, these findings will aid in developing more effective strategies for seasonal flu, which still is a leading cause of death.
Shown here is a colorized transmission electron micrograph of avian influenza A (H5N1) viruses (seen in gold) grown in MDCK cells (seen in green).
- Anne Trafton, MIT
|
<urn:uuid:d0c8dc68-90af-441f-8f22-56e19bb667b7>
|
CC-MAIN-2017-09
|
http://www.govtech.com/health/Avian-Flu-Research-may-Speed-Vaccine.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00615-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.906342 | 165 | 3.828125 | 4 |
Mangrove: Fujitsu’s Experimental Green Data Center Design
Creating a green data center is not just a matter of locating the facility in Norway to keep the servers cool or plugging into a string of hydroelectric dams. The next generation green data center will be re-conceptualized from the concrete floor to the virtual network.
In this move away from traditional data center designs from the ground up, the servers and networks will be redesigned and the physical infrastructure of the building will be designed around the servers. It will be efficient and flexible, easy to upgrade and to reconfigure. It will dynamically pool resources from different components. It will use virtualization and new technologies to improve performance and reliability, to lower costs and to save energy.
That, at least, is the vision of researchers at Fujitsu Laboratories who are trying to create a new, integrated system architecture for future green data centers. The project, called Mangrove, is described in a paper as an attempt to achieve “total optimization by vertically integrating servers, storage drives, networks, middleware, and facilities.”
The design starts with a fundamental idea: tear apart the servers into components and reconnect them through software and networks. Specifically, Fujitsu Laboratories plans to separate the server into three components: the CPU motherboard, the storage device, and middleware to control them. Instead of installing racks of complete servers, Mangrove calls for the installation of groups of motherboards – “motherboard pools” – and groups of memory devices, or “disk pools.” With the proper interconnections and software, Fujitsu researchers believe they can network together hardware configurations to meet any specification, and alter the configurations when needs change.
Disk Pool Configuration
This approach has the potential to cut costs and energy requirements in several ways. First is the layout of the facility and cooling of the components. Rather than building racks for bulky servers, data center managers would be able to use more compact racks designed specifically for either motherboards or for disks. In that way, many more components can be squeezed together to save space. Cooling the components should be easier, since high heat-generating CPU pools can be separated from the pools of HDDs or solid state disk drives, which don't generate as much heat. Each can be relegated to areas of the building cooled to the proper temperature.
But the real advances are supposed to come from the software and networks that link components from the CPU pool to those from the disk pool. The software suite, including networks, operating systems and middleware, is designed to dynamically configure servers out of banks of CPUs and banks of memory devices. Once configured to meet the needs of a particular application, each setup should not need to be changed very often. But if a CPU board or memory device should fail, the software is designed to replace them with other devices in the pool.
These software suites are custom-designed by Fujitsu. Aside from a regular Ethernet network in the data center, there's a special network for connecting the disks to the CPUs. It's called DAN, for Disk Area Network, to distinguish it from a SAN (Storage Area Network.) With DAN, one disk drive does not have to be shared by multiple nodes, but should be able to be configured with CPUs in any combination to meet a particular need. For example, connect one motherboard to four disk drives to create a general-purpose server, or add many more disk drives to turn the configuration into a storage server, according to the paper.
The brainpower behind all this is a software program called MangroveManager. That's the program that will choose and connect together the proper number of disks and CPUs into a configuration that will meet the needs specified by the user. It also makes the components work together as a single system by installing the operating system and middleware on the constructed server.
Interested in a RAID for example? Fujitsu says its software can automatically put together a Redundant Array of Independent Disks for you, and even re-configure it later, without you ever connecting or disconnecting a cable. Here's the process: MangroveManager chooses motherboards and memory devices from the pools and tells DAN to make the connections. MangroveManager installs the OS, as well as a piece of middleware created to build RAIDs. And here's where the names and acronyms get complicated. The middleware program is called Akashoubin (the Japanese name for a large reddish-brown Asian kingfisher). This bird is divided into two parts: Akashoubin Manager (AsbM) and Akashoubin RAID Controller (AsbRC). AsbM controls AsbRC, which in turn controls the disks and their RAID functions. In addition, if any of the CPUs or storage devices in a configuration fails, Akashoubin can replace them with other units from the pools without moving any physical parts.
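The mechanics of that flow are easier to see in code. The sketch below is a hypothetical illustration only: the structures and function names are invented here and are not Fujitsu's actual MangroveManager or Akashoubin interfaces. It simply mirrors the sequence the paper describes, which is picking a board and some disks from the pools, wiring them together over the disk network, and swapping in a spare when a unit fails.
```c
/* Hypothetical sketch of pool-based provisioning; names and structures are
 * invented for illustration, not Fujitsu's actual software interfaces. */
#include <stdio.h>
#include <stdbool.h>

#define BOARDS 64
#define DISKS  512

typedef struct { int id; bool in_use; } Board;
typedef struct { int id; bool in_use; int owner_board; } Disk;

static Board board_pool[BOARDS];
static Disk  disk_pool[DISKS];

/* Claim one motherboard and n disks, "wiring" them together logically. */
static int provision_server(int n_disks)
{
    int b = -1, free_disks = 0;

    for (int i = 0; i < BOARDS; i++)
        if (!board_pool[i].in_use) { b = i; break; }
    for (int i = 0; i < DISKS; i++)
        if (!disk_pool[i].in_use) free_disks++;
    if (b < 0 || free_disks < n_disks)
        return -1;                              /* pools exhausted */

    board_pool[b].in_use = true;
    for (int i = 0, attached = 0; i < DISKS && attached < n_disks; i++) {
        if (!disk_pool[i].in_use) {
            disk_pool[i].in_use = true;
            disk_pool[i].owner_board = b;       /* the disk-network connection */
            attached++;
        }
    }
    printf("server %d: %d disk(s) attached; installing OS and RAID middleware\n",
           b, n_disks);
    return b;
}

/* If a disk fails, attach a spare from the pool without moving hardware.
 * (A real system would mark the failed unit as bad, not return it to the pool.) */
static void replace_disk(int failed_id)
{
    int owner = disk_pool[failed_id].owner_board;
    for (int i = 0; i < DISKS; i++) {
        if (i != failed_id && !disk_pool[i].in_use) {
            disk_pool[i].in_use = true;
            disk_pool[i].owner_board = owner;
            disk_pool[failed_id].in_use = false;
            disk_pool[failed_id].owner_board = -1;
            printf("disk %d replaced by spare %d\n", failed_id, i);
            return;
        }
    }
}

int main(void)
{
    for (int i = 0; i < BOARDS; i++) board_pool[i] = (Board){ i, false };
    for (int i = 0; i < DISKS; i++)  disk_pool[i]  = (Disk){ i, false, -1 };

    provision_server(4);    /* general-purpose server: 1 board, 4 disks  */
    provision_server(24);   /* storage-heavy server: 1 board, 24 disks   */
    replace_disk(2);        /* failover without touching physical parts  */
    return 0;
}
```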
Sure, it may seem simple enough, but the researchers admit that the whole system is still a work in progress. Several components are still under development, including several pieces of the data center network. This network is a conventional Layer 2 LAN/SAN, but it needs to meet the criterion of the Green IDC by providing “optimization of cost performance by vertical integration,” says the paper. In order to achieve that, Fujitsu wants the system to be able to scale to something on the order of 2,000 servers, to provide flat characteristics (uniform delay and throughput across all servers), to offer security and reliability, and to sport low-cost power-saving features.
To reach those goals, the Fujitsu Research team is still working on several components. One is a “high-density, large-capacity switch” that should, for example, be able to relegate heavy control plane processing to the CPU pool but assign light management processing tasks to the switch's own local CPU. That way, the amount of processing required is proportional to the cost of the processor used for the task. The same efficiency strategy goes for a network manager that will be able to manage the virtualization and, say, put switches with a light load to sleep until needed. They're also trying to build high-speed optical interconnects out of inexpensive consumer-level optical technology.
And in order to use the capabilities of this reconfigurable data center to create a more perfect data center, they're evaluating a new VM placement design system. It is supposed to not only create new designs automatically in response to new specifications from a system manager, but will include a “scoring system” that will help optimize placements by rating their effectiveness according to a combination of several criteria. Several criteria would make up the score, such as reducing power consumption, minimizing software-license fees, enhancing fault-tolerance and reducing costs. Users can also change the relative importance of each of these criteria (such as giving more weight to reducing power consumption) so that the final score of any placement will be customized to meet their specific needs.
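A weighted score of that kind is simple to express. The sketch below is a generic illustration of the idea (the criteria names, ratings and weights are made up here and are not taken from Fujitsu's system); its point is that changing the relative weights can change which placement wins.
```c
/* Generic weighted-scoring sketch; criteria, ratings and weights are illustrative only. */
#include <stdio.h>

#define N_CRITERIA 4

/* Each candidate placement is rated 0..1 on: power saved, license cost saved,
 * fault tolerance, and hardware cost saved. */
typedef struct {
    const char *name;
    double rating[N_CRITERIA];
} Placement;

static double score(const Placement *p, const double w[N_CRITERIA])
{
    double s = 0.0;
    for (int i = 0; i < N_CRITERIA; i++)
        s += w[i] * p->rating[i];
    return s;
}

int main(void)
{
    Placement options[] = {
        { "consolidate-on-few-nodes", { 0.9, 0.4, 0.3, 0.8 } },
        { "spread-for-redundancy",    { 0.4, 0.5, 0.9, 0.5 } },
    };

    /* A user who cares most about power consumption... */
    double power_first[N_CRITERIA] = { 0.6, 0.1, 0.2, 0.1 };
    /* ...versus one who cares most about fault tolerance. */
    double ha_first[N_CRITERIA]    = { 0.1, 0.1, 0.7, 0.1 };

    for (int i = 0; i < 2; i++)
        printf("%-26s power-first=%.2f  ha-first=%.2f\n",
               options[i].name,
               score(&options[i], power_first),
               score(&options[i], ha_first));
    return 0;
}
```
Run it and the consolidation option scores highest under power-first weights, while the redundant layout wins under fault-tolerance-first weights, which is exactly the kind of user-adjustable trade-off the paper describes.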
So there's still much work to be done. Fujitsu Laboratories says the next step is a “proof of concept” study, in which everything described in the paper is integrated into a prototype system. If that prototype shows promise, the concept will take a big step toward reality. Once they can prove the whole thing works, then you can build it next to hydroelectric dams in Norway and you've got an ideal green data center.
The full paper is published in the Fujitsu Scientific & Technical Journal here.
|
<urn:uuid:8900c687-bc96-48c0-9f13-3b59ae824b78>
|
CC-MAIN-2017-09
|
https://www.enterprisetech.com/2012/10/26/mangrove:_fujitsu_s_experimental_green_data_center_design/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00135-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.939911 | 1,548 | 2.78125 | 3 |
General Electric Co has developed new commercial drone technology to detect gas leaks.
For a while now drones have been applied in increasing numbers to solve problems across industrial sectors. Usually, these applications offer benefits in terms of safety, cost, speed or efficiency. Aerial inspections of bridges and cell towers, for example, could render the days of brave crews armed with ropes and ladders mounting dangerous operations firmly in the past.
A single pilot can gather all the data required, in a move that will be faster, cheaper and safer for the majority of industrial projects.
The ‘Raven’ drone
That thinking is at the crux of GE‘s new drone, called Raven. Raven has been purpose-built to serve the needs of the oil and gas industry. The move has come in response to two things. First, a struggling fossil fuel industry is seeking any measures possible to save costs and increase efficiencies. And second, the USA’s Environmental Protection Agency this year launched new regulations requiring industry players to better detect and deal with leaks of any kind.
Currently still at the prototype stage, GE says Raven will offer industry maintenance crews a quick and easy way to inspect key equipment. The drone will come complete with autonomous flight and programmable flight plans, and is capable of streaming live data back to a central control room. In itself, that’s not particularly revolutionary, but GE also says the Raven can actively detect gas leaks.
Fitted with a laser-based sensor that reads methane levels in real time, Raven has successfully detected methane emissions at well sites in testing. GE is now working on perfecting the hardware and finalizing a software package to bring all the data together.
Speaking to Bloomberg, Lorenzo Simoneli, chief executive officer at GE Oil & Gas, said “When you think of Project Raven and the usage of new tools and applications, it’s going to be key to taking the industry forward. There’s a lot that you can do going forward to help drive productivity.”
Commercial drone industry on the up
Relaxing regulations and rapidly advancing technologies are providing the foundations for a boom in commercial drone services. Drones are disrupting industries ranging from agriculture and filmmaking to industrial inspection and conservation. According to a recent study on the commercial applications of drone technology by PwC, the emerging global market for business services using drones is valued at over $127 billion.
|
<urn:uuid:32b50939-ee06-416b-a454-33bf78d0f9f1>
|
CC-MAIN-2017-09
|
https://internetofbusiness.com/ge-commercial-drone-detect-gas-leaks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00080-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.931409 | 490 | 2.546875 | 3 |
NASA issued a study today that said if life ever existed on Mars, the longest lasting environments were most likely below the planet's surface.
The hypothesis comes from analyzing tons of mineral data gathered over the years from more than 350 sites on Mars gathered by NASA and European Space Agency Mars space probes.
The idea is that "Martian environments with abundant liquid water on the surface existed only during short episodes. These episodes occurred toward the end of a period of hundreds of millions of years during which warm water interacted with subsurface rocks. The types of clay minerals that formed in the shallow subsurface are all over Mars but the types that formed on the surface are found at very limited locations and are quite rare. This has implications about whether life existed on Mars and how the Martian atmosphere has changed."
NASA says another clue is the detection of a mineral called prehnite, which forms at temperatures above about 400 degrees Fahrenheit (about 200 degrees Celsius). These temperatures are typical of underground hydrothermal environments rather than surface waters, NASA said.
"If surface habitats were short-term, that doesn't mean we should be glum about prospects for life on Mars, but it says something about what type of environment we might want to look in," said the report's lead author, Bethany Ehlmann, assistant professor at the California Institute of Technology and scientist at NASA's Jet Propulsion Laboratory. "The most stable Mars habitats over long durations appear to have been in the subsurface. On Earth, underground geothermal environments have active ecosystems."
|
<urn:uuid:18e10d9b-030f-4428-bf05-e9139f62df53>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2221025/smb/nasa--if-there-was-life-on-mars--it-was-likely-underground.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00432-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.94868 | 340 | 4.15625 | 4 |
Biometric security is becoming more prevalent. More than 770 million biometric applications will be downloaded every year by 2019, Juniper Research said last year, quoted by CSO then. That’s up from only 6 million identity-proving biometric apps in 2015. It will be big, then.
What we’re usually thinking about, though, when biometrics are mentioned in the context of devices, is the proving of a person’s identity, perhaps with fingerprints.
However, some scientists think that there's another way to approach biometrics. They think it doesn't have to be geared solely towards identifying and verifying users. You can use it for security-related tracking too.
Behavioral researchers think that eye movement can be used to track the places a user looks at on a computer screen. Analyzing the viewed spots, including for how long, could let software provide specific messages pertaining to that content being viewed.
A use could be to advise computer users that they’re about to give away PII, or sensitive personally identifiable information online, think professors at the University of Alabama in Huntsville. A kind of phishing-prevention tool, possibly.
Ironically, in this case, the eye tracker isn’t primarily for identifying the person, as it’s usually used in biometric security. Its purpose is to stop the person getting identified. They’re using the same equipment, though.
“Displaying warnings in a dynamic manner that is more readily perceived and less easily dismissed by the user” is the goal, says the university’s press release. By creating pop-ups that appear when a user looks at a field in a form, for example, the scientists think they can produce a more effective warning than something static in a text box. It’s less same-old-same-old.
“I need to know how long the user's eyes stay on the area and I need to use that input in my research,” says Mini Zeng, a computer science doctoral student who’s been working on the project. The tracking calculates where the user’s eyes are on the screen and for how long.
If the user looks away from the PII-capturing form, the warning can be made to disappear. If the user looks back again, the warning flashes on the screen again and can stay there for a pre-determined amount of time—to force the user to read it. The researchers think that it’s the unpredictability of the warning flashing on the screen that adds to the effectiveness.
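The core logic described here, showing the warning when the gaze settles on a sensitive field, hiding it when the gaze leaves, and holding it on screen for a minimum time, amounts to a small state machine. The sketch below is a hypothetical illustration, not the university's actual software; the threshold values and structure are invented.
```c
/* Hypothetical gaze-driven warning logic; thresholds and data structures are
 * invented for illustration, not taken from the UAH prototype. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { double x, y; } Gaze;
typedef struct { double x0, y0, x1, y1; } Rect;   /* sensitive form field */

#define DWELL_TRIGGER_MS 400.0   /* gaze must settle this long to trigger */
#define MIN_DISPLAY_MS  2000.0   /* warning stays at least this long      */

typedef struct {
    double dwell_ms;        /* how long the gaze has been on the field */
    double shown_ms;        /* how long the warning has been visible   */
    bool   warning_visible;
} WarnState;

static bool inside(Gaze g, Rect r)
{
    return g.x >= r.x0 && g.x <= r.x1 && g.y >= r.y0 && g.y <= r.y1;
}

/* Call once per eye-tracker sample; dt_ms is the time since the last sample. */
static void update(WarnState *s, Gaze g, Rect field, double dt_ms)
{
    if (inside(g, field)) {
        s->dwell_ms += dt_ms;
        if (!s->warning_visible && s->dwell_ms >= DWELL_TRIGGER_MS) {
            s->warning_visible = true;
            s->shown_ms = 0.0;
            printf("WARNING: this field asks for personal information\n");
        }
    } else {
        s->dwell_ms = 0.0;
        /* Hide only after the minimum display time has elapsed. */
        if (s->warning_visible && s->shown_ms >= MIN_DISPLAY_MS) {
            s->warning_visible = false;
            printf("(warning hidden)\n");
        }
    }
    if (s->warning_visible)
        s->shown_ms += dt_ms;
}

int main(void)
{
    Rect ssn_field = { 100, 200, 300, 230 };
    WarnState s = { 0 };
    Gaze on  = { 150, 215 }, off = { 600, 500 };

    for (int i = 0; i < 30; i++) update(&s, on,  ssn_field, 50);  /* stare at field */
    for (int i = 0; i < 50; i++) update(&s, off, ssn_field, 50);  /* glance away    */
    for (int i = 0; i < 30; i++) update(&s, on,  ssn_field, 50);  /* look back      */
    return 0;
}
```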
"If you get a warning every single time and it becomes annoying or habitual, you are going to ignore it," says Dr. Sandra Carpenter, a psychology professor in the press release.
Although the University of Alabama researchers don’t mention, in their press release, how they see the system being implemented, presumably any web-based form that has a dubious intent could be made to display the dynamic warning, perhaps through URL whitelists and blacklists lookups. The warning could be independent of the website publisher.
And if eye-recognition biometric sensor hardware gets added to devices anyway, perhaps it could help with kids’ homework management. “Hey, you’ve been looking at that Instagram post a little too long. Get back to work,” the message might say.
This article is published as part of the IDG Contributor Network.
|
<urn:uuid:8925cf36-92cc-4d92-a0a4-9076a637fe8e>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/3042246/security/how-eye-tracking-could-stop-pii-leaks.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00132-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.94861 | 731 | 2.546875 | 3 |
Nov 95
Level of Govt: Local. Function: Land Use Planning
Problem/situation: There was no way for most database users in the city of Raleigh and Wake County, N.C., to access multiple levels of land-use information for analysis and mapping.
Solution: City and county GIS departments initiated a joint project and developed a Multi-access Parcel System.
Jurisdiction: Raleigh, N.C., Wake County, N.C., Lee County, Fla.
Vendors: IBM, Graphic Data Systems Inc.
Contact: Charles Friddle, Wake County geographic information systems director, 919/856-6375. Colleen Sharpe, geographic information systems director, Raleigh, 919/890-3636.
By Bill McGarigle, Contributing Writer

Until recently, there was no simple way for most database users in the city of Raleigh and Wake County, N.C., to access multiple levels of land-use information for analysis and mapping. Wake County GIS Director Charles Friddle explained that "departments needing information from the assessor's file could access attribute data via the IBM mainframe, using their PCs and Graphic Data Systems Inc. (GDS) software, but were not able to link that data with the graphics. To bring up a map described by the data required a separate search from a GIS workstation. All that took time." In response, city and county GIS departments initiated a joint project with system supplier GDS to find a solution. The result was the Multi-access Parcel System (MAPS) - a software program that Friddle said "gives us a quantum leap in system accessibility." Administrators and staff alike agree that MAPS has significantly cut the time needed to research records, eliminated much data-storage redundancy, and provided real-time data communication between municipal and county agencies.

BACKGROUND

For Raleigh and Wake County, the capability for rapid, simplified access to land-use information was a much-needed tool. Already home to 500,000 people and growing by 3 percent annually, the county is attracting increasing numbers of high-tech industries in electronics and biotechnology. Keeping up with the combined urban and industrial growth requires city and county agencies to track approximately 14,000 new construction starts a year; research the county's 180,000-plus parcels and identify each one over 10 acres to determine suitability for commercial and industrial development; provide maps in response to requests for sites with specific geographic criteria; and scramble to find locations for new schools. In addition, subdivisions, zoning changes, and development of new parcels currently produce between 6,500 and 10,000 land transactions annually, all requiring timely appraisal.

Part of the difficulty in accessing information, explained Raleigh GIS Manager Colleen Sharpe, also stemmed from agencies using different software applications. "The one used by the county assessor to access the mainframe looked completely different from ours over here. When our employees went over to Wake County, they needed to know how to use the computer one way. When their people came over here, they were looking at another computer and a different application."

Although the county and the city had been developing applications, adding information to the databases, and regularly upgrading the system since 1989, the GIS departments determined that the rate of growth and land-use changes in the county required larger numbers of staff to have a faster, simpler method of accessing various databases from desktop PCs. The GIS departments discussed the problems collectively and began looking at ideas other state and local agencies were using. Through GDS, they learned that Lee County, Fla., had a comparable IBM mainframe, was using the same GIS software, and had the same needs. Friddle pointed out that Lee County had also been working on a solution. "They had developed the connection between the IBM mainframe and the digital processor that was running the GIS system. We were able to take what they did and modify it for our needs. It took a lot of work. There were similarities, but there were also many differences. It gave us a place to start."
The two departments subsequently proposed a joint project to develop their own program, in concert with GDS. The MAPS project, as it was known, was funded by the county and the city of Raleigh. "Both contributed staff resources," Sharpe said. "I worked on it, one of their programmer analysts worked on it, then we had our GDS software-applications person working on it. The project was one of our best examples of cooperation." Despite hardware problems along the way, MAPS was up and running by the summer of 1995, nine months after the project began.

"Seamless" is how Sharpe described the MAPS application. "The user doesn't know or need to know what database to access to get the information. It's very user friendly - just follow the menu, point and click. The program was intended to have a Windows look, require a minimal amount of typing and only two to four hours instruction. We developed it pretty much as planned." Friddle stresses that MAPS doesn't allow an accessed database to be altered - it enables users to produce their own maps, do analyses, and make copies from their own PCs. "A lot of municipalities have a GIS system with PCs, workstations, or X-terminals accessing one digital processor. But what we have developed here is a network of PCs, X-terminals and workstations that enable users to access a wide range of databases on various platforms - whether it's an IBM mainframe, the county's digital processor, or the city of Raleigh's digital processor," Friddle said. "With our system, you don't have to know where the data is; you just ask for what you want, and the system tells you where the data is. You don't have two or three terminals sitting on your desk. You can do everything from one device."

"MAPS lets users with only one piece of information access the entire database," Sharpe added. "If you want to know where someone lives, and you have only the person's name, type it in, and a map comes up on the screen. If, at the same time, you want ownership information, you can get that also. If you have only a map and want to know who lives in a certain parcel, you can access the person's name and the associated ownership information - all through MAPS."

Assistant County Manager Wally Hill uses MAPS to access a variety of real estate and tax-accounting data on the mainframe. "The program is seamless for people like me, who don't have or need GIS training. I don't have to know where the information comes from - it's all accessible. Not only can I see a map of the site I'm interested in, but I can pull up all the existing information on it, using just this one program." Hill believes a version of MAPS may eventually provide citizens with a view-only access to public records.

ELUDAS

A valuable spin-off from the MAPS project is the Existing Land-Use Derivation Assignment System (ELUDAS) - a program that translates the county assessor's codes into attributes other agencies can use to plot land-use maps. The program was written by Scott Ramage, a student who interned for a year with the county planning department. "ELUDAS is a single-focus program," Friddle explained. "It is used to identify existing land use throughout the county. MAPS, on the other hand, allows users to view, query, analyze and report a wide variety of information - including the land-use information generated by ELUDAS."
Friddle added: "In the past, determining existing land use meant the planning department had to send someone out in the field with colored pencils to color maps, then graphically enter that information into the GIS. Now, the ELUDAS program enables planners to access the assessor's files for all that information without ever leaving their desks."

EXPANSION VS. COST

As the system is presently configured, Raleigh and Wake County agencies have real-time access to each other's databases via a T1 telecommunication line. At the time of this report, nearby towns of Cary and Fuquay-Varina can only download information from county agencies but expect to be in the data-sharing loop before the end of the year. According to Mike Jennings, a Wake County planning director, many small towns throughout the county find the hardware connections needed to access the system much too expensive at this time. However, the county provides maps and data on disk to all municipalities, without charge.

OTHER BENEFITS

Sharpe sees the MAPS project as an example of the benefits that can come from close cooperation between municipal and county governments. "We save our taxpayers money by not duplicating data; we use each other's data, free of charge, and reduce the time needed to respond to requests for information and other services."

OUTLOOK

Hill believes MAPS holds much promise for being available to a wide range of non-technical personnel, including himself. "I use it most of the time, except in instances that require expertise in GIS to conduct an intelligent data search, which MAPS doesn't do. Then, I have to call our GIS folks and ask for help. I don't have the knowledge of the software to do that myself." Hill concedes that, although MAPS saves time and cuts down redundancy in data storage, "right now, the real dilemma is how do you make GIS easy enough to use so that you don't always have to have your staff right there to help you do things." When asked about a version of MAPS for use in libraries, Jennings pointed to recent budget cuts. "Right now there's no money to provide either the hardware or a simplified query program that enables citizens to access public information. One thing we are seriously pursuing, however, is creating regional service centers throughout the county. The assessor wants to put a terminal in each of these so that people in outlying towns won't have to come downtown to get property information. But we're not talking about a lot of expansion," Jennings cautioned, "not after the county board of commissioners cut budgets by 20 percent last year." Tight budgets notwithstanding, the GIS program is not static, Friddle stressed. "MAPS and ELUDAS are only the most recent developments."
|
<urn:uuid:51b5de47-4e79-45fe-a875-2f1fb8cd2b38>
|
CC-MAIN-2017-09
|
http://www.govtech.com/magazines/gt/Merging-CityCounty-GIS-Efforts.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00484-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.958281 | 2,127 | 2.53125 | 3 |
As the name implies, this new feature highlights the top research stories of the week, hand-selected from prominent science journals and leading conference proceedings. This week brings us a wide-range of topics from stopping the spread of pandemics, to the latest trends in programming and chip design, and tools for enhancing the quality of simulation models.
Heading Off Pandemics
Virgina Tech researchers Keith R. Bisset, Stephen Eubank, and Madhav V. Marathe presented a paper at the 2012 Winter Simulation Conference that explores how high performance informatics can enhance pandemic preparedness.
The authors explain that pandemics, such as the recent H1N1 influenza, occur on a global scale and affect large swathes of the population. They are closely aligned with human behavior and social contact networks. The ordinary behavior and daily activities of individuals operating in modern society (with its large urban centers and reliance on international travel) provides the perfect environment for rapid disease propagation. Change the behavior, however, and you change the progression of a disease outbreak. This maxim is at the heart of public health policies aimed at mitigating the spread of infectious disease.
Armed with this knowledge, experts can develop effective planning and response strategies to keep pandemics in check. According to the authors, “recent quantitative changes in high performance computing and networking have created new opportunities for collecting, integrating, analyzing and accessing information related to such large social contact networks and epidemic outbreaks.” The Virginia Tech researchers have leveraged these advances to create the Cyber Infrastructure for EPIdemics (CIEPI), an HPC-oriented decision-support environment that helps communities plan for and respond to epidemics.
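The underlying point, that changing contact behavior changes the course of an outbreak, can be seen even in a toy compartmental model, which is far simpler than the agent-based, network-driven simulations the Virginia Tech group actually runs. The sketch below is only that toy version, with illustrative parameter values.
```c
/* Toy SIR model; illustrative parameters only -- not the agent-based
 * network simulation described in the paper. */
#include <stdio.h>

/* Returns the peak fraction of the population infected at once. */
static double run_sir(double beta, double gamma, int days)
{
    double S = 0.999, I = 0.001, R = 0.0, peak = I;
    for (int t = 0; t < days; t++) {
        double new_inf = beta * S * I;     /* depends on the contact rate beta */
        double new_rec = gamma * I;
        S -= new_inf;
        I += new_inf - new_rec;
        R += new_rec;
        if (I > peak) peak = I;
    }
    return peak;
}

int main(void)
{
    double gamma = 0.2;                    /* roughly a 5-day infectious period */
    printf("normal contact  (beta=0.5): peak infected = %.1f%%\n",
           100.0 * run_sir(0.5, gamma, 365));
    printf("reduced contact (beta=0.3): peak infected = %.1f%%\n",
           100.0 * run_sir(0.3, gamma, 365));
    return 0;
}
```
Cutting the contact rate shrinks the epidemic peak by several fold in this toy run, which is the effect that behavior-changing public health policies, and the HPC decision-support tools that evaluate them, are trying to exploit at realistic scale.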
Help for the “Average Technologist”
Another paper in the Proceedings of the Winter Simulation Conference addresses methods for boosting the democratization of modeling and simulation. According to Senior Technical Education Evangelist at The MathWorks Justyna Zander (who is also affiliated with Gdansk University of Technology in Poland) and Senior Research Scientist at The MathWorks Pieter J. Mosterman (who is also an adjunct professor at McGill University), the list of practical applications associated with computational science and engineering is expanding. It’s common for people to use search engines, social media, and aspects of engineering to enhance their quality of life.
The researchers are proposing an online modeling and simulation (M&S) platform to assist the “average technologist” with making predictions and extrapolations. In the spirit of “open innovation,” the project will leverage crowd-sourcing and social-network-based processes. They expect the tool to support a wide range of fields, for example behavioral model analysis, big data extraction, and human computation.
In the words of the authors: “The platform aims at connecting users, developers, researchers, passionate citizens, and scientists in a professional network and opens the door to collaborative and multidisciplinary innovations.”
OpenStream and OpenMP
OpenMP is an API that provides shared-memory parallel programmers with a method for developing parallel applications on a range of platforms. A recent paper in the journal ACM Transactions on Architecture and Code Optimization (TACO) explores a streaming data-flow extension to the OpenMP 3.0 programming language. In this well-structured 26-page paper, OpenStream: Expressiveness and data-flow compilation of OpenMP streaming programs, researchers Antoniu Pop and Albert Cohen of INRIA and École Normale Supérieure (Paris, France) present an in-depth evaluation of their hypothesis.
The work addresses the need for productivity-oriented programming models that exploit multicore architectures. The INRIA researchers argue the strength of parallel programming languages based on the data-flow model of computation. More specifically, they examine the stream programming model for OpenMP and introduce OpenStream, a data-flow extension of OpenMP that expresses dynamic dependent tasks. As the INRIA researchers explain, “the language supports nested task creation, modular composition, variable and unbounded sets of producers/consumers, and first-class streams.”
“We demonstrate the performance advantages of a data-flow execution model compared to more restricted task and barrier models. We also demonstrate the efficiency of our compilation and runtime algorithms for the support of complex dependence patterns arising from StarSs benchmarks,” notes the abstract.
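For readers unfamiliar with the baseline being extended: the paper builds on OpenMP 3.0, which had tasks but no way to declare data dependences between them; later OpenMP versions (4.0 and up) added depend clauses, shown in the fragment below, while OpenStream goes further with first-class streams and variable, unbounded sets of producers and consumers that plain depend clauses cannot express. The fragment is only the standard mechanism, not OpenStream's own syntax.
```c
/* Standard OpenMP task dependences (OpenMP 4.0+): a tiny data-flow-style
 * pipeline, shown as background for what OpenStream extends.
 * Compile with, e.g., gcc -fopenmp; without OpenMP it still runs sequentially. */
#include <stdio.h>

int main(void)
{
    int x = 0, y = 0;

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task shared(x) depend(out: x)            /* producer */
        x = 42;

        #pragma omp task shared(x, y) depend(in: x) depend(out: y)  /* filter */
        y = x * 2;

        #pragma omp task shared(y) depend(in: y)             /* consumer */
        printf("result: %d\n", y);
    }   /* tasks complete at the implicit barrier */

    return 0;
}
```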
The Superconductor Promise
There’s no denying that the pace of Moore’s Law scaling has slowed as transistor sizes approach the atomic level. Last week, DARPA and a semiconductor research coalition unveiled the $194 million STARnet program to address the physical limitations of chip design. In light of this recent news, this research out of the Nagoya University is especially relevant.
Quantum engineering specialist Akira Fukimaki has authored a paper on the advancement of superconductor digital electronics that highlights the role of the Rapid Single Flux Quantum (RSFQ) logic circuit. Next-generation chip design will need to minimize power demands and gate delay and this is the promise of RSFQ circuits, according to Fukimaki.
“Ultra short pulse of a voltage generated across a Josephson junction and release from charging/discharging process for signal transmission in RSFQ circuits enable us to reduce power consumption and gate delay,” he writes.
Fukimaki argues that RSFQ integrated circuits (ICs) have advantages over semiconductor ICs and energy-efficient single flux quantum circuits have been proposed that could yield additional benefits. And thanks to advances in the fabrication process, RSFQ ICs have a proven role in mixed signal and IT applications, including datacenters and supercomputers.
Working Smarter, Not Harder
Is this a rule to live by or an overused maxim? A little of both maybe, but it’s also the title of a recent journal paper from researchers Susan M. Sanchez of the Naval Postgraduate School, in Monterey, Calif., and Hong Wan of Purdue University, West Lafayette, Ind.
“Work smarter, not harder: a tutorial on designing and conducting simulation experiments,” published in the WSC ’12 proceedings, delves into one of the scientist’s most important tasks, creating simulation models. Such models not only greatly enhance scientific understanding, they have implications that extend to national defense, industry and manufacturing, and even inform public policy.
Creating an accurate model is complex work, involving thousands of factors. While realistic, well-founded models are based on high-dimensional design experiments, many large-scale simulation models are constructed in ad hoc ways, the authors claim. They argue that what’s needed is a solid foundation in experimental design. Their tutorial includes basic design concepts and best practices for conducting simulation experiments. Their goal is to help other researchers transform their simulation study into a simulation experiment.
|
<urn:uuid:02a1f87b-48a9-403f-8f39-e744cfe3cd59>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2013/01/24/the_week_in_hpc_research/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00304-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.904443 | 1,421 | 2.515625 | 3 |
Decision Sciences Myths Debunked
By Dhiraj Rajaram | Posted 2011-01-07
Data-driven management yields better results for business -- and getting started is easier than you may think.
Decision Sciences is the discipline of solving business problems by using first principles with a mix of mathematics, business knowledge, and technology. Familiar examples include data mining and predictive analytics. It emphasizes the adoption of a structured, hypothesis-driven approach to break down the problem, analyze data, and make business decisions.
Executives often make poor decisions when they rely on intuition and try to predict the future based on gut feel. Decision Sciences provide effective methods to examine alternatives, separate facts from biases, and improve decision making. Many Fortune 500 companies and industry pioneers are leveraging data-based methods to identify efficient ways to do work, adapt to swift changes in the economic environment, and react to changing customer needs. But despite the growing acceptance of Decision Sciences, myths about the field persist in the marketplace.
Myth: Decision Sciences requires clean data.
Reality: It is true that building enterprise data warehouses would help institutionalize data-driven decision making, but it is not a prerequisite to getting started. “Dirty data" -- raw pieces of information that are not cleansed, are stored in disparate sources and contain missing values and outliers -- can still yield value. And processing data can induce bias and skew results. The most important requirements for getting your organization to start making data-driven decisions are raw data and analysts who are capable of operating with a hypothesis-driven mindset.
Myth: Decision Sciences raise the risks of breaching data privacy and security.
Reality: Techniques to mask customer sensitive information transform data without altering the intrinsic patterns that exist and are necessary for analyses.
Myth: Decision Sciences requires a large budget and expensive off-the-shelf tools.
Reality: Sophisticated analytics software is not a prerequisite; first principles-based thinking and a hypothesis-driven mindset are the key elements. Force-fitting generic models to the tools available in the market will give results that range from incorrect to merely average. A qualified analyst can cut and slice the information available without any expensive software to yield insights that guide businesses to make the right decisions.
Myth: Decision Sciences requires highly trained personnel.
Reality: One does not need a PhD, formal training in statistics or proficiency in programming logic to get started on decision sciences. Understanding the information available, applying it in the right context and asking the right set of questions are the keys to successfully practicing Decision Sciences. To quote a famous statistician John Tukey, “An approximate answer to the right problem is worth a great deal more than an exact answer to an approximate problem.”
Demystifying Decision Sciences helps an enterprise open doors - each open door leads to actionable insights and new doors. It is a constantly evolving maze and business managers need to use the right combination of business, math and technology to prioritize issues and open the right doors - making it more ubiquitous and guiding all decision processes.
Firms that approach Decision Sciences expecting an "aha!” moment will likely be disappointed at the lack of dramatic insights or incredible new market opportunities. The reality is that Decision Sciences provides improvement and focus to existing business processes, not million-dollar discoveries.
Dhiraj Rajaram is the founder and CEO of Mu Sigma, an analytics services company that helps clients institutionalize data-driven decision making.
|
<urn:uuid:0f1f5de4-afbc-48a9-aa3c-356ef7b208c8>
|
CC-MAIN-2017-09
|
http://www.baselinemag.com/security/Decision-Sciences-Myths-Debunked
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00532-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.899072 | 708 | 2.78125 | 3 |
The National Institute of Standards and Technology (NIST) has added the identification of DNA, footmarks, and specific fingerprint information to its biometric data standard, with more enhancements slated for the future.
The organization–which oversees technology standards for the federal government–published a significantly revised biometric standard last month that expands the amount of information that can be used across the world to identify victims of crimes or solve the crimes themselves, it said.
Federal agencies such as the Department of Defense (DOD), the FBI, the Department of Homeland Security–as well as international law-enforcement entities–use the standard as a common language for the collection and exchange of biometric data.
The new standard–the Data Format for the Interchange of Fingerprint, Facial, & Other Biometric Information–now includes a way to identify DNA, which is becoming key to the identification of perpetrators of crimes such as murder and rape. The move represents the first international standard for the exchange of DNA biometric data, according to NIST.
via InformationWeek Government; the article continues at the original source.
|
<urn:uuid:d14974fa-59f8-4dfb-9630-3e19e3622b05>
|
CC-MAIN-2017-09
|
http://www.fedcyber.com/2011/12/09/biometric-standard-expanded-to-include-dna-footprints/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00300-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.922297 | 217 | 2.78125 | 3 |
Every day, two-factor authentication - ATM-style identification combining the use of something you know (a password) with something you have (a token) - proves itself to be an essential part of broad-based information security systems, mitigating multiple threats, and protecting identities and information assets. While never claiming to be information security's silver bullet, strong two-factor authentication plays a crucial role in protecting vital data.
In the fight against Internet crime, the static password is the user's worst enemy. Two-factor authentication eliminates the risk of most phishing attacks, which rely on the mass harvesting of identity and account information for "replay" later. Two-factor authentication also prevents user impersonation through guessed passwords or with passwords harvested from other sites - a prominent issue today as users struggle to manage multiple passwords across various online accounts. To suggest that two-factor authentication is useless because it doesn't directly prevent real-time man-in-the-middle attacks - in which the attacker sets up a fake Web site to which he lures users who then unwittingly enter their personal information - implies there is a fix-all solution that will solve the problem.
Users need a convenient, reliable way of recognizing when it's safe to provide a credential to an application, and of verifying that the application is authentic. Along these lines, RSA Security has been exploring new ways in which the browser and operating system interfaces for user authentication can be strengthened. We are working with other leaders in the industry to raise the standard for authentication interfaces and, in particular, the protocols for authentication exchanges with Web sites. These improvements, along with protections against various forms of malware, will go a long way toward addressing the legitimate concerns raised by man-in-the-middle attacks. More importantly, they will help to ensure ongoing consumer confidence in e-commerce.
Strong two-factor authentication has proven itself to be a highly effective means of protecting corporations and individuals from a multitude of cybercrimes, in both business-to-business and consumer applications. In conjunction with the other developments outlined above, two-factor authentication is more necessary today than ever - the reason why organizations such as the National Institute of Standards and Technology, the Federal Deposit Insurance Corp. and Microsoft have identified it as the way forward. The idea that it does nothing to protect against identity theft is not just incorrect - it's recklessly defeatist. Like a doom-merchant advocating there is no point in locking your front door if you live in a war zone, detractors are missing the obvious point that there are dozens of threats out there - and no one solution will prevent them all.
Let's work together to ensure the promise of trustworthy online commerce - and direct our strongest response at those who are capitalizing on current security weaknesses, rather than those who are investing in fixing them.
Learn more about this topic: the opposing viewpoint, by Bruce Schneier of Counterpane.
|
<urn:uuid:dec36f28-6cf3-47f5-a69a-7cf236fb7e3c>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2320143/lan-wan/is-two-factor-authentication-too-little--too-late--no-.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00300-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.929415 | 599 | 2.59375 | 3 |
NASA's Cassini spacecraft, which is now flying near Saturn, is turning its cameras back toward Earth today so it can grab a photo of its home planet from hundreds of millions of miles away.
Scientists expect Earth to appear as a small blue dot between the rings of Saturn in the image, NASA said.
The photo is being taken as part of a series of images Cassini is shooting of the Saturn system. The images ultimately will be put together to create a mosaic of Saturn.
"While Earth will be only about a pixel in size from Cassini's vantage point 898 million miles away, the team is looking forward to giving the world a chance to see what their home looks like from Saturn," said Linda Spilker, Cassini project scientist. "We hope you'll join us in waving at Saturn from Earth, so we can commemorate this special opportunity."
Cassini will start taking Earth's image at 5:27 p.m. ET today. The effort is expected to last for about 15 minutes.
While the image is being taken, Saturn will be eclipsing the sun from Cassini's vantage point. Since the spacecraft will be in Saturn's shadow, it will have a clear view of the planet's rings.
NASA reported that at the time of the photo, North America and part of the Atlantic Ocean will be in sunlight.
Unlike two previous Cassini eclipse mosaics of the Saturn system in 2006, which captured Earth, and another in 2012, today's image will be the first to capture the Saturn system with Earth in natural color, the space agency noted.
It also will be the first to capture Earth and its moon with Cassini's highest-resolution camera. When the two earlier photos were taken, the sun would have damaged Cassini's sensitive lenses on its best camera.
"Ever since we caught sight of the Earth among the rings of Saturn in September 2006 in a mosaic that has become one of Cassini's most beloved images, I have wanted to do it all over again, only better," said Carolyn Porco, Cassini imaging team lead. "This time, I wanted to turn the entire event into an opportunity for everyone around the globe to savor the uniqueness of our planet and the preciousness of the life on it."
The robotic spacecraft has been orbiting Saturn for years. The project is a joint effort between NASA's Jet Propulsion Laboratory, the European Space Agency and the Italian Space Agency.
This article, Say 'cheese,' Earthlings! Spacecraft to snap home planet pic from deep space, was originally published at Computerworld.com.
|
<urn:uuid:6064d181-b488-4a73-b007-06ee460ed14c>
|
CC-MAIN-2017-09
|
http://www.computerworld.com/article/2497974/emerging-technology/say--cheese---earthlings--spacecraft-to-snap-home-planet-pic-from-deep-space.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00352-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.945639 | 591 | 3.46875 | 3 |
Bellovin pointed out several years ago that most network penetrations are possible thanks to coding errors. Within those coding errors, about 50% are due to buffer overflows.
An attacker interested in penetrating your system knows that the weakest link will not only include people, but also code that they’ve written. The buffer overflow is a common technique because of the amount of C code in use and the lack of array bounds checking required by the C programming language. This gives an attacker an easy way to cause an error in a system in a controlled manner, which in turn allows them access to an organization’s networks, applications, and information.
In fact, Robert Morris’ Internet worm (The Morris Worm) from the late 1980s used a buffer overflow attack to exploit known vulnerabilities in Unix sendmail, finger, and rsh/exec. Another example is the “Ping of Death,” which results in a buffer overflow when a 65,536-byte ping packet is sent fragmented.
According to Bellovin, there are several things that organizations can do to mitigate the risk of buggy software. These include:
- Use any programming language other than unprotected C
- Program in Java
- Program in C++ using class String
- If an organization must use C, then a safe string library that properly handles string values should be created
- If C must be used, evaluate the various library functions and use the ones that are known to handle string values safely
- Programmers should always check buffer lengths
Not only is this useful advice for an organization writing code to follow, it also gives useful information that can be applied during testing and validation activities. For example, one test of a new or changed system should always be how it handles strings that are outside of expected values.
Bellovin also indicates that part of the reason buggy code exists is that requirements are usually insufficient. For example, a developer is commonly given a requirement such as the following:
“Field x should be 256 bytes long.”
Which is significantly different than:
“Field x should be 256 bytes long, any input longer than this must be rejected.”
The second form of the requirement is clearly more complete than the first. A developer uses the requirements that he gets to code the system; the more complete these are the more error-free the system will be. A software tester will also use these requirements to validate the functioning of the system.
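Tying the two threads together, checking buffer lengths and writing requirements that say what happens to oversized input, a C handler for the second form of that requirement might look like the sketch below. It is illustrative only; the field name and size come from the example requirement, not from any real system.
```c
/* Illustrative handler for the requirement: "Field x should be 256 bytes
 * long, any input longer than this must be rejected." */
#include <stdio.h>
#include <string.h>

#define FIELD_X_MAX 256

struct record {
    char field_x[FIELD_X_MAX + 1];   /* +1 for the terminating NUL */
};

/* Returns 0 on success, -1 when the input violates the requirement.
 * Unlike a bare strcpy(), this never writes past the end of the buffer. */
static int set_field_x(struct record *r, const char *input)
{
    size_t len = strlen(input);
    if (len > FIELD_X_MAX)
        return -1;                   /* reject, exactly as the requirement says */
    memcpy(r->field_x, input, len + 1);
    return 0;
}

int main(void)
{
    struct record r;
    char oversized[400];

    memset(oversized, 'A', sizeof(oversized) - 1);
    oversized[sizeof(oversized) - 1] = '\0';

    printf("normal input:    %s\n", set_field_x(&r, "hello")    == 0 ? "accepted" : "rejected");
    printf("300+ byte input: %s\n", set_field_x(&r, oversized) == 0 ? "accepted" : "rejected");
    return 0;
}
```
A strcpy() call in place of the length check is exactly the pattern that turns a vague requirement into an exploitable overflow, and the oversized-input case in main() is the kind of test a validator should always run.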
Unfortunately, firewalls don’t address every security risk organizations face. As leaders in the field of information security indicate, getting requirements right, coding and testing using those requirements, and using the correct tools are all aspects of a holistic approach to security.
|
<urn:uuid:a58a6895-44b6-4c10-bbfe-a73b8deccccc>
|
CC-MAIN-2017-09
|
http://blog.globalknowledge.com/2012/10/22/what-is-the-1-security-threat-not-addressed-by-a-firewall/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00472-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.919269 | 557 | 3.375 | 3 |
The fiber optic amplifier plays a key role in enhancing the capacity of a communication system to deliver information. Light signals are transmitted using optical transmitters, optical receivers and optical fiber.

The optical amplifier is a device that amplifies an optical signal directly, without the need to first convert it to an electrical signal. Its most important parameters are gain, bandwidth and noise performance. The amplifier compensates for the weakening of the signal during transmission that is caused by fiber optic attenuation, and its gain depends on the wavelength and power of the input signal.

Fiber amplifiers also offer a simple amplifier and sensor setup, which leads to enhanced stability for previously difficult detection applications, and some fiber amplifier sensors offer very high resolution. What is more, fiber amplifiers can deliver very high output powers with diffraction-limited beam quality. Their saturation characteristics help prevent intersymbol interference, which makes them vital for optical fiber communications, and operating them in the strongly saturated regime enables the highest output power. Amplified spontaneous emission limits the achievable gain. It is also important to safeguard a high-gain amplifier from parasitic reflections, which can cause parasitic laser oscillation or even damage the fiber.
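For reference, the gain and noise performance mentioned above are normally quoted in decibels; the standard textbook definitions, not specific to any particular product, are:
```latex
G_{\text{dB}} = 10\,\log_{10}\!\frac{P_{\text{out}}}{P_{\text{in}}},
\qquad
\text{NF}_{\text{dB}} = 10\,\log_{10}\!\frac{\text{SNR}_{\text{in}}}{\text{SNR}_{\text{out}}}.
```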
Optical amplifiers can be pumped in the forward direction, in the backward direction, or bidirectionally. The small-signal gain is essentially the same for either pump direction, although other properties, such as the noise characteristics and the behavior of the saturated amplifier, can depend on it. Furthermore, an amplifier of this kind allows a weak signal impulse to be amplified in a nonlinear medium. Along with the advancement of the technology, the quality of fiber amplifiers has improved greatly, and they are now widely used. Besides, there are all sorts of products on the market, so buyers have more opportunities to pick one that is ideal for their needs.

However, when it comes to choosing a fiber optic amplifier, the best approach is to find reputable providers that focus on this type of product. The components of these products are complex, and most buyers are unfamiliar with the related details. Professional providers can use their knowledge and years of experience to offer sound advice, which can help you make the right decision. Some providers also offer a warranty, so that you can return the unit for repair if it breaks down.

CATV EDFA is a type of fiber optic amplifier. It is used to increase the output power of the transmitter and extend the signal transmission distance. It is widely applied to the long-haul transmission of TV signals, video, telephone and data. FiberStore provides high-output-power, low-noise EDFA CATV amplifiers with a selection of output powers from 14dBm to 27dBm to meet the requirements of a high-density solution for the large-scale distribution of broadband CATV video and data signals to video overlay receivers in an FTTH/FTTP or PON system.
|
<urn:uuid:bdebbc78-31e0-446c-a964-5bfbfe7d6bec>
|
CC-MAIN-2017-09
|
http://www.fs.com/blog/why-we-need-a-fiber-optic-amplifier.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00400-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.930775 | 639 | 2.65625 | 3 |
- By Anne Armstrong
- Jul 09, 2012
It was simpler in the old days to recognize an attack: The enemy fired on your shores, burned your capital, and shot at soldiers and civilians. No one questioned we were at war in 1812 or 1941.
In the years since, the nature of war and aggression has changed rather dramatically. The guerrilla war in Vietnam made it difficult to distinguish between enemy soldier and civilian because both were enmeshed in a shadowy war in which everyone and everything was fair game. More recently, the spread of terrorism has ratcheted up the potential enemies and the potential costs.
As Amber Corrin discusses in her article on new developments in cyber warfare, governments are trying to define how and when to use the tools that have grown from the advanced software and the interconnected infrastructure that supports everything from the electrical grid to water purification plants and nuclear power plants. Last year, President Barack Obama signed executive orders that begin to define the rules of engagement in the cyber world. But much remains undefined. A preemptive strike takes on a very different meaning when the military inserts a virus into the workings of a computer belonging to a perceived rogue or enemy state.
And if the activities David Sanger describes in his book “Confront and Conceal” are true, then the United States has already initiated a cyberattack on a nuclear facility of what we see as a rogue state — in this case, Iran. The goal was to prove that a facility could be disabled without risking airplanes, pilots or innocent civilians on the ground. It certainly gives new meaning to the way we think about war.
Anne Armstrong is Co-President & Chief Content Officer of 1105 Public Sector Media Group.
|
<urn:uuid:358d511a-f5f7-4499-8d8a-f2bfe1902897>
|
CC-MAIN-2017-09
|
https://fcw.com/articles/2012/07/15/editors-note-defining-cyber-war.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00168-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.964764 | 347 | 2.5625 | 3 |
While there is a universal desire in the HPC community build the world’s exascale system, the achievement will require a major breakthrough in not only chip design and power utilization but programming methods, NVIDIA chief scientist Bill Dally said in a keynote address at ISC 2013 last week in Leipzig, Germany.
In last Monday’s speech, titled “Future Challenges of Large-scale Computing,” Dally outlined what needs to happen to achieve an exascale system in the next 10 years. According to Dally, who is also a senior vice president of research at NVIDIA and a professor at Stanford University, it boils down to two issues: power and programming.
Power may present the biggest dilemma to building an exascale system, which is defined as delivering 1 exaflop (or 1,000 petaflops) of floating point operations per second. The world’s largest rated supercomputer is the new Tianhe-2, which recorded 33.8 petaflops of computing capacity in the latest Top 500 list of the world’s largest supercomputers, while consuming nearly 18 megawatts of electricity. It has a theoretical peak of nearly 55 petaflops.
Theoretically, an exascale system could be built using only x86 processors, Dally said, but it would require as much as 2 gigawatts of power. That’s equivalent to the entire output of the Hoover Dam, Dally said, according to an NVIDIA blog post on the keynote.
Using GPUs in addition to X86 processors is a better approach to exascale, but it only gets you part of the way. According to Dally, an exascale system built with NVIDIA Kepler K20 co-processors would consume about 150 megawatts. That’s nearly 10 times the amount consumed by Tianhe-2, which is composed of 32,000 Intel Ivy Bridge sockets and 48,000 Xeon Phi boards.
Instead, HPC system developers need to take an entirely new approach to get around the power crunch, Dally said. The NVIDIA chief scientist said reaching exascale will require a 25x improvement in energy efficiency. So the 2 gigaflops per watt that can be squeezed from today’s systems needs to improve to about 50 gigaflops per watt in the future exascale system.
Relying on Moore’s Law to get that 25x improvement is probably not the best approach either. According to Dally, advances in manufacturing processes will deliver about a 2.2x improvement in performance per watt. That leaves an energy efficiency gap of 12x that needs to be filled in by other means.
Dally sees a combination of better circuit design and better processor architectures to close the gap. If done correctly, these advances could deliver 3x and 4x improvements in performance per watt, respectively.
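The arithmetic implied by those figures is worth spelling out; the 20 MW result below is derived from Dally's numbers rather than quoted from the talk:
```latex
\underbrace{2.2\times}_{\text{process}} \;\times\; \underbrace{3\times}_{\text{circuits}} \;\times\; \underbrace{4\times}_{\text{architecture}} \;\approx\; 26\times \;\gtrsim\; 25\times,
\qquad
\frac{10^{18}\ \text{flops/s}}{50 \times 10^{9}\ \text{flops/J}} = 2\times 10^{7}\ \text{W} = 20\ \text{MW}.
```
In other words, hitting 50 gigaflops per watt would bring a 1-exaflop machine down to roughly a 20-megawatt power budget, close to the size of today's largest systems rather than a power plant.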
According to NVIDIA’s blog, Dally is overseeing several programs in the engineering department that could deliver energy improvements, including: utilizing hierarchical register files; two-level scheduling; and optimizing temporal SIMT.
Improving the arithmetic capabilities of processors will only get you so far in solving the power crunch, he said. “We’ve been so fixated on counting flops that we think they matter in terms of power, but communication inside the system takes more energy than arithmetic,” Dally said. “Power goes into moving data around. Power limits all computing and communication dominates power.”
Besides addressing the power crunch, the way that supercomputers are programmed today also serves as an impediment to exascale systems.
Programmers today are overburdened and try to do too much with a limited array of tools, Dally said. A strict division of labor should be instituted among the triumvirate of programmers, tools, and the architecture to drive efficiency into HPC systems.
The best result is delivered when each group “plays their positions,” he said. Programmers ought to spend their time writing better algorithms and implementing parallelism instead of worrying about optimization or mapping, which are better off handled by programming tools. The underlying architecture should just provide the underlying compute power, and otherwise “stay out of the way,” Dally said according to the NVIDIA blog.
Dally and his team are investigating the potential for items such as collection-oriented programming methods to make programming supercomputers easier. Exascale-sized HPC systems are possible in the next decade if these limitations are addressed, he said.
|
<urn:uuid:aa3efad3-a557-47b0-a31d-a02fe34fb126>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2013/06/24/exascale_requires_25x_boost_in_energy_efficiency_nvidias_dally_says/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00168-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.942472 | 937 | 3.03125 | 3 |
If you work with data much, you don't need a statistical model to predict that the odds of consistently getting data in the format you need for analysis are pretty low. Those who do a great deal of data cleaning and reformatting often turn to scripting languages like Python or specialty tools such as OpenRefine or R.
But it turns out that there's a lot of data munging you can do in a plain old Excel spreadsheet -- if you know how to craft the proper formulas.
In a presentation at the recent 2014 Computer Assisted Reporting (CAR) conference, MaryJo Webster, senior data reporter with Digital First Media -- a newspaper group in New York -- shared some of her favorite Excel tricks. The goal of these tips, Webster said: Learn at least one new thing that will make you say, "Why didn't I know this before?"
Tip 1: Split dates into separate fields
You can extract the year, month and day into separate fields from a date field in Excel by using formulas =Year(CellWithDate), =MONTH(CellWithDate) and =DAY(CellWithDate). Splitting dates this way -- by year, month and day of month -- works in Microsoft Access as well, Webster said.
In addition, you can also get the day of the week for any date in Excel with =WEEKDAY(CellWithDate). The default returns numbers, not names of the days of week, with 1 for Sunday, 2 for Monday and so on.
To display the name of the weekday instead of a number, apply a custom format to the cells with the weekday numbers, using Format cells > Custom; then type ddd in the Type text box to get three-day abbreviations or dddd for the full day name.
Tip 2: Find someone's current age
If you have someone's date of birth, you can find his or her current age on whatever day you open the spreadsheet with the =DATEDIF() and =TODAY() functions. TODAY(), as you might guess, gives the current date. DATEDIF() gives the difference between two dates in units of years ("y"), months ("m") or days ("d"), using the syntax:
=DATEDIF(Date1, Date2, Unit of measure)
So, to get current age in years, use the formula:
=DATEDIF(CellWithBirthdate, TODAY(), "y")
Note that the years unit returns ages in whole numbers and does not round up.
Tip 3: Create multiple rows out of only one
Sometimes you need data in a format with one row for each observation, but what you already have comes with multiple observations for each row instead. In Webster's example of Affordable Care Act Exchange plan pricing, there is a column for prices in each age group: 1-20 years old, 21 years old, 22 years old and so on. However, some visualization and analysis tools require one row for each plan/price combination, not one row with multiple prices.
Reshaping Excel data
Tableau visualization software is one such tool that needs one data point per row, not multiple data points per row, so the vendor created a Tableau Reshaper Tool that works with recent versions of Excel on Windows.
You can download this free tool from the Tableau website. Although one add-in says it's for Excel 2010, it worked fine with Excel 2013 on my Windows 8 PC.
Several CAR attendees said they've spent hours reshaping large data sets by manually cutting and pasting, and the free Tableau tool will save them a lot of time. You don't need to have other Tableau software installed on your system to use it.
The columns you're keeping as row ID columns should be placed on the left, and all your data columns on the right. To use the reshaper tool, put your cursor on the first cell with data that you want transformed. Then go to the Tableau menu and choose reshape data. Say OK.
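If you prefer to do the same reshaping outside Excel, the equivalent operation in Python's pandas library is a single melt() call. The column names below are invented for illustration, not taken from Webster's data:

import pandas as pd

# Hypothetical wide format: one row per plan, one price column per age band.
wide = pd.DataFrame({
    "plan":   ["Bronze A", "Silver B"],
    "age_21": [180.0, 210.0],
    "age_22": [185.0, 216.0],
})

# Reshape to one row per plan/price observation, as the Tableau reshaper does.
long = wide.melt(id_vars="plan", var_name="age_band", value_name="price")
print(long)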
Tip 4: Create more easily sortable data
Another common data format problem is when you get a "spreadsheet" that's less like a sortable table of data and more like a Word document with column headers. One example: a spreadsheet with the name of a team on one row followed by all the players on that team, then the name of another team right below followed by the players and so on. It's difficult to analyze a worksheet where column headers are interspersed with data, since you can't easily sort, filter or visualize data by team.
One way to deal with this is to add a new column with the team name for each player.
Reformatting Excel data
"The trick is that you need to have a pattern to follow," according to Webster. In the example above, the position column is empty for the team name rows but filled in for the player rows. By filling in just the first cell with the team name manually, you can then use this formula to automatically fill in the rest:
=IF(ISBLANK(B3),A3,C2)
That says: if cell B3 is blank, fill in the value of the cell in the first column of the same row (in this case A3). Otherwise, fill in the value from the cell that's just above it (in this case C2, which should be the team name from the row above for all the player rows). Make sure to start with the first player row after having manually entered the first header row.
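The same fill-down pattern is available in pandas as well; this sketch uses invented column names and simply forward-fills the team name onto each player row:

import pandas as pd

# Hypothetical sheet: team rows have an empty position cell, player rows do not.
df = pd.DataFrame({
    "name":     ["Team Alpha", "Ann", "Bob", "Team Beta", "Cid"],
    "position": [None, "GK", "FW", None, "DF"],
})

# Put the team name in a new column on the header rows only, then forward-fill.
df["team"] = df["name"].where(df["position"].isna()).ffill()
print(df[df["position"].notna()])    # keep only the player rows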
Search and replace
Tip 5: Create a new column
You probably know that you can do a search and replace in Excel with a typical text-editor control-F find-and-replace. But did you know that you can also create an entirely new column in Excel based on search-and-replace on an existing column? That needs the =SUBSTITUTE function, using the syntax:
=SUBSTITUTE(CellWithText, "oldtext", "newtext")
For more of Webster's Excel tips, including how to do data lookups from another worksheet using VLOOKUP(), download her PDF document My Favorite (Excel) Things 2014 and the sample spreadsheet.
Sharon Machlis is online managing editor at Computerworld. Her e-mail address is [email protected]. You can follow her on Twitter @sharon000, on Facebook, on Google+ or by subscribing to her RSS feeds: articles; and blogs.
This story, "5 Tips for Data Manipulation in Excel" was originally published by Computerworld.
Source: http://www.cio.com/article/2378199/enterprise-software/5-tips-for-data-manipulation-in-excel.html
Confusion surrounds the taxonomy of VoIP technology and IP telephony. Both refer to using the same IP network to send voice, but the main difference is this: VoIP connects old-fashioned analog phones to gateway devices that convert the analog voice signal into digital bits and send them across the Internet, bypassing the expensive PSTN telephone networks. With IP telephony, the phones themselves are digital devices; they encode the user's voice directly into a digital signal and send it across the IP network using communication manager devices that make the technology work. IP telephony thus resides on the IP network and natively uses it for communication.
Source: https://howdoesinternetwork.com/tag/ip-telephony
Sniffer, Spoofer (generic description).
A sniffer or a spoofer is usually a standalone program that intercepts and analyses certain data. For example, a sniffer can intercept and analyse network traffic and capture specific data, such as passwords. Trojans sometimes use sniffing capabilities to steal passwords and user information from infected computers.
There also exist a lot of commercial and free sniffers. They can be used to analyse network traffic for performance, security issues and faults.
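For readers who have never seen one, a minimal sniffer can be written in a few lines with the Scapy library (an assumption here, not something F-Secure ships); it typically needs administrator/root privileges and should only ever be run on a network you own:

from scapy.all import sniff

# Capture ten packets on the default interface and print a one-line summary of each.
packets = sniff(count=10)
for pkt in packets:
    print(pkt.summary())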
Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for more information.
Description Details: Alexey Podrezov, July 14th, 2003
Description Last Modified: Alexey Podrezov, May 24th, 2004
Source: https://www.f-secure.com/v-descs/sniffer.shtml
Programming tools that harness the computing power of CPUs and graphics processors have been updated, bringing more parallel programming capabilities to the table.
Standards-setting firm Khronos Group released OpenCL 2.0, which is a key development platform used to write applications in which processing is broken down over multiple processors and hardware inside systems. The group also released OpenGL 4.4, a graphics programming standard that takes advantage of the latest graphics hardware available in consoles, PCs and mobile devices.
OpenCL has grown in importance as graphics hardware and other co-processors are increasingly used to crunch complex math and science applications. Some of the world's fastest computers combine CPUs and co-processors to speed up application processing, and Hewlett-Packard and Dell are offering servers and workstations loaded with graphics cards for customers that work on visual and CAD/CAM applications.
"It does significantly expand some of the new GPGPU compute function," said Jim McGregor, principal analyst at Tirias Research.
McGregor referred to general-purpose graphics processing unit computing, in which processing is increasingly offloaded to graphics processors in systems.
But the effectiveness of OpenCL depends on the programs and operating systems involved, which need to support all of its functions. OpenCL is backed by organizations such as Intel and Nvidia, which offer their own parallel programming tools to speed up processing of applications. Microsoft offers DirectX, its own parallel programming framework, which is also used for game development and rendering.
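To give a flavour of what OpenCL looks like from the host side, here is a minimal vector-add sketch using the PyOpenCL bindings. PyOpenCL and an installed OpenCL driver are assumptions here, and the example does not rely on any of the new 2.0 features discussed above:

import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()            # pick any available CPU or GPU device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)   # run the kernel
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)                     # copy the result back to the host
assert np.allclose(result, a + b)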
Khronos also announced OpenGL 4.4, which allows "applications to incrementally use new features while portably accessing state-of-the-art graphics processing units (GPUs) across diverse operating systems and platforms," the organization said in a release.
The new graphics specification also allows easy porting of applications across APIs (application programming interfaces), Khronos said.
Source: http://www.itworld.com/article/2707208/hardware/khronos-group-releases-updates-to-graphics--parallel-programming-tools.html
Security threats to your mobile device lurk as malware, fraudulent lures such as SMS spoofing, and toll fraud, and they're all becoming favorites of digital crooks as people move away from using PCs and toward smartphones and tablets, according to a new report.
Such cybercrime is worth big money, whether it happens on your PC or smartphone. Cybercrime in 2011 cost consumers $110 billion worldwide and $21 billion in the United States, according to Symantec's recently released annual Cybercrime Report (PDF).
But online crime may soon cost us more. The frequency of mobile threats doubled between 2010 and 2011, Symantec says, and 35% of online adults worldwide have either lost or had their mobile device stolen, exposing them to identity and data theft.
In its report, Symantec defines mobile cybercrime as unsolicited text messages that captured personal details, an infected phone that sent out an SMS message resulting in excess charges (typically known as toll fraud), and traditional cybercrime such as e-mail phishing scams.
It sounds like your cell phone is open to some nasty threats, but is mobile security really something you should be worrying about? Does your smartphone need the same kind of 24/7 threat detection that your PC does?
No doubt, mobile devices are the next big target for malicious actors looking to make a quick buck. During this year's Black Hat conference in Las Vegas, for example, vulnerabilities were demonstrated against popular technologies used in mobile devices such as near field communication, baseband firmware, and HTML 5.
The problem is that while mobile threats may be rising, it's unclear just how prevalent these issues are in the United States. Symantec's statistics, for example, say that 31% of mobile users in 2011 received a text message from someone they didn't know or an SMS requesting they click on an embedded link or dial a certain number to get a "voicemail." All of these techniques are tricks the bad guys can use to inject malware onto your phone or attempt to trick you into handing over personal data.
But that 31% of users is a worldwide statistic based on interviews with more than 13,000 people in 24 different countries around the globe. Symantec also said it found the highest incidence of cybercrime in countries such as Russia, China, and South Africa where the rate of victimization ranges from 80 to 92%. High incidences of cybercrime in concentrated areas can often skew worldwide results, especially when those areas are highly populous nations such as China and Russia.
Lookout Weighs In
Lookout Mobile Security also recently released its annual mobile security report and noted that toll fraud, where malware secretly contacts high-priced SMS services that slap hidden charges on your mobile bill, is currently the most prevalent type of mobile malware. But this type of activity primarily affects users in Eastern Europe and Russia, the security firm says.
Links to malicious Websites, however, are a concern for mobile device users in the United States. Around four in ten American users are likely to click on an unsafe link, according to Lookout. Malicious links can come from e-mail, social networks, or the SMS-based spam and phishing techniques that Symantec described.
If you're an Android user, you should also be aware that your platform is the most popular target for malware creators, according to a recent report from security firm McAfee. That's hardly a surprise given the open approach Google takes to apps on Google Play as well as the fact that Android is the largest smartphone platform in the world.
One popular trick is to create an app that looks like a more popular program such as Angry Birds and bundle that fake app with malicious software. Lookout in late 2011 uncovered just such a scam in Google Play used for SMS toll fraud; however, that scam affected users in Europe and parts of Western Asia, not North America.
Mobile security threats are apparently on the rise, and this trend is bound to grow as more people turn to using smartphones and tablets in their everyday lives. For now, however, it appears the best approach for North American users to practice mobile security is to be wary of what you download and the links that you click on.
Make sure you're downloading genuine apps and not imitations from app stores such as Google Play or GetJar. Signs to look for in trusted apps include a large number of good user reviews written in coherent English, a link to the app developer's website to see if the app is actively supported, and the number of users an app has.
Beyond apps, just as on a PC, never click on a Web link purporting to be from a bank or other financial institution, especially if that link comes to you via SMS.
Mobile devices may be the next frontier for malware creators, but as with PCs, the best defense is to use common sense and be on your guard for incoming scams via e-mail, social networks, and text messages.
This story, "Mobile security threats rise" was originally published by PCWorld.
Source: http://www.itworld.com/article/2720599/mobile/mobile-security-threats-rise.html
Bluetooth Software Updates & Support
General Questions about Bluetooth
Bluetooth wireless technology is an international open standard for allowing intelligent devices to communicate with each other over short range wireless links. It allows any sort of electronic equipment - from computers and cell phones to keyboards and headphones - to make its own connections, without wires, cables or any direct action from a user.
A distinctive advantage for Bluetooth wireless technology is its low power consumption, enabling extended operation for battery powered devices like cell phones, personal digital assistants, and web tablets.
Bluetooth wireless technology includes both hardware and software components. The hardware consists of a Bluetooth module or chipset, which is comprised of a Bluetooth radio (transceiver) and baseband or a single-chip that contains both. Another part of the hardware is the antenna. The radio transmits and receives information via the antenna and the air interface. The chip also contains a digital signal microprocessor, which is part of the baseband. The key functions of the baseband are piconet and device control - for example, connection creation, frequency-hopping sequence selection and timing, modes of operation like power control and secure operation, and medium access functions like polling, packet types, packet processing and link types (voice, data, etc.).
The software consists of the protocol stack. Broadcom offers Widcomm® Bluetooth protocol software for embedded systems (BTE), Windows (BTW), and Windows-CE (BTW-CE). All Widcomm software provides simple integration, powerful diagnostics and the shortest possible time to market.
Bluetooth is primarily a cable replacement technology, enabling users to connect to a wide range of computing and telecommunications devices without using cables.
The real magic of Bluetooth wireless technology lies in the concept of a Personal Area Network (PAN) and ad hoc connectivity. Through the Discovery Service, PAN devices are capable of spontaneously joining into a network as they approach each other. This occurs only while the devices are in close proximity: the devices leave the network as they are removed from proximity. The opportunity for automatic, unconscious connections between mobile devices provides new freedom for end-users.
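As an illustration of the discovery service described above, the short sketch below uses the third-party PyBluez library for Python (an assumption; it is not part of the Bluetooth specification or of the Widcomm stack) to scan for nearby discoverable devices:

import bluetooth  # PyBluez

# Scan for nearby discoverable devices and print their address and friendly name.
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
print("Found %d device(s)" % len(nearby))
for addr, name in nearby:
    print("  %s  %s" % (addr, name))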
802.11 offers features that are complementary to Bluetooth wireless technology - such as greater bandwidth and greater range. But these same features are also limiting for 802.11 because the technology is a poor choice for small devices with limited battery power.
Bluetooth wireless technology and 802.11b/g both use the 2.4 GHz ISM (Industrial, Scientific, Medical) unlicensed spectrum, and in some configurations can interfere with each other. If the Bluetooth and 802.11b/g antennas are more than 3 meters apart, however, interference is minimal. Co-existence schemes such as adaptive frequency hopping have been implemented to address potential interference issues, and Broadcom has introduced its InConcert® coexistence technology, which mitigates interference problems.
Depending on the specification of the device, Bluetooth wireless technology works within a 30- to 300-foot radius.
The raw (over-the-air) data rate is 1 megabit/second. If two devices are talking to each other in an optimal manner, the channel will support 700 kilobits/second in one direction with a reverse channel of over 50 kilobits simultaneously. Bandwidth allocation is very flexible, and completely under the control of the device which establishes the connection.
Security is a critical issue in any wireless system. Bluetooth provides several components that ensure secure wireless connections. First, at the highest level, the application itself can provide authentication and encryption, and this is used in the most critical applications. Second, the Bluetooth specification provides for authentication and flexible encryption at the baseband level. The third level of security is based on the transmitter characteristics of low power and frequency hopping, which helps deter casual eavesdroppers.
Bluetooth wireless technology is different from historical wireless standards in that it is an open standard that is consistent worldwide. The Bluetooth Special Interest Group (SIG), comprised of leaders in the telecommunications, computing, and network industries, is driving development of the technology and bringing it to market. The Bluetooth SIG includes promoter companies 3Com, Ericsson, IBM, Intel, Lucent, Microsoft, Motorola, Nokia and Toshiba, and more than 2000 adopter companies.
The goal of the SIG is to promote the standard, ensure interoperability, define the radio characteristics, link protocols and profiles, and provide free access to the Bluetooth standard. The benefits to producers include standardized connectivity and protocols, opportunities for product enhancement and differentiation, and reduced interoperability concerns.
Broadcom is an associate member of the Bluetooth SIG and participates in various Bluetooth SIG working groups.
The Bluetooth standard outlines specifications for the following:
- Hardware: radio, baseband controller, other components
- Software: link controller stack, host controller interface, host drivers
- Qualification: protocols, testing services, review and certification
Product qualification is a way to ensure that Bluetooth products really do work together and a forum for demonstrating that a company's product complies with the Bluetooth specifications (per the Adopters Agreement). The qualification process involves protocol conformance tests, profile interoperability tests, compliance declarations and documentation reviews as described in the Bluetooth Qualification Program Reference Document (PRD). The PRD defines the specific test standards and criteria that must be met by hardware manufacturers and software developers in order to receive qualification.
Interoperability means plug and play operation with full compatibility among products from different manufacturers. Each manufacturer must send their products to a Bluetooth Qualification Test Facility, which performs interoperability tests under controlled conditions. Unplugfests are an informal forum to determine interoperability issues. Broadcom has been attending Unplugfests since they first started in 1999.
Bluetooth wireless technology is named after Harald Blåtand ("Bluetooth"), who was King of Denmark from 940 to 981. He was the son of Gorm the Old, the King of Denmark, and Thyra Danebod, daughter of King Ethelred of England. Harald was responsible for peacefully uniting Denmark and Norway. Bluetooth wireless technology unites devices such as PDAs, cellular phones, PCs, headphones, and audio equipment, using short-range, low-power, low-cost radio technology.
Source: https://www.broadcom.com/support/bluetooth
China's fastest supercomputers have some clear goals, namely the development of its artificial intelligence and robotics industries and its military capability, says the U.S.
But some of the early iterations of this effort seem a little weird.
China recently deployed what it calls a "security robot" in a Shenzhen airport. Named AnBot, it patrols around the clock and carries an electric cattle-prod-like device, according to a Chinese government newspaper, The People's Daily Online.
AnBot may seem like a Saturday Night Live prop, but it's far from it. The back end of this "intelligent security robot" is linked to China's Tianhe-2 supercomputer, where it has access to cloud services. AnBot conducts patrols, recognizes threats and has multiple cameras that use facial recognition.
These cloud services give the robots petascale processing power, well beyond onboard processing capabilities in the robot. The supercomputer connection is there "to enhance the intelligent learning capabilities and human-machine interface of these devices," said the U.S.-China Economic and Security Review, in a report released Tuesday that examines China's autonomous systems development efforts.
The ability of robotics to improve depends, this report said, on the linking of A.I., data science and computing technologies.
The report notes that simultaneous development of high-performance computing systems such as the Tianhe-2 and the Sunway TaihuLight supercomputers -- along with robotic mechanical manipulation -- "give A.I. the potential to unleash smarter robotic devices that are capable of learning as well as integrating inputs from large databases."
Both the Tianhe-2 and Sunway TaihuLight have dominated the rankings of the Top 500 supercomputing list. TaihuLight is currently the world's fastest supercomputer. China is intent on keeping its edge and has plans to deliver an exascale system in 2020, three years before the U.S. One exaflop equals one quintillion calculations per second; a quintillion is 1 followed by 18 zeros.
Chinese tech firms are working to learn as much as they can from their U.S. counterparts, particularly in A.I. development. The government report cited Baidu's founding, in 2011, of its Baidu Silicon Valley AI Lab as one example.
The report recommends that the government increase its own efforts in developing manufacturing technology in critical areas, as well as monitor China's "growing investments in robotics and artificial intelligence companies" in the U.S.
This story, "China’s policing robot: Cattle prod meets supercomputer" was originally published by Computerworld.
Source: http://www.itnews.com/article/3136745/robotics/china-s-policing-robot-cattle-prod-meets-supercomputer.html
Every year the percentage of security breaches that take several months to be discovered and contained increases – a statistic that clearly highlights companies’ inefficiency in identifying and responding to adverse events. Why?
Being able to handle a security breach requires two main components – a well-defined attack detection capability and a structured response phase. Currently, most enterprises are failing in at least one of these components. This article will focus on issues surrounding the response phase.
As soon as an Information Security Incident is declared, a specific procedure must be followed to ensure that it is treated and mitigated in a consistent way.
In general any adverse event that affects CIA (Confidentiality, Integrity and Availability) is considered an Information Security Incident. A significant number of incidents that an IT infrastructure faces usually impact one of these three attributes therefore a consistent methodology has to be adopted to track their progression during their entire lifecycle.
If the adverse event is caused by a system outage or as a consequence of human error, IT departments are usually able to deal with the incident and recover the situation. Generally, this is achieved through the engagement of subject matter experts from the IT team. Examples of these incidents include:
Unwanted change on core systems leading to a loss of integrity
The tracking and classification of these incidents is conducted by either the Incident Management or the Incident Handling team, while the IT Security team usually acts as a trusted advisor to ensure that Information Security Incidents are progressed until closure.
Technical actions are generally accomplished by someone outside the Security Team, usually the technical owner of the particular platform that caused the adverse event.
This works for well-defined Information Security Incidents. But is this process applicable to adverse events caused by external attackers also known as Cyber Incidents?
As previously noted, Incident Management and Incident Handling teams can apply an overall framework to ensure the tracking and progression of an Information Security Incident. Technical owners of potentially compromised systems are subject matter experts from an administrative point of view, but they are not trained to deal with the unique situations caused by an intruder. IT Security team members usually act as advisors, supporting IT development and infrastructure teams through security assessments, vulnerability reporting and both high-level and technical guidelines to fix the issues.
However, identifying a Cyber Attack that leads to unauthorised access requires specialised people who can spot anomalies across systems and implement successful containment actions. These skills are generally not covered by IT security personnel or by pure forensics people.
For these scenarios, a specific set of skills is required, ranging from defensive and offensive security, mixed with forensics techniques and methodologies that can be applied to both networks and hosts. In simple terms, people designated to deal with these types of adverse events must have Intrusion Forensics skills.
Intrusion Forensics is not a new discipline, but it is still quite rare and definitely not a skill that is easy to develop in SOC-style environments or in internal Incident Handling teams. This is because it requires continuous exposure to a certain number of intrusions to develop the correct investigative mindset and enough experience in dealing with crisis situations.
Attackers just need to leverage a single vulnerability to gain access to a corporate environment and the footprint left behind could be pretty small. The challenge for Intrusion Forensics is to spot the single anomaly across a vast number of systems and technologies that could prove an intrusion attempt. Knowledge of what normal behaviour looks like for systems and networks helps a lot. However, the gap between knowing an IT infrastructure and being able to identify an intrusion and respond to it in a consistent way is quite substantial.
Cyber Incident Responders and Investigators are people specifically trained in Intrusion Forensics and able to apply this discipline to unauthorised access to corporate systems. These techniques can be applied to a wide range of scenarios, from non-targeted malware outbreaks to state-sponsored attacks involving lateral movements across different systems and networks.
The output from this methodology will generally help to define the number of compromised systems and the attack vector. From here, a well-defined set of containment and investigative actions can be implemented based on the type and sophistication of the attack.
It is important that the investigation activity feeds all of its findings back into the response actions as soon as they become available: this ensures that containment time falls within an acceptable window.
Classic Incident Management and Incident Handling are currently failing against Cyber Attacks as they base decisions and incident progression on feedback provided by IT personnel who are not prepared to deal with intruders. To successfully handle these kinds of adverse events, Cyber Incident Responders with Intrusion Forensics skills are required. This will help enterprises to deal with unauthorised access attempts in a consistent way, with the goal of containing incidents in a timely fashion whilst avoiding mistakes that could worsen the situation.
During a complex compromise, well-trained Incident Responders may be the only defense that remains, and the only personnel with the ability to contain the crisis situation.
Source: https://www.mwrinfosecurity.com/our-thinking/why-classic-incident-handling-fails/
To connect mobile health units in the hills of rural New England with broadband access, policymakers are looking up -- all the way to space. In the next few months, roaming trailers that serve rural communities and ships that double as health clinics for Maine’s outer islands will be equipped with the gear necessary to draw broadband Internet from a satellite powered by Hughes Network Systems. The initiative is being led by the New England Telehealth Consortium, a federally funded group of health care providers dedicated in part to improving rural health care access. The consortium’s efforts also focus on building new broadband infrastructure, but the mobile units that serve many of the hard-to-reach communities in the area would never be able to plug into the grid. Instead, they’ll transmit data through the Hughes Spaceway 3 broadband satellite, floating 22,300 miles above the Earth. It will allow those providers to use remote monitoring, electronic health records and more in a way that they never could with their current technology.
“People will sometimes say: ‘Well, those are just going to be unserved areas,” says Tony Bardo, vice president for government solutions at Hughes. “With satellites, there are no unserved areas. We can serve wherever you can see the southern sky.”
The 10-year, $500,000 project -- which got some start-up funding from the states -- will serve mobile units reaching more than 400 sites and 2.5 million patients in Maine, New Hampshire and Vermont. And it could be the first of many, as rural connectivity is continually integrated into the health care reform conversation. In fact, the FCC announced last week that it would be setting some of its $400 million in recently announced rural health funding for satellite projects specifically.
This story was originally published on GOVERNING.com
Source: http://www.govtech.com/health/Space-Satellite-Connects-Rural-Health-Clinics-to-Broadband.html
Program Provides High Schoolers Access to IT Careers
While some kids spent the past few months skiing and bowling in front of their Wiis or walking through the mall picking out the latest spring attire, more than 100 Chicago Public Schools students were preparing for an IT certification exam. On May 5 and 6, these students sat for a rigorous CompTIA exam, a move that will help them find gainful employment or give them valuable experience before moving on to higher education.
Roughly 650 students in six high schools participate in CompTIA’s Education to Careers (E2C). The program takes place as an elective during the regular school day and students can gain IT certifications. The program at Chicago Public Schools (CPS) engages students in two courses, A+ and Network+, teaching them the skills to pass the respective certification exams.
The program has been in place at CPS for about a decade. CompTIA began collaborating with the district several years ago to encourage new workers to enter IT. “CompTIA is interested in [getting new entrants into the field] for our corporate members who are looking for an ongoing pipeline [to] bring on board successful employees [for] the future,” explained Gretchen Koch, director of skills development programs at CompTIA.
“We work very actively with these communities and with these schools to [develop] an interest in young people in IT as a wonderful profession to pursue,” she said.
Charles Willard, CPS’ career cluster manager for IT, identified a number of benefits the program provides. First, it gives the students a meaningful skill. “As we know, computers are widely used across the board in many levels of industry,” he said. “[E2C] has afforded [students] the opportunity to step up into the world of employment at a higher level.”
Students acquire jobs while in high school with companies such as Best Buy and Circuit City, practicing the skills they learn in the classroom. For example, the students might work on diagnosing problems on PCs and then repairing them.
Not only does the program allow students to use the classroom-learned skills to find employment, it engages them in higher reasoning. “Eighty percent of resolving an [IT] issue is all mental diagnostics,” said Willard. “You’ve got to be able to walk mentally through the PC or the network and eliminate factors that might be causing the problem, and that takes higher reasoning.”
CPS perpetuates this engagement in higher reasoning through its articulation agreements with higher education institutions. “We do prepare them for a career if they so desire,” said Willard, “but at the same time, we have articulation agreements with the city colleges [and] universities, so that if a student wishes, they can go on to post-secondary school and enhance their skills.”
Students in the E2C program, as well as other similar programs, perform better academically than those who don’t get involved. “What we’ve found, and what research has found — in the career and technical education area across the U.S. and also in Illinois — is that our students tend to stay the course longer,” said Willard. “That is, they stay in school longer, they have better attendance and they have a higher graduation rate.”
The program gives students “opportunities they would not otherwise have,” he stated. And CompTIA has offerings in place to help ensure these opportunities are a bit easier to attain.
One accommodation the IT education provider makes is providing free vouchers for the teachers of E2C programs. The free vouchers “encourage the teachers to be certified themselves,” said Koch, “so they have a good indication of what it takes to pass the exam for their students.”
The company also offers discounted exam tickets to its members to help students pay for the exams. “[CompTIA] looked at the price of [its] certifications and saw that pricing could potentially be an impediment, particularly in publicly funded institutions,” said Koch. To help with this, “member institutions can purchase vouchers for their students at a significant discount for [CompTIA] certifications.”
Even with these features of the program, CPS recognizes it can still improve. One area in which CPS is pushing for improvement is community involvement, said Willard. “[CPS is trying] to get the business community around Chicago to embrace the Chicago Public Schools and give our students more opportunities for internships, paid and unpaid. We’re pushing towards the business world to reach back into the community, open up the doors and give our kids opportunities to really put their skills to work.”
“Internships can lead to full-time employment for these kids,” said Koch. “And internships give them real-life experience that they can add to their resumes.
“These children are so impressive; they are so smart,” she added. “They just love computers. This is the computer generation, [and] they’ve grown up with this stuff. These kids are really into it, and they’re really doing a good job.”
Source: http://certmag.com/program-provides-high-schoolers-access-to-it-careers/
While Congress continues to wrangle over the future of specific space agency programs - in particular the seemingly doomed Ares rocket - NASA continues to prepare for future operations by bulking up its commercial space, heavy-lift rocket and deep-space exploration plans.
The space agency in the past few weeks has issued requests for information on a new heavy lift rocket, advanced space exploration technologies that move beyond Low Earth Orbit and today, a call for more details on how commercial programs will advance space transportation needs.
NASA said it is currently in a "conceptual phase" of developing Commercial Crew Transportation (CCT) requirements that will define how commercial outfits will be able to transport NASA astronauts and cargo safely to and from LEO and the International Space Station.
NASA said it wants to collect information from the commercial space industry to help the space agency plan the strategy for the development and demonstration of a CCT capability and to receive comments on NASA human-rating technical requirements that have been drafted as part of this initiative, NASA stated.
That draft, called the Commercial Human-Rating Plan (CHRP) defines the allocation of responsibilities, requirements, mandatory standards, and process for achieving NASA human spaceflight certification for commercial crew transportation services.
NASA is now looking for more details from space industry players to determine issues such as: "What is the approximate dollar magnitude of the minimum NASA investment necessary to ensure the success of your company's CCT development and demonstration effort? What is the approximate government fiscal year phasing of this investment from award to completion of a crewed orbital flight demonstration?"
In February NASA awarded $50 million to five companies under the CCT program who could help design and build future spacecraft that could take astronauts to and from the International Space Station. The five companies and their awards were Blue Origin: $3.7 million; Boeing: $18 million; Paragon Space Development Corporation: $1.4 million; Sierra Nevada Corporation: $20 million; United Launch Alliance: $6.7 million.
The money is expected to be used toward the development of crew concepts and technology for future commercial support of human spaceflight and are designed to foster entrepreneurial activity leading to high-tech job growth in engineering, analysis, design and research, and to promote economic growth as capabilities for new markets are created, NASA said.
In another future planning development, NASA last week said it defined six targeted technologies of the future via its Flagship Technology Demonstration effort. Such Flagship technologies could be developed at costs ranging from $400 million to $1 billion.
The key technologies from the NASA request included:
- Advanced Solar Electric Propulsion: This will involve concepts for advanced high-energy, in-space propulsion systems which will serve to demonstrate building blocks to even higher energy systems to support deep-space human exploration and eventually reduce travel time between Earth's orbit and future destinations for human activity.
- In-Orbit Propellant Transfer and Storage: The capability to transfer and store propellant-particularly cryogenic propellants-in orbit can significantly increase the Nation's ability to conduct complex and extended exploration missions beyond Earth's orbit. It could also potentially be used to extend the lifetime of future government and commercial spacecraft in Earth orbit.
- Lightweight/Inflatable Modules: Inflatable modules can be larger, lighter, and potentially less expensive for future use than the rigid modules currently used by the ISS. NASA said it will pursue a demonstration of lightweight/inflatable modules for eventual in-space habitation, transportation, or even surface habitation needs.
- Aerocapture, and/or entry, descent and landing (EDL) technology: This involves the development and demonstration of systems technologies for: precision landing of payloads on "high-g" and "low-g" planetary bodies; returning humans or collected samples to Earth; and enabling orbital insertion in various atmospheric conditions.
- Automated/Autonomous Rendezvous and Docking: The ability of two spacecraft to rendezvous, operating independently from human controllers and without other back-up, requires advances in sensors, software, and real-time on-orbit positioning and flight control, among other challenges. This technology is critical to the ultimate success of capabilities such as in-orbit propellant storage and refueling, and complex operations in assembling mission components for challenging destinations.
- Closed-loop life support system demonstration at the ISS: This would validate the feasibility of human survival beyond Earth based on recycled materials with minimal logistics supply.
The third major planning effort announced by NASA happened earlier this month when NASA began its search for a next-generation rocket capable of taking equipment and humans into space.
NASA said its procurement activities are intended to find affordable options for a heavy-lift vehicle that could be achieved earlier than 2015 - the earliest date that the currently envisioned heavy-lift system could begin work.
In his April speech outlining NASA's future, President Obama said there would be $3.1 billion for the development of a new heavy lift rocket to fly manned and unmanned spaceflights into deep space. Obama said he wanted this technologically advanced rocket to be designed and ready to build by 2015.
With that goal in mind, NASA sent out a Request for Information that will begin what has in the past been a long process to build a "new US developed chemical propulsion engine for a multi-use Heavy Launch Vehicle." NASA said it was looking for a "demonstration of in-space chemical propulsion capabilities; and significant advancement in space launch propulsion technologies. The ultimate objective is to develop chemical propulsion technologies to support a more affordable and robust space transportation industry including human space exploration."
The space agency said it will look for features that will reduce launch systems manufacturing, production, and operating costs.
As part of the RFI announcement, NASA said it will initiate development and flight testing of in-space engines. Areas of focus will include low-cost liquid oxygen/methane and liquid oxygen/liquid hydrogen engines and will perform research in chemical propulsion technologies in areas such as new or largely untested propellants, advanced propulsion materials and manufacturing techniques, combustion processes, and engine health monitoring and safety.
NASA said the new heavy lift system should help the US explore multiple potential destinations, including the Moon, asteroids, Lagrange points, and Mars and its environs in the most cost effective and safe manner. At the same time, NASA desires to develop liquid chemical propulsion technologies to support a more affordable and robust space transportation industry.
NASA said its approach will strengthen America's space industry, and could provide a catalyst for future business ventures to capitalize on affordable access to space, NASA said.
The moves come as NASA has all but shut down its Constellation development program -- the space agency cancelled the Ares V RFP in March -- in the face of budget constraints and the direction the current administration wants it to go.
Follow Michael Cooney on Twitter: nwwlayer8
Source: http://www.networkworld.com/article/2230819/security/nasa-preps-advanced-technology-for-the-future--now.html
The European Commission is drafting new cybersecurity requirements to beef up security around so-called Internet of Things (IoT) devices such as Web-connected security cameras, routers and digital video recorders (DVRs). News of the expected proposal comes as security firms are warning that a great many IoT devices are equipped with little or no security protections.
According to a report at Euractiv.com, the Commission is planning the new IoT rules as part of a new plan to overhaul the European Union’s telecommunications laws. “The Commission would encourage companies to come up with a labeling system for internet-connected devices that are approved and secure,” wrote Catherine Stupp. “The EU labelling system that rates appliances based on how much energy they consume could be a template for the cybersecurity ratings.”
In last week’s piece, “Who Makes the IoT Things Under Attack?,” I looked at which companies are responsible for IoT products being sought out by Mirai — malware that scans the Internet for devices running default usernames and passwords and then forces vulnerable devices to participate in extremely powerful attacks designed to knock Web sites offline.
One of those default passwords — username: root and password: xc3511 — is in a broad array of white-labeled DVR and IP camera electronics boards made by a Chinese company called XiongMai Technologies. These components are sold downstream to vendors who then use it in their own products.
That information comes in an analysis published this week by Flashpoint Intel, whose security analysts discovered that the Web-based administration page for devices made by this Chinese company (http://ipaddress/Login.htm) can be trivially bypassed without even supplying a username or password, just by navigating to a page called “DVR.htm” prior to login.
Worse still, even if owners of these IoT devices change the default credentials via the device’s Web interface, those machines can still be reached over the Internet via communications services called “Telnet” and “SSH.” These are command-line, text-based interfaces that are typically accessed via a command prompt (e.g., in Microsoft Windows, a user could click Start, and in the search box type “cmd.exe” to launch a command prompt, and then type “telnet” to reach a username and password prompt at the target host).
“The issue with these particular devices is that a user cannot feasibly change this password,” said Flashpoint’s Zach Wikholm. “The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist.”
Flashpoint’s researchers said they scanned the Internet on Oct. 6 for systems that showed signs of running the vulnerable hardware, and found more than 515,000 of them were vulnerable to the flaws they discovered.
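A simple defensive step for owners is to check whether a camera or DVR on their own network still answers on those services. The sketch below (the IP address is a placeholder for a device you own) merely tests whether the Telnet and SSH ports accept connections:

import socket

DEVICE_IP = "192.168.1.108"   # placeholder: the address of your own DVR or IP camera

for port, service in [(23, "Telnet"), (22, "SSH")]:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        is_open = s.connect_ex((DEVICE_IP, port)) == 0
        print("%s (port %d): %s" % (service, port,
              "OPEN - consider blocking it at the router" if is_open else "closed"))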
Flashpoint says the majority of media coverage surrounding the Mirai attacks on KrebsOnSecurity and other targets has outed products made by Chinese hi-tech vendor Dahua as a primary source of compromised devices. Indeed, Dahua’s products were heavily represented in the analysis I published last week.
For its part, Dahua appears to be downplaying the problem. On Thursday, Dahua published a carefully-worded statement that took issue with a Wall Street Journal story about the role of Dahua’s products in the Mirai botnet attacks.
“To clarify, Dahua Technology has maintained a B2B business model and sells its products through the channel,” the company said. “Currently in the North America market, we don’t sell our products directly to consumers and businesses through [our] website or retailers like Amazon. Amazon is not an approved Dahua distributor and we proactively conduct research to identify and take action against the unauthorized sale of our products. A list of authorized distributors is available here.” Continue reading →
Source: https://krebsonsecurity.com/tag/european-commission/
Enterprise organizations already struggle with the mass of data they need to manage and everyone knows that – “big data” buzzwords notwithstanding – it’s only going to get worse. As it turns out, a physics analogy may help you visualize the data problem and approach better solutions.
Some large enterprise organizations are caught off guard by the pace at which data is being generated internally and externally. Without proper planning, IT teams can find themselves painted into a corner with limited options. They need to get a handle on what can be done with growing data sets, as well as to deal with the severe limitations on using and deploying new and existing applications that rely on the data being in close proximity.
Those collections of ever-larger sets of data build mass. The greater the mass of data, the greater the pull this data has on a host of network resources.
If words like “mass” and “pull” make you think of physics, you aren’t alone. Data gravity, a term coined by Dave McCrory, compares the pull that all objects have on each other in our physical universe to the forces exerted by data sets within the digital universe of the global enterprise. Much as the Big Bang Theory proposes a universe in which all matter was drawn together, then exploded, the data gravity analogy applies some of the same properties to the behavior observed by the consolidation of the traditional data repositories and workloads.
It’s not just an intellectual conversation-starter. Those who are considering data gravity as an important element in IT planning – and we think you should be among them – see it as a way to address ongoing challenges in enterprise computing.
For example, one result of intense data growth is that it becomes harder and harder to move that data. LAN and WAN bandwidth become barriers to migrating entire data sets and they restrict how the data is accessed, manipulated, and analyzed. IT professionals spend significant time and effort re-designing their distributed architecture applications to handle the sheer size of the data. They have to, to save on bandwidth and to provision additional storage resources in the same geographic area.
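A quick back-of-the-envelope calculation (the bandwidth figure and data-set sizes below are illustrative assumptions, not numbers from this article) shows why moving a massive data set stops being practical:

# How long does it take to move a data set over a WAN link?
def transfer_days(dataset_tb, link_mbps, efficiency=0.7):
    bits = dataset_tb * 1e12 * 8
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

for tb in (10, 100, 500):
    print("%4d TB over a 1 Gbps WAN: about %.1f days" % (tb, transfer_days(tb, 1000)))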
Data analytics is also pulled close to the data, to maximize performance or simply to save bandwidth utilization. At the very least, the application may be moved closer to the data to ensure that the prescribed performance metrics are met. We can observe the pull of data gravity as applications are forced to move physically closer to the data in order to maximize network resource efficiency, or simply to meet the basic service level requirements needed for an application.
The most striking aspect of data gravity with respect to the data's value is that, as larger sets of data are collected in a single location, there is a greater likelihood that additional applications will be attracted to that data because of the potential value gained from analyzing it. This can be described as the pull of the data.
This powerful pull is what has started the Edge to Core initiatives most large enterprises are now performing, wherein they are moving data and infrastructure out of local sites and into regional data centers. Doing so permits central data management as well as the benefits of data deduplication. (Most data isn’t unique to a single user; with global deduplication the storage costs can be drastically reduced.) The similarities between this phenomenon and the properties of gravity are noteworthy. Just as the mass or density increases within a physical item, so does the strength of gravitational pull.
Build a Data Gravity Profile Before Starting Your Next Project
Regardless of how large the next application may be, it’s always best to consider the long-term effects of your user population growth as well as the resources the application will consume. If the data grows, how long will it be before it cannot be moved easily? At that point, what effect will its data gravity have on other application resources? How important will that data become to other services that may be located further away? What issues will arise if the data cannot be moved?
Choosing the right software, architecture, storage, and vendors becomes critical when you considering any company-wide solution, no matter how small it may seem in the beginning. The design decisions you make early on have far-reaching effects on your ability to manipulate, move, or analyze data, and thus to harness the information in a way that results in timely and relevant business intelligence. Since company value is the ultimate goal, careful planning with data gravity in mind can be a critical step in all your projects moving forward.
|
<urn:uuid:68c09eb9-1243-409e-a2bb-70a7556470f0>
|
CC-MAIN-2017-09
|
https://www.druva.com/blog/data-gravity-charting-future-organizations-critical-information/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00305-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.942292 | 937 | 2.8125 | 3 |
Traffic shaping, also known as packet shaping, Quality of Service (QoS) or bandwidth management, is the manipulation and prioritization of network traffic to reduce the impact of heavy users or machines on other users. This bandwidth throttling or rate limiting is performed to guarantee QoS and return on investment (ROI) via the efficient use of bandwidth.
Specifically, traffic shaping is achieved by delaying the flow of certain packets and prioritizing the flow of other preferred streams according to predetermined sets of constraints. The benefits of traffic shaping include the prioritization of business-critical over non-critical traffic and the creation of tiered service levels.
Traffic shaping is required both at the network level (for example IP or TCP/UDP port level) and the application level (often referred to as deep packet inspection (DPI) or layer 7 (L7)). Further advances in traffic shaping allow prioritization of traffic by user ID, in addition to IP, host name or application.
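At its core, the rate limiting described above is often implemented with a token-bucket style algorithm: a flow may transmit only when it has accumulated enough tokens (bandwidth credit), which caps its average rate while still allowing bounded bursts. The following is a minimal, generic sketch in Python; the class and parameter names are illustrative assumptions, not taken from any particular traffic-shaping product.

import time

class TokenBucket:
    # Minimal token-bucket shaper: allows traffic up to rate_bytes_per_sec,
    # with bursts bounded by capacity_bytes.
    def __init__(self, rate_bytes_per_sec, capacity_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes          # start with a full bucket
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last_refill
        # Accumulate tokens at the configured rate, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now

    def try_send(self, packet_size_bytes):
        # Return True if the packet may be sent now, False if it should be
        # delayed or queued at a lower priority.
        self._refill()
        if self.tokens >= packet_size_bytes:
            self.tokens -= packet_size_bytes
            return True
        return False

# Example: shape a flow to roughly 1 MB/s with bursts of at most 64 KB.
shaper = TokenBucket(rate_bytes_per_sec=1_000_000, capacity_bytes=64_000)
print(shaper.try_send(1500))   # a full-size Ethernet frame fits -> True

In a real shaper the same idea is applied per class of traffic, so business-critical flows draw from a larger or faster-filling bucket than non-critical ones.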
|
<urn:uuid:f78859a0-0d1b-4730-9523-afe8c7817f26>
|
CC-MAIN-2017-09
|
https://www.a10networks.com/resources/glossary/traffic-shaping
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00533-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.931939 | 199 | 2.96875 | 3 |
Many people use the Internet as the first port of call for current news and background information, and videos are very popular with users when doing so. If an email recipient clicks on the link contained in the email, he will be taken to a primed site containing five different YouTube videos.
But besides the five films, the perpetrators have incorporated a Java applet on the website that has been primed to exploit a specific Java vulnerability on visitors' computers. If the Java version installed on the computer is older than version 7 update 11, blackmail malware is installed on the computer with the aid of an exploit, and the infected PC is then exploited to send more email.
In a second variant, the perpetrators also steal passwords that have been stored in the Firefox browser, e.g. for online shops, email inboxes or social networks, and read all unencrypted network traffic. This enables the criminals to spy closely on users.
Spam email with alleged video of the Boston bombing
G Data security tips for recipients of the spam emails
- Delete without opening: Spam email received should be deleted without being read. Email attachments or links in messages should not be clicked on for security reasons.
- Install security software: Users should install an effective security solution that includes virus protection, a spam filter, HTTP filter and real-time protection.
- Install updates: Users should always install all available patches and updates for the installed operating system and programs, to keep the PC fully up to date at all times.
|
<urn:uuid:b1176138-d005-4d45-b304-2fbed95d5d10>
|
CC-MAIN-2017-09
|
https://www.gdata-software.com/news/3177-boston-marathon-bombing-being/page/82
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00533-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.925882 | 299 | 2.53125 | 3 |
The security of critical infrastructure in the electricity sector is complex. Electricity assets are concentrated in small areas and distributed over large geographical expanses. They are manned and unmanned, involve dangerous equipment that citizens must be protected from, and they provide a resource to the public that enables the quality of life we enjoy today.
Protection of these assets requires security professionals to use every tool in the toolbox. Security managers have to consider protecting physical property, cyber assets, employees, and the public. Priorities must be established that respect the needs of the public and the organization being protected.
Any protection program that is developed must be as efficient and cost-effective as possible, as budgets are limited and ratepayers are sensitive to wasteful spending. Effective security programs rely on risk management principles and associated tools to establish priorities, allocate budget dollars, and harden infrastructure sites.
Physical security protection encompasses defensive mechanisms to prevent, deter, and detect physical threats of various kinds. Specifically, these measures are undertaken to protect personnel, equipment, and property against anticipated threats. Properly conceived and implemented security policies, programs and technologies are essential to ensure a facility’s resistance to numerous threats while meeting demand, reliability, and performance objectives.
Security plans should be developed based on solid security principles, practical security assessments, and known threat data. To create actionable security plans and procedures, we must first understand some very basic security principles. All too often, simple definitions are used interchangeably. This leads to confusion and faulty assumptions. Understanding the definitions listed below will help start to build a comprehensive security program.
Threat - Actions, circumstances, or events that may cause harm, loss, or damage to your organization’s personnel, assets, or operations.
Risk - The combination of impact and likelihood for harm, loss, or damage to your organization from exposure to threats.
Vulnerability - Weaknesses and gaps in a security program or protection efforts that can be exploited by threats.
Resilience - The ability to prepare for and adapt to changing conditions, and withstand and recover rapidly from disruptions. Resilience includes the ability to withstand and recover from deliberate attacks, accidents, or naturally occurring threats or incidents.
Risk management - An analytical process that considers the operational context of the organization and the risk of unwanted events that might impact personnel, operations, and assets, with the aim of developing strategies that reduce risk by reducing the likelihood and impact of these events.
Once risks to a facility are accurately assessed, security professionals can determine whether countermeasures currently in place are adequate to mitigate those risks or if additional procedural, programmatic, or physical security countermeasures should be implemented. Any process used for identifying these risks should:
- Identify those threats which could affect personnel, assets, or operations
- Determine the organization’s vulnerability to those threats
- Identify the likelihood and impact of the threats
- Prioritize risks
- Identify methods and strategies to reduce the likelihood and impact of the risks
There are three general strategies for dealing with risk:
- Accept the risk – choose to accept the risk, and budget for the consequences that are likely to flow from that decision
- Avoid the risk – choose not to undertake the risky activity
- Reduce the risk – design controls to reduce the likelihood or the impact of the risk.
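To make the earlier steps of identifying likelihood and impact and prioritizing risks concrete, many programs simply score each risk as likelihood multiplied by impact and sort the register. The short Python sketch below illustrates the idea; the example threats and the 1-5 scales are hypothetical and are not drawn from the E-ISAC DBT or any specific assessment methodology.

# Minimal risk-register scoring: risk = likelihood x impact, both on a 1-5 scale.
risks = [
    # (threat description,                    likelihood, impact)
    ("Copper theft at unmanned substation",        4, 2),
    ("Vehicle-borne attack on control house",      1, 5),
    ("Insider sabotage of protective relays",      2, 5),
    ("Vandalism of perimeter fencing",             5, 1),
]

def score(entry):
    _, likelihood, impact = entry
    return likelihood * impact

# Highest-scoring risks are candidates for "reduce"; the lowest may simply be accepted.
for threat, likelihood, impact in sorted(risks, key=score, reverse=True):
    print(f"{likelihood * impact:>2}  {threat}")

Real assessments weight these scores with asset criticality and countermeasure effectiveness, but the ranking logic is the same.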
As you assess risk, a useful tool is a Design Basis Threat (DBT) which describes the threats that an asset should be protected from. Often used in the nuclear power industry, a DBT is typically a description of the motivation, intentions and capabilities of potential adversaries. A DBT is derived from credible intelligence information and other classified and non-classified data concerning realistic threats.
A DBT for the electricity sector has recently been completed by the NERC Electricity Information Sharing and Analysis Center’s (E-ISAC) Physical Security Advisory Group, with the assistance of the US Department of Energy. It is available on the E-ISAC member web portal and NERC members are encouraged to consult the DBT as part of their security planning process. It is not intended to cover all facility-specific threats that may need to be considered, but it does provide a starting point for threats rooted in past attack examples in North America.
A threat and vulnerability assessment done by professionals and a DBT are simply tools designed to help you determine security gaps, assess the importance of fixing those gaps, and identifying mitigation measures. The outputs of using these tools will directly feed your physical security plan. Your risk assessment results should arm you with the information required to make sound decisions based on real risks to an organization's assets and operations.
|
<urn:uuid:a45d7078-51b7-4463-843d-bafae0c49d60>
|
CC-MAIN-2017-09
|
http://www.csoonline.com/article/3048348/security/at-the-intersection-of-energy-risk-management-and-facility-security.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00477-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.934356 | 985 | 3.140625 | 3 |
Minimizing the Risk of Business
By Larry Dignan | Posted 2005-08-31
The UPS Brown Voyager? It could happen if private companies take over low-earth space travel and free up NASA to shoot for the stars.
But tourism can only go so far, Meyers says. Space Island Group plans to use the insides of the current shuttle's external fuel tanks as facilities that will be leased for research, tourism, and even sponsored launches and sporting events. Using solar space sails developed by NASA, the company hopes to build power stations that would beam energy to Earth via a weak microwave signal. Meyers is pitching officials in China, India and California on space power to fund his shuttle development.
The building block for all of those business cases, however, is cheap launches, Edwards says. Forecast International estimates that a rocket launch can cost from $25 million to $150 million, depending on payload.
What could make a better business case? Less expensive rockets. Space Exploration Technologies, based in El Segundo, Calif., is developing a rocket that would launch for $6 million. "If that works, it would open space up quite a bit," Edwards says.
Solution: Communicate the risks of space exploration and embrace them.
Perhaps the most daunting challenge facing NASA and commercial providers is the basic risk of flying into space. Will travel outside the atmosphere ever be completely safe? Should it be? How many deaths can be allowed?
With little margin for error, techies in a space operation have to unfailingly put the right information and analysis in front of the people who can act on it.
Changing the Fate of Those in Space
Elon Musk, CEO of Space Exploration, says the risks need to be communicated clearly, and the public then will know enough to accept or reject space travel.
"NASA overstated the safety [aspect], and now space travel is held up to an unreasonable standard," Musk says. "Space travel is dangerous and as long as we accept that risk, we shouldn't be overly concerned about it."
Meyers notes that handing off low-orbit space to the private sector would allow more risk-taking—and possibly more technology breakthroughs. As for selling risk, Meyers looks to NASCAR for inspiration: "With NASCAR, you know it's dangerous and you know something can go wrong. And a lot of people are attracted to that."
|
<urn:uuid:a13fda3b-cd2d-417b-89fb-f0d6a8785e64>
|
CC-MAIN-2017-09
|
http://www.baselinemag.com/c/a/Business-Intelligence/Should-NASA-Open-LowOrbit-Space-to-Business/2
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00177-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.95458 | 491 | 2.515625 | 3 |
The use of fiber optic cables for communication has opened the door to multiplexing technologies that maximize capacity at minimum cost. CWDM (coarse wavelength division multiplexing) modulates multiple signals onto lasers of different wavelengths. In effect, what this means is maximized use of a single optical fiber to deliver and receive many signals, minimizing costs for telecom companies. Companies simply use the best optical amplifiers, multiplexers and demultiplexers to boost the capacity of the fiber using CWDM technology.
Related technologies are DWDM (dense wavelength division multiplexing) and conventional WDM. Conventional WDM makes use of the third transmission window, at a wavelength of 1550 nm, and accommodates as many as 8 channels. DWDM is similar but with higher channel density: systems can use 40 channels at 100 GHz spacing, or 80 channels spaced 50 GHz apart. Ultra-dense WDM, a newer technology, is capable of working at a spacing of just 12.5 GHz, allowing even more channels. DWDM and WDM are expensive in contrast to CWDM.
In CWDM technology the channel spacing is increased, which means less sophisticated and cheaper transceiver devices can be used. Operating in the same window around 1550 nm and using OH-free silica fibers, maximum efficiencies are achieved in channels 31, 49, 51, 53, 55, 57, 59 and 61. The channels are spaced 20 nm apart, whereas DWDM spaces them 0.4 nm apart. Less precise optics, which minimizes cost, and uncooled lasers with lower maintenance requirements can therefore be used in CWDM devices, operating in the region of 1470, 1490, 1510, 1530, 1550, 1570, 1590 and 1610 nm. In total, 18 different channels can be used, with wavelengths starting as low as 1270 nm. In addition to being cost effective, power consumption for the laser devices used in CWDM is also far lower. CWDM signals cannot be transmitted over long distances, but they are ideal for applications within a range of about 60 km, for example inside a city, as well as for CATV (cable television) networks carrying upstream and downstream signals.
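As a rough, back-of-the-envelope illustration of the channel plans above, the Python snippet below generates an 18-channel, 20 nm-spaced CWDM grid and converts DWDM frequency spacing into approximate wavelength spacing near 1550 nm. It is a sketch for intuition only; the authoritative grids are defined in the ITU-T G.694 recommendations.

C = 2.998e8  # speed of light, m/s

# 18-channel CWDM grid: 1270 nm to 1610 nm in 20 nm steps.
cwdm_channels = [1270 + 20 * i for i in range(18)]
print("CWDM channels (nm):", cwdm_channels)

# Approximate wavelength spacing of a DWDM grid near 1550 nm.
def ghz_to_nm(spacing_ghz, center_nm=1550.0):
    center_m = center_nm * 1e-9
    return (center_m ** 2) * (spacing_ghz * 1e9) / C * 1e9  # convert back to nm

print("100 GHz DWDM spacing ~ %.2f nm" % ghz_to_nm(100))  # about 0.8 nm
print(" 50 GHz DWDM spacing ~ %.2f nm" % ghz_to_nm(50))   # about 0.4 nm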
A number of manufacturers offer the related CWDM multiplexers, demultiplexers and optical amplifiers. Networking solution providers are the right people to ask for guidance on the use of CWDM, DWDM or WDM technology. They carry out the entire installation and commissioning of the right, integrated devices for error-free, high-speed data transmission over fiber optic lines. Cost- and performance-optimized CWDM solutions with built-in expansion capabilities are available from reliable and trusted online network solution companies.
Fiberstore could be the right choice, with the experience and technological expertise to provide the best CWDM solution. We are experienced in fiber optic network products. Our CWDM multiplexers and CWDM transceivers come with the best warranty and are very competitively priced, as well as being of the highest quality. So you can buy our products with confidence.
|
<urn:uuid:67b10ac0-f281-425a-91e5-d77fe676927d>
|
CC-MAIN-2017-09
|
http://www.fs.com/blog/buy-cost-effective-cwdm-fiber-optic-products.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00177-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.930579 | 635 | 3.359375 | 3 |
DOE appeal: Breaking exaflop barrier will require more funding
- By Frank Konkel
- Jul 17, 2013
The Cielo supercomputer at Los Alamos National Laboratory, built by Cray, has a theoretical maximum performance of 1.37 petaflops. (LANL photo)
Department of Energy-funded supercomputers were the first to crack the teraflop- (1997) and petaflop (2008) barriers, but the United States is not likely to be the first nation to break the exaflop barrier without significant increases in DOE funding.
That projection is underscored by China's 55-petaflop Milky Way 2, which has achieved speeds double those of DOE's 27-petaflop Oak Ridge National Laboratory-based Titan, and which took the title of world's fastest supercomputer in June.
China is rapidly stockpiling cash for its supercomputing efforts, while Japan recently invested $1 billion into building an exascale supercomputer – both countries hope to build one by 2020 – and the European Union, Russia and a handful of large private sector companies are all in the mix as well.
DOE's stated goal has also been to develop an exascale supercomputing system – one capable of a quintillion, or 1,000,000,000,000,000,000 floating point operations per second (FLOPS) – by 2020, but developing the technology to make good on that goal would take at least an additional $400 million in funding per year, said Rick Stevens, associate laboratory director at Argonne National Lab.
"At that funding level, we think it's feasible, not guaranteed, but feasible, to deploy a system by 2020," Stevens said, testifying before the House Science, Space and Technology subcommittee on Energy on May 22. He also said that current funding levels wouldn't allow the United States to hit the exascale barrier until around 2025, adding: "Of course, we made those estimates a few years ago when we had more runway than we have now."
DOE's Office of Science requested more than $450 million for its Advanced Scientific Computing Research program in its Fiscal 2014 budget request, while DOE's National Nuclear Security Administration asked for another $400 million for its Advanced Simulation and Computing Campaign. That's a lot of money at a time when sequestration dominates headlines and the government is pinching pennies.
Subcommittee members weighed in on the matter, stressing the importance of supercomputing advancements but with a realistic budgetary sense. Chairman Cynthia Lummis (R-Wyo.) said the government must ensure DOE "efforts to develop an exascale system can be undertaken in concert with other foundational advanced scientific computing activities."
"As we head down this inevitable path to exascale computing, it is important we take time to plan and budget thoroughly to ensure a balanced approach that ensures broad buy-in from the scientific computing community," Lummis said. "The federal government has limited resources and taxpayer funding must be spent on the most impactful projects."
An exascale supercomputer would be 1,000 times more powerful than the IBM Roadrunner, which was the world's fastest supercomputer in 2008. Developed at the Los Alamos National Laboratory with $120 million in DOE funding, it was the first petaflop-scale computer, handling a quadrillion floating point operations per second. Yet in just five years, it was rendered obsolete by hundreds of faster supercomputers and powered down, an example of how quickly supercomputing is changing. Supercomputers are getting faster and handling more expansive projects, often in parallel.
Supercomputers through time: some highlights of the history of supercomputing.
The U.S. Postal Service, for instance, uses its mammoth Minnesota-based supercomputer and its 16 terabytes of in-memory computing to compare 6,100 processed pieces of mail per second against a database of 400 billion records in around 50 milliseconds.
Today's supercomputers are exponentially faster than famous forefathers in the 1990s and 2000s.
IBM's Deep Blue, which defeated world champion chess player Garry Kasparov in a six-game match in 1997, was one of the 300 fastest supercomputers in the world at the time. At 11.38 gigaflops, Deep Blue calculated 200 million chess moves per second, yet it was 1 million times slower than the now-retired Roadrunner, which DOE's National Nuclear Security Administration used to model the decay of America's nuclear arsenal. Of vital importance to national security before it was decommissioned, Roadrunner essentially predicted whether nuclear weapons – some made decades ago – were operational, allowing a better grasp of the country's nuclear capabilities.
Titan, which operates at a theoretical peak speed of 27 petaflops and is thus 27 times faster than Roadrunner, has been used to simulate complex climate models and simulate nuclear reactions. However, even at its blazing speed, Titan falls well short of completing tasks like simulating whole-Earth climate and weather models with precision. Computer scientists believe, though, that an exascale supercomputer might be able to do it. Such a computer, dissecting enough information, might be able to predict a major weather event like Hurricane Sandy long before it takes full form.
Yet reaching exascale capabilities will not be easy for any country or organization, even those that are well funded, due to a slew of technological challenges that have not yet been solved, including how to power such a system. Using today's CPU technology, powering and cooling an exascale supercomputing system would take 2 gigawatts of power, according to various media reports. That is roughly equivalent to the maximum power output of the Hoover Dam.
Frank Konkel is a former staff writer for FCW.
|
<urn:uuid:1eda161c-b2c9-4edc-a2d1-078e35603423>
|
CC-MAIN-2017-09
|
https://fcw.com/articles/2013/07/17/exoflop-supercomputing.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00529-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.947302 | 1,209 | 2.578125 | 3 |
What is Google Apps and how does it work?
Google Apps is a suite of cloud-based productivity applications that you can access from any computer with an Internet connection, at any time. The data and the applications themselves are run from Google's data centers. Several versions of Google Apps are available: free, standard, education, non-profits and government (see below for more information).
Which tools are included in Google Apps and what do they do?
The Google Apps Suite includes six tools.
-Gmail is Google's e-mail platform, and gives you 25GB of storage. (For more, read: Google Docs Quick Tip: Uploads and Storage Basics.)
-Google Calendar, an agenda management tool, lets you schedule and share online calendars, and syncs to your mobile device.
-Google Docs is a group of tools that lets you create documents, spreadsheets, drawings and presentations.
-Google Groups are user-created groups where you can generate and manage mailing lists, enable content sharing and create searchable archives.
-Google Sites is a tool that creates webpages for intranets and team-managed sites. These webpages are secure and coding-free.
-Google Video is a private, secure hosted video sharing tool.
How is Google Apps different from Gmail tools?
If you have a free Gmail account, you're probably familiar with many of the tools listed above. The Google Apps Premier Edition, however, has a few differences.
1. Because Google Apps is for your business, you will sign up with your own company domain instead of a standard @gmail.com address. There is an IT administrative control panel built into Google Apps for your business where an IT admin can manage and control the user accounts across your domain.
2. A few business-oriented features are available only to Premier Edition customers. These include increased access controls so your company is in charge of how your information is shared outside of your domain; increased Gmail thresholds and improved mailing list capabilities; higher storage quotas; scheduling of conference rooms and other shared resources in your office through Google Calendar; and the ability to share internal videos with your company.
Which Google Apps edition is right for me?
Google Apps Premier Edition is the version intended for businesses. With the Premier Edition, businesses have:
-A 30-day free trial period
-A 99.9% uptime guarantee as a service level agreement
-Enhanced technical support through expedited e-mail and 24/7 phone assistance for critical issues
-Advanced spam and virus filtering beyond what you see in standard Gmail filtering
-Outgoing message policy enforcement
-Message archiving and compliance
-Integration APIs so you can integrate Google Apps with your existing IT environment
-Professional service partners for deployments and custom solutions.
Four other versions of Google Apps are specifically designed to meet the needs of specific groups. These include: Google Apps for Education; Google Apps Standard Edition, intended for small groups such as clubs, families or sports teams; Google Apps for Non-profits; and Google Apps for Government.
What do I need to know about privacy and security?
A few key points about Google Security:
-Google does not own your data, you do.
-Google employees will only access your data when an administrator grants permission for troubleshooting purposes.
-Your data will be stored in Google's network of data centers. Only authorized select Google employees have access to these data centers.
-Google keeps your data for as long as you want, and it deletes the data when you ask.
For more information on Google Apps security and privacy, click here.
Will I save money by switching to Google Apps?
If your business is currently running on Microsoft Exchange 2007, you can determine if your business will save money by switching to Google Apps by visiting this Google site.
How can I try out Google Apps?
You can sign up for a free, 30-day trial of Google Apps here.
How much does it cost?
Google Apps Premier Edition costs $50 per user account, per year. (Note that one user account is considered to be one e-mail inbox, not one domain.) Google does not offer discounts on volume or bulk purchases. If your business requires additional archiving, you can add Google Message Discovery to your Google Apps Premier Edition account after signing up. This costs $14 per user account, per year for 1 year of archiving; or, $33 per user account per year for up to 10 years of archiving.
What is the Google Apps Marketplace?
The Google Apps Marketplace offers thousands of products and services—such as installable apps that integrate with Google Apps—for Google users. There's a mix of both paid and free applications from which to choose.
|
<urn:uuid:62ca9c08-df44-461c-a650-68a6c0127d8a>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2418367/cloud-computing/google-apps-faq.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00169-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.90705 | 1,039 | 2.546875 | 3 |
GPS systems quickly went from a luxury item to a necessity to a ubiquitous contributor to the geographic ignorance of Americans, many of whom rely on their smartphones' ability to give directions to every place they go to keep from having to learn the way themselves.
That's a problem when GPS is suddenly unavailable. When the battery dies. When you drive under a bridge and the GPS thinks you jumped to the highway above or below. When you go into a tunnel and disappear entirely.
Two biology researchers from Oxford University are trying to remedy that, using GPS technology they adapted to keep track of badgers in suburban wilderness outside the city of Oxford.
Andrew Markham and Niki Trigoni, both post-doctoral researchers and instructors at Oxford's Department of Computer Science drew quite a lot of attention to themselves and to Oxford's suburban badger population with a system designed to monitor what the badgers were up to when they were underground.
Badgers forage and do most other things above ground by themselves, for the most part, but evidently have a rich communal social life underground.
Putting a camera in the burrows would provide a picture of one room, but not a macro picture of what was going on elsewhere.
"It is quite challenging to identify badgers when they are underground," Markham told the BBC last year.
Rather than string cameras throughout the burrows, or string GPS antennas, the research team planted a series of antennas that would project magnetic fields of varying intensity to cover the whole area of the burrows.
Individual badgers got special collars with sensors capable of detecting the fields, tracking their intensity and recording it.
When each badger came aboveground the radio in its collar sync'd with servers attached to the antenna network, giving researchers detailed information about where the badger had been during its time out of sight.
Because they used very low-frequency magnetic fields, the network Markham and Trigoni built was able to penetrate far deeper underground than radio waves – the medium on which GPS depends.
The two found the data they gathered showed not only good badger-tracking capabilities, but also the ability to identify a spot in three dimensions without having to receive signals from three points to calculate location by triangulation.
The changing patterns of magnetic fields created a unique signature at each point near the transmitter.
Low-frequency magnetic fields penetrated the ground and other solid objects more deeply than radio could have and provided a good depth metric via predictable changes in field intensity.
The result was the ability to "triangulate" the position of a sensor using only one antenna and no triangulation at all.
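As a rough illustration of how field intensity can stand in for distance, the toy Python sketch below inverts the far-field fall-off of a magnetic dipole, whose intensity drops roughly with the cube of distance, to estimate range from a single transmitter. This is a simplification of our own for intuition only, not the researchers' actual signal processing, and it ignores orientation, noise and near-field effects.

# Toy range estimate from magnetic field strength, assuming a dipole-like
# 1/r^3 fall-off: B(r) = k / r**3  =>  r = (k / B) ** (1/3)

K = 1.0e-3   # hypothetical source constant (field strength at 1 m), arbitrary units

def field_at(distance_m):
    return K / distance_m ** 3

def estimate_distance(measured_field):
    return (K / measured_field) ** (1.0 / 3.0)

# Simulate a sensor 2.5 m from the antenna and recover its range from the reading.
true_distance = 2.5
reading = field_at(true_distance)
print("measured field: %.3e, estimated range: %.2f m" % (reading, estimate_distance(reading)))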
"Our technology can work out your position in three dimensions from a single transmitter. It can even tell you which way your device is facing," Markham told Wired.
Seeing an opportunity to move out of badgers and into location services, Markham and Trigoni took their system to Isis Innovation, a company owned by Oxford University whose job it is to commercialize scientific findings generated there.
The two are looking for 1.7 million pounds in seed capital to fund their startup, OneTriax, which is working on a version of the receiver that would run on Android.
Smart phones already have magnetometers and electronic compasses they use to orient the screen and locate cell-phone towers.
With slightly more processing power and greater sensitivity, those sensors could also pick up enough magnetic data to be used as a backup location system when line-of-sight GPS radio waves just won't cut it.
Making it work will require an advance in signal processing, but not a huge leap. Badger-net pickups rely on a signal with more information in it than a typical GPS radio signal, so reception will still be a challenge, Markham said.
The two have already developed the software to process it, however, and are working on ways to improve its accuracy to better than the 30cm, give or take, that the Badger-net was able to achieve.
Within four years, Markham predicts, smartphones will be manufactured with his and Trigoni's underground GPS capability.
Then the only problems will be extending all those GPS networks with magnetic broadcasting stations, figuring out how to hand responsibility for location from one to the other as the user's location changes and deciding whether or not they'll have to pay the badgers a royalty.
"We think it's achievable," he told Wired.
|
<urn:uuid:d9123ee9-7fbf-4d7d-98b2-e4f0e8a10c7d>
|
CC-MAIN-2017-09
|
http://www.itworld.com/article/2722395/mobile/badger-network-may-keep-humans-from-getting-lost-underground.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00217-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.959432 | 955 | 3.65625 | 4 |
How a simulated brain outperforms supercomputers
It may sound like science fiction, but the National Science Foundation reports that researchers have created a human brain simulator that in many ways can out-perform the world's fastest supercomputers.
The simulated brain is called Neurogrid, and is the brainchild (terrible pun, I know) of a team at Stanford University. Neurogrid is made up of just 16 chips, though each of them contains more than 65,000 silicon "neurons" whose activity can be programmed according to nearly 80 parameters. In a lot of ways, Neurogrid simulates the human brain, and it could one day help to map out processes relating to schizophrenia and brain injuries, which are nearly impossible to do with a standard computer, even a supercomputer.
So why is Neurogrid so much better at this than a supercomputer? It all has to do with home field advantage. Both Neurogrid and the human brain are much slower than a supercomputer for most tasks. Neither could map out a nuclear blast or model the Big Bang, but both are much better at performing certain tasks than any traditional computer.
According to Neurogrid developers Sam Fok and Alex Neckar, the difference between the way traditional computing systems model the brain and the way Neurogrid works is how communication works within the system, or how the neurons talk to each other. Computers use digital signals — things are either on or off, much like the spikes neurons use to communicate. But in a human brain and in Neurogrid, computation is handled by a continuous non-linear process, much more like an analog signal. Neurogrid uses an analog signal for computations and a digital signal for communication. In doing so, it follows the same hybrid analog-digital approach as the brain, networking all of its neurons. And it runs on five watts of power.
A supercomputer such as IBM’s Blue Gene/Q Sequoia can be programmed to model one second of brain activity, but it takes about an hour. By contrast, Neurogrid can model one second of human brain activity in one second, because it's pretty much a model of the human brain itself.
So if you are ever challenged to a thinking contest by a supercomputer, be sure that the topic is mapping the human brain, so you will have some chance of winning. More important, it's great to see that the Neurogrid simulation will soon be helping to combat brain disorders like autism and Alzheimer's disease, which have been painfully difficult to model using traditional computers.
Posted by John Breeden II on Apr 25, 2013 at 9:39 AM
|
<urn:uuid:6ad774a0-8d06-4d5e-9e94-7568813c7506>
|
CC-MAIN-2017-09
|
https://gcn.com/blogs/emerging-tech/2013/04/how-a-simulated-brain-outperforms-supercomputers.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00393-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.946406 | 536 | 3.59375 | 4 |
As we prepare for Cyber Monday and a holiday season of increased online shopping, NCSA advises that everyone take a moment to practice safe cyber behaviors.
These simple steps apply to everyone who connects to the Internet, whether from laptops, personal computers, mobile phones, or gaming consoles. Before you connect to the Internet, take a moment to evaluate that you’re prepared to share information or engage in a larger community.
Keep a clean machine:
- Keep security software current: Having the latest security software, web browser, and operating system are the best defenses against viruses, malware, and other online threats.
- Automate software updates: Many software programs will automatically connect and update to defend against known risks. Turn on automatic updates if that’s an available option.
- Protect all devices that connect to the Internet: Along with computers, smartphones, gaming systems, and other web-enabled devices also need protection from viruses and malware.
- Plug & scan: “USBs” and other external devices can be infected by viruses and malware. Use your security software to scan them.
Protect your personal information:
- Secure your accounts: Ask for protection beyond passwords. Many account providers now offer additional ways for you verify who you are before you conduct business on that site.
- Create Strong Passwords: Combine capital and lowercase letters with numbers and symbols to create a more secure password. When opening new accounts, use long and strong passwords.
- Provide Only Essential Personal Information: Only provide the minimal amount of information needed to complete a transaction. When providing personal information for any purchase or other reason, ensure that you know who is asking for the information, and why they need it.
- Unique account, unique password: Separate passwords for every account helps to thwart cybercriminals.
- Write it down and keep it safe: Everyone can forget a password. Keep a list that’s stored in a safe, secure place away from your computer.
- Own your online presence: When available, set the privacy and security settings on websites to your comfort level for information sharing. It’s ok to limit who you share information with.
Connect with care:
- When in doubt, throw it out: Links in email, tweets, posts, and online advertising are often the way cybercriminals compromise your computer. If it looks suspicious, even if you know the source, it’s best to delete or, if appropriate, mark as junk email.
- Get savvy about Wi-Fi hotspots: Limit the type of business you conduct and adjust the security settings on your device to limit who can access your machine.
- Protect your $$: When banking and shopping, check to be sure the websites you visit are security enabled. Look for web addresses with “https”, which means the site takes extra measures to help secure your information. “http” is not secure.
- Be Aware of Holiday Shopping Gimmicks: Be mindful of holiday shopping efforts to lure you. Cyber crooks will adjust to the holiday season, trying to get you to click through to deals that may appear too good to be true. They may also try to trick you by sending emails that something has gone wrong with an online purchase.
Be web wise:
- Know the Seller: Research online retailers before a first time purchase from a merchant (or auction seller) new to you. Search to see how others have rated them, and check their reviews. Do these things even if you are a return customer, as reputations can change.
- Stay current. Keep pace with new ways to stay safe online. Check trusted websites for the latest information, and share with friends, family, and colleagues and encourage them to be web wise.
- Think before you act: Be wary of communications that implore you to act immediately, offer something that sounds too good to be true, or ask for personal information.
- Back it up: Protect your valuable work, music, photos, and other digital information by making an electronic copy and storing it safely.
Be a good online citizen:
- Safer for me more secure for all: What you do online has the potential to affect everyone – at home, at work and around the world. Practicing good online habits benefits the global digital community.
- Post only about others as you have them post about you.
- Help the authorities fight cyber crime: Report stolen finances or identities and other cybercrime.
|
<urn:uuid:7dac3f53-99c2-4ff1-9582-09d3bf85a2a0>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2011/11/21/secure-practices-for-online-shopping/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00093-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.897201 | 915 | 2.734375 | 3 |
Big Data Forcing Update of SQL Standard
A proposed standard would extend the SQL database language to accommodate multidimensional “data cubes” generated by scientists and engineers. The effort promises to launch a new technology dubbed “array databases.”
Proponents of the new standard, dubbed SQL/MDA, for “multidimensional array,” said it is needed because big data in science and engineering is structured differently from, say, business data often structured as simple tables. Multidimensional big data includes sensor outputs, imagery, simulations and statistical data.
The proposed International Organization for Standards (ISO) spec would allow SQL databases to handle multidimensional data cubes consisting of one-dimensional sensor data, two-dimensional satellite imagery and three-dimensional geophysical data. The proposed standard could even accommodate four-dimensional weather data as well as large astrophysics simulations of the known universe.
Researchers at Jacobs University in Bremen, Germany, determined that SQL is unable to find, filter and process multidimensional arrays. As a result, the arrays are maintained in separate databases.
The group, led by computer science professor Peter Baumann, has been looking for ways to extend SQL databases. It said the solution is “array databases.”
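To see why flat relational tables struggle here, consider a small three-dimensional data cube. In an array model, a spatial and temporal subset is a single slicing operation, whereas a purely tabular model stores one row per cell and must filter millions of rows to answer the same question. The NumPy sketch below is purely conceptual; it is not the SQL/MDA syntax or rasdaman's query language.

import numpy as np

# A toy geophysical cube: 365 daily time steps on a 180 x 360 lat/lon grid.
cube = np.random.rand(365, 180, 360).astype(np.float32)

# Array model: one subsetting expression pulls a season over a region
# (time slice x latitude band x longitude band) and aggregates it directly.
summer_slice = cube[152:244, 120:150, 170:220]
print("slice shape:", summer_slice.shape, "mean:", float(summer_slice.mean()))

# Relational model: the same cube flattened to (time, lat, lon, value) rows.
# Even this tiny cube becomes about 23.7 million rows a query must scan and filter.
print("equivalent row count:", cube.size)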
In a recent demonstration, the researchers said, more than 1,000 computers were linked via the cloud to jointly crunch the result of a single database query. The hope is that "distributed query processing" can be used to parse multi-petabyte data cubes, the researchers said. The approach could help answer science and engineering problems once considered unsolvable due to a lack of database tools.
International datacenters will next use the distributed approach to gain insights into geospatial and temporal data cubes. Rasdaman, or “raster data manager,” database management systems have been installed at NASA, the European Space Agency, the British Geological Society and other research institutions to wring out the system.
As the number of remote and image sensors grows, it’s likely that big data generated by geoscience and engineering research will eventually outgrow the proposed SQL multidimensional array database. For example, NASA is currently planning new missions expected to stream more than 24 terabytes of data a day.
The torrent of science data is growing exponentially as new space communication networks come online. NASA recently demonstrated a laser communications network that could eventually beam real-time data back to Earth.
For now, a NASA official grappling with the agency’s big data challenges warned recently, “We regularly engage in missions where data is continually streaming from spacecraft on Earth and in space, faster than we can store, manage, and interpret it.”
Following a recent meeting in Beijing, an SQL working group within ISO agreed on the importance of revising the SQL database standard to accommodate multidimensional arrays. It accepted a proposal by the German researcher Baumann that will be used to forge a new standard. If approved, the standard will be called ISO 9075 SQL/MDA, organizers said.
The current ISO 9075 information technology and SQL database language spec defines the data structures and basic operations of SQL data as well as specifying the syntax and semantics of the database language.
|
<urn:uuid:e07c9ac7-00d1-4131-838b-f7d694a210b4>
|
CC-MAIN-2017-09
|
https://www.datanami.com/2014/06/25/big-data-forcing-update-sql-standard/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00445-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.903404 | 663 | 2.6875 | 3 |
Racism means many things to many people. Oxford says it’s “the belief that there are characteristics, abilities, or qualities specific to each race”. That’s a pretty broad definition, and arguably isn’t negative in and of itself.
The common use of the word is far more malicious and, in my opinion, ill-informed. Most people use the word to mean “any sort of behavior or view that reflects negatively on a race other than their own.” Both of those definitions are incorrect, in my opinion. Here’s my view:
True racism is when a person has negative feelings toward someone based on race alone. When person A has all of the characteristics that person B normally accepts as constituting a “decent” person (education, dress, speech, attitude, etc.), but they reject them anyway because of their race, that makes them a racist.
Too often this is confused with behavior-based judgment, and this misunderstanding harms society greatly. When a group of 15 black men walk into a mall dressed as gangsta-rappers — shouting, laughing, and ogling every woman that passes by — the hate that is directed at them is based on their behavior, not their race.
This is in stark contrast to the stereotypical white father who won’t let his daughter marry a black guy from a great family who just got his MBA from Harvard. That’s racism. And until we as a society can openly acknowledge and discuss this distinction we’re doomed to continue in our fear-based silence that does nothing but harm us.
|
<urn:uuid:aeffabdc-f440-4460-bb28-30f1b2d96f5c>
|
CC-MAIN-2017-09
|
https://danielmiessler.com/blog/the-true-definition-of-racism/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00089-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.970823 | 334 | 3.015625 | 3 |
“Keeping the home fires burning” has become one of the top priorities worldwide today, as nations from India to the United States continue to contend with high-profile electricity outages caused by inadequate infrastructure, as well as storm-related damage.
Even Germany, known for its advanced power policies, is worried about the intermittent nature of its renewable, grid-tied power capacity.
In fact, last November, VIK, the association that represents the interests of German industrial and commercial energy consumers, warned that although the nation can export its renewable energy at times of peak renewable production and low demand, it faces paying high prices to ensure its own load is reliable at peak times.
"Although Germany still remains a net exporter of electricity through its expensive renewables, it would be very misleading to evaluate this outcome as a success in itself," VIK Director General Annete Loske stated. Loske called for further emphasis on network expansion, storage technologies and load management capabilities, as Germany continues its energy transition.
Now, Environment Minister Peter Altmaier has proposed a subsidy that, effective May 1, 2013, would finance up to 30 percent of the costs of new battery storage systems that add to the back-up capacity of grid-tied solar power generation.
Under the new law, storage systems would be limited to a maximum of 30 kilowatts (kW) and would have a domestic content mandate—meaning they would have to be manufactured in Germany.
To pay the full, remaining costs of each battery system, the German state bank, KfW, has developed a program that would comprise low-interest loans from its own funds and a repayment bonus from Federal Environment Ministry (BMU) funds. BMU is currently making the final decision on the release of the grant component and the program's launch date.
Thereafter, KfW and BMU will provide detailed information on the program’s conditions.
In the long run, the program—which currently is on hold, according to industry sources—is designed to incentivize the development of solar power batteries that can be installed and connected in houses at the same time as solar panels. Assuming this incentive program were successful, a free-standing house could save up to 60 percent on electricity, according to estimates by the scientific think tank, the Fraunhofer Institute in Würzburg, Germany – while stored power can be used later or offloaded into the national energy grid.
Germany Trade & Invest, the Berlin-based economic development agency of the Federal Republic of Germany, supports the new legislation. “The policy has progressed well, and the increase of fluctuating renewable capacities is now causing the need for storage and smart grid expansion,” said Tobias Rothacher, senior manager, Renewable Energies, at the agency.
“Such batteries not only make customers more independent from the energy price fluctuations, but the technological advances bring Germany closer to immunity from the energy and fuel merry-go-round,” continued Rothacher. There’s also the benefit of produced power being stored rather than being jettisoned expensively –offering further savings to plant operators.”
That new policy will feature high on the agenda when Germany Trade & Invest attends the Batteries Japan 2013 convention, as well as at the Fuel Cell Expo, both in Tokyo this week.
Edited by Braden Becker
|
<urn:uuid:e894934f-82c8-4acb-9705-1470a4b8703e>
|
CC-MAIN-2017-09
|
http://www.iotevolutionworld.com/topics/smart-grid/articles/2013/02/25/328198-energy-storage-germane-germanys-new-pv-solar-subsidy.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00441-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.961862 | 690 | 2.578125 | 3 |
An optical switch developed at the Joint Quantum Institute (JQI) marks a step toward the integration of photonics and electronics.
What, isn't electronics adequate? Well, nothing travels faster than light, and in the effort to speed up the processing and transmission of information, the combined use of photons along with electrons is desirable for developing a workable opto-electronic protocol. The JQI switch can steer a beam of light from one direction to another in only 120 picoseconds, requiring hardly any power, no more than 90 attojoules. At the wavelength used, in the near infrared, this amounts to about 140 photons. The setup consists of a waveguide made from a photonic crystal, a solid-state device placed in the fiber optic transmission path.
A quantum dot is placed inside a tiny zone free from holes. Light is coupled into and out of the waveguide via endcaps. If properly timed, a pump laser pulse allows the probe pulse to exit the side. When the probe and pump beams are not aligned in time, the probe beam will exit the far end of the waveguide. The centerpiece of most electronic gear is the transistor, a solid-state component where a gate signal is applied to a nearby tiny conducting pathway, thus switching on and off the passage of the information signal.
The analogous process in photonics would be a solid-state component which provides a gate, enabling or disabling the passage of light through a nearby waveguide, or acts as a router for switching beams in different directions. In the JQI experiment, planned and conducted at the University of Maryland and at the National Institute of Standards and Technology (NIST) by Edo Waks and his colleagues, an all-optical switch has been created using a quantum dot placed in a resonant cavity. The dot, consisting of a nanometer-sized sandwich of the elements indium and arsenic, is so tiny that electrons moving inside can emit light at only discrete wavelengths, as though the dot were an atom. The quantum dot sits inside a photonic crystal, a material that has been riddled with many tiny holes.
The holes preclude the passage of light through the crystal except for a narrow wavelength range. The dot actually sits in a small hole-free area which acts like a resonant cavity. When light travels along the nearby waveguide, some of it gets into the cavity, where it interacts with the quantum dot. And it is this interaction which can transform the waveguide's transmission properties.
Previous optical switches have been able to work only by using bulky nonlinear crystals and high input power. The JQI switch, by comparison, achieves highly nonlinear interactions using a single quantum dot and very low input power. Switching required only 90 attojoules of power, some five times less than the best previously reported device, made at labs in Japan, which itself used 100 times less power than other all-optical switches. The Japanese switch, however, has the advantage of operating at room temperature, while the JQI switch needs a temperature close to 40 K. Although 140 photons are needed in the waveguide to create the switching action, only about 6 photons are actually required to bring about modulation of the quantum dot, thus throwing the switch.
Continuing our analogy with electronics: light traveling on the waveguide in the form of an information-carrying beam can be switched from one direction to another by the presence of a second pulse, a control beam. To steer the probe beam out the side of the device, the slightly detuned pump pulse needs to arrive simultaneously with the probe pulse, which is on resonance with the dot. The dot lies just off the middle of the waveguide, inside the cavity. The temperature of the quantum dot is tuned so that it becomes resonant with the cavity, leading to strong coupling. If the pump pulse does not arrive at the same time as the probe, the probe beam will exit in another direction.
|
<urn:uuid:8b897da1-c4e1-4b93-8e73-d24f63bc1854>
|
CC-MAIN-2017-09
|
http://www.fs.com/blog/a-new-type-of-optical-switch-using-a-quantum-dot.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00561-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.927219 | 817 | 3.75 | 4 |
In the past decade our identity has undeniably evolved, we’re preoccupied with identity theft and authentication issues, while governments work to adopt open identity technologies. David Mahdi, a Product Manager at Entrust, explains the critical issues in understanding the very nature of identity in a society actively building bridges between the real and digital world.
What are the critical issues in understanding the very nature of identity in a society actively building bridges between the real and digital world?
While one’s identity in a digital world is analogous to what it is in the traditional “real” world, the challenges and issues associated with trusting one’s digital identity, managing it, and securing it are very different between these two worlds.
The core value to one’s digital identity is Trust. In the real world an individual is able to easily confirm their identity by presenting documents, such as a passport or driver’s license, that have been issued by authorities, based on verifiable information provided by the individual. And because these authorities (such as governments) are trusted, the documents, or credentials they issue can be used by the individual to prove their identity with many different organizations that might be offering services.
In the digital world, however, trust is not as easy to determine. Like the real world, a digital identity must be issued by a trusted authority. The extent to which that digital identity can be used may well be a function of the trust that other organizations put in that Authority. In some cases a digital identity may be issued by a single Authority – a bank, a retailer or even a government agency – and that identity may only be used with that Authority. As a result, to take advantage of the digital world, individuals may have many digital identities. This, however, is not ideal. If the Authority that issues a digital identity is trusted by other organizations, in much the same way that a government issuing passports is trusted, then the digital identities they issue could also be trusted by other organizations, and be used more broadly. But establishing that trust is one of the key challenges of the digital world.
As a result, an individual’s digital identity may actually consist of many different identities, issued by many different organizations, and generally they’ll be used only and trusted by the organization that issued them. This creates a bit of a management nightmare for individuals in the online world as they’re faced with keeping track of which identity is used with which organization, where that identity is stored electronically and, most importantly, how to protect it.
How has individual identity evolved in the digital world?
One of the great opportunities in the digital world is the unparalleled growth of services that are available online – whether it’s for purchasing vacations, accessing health documents, balancing bank accounts and paying bills, or just interacting with friends and business colleagues in a social network or over email. But taking advantage of these services has resulted in an individual having many unique digital identities. Each of these organizations may recognize an individual very differently – and their entitlements with each of the organizations may differ dramatically. An individual’s overall identity, therefore, is a collection of digital identities, all of which must be managed and protected.
While services and networks have expanded, threats in the digital world have also increased – in particular threats related to stealing identities – identity theft. So as individuals take advantage of new services, the number of digital identities they have also expands – and in the absence of an effective way to manage all of these identities, or a consistent way of protecting them, their vulnerability to identity theft also increases.
Would you say fraud is the main catalyst behind authentication innovation?
While many people still lose money to traditional fraud scenarios, such as the massive Ponzi scheme perpetrated by Bernard Madoff, increasingly sophisticated online scenarios continue to emerge. Early online attacks, orchestrated largely by "script kiddies," have evolved into sophisticated malware attacks orchestrated by organized crime rings. For the first half of 2010 the Anti-Phishing Working Group (APWG) reported 48,244 phishing attacks occurring across 28,646 unique domain names. At the root of most of these attacks is social engineering: criminals use persuasive and often personalized tactics to entice users to take specific actions that allow the attacker to misdirect or take over a user's session—or their entire machine.
But fraud is a very broad term that is used to refer to anything from the theft of personal information to the interception of financial transactions. At the end of the day, people who are taking advantage of the Internet want to feel protected from all of these threats online – and a big part of that is having the confidence that their identity is protected. Authentication is an important means of ensuring that a person online is who they say they are – and the means to ensure this is to provide reliable, trusted strong authentication. But for users to adopt stronger authentication it needs to be easy to use so it does not interrupt the typical way in which they interact – it must be flexible, and it must be easily deployed.
Even within organizations the adoption of strong authentication is challenging – while a recent Forrester report indicated that 65% of firms in North America and Europe had adopted strong authentication, it had typically been rolled out to only 10% to 20% of the employee base.
The desire to provide this broader protection against online threats is certainly an important motivator in the development of new authentication technologies. As an example, mobile devices are becoming ubiquitous among online users, and being able to leverage these devices offers an easy-to-use, affordable method of authentication that can be rolled out to a broad population base. Similarly, authentication methods such as grid cards offer an affordable and easily adopted alternative to traditionally complicated methods such as one-time passwords – in turn making stronger authentication accessible to a broad base of users. And offering these approaches on a single platform provides organizations with the flexibility to apply the appropriate authentication method to each type of user, matching their online behavior. All of these innovations have been spurred on by the desire to extend greater protection to the online user.
Nowadays most users have a hard time managing their online identity across multiple websites and services. This comes mainly from a lack of understanding of security risks. Would an official unified identity document like a passport solve the problem or just bring more controversy to the issue?
One of the challenges in the digital world is that individuals receive identities from many different sites, so their digital identity is actually a collection of unique identities, all of which must be managed and protected individually. While a lack of knowledge about security risks certainly makes the user’s experience more difficult, the larger issue is the lack of trust among the issuing authorities – the fact that each agency or site is compelled to issue their own branded identity – and that there is little to no trust of identities issued by different organizations.
An identity that could be trusted by more than one organization would certainly make for an easier user experience, particularly if the identity could be managed and protected seamlessly and transparently to the user.
However, trust between organizations is difficult to establish because organizations often have very different, sometimes competing priorities. Even within government agencies, the jurisdictional concerns make such collaboration difficult – and that is compounded in a competitive environment. Leveraging identities across organizations in some type of federation requires common policies and common processes that are adopted and implemented consistently – and that there is a legal framework governing the Federation.
These are difficult issues to resolve, but the establishment of federations in which identities are trusted would be an important step forward in making it easier for individuals to understand and manage their digital identity. And in the absence of a federation that trusts identities issued by another authority, the number of identities that make up an individual’s overall digital identity, will continue to expand.
What are the key issues we have to deal with when implementing identity management? How can they be resolved?
There are a number of issues that need to be addressed when implementing an identity management solution – much of these can be grouped around administration and deployment, security and lifecycle management of the identities.
One of the first issues in the implementation of an identity management solution is the establishment of trust for the identities. The ability to properly vet the individual before issuing the identity creates a foundation for trust – and the potential extension of the trust framework. The development of a common acceptable framework to issue an identity is an important factor in establishing that underlying trust.
In terms of administration it’s important that an identity management solution can be centrally administered so that policies can be implemented consistently and efficiently throughout the organization. From a security perspective, if central policies cannot be implemented consistently or enforced then it undermines the overall system.
It’s also important that an identity management system provides flexibility to apply different types of identities to different types of users. This reflects the fact that not all users are equal – that different roles may perform different types of transactions, with different risk levels. An effective identity management system will support many different authentication types, which in turn can support different security levels – such as one-time passwords versus digital certificates.
Based on your experience, what’s the quality of the software used to work with open identity standards? What are the missing ingredients?
There’s a lot more acceptance today of the products that are using and leveraging open identity standards than was the case 3 to 5 years ago. However, to a large extent many of the projects that are being implemented are very slow to develop and are very basic applications. As an example, being able to leverage a Google ID across multiple sites is convenient to users and a significant step forward than what has been the case to date, however the applications supported are not high value. The standards that have been developed in this area allow for more robust or stepped up authentication, but to date there has not been a significant movement to leverage this.
What’s your take on government adoption of open identity technologies?
The government has provided a major impetus to the adoption of open identity technologies and to a large extent has led the way. They have been involved in standards-based federated models for many years, based largely on PKI using x.509 certificates.
In more recent years the government has played an important role in driving some of the requirements that need to be addressed for the back-end systems, such as stronger protection of the servers to address privacy concerns. These considerations need to be addressed before these technologies can be leveraged for mass consumption, or for higher value services.
|
<urn:uuid:1616e2fc-52bc-4cf5-bf91-add6798be319>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2010/12/20/the-importance-of-identity-in-the-digital-age/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00437-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.959108 | 2,166 | 2.59375 | 3 |
As we have covered almost all topics of scanning this is the last topic that come under scanning….
First of all ..
WHAT IS IP SPOOFING??
IP spoofing is basically forging your IP address so that it appears to be something else to the victim, i.e. the packets you send carry a fake source IP address instead of your real one.
~ IP Spoofing is when an attacker changes his IP address so that he appears to be someone else.
~ When the victim replies back to the address, it goes back to the spoofed address and not to the attacker’s real address.
~ You will not be able to complete the three-way handshake and open a successful TCP connection by spoofing an IP address.
HOW TO DETECT IP SPOOFING ??
When an attacker is spoofing packets, he is usually at a different location than the address being spoofed
Attacker’s TTL(Time to Live i.e Time for which IP is allocated for use) will be different from the spoofed address’ real TTL. If you check the received packet’s TTL with spoofed one, you will see TTL doesn’t match.
These tricks are also restricted in recent versions of Windows (i.e. after SP3), and a properly configured firewall will block many spoofing attempts by itself.
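A minimal sketch of the TTL check described above, using Scapy (assumed to be installed and run with capture privileges). The five-hop tolerance and the simple "first TTL seen" baseline are illustrative assumptions – real detection systems use far more robust heuristics.

```python
from scapy.all import IP, sniff  # requires scapy and packet-capture privileges

expected_ttl = {}  # source IP -> TTL first observed from that source

def check_ttl(pkt):
    if IP not in pkt:
        return
    src, ttl = pkt[IP].src, pkt[IP].ttl
    if src not in expected_ttl:
        expected_ttl[src] = ttl          # remember the baseline TTL for this source
    elif abs(expected_ttl[src] - ttl) > 5:
        # A large jump in TTL suggests the packets now travel a different
        # number of hops -- one hint that the source address may be spoofed.
        print(f"Possible spoofing: {src} TTL {ttl} (expected ~{expected_ttl[src]})")

sniff(filter="ip", prn=check_ttl, store=False, count=200)
```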
This Is all about the IP spoofing and Scanning Part.
The Next Two Parts of upcoming class:
1. How to Protect Yourself From Scanning .
2. How to Hack Websites Using things that We Studied until Now . A little SQL injection tutorial is also required for that. We will try to cover it as quick as Possible..
If you have any doubts about Ip spoofing you can ask..
|
<urn:uuid:6f6737ae-67d1-4663-bec4-1f5807262579>
|
CC-MAIN-2017-09
|
https://www.hackingloops.com/hacking-class-9-ip-spoofing-and-its-use/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00137-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.932743 | 367 | 2.875 | 3 |
FASP transfer protocol speeds data transmission to the cloud
- By John Breeden II
- May 15, 2014
While the government uses the cloud to store everything from genomic sequences to climate data, the transfer speeds going into or coming out of any cloud are extremely slow compared to sharing data between more traditional storage media.
And those speeds won’t change anytime soon, according to Jay Migliaccio, Aspera’s director of cloud services, unless agencies start to invest in new transit protocols.
Migliaccio, speaking at the FOSE 2014 conference presented by 1105 Media, explained that the problem is inherent in the way local wide-area networks handle storage compared to the way that it's done in the cloud.
WANs can use the Internet's core Transmission Control Protocol (TCP) when transferring files because local hard drives, network-attached storage devices and just about every other non-cloud storage medium store data in the same linear fashion. However, the cloud uses an object-based storage scheme, so moving a file into the cloud has to be done over the Hypertext Transfer Protocol (HTTP), which is inherently challenging for very large files.
"To move a 1 terabyte file up to the cloud, you basically have to break it down into thousands of parts to be compatible with the cloud's object-based storage system. And that is neither fast nor efficient."
To address this problem, Aspera, which is owned by IBM, created a protocol that is able to bridge the gap between normal storage and the cloud. Built on the fast and secure protocol (FASP), which is already used by many agencies for large data transfers, the new service, called Aspera On Demand, creates a fast pathway to the cloud. "We can transfer files directly to the object storage area of the cloud as if the file was landing on a disk," Migliaccio said.
And it's not just about raw speed. The new protocol is built on an open architecture that can be embedded into any program or application. Developers can then control the speed of transfers, giving more bandwidth to higher-tier customers or scaling back when multiple transfers are taking place. Migliaccio compared that to a car, in which control is as important as speed.
Government users will likely be interested in the fact that transfers with the new protocol can be made with encrypted files. "You can have the file be unencrypted on the other end if you want, but still protect it during transport, "Migliaccio said. "Or you can have it remain encrypted so that it's protected within the cloud," which he added might be an attractive option for agencies making use of public clouds for storage.
In terms of speed differences between TCP and Aspera On Demand, Migliaccio said the best one can hope for with a typical transfer of a large file to the cloud was about 100 megabytes per second. But with Aspera On Demand, obtaining speeds of up to 1 gigabit/sec is fairly standard.
Agencies interested in using Aspera On Demand to speed up their transfers of data to the cloud can find it as part of the Amazon Web Services, where it is available to rent by the hour or by the gigabyte. It's also available through Microsoft Azure as a full blown service. It's FIPS-140-2 certified and uses AES-128 bit encryption, though Migliaccio said it would soon be upgraded to 256-bit encryption.
John Breeden II is a freelance technology writer for GCN.
|
<urn:uuid:f5408834-d20b-4030-b54d-9521364ec0eb>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2014/05/15/fose-data-transfer-protocol.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00434-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953971 | 745 | 2.75 | 3 |
Information security professionals have a tough time of it.
Consider what they have to cope with in today's IT environment. You have big data meeting BYOD, a combination that's almost an invitation to cyber-espionage. The traditional method for protecting corporate networks was to create a hardened outer shell that restricted access to internal data -- the so-called M&M network that's hard on the outside but soft in the middle. That external shell is tough to crack, but attackers have found a creative way to get to the soft middle by using lost or stolen devices or employing social networks to glean usernames and passwords.
Meanwhile, attacks on individual and corporate digital assets are on the rise, and the black hats get more ingenious every day. Infosec professionals have to stay one step ahead, and that requires that they be well educated and as thoroughly trained in the dark art of network security as the bad guys. Going forward, IT security gurus will need to think analytically -- understanding not just how to set up security, but also how to craft security solutions so that the business focus is supported while at the same time protecting the business's digital assets.
Focused procedures, such as penetration testing and "ethical" hacking, can be effective at hunting out specific vulnerabilities, but a holistic approach to network security that blankets the perimeter and protects against a broad range of attacks is better able to adapt to the constant evolution of assaults of this type.
To train for this type of holistic approach, students taking information security courses must practice a variety of defensive techniques, such as configuring access control and designing comprehensive security policies. They must also learn how to properly conduct an organizational security audit to identify security breaches and other alerts.
Universities and colleges are offering courses and projects that prepare and train cybersecurity professionals, and often these courses are specialized and not part of the core curriculum. Moreover, they often remain stuck on rigid, traditional security approaches that lack the flexibility users need in a mobile world. A new approach to cybersecurity protection and related education is needed, one that blends a focus on technology and security techniques with social psychology, risk management, collaboration and overall curriculum integration. An effective educational program is one that recognizes the need for security with flexibility, as part of the entire curriculum -- from entry-level to advanced, and in all classes, whether they are focused on some aspect of technology or on developing leadership skills.
Similarly, an effective curriculum is one that helps students think like professional hackers while guiding them to develop a risk-based approach to security -- which ensures that appropriate measures are applied to protect key data. The National Security Agency is promoting this new approach to cybersecurity education with its hacking competitions, a hands-on way to showcase potential threats and countermeasures. For their part, universities are moving toward hands-on virtual labs and introducing areas ranging from ethics to social psychology.
Just as vital, though, is the need for cybersecurity education for all students, and not just those studying information technologies. In the end, every user has a role in creating a dynamic mobile environment that offers flexibility while remaining secure.
Lynne Y. Williams is a faculty member in the MSIT program at Kaplan University who has been working with computers and networks since the days of VAX mini-mainframes. The views expressed in this article are solely those of the author and do not represent the views of Kaplan University.
Read more about security in Computerworld's Security Topic Center.
This story, "How can we keep infosec pros a step ahead of the bad guys?" was originally published by Computerworld.
|
<urn:uuid:60553a55-d6ae-4380-8c2b-5c0e3bfaf51b>
|
CC-MAIN-2017-09
|
http://www.itworld.com/article/2710712/security/how-can-we-keep-infosec-pros-a-step-ahead-of-the-bad-guys-.html?page=2
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00486-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953273 | 727 | 2.796875 | 3 |
Discriminatory language is as pervasive on sites like Facebook and Twitter as it was a couple of years ago, but fewer teens and young adults seem to be OK with that, a recent survey found.
About half of young people reported seeing discriminatory language or images posted on social networking sites, according to the results of a survey released Wednesday by the Associated Press-National Opinion Research Center for Public Affairs Research and MTV. Roughly the same findings were made in a 2011 survey.
The language might include misogynistic and homophobic words and phrases such as "that's so gay." Many young people use such language, the survey found, to try to be funny or because they think it's cool.
But that thinking might be changing. Compared to 2011, nearly 20 percent fewer teenagers and people in their early 20s said it was OK for them and their friends to use discriminatory language around each other, even when they know they don't mean it, the survey found.
Also, nearly 80 percent of young people said it's important for people who use slurs or discriminatory language online to be held accountable for their actions, according to the survey.
The AP-NORC center's survey was conducted to get a better look at discrimination and bullying trends online, and to see how teenagers and young adults respond to it. Some of the groups most frequently targeted by discriminatory language are people who are overweight, lesbian, gay, bisexual, transgender, those who question their gender identity, blacks and women, the survey found. The most popular sites for hurtful language were YouTube, Facebook, Twitter and gaming networks like Xbox Live and the PlayStation Network.
However, it's unclear from the survey results whether teenagers and young adults would really do anything to stop the use of such language. Less than half said they would intervene if they saw someone using discriminatory language or images on social media, a 15 percent decline from 2011, the survey found.
Sixty percent said they would take action if the language were used in person. But whether it's online or in the real world, many said they wouldn't intervene because they wouldn't feel comfortable doing so.
Tumblr, Snapchat and Reddit had less discriminatory language than other social media sites, according to the survey.
The survey included more than 1,200 people ages 14-24 who were interviewed in September and October.
|
<urn:uuid:2a266ad2-a1bf-430a-a1a3-c61d7473e45c>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2172059/data-center/study--slurs-still-litter-social-websites--but-such-language-is-increasingly-unacceptabl.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00006-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.978186 | 478 | 2.71875 | 3 |
British lawmakers want more transparency and less bias in decision-making -- not their own, of course, but in decisions made by AI systems.
As more and more software systems and connected devices employ artificial intelligence technologies to make decisions for their owners, the lawmakers want to know what's behind their thinking.
The U.K. Parliament's Science and Technology Committee has been studying the need for more regulation in the fields of robotics and artificial intelligence.
Recent advances in AI technology raise a host of social, ethical and legal questions, the committee's members said in a report published Wednesday.
We need, they said, to think about whether transparency in decisions made by AI systems is important; whether it's possible to minimise bias being accidentally built into them; and how we might verify that such technology is operating as intended and will not lead to unwanted or unpredictable behaviors.
This being the Science and Technology Committee, one of the conclusions is that more research is needed and that Parliament should create a standing Commission on Artificial Intelligence to figure out how to regulate it. Lawyers, social scientists, natural scientists, and philosophers should all be represented on the commission, in addition to computer scientists, mathematicians, and engineers, the committee recommended in its report.
"While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now," the report said.
With AI likely to disrupt the employment market, taking over many jobs that only humans can do today, the committee wants the government to make the education system more flexible -- and able to train people throughout their lives, not just during their school years.
This will be a huge undertaking, in the U.K. at least. The committee notes the country is already suffering a digital skills crisis, and the government has still not published its strategy for helping workers cope with the existing array of digital technologies.
The U.K. government identified robotics and autonomous systems as long ago as 2013 as being one of eight great technology areas in which the country could become a global leader. Despite that, there is no government strategy for developing skills and investment to create future growth in robotics and AI, the committee found. And with Brexit, the U.K.'s departure from the European Union, looming, it may become harder for the country to recruit those skills from abroad.
|
<urn:uuid:670e8aa2-fd9e-4900-9eaf-74099715558b>
|
CC-MAIN-2017-09
|
http://www.itnews.com/article/3130495/lawmakers-want-uk-to-set-example-on-transparency-in-ai-decision-making.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00358-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.966443 | 487 | 3.0625 | 3 |
Crowdsourced mapping for flood tracking, prediction
- By Suzette Lohmeyer
- Dec 02, 2016
The latest tool in flood prediction for low-lying coastal areas in Hampton Roads, Va., is crowdsourced drone footage posted on YouTube.
While regulations and no-fly zones in the Norfolk area (home of the Norfolk Naval Base) should prevent drones from being flown, Derek Loftis and his team at the Virginia Institute of Marine Science realized that the regulations don’t seem to stop drone hobbyists from flying.
“After Hurricane Matthew hit, people were out there recording with their drones,” Loftis said. “Some of them even attached phones to produce live streaming video.”
Loftis realized he could use the drone video as a cost-free way to check the accuracy of his primary flood prediction model, StormSense.
StormSense, which uses street-level hydrodynamic modeling to determine types of flooding as well as the areas at highest risk, is dependent on ultrasonic sensors. At around $5,000 a pop, Loftis said, many towns can’t afford to put them in every spot where there might be flooding. And while Loftis just won a $75,000 grant from the National Institute of Standards and Technology to purchase more sensors, it still won’t be enough to cover every area he wants.
With the video from the drones, Loftis can see “the maximum line of flooding, and we can check if it is the same as we predicted,” he said. “We can figure out how off we are. Are we 20 feet or are we five feet off.”
If using YouTube drone video sounds less than scientific, Loftis agreed. "That's true," he said. "But if you can get a hold of the raw footage, you can stitch it into usable data" using Esri's Drone2Map tool, which analyzes drone images and converts them into 2-D and 3-D maps.
Using an app from Esri is a strategic part of Loftis' long-term plan to make his flood prediction methodology usable anywhere by anyone. "A lot of cities have contracts or site licenses for the Esri GIS program and are filled with people certified to use it."
Another way to track flooding is with the Sea Level Rise app, which, like the drone footage, crowdsources native knowledge. Created by the non-profit Wetlands Watch, the app allows local citizens to map flooding during and after an event.
“I watched all the spontaneous social networking spring up during Hurricane Sandy,” Wetlands Watch Executive Director Skip Stiles said. “I thought, well wait a minute, could you use social networking and crowdsourcing to get the knowledge to people who need it?”
The organization partnered with Concursive to create the Sea Level Rise App. Anyone can view the app, but only those who have gone through a 10-minute training are authorized to add data to the map.
Users go out during a flooding event and walk around the edge of the water. Every five steps they use the app to drop GIS pins. The data they collect is exported as an Excel file so it can be turned into a shape file and overlaid on an emergency management grid.
This allows researchers like Derek Loftis, who gets data from Wetlands Watch, to reduce the margin of error on his prediction models, said Stiles, who is hoping to get that margin down to 10 feet.
This not only helps emergency crews know where to go but also has a very practical application for areas like Hampton Roads that have school flood days instead of snow days. "My phone might buzz and say, 'High Water Advisory for the city from 12-4,' and I don't know when or where exactly," Stiles said. "With this app you can get enough data" so that at 8 a.m. parents will know if after-school activities might be cancelled because of flooding.
While the app is currently free, Stiles and his team have created a business model to make it self-sustaining.
“Eventually we want to give people franchise areas in which they can map, say, for example, the City of Virginia Beach. The backend data support would be done by Concursive, and the City of Virginia Beach would pay a few bucks and then they could then have all the data,” he said. “Collect a few pennies from a lot of people and it becomes self-supporting.”
Stiles also is looking into the idea of selling the data to insurance companies. “Do you know how many cars we lose due to flooding?” Stiles asked. “If an insurance company paid the cost of just one SUV, we could sustain the Sea Level Rise project for a year.”
Suzette Lohmeyer is a freelance writer based in Arlington, Va.
|
<urn:uuid:faf517b1-e810-4274-a8dc-718b309b16fb>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2016/12/02/crowdsourced-flood-tracking.aspx?admgarea=TC_Mobile
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00002-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.960584 | 1,042 | 2.546875 | 3 |
As the time goes by and the network with more and more virtualised servers and other devices are making that network more complicated, overlay technologies are rising to save the day for network administrators.
Virtual Extensible LAN – VXLAN – is an encapsulation technology used to run an overlay network on top of an existing Layer 3 network. An overlay network is a virtual network built on top of the current Layer 2 infrastructure, with additional Layer 3 technologies used to support flexible compute architectures. VXLAN makes it much easier for network engineers to scale out a cloud computing environment while logically separating cloud applications and tenants. A cloud computing environment is multitenant: every tenant needs its own separately configured logical network, which in turn needs its own network identifier.
What the hell that means?
What does VXLAN actually do? To put it simply, VXLAN creates logical networks that connect your virtual machines across different physical networks. It lets us build a Layer 2 network for our VMs on top of our Layer 3 network – which is why VXLAN is an overlay technology. In a "normal" network, if a virtual machine needs to reach another virtual machine on a different subnet, the traffic has to go through a Layer 3 router to cross between networks. With VXLAN we can use a VXLAN gateway of some sort to connect them without ever leaving the virtual environment for the physical network.
Traditionally, network engineers have used virtual LANs – VLANs – to separate applications and tenants in a cloud computing environment, but the VLAN specification allows only up to 4,096 network identifiers at any given time, which may not be enough addresses for a very large cloud. The main goal of VXLAN is to extend that address space by adding a 24-bit segment identifier, raising the number of available identifiers to about 16 million. The VXLAN segment identifier in every frame distinguishes individual logical networks, which means millions of isolated Layer 2 VXLAN networks can coexist on a common Layer 3 infrastructure.
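To make the encapsulation concrete, here is a minimal sketch that builds the 8-byte VXLAN header (flags plus the 24-bit VNI) and wraps an inner Ethernet frame in it for transport over UDP port 4789, using only the Python standard library. The VNI value and the dummy inner frame are illustrative assumptions; a real VTEP also handles MAC learning, flooding and the outer IP/UDP headers.

```python
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN destination port
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header (RFC 7348 layout) to an inner Layer 2 frame.

    The result is the UDP payload; the outer IP/UDP headers are added by the
    sending socket or VTEP.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Header layout: 1 byte flags, 3 reserved bytes, 3-byte VNI, 1 reserved byte.
    header = struct.pack("!B3s3sB",
                         VXLAN_FLAG_VNI_VALID,
                         b"\x00\x00\x00",
                         vni.to_bytes(3, "big"),
                         0)
    return header + inner_ethernet_frame

# Example: wrap a dummy inner frame in segment (VNI) 5000.
payload = vxlan_encapsulate(b"\x00" * 60, vni=5000)
assert len(payload) == 8 + 60
```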
Just like VLANs, only virtual machines in the same logical network can communicate with one another. If widely adopted, VXLAN could allow network engineers to move virtual machines across long distances and play an important role in software-defined networking – SDN, an emerging architecture that lets servers or controllers tell network switches exactly where to send packets. In conventional networks, every switch runs proprietary software that tells it what to do. In SDNs, forwarding decisions are centralized, and the flow of network traffic can be planned independently of the individual switches and data center equipment.
To deploy software-defined networking with VXLAN, administrators can reuse existing hardware and software, which makes the technology robust and financially appealing. Many vendors are rolling out VXLAN gateways because they bridge network services between software-based network overlays and the underlying physical infrastructure. Vendors have pitched network overlays built on encapsulation protocols such as VXLAN as a way to implement software-based, virtualized cloud networking. That is appealing; however, network overlays do not replace the physical environment – they only abstract it.
The physical network is still there, and it still has to be managed. Moreover, many network overlays are deployed in hybrid settings where much of the data center is still governed by legacy architecture and network services, such as firewalls and load balancers, are still implemented in hardware. Because of this, companies will need a VXLAN gateway in order to extend services and administration across both physical and virtual networks. VXLAN gateways are available in software, but hardware implementations scale better. The VXLAN gateway is essentially the simplest implementation of the border between traditional and virtualized networking, bridging legacy networks and fully virtualized ones. Many companies already benefit from such networks in running their businesses.
|
<urn:uuid:a6ba3b43-6c92-4bd8-9851-eedc440e31ac>
|
CC-MAIN-2017-09
|
https://howdoesinternetwork.com/2014/vxlan
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00530-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.92475 | 850 | 2.828125 | 3 |
Implant Runs on the Batteries in Your Ears
Implanted electronic devices can be powered by a biological battery that exists in the ear, researchers at MIT, the Massachusetts Eye and Ear Infirmary and the Harvard-MIT Division of Health Sciences and Technology have demonstrated for the first time.
Team member Konstantina Stankovich, assistant professor of otology and laryngology at Harvard Medical School, implanted electrodes into the biological batteries in guinea pigs' ears. These electrodes led to low-power electronic devices on the outsides of the subjects' ears.
"We have been able to extract energy for up to five hours without damaging the subjects' hearing," Stankovich told TechNewsWorld.
The devices wirelessly transmitted data about the chemical conditions in the guinea pigs' ears to an external receiver.
More About the Device
The electrodes implanted in the test subjects' ears were attached to chips developed by MIT's Microsystems Technology Laboratories that had an ultralow-power radio transmitter, and power-conversion circuitry.
Power was supplied by a natural biological battery that exists in the ears of all mammals, including humans. This battery's voltage fluctuates, so the power conversion circuit in the chip gradually builds up a charge in a capacitor in order to get enough juice to power the radio transmitter. That takes between 40 seconds and four minutes.
However, the biological battery still didn't provide enough voltage to start the transmitter.
"An AA battery gives you 1.5 volts, but the energy source we're harvesting puts out 0.1 volts, and we're taking only a very small amount of that, because we know this biological battery is essential to hearing," Stankovich said.
The researchers shot a burst of radio waves at the control circuit to kick-start it. Once the transmitter was fired up, it was self-sustaining.
The Ears Have the Power
The biological battery resides in the cochlea, which is a spiral-shaped cavity in the inner ear that has three chambers. Two of these, the scala vestibuli and the scala tympani, contain perilymph, and the third, the scala media, or cochlear duct, contains endolymph.
Perilymph is a liquid containing more sodium than potassium. The endolymph contains more potassium than sodium. The endolymph's high concentration of potassium ions produces an ionic electrical potential that's about 80 to 90 millivolts more positive than that of the perilymph.
The scala vestibuli is separated from the scala media by the basilar membrane. The electrical potential existing between both sides of the membrane contributes to hearing.
Here's roughly how it works: Noise vibrations entering the ear move the perilymph. That motion is sensed by the stereocilia on the Organ of Corti, a cellular layer on the basilar membrane. The stereocilia convert the motion they sense to electrical signals, using the differences in the electrical potential existing on both sides of the basilar membrane, and send these signals to the brain through the auditory nerve.
Going to Market
The chip used by the researchers "would occupy about a quarter of a penny" so it can be fully inserted into the ear of a rodent or a human, Stankovich said.
"Ultimately, the electrodes will be inside the cochlea and the chip in the middle ear," Stankovich continued. "We are working on packaging and the miniaturization of the electrodes."
The device could be used "as a sensor for the inner ear and potentially to power devices that would deliver drugs and therapies to the ear," Stankovich said. "Right now, the energy we harvest is not sufficient to power a cochlear implant or hearing aid."
This "is possible in the long term if circuits are simplified and power draw is reduced," said Anu Cherian, senior industry analyst for energy and power systems at Frost & Sullivan. "More integrated therapies are beyond the scope of today's options."
It might take 10 years before a device based on this technology actually hits the market, Cherian told TechNewsWorld.
"Right now, the tests are on guinea pigs," she explained. "If they were being conducted on humans right now, I'd say it's about five years from commercialization."
|
<urn:uuid:6c2cdf33-20bb-4e02-9817-a5877b332a31>
|
CC-MAIN-2017-09
|
http://www.linuxinsider.com/story/science/76580.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00474-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.937186 | 897 | 3.5625 | 4 |
Geoff Huston, APNIC
The tabloid press is never lost for a good headline, but in July 2012 this one in particular caught my eye: "Global Chaos as Moment in Time Kills the Interwebs." I am pretty sure that "global chaos" is somewhat "over the top," but a problem did happen on July 1 this year, and yes, it affected the Internet in various ways, as well as affecting many other enterprises that rely on IT systems. And yes, the problem had a lot to do with time and how we measure it. In this article I will examine the cause of this problem in a little more detail.
What Is a Second?
I would like to start with a rather innocent question: What exactly is a second? Obviously it is a unit of time, but what defines a second? Well, there are 60 seconds in a minute, 60 minutes in an hour, and 24 hours in a day. That information would infer that a "second" is 1/86,400 of a day, or 1/86,400 of the length of time it takes for the Earth to rotate about its own axis. Yes?
Almost, but this definition is still a little imprecise. What is the frame of reference that defines a unit of rotation of the Earth? As was established in the work a century ago in attempting to establish a frame of reference for the measurement of the speed of light, these frame-of-reference questions can be quite tricky!
What is the frame of reference to calibrate the Earth's rotation about its own axis? A set of distant stars? The Sun? These days we use the Sun, a choice that seems logical in the first instance. But cosmology is far from perfect, and far from being a stable measurement, the length of time it takes for the Earth to rotate once about its axis relative to the Sun varies month by month by up to some 30 seconds from its mean value. This variation in the Earth's rotational period is an outcome of both the Earth's elliptical orbit around the Sun and the Earth's axial tilt. These variations mean that by the time of the March equinox the Solar Day is some 18 seconds shorter than the mean, at the time of the June solstice it is some 13 seconds longer, at the September equinox it is some 21 seconds shorter, and in December it is some 29 seconds longer.
This variation in the rotational period of the Earth is unhelpful if you are looking for a stable way to measure time. To keep this unit of time at a constant value, then the definition of a second is based on an ideal version of the Earth's rotational period, and we have chosen to base the unit of measurement of time on Mean Solar Time. This mean solar time is the average time for the Earth to rotate about its own axis, relative to the Sun.
This value is relatively constant, because the variations in solar time work to cancel out each other in the course of a full year. So a second is defined as 1/86,400 of mean solar time, or in other words 1/86,400 of the average time it takes for the Earth to rotate on its axis. And how do we measure this mean solar time? Well, in our search for precision and accuracy the measurement of mean solar time is not, in fact, based on measurements of the sun, but instead is derived from baseline interferometry from numerous distant radio sources. However, the measurement still reflects the average duration of the Earth's rotation about its own axis relative to the Sun.
So now we have a second as a unit of the measurement of time, based on the Earth's rotation about its own axis, and this definition allows us not only to construct a uniform time system to measure intervals of time, but also to all agree on a uniform value of absolute time. From this analysis we can make calendars that are not only "stable," in that the calendar does not drift forward or backward in time from year to year, but also accurate in that we can agree on absolute time down to units of minute fractions of a second. Well, so one would have thought, but the imperfections of cosmology intrude once again.
The Earth has the Moon, and the Earth generates a tidal acceleration of the Moon, and, in turn, the Moon decelerates the Earth's rotational speed. In addition to this long-term factor arising from the gravitational interaction between the Earth and the Moon, the Earth's rotational period is affected by climatic and geological events that occur on and within the Earth. Thus it is possible for the Earth's rotation to both slow down and speed up at times. So the two requirements of a second—namely that it is a constant unit of time and it is defined as 1/86,400 of the mean time taken for the Earth to rotate on its axis—cannot be maintained. Either one or the other has to go.
In 1955 we went down the route of a standard definition of a second, which was defined by the International Astronomical Union as 1/31,556,925.9747 of the 1900.0 Mean Tropical Year. This definition was also adopted in 1956 by the International Committee for Weights and Measures and in 1960 by the General Conference on Weights and Measures, becoming a part of the International System of Units (SI). This definition addressed the problem of the drift in the value of the mean solar year by specifying a particular year as the baseline for the definition.
However, by the mid-1960s this definition was also found to be inadequate for precise time measurements, so in 1967 the SI second was again redefined, this time in experimental terms as a repeatable measurement. The new definition of a second was 9,192,631,770 periods of the radiation emitted by a Caesium-133 atom in the transition between the two hyperfine levels of its ground state.
So we have the concept of a second as a fixed unit of time, but how does this relate to the astronomical measurement of time? For the past several centuries the length of the Mean Solar Day has been increasing by an average of some 1.7 milliseconds per century. Given that the solar day was fixed on the Mean Solar Day of the year 1900, by 1961 it was around a millisecond longer than 86,400 SI seconds. Therefore, absolute time standards that change the date after precisely 86,400 SI seconds, such as the International Atomic Time (TAI), get increasingly ahead of the time standards that are rigorously tied to the Mean Solar Day, such as Greenwich Mean Time (GMT).
When the Coordinated Universal Time (UTC) standard was instituted in 1961, based on atomic clocks, it was felt necessary that this time standard maintain agreement with the GMT time of day, which until then had been the reference for broadcast time services. Thus, from 1961 to 1971 the rate of broadcast time from the UTC atomic clock source had to be constantly slowed to remain synchronized with GMT. During that period, therefore, the "seconds" of broadcast services were actually slightly longer than the SI second and closer to the GMT seconds.
In 1972 the Leap Second system was introduced, so that the broadcast UTC seconds could be made exactly equal to the standard SI second, while still maintaining the UTC time of day and changes of UTC date synchronized with those of UT1 (the solar time standard that superseded GMT). Reassuringly, a second is now a SI second in both the UTC and TAI standards, and the precise time when time transitions from one second to the next is synchronized in both of these reference frameworks. But this fixing of the two time standards to a common unit of exactly 1 second means that for the standard second to also track the time of day it is necessary to periodically add or remove entire standard seconds from the UTC time-of-day clock. Hence the use of so-called leap seconds. By 1972 the UTC clock was already 10 seconds behind TAI, which had been synchronized with UT1 in 1958 but had been counting true SI seconds since then. After 1972, both clocks have been ticking in SI seconds, so the difference between their readouts at any time is 10 seconds plus the total number of leap seconds that have been applied to UTC.
Since January 1, 1988, the role of coordinating the insertion of these leap-second corrections to the UTC time of day has been the responsibility of the International Earth Rotation and Reference Systems Service (IERS). IERS usually decides to apply a leap second whenever the difference between UTC and UT1 approaches 0.6 second in order to keep the absolute difference between UTC and the mean solar UT1 broadcast time from exceeding 0.9 second.
The UTC standard allows leap seconds to be applied at the end of any UTC month, but since 1972 all of these leap seconds have been inserted either at the end of June 30 or December 31, making the final minute of the month in UTC, either 1 second longer or 1 second shorter when the leap second is applied. IERS publishes announcements in its Bulletin C every 6 months as to whether leap seconds are to occur or not. Such announcements are typically published well in advance of each possible leap-second date—usually in early January for a June 30 scheduled leap second and in early July for a December 31 leap second. Greater levels of advance notice are not possible because of the degree of uncertainty in predicting the precise value of the cumulative effect of fluctuations of the deviation of the Earth's rotational period from the value of the Mean Solar Day. Or, in other words, the Earth is unpredictably wobbly!
Between 1972 and 2012 some 25 leap seconds have been added to UTC. On average this number implies that a leap second has been inserted about every 19 months. However, the spacing of these leap seconds is quite irregular: there were no leap seconds in the 7-year interval between January 1, 1999, and December 31, 2005, but there were 9 leap seconds in the 13 years between 1985 and 1997, as shown in Figure 1. Since December 31, 1998, there have been only 3 leap seconds, on December 31, 2005, December 31, 2008, and June 30, 2012, each of which has added 1 second to that final minute of the month, at the UTC time of day.
Leaping Seconds and Computer Systems
The June 30, 2012 leap second did not pass without a hitch, as reported by the tabloid press. The side effect of this particular leap second appeared to include computer system outages and crashes—an outcome that was unexpected and surprising. This leap second managed to crash some servers used in the Amadeus airline management system, throwing the Qantas airline into a flurry of confusion on Sunday morning on July 1 in Australia. But not just the airlines were affected, because LinkedIn, Foursquare, Yelp, and Opera were among numerous online service operators that had their servers stumble in some fashion. This event managed to also affect some Internet Service Providers and data center operators. One Australian service provider has reported that a large number of its Ethernet switches seized up over a 2-hour period following the leap second.
It appears that one common element here was the use of the Linux operating system. But Linux is not exactly a new operating system, and the use of the Leap Second Option in the Network Time Protocol (NTP) [7, 8, 9, 10] is not exactly novel either. Why didn't we see the same problems in early 2009, following the leap second that occurred on December 31, 2008?
Ah, but there were problems then, but perhaps they were blotted out in the post new year celebratory hangover! Some folks noticed something wrong with their servers on January 1, 2009. Problems with the leap second were recorded with Red Hat Linux following the December 2008 leap second, where kernel versions of the system prior to Version 2.6.9 could encounter a deadlock condition in the kernel while processing the leap second.
"[...] the leap second code is called from the timer interrupt handler, which holds xtime_lock. The leap second code does a printk to notify about the leap second. The printk code tries to wake up klogd (I assume to prioritize kernel messages), and (under some conditions), the scheduler attempts to get the current time, which tries to get xtime_lock => deadlock."
The advice in January 2009 to sysadmins was to upgrade the systems to Version 2.6.9 or later, which contained a patch that avoided this kernel-level deadlock. This time it is a different problem, where the server CPU encountered a 100-percent usage level:
"The problem is caused by a bug in the kernel code for high resolution timers (hrtimers). Since they are configured using the CONFIG_HIGH_RES_TIMERS option and most systems manufactured in recent years include the High Precision Event Timers (HPET) supported by this code, these timers are active in the kernels in many recent distributions.
"The kernel bug means that the hrtimer code fails to set the system time when the leap second is added. The result is that the hrtimer representation of the time taken from the kernel is a second ahead of the system time. If an application then calls a kernel function with a timeout of less than a second, the kernel assumes that the timeout has elapsed immediately after setting the timer, and so returns to the program code immediately. In the event of a timeout, many programs simply repeat the requested operation and immediately set a new timer. This results in an endless loop, leading to 100% CPU utilisation."
Following a close monitoring of its systems in the earlier 2005 leap second, Google engineers were aware of problems in their operating system when processing this leap second. They had noticed that some clustered systems stopped accepting work during the leap second of December 31, 2005, and they wanted to ensure that this situation did not recur in 2008. Their approach was subtly different to that used by the Linux kernel maintainers.
Rather than attempt to hunt for bugs in the time management code streams in the system kernel, they noted that the intentional side effect of NTP was to continually perform slight time adjustments in the systems that are synchronizing their time according to the NTP signal. If the quantum of an entire second in a single time update was a problem to their systems, then what about an approach that allowed the 1-second time adjustment to be smeared across numerous minutes or even many hours? That way the leap second would be represented as a larger number of very small time adjustments that, in NTP terms, was nothing exceptional. The result of these changes was that NTP itself would start slowing down the time-of-day clock on these systems some time in advance of the leap second by very slight amounts, so that at the time of the applied leap second, at 23:59:59 UTC, the adjusted NTP time would have already been wound back to 23:59:58. The leap second, which would normally be recorded as 23:59:60 was now a "normal" time of 23:59:59, and whatever bugs that remained in the leap second time code of the system were not exercised.
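A minimal sketch of the "smearing" idea: instead of inserting a whole extra second at the leap, the one-second correction is spread as many tiny adjustments over a window leading up to it. The 20-hour window and the linear ramp are illustrative assumptions; published smears have used different windows and shapes.

```python
def smear_offset(seconds_until_leap: float, window: float = 20 * 3600) -> float:
    """Seconds by which the smeared clock currently lags ordinary UTC (0.0 .. 1.0),
    assuming a linear smear that absorbs one full leap second over `window`."""
    if seconds_until_leap >= window:      # smear has not started yet
        return 0.0
    if seconds_until_leap <= 0:           # smear complete: full second absorbed
        return 1.0
    return (window - seconds_until_leap) / window

# Halfway through the window the clock has been slowed by half a second; by the
# time the leap second arrives the full second has been absorbed, so the smeared
# clock never has to display 23:59:60.
print(smear_offset(10 * 3600))   # -> 0.5
```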
The topic of leap seconds remains a contentious one. In 2005 the United States made a proposal to the ITU Radiocommunication Sector (ITU-R) Study Group 7's Working Party 7-A to eliminate leap seconds. It is not entirely clear whether these leap seconds would be replaced by a less frequent Leap Hour, or whether the entire concept of attempting to link UTC and the Mean Solar Day would be allowed to drift, and over time we would see UTC time shifting away from the UT1 concept of solar day time.
This proposal was most recently considered by the ITU-R in January 2012, and there was evidently no clear consensus on this topic. France, Italy, Japan, Mexico, and the United States were reported to be in favor of abandoning leap seconds, whereas Canada, China, Germany, and the United Kingdom were reportedly against these changes to UTC. At present a decision on this topic, or at the least a discussion on this topic, is scheduled for the 2015 World Radio Conference.
Although these computing problems with processing leap seconds are annoying and for some folks extremely frustrating and sometimes expensive, I am not sure this factor alone should affect the decision process about whether to drop leap seconds from the UTC time framework. With our increasing dependence on highly available systems, and the criticality of accurate time-of-day clocks as part of the basic mechanisms of system security and integrity, it would be good to think that we have managed to debug this processing of leap seconds.
It is often the case in systems maintenance that the more a bug is exercised the more likely it is that the bug will be isolated and corrected. However, with leap seconds, this task is a tough one because the occurrence of leap seconds is not easily predicted. The next time we have to leap a second in time, about the best we can do is hope that we are ready for it.
For Further Reading
The story of calendars, time, time of day, and time reference standards is a fascinating one. It includes ancient stellar observatories, the medieval quest to predict the date of Easter, the quest to construct an accurate clock that would allow the calculation of longitude, and the current constellations of time and location reference satellites. These days much of this material can be found on the Internet.
|
<urn:uuid:94c10746-5cac-4828-817a-32fc13b386f7>
|
CC-MAIN-2017-09
|
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-57/153-leaping.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00650-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.966093 | 3,588 | 3 | 3 |
The Internet Is An Imperfect and Hostile Place
The world is an imperfect place. The internet is no exception. The internet has its good days and it has its bad days. Or to be more precise, the internet has its good seconds and its bad seconds.
Blemishes in internet performance arise from many sources.
- There will always be natural random events that damage internet packets. These events include lightning, solar flares, electrical noise, wind (blowing branches onto wires or across radio paths), and even things as mundane as a squirrel chewing on a cable.
- Internet performance is strongly affected by congestion within the net. Congestion is common on the relatively skinny links that bring the internet into homes and small businesses. Congestion is quite common in the more industrial portions of the internet where large capacity “pipes” come together at inter-carrier exchange points and big data centers. Congestion doesn’t just mean that data flows become sluggish. Congestion can also cause loss of packets or, somewhat surprisingly, congestion can cause the duplication or re-ordering of data packets (typically as side effects of congestion induced changes in packet routing.)
- Network devices are often underpowered. Their software is often flawed. And their configuration settings are often inappropriate for the loads they are expected to handle. These deficiencies can cause devices to fumble when handling internet traffic, cause them to chatter incorrectly, or even to blither incoherently.
- There are intentional malicious attacks that intend to disrupt the internet by inducing ill behavior or by simply inducing large traffic overloads.
- And finally, there are a lot of old devices on the internet. Although many of us think of the internet as something modern, we should recognize that today's internet has gone through many generations of ideas, protocols, and implementations. Older devices may still speak old dialects; they are somewhat like a Shakespearean character dropped into our modern world: able to interact with others, but not well.
The software in many network devices is written as if many of these imperfections do not exist. This intentional ignorance of risks makes it easy for developers to produce new products quickly and cheaply. But it also means that once in the hands of customers and consumers these products may wobble in strange ways or simply fail when they encounter network conditions that were not considered or anticipated.
What Is Impairment Testing?
Impairment testing is a method through which developers exercise their network code (including the applications layered onto that code) and their network products without spending time and money traveling about the internet looking for unusual or ill conditions.
Impairment testing uses special network tools to create unusual, odd, bad, or improper network conditions.
Often network impairment tools act in a “man in the middle” role. That means that the impairment tool sits between the device under test (DUT) and the rest of the network. As a consequence, every packet to and from the DUT has to pass through the impairment tool.
At its most basic level a network impairment tool may do what we call "standard impairments" (a minimal sketch of the idea follows the list):
- Packet Drop
- Packet Duplication
- Packet Delay (fixed or variable, the latter being called “jitter”.)
- Packet Corruption
- Rate Limitation
- Changing the sequence in which packets are delivered.
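To make those knobs concrete, here is a toy per-packet decision function. It is only an illustration of the idea -- Linux's netem queueing discipline offers real versions of these same impairments -- and the class name and every default value below are invented for the example.
import random

class ImpairmentProfile:
    """Toy model of the 'standard impairments' applied to each packet."""
    def __init__(self, drop=0.01, duplicate=0.005, corrupt=0.001,
                 delay_ms=50, jitter_ms=20, reorder=0.02):
        self.drop, self.duplicate, self.corrupt = drop, duplicate, corrupt
        self.delay_ms, self.jitter_ms, self.reorder = delay_ms, jitter_ms, reorder

    def apply(self, packet: bytes):
        """Return a list of (delay_ms, payload) pairs to forward; an empty list means 'dropped'."""
        if random.random() < self.drop:
            return []                                    # packet lost
        delay = self.delay_ms + random.uniform(-self.jitter_ms, self.jitter_ms)
        if random.random() < self.reorder:
            delay += 5 * self.jitter_ms                  # hold it back so it arrives out of order
        if packet and random.random() < self.corrupt:
            i = random.randrange(len(packet))
            packet = packet[:i] + bytes([packet[i] ^ 0xFF]) + packet[i + 1:]   # flip one byte
        out = [(max(delay, 0), packet)]
        if random.random() < self.duplicate:
            out.append((max(delay, 0) + 1, packet))      # deliver a second copy
        return out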
More sophisticated network impairment tools may enrich these standard impairments by adding the ability to define triggers, to create burst effects, or to choreograph patterns that vary the impairments over time.
Some network impairment tools can also alter the contents of packets or create new packets.
And some tools can track the state of protocol handshaking and apply impairments in ways that are affected by that state. For example, a stateful impairment might reach into a stream of video packets and swap the order of the last packet of a video frame with the first packet of the next frame.
It is broadly acknowledged that the weakest and least tested parts of most software are the parts that handle errors and infrequently occurring conditions. This under-tested code is usually the place where the bugs and security holes lurk.
The intent of impairment tools is to force software in a device under test (DUT) to exercise those code pathways that are not followed under routine conditions.
But simply throwing packet noise at a DUT is likely to leave a developer dazed and confused. A developer needs tools that can focus the impairment effects in repeatable ways so that problems can be isolated, diagnosed, repaired, and then re-tested to be sure that the repairs actually fixed the problem.
What Are Some Examples of Network Impairment Testing?
A large storm is blowing outside as I write this note. The rain and wind are creating terrible network conditions. Many packets are being lost and the packet delay ranges from a few milliseconds to many seconds. Many of my Apps have become temporarily unusable because of these conditions.
Storms, and these kinds of network conditions are hardly unusual. And applications that do not work well under these conditions are also hardly unusual. Some applications degrade nicely and gracefully; but many others fail in awful ways that do not reflect well on their developers or vendors.
A person developing a network-based product who cares about how their application or product behaves under imperfect conditions can use an impairment device to re-create those conditions in their development lab.
Notice that in the testbed described above, all packets to and from the device under test (DUT) flow through the impairment device and are equally subject to the impairment conditions.
Not All Packets and Flows Need To Be Impaired In the Same Way
In these days of home and company networking, and as the "Internet of Things" becomes more widespread, it will become less likely that all packets are candidates for impairments. On a typical home or business network a lot of traffic may be handled locally on good quality, high bandwidth paths that do not exit the home or business and thus are not likely to encounter bad conditions. For this reason it is useful if an impairment device can distinguish between "local" and "non-local" traffic and impose different impairment regimes on each.
Sometimes a developer may want to focus on certain kinds of traffic. For instance, Voice over IP (VoIP) is often far more sensitive to network conditions than typical web browsing.
For these reasons it can be very useful if impairment tools can differentiate between different kinds of traffic.
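Reusing the toy ImpairmentProfile class from the earlier sketch, a crude classifier along these lines shows the idea; the address prefix, port range, and profile values are all illustrative assumptions, not anything a particular product ships with.
# Map traffic classes to different impairment regimes (all values illustrative).
PROFILES = {
    "voip":  ImpairmentProfile(drop=0.03, delay_ms=80, jitter_ms=40),   # punish delay-sensitive media
    "local": ImpairmentProfile(drop=0.0, delay_ms=1, jitter_ms=0),      # leave LAN traffic alone
    "other": ImpairmentProfile(),                                       # defaults
}

def classify(src_ip: str, dst_port: int) -> str:
    if src_ip.startswith("192.168."):          # crude stand-in for "local traffic"
        return "local"
    if 16384 <= dst_port <= 32767:             # a port range commonly used for RTP media
        return "voip"
    return "other"

def impair(src_ip: str, dst_port: int, payload: bytes):
    return PROFILES[classify(src_ip, dst_port)].apply(payload)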
What Kind of Results Can One Expect?
Sometimes bad network conditions cause application or operating system code to completely fail; to crash. From a developer’s point of view these are often the easiest to diagnose and fix. From a user’s point of view they are certainly inescapably obvious failures.
But more often an application or network stack will degrade more rapidly than it should or begin to exhibit odd side effects. We have all experienced voice conversations that become incomprehensible or sound like we are in an echo-chamber even when there is no apparent degradation to web browsing. Without impairment tools developers have a hard time testing and tuning their code so that the user gets the best possible experience.
Because the effects of network impairments on an application are often subtle rather than catastrophic the developer needs to understand the desired behavior of the Device Under Test (DUT) and be sensitive to deviations. For example, if one is subjecting a streaming video application to impairments the impact may be reflected, rather obviously, as visual blotches on the receiving screen. Or the impact may be reflected less obviously as a stuttering of the received frame rate – resulting in a distracting video that jerks and sputters along.
What Are the Risks of Failing To Do Impairment Testing?
The worst thing that can happen to a product vendor is to discover that the product is failing or misbehaving when it is used by customers. Problems at that phase are expensive to diagnose and repair, and they damage the reputation of the vendor.
Impairment testing can help you find and repair flaws before your product reaches your customers.
Impairment testing is not a panacea. Impairment testing is, however, a prudent tool to have in your product testing suite.
If you’re looking for a reliable, secure way to test your business’ critical apps and functionality by simulating various network conditions, check out the Maxwell family of network emulators or download our whitepaper to understand how apps perform on the network, even under adverse network conditions.
|
<urn:uuid:2c575a22-b224-41c3-b9b5-b2505b0f96e8>
|
CC-MAIN-2017-09
|
http://info.iwl.com/blog/care-impairment-testing-internet-protocols/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00174-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.944527 | 1,755 | 2.859375 | 3 |
Wikipedia project raises concerns over social media in class
A recent dust-up between Wikipedia and Canada's largest university raises questions about how collaborative the popular website that bills itself as "the free encyclopedia that anyone can edit" truly is.
The online information portal recently took a professor from the University of Toronto to task for one of his classroom assignments.
Steve Joordens urged the 1,900 students in his introductory psychology class to start adding content to relevant Wikipedia pages. The assignment was voluntary, and Joordens hoped the process would both enhance Wikipedia's body of work on psychology while teaching students about the scientist's responsibility to share knowledge.
But Joordens's plan backfired when the relatively small contingent of volunteer editors who curate the website's content began sounding alarm bells. They raised concerns about the sheer number of contributions pouring in from people who were not necessarily well-versed in the topic or adept at citing their research.
Discussions in the Wikipedia community became very heated with allegations that articles were being updated with erroneous or plagiarized information. Some community members called for widespread bans on university IP addresses and decried the professor's assignment as a needless burden on the community.
Joordens issued a statement defending his students, saying only 33 of the 910 articles edited were tagged for potential problems.
But he also acknowledged that he did not understand the limited scope of the Wikipedia editorial community, which boasts a few thousand members compared to the more than 488 million people that visit the site every month.
"I assumed that the current core of editors was extremely large and that the introduction of up to 1000 new editors would be seen as a positive," Joordens said.
"However, the current core of editors turns out NOT to be that large, and even if my students were bringing signal along with noise, the noise was just too much to deal with on the scale it was happening." Prof calls Wikipedia response 'ridiculous'
Joordens said the Wikipedia community became "annoyed and frustrated," adding that things became heated to a point he found "somewhat ridiculous."
The animated discussion that's ensued from the incident highlights both the pros and cons of using social media in the classroom, experts said.
Sidneyeve Matrix, media professor at Queen's University in Kingston, Ont., said crowdsourcing platforms like Wikipedia offer unparalleled opportunities for students to engage with their topics of study and to feel they're actively involved in the learning process.
But collaborative projects can't survive without leadership, she said, adding the zealous editors at Wikipedia have an important role to play as gatekeepers. This case, she said, exposes the difficult balancing act at play.
"I thought it was a lot more open than it is, but at the same time I'm seeing that more and more teachers are using it in their classrooms," she said. "The authenticity and verifiability of the information on the site has been improving, and that doesn't happen from the magic fairy. It happens from dedicated folks who are behind the scenes." Pilot project with teachers, students
Jay Walsh, spokesman for the Wikimedia Foundation that operates Wikipedia, said the online encyclopedia is working to carve out its niche in the classroom.
The website has established a pilot project that works closely with both teachers and students, he said, adding Joordens had some preliminary discussions with the company before carrying out his own plan.
He described the professor's approach as "experimental," emphasizing that editors need to follow certain protocols when contributing to articles. The strong reactions and speedy response of the Wikipedia community, he said, is the very mechanism that makes the site attractive to educators.
"This response is pretty high-value within the Wikipedia community," he said. "It's conceivable for someone to interpret that response as being too fast or not giving us a chance, but in this case there seems to be an openness towards figuring out ways to make this kind of an initiative work."
Joordens agreed, saying he will limit the number of students who take on such voluntary assignments in the future and ensure they're up to speed with the site's editing practices. In turn, he called for Wikipedia members to back down from their hardline position on fledgling contributors.
"Now that at least some members of the Wikipedia community are putting down their digital pitchforks, it is becoming more and more obvious to me that we all share the same goal of improving the quality and quantity of information on Wikipedia," he said. "If we could find ways of working together while also being respectful of one another, we could really do some great things."
The Canadian Press
|
<urn:uuid:2ed9b0fc-8c20-47c4-81dd-57b9f7118ef3>
|
CC-MAIN-2017-09
|
http://e-channelnews.com/ec_storydetail.php?ref=432094&title=Wikipedia-project-raises-concerns-over-social-media-in-class
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00350-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.974087 | 942 | 2.609375 | 3 |
Heartbleed: A Password Manager Reality Check
Is a password manager an effective defense against vulnerabilities like Heartbleed, or just another way to lose data to hackers?
Should the OpenSSL Heartbleed bug serve as a wake-up call for people not using a password management application or service to manage their passwords? Consider who is at the greatest risk of having their passwords stolen by Heartbleed-targeting hackers: people who reuse their passwords across multiple sites. That's because an attacker only needs to hack into one site -- say, a social network -- to obtain a password that works across multiple sites, such as your banking website.
Faced with that reality, some users have opted to tap a purpose-built security tool for generating and storing strong passwords. "If you don't use a password manager, you will end up using the same password on multiple sites. That password, becomes a 'basket' in which your security for all of the sites you use it for are stored," said David Chartier at AgileBits, which develops 1Password, via email. "So if you use the same password on Amazon, eBay, Facebook, MyCatPictures, and others, then all of those sites are in the same basket. And that basket is extremely fragile. A breach of one of those sites is a breach for all."
[Looking to supplement your security defenses? Read How A Little Obscurity Can Bolster Security.]
Here are some facts to consider if you're wondering whether one of the many different password managers that are available is right for you or your organization:
1. Your own "password manager" might be lacking
When weighing password managers, the first question should be: What are you doing now? How many people have a Word document -- perhaps named "passwords.docx" -- tracking all of their passwords? If so, watch out for malware infections. Harvesting files with interesting-sounding words is child's play for hackers.
2. Security experts swear by password managers
Consider leading information security experts' opinions about password managers. For example, to manage the challenge of safely storing strong, long, and unique passwords, while keeping them easily at hand, Bruce Schneier long ago built and released his own password management application, which is now an ongoing, open-source Windows -- and soon, Linux -- project. Like other password managers, it requires users to enter a master password, which then unlocks the password safe.
One of the upsides of using password managers is practicality: Many different passwords can be securely stored in one place. Some password management tools, furthermore, will even store website URLs and automatically populate website username and password fields, thus creating both a more secure and more automated log-in process.
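The password-generation half of that job is simple enough to sketch. The snippet below uses only Python's standard secrets module and is a generic illustration -- not the algorithm of 1Password, Password Safe, or any other manager mentioned here.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per site, so a breach of one "basket" stays in that basket.
vault = {site: generate_password() for site in ("amazon", "ebay", "facebook")}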
"I can't imagine life without a password manager," said Sean Sullivan, security advisor at F-Secure Labs, via email. "I have far too many sites to manage otherwise."
3. A password manager: single point of password failure?
On the other hand, some would-be users worry about gathering all of their passwords in a single place, even if that repository itself gets encrypted and protected by a master password. "I've started using two-step authentication, but was avoiding the password generator/keeper programs because those seem like they could be a huge problem if they get hacked," one DarkReading reader recently emailed. "Do you have an expert opinion?"
"This is a great question," AgileBits' Chartier says. "Regarding two-step authentication, let me ask in return how many different sites and services do you plan to use it for? Two, three, one hundred? My guess is that you will
Mathew Schwartz served as the InformationWeek information security reporter from 2010 until mid-2014.
|
<urn:uuid:defae5c5-32bf-415d-8961-41140bb9a129>
|
CC-MAIN-2017-09
|
http://www.darkreading.com/endpoint/heartbleed-a-password-manager-reality-check/d/d-id/1204549?_mc=RSS_DR_EDT
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00346-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.9368 | 800 | 2.515625 | 3 |
Several automobile companies have been taking steps towards the implementation of hybrid technologies, and Ford is not about to be left behind. On Tuesday, it announced the development of a control system that would allow its electric cars to communicate with electric grids to adjust the timing of its charging schedule. The technology is designed for use in Ford's plug-in hybrid cars that will reach the market by 2012.
The new technology, which is conceptually similar to smart grids, allows customers to program when the car recharges, for how long, and at what utility rate. When plugged in, the battery system of the car can talk directly to the grid through a wireless network with smart meters provided by utility companies. The settings are chosen by the car's operator through a touchscreen in the car's dashboard, and works with other Ford technologies like SYNC, SmartGauge with EcoGuide, and Ford Work Solutions.
Integral to the success of this system is the cooperation of utility providers in the program. Ford has provided American Electric Power in Columbus, Ohio with the communications technology, so that the company can develop the relevant parts to hold up their end of the "conversation" between the car and the grid. Ford's other utility partners, with which it will ostensibly share the technology in due time, include Con Ed of New York, Southern California Edison, Hydro-Québec, and Progress Energy of Raleigh, N.C.
Once the technology is installed, the user has quite a bit of control. For example, the car could be set to charge only during the off-peak hours between midnight and 6 a.m., or only when the grid is operating from renewable energy sources like wind or solar power.
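Ford has not published the control logic, so the following is only a hedged sketch of how such a charging policy might be expressed in software; every name and threshold is invented for the example.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChargePreferences:
    off_peak_start: int = 0            # midnight
    off_peak_end: int = 6              # 6 a.m.
    max_rate_cents_kwh: float = 12.0   # don't pay more than this per kWh
    renewables_only: bool = False

def should_charge(prefs: ChargePreferences, now: datetime,
                  rate_cents_kwh: float, grid_is_renewable: bool) -> bool:
    """Decide whether to draw power, given what the smart meter is reporting."""
    if not (prefs.off_peak_start <= now.hour < prefs.off_peak_end):
        return False
    if rate_cents_kwh > prefs.max_rate_cents_kwh:
        return False
    if prefs.renewables_only and not grid_is_renewable:
        return False
    return True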
Ford also envisions the technology as extensible to areas outside the home, for use at malls or offices, with settings for each location or situation. The system would allow each operator to be responsible for the cost of the electricity they are siphoning from others' properties. For example, a credit card would have to be swiped, and once the user is identified, the car could recharge according to its owner's settings. Gone would be the days of "stations" that only let drivers power their cars in certain locations. With this technology, every powered property is a pump, electricity instead of gas.
Ford was also recently approved for two grants from the Department of Energy for its pursuit of electric vehicles and vehicle-to-grid infrastructure development. One $30 million grant will fund Ford's collaboration with utility providers to implement its power management system in vehicles. A second $62.7 million grant, to be matched by Ford, will assist production of an electric-drive transaxle for use in hybrid and plug-in hybrid vehicles. Ford also plans to bring an electric commercial van to market in 2010, and to have its first plug-in hybrid electric vehicle by 2012.
|
<urn:uuid:99435852-48bd-4f9a-9b77-ae6eb138acfc>
|
CC-MAIN-2017-09
|
https://arstechnica.com/gadgets/2009/08/fords-plug-in-hybrids-will-talk-to-electrical-grid/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00522-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.962924 | 582 | 2.703125 | 3 |
In 1988, Ware Shoals, S.C., lost its largest employer, a move that sent the town reeling. Since 1902, the massive textile mill had hummed with activity and provided employment for the 2,500 residents of the small town, which sits on the banks of the Saluda River in the western part of the state. "We were a single industry community," recalled Grant Duffield, the town's administrator. When the company, Riegel Textile Corp., pulled out, Ware Shoals was left with an abandoned 27-acre site in the center of town. But that was just the beginning of the problem. Along with skyrocketing unemployment, crime soared and those who could leave town moved away. What remained was a community on its knees, where 37 percent of the children were living in poverty.
The abandoned property was a classic brownfield site, scarred by decades of contamination. In 1995, the property owners agreed to turn the site over to the town without incurring liability for the cleanup. By 1998, Ware Shoals decided to redevelop the property and, with help from the Environmental Protection Agency, which designated it a brownfield assessment demonstration pilot site, received $200,000 to assess its cleanup needs and development potential.
Some of the funds have been used to populate a geographic information system with location data concerning pollution sources and plans for redevelopment. With the property located next to a scenic stretch of the Saluda River, town officials hope to attract commercial and residential development and help revive the local economy. And while Duffield doesn't want to overstate the benefits that technology has played in the mill site's renewal, it's clear GIS has had a major impact. "Without the technology, tackling a project of this magnitude would be impossible," he said.
The story of Ware Shoals and its brownfield is far from over. But it is indicative of a new attitude toward the nation's brownfields, where communities are viewing the once-abandoned sites not as eyesores, or ticking time bombs of contamination, but as potential real estate, ripe for development. New legislation that has eased certain liabilities concerning pollution and federal grants that can be leveraged for economic development, are turning brownfields into property goldmines for a growing number of communities. The combination of GIS and the Internet is helping all this happen.
There are as many as 600,000 brownfields in the United States covering several million acres of land. While many people picture abandoned brick factories reeking of pollution when they hear the term, brownfields are actually considered to be any abandoned or underused industrial or commercial property. "Many brownfields don't have problems, but have a perception of a problem," said Greg Jordan, a brownfield liaison for the U.S. Army Corp of Engineers and the Environmental Protection Agency.
That perception dates back to when government's only concern about these sites was whether or not they were contaminated. Brownfields were viewed strictly as an environmental issue. But when the economic boom of the '90s got under way and a number of cities began to enjoy a resurging interest in urban life, attitudes toward brownfields began to shift toward their development potential.
This recent shift in concern is reflected in the fact that state brownfield programs have been operating for less than a decade, according to the National Governors Association. In 1995, the EPA launched the Brownfield Initiative, a national program that has removed a number of barriers to the cleanup and development of brownfield sites. Property owners no longer were held legally and financially responsible if they had nothing to do with the pollution when they purchased a brownfield property.
Although the liability issue isn't entirely gone, the situation today is much better than it was in the '80s. Helping the matter is the EPA's more collaborative role with other federal agencies, such as the Department of Housing and Urban Development, and with state and local governments in helping to assess, clean up and reuse brownfield sites.
In January, President George W. Bush signed legislation that doubled the funds available -- to $200 million -- to help cities and states clean up brownfield sites. In addition, the president's budget proposed a permanent extension to the Brownfield Tax Incentive, which encourages the redevelopment of brownfields.
The surge in federal support comes in the wake of growing interest among cities and states in revitalizing these abandoned sites around the country. For example, a 2000 survey of 187 cities by the U.S. Conference of Mayors found that by cleaning up existing brownfield sites and returning the areas to production, cities could generate as many as 540,000 new jobs. And at least 175 cities estimated brownfield redevelopment could generate up to $2.4 billion in local tax revenue.
The EPA is helping communities return brownfields to active use by funding pollution assessments of sites -- with grants generally in the range of $200,000 -- and once a site is deemed clean enough for development, issuing a memo of understanding that states the EPA will leave a developer alone to pursue redevelopment of the site, unless something significant reappears. According to the EPA's Jordan, there are different degrees of cleanup, depending on what the site will be used for. If a site is designated for commercial use, the level of cleanup isn't going to be nearly as high as it would be if the site was going to be converted into apartments, he pointed out. "We try to lower hurdles and make it easier to get to a solution while protecting the environment," said Jordan.
As cities and states shift their emphasis on brownfields from environmental issues to development potential, the role of technology has grown. Communities want to know what other communities are doing in terms of cleanup and redevelopment, and developers are eager to know what properties are available and the potential economic value of the site based on its location. The public is also interested in learning what's going to happen with an abandoned site and what alternatives are available for development.
The chief tool for collecting, analyzing and presenting information on brownfields is GIS. With its ability to link geographic reference points with various types of data and then overlay the information on an electronic map, GIS has become the technology of choice for state and local officials trying to solve brownfield problems.
In 1997, New York City began using GIS to determine which of the city's industrial zoned properties were vacant and suitable for redevelopment. By overlaying local political boundaries, city officials were able to identify and approach local community groups and enlist their help in deciding which properties to clean and develop first, as well as find out how the local groups wanted to see the properties developed: for commercial, residential or recreational purposes.
In Boston, the city's redevelopment agency has been using GIS extensively to help developers learn what brownfield sites are available and what sort of federal or state funding exists. The Boston Redevelopment Authority (BRA) currently lists 50 publicly owned sites and, through its Boston Atlas Web site, lets developers view and analyze the sites' potential. "Our GIS shows each lot's potential with multiple layers of information about transportation, utilities, commercial, manufacturing or residential activity in the area," said Noah Luskin, BRA's brownfields coordinator. What the GIS doesn't show is the extent of possible contamination at each site. "Our objective is to show the sites' value, not its liability," he explained.
One of the earliest examples of a city using GIS to clean and redevelop brownfields is Emeryville, Calif., a small industrial city surrounded by Berkeley and Oakland. Once bustling with business, Emeryville was in a steep economic decline by the 1980s. A host of manufacturing firms pulled out and left behind numerous brownfield sites, some more contaminated than others. At its lowest point, Emeryville had 20 percent of its non-residential property vacant and 40 percent underutilized.
Using a grant from the EPA, Emeryville started a project to monitor groundwater in 1996. The data was mapped using ESRI's ArcInfo software. In addition, the city added map layer information, including street names, property owner information and site management information about brownfield sites. With the data, city officials were able to show regulators and developers that Emeryville's groundwater wasn't as polluted as originally believed. Developers could analyze the maps and locate sites based on their potential for residential, commercial and retail use.
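The overlay analysis behind that conclusion is easy to sketch with today's open-source GIS tooling. The snippet below uses the geopandas library rather than the ArcInfo workflow Emeryville actually used, and the file names, column names, and screening threshold are invented; note that the spatial-join keyword is predicate= in recent geopandas releases (older versions used op=).
import geopandas as gpd

# Hypothetical inputs: parcel polygons and groundwater sample points with readings.
parcels = gpd.read_file("parcels.shp")        # columns: parcel_id, owner, zoning, geometry
samples = gpd.read_file("groundwater.shp")    # columns: chemical, ppm, geometry

# Attach each sample to the parcel that contains it.
joined = gpd.sjoin(samples, parcels, how="inner", predicate="within")

# Flag parcels whose worst reading exceeds an (illustrative) screening level.
worst = joined.groupby("parcel_id")["ppm"].max()
flagged = worst[worst > 5.0].index

parcels["needs_review"] = parcels["parcel_id"].isin(flagged)
parcels.to_file("parcels_with_flags.shp")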
The result has been an incredible development of a once moribund area. Since 1996, Emeryville has added more than 500 housing units, 3.6 million square feet of office space, 830,000 square feet of retail space, 488 hotel rooms and more than 8,000 jobs. Amtrak built a new train station, Pixar Animation Studios developed a 13-acre site, and Swedish home furnishing giant IKEA built a 275,000 square foot store on the site of a former steel mill.
Aware of the power of GIS in providing information for developers, regulators and concerned citizens, Emeryville city officials began putting their GIS maps on the Internet for wider access in 1999. Using ESRI's MapObjects software, they created "One-Stop-Shop", an online, interactive map service.
The Web site shows land parcels using aerial photography, street maps and property owner information, along with chemical concentrations in the soil for each land parcel and chemical concentrations found in groundwater at that location.
According to Ignacio Dayrit, Emeryville's brownfields project director, the site has been a huge benefit for developers who want to see what's going on in terms of real estate activity and the environmental records for a land parcel. But the Web site has also assisted government regulators. "They like having information on contamination that's easy to access rather than in paper files," he said. "They really value the content that we've put out there."
Brownfields Around the Nation
Cities aren't the only ones using GIS and the Internet to display information about brownfields. Many states have developed interactive sites to provide the public with information. For example, New Jersey has launched I-MapNJ so that developers and the public can consider brownfield redevelopment opportunities. The state's Department of Environmental Protection, Office of Information Technology, Office of State Planning, the Governor's Urban Coordinating Council and the Brownfields Redevelopment Task Force developed the site collaboratively. I-MapNJ uses ESRI's ArcIMS software and displays data according to a series of pre-selected questions, such as which brownfield sites exist within a specific municipality or within an incentive zone.
Massachusetts has taken a different tack and limits the use of GIS relating to brownfields to its staff of brownfield coordinators at the Department of Environmental Protection. One of the most popular features is an extension that allows coordinators to enter an address and have the GIS report back about whether the location is eligible for federal tax incentives or not and why. Under the $1.5 billion Brownfields Tax Incentive program, environmental cleanup costs for properties in targeted areas are fully deductible. These areas include EPA Brownfield Pilot areas, census tracts where 20 percent or more of the population is below the poverty level; or census tracts that have a population under 2,000 and have 75 percent or more of their land zoned for commercial or industrial use.
But as many state and local officials are finding out, interactive GIS Web sites are only as good as the data they contain. If the information is inaccurate or out of date, users will soon recognize the problems and limitations, and stop using the site. Prime culprits for old or inaccurate data are a city's own departments or agencies, who see little reason to share their data with a redevelopment agency. Several brownfield project managers complained about the lack of support in terms of data sharing.
"Keeping data updated is a huge issue," said one city official from Boston. "We don't have a good system for sharing information between agencies. It's done entirely on an informal basis."
|
<urn:uuid:33aead1f-ad39-426b-9528-5bec1bad8957>
|
CC-MAIN-2017-09
|
http://www.govtech.com/policy-management/Brownfield-Resurrection.html?page=3
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00398-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.956945 | 2,447 | 2.875 | 3 |
FreeRADIUS and Linux Secure Your WLAN
Best of ENP: Wireless security is still a bit of a mess. With Linux and RADIUS, you can begin to straighten it out.
The RADIUS (Remote Authentication Dial-In User Service) protocol has long been a standard authentication, authorization and accounting protocol for Internet service providers and businesses. It determines who you are, what you're allowed to do, and records how long you did it for. It works for more than just dialup users- it also works for DSL, cable, and wireless users.
Anytime you study RADIUS, you read about "AAA", the authentication/authorization/accounting functions. Sometimes it is even called RADIUS-AAA. (And now you know it's not about a twelve-step program or automobile club.) The accounting part can be used for billing or statistical analysis. RADIUS can use text files of users for authentication, or hook into Linux/Unix password files, NIS (Network Information Service), LDAP (Lightweight Directory Access Protocol) directories, MySQL and other databases.
RADIUS works in a rather interesting fashion. It uses a distributed architecture – it sits separately from the NAS (Network Access Server). You store user access data on a central RADIUS server that is available to many NAS. The NAS provide the physical access to the network, such as a dialup server, managed Ethernet switch, or wireless access point. This scheme adds security and flexibility, and is a lot easier to manage than having to individually configure every entry point to your network to perform user authentication.
Note that we're not talking about your cheapie five-port switches from joeskewlnetworkcrap.com for twenty clams; you need the higher-end managed switches that support 802.1x port-based authentication, such as the Netgear ProSafe GSM7212, or the Cisco Catalyst switches, or some such. These are smart switches that support all sorts of security protocols.
There are a number of RADIUS servers; the Free/Open Source edition is called FreeRADIUS. Its maintainers claim it's not ready for public use, but we are going to forge boldly and publicly ahead and use it anyway.
So what is the attraction in using RADIUS for wireless authentication? To answer that we must take a quick look at how wireless authentication works. It ain't pretty.
Wireless Authentication Protocols
You know already that WEP (Wired Equivalent Privacy) is poo, and easily cracked. WepLab, aircrack, AirSnort, Kismet, John the Ripper, and other easily obtainable tools make WEP cracking a yawn rather than a challenge.
WPA (Wi-Fi Protected Access) is the next evolution of wireless security. It's an intermediate step until the 802.11i protocol is finalized. The bit we are interested in is its implementation of the 802.1x authentication protocol and the TKIP protocol.
The 802.1x protocol uses port-based access control. This sets up a gateway on the NAS to prevent traffic from entering the network until it has been identified and authenticated. Of course you see the flaw in this at once – wireless packets fly heedlessly through the air, and care not a fig for silly ports, whether physical or logical. So this does not prevent an attacker from capturing wireless packets. However, it does prevent an attacker from doing things like getting a DHCP lease, or trying for a static IP, or forging a MAC address, because she can't send data through the port.
The steps of the authentication process go like this:
- The wireless client (also known as the supplicant, with all of its overtones of forelock-tugging and kowtowing) requests network access from the NAS.
- The NAS, or authenticator, passes any credentials presented by the supplicant to the RADIUS server.
- The RADIUS server, or authentication server, gives the thumbs-up or thumbs-down to the authenticator. If it's thumbs-down, the supplicant is out of luck. It can beat at the door of the NAS, but that's as far as it will get. It can even beat at the doors of other NAS on the same network, but since they all query the same mean old authentication server, too bad so sad.
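To make the division of labor concrete: the authenticator (your access point or managed switch) gets registered as a client of the RADIUS server, and users live in whatever backend you choose -- at its simplest, a flat text file. A minimal FreeRADIUS-flavored sketch follows; block syntax and attribute names vary between FreeRADIUS versions, and the names, address, and secrets here are invented.
# clients.conf -- register the access point (the NAS) with the server
client wap1 {
        ipaddr = 192.168.1.10
        secret = sharedSecretWithTheAP
}

# users -- the simplest possible authentication backend, a flat text file
"alice"   Cleartext-Password := "s3cretPassphrase"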
Here is a sampling of some WPA authentication protocols:
- EAP-Transport Layer Security (EAP-TLS). TLS is a descendant of SSL. It relies on certificates, and it is a nice strong method of authentication, but it is a bit cumbersome. You have to manage public and private key pairs, and creating/revoking them as users come and go can get old.
- EAP-MD5 does a simple comparison of hashed usernames and passwords against a database on the server. There are a couple of problems with this: the hash can be captured and broken, and the server is not authenticated to the supplicant. (See Bruce Schneier's take on breaking MD5, and Val Henson's. Val was one of the first to recognize its weaknesses, and the flaws of compare-by-hash in general.)
- PEAP (Protected EAP). PEAP sets up an encrypted tunnel, sort of like TLS, to protect feebler encryption protocols like MS-CHAP and MD5. The full name is Protected EAP-Microsoft Challenge Handshake Authentication Protocol version 2 (PEAP-MS-CHAP v2), and since it is a Microsoft protocol some folks consider it to be too icky to use, and choose TLS instead. It works fine, but I won't force anyone to like it.
- EAPOL, or EAP Over LANs. You don't need to do anything to make this work, it's the transport used over wired connections.
- EAPOW is the transport for wireless connections.
Which brings us to ...
When a supplicant is authenticated by one of the EAP protocols, the next thing that happens is a secure encryption key exchange. The Temporal Key Integrity Protocol (TKIP) is the mechanism for transmitting the keys. This is an important step, as it adds a layer of real security, because both the authenticator and the supplicant must authenticate to each other. Another option is to use pre-shared keys. We'll look at both of these next week, when we pull it all together and make it go.
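On the client side, a Linux supplicant would typically carry a wpa_supplicant network block along these lines -- a hedged sketch for PEAP with MS-CHAP v2, with the SSID and credentials invented:
network={
    ssid="ExampleCorpWLAN"
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="alice"
    password="s3cretPassphrase"
    phase2="auth=MSCHAPV2"
}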
|
<urn:uuid:acc11ca2-ddba-4c98-86a6-3b005c477c1e>
|
CC-MAIN-2017-09
|
http://www.enterprisenetworkingplanet.com/netsecur/article.php/3555556/FreeRADIUS-and-Linux-Secure-Your-WLAN.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00042-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.916496 | 1,361 | 2.515625 | 3 |
Geographic information systems (GIS) have assumed a key role in firefighting operations in recent years. Sophisticated GIS-driven mapping can help responders track events and position resources, layering on weather information and demographic data to give rescuers a full picture of the situation on the ground.
GIS may come into play before a fire, for example in helping municipalities mark the exact locations of fire hydrants. During an emergency, geospatial visualizations can help rescuers to plan routes, track the spread of fire and identify communities at immediate risk.
In addition to fulfilling such roles before and during a fire, GIS also can serve an important function for the emergency management community in the aftermath of a blaze. By tying satellite imagery to municipal databases and putting that information in the hands of trained field workers, GIS can dramatically speed up vital damage assessment efforts.
That’s just how it happened during one recent fire in Northern California.
From Weeks Down to Days
In late August the Clayton arson fire added yet another chapter to the long-running saga of devastating California blazes. The conflagration destroyed at least 299 structures, including 189 single-family homes, 40 businesses and a range of other structures such as sheds and smaller outbuildings. More than 4,000 people were evacuated in the course of the fire, which raged for several days.
Even before the smoke had cleared, the California Department of Forestry and Fire Protection (CAL FIRE) called in an outside contractor to begin assessing the damage. FireWhat of Bend, Ore., rolled in almost immediately with a mobile command center. Starting up a suite of tools powered by GIS provider Esri, its experts began documenting the damage.
Built on the chassis of a 39-foot Freightliner RV, the command post offers multiple workstations and a theater seating space that encompasses three big screens. A touchscreen system runs cloud-based ArcGIS Online software on an HP hyper-converged 250 Series server with 36 terabytes of storage and 16 TB of RAM to deliver a terrain profiler, a property value tool, a damage assessment app and an operations dashboard.
“The teams were collecting damaged structures in the field, and as soon as they hit ‘send,’ the information would pop up on the operations dashboard and on all of our screens,” said FireWhat
CEO Sam Lanier.
With the support of the GIS tools, the team was able to record the status of more than 330 structures by the end of its second day on the ground – a process that in the past might have taken two to three weeks, Lanier said.
A Complex Affair
While emergency managers are obliged to focus their energies foremost on the time of the actual crisis, most realize that the follow-up process can be no less critical. What happens after a fire sets the stage for community rebirth in the long term and, more immediately, it unlocks the financial mechanisms of recovery. Without timely damage assessments, insurance checks don’t get written and federal aid does not flow.
Yet damage assessment is an extraordinarily complex affair. FEMA published a 121-page operations manual on the subject in April 2016. That document devotes 17 full pages just to cataloging the roster of likely players. Damage assessment efforts may include a local or county coordinator, a state or tribal coordinator, a mass care and emergency assistance specialist, any of a half-dozen or more FEMA experts and also experts from the Small Business Administration. FEMA recognizes that it is no small task to walk into a ruined town and try to catalog the extent of the damage, much less begin to put a dollar figure on the loss.
In the absence of GIS and its automated tools, it can be a paper-heavy process. “The whole workflow is a two-page document, so for each home the firefighter has to fill in every single column on that document,” Lanier said. “They also have to carry paper maps: For a single mobile home park, it might require 80 paper maps to show all the individual addresses.”
Reporting requires manual effort too. A firefighter might take pictures of a structure, and then have to return to headquarters to manually associate those images with a map, which in turn might be converted to a spreadsheet by a data processing team. It’s a cumbersome process.
Sophisticated GIS-based tools cut through much of the clutter. In the Clayton fire, the team was able to pull down parcel information from the county and automatically incorporate it into a mobile application. This way, most information fields would automatically pre-populate on a form for any given location. “So instead of the firefighter entering all that information by hand, now we are saying: I am here at this point, with all this information already developed in the county parcels, so there is less effort, less time, and the human error is dramatically reduced,” said Lanier.
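Under the hood, that pre-population is a point-in-polygon lookup against the county parcel layer. The sketch below uses the shapely library, and the attribute names and form fields are invented; it illustrates the general technique, not FireWhat's or Esri's actual implementation.
from shapely.geometry import Point, shape

def prefill_assessment(lon, lat, parcels):
    """Find the county parcel containing the field worker's GPS fix and use its
    attributes to seed the damage-assessment form."""
    here = Point(lon, lat)
    for parcel in parcels:                      # parcels: a list of GeoJSON-like features
        if shape(parcel["geometry"]).contains(here):
            attrs = parcel["properties"]
            return {
                "parcel_id": attrs.get("APN"),
                "address": attrs.get("SITE_ADDR"),
                "owner": attrs.get("OWNER_NAME"),
                "structure_type": attrs.get("USE_CODE"),
                "damage_level": None,           # filled in by the firefighter on scene
                "photos": [],
            }
    return None                                 # outside every parcel; fall back to manual entry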
This automated workflow also enables emergency planners to make adjustments on the fly. “Now we in the command post can see the information as soon as they are gathering it in the field. If there is a question we can add a yellow star to let them know there is more information that we need, and they can click on that star and see that we need another photo on the north side of the home. That used to take days. Now we can capture that data live,” he said.
In its damage assessment handbook, FEMA recognizes the value of GIS not just for the ways in which it can speed the process, but also because of the value inherent in the information it can provide.
GIS tools “can be used to augment damage assessment teams at all levels,” FEMA said. “Remote sensing data collection and analysis can be focused on areas with the most impact and visibly discernible damage, while ground teams could be directed to areas with lesser impacts that would require in-person assessments to make a damage determination.”
When geospatial analysis is put in the hands of a capable ground team, “more of the impacted area can be assessed at a faster rate than traditional ground team methods. In some instances, the geospatial damage assessment may be capable of replacing ground assessment teams, especially in circumstances when damage assessments need to be conducted on a timeline that would not allow the use of traditional ground methods,” FEMA noted.
In urging the use of GIS, FEMA points emergency planners to a range of possible sources for data and software capabilities, representing agencies and organizations from across government and the private sector. These include:
Data that has been collected and archived in advance of a disaster, known as static GIS, can form a critical baseline element for any damage assessment. This information may take the form of population density maps, demographic data and property parcel information. Planners also may incorporate digital elevation models, flood plain data, critical infrastructure details and similar factors in their GIS efforts, FEMA said.
After the fact, GIS can provide high-quality information on a range of factors influencing damage assessments. This includes storm track visualizations, high water lines or flood depth, road closures and infrastructure reports, property damage from aerial analysis and other forms of post-event imagery.
Even with this broad range of available assets, FEMA acknowledges some limitations when it comes to putting GIS to work in the wake of a disaster.
Weather can challenge data collection efforts even after the brunt of a storm has passed. In some areas, critical GIS feeders such as detailed parcel and population data are not available. And some environments are just not amenable to the tools of GIS: In urban areas, geospatial analysis can’t refine the damage level by apartment unit, while in rural areas, steep terrain and heavy tree canopies can limit the effectiveness of GIS.
Despite the limitations, experts say, there is much that GIS can do to enhance the damage assessment process.
“GIS is an ideal tool for streamlining recovery operations,” said Russ Johnson, Esri’s director of Public Safety and Homeland/National Security. “Recovery is difficult, time consuming, and essential to determine damage, costs and reimbursement. With the use of mobile GIS applications, recovery personnel are able to identify damage status and rapidly report that information to a central location.”
In the wake of an event, GIS can do more than just enable the rapid documentation of damage. Sophisticated mapping can help emergency managers determine the best locations to position public assistance, Johnson said. GIS can also help them to identify alternative transportation routes, making it possible for the private sector to return to normal functioning as quickly as possible.
That’s a key phrase: as quickly as possible. After a major event like the Clayton fire, or any other large-scale disaster for that matter, speed is of the essence.
Among other issues, delays in damage assessments may put firefighters at risk by forcing them to remain on scene in a dangerous environment. Moreover, timely damage assessments are a key to the recovery process. “People want to get back into their home and see what is going on,” Lanier said. “The insurance companies also want to do their own assessments before they write checks, and they cannot do that until the formal preliminary assessment is done.”
At the same time, responders must balance haste against an imperative to be extremely accurate. In the case of the Clayton fire, “this was a criminal case. If someone is going to be charged with arson, he is going to be responsible for all the associated costs, so it becomes important to have very accurate intelligence,” said Lanier. Assessors are not just calculating damages: They are potentially collecting legal evidence.
GIS can help to balance these forces, giving emergency teams the tools they need to be both quick and accurate, which can ultimately help to get assistance on the ground faster in support of emergency responders trying to organize recovery efforts. “In order to justify paying for shelters, to justify paying for relief housing, you need to properly document everything before the Stafford Act language can come into play,” Lanier said. “When assessments are done more quickly, you also can build a much better picture of what the recovery need is going to be going forward.”
One final benefit to applying GIS to post-event documentation: better documentation. In the past, Lanier said, emergency responders have faced issues when homeowners have come back to claim that their properties were never assessed or were improperly inventoried. When field workers can tap automated GIS tools to feed their work back to the command post in real time, the chance for such incidents diminishes. “Now we have a live tracking mechanism to show an actual track log of where each person was, and when they assessed the property. That now becomes part of the legal record,” Lanier said.
Even as the embers cooled in Northern California, Esri executives were turning their attention to another crisis, using GIS to help first responders manage the aftermath of Louisiana’s catastrophic flooding. “We are working hand in hand with them to look at where the evacuation routes need to be, how the supplies need to travel,” Johnson said.
This kind of on-scene intelligence falls neatly in line with the strengths GIS also has demonstrated before, during and after major crises: the ability to capture critical layers of data, to display it in a meaningful way, and to get it into the hands of first responders looking to drive meaningful action.
|
<urn:uuid:afd1ff77-baf5-42ce-97f8-653529ad9757>
|
CC-MAIN-2017-09
|
http://www.govtech.com/em/disaster/The-Role-of-GIS-in-the-Aftermath-of-a-Wildfire.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Esri-News+%28ESRI.com+-+News%29
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00570-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.951234 | 2,386 | 2.765625 | 3 |
Don't program robots -- train them. That's the stated goal of Brain Corporation, whose operating system is designed to allow robots to learn how to dump trash or open doors via hands-on training, not programming.
The Brain Operating System -- also called BrainOS -- injects a level of intelligence into robots that will allow them, like animals, to be given hands-on and visual training on how to perform home, service or industrial tasks, the company says.
Robots with BrainOS can be supervised through remote controls or other signals to "demonstrate explicitly the desired behavior," all without complex programming, the company says on its website.
"Alternatively, developers can observe the robot and reward the desired behavior, as if they were training an animal," Brain says.
If BrainOS works as designed, repetition will reinforce the process of how robots should conduct specific activities, just like it does for human brains. Programming sets strict rules on what activities a robot can do, Brain notes.
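Brain Corporation has not published how BrainOS represents what it learns, so the snippet below is only a generic illustration of reward-reinforced behavior -- a tiny value-table update of the kind used in reinforcement learning -- and not the company's actual API.
import random
from collections import defaultdict

values = defaultdict(float)     # how good each (situation, action) pair has looked so far
LEARNING_RATE = 0.1
EXPLORATION = 0.2

def choose_action(situation, actions):
    if random.random() < EXPLORATION:           # occasionally try something new
        return random.choice(actions)
    return max(actions, key=lambda a: values[(situation, a)])

def reinforce(situation, action, reward):
    """Called when the trainer signals approval (reward=1) or withholds it (reward=0)."""
    old = values[(situation, action)]
    values[(situation, action)] = old + LEARNING_RATE * (reward - old)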
Robots with BrainOS could be trained to move around while not bumping into obstacles, and also to pick up and move items, according to the company. Robots could be trained for "yard maintenance, or cleaning up after pets," Brain says.
The techniques enabled by BrainOS would represent a change in training robots. Robots have been around for decades, but instructions still need to be programmed in. Traditional programming can be expensive, but Brain says its OS provides a more natural and easier way to train robots, by taking in sensory input and readings to manage robot hardware and behavior.
Company officials so far have not been available to discuss more specifically how BrainOS works. The company says, however, that the OS will be available in the fall, provided on a bStem developer board. The board has a Qualcomm Snapdragon S4-Pro processor, 15 sensors for visual, audio and other forms of sensory input, and an FPGA (field-programmable gate array) for robot control.
Brain, founded in 2009, is backed by Qualcomm, which is developing a chip called Zeroth that is designed to mimic the human brain, operating on neural computing principles. Zeroth may never be released, but technologies from its development are being used by Qualcomm in mobile chips. For example, some context and location awareness algorithms are being used in Qualcomm's latest Snapdragon 800-series chips.
Brain has also received funds as part of the White House's US$100 million Brain Initiative to better understand the human brain.
Robots are catching the attention of major IT companies. Robotics companies are being snapped up by Amazon and Google. Amazon has plans to deliver packages using flying drones, and is testing drones in different shapes and sizes.
Robotics are also increasingly being used in schools as an interactive way to promote STEM (science, technology, engineering and math) education. Many robots -- like the PiBot -- are created by students using Arduino, an easy-to-learn hardware and software development platform to create robots or interactive electronic objects.
Intel later this year will ship its first 45-centimeter-tall robot called "Jimmy" for $1,500. Jimmy, which will be able to walk, has the Linux OS and can be programmed in the HTML5 scripting language, which is widely used for mobile apps. Smartphones, tablets and PCs will be able to control the robot, and Intel has said HTML5 is a new way to bring robotics into the mobile era.
In addition to BrainOS, Brain is including ROS, the open-source Robot Operating System, in the development kit, given its popularity among robot makers.
|
<urn:uuid:378d1690-8a19-40ff-bc96-98742e22adad>
|
CC-MAIN-2017-09
|
http://www.computerworld.com/article/2490709/emerging-technology/train-robots-hands-on-to-dump-trash-with-this-new-os.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00094-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.954859 | 725 | 3.328125 | 3 |
This is the first in a series of Market Updates on the creation of accessible documents. It concentrates on the creation of accessible PDF files from word processing and desktop publishing systems.
Document creation is one of the major parts of personal productivity.
An accessible electronic document is one that can be read easily by a person with a disability. The possible disabilities include various degrees of vision impairment, musculoskeletal disorders (which limit the ability to use traditional controls such as a mouse), dyslexia, and learning difficulties.
Documents in a language other than the reader's native language can also be difficult to access, and with the internationalisation of the web this is becoming a more common problem. Some of the issues and solutions to this problem are shared with access for the disabled, so it will be considered in this report.
|
<urn:uuid:43759a82-aa6a-4f81-a503-2bf5a9f352d6>
|
CC-MAIN-2017-09
|
http://www.bloorresearch.com/research/market-update/creation-of-accessible-documents/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00390-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.939375 | 165 | 2.515625 | 3 |
NASA is taking its final steps to launching astronauts from American soil again and the first steps in sending humans into deep space and Mars.
The space agency announced that Boeing Co. and SpaceX, both U.S. companies, have landed highly-sought-after Commercial Crew Transportation Contracts to build the spacecraft that will ferry astronauts from Cape Canaveral Air Force Station in Florida to the International Space Station and back.
The contract covers a minimum of two missions and can be extended to cover up to six.
NASA has a deadline of launching astronauts from American soil by 2017, giving the two companies only a few years to finish their designs, build, test and certify their spacecraft.
"This sets the stage for the most ambitious and exiting chapter of human space flight," said NASA Administrator Charlie Bolden during a Tuesday afternoon press conference at Kennedy Space Center, which leads the agency's Commercial Crew Program. "The greatest nation on Earth should not be dependent on anyone to get into space. Today we're one step closer to launching astronauts from U.S. soil on American spacecraft and end our reliance on Russia."
Since the U.S. retired its fleet of space shuttles in 2011, NASA has depended on Russia to ferry its astronauts back and forth to the International Space Station, paying the Russian space agency about $70 million per astronaut.
That arrangement has proved to be increasingly sticky given rising tensions between the two countries since Russia's aggressive moves toward Ukraine.
Kathy Lueders, deputy program manager for NASA's Commercial Crew Program, pointed out during the news conference today that both SpaceX and Boeing must meet five certification milestones, including flight readiness, all under NASA oversight. The companies must also conduct a flight test, carrying cargo and one astronaut to the space station, where the spacecraft will dock and then return the crew safely home.
Both SpaceX and Boeing have had considerable experience working with NASA.
SpaceX, one of two private companies ferrying supplies, food and scientific experiments to the space station, wants to be the company ferrying humans as well.
The commercial space company, which aims to populate Mars one day, is scheduled to launch a resupply mission to the space station on Sept. 20 from Cape Canaveral.
As for Chicago-based Boeing, the leading aerospace company is developing a Commercial Crew Vehicle for NASA that can be launched on a variety of launch vehicles. The company has been working under a separate $18 million NASA project to develop the systems and key technologies, including life support, avionics and landing systems, needed for a capsule-based commercial crew transport system that can ferry astronauts to the space station.
Boeing appears to be getting some help from a well-known name - Jeff Bezos, the founder and chief executive of Amazon.com Inc.
Bezos has been quietly working to set up Blue Origin LLC, a company focused on developing technologies for private and commercial space flight.
The company has been working with both NASA and Boeing on developing commercial spacecraft. According to The Wall Street Journal, Blue Origin is working with Boeing for the NASA contract to carry astronauts to and from the space station.
This story, "NASA Gives Key Space Taxi Contracts to SpaceX, Boeing " was originally published by Computerworld.
|
<urn:uuid:0413070e-01d7-4867-a032-599ee3ae4115>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2684065/space/nasa-gives-key-space-taxi-contracts-to-spacex-boeing.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00266-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.948956 | 659 | 2.8125 | 3 |
The Air Force is looking for technology to correct radar interference caused by wind turbines, contracting documents show.
The proliferation of wind turbines and alternative energy projects around the country and near military installations has created a risk that radar systems, which are used to track aircraft and other vehicles, could be affected. The fear is that the turbines' rotating blades will scatter radar waves or be mistaken for aircraft and other moving targets.
“It has been observed that operation of these energy production sites can confound military equipment or otherwise negatively impact training or operational readiness,” according to the solicitation.
The Air Force wants engineers to develop ways to account for unwanted signals or errors caused by wind turbines and other objects in a landscape. The techniques created would help it address and mitigate the risks posed by wind turbines to military and air traffic control operations.
The service is soliciting proposals between Aug. 27 and Sept. 6. The tender comes just as the Pentagon and Interior Department have inked an agreement to push green electricity projects on military bases. The plan would help ensure energy for bases if the commercial grid is disrupted.
The departments are aware of the risks caused by building wind projects so close to military bases. “If improperly sited, offshore development could impact military missions; therefore DOI and DoD will continue to work closely together to identify areas most appropriate for offshore wind development,” states the memorandum of understanding between Defense and Interior. New technology to address wind turbine radar interference, if successfully implemented, might also be of some help.
|
<urn:uuid:81b99f8d-b13d-4b94-89da-3d803824ebe1>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/defense/2012/08/air-force-wants-correct-radar-inference-wind-turbines/57251/?oref=ng-channelriver
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00142-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.948514 | 305 | 2.671875 | 3 |
Password management is a common IT support issue that creates problems for many organizations. It can be divided into two categories. The first is regular user accounts, used by ordinary users for day-to-day activities. The second is privileged accounts, such as those used by background applications and for server administration, which are managed only by IT administrators.
Regular user accounts generate a lot of routine work for the IT help desk: users forget passwords, accidentally lock their accounts, and call support for help, which eats up time. Privileged accounts are handled by experienced administrators, but they cause their own worries and headaches because of their shared nature. One privileged account is usually managed by multiple admins, and each admin usually handles multiple accounts, creating complex relationships to keep track of.
Privileged accounts fall into two types: service and administrative. Privileged service accounts run services and other background applications. Privileged administrative accounts are used for managing servers. A local Administrator account is an example of a privileged server management account.
Every IT team needs many privileged passwords for managing servers and applications. It is common for a group of servers to be managed by several different administrators, and proper account maintenance requires close cooperation between them. Password changes can cause unexpected effects, such as account lockouts, if they are not properly communicated to all team members.
In the following example (see picture), Joe and Bill each manage two servers, one of them shared. One day Bill comes to work and decides to change the service account password because it is about to expire. He changes it and updates the services on his two servers. He is happy! And guess what happens next? He has just broken the Exchange server managed by Joe, and on top of that he has locked out the shared service account, because Exchange is still running with the old credentials.
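One way Bill could have avoided the breakage is to enumerate, before touching anything, every service on every managed server that logs on with the shared account. Below is a minimal, read-only sketch in Python using the third-party wmi package (an assumption; it requires pywin32 and admin rights on each target server), with made-up server and account names:

```python
# Dry-run discovery: list every Windows service, on every server we manage,
# that logs on with a given shared account -- before its password is changed.
# Requires the third-party "wmi" package (which itself needs pywin32) and
# an account with admin rights on each target server.
import wmi

SERVERS = ["SQL1", "SQL2", "EXCH1"]            # illustrative host names
SHARED_ACCOUNT = r"CORP\svc_shared"            # illustrative service account

def find_dependent_services(servers, account):
    hits = []
    for server in servers:
        conn = wmi.WMI(computer=server)        # WMI connection to the remote host
        for svc in conn.Win32_Service(StartName=account):
            hits.append((server, svc.Name, svc.State))
    return hits

if __name__ == "__main__":
    for server, name, state in find_dependent_services(SERVERS, SHARED_ACCOUNT):
        print(f"{server}: service {name!r} ({state}) runs as {SHARED_ACCOUNT}")
```

Only once every dependent service is known, and scheduled to receive the new credentials, is it safe to rotate the password; keeping that inventory accurate by hand is exactly the chore that tends to go wrong.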
Another example is the local Administrator account. What if Joe, getting back at Bill, resets the local admin password on SQL1 and says nothing to Bill? How is Bill supposed to access the shared server now? Some revenge, but now nobody wins.
To address privileged password management issues, Netwrix designed a product called Netwrix Privileged Account Manager. Once the product is deployed, all privileged password management takes place from a central server, accessible from a Web browser. Administrators never update passwords directly, but rather go through a management console, which ensures the proper workflow. All you have to do is specify a list of managed servers once for each account. Then, when someone from your team changes a password, the product goes through all of your servers and updates automatically discovered services. You may even remove administrative permissions from your normal accounts to prevent inadvertent changes and let Netwrix Privileged Account Manager take care of your service accounts.
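As a rough mental model of that workflow (plain illustrative Python, not Netwrix's actual API), a central manager keeps an inventory of which servers and services depend on each account and pushes a new secret to all of them in one pass, reporting anything that fails:

```python
# Conceptual model of centralized privileged-password rotation.
# Purely illustrative -- this is not the vendor's API, and the updater
# callable (e.g. something WMI-based) is supplied by the caller.

class CentralPasswordManager:
    def __init__(self):
        # account -> list of (server, service) pairs that log on with it
        self.inventory = {}

    def register(self, account, server, service):
        self.inventory.setdefault(account, []).append((server, service))

    def rotate(self, account, new_password, update_service):
        """Push new_password to every service that depends on the account.
        Changing the password in the directory itself is left to the caller."""
        failures = []
        for server, service in self.inventory.get(account, []):
            try:
                update_service(server, service, account, new_password)
            except Exception as exc:             # keep going, report at the end
                failures.append((server, service, exc))
        return failures

# Hypothetical wiring:
# mgr = CentralPasswordManager()
# mgr.register(r"CORP\svc_shared", "SQL1", "MSSQLSERVER")
# mgr.register(r"CORP\svc_shared", "EXCH1", "MSExchangeIS")
# problems = mgr.rotate(r"CORP\svc_shared", "n3w-Secr3t!", wmi_based_updater)
```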
Note: if you are looking for password management of regular user accounts, please visit Netwrix Password Manager home page.
|
<urn:uuid:38ed8986-8841-4540-a6e7-54d339458bfe>
|
CC-MAIN-2017-09
|
https://www.netwrix.com/privileged_password_management.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00438-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.940948 | 589 | 2.828125 | 3 |