| column | dtype | range |
|---|---|---|
| text | stringlengths | 234 to 589k |
| id | stringlengths | 47 to 47 |
| dump | stringclasses | 62 values |
| url | stringlengths | 16 to 734 |
| date | stringlengths | 20 to 20 |
| file_path | stringlengths | 109 to 155 |
| language | stringclasses | 1 value |
| language_score | float64 | 0.65 to 1 |
| token_count | int64 | 57 to 124k |
| score | float64 | 2.52 to 4.91 |
| int_score | int64 | 3 to 5 |
Use the APIs in this category to access data from data sources, such as XML, JSON, or SQL. To learn more about data sources and how to use them in your apps, visit the Data storage documentation.
- AsyncDataAccess - Allows communication with an asynchronous worker in another thread.
- AsyncWorker - Represents objects that do asynchronous work.
- DataAccessError - Represents an error from a DataAccess load or save operation.
- DataAccessErrorType - Represents types of data access errors.
- DataAccessReply - The reply from an asynchronous data access operation.
- DataSource - Provides access to data from an external data source.
- DataSourceType - Represents data types that you can use with a DataSource.
- JsonDataAccess - Provides load and save operations for JSON data.
- SqlConnection - Connects to an SQL database and executes commands asynchronously.
- SqlDataAccess - Retrieves and updates data in an SQL database.
- SqlWorker - A worker that executes SQL commands in another thread.
- XmlDataAccess - Converts data from XML format to Qt C++ value objects or from Qt C++ value objects to XML format.
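The classes above are Cascades C++ APIs, so the snippet below is not that API. It is a hedged, language-neutral illustration in Python (standard library only) of the pattern these classes implement: parse JSON into native values, then persist and query them with SQL. The file, table, and field names are hypothetical.

```python
# Illustrative only: the Cascades classes (JsonDataAccess, SqlDataAccess, ...)
# are Qt C++ APIs. This sketch mirrors the same load/store pattern with the
# Python standard library; file and table names are hypothetical.
import json
import sqlite3

def load_json(path):
    """Rough analogue of a JSON load operation: file -> native objects."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def save_to_sql(records, db_path):
    """Rough analogue of an SQL data-access class: execute statements against a database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS contacts (name TEXT, email TEXT)")
    conn.executemany(
        "INSERT INTO contacts (name, email) VALUES (:name, :email)", records
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    contacts = load_json("contacts.json")   # e.g. [{"name": "...", "email": "..."}]
    save_to_sql(contacts, "contacts.db")
```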
<urn:uuid:4ae9f110-ae29-4d69-8122-932ba6f18416>
CC-MAIN-2017-09
http://developer.blackberry.com/native/reference/cascades/data_management_data_sources.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00543-ip-10-171-10-108.ec2.internal.warc.gz
en
0.718148
273
2.71875
3
Local area network (LAN) is being widely deployed in the enterprise, university, hospital, army, hotel and places where a group of computers or other devices share the same communication link to a More...
FDDI – Fiber Distributed Data Interface
WAN – Wide Area Network
MAN – Metropolitan Area Network
LAN (Local Area Network) is a computer network within a small area, such as a home, school, computer More...
<urn:uuid:50f4978f-185b-4c96-bea5-e02a71c7638b>
CC-MAIN-2017-09
http://www.fs.com/blog/tag/local-area-network
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00543-ip-10-171-10-108.ec2.internal.warc.gz
en
0.901374
89
2.828125
3
Mozilla, Samsung Building Rust Programming Language, Servo Browser
Mozilla announced that it is working with Samsung to deliver a new advanced browser based on a new programming language known as Rust. Ever-focused on advancing the Web, Mozilla's research arm has joined forces with Samsung to create a new browser engine based on a new programming language called Rust. Mozilla Research is collaborating with Samsung to deliver an advanced technology Web browser engine called Servo that will take advantage of the faster, multi-core, heterogeneous computing architectures on the horizon. And Rust is the language Mozilla felt compelled to create, purpose-built for the task. "Servo is an attempt to rebuild the Web browser from the ground up on modern hardware, rethinking old assumptions along the way," Brendan Eich, CTO at Mozilla, wrote in an April 3 blog post. "This means addressing the causes of security vulnerabilities while designing a platform that can fully utilize the performance of tomorrow's massively parallel hardware to enable new and richer experiences on the Web. To those ends, Servo is written in Rust, a new, safe systems language developed by Mozilla along with a growing community of enthusiasts." Mozilla and Samsung are bringing both the Rust programming language and Servo, the experimental Web browser engine, to Android and ARM, Eich said. This will enable them to further experiment with Servo on mobile. And Samsung has contributed an ARM back-end to Rust and the build infrastructure necessary to cross-compile to Android. But why the need for yet another programming language? "Rust is a systems language that is focused on speed, safety and concurrency," a Mozilla spokesperson told eWEEK. "Rust is an attempt to create a modern language that can replace C++ for many uses while being less prone to the types of errors that lead to crashes and security vulnerabilities. Because Servo is designed from the ground up using Rust as its main implementation language, Servo will also tend to avoid sources of bugs and security vulnerabilities associated with incorrect memory management common to browsers implemented in unsafe languages such as C++, resulting in a faster, more secure experience for people browsing the Web."
<urn:uuid:6359c4cb-b09c-4c68-aac6-63dcc084bcf4>
CC-MAIN-2017-09
http://www.eweek.com/developer/mozilla-samsung-building-rust-programming-language-servo-browser
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00067-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939189
432
2.53125
3
Lisa Miller doesn’t know what her department would do without the cameras and sensors that monitor traffic conditions. Working as the traveler information manager for the Utah Department of Transportation (UDOT), Miller’s feelings were confirmed two years ago when rough weather jeopardized part of the network. In winter 2011, winds exceeding 100 mph bombarded overhead traffic cameras in northern Utah, which was bad news for traffic monitoring. “It knocked out our camera coverage in the area where we needed it the most, and we’ve really come to rely on the network as our eyes and ears on the road,” she said. “When we don’t have it, it’s hard to be effective and to give the traveling public the information they need.” More than 700 cameras and 1,500 in-road sensors record photos, videos and traffic data on state roads. UDOT, local news groups and national organizations use this information to communicate traffic information to the public. There are also several weather sensors, about 80 in Miller’s estimation, that deliver weather information to drivers in the same fashion. The public can visit the Utah Department of Transportation’s traffic conditions home page to see the data, and a map with icons depicting the locations where traffic cameras and weather sensors are currently mounted. Users can view photos of current road conditions and data on weather conditions. This information also is available on a smartphone app for Android and iPhone systems. UDOT reaches the public through social media as well, and the cameras, sensors and website information help employees keep citizens informed. “We are very proactive at UDOT,” Miller said. “Over the past couple of years, the traveling public has really turned to UDOT as a source, a provider [and] a partner.” Utah has full-time, in-house weather forecasters at stations dispersed throughout the state, and programs where drivers can call in to report road condition information. A Closer Look Most traffic cameras and sensors are located in the Salt Lake area where about 80 percent of the state’s residents live. The two types of equipment work together to gather data for the state’s traffic operations center, and then that information is disseminated to the public. UDOT’s radar and loop sensors at intersections feed data to the department’s signal engineers in almost real time. Pole-mounted radar sensors shoot microwave beams across lanes, and vehicles that are hit by the beams reflect microwave energy back to the antenna, which is how a computer analyzes changes in speed. The department also has vehicle inductive-loop traffic detectors beneath the pavement that detect vehicles passing over them. Camera feeds, along with sensor data, help Utah’s employees make informed assessments about road activity. They use their knowledge to notify the public in multiple ways, including variable message signs. “They watch congestion develop or dissipate and put messages back out to the public on overhead message signs warning of where congestion might be,” said Blaine Leonard, Utah’s intelligent transportation system program manager. “They can use that after a major ball game or a big Fourth of July celebration to sense what’s happening.” The same statewide fiber-optic network that supports data transmission from cameras and sensors to the traffic operations center also supports data transfer from roadway weather information system sensors.
UDOT’s weather sensor network comprises various types of units. Traditional setups involve atmospheric sensors mounted on a structure near the road that communicate with ground-level sensors called “pucks” embedded in nearby pavement that read the temperature on the road. There also are sensors mounted on poles near the road that fire infrared energy on a surface, allowing personnel to gauge how much light ice and snow reflect back from the ground. This latter group of sensors is called non-invasive because they don’t require installation on actual pavement in the middle of traffic flow. Data gleaned from these sources allow UDOT to post up-to-date weather condition information on variable message signs statewide as well as through social media and government websites. Foundation for Future Development A combination of factors prompted Utah to begin deploying cameras and sensors at the dawn of the 21st century. It was a natural result of the evolution of traffic systems that had been under way throughout the 1990s. According to Miller, the intelligent transportation system wave was rushing through the country at that time, and it influenced how cities developed their infrastructure, including Salt Lake City. “The whole concept of intelligent transportation systems started to come alive at that time, and when you manage traffic more efficiently, it’s less expensive, it saves people time and gas,” she said. Utah began installing cameras in the late 1990s, as Leonard recalls, and there were only a half dozen or so at first. The state was building a new intelligent transportation system in conjunction with reconstruction of Interstate 15 in Salt Lake County at the time, but that wasn’t the only impetus for increased use of traffic management technology. Soon after, Utah was selected to host the 2002 Winter Olympics, and government leaders knew an influx of tourists was coming. “That was sort of our start, and we’ve been adding cameras in our system continuously ever since then and replacing cameras as they wore out,” Leonard said. Neither Leonard nor Miller could name a specific cost of the overall installation or management of their decades-old system because it’s been around so long and its growth has been tied to other state projects. The freeway reconstruction was billed at more than $1 billion in the beginning. However, Leonard spoke generally about the cost of specific cameras and sensors. A typical camera costs about $4,000, but it requires a pole and pole foundation to be built with it, as well as a cabinet with equipment inside that lets people operate the camera, which increases the total cost to about $20,000. Sensor costs vary because there are several kinds, many with different installation requirements. It’s safe to say that Utah’s sensors and cameras will be integral to traffic operations for the foreseeable future, and the government plans to deploy more, especially in rural areas. “You have plenty of weather information for urban areas, but you have very little when it comes to the rural areas,” said Jeff Williams, UDOT meteorologist and weather operations program manager. “Sometimes there’s a hundred miles between towns. A lot of times, these are trucking routes where you have to make sure that commerce is not impacted.”
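The passage above describes pole-mounted radar and paired inductive-loop detectors feeding speed data to the traffic operations center. As a toy illustration of the underlying arithmetic (my own sketch, not UDOT software; the loop spacing is an assumed value), a vehicle's speed can be estimated from the time it takes to trip two loops a known distance apart:

```python
# Toy illustration (not UDOT's actual software): estimating vehicle speed from
# a pair of inductive-loop detectors a known distance apart, as described above.
LOOP_SPACING_M = 4.5          # assumed spacing between the two loops, in metres

def speed_kmh(t_first_loop_s: float, t_second_loop_s: float) -> float:
    """Speed estimate from the timestamps at which a vehicle triggers each loop."""
    dt = t_second_loop_s - t_first_loop_s
    if dt <= 0:
        raise ValueError("second loop must trigger after the first")
    return (LOOP_SPACING_M / dt) * 3.6   # m/s -> km/h

# A vehicle crossing the loops 0.15 s apart is travelling roughly 108 km/h.
print(round(speed_kmh(10.00, 10.15)))
```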
<urn:uuid:9a4bcf3c-4b5c-4e70-871e-a774079e637c>
CC-MAIN-2017-09
http://www.govtech.com/Utah-Network-of-Cameras-and-Sensors-Keeps-Traffic-Moving.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00067-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957648
1,391
2.734375
3
Wiedner E.B.,Ringling Brothers and Barnum and Bailey Center for Elephant Conservation | Peddie J.,Drs. Peddie | Peddie L.R.,Drs. Peddie | Abou-Madi N.,Cornell University | And 11 more authors. Journal of Zoo and Wildlife Medicine | Year: 2012 Three captive-born (5-day-old, 8-day-old, and 4-yr-old) Asian elephants (Elephas maximus) and one captive-born 22-yr-old African elephant (Loxodonta africana) from three private elephant facilities and one zoo in the United States presented with depression, anorexia, and tachycardia as well as gastrointestinal signs of disease including abdominal distention, decreased borborygmi, tenesmus, hematochezia, or diarrhea. All elephants showed some evidence of discomfort including agitation, vocalization, or postural changes. One animal had abnormal rectal findings. Nonmotile bowel loops were seen on transabdominal ultrasound in another case. Duration of signs ranged from 6 to 36 hr. All elephants received analgesics and were given oral or rectal fluids. Other treatments included warm-water enemas or walking. One elephant underwent exploratory celiotomy. Three animals died, and the elephant taken to surgery was euthanized prior to anesthetic recovery. At necropsy, all animals had severe, strangulating intestinal lesions. Copyright © 2012 by American Association of Zoo Veterinarians. Source Wong A.W.,University of Florida | Wong A.W.,Florida Fish And Wildlife Conservation Commission | Wong A.W.,University of Queensland | Bonde R.K.,University of Florida | And 9 more authors. Aquatic Mammals | Year: 2012 West Indian manatees (Trichechus manatus) are captured, handled, and transported to facilitate conservation, research, and rehabilitation efforts. Monitoring manatee oral temperature (OT), heart rate (HR), and respiration rate (RR) during out-of-water handling can assist efforts to maintain animal well-being and improve medical response to evidence of declining health. To determine effects of capture on manatee vital signs, we monitored OT, HR, and RR continuously for a 50-min period in 38 healthy, awake, juvenile and adult Florida manatees (T. m. latirostris) and 48 similar Antillean manatees (T. m. manatus). We examined creatine kinase (CK), potassium (K+), serum amyloid A (SAA), and lactate values for each animal to assess possible systemic inflammation and muscular trauma. OT range was 29.5 to 36.2° C, HR range was 32 to 88 beats/min, and RR range was 0 to 17 breaths/5 min. Antillean manatees had higher initial OT, HR, and RR than Florida manatees (p < 0.001). As monitoring time progressed, mean differences between the subspecies were no longer significant. High RR over monitoring time was associated with high lactate concentration. Antillean manatees had higher overall lactate values ([mean ± SD] 20.6 ± 7.8 mmol/L) than Florida manatees (13.7 ± 6.7 mmol/L; p < 0.001). We recommend monitoring manatee OT, HR, and RR during capture and handling in the field or in a captive care setting. Source Miller M.,Disneys Animal Programs and Environmental Initiatives | Weber M.,Disneys Animal Programs and Environmental Initiatives | Valdes E.V.,Disneys Animal Programs and Environmental Initiatives | Neiffer D.,Disneys Animal Programs and Environmental Initiatives | And 3 more authors. 
Journal of Zoo and Wildlife Medicine | Year: 2010 A combination of low serum calcium (Ca), high serum phosphorus (P), and low serum magnesium (Mg) has been observed in individual captive ruminants, primarily affecting kudu (Tragelaphus strepsiceros), eland (Taurotragus oryx), nyala (Tragelaphus angasii), bongo (Tragelaphus eurycerus), and giraffe (Giraffa camelopardalis). These mineral abnormalities have been associated with chronic laminitis, acute tetany, seizures, and death. Underlying rumen disease secondary to feeding highly fermentable carbohydrates was suspected to be contributing to the mineral deficiencies, and diet changes that decreased the amount of starch fed were implemented in 2003. Serum chemistry values from before and after the diet change were compared. The most notable improvement after the diet change was a decrease in mean serum P. Statistically significant decreases in mean serum P were observed for the kudu (102.1-66.4 ppm), eland (73.3-58.4 ppm), and bongo (92.1-64.2 ppm; P < 0.05). Although not statistically significant, mean serum P levels also decreased for nyala (99.3-86.8 ppm) and giraffe (82.6-68.7 ppm). Significant increases in mean serum Mg were also observed for kudu (15.9-17.9 ppm) and eland (17.1-19.7 ppm). A trend toward increased serum Mg was also observed in nyala, bongo, and giraffe after the diet change. No significant changes in mean serum Ca were observed in any of the five species evaluated, and Ca was within normal ranges for domestic ruminants. The mean Ca:P ratio increased to greater than one in every species after the diet change, with kudu, eland, and bongo showing a statistically significant change. The results of this study indicate that the diet change had a generally positive effect on serum P and Mg levels. Copyright 2010 by American Association of Zoo Veterinarians. Source
<urn:uuid:e14cf3e6-6cb2-4ee3-80f3-3b6f35143b46>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/disneys-animal-programs-and-environmental-initiatives-730340/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00067-ip-10-171-10-108.ec2.internal.warc.gz
en
0.907198
1,262
2.546875
3
When the internet of things (IoT) arrives in force it will be two things: a boon for scientists, analysts, and anyone else interested in making things run better, faster, cheaper and, potentially, a security nightmare for everyone else. The boon comes from finally having enough data to make causal connections between events and outcomes: X leads definitively to Y, not X may have led to Y "but we just don't know" (a favorite defense of attorneys everywhere). This can be as simple as improving your health by counting steps with an accelerometer to improving crop yields or understanding ocean currents by deploying armies of sensors. The potential security nightmare comes from the billions of new network holes created by billions of devices that will be calling home to corporate, government and, increasingly, home-based networks with updates. "At a minimum you need some type of encryption between the device and whatever it is talking to," says Tom Hunt, CEO of WindSpring, an IoT data compression company that, because of the work it does at the sensor level compressing data, is in a unique position to understand the difficulties of securing IoT devices. "And that is just phase one when extending the client/server model [to sensors]." There are a number of security challenges facing IoT makers. First off, devices have to be cheap. This does not allow for much onboard compute or storage -- often just a few kilobytes. Because of this, even light-weight security protocols like PK Zip cannot run at the device level. Nor can technologies like virtual private networks (VPNs) that would secure data in transit and prevent hackers from gaining network access via that data stream. Next comes the challenge of sensor-to-sensor network security. With such low levels of power and compute, these devices will serve up a cornucopia of network access points for hackers to exploit. "What scares IoT managers the most is more and more access is being provided to their systems from devices that they've never touched," says Hunt referring to extended sensor networks that may include devices from partners' or customers' networks that they have no control over. And, thirdly, physical device security will be of great concern since many IoT devices will be deployed, unprotected in the field. Another important issue is what Hunt calls the "DIY attitude" on the part of device makers. Like the early days of the PC industry, these manufacturers are not taking security seriously; thinking their engineers can just bolt on some open source protocol or other and call it done. "There's a zillion protocols and most of them are not well suited to IoT," says Hunt. "I sat down with a carrier outside the US who said they are going to provide security by running VPN links from all of their IoT sensors back to the network and I said 'How are you going to do that? You've got 2K of memory. What VPN client runs on 2k of memory?'" (The answer, in case you were wondering, is none.) Some IoT observers may not see device level security as that big a deal since hacking into a temperature sensor doesn't yield a lot of data, but what if they change the reading and that affects the command and control of an industrial process? Or, and this is what keeps chief information security officers (CISOs) awake at night, that same sensor allows a hacker to gain network access? "It's when you tap into that sensor, what else can you get access to?," says Hunt. 
"Here security is more important than ever because, unlike any system we've deployed before, you now have a low cost unattended thing that's sitting out there communicating with your host and if you don't protect, the consequences are far greater than they were [with previous technologies]." Read about a new approach to securing today's distrubted workforce and devices
<urn:uuid:b0e73fa7-4ab1-4699-be60-6ef848cb8393>
CC-MAIN-2017-09
https://blog.iboss.com/executives/low-cost-sensors-at-the-heart-of-future-iot-security-quagmire?utm_source=hs_email&utm_medium=email&utm_content=30735919&_hsenc=p2ANqtz--IsbnB6AmqhjiFhSm5deZvhXTRZrKNUnnw6M8zbpL7VahPX6sYew58X-36iku0Qaiu0Kt3R82CV2Rj8RL0VtfSPN6DaA&_hsmi=30735919
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00243-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96546
784
2.75
3
Nominet and Oxford Flood Network are using Internet of Things technologies to help prevent and mitigate flood damage in the UK. The UK domain name registry has also launched an interactive, online map highlighting how technology can be used to improve flood defences anywhere in the world. The project is currently being trialled in the Oxford area and uses IoT sensors to create a localised, early-warning system in flood-prone areas. More than 30 IoT devices are being employed to monitor water levels in the streams, groundwater and basins of both the Thames and Cherwell rivers. The data is then processed and combined with information collected by the Environment Agency before being presented in map form. Ben Ward, director of the Flood Network, believes that more insights are likely to follow as the technology is expanded. “This map will show the water situation at street level and help people to make better decisions as when a flood occurs, we can complement existing models with live observations on the ground,” he explained. “We’ve been working with great volunteers across the city to make the Flood Network happen, and we’re keen to get more on board to get an even clearer picture of Oxford’s water situation. As the network grows and connects more places, it gathers data which can be fed back to the authorities to improve flood models, leading to better defences and emergency responses.” In order to ensure reliable communication between the IoT devices, some of which are located in hard-to-reach places, Nominet utilises its TV white space (TVWS) database to identify which frequencies can be used to transfer information. In addition, because the IoT sensors make use of existing Internet standards, like DNS, they represent an easily scalable solution. This is particularly important not only for enhancements in the Oxford area, but also to bring the technology to other parts of the UK. Just last week, the devastation in Cumbria from ‘Storm Desmond’ provided a timely reminder of the importance of reliable monitoring systems when it comes to limiting flood damage.
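The Flood Network's actual processing combines live gauge readings with Environment Agency models, which is far more sophisticated than anything shown here. Purely as a hypothetical sketch of the simplest street-level warning rule (a fixed threshold plus rate of rise, with both values invented), the idea looks like this:

```python
# Hypothetical sketch (not the Flood Network's actual logic): raise an alert when
# a gauge exceeds a threshold or is rising quickly, given (minute, level_cm) samples.
THRESHOLD_CM = 120
RISE_CM_PER_HOUR = 30

def flood_alert(samples):
    """samples: list of (minutes_since_midnight, level_cm), oldest first."""
    t0, l0 = samples[0]
    t1, l1 = samples[-1]
    rise_rate = (l1 - l0) / ((t1 - t0) / 60) if t1 > t0 else 0.0
    return l1 >= THRESHOLD_CM or rise_rate >= RISE_CM_PER_HOUR

print(flood_alert([(600, 80), (630, 95), (660, 112)]))   # True: rising ~32 cm/h
```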
<urn:uuid:100efed8-f459-4a5c-ae4a-99e55cf5dc68>
CC-MAIN-2017-09
https://internetofbusiness.com/iot-used-to-bolster-uk-flood-defences/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00243-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936466
420
2.640625
3
Networks are evolving rapidly. The proliferation of devices, users, applications, and services has made the network edge more porous, while at the same time expanding the attack surface. And these remote devices and applications are now commonly accessing data center resources in a way unimaginable just a few years ago. Add IoT and Cloud to a highly mobile and distributed network, and you have created the perfect storm for disaster. Security has evolved organically as well. But not in a way that complements the evolution of the network. Instead, it is not uncommon for organizations to have security solutions from dozens of different vendors installed inside their distributed infrastructure. These siloed security devices use different management tools, different sources for threat intelligence, and have no ability to share critical information. It’s something often referred to as the accidental security architecture. And when combined with the growing security skills shortage, it’s a complex and expensive recipe for human error and unseen security gaps where advanced threats often pass unnoticed. Unfortunately, the attack technology used by cybercriminals is organized, responsive, and designed to share information, while the security installed in most networks is not. It is designed to circumvent and exploit the weaknesses inherent in your accidental security architecture. What’s needed is a way to integrate your security technologies together to close the security gaps. They need to share threat intelligence, management, and orchestration. They need to intelligently collaborate to respond to threats. And they need to operate using open standards for enhanced interoperability with your existing and future security investments. And central to a cohesive, integrated security strategy is the Enterprise Firewall. The Evolution of Firewalls Firewalls have undergone significant change since Digital Equipment Corporation engineers wrote the first paper on packet filtering in 1988. Here is a brief breakdown on the evolution: Packet Filters and Network Address Translation Packet filtering and NAT are used to monitor and control packets moving across a network interface, apply predetermined security rules, and obscure the internal network from the public Internet A proxy firewall is an intermediary device that terminates connection requests on one side of the proxy and builds a new network connection on the other. Because no packets actually pass through the proxy firewall, it is able to filter out unauthorized or infected traffic, and completely obscure the internal network by removing any identifiable source information. This level of security comes with significant performance challenges. Stateful firewalling, also known as dynamic packet filtering, monitors the state of connections and makes determinations on what sorts of data packets belonging to a known active connection are allowed to pass through the firewall. Unified Threat Management (or UTMs) As security solutions began to multiply, many organizations did not have the IT staff necessary to install, manage, and monitor the growing array of specialized security technologies. UTM devices combined many of the most critical security functions – firewall, IPS, VPN, gateway antivirus, content filtering, load balancing, etc. – into a single device, usually with a unified management console. 
While this is still a powerful solution for many smaller organizations, the challenges for growing enterprises include limited performance, a single point of failure, and deploying security across an increasingly distributed network environment.
Next-Generation Firewall (NGFW)
This term was coined by Gartner in late 2000 to describe a new sort of all-in-one security appliance based on the UTM model, but combined with enterprise-class scalability and performance, and a focus on granular inspection of Layer 7 application traffic.
The Next Firewall Evolution – The Fortinet Security Fabric
In spite of these advances in firewall technology, firewalls are still isolated security devices inspecting traffic passing through a single network chokepoint. This model is increasingly ineffective as networks become more distributed and borderless. Firewalls are still the basic building block of any security strategy, but now they need to be part of a tightly integrated, highly collaborative, and widely distributed security fabric.
- Firewalls need to be deployed everywhere: at the Internet edge, in remote and branch offices, in the enterprise core for infrastructure segmentation and to secure the convergence of distributed networks, at the data center edge, inside the data center core (including traditional, virtual and SDN environments), and out in the cloud.
- They need to leverage common global and local intelligence, share centralized management and orchestration, and consistently enforce policy anywhere across highly mobile, distributed, and virtualized network environments.
- Because they can be deployed in a variety of architectures, Enterprise Firewalls need to be able to provide coordinated and seamless monitoring and response to threats from the network access layer up to the application layer.
- They need to collaborate intelligently with other security technologies, such as web and email security, web application firewalls, sandboxes, anti-malware solutions, IPS and IDS, encryption and VPN, access control, DDoS, and endpoint security, whether these are an integral part of the firewall or specialized security devices, applications, or services distributed across the network.
- And they need to interoperate with critical network technologies, such as switches, routers, load balancers, wireless access points, and server controllers, to collect and coordinate distributed network intelligence, broadly assess the scope and scale of any detected threat, and enforce policy as close to the detected problem as possible.
These new Enterprise Firewalls become the foundation on which organizations can build an intelligent and highly interactive security architecture. Fortinet has just announced such an architecture, called the Fortinet Security Fabric. It is a tightly integrated set of security technologies that can be woven into the network, designed to share threat intelligence, collaborate to provide coordinated threat response, and adapt dynamically to the changing threat landscape. The Fortinet Enterprise Firewall plays a pivotal role in this new Security Fabric. Utilizing FortiGate's common management and unified operating system, available in a wide variety of form factors, organizations can distribute consistent firewall security across the network. And its flexible API design allows it to interoperate across Fortinet's entire portfolio of security solutions as well as its rich ecosystem of third-party alliance partners, without ever sacrificing performance or business functionality.
The answer to an increasingly complex network environment and sophisticated threat landscape cannot be compounding the accidental security architecture we already have in place. The best answer to complexity is simplicity. Which is exactly what the new Fortinet Security Fabric is designed to deliver. Evolved security designed for the next generation of digital business networks.
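To make the packet-filtering versus stateful-inspection distinction in the firewall history above concrete, here is a minimal sketch (my own illustration, not FortiGate code): a stateful device remembers outbound flows and admits inbound packets only when they match an established connection, whereas a stateless filter would have to judge every packet in isolation.

```python
# Minimal illustration (not FortiGate code) of the stateful-inspection idea
# described above: track outbound connections and only admit inbound packets
# that belong to an established flow.
established = set()   # connection table: (src_ip, src_port, dst_ip, dst_port)

def outbound(src_ip, src_port, dst_ip, dst_port):
    """Record a connection initiated from the inside."""
    established.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    """An inbound packet is the mirror image of an outbound flow we have seen."""
    return (dst_ip, dst_port, src_ip, src_port) in established

outbound("10.0.0.5", 51000, "203.0.113.7", 443)              # inside host opens HTTPS
print(allow_inbound("203.0.113.7", 443, "10.0.0.5", 51000))  # True: reply traffic
print(allow_inbound("198.51.100.9", 443, "10.0.0.5", 51000)) # False: unsolicited packet
```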
<urn:uuid:57e8e5bc-687b-4487-a94c-7d8016ef9796>
CC-MAIN-2017-09
https://blog.fortinet.com/2016/05/02/the-next-step-in-enterprise-firewall-evolution
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00363-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928499
1,317
2.765625
3
Wikimedians -- those people who contribute to the free Wikipedia website -- were recognized for their selfless work on Wednesday in Slubice, Poland, with the unveiling of a monument. The fiber-and-resin sculpture, designed by Armenian artist Mihran Hakobyan, depicts the Wikipedia globe held up by four people. Consistent with the Wikipedia logo's globe, the world is shown as being unfinished, ready to accept more knowledge. The sculpture stands 5-and-a-half feet high in Frankfurt Square. The idea to create a monument came from Dr. Krzysztof Wojciechowski, director of Collegium Polonicum. Started in 2001, Wikipedia now includes more than 33 million articles in 287 languages, and is one of the world's most popular websites, with about half a billion monthly visitors. “It is a truly special and exciting day, and one that I hope shines the spotlight on the thousands of Wikimedians who edit Wikipedia and make it the source of free knowledge it has come to be. I look forward to visiting Słubice one day to see the monument for myself and perhaps meeting some of those involved in the project,” said Jimmy Wales, founder of Wikipedia, in a statement. This story, "Monumental Day for Wikipedia in Poland" was originally published by Network World.
<urn:uuid:325c755a-b477-431a-8ded-699ddd7360f4>
CC-MAIN-2017-09
http://www.cio.com/article/2837933/internet/monumental-day-for-wikipedia-in-poland.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00539-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934994
294
2.546875
3
6 Gbps SATA transfer speed is on its way The solid-state disk drive is supposed to be fast. After all, it's mostly made of memory -- and last we checked, flash RAM was fast. In practice, however, some applications with SSDs can be slower than with HDDs, the reason being the way data is cached as it's collected and moved through I/O channels into system RAM. The transfer interface is the bottleneck, and the engineers that contribute to the Serial ATA (SATA) transfer specification admit that fact openly. Just a few years ago, you might never have thought that 3 gigabits per second (Gbps) would end up causing problems; but as it turned out, the faster SATA 2.0 maximum transfer rate enabled new applications, which ended up introducing users to those bottlenecks for the first time. Now, the SATA-IO organization is preparing to eliminate that logjam, with the publication this morning of the SATA 3.0 specification. Its goal is to accelerate maximum transfer speeds to 6 Gbps, and in so doing, widen the bandwidth between components where these new bottlenecks have recently been introduced. "SSDs provide faster data access and are more robust and reliable than standard HDDs because they do not incur the latency associated with rotating media," states a recent SATA-IO white paper. "SSDs are used in a variety of applications but one of the most exciting is two-tier, hybrid drive systems for PCs. The SSD serves as short-term and immediate storage, leveraging its lower latency to speed boot time and disk heap access while a HDD, with its lower cost per megabyte, provides efficient long-term storage. With SATA 3 Gb/s, SSDs are already approaching the performance wall with sustained throughput rates of 250-260 MB/s [megabytes per second, note the capital "B"]. Next-generation SSDs are expected to require 6 Gb/s connectivity to allow networks to take full advantage of the higher levels of throughput these devices can achieve." The rapidly improved transfer rate may also increase not only the efficiency but also the lifespan of conventional hard drives, by introducing a concept called native command queueing (NCQ). With data transfer and data processing threads operating at roughly parallel speeds today, the only way existing HDD controller cards can synchronize these processes is by running them in sequence. That eliminates the opportunity controllers might have to read some data out of sequence (similar to the way Internet packets are received out of sequence) and assemble them later. By doubling the theoretical maximum throughput rate, HDDs can read more data from rotating cylinders along parallel tracks, without having to move the head...and that reduces wear on the drive head. Of course, a new generation of drive controllers will need to be created to take advantage of this capability. However, the best news of all is that a new generation of SATA cables does not have to be created. We can all use the cables we have now, to take advantage of the performance gains to come. Though the new controller cards will themselves be replacements, the SATA interface itself doesn't change to the extent that new cables are required. So existing external equipment, including the latest models in the exploding realm of external HDD storage, will be fully compatible. SATA-IO is making no promises as to how soon consumers will start seeing the new controllers, or PCs where those controllers are pre-installed.
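The NCQ description above boils down to letting the drive service queued commands out of arrival order so the head sweeps across the platter instead of bouncing back and forth. A toy model (an elevator-style sort by logical block address, which is an assumption for illustration, not real drive firmware) shows how much head travel reordering can save:

```python
# Toy model (not real drive firmware) of the native command queueing idea above:
# servicing queued requests in LBA order cuts total head travel versus FIFO order.
def total_seek(start_lba, requests):
    pos, travel = start_lba, 0
    for lba in requests:
        travel += abs(lba - pos)   # distance the head moves for this request
        pos = lba
    return travel

queue = [8000, 120, 7950, 300, 7800]      # hypothetical queued read LBAs
print(total_seek(0, queue))               # FIFO order: 38,860 units of travel
print(total_seek(0, sorted(queue)))       # NCQ-style sweep: 8,000 units
```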
<urn:uuid:27c43244-2e3b-49a6-8200-4472b1b3d470>
CC-MAIN-2017-09
https://betanews.com/2009/05/27/6-gbps-sata-transfer-speed-is-on-its-way/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00239-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951336
702
2.625
3
Messina M.,Centro Speleologico Etneo | Grech T.,Centro Speleologico Etneo | Fiorenza F.,Centro Speleologico Etneo | Marletta A.,Centro Speleologico Etneo | And 3 more authors. International Journal of Speleology | Year: 2015 Monte Conca Cave is a karst system placed in Messinian evaporites, consisting of an active cave and a resurgence located on the massif of Monte Conca, Campofranco within the "Riserva Naturale Integrale di Monte Conca". A sulfidic spring is located in the terminal gallery of the cave. To characterize the physical and chemical parameters of the Monte Conca Cave and of the sulfidic spring, air temperature, relative humidity, water pH, and concentrations of dissolved sulfides, nitrates and sulfates were monitored. The high sulfide consumption rate in the sulfidic spring, evaluated by a kinetic study, suggests that biotic consumption is dominant. Moreover, snottites and filamentous floating mats, rich in sulfur and nitrate suggest a microbial activity related to the sulfur cycle. Iron content was also evaluated in water and snottites, given its involvement in microbial activity. The microbial mats could be the source of an autotrophic system in close correlation with the biological cycle of many species of living organisms found near the spring. Some of them show typical troglobitic characteristics, while the presence and abundance of others depends on the water amount. The greater abundance of taxa found close to the sulfidic spring suggests a complex food web associated with it. The monitoring lasted a year and half has highlighted the difference between chemical- physical parameters of the cave and the sulfidic spring, emphasizing its typical microenvironment. © 2015 Societa Speleologica Italiana. All Rights reserved. Source
<urn:uuid:e04e2fdc-91ad-4433-b3f0-f426b2aaec0d>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/centro-speleologico-etneo-2434855/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00591-ip-10-171-10-108.ec2.internal.warc.gz
en
0.880753
397
2.90625
3
Data visualization tools for Linux A quick look at six open source graphics utilities A short list of visualization tools In this article, I provide a survey of a number of popular Linux data visualization tools and include some insight into their other capabilities. For example, does the tool provide a language for numerical computation? Is the tool interactive or does it operate solely in batch mode? Can you use the tool for image or digital signal processing? Does the tool provide language bindings to support integration into user applications (such as Python, Tcl, Java programming languages, and so on)? I also demonstrate the tools' graphical capabilities. Finally, I identify the strengths of each tool to help you decide which is best for your computational task or data visualization. The open source tools that I explore in this article are (with their associated licenses): - Gnuplot (Gnuplot Copyright, non GPL) - GNU Octave (GPL) - Scilab (Scilab) - MayaVi (BSD) - Maxima (GPL) - OpenDX (IBM Public License) Gnuplot is a great visualization tool that has been around since 1986. It's hard to read a thesis or dissertation without running into a gnuplot graph. Although gnuplot is command-line driven, it has grown from its humble beginnings to support a number of non-interactive applications, including its use as a plotting engine for GNU Octave. Gnuplot is portable, operating on UNIX®, Microsoft® Windows®, Mac OS® X and many other platforms. It supports a range of output formats from postscript to the more recent PNG. Gnuplot can operate in batch mode, providing a script of commands to generate a plot, and also in interactive mode, which allows you to try out its features to see the effect they have on your plot. A standard math library is also available with gnuplot that corresponds to the UNIX math library. Arguments for functions support integer, real, and complex. You can configure the math library for radians or degrees (the default is radians). For plotting, gnuplot can generate 2-D plots with the plot command and 3-D plots (as 2-D projections) with the splot command. With plot command, gnuplot can operate in rectangular or polar splot command is Cartesian by default but can also support spherical and cylindrical coordinates. You can also apply contours to plots (as shown in Figure 1, below). A new style for plots, pm3d, supports drawing palette-mapped 3-D and 4D data as maps and surfaces. Here's a short gnuplot example that illustrates 3-D function plotting with contours and hidden line removal. Listing 1 shows the gnuplot commands that are used, and Figure 1 shows the graphical result. Listing 1. Simple gnuplot function plot set samples 25 set isosamples 26 set title "Test 3D gnuplot" set contour base set hidden3d offset 1 splot [-12:12.01] [-12:12.01] sin(sqrt(x**2+y**2))/sqrt(x**2+y**2) Listing 1 illustrates the simplicity of gnuplot's command set. The sampling rate and density of the plot are determined by samples and isosamples, and a title is provided for the graph with the title parameter. The base contour is enabled along with hidden line removal, and the sinc plot is created with the splot command using the internally available math library functions. The result is Figure 1. Figure 1. A simple plot from gnuplot In addition to creating function plots, gnuplot is also great for plotting data contained in files. Consider the x/y data pairs shown in Listing 2 (an abbreviated version of the file). 
The data pairs shown in the file represent the x and y coordinates in a two-dimensional space. Listing 2. Sample data file for gnuplot (data.dat) 56 48 59 29 85 20 93 16 ... 56 48 If you want to plot this data in a two-dimensional space, as well as connect each data point with a line, you can use the gnuplot script shown in Listing 3. Listing 3. Gnuplot script to plot the data from Listing 2 set title "Sample data plot" plot 'data.dat' using 1:2 t 'data points', \ "data.dat" using 1:2 t "lines" with lines The result is shown in Figure 2. Note that gnuplot automatically scales the axes, but you're given control over this if you need to position the plot. Figure 2. A simple plot from gnuplot using a data file Gnuplot is a great visualization tool that is well known and available as a standard part of many GNU/Linux distributions. However, if you want basic data visualization and numerical computation, then GNU Octave might be what you're looking for. GNU Octave is a high-level language, designed primarily for numerical computation, and is a compelling alternative to the commercial Matlab application from The MathWorks. Rather than the simple command set offered by gnuplot, Octave offers a rich language for mathematical programming. You can even write your applications in C or C++ and then interface to Octave. Octave was originally written around 1992 as companion software for a textbook in chemical reactor design. The authors wanted to help students with reactor design problems, not debugging Fortran programs. The result was a useful language and interactive environment for solving numerical problems. Octave can operate in a scripted mode, interactively, or through C and C++ language bindings. Octave itself has a rich language that looks similar to C and has a very large math library, including specialized functions for signal and image processing, audio processing, and control theory. Because Octave uses gnuplot as its backend, anything you can plot with gnuplot you can plot with Octave. Octave does have a richer language for computation, which has its obvious advantages, but you'll still be limited by gnuplot. In the following example, from the Octave-Forge Web site (SimpleExamples), I plot the Lorentz Strange Attractor. Listing 4 shows the interactive dialog for Octave on the Windows platform with Cygwin. This example demonstrates the use of lsode, an ordinary differential equation solver. Listing 4. Visualizing the Lorentz Strange Attractor with Octave GNU Octave, version 2.1.50 Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003 John W. Eaton. This is free software; see the source code for copying conditions. There is ABSOLUTELY NO WARRANTY; not even for MERCHANTIBILITY or FITNESS FOR A PARTICULAR PURPOSE. For details, type `warranty'. Please contribute if you find this software useful. For more information, visit http://www.octave.org/help-wanted.html Report bugs to <bug-octave&bevo.che.wisc.edu>. >> function y = lorenz( x, t ) y = [10 * (x(2) - x(1)); x(1) * (28 - x(3)); x(1) * x(2) - 8/3 * x(3)]; endfunction >> x = lsode("lorenz", [3;15;1], (0:0.01:25)'); >> gset parametric >> gsplot x >> The plot shown in Figure 3 is the output from the Octave code shown in Listing 4. Figure 3. A Lorentz plot with Octave GNU Octave (in concert with gnuplot) can emit multiple plots on a single page with the multiplot feature. 
Using this feature, you define how many plots to create and then define the particular plot using the After the subwindow is defined, you generate your plot normally and then step to the next subwindow (as shown in Listing 5). Listing 5. Generating multiplots in Octave >> multiplot(2,2) >> subwindow(1,1) >> t=0:0.1:6.0 >> plot(t, cos(t)) >> subwindow(1,2) >> plot(t, sin(t)) >> subwindow(2,1) >> plot(t, tan(t)) >> subwindow(2,2) >> plot(t, tanh(t)) The resulting multiplot page is shown in Figure 4. This is a great feature for collecting related plots together to compare or contrast. Figure 4. A multiplot with GNU Octave You can think about Octave as a high-level language with gnuplot as the backend for visualization. It provides a rich math library and is a great free replacement for Matlab. It's also extensible, with packages developed by users for speech processing, optimization, symbolic computation, and others. Octave is in some GNU/Linux distributions, such as Debian, and can also be used on Windows with Cygwin and Mac OS X. See the Related topics section for more information on Octave. Scilab is similar to GNU Octave in that it enables numerical computation and visualization. Scilab is an interpreter and a high-level language for engineering and scientific applications that is in use around the world. Scilab originated in 1994 and was developed by researchers at Institut national de recherche en informatique et en automatique (INRIA) and École Nationale des Ponts et Chaussées (ENPC) in France. Since 2003, Scilab has been maintained by the Scilab Consortium. Scilab includes a large library of math functions and is extensible for programs written in high-level languages, such as C and Fortran. It also includes the ability to overload data types and operations. It includes an integrated high-level language, but has some differences from C. A number of toolboxes are available for Scilab that provide 2-D and 3-D graphics and animation, optimization, statistics, graphs and networks, signal processing, a hybrid dynamic systems modeler and simulator, and many other community contributions. You can use Scilab on most UNIX systems, as well as the more recent Windows operating systems. Like GNU Octave, Scilab is well documented. Because it is a European project, you can also find documentation and articles in a number of languages other than English. When Scilab is started, a window displays allowing you to interact with the interpreter (see Figure 5). Figure 5. Interacting with Scilab In this example, I create a vector (t) with values ranging from 0 to 2PI (with a step size of 0.2). I then generate a 3-D plot (using z=f(x,y), or the surface at the point xi,yi). Figure 6 shows the resulting plot. Figure 6. The resulting Scilab plot from the commands in Figure 5 Scilab includes a large number of libraries and functions that can generate plots with a minimum of complexity. Take the example of generating a simple three-dimensional histogram plot: rand(5,5) builds a matrix of size 5,5 containing random values (which I scale to a maximum of 5). This matrix is passed to the function hist3d. The result is the histogram plot shown in Figure 7. Figure 7. Generating a random three-dimensional histogram plot Scilab and Octave are similar. Both have a large base of community participation. Scilab is written in Fortran 77, whereas Octave is written in C++. Octave uses gnuplot for its visualization; Scilab provides its own. 
If you're familiar with Matlab, then Octave is a good choice because it strives for compatibility. Scilab includes many math functions and is very good for signal processing. If you're still not sure which one to use, try them both. They're both great tools, and you may find yourself using each of them for different tasks.
MayaVi, which means magician in Sanskrit, is a data visualization tool that binds Python with the powerful Visualization Toolkit (VTK) for graphical display. MayaVi also provides a graphical user interface (GUI) developed with the Tkinter module. Tkinter is a Tk interface, most commonly coupled with Tcl. MayaVi was originally developed as a visualization tool for Computational Fluid Dynamics (CFD). After its usefulness in other domains was realized, it was redesigned as a general scientific data visualizer. The power behind MayaVi is the VTK. The VTK is an open source system for data visualization and image processing that is widely used in the scientific community. VTK packs an amazing set of capabilities with scripting interfaces for Tcl/Tk, the Java programming language, and Python in addition to C++ libraries. VTK is portable to a number of operating systems, including UNIX, Windows, and Mac OS X. The MayaVi shell around VTK can be imported as a Python module from other Python programs and scripted through the Python interpreter. The Tkinter GUI provided by MayaVi allows the configuration and application of filters as well as manipulation of the lighting effects on the visualization. Figure 8 is an example visualization using MayaVi on the Windows platform.
Figure 8. 3-D visualization with MayaVi (CT heart scan data)
MayaVi is an interesting example of extending the VTK in the Python scripting language.
Maxima is a full symbolic and numerical computation program in the vein of Octave and Scilab. The initial development of Maxima began in the late 1960s at the Massachusetts Institute of Technology (MIT), and it continues to be maintained today. The original version (a computer algebra system) was called DOE Macsyma and led the way for later development of more commonly known applications such as Mathematica. Maxima provides a nice set of capabilities that you'd expect (such as differential and integral calculus, solving linear systems and nonlinear sets of equations) along with symbolic computing abilities. You can write programs in Maxima using traditional loops and conditionals. You'll also find a hint of Lisp in Maxima (in functions such as quoting): Maxima is written in Lisp, and you can execute Lisp code within a Maxima session. Maxima has a nice online help system that is hypertext based. For example, if you want to know how a particular Maxima function works, you can type example( desolve ) and it provides a number of examples of its use. Maxima also has some interesting features such as rules and patterns. These rules and patterns are used by the simplifier to simplify expressions. Rules can also be used for commutative and noncommutative algebra. Maxima is much like Octave and Scilab in that an interpreter is available to interact with the user, and the results are provided directly in the same window or popped up in another. In Figure 9, I request a plot of a simple 3-D graph.
Figure 9. Interacting with Maxima
The resulting plot is shown in Figure 10.
Figure 10. The resulting Maxima plot from the commands in Figure 9
Open Data Explorer (OpenDX)
An overview of visualization tools wouldn't be complete without a short introduction to Open Data Explorer (OpenDX).
OpenDX is an open source version IBM's powerful visualization data explorer. This tool was first released in 1991 as the Visualization Data Explorer, and is now available as open source for data visualization as well as building flexible applications for data visualization. OpenDX has a number of unique features, but its architecture is worth mentioning. OpenDX uses a client/server model, where the client and server applications can reside on separate hosts. This allows the server to run on a system designed for high-powered number crunching (such as a shared memory multi-processor) with clients running separately on lesser hosts designed more for graphical rendering. OpenDX even allows a problem to be divided amongst a number of servers to be crunched simultaneously (even heterogeneous servers). OpenDX supports a visual data-flow programming model that allows the visualization program to be defined graphically (see Figure 11). Each of the tabs define a "page" (similar to a function). The data is processed by the transformations shown, for example, the middle "Collect" module collects input objects into a group, and then passes them on (in this case, to the "image" module which displays the image and the "AutoCamera" module which specifies how to view the image). Figure 11. Visual Programming with OpenDX OpenDX even includes a module builder that can help you build custom modules. Figure 12 shows a sample image that was produced from OpenDX (this taken from the Physical Oceanography tutorial for OpenDX from Dalhousie University). The data represents land topology data and also water depths (bathemetry). Figure 12. Data Visualization with OpenDX OpenDX is by far the most flexible and powerful data visualizer that I've explored here, but it's also the most complicated. Luckily, numerous tutorials (and books) have been written to bring you up to speed, and are provided in the Related topics section. I've just introduced a few of the open source GNU/Linux visualization tools in this article. Other useful tools include Gri, PGPLOT, SciGraphica, plotutils, NCAR Graphics, and ImLib3D. All are open source, allowing you to see how they work and modify them if you wish. Also, if you're looking for a great graphical simulation environment, check out Open Dynamics Engine (ODE) coupled with OpenGL. Your needs determine which tool is best for you. If you want a powerful visualization system with a huge variety of visualization algorithms, then MayaVi is the one for you. For numerical computation with visualization, GNU Octave and Scilab fit the bill. If you need symbolic computation capabilities, Maxima is a useful alternative. Last, but not least, if basic plotting is what you need, gnuplot works nicely. - The Los Alamos National Laboratories not so Frequently Asked Questions is a great resource for using gnuplot and finding answers to some of the more complicated gnuplot questions. - Browse a large number of Octave scripts, functions, and extensions at the GNU Octave Repository. You'll also find instruction for adding your own recipes to the octave-forge. - The gnuplot home page is the place for gnuplot software downloads and documentation. You can also find a demo gallery to help you figure out what's possible with gnuplot and how to tailor these recipes for your application. - GPlot is a Perl wrapper for Gnuplot. It's written by Terry Gliedt and may help you if you find Gnuplot to be complicated or unfriendly. 
GPlot loses some of the flexibility of Gnuplot, but extends many of the most common options in a much simpler way. - GNU Octave is a high-level language for numerical computation that uses gnuplot as its graphical engine. It's a great alternative to the commercial Matlab software. Its Web site contains downloads and access to a wide range of documentation. - You can download the MayaVi Data Visualizer at SourceForge.net. You can also find documentation here, as well as a list of features that MayaVi provides for VTK. - The Visualization Toolkit (VTK) is a powerful open source software system for 3-D computer graphics, image processing, and visualization. You'll find software, documentation, and lots of helpful links for using VTK on this site. - Scilab is a free scientific software package for numerical computation and graphical visualization. At this site, you'll find the latest version of Scilab, as well as documentation and other user information (such as how to contribute to the project). - Maxima is another alternative to Maple and Mathematica, in addition to the open source alternatives, Octave and Scilab. It has a distinguished lineage and supports not only numerical capabilities, but also symbolic computation with inline Lisp programming. - The Open Data Explorer is an open source version of IBM's powerful data visualization and application development package that's a must for hardcore scientific visualizations. - Data Explorer tutorials, maintained by Dalhousie Physical Oceanography, nicely demonstrate the power provided by DX. - The NCAR Graphics home page provides a stable UNIX package for drawing contours, maps, surfaces, weather maps, x-y plots, and many others. - Gri is a high-level language for scientific graphics programming. You can use it to construct x-y graphs, contour plots, and image graphs with fine control over graphing attributes. - SciGraphica is great for data analysis and technical graphics. - The ImLib3D library is an open source package for 3-D volumetric image processing that strives for simplicity. - ODE is an open physics engine that's perfect for physical systems modeling. Combine this with Open/GL and you have a perfect environment for graphical simulation. - The ROOT system is a newer object-oriented data analysis framework. ROOT is a fully featured framework with over 310 classes of architecture and analysis behaviors. - If your interests lie more with statistics, read this three-part series, "Statistical programming with R." Part 1, "Dabbling with a wealth of statistical facilities" (developerWorks, September 2004), introduces the toolkit's features, Part 2, "Functional programming and data exploration" (developerWorks, October 2004), looks more closely at R language functionality, and Part 3, "Reusable and object-oriented programming" (developerWorks, January 2006) examines R's object-oriented features as well as more of R's general programming concepts. - Python programmers looking for a faster way to process arrays should check out "Numerical Python" (developerWorks, October 2003). - With IBM trial software, available for download directly from developerWorks, build your next development project on Linux.
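The MayaVi section above notes that MayaVi can be imported as a Python module and scripted from the interpreter. As a minimal sketch of what that looks like, here is a plot using the present-day mayavi.mlab interface, which postdates the Tkinter-based version reviewed in this article, so treat the exact calls as an assumption rather than the reviewed API; it renders the same sinc surface plotted with gnuplot earlier.

```python
# Sketch of scripting MayaVi from Python, assuming the modern `mayavi.mlab`
# interface (the article reviews an older Tkinter-based MayaVi, so the exact
# module and function names here are assumptions, not the reviewed API).
import numpy as np
from mayavi import mlab

x, y = np.mgrid[-12:12:0.25, -12:12:0.25]
r = np.sqrt(x**2 + y**2) + 1e-9          # avoid division by zero at the origin
z = np.sin(r) / r                        # the sinc surface plotted with gnuplot earlier

mlab.surf(x, y, z, warp_scale="auto")    # render the surface in a VTK scene
mlab.show()                              # opens the interactive MayaVi window
```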
<urn:uuid:13c387b7-285b-4aa6-9f24-cd4cec0c04b7>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/linux/library/l-datavistools/?S_TACT=105AGX59&S_CMP=GR&ca=dgr-btw01LinuxDataVisTools
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00587-ip-10-171-10-108.ec2.internal.warc.gz
en
0.896847
4,692
2.625
3
The Evolution of Backup
Backup has been through many iterations, and the practice is centuries old.
3000 BC: Clay tablets, used by the Sumerians, record commercial transactions. Backup consists of duplicating the tablets.
2500 BC: Papyrus comes into use in Egypt. Manually copying a papyrus is the only way to safeguard it.
197 BC: Vellum, or parchment, made from animal skin, eventually replaces papyrus. Backup consists of manually making copies, which are sealed in bronze boxes for archiving.
105 AD: Paper is invented. Bookkeeping, financial transactions, and other important business records are kept on paper and stored in safes for protection.
1800s: Typewriters are invented, gaining momentum in the 1860s. Carbon paper is used to create copies of typed documents, which are stored in filing cabinets.
1890s: Herman Hollerith invents the punch card, so that data can be recorded in a medium that can also be read by a machine. Copies of punch cards have to be made for backup.
1900 to 1950s: Punch cards are the primary method for data entry, data storage, and data processing. Cards have to be handled with care and stored accordingly.
1960s: Magnetic tapes start replacing punch cards for data storage. The first backup strategies appear, and backup tapes gain momentum.
1969: The first floppy disk is introduced.
1970s: Tape cartridges and cassette tapes become widely used, especially in home computers, as low-cost data storage.
1980s: CD-R and CD-RW conquer the market for software distribution, slowly replacing floppy disks.
1982: Maynard Electronics is founded and creates Archive, a backup product that is later renamed Backup Exec.
1987: DAT (digital audio tape) hits the market; instead of being adopted by the music industry as first intended, it gains interest from the corporate world for data storage and backup.
Early 1990s: CD drives and the declining cost of rewritable CDs (CD-RW) make the media a viable alternative for file backups; they are later replaced by DVDs.
1994: The IOMEGA Zip Drive, a removable disk with 100 MB and 250 MB capacities, is released and becomes very popular. A year later the IOMEGA Jaz hits the market with 1 GB capacity.
Late 1990s: SANs (storage area networks) gain traction in the corporate world. VTLs (virtual tape libraries) start being used to replace tape libraries.
2000s: The USB flash drive enters the market. Decreasing prices and increasing capacity kill off IOMEGA's removable disks.
Mid-2000s: Thanks to faster USB and FireWire ports, external hard drives gain market share and are increasingly used for backups, especially for personal computers.
Late 2000s: NAS (network-attached storage) grows in reputation in the corporate world. Tape continues to be replaced by disk-to-disk backup strategies.
2005: VTL gains wider adoption, replacing tape libraries in an increasing number of companies. Symantec acquires VERITAS and, with it, Backup Exec.
2006: Amazon launches EC2, the popular cloud computing platform, and S3 (Simple Storage Service) for cloud storage.
2007: Dropbox is founded, making cheap cloud storage available to the masses. Axcient is founded as a new type of cloud platform for data protection and recovery, focusing on the corporate market.
2009: SSD (solid-state drive) shipments exceed 11 million units, showing the technology is gaining traction and wider adoption.
2013: Symantec exits the cloud space, shutting down Backup Exec.cloud.
Gartner coins the term "Recovery-as-a-Service" based on the popularity of cloud-based disaster recovery solutions like Axcient and estimates a projected compound annual growth rate (CAGR) of at least 21% during the next three years.
2015: Gartner releases the first Magic Quadrant for Disaster Recovery-as-a-Service.
2015: IDC publishes the first MarketScape report for DRaaS. DRaaS gains momentum as the next step in the evolution of data protection and recovery.
<urn:uuid:df2b570b-623a-4282-8030-843237cde97d>
CC-MAIN-2017-09
https://axcient.com/the-evolution-of-backup/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00231-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922543
891
3.359375
3
CDC's BioMosaic helps track MERS - By Mark Rockwell - May 16, 2014 A big-data analytics app is helping the Centers for Disease Control and Prevention anticipate the arrival of the next case of the potent Middle East Respiratory Syndrome. The BioMosaic analytic tool integrates demography, migration and health data, and CDC officials said they have used it to analyze which U.S. airports have the most traffic from the Middle East in the springtime months. MERS has appeared in international travelers from the Arabian Peninsula, and international and U.S. health care agencies believe the coronavirus originated in that region. The BioMosaic tool combines information about travel, disease patterns and where groups of people from other countries settle in the U.S. Using that information, CDC said it can direct health information and services where they are needed most. CDC's May 14 Morbidity and Mortality Weekly Report said cases of MERS have been on the upswing since mid-March. The report also noted the use of BioMosaic to monitor the disease's spread. According to CDC, the app brings together complex data from multiple sources into a visual format, including maps. CDC's Division of Global Migration and Quarantine, Harvard University and the University of Toronto launched the BioMosaic Project in 2011. In the report, CDC said it used BioMosaic to analyze International Air Transport Association travel data for May and June from Saudi Arabia and the United Arab Emirates to North America from 2010 to 2012. The CDC's analysis showed that five U.S. cities -- Los Angeles, New York, Boston, Chicago and Washington -- accounted for 75 percent of arrivals from Saudi Arabia and the UAE, with about 100,000 travelers estimated to arrive in those cities in May and June 2014. In the past few months, MERS has spread from the Arabian Peninsula to Europe, Southeast Asia and the United States, where CDC recently confirmed two cases. Infectious disease experts have known about MERS for several years, but there has been an increase in cases in recent months, raising concerns that it could become a global health threat. In the past, CDC has said MERS kills as many as 45 percent of the people it infects. The Transportation Security Administration has begun posting signs at security lines at U.S. airports describing MERS symptoms. As of May 16, the World Health Organization has confirmed more than 570 MERS cases worldwide, including 171 deaths from the disease. CDC confirmed the first U.S. case on May 2 in a traveler who had recently returned from Saudi Arabia. A second imported U.S. case was identified in a traveler from Saudi Arabia and was reported to CDC by the Florida Department of Health on May 11. Mark Rockwell is a staff writer at FCW. Before joining FCW, Rockwell was Washington correspondent for Government Security News, where he covered all aspects of homeland security from IT to detection dogs and border security. Over the last 25 years in Washington as a reporter, editor and correspondent, he has covered an increasingly wide array of high-tech issues for publications like Communications Week, Internet Week, Fiber Optics News, tele.com magazine and Wireless Week. Rockwell received a Jesse H. Neal Award for his work covering telecommunications issues, and is a graduate of James Madison University. Click here for previous articles by Rockwell. Contact him at [email protected] or follow him on Twitter at @MRockwell4.
<urn:uuid:01e72592-245a-4d8c-869b-a6e7cfe7c9d1>
CC-MAIN-2017-09
https://fcw.com/articles/2014/05/16/cdc-biomosaic-mers.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00231-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954645
722
2.8125
3
When using Windows there will ultimately come a time when you need to close a program that is frozen, is malware, or is simply not behaving properly. Unfortunately, sometimes just clicking on the Windows close button does not close a program properly. This guide will teach you how to use the Windows Task Manager to close a program in Windows 10, Windows 8, and Windows 7. To terminate, or close, a program in Windows 10 and Windows 8, press the Ctrl+Alt+Delete keyboard combination to open the Windows security screen. At the security screen, click on the Task Manager button. This will launch the Windows 10/8 Task Manager. You now want to click on the More Details option to show the full list of running processes. Select the process, or program, you wish to terminate by left-clicking on it once so it becomes highlighted. Once you select a process, the End Task button will become available. To terminate the program, click on the End Task button and the program will be terminated. To close, or terminate, a program in Windows 7, press the Ctrl+Alt+Delete keyboard combination to open the Windows 7 security screen. At the security screen, click on the Task Manager button. This will launch the Windows 7 Task Manager. You now want to select the process, or program, you wish to terminate by left-clicking on it once so it becomes highlighted. Once you select a process, the End Process button will become available. To terminate the program, click on the End Process button and the program will be terminated.
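If you need to close a program without going through the Task Manager user interface (for example, from a script), the same "End Task" action can be performed from the command line. The sketch below is a minimal, hypothetical example that calls Windows' built-in taskkill utility from Python; the program name "notepad.exe" is only a placeholder.

```python
import subprocess

def force_close(image_name: str) -> bool:
    """Forcefully terminate every process whose image name matches,
    roughly the command-line equivalent of pressing End Task."""
    # /IM selects processes by image (executable) name, /F forces termination.
    result = subprocess.run(
        ["taskkill", "/IM", image_name, "/F"],
        capture_output=True,
        text=True,
    )
    # A zero exit code means taskkill terminated at least one matching process;
    # a non-zero code means no match was found or access was denied.
    return result.returncode == 0

if __name__ == "__main__":
    if force_close("notepad.exe"):  # placeholder program name
        print("Process terminated.")
    else:
        print("No matching process found or access was denied.")
```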
<urn:uuid:9a5fceba-cb0b-4fe3-81d1-ab5a51c14eb5>
CC-MAIN-2017-09
https://www.bleepingcomputer.com/tutorials/close-a-program-using-task-manager/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00231-ip-10-171-10-108.ec2.internal.warc.gz
en
0.892856
697
2.609375
3
It’s garnered a truckload of consumer media buzz recently because developers claim it’s the most sophisticated augmented reality headset in development so far. Company literature plays up the stereoscopic 3D view that presents unique, parallel three-dimensional images for each eye the same way our eyes perceive images in the real world. The imagery spans approximately 100 degrees, stretching beyond a person’s peripheral view. The consumer gaming community probably won’t be able to enjoy the Rift until at least later this year, but gamers aren’t the only ones interested. Facebook bought Oculus VR earlier this year for $2 billion, presumably to enhance the social networking experience beyond instant messaging and random status updates. But around the world, the military’s already adapted the technology for training and intelligence purposes, indicating that the Rift’s reach will extend to the highest levels of government. Here are three places where trainees are using the headset to hone their skills: - The Norwegian Army is testing the Rift’s application in tank driving scenarios. M-113 drivers navigated using Rift goggles that were connected to image processing software and external cameras that captured tank surroundings. Thanks to the set-up’s situational awareness, vehicle operators could maneuver independently without needing verbal commands. Testers experienced some dizziness and noticed that the goggles lacked screen resolution to see well at distances, but they believe these bugs can be fixed in time. - The United States Navy is using the Rift in a similar fashion to train sailors in Project Blue Shark. Future war fighters could drive or repair ships with three-dimensional awareness while communicating with others in real time thousands of miles away. - It’s no secret that the American military is all about drones these days, and the private sector’s experimentation with the Rift could enhance the drones’ functionality. According to Digital Trends, many camera-equipped unmanned aerial vehicles (UAVs) need two people to operate: one to drive the drone, and one to control the camera. Norwegian researches, however, figured out a way to equip a drone with two cameras that forwarded images to the Rift and moved based on the headset wearer’s head movements. The process is still in its early stages, but refinements could lead to a wave of VR-enabled and controlled drones.
<urn:uuid:b1a832d2-eeeb-442c-9e62-0312e4688090>
CC-MAIN-2017-09
http://www.govtech.com/videos/3-Ways-the-Oculus-Rift-Could-Change-the-Military.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00631-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930881
475
2.703125
3
The Fundamental Characteristics of Storage April 8, 2013 13 Comments Latency is a measurement of delay in a system; so in the case of storage it is the time taken to respond to an I/O request. It’s a term which is frequently misused – more on this later – but when found in the context of a storage system’s data sheet it often means the average latency of a single I/O. Latency figures for disk are usually measured in milliseconds; for flash a more common unit of measurement would be microseconds. IOPS (which stands for I/Os Per Second) represents the number of individual I/O operations taking place in a second. IOPS figures can be very useful, but only when you know a little bit about the nature of the I/O such as its size and randomicity. If you look at the data sheet for a storage product you will usually see a Max IOPS figure somewhere, with a footnote indicating the I/O size and nature. Bandwidth (also variously known as throughput) is a measure of data volume over time – in other words, the amount of data that can be pushed or pulled through a system per second. Throughput figures are therefore usually given in units of MB/sec or GB/sec. As the picture suggests, these properties are all related. It’s worth understanding how and why, because you will invariably need all three in the real world. It’s no good buying a storage system which can deliver massive numbers of IOPS, for example, if the latency will be terrible as a result. The throughput is simply a product of the number of IOPS and the I/O size: Throughput = IOPS x I/O size So 2,048 IOPS with an 8k blocksize is (2,048 x 8k) = 16,384 kbytes/sec which is a throughput of 16MB/sec. The latency is also related, although not in such a strict mathematical sense. Simply put, the latency of a storage system will rise as it gets busier. We can measure how busy the system is by looking at either the IOPS or Throughput figures, but throughput unnecessarily introduces the variable of block size so let’s stick with IOPS. We can therefore say that the latency is proportional to the IOPS: Latency ∝ IOPS I like the mathematical symbol in that last line because it makes me feel like I’m writing something intelligent, but to be honest it’s not really accurate. The proportional (∝) symbol suggests a direct relationship, but actually the latency of a system usually increases exponentially as it nears saturation point. We can see this if we plot a graph of latency versus IOPS – a common way of visualising performance characteristics in the storage world. The graph on the right shows the SPC benchmark results for an HP 3PAR disk system (submitted in 2011). See how the response time seems to hit a wall of maximum IOPS? Beyond this point, latency increases rapidly without the number of IOPS increasing. Even though there are only six data points on the graph it’s pretty easy to visualise where the limit of performance for this particular system is. I said earlier that the term Latency is frequently misused – and just to prove it I misused it myself in the last paragraph. The SPC performance graph is actually plotting response time and not latency. These two terms, along with variations of the phrase I/O wait time, are often used interchangeably when they perhaps should not be. According to Wikipedia, “Latency is a measure of time delay experienced in a system“. If your database needs, for example, to read a block from disk then that action requires a certain amount of time. The time taken for the action to complete is the response time. 
If your user session is subsequently waiting for that I/O before it can continue (a blocking wait) then it experiences I/O wait time which Oracle will chalk up to one of the regular wait events such as db file sequential read. The latency is the amount of time taken until the device is ready to start reading the block, i.e not including the time taken to complete the read. In the disk world this includes things like the seek time (moving the actuator arm to the correct track) and the rotational latency (spinning the platter to the correct sector), both of which are mechanical processes (and therefore slow). When I first began working for a storage vendor I found the intricacies of the terminology confusing – I suppose it’s no different to people entering the database world for the first time. I began to realise that there is often a language barrier in I.T. as people with different technical specialties use different vocabularies to describe the same underlying phenomena. For example, a storage person might say that the array is experiencing “high latency” while the database admin says that there is “high User I/O wait time“. The OS admin might look at the server statistics and comment on the “high levels of IOWAIT“, yet the poor user trying to use the application is only able to describe it as “slow“. At the end of the day, it’s the application and its users that matter most, since without them there would be no need for the infrastructure. So with that in mind, let’s finish off this post by attempting to translate the terms above into the language of applications. Translating Storage Into Application Earlier we defined the three fundamental characteristics of storage. Now let’s attempt to translate them into the language of applications: Latency is about application acceleration. If you are looking to improve user experience, if you want screens on your ERP system to refresh quicker, if you want release notes to come out of the warehouse printer faster… latency is critical. It is extremely important for highly transactional (OLTP) applications which require fast response times. Examples include call centre systems, CRM, trading, e-Business etc where real-time data is critical and the high latency of spinning disk has a direct negative impact on revenue. IOPS is for application scalability. IOPS are required for scaling applications and increasing the workload, which most commonly means one of three things: in the OLTP space, increasing the number of concurrent users; in the data warehouse space increasing the parallelism of batch processes, or in the consolidation / virtualisation space increasing the number of database instances located on a single physical platform (i.e. the density). This last example is becoming ever more important as more and more enterprises consolidate their database estates to save on operational and licensing costs. Bandwidth / Throughput is effectively the amount of data you can push or pull through your system. Obviously that makes it a critical requirement for batch jobs or datawarehouse-type workloads where massive amounts of data need to be processed in order to aggregate and report, or identify trends. Increased bandwidth allows for batch processes to complete in reduced amounts of time or for Extract Transform Load (ETL) jobs to run faster. And every DBA that ever lived at some point had to deal with a batch process that was taking longer and longer until it started to overrun the window in which it was designed to fit… Finally, a warning. 
As with any language there are subtleties and nuances which get lost in translation. The above “translation” is just a rough guide… the real message is to remember that I/O is driven by applications. Data sheets tell you the maximum performance of a product in ideal conditions, but the reality is that your applications are unique to your organisation so only you will know what they need. If you can understand what your I/O patterns look like using the three terms above, you are halfway to knowing what the best storage solution is for you…
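As a minimal illustration of the arithmetic above, the sketch below converts between IOPS, I/O size and throughput, and shows why quoting any one number in isolation can mislead. The figures used are arbitrary examples, not vendor measurements.

```python
def throughput_mb_per_s(iops: float, io_size_kb: float) -> float:
    """Throughput = IOPS x I/O size (result in MB/s, using 1 MB = 1024 KB)."""
    return iops * io_size_kb / 1024.0

def iops_for_throughput(target_mb_per_s: float, io_size_kb: float) -> float:
    """IOPS needed to sustain a target throughput at a given I/O size."""
    return target_mb_per_s * 1024.0 / io_size_kb

# 2,048 IOPS at an 8 KB block size -> 16 MB/sec, as in the example above.
print(throughput_mb_per_s(2048, 8))        # 16.0

# The same 16 MB/sec needs far fewer I/Os at a 1 MB block size...
print(iops_for_throughput(16, 1024))       # 16.0
# ...which is why a "Max IOPS" figure is meaningless without the I/O size.
print(iops_for_throughput(16, 4))          # 4096.0
```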
<urn:uuid:3fe72c07-bcb4-477a-954e-cd4cb5056a2c>
CC-MAIN-2017-09
https://flashdba.com/2013/04/08/the-fundamental-characteristics-of-storage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00155-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945813
1,644
3.125
3
Researchers say they have built a flying robot that mimics the movements of a swimming jellyfish. Researchers say they have built a flying robot. It's not designed to fly like a bird or an insect, but was built to mimic the movements of a swimming jellyfish. Scientists at New York University say they built the small, flying vehicle to move like the boneless, pulsating, water-dwelling jellyfish. Leif Ristroph, a post-doctoral student at NYU and a lead researcher on the project, explained that previous flying robots were based on the flight of birds or insects, such as flies. A researcher at New York University has built a flying robot that mimics the motions of a jelly fish. (Credit: NYU/L. Ristroph) Last spring, for example, Harvard University researchers announced that they had built an insect-like robot that flies by flapping its wings. The flying robot is so small it has about 1/30th the weight of a U.S. penny. Before the Harvard work was announced, researchers at the University of Sheffield and the University of Sussex in England worked together to study the brains of honey bees in an attempt to build an autonomous flying robot. By creating models of the systems in a bee's brain that control vision and sense of smell, scientists hope to build a robot that would be able to sense and act as autonomously as a bee. The problem with those designs, though, is that the flapping wing of a fly is inherently unstable, Ristroph noted. "To stay in flight and to maneuver, a fly must constantly monitor its environment to sense every gust of wind or approaching predator, adjusting its flying motion to respond within fractions of a second," Ristroph said. "To recreate that sort of complex control in a mechanical device -- and to squeeze it into a small robotic frame -- is extremely difficult." To get beyond those challenges, Ristroph built a prototype robot that is 8 centimeters wide and weighs two grams. The robot flies by flapping four wings arranged like petals on a flower that pulsate up and down, resembling the flying motion of a moth. The machine, according to NYU, can hover and fly in a particular direction. There is more work still to be done. Ristroph reported that his prototype doesn't have a battery but is attached to an external power source. It also can't steer, either autonomously or via remote control. The researcher added that the work he's done so far is a blueprint for designing more sophisticated and complex vehicles. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "Flying robot mimics a jellyfish" was originally published by Computerworld.
<urn:uuid:a0aa1a00-257b-4bde-98cb-8a5378a27ad9>
CC-MAIN-2017-09
http://www.networkworld.com/article/2172210/data-center/flying-robot-mimics-a-jellyfish.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00331-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961814
627
3.5625
4
CATV, or Cable TV, is also known as community antenna television. In addition to bringing television programs to the millions of people throughout the world who are connected to a community antenna, cable TV is likely to become a popular way to interact with the World Wide Web and other new forms of multimedia information and entertainment services. CATV systems generally use coaxial cable to transmit TV programs, and optical fiber brings many benefits to data communication in CATV systems. In Cable TV, channels are assigned to different frequencies and modulated onto a single cable, enabling the cable operator to distribute many channels over fiber optic and coaxial cable direct to the home. CATV works by carrying TV channels, FM radio, data services and telephony over a single wire.
- CATV services enable viewers to choose from a list of TV shows, such as movies, sports or other preferences, and watch according to their wishes.
- There is no overbuying of channels with CATV. It suits subscribers who want only a few favorite channels, because CATV allows users to choose their favorite channels instead of paying for many unwanted channels bundled in a package.
- CATV has made telephony services possible along the same cable.
- Though CATV service is rarely interrupted, thanks to the coaxial cables and optical fibers, picture quality may sometimes be affected. However, it is hardly affected by bad weather.
- No converter needed
- Easy installation
- Uses many amplifiers, which reduce signal quality and are not easy to repair
- Subject to EMI distortion
- Uses a tree-and-branch structure
- Not easy to expand
<urn:uuid:23615f7f-5c4f-446f-ad46-080713696a17>
CC-MAIN-2017-09
http://www.fs.com/blog/catv-cable-tv.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00624-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943551
344
2.8125
3
Florida gets technical with evacuation maps Project will use light detection data to estimate storm surge depths - By Rutrell Yasin - Oct 18, 2010 Florida emergency management officials have started a mapping project to update regional evacuation studies. The Florida Coastal Mapping project combines data collection with disaster preparedness by collecting Light Detection and Ranging (LIDAR) data — which uses light detection to capture information — for coastal counties. The data could be run through a computerized model to estimate storm surge depths from hurricanes and used to develop new regional evacuation plans, according to a story in GovTech. In Florida, many regions’ hurricane evacuation studies haven’t been updated since the 1990s, according to the Florida Division of Emergency Management’s (FDEM) website. The agency plans to use the new information gathered from the mapping project to refresh the State Regional Evacuation Studies by the end of the year, the Government Technology article states. “The State Regional Evacuation Studies will be used by every emergency management entity in Florida as the basis for developing evacuation and protective measure plans, shelter planning and identifying coastal high hazards zones,” FDEM spokeswoman Lauren McKeague wrote in an e-mail to Government Technology “Additionally the studies will be used by all the state’s growth management agencies to identify impacts to public safety plans and to address growth management standards put in place by the Florida Legislature, including traffic and other future land use planning.” The new LIDAR data will also be available to other agencies in need of quality land contour data, according to McKeague. NASA data lets scientists map forests (and the trees) Geospatial apps help temper Mother Nature's fury The next step is for the local emergency managers to incorporate data from the mapping project into their planning. The information can be run through the Sea, Lake and Overland Surges from Hurricanes computerized model that evaluates the threats from a hurricane storm surge and tells officials which areas need to be evacuated. Organizations are finding other ways to use LIDAR data such as to measure carbon cycles and predict forest fires. For example, scientists have developed the world’s first global forest height data map using satellite data from NASA. Michael Lefsky of Colorado State University created the map using data collected by NASA's ICESat, Terra, and Aqua satellites. The map will be used to determine how much carbon the world’s forests store and how fast that carbon cycles through ecosystems and back into the atmosphere. It also will help scientists predict the spread and behavior of fires and understand the suitability of species to specific forests. To create the map, Lefsky combined data collected from LIDAR laser technology and the Moderate Resolution Imaging Spectroradiometer, a satellite instrument aboard both the Terra and Aqua satellites. LIDAR uses laser pulses to determine distance -- in this case, the distance between the ground and the top of the trees. Rutrell Yasin is is a freelance technology writer for GCN.
<urn:uuid:001d401b-459c-4308-bad4-51ecaa9252d9>
CC-MAIN-2017-09
https://gcn.com/articles/2010/10/18/florida-mapping-project.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00500-ip-10-171-10-108.ec2.internal.warc.gz
en
0.891085
627
2.734375
3
Any talk of what the future holds needs to include a look at a fully-connected world. In all honesty, it’s not hard to imagine this particular scenario as we’ve already experienced the beginning of it. The internet has ushered in a new era of connectivity allowing people to communicate in ways never before thought possible, and technological advances have led to a surge of products taking advantage of this capability. Just think of your smartphone and how easy it is to connect to the web at the push of a button. Or perhaps you have a wearable like a FitBit or Apple Watch, with the ability to track your vital signs and relay them to a central server. These are only glimpses at the potential inherent in a connected world, one which most experts believe could have 50 billion things connected to the internet by the end of the decade. A world that connected is one that your business should be a part of. What exactly does a connected world look like? For one thing, it will involve extensive use of cloud computing. We’ve already gotten a taste of how versatile and useful the cloud is in the past few years — it’s gotten to the point where we barely think about what goes into receiving and sending emails. Consider all of the other cloud-based applications you use and realize there will be much more of that in the future with nearly every program and tool — from project management to research collaboration to big data lakes — powered through cloud computing. But that’s just the tip of the iceberg. A connected future definitely involves the Internet of Things (IoT), a term used to describe how everyday objects will have connections to the web allowing them to communicate with users and each other. IoT is in its infancy at the moment with self-driving cars part of it. But the future will have streets full of these cars, as well as appliances embedded with internet capabilities, from telling us when we’re about to run out of laundry detergent to alerting us when repairs are needed. IoT will also have a macro level presence. Whole cities will be filled with sensor embedded devices, from stoplights to street signs. This will affect traffic flow, waste and pollution management, and even crime prevention. A connected future means being within arm’s reach of a connected device, a thought that thrills some and unnerves others. Either way, it’s going to happen, likely sooner than most think. With so much promise and emphasis stemming from the connected world idea, it only makes sense businesses have much to gain. Even if your company isn’t currently related to technology, the connected future will affect you. Customers will expect you to feature products or services that take advantage of this technology, even if it seems like such a feature would feel tacked on. In cases where embedding web-connected sensors in your products isn’t feasible, the connected advantage can still play a role in your company by ensuring you’re able to respond to problems in quick fashion. A complaint on a social media platform, for example, can be answered and rectified almost instantaneously. The connected future represents a giant puzzle, and allowing your company to be a piece in that puzzle will satisfy customers and help your organization grow. A fully-connected world is the future after all, and becoming a part of it early will allow you to grow and succeed with it. The ensuing years will be times of adjustment as companies figure out how best to use this new level of connectivity. 
But if one thing is certain, there's no stopping the world from becoming more connected. The possibilities that will result from this concept becoming a reality are nearly endless. It's time to jump at the opportunity and make the most of it.
<urn:uuid:3dc1904c-39f5-4b9c-8555-37034975c3ec>
CC-MAIN-2017-09
https://www.bsminfo.com/doc/a-connected-future-what-it-will-look-like-and-why-you-should-care-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00552-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958591
774
2.75
3
by Douglas Comer, Purdue University Traditional packet-processing systems use an approach known as demultiplexing to handle incoming packets (refer to for details). When a packet arrives, protocol software uses the contents of a Type Field in a protocol header to decide how to process the payload in the packet. For example, the Type field in a frame is used to select a Layer 3 module to handle the frame, as Figure 1 illustrates. Demultiplexing is repeated at each level of the protocol stack. For example, IPv6 uses the Next Header field to select the correct transport layer protocol module, as Figure 2 illustrates. Modern, high-speed network systems take an entirely different view of packet processing. In place of demultiplexing, they use a technique known as classification . Instead of assuming that a packet proceeds through a protocol stack one layer at a time, they allow processing to cross layers. (In addition to being used by companies such as Cisco and Juniper, classification has been used in Linux and with network processors by companies such as Intel and Netronome .) Packet classification is especially pertinent to three key network technologies. First, Ethernet switches use classification instead of demultiplexing when they choose how to forward packets. Second, a router that sends incoming packets over Multiprotocol Label Switching (MPLS) tunnels uses classification to choose the appropriate tunnel. Third, classification provides the basis for Software-Defined Networking (SDN) and the OpenFlow protocol. Motivation for Classification To understand the motivation for classification, consider a network system that has protocol software arranged in a traditional layered stack. Packet processing relies on demultiplexing at each layer of the protocol stack. When a frame arrives, protocol software looks at the Type field to learn about the contents of the frame payload. If the frame carries an IP datagram, the payload is sent to the IP protocol module for processing. IP uses the destination address to select a next-hop address. If the datagram is in transit (that is, passing through the router on its way to a destination), IP forwards the datagram by sending it back out one of the interfaces. A datagram reaches TCP only if the datagram is destined for the router itself. TCP then uses the protocol port numbers in the TCP segment to further demultiplex the incoming datagram among multiple application programs. To understand why traditional layering does not solve all problems, consider MPLS processing. In particular, consider a router at the border between a traditional internet and an MPLS core. Such a router must accept packets that arrive from the traditional internet and choose an MPLS path over which to send the packet. Why is layering pertinent to path selection? In many cases, network managers use transport layer protocol port numbers when choosing a path. For example, suppose a manager wants to send all web traffic down a specific MPLS path. All the web traffic will use TCP port 80, meaning that the selection must examine TCP port numbers. Unfortunately, in a traditional demultiplexing scheme, a datagram does not reach the transport layer unless the datagram is destined for the local network system. Therefore, protocol software must be reorganized to handle MPLS path selection. 
We can summarize: A traditional protocol stack is insufficient for the task of MPLS path selection because path selection often involves transport layer information and a traditional stack will not send transit datagrams to the transport layer. Classification Instead of Demultiplexing How should protocol software be structured to handle tasks such as MPLS path selection? The answer lies in the use of classification. A classification system differs from conventional demultiplexing in two ways: To understand classification, imagine a packet that has been received at a router and placed in memory. Encapsulation means that the packet will have a set of contiguous protocol headers at the beginning. For example, Figure 3 illustrates the headers in a TCP packet (for example, a request sent to a web server) that has arrived over an Ethernet. Given a packet in memory, how can we quickly determine whether the packet is destined to the web? A simplistic approach simply looks at one field in the headers: the TCP destination port number. However, it could be that the packet is not a TCP packet at all. Maybe the frame is carrying Address Resolution Protocol (ARP) data instead of IP. Or maybe the frame does indeed contain an IP datagram, but instead of TCP the transport layer protocol is the User Datagram Protocol (UDP). To make certain that it is destined for the web, software needs to verify each of the headers: the frame contains an IP datagram, the IP datagram contains a TCP segment, and the TCP segment is destined for the web. Instead of parsing protocol headers, think of the packet as an array of octets in memory. Consider IPv4 as an example. To be an IPv4 datagram, the Ethernet Type field (located in array positions 12 and 13) must contain 0x0800. The IPv4 Protocol field, located at position 23, must contain 6 (the protocol number for TCP). The Destination Port field in the TCP header must contain 80. To know the exact position of the TCP header, we must know the size of the IP header. Therefore, we check the header length octet of the IPv4 header. If the octet contains 0x45, the TCP destination port number will be found in array positions 36 and 37. As another example, consider classifying Voice over IP (VoIP) traffic that uses the Real-Time Transport Protocol (RTP). Because RTP is not assigned a specific UDP port, vendors use a heuristic to determine whether a given packet carries RTP traffic: check the Ethernet and IP headers to verify that the packet carries UDP, and then examine the octets at a known offset in the RTP packet to verify that the value matches the value used by a known codec. Observe that all the checks described in the preceding paragraphs require only array lookup. That is, the lookup mechanism treats the packet as an array of octets and merely checks to verify that location X contains value Y, location Z contains value W, and so on—the mechanism does not need to understand any of the protocol headers or the meaning of values. Furthermore, observe that the lookup scheme crosses multiple layers of the protocol stack. We use the term classifier to describe a mechanism that uses the lookup approach described previously, and we say that the result is a packet classification. In practice, a classification mechanism usually takes a list of classification rules and applies them until a match is found. For example, a manager might specify three rules: send all web traffic to MPLS path 1, send all FTP traffic to MPLS path 2, and send all VPN traffic to MPLS path 3. 
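As a rough illustration of the array-lookup idea described above, the sketch below applies the web-traffic check to a frame held in memory as a byte array. It hard-codes the offsets quoted in the text (Ethernet Type at octets 12 and 13, the IPv4 Protocol field at octet 23, and the TCP destination port at octets 36 and 37 when the IPv4 header-length octet is 0x45). It is a simplified teaching sketch, not production classifier code, and it ignores VLAN tags and IP options beyond that single header-length check.

```python
def is_web_traffic(frame: bytes) -> bool:
    """Classify an Ethernet frame as web traffic (TCP destination port 80)
    by checking fixed byte offsets, without demultiplexing layer by layer."""
    if len(frame) < 38:
        return False
    # Octets 12-13: Ethernet Type must be 0x0800 (IPv4).
    if frame[12:14] != b"\x08\x00":
        return False
    # Octet 14: IPv4 version/header length; 0x45 means a 20-byte IP header.
    if frame[14] != 0x45:
        return False
    # Octet 23: IPv4 Protocol field; 6 means TCP.
    if frame[23] != 6:
        return False
    # Octets 36-37: TCP destination port, big-endian; 80 means web traffic.
    return int.from_bytes(frame[36:38], "big") == 80

# A classifier applies a list of such rules in order until one matches.
rules = [
    (is_web_traffic, "send to MPLS path 1"),
    # (is_ftp_traffic, "send to MPLS path 2"), ... further rules would follow
]

def classify(frame: bytes) -> str:
    for test, action in rules:
        if test(frame):
            return action
    return "pass to normal protocol stack"
```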
Layering When Classification Is Used If classification crosses protocol layers, how does it relate to traditional layering diagrams? We can think of classification as an extra layer that has been squeezed between Layer 2 and Layer 3. When a packet arrives, the packet passes from a Layer 2 module to the classification module. All packets proceed to the classifier; no demultiplexing occurs before classification. If any of the classification rules matches the packet, the classification layer follows the rule. Otherwise, the packet proceeds up the traditional protocol stack. For example, Figure 4 illustrates layering when classification is used to send some packets across MPLS paths. Interestingly, a classification layer can subsume all demultiplexing. That is, instead of classifying packets only for MPLS paths, the classifier can be configured with additional rules that check the Type field in a frame for IPv4, IPv6, ARP, Reverse ARP (RARP), and so on. Classification Hardware and Network Switches The text in the previous section describes a classification mechanism that is implemented in software—an extra layer is added to a software protocol stack that classifies frames after they arrive at a router. Classification can also be implemented in hardware. In particular, Ethernet switches and other packet-processing hardware devices contain classification hardware that allows packet classification and forwarding to proceed at high speed. The next sections explain hardware classification mechanisms. We think of network devices, such as switches, as being divided into broad categories by the level of protocol headers they examine and the consequent level of functions they provide: A Layer 2 Switch examines the Media Access Control (MAC) source address in each incoming frame to learn the MAC address of the computer that is attached to each port. When a switch learns the MAC addresses of all the attached computers, the switch can use the destination MAC address in each frame to make a forwarding decision. If the frame is unicast, the switch sends only one copy of the frame on the port to which the specified computer is attached. For a frame destined to the broadcast or a multicast address, the switch delivers a copy of the frame to all ports. A VLAN Switch adds one level of virtualization by permitting a manager to assign each port to a specific VLAN. Internally, VLAN switches extend forwarding in a minor way: instead of sending broadcasts and multicasts to all ports on the switch, a VLAN switch consults the VLAN configuration and sends them only to ports on the same VLAN as the source. A Layer 3 Switch acts like a combination of a VLAN switch and a router. Instead of using only the Ethernet header when forwarding a frame, the switch can look at fields in the IP header. In particular, the switch watches the source IP address in incoming packets to learn the IP address of the computer attached to each switch port. The switch can then use the IP destination address in a packet to forward the packet to its correct destination. A Layer 4 Device extends the examination of a packet to the transport layer. That is, the device can include the TCP or UDP Source and Destination Port fields when making a forwarding decision. Switching Decisions and VLAN Tags All types of switching hardware described previously use classification. That is, switches operate on packets as if a packet is merely an array of octets, and individual fields in the packet are specified by giving offsets in the array. 
Thus, instead of demultiplexing packets, a switch treats a packet syntactically by applying a set of classification rules similar to the rules described previously. Surprisingly, even VLAN processing is handled in a syntactic manner. Instead of merely keeping VLAN information in a separate data structure that holds meta information, the switch inserts an extra field in an incoming packet and places the VLAN number of the packet in the extra field. Because it is just another field, the classifier can reference the VLAN number just like any other header field. We use the term VLAN Tag to refer to the extra field inserted in a packet. The tag contains the VLAN number that the manager assigned to the port over which the frame arrived. For Ethernet, IEEE standard 802.1Q specifies placing the VLAN Tag field after the MAC Source Address field. Figure 5 illustrates the format. A VLAN tag is used only internally—after the switch has selected an output port and is ready to transmit the frame, the tag is removed. Thus, when computers send and receive frames, the frames do not contain a VLAN tag. An exception can be made to the rule: a manager can configure one or more ports on a switch to leave VLAN tags in frames when sending the frame. The purpose is to allow two or more switches to be configured to operate as a single, large switch. That is, the switches can share a set of VLANs—a manager can configure each VLAN to include ports on one or both of the switches. We can think of hardware in a switch as being divided into three main components: a classifier, a set of units that perform actions, and a management component that controls the overall operation. Figure 6 illustrates the overall organization and the flow of packets. As black arrows in the figure indicate, the classifier provides the high-speed data path that packets follow. When a packet arrives, the classifier uses the rules that have been configured to choose an action. The management module usually consists of a general-purpose pro-cessor that runs management software. A network administrator can interact with the management module to configure the switch, in which case the management module can create or modify the set of rules the classifier follows. A network system, such as a switch, must be able to handle two types of traffic: transit traffic and traffic destined for the switch itself. For example, to provide management or routing functions, a switch may have a local TCP/IP protocol stack and packets destined for the switch must be passed to the local stack. Therefore, one of the actions a classifier takes may be "pass packet to the local stack for Demultiplexing". High-Speed Classification and TCAM Modern switches can allow each interface to operate at 10 Gbps. At 10 Gbps, a frame takes only 1.2 microseconds to arrive, and a switch usually has many interfaces. A conventional processor cannot handle classification at such speeds, so a question arises: how can a hardware classifier achieve high speed? The answer lies in a hardware technology known as Ternary Content Addressable Memory (TCAM). TCAM uses parallelism to achieve high speed—instead of testing one field of a packet at a given time, TCAM checks all fields simultaneously. Furthermore, TCAM performs multiple checks at the same time. To understand how TCAM works, think of a packet as a string of bits. We imagine TCAM hardware as having two parts: one part holds the bits from a packet and the other part is an array of values that will be compared to the packet. 
Entries in the array are known as slots. Figure 7 illustrates the idea. In the figure, each slot contains two parts. The first part consists of hardware that compares the bits from the packet to the pattern stored in the slot. The second part stores a value that specifies an action to be taken if the pattern matches the packet. If a match occurs, the slot hardware passes the action to the component that checks all the results and announces an answer. One of the most important details concerns the way TCAM handles multiple matches. In essence, the output circuitry selects one match and ignores the others. That is, if multiple slots each pass an action to the output circuit, the circuit accepts only one and passes the action as the output of the classification. For example, the hardware may choose the lowest slot that matches. In any case, the action that the TCAM announces corresponds to the action from one of the matching slots. The figure indicates that a slot holds a pattern rather than an exact value. Instead of merely comparing each bit in the pattern to the corresponding bit in the packet, the hardware performs a pattern match. The adjective ternary is used because each bit position in a pattern can have three possible values: a one, a zero, or a "don't care". When a slot compares its pattern to the packet, the hardware checks only the one and zero bits in the pattern—the hardware ignores pattern bits that contain "don't care". Thus, a pattern can specify exact values for some fields in a packet header and omit other fields. To understand TCAM pattern matching, consider a pattern that identifies IP packets. Identifying such packets is easy because an Ethernet frame that carries an IPv4 datagram will have the value 0x0800 in the Ethernet Type field. Furthermore, the Type field occupies a fixed position in the frame: bits 96 through 111. Thus, we can create a pattern that starts with 96 "don't care" bits (to cover the Ethernet destination and source MAC addresses) followed by 16 bits with the binary value 0000100000000000 (the binary equivalent of 0x0800) to cover the Type field. All remaining bit positions in the pattern will be "don't care". Figure 8 illustrates the pattern and example packets. Although a TCAM hardware slot has one position for each bit, the figure does not display individual bits. Instead, each box corresponds to one octet, and the value in a box is a hexadecimal value that corresponds to 8 bits. We use hexadecimal simply because binary strings are too long to fit into a figure comfortably. The Size of a TCAM A question arises: how large is a TCAM? The question can be divided into two important aspects: A switch can also use patterns to control broadcasting. When a manager configures a VLAN, the switch can add an entry for the VLAN broadcast. For example, if a manager configures VLAN 9, an entry can be added in which the destination address bits are all 1s (that is, the Ethernet broadcast address) and the VLAN tag is 9. The action associated with the entry is "broadcast on VLAN 9". A Layer 3 switch can learn the IP source address of computers attached to the switch, and can use TCAM to store an entry for each IP address. Similarly, it is possible to create entries that match Layer 4 protocol port numbers (for example, to direct all web traffic to a specific output). SDN technologies allow a manager to place patterns in the classifier to establish paths through a network and direct traffic along the paths. 
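To make the ternary idea concrete, the sketch below models a TCAM slot in software as a value/mask pair: mask bits set to 1 are compared, and mask bits set to 0 are "don't care". Real TCAM performs all slot comparisons in parallel in hardware; this sequential loop is only a hypothetical illustration of the matching rule, using the IPv4-over-Ethernet pattern from Figure 8.

```python
class Slot:
    """One TCAM slot: compare only the bits selected by the mask."""
    def __init__(self, value: bytes, mask: bytes, action: str):
        self.value, self.mask, self.action = value, mask, action

    def matches(self, packet: bytes) -> bool:
        if len(packet) < len(self.value):
            return False
        # Bits beyond len(self.value) are implicitly "don't care".
        return all((p & m) == (v & m)
                   for p, v, m in zip(packet, self.value, self.mask))

def tcam_lookup(slots, packet: bytes):
    """Return the action of the first (lowest) matching slot, mimicking the
    output circuitry that picks one result when several slots match."""
    for slot in slots:
        if slot.matches(packet):
            return slot.action
    return None

# Pattern for "frame carries IPv4": 12 don't-care octets (the MAC addresses),
# then the Ethernet Type field must equal 0x0800; everything after is don't care.
ipv4_slot = Slot(
    value=bytes(12) + b"\x08\x00",
    mask=bytes(12) + b"\xff\xff",
    action="hand to IPv4 forwarding",
)

slots = [ipv4_slot]   # a real table would hold thousands of such entries
```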
Because such classification rules cross multiple layers of the protocol stack, the potential number of items stored in a TCAM can be large. TCAM seems like an ideal mechanism because it is both extremely fast and versatile. However, TCAM has two significant drawbacks: cost and heat. The cost is high because TCAM has parallel hardware for each slot and the overall system is designed to operate at high speed. In addition, because it operates in parallel, TCAM consumes much more energy than conventional memory (and generates more heat). Therefore, designers minimize the amount of TCAM to keep costs and power consumption low. A typical switch has 32,000 entries. Classification-Enabled Generalized Forwarding Perhaps the most significant advantage of a classification mechanism arises from the generalizations it enables. Because classification examines arbitrary fields in a packet before any demultiplexing occurs, cross-layer combinations are possible. For example, classification can specify that all packets from a given MAC address should be forwarded to a specific output port regardless of the packet contents. In addition, classification can make forwarding decisions depend on combinations of source and destination. An Internet Service Provider (ISP) can choose to forward all packets with IP source address X that are destined for web server W along one path while forwarding packets with IP source address Y that are destined to the same web server along another path. ISPs need the generality that classification offers to handle traffic engineering that is not usually available in a conventional protocol stack. In particular, classification allows an ISP to offer tiered services in which the path a packet follows depends on a combination of the type of traffic and how much the customer pays. Classification is a fundamental performance optimization that allows a packet-processing system to cross layers of the protocol stack without demultiplexing. A classifier treats each packet as an array of bits and checks the contents of fields at specific locations in the array. Classification offers high-speed forwarding for network systems such as Ethernet switches and routers that send packets across MPLS tunnels. To achieve the highest speed, classification can be implemented in hardware; a hardware technology known as TCAM is especially useful because it employs parallelism to perform classification at extremely high speed. The generalized forwarding capabilities that classification provides allow ISPs to perform traffic engineering. When making a forwarding decision, a classification mechanism can use the source of a packet as well as the destination (for example, to choose a path based on the tier of service to which a customer subscribes). Material in this article has been taken with permission from Douglas E. Comer, Internetworking With TCP/IP Volume 1: Principles, Protocols, and Architecture, Sixth edition, 2013.
<urn:uuid:904a0e9d-8980-4fcc-b4b8-f0276145a557>
CC-MAIN-2017-09
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-58/154-packet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00548-ip-10-171-10-108.ec2.internal.warc.gz
en
0.894321
4,127
4
4
Developing countries risk creating giant mountains of electronic waste as their consumption of PCs and gadgets increases, the UN has warned. According to a new report from the United Nation Environment Programme, certain parts of Asia, Africa and Latin America are set to see a rise in sales of electronics over the coming decade. And unless countries such as India and China step up measures to properly collect and recycle these materials, the resulting waste poses a substantial risk to public health and the environment. Issued at a meeting of world chemical authorities, the report took data from 11 developing countries to estimate current and future e-waste generation. This includes old desktop and notebook computers, printers, mobile phones, pagers, digital cameras and mp3 players. The UNEP predicts that in India e-waste from old computers will have shot up by 500 per cent by 2010, compared to 2007 levels. In South Africa and China this increase is predicted to be between 200 and 400 per cent. According to the report, most e-waste in China is improperly handled, with much of it incinerated by backyard recyclers to recover precious metals like gold.
<urn:uuid:7b4c723d-ea30-44c2-a6d2-e0351a762222>
CC-MAIN-2017-09
http://www.pcr-online.biz/news/read/developing-nations-could-face-e-waste-mountains/022737
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00016-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929917
232
3.03125
3
iSCSI – What Does It Mean for Your Storage Network? iSCSI Deployment Examples Now, let's look at a few iSCSI deployment examples. The examples are as follows: - Network Storage Services via iSCSI. - Multiple Cards to Single iSCSI Router. - iSCSI HBA and Fibre Channel Tape Backup. Network Storage Services via iSCSI Two iSCSI HBAs can be used in conjunction with standard Ethernet NICs through a Gigabit-capable switch connected to an iSCSI-capable Redundant Array of Inexpensive Disks (RAID) array. This configuration is appropriate as either the next step in transitioning to an iSCSI-exclusive SAN or as an initial iSCSI SAN configuration. Multiple Cards to Single iSCSI Router Multiple HBAs in separate servers can be used in conjunction with a Gigabit-capable switch connected to an iSCSI-capable router with Fibre Channel ports. This is then connected directly to a native Fibre Channel RAID array. This configuration is also appropriate as the next step in transitioning to an iSCSI front-end SAN with Fibre Channel storage devices. iSCSI HBA and Fibre Channel Tape Backup An iSCSI HBA can be used in conjunction with a Gigabit-capable switch connected to an iSCSI-capable router with Fibre Channel ports connected to a Fibre Channel tape drive. This configuration can be used as a means to perform backup and recovery by using an existing Ethernet infrastructure. Next, let's discuss what the availability of iSCSI will mean to customers who had considered storage networking too expensive. In other words, what are the advantages of iSCSI SANs? As previously explained, iSCSI is an end-to-end protocol for transporting storage I/O block data over an IP network. The protocol is used on servers (initiators), storage devices (targets), and protocol transfer gateway devices. iSCSI uses standard Ethernet switches and routers to move the data from server to storage. It also enables IP and Ethernet infrastructure to be used for expanding access to SAN storage and extending SAN connectivity across any distance. The technology is based on SCSI commands used in storage traffic today and IP protocols for networking. Leveraging the Best from Storage Networking iSCSI builds on the two most widely used protocols from the storage and the networking worlds. From the storage side, iSCSI uses the SCSI command set, the core storage commands used throughout all storage configurations. On the networking side, iSCSI uses IP and Ethernet, which are the basis for most enterprise networks, and whose use in metropolitan and wide area networks is increasing as well. With almost 30 years of research, development and integration, IP networks provide the utmost in manageability, interoperability and cost-effectiveness. 10GbE Enabling iSCSI and Its Advantages As the demand for bandwidth increases for storage and networking applications, Gigabit Ethernet technology provides the right path. However, to make these applications mainstream, 10GbE is needed for the following reasons: First of all, a 10 Gigabit Ethernet network will have the capability to provide solutions for unified storage and networking applications. With networking applications requiring gigabits of throughput and storage generating terabits of transactions, existing gigabit networks will max out. 10GbE, however, will be able to sustain the lower latencies and high performance needed for these applications. Second, with regard to interchangeability and interoperability of equipment, the Fibre Channel model is not optimized for connectivity of multiple-vendor devices.
With 802.3 standards-based products, Ethernet has continued to provide solutions that can connect systems from multiple vendors, thus allowing for a better cost model and a variety of vendors for the end user to choose from. Third would be the consolidation of the SAN and network-attached storage (NAS) markets. The final reason why 10GbE is needed is the ability to connect Fibre Channel SAN islands through IP; a link greater than the gigabit interfaces in the SAN islands is required. So, with the preceding in mind, what are the advantages of iSCSI SANs? Building iSCSI SANs with 10GbE: A Data Center Approach An iSCSI SAN is a perfect choice for a user interested in moving to networked storage. Using the same block-level SCSI commands as direct-attached storage, iSCSI provides compatibility with user applications such as file systems, databases, and web serving. Similarly, since iSCSI runs on ubiquitous and familiar IP networks, there is no need to learn a new networking infrastructure to realize SAN benefits. To build an iSCSI storage network in a data center, iSCSI host bus adapters can be used in servers, along with iSCSI storage devices and a combination of switches and routers. An iSCSI SAN is an optimal choice for a user interested in moving to IP Storage. iSCSI is like one more application on top of the network protocol stack, so it is not only compatible with the existing networking architecture but also maintains the same block-level SCSI commands. This capability allows the information technology (IT) staff to transition from the direct-attached storage (DAS) model to the iSCSI SAN model. By adding the storage traffic to the existing network, the IT staff doesn't need any additional training to manage the networks for IP Storage. In a typical data center, servers update and retrieve data from storage devices located remotely at gigabit speeds, and consolidated storage serves multiple servers at the same time. In the same environment, network traffic is processed at gigabit speeds. The IT staff has the challenging task of supporting the growing storage and network requirements. Though gigabit networks are being deployed widely, they cannot solve all the problems: storage networks have low-latency and high-bandwidth requirements, and iSCSI at 10 Gigabit Ethernet is the answer to these requirements. 10GbE provides a smooth transition for the existing storage networking infrastructure to higher speeds. Applications like synchronous mirroring demand low latency, and file serving needs high bandwidth. By using a host bus adapter (HBA) that supports both the network protocols and the iSCSI protocols, both the SAN and NAS environments can be consolidated. The 10GbE networks facilitate the high bandwidth and low latency required in this environment, thereby resulting in improved application response time. This would consist of the following data center applications: - Server and storage consolidation. - Accelerated Backup Operations. - Seamless Remote Site Access and Storage Outsourcing. Server and Storage Consolidation With a networked storage infrastructure, customers can link multiple storage devices to multiple servers. This allows for better resource utilization, ease of storage management, and simpler expansion of the storage infrastructure. Accelerated Backup Operations Backup operations previously restricted to operating across traditional IP LANs at the file level can now operate across IP Storage networks at the block level.
This shift facilitates faster backup times and provides customers the flexibility to use shared or dedicated IP networks for storage operations. Seamless Remote Site Access and Storage Outsourcing With the storage network based on IP, customers can easily enable remote access to secondary sites across metropolitan or wide area IP networks. The remote sites can be used for off-site backup, clustering, or mirroring replication. Additionally, customers can choose to link to storage service providers for storage outsourcing applications such as storage-on-demand.
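As a purely illustrative aside, the core claim above (that iSCSI carries ordinary block-level read and write commands over a TCP/IP connection rather than over a dedicated Fibre Channel fabric) can be sketched in a few lines of Python. The framing below is invented for illustration and is not the real iSCSI PDU format; the target address and block geometry are hypothetical, and only the registered iSCSI TCP port (3260) and the SCSI READ(10) opcode value are drawn from the real protocols.

```python
# Toy sketch of the layering idea behind iSCSI: a block-read command carried
# over a plain TCP connection. This is NOT the real iSCSI PDU layout.
import socket
import struct

TARGET = ("192.0.2.50", 3260)   # 3260 is the registered iSCSI port; the address is made up
BLOCK_SIZE = 512

def read_request(lba, count):
    """Pack a made-up read command: opcode, logical block address, block count."""
    return struct.pack("!BQI", 0x28, lba, count)   # 0x28 echoes the SCSI READ(10) opcode

def read_blocks(lba, count):
    with socket.create_connection(TARGET) as sock:
        sock.sendall(read_request(lba, count))
        want, data = count * BLOCK_SIZE, b""
        while len(data) < want:
            chunk = sock.recv(want - len(data))
            if not chunk:
                break
            data += chunk
    return data

# data = read_blocks(lba=2048, count=8)   # would need a cooperating toy "target" listening
```

Because everything rides on standard TCP/IP, the same request can traverse an ordinary Gigabit or 10GbE switch, which is why the deployment examples above need no special-purpose fabric between server and storage.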
<urn:uuid:f7f7a4bc-7ea8-4fa1-9ed3-333bc33df8c5>
CC-MAIN-2017-09
http://www.enterprisestorageforum.com/ipstorage/features/article.php/11567_1547831_2/iSCSI-ndash-What-Does-It-Mean-for-Your-Storage-Network.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00544-ip-10-171-10-108.ec2.internal.warc.gz
en
0.910714
1,549
2.859375
3
We've been told for a few years now that the Internet of Things -- common household, industrial and public devices enabled with sensors -- will transform how we work, play and interact with the world around us. What we haven't heard a lot of detail about is how the Internet of Things will actually operate: How will information be transferred from IoT sensors to other devices and computers? How and with what will the sensors be programmed? How will we strike the balance between accessibility and security? It's also a potentially self-serving initiative, as the service dispenses with the need to download apps in order to interact with an IoT device. Rather, The Physical Web relies on a URL-based identity and communication system. "The Physical Web must be an open standard that everyone can use," Google writes. "The number of smart devices is going to explode, and the assumption that each new device will require its own application just isn't realistic. We need a system that lets anyone interact with any device at any time." That all makes perfect sense to me, and The Physical Web's operational model -- as a "discovery service where URLs are broadcast and any nearby device can receive them" -- is much more likely to scale successfully with the billions of smart devices expected to populate the IoT. In Google's vision, people will be able to use vending machines, rental cars, appliances, devices in retails stores, and thousands of other objects that contain URL-accessible functions, features and information. "Once any smart device can have a web address, the entire overhead of an app seems a bit backward," Google says. It's also worth noting that once a smart device has a web address, it can be cataloged and mined for information by the Internet's largest collector and monetizer of information -- Google. Many of us -- myself included -- have made a permanent devil's bargain with Internet companies: They offer enticing features and services, and we offer information about ourselves. With the Internet of Things, the stakes get raised even further: People with smart devices will broadcast their activities not only to their carriers (and the NSA), but to Internet companies and other businesses with an IoT presence. That's what most of us expect. But it sure seems as if a URL-based system for the IoT would provide Google with more benefits than anybody else. Without trying to sound paranoid, it's worth thinking about, especially when you look at Google's overarching information collection strategy (examples abound, including here, here, here, here and here). As for enterprises, "a system that lets anyone interact with any device at any time" sure seems like a potentially risky double-edged sword. The Internet of Things holds a lot of promise; let's hope we approach it with common sense. This story, "The Physical Web: Google's Trojan Horse gift to the Internet of Things" was originally published by CITEworld.
<urn:uuid:5976fea5-3ee5-4b42-bac4-fee2b1bd3cbe>
CC-MAIN-2017-09
http://www.itworld.com/article/2695008/networking/the-physical-web--google-s-trojan-horse-gift-to-the-internet-of-things.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00544-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957182
595
2.859375
3
The questioner on Quora asks: "When is the difference between 99% accuracy and 99.9% accuracy very important?" And the most popular answer cites an example familiar to all of you: service level agreements. However, the most entertaining reply comes next and less heralded, but is such a delightful and enlightening read that I'm going to post it here virtually in its entirety. The author is Alex Suchman, a computer science and mathematics student at the University of Texas. Here's his answer: When it can stop a Zombie Apocalypse. It's 2020, and every movie buff and video gamer's worst fear has become reality. A zombie outbreak, originating in the depths of the Amazon but quickly spreading to the rest of the world (thanks a lot, globalization) threatens the continued existence of the human race. The epidemic has become so widespread that population experts estimate one in every five hundred humans has been zombified. The zombie infection (dubbed "Mad Human Disease" by the media) spreads through the air, meaning that anyone could succumb to it at any moment. The good news is that there's a three day asymptomatic incubation period before the host becomes a zombie. A special task force made of the best doctors from around the world has developed a drug that cures Mad Human, but it must be administered in the 72-hour window. Giving the cure to a healthy human causes a number of harmful side effects and can even result in death. No test currently exists to determine whether a person has the infection. Without this, the cure is useless. As a scientist hoping to do good for the world, you decide to tackle this problem. After two weeks of non-stop lab work, you stumble upon a promising discovery that might become the test the world needs. Scenario One: The End of Mankind Clinical trials indicate that your test is 99% accurate (for both true positives and true negatives). Remembering your college statistics course, you run the numbers and determine that someone testing positively will have Mad Human only 16.6% of the time . Curse you, Thomas Bayes! You can't justify subjecting 5 people to the negative effects of the cure in order to save one zombie, so your discovery is completely useless. With its spread left unchecked, Mad Human claims more and more victims. The zombies have started taking entire cities, and the infection finally reaches Britain, the world's last uncontaminated region. Small tribal groups survive by leaving civilization altogether, but it becomes clear that thousands of years of progress are coming undone. After the rest of your family succumbs to Mad Human, you try living in isolation in the hope that you can avoid the epidemic. But by this point, nowhere is safe, and a few months later you join the ranks of the undead. In 2023, the last human (who was mysteriously immune to Mad Human) dies of starvation. Scenario Two: The Savior Clinical trials indicate that your test is 99.9% accurate. Remembering Bayes' Theorem from your college statistics course, you run the numbers and determine that someone testing positively will have Mad Human 66.7% of the time . This isn't ideal, but it's workable and can help slow the zombies' spread. Pharmaceutical companies around the world dedicate all of their resources to producing your test and the accompanying cure. This buys world leaders precious time to develop a way to fight back against the zombies. Four months after the release of your test, the U.S. 
military announces the development of a new chemical weapon that decomposes zombies without harming living beings. They fill Earth's atmosphere with the special gas for a tense 24-hour period remembered as The Extermination. The operation is successful, and the human race has been saved! Following the War of the Dead, you gain recognition as one of the greatest scientific heroes in history. You go on to win a double Nobel Prize in Medicine and Peace. Morgan Freeman narrates a documentary about your heroics called 99.9, which sweeps the Academy Awards. Your TED Talk becomes the most-watched video ever (yeah, even more than Gangnam Style). You transition into a role as a thought leader, and every great innovator of the next century cites you as an influence. Life is good. And that, my friends, is an example of when the difference between 99% and 99.9% accuracy would be very important. If you're wondering about the footnote notations in the essay, they point to the actual math that supports Suchman's assertions, which he furnishes in the Quora reply and which made my head hurt. Small price to pay.
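For readers who want the math Suchman references, it is a one-line application of Bayes' theorem using the story's own numbers: a 1-in-500 prevalence and a test whose true-positive and true-negative rates are both 99% (or 99.9%). A quick Python check reproduces the two figures quoted above; the function name is just for illustration.

```python
# Bayes' theorem check of the 16.6% vs. 66.7% figures in Suchman's story.
def p_infected_given_positive(accuracy, prevalence=1 / 500):
    """P(infected | positive), treating accuracy as both sensitivity and specificity."""
    true_pos = accuracy * prevalence                 # infected and correctly flagged
    false_pos = (1 - accuracy) * (1 - prevalence)    # healthy but wrongly flagged
    return true_pos / (true_pos + false_pos)

print(f"{p_infected_given_positive(0.99):.1%}")      # 16.6% -- most positives are false alarms
print(f"{p_infected_given_positive(0.999):.1%}")     # 66.7% -- now usable for triage
```

A tenfold drop in the error rate is what turns a mostly-false-alarm test into one worth acting on, which is the whole point of the parable.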
<urn:uuid:15ba7cc6-b560-4787-8db7-fc6696e23e37>
CC-MAIN-2017-09
http://www.networkworld.com/article/2224405/data-center/how-that--extra-9--could-ward-off-a-zombie-apocalypse.html?source=nww_rss
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00068-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943178
1,205
2.578125
3
In 2007, NASA began an agency wide initiative to replace its aging facilities with smaller, more efficient buildings. When agency leaders selected the Ames Research Center in Northern California for funding, officials at the center proposed a fairly traditional replacement facility. But when Ames Associate Director Steve Zornetzer saw the plans, he had a different vision: Make the building a showcase of NASA’s technological expertise and leadership in imagining the future. “It was inconceivable to me that in the 21st century, in the heart of Silicon Valley, NASA would be building a building that could have been built 25 years ago,” he said. “NASA had to build the highest-performing building in the federal government, embed NASA technology inside and make a statement to the public that NASA was giving back to the people of planet Earth what it had developed for advanced aerospace applications,” he said. But there was one catch: The redesigned building couldn’t cost more than the originally proposed project. The Sustainability Base, as the project became known, centered on four elements: - Make maximum use of the existing environment; - Employ advanced technologies to minimize energy consumption and maximize efficiency; - Install advanced monitoring and adaptive operational systems; - Create a living laboratory for research into advancing sustainability goals. Faced with a tight timeline and budgetary constraints, the architects and contractors chose design tools that allowed fast and effective communications among all involved. The design team relied on a Building Information Modeling process based on Autodesk Sustainability Solutions, which was integrated with other modeling tools. This facilitated communication across teams and aided in making design decisions quickly and accurately. The building's core design elements included a complex radial geometry, an innovative steel-frame exoskeleton, and numerous eco-friendly features, such as geothermal heat and cooling , natural ventilation, high-performance wastewater treatment, and photovoltaics on the roof. The resulting $26 million, 50,000-square-foot, two-story building houses 220 office workers, including scientists, managers, mission support personnel and financial specialists. The extensive floor-to-ceiling windows and open spaces fully embrace the natural daylight. With reduced demand for artificial light and the application of high-efficiency radiant heating/cooling systems, the building site produces more electricity than it uses and is on its way to reducing potable water consumption by up to 90 percent compared with a comparable traditional building. Read the full Sustainability Base case study here.
<urn:uuid:56c6a846-a958-408b-b881-7c855a41c3ef>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2013/04/nasa-establishes-sustainability-base-earth/62225/?oref=river
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00536-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947952
513
3.09375
3
Today, it's easy to take the Mac for granted. The whole platform, along with Apple itself, has been reinvented time and again as the tech world has changed, and at the ripe young age of 30 it shows little sign of going away. But there were many times over the past three decades when the Mac's future, and Apple's, was far from certain. Here are some of the most important milestones -- and some of the serious missteps -- in the Mac's 30-year history. Original Mac introduction (1984): When Steve Jobs unveiled the original Mac on Jan. 24, 1984, he introduced the world to a new type of computing experience. Although GUI systems, including the Apple Lisa, had already been developed, the Mac was the first such system to be unveiled to the general public. Until then, such computers had largely been developed as experimental prototypes at labs like Xerox PARC or pitched to specific markets, often with a significant price tag. (The Apple Lisa originally sold for $9,995 -- in 1984 dollars.) Note: Hardware teardown expert iFixit marked today's anniversary by tearing down an original Mac. Test-drive a Mac program: Despite the innovation the Mac represented compared to other common personal computers of the early 1980s -- the Apple II, the Commodore 64 and the IBM PC, for instance -- consumers were wary of the new system because it was priced higher than many of its early competitors. In an effort to show off the value of the Mac and its GUI, Apple CEO John Scully devised a program where potential buyers could borrow a Mac for a few days, take it home and test-drive it. While the program helped raise awareness about the Mac experience, it didn't succeed in jump-starting sales. Many would-be Mac buyers praised the computer when returning it -- then bought something less expensive. The first expandable non-all-in-one Macs, the Mac II and SE (1987): Early Macs followed the same integrated all-in-one design as the original Mac, including the limited screen size and lack of upgrade or expansion options. Apple broke with that trend in 1987 when it launched the Mac II, the first Mac to use an external display, and the all-in-one Mac SE. Together, they were the first Macs that could be upgraded with additional RAM or expansion cards that could extend the hardware feature set. Mac user base reaches 1 million (1987): Three years after the Mac's rollout, the number of Macs in use worldwide topped 1 million. Diversification gone awry (1987-97): The Mac II may have been the first major departure from the original Mac design, but it was far from the last. During the decade that followed, Apple released an incredible number of models, eventually creating multiple product lines for a range of different markets. The Quadra line was for business, the Performa family was for home users, and LC line was aimed primarily at schools. Despite the different markets and occasionally different case designs, many of the Macs shared similar, if not identical, hardware regardless of name or model number. Things got even more confusing when Apple began selling Macs with model numbers in each line that differed only in the software that came pre-installed on them. The diversification became so pervasive that, at one point, Apple provided poster-size product matrices to Mac resellers just so they could keep the lineup straight.
<urn:uuid:311d12c3-7781-46b8-ac2c-814b63b71d37>
CC-MAIN-2017-09
http://www.computerworld.com/article/2486893/apple-mac/24-milestones-in-the-mac-s-30-year-history.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00236-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960589
703
2.546875
3
RDF (Resource Description Framework) is one of the three foundational Semantic Web technologies, the other two being SPARQL and OWL. In particular, RDF is the data model of the Semantic Web. That means that all data in Semantic Web technologies is represented as RDF. If you store Semantic Web data, it's in RDF. If you query Semantic Web data (typically using SPARQL), it's RDF data. If you send Semantic Web data to your friend, it's RDF. In this lesson we will introduce RDF. In this lesson you will learn: - What RDF is and how it fundamentally differs from XML and relational databases - What is meant by a "graph data model" - How RDF is typically represented visually - The importance of the URI, and the significance (or lack thereof) of identity "universality" RDF is the foundation of the Semantic Web and what provides its innate flexibility. All data in the Semantic Web is represented in RDF, including schema describing RDF data. RDF is not like the tabular data model of relational databases. Nor is it like the trees of the XML world. Instead, RDF is a graph. In particular, it's a labeled, directed graph. I don't mean "graph" as in "charts and graphs" but rather as in "dots and lines." Therefore you can think of RDF as a bunch of nodes (the dots) connected to each other by edges (the lines) where both the nodes and edges have labels. The term labeled, directed graph will mean a lot to the mathematicians in the audience, but for the rest of you I've included a simple example here. This is a complete, valid, visual representation of a small RDF graph. Show it to any Semantic Web practitioner and it will be immediately obvious to her what it represents. The nodes of the graph are the ovals and rectangles (ovals and rectangles are a convention that we'll get to shortly). The edges are labeled arrows that connect nodes to each other. The labels are URIs (this is very important, and we'll cover it in more detail in a bit). Note: the graph nature of RDF is why the logos of Semantic Web companies almost universally have some reference to a graph. See if you can spot the graph in the logos of Revelytix, Ontoprise, Ontotext, Apache Jena, and Cambridge Semantics, not to mention the RDF logo, pictured at the top if this lesson. There are three kinds of nodes in an RDF directed graph: - Resource nodes. A resource is anything that can have things said about it. It's easy to think of a resource as a thing vs. a value. In a visual representation, resources are represented by ovals. - Literal nodes. The term literal is a fancy word for value. In the example above, the resource is http://www.cambridgesemantics.com/people/about/rob (once again, a URI) and the value of the foaf:name property is the string "Rob Gonzalez". In a visual representation, literals are represented by rectangles. - Blank nodes. A blank node is a resource without a URI. Blank nodes are an advanced RDF topic that we'll cover in another lesson. I usually recommend avoiding them in general, especially if you're new to the space. They are listed here simply for completeness. Edges can go from any resource to any other resource, or to any literal, with the only restriction being that edges can't go from a literal to anything at all. Think about this for a second. This means that anything in RDF can be connected to anything else simply by drawing a line. This idea is key. 
When we talk about Semantic Web technologies being fundamentally more flexible than other technologies (XML, relational databases, BI cubes, etc.), this is the reason behind it. In the abstract, you're just drawing lines between things. Moreover, creating a new thing is as easy as drawing an oval. If you compare this mentally to the model you might know from working with a relational database, it's starkly different. Even for basic relationships, such as many-to-many relationships, the abstract model of a relational database gets complicated. You end up adding extra tables and columns (think foreign keys, join tables, etc.) just to work around the inherent rigidity in the system. The ability to connect anything together, any time you want, is revolutionary. It's like hyperlinking on the Web, but for any data you have! The following video was in Introduction to the Semantic Web, but I'll include it here as well since it underscores this point. This linking between things is the fundamental capability of the Semantic Web, and is enabled by the URI. The Central Importance of the URI If you want to connect two things in a relational database you have to add foreign keys to tables (or, if you have a many-to-many relationship, create join tables), etc. If you want to link things between databases, you need an ETL job using something like Informatica. It's just not easily done. If you consider the XML world, the same thing is true. Connecting things within an XML document is possible, if tedious. Connecting things between XML documents requires real work. Unless you're one of the very few who just loves XSLT, you're not doing that very often. The fundamental value and differentiating capability of the Semantic Web is the ability to connect things. The URI is what makes this possible. URI stands for Universal Resource Identifier. The universal part of that is key. Instead of making ad hoc IDs for things within a single database (think primary keys), in the Semantic Web we create universal identities for things that are consistent across databases. This enables us to create linkages between all things (hold the skepticism for a second; we'll get to it!). In RDF, resources and edges are URIs. Literals are not; they are simple values. Blank nodes are not (this is what the "blank" means in the name). Everything else is, including the edges. If you look at our example above, there are several examples of URIs. - foaf:member (this is shorthand for http://xmlns.com/foaf/0.1/member) - foaf:name (again, shorthand for http://xmlns.com/foaf/0.1/name) The first one is the URI for the company Cambridge Semantics. The second is a URI for Rob, the author of this article. The other two are URIs for the edges that connect the resources (we'll say more about URIs of edges in a minute). You should notice that a couple of the URIs above are URLs. You can click on them. They are valid Web addresses. So what makes them a URI? In short, all URLs are URIs, but not all URIs are URLs. It's a little confusing, for sure, but the vast majority of Semantic Web practitioners stick to using URLs for all of their URIs. See the article URL vs. URI vs. URN for a succinct explanation of the differences if you're curious. Back to the concept of universality of identity. If I have a database that contains information about myself, I would use the URI http://www.cambridgesemantics.com/people/about/rob to refer to any data relating to me. If you have another database that has other information about me, you would also use that same URI. 
That way, if we wanted to find all facts in both databases about me, we could query using the single universal URI. Like all theoretically simple models, things are different in the real world. On the Limits of Universality and RDF Schema Design There is a major problem with the concept of universality presented above. It's impossible to get everyone everywhere to agree on a single label for every specific thing that ever was, is, or shall be. If you read the introductions to Semantic Web technologies around the web, you'll see lots of people focus on the importance of the URI. After all, how can you connect things if you don't agree on their labels? The focus on URI definition is especially true for those creating RDF vocabularies. What we mean by an RDF Vocabulary is essentially the set of URIs for the edges that make up RDF graphs. The edges are what relates the things in graph, and are what give it meaning. Using specific URIs is like speaking in a specific language—hence the term vocabulary. For example, in order for two Semantic Web applications to share data, they must agree on a common vocabulary. (Note: we're going to be covering RDF vocabulary and schema creation in future lessons on RDFS and OWL.) So if two applications have to agree on vocabulary for all concepts, then it stands to reason that all vocabularies must be set ahead of time, right? Fortunately, the a priori existence of share vocabulary turns out to be helpful, but far from necessary. In our example, foaf:name is not the first URI ever created that represents the name concept, and it's OK that another URI for the name concept wasn't reused. Fortunately, as you'll see in the SPARQL tutorials, it is very easy to translate RDF written in one vocabulary to another vocabulary. The Semantic Web technologies were built under the assumption that different people in different applications written for different purposes at different times would create related concepts that overlap in any number of ways, and therefore there are provisions and methods to make it all work together with little effort. There is no such provision in the XML or relational database worlds. Said another way, you do not have to agree on all URIs for all things up front. In fact, it's much easier not to do so. Reuse vocabulary when possible and convenient, and don't worry too much about that when it doesn't work out. This same universal identity conundrum also happens for resources. For example, you could consider my Linked In profile URL to be a URI representing me. This is clearly distinct from the URI that Cambridge Semantics uses, but, again, the Semantic Web offers very simple ways to merge identical concepts so that they appear as one universally. We'll cover the details of how this works in future lessons, but for now let's return to RDF basics. Statements and Triples Now that you get the basics, I have to introduce some community jargon that will help you understand material you read on the Web about Semantic Web technologies. Rather than talk in the language of nodes and edges, Semantic Web practitioners refer to statements or triples, which are representations of graph edges. A statement or triple (they are synonymous) refers to a 3-tuple (hence triple) of the form (subject, predicate, object). This linguistic, sentential form is why RDF schemas are often called vocabularies. As mentioned, the subject is a URI, the predicate is a URI, and the object is either a URI or a literal value. 
If we represent our graph example as a set of triples, they would be: - (csipeople:rob, foaf:name, "Rob Gonzalez") - (csipeople:rob, foaf:member,http://www.cambridgesemantics.com/) (Note that for brevity I'm using the namespace alias csipeople for the URI namespace http://www.cambridgesemantics.com/people/about/). RDF graphs therefore are simply collections of triples. An RDF database is often called a triple store for this reason. However, Semantic Web practitioners found it very difficult to deal with large amounts of triples for application development. There are lots of reasons that you would want to segment different subsets of triples from each other (simplified access control, simplified updating, trust, etc.), and vanilla RDF made segmentation tedious. At first the community tried using reification to solve this data segmentation problem (we'll cover reification in another lesson, but the concept is essentially triples about triples), but today everyone has converged on using named graphs. Named Graphs and Quads A named graph is simply a collection of RDF statements that has a name (which, as you should have guessed, is a URI). Modern triple stores all support named graphs, and they are built into SPARQL 1.1, the latest SPARQL query language specification. When referring to a triple in a named graph, you would often use 4-tuple notation instead of 3-tuple notation. The 4-tuple is of the form: (named graph, subject, predicate, object) For this reason, a triple store that supports named graphs is often called a quad store, though, somewhat confusingly, triple stores themselves are often quad stores anyway. That is, if an RDF database bills itself as a triple store it probably supports named graphs. The term quad store isn't that important. Looking at the 4-tuples, it's pretty obvious that the same statement can exist in multiple named graphs. This is by design and is a very important feature. By organizing the statement into named graphs, a Semantic Web application can implement access control, trust, data lineage, and other functionality very cleanly. Exactly the best ways to segment triples in your application is an advanced topic that will be covered in future lessons, and is a large part of the value brought by Semantic Web platforms which will often hide the details and logic of named graph creation and segmentation to simplify application development. For now, it's simply important to know that named graphs are the segmentation and organizational mechanism of the Semantic Web. This is a lot of information to cover in a single lesson, especially at this level of detail. However, it boils down to a very simple summary that will become second nature to you if you spend any time implementing Semantic Web technologies: - RDF is a graph data model. - RDF data are directed, labeled graphs. - A single edge in an RDF graph is a 3-tuple that is called either a statement or triple. - Triples are organized into named graphs, forming 4-tuples, or quads. - RDF resources (nodes), predicates (edges), and named graphs are labeled by URIs. - Although preferable to reuse URIs when possible, Semantic Web technologies, including OWL and SPARQL, make it easy to resolve URI conflicts, as we'll see in future lessons. In the next lesson, RDF Nuts & Bolts, we'll get down to the nitty gritty of actually writing and consuming RDF, including popular serializations, datatypes, and libraries.
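To tie the summary together, here is a small sketch in plain Python (no RDF library, so nothing here depends on a particular triple store's API) showing the lesson's example statements stored as 4-tuples in named graphs, with a trivial wildcard match standing in for the kind of lookup a real quad store or SPARQL engine performs. The graph name is a made-up URN used only for illustration.

```python
# Quads: (named_graph, subject, predicate, object); everything but literal objects is a URI.
FOAF = "http://xmlns.com/foaf/0.1/"
CSI = "http://www.cambridgesemantics.com/people/about/"

quads = {
    ("urn:example:people-graph", CSI + "rob", FOAF + "name", "Rob Gonzalez"),
    ("urn:example:people-graph", CSI + "rob", FOAF + "member",
     "http://www.cambridgesemantics.com/"),
}

def match(graph=None, s=None, p=None, o=None):
    """Return every quad whose specified positions match; None means wildcard."""
    return [q for q in quads
            if all(want is None or want == got
                   for want, got in zip((graph, s, p, o), q))]

# Everything asserted about Rob, regardless of which named graph holds the statement:
for quad in match(s=CSI + "rob"):
    print(quad)
```

Access control, trust, and provenance then reduce to filtering on the first element of each tuple, which is exactly the organizational role the lesson assigns to named graphs.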
<urn:uuid:254cf305-7b69-44ca-b110-0a00a7adc03a>
CC-MAIN-2017-09
http://www.cambridgesemantics.com/semantic-university/rdf-101?p_auth=QhF638VM&p_p_auth=67FeXk4O&p_p_id=49&p_p_lifecycle=1&p_p_state=normal&p_p_mode=view&p_p_col_pos=1&p_p_col_count=3&_49_struts_action=%2Fmy_sites%2Fview&_49_groupId=10518&_49_privateLayout=false
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00112-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93939
3,162
3.671875
4
Black Box Explains...PC, UPC, and APC fiber connectors Fiber optic cables have different types of mechanical connections. The type of connection determines the quality of the fiber optic lightwave transmission. The different types we’ll discuss here are the flat-surface, Physical Contact (PC), Ultra Physical Contact (UPC), and Angled Physical Contact (APC). The original fiber connector is a flat-surface connection, or a flat connector. When mated, an air gap naturally forms between the two surfaces from small imperfections in the flat surfaces. The back reflection in flat connectors is about -14 dB or roughly 4%. As technology progresses, connections improve. The most common connection now is the PC connector. Physical Contact connectors are just that—the end faces and fibers of two cables actually touch each other when mated. In the PC connector, the two fibers meet, as they do with the flat connector, but the end faces are polished to be slightly curved or spherical. This eliminates the air gap and forces the fibers into contact. The back reflection is about -40 dB. This connector is used in most applications. An improvement to the PC is the UPC connector. The end faces are given an extended polishing for a better surface finish. The back reflection is reduced even more to about -55 dB. These connectors are often used in digital, CATV, and telephony systems. The latest technology is the APC connector. The end faces are still curved but are angled at an industry-standard eight degrees. This maintains a tight connection, and it reduces back reflection to about -70 dB. These connectors are preferred for CATV and analog systems. PC and UPC connectors have reliable, low insertion losses. But their back reflection depends on the surface finish of the fiber. The finer the fiber grain structure, the lower the back reflection. And when PC and UPC connectors are continually mated and remated, back reflection degrades at a rate of about 4 to 6 dB every 100 matings for a PC connector. APC connector back reflection does not degrade with repeated matings.
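The decibel figures above map onto the "roughly 4%" number through the standard decibel-to-power-ratio conversion (reflected fraction = 10^(dB/10)). The short calculation below, written in Python only for convenience, reproduces the approximate reflectance for each connector type.

```python
# Convert back reflection (return loss) in dB to the fraction of light reflected.
def reflected_fraction(db):
    return 10 ** (db / 10)

for name, db in [("Flat", -14), ("PC", -40), ("UPC", -55), ("APC", -70)]:
    print(f"{name:>4}: {db:>4} dB -> {reflected_fraction(db):.6%} reflected")
# Flat: -14 dB -> ~3.98%, the "roughly 4%" quoted above.
# APC : -70 dB -> ~0.00001%, i.e. about one part in ten million.
```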
<urn:uuid:6a679d7e-c163-4f47-ae58-f4ff32b67002>
CC-MAIN-2017-09
https://www.blackbox.com/en-nz/products/black-box-explains/black-box-explains-pc-upc-and-apc-fiber-connectors
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00464-ip-10-171-10-108.ec2.internal.warc.gz
en
0.90849
434
3.015625
3
“Bot” is a shortened form of robot and is a name for programs that perform repetitive tasks. When used maliciously, bots make it possible for hackers to control thousands of “zombies” (PCs hijacked by hackers) as easily as they can control one. A botnet is a group of bot infected PCs that are all controlled by the same hacker. Botnets can be used to send spam, download and store illegal files, such as porn, or to make computers participate in attacks on other computers. A botnet can be used to flood a specific Web site with traffic, to the point where it is overwhelmed and cannot respond to normal traffic, effectively taking it offline (called a Denial of Service attack).
<urn:uuid:a096b0d4-00b0-42a5-88dc-74fce61dd80b>
CC-MAIN-2017-09
https://www.justaskgemalto.com/us/what-botnet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00464-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951623
149
3.25
3
Is SDN Secure? Software Defined Networking (SDN) is a concept and a trend that is set to revolutionize the world of IT. SDN is not a panacea for all that ails networking, however, and it is not without its own set of risks that need to be addressed for secure deployment. Christopher Hoff, Chief Security Architect at Juniper Networks, has a long history in the IT business of securing networks. When it comes to SDN, his view is that the same rules and best practices continue to apply in order to create a secure infrastructure. "If we're adding connectors or agents that let us talk between existing infrastructure and external controllers, we have to ensure that the communication path is well protected," Hoff told Enterprise Networking Planet. "Today what that generally means is I'll run SSL/TLS and do encryption on the transport with certificates and that's about the extent of it." "That's not really taking security concerns of attack vectors and threats too seriously," Hoff added. In an SDN environment, there are typically the networking devices that handle the actual traffic, and then there is the controller that manages the flow. Sitting on top of the controller are SDN applications that provide functionality and capabilities to the network. Hoff referred to the SDN applications as 'Trusted Apps,' which also need to be properly secured. Single Point of Failure? In an SDN network, the controller could potentially be seen as a single point of failure for the network. If the controller is attacked, the entire network it controls is potentially at risk. "Any time we put control and management capabilities outside of the routers and switches, we are expanding the attack surface," Hoff said. Are Controllers Secure? Hoff noted that today, controller architectures are somewhat lacking when it comes to security. "Some of the security, if you read through the specs, is optional," Hoff said. "You don't have to use SSL/TLS between the controller and the switch." Fundamentally, securing SDN is all about defense in depth, just as is the case with the rest of IT. Security professionals still need to think about how attackers might compromise a network and then develop an effective threat model to mitigate the risk. Visibility into all the different layers that might exist in an SDN network is also critical to effective security. That visibility involves physical hardware as well as server software and virtualization attributes and functions. Source of Truth When it comes to determining where control and 'truth' exist in an SDN network, there are multiple sources, which can further complicate security efforts. "The persistency as well as the consistency of the types of data we're talking about, some is very ephemeral and some is very long lasting," Hoff said. "So you may not see a lot of network configuration changes but you might see a ton of network state changes." The OpenFlow model, in which a single controller configures network assets, is not how actual SDN is likely to be deployed, in Hoff's view. He expects there to be multiple controllers in heterogeneous environments interacting with lots of other controllers. How to Secure SDN Securing SDN is similar to securing all other forms of IT. It's about threat modeling, risk analysis and being able to attach policy to the workloads.
"I don't think there is anything that SDN brings to the table, when we think from a threat modeling perspective, that we have not seen before, any time we've gone from a centralized to a distributed network back to centralized," Hoff said. With consistent approaches like SSL/TLS encryption, SDN, much like other IT capabilities, can be secured. "A lot of it is common sense and really understanding what the workflows look like, how the provisioning and orchestration systems are going to interoperate, auditing, logging and forensics capabilities," Hoff said.
<urn:uuid:c3f0ba58-4ca5-468f-9ca0-312ddc6248ff>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsecur/is-sdn-secure.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00232-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954708
818
2.5625
3
State uses analytics to build roads faster, cheaper - By Rutrell Yasin - Nov 28, 2011 North Carolina’s Transportation Department is using analytics software to build roads faster and for less money while minimizing environmental disruption. NCDOT is analyzing geographic data to help narrow the choices of possible road corridors and, at the same time, is able to reduce costly land surveys. The process can save $500,000 per project and shave 20 percent off the time needed to select and plan a road, according to North Carolina officials. NCDOT and the North Carolina Division of Water Quality are working collaboratively on the project, using SAS Analytics as the engine to analyze volumes of geographic data. How data wizardry can revive America’s cities Planners need to protect water sources as they assess transportation projects. As a result, they have to propose ways to avoid or minimize environmental impacts. Road builders often verify geographical surveys by sending surveyors and water quality experts into the field to document streams and wetlands. However, a large project, such as a bypass, requires substantial time and manpower to survey thousands of acres in order to identify environmental issues. “You might have hundreds of possible combinations for one road,’’ said Morgan Weatherford, environmental program consultant with NCDOT’s Natural Environmental Section. “It’s a major challenge to comply with federal and state environmental regulations in a manner that is beneficial for the environment and taxpayers as well.” A new data source has emerged in recent years with the potential to eliminate some costly field work. Light detection and ranging (LIDAR), which uses laser pulses to record the distance between two points, is useful for charting land elevation, which is key to locating wetlands and streams. LIDAR data is used extensively to update flood maps and is considered more detailed than geological survey information. However, LIDAR produces large volumes of data. One transportation project might involve upward of 30 million records with 30 different attributes per record. Prior to using SAS analytic software, nobody at NCDOT had used LIDAR data to predict stream and wetland locations for construction planning purposes. The NC Division of Water Quality built models to predict headwater streams and tested the accuracy with field surveys.“The models were 85 to 95 percent accurate, depending on the terrain,’’ said Periann Russell, environmental senior specialist with the NC Division of Water Quality. NCDOT also used LIDAR data for a much larger project that includes predicting stream and wetland locations for an entire county. The data will help transportation planners choose a corridor for a 20-mile bypass. It will also be used on several bridge modernization projects and be available to private developers for their proposed developments, NC officials said. Rutrell Yasin is is a freelance technology writer for GCN.
<urn:uuid:899a3cb1-1fa0-4661-96c3-8dde374f107b>
CC-MAIN-2017-09
https://gcn.com/articles/2011/11/28/north-carolina-analytics-builds-roads-faster-cheaper.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00636-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940909
587
2.75
3
Static typing vs. dynamic typing The dispute: Software developers all understand that type checking is a necessary thing. After all, in order for a program to function properly, constructs of a specific type (such as an integer variable) must follow the rules imposed by that type (e.g., you can’t multiply 3, an integer, by “blue,” a string). Whether to do such type checking at compile-time or run-time, however, is a hot-button topic among programmers. What static typing advocates say: Statically typed languages, such as Java, C, C++, and Haskell, do type-checking at compile-time. Proponents say this leads to fewer runtime errors, faster execution, easier-to-understand code (since types have to be explicitly declared), and smarter IDEs. “Static typing will remove an entire class of mistakes a programmer may create from a program simply by not accepting it for compilation” Jack “In a static language, the method signature tells you what types of arguments are required. In a dynamic language, you can't tell as easily - the API documentation has to tell you what objects are expected.” quanticle “Static languages make the programmer more accountable. Dynamic programming sometimes creates a moral hazard, and encourages bad programming habits.” Ari Falkner What dynamic typing advocates say: “The difference is that dynamic languages are ‘write, run, fix’. You can experiment and fix quickly. Static languages are ‘write, compile, build, run, fix’. You can't experiment as easily.” S.Lott “Anyone, who has worked seriously with a modern dynamically typed language like Ruby or Smalltalk, know that they are more productive.” Anders Janmyr “Somehow things just seem to flow better when you're programming in that [dynamically-typed] environment…” Martin Fowler
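A small, hedged illustration of what the two camps are arguing about, using Python, which is dynamically typed at runtime but can optionally be checked ahead of time by an external tool such as mypy when annotations are present:

```python
# Dynamically typed: nothing objects to the bad call until (and unless) it runs.
def total_price(quantity, unit_price):
    return quantity * unit_price

total_price(3, 2.5)      # 7.5
total_price(3, "blue")   # runs anyway and returns "blueblueblue" instead of failing loudly

# With annotations, a static checker (e.g. mypy) can reject the bad call before
# the program ever runs, which is the static-typing camp's central point.
def total_price_checked(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

total_price_checked(3, "blue")   # flagged by the checker; plain CPython still executes it
```

The dynamic-typing camp's counterpoint is the first half: the unannotated version is shorter and immediately runnable, which is exactly the "write, run, fix" loop quoted above.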
<urn:uuid:82ee1c87-3d4a-4e91-b2f9-c91e00c18712>
CC-MAIN-2017-09
http://www.itnews.com/article/2916578/enterprise-software/let-s-get-ready-to-grumble-6-arguments-that-get-a-rise-out-of-programmers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00104-ip-10-171-10-108.ec2.internal.warc.gz
en
0.905651
410
2.84375
3
Commercial and civilian drones are already here — even if they’re not supposed to be. Officially, the Federal Aviation Administration (FAA) forbids commercial operation of unmanned aircraft in U.S. national air space, but that hasn’t stopped a growing number of them from taking to the skies. In January, the Los Angeles Police Department had to warn real estate agents not to use images of properties taken from a remotely controlled aircraft. During the Occupy movement in New York City last November, reporter Tim Pool obtained a bird’s-eye view of police action in Zuccotti Park from a customized two-foot-wide drone flying overhead. The camera-equipped device streamed live video to the journalist’s smartphone, which relayed the footage to a public Internet stream. And since 2011, News Corp.’s The Daily has had a news-gathering drone that it reportedly used to capture aerial footage of post-storm Alabama and flooding in South Dakota. Unmanned aircrafts are trickling into use now, but the floodgates will open in 2015: That’s when the FAA will officially allow operation of commercial drones in U.S. air space. The agency predicts that 15,000 flying robots will be winging their way through the nation’s skies by 2020, and that number will double by 2030. Once that swarm of pilotless aircraft is set loose, it’ll be up to state and local officials to sort out most of the rules for using these devices safely, securely and without trampling on privacy rights. “Some kind of consistent policy would be a nice thing to have, but there isn’t an agency or arm of the [federal] government that’s in a position to enforce any privacy regulations,” said Matt Waite, a journalism professor at University of Nebraska-Lincoln (UNL). “These kinds of laws generallytend to be delegated to the states.” Within the year, the FAA must allow any “government public safety agency” to operate an unmanned aerial vehicle (UAV) weighing 25 pounds or less as long as certain conditions, such as daytime use, are met. Currently, few of the thousands of law enforcement agencies in the United States have access to air support, said Don Shinnamon, a public safety aviation consultant. “This technology has the potential to bring air support to many public safety agencies,” he said. “It means a higher level of public safety.” In June, rumors spread about the Environmental Protection Agency (EPA) using drones to spy on cattle farmers in Nebraska and Iowa. “The problem is, the EPA doesn’t have any drones,” said Matt Waite, professor of journalism at the University of Nebraska-Lincoln. “They were doing it the same way they were always doing it, which is two dudes in a Cessna.” The EPA uses manned aircraft to monitor for clean-water violations, such as dirty runoff or manure dumped into a stream. But drone use may not be all that far-fetched, Waite said. “The EPA’s enforcement division is too small for the job that they have to do — single enforcement officers being charged with impossibly large areas to cover — and they can’t just randomly check in on different problems or projects because they’re overworked,” he said. “UAVs might open that up a bit.” Montgomery County, Texas, is about to test that theory. The county, located north of Houston, unveiled its three-foot, 55-pound UAV in October. “We have our share of crime,” said Chief Deputy Randy McDaniel of the Sheriff’s Office. “We have no airplanes and no helicopters, because of the expense involved. 
We’ve always had to defer to [other agencies] to hopefully help us in those situations where we needed an aircraft. Oftentimes, they’re out doing their own work.” Purchased with the help of a grant from the U.S. Department of Homeland Security (DHS), the county’s $260,000 UAV has yet to be deployed on a mission. But McDaniel has big plans for the device, which looks like a miniature helicopter. The UAV could give a rescue squad an aerial view of a hostage situation, he said, or in the case of an unknown chemical spill, it could read the placard on the container without sending a person into harm’s way. “We bought this for specific reasons,” McDaniel said. “It is for critical incidents where an air asset would be an appropriate way of providing information that we wouldn’t otherwise have.” Michael Toscano, president and CEO of the nonprofit Association for Unmanned Vehicle Systems, predicts that drones will begin to assume some of the more “dangerous, difficult and dull” missions performed by government agencies. For instance, when law enforcement officials are forced to call off a search because of unfavorable weather conditions, a UAV could continue surveillance through the storm. Or when officials need a nonintrusive way to count wildlife or inspectors want to check a damaged roof, a UAV is the ideal candidate. “With better situational awareness, you make better decisions,” Toscano said. “You’re more effective and efficient, and you’re safer.” UAVs can give public safety officials a leg up in just about any situation, added Shinnamon, a certified firefighter and former police chief. To minimize some of the risks firefighters face when they climb onto the roof of a burning building, he said, a UAV could be deployed to seek out hot spots where the structure is most likely compromised. If a toddler wanders off or an Alzheimer’s patient goes missing, Shinnamon said, a UAV could provide a bird’s-eye view of the scene, helping to expedite the search. “The view you get from an aerial perspective is so much better than you get standing on a street corner,” he said. “If that’s your kid who’s missing or your relative who’s an Alzheimer’s patient, you want the police to use anything available to help find them.” But the emergence of unmanned aircraft technology also is stirring up worries over safety and privacy, particularly as this equipment makes it into the hands of commercial organizations. For instance, UAVs flown by minimally trained operators could pose a hazard, said Mary (Missy) Cummings, associate professor of aeronautics and astronautics at the Massachusetts Institute of Technology. To address some of these concerns, the FAA will propose a rule on small unmanned aircraft systems this year, working with industry to establish approval criteria, according to a spokesperson. In addition, the Association for Unmanned Vehicle Systems recently released a code of conduct that includes recommendations for safe, nonintrusive operation. As for privacy, civil liberties advocates have expressed concerns about the potential for UAV use to violate citizens’ rights. Last year, the American Civil Liberties Union published a report providing recommendations for government use of drone aircraft. “We need a system of rules to ensure that we can enjoy the benefits of this technology without bringing us a large step closer to a ‘surveillance society’ in which our every move is monitored, tracked, recorded and scrutinized by the authorities,” wrote authors Jay Stanley and Catherine Crump. 
“The prospect of cheap, small, portable flying video surveillance machines threatens to eradicate existing practical limits on aerial monitoring and allow for pervasive surveillance, police fishing expeditions, and abusive use of these tools in a way that could eventually eliminate the privacy Americans have traditionally enjoyed in their movements and activities.” Proponents and observers of the technology agree that the impact of drones will be sweeping — but they contend that their uses will be much more mundane than sinister. “People will see them [drones] more, but it won’t be this kind of dystopian, ‘the skies are filled with robots spying on people all the time’ kind of thing,” said Waite, who recently launched a Journalism Drone Lab at UNL. “What I think will happen in 10 years is that all manner of industries will be transformed — but in utterly banal ways.” For instance, air freight companies may automate the flying of packages from one city to another or farmers may use drones to monitor irrigation systems. “If one of the spigots on one of those center pivot irrigation systems — which are enormous, they’re hundreds and hundreds of feet long — goes bad, a little swath of your crops out in the middle of your field that you can’t see suddenly isn’t getting watered and will die,” he said. A UAV could monitor the spigots to make sure they’re working, and alert someone when they’re not. Golf course operators are another potential customer. For instance, UNL’s PGA Golf Management Program is interested in using UAVs to monitor moisture distribution on fairways, according to Waite. “A little drone could fly over and be taking pictures of the ground — and maybe they’re multi-spectral images, so you can get an idea of how much soil moisture is in the ground and the plants themselves,” he said. “By knowing this, they would know that we only need to water this part of the fifth fairway; we don’t need to water the whole thing.” This would save hundreds of thousands, if not millions, of gallons of water regularly. “UAVs could make golf courses significantly more sustainable,” he said. Before civilian UAVs can fully live up to their potential, however, drone manufacturers need to solve a few problems. Often powered by batteries or fuel cells, small UAVs might only be able to operate for 30 minutes to an hour at a time, said Toscano. Even as UAVs become smaller, safer and more reliable, extending operating time continues to be a challenge. “You want to be able to have these systems operate for long periods of time,” he said. Another technological hurdle is ensuring the security of UAVs. It’s important that law enforcement agencies protect their systems from criminals who would jam a signal or interrupt a mission, Toscano said. And that very real threat is one that Todd Humphreys, who teaches aeronautical engineering at the University of Texas, has begun to address. In late June, Humphreys and his students used a technique called spoofing to hack into a drone and take control of it. The test on the civilian, university-owned drone was done at the campus football field. “We did an attack there just as a dress rehearsal for the next week when we were invited by the Department of Homeland Security down to White Sands [N.M.] to carry out the attack under their noses,” Humphreys told American Public Media. 
Just a week later, he and his students completed the demonstration for the DHS, repeatedly overtaking navigational signals going to the GPS-guided vehicle from about a kilometer away, according to the University of Texas. Next year, they plan to perform a similar demonstration on a moving UAV from 10 km away. “We’re going to have civilian drones in our air space, and of course, they’re concerned about the security of that premise, so [the DHS] would like to look into any kind of vulnerabilities,” Humphreys told American Public Media. “This is definitely a vulnerability, so they’d like to patch this before 2015 comes around.” To prepare for the 2015 deadline, Congress directed the FAA to designate six test sites that will provide data on how to safely integrate drones into the same air space as manned airplanes. The first test site — New Mexico State University — became operational in June 2011. The sites will help the FAA sort out certification standards and air traffic requirements for unmanned flight operations. They’ll also help the agency coordinate the introduction of drones and the development of the Next Generation Air Transportation System, a massive overhaul of the nation’s air traffic control system. Meanwhile, the public safety community is addressing some of the policy issues triggered by the new technology. In partnership with the International Association of Chiefs of Police, Shinnamon is developing a model UAV policy that could be adopted by law enforcement agencies worldwide. While he’s in agreement with the American Civil Liberties Union’s recommendations for UAV policies, which include usage restrictions and public notice, Shinnamon also recommends other safeguards. Law enforcement agencies should engage their communities, including civil liberties advocates, in UAV discussions, he said, and allow citizens to review and comment on UAV procedures. A search warrant should be issued if a UAV is targeting a specific location to gather criminal evidence, Shinnamon said, and all flights should be approved by a supervisor. Before the deployment of a UAV, he said, an emergency notification system should alert citizens that the aircraft will be overhead. “It’s new technology,” Shinnamon said. “There’s a whole education process that goes along with that.” Montgomery County Sheriff’s Office currently is drafting a policy manual for its UAV. One rule that’s already in effect: The UAV can only be approved for launch by either the sheriff or chief deputy. Although the county’s UAV has yet to fly, McDaniel is already considering possibilities for the future. While the department doesn’t anticipate using a UAV with a weapons platform, he said, a device with tear gas or nonlethal rubber bullet capabilities is a possibility. McDaniel said he’d like to see UAV technology in the toolkits of more law enforcement agencies. “I’m hoping that public safety agencies throughout the United States will move forward in attempting to gain this type of technology,” he said. On the whole, the concerns will continue as UAV use grows and changes, Shinnamon said. “As I’ve watched this technology evolve by leaps and bounds over the last few years, there are lots of issues society is going to have to come together to address.” Associate Editor Jessica Mulholland contributed to this story.
<urn:uuid:f36be233-dab4-4040-8d6e-7f873e31c7c9>
CC-MAIN-2017-09
http://www.govtech.com/security/Are-You-Ready-for-Civilian-Drones.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00280-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952293
3,038
2.578125
3
Machine learning enables cognitive systems to learn, reason and engage with us in a natural and personalized way. Think Netflix movie recommendations, Internet ads based on browsing habits, or even stock trades — these are all ways machine learning is helping us navigate our world in powerful new ways. The industrial revolution was a major turning point in the history of humanity. It enabled businesses to be more productive, create more jobs, and raise the overall standard of living. Today, we are on the precipice of another revolution. With machine learning done right, organizations can develop insights instantly and dramatically grow their business. Machine learning does this by consuming greater amounts of data, supporting greater variability and complexity, and being more forgiving of changing parameters or data points. Output generated through this process can be deployed seamlessly across multiple different platforms, like cloud computing and on-prem applications, analytics systems, embedded systems and edge networks. Similar to the Industrial Revolution, collaboration is a key component for machine learning — you still need smart people working together to ensure a successful process, resulting in the right output. Only, in this case, the smart workers are data scientists, data engineers, IT architects, developers, system administrators, business users, data mining experts, executives, etc. A true machine learning system is a learning machine, one which constantly keeps learning so its insights are fresh and its actions right. Every action (and non-action) feeds data into the learning machine, which then automates tasks without constantly requiring manual intervention. Machine learning is an entry point to the cognitive era, which enables business-driven insights. This is a step change from the pre-cognitive era, where insights were largely technology-platform driven.
<urn:uuid:67c9d704-8cb4-467e-a592-76d517e0284e>
CC-MAIN-2017-09
https://www.ibm.com/analytics/us/en/technology/machine-learning/?utm_content=bufferf2497&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00280-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921942
340
2.890625
3
Stanford University researchers "have created a tiny wireless chip, driven by magnetic currents, that's small enough to travel inside the human body. They hope it will someday be used for a wide range of biomedical applications, from delivering drugs to cleaning arteries." It's not as cool as miniaturizing a submarine filled with doctors (or Martin Short, in Innerspace), but it's still an advance in science that must have been inspired by science-fiction stories like Fantastic Voyage and the like. It also reminded me of the Body Wars ride at EPCOT Center, which used a motion-control machine to simulate a ride through the body. The ride was in the now-closed Wonders of Life Pavilion, and included performances by Tim Matheson and Elizabeth Shue. Here's a video tribute to the ride, for those who remember: Read more of Keith Shaw's ITworld.TV blog and follow the latest IT news at ITworld. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:e47cc308-c9a5-402f-9a3d-ad6f715fe71c>
CC-MAIN-2017-09
http://www.itworld.com/article/2729911/networking/sci-fi-reality--tiny-chip-can-enter-bloodstream--fix-you-up.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00453-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941288
228
2.671875
3
Intel Remixes Chip Recipe to Cut Power The chipmaker will add, in 2007, a new manufacturing process aimed at helping it cook up low-power chips. Intel Corp. plans to begin cutting the power consumption of its chips right at the factory. The chip giant will on Tuesday unveil a plan to create an alternate version of its manufacturing process technology (the means by which it knits together the transistors that make up the circuits inside its chips) designed to yield more power-efficient processors and supporting chipsets for notebooks, handhelds and other battery-powered devices. Intel, which at one time focused much of its efforts on building faster and faster processors (chips that inevitably also used more power), has taken a new path of late, announcing plans to deliver less power-hungry processors within all of its x86 product lines, including desktop, notebook and server chips, in 2006. By the end of the decade, the company has said it will deliver chips that use even less power for handheld devices. It intends the alternate manufacturing process to dovetail with the low-power chip design efforts. The new process, which is based on P1264, Intel's 65-nanometer manufacturing technology that's due to come on line later this year, tweaks the way transistors (the tiny on/off switches inside its chips), as well as the wires that connect them, are formed. The changes, which include steps such as thinning the wires slightly and thickening the layers of material that insulate a part of the transistor known as the gate, help cut down on leakage, or electricity that slips past after a transistor switches off, Intel representatives said.
<urn:uuid:bacf20c2-513b-4570-aa20-20d953c85801>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Desktops-and-Notebooks/Intel-Remixes-Chip-Recipe-to-Cut-Power
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00153-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954053
327
2.921875
3
In what ways does cloud computing empower individuals and businesses? The impact of social communities like Facebook and Twitter and web-enabled devices and applications that leverage the power of the cloud is already apparent. Watch as Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, discusses how cloud computing is democratizing computing power. Just as the PC revolution gave each of us access to a computer, cloud computing gives each of us access to a data center. How will you tap into the power of the utility computing grid? - Innovation – Beyond the Infrastructure (3:30) - Democratization of Computing Power (1:55) - The Explosion of Apps (2:08) - The Convergence of Media & Entertainment and Software (3:08) - Big Data vs. Right Data (2:19)
<urn:uuid:c773adc8-ef39-4dec-96dc-277c95f0285f>
CC-MAIN-2017-09
http://www.internap.com/resources/nicholas-carr-democratization-of-computing-power/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00625-ip-10-171-10-108.ec2.internal.warc.gz
en
0.876075
174
2.640625
3
Smithsonian lab could set a standard for energy efficiency Renovation at research center designed to build one of the greenest facilities in the country - By Rutrell Yasin - May 09, 2011 The Smithsonian Environmental Research Center is building one of the most energy-efficient laboratories in the country. SERC officials are renovating the Mathias Laboratory with the help of a $45 million federal appropriation to the Smithsonian Institution. The remodeled laboratory, located in Edgewater, Md., on the Chesapeake Bay, is being designed to reduce its environmental impact, from where it gets its power to where it gets its materials. SERC officials held a groundbreaking ceremony May 6. Analysts estimate the new facility will consume at least 37 percent less energy, and emit 37 percent less carbon dioxide, than a similar building that meets baseline LEED certification standards. “The Mathias Laboratory project is a cornerstone of the Smithsonian’s environmental research, education and commitment to sustainability,” said Smithsonian Secretary Wayne Clough. The facility will support SERC’s leadership in long-term research and professional training on ecosystem services and human impacts in the biosphere, he added. SERC scientists specialize in a multitude of disciplines, including global change, terrestrial and marine ecology, invasive species and nutrient pollution. The laboratory is designed to promote cross-disciplinary collaboration for SERC’s research teams. The laboratory is named after former U.S. Sen. Charles “Mac” Mathias Jr. (R-Md.), who helped create the Chesapeake Bay Program in 1983 and was one of the earliest environmental defenders. “The new Mathias Laboratory will serve as a lasting and living tribute to the legacy of my friend, Sen. Mac Mathias, who was a leader in the efforts to restore the Chesapeake Bay,” said Rep. Steny Hoyer (D-Md.). Totaling 90,000 square feet, the new building will add 69,000 square feet of laboratory, office and support space to 21,000 square feet of remodeled existing space. A two-story atrium will connect the old and new sections and create an area where staff from various departments can share ideas. The project will seek gold-level Leadership in Energy and Environmental Design certification by the U.S. Green Building Council, targeting the maximum gold score of 51 credits. To leave a greener footprint, the new laboratory will include:
- A heating, ventilating, and air conditioning system supplied by a large geothermal well field (300 wells, 350 feet deep) and high-efficiency enthalpy wheels that recover energy from exchanged air.
- Roof-mounted solar panels to provide hot water.
- Space for nearly 650 solar panels that will provide almost 10 percent of the building’s electricity.
- Low-flow fume hoods for chemistry experiments.
- A system to reclaim wastewater by cleaning it at an outside treatment plant and re-using it in toilets, gardens, fire suppression and constructed wetlands.
- Storm-water management with cisterns and wetlands made up of a series of cascading pools lined with native plants to receive runoff.
- Bicycle racks and priority parking for car-pool and high-efficiency vehicles, along with solar panel recharging stations for electric vehicles.
- Redistributed parking across the site, to decrease construction of new parking lots and impervious surfaces, thus minimizing storm-water run-off.
The laboratory will also use regional materials to prevent long-distance transportation and use only certified sustainable wood. Rutrell Yasin is a freelance technology writer for GCN.
<urn:uuid:c3e3aa51-6f3a-4399-947c-f92b3b3d4e61>
CC-MAIN-2017-09
https://gcn.com/articles/2011/05/09/smithsonian-lab-energy-efficiency.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00625-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919358
762
2.59375
3
Open Source Is the Foundation for Cloud Computing November 19, 2012 It is common knowledge that open source technology is the basis for many large-scale corporate projects, including Cloud computing. The UK Register printed “The Cloud Made of Penguins: Open Source Goes ‘Industrial Scale’,” an article that explains how the big names in open source are being used. OpenStack Software, a mere child of two years, specializes in storage, networking, plus many more components built on Apache platforms. It has caught the attention of many corporate giants, such as HP for its Cloud and the telecom company Nippon Telegraph and Telephone Corporation. Amazon EC2 is the favorite of Linux servers, mostly for storage. Also do not forget that it is used in infrastructure-as-a-service technology, such as Microsoft Azure. The article predicts that since the Linux kernel and middleware are not the attention-grabbers they used to be, Cloud-computing projects on the industrial level will begin to make more headlines. Jim Zemlin of the Linux Foundation pointed out this new idea: “’The difference now is they are not just obviously tinkering around with how to make a software defined network or block storage file format,’ Zemlin said. ‘These are broad-scale industrial initiatives that are financed by the largest computer companies in the world to create the comments they need to make commercial products.’” What is surprising is that people find this trend surprising. After technology becomes a core part of industry, developers puzzle over how it can be manipulated for other projects. Remember, necessity is the mother of invention, and you work with the tools you have at hand. Thinking back on how open source search programs were back in the day, LucidWorks saw a need for a powerful and robust, yet economically priced, search application. Using Apache Lucene, LucidWorks created LucidWorks Search and LucidWorks Big Data. Whitney Grace, November 19, 2012
<urn:uuid:590d8575-7021-42df-9be6-a8a1f4335b77>
CC-MAIN-2017-09
http://arnoldit.com/wordpress/2012/11/19/open-source-is-the-foundation-for-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00149-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930672
410
2.875
3
Cloud computing has become a fixture in the IT landscape over recent years. Some debate has even arisen over whether the development of cloud computing counts as evolution or a revolution. Leaving such questions to others (evolution and revolution both signify change, perhaps simply at a different rate or by more or less gradual steps), a brief (and broad) look at the history of the cloud may provide some indicators as to where it’s headed next. The origins of the cloud are often seen in mainframe computing in the last century. This is a matter also up for some debate, as some cloud proponents like to treat cloud computing as an entirely new phenomenon. Skeptics (or, perhaps, just less excitable types) sometimes see the cloud as nothing new at all, but rather just a rebranding of a computing model that has been around for decades. As with most such arguments, reality is probably somewhere in between. The fundamental model of centralized resources certainly can be traced to mainframe. This trend of centralization is not trivial, however. As computing technology became smaller, cheaper and thus more accessible to more businesses and consumers, computing power began decentralizing. In businesses, for example, getting computer time became less and less a problem—unlike early computers, which were bulky, expensive and difficult to operate. The connectivity of the Internet, leading to the cloud model, essentially involves a recentralization of compute power. “From a business and management perspective [cloud computing] signifies a return of some of the characteristics of mainframe computing, in terms of power, expense, and centralization of operations and expertise,” notes Peter HJ van Eijk at CircleID (“Cloud Is the New Mainframe”). The cloud is certainly more sophisticated than mainframe systems in, say, the 1970s or 1980s, and the scope of access is greatly increased. Depending on the particular service, you can potentially access the cloud from anywhere with an Internet connection—although even this isn’t fundamentally different from using a remote computer with a modem to dial into the VAX system at company headquarters to access email or run some “online” program. So, is the cloud revolutionary or evolutionary? That may simply depend on which aspects of the cloud you want to emphasize relative to mainframes or other computing models. Definition of the Cloud Perhaps you’ve noticed how a word or phrase can spread through a population like a virus (“memes”). In the previous decade, the term cloud or cloud computing became popular, but questions arose as to what exactly it referred to. Is the cloud just a rebranding of the Internet? Does the cloud follow a particular computing model, or is it nothing but a matter of remote access to a centralized service of some sort? Worse, do cloud computing and the cloud mean two entirely different things? How are they related? The result is naturally that a precise definition of the cloud and cloud computing is difficult to pin down. This may not be all that surprising, however. Try to define the term computer. If you ask a dozen people what it is, you’ll probably get 12 (or more) different answers. They may all be similar but simply focus on different aspects of what a computer is. (Is a slide rule a computer? On some definitions, it is. It’s just not a digital computer.) The same is true with the cloud. Cloud computing most likely falls into the “I know it when I see it” category—and nothing is wrong with a term that is a little fuzzy around the edges. 
But what can we say about the cloud? Given the fact that clouds have been meaningfully differentiated into public and private versions—that is, Internet-style networks and private corporate-style networks—the cloud is at its heart a network. That may seem a trivial conclusion, but it provides a good foundation to start with. The metaphor of the cloud comes from network diagrams that depict a portion of the network—which may be too complex or indeterminate to show accurately—as a cloud, with subnetworks, terminals or other devices connecting to one another through unspecified pathways. Cloud computing is often defined as a model using centralized, virtualized compute power to deliver applications and other resources as a service (rather than a product) over the Internet or some similar network. (Unfortunately, many definitions of cloud computing take up entire articles, so you may think this one is lacking on some points, overbearing in others or simply mistaken.) According to Gartner’s “hype cycle,” cloud computing is currently deflating from its peak of hysteria, with several years still standing between disillusionment and productivity. The question of cost has followed a similar path over recent years, seemingly. Cloud computing was expected to be the inevitable replacement for distributed computing if for no other reason than it is cheaper. The massing of resources and application of virtualization technologies is supposed to reduce cost per unit of compute power through greater utilization, and it enables amortization of costs across many users (beyond the company that owns the resources) as well as greater scalability (both up and down). Conceptually, this seems a simple matter, but numerous questions have been raised as to whether cloud computing really is cheaper. There actually may be no hard answer: Forbes notes, for example, “So is cloud computing really cheaper? The answer really does come down to how closely you are able to manage, track and adjust your infrastructure” (“Is Cloud Computing Really Cheaper?”). Cloud computing can, however, convert capital expenses into operational expenses—a particularly large benefit for companies (and consumers) in an economy heavy laden with debt and a lack of investment capital. And this may be the greatest benefit to some companies, even if long-term costs differ little. Security is a major concern for all aspects of IT, but it has been one that has dogged cloud computing with a particular vengeance. Like cost, however, security may not be an inherent advantage or disadvantage for the cloud: it may simply be something that users must evaluate in each separate case to determine the safest approach (“Has Network Security for the Cloud Matured?”). Where Is Cloud Computing Headed? Given the historical origins of (or, at least, precedents for) cloud computing, as well as the evolution of cost and security concerns, cloud computing is likely to stabilize as one tool in the IT toolbox—neither a complete replacement of other compute models nor a fad that will disappear in a few years. Companies are trying different mixed approaches to cloud computing, ranging from purely public cloud computing, to purely private (company-owned) clouds, to hybrid approaches that attempt to garner the benefits of each while minimizing downsides. No model serves all purposes, and cloud computing is no exception. Yet to be seen is the effect that environmental concerns will have on the cloud. 
Distributed computing—although possibly less efficient—forms less of a target than mega cloud data centers, with their multi-megawatt power appetites. Such concerns don’t have an impact on the technical aspects of the model, but they may have practical effects, such as higher costs through regulations and taxes. However you define it, cloud computing has earned itself a place in IT. It won’t ever see a wholesale takeover of computing, but it will remain an indispensable tool. The question is simply when it will reach a plateau of adoption in the market and how it will be used by companies in conjunction with other computing models. Photo courtesy of pr_ip
<urn:uuid:3b926175-9771-4a47-8d56-c31ad38b44bf>
CC-MAIN-2017-09
http://www.datacenterjournal.com/how-has-cloud-computing-evolved/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00501-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950857
1,566
3.1875
3
Most of us have tried to sneak a quick game of Minesweeper in during our computer classes at school, but for students at Garnes High School in Norway, playing games won't be something they'll have to hide. Garnes Vidaregåande Skole, a public high school in the city of Bergen, Norway, is to start teaching e-sports to its students starting in August. The elective class puts e-sports on the same footing as traditional sports such as soccer and handball at the school. 30 or so students enrolled in the program will study five hours a week during the three-year program. Folk High Schools—boarding schools that offer one year of non-examined training and education—have already offered some e-sports training, but this will be the first time that e-sports find a place in a regular high school. Students on the program will not simply spend five hours a week playing games at school. While gaming skills are important, the classes will include 90 minutes of physical training optimized for the games in question, with work on reflexes, strength, and endurance. Each class will be split; 15 students will play while the other 15 perform physical exercise. In an interview with Dotablast, Petter Grahl Johnstad, head of the school's science department, says that the students will have their performance graded, with game knowledge and skills, communication, co-operation, and tactical ability all being assessed. The school will have a dedicated room for the program with gaming chairs and high-end PCs with Nvidia GeForce GTX 980Ti video cards, according to its Facebook page. Students will provide their own mice, keyboards, and headsets, to accommodate the wide variety of personal preference that exists. Garnes has not yet decided which game or games its students will study. Two will be offered in the first year, with Dota 2, League of Legends, Counter-Strike: Global Offensive, and Starcraft II all under consideration. In offering this course, the school is embracing a growing trend and no doubt appealing to a lot of kids who'd just love the chance to play games when they should be studying. But that's not all; Garnes is also playing catch-up with its neighbors. A school in Sweden announced last year that it was embarking on a similar scheme to offer e-sports education.
<urn:uuid:6c13445c-0181-417d-8977-d0ee36347526>
CC-MAIN-2017-09
https://arstechnica.com/gaming/2016/01/norwegian-high-school-puts-e-sports-and-gaming-on-the-timetable/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00025-ip-10-171-10-108.ec2.internal.warc.gz
en
0.971921
483
2.640625
3
Few of us realize the scope of the operations, the technological resources and the number of personnel involved in responding to a major air disaster. The task of locating and retrieving victims and wreckage -- essential in determining probable causes of the crash -- requires the coordinated efforts of many civil and military agencies, hundreds of technicians and specialists and the convergence of advanced technologies. Aviation safety often depends on the timeliness and thoroughness with which these operations and subsequent investigations are carried out. When Alaska Airlines Flight 261, en route from Puerto Vallarta to San Francisco, lost elevator control at 30,000 feet and dove into the sea off the coast of southern California last year, the crash triggered an immediate response from federal, state and local agencies, the military and several private organizations. From the initial search and rescue efforts to the final recovery of victims and aircraft remains, interagency cooperation and the application of advanced technologies such as GIS, GPS, sonar and other remote-sensing systems played a major role in expediting complex operations. A Grim Task The crash occurred at about 4:30 p.m. on Monday, Jan. 31. Within minutes, search and rescue helicopters from Point Mugu Naval Air Station were on scene, followed by surface craft from the Channel Islands United States Coast Guard (USCG) Station, commanded by Captain Andy Jones. Jones coordinated the search for survivors by USCG and volunteer vessels, and set up a security zone around the area of floating debris. At about 6 p.m., the smaller USCG craft was relieved by the 82-foot cutter, Point Carrew, from the Eleventh Coast Group, Long Beach. Because the plane went down in the Channel Islands National Marine Sanctuary (CINMS), one of the many authorities the USCG alerted was the Santa Barbara headquarters of the National Oceanic and Atmospheric Administration's (NOAA) CINMS, the agency that protects and manages the resources of the sanctuary. CINMS has an extensive GIS system with multiple data layers of the islands, information on sea conditions and bathymetric data (topography of the ocean bottom) within the sanctuary. Their staff could provide orientation maps and other supporting data. Although no survivors were found in the immediate area of the crash site, the USCG continued to search the floating debris throughout the night. As many remains as possible had to be recovered and turned over to National Transportation Safety Board (NTSB) investigators. To assist them in tracking the floating debris field in the darkness, the USCG called on the mapping capabilities of the National Ocean Service (NOS) and requested an assessment of other potential NOAA assets in the region that could assist with locating debris. After receiving word that an Alaska Airlines jet had gone down in the sanctuary, NOS plotted the site coordinates and mobilized its staff. Throughout the evening, they remained in contact with the USCG, assisting in estimating the drift of the floating debris field. At the Santa Barbara office, CINMS physical scientist Ben Waltenberger forwarded the periodic coordinates of the field, along with wind and sea conditions, to the USCG and the Hazardous Materials Assessment Division (HAZMAT). At HAZMAT headquarters in Seattle, the data was put into a modified oil-spill modeling program and produced a projected path and dispersion rate for the debris.
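The exact trajectory software HAZMAT used is not described here, but the basic arithmetic of projecting a surface drift path from current and wind observations can be sketched in a few lines. The Python fragment below is a simplified, hypothetical illustration only; the 3 percent wind-leeway factor, the one-hour time step and the function names are assumptions for illustration, not details taken from NOAA's model.

import math

def project_drift(lat, lon, currents, winds, leeway=0.03, hours=12):
    # currents and winds are hourly (north_m_per_s, east_m_per_s) tuples
    positions = [(lat, lon)]
    for (cur_n, cur_e), (wind_n, wind_e) in zip(currents[:hours], winds[:hours]):
        north_m = (cur_n + leeway * wind_n) * 3600      # meters drifted north in one hour
        east_m = (cur_e + leeway * wind_e) * 3600       # meters drifted east in one hour
        lat += north_m / 111_000                        # ~111 km per degree of latitude
        lon += east_m / (111_000 * math.cos(math.radians(lat)))
        positions.append((lat, lon))
    return positions                                    # hourly fixes ready to load into a GIS layer

Each projected position can then be geo-referenced and overlaid on existing data layers, which is essentially what the GIS product delivered to the cutter that night.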
Although the model was developed for oil spills, Chief Scientific Support Coordinator Commander Jim Morris said it has many other applications. "You just have to change a few variables to make the model do something a little different." Coordinates of the projection were sent back to Santa Barbara and loaded into the GIS. The same evening, the USCG cutter Point Carrew had a map with a projected path of the debris field for the next 12 hours. "In emergency situations, where you have to make quick decisions, the ability to visualize data is one of the most important assets you can have," said Waltenberger. Bring in the Subs Within minutes of the crash, the NTSB in Washington, D.C., contacted Naval Sea Systems Command and requested mobilization of their Supervisor of Salvage (SUPSALV) Division. Since the plane was estimated to be at a depth of 700 feet, recovery operations called for side-scan sonar and ships with remotely operated vehicles (ROV) -- tethered robots capable of working for days at depths beyond the practical limit of divers. In San Diego, the M/V Kellie Chouest, a submarine rescue support ship operated by the Navy's Deep Submergence Unit, was ordered to proceed to the site with the ROV Scorpio and retrieve the plane's black boxes -- the cockpit voice recorder and flight data recorder. At Port Hueneme Naval Base in Ventura, Calif., the USNS Sioux, a fleet tug operated by the Military Sealift Command, and the research vessel M/V Independence, operated under contract by Mar Inc., were readied for side-scan mapping and ROV recovery operations. Deep Drone, SUPSALV's primary salvage ROV system for depths to 8,000 feet, is kept in constant readiness for recovery operations in any part of the world. ROVs are controlled from the surface through an umbilical that carries control signals and power for video and lights that enable the operator to navigate the craft visually and acoustically. ROVs have an onboard sonar, a still camera and two mechanical arms (manipulators) that can work with tools, attach rigging and pick up objects ranging from the size of a teacup to objects weighing over 200 pounds. Launching the ROVs While the ROVs were preparing to launch, the CINMS staff in Santa Barbara shifted to developing maps of the seafloor using existing bathymetric data and geo-referenced bottom imagery captured a few years earlier by ROV cameras. "The primary purpose of bottom mapping was to look at the terrain in the area of the crash," said Waltenberger. "The Navy was about to put ROVs down and wanted to know what kind of terrain they were going to encounter." MSD Resource Protection Coordinator Lisa Symons said the CINMS team was looking for things such as abandoned wellheads and underwater pipelines. "There is an active oil field in that area, so we were looking for anything that might hamper recovery efforts. We were also talking with SUPSALV about the natural resources they would encounter in the area. Part of our concern and part of the reason we were involved in mapping the debris field was to know where that floating debris was going to end up and how it might impact marine resources." NOAA produced overviews of the area and embedded the bottom imagery in 2D and 3D bathymetric maps. Users could click on different points in the area of the debris field or wreckage and actually see what the bottom looked like at those points. According to Waltenberger, the area around the crash site was flat and sandy.
CINMS put up the maps in a field office at Port Hueneme and made them available to federal, state and local emergency response agencies involved in the operation. They also put maps on the Internet for the press and the general public. The Black Box The arrival of the NTSB investigation team early Tuesday morning coincided with the arrival of the M/V Kellie Chouest with the ROV Scorpio onboard. Since the salvage team knew the type of bottom their ROV would be working in, they began the search for the black boxes almost immediately. Using a handheld transponder, the team locked onto the characteristic pinging coming from the recorders in the tail section of the plane and pinpointed their coordinates using differential GPS, which had three- to five-meter accuracy. By Wednesday, the team had retrieved the cockpit voice recorder, and by noon the following day, the flight data recorder. Less than 72 hours after the crash, both boxes were at NTSB headquarters in Washington. At about the same time that the Kellie Chouest was retrieving the first recorder, the side-scan sonar and the ROV Deep Drone were arriving at Port Hueneme. The sonar went aboard the Sioux, and the Deep Drone went aboard the Independence. The sonar team was tasked with producing a precision bathymetric map of the debris field, which would be used to guide the Independence and Deep Drone in recovering victims and the aircraft. Mapping the Ocean Floor Side-scan sonar is the principal tool for producing bottom topography and locating sunken objects (see "Sophisticated Systems Comb Ocean Floor for Clues to TWA Flight 800 Disaster," February 1997). As the torpedo-shaped sonar sensor is towed back and forth over the target area in a series of parallel, overlapping tracks, its pulses are reflected off the ocean floor and the objects on it. The echoes are received by the sensor and sent to a Data Acquisition and Processing System, which is tied into the vessel's GPS-based navigation system. The output is highly detailed, georeferenced imagery in real time. The search effort with the side scan was completed in less than a day. According to Salmon, the debris field was quite small. "The plane obviously went in at a steep angle, so we were able to quickly determine the boundaries of the primary debris field and develop a very precise map. Fortunately for us, the field was in a flat, sandy area. If you went a half mile in almost any direction, you were in rough, rocky canyons, peaks and valleys, with all kinds of problems." At the request of the NTSB, the sonar team also ran a survey outside the primary area for isolated aircraft debris, but none was found. Following processing and review of the sonar map by the command center in Port Hueneme, the Independence put Deep Drone down to document the debris field on videotape. The recovery team monitored the position of the drone, relative to the ship, through a transponder tracking system. Since the vessel had to stay above the drone at all times, data from the tracking system was used to keep the vessel in position through the use of thruster propellers. Images from the video were recorded along with the drone's exact depth, x-y coordinates and heading. After reviewing the tape and evaluating areas for recovery, the NTSB and SUPSALV sent the drone back for close-ups of specific targets. Retrieval of the victims was completed on Feb. 6. "It was a slow, tedious process, because at that depth it's hard to operate quickly," Salmon said. "Everything is done in slow motion."
The Final Sweep At this point, the focus of investigator interest shifted to the tail section: specifically, the horizontal stabilizer, its control mechanism and the vertical stabilizer. Recorded conversations between air traffic control and the plane indicated the pilots had been having difficulty maintaining altitude as much as 30 minutes prior to the crash. On Boeing MD-80 aircraft, the horizontal stabilizer works on the principle of the garage door opener: A long, threaded jackscrew lifts and lowers the door as the screw rotates back and forth through a fixed nut. On the plane, the jackscrew moves the wing-like horizontal stabilizer up or down, depending on whether the pilot moves the control to climb, fly level or descend. By Feb. 10, the Deep Drone team had recovered the essential parts of the tail section. A close-up showed stripped threads still clinging to the jackscrew. According to the Los Angeles Times, the NTSB said the jackscrew had no lubricating grease on it. Other priority parts recovered included flight control surfaces, all wing sections, flaps, rudder components and actuators. SUPSALV Operations Specialist Keith Cooper pointed out that most parts were recovered in specially constructed debris baskets by Deep Drone or slung under the vehicle in specially designed rigging. On Feb. 22, the engines and other large parts of the plane were hauled to the surface with the aid of the fleet tug Sioux. After a subsequent video survey showed the remaining debris to be mostly fragmentary, SUPSALV hired the Sea Clipper, a commercial trawler, to retrieve the remaining pieces with a small-gauge net. The trawler completed the sweeps in two weeks. A final video survey following the last trawl on March 15 showed the area to be clear of debris. Retrieving the victims and the wreckage from Alaska Flight 261 took the coordinated efforts of massive resources and hundreds of specialists from government agencies, military units and civilian organizations. At various times, the operation drew on combined technologies and resources from the U.S. Coast Guard, the Air Force, Navy, NOAA and private organizations. The task was completed in 48 days and provided the NTSB with enough information to identify the probable causes of the crash and prevent its recurrence in other aircraft of this design.
<urn:uuid:2b9a6c0a-a03e-46b0-85c4-6a6d8fcfe85b>
CC-MAIN-2017-09
http://www.govtech.com/featured/Retrieving-The-Wreckage.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00377-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958256
2,651
2.5625
3
About the TLS Extension Server Name Indication (SNI) When website administrators and IT personnel are restricted to using a single SSL Certificate per socket (the combination of an IP address and a port), it can cost a lot of money. This restriction causes them to buy multiple IP addresses for regular https websites from their domain host or buy hardware that allows them to utilize multiple network adapters. However, with Apache v2.2.12 and OpenSSL v0.9.8j and later, you can use a Transport Layer Security (TLS) extension called SNI. SNI can secure multiple Apache sites using a single SSL Certificate and use multiple SSL Certificates to secure various websites on a single domain (e.g. www.yourdomain.com, site2.yourdomain.com) or across multiple domains (www.domain1.com, www.domain2.com)—all from a single IP address. The benefits of using SNI are obvious—you can secure more websites without purchasing more IP addresses or additional hardware. Since this is a fairly recent update with Apache, browsers are only recently supporting SNI. Most current major desktop and mobile browsers support SNI. One notable exception is that no versions of Internet Explorer on Windows XP support SNI. For more information on which browsers support SNI, please see SNI browser support. To use SNI on Apache, please make sure you complete the instructions on the Apache SSL installation page. Then continue with the steps on this page.
Setting up SNI with Apache
To use additional SSL Certificates on your server you need to create another Virtual Host. As a best practice, we recommend making a backup of your existing .conf file before proceeding. You can create a new Virtual Host in your existing .conf file or you can create a new .conf file for the new Virtual Host. If you create a new .conf file, reference it from your existing .conf file with an Include directive. Next, in the NameVirtualHost directive list your server's public IP address, *:443, or other port you're using for SSL (see example below). Then point the SSLCertificateFile, SSLCertificateKeyFile, and SSLCertificateChainFile to the locations of the certificate files for each website as shown below:
NameVirtualHost *:443
<VirtualHost *:443>
    ServerName www.yoursite.com
    DocumentRoot /var/www/site
    SSLEngine on
    SSLCertificateFile /path/to/www_yoursite_com.crt
    SSLCertificateKeyFile /path/to/www_yoursite_com.key
    SSLCertificateChainFile /path/to/DigiCertCA.crt
</VirtualHost>
<VirtualHost *:443>
    ServerName www.yoursite2.com
    DocumentRoot /var/www/site2
    SSLEngine on
    SSLCertificateFile /path/to/www_yoursite2_com.crt
    SSLCertificateKeyFile /path/to/www_yoursite2_com.key
    SSLCertificateChainFile /path/to/DigiCertCA.crt
</VirtualHost>
If you have a Wildcard or Multi-Domain SSL Certificate, all of the websites using the same certificate need to reference the same IP address in the VirtualHost IP address:443 section like in the example below:
<VirtualHost 192.168.1.1:443>
    ServerName www.domain.com
    DocumentRoot /var/www/
    SSLEngine on
    SSLCertificateFile /path/to/your_domain_name.crt
    SSLCertificateKeyFile /path/to/your_private.key
    SSLCertificateChainFile /path/to/DigiCertCA.crt
</VirtualHost>
<VirtualHost 192.168.1.1:443>
    ServerName site2.domain.com
    DocumentRoot /var/www/site2
    SSLEngine on
    SSLCertificateFile /path/to/your_domain_name.crt
    SSLCertificateKeyFile /path/to/your_private.key
    SSLCertificateChainFile /path/to/DigiCertCA.crt
</VirtualHost>
Now restart Apache and access the https site from a browser that supports SNI. If you set it up correctly, you will access the site without any warnings or problems.
You can add as many websites or SSL Certificates as you need using the above process.
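Once the virtual hosts are in place, it is worth confirming that each hostname actually returns its own certificate. The short Python script below is one way to check from a client machine; it uses only the standard library, and the hostnames shown are placeholders for your own sites rather than anything defined on this page.

import socket
import ssl

def certificate_common_name(hostname, port=443):
    # wrap_socket's server_hostname argument is what populates the SNI extension
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            subject = dict(pair[0] for pair in tls.getpeercert()["subject"])
            return subject.get("commonName")

for site in ("www.yoursite.com", "www.yoursite2.com"):   # placeholder hostnames
    print(site, "->", certificate_common_name(site))

If a hostname comes back with the wrong common name, or the handshake fails certificate verification, the corresponding VirtualHost block is the first place to look.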
<urn:uuid:d6cee9a8-be7c-4e2a-9b34-e3df4cfea52d>
CC-MAIN-2017-09
https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00321-ip-10-171-10-108.ec2.internal.warc.gz
en
0.738231
940
2.75
3
Minecraft and Watson Team Up to Teach By Mike Vizard | Posted 2016-03-09 The gamification of learning has the potential to fundamentally change how complex information is taught across a broad range of e-learning environments. In a sign of how gaming is about to transform the way we learn, a small team of high school students at Connally High School in Austin, Texas, hooked up an instance of Microsoft's Minecraft video game to the IBM Watson artificial intelligence platform to teach medical students how viruses function. Working with these students, teacher David Conover got his class to use a medical corpus created by Tufts University to transform Minecraft into a teaching tool that uses Watson running as a cloud service to answer students' questions. A self-described serial entrepreneur, Conover says he found himself teaching video game design in the high school. To get the students interested in developing the application, he figured that a familiar Minecraft gaming construct would have the most appeal. So, over a summer, he worked with six students on a full-time basis to develop the application. Conover says that the students spent every free moment they had volunteering to create the app. For teens who normally eschew anything to do with school, that project represented a major breakthrough. Using Natural Language to Ask Questions IBM got involved in this project when Conover and the students discovered the application programming interfaces (APIs) that IBM makes available on the Watson cloud service. Without much effort, the students made it possible to type a question using natural language in Minecraft that would be answered by Watson. That allows the application to easily augment the information the teens spent all summer inputting into the original app. Conover says the students now plan to tap into the voice capabilities of Watson, and also to start building other gaming applications to teach other subjects. For example, they might cover topics such as the best ways to optimize traffic patterns in Austin or figuring out what would be involved in making a journey to Mars. In the meantime, gamification is all the rage in education circles. That means it’s only a matter of time before this technology is applied to all forms of training—both in and out of the classroom. The challenge now is figuring out exactly what gaming metaphor best fits the type of material being taught. Once that’s decided, cognitive computing platforms should provide all the data needed to drive that application in a place that is, quite literally, a simple API call away.
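The article does not show the students' code, and the endpoint below is a made-up placeholder rather than IBM's actual Watson API, but the general pattern of relaying an in-game question to a hosted question-answering service and returning the reply might look something like this Python sketch:

import requests  # third-party HTTP client

QA_URL = "https://example.com/qa/ask"        # hypothetical endpoint, not a real Watson URL

def ask_service(question, api_key="YOUR-KEY"):
    # Send a natural-language question and return the service's answer text
    response = requests.post(
        QA_URL,
        json={"question": question},
        headers={"Authorization": "Bearer " + api_key},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("answer", "No answer returned.")

# In the classroom app, the question would come from a Minecraft chat command
# and the answer would be written back into the game's chat window.
print(ask_service("How does a virus enter a host cell?"))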
<urn:uuid:459929f8-535f-427e-a1a3-cfbdf2249207>
CC-MAIN-2017-09
http://www.baselinemag.com/blogs/minecraft-and-watson-team-up-to-teach.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00497-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950182
531
3.359375
3
To help the Internet move on from usernames and passwords, Google wants to put a ring on it. Google's engineers have been experimenting with hardware that would act as a master key for online services. Examples include a smart ring for your finger, a cryptographic USB stick, or a token embedded in smartphones. Google vice president of security Eric Grosse and engineer Mayank Upadhyay outline their proposal in a research paper for this month's IEEE Security & Privacy Magazine, according to a report in Wired. The idea is to prevent remote hackers from accessing online accounts through stolen usernames and passwords. Without physically stealing the login device, they'd have no other way to gain entry. Some Web services already offer this type of security through two-step authentication. For instance, when you sign into Gmail on an unrecognized PC, you can have Google send a text message to your phone with a validation code. Once you enter the code, Gmail can remember that PC indefinitely. The problem with two-step authentication is that it's cumbersome to validate all your computers, and to go through the process just to check e-mail on a friend's computer. Signing in when your phone is out of service can be an issue as well, although Google does provide 10 backup codes for that situation. A physical device--ideally one that could communicate wirelessly with computers--would make the process easier. "We'd like your smartphone or smartcard-embedded finger ring to authorize a new computer via a tap on the computer, even in situations in which your phone might be without cellular connectivity," Google's engineers write. Of course, relying on a ring or other device to log in raises its own challenges. There'd have to be a backup sign-in method--one that's more secure than just a password--in case the device becomes lost or damaged. And while a ring or other contact-based device would help protect users from faraway hackers, it'd be easier for spouses, co-workers or children to steal. Google's engineers admit that they might still need to require passwords, but those passwords wouldn't have to be as complex as today's hacker-proof formulas. Also, not everyone will want to wear a ring or carry their phones around all the time just to use their computers. Web developers will have to get on board as well, or at least embrace services like Account Chooser, which would let larger services like Facebook or Google act as a master login for smaller sites. Otherwise, we'll still have to remember a whole lot of passwords for sites that don't accept hardware-based authentication. Google's not the only tech giant that's interested in replacing the password. Last year, Apple bought AuthenTec, a fingerprint scanner firm, leading to rumors that future iPhones could have fingerprint sensors built into their home buttons. The idea of killing the password became a popular notion last year, after a clever hacker managed to wipe out the digital life of Wired reporter Mat Honan. In a sense, it was a wake-up call, but given how often major websites get hacked, a better solution now seems long overdue. Hardware solutions from the world's major tech players could be just what we need. This story, "Can hardware help kill the password? Google thinks so" was originally published by PCWorld.
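The article does not spell out the protocol Google has in mind, but the core idea of proving possession of a device can be illustrated with a simple challenge-response exchange. The Python sketch below uses a shared secret and HMAC from the standard library purely as an illustration; real hardware tokens typically rely on public-key cryptography rather than a secret shared with the server.

import hashlib
import hmac
import secrets

DEVICE_SECRET = secrets.token_bytes(32)   # provisioned into the ring or token and known to the server

def server_issue_challenge():
    return secrets.token_bytes(16)        # a fresh random nonce for every login attempt

def device_sign(challenge):
    # The device answers the challenge without ever revealing its secret
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge, response):
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
print(server_verify(challenge, device_sign(challenge)))   # True only if the real device answered

Because the secret never leaves the device and every challenge is used once, a stolen username and password alone cannot be replayed to log in.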
<urn:uuid:de56f484-74ac-43ee-b4ac-44bbbf31510a>
CC-MAIN-2017-09
http://www.itworld.com/article/2715449/security/can-hardware-help-kill-the-password--google-thinks-so.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00497-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955589
679
2.6875
3
Data Masking: The ABCs of Network Visibility When it comes to network security, full visibility isn’t always ideal. There are some things that should stay hidden. Most companies have data compliance requirements they must adhere to. HIPAA, PCI, and internal best-practice policies mean Personally Identifiable Information (PII) must be handled with care. Data masking helps organizations control who has access to this sensitive data. What is data masking? Data masking is different from restricting data access. Access restriction renders data invisible. Data masking replaces vulnerable, or sensitive, data with information that looks real. When data is masked, it’s altered so that the basic format remains the same, but the key values are changed. For example, a long card number could be masked in the following manner: 1234 5678 1234 5678 becomes 1234 XXXX XXXX 5678. Data can be masked, or changed, in a variety of ways:
- Substitution – When external, unrelated, or randomly generated data is used to replace parts of the real data. E.g. a random list of names could be used to replace a real list of customer identities.
- Shuffling – When data is swapped or shuffled within the database.
- Encryption – When sensitive data is converted into code, using an algorithm. The data can only be decrypted with a key.
- Number / Date Variance – Where each number / date in the set is altered by a random percentage of its real value.
- Masking Out – Where certain fields or parts of the data are replaced with a mask character (an X, for example).
Whichever method is used, the data must be altered in such a way that the original values cannot be obtained through reverse engineering. Typical Use Cases On the whole, data masking helps organizations comply with data protection requirements. Here are some specific situations in which data masking capabilities are particularly useful. Testing / Outsourced Data – Companies may need to provide true-to-life datasets to develop relevant software. In less secure test environments such as these, realistic data is needed, but companies need not risk data security by using actual customer information. Since creating a ‘fake’ but realistic dataset from scratch is time-consuming, companies can use the real data they have, but alter it through data masking to protect customers. Similarly, if business IT operations are outsourced to another organization, data can be masked to prevent exposure of real data to more people than necessary. Network Data – Organizations often need to record and monitor network data. But compliance requirements mean they must avoid storing PII. With data masking, companies can record network data while hiding sensitive data. SSL Decryption – Secure Socket Layer (SSL) encryption is the standard technology used to send private information. For security purposes, organizations must decrypt and examine any SSL traffic on their networks. One of the dangers of SSL decryption is that it makes sensitive data available to anyone with access to network monitoring tools. Clever network tools can decrypt SSL data, while masking data that doesn’t need to be exposed. According to a research survey conducted by Enterprise Management Associates in 2016, data masking is one of the Top 5 most commonly used packet broker features. So, if you are considering a purchase of data masking equipment, here are some things to keep in mind: Use of masked data – Make sure you understand how you intend to use the data before you make your purchase, as this can save you time and money.
For instance, is the plan to simply distribute the data to a data loss prevention (DLP) device for analysis, or do you plan to access the data natively for searches? If you plan to access the data natively, then your solution needs to support regular expression (Regex) search capability. Once the specific information, or type of information, that matches the search criteria is found, that data can be sent to a tool (like a DLP) for further processing. This search capability allows your monitoring tools to be more effective, as they have less data to sift through.
Easy access to the data – If you need access to the data for Regex searches, consider purchasing a network packet broker that supports data masking. This will allow you to easily collect the data, search through it, and then forward it to monitoring equipment (like a DLP) for advanced data searches, data storage equipment (like a SAN), or compliance and recording tools.
Distribution of masked data – Once the data is masked, how do you plan to distribute it to the appropriate monitoring tools? This is where a packet broker comes in handy, distributing the data to the device(s) it needs to reach.
More Information on Data Masking
Data masking is a powerful capability that can help your regulatory compliance initiatives. When recording network data, it allows you to hide sensitive information with ease.
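None of the masking products mentioned above publish their internals, but the "masking out" rule and the Regex search step described above are easy to illustrate. The snippet below is a minimal sketch: the card pattern, function names, and sample record are invented for illustration and are nowhere near PCI-grade detection.

```python
import re

# Loosely matches a 16-digit card number written in four groups.
# Real PII/PCI detection is far more involved; this is illustration only.
CARD_PATTERN = re.compile(r"\b(\d{4})[ -]?(\d{4})[ -]?(\d{4})[ -]?(\d{4})\b")

def mask_out(match):
    """Keep the first and last groups, replace the middle groups with X's."""
    first, _, _, last = match.groups()
    return f"{first} XXXX XXXX {last}"

def mask_record(text):
    """Return a copy of the record with card-like numbers masked out."""
    return CARD_PATTERN.sub(mask_out, text)

def contains_card(text):
    """Regex search step: flag records that should go to a DLP tool."""
    return CARD_PATTERN.search(text) is not None

if __name__ == "__main__":
    record = "Customer paid with card 1234 5678 1234 5678 on 2016-03-01."
    if contains_card(record):
        print(mask_record(record))
        # -> Customer paid with card 1234 XXXX XXXX 5678 on 2016-03-01.
```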
<urn:uuid:df407151-542b-4c62-b183-45f1757b4f0b>
CC-MAIN-2017-09
https://www.ixiacom.com/company/blog/data-masking-abcs-network-visibility
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00197-ip-10-171-10-108.ec2.internal.warc.gz
en
0.892027
1,023
3.34375
3
Farmers are leading the way with the Industrial Internet of Things (IIoT), but do they need ‘precision farming’ to ensure global food security?
“We need to achieve long-term, sustainable global food security without causing environmental damage.” This was the call to action from Richard Green, head of engineering research for the National Center for Precision Farming, at the Smart Summit London late last month. With a warning that we face an expected global population of nine billion by 2050, Green also spoke of the need to build homes, roads and other important infrastructure. All of this means we will lose land for growing crops, and the end result is that we will need to grow more food with less.
Precision farming – how does it work?
Green’s solution: precision farming – a management concept born out of the advent of global positioning system (GPS) and global navigation satellite system (GNSS) technology. This method is based on observing, measuring and responding to inter- and intra-field variability in crops. The point is to define a decision support system for management of entire farms, with the aim of optimizing returns while preserving resources. “It’s a solution that is both sustainable and economic,” he claimed.
Green’s argument, following years of research, is that precision farming — or satellite farming, as it is sometimes known — can give farmers more control of the IIoT. By using satellites to pinpoint precise locations in a field, farmers gain ‘better’ data on plants and insight into methods that work well. With better information, plants, and technology, farmers can sustainably improve yields and profits while optimizing the use of resources. “Using robotics and autonomous vehicles [with in-built GPS], we are trying to treat each plant individually. We haven’t got there yet but that’s the aim.”
Farmers and the IIoT
It has been well-documented that farmers are already using Internet of Things (IoT) technology to great effect in some parts of the world. So are they doing it right? According to Green: “Today, farmers complain that they have too much data and don’t know what to do with it. Tomorrow they will realize they have too little and that what they do have isn’t good enough!”
Instead of accruing more data for the sake of it, Green is trying to encourage farmers globally to be more precise with their data accumulation. Everything must be geo-tagged so it can be linked to a precise location. His goal is to build a ‘giant data cloud’ from farms and businesses across the world. In theory, farmers would then be able to access this database to ask questions and make better-informed decisions about how they grow their own produce. As an example, Green suggested that precision farming could help farmers see what would happen if they changed the date of farming, and how that affects a yield. They could also see how much ‘each square meter of the farm will make in profit if they incorporate data analytics.’
Targeting developing economies
“Small marginal gains are possible, which increases our ability to adapt to climate change,” he notes. Whether the gains would be as great in more advanced countries is questionable. In the UK, where the best farmers can yield 10-12 tonnes per hectare, efficiency could improve slightly, Green admits. What he’s really excited about, though, is the potential for developing countries. These are countries that produce 1-2 tonnes per hectare, ‘so the opportunity is huge’.
If data analytics can help farmers in developing countries to just double their yield, that is still ‘incredibly significant’. The problem Green and his team face is that many different parties are invested in precision farming, but they are not yet working together. A large part of his role is to encourage them to collaborate. “Only by all of us coming together can we actually make it work.”
Another major factor is whether or not the developing countries Green spoke of actually have the money and the capability to invest in the upfront costs of IIoT technology without support. There is already a major food shortage in some parts of the world, so while the IoT will be useful for some countries in time, there are others that need a solution today.
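Green's "giant data cloud" is an ambition rather than a published schema, but the value of geo-tagging every observation is easy to illustrate. The sketch below uses an invented record layout and made-up numbers to show how per-square-metre yield and profit questions become simple once each plot measurement carries its coordinates.

```python
from dataclasses import dataclass

@dataclass
class PlotObservation:
    lat: float          # WGS84 latitude from GPS/GNSS
    lon: float          # WGS84 longitude
    area_m2: float      # plot area in square metres
    yield_kg: float     # harvested weight attributed to the plot
    cost: float         # input costs attributed to the plot
    revenue: float      # revenue attributed to the plot

    def yield_per_m2(self):
        return self.yield_kg / self.area_m2

    def profit_per_m2(self):
        return (self.revenue - self.cost) / self.area_m2

# Invented sample data: two geo-tagged plots on the same farm.
observations = [
    PlotObservation(52.07, -0.63, area_m2=2500, yield_kg=2100, cost=310.0, revenue=520.0),
    PlotObservation(52.08, -0.62, area_m2=2500, yield_kg=1650, cost=305.0, revenue=410.0),
]

for obs in observations:
    print(f"({obs.lat}, {obs.lon}) -> "
          f"{obs.yield_per_m2():.2f} kg/m2, {obs.profit_per_m2():.3f} profit/m2")
```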
<urn:uuid:73c7fdc7-fbd2-4c42-bedd-184c34eb2398>
CC-MAIN-2017-09
https://internetofbusiness.com/precision-farming-security-iiot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00245-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949613
919
2.90625
3
Rahajarison P.,FOFIFA DRZV | Arimanana A.H.,FOFIFA DRZV | Raliniaina M.,FOFIFA DRZV | Stachurski F.,FOFIFA DRZV | And 2 more authors. Infection, Genetics and Evolution | Year: 2014 Although Amblyomma variegatum is now regularly recorded up to 1600. m in altitude in the Malagasy highlands, where it was previously reported not to persist without a constant supply of ticks introduced from lower infested regions, some parts of the highlands remain tick-free. Studies were carried out to verify whether the cold climate prevailing in these areas in June-September could prevent the survival and moulting of nymphs, the tick life stage present in the environment at this period. Cohorts of engorged A. variegatum nymphs were released from June to August in six different sites (three in 2010, altitudes 1200-1415. m; three in 2011, altitudes 1585-1960. m) which were reported to be either tick-infested (two in 2010, one in 2011) or tick-free. The ticks were placed in cages driven into the soil and open at the bottom so that they could hide in the soil or root network. Of the 1975 nymphs released in 2010 and the 1494 released in 2011, 86% and 85% were recovered, respectively. Twenty to 23% of the recovered ticks were dead, and some of them were obviously predated; predation also likely contributed to the disappearance of the non-recovered ticks. When the rainy season started in October, 59% of the newly moulted adults were still alive in the cages. The moulting period lasted up to 20. weeks, depending on the site and release period. As verified in 2011, unfed nymphs could also survive the cold season. Various A. variegatum life stages are thus able to survive the adverse cold and/or dry seasons: unfed nymphs, engorged nymphs in developmental diapause, moulted adults in behavioural diapause as observed previously. Strong variation in mortality and recovery rates was observed between cages, highlighting the importance of the micro-environment and micro-climate for tick survival. The minimum temperature recorded in the field sites varied from 1.1. °C to 6.8. °C, but the tick-free sites were not the coldest ones; they were, however, those for which the temperature remained below 10. °C for the longest time over the study period. Recovery and mortality rates in the tick-free sites were similar to those of the tick-infested sites: the temperatures recorded during the study periods did not prevent ticks from surviving and moulting although it did delay the metamorphosis. Low temperature alone can therefore not explain the persistence of tick-free areas in the highlands. To further monitor survival, cohorts of engorged nymphs were also maintained in an incubator at 3.6. °C, 6.2. °C or 12.8. °C. More than 50% mortality was observed after 6. days at 3.6. °C, and after 15. days at 6.2. °C, whereas 18. days at 12.8. °C only delayed moulting. The collected survival, moulting and climatic data presented in this study should help to develop a predictive model to assess the distribution of A. variegatum according to climate characteristics. © 2014 Elsevier B.V. Source Nicolas G.,CIRAD - Agricultural Research for Development | Nicolas G.,French Agency for Food Environmental and Occupational Health and Safety | Durand B.,French Agency for Food Environmental and Occupational Health and Safety | Duboz R.,CIRAD - Agricultural Research for Development | And 3 more authors. 
Acta Tropica | Year: 2013 In 2008-2009 a Rift Valley Fever (RVF) outbreak occurred in the Anjozorobe area, a temperate and mountainous area of the Madagascar highlands. The results of a serosurvey conducted in 2009 suggested recurrent circulation of RVF virus (RVFV) in this area and potential involvement of the cattle trade in RVFV circulation. The objective of this study was to describe the cattle trade network of the area and analyse the link between network structure and RVFV circulation. Five hundred and sixteen animals that tested negative in 2009 were sampled again in 2010. The 2009-2010 cattle-level seroconversion rate was estimated at 7% (95% CI: 5-10%). Trade data from 386 breeders of 48 villages were collected and analysed using social network analysis methodology, nodes being villages and ties being any movements of cattle connecting villages. The specific practice of cattle barter, known as kapsile, that involves frequent contacts between cattle of two breeders, was observed in addition to usual trade. Trade data were analysed using a logistic model, the occurrence of seroconversion at the village level being the outcome variable and the network centrality measures being the predictors. A negative association was observed between the occurrence of seroconversion in the village and introduction of cattle by trade (p=0.03), as well as the distance to the nearest water point (p=0.002). Conversely, the practice of kapsile, was a seroconversion risk factor (p=0.007). The kapsile practice may be the support for inter-village RVFV circulation whereas the trade network is probably rather implicated in the introduction of RVFV to the area from other parts of Madagascar. The negative association of the distance to the nearest water point suggests that after RVFV introduction, a substantial part of transmission may be due to vectors. © 2013 Elsevier B.V. Source Cappelle J.,CIRAD - Agricultural Research for Development | Cappelle J.,Institute Pasteur in Cambodia | Caron A.,CIRAD - Agricultural Research for Development | Caron A.,University of Pretoria | And 17 more authors. Epidemiology and Infection | Year: 2015 Newcastle disease (ND) is one of the most important poultry diseases worldwide and can lead to annual losses of up to 80% of backyard chickens in Africa. All bird species are considered susceptible to ND virus (NDV) infection but little is known about the role that wild birds play in the epidemiology of the virus. We present a long-term monitoring of 9000 wild birds in four African countries. Overall, 3.06% of the birds were PCR-positive for NDV infection, with prevalence ranging from 0% to 10% depending on the season, the site and the species considered. Our study shows that ND is circulating continuously and homogeneously in a large range of wild bird species. Several genotypes of NDV circulate concurrently in different species and are phylogenetically closely related to strains circulating in local domestic poultry, suggesting that wild birds may play several roles in the epidemiology of different NDV strains in Africa. We recommend that any strategic plan aiming at controlling ND in Africa should take into account the potential role of the local wild bird community in the transmission of the disease. © Cambridge University Press 2014. Source Porphyre V.,CIRAD | Rasamoelina-Andriamanivo H.,FOFIFA DRZV | Rasamoelina-Andriamanivo H.,University of Antananarivo | Rakotoarimanana A.,University of Antananarivo | And 4 more authors. 
Parasites and Vectors | Year: 2015 Background: Taenia solium cysticercosis is a parasitic meat-borne disease that is highly prevalent in pigs and humans in Africa, but the burden is vastly underestimated due to the lack of official control along the pork commodity chain, which hampers long-term control policies. Methods: The apparent and corrected prevalences of T. solium cysticercosis were investigated in pork carcasses slaughtered and retailed in Antananarivo (Madagascar), thanks to a 12-month monitoring plan in two urban abattoirs. Results: Overall apparent prevalence was estimated at 4.6 % [4.2-5.0 %]. The corrected overall prevalence defined as the estimated prevalence after accounting for the sensitivity of meat inspection was 21.03 % [19.18-22.87 %]. Significant differences among geoclimatic regions were observed only for indigenous pigs, with an apparent prevalence estimated at 7.9 % [6.0-9.9 %] in the northern and western regions, 7.3 % [6.0-8.6 %] in the central region, and 6.2 % [4.7-7.8 %] in the southern region. In the central region, where both exotic and indigenous pigs were surveyed, indigenous pigs were 8.5 times [6.7-10.7] more likely to be infected than exotic improved pigs. Urban consumers were more likely to encounter cysticercosis in pork in the rainy season, which is a major at risk period, in particular in December. Differences between abattoirs were also identified. Conclusion: Our results underline the need for improved surveillance and control programmes to limit T. solium cysticercosis in carcasses by introducing a risk-based meat inspection procedure that accounts for the origin and breed of the pigs, and the season. © 2015 Porphyre et al. Source Guerrini L.,CIRAD RP PCP | Guerrini L.,Charles Sadron Institute | Paul M.C.,National Polytechnic Institute of Toulouse | Leger L.,Charles Sadron Institute | And 6 more authors. Geospatial Health | Year: 2014 While the spatial pattern of the highly pathogenic avian influenza H5N1 virus has been studied throughout Southeast Asia, little is known on the spatial risk factors for avian influenza in Africa. In the present paper, we combined serological data from poultry and remotely sensed environmental factors in the Lake Alaotra region of Madagascar to explore for any association between avian influenza and landscape variables. Serological data from cross-sectional surveys carried out on poultry in 2008 and 2009 were examined together with a Landsat 7 satellite image analysed using supervised classification. The dominant landscape features in a 1-km buffer around farmhouses and distance to the closest water body were extracted. A total of 1,038 individual bird blood samples emanating from 241 flocks were analysed, and the association between avian influenza seroprevalence and these landcape variables was quantified using logistic regression models. No evidence of the presence of H5 or H7 avian influenza subtypes was found, suggesting that only low pathogenic avian influenza (LPAI) circulated. Three predominant land cover classes were identified around the poultry farms: Grassland savannah, rice paddy fields and wetlands. A significant negative relationship was found between LPAI seroprevalence and distance to the closest body of water. We also found that LPAI seroprevalence was higher in farms characterised by predominant wetlands or rice landscapes than in those surrounded by dry savannah. 
Results from this study suggest that if highly pathogenic avian influenza H5N1 virus were introduced in Madagascar, the environmental conditions that prevail in Lake Alaotra region may allow the virus to spread and persist. Source
<urn:uuid:ec457b48-784c-4f9b-b473-f96163e65fce>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/fofifa-drzv-303822/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00189-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942884
2,393
3.078125
3
Milliken & Co., a textile and chemical manufacturer, has introduced BioSmart, a new antimicrobial-charged fabric technology that harnesses the sanitizing power of EPA-registered chlorine bleach. BioSmart is designed to help reduce the spread of infection-causing bacteria and viruses, including emerging antibiotic-resistant microbes such as MRSA. Products made with BioSmart are key to effective infection prevention strategies and programs in the workplace, in community settings and at home, the company reports. According to the Centers for Disease Control and Prevention, Staph bacteria are one of the most common causes of skin infection in the United States and are a common cause of hospital-acquired infections such as pneumonia, surgical wound infections and bloodstream infections. While the majority of MRSA infections occur among patients in healthcare settings, it is becoming more common among competitive sports participants and people within community settings including schools, daycare facilities, health clubs and prisons, the company reports. BioSmart can be applied to synthetics, cotton and polyester/cotton fabrics and is ideal for industries where bacterial contamination is a concern. BioSmart fabrics are non-irritating to the skin, odorless, quick drying and moisture wicking, Milliken says. The company reports that BioSmart recharges after every washing so it is always functioning at full strength. G&K Services, an early adopter of BioSmart, is using the technology for butcher coats and other garments for the food safety and processing industries. Product plans are underway with a number of manufacturers in the healthcare, uniform and consumer products markets. for more information
<urn:uuid:ac8a450c-b2c5-43c9-b809-878957be08ac>
CC-MAIN-2017-09
http://apparel.edgl.com/news/Milliken-Debuts-BioSmart63599
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00541-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931478
327
2.578125
3
The Senate has passed the Broadband Data Improvement Act, a bill that unlike many others should do exactly what its title says. The quality of data the government collects on broadband service has long been derided as insufficient, which is problematic because the Federal Communications Commission (FCC) often makes policy determinations based on it. For example, the Commission currently defines broadband as any service operating at transmission rates as low as 200 kbps, a rate far lower than what the industry and consumers consider qualifies as broadband. Also, the agency assumes that if a single person in a zip code receives service from any given broadband supplier, then all residents of that zip could, an assumption frequently proven untrue. The new bill will lead to the development of better metrics and more accurate data. The House of Representatives passed a similar bill last year. The House and Senate bills contain a few dissimilarities and will still have to be squared before the measures contained in either or both become law. NCTA President and CEO Kyle McSlarrow said: “We applaud the Congress for approving this important legislation, which will enable policymakers to have a much clearer picture about the state of broadband in America. Improved data about the availability and speed of all broadband offerings will help accomplish the important goal of universal broadband for all Americans. As the largest broadband provider in America, our industry will continue to support efforts designed to spur broadband adoption and access in those areas where it currently doesn’t exist.” More Broadband Direct:
<urn:uuid:6ec84f8d-9e54-403d-b13c-9fe1951142a6>
CC-MAIN-2017-09
https://www.cedmagazine.com/news/2008/09/senate-passes-broadband-data-bill
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00593-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95108
307
2.5625
3
NEC breaks long-range speed record with terabit transmission - By Kathleen Hickey - Jan 12, 2012
Could super-fast, cost-effective transoceanic data communications be a thing of the near future? NEC Corp. recently broke an ultra-long-haul Internet speed record, successfully transmitting data at 1.15 terabits/sec over 10,000 kilometers, or 6,213 miles. The experimental superchannel optical transmission is the first time terabit data speeds have been achieved using a single laser source over such a distance – a distance long enough to span the Atlantic or Pacific oceans. Using wavelength division multiplexing (WDM), NEC also combined four superchannels to simultaneously transmit 4 terabits/sec, the company said in a release.
According to Sebastian Anthony in ExtremeTech, the engineers used standard technology to achieve the new speed record. “We’re not entirely sure, but the novel innovation here is probably the use of orthogonal frequency-division multiplexing (OFDM) superchannels with optical signals,” he said. Princeton, N.J.-based NEC Laboratories America conducted the test.
The technology would use the current Internet infrastructure, avoiding the cost and difficulty of laying new oceanic submarine cables. Faster speeds have been transmitted over fiber, but only for short distances. The current state of the art for transoceanic connectivity is between 10 and 100 gigabits/sec per optical fiber. Upgrading the world’s largest cable, TGN-P, which runs between Japan and the United States, to NEC’s technology would increase the cable’s capacity from 7 terabits/sec to 150 terabits/sec or more, Anthony said.
Kathleen Hickey is a freelance writer for GCN.
<urn:uuid:25d2f9e7-9887-4158-8efb-40eea31b7b9a>
CC-MAIN-2017-09
https://gcn.com/articles/2012/01/12/nec-terabit-speed-long-range-record.aspx?admgarea=TC_EMERGINGTECH
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00061-ip-10-171-10-108.ec2.internal.warc.gz
en
0.891238
382
2.5625
3
Nov 95 Level of Govt: State. Function: Environmental Protection. Problem/situation: Effective response to environmental management was inhibited by multiple databases in the Massachusetts's Executive Office of Environmental Affairs. Solution: EOEA developed an integrated environmental management system connecting each division. Jurisdiction: Massachusetts. Vendors: EDS, Oracle. Contact: John Rodman, assistant secretary of Environmental Affairs, 617/727-9800 x217. Bill Loller G2 Research Massachusetts's Executive Office of Environmental Affairs (EOEA) faced a series of problems in 1988. The problems could be boiled down to the lack of coordination by a myriad of subordinate agencies involved in monitoring environmental processes were obstructing the agency's ability to assess environmental quality holistically and accurately. Within the EOEA, the Department of Environmental Protection maintained separate databases to support divisions managing the air, land or water quality programs. The separation of these activities led to multiple inspectors at a single facility to approve of air, land and water permits as well as storage of key environmental information in separate databases. These factors created inefficiency within the department and confusion in the private sector. Inspectors often entered the same data into each of the three databases, and companies were forced to work with numerous engineers and inspectors in order to receive the permits necessary to legally operate. In 1988, the Commissioner of the Department of Environmental Protection developed a new environmental agenda and reorganized the programs under three new bureaus. The goal was to develop comprehensive, not segmented, environmental protection; cross-media inspections and compliance strategies; and inter-agency data sharing. Getting Connected In order to successfully fulfill these objectives, EOEA required a new environmental management system which would integrate environmental data and connect each division. The development and implementation of an Environmental Protection Integrated Computer System (EPICS) ensured that the new operational framework for the department would succeed. The implementation of an integrated system became a priority in 1989. In response to the Waste Prevention Facility-wide Inspections to Reduce the Source of Toxins (FIRST) initiative, the department solicited proposals from the vendor community for an integrated environmental database. EDS was selected based on the technical merits of the proposed system as well as the commitment of EDS to EPICS implementation and future planning. The department had previously worked with EDS for hardware and software in addition to consulting services. EPICS revolves around the facility master file (FMF), the comprehensive data model which can be accessed by all programs within the Department of Environmental Protection. CASE methodology was utilized in developing and designing this integrated data model. Essentially, FMF integrated ten individual databases from the following areas: hazardous waste transporters, hazardous waste, transfer storage disposal, hazardous waste handler, air quality, solid waste, industrial waste water, water pollution control, water supply, water management, and cross connections. The integration of these databases eliminated data redundancy and provided all programs with a complete environmental picture of a facility and the status of its regulatory compliance. A single Oracle database stores all of the FMF information. 
subsystems within EPICS. The regional offices located across the state and within the Boston area access EPICS via a network of LANs and WANs. In fact, approximately 2,000 employees of EOEA at over a dozen locations throughout the state are connected by the network to the EPICS database. >From an operational standpoint, cross-trained inspectors are now able to perform all necessary functions at one facility during a single trip. A consolidated report is generated from the FMF which details the facility's entire compliance history and permit fees. This streamlines the permitting and inspection process and also provides the facility with a single point of contact if problems should arise. Data on individual programs are also stored on EPICS through the program subsystems. Inter-Agency Data Sharing The department has found increased functionality with EPICS with respect to inter-agency data sharing and improved response to potentially serious situations. In some cases, the Department of Health will contact the department about the increase in illnesses from a particular area. The Department of Environmental Protection can quickly assess the environmental status of a certain area through the FMF, which contains information on air, land and water pollution sources. Similarly, if the department suspects a problem, data contained in the FMF provides the department with a quick way to analyze a facility's environmental compliance and target potential violators for inspections. Moreover, EPICS identifies, tracks, and monitors the use of toxic chemicals within the state. This feature allows the department to pinpoint problem areas and also make recommendations to companies about alternative non-toxic chemical substitutions. The department expanded the Toxic Use Reduction subsystem to track the total volume of toxic chemicals within the state, as well as the manner in which the chemicals are transported. The Department of Environmental Protection is extremely satisfied with the EPICS system. Once the department had a system that complemented the organizational framework, the department experienced a large improvement in productivity through more efficient and streamlined business processes as well as better customer service. Data Manager Douglas Priest pointed out that "just having the information available at a moment's notice is a major savings." EPICS also presents the opportunity to expand into other areas. The department believes that public access may become an important issue in the near future. Moreover, the department highly values EPICS based on two key issues: revenue generation and environmental reaction and protection. According to Victoria Phillips, fees coordinator for EPICS, the automated billing module for permit fees raised $7 million in 1994. The fees are used to hire inspectors and make additions to the system. Skip Russell, a senior systems analyst with the department noted that "without the possibility of fee generation, the EPICS system would have died. Fee generation allows continued development of the system." Secondly, through the facility master file, EPICS has dramatically improved the department's response to environmental inquiries or problems and enforcement of violators. Thus, these two factors have made EPICS an effective method of improving the state's protection of the environment. Overcoming Resistance The department experienced resistance to EPICS at the beginning of the project. 
However, after the inspectors realized the tremendous benefits and power of the system, EPICS became very favorably viewed by the entire department. As the department adds more modules to the system, an increase in technical personnel will be required to maintain the system. In the near future, the department plans to implement new components to EPICS to enhance the power of such a comprehensive database with other available technologies. In particular, the department is planning to integrate EPICS with a Geographic Information System on which the environmental data can be layered in a mapped format. The department is also interested in implementing Electronic Data Interchange (EDI) to more reliably transmit and receive data from other environmental databases, namely the U.S. EPA, as well as data from the companies that are being regulated. On a smaller scale the department has already increased the system's reporting features through a reporting tool that allows end-users to query and analyze data without having to go through the MIS group. The department is also evaluating the implementation of a GUI front end. These additions, particularly GIS and EDI, will have an enormous impact on the department. Both GIS and EDI will increase the functionality of the system and reduce administrative costs within the department and EOEA. The Massachusetts Executive Office of Environmental Affairs was named winner of the 1994 Computerworld Smithsonian award for the EPICS project. The award recognizes the technology industry's most creative and innovative uses of information technology that benefit a wide spectrum of society.
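The article describes the facility master file (FMF) only at a high level, so the following is a hypothetical, heavily simplified sketch of the core idea: air, water and waste program data keyed to a single facility ID so one query can produce a consolidated compliance picture. The table names, columns, and sample facility are invented, and Python's built-in sqlite3 module stands in for the Oracle database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facility     (facility_id INTEGER PRIMARY KEY, name TEXT, town TEXT);
CREATE TABLE air_permit   (facility_id INTEGER, status TEXT);
CREATE TABLE water_permit (facility_id INTEGER, status TEXT);
CREATE TABLE waste_permit (facility_id INTEGER, status TEXT);

INSERT INTO facility     VALUES (1, 'Acme Plating', 'Worcester');
INSERT INTO air_permit   VALUES (1, 'compliant');
INSERT INTO water_permit VALUES (1, 'overdue');
INSERT INTO waste_permit VALUES (1, 'compliant');
""")

# One facility ID ties the program subsystems together, so a single query
# yields the cross-media picture a consolidated inspection report needs.
row = conn.execute("""
    SELECT f.name, a.status, w.status, s.status
    FROM facility f
    JOIN air_permit   a ON a.facility_id = f.facility_id
    JOIN water_permit w ON w.facility_id = f.facility_id
    JOIN waste_permit s ON s.facility_id = f.facility_id
    WHERE f.facility_id = 1
""").fetchone()

print(dict(zip(("facility", "air", "water", "waste"), row)))
```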
<urn:uuid:348d3b7b-1ad3-4a35-b19e-01a65a3b7794>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Massachusettes-Integrates-Environmental-Data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00237-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947534
1,526
2.59375
3
Cultural heritage is captured in books, art, and artifacts stored in museums, libraries and other facilities around the world. However, many treasures are in locations where they are unprotected from the risks of degradation or destruction. EMC contributes our expertise to help ensure these cultural treasures are available for future generations to access and enjoy. Through our Information Heritage Initiative, EMC provides products, services and financial assistance for digital information preservation programs worldwide. Through our Heritage Trust Project, EMC provides grants to local institutions striving to preserve the artifacts under their care. Digitizing not only prevents these pieces from disappearing, but provides access for students, scholars and others who may not be able to visit these items in person. Since 2007, we have provided more than $42 million in products, services and financial assistance for digital information preservation programs worldwide. Heritage Trust Project EMC’s Heritage Trust Project recognizes the importance of local preservation projects. The Project supports community-based digital curation efforts around the world with cash grants to local cultural institutions, archives, or private collections. New grants are awarded every year through an open application process. The 2016 application cycle will open on April 6, 2016. Beginning in 2012, we showcased the Project on EMC’s Facebook page, where applicants now submit their proposals directly. An internal group of judges reviews the proposed projects, looking specifically at the potential impact and the sensitive nature of the project. The group chooses seven finalists and then a public vote is held to pick the winners. In 2015, 24 countries were eligible to participate in the Project. The three winners were: The Secrets of Radar Museum, Canada During World War II, Canada provided the 2nd largest radar contingent, loaning more than 6,000 personnel to the British Royal Air Force alone, as well as building and maintaining radar on shore. These men and women signed the Official Secrets Act, standing by as the history of the war unfolded in texts and film without their inclusion. The Secrets of Radar Museum is the only radar-specific history museum in Canada. It shares the stories that World War II veterans were not allowed to tell due to a 50-year oath of secrecy. Through digitization, the museum will be able to share these materials with a much broader audience. University of Rosario, Colombia The Historical Archive of the University of Rosario preserves and safeguards a collection of more than 950 volumes of manuscripts and printed documents concerning the history of the College between the seventeenth and twentieth centuries, including a set of Royal Decrees issued between the kingdoms of Felipe IV and Carlos IV. The Royal Decrees provide insights into colonial institutions and society. Despite their great historical importance, the royal decrees have not received adequate treatment and have begun to deteriorate, requiring digitization to preserve this important collection. The Filipinas Heritage Library The Ulahingan is a major epic of the Manobo indigenous group in Mindanao, Philippines, with 4,000-6,000 lines per episode and 79 episodes on average. This tradition is orally passed from one generation to the next. The epic has been orally recorded on over 1,200 items of reels and cassette tapes. 
The Filipinas Heritage Library (FHL) recognizes the need to digitally preserve these traditions as part of its mission to preserve and promote accessibility to educational resources on Philippine culture and heritage for the present and future generations. Heritage Trust 2014: Where Are They Now? In 2014, EMC awarded organizations from India, Canada and the United Kingdom with Heritage Trust grants. Updates on their progress are provided below. The Merasi Legacy Project (India) “Merasi” translates to “musician”, and is the name given to a community of people with a rich musical culture who live in the Thar Desert in northwestern Rajastthan, India. Existing on one of the bottom rungs of the Indian caste system, which to this day partially dictates how Indian society functions, the Merasi people have been denied access to education, healthcare, and political representation, with most living in dire poverty. In the past, the history and musical traditions of the Merasi people were handed down orally by older members of the community, but because of the abuse and negativity attached to Merasi history, many younger people have shunned cultural musical practices. In 2014, Folk Arts Rajastthan (FAR), an organization that desired to preserve this musical tradition, was awarded an EMC Heritage Trust grant to work with and train the youth of the Merasi community to document their people’s musical heritage through audio and video recordings. Thanks to the grant, in addition to training within the community, FAR has been able to purchase the up-to-date software and audio and video equipment needed to preserve this threatened global musical treasure. The result will be an archive of audio and video recordings, a web site where the world-at-large can learn more about Merasi heritage, and a book about the community’s musical tradition. “For a while, talented young people in the community were seeking any alternative they could to becoming musicians. Now, 10 years into our project, that’s no longer the case. Young people are beginning to understand that they have an honored legacy that is recognized around the world,” said Karen Lukas, Director of FAR. Nikkei National Museum Internment Project (Canada) On February 24, 1942, lives of more than 20,000 Japanese Canadians were forever altered when Canadian Prime Minister William Lyon Mackenzie King called for the forced relocation of all persons of Japanese origin to designated “internment” sites at least 100 miles from the West Coast of British Columbia. Ten days following the order, the first 2,500 Japanese Canadians were removed to Hastings Park in Vancouver, where they were held for months at a time before being sent to internment camps in the British Columbia interior. In 2014, the Nikkei National Museum was awarded an EMC Heritage Trust grant to aid its efforts to gather, preserve, and share information related to the internment at Hastings Park. The grant helped the museum catalogue, scan, digitize, and upload a growing collection of memorabilia to its searchable database (www.nikkeimuseum.org). The museum also used the funds to increase the website’s capacity and ease-of-use, hire contract archivists, and purchase desperately needed archival supplies. The museum hopes to one day include the names of all 8,000 Japanese Canadians once detained there on its website. “There is an interest now to reclaim the history,” said Sherri Kajiwara, Director/Curator, Nikkei National Museum. “It’s not just for cultural reasons. 
It’s also for human rights reasons, so that things like this will be remembered, not forgotten, and won’t happen again.” Christmas Lectures (United Kingdom) For nearly 200 years, the Christmas Lectures hosted by the Royal Institution of Great Britain have brought science to life through spectacular presentations designed to capture the attention and engage the minds of young audiences. Aimed at children ages 11 to 17, the Christmas Lectures have covered a wide range of fascinating topics including astronomy, insect habits, the language of animals and robot technology. In 1966, BBC began broadcasting the Christmas Lectures annually, creating a library of 49 uninterrupted years’ worth of footage. In 2014, the Royal Institute was awarded an EMC Heritage Trust grant to aid the digitization and online availability of this vast, educational video collection, and to help locate 16 years’ worth of missing footage. With help from EMC’s grant, 19 Christmas Lectures were online by October 2015, and 10 years’ of missing footage had been located. The Royal Institute plans to have all available lectures digitized and online by November 2016 to coincide with the 80th anniversary of the lectures’ first appearance on the BBC. “We hear stories from teachers who remember watching the lectures as children, and now as teachers, they use our footage to explain an area of science to the next generation,” says Hayley Burwell, Royal Institute’s head of Marketing and Communications. “That’s a really wonderful legacy.”
<urn:uuid:828c6e2c-0352-46d3-836b-a0cd6d16c99c>
CC-MAIN-2017-09
https://www.emc.com/corporate/sustainability/strengthening-communities/heritage.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00113-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944746
1,699
2.5625
3
Project mines tweets, satellite and drone imagery for disaster response - By Rutrell Yasin - May 02, 2014 Hurricanes, floods, fires, earthquakes and blizzards can cause massive disruptions to transportation networks and other public safety systems, making it difficult for emergency management workers to know what's happening on the ground when a disaster strikes. However, first responders might gain a better understanding of an unfolding emergency from a new crowdsourcing project that uses cloud-based tools to help guide satellites and small unmanned aerial vehicles (UAVs) to trouble spots based on posts from social media. At the center of the project is a geo-social networking application called the Carbon Scanner, developed by the Carbon Project, a software firm that specializes in “geo-social” networking and cloud computing. The scanner picks up on Twitter hashtags related to natural disasters and produces a map showing the location of the tweets so emergency workers can better pinpoint affected areas. “It’s an alert system that will show where tweets are located and that will hopefully help us define an area where we should be investigating,” said Nigel Waters, the project leader and a professor with George Mason’s Department of Geography and Geoinformation Science, a project partner. After finding the area of interest, satellite imagery could give emergency operations centers a first look at the extent of damage to infrastructure, Waters said. The tweet does not need geo-location to be picked up by Carbon Scanner. If it does, that’s fine, but the scanner can also detect place names in tweets, such as “flooding at La Guardia Airport” and then put a dot on the map based on that data. The Scanner mines enough information to get a snapshot of what’s going on at a location during a natural disaster, said Jeff Harrison, CEO of the Carbon Project. Officials can then fuse satellite imagery with pictures and video from a variety of other sources, including Civil Air Patrols, drones, traffic cameras and citizens, Waters noted. The project participants are using satellite imagery from DigitalGlobal, a provider of commercial high-resolution earth imagery products and services. The project is being funded by a grant from the Transportation Department’s Research and Innovative Technology Administration. Its partners include George Mason University and representatives from state and municipal transportation departments in Maryland and Westchester County, N.Y. The project started off with a focus on satellite imagery data, but feedback from the project's advisory group helped bring in airborne imagery from the Civil Air Patrol as well as from drones, Harrison said. Often, after storms or hurricanes, clouds can obstruct the satellite view of what is happening on the ground. This was the situation when heavy rains and massive flooding hit Colorado in September 2013. In that case, project participants were able to mine data from Twitter and glean imagery data from drones supplied by a Falcon UAV to get a better picture of the effects of catastrophic flooding that devastated the state from Colorado Springs to Boulder County, Harrison said. Drones will play a big role in providing more real-time information to emergency responders, Harrison, said, especially as the Federal Aviation Administration works to lift restrictions on their use. 
What’s more, hundreds of small imagery satellites — the size of small refrigerators or big toasters — now being launched by companies such as Skybox and Planet Labs will help government agencies respond to trouble spots on the ground, Harrison added.
The Carbon Cloud, which runs on Microsoft’s Azure cloud platform, is essential for the rapid mining and processing of thousands of Twitter feeds, Harrison said. “I don’t know if this [project] qualifies as big data, but it is pretty big data,” he said.
In addition to mining Twitter and imagery data during the Colorado floods, participants in the two-year project that ends in August 2014 have conducted a variety of studies assessing damage from Hurricane Sandy, which struck the northeast in 2012, and flooding in Waynesville, Mo., and the city of Calgary in Alberta, Canada, in 2013.
Rutrell Yasin is a freelance technology writer for GCN.
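The Carbon Scanner's code isn't public, but the basic approach described above — filter tweets by disaster hashtags, then place them on a map using either a geotag or a recognized place name — can be sketched in a few lines. Everything below (the hashtag list, the tiny gazetteer, and the sample tweets) is invented for illustration.

```python
import re

# Hypothetical stand-ins for the kind of feed a scanner like this mines.
DISASTER_TAGS = {"#flood", "#earthquake", "#wildfire", "#sandy"}

# A tiny gazetteer mapping place names to coordinates (invented values).
GAZETTEER = {
    "la guardia airport": (40.7769, -73.8740),
    "boulder county": (40.0875, -105.3717),
}

def locate(tweet):
    """Return (lat, lon) from a geotag if present, else from a known place name."""
    if tweet.get("geo"):
        return tweet["geo"]
    text = tweet["text"].lower()
    for place, coords in GAZETTEER.items():
        if place in text:
            return coords
    return None

def scan(tweets):
    """Yield mappable points for tweets that mention a disaster hashtag."""
    for tweet in tweets:
        tags = {t.lower() for t in re.findall(r"#\w+", tweet["text"])}
        point = locate(tweet)
        if tags & DISASTER_TAGS and point:
            yield point, tweet["text"]

sample = [
    {"text": "#flood water rising near La Guardia Airport", "geo": None},
    {"text": "Nice day out", "geo": (38.9, -77.0)},
]
for point, text in scan(sample):
    print(point, text)   # only the flood tweet produces a map point
```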
<urn:uuid:a4087d7d-034e-4e09-a712-a9ea6d769f3d>
CC-MAIN-2017-09
https://gcn.com/articles/2014/05/02/carbon-scanner.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00409-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93598
855
2.8125
3
With findings based on self-reported experiences of more than 13,000 adults across 24 countries, the 2012 edition of the Norton Cybercrime Report calculates the direct costs associated with global consumer cybercrime at US $110 billion over the past twelve months. In the United States, it is estimated that more than 71 million people fell victim to cybercrime in the past twelve months, suffering $20.7 billion in direct financial losses. Every second, 18 adults become a victim of cybercrime, resulting in more than one-and-a-half million cybercrime victims each day on a global level. With losses totaling an average of US $197 per victim across the world in direct financial costs, cybercrime costs consumers more than one week’s worth of nutritious food necessities for a family of four. In the past twelve months, an estimated 556 million adults globally experienced cybercrime, more than the entire population of the European Union. This figure represents 46 percent of online adults who have been victims of cybercrime in the past twelve months, on par with the findings from 2011 (45 percent). Compared to last year, the survey shows an increase in newer forms of cybercrime, such as those found on social networks or mobile devices – a sign that cybercriminals are starting to focus their efforts on these increasingly popular platforms. One in five online adults (21 percent) has been a victim of either social or mobile cybercrime, and 39 percent of social network users have been victims of social cybercrime. Nearly one-third (31 percent) of mobile users received a text message from someone they didn’t know requesting that they click on an embedded link or dial an unknown number to retrieve a “voicemail”. While 75 percent believe that cybercriminals are setting their sights on social networks, less than half (44 percent) actually use a security solution which protects them from social network threats and only 49 percent use the privacy settings to control what information they share, and with whom. The 2012 Norton Cybercrime Report also reveals that most Internet users take the basic steps to protect themselves and their personal information – such as deleting suspicious emails and being careful with their personal details online. However, other core precautions are being ignored: 40 percent don’t use complex passwords or change their passwords frequently and more than a third do not check for the padlock symbol in the browser before entering sensitive personal information online, such as banking details. In addition, this year’s report also indicates that many online adults are unaware as to how some of the most common forms of cybercrime have evolved over the years and thus have a difficult time recognizing how malware, such as viruses, act on their computer. In fact, 40 percent of adults do not know that malware can operate in a discrete fashion, making it hard to know if a computer has been compromised, and more than half (55 percent) are not certain that their computer is currently clean and free of viruses. “Malware and viruses used to wreak obvious havoc on your computer,” Merritt continues. “You’d get a blue screen, or your computer would crash, alerting you to an infection. But cybercriminals’ methods have evolved; they want to avoid detection as long as possible. 
This year’s results show that nearly half of Internet users believe that unless their computer crashes or malfunctions, they’re not certain whether they’ve fallen victim to such an attack.” More than a quarter (27 percent) of online adults report having been notified to change their password for a compromised email account. With people sending, receiving, and storing everything from personal photos (50 percent) to work-related correspondence and documents (42 percent) to bank statements (22 percent) and passwords for other online accounts (17 percent), those email accounts can be a potential gateway for criminals looking for personal and corporate information. “Personal email accounts often contain the keys to your online kingdom. Not only can criminals gain access to everything in your inbox, they can also reset your passwords for any other online site you may use by clicking the ‘forgot your password’ link, intercepting those emails and effectively locking you out of your own accounts,” says Adam Palmer, Norton Lead Cybersecurity Advisor. “Protect your email accordingly by using complex passwords and changing them regularly.”
<urn:uuid:9e21d4d4-7f61-46fc-afdf-4a0773678b25>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2012/09/05/cybercrime-costs-consumers-110-billion/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00109-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943064
900
2.75
3
For SSL to work, the server computer requires a file that is more commonly known as an SSL certificate. In its most basic form, this certificate carries a public key together with identifying information that ties it to the web host; the matching private key is generated alongside it and is kept securely on the server rather than handed out with the certificate. Together, the public and private keys allow the server to create a secure channel that encrypts and decrypts data travelling between a client and the web host, so even if the data is hijacked halfway, all the hijacker would see is jumbled code. The certificate needs to be installed onto a web server so that the server may begin to initiate secure sessions with client browsers. Once it is installed, a client browser is able to obtain the certificate from the web host and subsequently encrypt its transmission to the web host using the public key in the certificate.
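If you want to see the public half of this exchange for yourself, Python's standard ssl module can fetch and summarize the certificate a server presents during the handshake. The host below is only a placeholder, and note that the private key never appears here — it never leaves the server.

```python
import socket
import ssl

host, port = "example.com", 443  # placeholder host

# Fetch the server's certificate in PEM form (the public part only).
pem_cert = ssl.get_server_certificate((host, port))
print(pem_cert[:120], "...")

# Open a verified TLS connection and read the negotiated details.
context = ssl.create_default_context()
with socket.create_connection((host, port)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Protocol:", tls.version())            # e.g. TLSv1.2 / TLSv1.3
        print("Cipher:  ", tls.cipher())              # negotiated cipher suite
        print("Subject: ", tls.getpeercert().get("subject"))
```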
<urn:uuid:5ce7a0de-deef-482d-96b2-b0e5a6d44612>
CC-MAIN-2017-09
https://help.nexusguard.com/hc/en-us/articles/203021389-How-does-SSL-work-
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00333-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93063
163
3.828125
4
The concept of cloud computing can trace its inception back to the 1960s, when the Turing Award-winning computer scientist John McCarthy stated that "computation may someday be organized as a public utility". However, it wasn't until recently that the cloud began to take hold in IT, as a result of Amazon providing access to their systems to maximize the use of their data centers through their Amazon Web Services project. As more companies begin to explore the benefits of cloud computing, it was found that this solution had the potential to:
While the advantages of a cloud solution are evident, many have been quick to point out that there are plenty of security concerns one faces when considering moving to the cloud. The question of how secure it is to move applications and data to the cloud is such an area of concern that the topic consumed much of the discussion at the 2009 RSA conference. These ongoing debates have prompted a number of security experts to identify a number of threats to cloud computing, including:
Moving data to the cloud requires a great deal of trust in the host since they are essentially housing all of your data. If they fail to put adequate security controls in place between the client and data, a number of attacks can be used to compromise sensitive information. SQL Injection attacks, compromised servers, and session hijacking can all lead to cyber criminals harvesting your data on someone else's watch.
While this is also noted as one of the benefits of cloud computing, it can also cause problems. As web applications grow in popularity, more companies rely on them as an integral part of how they do business. Moving these applications to the cloud should mean that the management of these apps is taken care of, but this usually means automated updates, not complete security. In fact, George Reese stated in his article, Twenty Rules for Amazon Cloud Security - "Above all else, write secure web applications." The fact is, while your cloud provider may handle necessary updates of your software, they are not going to review your code for potential vulnerabilities; make sure your input and output is validated, escaped, and filtered, and that your application is protected against other methods of exploiting common threats like Cross-Site Scripting.
The very nature of the cloud means that resources are shared as they are needed. Traditional perimeter security in the cloud doesn't work in the same way. For instance, using Amazon's Web Services you may find yourself restricted when it comes to checking logs and deploying tools like traffic sniffers and intrusion detection systems. Essentially, it's not your perimeter, so the way you used to protect it has changed. Some terms of service even prevent you from running vulnerability scans, making it virtually impossible to perform a code review. For PCI compliance, this can present a major problem.
Even though data and applications running in the cloud are exposed to a number of security threats, a strong push from industries such as healthcare and ecommerce, as well as support from Google, IBM, Amazon, and other IT powerhouses, means that solutions to these security-related problems need to be identified. One way to protect against threats to your web applications and data is to deploy a Web Application Firewall as a software solution. No additional hardware is required on the part of the cloud provider, and it can be installed directly in front of your web-facing applications.
When deployed correctly, a Web Application Firewall protects your web applications from known threats including: Web application Firewalls also take traditional security much further. By performing a deep inspection of traffic on the web service layers they are able to stop threats that intrusion detection and prevention systems often miss. Cyber criminals attack the most vulnerable web sites, and they attack the biggest possible pool of victims they can. As more IT departments are forced to scale back, cost saving initiatives like cloud computing become even more attractive. While cloud computing provides managed services, you are still responsible for compliance. No provider will assume this responsibility for you simply because they are managing your applications and data. In order to comply with regulations like PCI DSS, HIPPA, SOX, and the many others it is essential that security be one of the most important factors when making the decision to move to the cloud. What sets dotDefender apart is that it offers comprehensive protection against threats to web applications while being one of the easiest solutions to use. By acting as a Security-as-a-Service solution, dotDefender is able to provide protection to web servers whether the admin has an extensive background in security or just a minimal amount of knowledge on the subject. In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the box protection that can be easily managed through a browser-based interface with virtually no impact on your web site’s performance. Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against DoS threats, Cross-Site Scripting, SQL Injection attacks, path traversal and many other web attack techniques. The reasons dotDefender offers such a comprehensive security solution to your web application security hosted in the cloud are: dotDefender's unique security approach eliminates the need to learn the specific threats that exist on each web application. The software that runs dotDefender focuses on analyzing the request and the impact it has on the application. Effective web application security is based on three powerful web application security engines: Pattern Recognition, Session Protection and Signature Knowledgebase.
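dotDefender's detection engines are proprietary and are not reproduced here. The sketch below simply illustrates, with invented names and an in-memory SQLite database, the application-side habits the article quotes George Reese on — parameterizing SQL statements and escaping output — which a Web Application Firewall is meant to complement rather than replace.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (name TEXT, comment TEXT)")

def add_comment(name, comment):
    # Parameterized query: user input is bound as data, never spliced into SQL,
    # so a value like "x'; DROP TABLE comments;--" cannot change the statement.
    conn.execute("INSERT INTO comments (name, comment) VALUES (?, ?)", (name, comment))

def render_comment(name, comment):
    # Escape on output so "<script>" reaches the browser as inert text,
    # blunting stored Cross-Site Scripting.
    return f"<p><b>{html.escape(name)}</b>: {html.escape(comment)}</p>"

add_comment("mallory", "<script>steal()</script>'; DROP TABLE comments;--")
for name, comment in conn.execute("SELECT name, comment FROM comments"):
    print(render_comment(name, comment))
```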
<urn:uuid:e30663ce-14d7-4bf2-9646-93d1e081427a>
CC-MAIN-2017-09
http://www.applicure.com/solutions/cloud-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00630-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949549
1,117
2.9375
3
Data security breaches seem to be happening more and more often these days. Of course, even if your private data is securely stored with an organization you trust, that doesn't mean your privacy can't still be compromised. People with legitimate access to it can make improper use of your information, even unintentionally, by sharing it or using it in a way you don't know about and haven't permitted. If this sort of thing keeps you up at night, then you may soon sleep better thanks to work being done by some of the smart folks at MIT. Oshani Seneviratne and Lalana Kagal, both researchers in the Digital Information Group (DIG) at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a protocol to help ensure your privacy by requiring increased transparency from those who would access your data. Seneviratne and Kagal, under the guidance of DIG director Tim Berners-Lee, have developed what they call Privacy Enabling Transparency Systems (PETS). PETS is built on open standards and uses a specification for a new web protocol called Accountable HTTP (HTTPA), developed by Seneviratne as part of her PhD dissertation. Under HTTPA, each piece of personal information that gets transmitted has its own unique identifier. Those identifiers can then be used to associate rules and restrictions for the use and transmission of the information specified by those providing the data (e.g., you and me), and requests for the data can be tracked. When a client requests private data using HTTPA, the server will send instructions on restrictions related to using the data, based on the client's credentials. At the same time, the data request will get logged across a distributed collection of secure third-party servers, called a Provenance Tracking Network (PTN). Data owners will be able to request an audit of the usage of their information based on these logs, which would include reports of valid uses of private data as well as actual or attempted privacy violations. For more of the nitty-gritty details, you can read Seneviratne's original proposal for HTTPA, "Augmenting the Web with Accountability." You can also read a paper by Seneviratne and Kagal outlining PETS, including a test of the system they ran to demonstrate its effectiveness, titled "Enabling Privacy Through Transparency." Finally, Seneviratne has shared the HTTPA code on GitHub. Of course, it will ultimately be up to individual organizations to implement and comply with HTTPA. Hopefully, those who collect our private data for legitimate use will find it in their interest to do so. If HTTPA becomes widely adopted, we should sleep a little better at night. [h/t MIT News]
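The HTTPA design described above pairs each piece of data with usage restrictions and logs every request to a Provenance Tracking Network. The actual protocol is specified in Seneviratne's papers and published code; the snippet below is only a rough conceptual sketch in plain Python, with hypothetical names throughout, of the bookkeeping idea: give each resource a unique identifier, return its usage restrictions alongside the data, and append every access to an audit log the data owner can later inspect.

```python
import uuid
from datetime import datetime, timezone

class ProvenanceLog:
    """Stand-in for the distributed Provenance Tracking Network (PTN)."""
    def __init__(self):
        self.entries = []

    def record(self, resource_id, requester, purpose):
        self.entries.append({
            "resource": resource_id,
            "requester": requester,
            "purpose": purpose,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def audit(self, resource_id):
        """Let the data owner review every request made for a resource."""
        return [e for e in self.entries if e["resource"] == resource_id]

class AccountableStore:
    """Toy server that hands out data together with its usage restrictions."""
    def __init__(self, log):
        self.log = log
        self.resources = {}

    def put(self, value, restrictions):
        rid = str(uuid.uuid4())              # unique identifier for this datum
        self.resources[rid] = (value, restrictions)
        return rid

    def get(self, rid, requester, purpose):
        value, restrictions = self.resources[rid]
        self.log.record(rid, requester, purpose)   # every access is logged
        return {"data": value, "usage-restrictions": restrictions}

if __name__ == "__main__":
    log = ProvenanceLog()
    store = AccountableStore(log)
    rid = store.put({"email": "user@example.com"},
                    ["no-third-party-sharing", "research-only"])
    store.get(rid, requester="analytics-app", purpose="marketing")
    print(log.audit(rid))   # the owner can spot the questionable request
```

Transparency, not prevention, is the point: the client can still misbehave, but the misbehavior leaves a trail the owner can audit.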
<urn:uuid:6c24107a-c65c-45a5-a3af-b0a90aeac794>
CC-MAIN-2017-09
http://www.itworld.com/article/2695722/security/accountable-http-seeks-to-increase-data-privacy-through-transparency.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00506-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926102
642
2.59375
3
Here's the weekly update from NASA's Jet Propulsion Laboratory on what the Mars Curiosity Rover is doing on the Red Planet. This week, we get a discussion of the discovery of what appears to be an ancient riverbed, where water once flowed on the planet. The smooth, rounded pebbles found there suggest they were shaped by a strong flow of water rather than carried by the wind. Very cool explainer from this week's JPL presenter, Sanjeev Gupta. The Mars rover is continuing its journey to a new location on the planet, so we'll consider this week's discovery as the rover's "water break." While the discovery of an ancient riverbed is very cool news for the scientific community, I'm secretly hoping that the rover will stumble on some old Martian bones.
<urn:uuid:50ce1578-1438-4288-be66-28538471d40e>
CC-MAIN-2017-09
http://www.itworld.com/article/2721812/cloud-computing/this-week-on-mars--water--water-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00450-ip-10-171-10-108.ec2.internal.warc.gz
en
0.902416
272
2.875
3
Given the current prevalence of mobile devices, especially smartphones, it comes as no surprise that they are becoming more and more entwined with everyday aspects of our lives. We don't just use them to make calls, to text, or to browse the internet anymore. We can use them to do just about anything, and that includes using them as a means to provide our credentials. Since people almost always have their phones on them, it makes sense -- to some extent -- to use them as a means to store and validate one's credentials. Banking information, ticketing purposes, access control -- they can all be handled through smartphones now. But this, of course, begs the question of just how safe the practice really is. "That's possible with NFC, they're using that," says Dionisio Zumerle, a Gartner analyst, of the practice of using smartphones for access control purposes in the enterprise. "But you also have to take into account that a lot of methods they use are only available with certain hardware on certain devices. It's something that we see emerging, but it's not in the mainstream world just yet." For those enterprises that do use mobile devices for storing credentials for things like access control purposes, there are some methods that are implemented to lock them down. The methods are not always particularly sophisticated, however; Irfan Asrar, a researcher for McAfee, says that these credentials are typically secured with native security. "They may have some additional BYOD policies enforced on them," says Asrar. "But most of what we've been seeing has been using the basic authentication with the OS." Zumerle says there are also some slightly more advanced methods of protecting credentials. "With NFC there is secure element, which can be used to secure the credentials," he says. "Usually it involves a separate component, like a Smart Card or an SD card." With secure element, the application code and data is securely stored and executed on the external chips. According to the Smart Card Alliance, the secure element "provides delimited memory for the apps and other functions that can encrypt, decrypt, and sign the data packet." For those that are looking for a slightly more convenient means of using their mobile devices for NFC-based transactions, Zumerle said that there was a new method introduced with Android 4.4 (KitKat) called host-based card emulation, or HCE. With HCE, users no longer need the secure element (the external card) to conduct transactions via NFC. While this practice makes things more convenient for both the user and app providers, it obviously comes at the expense of security. "Yes, HCE makes it less secure," says Zumerle. "When you remove hardware based security from the picture, that's always going to give attackers more ways to reach you." Segmented devices, he explained, are the better solution. TIMA in Samsung's Knox security suite, for example, stores secure configuration volumes in a secure portion of the device's chip. "During runtime, it compares the values of the secure section with the current values of the device, so if something changes, you can immediately kill the secure portion of the device," he says. "It's the same thing you can do with some access control or payment applications; if they've been tampered with, you can embed those controls and try to kill the application." Some inherent risks still remain, however. 
After all, in a scenario with something like access control, there's nothing stopping attackers from just picking up an employee's phone -- in the event that it's lost or stolen -- and using it to swipe in. "No, nothing prevents people from doing that," says Zumerle. "The device is able to pair with the reader automatically. You would have to have some sort of combined setup; for example, no NFC if the device is locked. Some vendors do that, but it's at the expense of convenience. It all depends on how much risk an enterprise is willing to take." Asrar adds that even though failsafes typically boil down to native security features of the platform (like device locking), they're often not implemented, so it's up to enterprises to roll out additional safety measures. "A lot of people don't even have passcodes," says Asrar. "But there are additional wares out there like single sign on [for two-factor authentication] that can be tied into enterprises' deployments." Ultimately, while both Zumerle and Asrar feel that there are certainly inherent risks, whether or not it's safe to use mobile devices for access control also depends on the context. As Zumerle points out, the practice forces people to rely on the hardware. But when they do that and want to account for the entire spectrum of mobile devices, it becomes more difficult since they are not all the same. "Technically, if you have a solution that is set up with some security precautions, there are, today, the technological tools that make it so that it's something that enterprises could use," says Zumerle. "So yes, but it depends on the exact solution. Something well done can work, but you need to have the right measures in place. If you implement something that only works for a specific device or scenario -- like something that's good for a specific project -- and then you want to do something like BYOD, then it starts to get complicated." The other side of the coin is that the threat landscape is constantly changing; the practice of using mobile credentials is only safe as long as the good guys can keep up. "There are certainly major vulnerabilities, but we are tracking them on a day to day basis," says Asrar. "The problem is that it's like a cold war situation. The bad guys are constantly evolving and the security companies are also evolving, and we're trying to stay ahead of each other. It's an evolving field, so it's hard to say...we're focused on what was the vulnerability that got through. It really depends on that." Also complicating matters is the fact that there are multiple ways in which mobile credentials are being used. Another popular use for storing one's credentials on a mobile device is banking and financial transactions, so it goes without saying that access control credentials aren't the only thing at risk. To an extent, mobile wallets have some fail-safes built in to protect credentials. Some applications store the credentials on the device itself, but the data is encrypted. Others, however, don't actually store users' banking information locally on the device. "Some apps may have the ability to take the information and store it on the packet server, in case you're worried about the device going missing or getting stolen," says Asrar. Similarly, banking credentials can be stored either in the cloud or on a secure element, much like access credentials. Even this approach, however, is not foolproof. "[Storing information on the cloud] does ensure some security to some extent," says Asrar.
"But it can be abused, and it comes down to what the transaction company has planned to protect users." He adds that while security companies are constantly trying to determine which vulnerabilities were specifically targeted in these systems -- and subsequently patching them up -- consumers should also take basic steps to protect themselves. Security measures like installing anti-virus, immediately downloading updates, and not trying to bypass your company's security policies are all no-brainers. While the banking data that the user enters can be secured in a number of ways, the actual application can be secured as well, says Zumerle. That way, attackers can't leverage it to fetch or record important data during transactions. "There are certain methods you can deploy and there are vendors that supply solutions that you can embed into the app that have certain controls," says Zumerle. "For example, it can open the app in a sandbox for you and check to see if it is compromised. If it is, it will void the transaction or the application all together." In a similarly preventative measure, many app developers are scrambling or encrypting their wallet apps before releasing them. This effectively prevents attackers from using reverse engineering to compromise software and re-releasing it to the public under the false pretense of it being a legitimate app. "[Scrambling] makes the app harder to understand for an attacker, so it's not as easy to disassemble, reassemble with malicious content, and put on the app store for people to download and use," says Zumerle. So on the device end of the transaction, there are ways that risks can be mitigated. But what about on the other end, at the point of sale? "Well, you can put a reader on the terminal [to capture information]," says Zumerle. "And you can take advantage of an open connection during the transaction, but that starts becoming something like cyber pickpocketing that needs a physical presence and you have to do that in close proximity. In terms of POS risk, a rigged terminal is the greater concern." Asrar, for his part, claims that McAfee has yet to come across a case of attackers nabbing credentials via NFC. "We haven't come across a case, but it's not something that is far-fetched," he says. "There was a case a couple years ago where somebody uploaded an app that could read information from NFC communications, but it was for research purposes to show that it's a protocol and can be reverse engineered." Even though it would appear that threats to the security of your mobile credentials can be found just about anywhere, Asrar says that users can still take simple steps to protect them from being compromised, including locking their devices and utilizing free security software. "Even if users start using half of the features they have at their disposal," he says, "they will reduce the attack surface quite a bit." This story, "The Use of Mobile Credentials is on the Rise, but Can They Be Secured?" was originally published by CSO.
<urn:uuid:6d4d81c6-46bc-4f73-9a81-7b9f4de24e91>
CC-MAIN-2017-09
http://www.cio.com/article/2375798/mobile/the-use-of-mobile-credentials-is-on-the-rise--but-can-they-be-secured-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00026-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967955
2,075
2.75
3
Primer: Free Space Optics. By David F. Carr | Posted 2005-03-07. A wireless network that uses light beams instead of radio waves. What is it? Free space optics technology is wireless networking that uses light beams instead of radio waves; it's laser-based optical networking without the fiber optic cable. In corporate networks, the most common application is wireless campus networking: typically, a rooftop-to-rooftop connection between buildings. Or the laser beam might be shot out one building's window into the window of a building across the street. What are the advantages? Wavelengths for these transmissions do not require a Federal Communications Commission license. While this is true of some wireless networking schemes based on radio-frequency transmissions, free space optics is immune to the radio interference that can sometimes sabotage those systems. Jeff Orr, a senior product marketing manager at Proxim, says radio-frequency products have erased the bandwidth advantage that free space optics once enjoyed. He concedes that his high-end products cost about twice as much as a free space optics unit with the same capacity (about $75,000 for a gigabit radio link versus $40,000 for an FSO alternative). What are the disadvantages? Transmissions fade rapidly in certain kinds of weather; fog, in particular. Isaac Kim, director of optical transport at MRV Communications, says fog droplets scatter the wavelengths of light used in free space optics. The power of the beam can't be increased because of concerns that the lasers might be dangerous to the eyes of anyone who happens to walk through a souped-up beam. Technical innovation has focused more on lowering costs than increasing the range of the optics equipment. To maximize availability, you can use a combination of free space optics and radio-frequency gear. What's the real range? A conservative estimate is up to 500 meters for a free space optics-only solution, or 1 to 2 kilometers (about a mile) for free space optics with a radio-frequency backup. You can stretch this in dry areas. Any other downsides? Atmospheric effects such as scintillation (the "waves of heat" pattern you sometimes see over dark surfaces) can have an impact on free space optics transmissions. Urban rooftop-to-rooftop setups can encounter problems because tall buildings sway slightly in the wind, throwing off the aim of the tightly focused lasers. Who are the vendors? The most recognizable name is Canon, better known for cameras and copiers. LightPointe is a specialist in this niche, where many startups have long since come and gone. MRV Communications offers free space optics gear as an adjunct to its fiber optic and Ethernet solutions. Who will vouch for it? Fred Murphy, associate director of information technology for Jazz at Lincoln Center, says free space optics equipment proved ideal for connecting the center's new auditorium with its administrative offices. "We were so much within that range that it's ridiculous," he says. "We're literally across a New York City street." By shooting a laser out the window of one building into the window of another, Jazz at Lincoln Center established a network link that it owns, rather than paying the phone company for the bandwidth. Establishing a fiber optic link between the two buildings would have been prohibitively expensive because of the complication of digging up a New York street.
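The fog problem Kim describes can be made concrete with a back-of-the-envelope link budget. The figures in the Python sketch below are illustrative assumptions rather than vendor specifications (roughly 0.2 dB/km attenuation in clear air versus well over 100 dB/km in dense fog, plus a lumped fixed loss); the point is only to show why a link that is comfortable at 500 meters in clear weather can fall below the receiver's sensitivity when fog rolls in.

```python
def received_power_dbm(tx_power_dbm, distance_km, atten_db_per_km, fixed_losses_db=20.0):
    """Very simplified free-space-optics link budget.

    Real systems also model beam divergence, pointing error and scintillation;
    here everything except atmospheric attenuation is lumped into a fixed loss.
    """
    return tx_power_dbm - fixed_losses_db - atten_db_per_km * distance_km

def link_ok(rx_dbm, sensitivity_dbm=-30.0):
    return rx_dbm >= sensitivity_dbm

if __name__ == "__main__":
    for label, atten in [("clear air", 0.2), ("light fog", 30.0), ("dense fog", 150.0)]:
        rx = received_power_dbm(tx_power_dbm=10.0, distance_km=0.5, atten_db_per_km=atten)
        print(f"{label:10s}: received ~{rx:6.1f} dBm -> link {'up' if link_ok(rx) else 'down'}")
```

With these made-up numbers, the same 500-meter hop goes from a comfortable margin in clear air to a dead link in dense fog, which is why vendors pair FSO with a radio-frequency backup.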
Paul Wolf, an engineering technology manager at CDI Business Solutions, was initially a reluctant customer of LightPointe's FSO technology when his company started using it to connect two buildings in Houston. He worried about how many complaints would be waiting for him the first time fog knocked out the link. But the setup failed-over smoothly to a redundant RF link when the light beam was interrupted, and in two years the connection has had zero downtime, he says: "Now, I don't even think about it."
<urn:uuid:f8fda05d-497f-4c0a-bccb-e69c23135565>
CC-MAIN-2017-09
http://www.baselinemag.com/storage/Primer-Free-Space-Optics
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945541
864
2.78125
3
Rising energy prices. Global warming. Old equipment piling up in storage (and landfills). Environmental issues—and IT's role in them—are getting more attention than ever. If you want to use technology in a more sustainable way, here are the answers you need to begin. Sustainable, or "green," IT is a catch-all term used to describe the manufacture, management, use and disposal of information technology in a way that minimizes damage to the environment. As a result, the term has many different meanings, depending on whether you are a manufacturer, manager or user of technology. What is sustainable IT manufacturing? Sustainable IT manufacturing refers to methods of producing products in a way that does not harm the environment. It encompasses everything from reducing the amount of harmful chemicals used in products to making them more energy efficient and packaging them with recycled materials. European Union regulations require computer manufacturers such as Dell, HP and Lenovo to abide by green manufacturing laws that limit the use of some toxic substances, such as lead and mercury, in their products. These products are also available in the United States. What is sustainable IT management and use? Sustainable IT management and use has to do with the way a company manages its IT assets. It includes purchasing energy-efficient desktops, notebooks, servers and other IT equipment, as well as managing the power consumption of that equipment. It also refers to the environmentally safe disposal of that equipment, through recycling or donation at the end of its lifecycle. What is sustainable IT disposal? Sustainable IT disposal refers to the safe disposal of IT assets at the end of their lifecycle. It ensures that old computer equipment (otherwise known as e-waste) does not end up in a landfill, where the toxic substances it contains can leach into groundwater, among other problems. Many of the major hardware manufacturers offer take-back programs, so IT departments don't have to take responsibility for disposal. Some U.S. states and the European Union have laws requiring that e-waste be recycled. What is the goal of sustainable IT? The goal behind most green business initiatives, including green IT, is to promote environmental sustainability. In 1987, the World Commission on Environment and Development defined sustainability as an approach to economic development that "meets the needs of the present without compromising the ability of future generations to meet their own needs."
<urn:uuid:54374f9e-092e-48ef-9bfe-bd6fe3ace7fa>
CC-MAIN-2017-09
http://www.cio.com/article/2437751/energy-efficiency/environmentally-sustainable-it-definition-and-solutions.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953212
495
3.21875
3
A revolution is about to come to the most unlikely of places: those hundreds, maybe thousands, of USB ports scattered throughout your company. This revolution will be all about power distribution and management, the stuff that only interests IT infrastructure staff. But there are wider implications that should make the entire IT organization take notice. The specification for USB Power Delivery (USB PD) was released nearly two years ago, but devices designed around this standard will only start to appear later this year. The new spec turns the capabilities of the USB port on their head. What was a data interface capable of delivering power will become a power provider with a data interface. Current USB ports provide only 10 watts; devices conforming to the new standard will transmit up to 100 watts. Larger, more complex devices will be able to run with only USB power. Here's how USB PD will affect IT: -- It's green. According to a California Energy Commission study, office equipment accounts for 17% of electricity consumption in small commercial office buildings. Most office equipment operates internally on DC power, which must be converted from the building's AC power. While Energy Star power supplies must be at least 80% efficient, some low-end devices are only 65% efficient. USB PD devices reduce energy consumption by delivering direct current in the voltage required for the specific device. It's as if Thomas Edison is having the last laugh nearly a century after Nikola Tesla won the AC/DC power wars. The USB PD specification mandates that DC current flows in both directions. This allows a computer, monitor and other devices connected through USB cables to receive power from a single AC power supply. And that means less energy is wasted converting power. In addition, it allows the device with the most power in its battery to provide power to the other devices. For example, a laptop could power a phone until its battery is drained and then be powered by that same phone. New construction could include two sets of wiring to take advantage of USB PD's energy efficiency. One would carry AC power for appliances with large motors. The second would deliver DC current for tablets, phones and other DC-powered electronics. No conversion needed. -- There's less clutter. Most offices contain a maze of power cords and cables connecting different devices to various power sources. Even organizations that install neat cabling usually find that the clutter grows over time as devices are added. Providing power and connectivity through the same cable significantly reduces the number of cords to manage. This looks neater and means there's less to untangle when changes must be made. More important, USB cables are less expensive than the power supplies they will replace. -- Travel is easier. Travel becomes easier when all devices can be powered from a single power supply with a USB connection on the end. There should be no need to lug a separate charger for each device everywhere. Or worse, to discover at a customer site that you do not have the one power supply you desperately need. As USB PD becomes more common, office buildings, airplanes and hotels will offer direct USB plugs, potentially eliminating the need to carry power supplies. Now is the time to begin determining how to capitalize on USB PD and what resources, transition plans and infrastructure changes you'll need. Early adoption earns green cred and long green. Read more about hardware in Computerworld's Hardware Topic Center. 
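The jump from 10 watts to 100 watts comes from negotiation: a USB PD source advertises a set of voltage/current offers and the sink picks one that covers its needs. The real protocol exchanges binary power-data objects over the cable's communication channel; the Python sketch below is only a schematic of the selection step, with made-up profile values in the same general range as the specification (up to 20 V at 5 A).

```python
# Hypothetical advertised profiles: (volts, max amps)
SOURCE_PROFILES = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]

def choose_profile(required_watts, profiles=SOURCE_PROFILES):
    """Pick the lowest-voltage offer that can still satisfy the sink's demand."""
    for volts, amps in sorted(profiles):
        if volts * amps >= required_watts:
            return volts, amps
    return None  # demand exceeds what this source can deliver

if __name__ == "__main__":
    for device, watts in [("phone", 10), ("laptop", 60), ("workstation", 95), ("space heater", 300)]:
        offer = choose_profile(watts)
        if offer:
            print(f"{device:12s} needs {watts:3d} W -> negotiate {offer[0]:.0f} V @ {offer[1]:.0f} A")
        else:
            print(f"{device:12s} needs {watts:3d} W -> no suitable profile, fall back to 5 V default")
```

Because the offers are negotiated per device, a monitor, laptop and phone daisy-chained over USB PD can each draw exactly the voltage they need from one wall supply, which is where the efficiency gain described above comes from.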
This story, "Power to the People's Devices" was originally published by Computerworld.
<urn:uuid:f9cf0993-4a7c-4055-931b-eeb34a2d9ae5>
CC-MAIN-2017-09
http://www.cio.com/article/2377694/hardware/power-to-the-people-s-devices.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00022-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951928
694
3
3
A heuristic analyzer (or simply, a heuristic) is a technology for detecting viruses that cannot be detected using antivirus databases. It makes it possible to detect objects suspected of being infected by an unknown virus or by a new modification of a known virus. Files flagged by the heuristic analyzer are considered probably infected. An analyzer usually begins by scanning the code for suspicious attributes (commands) characteristic of malicious programs. This method is called static analysis. For example, many malicious programs search for executable programs, open the files found and modify them. A heuristic examines an application’s code and increases its “suspiciousness counter” for that application if it encounters a suspicious command. If the value of the counter after examining the entire code of the application exceeds a predefined threshold, the object is considered to be probably infected. The advantages of this method include ease of implementation and high performance. However, the detection rate for new malicious code is low, while the false positive rate is high. Thus, in today’s antivirus programs, static analysis is used in combination with dynamic analysis. The idea behind this combined approach is to emulate the execution of an application in a secure virtual environment before it actually runs on a user’s computer. In their marketing materials, vendors also use another term - “virtual PC emulation”. A dynamic heuristic analyzer copies part of an application’s code into the emulation buffer of the antivirus program and uses special “tricks” to emulate its execution. If any suspicious actions are detected during this “quasi-execution”, the object is considered malicious and its execution on the computer is blocked. The dynamic method requires significantly more system resources than the static method, because analysis based on this method involves using a protected virtual environment, with execution of applications on the computer delayed according to the amount of time required to complete the analysis. At the same time, the dynamic method offers much higher malware detection rates than the static method, with much lower false positive rates. In Kaspersky Internet Security 2012, several protection components include the Heuristic Analyzer.
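The static-analysis method described above (scan the code for suspicious commands, bump a counter, and flag the file once the counter crosses a threshold) maps almost directly onto code. The toy Python scanner below is not Kaspersky's engine; the patterns, weights and threshold are invented for illustration, but the control flow is the same counter-and-threshold idea.

```python
import re

# Invented example patterns with weights; a real engine uses far richer rules.
SUSPICIOUS_PATTERNS = {
    r"CreateRemoteThread": 3,
    r"WriteProcessMemory": 3,
    r"GetProcAddress":     1,
    r"\.exe":              1,
    r"URLDownloadToFile":  2,
}
THRESHOLD = 5

def heuristic_score(code: str) -> int:
    """Static analysis: add to the 'suspiciousness counter' for each hit."""
    score = 0
    for pattern, weight in SUSPICIOUS_PATTERNS.items():
        score += weight * len(re.findall(pattern, code))
    return score

def classify(code: str) -> str:
    return "probably infected" if heuristic_score(code) >= THRESHOLD else "clean"

if __name__ == "__main__":
    benign  = "print('hello world')"
    dropper = "h = GetProcAddress(...); URLDownloadToFile('evil.exe'); WriteProcessMemory(h)"
    print(classify(benign))   # clean
    print(classify(dropper))  # probably infected
```

The weakness the article notes also falls out of the sketch: legitimate installers use many of the same calls, so a purely static counter either misses new malware or flags innocent software, which is why it is paired with dynamic emulation.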
<urn:uuid:a9d22969-94a6-443b-9420-08bb21960300>
CC-MAIN-2017-09
http://support.kaspersky.com/6324
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00198-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927952
443
3.265625
3
If you think having a computer isolated from the Internet and other computers will keep you "safe," then think again. The same security researchers who came out with Air-Hopper have announced BitWhisper as another method to breach air-gapped systems. This time the Cyber Security Research Center at Ben-Gurion University in Israel jumps the air gap by using heat. The researchers explained the proof-of-concept attack as: BitWhisper is a demonstration for a covert bi-directional communication channel between two close by air-gapped computers communicating via heat. The method allows bridging the air-gap between the two physically adjacent and compromised computers using their heat emissions and built-in thermal sensors to communicate. Computers use "built-in thermal sensors to detect heat" and to trigger internal fans that cool the PC down. BitWhisper utilizes those sensors "to send commands to an air-gapped system or siphon data from it." In the video below, researchers demonstrate "BitWhisper: Covert Signaling Channel between Air-Gapped Computers using Thermal Manipulations." It shows the computer on the left emitting heat and sending a "rotate command" to a toy missile launcher connected to the adjacent air-gapped PC on the right. The Cyber Security Research Center said: The scenario of two adjacent computers is very prevalent in many organizations in which two computers are situated on a single desk, one being connected to the internal network and the other one connected to the Internet. The method demonstrated can serve both for data leakage for low data packages and for command and control. The researchers said they will publish the full research paper "soon." For now, regarding BitWhisper, they pointed to a Wired article that explains that in order for a BitWhisper attack to be successful, both computers must be compromised with malware and the air-gapped system must be within 40 centimeters of the computer controlled by an attacker. The researchers said only "eight bits of data can be reliably transmitted over an hour," but that's enough to steal a password or a secret key. They added that "future research" might involve "using the Internet of Things as an attack vector—an internet-connected heating and air conditioning system or a fax machine that's remotely accessible and can be compromised to emit controlled fluctuations in temperature." Wired's Kim Zetter explained that the BitWhisper attack works somewhat like Morse code, "with the transmitting PC using increased heat to communicate to the receiving PC, which uses its built-in thermal sensors to then detect the temperature changes and translate them into a binary '1' or '0'." She added: The malware on each system can be designed to search for nearby PCs by instructing an infected system to periodically emit a thermal ping—to determine, for example, when a government employee has placed his infected laptop next to a classified desktop system. The two systems would then engage in a handshake, involving a sequence of "thermal pings" of +1C degrees each, to establish a connection. But in situations where the internet-connected computer and the air-gapped one are in close proximity for an ongoing period, the malware could simply be designed to initiate a data transmission automatically at a specified time—perhaps at midnight when no one's working to avoid detection—without needing to conduct a handshake each time.
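The Wired description above amounts to a very slow, one-bit-at-a-time modulation scheme: raise the CPU load to nudge the temperature for a '1', stay idle for a '0', and let the neighboring machine's thermal sensor do the demodulation. The Python sketch below simulates that idea with invented numbers (a 1 degree Celsius swing and a fixed bit period); it is a model of the channel, not the researchers' actual malware.

```python
BIT_PERIOD_READINGS = 4      # sensor samples per transmitted bit (invented)
DELTA_C = 1.0                # temperature swing used to signal a '1'

def transmit(bits, baseline=35.0):
    """Produce the temperature trace a heated/idle CPU would roughly create."""
    trace = []
    for bit in bits:
        level = baseline + (DELTA_C if bit == "1" else 0.0)
        trace.extend([level] * BIT_PERIOD_READINGS)
    return trace

def receive(trace, baseline=35.0):
    """Demodulate: average each bit period and compare against a midpoint threshold."""
    bits = []
    for i in range(0, len(trace), BIT_PERIOD_READINGS):
        window = trace[i:i + BIT_PERIOD_READINGS]
        avg = sum(window) / len(window)
        bits.append("1" if avg > baseline + DELTA_C / 2 else "0")
    return "".join(bits)

if __name__ == "__main__":
    secret = "01001011"                 # eight bits: roughly an hour of real BitWhisper time
    print(receive(transmit(secret)))    # -> 01001011
```

Because heat diffuses slowly, each bit period in the real attack is minutes long, which is why the channel tops out at around eight bits per hour yet is still enough to leak a short key or password.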
Air-Hopper and BitWhisper are not the only demonstrated methods to breach air-gapped systems: Georgia Tech researchers have also exploited side-channel signals to steal data from 'air-gapped' PCs. In January, Georgia Institute of Technology researchers said don't feel smugly safe if you are typing away but didn't connect to a coffee shop's Wi-Fi. "The bad guys may be able to see what you're doing just by analyzing the low-power electronic signals your laptop emits even when it's not connected to the Internet." They explained how keystrokes could be captured from a disconnected PC by exploiting side-channel signals (pdf). "People are focused on security for the Internet and on the wireless communication side, but we are concerned with what can be learned from your computer without it intentionally sending anything," said Georgia Tech assistant professor Alenka Zajic. "Even if you have the Internet connection disabled, you are still emanating information that somebody could use to attack your computer or smartphone." Zajic demonstrated by typing "a simulated password on one laptop that was not connected to the Internet. On the other side of a wall, a colleague using another disconnected laptop read the password as it was being typed by intercepting side-channel signals produced by the first laptop's keyboard software, which had been modified to make the characters easier to identify."
<urn:uuid:db4c63cb-a41a-43ff-a5a3-1b73c9abc4ae>
CC-MAIN-2017-09
http://www.networkworld.com/article/2900219/microsoft-subnet/bitwhisper-attack-on-air-gapped-pcs-uses-heat-to-steal-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00242-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942968
961
3.296875
3
Drone research has been ramping up in workshops and universities around the country for more than a decade, but it’s the scientists at six test sites designated by the Federal Aviation Administration (FAA) who will pave the way for a new future in aviation. The test sites are the University of Alaska, the state of Nevada, New York’s Griffiss International Airport, the North Dakota Department of Commerce, Texas A&M University Corpus Christi, and Virginia Polytechnic Institute and State University (Virginia Tech). Just as the Wright brothers changed the world with the invention of manned flight, these are the pioneers who will propel unmanned aircraft into the skies and alter the course of history. In November 2013, the FAA released a road map report that recognized the untapped value of drones and also outlined the obstacles to integrating them into the National Airspace System (NAS). Unmanned aircraft were never designed or intended to meet the same rigorous standards as traditional aircraft, so there are a lot of technical and logistical barriers that prevent the FAA from permitting their use in the NAS if they want to sleep well at night. There’s a lot of work to be done before a Boeing 747 full of passengers eastbound for Chicago crosses paths with an unmanned airplane seeding clouds over a ski resort in Colorado, but that day will soon come. The six drone test sites, which were announced at the end of 2013, have until September 2015 to find solutions that will allow drones to weave seamlessly into the national airspace. Like a crack team of bank robbers planning a heist, each test site brings different resources to the project. Nevada was a shoo-in with its clear skies, huge amount of restricted airspace, and history of military research. The University of North Dakota has one of the largest civilian flight training schools in the world and is the only continental test site located in a temperate climate zone. Researchers in Alaska have been involved with drone research for 13 years and their partnerships with institutions in Oregon and Hawaii offer a geographically diverse testing area. Ro Bailey is the deputy director for the Alaska Center for Unmanned Aircraft Systems Integration (ACUASI) at the University of Alaska Fairbanks and a retired Air Force brigadier general. Her test site will help develop safety standards for drone systems and run test flights in extremely high altitudes and high speeds over water. But the public should first understand why they’re doing all this, Bailey said – this technology will change the world. “There are so very many beneficial uses of unmanned aircraft systems that have nothing to do with people looking in windows,” she said. A big piece of the FAA’s push to integrate drones into the NAS is to put an end to the battle between legislators, advocacy groups and drone advocates. Drones can save lives, help put out fires and assist in rescuing lost people, but the red tape limiting drone use by first responders has made such stories rare. The 19 firefighters who died in a blaze outside Phoenix in June “probably” would have been spared with the help of drone intelligence, Bailey said. “We have mapped the borders of wildfires, which provides better information to the incident commander for deployment of firefighters the next day. 
We could use [unmanned systems] to assist with monitoring rivers that are at risk of flooding and provide better information to emergency managers in real time. We’ve used them for infrastructure assessment, in cases where putting manned aircraft in that place was too dangerous,” she said, mentioning a case where drones were used to survey an oil company’s active flare stack. “We can do volumetric measurements far more accurately and more quickly for potential avalanches or how much material has been taken out of a gravel pit. We can do precision mapping for archaeological digs, and in many cases be able to give them such detailed instructions that they can go straight to more promising locations to begin the digs. We can locate polar bear dens so you can keep people away from those dens,” she said. When the imagery comes back for Steller sea lion counts, she said, the team can always tell whether the images were captured by a drone or by a manned helicopter, because when it’s a manned helicopter, the animals are all either staring at the camera or diving into the water, but they don’t notice the drones at all. Drone research allows scientists to be less invasive, she said. Researchers in Alaska are now developing a whale Breathalyzer drone that flies through a whale’s spout, and analyzes the bacteria collected to determine the health of the whale. Drones are also used to study volcanoes to learn more about how their ash interacts with aircraft and where it’s safe for manned aircraft to fly. Bailey also described how drones are used in research to help ships navigate dangerous, icy waters. In one instance, 250 miles north of Alaska's northern shore, drones were flown at 1,800 feet, dropping small buoys into the water that collected temperature data from nine meters underwater and then wirelessly transmitted that data back to the drones. “There’s not a manned aircraft in the world that would do that for safety reasons,” she said. There’s no replacement for unmanned aircraft when it comes to that kind of work, Bailey added, and that kind of work could prevent a ship from sinking someday. While the FAA wants drones in the air, it has also made it clear that compromising existing aviation safety standards is not an option. One of the ways Alaska will facilitate the harmonious integration of drones into the airspace is by helping to develop the drone type certificate process. Type certificates for traditional aircraft are the proof that an aircraft design has been approved by an authority like the FAA, so when someone buys an aircraft, there’s no question as to whether it’s safe, assuming it’s current on maintenance. Drones don’t have FAA-approved designs and most people probably wouldn’t feel comfortable flying if they knew they were sharing their air with a drone someone built in their garage. It would be the air equivalent of a pedestrian wandering around on the freeway. All three test sites Government Technology interviewed named “sense and avoid” functionality as one of the most pressing areas of research. One of the biggest problems with putting drones in the air is that there is today no consistent way for an aircraft without a person on it to obey the rules of the sky as currently written. But the FAA has stated it will not change existing “see and avoid” rules to accommodate drones, so researchers will need to get inventive so drones can follow the rules. Linking and sense and avoid systems are two areas of research North Dakota will help develop. 
The Northern Plains Unmanned Aircraft Systems Test Site is led by Bob Becklund, a colonel with the North Dakota National Guard and former commander of the 119th Fighter Wing. “Links have gotta be assured and reliable, which means they gotta be secure from hacking, encrypted, redundant, reliable, all that kind of stuff,” Becklund explained. “That’s quite a challenge. Or in the event a link gets unreliable or there’s a component failure, which of course can happen and the airplane loses its link, then it has to have onboard systems that it can recover itself safely and without hurting anybody on the ground, autonomously in this example.” Air traffic control (ATC) needs to be aware of what a drone is doing at all times, just as with any aircraft, so if something goes wrong such as losing the link with the ground, ATC can then direct other aircraft away from that area, Becklund said. Safety research is a huge priority for the North Dakota site, but the staff hope that drone research won’t just minimize safety impacts in 2015 when drones begin launching, but that their discoveries will contribute to making all of aviation safer. “As far as what we proposed for the test site, that covers the whole spectrum, everything from pilot training standards, which is what the University of North Dakota (UND) specializes in, certification in pilot training and evaluation standards for air crew, the aircraft and ground station and air worthiness (type) certification.” UND offers undergraduate degrees in unmanned aircraft systems operations and it’s that industry and culture of aviation and unmanned aircraft that likely led the FAA to select them as one of the test sites, Becklund said. The university’s engineering school also leads research on nanoscale electronics, which is connected to many of the size-weight-power engineering problems faced by drone researchers, Becklund pointed out. "This region can really offer the FAA and this nation and the world, for that matter, an expertise pool and airspace that’s unencumbered by other aircraft density, a ground population that’s nice and low," Becklund explained. "It’s a perfect place to do flying with new technologies.” There are a few potential drone applications that Becklund likes. The movie industry will be able to save money renting helicopters if they want aerial shots, real estate developers can easily and cheaply get aerial photos of properties, energy companies will have a cost-effective solution to look for breaks in their pipelines or power lines, and auto racing events could be enhanced for fans by assigning each car a drone with a camera, Becklund suggested. In Nevada, the Desert Research Institute (DRI) has already begun promoting their own brand of drone innovation, as they look for new ways to increase snowpack in Lake Tahoe ski resorts. Lake Tahoe today relies on cloud seeding towers that introduce silver iodide crystals into the atmosphere, increasing regional rain and snowfall by an estimated 10 percent. In January, DRI put a cloud seeding drone on display at Heavenly Village in South Lake Tahoe, Calif., to promote what their drones might someday offer the region. Nevada was a natural choice for one of the FAA’s test sites. The state has about 320 flying days per year, thanks to limited cloud cover and 10 times more restricted airspace than all the other states combined, said Tom Wilczek, defense and aerospace industry liaison at the Nevada Governor's Office of Economic Development. 
The huge amount of airspace they can use for testing and the existing drone research and industry experts in Nevada would have made it seem strange if they weren't selected, Wilczek said. “This is its birthplace,” he said. “The industry came from here and it came here because of the DoD applications. I’d say the whole unmanned systems industry is kind of our birthright. It was a matter of pride.” The University of Nevada, Las Vegas and the University of Nevada, Reno were both quick to offer their research and development to the project, Wilczek said, and they received 100 percent political support from all levels. The wide open airspace Nevada offers will allow researchers to test things they might not be able to test in other areas, Wilczek said, like various climb rates and angles of descent. The FAA has stated that Nevada will concentrate its efforts on developing drone standards and operations, operator standards, certification requirements, and air traffic control procedures. The state has a wealth of experts in all areas of drone manufacture and research to help. Some applicants that wanted to be FAA test sites may have talked about creating a local economy around drone manufacture and research, Wilczek said, but the FAA doesn’t have time for that. The FAA chose the places that already have an industry in place, because they want drones now.
<urn:uuid:53a27371-7cf3-4352-8237-b3a0fa6f2019>
CC-MAIN-2017-09
http://www.govtech.com/federal/Can-FAA-Drone-Sites-Help-Planes-and-UAVs-Coexist.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00538-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958518
2,417
3.140625
3
What is the difference between Photoshop and Illustrator? - 6th June 2016 - Posted by: Juan van Niekerk - Category: Web Design On the surface, Illustrator and Photoshop, both developed by Adobe, seem to be identical types of software. This is, however, a misconception, and here we will take a closer look at each to distinguish the differences between the two. Both Photoshop and Illustrator are used extensively by Graphic Designers and Digital Photographers on a daily basis as they are used to alter (Photoshop) and create (Illustrator) images and graphics in ways that would have been unthinkable a decade ago. Having Photoshop and Illustrator to work with has completely changed the face of graphic design and has opened new doors to artistic expression and creativity. Standing head and shoulders above its competitors, Adobe Photoshop has become the most popular photo editing software on the planet, not only among graphic design and photographic professionals, but among people from all walks of life. Photoshop is a raster graphics application, meaning that the alterations that are made to an image affect the actual pixels that the image is made up of. This is used to alter the image in virtually any imaginable manner from retouching colours and textures, adding borders, applying different types of effects or giving the image a more polished, professional look and feel. One drawback to using Photoshop is that, because it is a pixel-based alteration tool, images that have been modified are not easily scalable. When an image is resized, it can become distorted or pixelated and, hence, Photoshop is best utilised on images that will be used in the size that they were originally created in. Adobe Illustrator is a vector-based graphics program. When using Illustrator, a line is created by connecting two dots via a computer algorithm. These dots can be relocated and modified at will and the end result is a sharp and clear image that can be used virtually anywhere. Images that are created using Illustrator can be resized without the fear of blurring or pixelation and it is, therefore, perfect to use on logos, text or repeating graphics of different sizes. It is essentially a tool for the creation, rather than the alteration, of graphics and images. Photoshop vs Illustrator It is a question that crops up very often: "should I use Adobe Photoshop or Illustrator?" The simple answer is that they each serve a different purpose. Although there are some overlapping features, Photoshop is not ideal for creating images from scratch, just like Illustrator isn't ideal to alter existing images. Photographers that make use of photo editors, for example, will find Photoshop more useful as they work with and alter media that already exists. Marketing Developers and Designers who create logos will prefer to use Illustrator as they gain the ability to create graphics from scratch and to resize and print their text and images at will, while still retaining image quality. Graphic Designers, however, will find a wealth of use from both programs. Illustrator is ideal for the creation of web graphics and special effects whereas Photoshop gives them the ability to change their images into eye-catching lures to their work. Both Illustrator and Photoshop are amazing programs to use once you get to grips with their functions and features.
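The raster-versus-vector distinction described above is easy to see in miniature. The hypothetical Python sketch below (not tied to either Adobe product) treats a "raster" shape as a grid of pixels, so enlarging it can only duplicate existing pixels, which is where blockiness comes from, while the "vector" shape is a list of anchor points whose coordinates can be recomputed exactly at any size.

```python
def scale_raster(pixels, factor):
    """Nearest-neighbour upscale: every pixel is simply repeated factor x factor times."""
    out = []
    for row in pixels:
        stretched = [p for p in row for _ in range(factor)]
        out.extend([list(stretched) for _ in range(factor)])
    return out

def scale_vector(points, factor):
    """Vector scaling recomputes coordinates, so edges stay mathematically sharp."""
    return [(x * factor, y * factor) for x, y in points]

if __name__ == "__main__":
    raster_square = [[1, 0],
                     [0, 1]]
    vector_square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    for row in scale_raster(raster_square, 3):
        print(row)                          # blocky 6x6 grid of repeated pixels
    print(scale_vector(vector_square, 3))   # [(0, 0), (6, 0), (6, 6), (0, 6)]
```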
There are many tutorials, tips and tricks available on the internet that can help novices understand how to use them but the quickest way to become adept at both Illustrator and Photoshop is, of course, to become certified. If you are interested in becoming skilled at either Adobe Photoshop or Illustrator, or both, take a look at the Adobe Photoshop CS6 and Adobe Illustrator CS6 courses on offer from ITonlinelearning.
<urn:uuid:3f83b4c1-bebd-4a6e-9e03-ec6c8bf0bb1f>
CC-MAIN-2017-09
https://www.itonlinelearning.com/blog/what-is-the-difference-between-photoshop-and-illustrator/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00538-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952332
764
2.5625
3
Ever since Intel executive Gordon Moore predicted in 1965 that microchip capacity would double every 18 months, little has been said about the impact such churning of information technology would have on our waste stream. Computer users who keep demanding faster technology have discovered the landscape is littered with used computers. Worse, they now realize that it's not that easy to get rid of these boxes composed of metal, plastic and glass. In 1998, approximately 20.6 million personal computers became obsolete in the United States. Of that number, only 11 percent -- about 2.3 million units -- were recycled, according to the National Safety Council's Environmental Health Center. This year, the number of disused computers is expected to reach 42 million. That number is predicted to grow to 61.3 million by 2007. And the computers keep piling up, with 133 million PCs sold last year, according to Gartner Research. In Massachusetts, which has closed more than 100 landfills in the past 10 years, the dumping of obsolete computers is a major concern. In 1998, more than 75,000 tons of old electronic equipment ended up in the state's remaining landfills. That number is expected to reach 320,000 tons by 2005, according to the Department of Environmental Protection. To stem the worst problem -- the dumping of cathode ray tubes (CRTs), which contain as much as eight pounds of lead and other hazardous materials -- the state banned the disposal of CRTs in landfills and incinerators. But few other states have taken such drastic measures. Meanwhile, the tide of obsolete computers continues to rise. Interestingly, the number of computers being dumped in landfills around the country could be a lot higher. Many individuals and businesses feel guilty about throwing the hardware away. "We surveyed residents about the PCs they no longer use and found that most of the obsolete equipment is being stored," said Lisa Sepanski, project manager of the Computer Recovery Project, run by King County, Wash. Emerging Gray Market The urge not to toss old computers has created a gray market for used computers. Recycling firms have sprung up around the country to handle the growing pile of surplus and used technology. There are approximately 400 companies in the United States that have computer and electronics recycling operations, according to the International Association of Electronics Recyclers. Some are in the reselling business, others recover parts or materials. What can't be saved ends up in the hands of smelters and refiners. And then there are those organizations that take donations and resell or give the computers away. Schools and charities long have been recipients of donated equipment, although they have become more selective about the equipment they accept. But thanks to the Internet, donating a working computer has become a bit easier. In February, the Electronic Industries Alliance, a partnership of electronic and high-tech associations, launched a Web site that connects users who want to donate used computers with local charities, needy schools, neighborhoods and community demanufacturers. A similar organization, Share the Technology, provides a national computer donation database on the Internet that lists donation requests and offers across the country. For businesses, organizations and governments with large numbers of obsolete or broken computers, the answer may be found with one of the numerous recyclers around the country. 
One such company, Waste Management & Recycling, located near Albany, N.Y., recovers, recycles and refurbishes computers that it receives from businesses replacing outmoded computers. Hospitals, schools, universities and local and state governments contract with Waste Management to take their old computers and recycle them, according to Peter Bennison, vice president of business development at Waste Management. At the other end of the scale, PC manufacturers, such as IBM, Compaq, Dell and Hewlett-Packard, have entered the computer recycling and refurbishing business, as well. Last year, IBM recycled 500,000 computers, making it the largest PC recycler in the country. In the next few years, the firm expects to recycle as many as one million computers a year. In 1997, HP opened a recycling facility that shreds and separates up to 4 million pounds per month of broken or obsolete HP computer equipment, including everything from laptops and desktops to servers, printers and monitors. As a major consumer of computers, state and local governments have a significant impact on what happens to obsolete and broken electronic equipment. A few governments leave the decision up to individual agencies and departments. Some sign contracts with recyclers to haul the stuff away. Others try to donate the PCs or they end up storing them. Massachusetts is trying to encourage recycling to start at the moment an agency decides to buy new computers. "In our procurement solicitations, we give extra evaluation to companies that have recycling programs built into the sale of the computer," said Dick Mordaunt, director of IT and office procurement for the state's Operational Services Division. State purchasing agents will give extra points during the bid evaluation process to firms that build computers that are easy to recycle or made from recycled material, or that will take back the computers after a certain period. State and local governments also must play the role of facilitator when it comes to computer recycling. There's no better example than King County, Wash., where Microsoft and other software vendors coexist beside some of the most eco-friendly people in the country. Last year, the county launched the Computer Recovery Project, a multi-faceted effort to help individuals and businesses in the region recycle old computers. The project is part education and awareness, part business, according to project director Sepanski. "We decided to start by educating the public about the amount of waste in color monitors," she said. The $175,000 program ran ads on radio and drummed up publicity about the hazards of dumping old monitors. At the same time, the project contracted with a local firm, Total Reclaim, to take back old monitors. Individuals and businesses pay $10 to Total Reclaim, which disassembles the monitors and recycles the glass from the CRTs, along with plastic casings and circuit boards. Since July 2000, more than 6,000 monitors and thousands of other computer components have been collected for reuse and recycling. Total Reclaim will eventually recycle nearly 95 percent of the 75 tons of computer equipment it recovers. The biggest hurdle has been understanding all the government regulations dealing with recycling and disposal of used computers, explained Sepanski. But the state has become more active in streamlining regulations that affect computer recycling. Another problem has been the public attitude that it's the job of government to pick up and take away everybody's obsolete computers. 
"Its a real challenge getting people out of this frame of thinking," said Sepanski. "They have to realize a computer is a unique commodity. Its going to take public/private cooperation to deal with this problem."
<urn:uuid:a7e5b076-25de-464d-a003-142a44a00504>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Salvaging-The-Surplus.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00062-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954681
1,398
2.84375
3
What if one day you accidentally step on your smartphone and instead of it shattering, it simply bends? That day may be on the way, according to Seth Imhoff, a materials scientist at the Los Alamos National Laboratory. Imhoff told Computerworld the lab is working to develop stronger and more elastic types of glass that would bend instead of shatter under duress. The research under way at Los Alamos, a national lab in New Mexico known for classified work on nuclear weapons, could give consumers more durable smartphones, tablets and laptops. "In an ideal world, we'd have materials that are very, very strong -- that can withstand a lot of stress and have a very large elastic region," Imhoff said. "And ideally when they finally do undergo this change, they'd bend instead of shatter. We're looking, in this particular work, at what are the features that will enable that." Imhoff said Los Alamos scientists are focusing their research on metallic glass, which is made up of metallic atoms and has most of the properties of metals, except that it has the irregular atomic arrangement of glass. The researchers are trying to adjust how the atomic arrangements react under stress. A bendable, metallic glass could have a wide range of uses, such as in space science, electrical transformers, cell-phone cases and even golf clubs. Imhoff said not to expect to buy a smartphone with bendable glass anytime soon. "A whole lot of people are working on this but it tends to be slow moving." Ezra Gottheil, an analyst with Technology Business Research, said non-shattering glass could be a big win for devices. "These expensive toys are very vulnerable," Gottheil said. "Users would be happier if their phones and tablets were more durable. It would increase their lifespan. Vendors would have fewer sales, but more satisfied customers." Gottheil added that if bendable glass can be developed and it's relatively inexpensive, it could be useful in lots of applications, like automobile windshields and even home windows. Scientists working on the bendable glass project at Los Alamos are teaming up with researchers at the University of Wisconsin-Madison, Universitat Autnoma de Barcelona in Spain, and Tohoku University in Japan. Other scientists are also working on technology that could lead to flexible gadgets. In September, scientists at the University of California, Los Angeles, said they had created a light-emitting electronic display that can be stretched, folded and twisted, while remaining lit and snapping back into its original shape. The new stretchable material could change the form of smartphones and could lead to wallpaper-like lighting, minimally invasive medical tools and clothing with electronics integrated into them. This article, Research on bendable glass could lead to flexible mobile phones, was originally published at Computerworld.com. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "Research on Bendable Glass Could Lead to Flexible Mobile Phones" was originally published by Computerworld.
<urn:uuid:4fd2770a-207b-469e-a663-ed1e6dbb3cf6>
CC-MAIN-2017-09
http://www.cio.com/article/2378146/mobile/research-on-bendable-glass-could-lead-to-flexible-mobile-phones.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00114-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95323
690
3.171875
3
Agency: Cordis | Branch: FP7 | Program: BSG-SME | Phase: SME-2013-1 | Award Amount: 860.16K | Year: 2013 Road accidents have dramatic personal and social consequences. In 2011, it is estimated that more than 30,000 people died on the roads of the European Union, and for every death on Europe's roads there are an estimated 4 permanently disabling injuries such as damage to the brain or spinal cord, 8 serious injuries and 50 minor injuries. These figures also carry a huge economic cost, estimated at about 1.5-2% of gross domestic product in Europe. Several combined factors normally lie behind each road accident. However, recent in-depth studies reveal that poor road markings, which affect visibility and adhesion, are among the important factors in traffic deaths. Until now, the most common road marker has been road paint. These paints have been developed from different plastic materials such as thermoplastics, epoxy resins and the like. They lack mechanical resistance to traffic loads and, because of their chemical composition, the road marks easily get dirty when it rains, compromising their visibility and the safety of drivers. ROADMARK is focused on the development of a novel, inexpensive cementitious material for long-lasting and anti-sliding road marking. The final film will have improved features such as anti-skid, waterproofing and self-cleaning properties as well as greater durability. The mineral composition will give improved adhesion between the asphalt and the paint, which in turn improves the durability of the marking. Since market penetration for ROADMARK is forecast to reach about 14% after 5 years, the estimated cost savings could amount to 24Bn, calculated from the forecast penetration rate by year. These savings can also be expressed in lives: about 10,100 lives would be saved and 200,000 people spared serious injury in traffic accidents.
<urn:uuid:9f76bfba-bc36-41bf-8152-ab12a66b70d8>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/chemcolor-sevnica-doo-1093261/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00466-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961551
393
2.84375
3
For many years now groups like the MPAA and RIAA have tried to convince the public that piracy (that is, copyright infringement) is theft – and many people have come to believe this, but it's not true. In reality, copyright infringement is far more analogous to trespassing than it is to theft in its core concepts – and even more so in the digital world. To make it clear, I am looking at this from a largely historical perspective, looking at the origins of copyright and how it was intended to be used. This gives us a better view than the current laws, which have been greatly influenced and complicated by politics and money from those that have a vested interest in maximizing copyright protections and broadening its definitions. First, let's look at the legal definition of theft – this specific definition is from British law, from which many early American laws derived. A person is guilty of theft if he dishonestly appropriates property belonging to another with the intention of permanently depriving the other of it… Now compare that with the first U.S. copyright statute, the Copyright Act of 1790: …the author and authors of any map, chart, book or books already printed within these United States, …, who hath or have not transferred to any other person the copyright of such map, chart, book or books, share or shares thereof; and any other person or persons, …, shall have the sole right and liberty of printing, reprinting, publishing and vending such map, chart, book or books, for the term of fourteen years from the recording the title thereof in the clerk's office… Copyright, in its origin and intended forms, provides something very simple: the right to control how a work (primarily a book) is copied. That's it, that simple. So from a historical perspective, a copyright violation was to create a copy of a book without the author's permission – theft would have been to take a book that the author printed, permanently depriving him of that asset. These are clearly not the same thing – despite the assertions of certain powerful groups. I'm not defending or downplaying piracy, but let's talk about it in the proper context – it's a property rights issue, not theft.
<urn:uuid:f0552438-979f-4880-a418-47d7040d0031>
CC-MAIN-2017-09
https://adamcaudill.com/2012/05/20/piracy-is-not-theft/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00586-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953973
453
2.765625
3
January 04, 2012 The Japanese government collaborated with Fujitsu to create a virus which detects malware and collects info on the hackers. A virus on a virus? The post was accompanied by a technical schematic of how the system works. I see a few problems with it: - As we all know, most malware and attacks are distributed through non-involved 3rd parties. Obviously the "fight back" mechanism is going to affect these bystanders rather than the actual attackers. There are of course tools that can be developed to try and track the actual source of the attack, but I don't see a reason to distribute them as a virus at end-points rather than take a honey-pot approach. I remember that back in the late 90s, there was a trend of "fight back", mainly trying to automatically break into the computer that sent an attack (or allegedly sent an attack) and take it down (or DDoS it). It quickly turned out to be a disaster in terms of going after the wrong people. - Deliberately introducing viral code into end-points is one of those things that will only end in tears. Any misconfiguration or vulnerability in the "protection" code will allow attackers to efficiently introduce their code into each end point in the organization.
<urn:uuid:0dc9838b-940b-4df0-89bf-e62969d27ee4>
CC-MAIN-2017-09
http://blog.imperva.com/2012/01/anti-virus-virus.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00110-ip-10-171-10-108.ec2.internal.warc.gz
en
0.962517
261
2.609375
3
Virtual Space: 'True shock and awe' The view from above NASA established the Earth Observatory Web site as a free online destination for satellite imagery of everything from volcanic eruptions to dust storms in the Gobi Desert. The site currently features some of the most stunning pictures available of the devastation caused by last month's tsunamis in Asia. This image, provided to NASA by DigitalGlobe Inc. of Longmont, Colo., shows the flooding of Kalutara, Sri Lanka, an hour after the first waves hit. To view more images, go to www.gcn.com and enter 346 in the GCN.com/box. Then consider surfing over to www.redcross.org. Does your job produce data in an awesome visual way? Tell [email protected] and appear in Virtual Space.
<urn:uuid:8a893bd6-51c3-490f-8b19-f16642d42fa9>
CC-MAIN-2017-09
https://gcn.com/articles/2005/01/07/virtual-space-true-shock-and-awe.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00162-ip-10-171-10-108.ec2.internal.warc.gz
en
0.812201
171
2.515625
3
NASA thinks warp-drive travel might be possible Last year I got a bit excited about a report that the light speed barrier had been seemingly broken by scientists working with the Large Hadron Collider. Dreams of building a Millennium Falcon and traveling to the stars, the goal of space nerds everywhere, seemed plausible. Einstein's light speed barrier, the chain that keeps us anchored to Earth, could be broken, or so it seemed. But then, the European Organization for Nuclear Research discovered problems with their experiment that meant that neutrinos probably didn't break the speed of light. Suddenly, we were Earth-bound again. However, NASA isn't giving up on faster-than-light travel just yet. While admitting that it's mostly speculation at this point, NASA believes that one day faster-than-light travel through the use of warp drives might be possible. For those non-nerds among us, this is more the "Star Trek" version of space travel than the "Star Wars" one, though they are similar. According to NASA scientists, it might be possible to break the laws of special relativity with a ship shaped like a sphere that could be placed between two regions of space-time, with one expanding and one contracting. This requires matter with special properties and could break Einstein's law because the ship isn't actually moving faster than light; space itself is being moved, and the ship is simply falling through the hole — called a wormhole — it created. That much had been worked out as early as 1994 by physicist Miguel Alcubierre. However, in addition to the special matter, his plan also required energy equivalent to the mass-energy of the planet Jupiter. But NASA thinks it might not need a planet-sized ship after all. NASA physicist Harold White recently presented a paper showing that by simply tweaking the geometry of the Alcubierre warp drive, it could achieve the same results in a ship about the size of NASA's Voyager 1 probe. White is pushing out of the realm of the theoretical too, vowing to use lasers in his lab to demonstrate how the modified drive could in fact perturb space-time by one part in 10 million. We may not be firing up the Falcon anytime soon, but at least the dream of space travel is alive once again. While some folks might be thinking of booking flights to Alpha Centauri, I think I'll beat the rush and buy a ticket to the planet GJ 667Cc. With three visible suns, possibly lots of water and an untapped real estate market, it looks like a nice place for a vacation home. Posted by John Breeden II on Dec 05, 2012 at 9:39 AM
<urn:uuid:3a599af4-97e0-4496-b6dc-d1edbc32def9>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2012/12/nasa-thinks-warp-drive-travel-might-be-possible.aspx?admgarea=TC_EmergingTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00582-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964521
544
3.078125
3
There has been a surge of questions of late regarding IPv6 and whether it can be used to better identify individuals on the Internet. Everyone from marketeers to law enforcement officials seem to hold the same misconception that IPv6 is going to make it possible to expose people in a way that IPv4 does not. It is true that IPv6 will change addressing on the Internet. Many of us hope it restores the ability to identify an actual network endpoint -- a feature that we lost a number of years ago in IPv4. But some appear to be imagining a future where each machine has its very own address, and that these addresses will be easily traced whenever a person visits a website, plays a game online, or even opens an email. In fact, IPv6 actually has features that are designed to foil these sorts of plans. Also, because of the enormous IPv6 address space, it's rather unlikely that a single machine will have a single IPv6 address. To make sense of the discussion, we need some history. As the world started to run out of IPv4 addresses (which is some time ago now), two things happened. First, we changed the way that addresses were given out, so that fewer addresses would be allocated at a time. Second, NAT (Network Address Translation) was invented. A NAT is a mechanism where one network address is mapped to another address. For example, in your home network you might have a cable modem. It probably has one "public" IPv4 address: an address that is routable on the Internet. You probably have some sort of gateway or router (like a wireless access point). That gateway gives out addresses to your tablet, your phone, your Xbox, and so on. Each of these devices gets an address, usually one from a special "private" range specified in RFC 1918. When one of your devices wants to connect to a service on the Internet, the gateway takes the connection to the device, remembers the private address for it, and connects to the Internet service using the public address. The gateway translates between the private address and the public one, keeping track so that the different devices can all use the same public address. So each device in your network has its own address, but as far as the rest of the Internet is concerned they're all at the same address. You don't have to use NAT this way, but it's a common way to use it. As IPv4 addresses get more scarce, NATs are getting larger. We have a NAT in our office. Some ISPs are now running what are called "carrier grade" or "large scale" NAT so there can be hundreds or thousands of machines behind a single address. And unlike the household case above, those "hidden" nodes often have no relation to one another. So yes, in most networks today, it's difficult to identify someone by their address. NATs are a problem on the Internet because they're in the way. Suppose you want to make a voice call over the Internet to your mother. The way you think of this might be that your computer connects to your mother's computer. What actually happens is that you pass your data through your NAT, and your mother passes her data through her NAT, and the only machines that are actually talking to each other across the Internet are the two NATs. If they get anything wrong, packets get lost and the voice quality degrades. Now, there is no scarcity of IPv6 addresses: there are more than enough IPv6 addresses for every atom on the face of the earth. So there's no need to have NAT. Certainly every device that wants one can have an IPv6 address. 
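To make the translation step concrete, here is a minimal, illustrative C++ sketch of the bookkeeping a home gateway performs: it remembers each private (address, port) pair and rewrites outbound traffic so that everything appears to come from its single public address. The addresses, port range and data structure are invented for illustration; real NAT implementations live in router firmware or the kernel and also handle timeouts, different protocols and inbound traffic.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Toy model of the translation table a home gateway keeps.
// Key: a private host's address and source port.
// Value: the public-side port the gateway picked for that flow.
struct NatTable {
    std::string public_ip;  // the single routable address
    std::map<std::pair<std::string, uint16_t>, uint16_t> flows;
    uint16_t next_public_port;

    // Outbound packet: remember the private endpoint, rewrite the source.
    std::pair<std::string, uint16_t> translate_out(const std::string& priv_ip,
                                                   uint16_t priv_port) {
        auto key = std::make_pair(priv_ip, priv_port);
        auto it = flows.find(key);
        if (it == flows.end()) {
            it = flows.emplace(key, next_public_port++).first;
        }
        return {public_ip, it->second};
    }
};

int main() {
    NatTable nat{"203.0.113.7", {}, 40000};

    // Three devices behind the gateway all appear as the one public address.
    const std::pair<std::string, uint16_t> devices[] = {
        {"192.168.1.10", 51000},
        {"192.168.1.11", 51000},
        {"192.168.1.12", 43222},
    };
    for (const auto& dev : devices) {
        auto pub = nat.translate_out(dev.first, dev.second);
        std::cout << dev.first << ":" << dev.second << "  ->  "
                  << pub.first << ":" << pub.second << "\n";
    }
    return 0;
}
```

So much for how a NAT hides multiple devices behind one address; the question is what happens when that translation layer goes away.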
Doesn't this mean that identification of users (by marketers or governments or whatever) will be easier? Aren't we giving up privacy even as we gain the benefits of getting rid of NAT? No. To begin with, the way that IPv6 addresses are usually issued means that most devices won't have just one address. Instead, they're likely to get various ranges, which means that each time you see a different IPv6 address you don't know whether it is a distinct device. This is sort of the reverse of the IPv4 problem. Under IPv4 and NAT, one address corresponds to multiple machines. Under IPv6, one machine may correspond to multiple addresses. Moreover, there are standard techniques (like those specified in RFC 4941 and RFC 3972) designed to enable a node to change its address. The goal is to conceal that the same node is involved in different transactions, by using different addresses for different transactions. Such techniques are not available under IPv4. So while it is true that nobody can tell which of the boxes is behind your NAT address, they can certainly associate all the traffic with a single NAT. Currently, IPv6 also provides a lot less geolocation data than IPv4 does. This is really just a temporary state of affairs, however, there is so much more IPv4 penetration that it is easy for geolocation database builders to identify the geographical location associated with an IPv4 address. And there are only four billion IPv4 addresses, so it is feasible to store information about every one of them. The low use rates of IPv6 so far, and the enormous size of the address space, means that the geolocation information about IPv6 addresses is not currently commercially viable. In any case, the best way to track someone's behavior is not by address anyway, because people change networks too often. Smartphones and tablets move back and forth between mobile networks and Wi-Fi networks throughout the day. Even many laptops move through different Wi-Fi networks frequently. But someone who wants to track a user doesn't want that tracking to fail every time the user leaves home and changes networks. This is why social networks are so beloved of marketers: they actually reveal the additional information to marketers about where users are and the networks through which they travel. Ultimately, building a profile for a potential or current customer using an IP address -- whether v4 or v6 -- is both tricky and unsatisfying. That doesn't mean that IPv6 will usher in a new era of anonymity, but the worry that IPv6's lack of NAT means it reveals much more about users is mistaken. Sullivan is the Principal Architect at Dyn, a provider of traffic management, message management and performance assurance solutions. Read more about lan and wan in Network World's LAN & WAN section. This story, "IPv6 will allow them to track you down. Not!" was originally published by Network World.
<urn:uuid:96fb2dc1-a98d-4733-9b40-9f1e9c650032>
CC-MAIN-2017-09
http://www.itworld.com/article/2701405/networking/ipv6-will-allow-them-to-track-you-down--not-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00458-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96274
1,340
3.484375
3
Last week, Matthäus Krzykowski and Daniel Hartmann of VentureBeat loaded Android onto an Eee PC 10000H netbook. According to their write-up, it took no more than about four hours to compile Android for the Eee and get it up and running. This wasn't just a proof of concept install. Once running, Android was able to use the onboard graphics, sound, and wireless capabilities of the device. This installation success is helping fuel speculation about the future of Android and other possible Google operating system initiatives. While you might think of Android as a phone-based operating system, in reality it is more than that. Android can theoretically run on many PCs, including laptops and netbooks. The VentureBeat install, like the Nokia N810 we wrote about in early December, shows that Android and Android-like operating systems could have a potential range far wider than handheld devices. That's not to say that you can just swap out Android for any Linux install. Android is built on a Linux kernel but diverges in the way that it handles graphics. Instead of using the standard Linux X Server drivers, Android employs a "framebuffer driver." This alternate technology arbitrates and controls access to the system display using the open source Skia Graphics Library. Skia is also the cross-platform graphics engine that powers Google Chrome. Standard Linux applications depend on X Server and must be ported to Android's graphics system if they are to run properly there. As VentureBeat points out, the framebuffer driver approach currently runs far slower than X Server. Should Android attempt to make a push into the netbook market, it will have to improve its efficiency on that front or face "very slow graphics." In most other regards, Android and standard Linux installs aren't that different. Most Linux drivers work under Android, so porting to new Linux-friendly platforms like netbooks involves minimal work. The VentureBeat developers were impressed by how simple it was to move Android to the Eee, writing that they found the underlying source code very neatly written and easy to port. Android itself made very few assumptions about the platform it was running on, so it automatically updated its screen display to fit the Asus screen (which is approximately five times the size of a G1 phone screen) without special programming. With the Eee port done, speculation is running rampant as to whether Google has a larger OS distribution plan in mind. Tech Republic suggests that Chrome, Android and Google Desktop are leading to a full Google OS push. ZDNet's Garett Rogers writes that Google could monetize Android application sales for traditional PCs to create a new and profitable revenue stream. From an end-user's point of view, this could be a huge, liberating win. An open source operating system with Google and its allies behind it could drive down the price of personal computers while at the same time opening that operating system up for open third-party innovation. Is Google gunning for the netbook market? Given how flexibly Android has been developed and how well it adapts to new platforms without needing specific adjustments, Google would be foolish to not at least consider this avenue as an active development path. Google has the strength, the influence, and now it seems the technology, to take Linux-based netbooks and notebooks to a newer, more polished, and friendlier place, although some will disagree. 
With Android and its overall consumer-ready interface, netbooks could appeal to a wider group of customers and to a possibly revenue-rich market.

<urn:uuid:a732196d-72d2-439b-af6b-d5bf2bee7c39>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2009/01/android-netbook-port-leaves-some-pondering-google-os/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00510-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945289
714
2.765625
3
There are a few parts to COM that are useful to understand. The central concept to COM is software components that expose one or more interfaces: named groups of (typically related) functions that the component implements. Microsoft publishes hundreds of such interfaces implemented by Windows components. Many of them are also implemented by third-party components, allowing them to slot into applications as if they were part of the operating system. For example, Windows has a large COM-based framework, Media Foundation, for constructing media applications (music and video playback and capture, that kind of thing). This includes an interface for "media sources." These components represent, say, a stream of audio data from a CD, audio/video data from a DVD, video from a Web broadcast, and so on. Microsoft provides a number of components that implement this interface with Windows, for well-known sources such as audio CDs and DVDs, allowing Media Foundation programs to use these sources in a common way. Third parties can also implement the same interface for custom streams—perhaps in the future we will have holographic data cubes instead of DVDs—and in doing so will allow existing Media Foundation software to use the sources, without ever having to know any specific details of what they represent. COM itself defines a number of interfaces components can implement. The most important of these is called IUnknown. Every single COM component implements IUnknown. The interface does two things: it handles reference counting, and it's used for getting access to the interfaces that a COM component actually implements. Reference counting is a technique for managing the lifetime for software objects. Each time a program wants to hold on to an object, it calls a function AddRef() that's part of IUnknown. Each time it has finished with the object, it removes a reference with Release() function. Internally, every COM object keeps count of how many references there are, increasing the count for each AddRef(), and decreasing it for each Release(). When the count hits zero—no more references to the object—the object tidies up and destroys itself, closing any files or network connections or whatever else it might have needed. This frees the memory it uses. Reference counting is not a perfect system. It has a few well-known issues. If two objects reference each other, even if nothing else references them, they will never be destroyed. That's because each object will think it has a count of one, and since neither object will Release() the other, the memory is lost, leaked, until the program ends. The problem is known as circular references. This can be easy to detect with only two objects that reference each other directly, but in real programs there can be chains of many hundreds of objects, making it difficult to track down exactly what has gone wrong. It's also not safe: a program can accidentally try to use a component even after it has Release()ed it, and COM does nothing to detect this. The result can be harmless, but it can also be a serious security flaw. COM objects can expose a lot of different interfaces. Getting access to each of the interfaces a COM object supports is done with a function called QueryInterface(). Every single interface used by COM has an associated GUID, a 128-bit identifier that's usually written as a long sequence of hexadecimal digits wrapped in braces. 
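As a rough, self-contained illustration of the pattern just described (not the real Windows definitions, which live in SDK headers such as unknwn.h), here is a C++ sketch of a reference-counted component with a QueryInterface-style lookup. The interface names, GUID values and return codes are simplified stand-ins for IUnknown, real IIDs and HRESULTs.

```cpp
#include <atomic>
#include <cstring>
#include <iostream>

// Simplified stand-ins for the real COM types.
struct Guid { unsigned char bytes[16]; };
inline bool operator==(const Guid& a, const Guid& b) {
    return std::memcmp(a.bytes, b.bytes, 16) == 0;
}

// Every COM interface derives, conceptually, from IUnknown.
struct IUnknownLike {
    virtual long QueryInterface(const Guid& iid, void** out) = 0;
    virtual unsigned long AddRef() = 0;
    virtual unsigned long Release() = 0;
    virtual ~IUnknownLike() = default;
};

// A second, made-up interface the component also implements.
static const Guid IID_IUnknownLike = {{0x01}};
static const Guid IID_IGreeter     = {{0x02}};
struct IGreeter : IUnknownLike {
    virtual void Greet() = 0;
};

// A component: reference counted, destroys itself when the count hits zero.
class Greeter : public IGreeter {
    std::atomic<unsigned long> refs_{1};   // the creator starts with one reference
public:
    long QueryInterface(const Guid& iid, void** out) override {
        if (iid == IID_IUnknownLike || iid == IID_IGreeter) {
            *out = static_cast<IGreeter*>(this);
            AddRef();                      // hand out a new reference
            return 0;                      // stand-in for S_OK
        }
        *out = nullptr;
        return -1;                         // stand-in for E_NOINTERFACE
    }
    unsigned long AddRef() override { return ++refs_; }
    unsigned long Release() override {
        unsigned long left = --refs_;
        if (left == 0) delete this;        // self-destruct at refcount zero
        return left;
    }
    void Greet() override { std::cout << "hello from the component\n"; }
};

int main() {
    IGreeter* g = new Greeter();           // refcount == 1
    void* raw = nullptr;
    if (g->QueryInterface(IID_IGreeter, &raw) == 0) {  // refcount == 2
        static_cast<IGreeter*>(raw)->Greet();
        static_cast<IGreeter*>(raw)->Release();        // back to 1
    }
    g->Release();                          // 0: the object deletes itself
}
```

The Guid type above is of course a stand-in; the real 128-bit identifiers are the brace-wrapped strings just mentioned.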
The general appearance of these strings, though not their actual values (such as IUnknown), will likely be familiar to any Windows users who have poked around their registries. They also crop up in Event Viewer and various other places in the operating system. COM components, the EXEs and DLLs that implement COM interfaces, also have GUIDs. The mapping from GUID to EXE or DLL is stored in the registry. Indeed, that's what the registry is for, registering these GUID to DLL mappings. An application uses these component GUIDs (called "CLSIDs," class IDs) to tell the operating system which components it wants to create. To retrieve a specific interface from the component, the application passes the GUID of the interface (or "IID," interface ID) of interest in to the component's QueryInterface() function. If the component implements the interface, QueryInterface() gives the application a way of accessing the interface. If it doesn't, it gives the application nothing. COM also defines an Application Binary Interface (ABI). An ABI is a lower-level counterpart to an Application Programming Interface (API). Where an API enumerates and describes all the functions, classes, and interfaces that a program or operating system lets developers use, the ABI specifies the specific meaning and usage of that API. For example, the Windows ABI specifies that the standard integer number type, called int in C or C++, is a 32-bit value. The ABI also specifies how functions are called—for example, which processor registers (if any) are used to pass values to the function. The COM ABI also defines how programs represent interfaces in memory. Each interface is represented by a table of memory addresses. Each memory address (or "pointer," as they are known) represents the address of one of the interface's functions. The pointers in the table follow the same order as the functions that are listed in the interface's description. When an application retrieves an interface from a component using QueryInterface(), the component gives the application a pointer to the table that corresponds to that particular component. This representation was chosen because it's the natural way of implementing classes and interfaces in C++. C++ also provides the name for this table of pointers; it's called a "v-table," with "v" for "virtual," because C++ describes these interface functions as "virtual" functions. Every C++ object that implements interfaces contains within it a set of pointers, one per interface, with each pointer pointing to the v-table representing an interface. These pointers are sometimes called v-ptrs ("virtual pointers"). It is with v-ptrs that functions like QueryInterface() actually work: when QueryInterface() wants to give an application access to a particular interface, it gives the application a v-ptr to the interface's v-table. While based on C++'s approach to objects, COM is designed to be language agnostic. It can be used with Visual Basic, Delphi, C, and many other languages. It's the ABI that ties these disparate languages together. Because they all know what an interface "looks like" and how it "works," they can interoperate. The final part of the COM puzzle is the Interface Description Language (IDL). The actual interfaces are (or at least, can be) written in a language that isn't a programming language as such. Instead it's a simpler language that's used to describe COM interfaces (and their corresponding GUIDs) in a standard, consistent way. 
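As an aside on the ABI described a few paragraphs up, the sketch below builds a C-style v-table by hand in C++ to show concretely what "a table of memory addresses" looks like. It mirrors the layout described above, but it is not the exact layout any particular compiler guarantees, and the interface and function names are invented.

```cpp
#include <iostream>

// What an interface looks like at the ABI level: an object whose first
// member is a pointer to a table of function pointers.
struct Widget;                      // the "object"
struct WidgetVtbl {                 // the v-table: one slot per function
    void (*Describe)(Widget* self);
    int  (*Size)(Widget* self);
};
struct Widget {
    const WidgetVtbl* vptr;         // the v-ptr QueryInterface would hand out
    int size;
};

// Concrete implementations the table points at.
void DescribeImpl(Widget* self) {
    std::cout << "widget of size " << self->size << "\n";
}
int SizeImpl(Widget* self) { return self->size; }

static const WidgetVtbl kWidgetVtbl = { &DescribeImpl, &SizeImpl };

int main() {
    Widget w{ &kWidgetVtbl, 42 };
    // A caller that only knows the table layout can still call the functions:
    w.vptr->Describe(&w);
    std::cout << "size via table: " << w.vptr->Size(&w) << "\n";
}
```

With that memory layout in mind, back to IDL and how these interface descriptions are actually produced and consumed.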
These IDL descriptions then get compiled into a type library (TLB)—a binary representation of the same information—which is then typically embedded into the component's DLL or EXE, or sometimes provided separately, and sometimes not provided at all. Programs and development tools can inspect these TLBs to determine which interfaces a particular component supports. Microsoft saw that COM was good. Redmond started using COM just about everywhere. OLE 1.0 begat COM, and COM begat OLE 2.0: the OLE API was reworked to be just another COM API. ActiveX, a plugin interface for GUI components and scripting languages, infamously used as the plugin architecture for Internet Explorer, was a COM API. Even new APIs such as Direct3D and, as mentioned above, the new Media Foundation multimedia API are all built on COM (or at least, mimic COM). Since its invention, COM has grown to permeate Windows. COM is a large and complex system. It has to be; it has to handle a wide range of situations. For example, COM components that create a GUI have to abide by the specific rules that the Windows GUI API has for threading: any given window can only be modified by a single thread. COM has special support for components that need to live by these rules to ensure that every time an interface's function is called, it uses the right thread. This isn't suitable for most non-GUI components, so COM also has support for regular multithreaded components. You can mix and match—but you have to be careful when you do. Still, Microsoft used COM just about everywhere. In particular, it grew to become the foundation of Microsoft's enterprise application stack, used for building complex server applications. First Microsoft developed Distributed COM (DCOM), which allows an application on one machine to create components on another machine over a network and then use them as if they were local. Then came MTS, Microsoft Transaction Server, which added support for distributed transactions (to make sure a bunch of distributed components either all succeeded in updating the system, or all did nothing, with no possibility of some succeeding but others not). Finally, in Windows 2000, came COM+. COM+ subsumed DCOM and MTS and added performance and scalability enhancements. It also added more features designed for enterprise applications. A proliferation of APIs Concurrent with this development of OLE and COM, Windows itself was being developed and expanded, sprouting new APIs with every new release. 16-bit Windows started out with GDI, USER, file handling, and not a huge amount more. In late 1991, an update for Windows 3.0, the Multimedia Extensions, was released, providing a basic API for audio input and output and control of CD drives. Multimedia Extensions were rolled into Windows 3.1, but just as Windows 3.0 had an extension for sound, so Windows 3.1 gained one for video, with Video for Windows in late 1992. Again, Windows 95 would have this as a built-in component. Microsoft never planned to extend the 16-bit Windows API to 32 bits. The plan was to work with IBM and create OS/2 2.0, a new 32-bit operating system with an API based on the 16-bit OS/2 1.x APIs. But the companies had a falling out, leaving IBM to go its own way with OS/2 and forcing Microsoft to come up with a 32-bit API of its own. That API was Win32: a 32-bit extension to Win16. The 32-bit Windows NT was released in 1993, and this greatly expanded the breadth of the APIs. 
Though it included 32-bit versions of USER and GDI to ensure applications were easy to port, the 32-bit Windows API did much more. For example, it included support for multithreading and the synchronization mechanisms that go hand in hand with multiple threads. Windows NT had services, long-running background processes that didn't interact with the user (equivalent to Unix daemons), and had an API to support this. Windows NT introduced concepts such as security and user permissions; again, there are APIs to support this. Windows 95 brought API development of its own. DirectX was first introduced on that operating system. Exactly what's in DirectX has varied over the years; at one time or another, it has included a 2D API (DirectDraw), a 3D API (Direct3D), an audio API (DirectSound), a joystick, keyboard, and mouse API (DirectInput), a networking API (DirectPlay) and a multimedia framework to supersede both the Multimedia Extensions and Video for Windows (DirectShow). Many of these are now deprecated, but Direct3D and DirectShow are still important Windows components (even though DirectShow has been partially replaced by the newer Media Foundation). The coding styles and conventions used by these different APIs vary, according to the whims of the development teams that created them and the prevailing coding fashions of the time. The result of all this is that the Win32 API is not some monolithic, consistent thing. It's a mish-mash. Really old parts, parts that are inspired by Win16—including file handling, creation of windows, and basic 2D graphics—are still C APIs. So are parts that correspond closely with core kernel functionality; things like process and thread creation are all C-based. But newer parts, parts without any direct Win16 predecessor, are often COM or COM-like. This includes DirectX, Media Foundation, extensions and plug-ins for the Explorer shell, and more. The use of COM or COM-like mechanisms even extends to certain drivers. Big and sprawling as COM may be, it isn't bad technology. It's complex because it solves complex problems, and in practice, most applications can avoid a lot of the complexity. Software that uses Media Foundation, for example, only needs a small and easily understood subset of COM functionality. Some things, such as Direct3D, don't even play by the rules properly, which is bad if you want to treat them as if they're proper COM, but convenient if you don't. But there was a problem with all that complexity. The only mainstream programming language that supported all of COM's capabilities was C++ (or C, but much less conveniently). Visual Basic, though widely used in enterprise development, could only use a subset of COM's capabilities. This wasn't all bad, as it meant that it handled a lot of complexity automatically, but was nonetheless limiting. The problem with C++ is that it is itself a complex language, and C++ COM programs tend to be prone to the same kind of bugs and security flaws that have plagued C and C++ for decades. And unlike Visual Basic, C++ did virtually nothing automatically. Microsoft has tried to make C++ developers' lives easier. COM requires quite a bit of infrastructural scaffolding to be written by the developer. This was both laborious and repetitive, as the code would be essentially the same each time. A non-standard extension to C++, "Attributed C++" (introduced in 2002), enabled the compiler to do this work instead of the developer. 
Another non-standard extension, #import, would use TLB metadata to generate nice, C++-like classes simply using (though not creating their own) COM components. Helpful though these were, using COM and C++ still meant wrestling with grizzly bears on a daily basis. So this left developers with a choice: safe, easy, but limited Visual Basic, or unsafe, complex, fiddly, powerful C++. Meanwhile, Sun was busy inventing and developing Java.
<urn:uuid:a67f63b7-d95a-4bf2-87aa-1017f53609aa>
CC-MAIN-2017-09
https://arstechnica.com/features/2012/10/windows-8-and-winrt-everything-old-is-new-again/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00155-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944866
3,015
3.3125
3
Capturing the New Frontier: How To Unlock the Power of Cloud Computing The Benefits of the Cloud Cloud computing is immensely popular with companies and government agencies in search of revolutionary cost savings and operational flexibility. According to industry research firm IDC, cloud computing's growth trajectory is, at 27% CAGR, more than five times the growth rate of the traditional, on-premise IT delivery/consumption model. Cloud computing practitioners cite numerous benefits, but most often point to two fundamental benefits: - Adaptability: An enterprise can get computing resources implemented in record time, for a fraction of the cost of an on-premise solution, and then shut them off just as easily. IT departments are free to scale capacity up and down as usage demands at will, with no up-front network, hardware or storage investment required. Users can access information wherever they are, rather than having to remain at their desks. - Cost Reduction: Cloud computing follows a model in which service costs are based on consumption and make use of highly shared infrastructure. Companies pay for only what they use and providers can spread their costs across multiple customers. In addition to deferring additional infrastructure investment, IT can scale its budget spend up and down just as flexibly. This leads to an order of magnitude cost savings that wasn't possible with 100% proprietary infrastructure. Other benefits of the cloud include collaboration, scaling and availability, but revolutionary cost savings and the almost “instant gratification” offered by the agility of the cloud will be the key contributors to adoption of the cloud. What is the Cloud? So much has been written, advertised and discussed about cloud computing, it is appropriate to define the term for common understanding. Cloud computing generally describes a method to supplement, consume and deliver IT services over the Internet. Web-based network resources, software and data services are shared under multi-tenancy and provided on-demand to customers. It is this central tenet of sharing - and the standardization it implies - that is the enabler of cloud computing's core benefits. Cloud computing providers can amortize their costs across many clients and pass these savings on to them. This paradigm shift in computing infrastructure was a logical byproduct and consequence of the ease-of-access to remote and virtual computing sites provided by the Internet. The U.S. National Institute of Standards & Technology (NIST) defines four cloud deployment models: - Private Cloud, wherein the cloud infrastructure is owned or leased by a single organization and is operated solely for that organization - Community Cloud, wherein the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns, including security requirements - Public Cloud, wherein the cloud infrastructure is owned by an organization selling cloud services to the general public or to a large industry group - Hybrid Cloud, wherein the cloud infrastructure is a composition of two or more cloud models that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability NIST's definition of cloud computing not only defines HOW infrastructure is shared, but also outlines WHAT will be shared. 
These service models shift the burden of security accordingly between provider and user: Software-as-a-Service, or “SaaS”, is the most mature of the cloud services. SaaS offers a “soup to nuts” environment for consumption of a common application on demand via a browser. Typically, the customer controls little or nothing to do with the application, or anything else for that matter, and is only allowed to configure user settings. Security is completely controlled by the vendor. Examples of providers include Salesforce.com, Workday, Mint.com and hundreds of other vendors. Platform-as-a-Service, or “PaaS”, is an emerging cloud service model. The customer is able to develop applications and deploy onto the cloud infrastructure using programming languages and tools supported by the cloud service provider. They are not able to control the actual infrastructure – such as network, OS, servers or storage – the platform itself. Because the customer controls application hosting configurations as well as development, responsibility for software security shifts largely to their hands. Examples include Google App Engine and Amazon Web Services. Infrastructure-as-a-Service, or “IaaS”, is where even more of the infrastructure is exposed to multi-tenant users. The cloud service provider provisions processing, storage, networks and other fundamental computing resources. The customer is able to deploy and run arbitrary software, which can include operating systems and deployed applications. Software security in this deployment model is completely in the customer's hands, including such components as firewalls. Examples include Amazon Elastic Compute Cloud and Rackspace Cloud. While SaaS gained popularity as an alternative to on-premise software licensing, the models that are driving much of the current interest in cloud computing are the PaaS and IaaS models. Enterprises are especially drawn to the alternative development infrastructure and data center strategies that PaaS and IaaS offer. At this point in time, smaller enterprises seem to have more traction with PaaS, enabling them to rapidly bring websites to market; whereas larger enterprises are more comfortable beginning their cloud deployments with an existing application moved to an IaaS cloud service. How do we fully realize the benefits of the Cloud? Realizing the cloud's benefits is greatly determined by the trustworthiness of the cloud infrastructure – in particular the software applications that control private data and automate critical processes. Cyber-threats increasingly target these applications, leaving IT organizations forced to sub-optimize the cloud deployments containing this software, limiting flexibility and cost savings. Assuring the inherent security of software, therefore, is a key factor to unlock the power of cloud computing and realize its ultimate flexibility and cost benefits. Recommended approaches to Cloud software Security According to the Cloud Security Alliance, a not-for-profit organization promoting security assurance best practices in cloud computing, the ultimate approach to software security in this unique environment must be both tactical and strategic. 
Some of their detailed recommendations include the following: - Pay attention to application security architecture, tracking dynamic dependencies to the level of discrete third party service providers and making modifications as necessary - Use a software development life cycle (SDLC) model that integrates the particular challenges of a cloud computing deployment environment throughout its processes - Understand the ownership of tools and services such as software testing, including the ramifications of who provides, owns, operates, and assumes responsibility - Track new and emerging vulnerabilities, both with web applications as well as machine-to-machine Service Oriented Architecture (SOA) which is increasingly cloud-based The key to achieving the benefits of the cloud and to putting the above recommendations into practice is Software Security Assurance, or “SSA”. Recognized by leading authorities such as CERT and NIST, SSA is is a risk-managed approach to improving the inherent security of software, from the inside. There are three steps to a successful SSA program: - Find and fix vulnerabilities in existing applications before they are moved into a cloud environment - Audit new code/applications for resiliency in the target cloud environment - Establish a remediation / feedback loop with software developers and outside vendors to deal with on-going issues and remediation. To realize the full benefits of cloud computing, organizations must assess and mitigate the risk posed by application vulnerabilities deployed in the cloud with equal vigor as those within their own data center. It is only then that they will be able to take full advantage of Cloud Computing to save cost and increase the efficiency of their business.
<urn:uuid:e040ab32-efd5-4ae8-855f-61ffae25e1a8>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/datacenter/datacenter-blog/capturing-new-frontier-how-unlock-power-cloud-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00207-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939306
1,566
2.5625
3
Overview on APIs Application Programming Interfaces are a vital part of what makes business communication operate efficiently and securely in an online environment. APIs establish guidelines for software communication and operation. Without APIs, software would use wildly different methods to accomplish the same goals, requiring programmers to learn a whole new set of rules for each implementation. In other words, it’s a set of standards that make it easier for new programmers to understand the work of their peers. APIs are powerful tools that businesses can use to move incredible amounts of data across the Internet. However, API-based communication can introduce problems when uninvited third parties intercept data containing private information like financial records and passwords. Unless the API implementation is secure, there’s a risk of exposing internal data and customer information. SOAP and REST: Cross-Device, Cross-Platform Communication API Pros and Cons Simple Object Access Protocol and Representational State Transfer are two competing API standards that allow one application to communicate with another over a network like the Internet using platform agnostic transfer methods. Which API method you go with is a decision that can go either way depending on your business needs. However, most third-party services use one of two standards. If you’re getting ready to choose an API, use the comparison points below as a guide: 1) REST supports multiple data output types, including XML, CSV, and JSON. SOAP can only handle XML. Because the JSON format is easier to parse than XML, using REST to send data in JSON can actually save on computer infrastructure costs by requiring less computing power to do the same job. JSON and CSV data is also considered easier to work with from a programming standpoint. 2) REST is also able to cache data transfers, so when another endpoint requests an already completed query, the API can use the data from the previous request. Alternatively, SOAP implementations have to process the query every time. 3) SOAP offers better support for Web Services specifications, often making it a stronger option when standardization and security are primary concerns. Both formats support Secure Sockets Layer for data protection during the transfer process, but SOAP also supports WS-Security for enterprise-level protection. When you’re dealing with crucial private information like bank account numbers, it makes more sense to use SOAP. However, SOAP’s extra security isn’t necessary if you’re sending the day’s forecast to a mobile application. While SOAP may sound like it has a total advantage over REST in this case, it comes down to how well the API is implemented. A good REST implementation can be more secure than a poorly-designed SOAP implementation. SOAP also has built-in error handling for communication errors via the WS-ReliableMessaging specification. REST, on the other hand, has to resend the transfer whenever it encounters an error. Testing for API Security and Stability Is Essential API testing is very different in nature from debugging a website or an application, because whether the software works or not depends on the processing servers and systems handling the heavy lifting. APIs move a lot of data behind the scenes, and it’s not as obvious to spot when the implementation is working reliably. Errors in the data transfer requesting handling programming can cause incorrectly formatted responses, which the software won’t be able to use. 
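To make the format difference tangible, the short C++ sketch below simply prints a hypothetical SOAP request body next to the equivalent REST request line and JSON response. The operation, endpoint and field names are invented; a real SOAP service defines its message shape in its WSDL, and a real REST client would send the request with an HTTP library rather than print it.

```cpp
#include <iostream>
#include <string>

int main() {
    // Hypothetical request: look up the balance of account 1001.
    // SOAP: the call is wrapped in an XML Envelope/Body.
    std::string soap =
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n"
        "  <soap:Body>\n"
        "    <GetBalance>\n"
        "      <AccountId>1001</AccountId>\n"
        "    </GetBalance>\n"
        "  </soap:Body>\n"
        "</soap:Envelope>\n";

    // REST: the same intent is usually just a URL, and the response can
    // come back as JSON, XML, or CSV depending on what the client asks for.
    std::string rest_request  = "GET /accounts/1001/balance HTTP/1.1";
    std::string json_response = "{ \"accountId\": 1001, \"balance\": 250.75 }";

    std::cout << "SOAP request body:\n" << soap << "\n";
    std::cout << "REST request line:  " << rest_request << "\n";
    std::cout << "REST JSON response: " << json_response << "\n";
}
```

Either style still has to be tested under load, which is where the next concern comes in.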
It’s extremely important that the API platform can handle all the concurrent users that will be accessing the services at the same time. Bottlenecks in the API can cause the service to respond slowly—and the negative effects can rebound in application functionality, website performance, and customer satisfaction. These problems can be compounded when it’s unclear which API endpoint is experiencing the problem. A service like Apica’s API testing platform can simulate SOAP and REST API users in the testing portal to make sure your implementation is efficient and able to handle the workload. If it isn’t, the service can pinpoint any problematic areas. Are You Ready to Achieve Peak Web and Mobile Performance? Start a 6 month full-featured trial of Apica LoadTest or a 30-day trial of Apica WPM. Start Your Free Trial!
<urn:uuid:d7e4522b-ce19-4b93-988a-b381978a30fd>
CC-MAIN-2017-09
https://www.apicasystem.com/blog/understanding-security-dependability-soap-rest/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00627-ip-10-171-10-108.ec2.internal.warc.gz
en
0.896201
870
3.203125
3
Disclaimer: No businesses or even the Internet were harmed while researching this post. We will explore how an attacker can control the Internet access of one or more ISPs or countries through ordinary routers and Internet modems. Cyber-attacks are hardly new in 2013. But what if an attack is both incredibly easy to construct and yet persistent enough to shut Internet services down for a few hours or even days? In this blog post we will talk about how easy it would be to enlist ordinary home Internet connections in this kind of attack, and then we suggest some potentially straightforward solutions to this problem. The first problem The Internet in the last 10 to 20 years has become the most pervasive way to communicate and share news on the planet. Today even people who are not at all technical and who do not love technology still use their computers and Internet-connected devices to share pictures, news, and almost everything else imaginable with friends and acquaintances. All of these computers and devices are connected via CPEs (customer premises equipment) such as routers, modems, and set-top boxes etc. that enable consumers to connect to the Internet. Although most people consider these CPEs to be little magic boxes that do not need any sort of provisioning, in fact these plug-and-play devices are a key, yet weak link behind many major attacks occurring across the web today. These little magic boxes come with some nifty default features: - Updateable firmware. - Default passwords. - Port forwarding. - Accessibility over http or telnet. The second problem All ISPs across the world share a common flaw. Look at the following screen shot and think about how one might leverage this flaw. Most ISPs that own one or more netblocks typically write meaningful descriptions that provide some insight into what they are used for. Attack phase 1 So what could an attacker do with this data? - They can gather a lot of information about netblocks for one or more ISPs and even countries and some information about their use from http://bgp.he.net and http://ipinfodb.com. - Next, they can use whois or parse bgp.he.net to search for additional information about these netblocks, such as data about ADSL, DSL, Wi-Fi, Internet users, and so on. - Finally, the attacker can convert the matched netblocks into IP addresses. At this point the attacker could have: - Identified netblocks for an entire ISP or country. - Pinpointed a lot of ADSL networks, so they have minimized the effort required to scan the entire Internet. With a database gathered and sorted by ISP and country an attacker can, if they wanted to, control a specific ISP or country. Next the attacker can test how many CPEs he can identify in a short space of time to see whether this attack would be worth pursuing: A few hours later the results are in: In this case the attacker has identified more than 400,000 CPEs that are potentially vulnerable to the simplest of attacks, which is to scan the CPEs using both telnet and http for their default passwords. We can illustrate the attacker’s plan with a simple diagram: Attack Phase 2 (Command Persistence) Widely available tools such as binwalk, firmware-mod-kit, and unix dd make it possible to modify firmware and gain persistent control over CPEs. And obtaining firmware for routers is relatively easy. There are several options: - The router is supported by dd-wrt (http://dd-wrt.com) - The attacker either works at an ISP or has a friend who works at an ISP and happens to have easy access to assorted firmware. 
- Search engines and dorks. As soon as the attacker is comfortable with his reverse-engineered and modified firmware he can categorize them by CPE model and match them to the realm received from the CPE under attack. In fact with a bit of tinkering an attacker can automate this process completely, including the ability to upload the new firmware to the CPEs he has targeted. Once installed, even a factory reset will not remove his control over that CPE. The firmware modifications that would be of value to attacker include but are not limited to the following: - Hardcoded DNS servers. - New IP table rules that work well on dd-wrt-supported CPEs. - Remove the Upload New Firmware page. CPE attack phase recap - An attacker gathers a country’s netblocks. - He filters ADSL networks. - He reverse engineers and modifies firmware. - He scans ranges and uploads the modified firmware to targeted CPEs. Follow-up attack scenarios If an attacker is able to successfully compromise a large number of CPEs with the relatively simple attack described above, what can he do for a follow-up? - ISP attack: Let’s say an ISP has a large number of IP addresses vulnerable to the CPE compromise attack and an attacker modifies the firmware settings on all the ADSL routers on one or more of the ISP's netblocks. Most ISP customers are not technical, so when their router is unable to connect to the Internet the first thing they will do is contact the ISP’s Call Center. Will the Call Center be able to handle the sudden spike in the number of calls? How many customers will be left on hold? And what if this happens every day for a week, two weeks or even a month? And if the firmware on these CPEs is unfixable through the Help Desk, they may have to replace all of the damaged CPEs, which becomes an extremely costly affair for the company. - Controlling traffic: If an attacker controls a huge number of CPEs and their DNS settings, being able to manipulate website traffic rankings will be quite trivial. The attacker can also redirect traffic that was supposed to go to a certain site or search engine to another site or search engine or anywhere else that comes to mind. (And as suggested before, the attacker can shut down the Internet for all of these users for a very long time.) - Company reputations: An attacker can post: - False news on cloned websites. - A fake marketing campaign on an organization's website. - Make money: An attacker can redirect all traffic from the compromised CPEs to his ads and make money from the resulting impressions. An attacker can also add auto-clickers to his attack to further enhance his revenue potential. - Exposing machines behind NAT: An attacker can take it a step further by using port forwarding to expose all PCs behind a router, which would further increase the attack's potential impact from CPEs to the computers connected to those CPEs. - Launch DDoS attacks: Since the attacker can control traffic from thousands of CPEs to the Internet he can direct large amounts of traffic at a desired victim as part of a DDoS attack. - Attack ISP service management engines, Radius, and LDAP: Every time a CPE is restarted a new session is requested; if an attacker can harvest enough of an ISP’s CPEs he can cause Radius, LDAP and other ISP services to fail. - Disconnect a country from the Internet: If a country's ISPs do not protect against the kind of attack we have described an entire country could be disconnected from the Internet until the problem is resolved. 
- Stealing credentials: This is nothing new. If DNS records are totally in the control of an attacker, they can clone a few key social networking or banking sites and from there they could steal all the credentials he or she wants. In the end it would be almost impossible to take back control of all the CPEs that were compromised through the attack strategies described above. The only way an ISP could recover from this kind of incident would be to make all their subscribers buy new modems or routers, or alternatively provide them with new ones. There are two solutions to this problem. This involves fixes from CPE vendors and also from the ISPs. Vendors should stop releasing CPEs that have only rudimentary and superficial default passwords. When a router is being installed on a user’s premises the user should be required to change the administrator password to a random value before the CPE becomes fully functional. Let's look at a normal flow of how a user receives his IP address from an ISP: - The subscriber turns on his home router or modem, which sends an authentication request to the ISP. - ISP network devices handle the request and forwards it to Radius to check the authentication data. - The Radius Server sends Access-Accept or Access-Reject messages back to the network device. - If the Access-Accept message is valid, DHCP assigns an IP to the subscriber and the subscriber is now able to access the Internet. However, this is how we think this process should change: - Before the subscriber receives an IP from DHCP the ISP should check the settings on the CPE. - If the router or modem is using the default settings, the ISP should continue to block the subscriber from accessing the Internet. Instead of allowing access, the ISP should redirect the subscriber to a web page with a message “You May Be At Risk: Consult your manual and update your device or call our help desk to assist you.” - Another way of doing this on the ISP side is to deny access from the Broadband Remote Access Server (BRAS) routers that are at the customer’s edge; an ACL could deny some incoming ports, but not limited to 80,443,23,21,8000,8080, and so on. - ISPs on international gateways should deny access to the above ports from the Internet to their ADSL ranges. ISPs should be detecting these types of attacks. Rather than placing sensors all over the ISP's network, the simplest way to detect attacks and grab evidence is to lure such attackers into a honeypot. Sensors would be a waste of money and require too much administrative overhead, so let’s focus on one server: 1- Take a few unused ADSL subnets /24 x.x.y.0/24 and so on 2- Configure the server to be a sensor: A simple telnet server and a simple Apache server with htpasswd setup as admin admin on the web server’s root directory would suffice. 3- On the router that sits before the servers configure static routes with settings that look something like this: route x.x.x.0/24 next-hop <server-ip>; route x.x.y.0/24 next-hop <server-ip>; route x.x.z.0/24 next-hop <server-ip>; 4- After that you should redistribute your static routes to advertise them on BGP so when anyone scans or connects to any IP in the above subnets they will be redirected to your server. 
5- On the server side (it should probably be a linux server) the following could be applied (in the iptables rules): iptables -t nat -A PREROUTING -p tcp -d x.x.x.0/24 --dport 23 -j DNAT --to <server-ip>:24 iptables -t nat -A PREROUTING -p tcp -d x.x.y.0/24 --dport 23 -j DNAT --to <server-ip>:24 iptables -t nat -A PREROUTING -p tcp -d x.x.z.0/24 --dport 23 -j DNAT --to <server-ip>:24 6- Configure the server's honeypot to listen on port 24; the logs can be set to track the x.x.(x|y|z).0/24 subnets instead of the server. The Internet is about traffic from a source to a destination and most of it is generated by users. If users cannot reach their destination then the Internet is useless. ISPs should make sure that end users are secure and users should demand ISPs to implement rules to keep them secure. At the same time, vendors should come up with a way to securely provision their CPEs before they are connected to the Internet by forcing owners to create non-dictionary/random usernames or passwords for these devices.
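To complement the honeypot configuration described in the detection section above, the following is a minimal, illustrative Python listener that could sit behind the iptables redirect on port 24 and record each connection attempt against the unused ADSL subnets. It is not part of the original setup described in this post: the port number matches the DNAT rules above, but the log path and banner text are assumptions, binding a port below 1024 requires root, and a production honeypot would also need rate limiting and log rotation.
import socket
import datetime

LISTEN_PORT = 24                          # matches the DNAT target port in the iptables rules above
LOG_FILE = "/var/log/cpe-honeypot.log"    # hypothetical log location

def log_attempt(peer, data):
    # Record the scanner's source address and whatever it sent first
    # (frequently a default username/password guess).
    with open(LOG_FILE, "a") as fh:
        fh.write("%s %s:%d %r\n" % (datetime.datetime.utcnow().isoformat(), peer[0], peer[1], data))

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen(64)
    while True:
        conn, peer = srv.accept()
        try:
            conn.settimeout(10)
            conn.sendall(b"login: ")       # imitate a CPE telnet prompt
            log_attempt(peer, conn.recv(256))
        except socket.error:
            pass
        finally:
            conn.close()

if __name__ == "__main__":
    main()
Because the static routes and DNAT rules funnel entire unused /24s to this one process, the source addresses in the log are the scanners themselves, which is exactly the evidence the detection approach above is meant to collect.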
<urn:uuid:20e938e0-dfa5-477f-a12b-11c8e7e01256>
CC-MAIN-2017-09
http://blog.ioactive.com/2013/03/behind-adsl-lines-how-to-bankrupt-isps.html?m=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00151-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922556
2,599
2.90625
3
It looks like a bull, trots at the speed of a wolf and carries equipment like a pack mule, but does it have a place on the battlefield of the future? Researchers in the U.S. are conducting a two-year study of a robot that promises to lighten the load that soldiers must carry and they gave it a high-profile demonstration in September. The four-legged robot, developed by the U.S. government-funded Defense Advanced Research Projects Agency (DARPA) and Boston Dynamics, is part of DARPA's Legged Squad Support System (LS3) program, and is packed with technology. It's a development on Big Dog, a robot platform developed by Boston Dynamics several years ago. As warfare gets more high-tech, soldiers are being asked to carry more gear -- as much as 45 kilograms, according to the U.S. military -- and that can slow them down, bring on injuries or hasten the onset of fatigue. So the U.S. Army and DARPA have made physical overburden an important focus of their technology research. The new robot walks on four legs and has a fast-reacting balance system that means it won't fall over if shoved from one side -- something that most robots can't handle. If it does somehow fall, it's capable of righting itself. There are also "eyes" at the front, actually electronic sensors that constantly scan the surroundings. A two-year test of the robot began in July and, if all goes well, will culminate with models of the robot taking part in a battlefield exercise alongside soldiers. Before that happens, researchers want to perfect three distinct autonomous modes: "leader-follower tight" in which the LS3 follows as close as possible to the path of a human leader; "leader-follower corridor" in which the robot follows a leader but has the ability to decide its own path; and "go-to-waypoint" where it makes its own way to a GPS coordinate using sensors to avoid obstacles. The robot is powered by a gasoline engine, which brings advantages -- plans call for it to be able to carry 180 kilograms on a 30+ kilometer hike over 24 hours -- but also means its noisy. Early prototypes were so loud it wasn't possible to have a conversation nearby but that is slowly changing. The latest version, demonstrated a few weeks ago, makes a tenth of the noise. The demonstration, at Joint Base Myer-Henderson Hall in Virginia, gave General James Amos [cq], commandant of the U.S. Marine Corps., and Arati Prabhakar [cq], director of DARPA, a close-up look at the robot. "For me, to see where it's gone just in the last four years and where it was with Big Dog, which was fascinating, you had to have a leap of imagination to know that we would get there eventually. We're getting close. Very, very close," Amos said, according to a story about the demonstration on the U.S. Army's web site. During the test it was controlled with the Tactical Robot Controller (TRC), a handheld touchscreen controller that can operate many of the robotic platforms used by the U.S. military including the TALON, Dragon Runner, Robotic Bobcat, Raider and MAARS robots. In the future, developers want to add voice-recognition to the robot so soldiers will be able to command it to do things by voice alone, DARPA said. And in addition to hauling equipment, the generator in the robot can also be used to recharge of power equipment when needed. Tests of the robot are scheduled to take place approximately every quarter between now and the end of the research program. In December this year it will take part in its first test with the Marine Corps Warfighting Laboratory (MCWL) at a U.S. 
base location yet to be disclosed. Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn's e-mail address is [email protected]
<urn:uuid:037d7428-e51f-4287-83f4-1b9f9fca685a>
CC-MAIN-2017-09
http://www.itworld.com/article/2721760/hardware/darpa-begins-testing-robotic-mule-for-battlefields.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00327-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964388
870
2.921875
3
A new security tool developed by Department of Energy engineers is designed to give security and IT administrators the ability to more quickly identify and respond to an issue on the network. Hone is the brainchild of Glenn Fink, a senior research scientist with the Secure Cyber Systems Group at the DOE's Pacific Northwest National Laboratory (PNNL) in Richland, Wash. Hone is what Fink calls a cyber-sensor that essentially discovers and monitors the relationship between network activity on a computer and the applications--such as Microsoft's Internet Explorer--and processes running on it. By providing greater visibility into those relationships, IT professionals will be able to more quickly understand and deal with cyber-attacks. In addition, IT administrators can use the tool for a host of network- and security-related tasks, according to Fink. In developing Hone, he said he wanted to help people see what's on their networks. "I want people to understand what's really happening on these very complex machines," Fink said in an interview with eWEEK. He initially created the framework of what would become Hone as a postdoctoral researcher at Virginia Tech. Fink said he saw what visualization technology was doing elsewhere, and asked why people didn't use it in security. Such deep visualization into the system and the network would be hugely beneficial to security administrators, he said. "This was the hammer to hit their nail," he said. Fink took his ideas with him when he went to work for PNNL, where he was able to secure the internal funding and collaboration needed to get going on work on what eventually turned out to be Hone. "It's really easy to get people to say, 'Yeah, that's cool,'" he said. "It's another thing to get people to say, 'And here's the money.'" The problem is what he sees as an inefficient way of dealing with security issues. Right now, security and system administrators spend much of their time searching for unusual patterns in communications between computer systems and the network, Fink said. The problem is that once such a pattern is found, there's nothing to say which program is doing the communicating, so the administrators closely watch the system hoping to see the program act again so they can get a better read on the situation. However, Fink said, they may never see the dangerous program again. Hone, by contrast, creates an ongoing record of the communication, not only showing the communications between systems on a network, but also which specific programs--including Web browsers, system updates and malicious programs--are involved in the communication.
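Hone itself is a purpose-built, kernel-assisted sensor, but the core idea — tying a network connection back to the program that owns it — can be illustrated with a short sketch in Python using the cross-platform psutil library. This is only an approximation of the concept and not Hone's implementation: it polls a point-in-time snapshot rather than keeping an ongoing record, it has no kernel-level visibility, and on some platforms listing other processes' connections requires administrative privileges.
import psutil

def connections_by_program():
    # Map each connection with a remote endpoint to the process that owns it.
    results = []
    for conn in psutil.net_connections(kind="inet"):
        if not conn.raddr or conn.pid is None:      # skip listening sockets and unowned entries
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        results.append((name, conn.pid,
                        "%s:%s" % (conn.laddr[0], conn.laddr[1]),
                        "%s:%s" % (conn.raddr[0], conn.raddr[1])))
    return results

if __name__ == "__main__":
    for name, pid, local, remote in sorted(connections_by_program()):
        print("%-20s pid=%-6d %s -> %s" % (name, pid, local, remote))
Run repeatedly, even a crude poll like this makes the article's point concrete: the interesting question is rarely "which hosts are talking?" but "which program on this host is doing the talking?"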
<urn:uuid:c1655939-40cb-4d0c-88dd-95984b5ca1e4>
CC-MAIN-2017-09
http://www.cioinsight.com/security/new-security-tool-helps-admins-beat-cyber-attacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00027-ip-10-171-10-108.ec2.internal.warc.gz
en
0.976076
532
2.84375
3
In-flight cellular in the U.S. may be closer to reality than some consumers realize, with foreign airlines poised to extend services they already offer elsewhere. But evidence from overseas suggests the odds of being trapped next to a chronic caller are slim. At its monthly open meeting Thursday, the U.S. Federal Communications Commission will discuss whether to issue a proposal for legalizing small cellular base stations on airliners. Such a plan would be subject to public comment and wouldn't take effect until well into next year at the earliest. If enacted, it would end a decades-long ban that has preserved airline cabins as rare cell-free spaces. Yet based on reports from overseas, calls in the cabin might prove to be rare and brief. Actually allowing voice calls in flight would be up to the airlines, and most U.S. airlines seem unlikely to do so, let alone invest in equipment to carry cellular voice or data. Most have Wi-Fi for data already. But foreign airlines that already offer cell service elsewhere could probably start allowing calls over U.S. airspace fairly soon if an FCC rule change takes place. Another potential hurdle to such services arose on Thursday, shortly before the FCC's meeting. The U.S. Department of Transportation said it would consider its own ban on in-flight calls, looking at whether allowing them would be "fair to consumers," according to a statement by Transportation Secretary Anthony Foxx. If approved, that rule would mean that any cellular systems the FCC allowed could only be used for data and texting. Pulling out a cellphone in the air and dialing up family and friends is already legal in many places outside the U.S. British Airways, Singapore Airlines, Air France, KLM, Emirates, Aeroflot, Virgin Atlantic and other airlines offer cellular service, though some, including Lufthansa and Aer Lingus, prohibit voice calls. These in-flight services go through small, specialized cellular base stations installed on planes, which talk to the main cell network via satellites. The so-called picocells prevent interference with cell towers on the ground, which was the reason for the FCC's longtime ban. In-flight cellular services typically are billed as international roaming, which carries a stiff premium. Partly because of cost, even passengers who can make cell calls on planes don't do it that much, according to an FAA report issued in July 2012. The agency surveyed aviation authorities in other countries about in-flight cellular in the early days of such services. Based on the few responses it got, airliners weren't bursting into a cacophony of one-sided conversations. For example, France's aviation agency said about 2 percent of passengers used their phones for voice calls, while Jordan's said only about 10 percent of travelers used cellular at all. New Zealand authorities said there were 10 text messages sent for each minute of voice calling, and Brazil said an average of 0.3 passengers per flight leg made calls. The average length of those calls was 110 seconds. There is "a huge demand" for cellular service on planes, according to Kevin Rogers, CEO of service provider AeroMobile. But he said 10 percent of passengers connecting up is about average for a flight. And both Rogers and Ian Dawkins, CEO of rival provider OnAir, press the point that voice calling is only a small part of that use. On an average flight of an AeroMobile-equipped plane, there are five or six phone calls with a typical duration of 90 seconds to two minutes, Rogers said. 
More than 80 percent of those on the AeroMobile system use only text or data, he said. For OnAir, about 60 percent of activity is data use, 20 percent is texting and 10 percent is voice. The cabin crew can turn off the voice capability of OnAir's system during quiet times, such as when most passengers are sleeping, Dawkins said in an email message. He claimed there has not been a single complaint about voice calls in the six years that OnAir Mobile has been operating. Despite such assurances, U.S. airlines have shown little interest in voice calling, citing passengers' preferences. Even though VoIP (voice over Internet Protocol) services such as Skype could run over the in-flight Wi-Fi widely available on U.S. airlines, none currently allow voice calls. However, the first cellular calls in the air could come from another direction. When foreign airlines fly cell-equipped planes over the U.S., they have to turn off the service even if it's available for the rest of the international journey. For example, passengers on a flight from London to Washington, D.C., are advised that their cellular voice and data service will be cut off for the last two to three hours of the trip as the plane flies over the U.S. East Coast, AeroMobile's Rogers said. "It's frustrating for the passengers, it's frustrating for the cabin crew, and it's frustrating for us," Rogers said. OnAir's Dawkins takes a similar view. If the FCC rule change is approved and does what it seems to propose, those foreign carriers will probably look to extend passengers' cell privileges to the U.S. portion of their flights as soon as possible, Rogers and Dawkins believe. The Federal Aviation Administration has already approved onboard picocells as safe for flying, and AeroMobile has permission to use its satellite spectrum in the U.S., Rogers said. Virgin Atlantic, an AeroMobile customer, offers cellular on 17 of its planes and would like to be able to extend that service to the U.S., according to spokeswoman Olivia Gall. Not surprisingly, Rogers thinks U.S. airlines also will adopt cellular. OnAir said it is in active discussions with several U.S. airlines. U.S. airlines' deployments may come in two phases, first on long-haul international flights and later on domestic trips, Rogers said. "Their international competitors are taking the view, almost without exception, that they want to put onboard full connectivity ... both Wi-Fi and mobile phone," Rogers said. While some passengers are willing to sign up and pay for Wi-Fi, others want the simplicity of simply turning on their phones and using them as usual, paying the cost as part of their regular monthly phone bill, he said. Customers of at least one U.S. mobile operator can already use their phones in the air when traveling overseas where the service is allowed. AeroMobile has an international roaming agreement with AT&T. The carrier's discount packages for roaming in some foreign countries don't cover AeroMobile service, so voice calls cost $2.50 per minute, data costs $0.0195 per KB, and sending a text message costs $0.50. Even within the U.S., it's likely that cell use on airliners would be treated as international roaming. That could make U.S. mobile operators wary about supporting cellular service on domestic flights, said analyst Roger Entner of Recon Analytics. Consumers in Europe and Asia are more savvy about international roaming than most passengers on domestic U.S. 
flights, he said, so phone companies would have to be prepared to either warn subscribers carefully or field a lot of complaints. "It's very difficult to explain to the average American that the moment they step on a plane, they're in a different country," Entner said. "I don't think the carrier will want to have any part of that. It sounds like a customer service nightmare."
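For a rough sense of what those roaming rates mean in practice, the short calculation below applies the AT&T/AeroMobile prices quoted above ($2.50 per minute of voice, $0.0195 per KB of data, $0.50 per text) to a hypothetical amount of in-flight usage. The usage figures are invented for illustration only.
# Illustrative in-flight roaming bill using the rates quoted in this article.
VOICE_PER_MIN = 2.50      # USD per minute of voice
DATA_PER_KB = 0.0195      # USD per kilobyte of data
TEXT_EACH = 0.50          # USD per text message

def roaming_bill(voice_minutes, data_kb, texts):
    return voice_minutes * VOICE_PER_MIN + data_kb * DATA_PER_KB + texts * TEXT_EACH

# Hypothetical usage: a two-minute call, about 5 MB of e-mail sync, and four texts.
total = roaming_bill(voice_minutes=2, data_kb=5 * 1024, texts=4)
print("Estimated charge: $%.2f" % total)   # roughly $106.84
Numbers like these help explain both findings reported above: passengers clearly value having the connection, yet actual voice use stays brief.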
<urn:uuid:3ae4b41b-dbdd-49bb-ac47-21d930fc3048>
CC-MAIN-2017-09
http://www.computerworld.com/article/2486828/mobile-apps/cell-phones-on-planes-may-be-heading-for-the-us--but-will-anyone-use-them-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00147-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961792
1,534
2.625
3
DB2 and IBM's Processor Value Unit pricing How we got here... Prior to the introduction of dual-core chips, there wasn't much question what a processor was since all processors were single-core processors. This made processor-based licensing simple to understand and manage: if you had a four processor server, then you needed four software license entitlements. In October 2000, IBM introduced a new line of highly scalable dual-core microprocessors with their POWER processor (the POWER4 series was the first generation of dual-core processors released by IBM; this generation was replaced by POWER5, followed by POWER6). Today, these chips are available with the POWER (formerly known as System i and System p) server family from IBM. These POWER chips changed the technology landscape by implementing two cores (the ability to have two separate circuitry units executing compute instructions) on a single silicon chip (something that plugs into a server motherboard's package). An example of a multi-core processor (in this case it's a dual-core processor) is shown below: Figure 1. A dual-core processor The following figure compares a single core processor to a dual core processor: Figure 2. A comparison of a single core processor to a dual-core processor The advantage of dual-core chips is that they can be used to drive performance and reduce power requirements as well as the heat generated by higher and higher clock speeds of their single-core counterparts. Dual-core chips are denser as well, which means their physical requirements are more along the lines of their single-core cousins since they share the same form factor. Dual-core chips would also appear to be more environmentally friendly in that they consume less electricity and have lower heating, ventilation, and air conditioning (HVAC) requirements. Finally, the proximity of cached memory can yield performance advantages too, and more and more software companies are looking to specifically write their applications to leverage different processor cores for different application operations. Today, my feeling is that most applications still aren't written to fully utilize cores from an optimization perspective; however, all applications definitely take advantage of the excess processing capacity, and this is definitely changing. One good example of true core exploitation is the IBM DataPower appliance, which has specific cores that are optimized for specific functions such as XML parsing and validation. These functions exclusively run on specific cores. In the case of the first generation POWER dual-core chips, each core was faster than its single-core predecessors; by putting two of these faster cores on the same chip, customers saw their performance improve more than two times for these new "processors" (the physical chip that plugged into the socket/package on the motherboard that now had two traditional processing cores on it as opposed to one). At this point, IBM Software Group (IBM SWG) determined that a processor should be defined as a core (so a dual-core processor meant two processors from a licensing perspective) since users were getting the full benefit of both cores on the chip. Soon after, other vendors followed suit by releasing dual-core processors of their own, such as Sun, Intel, and AMD; since then, we've seen a plethora of multi-core chips (quad-core, hexa-core, octi-core, and so on), each designed for various characteristics: price/performance, electrical consumption, performance, and so on.
Some hardware vendors reference the pieces of silicon that plug into the motherboard as a processor no matter how many cores are on it. You can see the marketing advantage here. One vendor may refer to a server with eight processing cores as an eight-way server while another would call it a four-way (since it has four dual-core processors, each processor with two processing cores). Various benchmarking organizations have adopted policies to limit this type of confusion and have implemented rules with respect to the reporting of server capacity, such that you typically see benchmark results now reported as Processors/Cores/Threads. For example, a 32/64/128 benchmark report means that the result was attained on a server with 32 dual core processors with some sort of hyperthreading enabled (32 sockets * 2 cores per socket * 2 threads per core = 128 threads). In an effort to promote clear comparisons, the Standard Performance Evaluation Corporation (SPEC) mandates that all results be reported in cores, and the Transaction Processing Performance Council (TPC) also mandates the reporting of Processors/Cores/Threads in their disclosure reports. This is something to keep in mind when comparing the performance of servers; ensure you're comparing cores to cores, and I recommend you ask for the specific number of cores on the machine or for reports that conform to the SPEC mandate to ensure accurate comparisons between any vendors. In the past, to license IBM software running on a dual-core chip, customers counted the number of cores and multiplied by the per-processor price. If a customer had what was traditionally known as a "four-way" dual-core server (4 sockets each with 2 cores = 8 cores), they were required to license 8 DB2 processors. However, when x86 dual-core chips were first introduced from Intel and AMD, they didn't double (or more than double, as was the case with their IBM POWER counterparts) the performance of the server. While these new dual core commodity processors provided some performance improvements by placing two cores on a chip, these architectures just didn't yield a 2+ times speedup over their single-core counterparts. On a personal note, my own non-validated, unregulated research showed the speedup for these first generation x86 dual core processors to be around 30% (give or take); certainly not double, as was the case with the POWER-based processors. Over time, those now obsolete first-generation dual core processors did yield better results than their initially demonstrated performance. Now mix this with the evolution of the processor market, where certain processors are architected to be more energy efficient than pure performance play cores, and you can see how scrambled the simplistic per core pricing becomes. In some cases, you could pay more and not get more performance, and in other cases pay more and get proportionally greater performance. As a result, the market seemed to conclude that not all multi-core chips are created equal and therefore shouldn't be licensed the same way. For example, the first generation x86 quad-core processors that hit the market soon after the dual core craze didn't double the performance of their predecessors (at first, a single quad core processor could not outperform two dual core processors; however, this too changed, or will change, over time).
Because of these core discrepancies, IBM announced a policy in 2005 that started to treat the value a core provides to the software (for example, speedup) as a key ingredient of the price of the software. For example, since the original x86 dual core processors didn't give the same performance as the original dual core POWER processors, when licensing x86 dual-core chips, you were effectively required to buy one-half of a license for each processor core (essentially one processor license per socket). Of course, since the dual core POWER processors more than doubled the performance of their single core counterparts when they were first introduced, you had to buy two processor licenses per socket: one per core. Note: When Intel introduced hyperthreading, which tricked software into thinking there were two processors for each single-core processor by pausing and inter-weaving scheduled worker threads, the yield was around a 30% performance benefit; IBM never charged for these virtual processors because they didn't produce twice the performance. As of DB2 9.7, you still don't license for hyperthreading with x86 chipsets or simultaneous multithreading (SMT) with POWER chipsets, and so on. Further complicating the procurement of IT solutions at this point was the fact that most software vendors were charging for each core while most hardware vendors were charging for the chip (since they can't sell half a chip), leading to even greater confusion in the marketplace. As if it weren't confusing enough, in the middle of 2006 (before the concept of PVU was announced) the multi-core marketplace became even more complex when Sun introduced the quad-core, hexa-core, and octi-core processors that shipped with their first generation T1000/T2000 "Niagara" servers. These processors were not really engineered for straight-out performance of database applications (this guidance came from Sun themselves) - though subsequent generations of this chipset have somewhat improved on this original positioning. In contrast, these original processors were focused on energy efficiency in relation to performance. The point is that today's marketplace is characterized by different kinds of multi-core processors that have different characteristics: some provide double the performance, some provide massive energy savings, some try to compromise between the two, some are engineered for laptops, and more. Indeed, the evolution of the processing core foreshadowed the need for something new. Looking ahead to more multi-core processors from different vendors with different performance characteristics and value propositions, it seemed that there had to be a way to normalize these discrepancies. After all, it wouldn't seem fair to buy two licenses for a dual-core processor if it only yielded a 30% performance improvement. At the same time, it wouldn't seem fair to pay for only one license for a dual-core processor that yields a 200% (or more) performance improvement. The PVU methodology is used to standardize the value derived from a varying array of processor architectures; it attempts to normalize these benefits and equate them to the software you are running to come up with a fair charge metric. So now what? The Processor Value Unit (PVU) In the third quarter of 2006, IBM software group (SWG) announced a replacement for processor core-based pricing and introduced the concept of PVU pricing.
Since this announcement, you now interpret license limits for distributed DB2 servers and procure entitlements using this new PVU metric (unless you are using one of the new licensing methods introduced for DB2 Express and DB2 Workgroup servers in DB2 9.7, or the Authorized User license - more on those in a bit). For example, today, when you want to license your DB2 server for unlimited users, you arrive at the final DB2 price by aggregating the total amount of PVUs on the server (often referred to as the server's PVU rating) where the DB2 software is running and multiplying that by the per-PVU price of the DB2 edition (and/or Feature Pack) that you are running. Since, as of February 10th, 2009, all DB2 servers support sub-capacity licensing, you can use the PVU rating of a virtualization session to arrive at the DB2 license costs of that session. (By the way, in the event that the total number of virtual PVUs exceeds the total PVU rating of the entire physical server, you simply license the PVU rating of the server.) For example, let's assume you want to license your DB2 software using the PVU model and your server was rated at 400 PVUs; if you installed DB2 in 8 VMWare sessions each configured to use 100 PVUs, you would only license 400 PVUs of DB2, not 800 PVUs. There are some exceptions to this rule depending on the DB2 licensing metric you use. For example, if you licensed DB2 Express with the new SERVER license (which was introduced in DB2 9.7) in these 8 VMWare sessions, you would have to buy 8 DB2 Express SERVER licenses. A PVU is simply a unit of measure used to differentiate processing cores from the various hardware vendors. IBM now maintains the PVU Licensing for Distributed Software table, which maps today's available processors to the number of PVUs associated with a single respective processing core. This table is continually updated as manufacturers go to market with different chip architectures. In fact, IBM SWG works so closely with these chip vendors that they are well aware of the performance benefits of many of the new processors before the market itself. You can determine the total PVU rating for your server or virtualization session by adding up the PVUs attributed to each core using this table. Once you've determined the number of PVUs, multiply the total amount of PVUs for your server or virtualized session by the DB2 per-PVU price for the edition or Feature Pack you want to license. A PVU Licensing Example To understand how traditional processor pricing equates to PVU pricing, it's best to compare and contrast the two models. As of the date this article was last updated, conversion rates for the industry's most popular processing cores to PVUs are as shown in Figure 3. Figure 3. Table of Processor Value Units (PVU) per core Note: This table is provided for your convenience and to facilitate a working example; for the most up-to-date PVU conversion rates, refer to the previous table online. You can see this table is organized by the per core PVU ratings for each vendor and their respective processor architecture. Simply locate the brand of your processor and the respective number of cores on the processor, and arrive at the PVU rating for each core using the PVUs per Processor Core column. For example, using this table you can see that almost any single core processor from any vendor requires 100 PVUs (the exceptions are the IBM PowerXCell 8i and the IBM Cell/B.E. single core processors).
If you look at the Intel Xeon x86 multi core processor, you can see that you can buy this processor architecture in a dual-core, quad-core, or hexa-core format. If a single socket was occupied by a dual core version of this processor, it would be rated at 100 PVUs (2 cores x 50 PVUs). If you plugged the quad-core version of this processor into a server, it would be rated at 200 PVUs (4 cores x 50 PVUs). Finally, if you plugged the latest hexa-core version into a server, that socket would be rated at 300 PVUs (6 cores x 50 PVUs). You can also see that a System z10 processor (used to run Linux on a System z server) is rated at 120 PVUs per core for the Integrated Facility for Linux (IFL) engine it runs. So how do we explicitly figure out how many PVUs of a DB2 edition or Feature Pack we have to buy? From this table we can determine that a single dual-core AMD Opteron processor requires 100 PVUs (50 PVUs per core x 2 cores = 100 PVUs per socket). Let's assume your server has 2 dual-core AMD processors; in this case, you would be required to buy 200 PVUs of your DB2 software: (2 sockets x (50 PVUs x 2 cores)) = 200 PVUs. Now let's assume in this example you were using the faster-performing dual-core POWER6 processor, which yields far more performance than an AMD-based dual-core processor. If you have a POWER server with 2 dual core POWER6 processors, you are required to purchase 480 PVUs of DB2: (2 sockets x (120 PVUs x 2 cores)) = 480 PVUs. Note: Some POWER servers use POWER6 processors that are rated at 80 PVUs. Specifically, the POWER p520 and JS12, JS22, JS23, and JS43 server models use specially-rated POWER6 processors, each rated at 80 PVUs per core. All servers using POWER6 processors other than those listed online (where the most up-to-date version of this table resides) are rated at 120 PVUs per core. PVUs are used for licensing restrictions as well. This means that if a DB2 edition is limited to 200 PVUs, like DB2 Express with a PVU license, then from the table referenced above you can deduce that you could install the DB2 Express code on any server that does not exceed a 200 PVU rating. The DB2 9.7 release introduced the FTL and SERVER metrics for DB2 Express 9.7 (more on that in a bit), which allow you to bypass the PVU rating of the server; however, you need to be aware of trade-offs that surround this model. Likewise, the DB2 9.7 release introduces a new SOCKET metric for DB2 Workgroup servers, which allows you to install DB2 on servers larger than the 480 PVU restriction since it ignores the PVU count of a server and licenses the software at the socket level (more on that in a bit too). For example, in DB2 9.7, using a DB2 Workgroup SOCKET license you can actually install this DB2 product on a server rated as high as 960 PVUs, based on the servers available at the time this article was written. Ignoring some of the new licensing options introduced in DB2 9.7 (for the purposes of illustrating the PVU license - the point of this article), you can see that you can install DB2 Workgroup using a PVU license on: - A server with four dual-core x86 AMD-based Opteron chips - A server with a single quad-core x86 AMD-based Opteron chip - A server with two dual-core POWER6 chips - A server with two quad-core POWER5 QCM chips - A server with a four UltraSparcT1 quad-core chip - and more...look at the table! If you need some help identifying what kind of cores are in what server, IBM publishes a guide to help you identify the processor family correlated to your server.
For example, if you have an IBM System x server, you can quickly find out the type of processing core used with this server, how many cores occupy each socket, and the PVU rating per core, as shown in the following table. Note: This is a sample for illustrative purposes only. Please visit the referenced Web site for actual values and calculations. Table 1. Quickly finding details on your server using the IBM guide to identifying your processor family
|Server Model Number||Processor Vendor/Brand||Processor Type||Processor Model Number*||PVUs per Core**|
|3105||AMD Opteron/Athlon||SC/DC||All existing||100/50|
|3200||Intel Xeon||DC/QC||3000 to 3499||50|
|3200 M2||Intel Xeon||DC/QC||3000 to 3499||50|
|3250||Intel Pentium / Xeon||SC/DC||All existing||100/50|
|3250 M2||Intel Xeon||DC/QC||3000 to 3499||50|
|3350||Intel Xeon||DC/QC||3000 to 3499||50|
|3400||Intel Xeon||DC/QC||5000 to 5499||50|
|3400 M2||Intel Xeon||DC/QC||5500 to 5599||70|
|3450||Intel Xeon||DC/QC||5000 to 5499||50|
|3455||AMD Opteron||DC/QC||All existing||50|
|3500||Intel Xeon||DC/QC||5000 to 5499||50|
|3500||Intel Xeon||DC/QC||5500 to 5599||70|
To further assist you in determining the number of PVUs your server may be rated at, you can use the IBM Processor Value Unit Calculator to determine the overall PVU rating of your server using a useful wizard. An example of this tool is shown below: Figure 4. An example of using the IBM Value Unit Calculator With all of the power available in today's processor cores, clients increasingly want to partition their systems such that an application is limited to only a subset of the total processing power of a system. As of February 10th, 2009, sub-capacity pricing is supported for all DB2 servers (there are some potential requirements you need to follow that correlate to your server's operating system to enable this kind of licensing). In this situation, you are required to license the PVU rating of the partition where the DB2 software runs. Consider the following: Figure 5. An example of sub-capacity licensing a DB2 server The server in Figure 5 is an IBM POWER6 server with a total of four processing cores (it has two dual-core chips). If you wanted to run WebSphere Application Server on three of those cores and DB2 Enterprise on the one remaining core, you could license DB2 Enterprise for the number of PVUs for that single core (in this case 120 PVUs) using POWER virtualization technologies. Under the traditional full capacity licensing model, you would be required to have 480 PVUs of DB2 (since each POWER6 core is rated at 120 PVUs), but you just need to license it for 120 PVUs in this example. Assume for a moment that Figure 6 illustrated an AMD-based dual-core machine. In this case, you will be required to buy only 50 PVUs because DB2 is only running on a single core. In the previous licensing model, you would have been required to purchase a full processor, because you always round a fraction up when licensing (the 1/2 processor here would be rounded up to a full processor license). This is a good example of a potential benefit of the PVU paradigm. As you can imagine, as the number of cores per chip continues to increase with quad-core, octi-core, and so on, sub-capacity pricing could have become more disadvantageous for clients without this kind of model. Note: There are other specifics with respect to the edition and license you choose when it comes to licensing DB2 editions in a virtualized environment. Please refer to my other licensing and packaging articles for all the details.
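Because the arithmetic above comes up constantly, it can help to see it written out as a tiny script. The sketch below is not an IBM tool — the official PVU table and the IBM Processor Value Unit Calculator remain the authoritative sources — and the per-core ratings are simply the sample values used in this article; the sub-capacity rule is the simplified one described above (license the virtualized PVUs, capped at the full-capacity rating of the physical server), ignoring edition-specific exceptions such as the DB2 Express SERVER license.
# Sample per-core PVU ratings taken from the examples in this article;
# always confirm against the current IBM PVU table before licensing.
PVUS_PER_CORE = {
    "AMD Opteron dual-core": 50,
    "Intel Xeon quad-core": 50,
    "POWER6 dual-core": 120,
}

def full_capacity_pvus(sockets, cores_per_socket, pvus_per_core):
    # Full-capacity rating: every core in every socket is counted.
    return sockets * cores_per_socket * pvus_per_core

def licensed_pvus(virtualized_pvus, server_rating):
    # Simplified sub-capacity rule: license what the partitions use,
    # but never more than the physical server's full rating.
    return min(virtualized_pvus, server_rating)

# Two dual-core AMD Opteron sockets -> 200 PVUs, as in the example above.
amd_server = full_capacity_pvus(2, 2, PVUS_PER_CORE["AMD Opteron dual-core"])

# Two dual-core POWER6 sockets -> 480 PVUs.
power_server = full_capacity_pvus(2, 2, PVUS_PER_CORE["POWER6 dual-core"])

# Eight virtual machines of 100 PVUs each on a 400 PVU server -> 400 PVUs licensed.
print(amd_server, power_server, licensed_pvus(8 * 100, 400))   # 200 480 400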
DB2 Pricing 101 Since other licensing options beyond the PVU option were referenced in this article, I felt it would be beneficial to give you a very quick introduction to the different licensing options available with the different DB2 editions as of DB2 9.7. Quite simply, the best way to ensure you understand what PVUs are, and subsequently understand PVU pricing, is with a little background on DB2 pricing in general. Note: In this section, the term server represents either the physical server where the DB2 software is running, or an IBM price-supported virtualization session (such as VMWare, LPAR, and so on) unless otherwise noted. Essentially, DB2 provides the following licensing methodologies: - A per server approach used by the FTL or SERVER pricing model The FTL and SERVER licenses are only available with DB2 Express and were just introduced in the DB2 9.7 release. It's outside the scope of this article to delve into the details of these licenses (since this article's purpose is to explain the PVU license); however, suffice it to say that these licenses are counted per server. For example, if you had a 4 socket quad core server that was rated at 800 PVUs, you would just buy a single FTL or SERVER license for the physical server. (Note, however, that the FTL and SERVER licenses throttle the compute resources used by DB2 to 4 cores and 4 GB of RAM.) The FTL license gives you fixed-term usage rights for the DB2 Express software, while the SERVER license gives you perpetual rights to the software. Finally, an FTL or SERVER license supports an unlimited number of users accessing the DB2 software. The FTL and SERVER licenses are well suited for business partners and small-medium businesses (SMBs) because they are very simple to price. In some cases this license can be very advantageous, and in other cases it can be more costly than other licensing options. For example, even in a high availability environment with a warm or cold standby, you still purchase an FTL or SERVER license for each server. You can learn more about how to license any DB2 edition or package in a high availability environment by reading "Licensing distributed DB2 9.7 servers in a high availability environment" by Paul Zikopoulos. - A per server socket approach used by the SOCKET pricing model A SOCKET license is only available with DB2 Workgroup and was just introduced in the DB2 9.7 release. It's outside the scope of this article to delve into the details of this license (since this article's purpose is to explain the PVU license); however, suffice it to say that this license is counted per socket on the server. For example, if you had a 4 socket Intel-based quad core server that was rated at 800 PVUs, you would buy 4 DB2 Workgroup SOCKET licenses. A SOCKET license supports an unlimited number of users accessing the DB2 software. Finally, no matter how many cores reside on a socket, DB2 will throttle usage to a maximum of 16 cores based on the server's enumeration as defined in the BIOS. For example, if you had a 4 socket hexa core server, then DB2 would only schedule work on 16 of the available 24 cores. Your BIOS would determine whether it would schedule work on 4 cores of each socket, or fully saturate the 1st and 2nd sockets with work and schedule work on just the remaining 4 cores of the 3rd socket - leaving 1/3 of the 3rd socket and all of the 4th socket completely unutilized. The SOCKET license is well suited for those clients that require the highest performance possible for a mid-market-focused solution.
In some cases this license can be very advantageous, and in other cases it can be more limiting than other licensing options. Read "Which distributed edition of DB2 9.7 is right for you?" by Paul Zikopoulos for more information. - A per user approach, called the Authorized User (AU) model The authorized-user model, as its name would suggest, requires you to count the number of users that will be using DB2 and multiply that number by a per-user price to determine your license costs. This pricing model is especially attractive to companies that have few users in relation to large databases, or for environments that implement application software such as SAP, which is also licensed in this manner. In DB2, AU pricing is available for all DB2 editions, as well as the development-focused Database Enterprise Developer Edition (DEDE). With the exception of DEDE, all the DB2 editions come with a minimum number of authorized users that you must license to use this model. DB2 Express and DB2 Workgroup require you to minimally license 5 AUs. DB2 Enterprise servers need to be licensed for a minimum of 25 AUs for every 100 PVUs of the server upon which this edition is installed. For example, if you installed DB2 Enterprise on a server rated for 400 PVUs, you would need to buy at least 100 AU licenses (400 PVUs / 100 PVUs = 4; 4 * 25 = 100 AUs). Even if you only had 25 users in your environment, you would still need to buy 100 AU licenses since you have to minimally license DB2 Enterprise with 25 AU licenses per 100 PVUs when you use this license. If your environment had 125 users, in this example, you would need to procure 125 AU licenses since that is greater than the 25-per-100-PVUs minimum. - A capacity approach, based on the processing power rating of the underlying server or virtualization session, called the PVU Processor model The PVU processor model (also known as per capacity pricing) is a pricing metric where the total PVU rating of the underlying server is derived and multiplied by the DB2 per-PVU price to determine the eventual license costs. This model is attractive to companies that have large numbers of users in relation to database size, as well as environments where end users are not easily counted (like a Web-based application), because this licensing paradigm removes the need to manifest or "know" your users (from a licensing perspective). Another attractive feature of PVU pricing is that as new employees are hired, each new employee doesn't require a license entitlement. This leads to a reduction in license administration costs because you don't have to maintain a manifest of your users. This article focuses solely on the PVU processor model and how PVUs change the way you calculate your license costs. For complete details on the features, licensing restrictions, and extensibility options available with the distributed editions of the DB2 family, refer to "Which distributed edition of DB2 9.7 is right for you?" by Paul Zikopoulos. Wrapping it up...
To summarize, DB2 PVU pricing provides the following benefits: - Creates a simple and fair licensing structure - Avoids fractional licensing for multi-core chips - Avoids charging too much for multi-core chips - Provides flexibility and granularity since over time, new processors will be differentiated based on relative performance - Enables sub-capacity licensing at the processor core - Continues to deliver software price performance improvements - Makes licenses more transferable across distributed systems since PVUs can be moved around to different database servers that run on different core architectures - Provides clarity to middleware licensing. The PVU model might be a change in the way you do things, and to be honest, no one really likes a change. In due time, it'll become old hat, and as different processing cores offering various benefits are brought to the marketplace, you'll be able to leverage them in a fair and consumable manner. - Download a free trial version of DB2 Enterprise 9.7. - Now you can use DB2 for free. Download DB2 Express-C 9.7, a no-charge version of DB2 Express Edition for the community that offers the same core data features as DB2 Express Edition and provides a solid base to build and deploy applications. - "Compare the distributed DB2 9.7 servers" (developerWorks, Sep 2009): Compare the DB2 for Linux, UNIX, and Windows editions in a side-by-side comparison table. - "Which DB2 client connectivity option is right for you?" (developerWorks, Apr 2008): Learn the details of all the various client connectivity options. - "Which distributed edition of DB2 9.7 is right for you?" (developerWorks, Sep 2009): Get the details on what makes each edition of DB2 for Linux, UNIX, and Windows unique. - Visit the developerWorks resource page for DB2 for Linux, UNIX, and Windows to read articles and tutorials and connect to other resources to expand your DB2 skills. - Learn about DB2 Express-C, the no-charge version of DB2 Express Edition for the community.
<urn:uuid:a2ecbd15-60b3-482a-a087-168d2f720c92>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/data/library/techarticle/dm-0611zikopoulos2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00323-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937239
6,423
2.90625
3
In November of last year, Cornell University in partnership with Purdue University received a National Science Foundation grant to establish The MathWorks MATLAB on the TeraGrid as an experimental computing resource. MATLAB was already an important tool for many TeraGrid users, but as a parallel resource it could provide even greater opportunities all in a familiar environment. Thus, the MATLAB on the TeraGrid initiative was deployed at SC09 to provide MATLAB computational services to remote desktop users with complex data analysis needs. This week, Nicole Hemsoth, editor of our sister publication, HPC in the Cloud, presented an overview of the project as it is about to reach its one-year milestone. Part of the appeal for researchers is that the computational learning curve is diminished. Access to the 512-core resource does not require understanding of any particular operating system, MPI library, or batch scheduler. By utilizing the Parallel Computing Toolbox and the MATLAB Distributed Computing Server to access the resource via desktops and the TeraGrid science gateways, users who are part of TeraGrid are granted high-performance equipment without some of the common hassles on the programming front they used to encounter on a regular basis. In other words, it is allowing researchers to focus distinctly on their research problems, rather than forcing them to become, by proxy, experts in parallel programming.
<urn:uuid:3f6d4c3d-281b-4368-a5f6-4c84c5fac5f1>
CC-MAIN-2017-09
https://www.hpcwire.com/2010/10/20/matlab_on_the_teragrid_one_year_later/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00071-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94597
277
2.578125
3
Like many organizations that rely on industrial-strength datacenters, the US Department of Energy (DOE) would like to know if cloud computing can make its life easier. To answer that question, the DOE is launching a $32 million program to study how scientific codes can make use of cloud technology. Called Magellan, the program will be funded by the American Recovery and Reinvestment Act (ARRA), with the money to be split equally between the the two DOE centers that will be conducting the work: the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. One of the major questions the study hopes to answer is how well the DOE’s mid-range scientific workloads match up with various cloud architectures and how those architectures could be optimized for HPC applications. Today most public clouds lack the network performance, as well as CPU and memory capacities to handle many HPC codes. The software environment in public clouds also can be at odds with HPC, since little effort has been made to optimize computational performance at the application level. Purpose-built HPC clouds may be the answer, and much of the Magellan effort will be focused on developing these private “science clouds.” The bigger question, though, is to find out if the cloud model in general is applicable to high performance computing applications used at DOE labs and can offer a cost-effective and flexible approach for researchers. According to ALCF director Pete Beckman, that means getting the best science for the dollar. In a cloud architecture, the virtualization of resources usually translates into better utilization of hardware. In the HPC realm though, virtualization can be a performance killer and utilization is often not the big problem it is in commercial datacenters where hardware is typically undersubscribed. Perhaps of greater interest for HPC users is the ability to fast-track application deployment by taking advantage of the cloud’s ability to encapsulate complete software environments. “There are a lot users who spend time developing there own software inside their own software stack,” says Beckman. “Getting those running on traditional supercomputers can be quite challenging. In the cloud model, sometimes these people find it easier to bring their software stack with them. That can broaden the community.” The entire range of DOE scientific codes will be looked at, including energy research, climate modeling, bioinformatics, physics codes, applied math, and computer science research. But the focus will be on those codes that are typically run on HPC capacity clusters, which represent much of the computing infrastructure at DOE labs today. In general, codes that require capability supercomputers such as the Cray XT and the IBM Blue Gene are not considered candidates for cloud environments. This is mainly because large-scale supercomputing apps tend to be tightly coupled, relying on high speed inter-node communication and a non-virtualized software stack for maximum performance. Most of the program’s $32 million will, in fact, be spent on new cluster systems, which will form the testbed for Magellan. According to NERSC director Kathy Yelick, the cluster hardware will be fairly generic HPC systems, based on Intel Nehalem CPUs and InfiniBand technology. Total compute performance across both sites will be on the order of 100 teraflops. 
Yelick says there will also be a storage cloud, with a little over a petabyte of capacity. In addition, flash memory technology will be used to optimize performance for data-intensive applications. The NERSC and ALCF clusters will be linked via ESnet, the DOE’s cutting-edge 100 Gbps network. ESnet was also a recipient of ARRA funding, and will be used to facilitate super-speed data transfers between the two sites. One of the challenges in building a private cloud today is the lack of software standards. However, the Magellan work will employ some of the more popular frameworks that have emerged from the cloud community. Argonne, for instance, will experiment with the Eucalyptus toolkit, an open-source package that is compatible with Amazon Web Services API. The idea is to be able to build a private cloud with the same interface as Amazon EC2. Apache’s Hadoop and Google’s MapReduce, two related software frameworks that deal with large distributed datasets, will also be evaluated. Like Eucalyptus, Hadoop and MapReduce grew up outside of the HPC world, so currently there’s not much support for them at traditional supercomputing centers. But the notion of large distributed data sets is a feature of many data-intensive scientific codes and is a natural fit for cloud-style computing. The other aspect of the Magellan effort has to do with experimentation of commercial cloud offerings, such as those from Amazon, Google, and Microsoft. Public clouds, in particular, are attracting a lot of interest due to their ability to offer virtually infinite capacity and elasticity. (Private clouds, because of their smaller size, tend to be seen as fixed resources.) Just as important to the DOE, a public cloud has the allure of offloading the development and maintainence of local infrastructure to someone else. “Will it be more cost effective for a commercial entity to run a cloud, and presumably make a profit on it, than for the DOE to run their own cloud?” asks Yelick. “That is going to be one of the questions most challenging to answer.” Some DOE researchers are already giving public clouds a whirl. Argonne’s Jared Wilkening recently tested the feasibility of employing Amazon EC2 to run a metagenomics application (PDF). The BLAST-based code is a nice fit for cloud computing because there is little internal synchronization, therefore it doesn’t rely on high performance interconnects. Nevertheless, the study’s conclusion was that Amazon is significantly more expensive than locally-owned clusters, due mainly to EC2’s inferior CPU hardware and the premium cost associated with on-demand access. Of course, given increased demand for compute-intensive workloads, that could change. Wilkening’s paper was published in Cluster 2009, and slides (PDF) are available on the conference Web site. The Magellan program is slated to run for two years, with the initial clusters expected to be installed sometime in the next few months. At NERSC, Yelick says the hardware could arrive as early as November, and become operational in December or January. Meanwhile at Argonne, Beckman is already running into researchers who can’t wait to host their codes on the Magellan cloud. “They’re lined up,” he says. “They keep coming down to my office asking when it will be here and how soon they can log in.”
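For readers unfamiliar with the MapReduce model mentioned above, the pattern itself is simple enough to sketch in a few lines of plain Python. This is only a toy, single-process illustration of the programming model — Hadoop and Google's implementations distribute the map tasks, the shuffle, and the reduce tasks across many nodes and handle failures along the way — but it shows the shape of computation that makes the model attractive for large, loosely coupled scientific datasets.
from collections import defaultdict

def map_phase(records, map_fn):
    # Apply the user-supplied map function to every input record,
    # producing intermediate (key, value) pairs.
    for record in records:
        for key, value in map_fn(record):
            yield key, value

def shuffle(pairs):
    # Group intermediate values by key (done across the network in a real framework).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    # Apply the user-supplied reduce function to each key's group of values.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Toy example: count words across a few input "partitions".
partitions = ["climate model run complete", "climate data archived", "model validated"]
wordcount = reduce_phase(
    shuffle(map_phase(partitions, lambda line: [(word, 1) for word in line.split()])),
    lambda key, values: sum(values),
)
print(wordcount)   # {'climate': 2, 'model': 2, 'run': 1, ...}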
<urn:uuid:c367b540-11bb-4909-b687-02cd6a450855>
CC-MAIN-2017-09
https://www.hpcwire.com/2009/10/13/doe_labs_to_build_science_clouds/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00119-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944781
1,435
2.78125
3
In Part 1, we discussed cybersecurity, primarily from the defensive point of view, and provided a simplistic explanation of Pareto Optimality. But defense isn't the only side at play; we simply tend to think in terms of defense, which leans toward a linear thought process. In the study of economics there is a concept called Pareto optimality. Pareto Optimality, or Pareto Efficiency, is a benchmark of economic efficiency. Simply put, an outcome is Pareto optimal when neither party can be made better off without making the other worse off: a balancing point between opposing interests. But how does this relate to cybersecurity? Let us explain.
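To make the idea concrete, here is a toy illustration (the scenario and numbers are invented, not drawn from the article): score a handful of candidate outcomes for two opposing parties, then keep only the Pareto-efficient ones, meaning those that cannot be improved for one party without hurting the other.

```python
# Candidate outcomes scored as (party A utility, party B utility); values are made up.
outcomes = {"A": (3, 5), "B": (4, 4), "C": (2, 6), "D": (3, 3), "E": (5, 2)}

def pareto_front(points):
    front = {}
    for name, (x, y) in points.items():
        # An outcome is dominated if some other outcome is at least as good
        # for both parties and strictly better for at least one of them.
        dominated = any(
            ox >= x and oy >= y and (ox > x or oy > y)
            for ox, oy in points.values()
        )
        if not dominated:
            front[name] = (x, y)
    return front

print(pareto_front(outcomes))  # D drops out; A, B, C and E are Pareto-efficient
```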
<urn:uuid:b8b16250-7ad8-403d-a880-609cdd38354a>
CC-MAIN-2017-09
https://www.entrust.com/author/entrustpm/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00415-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95012
134
2.703125
3
It’s a busy day in your company and everyone is rushing around trying to respond to requests. Audrey gets an email that looks like it’s from a partner asking her to look into a recently placed order. She clicks on the PDF to check it out. But instead of seeing the partner’s order, she sees a landing page from the company’s security team letting her know she fell prey to a simulated phishing attack. As she looks around the room, she sees that a few co-workers also have stunned looks on their faces.

If real, such a phishing attack could have put your company’s sensitive information—such as usernames, passwords, credit card details or PINs of your customers—at risk. According to data from Kaspersky Lab, phishers launched attacks impacting more than 100,000 people daily last year. Despite attempts by security software firms to stop them, cybercriminals are getting craftier by the day.

A recent scam, uncovered by security firm Symantec, was targeted against users of Google Drive, which is frequently used by businesses for collaboration. Users were sent a message with the subject header “Documents” and directed to a sign-in page that closely mirrored Google’s. After they signed in, users were sent to a PHP script on a compromised Web server. This page then redirected to a real Google Drive document, leaving visitors unaware that their login credentials had been stolen.

Based on the startled looks of the impacted employees, the mock phishing attack that Audrey and her co-workers experienced jolted the system, but did it make the company any safer from cyber threats?

Simulated attacks can’t stand alone

Phishing impacts thousands of companies each year, but it’s not the only issue they face. Malware attacks, physical attacks on company data by workers posing as service personnel, and attacks aimed specifically at mobile devices are all on the rise, and they are just a few examples of the many threat vectors. The mock phishing attack orchestrated by the company’s security team provides a wake-up call but isn’t the only security education solution the company needs. Here’s why:

- You have to worry about more than just phishing. Unfortunately, attacks on data don’t stop at users clicking on a link or document in an email from their laptop. For example, access could be granted through a link the user receives via text or information given out by an employee over the phone. Malware can be downloaded through a mobile phone or by clicking something on a perfectly legitimate website.
- It only teaches in the moment. Yes, the simulated attack did its job by creating shock factor, but what’s next? How can you reduce the risk of it happening again in the same or a slightly different way? Do employees have actionable information about how to avoid the next attack?
- It does not measure vulnerability to all attacks. If employees fell for a mock phishing attack, will they also fall for other types of attacks? How can you understand the complete vulnerability of individual employees?

As you can see, simulated attacks can provide value in assessing vulnerability but don’t provide the complete answer for CISOs. A more complete approach is needed. However, one big issue that security officers face is that most employees think they are immune to security threats. Despite the high news coverage that large breaches receive, and despite tales told by their co-workers and friends about losing their laptops for a few days while a malware infection is cleared up, employees generally believe they are immune to security risks.
Those types of things happen to other, less careful people.

Mock attacks do have certain benefits, however. For example, they can shock complacent employees, even if only momentarily. A simulation causes some people to realize how vulnerable to social engineering they really are. This keeps them on their toes and improves overall security. Most importantly, it may cause employees to pause the next time they see anything potentially suspicious.

Moreover, it can be a motivator for certain people. After mock phishing attacks, employees think “If I’m vulnerable to this, what else am I vulnerable to?”, and that’s a win for the security team.

Mock attacks can also help break down walls. They can help create a valuable communications channel between users and security and IT staff. It helps people understand that they can report phishing and other potentially malicious attacks to their IT department, even if it turns out to be a false alarm.

Creating the best of both worlds

As we’ve seen, mock attacks can complete part of the security education picture. As part of a comprehensive security education strategy, they become a valuable way to test and measure progress. Employees who are aware of the company’s plan to sporadically conduct simulated events are often more careful overall, adopting an “If you see something, say something” thought process.

However, the overarching goal of any security education program needs to focus on changing the user’s behavior, making him or her less likely to fall for any scheme that will put the company—and its sensitive data—at risk. Mock attacks are a part of this training, but to reach a point where there is a real and lasting behavioral change, a program needs to take into account the entire security picture. This includes:

Understanding different kinds of attacks

It’s natural to focus on how to keep computers free from malware and data safe from phishers, but security training should also include physical security (how front desk staff and other employees should react when an unscheduled “service person” arrives at their door) and phone training (what to do when a caller asks for information that shouldn’t be divulged). These lessons are difficult to teach via a simulated event, but the right training can teach employees to ask questions such as “Can I see your ID?” “Do you have paperwork?” or “Who at my company requested this?”

Protecting different devices

Mobile phones have rapidly become a potential treasure trove of personal data for the cybercriminal. They also represent an easy way to get to end users through social engineering techniques such as fake antivirus, which tricks users into paying to get rid of non-existent malware. Android is the OS most under attack; according to a report from security vendor Sophos, since it first detected Android malware in August 2010, it has recorded more than 300 malware families and more than 650,000 individual pieces of Android malware.

Determining if a URL is legitimate or fraudulent

Teaching employees how URLs work is the first step in preventing them from clicking on fraudulent ones even when they are browsing the Internet. In the lower left of most browsers, users can preview and verify where the link is going to take them. Making employees more aware of how to spot fraudulent URLs could help change their actions when they come across those that seem suspicious.
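As a rough illustration of the URL point (none of this comes from the article, and the allowlist below is a made-up example), even a few lines of code show the kind of check a user is implicitly performing when previewing a link: what matters is the actual hostname, not whether a familiar name appears somewhere in the string.

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"drive.google.com", "accounts.google.com"}  # hypothetical allowlist

def looks_suspicious(url):
    host = (urlparse(url).hostname or "").lower()
    # "drive.google.com.phish.example" contains a trusted-looking name,
    # but its real hostname is not on the list.
    return host not in TRUSTED_HOSTS

print(looks_suspicious("https://drive.google.com.phish.example/login"))  # True
print(looks_suspicious("https://drive.google.com/file/d/abc123"))        # False
```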
Creating strong passwords

Many users think easy-to-remember passwords such as 123456 are “good enough,” not realizing that weak passwords make them a company’s most vulnerable link. Training users how to properly create and store strong passwords, and putting measures in place that tell individuals the password they’ve created is “weak,” can help change behavior. (A simple illustration of such a check appears at the end of this article.)

Overall, if a company is going to arm its end users to help keep its data secure, it has to do more than occasional mock attacks. Simulated attacks work best when done as part of an overall security education plan, whose benefits are well articulated and understood, and with the end result being a positive change in employee behavior. In this environment, they can be very valuable to a company, providing data that helps clarify the true vulnerability of a company and its employees.
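As promised above, here is a minimal sketch of the kind of password check that flags a weak choice. The thresholds are invented for the example rather than taken from any standard, and a production system would also check candidates against lists of breached or common passwords.

```python
import re

def password_is_weak(pw):
    checks = [
        len(pw) >= 12,                          # reasonable minimum length
        re.search(r"[a-z]", pw) is not None,    # lowercase letter
        re.search(r"[A-Z]", pw) is not None,    # uppercase letter
        re.search(r"\d", pw) is not None,       # digit
        re.search(r"[^\w\s]", pw) is not None,  # symbol
    ]
    return not all(checks)

print(password_is_weak("123456"))                  # True, the example from the text
print(password_is_weak("correct-Horse9-battery"))  # False
```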
<urn:uuid:fba4b9c4-cbe0-4d20-a41d-576ee672eef4>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2014/04/22/how-can-we-create-a-culture-of-secure-behavior/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00467-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963282
1,607
2.625
3
The annual energy used to transmit, process and filter spam totals 33 billion kilowatt-hours (kWh), according to security software company McAfee. This is equivalent to the electricity used in 2.4 million homes, and represents the same amount of greenhouse gas emissions as 3.1 million cars using two billion gallons of petrol.

Jeff Green, senior vice-president of product development and McAfee Avert Labs, said, "Spam has an immense financial, personal and environmental impact on businesses and individuals. Stopping spam at its source, as well as investing in filtering technology, will save time and money, and will pay dividends to the planet by reducing carbon emissions as well."

The Carbon Footprint of Spam study from McAfee looked at global energy expended to create, store, view and filter spam across 11 countries, including Australia, Brazil, Canada, China, France, Germany, Japan, India, Mexico, Spain, the United States and the United Kingdom.

The study calculated the average greenhouse gas emission associated with a single spam message at 0.3 grams of CO2. McAfee said this was equivalent to driving three feet, but when multiplied by the yearly volume of spam, it is equivalent to driving around the earth 1.6 million times. The study also found that nearly 80% of the energy used by spam comes from end-users deleting spam and searching for legitimate e-mail (false positives).

McAfee, which produces its own spam filtering technology, said that spam filtering would save 135 TWh of electricity per year, which is equivalent to taking 13 million cars off the road.
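The headline figures can be sanity-checked with simple arithmetic. The inputs below are the article's own numbers; the derived values are back-of-envelope approximations, not additional findings from the McAfee report.

```python
spam_energy_kwh  = 33e9    # annual energy attributed to spam (kWh)
homes_equivalent = 2.4e6   # number of homes the report says this equals
end_user_share   = 0.80    # share of spam energy attributed to end users

kwh_per_home = spam_energy_kwh / homes_equivalent
print(f"Implied annual electricity use per home: {kwh_per_home:,.0f} kWh")   # ~13,750 kWh

end_user_kwh = end_user_share * spam_energy_kwh
print(f"Energy spent by end users on spam: {end_user_kwh / 1e9:.1f} billion kWh")  # ~26.4
```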
<urn:uuid:fffafdd0-05db-43ae-9723-d49fde0478a1>
CC-MAIN-2017-09
http://www.computerweekly.com/news/2240089006/Spam-increases-global-carbon-footprint
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00235-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948072
353
2.5625
3
Programming languages don’t die easily, but development shops that cling to fading paradigms do. If you're developing apps for mobile devices and you haven't investigated Swift, take note: Swift will not only supplant Objective-C when it comes to developing apps for the Mac, iPhone, iPad, Apple Watch, and devices to come, but it will also replace C for embedded programming on Apple platforms.

Thanks to several key features, Swift has the potential to become the de-facto programming language for creating immersive, responsive, consumer-facing applications for years to come.

Apple appears to have big goals for Swift. It has optimized the compiler for performance and the language for development, and it alludes to Swift being “designed to scale from ‘hello, world’ to an entire operating system” in Swift’s documentation. While Apple hasn’t stated all its goals for the language yet, the launches of Xcode 6, Playgrounds, and Swift together signal Apple’s intent to make app development easier and more approachable than with any other development tool chain.

Here are 10 reasons to get ahead of the game by starting to work with Swift now.

1. Swift is easier to read

Objective-C suffers all the warts you’d expect from a language built on C. To differentiate keywords and types from C types, Objective-C introduced new keywords using the @ symbol. Because Swift isn’t built on C, it can unify all the keywords and remove the numerous @ symbols in front of every Objective-C type or object-related keyword.

Swift drops legacy conventions. Thus, you no longer need semicolons to end lines or parentheses to surround conditional expressions inside if/else statements. Another large change is that method calls do not nest inside each other, resulting in bracket hell -- bye-bye, [[[ ]]]. Method and function calls in Swift use the industry-standard comma-separated list of parameters within parentheses. The result is a cleaner, more expressive language with a simplified syntax and grammar.

2. Swift is easier to maintain

Legacy is what holds Objective-C back -- the language cannot evolve without C evolving. C requires programmers to maintain two code files in order to improve the build time and efficiency of the executable app creation, a requirement that carries over to Objective-C.

Swift drops the two-file requirement. Xcode and the LLVM compiler can figure out dependencies and perform incremental builds automatically in Swift 1.2. As a result, the repetitive task of separating the table of contents (header file) from the body (implementation file) is a thing of the past. Swift combines the Objective-C header (.h) and implementation files (.m) into a single code file (.swift).

Objective-C’s two-file system imposes additional work on programmers -- and it’s work that distracts programmers from the bigger picture. In Objective-C you have to manually synchronize method names and comments between files, hopefully using a standard convention, but this isn’t guaranteed unless the team has rules and code reviews in place. Xcode and the LLVM compiler can do work behind the scenes to reduce the workload on the programmer. With Swift, programmers do less bookkeeping and can spend more time creating app logic. Swift cuts out boilerplate work and improves the quality of code, comments, and features that are supported.

3. Swift is safer

One interesting aspect of Objective-C is the way in which pointers -- particularly nil (null) pointers -- are handled.
In Objective-C, nothing happens if you try to call a method with a pointer variable that is nil (uninitialized). The expression or line of code becomes a no-operation (no-op), and while it might seem beneficial that it doesn’t crash, it has been a huge source of bugs. A no-op leads to unpredictable behavior, which is the enemy of programmers trying to find and fix a random crash or stop erratic behavior.

Optional types make the possibility of a nil optional value very clear in Swift code, which means the compiler can generate an error as you write bad code. This creates a short feedback loop and allows programmers to code with intention. Problems can be fixed as code is written, which greatly reduces the amount of time and money that you will spend on fixing bugs related to pointer logic from Objective-C.

Traditionally, in Objective-C, if a value was returned from a method, it was the programmer’s responsibility to document the behavior of the pointer variable returned (using comments and method-naming conventions). In Swift, the optional types and value types make it explicitly clear in the method definition if the value exists or if it has the potential to be optional (that is, the value may exist or it may be nil).

To provide predictable behavior, Swift triggers a runtime crash if a nil optional variable is used. This crash provides consistent behavior, which eases the bug-fixing process because it forces the programmer to fix the issue right away. The Swift runtime crash will stop on the line of code where a nil optional variable has been used. This means the bug will be fixed sooner or avoided entirely in Swift code.

4. Swift is unified with memory management

Swift unifies the language in a way that Objective-C never has. The support for Automatic Reference Counting (ARC) is complete across the procedural and object-oriented code paths. In Objective-C, ARC is supported within the Cocoa APIs and object-oriented code; it isn’t available, however, for procedural C code and APIs like Core Graphics. This means it becomes the programmer’s responsibility to handle memory management when working with the Core Graphics APIs and other low-level APIs available on iOS. The huge memory leaks that a programmer can have in Objective-C are impossible in Swift.

A programmer should not have to think about memory for every digital object he or she creates. Because ARC handles all memory management at compile time, the brainpower that would have gone toward memory management can instead be focused on core app logic and new features. Because ARC in Swift works across both procedural and object-oriented code, it requires no more mental context switches for programmers, even as they write code that touches lower-level APIs -- a problem with the current version of Objective-C.

Automatic and high-performance memory management is a problem that has been solved, and Apple has proven it can increase productivity. The other side effect is that neither Objective-C nor Swift suffers from a garbage collector running in the background to clean up unused memory, as Java, Go, and C# do. This is an important factor for any programming language that will be used for responsive graphics and user input, especially on a tactile device like the iPhone, Apple Watch, or iPad (where lag is frustrating and makes users perceive an app is broken).

5. Swift requires less code

Swift reduces the amount of code that is required for repetitive statements and string manipulation.
In Objective-C, working with text strings is very verbose and requires many steps to combine two pieces of information. Swift adopts modern programming language features like adding two strings together with a “+” operator, which is missing in Objective-C. Support for combining characters and strings like this is fundamental for any programming language that displays text to a user on a screen.

The type system in Swift reduces the complexity of code statements -- as the compiler can figure out types. As an example, Objective-C requires programmers to memorize special string tokens (%@) and provide a comma-separated list of variables to replace each token. Swift supports string interpolation, which eliminates the need to memorize tokens and allows programmers to insert variables directly inline in a user-facing string, such as a label or button title. The type inferencing system and string interpolation mitigate a source of crashes that is common in Objective-C.

With Objective-C, messing up the order or using the wrong string token causes the app to crash. Here, Swift again relieves you from bookkeeping work, translating into less code to write (code that is now less error prone) because of its inline support for manipulating text strings and data.

6. Swift is faster

Dropping legacy C conventions has greatly improved Swift under the hood. Benchmarks for Swift code performance continue to point to Apple’s dedication to improving the speed at which Swift can run app logic.

According to Primate Labs, makers of the popular GeekBench performance tool, Swift was approaching the performance characteristics of C++ for compute-bound tasks in December 2014 using the Mandelbrot algorithm. In February 2015, Primate Labs discovered that the Xcode 6.3 Beta improved Swift’s performance of the GEMM algorithm -- a memory-bound algorithm with sequential access of large arrays -- by a factor of 1.4. The initial FFT implementation -- a memory-bound algorithm with random access of large arrays -- had a 2.6-fold performance improvement.

Further improvements were observed in Swift by applying best practices, resulting in an 8.5-fold boost for FFT algorithm performance (leaving C++ with only a 1.1-fold performance gain). The enhancements also enabled Swift to outperform C++ for the Mandelbrot algorithm by a factor of a mere 1.03. Swift is nearly on par with C++ for both the FFT and Mandelbrot algorithms. According to Primate Labs, the GEMM algorithm performance suggests the Swift compiler cannot vectorize code the C++ compiler can -- an easy performance gain that could be achieved in the next version of Swift.

7. Fewer name collisions with open source projects

One issue that has plagued Objective-C code is its lack of formal support for namespaces, which was C++’s solution to code filename collisions. When this name collision happens in Objective-C, it is a linker error, and the app can’t run. Workarounds exist, but they have potential pitfalls. The common convention is to use a two- or three-letter prefix to differentiate Objective-C code that is written, say, by Facebook versus your own code.

Swift provides implicit namespaces that allow the same code file to exist across multiple projects without causing a build failure or requiring names like NSString (NS for NeXTSTEP, the operating system from NeXT, Steve Jobs’ company after being fired from Apple) or CGPoint (Core Graphics). Ultimately, this feature in Swift keeps programmers more productive and means they don’t have to do the bookkeeping that exists in Objective-C.
You can see Swift’s influence with simple names like Array, Dictionary, and String instead of NSArray, NSDictionary, and NSString, which were born out of the lack of namespaces in Objective-C. With Swift, namespaces are based on the target that a code file belongs to. This means programmers can differentiate classes or values using the namespace identifier.

This change in Swift is huge. It greatly facilitates incorporating open source projects, frameworks, and libraries into your code. The namespaces enable different software companies to create the same code filenames without worrying about collisions when integrating open source projects. Now both Facebook and Apple can use an object code file called FlyingCar.swift without any errors or build failures.

8. Swift supports dynamic libraries

The biggest change in Swift that hasn’t received enough attention is the switch from static libraries, which are updated at major point releases (iOS 8, iOS 7, and so on), to dynamic libraries. Dynamic libraries are executable chunks of code that can be linked to an app. This feature allows current Swift apps to link against newer versions of the Swift language as it evolves over time.

The developer submits the app along with the libraries, both of which are digitally signed with the development certificate to ensure integrity (hello, NSA). This means Swift can evolve faster than iOS, which is a requirement for a modern programming language. Changes to the libraries can all be included with the latest update of an app on the App Store, and everything simply works.

Dynamic libraries were never supported on iOS until the launch of Swift and iOS 8, even though they have been supported on the Mac for a very long time. Dynamic libraries are external to the app executable, but are included within the app bundle downloaded from the App Store. This reduces the initial size of an app as it is loaded into memory, since the external code is linked only when used.

The ability to defer loading in a mobile app or an embedded app on Apple Watch will improve perceived performance for the user. This is one of the distinctions that make the iOS ecosystem feel more responsive. Apple has been focused on loading only assets, resources, and now compiled and linked code on the fly. The on-the-fly loading reduces initial wait times until a resource is actually needed to display on the screen.
<urn:uuid:50dd78e0-a232-4688-9fc1-d8fe26e48367>
CC-MAIN-2017-09
http://www.itnews.com/article/2920333/mobile-development/swift-vs-objective-c-10-reasons-the-future-favors-swift.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00287-ip-10-171-10-108.ec2.internal.warc.gz
en
0.900716
2,676
2.578125
3
IBM has found a way to make transistors that could be fashioned into virtual circuitry that mimics how the human brain operates. The new transistors would be made from strongly correlated materials, such as metal oxides, which researchers say can be used to build more powerful -- but less power-hungry -- computation circuitry.

"The scaling of conventional-based transistors is nearing an end, after a fantastic run of 50 years," said Stuart Parkin, an IBM fellow at IBM Research. "We need to consider alternative devices and materials that operate entirely differently."

Researchers have been trying to find ways of changing conductivity states in strongly correlated materials for years. Parkin's team is the first to convert metal oxides from an insulating to a conductive state by applying oxygen ions to the material. The team recently published details of the work in the journal Science.

In theory, such transistors could mimic how the human brain operates in that "liquids and currents of ions [would be used] to change materials," Parkin said, noting that "brains can carry out computing operations a million times more efficiently than silicon-based computers."

This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.
<urn:uuid:124a49b3-d06c-4bdf-9551-df4556f37eb7>
CC-MAIN-2017-09
http://www.computerworld.com/article/2496271/computer-hardware/ibm-makes-next-gen-transistors-that-could-work-like-the-human-brain.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00283-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963452
264
3.828125
4
There's no arguing with the fact that computers are becoming more prevalent in classrooms nationwide. It's how they are being used and whether those uses are helpful to students that remain unclear. One area in which school computers are utilized more and more, and where their benefits appear concrete, is in career-guidance activities. As the job market has become increasingly competitive, many high schools have realized the importance of assisting students in their career planning. As schools work to develop career education programs, computers now play a key role in managing that large and often complicated task.

"Students need to use discretion and judgment about careers," said Joe Keenan, school-to-career coordinator for the Vineland School District in Vineland, N.J. "We're talking about values. We're talking about preferences related to your personal makeup."

Accurately matching a student with an appropriate career, therefore, is critical. Thankfully, powerful software packages are now making the process more manageable for schools around the nation.

One Simple Program

At the Sierra Unified School District, a small, rural district in the Sierra Nevada mountain range of California, career guidance was once a daunting task. The district's Sandy Bluffs Alternative Education Program had been using evaluation materials from a half-dozen vendors, but none of the information was compatible or conclusive. The paperwork was so intensive it left little time for anything else.

"We never had time to do much consultation with kids after assessments because we were so busy administering the work," said Mike Ashmore, a teacher and head of guidance counseling for the Alternative Education Program. "Most of the paper assessments had no aptitude reading, and the interest questions were inconclusive."

Counselors and teachers at Sandy Bluffs were continually frustrated with the excessive time and effort the manual career-guidance program required, and students often felt stifled by the tedious, complex and seemingly unrewarding results. Magellan has changed all that.

A software program produced by Valpar International, Magellan uses nine automated assessments to measure a student's academic skills, physical skills, temperament, people skills, data skills, interests and time required for training. The results guide a student to his or her top five career matches. Once the five picks are determined, the student can then explore the Magellan database to learn more about each occupation, including wage and employment data, general exploratory information and four-year class schedules that can be modified by individual schools.

Automating the career-guidance process at Sandy Bluffs freed career counselors to do what they do best -- counsel students. "I'm now able to work closely with each student to help them get more information and to narrow down their interests," Ashmore said. "The counselors can now sit down with students and help them decide how to chart their career paths. The interactions between students and staff are definitely more quality-oriented."

Charting a Path

Potosi High School in Potosi, Mo., is a rural school outside St. Louis. The school offers career-guidance instruction through a program called A+. The state-sponsored project's purpose is to make sure students are career-oriented and have a clear focus before they enroll in a community college or occupational school. Administering the program was an erratic process before the school began using Magellan.
"We were doing paper-oriented assessments from a variety of different companies. We would try to coordinate the results together, but it wasn't always possible," recalled A+ Coordinator Tammy Finley. "Our three counselors were overloaded with too many administrative tasks and too big of a student load. They had very little time to spend with students because they were doing so much research, assessment grading and paper processing." Using Magellan, teachers and administrators at Potosi High can now help students formulate a four-year plan to maximize their coursework. They also have more time to spend working closely with students. "The time savings for counselors translates into more enthusiasm and motivation for students during the process," Finley said. "Counselors now have the opportunity to answer questions more completely and can spend more one-on-one time with students." Some teachers say career-guidance software is helping students become more focused, encouraging them to stick to their goals and pushing them to get better grades. "Once students have a direction and are aware of the steps necessary to reach their goals, we see them become really motivated," said Carl Cortezzo, career educator at Phillipsburg High School in Phillipsburg, N.J. "That goal orientation translates into higher grades." Teachers and counselors at Phillipsburg High are using a program made by Educational Testing Services called SIGI Plus, which takes students through a self-assessment phase that helps them define their work-related values, interests and skills. Students then explore occupations that fit their personal profile. As they narrow the list to occupations of interest, they access the programs' extensive database to find more in-depth information. They then set a career goal and plan the steps necessary to reach that goal. "For students who are at risk of dropping out of school, having a goal and a career plan can make the difference of whether or not they stay in school and complete their education," Cortezzo said. New Jersey's Vineland School District is also using SIGI Plus to help students become more critical players in their career decision-making. "Our goal is to develop more reflective students who will make career choices that are realistic and based on their work-related values, interests and abilities," Keenan said. Vineland has taken several additional steps to help ensure their schools implement the SIGI Plus program effectively. A district-wide intranet is now in place, and within a year access to the program will be available from all school buildings. As the program builds, Keenan said he envisions a day when career testing and information will be cumulative for Vineland students. In that manner, a third-grader could enter interests and abilities and that information would be available to the student's future teachers. Career guidance would therefore become a multi-year process that builds and is refined over many years. "We're going to see kids better able to [understand] themselves, their future, and the choices that are open to them," Keenan said. Justine Kavanaugh-Brown is editor in chief of California Computer News, a Government Technology sister publication. Email
<urn:uuid:6908b9e6-33fb-4f01-bc45-df23f16c1313>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Technology-Tools-Navigate-Students-Toward-Careers.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00635-ip-10-171-10-108.ec2.internal.warc.gz
en
0.974391
1,315
3.140625
3
DDoS attacks are mainly divided into Layer 3 & 4 attacks and Layer 7 attacks. Traditional DDoS attacks are usually Layer 3 & 4, aiming to use up bandwidth or exhaust the connection limits of your network resources. Most anti-DDoS solutions are effective at mitigating such attacks. Layer 7 attacks operate at the application protocol level (e.g., HTTP, FTP, SMTP) and are designed to hit server and system resources. Layer 7 attacks are usually more effective and require fewer resources on the attacker's side to push a victim's services past their limits. Such attacks often look like legitimate requests and are difficult to differentiate from normal traffic (e.g., an HTTP GET/POST DDoS attack).
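Because each request in a Layer 7 flood is individually well-formed, defenders often fall back on behavioral heuristics rather than packet inspection. The following is a deliberately naive sketch of one such heuristic, a per-client sliding-window request counter; it is illustrative only, is not part of any particular anti-DDoS product, and the threshold values are invented.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100              # made-up threshold for the example

recent = defaultdict(deque)     # client IP -> timestamps of recent requests

def over_limit(client_ip, now=None):
    now = time.monotonic() if now is None else now
    q = recent[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()             # forget requests that fell outside the window
    return len(q) > MAX_REQUESTS
```

Real mitigations are far more involved (reputation, challenges, anomaly scoring), but the sketch shows why Layer 7 floods are awkward to filter: the only signal is volume and behavior, not malformed traffic.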
<urn:uuid:bf08a564-bc78-4d2c-8774-31ee56ccb8d9>
CC-MAIN-2017-09
https://help.nexusguard.com/hc/en-us/articles/203739075-What-is-a-DDoS-attack-
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00335-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939309
129
2.734375
3
Unless specialized software is used, the simple act of booting a computer system is almost certain to change data on disk drives connected to the computer. This results in the contamination of digital evidence and often causes vast amounts of data to be destroyed or altered before it can be copied. Copying files or backing up a disk drive are ineffectual forensic methods for a variety of reasons. Deleted files are not copied, nor are files or partitions that are hidden. Oftentimes, backup programs modify the attributes of files and folders by flagging them as having been backed up.

The forensic methodology employed by ASR Data is completely non-invasive to the original evidence and does not change any data on disk sub-systems before, during or after the data acquisition process. All information is copied, including deleted files, unallocated disk space, slack space and partition waste space.

Gaining access to a disk drive non-invasively may be accomplished in several ways, depending on various technical configurations. Oftentimes, the fastest and easiest way to image an internal disk drive is to remove it from its native environment and connect it to a computer which has had its hardware and software optimized to support the forensic process. Alternatively, the drive may be left in the computer and the computer booted using a modified version of an operating system which has been “neutered” to prevent it from changing any data on disk drives connected to the computer.

Providing a quantifiable measurement of authenticity and integrity of data is essential for satisfying admissibility standards such as Federal Rules of Evidence – Article X – Rule 1003 and Federal Rules of Evidence – Article IX – Rule 901. The data acquisition and authentication protocol employed by ASR Data has been developed to facilitate the discovery process and addresses issues raised in Federal Rules of Civil Procedure, Rules 26 and 34. ASR Data integrates digital evidence and chain of custody information and extends the authentication paradigm to include the embedded chain of custody information. ASR Data’s methodology is fault tolerant and can authenticate data on damaged media. The protocol also supports the exclusion of privileged information while retaining the ability to acquire, authenticate and analyze desktops, laptops, servers, mobile devices and many types of removable media and optical data storage media.

ASR Data has developed tools and techniques that allow us to recover data other utilities and data recovery companies miss. More than simply recovering deleted files, our advanced tools and techniques allow us to defeat passwords, discern subtle patterns of computer usage and much more. Reconstructing an accurate history of computer activity and identifying the “signature” of user-initiated actions requires an in-depth understanding of computer operating systems, file systems and disk storage subsystems. ASR Data employs a standardized scientific methodology that has been proven to be sound, effective and reliable. Optimized to anticipate a wide variety of legal foundation and theoretical challenges, our findings and opinions are virtually incontrovertible.

Information obtained from the technical analysis of a computer may be of little practical value unless the information can be effectively disseminated. The presentation of information is oftentimes as important as the information itself. Findings and opinions are presented in clear, concise terms.
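The text does not spell out the mechanism behind its quantifiable measurement of authenticity and integrity, so the following is only an assumed illustration of the most common approach: computing a cryptographic digest of the acquired image in chunks, recording it at acquisition time, and re-computing it later to demonstrate that the evidence has not changed. The file name is a placeholder.

```python
import hashlib

def image_digest(path, algo="sha256", chunk_size=1024 * 1024):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        # Read in chunks so arbitrarily large images (or block devices) can be handled.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(image_digest("evidence_drive.img"))  # hypothetical image file
```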
Call us at (512) 918-9227 or schedule a free consultation
<urn:uuid:eae11ea0-20e0-4297-bc61-c09bedd2241e>
CC-MAIN-2017-09
http://www.asrdata.com/litigation-support/forensic-methodology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00511-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927579
672
3.140625
3
Two months on, the Heartbleed vulnerability is still worth talking about. One thing that needs to be discussed is that you and I are partly to blame for the problems Heartbleed caused. But we can also talk about some common-sense ways we can help protect ourselves in the future. In order to truly understand Heartbleed, let us first define what a vulnerability is, according to the Information Systems Audit and Control Association (ISACA). ISACA defines vulnerability as "a weakness in the design, implementation, operation or internal control of a process that could expose the system to adverse threats from threat events." In people terms, it is essentially a weakness in some process that could lead to bad things happening. Follow? Great! Next, what is Heartbleed? On April 7, a vulnerability was identified in a widely used implementation of the Secure Sockets Layer (SSL) protocol called OpenSSL. The SSL protocol establishes an encrypted link between a Web server and your Internet browser. Not only does SSL encrypt your online communications, protecting your username and password, but it also helps ensure that you are connecting to legitimate websites.
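As a small, hedged illustration of what the SSL/TLS handshake buys you (this is not from the article, and the hostname is a placeholder), the snippet below opens a verified TLS connection and prints details of the certificate the server presented; certificate and hostname verification are what tie the encrypted link to a legitimate site.

```python
import socket
import ssl

def peer_certificate(host, port=443):
    ctx = ssl.create_default_context()    # verifies the CA chain and the hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()       # dict with subject, issuer, validity dates

cert = peer_certificate("example.com")     # placeholder hostname
print(cert["subject"])
print("Expires:", cert["notAfter"])
```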
<urn:uuid:efe14602-67da-48ec-acb5-5ad8fc29ea6a>
CC-MAIN-2017-09
http://www.computerworld.com/article/2490533/security0/heartbleed-still-matters--and-we-re-all-partly-to-blame.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00280-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938407
235
3.28125
3
The European Space Agency (ESA) is turning to the power of the crowd to improve robotic rendezvous methods, launching an app that will help fuel the construction of autonomous spacecraft that can maneuver, dock, and land themselves. Owners of the Parrot AR.Drone, a midget drone that flies on four rotors, can help ESA with a new crowdsourcing project by downloading the AstroDrone app. The app gamifies the quadcopter's flight as pilots attempt to dock at a simulated International Space Station, and it will monitor the way in which human pilots intuitively assess and correct their position in relation to surrounding obstacles.

The ESA hopes that the piloting challenge will provide them with "millions of hours" of data that they otherwise wouldn't be able to collect. The resulting dataset will form the first step to reproducing the flight process with artificial intelligence, teaching robots to better understand distances and maneuvering skills. "For ESA, this development opens up completely new ways of involving the public in scientific experiments," said Leopold Summerer, head of the ESA's Advanced Concepts Team. "We can obtain real-life data to train our algorithms in large amounts that would practically be impossible to get in any other way."

Players are rewarded for docking at a simulated ISS in a rapid, controlled manner, with extra points available for correct orientation and low speed on final approach. Versions for other devices are in the works, as well as new levels that will mimic the ESA's Rosetta probe rendezvousing with the 67P/Churyumov-Gerasimenko comet, a mission due to take place in 2014.

With around half a million Parrot AR.Drones sold since the product's launch in 2010, the iOS-controlled drone was seen as the best choice for the ESA to obtain as much data as possible. Parrot has also made the drone's source code open to anyone looking to develop its software. Team research fellow Guido de Croon explains on the ESA's site that no information about where the drones are flown, or any of the pilot's personal details, will be collected. "We will not receive any raw video images or GPS measurements, only the abstract mathematical image features that the drone itself perceives for navigation, along with velocity readings."

The AstroDrone app joins a growing list of interesting hacks of the popular quadcopter, which include a Siri-controlled flight system, the addition of remote-controlled missile launchers, and various range extenders, which replace the AR.Drone's Wi-Fi control with 3G-radio and RC controls. To help with the impossibly cool task of building autonomous spaceships of the future, simply fire up your rotors.
<urn:uuid:7e6bbb2e-15df-4498-a06a-abd9962cd44a>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2013/03/esa-launches-drone-app-to-crowdsource-flight-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00456-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939581
554
2.78125
3
The Greenland ice cap contains a staggering amount of water, which means that scientists have started keeping a nervous eye on it as conditions in the Arctic warm rapidly. If a significant fraction of the ice cap melts, ocean levels could rise by several meters. The precise factors that control the flow of ice from the cap into the ocean aren't entirely sorted out, leaving scientists to track glacial quakes and vanishing lakes in an effort to identify the factors that control this process. A paper that appeared in Nature Geoscience over the weekend suggests that different factors may be in play for different glaciers, and traces one back to its source: weather patterns over the Atlantic Ocean.

The glacier in question is the catchily-named Jakobshavn Isbræ, which lies on the west coast of Greenland. The glacier empties into a deep fjord, with the ice floating on top of several hundred meters of ocean water. Up until the late 1990s, the glacier followed the general trend of retreat that's been seen elsewhere. But, in 1997, something changed, as the Jakobshavn Isbræ began to thin and empty into the ocean far more rapidly, with a corresponding retreat deeper into the fjord.

The authors were able to tie this acceleration to a change in ocean temperatures through a bit of luck. It turns out the Greenland government became interested in the possibility of harvesting shrimp off the west coast around then, and started performing annual ocean surveys. The data from these surveys revealed that, starting in the early 1990s, there was a surge of warm water at depths of 200-500m that worked its way up the coast of Greenland. 1997 happened to be the year that it reached the mouth of the fjord into which the Jakobshavn Isbræ empties. Over a two-year period, water temperatures shot up from an average of 1.7°C to 3.3°C. The authors argue that this destabilized the underside of the floating glacier, causing its collapse and retreat.

What caused the surge of warm water? The authors trace that to a shift in a weather pattern called the North Atlantic Oscillation, which controls storm tracks and currents in the area. Once the shift occurred, a branch of the Gulf Stream called the Irminger Current strengthened, pushing the warm water up the coast.

Assuming the authors are right, there's an interesting consequence to this: Greenland's glaciers can't be viewed as a monolithic whole. Only those glaciers that extend above seawater over 200m deep are likely to register this current, and that's exactly what the authors find. If there are changes in the dynamics of the rest, we need to look for other causes. In the meantime, the authors note that the Jakobshavn Isbræ sits on deep water for "a considerable distance," so there may be more changes that are worth keeping an eye on.

Nature Geoscience, 2008. DOI: 10.1038/ngeo316
<urn:uuid:77071638-e6ce-44c9-88e2-4bf1093b208c>
CC-MAIN-2017-09
https://arstechnica.com/science/2008/09/following-a-glacial-retreat-to-its-cause/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953126
613
4.25
4
What do dropped packets on a traceroute mean?

What is a traceroute?

A traceroute is a program that traces the path a packet will take from point A to point B. It does this by sending a series of packets; when a router receives one, it replies with an error message, letting the program know that that router is in the path. The program sends these packets one after another until it gets a response from the final destination. This allows the program to string a series of these error messages together to build the logical path that a packet takes.

What information can I get from it?

A traceroute can be a very useful tool in pointing out a single failure in a network or congestion, and even sometimes loops. It allows you to see the route a packet will take to a specific IP and allows network engineers to find out if a specific router may be having an issue.

Why timeouts on a router aren’t always a bad thing: Control Plane vs. Data Plane

When routers receive packets addressed to them, they treat them much differently than when they receive packets not addressed to them. This is what is called the separation between the control plane (the brain of the router) and the forwarding plane (the arms of the router). Routers are very different machines from most servers; this is because they were designed to do one thing: route packets. To do this they have very, very strong arms (forwarding planes) but very weak brains (control planes) in terms of processing power.

When a router receives a packet that is not addressed to it, it will use its arms (forwarding plane) to push the packet out the correct interface; this is a very quick process (millionths of a second). When a router receives a packet that is addressed to itself, it interacts directly with the router's brain (control plane). This is a much slower process, and in the hands of a nasty user it can be used to attack a router and possibly bring it down. To counteract this, engineers designed the brain (control plane) to be completely separated from the arms (forwarding plane) and limited the number of packets that can be processed by the brain (control plane), so that it would be much more difficult for an attacker to overload it.

But what does this have to do with traceroutes? Well you see, when one of those error messages comes in to a router from a traceroute program, it is sent to the brain (control plane) of the router. These messages have a VERY low priority; this means that unless the router has literally nothing better to do, it will simply drop the message (timeout). This means that if there are lots of people sending traceroutes to that router (as there often is with high-traffic routers), then it will appear that that router is having issues when in reality there could be nothing wrong with it.

When ARE timeouts on a traceroute a bad thing?

There is only one real instance when a timeout on a traceroute is a bad thing: when you see timeouts that continue forward in the route. By that I mean when you see an individual timeout and then many more after that. There are two main instances when this can happen. The first and most common is that there is a firewall in the route that was configured to block these packets. The other instance is that a router is dropping packets going THROUGH it (i.e., forwarding-plane packets), and this can be a VERY bad sign. This is usually caused by one of three reasons: either the router is overloaded, the router is having a software or physical failure, or the router is configured to do this (null route/blackhole).
This should be brought to our team’s attention so that we can do our best to avoid this route, lessening the impact on your services, or investigate whether there is a null route or firewall in the way.

A Good Trace Route
- Even though there is a timeout in the middle, the packet still makes it all the way to the end (note the “Trace complete”).
- The last 3 hops are the most important: the packet comes into our edge (ten-7-4.edge1.level3.mco01.hostdime.com), then past our core (xe1-3-core1.orl.hostdime.com), then to the final hop.
- If a traceroute begins to time out after the core, then 99% of the time it's a firewall issue on the server itself.

A Bad Trace Route
- Even though there is a timeout in the middle, the packet still makes it another hop (126.96.36.199), meaning that the 4th-hop router is not the issue.
- We know from the last traceroute that (ae-1-8-bar1.orlando1.level3.net) is the next hop, meaning that's where the issue starts.
- Also note that the trace does not finish, meaning that this device is unreachable.

A Good Trace Route that appears bad due to a firewall
- Just like with both the good and the bad traceroute, there is a drop in the middle, but note that the packet still makes it all the way past both our core and edge, meaning it makes it into our network.
- 99% of the time, if it makes it past our edge and core then it is a firewall/physical issue.
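For readers who want to see the mechanism described above in code, here is a minimal sketch of how a traceroute-style probe works: send packets with an increasing TTL and record which router answers with the ICMP error. It is an illustration only, not part of HostDime's tooling or a replacement for the real traceroute program; it needs root privileges for the raw ICMP socket, and the hostname is a placeholder.

```python
import socket

def trace(dest_name, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest_name)
    print(f"traceroute to {dest_name} ({dest_addr})")
    for ttl in range(1, max_hops + 1):
        # One raw socket to catch ICMP replies, one UDP socket to send probes.
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recv_sock.settimeout(timeout)
        recv_sock.bind(("", port))
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        hop_addr = None
        try:
            # Routers along the way answer with ICMP "time exceeded";
            # the destination answers with ICMP "port unreachable".
            _, addr = recv_sock.recvfrom(512)
            hop_addr = addr[0]
        except socket.timeout:
            pass  # the "* * *" timeout case discussed above
        finally:
            send_sock.close()
            recv_sock.close()
        print(f"{ttl:2d}  {hop_addr or '*'}")
        if hop_addr == dest_addr:
            break  # trace complete

if __name__ == "__main__":
    trace("example.com")  # placeholder hostname
```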
<urn:uuid:38c6550c-a39c-4de2-96eb-9ba2b01e19c2>
CC-MAIN-2017-09
https://www.hostdime.com/resources/trace-routes-timeouts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00332-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937247
1,174
3.375
3