Windows and Mac OS get major updates every few years. Windows 7 arrives on October 22nd and Apple's Snow Leopard will show up in September. The Linux kernel, the heart of Linux distributions, however, gets updated every few months. What this means for you is that Windows and Mac OS take large, slow steps, while Linux is constantly evolving. Linux's changes may not be as big from version to version, but they tend to be more thoroughly tested and stable. What most users will like in this release starts with a faster boot-up for Linux.

1. Fast boot. Older versions of Linux spend a lot of time scanning for hard drives and other storage devices, and then for the partitions on each of them. This eats up a lot of milliseconds because the kernel looks for them one at a time. With the 2.6.30 boot-up, however, instead of waiting for this to finish, the rest of the kernel continues to boot. At the same time, the storage devices are checked in parallel, two or more at a time, to further improve the system's boot speed. There are other efforts afoot to speed up Linux's boot times. The upshot of all this work will be to keep Linux the fastest-booting operating system well into the future.

2. Storage improvements. Speaking of storage devices, there's a long laundry list of file system improvements. I won't go into most of those in detail. Suffice it to say that no matter what file system you use, either locally or on a network, chances are that its performance and stability have been improved. For a high-level view of these changes, see the Linux Kernel Newbies 2.6.30 reference page. I will mention one issue, though, simply because, as Jonathan Corbet, Linux kernel developer and journalist, put it, "Long, highly-technical, and animated discussion threads are certainly not unheard of on the linux-kernel mailing list. Even by linux-kernel standards, though, the thread that followed the 2.6.29 announcement was impressive." You can say that again. The argument... ah, discussion was over how file systems and block I/O (input/output) using the fsync() function in Linux should work. The really simple version of this discussion is that fsync() has defaulted to forcing the system to write the file system journal and related file data to disk immediately. Most I/O schedulers, though, prioritize reads over writes. On a non-journaling file system, that's not a big deal. But a journal write has to go through immediately, and it can take up a lot of time while it's doing so. On Ext3, probably the most widely used Linux file system, the result is that Ext3 is very stable, because it makes sure those journal writes go through, but at the same time it's very slow, once more because of those journal writes. You can argue almost endlessly over how to handle this problem, or even argue that the Ext3 fsync() behavior is perfectly fine. Linus Torvalds, however, finally came down on the side of making the writes faster. The arguments continue, though, on how to handle fsync(), and side discussions on how to handle file reads, writes, and creation continue as well. For users, most of this doesn't matter; developers who get down and dirty with file systems, though, should continue to pay close attention.
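As a concrete illustration of what fsync() means from an application's point of view, here is a minimal Python sketch; the file path and helper name are just examples, and the kernel-side journal behavior debated on the mailing list is not shown.

```python
import os

def write_durably(path: str, data: bytes) -> None:
    """Write data and force it to stable storage with fsync().

    On a journaling file system such as Ext3, the fsync() call below is
    what can trigger an immediate journal commit -- safe, but potentially
    slow, which is exactly the trade-off debated on the mailing list.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # block until the kernel reports the data is on disk
    finally:
        os.close(fd)

# Example use (the path is arbitrary):
write_durably("/tmp/fsync-demo.txt", b"must not be lost\n")
```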
3. Ext4 tuning. Linux's new Ext4 file system has been in the works for several years now. It's now being used in major Linux distributions like Ubuntu 9.04, and it's working well. That said, Ext4 has gotten numerous minor changes to improve its stability and performance. I've been switching my Linux systems to Ext4 over the last few months. If you've been considering making the switch, wait until your distribution adopts the 2.6.30 kernel, and give it a try. I think you'll be pleased.

4. Kernel Integrity Management. Linux is more secure than most other operating systems. Notice, though, that I say it's more secure. I don't say, and I'd be an idiot if I did, that it's completely secure. Nothing is in this world. The operating system took a big step toward making it harder for any would-be cracker to break it, though, with the introduction of Integrity Management. This is an old idea that's finally made it into the kernel. What it boils down to is that the kernel checks the integrity of files and their metadata when they're called by the operating system, using EVM (extended verification module) code. If a file appears to have been tampered with, the system can lock down its use and notify the administrator that mischief is afoot. While SELinux (Security-Enhanced Linux) is far more useful for protecting most users, I can see Integrity Management being very handy for Linux devices that don't get a lot of maintenance, such as Wi-Fi routers. Attacks on devices are beginning to happen, and a simple way to lock them down if their files have been changed strikes me as a really handy feature.

5. Network file system caching. How do you speed up a hard drive, or anything else with a file system on it for that matter? You use a cache. Now, with the adoption of FS-Cache, you can use caching with networked file systems. Right now it only works with NFS (Network File System) and AFS (Andrew File System). These network file systems tend to be used in Unix and Linux-only shops, but there's no reason why you can't use FS-Cache on top of any file system that's network accessible. I tend to be suspicious of network caching since it's all too easy to lose a network connection, which can leave a real mess between what the server thinks has been changed, added, and saved and what your local cache thinks has been saved. FS-Cache addresses this problem of cache coherency by using journaling on the cache, so you can bring the local and remote file systems back into agreement.

While 2.6.30 may not be the most exciting Linux kernel release, it does include several very solid and important improvements. Personally, I plan on switching my servers over to 2.6.30-based distributions as soon as they become available. If your concerns are mostly with the Linux desktop, though, I wouldn't be in that much of a hurry; most of the updates are more important for server administrators than desktop users.
If you use an Apple iPhone, iPad or other iDevice, now would be an excellent time to ensure that the machine is running the latest version of Apple's mobile operating system, version 9.3.1. Failing to do so could expose your devices to automated threats capable of rendering them unresponsive and perhaps forever useless.

Zach Straley demonstrating the fatal Jan. 1, 1970 bug. Don't try this at home!

On Feb. 11, 2016, researcher Zach Straley posted a YouTube video exposing his startling and bizarrely simple discovery: manually setting the date of your iPhone or iPad all the way back to January 1, 1970 will permanently brick the device (don't try this at home, or against frenemies!). Now that Apple has patched the flaw that Straley exploited with his fingers, researchers say they've proven how easy it would be to automate the attack over a network, so that potential victims would need only to wander within range of a hostile wireless network to have their pricey Apple devices turned into useless bricks.

Not long after Straley's video began pulling in millions of views, security researchers Patrick Kelley and Matt Harrigan wondered: could they automate the exploitation of this oddly severe and destructive date bug? The researchers discovered that indeed they could, armed with only $120 of electronics (not counting the cost of the bricked iDevices), a basic understanding of networking, and a familiarity with the way Apple devices connect to wireless networks.

Apple products like the iPad (and virtually all mass-market wireless devices) are designed to automatically connect to wireless networks they have seen before. They do this with a relatively weak level of authentication: if you connect to a network named "Hotspot" once, going forward your device may automatically connect to any open network that also happens to be called "Hotspot." For example, to use Starbucks' free Wi-Fi service, you'll have to connect to a network called "attwifi." But once you've done that, you won't ever have to manually connect to a network called "attwifi" again. The next time you visit a Starbucks, just pull out your iPad and the device automagically connects.

From an attacker's perspective, this is a golden opportunity. Why? He only needs to advertise a fake open network called "attwifi" at a spot where large numbers of computer users are known to congregate. Using specialized hardware to amplify his Wi-Fi signal, he can force many users to connect to his (evil) "attwifi" hotspot. From there, he can attempt to inspect, modify or redirect any network traffic for any iPads or other devices that unwittingly connect to his evil network.

For more detailed information please visit Krebs on Security – we listen to him, and so should you! Krebs is the source of the material above.
Sugar Labs has launched the first beta release of Sugar on a Stick, a new version of the open source Sugar Learning Platform that is designed to boot on conventional desktop computers and run directly from USB flash memory devices.

The Sugar software environment was originally created for the One Laptop Per Child project's XO laptop. It offers a unique child-friendly user interface and includes an assortment of applications that are geared towards education and collaboration. The platform is currently maintained by Sugar Labs, a nonprofit organization that was founded by former OLPC software director Walter Bender. The organization is building a more inclusive and community-driven culture around Sugar development and is working to bring the platform to a broader audience of students.

The Sugar on a Stick (SoaS) initiative is part of a broad effort to simplify deployment of the Sugar platform and make it more accessible to students. The developers aim to boost compatibility with off-the-shelf hardware and conventional desktop computers. The goal is to provide a self-contained Sugar environment that students can boot from USB thumb drives and run easily at school or at home.

The SoaS beta, which was released on Wednesday, includes version 0.84 of the Sugar environment on top of the recent Fedora 11 beta. It can be run from a 1GB USB thumb drive, which is also used to store the user's data. This allows the student to have a complete computing environment that is both persistent and portable. "Sugar is perfectly suited for children in the classroom with its simple, colorful interface, built-in collaboration, and open architecture. Sugar on a Stick lets you start a computer with Sugar and store a child's data on the stick without touching the host computer's hard disk," said Bender in a statement.

The beta includes several new activities, such as the InfoSlicer tool, which can be used by teachers to assemble bundles of Web content that can be edited and packaged for offline use. The beta also includes a new integrated IRC client and a command terminal.

I loaded the SoaS image in VirtualBox to test the new Sugar features. It ran flawlessly and performed reasonably well during my tests. I'm particularly impressed with the new source viewing utility, which will allow students to see and modify the Python code of the Sugar shell. For advanced students, this could significantly reduce the barriers to getting started with Sugar activity development. The platform offers several other very impressive tools to introduce young students to the basic principles of computer science. The turtle graphics visual programming environment reminded me of my own childhood experiences with Logo and Basic. I'm also very impressed with the collaboration features, which are practically ubiquitous throughout the entire platform.

Sugar has matured significantly since the last time I tested the software. The journal system, which serves as a replacement for a conventional filesystem, has also improved dramatically since prior versions. Although the enhanced journal system is beginning to look promising, I still find it to be too unintuitive.

Sugar Labs is encouraging parents and teachers to start testing the beta with children. Linux enthusiasts who want to contribute to the project are also invited to help with hardware compatibility testing, bug reporting, and many other tasks. Developers can participate by contributing to the core platform or creating new Sugar activities.
Sugar Labs has launched a new website (modeled after addons.mozilla.org) for hosting activities. Refer to the Sugar Labs wiki for instructions on how to download, install, and boot the SoaS environment.
If you feel like you need eyes in the back of your head, there's a crowdsourcing app for that. Zensors is a smartphone application that can monitor an area of interest by using a camera, crowdsourced workers and artificial intelligence.

Developed by researchers from Carnegie Mellon University and the University of Rochester, Zensors uses any camera in a fixed location to detect changes in what's being monitored -- for instance, whether a pet's food bowl is empty -- and automatically notify users. The developers say it's a cheap, accessible way to add sensors to the environment, part of the move toward building smart homes and smart cities.

The project, presented at the 2015 Computer-Human Interaction Conference (CHI) in Seoul this week, is based on simple user questions written in everyday language about the area being monitored. For example, a question could be: is there a car in the parking space? The presence of a car would trigger a positive response in the alert to the user, which could be sent via email or text message. The camera could be the image sensor in any mobile device, provided it has been set up to monitor something, or a webcam, security camera or any other connected camera. It will capture images at an interval set by the user.

Users first select a region of interest in the camera's view by circling it with a finger on a touchscreen -- that's intended to limit the surveillance and protect the privacy of people who might walk into part of the frame. Next, a question is input in the Zensors app, and the job of monitoring the images is farmed out to the Internet. Redundant images in which nothing has changed are automatically ignored. The people who do the initial monitoring could be staff at a call center or an outsourcing service such as Amazon's Mechanical Turk, which was used in the CMU study. When the monitors decide that the question has an affirmative answer, a graph in the app soon changes; it could also issue alerts to users.

Zensors gets interesting, however, when the process becomes automatic. After a certain period of human monitoring, machine learning algorithms in the software can learn when a certain condition has been met. For instance, they could learn to recognize that a pet's food bowl is empty. To ensure the algorithms' accuracy, the system would be periodically checked by workers, who could take a more hands-on role if the area monitored has an unexpected change. Computer vision tools can also be added to the data processing, allowing the system to perform tasks such as counting cars or people in a certain area.

In a demonstration, a smartphone running Zensors was placed face-up on a table. A question was keyed in: "Is there a hand?" After holding a hand over the phone's camera, the app's graph changed, showing that Mechanical Turk workers had answered from afar. The researchers blamed network latency for the fact that the answer took about 30 seconds.

With better responsiveness, Zensors could be used in a variety of business and home applications. A restaurant manager could use it to learn when customers' glasses need to be refilled, and security companies could use it for automatic monitoring. "We are the first ones, as far as I know, to fuse the crowd with machine learning training and actually doing it," said Gierad Laput, a PhD student at Carnegie Mellon's Human-Computer Interaction Institute, who also showed off new smartphone interfaces at CHI.

The cost of human monitoring is 2 cents per image, according to the researchers. It costs about US$15 worth of human-vetted data to train the algorithms so they can take over. By contrast, having a programmer write computer-vision software for a sensor that answers a basic yes or no question could take over a month and cost thousands of dollars. "Natural-language processing, machine learning and computer vision are three of the hardest problems in computer science," said Chris Harrison, an assistant professor of human-computer interaction at CMU. "The crowd lets us basically bypass a lot of that. But we just let the crowd do the bootstrapping work and we still get the benefits of machine learning." The researchers plan to keep improving the Zensors app, now in beta, and then release it to the public.
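Based purely on the pipeline described above, here is a rough, illustrative sketch of the crowd-then-machine-learning handoff. The capture_frame_features and ask_crowd functions are hypothetical placeholders, and the tiny nearest-centroid classifier is a stand-in for the real models; none of this is Zensors' actual code or API.

```python
import random
from typing import List, Tuple

# Hypothetical placeholders: the article does not show Zensors' real code.
def capture_frame_features() -> List[float]:
    """Stand-in for grabbing a camera frame and reducing it to features."""
    return [random.random() for _ in range(8)]

def ask_crowd(features: List[float]) -> bool:
    """Stand-in for sending a frame to human workers for a yes/no answer."""
    return sum(features) > 4.0

class TinyClassifier:
    """Nearest-centroid yes/no classifier, a stand-in for real vision models."""
    def fit(self, samples: List[Tuple[List[float], bool]]) -> None:
        yes = [f for f, label in samples if label]
        no = [f for f, label in samples if not label]
        self.c_yes = [sum(v) / len(yes) for v in zip(*yes)] if yes else None
        self.c_no = [sum(v) / len(no) for v in zip(*no)] if no else None

    def predict(self, f: List[float]) -> bool:
        if self.c_yes is None:
            return False
        if self.c_no is None:
            return True
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
        return dist(self.c_yes) < dist(self.c_no)

def monitor(frames: int = 200, bootstrap: int = 50, audit_every: int = 20) -> None:
    labeled, clf = [], None
    for i in range(frames):
        frame = capture_frame_features()
        if clf is None:                        # phase 1: humans answer every frame
            labeled.append((frame, ask_crowd(frame)))
            if len(labeled) >= bootstrap:
                clf = TinyClassifier()
                clf.fit(labeled)
        else:                                  # phase 2: the model answers ...
            if clf.predict(frame):
                pass                           # ... and an affirmative answer is
                                               # where the email/text alert would fire
            if i % audit_every == 0:           # periodic human spot checks
                labeled.append((frame, ask_crowd(frame)))
                clf.fit(labeled)

monitor()
```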
Tim Hornyak covers Japan and emerging technologies for The IDG News Service. Follow Tim on Twitter at @robotopia.
A Healthy Approach to the Internet of Things
By Samuel Greengard | Posted 2016-02-12

The IoT is providing a stream of products aimed at making our lives easier, better and healthier. But some of these items are creating an "Internet of Garbage."

As the Internet of things evolves from concept to reality, we're witnessing a growing stream of products and solutions aimed at making our lives easier, better and healthier. In many cases, as demonstrated by the recent Consumer Electronics Show, the net outcome is an "Internet of Garbage." Even the most optimistic or motivated person probably wouldn't see much value in auto-tightening shoes or vibrating yoga pants that correct your form.

To be sure, technology advances often occur in fits and starts. Today's fitness trackers are good, but not great. They provide valuable insights, but they're not entirely accurate or reliable. I've placed Bluetooth LE trackers on my keychain and other items in order to find them if they're misplaced. The trackers work reasonably well, though they're far from perfect. For instance, I've had the tracker's battery die without warning, which renders it useless when it's needed.

Advances From Connected Devices

On the other hand, connected devices could lead to enormous advances. For example, Swaive recently announced the world's first smartphone-enabled ear thermometer. The company partnered with Sickweather, an online community that provides maps showing where people have caught the flu or common colds. Users can share their anonymous data, and machine learning takes it from there. Already, Sickweather processes more than 6 million illness reports each month by using social media and other data. The ability to plug in connected devices increases the value immeasurably.

It's not difficult to extrapolate on this concept and think about public health experts and others using similar technology to map all sorts of other illnesses, diseases and afflictions. Ultimately, researchers and epidemiologists might better understand how to deal with various outbreaks. At the very least, hospitals and clinics would have a far better idea of where to direct supplies and resources, and the rest of us would know when and where it's riskier to head outside.

It's also possible to envision similar systems—most likely grabbing data from fitness devices and other wearables—to better understand and formulate policy for everything from eating to exercise. And health care insurance providers, which today treat everyone roughly equally, could design programs to fit demographics and reward individuals for meeting minimum daily step goals and eating healthier foods.

To be sure, the journey has just begun. Over the next few years, we will likely walk, run and skip past a lot of crazy and brilliant connected devices on the road to progress.
In the near future, individual customers are expected to routinely need symmetric bandwidth for Bi-Directional (BiDi) signals in optical transport networks, access networks, wireless backhaul networks, and private transmission networks. Network operators have to meet that need while making every effort to save on CAPEX and OPEX. The single wavelength BiDi transmission technology offers a unique solution to meet these apparently conflicting goals at the same time, particularly in access networks such as FTTx and in wireless backhaul networks between a base station and an antenna tower or a Remote Radio Head (RRH), compared with the two-wavelength BiDi transmission and the duplex transmission which are currently in use. This article presents the pros and cons of the competing technologies, the operating principles of the single wavelength transmission technology and its applications, and Fiberstore's BiDi transmission products.

A full duplex transmission technology uses a pair of fibers for simultaneous communication in both directions. For example, in a P2P access network, one fiber carries the downstream signal from the CO to the subscriber and the other carries the upstream signal from the subscriber to the CO. The optical transceivers at the two ends of a transmission link can be identical if one wavelength is used for both directions. However, the CAPEX and OPEX are much higher due to the cost of two fibers and their installation compared with the other BiDi technologies described below, which use a single fiber. This technology can be used in Wavelength Division Multiplexing (WDM) communication as well as in P2P communication.

A two-wavelength BiDi transmission system uses one fiber, but two wavelengths, for simultaneous communication in both directions. These wavelengths are widely separated from each other. For example, in a P2P access network, the downstream signal from the CO to a subscriber is at 1550 nm and the upstream signal from a subscriber to the CO is at 1310 nm. The fact that a different signal wavelength must be used in each direction of transmission imposes two disadvantages on network operators: two types of transceivers must be stocked and deployed, and a transceiver of the wrong type can easily be installed (see the notes under the comparison table below).

A single wavelength BiDi transmission system, on the other hand, uses one fiber and one wavelength for simultaneous communication in both directions. For example, in a P2P access network, the wavelength can be 1550 nm (or 1310 nm) for both downstream and upstream signals. This reduces CAPEX and OPEX for the network operators since they need to deploy only one kind of optical transceiver at 1550 nm (or 1310 nm). This also guarantees foolproof installation of transceivers without any confusion, since all the transceivers are identical and there is one fiber. In a WDM BiDi system, this is the only viable approach for providing each channel a fully bi-directionally dedicated (or symmetric) bandwidth. This technology may suffer from crosstalk and interferometric beat noise between the upstream and downstream signals, both caused by reflections at the interface between a transceiver and the channel link fiber when PC (or UPC) type connectors are used, which may impose a limit on the maximum allowable channel loss, or in other words, the maximum transmission distance. These reflections, however, can be mitigated by using APC type connectors.

Here is a table that summarizes the pros and cons of the various BiDi transmission technologies. The single wavelength BiDi clearly shows its own unique advantages over the two other competing technologies, two-wavelength BiDi and Duplex.
| | Single Wavelength BiDi | Two-Wavelength BiDi | Duplex |
|---|---|---|---|
| Transmission Distance Limited By | Return Loss / Allowable Channel Loss | Allowable Channel Loss | Allowable Channel Loss |
| Number of Fibers (P2P) | 1 | 1 | 2 |
| Minimum Number of Transceiver Types | 1 | 2 | 1 |
| Foolproof Installation of Transceiver | Yes | No | No |

Notes: If PC or UPC type connectors are used, the transmission distance may be limited by return loss; if APC type connectors are used, the transmission distance is limited mainly by the allowable channel loss. There is always a chance that the wrong type of transceiver will be installed if more than one type is available. Each duplex transceiver has two optical receptacles, one for the Tx and the other for the Rx, so there is always a chance that the Tx at the CO is connected to the fiber intended for the upstream signal from the subscriber. TDM for one direction (e.g. upstream) is necessary, and CAPEX and OPEX are high due to the two pairs of optical MUX and DEMUX required for a link.

The single wavelength BiDi transmission technology allows simultaneous communication in both directions over a single fiber at almost the same wavelength. Here is a figure that shows a simple example of such a transmission system: a P2P optical communication system composed of an OutSide Plant (OSP) fiber link over a single fiber as the transmission medium and identical transceivers at both ends of the fiber link. The signal wavelengths from the two transceivers, the downstream signal from Tx 1 and the upstream signal from Tx 2, are very close to each other, which explains why this approach is named "single wavelength BiDi transmission."

The single wavelength BiDi transmission technology finds broad application in optical transport networks, access networks such as FTTx networks, wireless backhaul networks, and private transmission networks, even though the transmission distance may be limited since most deployed optical transmission networks are equipped with PC type connectors and might have finite reflections. However, it may still be very attractive for P2P and WDM transmission systems with distances up to 20 km because of its unique advantages over the other technologies. Furthermore, the transmission distance can be extended to as much as 120 km once the reflections are minimized using APC connectors.

In the WDM BiDi transmission application, this single wavelength BiDi transmission is the only viable approach for providing each channel a fully bi-directionally dedicated (or symmetric) bandwidth. The two-wavelength BiDi transmission technology cannot allocate each channel a fully dedicated bandwidth in both directions simultaneously, since all the subscribers must share a common wavelength in one direction, e.g. 1310 nm upstream with TDM technology.

The single wavelength BiDi transmission technology is also well poised to support wireless backhaul networks, such as links between a CO and a base station, between a base station and an RRH connected through an optical WDM BiDi system (shown in the figure below), between a base station and the many picocells along the streets in metropolitan areas, and between a base station and the antennas on a tower.

The single wavelength bi-directional transmission can be very cost-effective for a P2P system with a link length up to 20 km and for a WDM system with a link length up to 120 km. The transmission distance can be extended much further once the reflections are minimized using APC connectors.
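As a rough illustration of how "allowable channel loss" translates into reach, here is a simple power-budget calculation. The launch power, receiver sensitivity, penalty and fiber-loss figures below are typical assumed values for illustration only, not numbers taken from this article or from Fiberstore's datasheets.

```python
# Illustrative power budget for a single-fiber BiDi link (assumed values).
TX_POWER_DBM         = 0.0    # transceiver launch power
RX_SENSITIVITY_DBM   = -24.0  # minimum detectable receive power
PENALTIES_DB         = 3.0    # margin for connectors, splices, reflection/crosstalk
FIBER_LOSS_DB_PER_KM = 0.2    # single-mode fiber near 1550 nm

allowable_channel_loss = TX_POWER_DBM - RX_SENSITIVITY_DBM - PENALTIES_DB
max_reach_km = allowable_channel_loss / FIBER_LOSS_DB_PER_KM

print(f"allowable channel loss: {allowable_channel_loss:.1f} dB")
print(f"approximate reach:      {max_reach_km:.0f} km")
```

With these assumed numbers the budget works out to roughly 100 km, which is the kind of arithmetic behind reach claims such as the 20 km and 120 km figures above; stronger reflections (PC/UPC connectors) effectively raise the penalty term and shorten the reach.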
This technology will be the only viable solution for WDM BiDi systems when each channel needs fully bi-directionally dedicated bandwidth. In particular, it is well poised to serve wireless backhaul networks and meet the ever-increasing demand for bandwidth and traffic volume.
The front end: branch prediction

For reasons of both performance and power efficiency, one of the places where Intel spent a ton of transistors was on Core's branch predictor. As the distance (in CPU cycles) between main memory and the CPU increases, putting precious transistor resources into branch prediction hardware continues to give an ever larger return on investment. This is because when a branch is mispredicted, it takes a relative eternity to retrieve the correct branch target from main memory; during this lengthy waiting period, a single-threaded processor must sit idle, wasting execution resources and power. So good branch prediction isn't just a matter of performance, but also a matter of conserving power by making the most efficient possible use of processor cycles.

Core essentially uses the same three-part branch predictor developed for the Pentium M. I've previously covered the Pentium M's branch predictor in some detail, so I'll just summarize the features here. At the heart of Core's branch prediction hardware are a pair of predictors, one bimodal and one global, that record information about the most recently executed branches. These predictors tell the front end how likely a branch is to be taken based on its past execution history. If the front end decides that the branch is taken, it retrieves the branch's target address from the branch target buffer (BTB) and begins fetching instructions from the new location.

Core's bimodal and global predictors aren't the only branch prediction structures that help the processor decide if a branch is taken or not taken. The new architecture also uses two other branch predictors that were first introduced with the Pentium M: the loop detector and the indirect branch predictor.

The loop detector

Loop exit branches are only taken once (when the loop terminates), which means that they're not taken for a set number of times (i.e., the duration of the loop counter). The branch history tables used in normal branch predictors don't store enough branch history to be able to correctly predict loop termination for loops beyond a certain number of iterations, so when the loop terminates they mispredict that it will keep going based on its past behavior. The loop detector monitors the behavior of each branch that the processor executes in order to identify which of those branches are loop exit conditions. When a branch is identified as a loop exit, a special set of counters is then used to track the number of loop iterations for future reference. When the front end next encounters that same loop exit branch, it knows exactly how many times the loop is likely to iterate before terminating. Thus it's able to correctly predict the outcome of that branch with 100 percent accuracy in situations where the loop runs for the same number of iterations as before.

Core's branch prediction unit (BPU) uses an algorithm to select, on a branch-by-branch basis, which of the branch predictors described so far (bimodal, global, loop detector) should be used for each branch.

The indirect branch predictor

Because indirect branches load their branch targets from a register, instead of having them immediately available as is the case with direct branches, they're notoriously difficult to predict. Core's indirect branch predictor is a table that stores history information about the preferred target addresses of each indirect branch that the front end encounters.
Thus when the front end encounters an indirect branch and predicts it as taken, it can ask the indirect branch predictor to direct it to the address in the BTB that the branch will probably want.
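To make the loop detector's behavior concrete, here is a deliberately simplified software model of the idea described above. It is a conceptual sketch only, not a description of Intel's actual hardware, table sizes, or counter widths.

```python
class LoopDetector:
    """Toy model of a loop-exit predictor.

    It remembers how many consecutive 'taken' outcomes a branch produced
    before a 'not taken' ended the loop, then predicts that same trip
    count the next time the branch is seen.
    """
    def __init__(self):
        self.current = {}   # branch address -> taken count this time through
        self.learned = {}   # branch address -> trip count observed last time

    def predict(self, addr: int) -> bool:
        trips = self.learned.get(addr)
        if trips is None:
            return True                      # no history yet: assume taken
        return self.current.get(addr, 0) < trips

    def update(self, addr: int, taken: bool) -> None:
        if taken:
            self.current[addr] = self.current.get(addr, 0) + 1
        else:                                # loop exited: remember the trip count
            self.learned[addr] = self.current.get(addr, 0)
            self.current[addr] = 0

# The backward branch of a 100-iteration loop is taken 100 times, then falls
# through; after one full pass the detector predicts the exit exactly.
ld = LoopDetector()
mispredicts = 0
for trip in range(2):                        # run the "loop" twice
    for i in range(101):
        taken = i < 100
        if ld.predict(0x400123) != taken and trip > 0:
            mispredicts += 1                 # count only once history exists
        ld.update(0x400123, taken)
print("second-pass mispredictions:", mispredicts)   # prints 0
```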
IRENE: Key to unlocking mute recordings
By William Jackson - Feb 08, 2012

The idea is simple: make a high-resolution digital image of a sound recording and develop software to analyze the image and reproduce the effects of a phonograph needle. Scientists at the Lawrence Berkeley National Laboratory and the Library of Congress have brought the concept to a practical application over the past 10 years, developing a system to digitally recover and preserve rare and damaged recordings without the risk of doing additional damage.

"The approach evolved naturally out of methods of optical metrology, pattern recognition, image processing and data analysis we use for physics research," said Carl Haber, a high-energy physicist at Lawrence Berkeley. The result is the Image, Reconstruct, Erase Noise, Etc. system, or IRENE, now being used by the Library of Congress. A version capable of two-dimensional imaging is used for disc recordings with lateral grooves, and three-dimensional imaging is used for cylinders with vertical groove modulation. The 2-D version also works for imaging and playing back optically recorded audio files, such as those on early experimental recordings and on some motion picture soundtracks.

The workstations are helping to preserve hundreds of recordings at the library's audio-visual campus in Culpeper, Va., and are being used in research on preservation of historic one-of-a-kind recordings such as those held by the Smithsonian National Museum of American History. The Berkeley Optical Sound Restoration Project is being conducted with funding from several government agencies, along with grants from private-sector associations supporting libraries and the humanities.

IRENE is evolving toward a fully automated system capable of handling imaging exposure, focus, start-stop, parameters and selection. Software then combs over the image to remove obvious faults such as dirt, scratches and wear. Algorithms track and determine the center point of the imaged grooves and "play back" the record with a virtual stylus, producing a digital .wav file without touching the original recording.

Although it is becoming more automated and user-friendly, its developers do not see a consumer market for IRENE. "This tool finds its home in laboratories, museums and libraries," Haber said. In addition to the LOC installations, there is a system at the Lawrence Berkeley Lab, and another is expected to be installed at the University of Chicago's South Asia Library in Chennai, India.

Although it is not likely to be available at Best Buy anytime soon, IRENE still has a lot of work left to do, Haber said. "There are a dozen major sound collections or more around the world. We have only scratched the surface in terms of the major historical collections."

William Jackson is a Maryland-based freelance writer.
Information security people think that simply making users aware of security issues will make them change their behavior. But security pros are learning the hard way that awareness rarely equals change.

One fundamental problem is that most awareness programs are created and run by security professionals, people who were not hired or trained to be educators. These training sessions often consist of long lectures and boring slides--with no thought or research put into what material should be taught and how to teach it. As a result, organizations are not getting their desired results and there's no overall progress.

To solve this puzzle, it's important to step back and understand how people most effectively learn subject matter of any type. The science of learning dates back to the early 1950s, and its techniques have been proven over time and adopted as accepted learning principles. Applied to information security training, these techniques can provide immediate, tangible, long-term results in educating employees and improving your company's overall security posture.

1. Serve small bites. People learn better when they can focus on small pieces of information that the mind can digest easily. It's unreasonable to cover 55 different topics in 15 minutes of security training and expect someone to remember it all and then change their behavior. Short bursts of training are always more effective.

2. Reinforce lessons. People learn by repeating elements over time--without frequent feedback and opportunities for practice, even well-learned abilities go away. Security training should be an ongoing event, not a one-off seminar.

3. Train in context. People tend to remember context more than content. In security training, it's important to present lessons in the same context as the one in which the person is most likely to be attacked.

4. Vary the message. Concepts are best learned when they are encountered in many contexts and expressed in different ways. Security training that presents a concept to a user multiple times and in different phrasing makes the trainee more likely to relate it to past experiences and forge new connections.

5. Involve your students. It's obvious that when we are actively involved in the learning process, we remember things better. If a trainee can practice identifying phishing schemes and creating good passwords, improvement can be dramatic. Sadly, hands-on learning still takes a backseat to old-school instructional models, including the dreaded lecture.

6. Give immediate feedback. If you've ever played sports, it's easy to understand this one. "Calling it at the point of the foul" creates teachable moments and greatly increases their impact. If a user falls for a company-generated attack and gets training on the spot, it's highly unlikely they'll fall for that trick again.

7. Tell a story. When people are introduced to characters and narrative development, they often form subtle emotional ties to the material that helps keep them engaged. Rather than listing facts and data, use storytelling techniques. (Editor's note: see, for example, How to rob a bank.)

8. Make them think. People need an opportunity to evaluate and process their performance before they can improve. Security awareness training should challenge people to examine the information presented, question its validity, and draw their own conclusions.

9. Let them set the pace. It may sound cliche, but everyone really does learn at their own pace. A one-size-fits-all security training program is doomed to fail because it does not allow users to progress at the best speed for them.

10. Offer conceptual and procedural knowledge. Conceptual knowledge provides the big picture and lets a person apply techniques to solve a problem. Procedural knowledge focuses on the specific actions required to solve the problem. Combining the two types of knowledge greatly enhances users' understanding. For example, a user may need a procedural lesson to understand that an IP address included in a URL is an indication that they are seeing a phishing URL. However, they also need the conceptual understanding of all the parts of a URL to understand the difference between an IP address and a domain name, otherwise they may mistake something like www4.google.com for a phishing URL.
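As a concrete companion to point 10, here is a small, illustrative check for the procedural cue described there (a bare IP address used as a URL's host). The function name and example URLs are illustrative only, and this is a teaching aid rather than a real phishing detector.

```python
from urllib.parse import urlparse
import ipaddress

def hostname_is_ip(url: str) -> bool:
    """Return True when the URL's host is a bare IP address -- the
    procedural cue from point 10. Real phishing detection needs far more."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        ipaddress.ip_address(host)   # raises ValueError for ordinary domain names
        return True
    except ValueError:
        return False

print(hostname_is_ip("http://192.0.2.10/login"))    # True  -> suspicious cue present
print(hostname_is_ip("https://www4.google.com/"))   # False -> a domain, not an IP
```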
Joe Ferrara is president and CEO of Wombat Security Technologies. This story, "Ten commandments for effective security training" was originally published by CSO.
The interplay of size and time may make carbon nanotubes the answer to the computer industry's prayers as it grapples with pressure to make silicon chips ever-smaller. Or the same factors may turn CNTs into a technological dead end.

Size refers to the dimensions of carbon nanotubes (CNTs) vs. the shrinking geometry of the components on today's silicon chips. A CNT is basically a tube whose wall is 1 carbon atom thick. The tube itself is 1 nanometer (nm, or one billionth of a meter, or one-thousandth of a micron) in diameter, although it can be tens of microns long. Although made of carbon, single-wall CNTs are excellent conductors thanks to quantum conductance, which allows electrons to propagate along the length of the tubes.

Time refers to the progression of Moore's Law, an observation by Intel co-founder Gordon Moore that the number of components on a chip can be expected to double every two years, without an increase in price. According to that, about eight more years from now silicon technology, which has reached 14nm geometry, will reach the atomic level. At that time, presumably, the industry will no longer be able to uphold Moore's Law by making silicon components continually smaller. Will CNTs, with their 1nm geometry, be ready by then?

"We feel that CNTs have a chance to possibly replace silicon transistors sometime in the future -- if critical problems are solved," says Supratik Guha, director of physical sciences at IBM Research. "I am hopeful that CNTs will be used one day," says Max Shulaker, a Stanford graduate student who serves as its spokesman for CNT research.

Tiny roadblocks with huge impacts

The problem of laying down chip circuits with CNTs that match the size of silicon components hasn't been solved, Guha notes. Since individual CNTs don't carry enough current for a functional transistor, five or six parallel tubes are required for one connection. The tubes must be laid 6nm or 7nm apart to minimize interference, but greater separation would waste space. "Currently we are able to space them 100nm apart, so an order of magnitude improvement is needed," Guha says. "This is where we need some new thinking. With the current separation there is no advantage over silicon."

A silicon circuit that's more than 30nm wide isn't an issue, as metal traces in 14nm devices are about 50nm wide, notes semiconductor industry analyst Linley Gwennap, head of The Linley Group. "There is nothing in a 14nm process that actually measures 14nm," he says, adding that the fin, or main body, of an Intel transistor at the 14nm level measures 10nm.

IBM's Guha says he's also concerned about the purity of the CNT fibers. Circuit construction requires single-walled CNTs, while tubes with two or more walls have different electrical characteristics, and their presence constitutes an impurity. "We need to be 99.999% pure" -- in other words, requiring single-wall nanotubes -- "and now we are at 99.9%," says Guha. "We are getting there, and I am confident we will fix the problem."

Beyond that, Stanford's Shulaker says the main obstacle to the commercialization of CNTs is the need to improve their contact resistance, or in other words, their connectivity with other conductors used in the system, like silicon and copper. The connection points are tiny and therefore create electrical resistance that requires additional voltage to overcome and operate the system, he explains. The issue is also present with silicon, but silicon designers have been working on solutions for decades, he adds.
Shulaker also sees the need for better "doping" of the CNTs that are to be used as transistors. Doping is the intentional introduction of certain impurities to control the item's electrical properties so it can function as a transistor. "It took years to refine doping with silicon," says Shulaker. "With CNTs we are at the stage where silicon was when it started."

The problem with potential silicon replacement technologies like CNTs "is that you can do pretty cool things in the lab with them. But putting billions of them on a chip and trying to crank out millions of chips per month is a different problem. CNTs look promising in the lab, but they must solve the problem of building them in a production environment," says Gwennap.

On a tight deadline

But solving CNTs' various problems must happen within a specific time frame, or the technology may as well be dropped as far as semiconductor progress is concerned. With chip technology now at the 14nm level, in two years it will reach the 10nm level, in four years the 7nm level, and then maybe the 5nm level in six years, Guha explains. But 5nm is about the width of 20 silicon atoms, so shrinking dimensions below 5nm may be difficult, barring the discovery of some way to manipulate individual atoms. "We have another maybe three generations of technology left -- maybe four if you are really optimistic. After that improvements in silicon will cease," predicts Guha.

CNTs, of course, are at the 1nm level. But for the industry to adopt the technology, its problems must be resolved in time for planners to add it to their production road maps; before they make chips, they have to build factories. Consequently, "we need to demonstrate the practicality of CNT technology in the next two to three years, or the window of opportunity will close and the technology will not be there when needed," says Guha. If the problems can be solved, "We could see commercial products in six or seven years, at the earliest," Guha says. "Or development could drag on for a decade, or the technology may never become economical."

"At this point it looks like standard [silicon] transistors are solid enough to last to at least 7nm and perhaps 5nm," agrees Gwennap. "CNTs could come into production in six to eight years, maybe, which is pretty far out, but it's on a list of things people are looking at to replace standard transistors."

Not everyone believes it's possible to get there in time. "I don't see CNTs in under seven years, and even 10 years is farfetched," says David Kanter, senior editor at The Linley Group's Microprocessor Report. "What will be in production two to four years from now has already been selected, and anyone who claims to see further than 10 years ahead is not credible, to use the G-rated way of saying it," he adds.
By the end of 2016, there are expected to be 6.4 billion connected "things" throughout the world – a number that is up a whopping 30 percent from 2015. By 2020, we should expect the number of connected things to be up to 20.8 billion. This amounts to over 5.5 million new things connected every single day. These "things" – or devices, machines, and objects – are connected to the Internet in our homes, workplaces, schools, cities, and cars, helping us live smarter, more efficient, and more streamlined lives.

The rise of IoT

As an IoT company, it's important for us to keep our finger on the pulse of trends within all things IoT. One of the most interesting trends we've been tracking throughout the last couple of months (thanks to Google Trends) is the significant spike in Google searches for "What is IoT." Searches for simply "IoT" are up significantly, too, solidifying that people throughout the world are starting to explore what exactly the IoT revolution is and how it is going to affect their lives.

When we took a step back, we realized it makes a lot of sense for individuals to begin explicitly questioning "What is IoT," as IoT is a revolution that has become ingrained in our day-to-day life seemingly right under our noses. Whether it's a fridge texting us to let us know we need to stop at the store on the way home, a thermostat being able to sense when we aren't in a room and respond with a temperature change, or a smartphone unlocking our office door, IoT has made its way into our lives as something that feels like an added "feature" to a product, when it is actually a revolutionary shift in a product's capability.

What is IoT?

Because IoT has so many applications across a great deal of industries, it's hard to settle on just one definition of IoT. In order to get the best idea of what IoT is, we gathered definitions from industry leaders and experts at some of the most innovative and forward-thinking IoT companies and publications throughout the world. These individuals are at the heart of the IoT revolution, and are responsible for many of the IoT-enabled products you see and use today. We simply asked them to answer the question "What is IoT?" in their own words, without any limit to how long or short the definition needs to be. The responses from these industry leaders and experts are below in no particular order.

1. Michael Crawford, Partner, Q Advisors
2. Alex Davies, Analyst and Editor, Rethink IoT
3. Vitaly M. Golomb, Investments and Partnerships, HP Tech Ventures
4. Bernhard Mehl, CEO, Kisi
5. Saverio Romeo, Principal Analyst, Beecham Research
6. Dr. Mazlan Abbas, CEO, REDtone IoT
7. Daniel Burrus, Founder and CEO, Burrus Research, Inc.
8. Joao Marques Lima, Journalist, Computer Business Review
9. Ali Sheikh, Senior Associate, Konica Minolta BIC
10. Scott Nelson, CEO/CTO, Reuleaux Technologies
11. Jessica Groopman, Research Director and Principal Analyst, Harbor Research
12. Kurt Nehrenz, Co-founder and VP Technology, BlueCats
13. Kayiita Johnson, Major Account Technical Sales Rep, Texas Instruments
14. John Myers, Managing Research Director, Enterprise Management Associates
15. Alexandra Deschamps-Sonsino, Founder, Designswarm
16. Murdoch Fitzgerald, VP of Supplier Marketing, Arrow Electronics
17. Nicholas Joshi, Director of Customer Advocacy, MakerBot
18. Rob van Kranenburg, Founder, Council
19. Ken Herron, Chief Marketing Officer, Unified Inbox
20. Toby Ruckert, Founder and CEO, Unified Inbox
21. Davienne Dente, Senior Account Executive, T-Mobile@Work

The most frequently used terms and themes found in the definitions from our experts are below.

What is IoT? The definitions

Michael Crawford, Partner, Q Advisors
IoT is a system of capturing, transmitting, managing and analyzing data in order to monitor events, identify relationships, predict outcomes and improve performances. It's digital origami through which otherwise flat bits and bytes take on a useful and informative shape.

Alex Davies, Analyst and Editor, Rethink IoT
Adding connectivity, whether direct or indirect, to a previously unconnected object, and deriving a value from that connection.

Vitaly M. Golomb, Investments and Partnerships, HP Tech Ventures
Much like the letter "e" was attached to many new business models during the dot com wave, IoT is a term being used to describe some ineffable, Internet-connected future. The reality is that devices will increasingly be pervasively connected to the Internet. We are going through the first wave of this phenomenon now.

Bernhard Mehl, CEO, Kisi
Machines communicating with each other and their surrounding environments to help humans, other machines, and applications make smarter, more efficient decisions.

Saverio Romeo, Principal Analyst, Beecham Research
The question "What is the IoT?" inevitably takes me to Mark Weiser's seminal paper "The Computer for the 21st Century," published in 1991. The paper starts with the famous statement "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they become indistinguishable from it." From there, the paper is a marvellous jump into the future, a jump into the days we are living and the ones to come! The Internet of Things vision fuses physical spaces and digital (or virtual) spaces through a plethora of devices that disappear into the context around us, or into our body, or into the objects that we use or may use. All this defines new lifestyles and new modi operandi that efficiently and sustainably lead to better life conditions and new ideas.

Dr. Mazlan Abbas, CEO, REDtone IoT
The ultimate goal of IoT is to automate your digital lifestyle and propel your business.

Daniel Burrus, Founder and CEO, Burrus Research, Inc.
The Internet of Things (IoT) is a combination of networked sensors and machines that enable machine-to-machine communications. Enabling technologies include the Internet, advanced cloud services, wired and wireless networks, and data-gathering sensors, making the system instantaneous anywhere, anytime. Advantages of IoT include the ability to monitor and control, real-time asset management, faster response times, major cost savings and, perhaps the biggest advantage, the ability to predict and prevent. IoT will create one of the biggest disruptions and opportunities we have seen in every imaginable field.

Joao Marques Lima, Journalist, Computer Business Review
The IoT is now moving away from the embryo stage into the real world. It is about connecting things (not only physical objects, people, animals, etc., but also services) to the internet. Simply put, if it has an IP address, an identifier and an internet connection, then it is an IoT-enabled product (hence all those giant stats we get bombarded with all the time). These products send out and receive data from different sources, with the cloud being a crucial part of the whole IoT ecosystem. This architecture generally also includes IoT software, sensors, gateways, and any other sort of hardware needed. Yet, IoT only makes sense if it can provide real value to the end-user, an operator or a manufacturer.
A consumer example is a smart wallpaper that can call for the right emergency services in case of a home accident. Are we going to keep calling it IoT? Only the future will tell.

Ali Sheikh, Senior Associate, Konica Minolta BIC
The Internet of Things is a network that makes [dumb] physical devices smart by allowing these devices to communicate with each other and make various decisions without the need for human interaction.

Scott Nelson, CEO/CTO, Reuleaux Technologies
The Internet of Things is a technology-business ecosystem wherein real world activity and situational data from things are collected from sensors and digital infrastructure, especially wireless infrastructure. This data is processed in a contextual understanding that enables companies to achieve greater value through improved operational performance, better customer service, and/or new solutions to customer needs.

Jessica Groopman, Research Director and Principal Analyst, Harbor Research
The interconnection and interaction of the digital and physical worlds, wherein uniquely identifiable embedded technology connects and integrates physical 'things' (i.e. objects, people, devices, machines, infrastructure, systems, etc.) to information networks via existing and emerging Internet infrastructure. Put simply, IoT is a platform for connecting people, objects, and environments to inform and enable visibility, interactions, and innovation. In two words:

Kurt Nehrenz, Co-founder and VP Technology, BlueCats
The Internet of Things is an ecosystem currently under construction which will allow diverse and widespread information gathering and informed decision making. It is enabled by discrete, economical sensors that utilize long range, lightweight data transfer protocols, and low power cloud connected gateways which route an enormous amount of information to central systems which can raise alerts, analyze, and take action. This capability is driving efficiency savings and new opportunities through trend analysis and rapid reaction to the state of the endpoints in the connected ecosphere. The world changed when the Internet allowed people to maintain a constant state of connectivity – and the IoT revolution represents the same leap forward.

Kayiita Johnson, Major Account Technical Sales Rep, Texas Instruments
The Internet of Things has the potential to be the next stage of the mobile revolution. In order for that to happen, there needs to be a continued commitment to innovation – not only from startups, but also from the big infrastructure players. Construction firms, Internet providers and governments of all jurisdiction sizes around the world must be committed to creating opportunities for IoT companies. IoT can be as meaningless as certain areas of the Internet, or as useful as others, and our goal is to enable the IoT companies making a lasting impact in the world, on companies and consumers.

John Myers, Managing Research Director, Enterprise Management Associates
The Internet of Things (IoT) is an interconnected web of sensor-enabled devices that communicate with each other and with a series of intermediary collection points. This web of devices provides sensor information on device operation, status and location. However, the true value of the Internet of Things goes beyond simple interconnected operational status communications. The value of the Internet of Things is the ability to collect, analyze and act upon the information that flows from and between devices to create optimized scenarios of interaction between the IoT devices.
For example, in a connected car scenario, where there may or may not be a driver, IoT promises to reduce congestion and improve transportation safety through decision making based on historical and real-time information.

The Internet of Things defines the potential new business services, products and interactions offered by embedding hardware technologies and connectivity (web/mobile/radio) in previously unconnected physical products and spaces.

VP of Supplier Marketing, Arrow Electronics: IoT is a movement that is driving transformation and influencing business outcomes, all enabled by technology. Arrow Electronics is a technology partner creating end-to-end IoT solutions to solve business challenges.

Director of Customer Advocacy, MakerBot: The IoT revolution is the idea of all devices connected to one universal network. This concept will revolutionize how people interact with technology and, ultimately, each other.

The Internet of Things is a horizontal operation affecting all domains, infrastructures and institutions, much like the Internet has done with the browser (1993). It is driven by a combination of logistics technologies (RFID, barcodes, NFC, QR codes, smart tags) that aims to tag every object on the planet, and IPv6, which will add IP functionality to anything that can hold software (from toothbrushes to cars and washing machines to lamps). It is not new. From the 70s onward, ubicomp, pervasive computing and ambient intelligence brought us smart offices, gadgets and transport, but as there was no cloud, all projects remained demos. From 2000 onward, the cloud has enabled IoT. That explains the speed and momentum it is gathering now. Operationally you can understand it if you look at what Google is doing with Glass, Nest, the Google Car and Alphabet. It links up the data in health, home, mobility and city, or Body Area Network, Local Area Network, Wide Area Network and Very Wide Area Network, or wearables, smart home, connected car and smart city. Their products are gateways linking up these networks so the end users always stay in the Google cloud. Only with such deep integration will the real benefits of IoT become apparent to us as citizens and end users: the best feedback on mental and physical health, and the best real-time resource allocation (best deals) in home, mobility and public matters. Governments are beginning to see that such a model is the only way to guarantee public and inclusive services. The US is looking into cyber-physical systems, China into its own China OS, and Singapore is building a cybernetic Smart Nation. We have seen technology affecting our daily lives in just about every operation; it will start changing the nature of politics soon. At that point it becomes clear that it is much more than a back-end operation or adding analytics to big data; it is actually a new form of democracy itself. That is why we need public debate, as well as the experts, the engineers, taking more responsibility for what they are building.

Chief Marketing Officer, Unified Inbox: IoT is when I have my house, my office, and my car in my iPhone's Contacts, and I can communicate with them (i.e., email, text, tweet, and WhatsApp) just as easily as I can a person.

Founder and CEO, Unified Inbox: IoT is for Artificial Intelligence (AI) what the Internet was for e-commerce. One day we may ask whether new things were born from the Internet or whether it was things that created a new type of Internet. Perhaps both. We may then find it hard to separate between human, Internet and machine.
Ultimately, IoT is ushering in a new era between nature, universe and technology.

Davienne Dente, Senior Account Executive, T-Mobile@Work: The promise of IoT is to enable everything that simplifies our lives to communicate. It's the concept of a giant network giving us control or delivering information to us from multiple sources. It could be the simplest of objects, like lightbulbs, or larger ones, such as appliances. It is allowing us to improve our experiences with things we already encounter in our day-to-day lives. Wireless has become the primary means for "things" to communicate. We are happy to be at the forefront of this technology, and excited about the future that is already here.
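Several of the definitions above describe the same basic pipeline: low-power sensors feed a gateway, which forwards readings to a central system that analyzes the stream and raises alerts or takes action. The following Python sketch is a minimal, hypothetical illustration of that pattern; the device names, the temperature threshold and the forward_to_cloud method are assumptions made for the example, not any vendor's actual API.

```python
# Minimal sketch of the sensor -> gateway -> central-system pattern
# described above. All names and thresholds here are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    device_id: str
    metric: str        # e.g. "temperature_c"
    value: float

class Gateway:
    """Batches local sensor readings and forwards them upstream."""
    def __init__(self):
        self.buffer: List[SensorReading] = []

    def collect(self, reading: SensorReading) -> None:
        self.buffer.append(reading)

    def forward_to_cloud(self) -> List[SensorReading]:
        # A real gateway would publish over MQTT or HTTP;
        # here we simply hand the batch to the central system.
        batch, self.buffer = self.buffer, []
        return batch

def central_system(batch: List[SensorReading], limit_c: float = 30.0) -> None:
    """Analyzes the stream and raises alerts on out-of-range values."""
    for r in batch:
        if r.metric == "temperature_c" and r.value > limit_c:
            print(f"ALERT: {r.device_id} reported {r.value} C (limit {limit_c} C)")

if __name__ == "__main__":
    gw = Gateway()
    gw.collect(SensorReading("sensor-01", "temperature_c", 22.5))
    gw.collect(SensorReading("sensor-02", "temperature_c", 31.2))
    central_system(gw.forward_to_cloud())
```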
<urn:uuid:515c81aa-cd0d-4241-b575-83b878cc58fa>
CC-MAIN-2017-09
https://blog.getkisi.com/what-is-iot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00033-ip-10-171-10-108.ec2.internal.warc.gz
en
0.924864
3,061
2.640625
3
In Macon, Ga., a Mercer University professor is helping NASA and the Federal Aviation Administration develop technology they hope will make traffic control on airport runways and taxiways safer and more efficient. Behnam Kamali, professor of electrical and computer engineering, has been working with the new technology that's called Aeronautical Mobile Airport Communications System, or AeroMACS, since 2010. The new system could allow for better airport management, fewer delays, lower operation costs and increased safety, said Robert Kerczewski, NASA electronics engineer. AeroMACS is being developed exclusively for communication taking place between traffic towers and aircraft when they are on the ground. The technology is digitally based, unlike the analog radar and communications systems currently used. The difference between digital and analog is that digital produces "error-free" communication and allows for signal security through encryption. It also allows digital data to be used for countless applications to maximize traffic safety and efficiency, Kamali said. The first wave of applications involve employing a system of sensors on airport surfaces to continuously map the location of all aircraft and vehicles. Right now, traffic control towers mostly use sight to determine the location of aircraft on runways and taxiways. Eventually, AeroMACS could allow airports to use automation, Kerczewski said. The technology is also wireless, which is a cheaper alternative to installing fiber-optic wire beneath airport surfaces. "Not having to place wires underground is a very lucrative reason for using AeroMACS," Kerczewski said. The problem is that AeroMACS technology is relatively new and untested on a large scale. There are inherent limits, such as signal decay caused by the use of a high-frequency band for data transmission and communication. The limited signal reach problem means AeroMACS base stations, which Kamali compared to cellphone towers, would have to be built close to each other to cover a large airport. This is where Kamali comes in. His research calls for the use of relays, or electronic repeaters, instead of more than one or two base stations, to retransmit wireless signals, so they reach the entire airport. He presented his work at NASA's Glenn Research Center in Cleveland, Ohio, where AeroMACS technology has been tested since 2010. "Everybody got excited," he said. Kamali's work is important because if AeroMACS is adopted on a global scale and power output is not monitored, it has the potential to interfere with other communication systems like satellite networks, Kerczewski said. The relays promoted by Kamali's research will allow for an increase in the coverage area and capacity of an AeroMACS system without exceeding power limitations that would threaten satellites. Kamali completed much of his work at the Glenn Research Center in Cleveland over several summers as part of the agency's faculty research program. He is a seven-time fellow and also has spent time at NASA's Jet Propulsion Laboratory in California. "The idea is to give the professors an opportunity to work with NASA and increase their experience at the same time they become familiar with a NASA project and contribute to that project," Kerczewski said. Kamali, an Iranian immigrant who has been in the United States since 1976 and at Mercer since 1993, is planning to apply again for a fellowship this summer. 
This year, AeroMACS will be tested at nine airports across the country, including in San Francisco; Andrews Air Force Base; Anchorage, Alaska; and New Orleans. The FAA is interested in developing the technology as part of a modernization effort and has funded testing, Kerczewski said. "If it's demonstrated well at the nine airports, I think we'll have more people knocking at the door for it," he said. ©2014 The Macon Telegraph (Macon, Ga.)
<urn:uuid:b8c093f3-90c6-400a-a25d-e5a1636e37a1>
CC-MAIN-2017-09
http://www.govtech.com/transportation/Professor-Helps-NASA-and-FAA-with-Airport-Technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00561-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960058
788
3.375
3
The 1950s saw the introduction of automobile seat belts; in the 70s, airbags began showing up in cars. Electronic Stability Control rolled out in the late 80s, and the last decade has seen the deployment of radar and camera-based backup assist and blind-spot warning systems. Auto safety experts say network technology could be the next major car safety innovation. "Decades from now, it's likely we'll look back at this time period as one in which the historical arc of transportation safety considerably changed for the better, similar to the introduction of standards for seat belts, airbags, and electronic stability control technology," said David Friedman of the National Highway Traffic Safety Administration (NHTSA) in 2014. Car manufacturers, technology companies and federal regulators have worked for almost two decades to develop vehicle-to-vehicle (V2V) communication, which Friedman calls "game-changing" technology that allows cars to digitally communicate with one another to avoid accidents. In 2013, there were an estimated 5,687,000 police-reported traffic crashes in which 32,719 people were killed and 2,313,000 people were injured, according to the U.S. Department of Transportation. NHTSA says V2V technology could help avoid 70% to 80% of car accidents each year. But for all the promise of V2V, debates rage on about how exactly to implement this technology.

The road to V2V

The vision for V2V is to have a network of connected cars talking to each other, broadcasting their position, speed and status of the brakes up to 10 times per second. Vehicles would read this information about other cars on the road to calculate potential crash risks and be alerted to hazardous conditions. The car would alert the driver, or automatically engage an action, such as stopping. The same technology could power vehicle-to-infrastructure (V2I) communication, allowing cars to communicate with traffic signals to inform drivers of how much longer a light will be green, for example. Some believe this technology could be a precursor to self-driving vehicles. There are flavors of this functionality on the market today. Many new cars come with blind-spot warning and forward collision detection, and cars will be mandated to have rear back-up assist beginning this year. These systems use either radar or cameras to determine the vehicle's surroundings. But Sam Abuelsamid, a senior researcher at Navigant Research, says radar and camera systems are limited by their line of sight. "V2V can extend 'visibility' beyond line-of-sight to vehicles further down the road," making it applicable in dangerous blind intersections or to detect accidents further up the road that the driver does not see, he notes. "The primary advantage is reduced latency and response times since information can be transmitted even before a change can be detected by the sensors." The U.S. Federal Communications Commission (FCC) recognized the importance of V2V technology and its need for a highly reliable, low-latency, direct communication method. So in 1999 the FCC dedicated 75 megahertz of valuable radio spectrum for intelligent transportation systems. The 5.850 to 5.925 GHz band (traditional WiFi operates in the 2.4 GHz and 5 GHz bands) has been carved out for critical life-safety vehicle communication systems, and a protocol named Dedicated Short Range Communication (DSRC) has been developed to support V2V and V2I.
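The basic broadcast described above (position, speed and brake status, up to 10 times per second) can be sketched in a few lines. The Python example below sends a made-up JSON message over a local UDP broadcast socket at roughly 10 Hz; the field names, port number and encoding are assumptions for illustration only and bear no relation to the standardized DSRC/WAVE message formats.

```python
# Illustrative only: a toy "safety message" broadcast at roughly 10 Hz.
# Real V2V stacks use standardized binary messages, not JSON over UDP.

import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 50000)   # assumed port for the sketch

def make_message(vehicle_id, lat, lon, speed_mps, brakes_on):
    return json.dumps({
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "brakes_on": brakes_on,
        "timestamp": time.time(),
    }).encode("utf-8")

def broadcast(n_messages=10):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for _ in range(n_messages):
        msg = make_message("car-42", 42.2808, -83.7430, 13.4, brakes_on=False)
        sock.sendto(msg, BROADCAST_ADDR)
        time.sleep(0.1)   # ten messages per second, as described above
    sock.close()

if __name__ == "__main__":
    broadcast()
```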
The IEEE has created a specific version of WiFi called 802.11p for Wireless Access in Vehicular Environments, or WAVE. It is based on the 802.11a standard, but is adapted specifically for DSRC and operates in the 5.9-GHz band. It provides secure wireless connections at high vehicle speeds (up to 120 miles per hour), low latencies and short ranges (up to about 300 meters), and it works in hazardous weather conditions such as snow, rain and fog. In 2012 the U.S. DoT funded a $31 million Connected Vehicle Safety Pilot Model Deployment at the University of Michigan's Mobility Transformation Center, which included 2,800 cars, trucks and buses equipped with V2V/DSRC technology traveling on 27 square miles of roadway in Ann Arbor, MI. In 2014 NHTSA announced that the tests were successful and that it was ready to move forward with making rules to mandate this technology. Steven Bayless, vice president at the Intelligent Transportation Society (ITS) of America, a trade organization that backs vehicle safety initiatives, says the rule-making process could take years to complete. Once rules are in place, manufacturers will begin rolling out the technology. The lack of rules from NHTSA is one of the reasons DSRC isn't seen broadly in the market today. Auto manufacturers don't want to invest money installing V2V technology in cars unless they know other manufacturers will do so too, so a standard is required. The government will not approve a standard until the auto industry is ready to accept it. Despite the dilemma, some manufacturers are moving forward even without the mandate. General Motors' Cadillac CTS will be the first commercially available vehicle with V2V/DSRC as an option in the 2017 model.

Cellular could answer the safety call

With WiFi-based DSRC technology still potentially years away from being mandated, some technology companies are developing new ways of supporting V2V communication. Qualcomm is working with auto manufacturers on cellular-based V2V communication, as opposed to the WiFi-based V2V that uses the dedicated spectrum. Cellular V2V could have a number of advantages over WiFi-based DSRC, says Matt Branda from Qualcomm. Company tests indicate that 5G/LTE cellular V2V could provide larger ranges of communication (up to 450 meters compared to the 300-meter range of DSRC). That could mean additional time for the driver or an autonomous driving system to avoid an accident. While the vehicle network technology is being developed specifically for safety, there are a plethora of other use cases. "Most folks in the automotive space see V2V as opening up a toolbox for autonomous vehicles," says Bayless, with ITS. Software could be developed that would allow cars to share their routes and organize in the most efficient way while communicating with V2V. Trucks equipped with this technology could platoon together, with a lead vehicle guiding the way for a handful of trucks behind it; each following vehicle would benefit from drafting and reduce its fuel consumption. Self-driving cars could benefit from V2V too. In February 2016 a Lexus operated by Google's autonomous driving software merged into a lane to avoid sandbags covering a storm drain and sideswiped a bus that was approaching from behind. The vehicle's driving system detected the bus but wrongly assumed the transit vehicle would slow down and let the car into the lane.
V2V communication would have informed the autonomous vehicle that the bus's brakes were not engaged and that it was not slowing down, and therefore that it was unsafe to merge into the lane. The mishap was one of the first widely publicized self-driving car accidents, and V2V technology could have prevented it. This story, "The future of auto safety is seat belts, airbags and network technology," was originally published by Network World.
<urn:uuid:b48c826e-cc0f-4591-a579-19f6e51788b6>
CC-MAIN-2017-09
http://www.itnews.com/article/3072486/internet-of-things/the-future-of-auto-safety-is-seat-belts-airbags-and-network-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00205-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96142
1,523
3
3
A Microsoft Professional Developers Conference panel on the future of programming languages looks at what is on programmers' minds. The PDC panel of experts debates what's best for programmers and languages.

LOS ANGELES - What are some of the most pressing issues facing developers today, and what can be done with programming languages to help with them? Those were among the questions posed to a group of language and programming experts at the Microsoft Professional Developers Conference (PDC) here. Gilad Bracha, Anders Hejlsberg, Douglas Crockford, Wolfram Schulte and Jeremy Siek made up the distinguished panel of computer language designers and researchers addressing "The Future of Programming Languages." And the moderator was no slouch either: Erik Meijer, a Microsoft software architect and language expert in his own right, moderated the panel. Meijer was influential in the evolution of the Haskell language and is the leader of Microsoft's "Volta" project to simplify Web and cloud development. The panel touched on a wide variety of issues, including not only the most pressing issues facing developers, but also such topics as whether IDEs (integrated development environments) matter more than languages, whether modeling is important, the degree to which programmers should be allowed freedom with the language, and the inevitable dynamic-versus-static language debate.

First, a bit about the panelists. Gilad Bracha is the creator of the Newspeak programming language. He is currently a distinguished engineer at Cadence Design Systems; previously he was a computational theologist and distinguished engineer at Sun Microsystems. Douglas Crockford is a senior JavaScript architect at Yahoo and the creator of the JSON data interchange format. Anders Hejlsberg, a technical fellow in the Developer Division at Microsoft, is the chief designer of the C# programming language and a key participant in the development of the Microsoft .NET framework. Hejlsberg also developed Turbo Pascal, the first-ever IDE, and the Delphi language. Wolfram Schulte is a senior researcher at Microsoft, and Jeremy Siek is an assistant professor at the University of Colorado. Siek's areas of research include generic programming, programming language design and compilers.

Regarding IDEs, Bracha said, "I come from a world where IDEs matter a lot. They are enormously important, but the language is also enormously important." Hejlsberg said IDEs certainly do matter, "but a lot less than they did 25 years ago." He said frameworks and IDEs have dwarfed languages, but languages remain important. However, Hejlsberg lamented the fact that languages evolve so slowly as compared with other areas of computing. Schulte said he believes "languages and libraries don't matter so much. You have to look at what problem you want to solve and then pick the language." Indeed, Crockford said he encourages developers to learn as many languages as possible. Yet, when asked whether languages should be designed by committee or by a benevolent dictator, all five panelists, in unison, replied in favor of the benevolent dictator, though it was also noted that although a standards body or committee may be stodgy, it is the structure the organization provides that is most important.
<urn:uuid:313e9ab4-aea0-477e-ba03-5f6eb2ab2893>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Application-Development/Whats-Most-Pressing-for-Programmers
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00201-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922562
668
2.515625
3
Homeland Security Tests 360-Degree Video Cam

The surveillance system, which uses multiple cameras to provide high-resolution images in real time, is being pilot tested at Logan International Airport. The Department of Homeland Security has developed new surveillance-camera technology that provides a 360-degree, high-resolution view by stitching together multiple images. The technology, developed by Homeland Security's Science and Technology Directorate, is called the Imaging System for Immersive Surveillance (ISIS) and works by creating images from multiple cameras and turning them into a single view, according to Homeland Security. Photographers have been putting together multiple still images to provide a panoramic view of a scene for some time, but that is typically assembled after the images are taken. ISIS creates high-res images from multiple camera streams in real time. ISIS has a resolution of 100 megapixels, according to Dr. John Fortune, program manager with the Directorate's Infrastructure and Geophysical Division. Images retain their detail even as investigators zoom in for a closer look at something. The system, which looks like a bowl-shaped light fixture with multiple holes for camera lenses, is being used in a pilot test at Boston's Logan International Airport. Airports are among the first places that Homeland Security expects to use the technology, though it would be suitable for other environments as well. Some ISIS capabilities were adapted from technology developed by the Massachusetts Institute of Technology's Lincoln Laboratory for military applications. In collaboration with Pacific Northwest National Laboratory, Lincoln Lab built the current system using commercial cameras, computers, image processing boards, and software. Even as the first version of ISIS is being tested, Homeland Security is working on a next-generation model with custom sensors and video boards, longer-range cameras that take images at higher resolution, and a more efficient video format. Longer-range plans include giving the technology infrared capability for night surveillance.
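ISIS itself is custom hardware and software operating on live video, but the core idea of merging overlapping camera views into one wide image can be approximated with off-the-shelf tools. The sketch below uses OpenCV's high-level stitcher on a few still frames; it is an offline illustration only, not the directorate's actual pipeline, and the input file names are placeholders.

```python
# Offline illustration of panoramic stitching with OpenCV.
# ISIS does this continuously on live streams; this sketch works on stills.

import cv2

def stitch_frames(paths, out_path="panorama.jpg"):
    frames = [cv2.imread(p) for p in paths]
    if any(f is None for f in frames):
        raise FileNotFoundError("one or more input frames could not be read")

    stitcher = cv2.Stitcher_create()        # OpenCV 4.x API
    status, panorama = stitcher.stitch(frames)
    if status != 0:                         # 0 == cv2.Stitcher_OK
        raise RuntimeError(f"stitching failed with status {status}")

    cv2.imwrite(out_path, panorama)
    return panorama

if __name__ == "__main__":
    # Placeholder file names for overlapping camera views.
    stitch_frames(["cam0.jpg", "cam1.jpg", "cam2.jpg"])
```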
<urn:uuid:4a7be740-242f-4b47-bf71-6bc4564686e1>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/homeland-security-tests-360-degree-video-cam/d/d-id/1088939
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00077-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940042
380
2.59375
3
Whether you’re simulating the extreme conditions inside an exploding star or designing an ergonomically innovative office chair, it’s a good bet that a high performance computing (HPC) system and some brain-bending programming will be involved. The HPC system may be a supercomputer like the 1.6 petaflop Jaguar behemoth at Oak Ridge National Laboratory, or a cluster powered by off-the-shelf multicore components. Whatever the scale of the hardware and the scope of the application, developers will have to learn how to deal with the complexities of parallel programming to get the most out of their computational resources. The need for parallel programming is being driven by advances in multicore architectures. This rapid and accelerating technology trend is creating an array of HPC systems that range from dual and quad core systems to supercomputers and clusters with tens, hundreds and thousands of cores. These platforms perform at teraflop and petaflop speeds on terabytes of data. Capable of tackling some of today’s most complex and pressing problems in engineering and science, these HPC systems are composed of a computational ecosystem that includes: scalable multicore architectures; fast, flexible, mammoth memories that can support many simultaneous threads; and high bandwidth I/O and communications. Developers who have honed their parallel programming skills are ready to create applications that reach new levels of scalability, performance, safety and reliability. In particular, parallelism can be exploited in mechanical computer-aided engineering (MCAE) applications code for structural analysis and fluid dynamics, in computational chemistry and computational physics simulations and modeling, and industrial applications that run the gamut from oil and gas exploration to the design of high end golf equipment. For example, in the world of MCAE, Dale Layfield, engineer in Sun Microsystem’s ISV Engineering organization, points to the benefits realized by applying parallelization to NASTRAN, a venerable finite element analysis (FEA) program that has been around for about 40 years. “NASTRAN is a highly compute and I/O intensive structural analysis program,” explains Layfield. “It lends itself well to being broken into smaller components and spreading those components across distributed computer clusters which substantially reduces throughput time. Distributed memory parallelism (DMP) helps eliminate the I/O bottleneck by dividing the analysis across a network of separate nodes. Multithreaded SMP (symmetric multiprocessing) allows you to make best use of the processing power within each node. SMP combined with DMP gives you the most bang for your buck.” Like NASTRAN, many of the other complex applications designed to run on HPC systems rely on parallel programming methodologies to handle the increasing number of computationally intensive jobs involving massive amounts of data and memory. As David Conover, Chief Technologist, Mechanical Products for ANSYS notes, “Among the major benefits of parallel programming are faster turnaround time and the ability to create higher fidelity simulations and modeling to solve engineering design challenges. Engineers applying finite element methods can create models with much higher spatial resolutions and more geometric detail. And they can build models that include entire assemblies, rather than just one small component. Then they can analyze the interactions between those components at a high level of detail. 
Because the users are able to perform more simultaneous tasks of increased complexity, the entire engineering process is far more productive. You just can't achieve this level of functionality with applications that rely on sequential processes." By creating larger high-fidelity models with greater geometric detail and subjecting them to detailed simulations of the physical forces that they will encounter in real life, engineers can reduce the need for expensive and time-consuming physical testing, the "build and break" approach. In addition, parallelization allows engineers to run more simulations in order to make design decisions earlier in the project lifecycle. To achieve the speedup in application performance, parallel programming uses threads that allow multiple operations to occur simultaneously. In an article in the May 20, 2009 HPCwire titled "Parallel Programming: Some Fundamentals Concepts," authors Shameen Akhter and Jason Roberts, both of Intel, commented, "The entire concept of parallel programming centers on the design, development and deployment of threads within an application and the coordination between threads and their respective operations." In short, parallel programming allows you to write scalable, flexible code that harnesses more HPC CPU resources and maximizes memory and I/O. It also allows users of the code (whether it's you, a member of your organization's engineering or scientific staff, or a customer) to solve problems that could not be solved using sequential programs, and solve them more quickly.

Parallel programming is not easy

However, as computer science professor Andrew S. Tanenbaum stated at the USENIX '08 conference, "Sequential programming is really hard ... the difficulty is that parallel programming is a step beyond that." Bronson Messer, a computational astrophysicist at Oak Ridge National Laboratory (ORNL), concurs. He points out that to do computing at the large scales he and his colleagues encounter daily, the application developer needs to understand the entire HPC ecosystem, which includes multicore CPUs, high-speed file and connective systems, and terabytes of memory that have to be swapped in and out at blinding speeds. "Everything has to play together," Messer says. "If there is a weak link at this scale, it will almost immediately be exposed. Your parallel code may run on a quad-core or eight-core system, but when you move up to thousands and tens of thousands of processors, your application may be dead in the water. Debugging code on this many processors is an unsolved problem." Messer also comments that building robustness and fault tolerance into the code is another major hurdle as the rate of data collection escalates. For example, the Sloan Digital Sky Survey telescope in Sunspot, New Mexico is precisely mapping a swath of space some five billion light years in diameter, generating terabytes, even petabytes of data every night. And when CERN's Large Hadron Collider finally comes online, it will generate 700 megabytes of data every second. These parallel programming speed bumps do not only apply to code written for the huge supercomputers that are the workhorses of government labs and academia. Developers creating algorithms for the rapidly growing population of HPC grids, clusters and clouds that are infiltrating the enterprise are running into similar problems. And within industry the pressure is even more intense as companies seek to gain a competitive edge through the use of HPC.
When asked what he thought was the most difficult task facing developers working with this new programming paradigm, Scott J. Lasica, VP Technical Services Worldwide for HPC toolmaker Rogue Wave Software, was very clear. "Today's developers need to learn to do multithreading, which, in my opinion, is one of the hardest, if not the hardest, task associated with software programming. Given the level of complexity we're dealing with, it's very easy to make mistakes and very hard to figure out where things went wrong."

What's a developer to do?

Lasica points out that fortunately there are a lot of tools available to help developers write multithreaded code in languages like C++ and Java, even Fortran. For example, a Java(TM) application can be dropped into an application server and the server will take care of the threading. Various new debugging tools also help ease the bumpy road to parallelization. But Lasica says that a thorough grounding in the intricacies of multithreading is essential for developers dealing with today's complex distributed systems. Reza Sadeghi, CTO of MSC Software, agrees. And he also prescribes a major mind shift for today's developers. "Developers tend to think serially, not in terms of what they can do with multiple CPUs," he explains. "And even if they are thinking parallel, they are still in the realm of dual, quad or eight cores. But the new HPC systems are raising the bar to encompass hundreds and thousands of cores as well as multicore sub-architectures. It's a whole new way of building algorithms and solving complex loops. By adopting this different mindset, backed up by learning all you can about parallelism and multithreading, you can make optimum use of the many diagnostic tools that are now available and build successful HPC applications." Advanced programming models also help ease the developer's path. Among the most popular are OpenMP for shared memory programming, and MPI (message passing interface) for distributed memory programming. ORNL's Messer adds that given the rapid pace of technology, it is important for developers to create algorithms that will scale far beyond their current systems. "If you know a priori that your algorithm won't scale, you have an immediate problem," he says. "With today's multicore HPC systems, you are dealing with a deeper and more complicated memory hierarchy in addition to the problems inherent in multithreading. Despite advances in OS, compilers and programming models, you still may have to manage some of that hierarchy yourself. The results are worth it."

Continuing education is key

Addison Snell, general manager of Tabor Research, comments that developers need to familiarize themselves with how to optimize software on multicore HPC systems. "I'm not sure the latest generation of software engineers has been trained to cope with advanced parallelism; there is a serious question of readiness in the software community," he says. It is certain that as the world of high performance computing heats up, and multicore, multithreaded systems move into the enterprise, those individuals who are familiar with parallel programming will command a favorable position in today's rough and tumble job market. Application developers should be very familiar with the principles of parallel programming, including how to handle multithreading. They should also be acquainted with parallel tools, and be able to build thread-safe component interfaces.
Also, both test engineers and field engineers should have parallel debugging skills and be familiar with parallel analysis and profiling tools. In order to help developers and engineers meet the challenges posed by parallel programming, Sun Microsystems is offering a series of seminars called “An Introduction to Parallel Programming” discussing parallel programming as a fundamental of application development. Log on weekly to access each of these seven modules presented by mathematician and Sun senior staff engineer Ruud van der Pas. http://www.sun.com/solutions/hpc/development.jsp.
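The programming models named above, OpenMP for shared memory and MPI for distributed memory, are normally used from C, C++ or Fortran in production HPC codes. As a hedged, minimal illustration of the message-passing pattern, the sketch below uses the mpi4py bindings; it assumes an MPI implementation and mpi4py are installed and would be launched with something like `mpiexec -n 4 python sum_squares.py`.

```python
# Minimal MPI sketch: each rank works on its own slice of a problem,
# then the partial results are reduced onto rank 0.
# Requires an MPI library plus the mpi4py bindings.

from mpi4py import MPI

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 1_000_000                       # total work items
    chunk = n // size
    start = rank * chunk
    end = n if rank == size - 1 else start + chunk

    # Each rank computes a partial sum over its own slice (distributed
    # memory: no rank sees the others' data, only the reduced result).
    partial = sum(i * i for i in range(start, end))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"sum of squares below {n}: {total} (computed on {size} ranks)")

if __name__ == "__main__":
    main()
```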
<urn:uuid:26a6b7fd-ca73-49b8-9b57-29c5e840ae2b>
CC-MAIN-2017-09
https://www.hpcwire.com/2009/06/23/parallel_programming_is_here_are_you_ready/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00429-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931911
2,240
2.984375
3
Ever have a moment in school when you weren't entirely comfortable raising your hand or adding your perspective during class? It's happened to most, if not all of us. Since I've been in those shoes, I understand that just because someone doesn't participate, that doesn't mean they don't know the answers or aren't grasping the concept. It often means they are a little reserved when it comes to putting themselves out there and having their voice heard. If only there was some way of getting these students to feel more comfortable and be more willing to share. Lucky for us, there is.

Be heard without speaking

Technology is the vehicle that gives the quiet a (louder) voice. As iPad becomes more prevalent in classrooms, opportunities to use online forums also increase. These forums provide students with a certain level of comfort that allows them to participate without the fearful eye of peers watching. Once comfortable online, this has the potential to translate to being more active during class.

Share work without raising your hand

By combining iPad and Apple TV with an education-centric app like Casper Focus, teachers can project a student's iPad to an Apple TV. This immediate and spontaneous sharing of learning and creation can help shy students get over their fear by letting their creative work do the speaking. With Casper Focus, sharing a student's iPad can be done with only a few taps from the teacher's own iPad device.

Speaking from experience

As an introvert myself, I know that I would have benefited from this technology when I was in school. Even now, I find it comforting to use technology to get my ideas across, like blogging for example. If the students in your school are anything like me, iPad and Casper Focus might be the answer to helping them get the most out of their classroom time.
<urn:uuid:483bff52-8a4c-40de-99b4-a423f4a61a39>
CC-MAIN-2017-09
https://www.jamf.com/blog/how-can-technology-help-introverted-students/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00301-ip-10-171-10-108.ec2.internal.warc.gz
en
0.966478
389
2.671875
3
Another Decision Analysis tool is called an influence diagram. It provides a graphical presentation of a decision situation. It also serves as a framework for expressing the exact nature of relationships. The term influence refers to the dependency of a variable on the level of another variable. An influence diagram maps all the variables in a management problem. Influence diagrams use a variety of geometric shapes to represent elements. The following conventions for creating influence diagrams were suggested by Bodily (1985) and others. The three types of variables (decision, uncontrollable, and outcome variables) are connected with arrows that indicate the direction of the influence. The shape of the arrow also indicates the type of relationship. Preference between outcome variables is shown as a double-line arrow. Arrows can be one-way or two-way (bi-directional). Influence diagrams (see Figure 9.4) can be constructed at any degree of detail and sophistication. This type of diagram enables a model builder to remember all of the relationships in the model and the direction of the influence. Several software products are available that help users create and implement influence diagrams. Some products include DAVID, which helps a user build, modify, and analyze models in an interactive graphical environment, and DPL (from ADA Decision Analysis, Menlo Park, CA), which provides a synthesis of influence diagrams and decision trees.
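Since the passage treats the influence diagram as a data model (typed variables joined by directed influence arrows), a small illustrative representation can make the idea concrete. The Python sketch below is not the DAVID or DPL format; the node names and variable types are assumptions chosen for the example.

```python
# A minimal, illustrative representation of an influence diagram:
# nodes carry a variable type, arrows record which variable influences which.

influence_diagram = {
    "nodes": {
        "ad_budget":     {"type": "decision"},
        "market_demand": {"type": "uncontrollable"},
        "unit_price":    {"type": "decision"},
        "profit":        {"type": "outcome"},
    },
    # (from, to) pairs: an arrow means "from influences to"
    "arrows": [
        ("ad_budget", "market_demand"),
        ("market_demand", "profit"),
        ("unit_price", "profit"),
        ("ad_budget", "profit"),
    ],
}

def influencers(diagram, node):
    """Return every variable with an arrow pointing into `node`."""
    return [src for src, dst in diagram["arrows"] if dst == node]

print(influencers(influence_diagram, "profit"))
# ['market_demand', 'unit_price', 'ad_budget']
```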
<urn:uuid:21c25224-1260-4772-84a5-31447a0efaef>
CC-MAIN-2017-09
http://dssresources.com/subscriber/password/dssbookhypertext/ch9/page16.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00474-ip-10-171-10-108.ec2.internal.warc.gz
en
0.904841
263
3.671875
4
An article published in the Seattle Post-Intelligencer yesterday evening describes a patent application from European aerospace company Airbus in which pilots fly aircraft entirely through electronic means. The patent application, number US20140180508 A1, is titled "Aircraft with a cockpit including a viewing surface for piloting which is at least partially virtual" and notes that while an aircraft’s cockpit must be located in its nose to afford its pilot forward visibility, the physical requirements of the cockpit’s shape and the amount of glass required are aerodynamically and structurally non-optimal. "For aerodynamic reasons," explains the patent application’s description, "the nose should ideally be lancet-shaped. However, the housing in the nose for radar, a landing gear, and especially for the cockpit, requires a much more complex shape and structure to be provided, with numerous radii of curvature." It would be better, says the patent, if the cockpit were moved into some other area of the aircraft and the pilot equipped with entirely electronic means of observing and controlling the aircraft’s flight. According to the application, the non-windows cockpit would contain "a screen and associated means for projection (including back-projection)" of various "scenes," including the environment immediately forward of the aircraft, and also "a device with lasers for forming a holographic image" to display items like "a 3D mesh of the earth’s surface," "a hologram representing for example an assistant pilot on the ground," or "a holographic representation…of one or more flight instruments." The lack of "glazed surfaces" (i.e., glass or other transparent elements) means that the cockpit itself could be free of the "numerous structural reinforcements" required to support the typical weight of glass as opposed to the same amount of aluminum. It also means that the cockpit could be placed literally anywhere inside the aircraft’s volume, including in the cargo hold or even in or near the aircraft’s empennage. Of course, the patent application is just that: a patent application. It doesn’t address issues such as exactly how to create functional interactive holographic aircraft flight instrumentation. Nor does it attempt to address substantial practical issues about a cockpit made up entirely of virtual instrumentation—like what happens during an emergency situation with partial or total power loss. However, the application does make allowances for some cockpit windows, depending on where in the fuselage the cockpit is located. Don’t expect to actually see a passenger aircraft without cockpit windows any time soon; the time it takes for large planes like those built by Airbus and its main competitor Boeing to go from conception to reality is measured in decades (hence the patent application’s hand-waving about sci-fi-style holographic controls—by the time a plane could be built with this kind of cockpit, that might be a solved problem).
<urn:uuid:568c377d-95cd-483d-92d1-56f8d21172f5>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2014/07/airbus-submits-patent-application-for-windowless-jet-cockpit/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00474-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956958
604
2.96875
3
Neutrinos carry text message through solid rock - By Greg Crowe - Mar 15, 2012

Arthur C. Clarke is commonly credited with the invention of the communications satellite as a means of relaying radio signals between far-off points on the surface of the Earth. We have since generally used this method for all of our long-range communication, using wavelengths that span the electromagnetic spectrum. But that could soon change in favor of using the route that is the shortest distance between two points. Scientists and researchers in a combined effort from the University of Rochester and North Carolina State University have announced that they were able to send a simple binary-encoded message through 240 meters of solid stone. The message itself spelled out the name of the particles that carried it: "Neutrino." You remember neutrinos. These are the pesky little buggers that last year presumably disproved Einstein's assertion that nothing could go faster than the speed of light in a vacuum. We in the Lab had mixed feelings on the subject when we originally heard about it. Since then, scientists at CERN, where the race between neutrinos and light was conducted, have identified flaws in their testing equipment that could have thrown off the results. They're planning to redo the experiment in May, to see if they can repeat the results. Neutrinos, electrically neutral subatomic particles, are funny things. It's almost as if they are able to pass through some matter simply because they don't believe in it. It makes me think of when Wile E. Coyote runs off of a cliff; he keeps going before he realizes he's no longer on solid ground. The Rochester and NC State research opens the possibility of communicating through walls, buildings, mountains and even, in theory, through the other side of the Earth, without satellites. Of course, in order to accomplish this, the team needed one of the world's most powerful particle accelerators, located at the Fermi National Accelerator Laboratory, outside of Chicago. Oh yeah, they also needed sensitive neutrino detectors, which tend to weigh several tons and are usually located well underground. To have two-way communications between any two points, we would need one of each on either end. Given the vastly increased complexity (and cost) of the infrastructure that would be necessary for this type of communication, don't expect it to totally replace satellites any time soon. But this experiment is definitely a first step toward this end. Greg Crowe is a former GCN staff writer who covered mobile technology.
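The message itself was simply the word "Neutrino" expressed as binary digits before being mapped onto pulses of particles at Fermilab. That purely classical encoding step can be sketched as follows; the mapping of bits onto neutrino pulses and their detection is, of course, not modeled here.

```python
# Encode and decode a short text message as a binary string, the way the
# "Neutrino" message was represented before being carried by particle pulses.

def to_bits(text: str) -> str:
    return "".join(f"{byte:08b}" for byte in text.encode("ascii"))

def from_bits(bits: str) -> str:
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return bytes(int(b, 2) for b in chunks).decode("ascii")

message = "Neutrino"
encoded = to_bits(message)
print(encoded)                      # 64 bits, 8 per ASCII character
assert from_bits(encoded) == message
```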
<urn:uuid:f677fc35-4aaf-46b9-934a-92e67ce06144>
CC-MAIN-2017-09
https://gcn.com/articles/2012/03/15/neutrinos-message-through-solid-stone.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00174-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963042
540
3.421875
3
Today is World Information Society Day, which aims to raise global awareness of societal changes brought about by the Internet and new technologies. In relation to this, Kaspersky Lab warns of the dangers posed by cybercriminals and offers tips for a secure and pollution-free digital life. Using social networks, banking and shopping online have become part of our everyday lives. A generation of digital natives is living online – and offline – often without being aware of the dangers of the Internet. More than 400 million people worldwide are now on Facebook (1), and more than half the population of Europe is part of the world’s biggest social network.(2) Children and teenagers are in particular danger of exposing personal data, such as private pictures, to the general public, and of revealing private information in social networks. On top of this, the security experts at Kaspersky Lab process an average of 30,000 new malicious and potentially undesirable programs every day – and the number is growing. On the occasion of World Information Society Day, Kaspersky Lab provides some simple tips for a secure digital life: - Keep Windows and third-party applications up-to-date. - Back up your data regularly to a CD, DVD, or external USB drive. - Don’t respond to email or social media messages if you don’t know the sender. - Don’t click on email attachments or objects sent via social networks if you don’t know the sender. - Don’t click on links in email or IM (instant messaging) messages. Type addresses directly into your web browser. - Don’t give out personal information in response to an email, even if the email looks official. - Only shop or bank on secure sites. These URLs start with ‘https://’ and you’ll find a gold padlock in the lower right-hand corner of your browser. - Use a different password for each web site or service you use and make sure it consists of more than 5 characters and contains numerals, special characters and upper-case and lower-case letters. Don’t recycle passwords (e.g. ‘jackie1’, ‘jackie2’) or make them easy to guess (e.g. mum’s name, pet’s name). Don’t tell anyone your passwords. - Make sure you share your child’s online experience and install parental control software to block inappropriate content. - Install Internet security software and keep it updated. While up-to-date protective software is essential for every Internet user, it is particularly important for those who spend a lot of time interacting with others via the Internet. Failing to use this type of software enables malware to take up residence on your computer, where it can intercept your login information for social networks and other services. Kaspersky Lab protects Internet users against all kinds of cyberthreats through its different security solutions, such as Kaspersky® Anti-Virus and Kaspersky® Internet Security. The new Kaspersky PURE solution offers additional features, like a password manager and data encryption, and frees life from digital pollution. More information on Kaspersky Lab products is available at www.kaspersky.co.uk (1)http://www.facebook.com/press/info.php?statistics, May 11, 2010 (2) Forrester Research, Consumer Technographics, April 2010
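The password rule in the list above (more than 5 characters, with numerals, special characters and both upper- and lower-case letters) translates directly into a quick self-check. The function below is a minimal illustration of those particular rules only; it is not a Kaspersky tool and is no substitute for a password manager.

```python
# Quick check against the password rules listed above:
# length > 5, at least one digit, one special character,
# one upper-case and one lower-case letter.

import string

def meets_password_rules(password: str) -> bool:
    return (
        len(password) > 5
        and any(c.isdigit() for c in password)
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_password_rules("jackie1"))       # False: no upper-case, no special char
print(meets_password_rules("Tr!cky2Guess"))  # True
```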
<urn:uuid:8092c302-0972-455d-96d6-aa06d3ee9e45>
CC-MAIN-2017-09
http://www.kaspersky.com/au/about/news/business/2010/Keep_your_digital_life_safe_warns_Kaspersky_Lab_at_the_World_Information_Society_Day
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00418-ip-10-171-10-108.ec2.internal.warc.gz
en
0.841768
723
3.25
3
Attorney General Kamala D. Harris released the first report detailing the 131 data breaches reported to her office in 2012, showing that 2.5 million Californians had personal information put at risk through an electronic data breach. The report found that 1.4 million Californians would have been protected if companies had encrypted data when moving or sending the data out of the company’s network. “Data breaches are a serious threat to individuals’ privacy, finances and even personal security,” Attorney General Harris said. “Companies and government agencies must do more to protect people by protecting data.” In 2003, California was the first state to pass a law mandating data breach notification, which requires businesses and state agencies to notify Californians when their personal information is compromised in security breach. In 2012, companies and state agencies subject to the law were required for the first time to report any breach that involved more than 500 Californians to the Attorney General’s Office. While not required by law, Attorney General Harris is issuing this report that analyses the data breach notices reported in 2012, provides information to the public about those breaches, and makes recommendations to companies, law enforcement agencies, and the legislature about how data security could be improved. Those recommendations include practices that would decrease the number of data breaches, make it easier for consumers to recover from the loss or theft of their personal information, and call for law enforcement agencies to more aggressively target breaches involving unencrypted personal information. First, companies should encrypt digital personal information when moving or sending it out of their secure network. In 2012, encryption would have prevented reporting companies and agencies from putting over 1.4 million Californians at risk. The Attorney General’s Office will make it an enforcement priority to investigate breaches involving unencrypted personal information. In addition, companies should review and tighten their security controls on personal information, including training employees and contractors. Companies should make the breach notices they send easier to read. The report found that the average reading level of the notices submitted in 2012 was 14th grade, much higher than the average U.S. reading level of 8th grade. Recipients need to be able to understand the notices so that they can take appropriate action to protect their information. Finally, the report recommends that legislators consider expanding the law to require notification of breaches involving passwords. Attorney General Harris is supporting legislation, SB 46 by Senator Ellen Corbett, which would require notification of a breach involving a user name or email address, in combination with a password or security question and answer that would permit access to an online account. Additional key findings of the report include: - The average (mean) breach incident involved the information of 22,500 individuals. The median breach size was 2,500 affected individuals, with five breaches of 100,000 or more individuals’ personal information. - More than 1.4 million Californians would not have been put at risk, and 28 percent of the data breaches would not have required notification, if the data had been encrypted. - The retail industry reported the most data breaches in 2012: 34 (26 percent of the total reported breaches), followed by finance and insurance with 30 (23 percent). 
- More than half of the breaches (56 percent) involved Social Security numbers, which pose the greatest risk of the most serious types of identity theft. - More than half of the breaches (55 percent) were the result of intentional intrusions by outsiders or by unauthorized insiders. The other 45 percent were largely the result of failures to adopt or carry out appropriate security measures. Attorney General Harris established the Privacy Enforcement and Protection Unit in 2012 to enforce federal and state privacy laws regulating the collection, retention, disclosure, and destruction of private or sensitive information by individuals, organizations, and the government. This includes California’s Online Privacy Protection Act, as well as laws relating to cyber privacy, health and financial privacy, identity theft, government records and data breaches. In October 2012, Attorney General Harris announced a settlement with Anthem Blue Cross over allegations the company breached its members’ personal data by failing to protect their Social Security Numbers.
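The report's central technical recommendation, encrypting personal information before it leaves the organization's network, can be illustrated with the third-party Python cryptography package. This is a hedged sketch of symmetric encryption of a single record; real deployments also need key management, transport security and access controls, none of which are shown here.

```python
# Illustrative only: encrypt a record before it is moved off the secure
# network, so a lost laptop or intercepted file does not expose readable
# personal information.  Requires: pip install cryptography

from cryptography.fernet import Fernet

# In practice the key lives in a key-management system, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"name=Jane Doe;ssn=000-00-0000;dob=1970-01-01"   # placeholder values
token = cipher.encrypt(record)      # safe to transmit or store externally

print(token[:16], b"...")           # ciphertext, unreadable without the key
assert cipher.decrypt(token) == record
```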
<urn:uuid:9e595c8c-e73b-4459-a443-1f05b9b1deb5>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/07/03/25-million-californians-had-personal-info-compromised/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00594-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953735
842
2.59375
3
NASA launched a $1.5 million competition to see who can build the best space robot. The space agency announced Thursday that it's teaming with Worcester Polytechnic Institute to challenge teams from academia and industry to build a smart robot that can locate and retrieve geologic samples while maneuvering over rugged terrain on an asteroid or Mars. Registration is open for the competition that will be judged in June 2014. For the challenge, dubbed the Sample Return Robot, NASA is putting up the $1.5 million in prize money, which will be dispersed among teams who complete certain levels of the competition. More information is available from the WPI website. "The objective of the competition is to encourage innovations in automatic navigation and robotic manipulator technologies that NASA could incorporate into future missions," said Michael Gazarik, NASA's associate administrator for space technology, in a statement. "Innovations stemming from this challenge may improve NASA's capability to explore an asteroid or Mars, and advance robotic technology for use in industries and applications here on Earth." This isn't the first time NASA has looked outside its own walls for robotic assistance. Earlier this year, the space agency awarded $5,000 to Team Survey of Los Angeles for successfully completing a 2013 Sample Return Robot Challenge. NASA noted that it expects the 2014 challenge will advance progress already made and expand the field of competing teams. NASA wants to advance its robotics technology, which has been behind much of its exploration of Mars. The NASA rover Curiosity, which just marked one year on the Martian surface, was tasked with determining whether the planet had ever supported life, even in small microbial form. To accomplish its work, the rover, which has 10 scientific instruments and 17 cameras, used its robotic arm to scoop up soil samples and drill into Martian rocks. NASA engineers send instructions to the rover on where and how far to drive. Jennifer Trosper, NASA's deputy project manager for the Mars Science Lab Mission, in a recent interview, said that within a month, Curiosity should be traveling longer distances in its daily drives because the rover will be able to do more decision-making on its own. NASA will be sending Curiosity additional software in order for it to perform auto-navigation. Trosper said giving the rover the ability to make some of the driving decisions on its own will enable it to travel farther and faster. This article, NASA calls on researchers to build smarter space robot, was originally published at Computerworld.com. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is [email protected].
<urn:uuid:6f722552-c152-49b7-ba3e-7ef810d9ac8f>
CC-MAIN-2017-09
http://www.computerworld.com/article/2484801/emerging-technology/nasa-calls-on-researchers-to-build-smarter-space-robot.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00522-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934595
566
2.921875
3
Basic errors put patients at risk of serious harm - Thursday, Dec 12th 2013

A new report found that several thousand patients may have been exposed to serious harm for a number of years due to basic errors in safety and quality standards. The research was performed by professor Stephen Field, the chief inspector of general practice for the U.K.'s Care Quality Commission, and studied the healthcare habits of family doctors and GP surgery staff. The Independent reported that of the 910 surgeries observed by Field and his team, one third were found to fail one or more of the 16 basic standards for safety and quality in medical operations, as outlined by the independent U.K. health regulator. The top failures were related to infection control, cleanliness and medicine handling and management. The study also stated that 10 organizations' standard failures were so severe that they had the potential to put thousands of individuals in jeopardy of health issues. One main problem in many healthcare groups was cleanliness, illustrated by the fact that teams found cobwebs and insects at a facility that was previously considered a "good practice." "We're talking about the fact that we found maggots in a treatment room," Field told the source. "And when we asked the question - and this is a good practice - the nurse said yes we do seem to have a bit of a problem." Furthermore, teams found improper vaccine handling, which could also lead to serious health problems for patients. These failures were associated with outdated vaccines that should have been disposed of several months before, emergency injections being left out at room temperature and vaccine storage refrigerators that were not monitored.

Industry guidelines for vaccine storage

Field's study highlights the importance of following healthcare guidelines, especially those relating to vaccine storage and handling. According to Health.gov, medical organizations should not utilize domestic or kitchen refrigerators for storing vaccines, as they are designed for food storage and are not up to the temperature standards for vaccines. Instead, organizations should utilize a purpose-built refrigerator designed to fulfill the requirements of medicine storage. Furthermore, to ensure that vaccines are kept at the proper level, a vaccine temperature monitor or data logger should be installed and checked often. These items should be kept between 36 degrees Fahrenheit (2 degrees Celsius) and 46 degrees Fahrenheit (8 degrees Celsius). Administrators should aim to maintain a temperature level of 41 degrees Fahrenheit (5 degrees Celsius). Health.gov also recommends resetting these systems after the storage unit temperature has been recorded. Field stated that storage areas that are not monitored could put vaccines in danger of being ineffective. Overall, Field told the Independent that the issues encountered during the study were problems that should have been corrected "many, many years ago."
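The storage guidance above (keep vaccines between 2 and 8 degrees Celsius, aim for about 5 degrees, and check the monitor or data logger regularly) maps directly onto a simple threshold check. The snippet below is an illustrative sketch of that logic only and is not tied to any particular monitoring product.

```python
# Flag vaccine-fridge readings that fall outside the 2-8 degrees C range
# recommended above (target: about 5 degrees C).

SAFE_MIN_C = 2.0
SAFE_MAX_C = 8.0

def check_readings(readings_c):
    """Return a list of (index, temperature) pairs that are out of range."""
    return [(i, t) for i, t in enumerate(readings_c)
            if not SAFE_MIN_C <= t <= SAFE_MAX_C]

log = [4.8, 5.1, 5.0, 9.3, 4.9, 1.6]   # example hourly readings in Celsius
for i, t in check_readings(log):
    print(f"hour {i}: {t} C is outside the {SAFE_MIN_C}-{SAFE_MAX_C} C range")
```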
Weather forecasting has come a long way since June of 1977, when the European Centre for Medium Range Weather Forecasts (ECMWF) first contracted Cray to deliver one of its early Cray-1A systems across the pond. This was the first time a Cray found its way to the old country—an installation that set the stage for a number of new deployments of both vector and shared memory systems to power European weather prediction over the next several decades. The first Cray system at ECMWF enabled the weather center to offer a 10-day forecast powered by a weather model that achieved sustained performance of 50 megaflops (against the system's theoretical peak of 160 megaflops). These systems were followed by the Cray X-MP/22, then an X-MP/48, followed by the Y-MP 8/8-64, C90 (with a gigaflop of theoretical peak), and then into shared memory territory with the T3D. This was the last system ECMWF bought for a stretch, in favor of Fujitsu and then Power-based systems from IBM. Now, 36 years after choosing its first Cray system, ECMWF is bringing Cray supercomputing back. The Big Blue machines that are being swapped out for the XC30 early this year were ranked at 51 and 52 on the most recent Top500. If you're wondering why there are two systems of equal proportions that are essentially tied, it's because specific operational requirements demand a two-machine approach for centers that provide model outputs that power the weather forecasting efforts of an entire continent—as is the case with ECMWF. Isabella Weger, who heads the Computer Division at the weather center (and has been instrumental in the two-cluster approach that set the trend for other weather modeling centers worldwide), explained that having separate clusters in the datacenter offers more resilience for operational forecasts. In essence, one system runs the center's operational forecasts, which are the critical products it delivers to the 20 member states and 14 co-operative states in Europe that use its model outputs for regional and local weather forecasting. The other cluster runs the center's research workloads, which include activities centered on improving the numerical weather prediction model and offering a more comprehensive view into atmospheric behavior. While both clusters are busy chewing on their own workloads, all operational data is available to both machines. The dual storage clusters, which will now be Cray Sonexion-based systems, are cross-mounted across the compute clusters so ECMWF has ready access to the data in the event it needs to restart the forecast during system upgrades or problems. Although Weger and team set the dual-cluster trend at ECMWF, this is an unusual approach to continuity in the experience of Cray CEO Pete Ungaro. As he told us, "we haven't seen this kind of configuration outside of operational weather forecasting centers, really. Most people that are using our machines for research tend to build the single biggest engine they can. However, the operational requirements we see even in demanding commercial markets are not as evenly focused from an operational standpoint as what ECMWF and other major weather centers need." This dual approach to cluster and storage scenarios is the direct result of Isabella and team's need to ensure constant delivery of the critical forecasting products that centers in Europe rely on. And the data's importance doesn't end there—ECMWF has an extensive tape library of model outputs from decades gone by, which totals over 50 petabytes of historical climate data.
Further, she says their system generates around 50 terabytes per day. These data are used by climate and atmospheric scientists around the world who require detailed data from archived model output for advanced climate change and other longer-range atmospheric studies. For now, however, it's about adding more fine-tuned resolution to the models to better help governments prepare for weather events. "If you imagine a grid around the globe, our current model resolution is 16 km between grid points and our plan is in the 2015 timeframe to go to a finer resolution of 10 km, hence the driver for compute resources." All of this takes some serious compute horsepower, which, beginning early this year, will mean the use of the Aries-interconnected Cray XC30 "Cascade" supercomputer with a multi-petabyte Sonexion storage system—again, split into two separate clusters. Ungaro described the environment as accelerator-free (although the system is capable of supporting accelerators, and Weger said they are considering the future of accelerators for their application), noting that "each of these [Ivy Bridge] systems are in two different halls, each about 19 cabinets, about 3,600 nodes, all interconnected with our Aries interconnect, so about 80,000 cores in each of the machines." To put all of this compute into some context, keep in mind that over 60 million observations are factored into the overall forecasting model at ECMWF. It starts with observations, which come from a range of sources, many from satellites, others including ground-based observational tools, buoys, and airplanes. These observations provide the baseline for the forecast. "We take these many observations and process them to derive a base point for the atmosphere," Weger explains. "These are all observations from different points in time and space, and we must snap these into a grid of sorts that spans the globe in the proper space and time." This is ECMWF's process of "data assimilation," which in itself is both data- and computationally intensive—and it all happens before the forecast model has even begun. Complex forecasting is not a "one-shot" system. Since no forecast is perfect, a sense of probability for weather events must also be calculated. "We run an ensemble of 51 forecasts per day, each with some changes in the initial conditions to get a sense of probability. If you relate this to a hurricane, for instance, the model gives you the projected track of the storm with different conditions." "It's about performance, of course, but also very important are resilience and reliability and also, portability," added Weger. She notes that they strive to keep their forecasting system portable across architectures so that with each procurement cycle they have many vendor choices. "The application is mainly Fortran and whenever we optimize or develop code we try to make sure it doesn't inhibit us from making architecture choices–we don't want to be locked into a specific vendor or architecture." While Weger didn't comment on their experiences using the IBM Power architecture, she and Ungaro both agreed that the benchmarking and procurement process was lengthy and detailed. ECMWF has a scientific and operational 10-year strategy that defines the upgrades they do across their model (called the Integrated Forecasting System, which is the code that comprises the model and data assimilation).
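The 51-member ensemble Weger describes is worth making concrete. The sketch below is a deliberately toy illustration of the pattern, not ECMWF's Fortran-based Integrated Forecasting System: the same stand-in "model" is run 51 times from slightly perturbed initial conditions, and the spread of outcomes is what gives forecasters a sense of probability.

```python
# Toy illustration of ensemble forecasting: run the same (stand-in) model 51
# times from slightly perturbed initial conditions and inspect the spread.
# The "model" here is a placeholder, not a real weather code.
import random

ENSEMBLE_MEMBERS = 51


def toy_model(initial_temp_c: float, days: int) -> float:
    """Stand-in forecast model: drifts the initial state with small random noise."""
    temp = initial_temp_c
    for _ in range(days):
        temp += random.gauss(0.0, 0.5)   # crude stand-in for model dynamics
    return temp


def run_ensemble(analysis_temp_c: float, days: int = 7):
    """Perturb the analysis slightly for each member, then run the model."""
    outcomes = []
    for _ in range(ENSEMBLE_MEMBERS):
        perturbed = analysis_temp_c + random.gauss(0.0, 0.2)  # perturbed initial condition
        outcomes.append(toy_model(perturbed, days))
    return outcomes


if __name__ == "__main__":
    results = sorted(run_ensemble(analysis_temp_c=15.0))
    mean = sum(results) / len(results)
    print(f"ensemble mean: {mean:.1f} C, spread: {results[0]:.1f} to {results[-1]:.1f} C")
```

In the hurricane example Weger gives, it is this member-to-member spread that turns a single projected storm track into a cone of probable tracks.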
Many of these upgrades are driven by the need for more computing resources to power increases in model resolution, thus allowing the center to use more observational data and offer a better representation of the physics of the atmosphere in the model itself. Adding more computational power to the forecasts makes quite a difference over time. While it might not sound like much in passing, the ability to add one more day of quality forecasting per decade could make an incredible difference during potentially severe weather events. "A seven-day forecast today is as accurate as a five-day forecast was 20 years ago," explained Weger. Ungaro, who was in the room during our chat with Weger, was beaming by the end of the conversation when the topic went back to the "full circle" nature of this new system at ECMWF. "We are very proud to have this kind of history and to help provide the systems that can save lives and make such a difference in the world," he said. We could do some speculative math on where the new Cray system will land on the next Top500 list, and on how much more power it will provide for the models than the IBM Power-based systems it replaces, but time will tell. We'll check in on this story again once the system appears on the June list.
Intego has been examining several samples of new Mac malware, Tibet.C, which uses Word documents to install a backdoor on Macs. The infected Word files look like real files when users double-click them - they display text just like regular Word files - but actually contain three parts that are concatenated within the file and hidden from users. The Word file, when double-clicked, opens and displays its contents. But within the document is code that executes when the file is opened. This is the first time that Intego's Malware Research Team has seen this technique used to target Macs. Simply launching a document leads to a buffer overflow, which then allows the malicious code to execute. When this occurs, a backdoor is installed on the Mac, which then contacts a command and control server. Once this backdoor is active, those running the server have full access to the infected Macs, and can install other malware on them. They can harvest documents from the infected Macs, install keyloggers to look for user names, passwords and credit card numbers, or use the machines as part of a botnet to send spam or attack other computers. These Word documents exploit a Word vulnerability that was corrected in June 2009, but also take advantage of the fact that many users don't update such software. Word 2004 and 2008 are vulnerable, but the latest version, Word 2011, is not. Also, this vulnerability only works with .doc files, and not the newer .docx format. These files are not very large - the samples that Intego has analyzed range from 90 K to 230 K - and there is no indication that they may be dangerous. There is no request for a password, and the user does not need to be an administrator for this malware to install. It is worth noting that this is not only an attack on Mac users; Intego has found several samples of the same documents that contain code that will run on Windows. This malware is fairly sophisticated, and it is worth pointing out that the code in these Word documents is not encrypted, so any malware writer who gets copies of them may be able to alter the code and distribute their own versions of these documents. While there is a possibility that this is a targeted attack on Tibetan NGOs, because the Word files contain text discussing the Tibetan situation, it seems more likely that this is a general attack on Mac users. The attack will be very effective on those who have not updated their copies of Microsoft Office, or aren't running antivirus software, such as Intego VirusBarrier X6, and the Tibetan text may simply be a smokescreen. This malware highlights the fact that Mac users should apply security updates to software they use regularly as soon as possible. Microsoft's security alert clearly spelled out what this vulnerability could lead to: This security update resolves two privately reported vulnerabilities that could allow remote code execution if a user opens a specially crafted Word file. An attacker who successfully exploited either vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. While, in the past, we did not see this type of attack targeting Macs, it is clear that the game has changed, and that we are entering a new period of Mac malware. The Mac Security Blog has always published articles about security updates for common applications - web browsers, Microsoft Office, Adobe Acrobat and Flash, and many others - for this very reason.
If these key programs are not up to date, they may be targeted by attacks, because of their ubiquity. One way of spreading malware like this is by attaching files to spam messages, assuming that a number of people who receive them will open the files out of curiosity. It is therefore essential that, if you receive unsolicited Word documents, you do not open them. Intego VirusBarrier X6, with malware definitions dated March 29, 2012, or later, will detect and eradicate the Tibet.C malware, and will also block the W97/CodeExec.gen code that exploits this specific vulnerability.
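Because the exploit only works against the legacy binary .doc format and not the newer .docx format, one modest precaution is to flag old-format attachments before anyone opens them. The sketch below is a generic illustration of that idea, not part of any Intego product or a malware scanner: it distinguishes the two container formats by their magic bytes (OLE2 compound files begin with D0 CF 11 E0, while .docx files are ZIP archives beginning with PK).

```python
# Rough triage heuristic: distinguish legacy binary .doc files (OLE2 compound
# documents) from newer .docx files (ZIP-based OOXML) by their magic bytes.
# This only identifies the container format; it does not detect malware.

OLE2_MAGIC = bytes.fromhex("D0CF11E0A1B11AE1")  # legacy Office binary format
ZIP_MAGIC = b"PK\x03\x04"                        # .docx / OOXML container


def classify_word_file(path: str) -> str:
    with open(path, "rb") as handle:
        header = handle.read(8)
    if header.startswith(OLE2_MAGIC):
        return "legacy .doc (treat unsolicited copies with extra caution)"
    if header.startswith(ZIP_MAGIC):
        return "OOXML (.docx) container"
    return "not a recognised Word format"


if __name__ == "__main__":
    import sys
    for name in sys.argv[1:]:
        print(f"{name}: {classify_word_file(name)}")
```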
That depends on their configurations. While it makes very good sense to include redundant physical links in a network, connecting switches in loops without taking the appropriate measures will cause havoc on a network. Without the correct measures, a switch floods broadcast frames out all of its ports, causing serious problems for the network devices. The main problem is a broadcast storm, where broadcast frames are flooded through every switch until all available bandwidth is used and all network devices have more inbound frames than they can process. Originally this challenge was with bridges. Though switches have replaced bridges in most organizational networks, the solution is the same. Radia Perlman's STA (Spanning Tree Algorithm) fix for bridge loops also works for switch-based networks: STA allows redundant physical links while logically disabling the paths that would cause loops. It also lets a network planner design and install redundancy in a network without creating loops. The basic steps in setting up STA are: - Plan the network design and installation: Carefully document how the network is going to be designed and installed. Specifically note each link between switches. - Enable STA on the switches: Many vendor switches have STA turned on by default. - Select the root switch: This root switch is in the center of the network. All other switches recognize the root switch and each selects one path back to the root. - Confirm convergence and operation: Each of the switches will now identify forwarding and blocking ports appropriately. The root switch sends out a BPDU (Bridge Protocol Data Unit) message every two seconds to inform all of the connected switches that everything is still okay. When a topology change takes place, such as a link failure, the affected switches recalculate their new best path back to the root switch. When you are designing a network, the root switch is the central connecting point for all of the connected switches. The administrator may designate which switch is to be the root or may leave the decision up to the switches. To designate the root switch, assign it the lowest Bridge Priority, and therefore the lowest bridge ID, in the network. The root switch communicates with the other switches by sending its bridge ID number via a multicast IEEE 802.1D BPDU. By comparing its own ID number with the arriving BPDU's bridge ID number, each switch can identify the root switch and the port to reach it. The Bridge ID is a combination of Bridge Priority and the MAC address. At times, such as when installing new switches, more than one switch may have the same Bridge Priority value, which is the lowest in the network. In that case, the lowest-numbered MAC address among the switches with the lowest Bridge Priority breaks the tie. That switch, with the lowest Bridge ID, will become the root switch. The process of determining the root switch may take up to 30 seconds. Leaving the root selection to the switches is less desirable. It takes the chance that a switch in the corner of the network may become the root switch. If that happens, network performance slows, because traffic ends up taking longer, less direct paths across the spanning tree than it would with a centrally located root.
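To make the election rule concrete, the sketch below models the comparison the switches perform: each bridge ID is the priority followed by the MAC address, and the numerically lowest ID wins. This is a simplified illustration of the comparison logic only, with made-up switch data; it is not an implementation of the full 802.1D state machine or BPDU exchange.

```python
# Simplified illustration of STP root election: the switch with the lowest
# bridge ID (priority first, MAC address as the tie-breaker) becomes the root.
# Only the comparison is modeled, not BPDU exchange or port states.

def bridge_id(priority: int, mac: str) -> tuple:
    """Bridge IDs compare by priority first, then by MAC address."""
    return (priority, int(mac.replace(":", ""), 16))


def elect_root(switches):
    """Return the switch whose bridge ID is numerically lowest."""
    return min(switches, key=lambda s: bridge_id(s["priority"], s["mac"]))


if __name__ == "__main__":
    switches = [
        {"name": "SW1", "priority": 32768, "mac": "00:1a:2b:3c:4d:5e"},
        {"name": "SW2", "priority": 32768, "mac": "00:1a:2b:3c:4d:01"},  # would win a tie on MAC
        {"name": "SW3", "priority": 4096,  "mac": "00:1a:2b:3c:4d:ff"},  # lowest priority wins outright
    ]
    root = elect_root(switches)
    print(f"Root switch: {root['name']}")
```

Lowering the priority on the switch you want at the center of the network is how an administrator forces the outcome rather than leaving it to the MAC-address tie-breaker.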
Imagine roadways that generate three times the nation's power needs, melt snow in the winter and have embedded LED lighting that can offer driver alerts and be reconfigured depending on road conditions. That's the technology that Scott and Julie Brusaw, co-founders of Solar Roadways, are currently testing. In fact, the first prototype for their Solar Roadways project has already received federal funding. The Sagle, Idaho-based Solar Roadways company is now running a crowdfunding campaign on Indiegogo.com to raise more money to ramp up production of their hexagon-shaped Solar Road Panel technology. The hexagon panels are made up of four layers. There's a half-inch-thick glass surface, followed by a layer of LED lights, an electronic support structure (circuit board) and a base layer made of recyclable materials. The hexagon-shaped Solar Road Panels connect to make a grid (Image: Solar Roadways). "We can produce three times more power than we use as a nation. That will eliminate the need for coal-fired power plants," Scott Brusaw said. The polygon panels, which snap together to form circuits, can withstand up to 250,000 pounds of pressure, according to Brusaw. And while glass doesn't sound like the best material for a road, Brusaw said one of the technical specs for the panels is that it be textured to provide at least the traction offered by asphalt roads in the rain. "We hesitate to even call it glass, as it is far from a traditional window pane. But glass is what it is, so glass is what we must call it," he said. "We sent samples of textured glass to a university civil engineering lab for traction testing... and ended up with a texture that can stop a vehicle going 80 mph in the required distance." The Solar Roadways would have embedded LED lighting capable of creating numerous traffic patterns and signage (Image: Solar Roadways). The panels would not only collect energy from the sun, they would be part of a "smart" system that could even talk to cloud-connected vehicles. For example, pressure-sensitive monitors could detect if a moose has entered the roadway ahead and warn oncoming traffic. Five years ago, the Federal Highway Administration funded the couple's first-ever Solar Road Panel prototype. In their second prototype of the Solar Roadway panels, the Brusaws created a beta parking lot. They foresee a time when parking lots and even athletic courts could use the embedded LED lights to create a myriad of configurations. For example, a basketball court could be changed in an instant into a stick-ball hockey court. The panels are made up of four layers (Image: Solar Roadways). Brusaw believes that today's asphalt roadways, which are petroleum-based, will become too expensive in the future. "I don't believe we're going to have the ability to build asphalt roads in 50 years. What we're proposing is a road that pays for itself over its lifespan - not only pays for itself, but provides a whole slew of new features," Brusaw said. More than enough power? Each solar panel produces DC power, which is converted by embedded micro-inverters into 240 volts AC. Brusaw estimates that there are currently 31,000 square miles of asphalt and concrete surfaces exposed to the sun in the continental U.S. His company's Solar Road Panels could replace highways and byways as well as sidewalks, driveways and parking lots. "Solar road panels will collect that energy, turn the sunlight into electricity and feed the grid. If it's a business parking lot, you're feeding the building," Brusaw said.
The Solar Panels are able to resist up to 250,000 pounds (Image: Solar Roadways). The solar cells have an 18.5% efficiency rating, the same as photovoltaic cells produced by industry-leading installers such as SunPower Corp. By Solar Roadways' own calculations, if all the hard-packed surfaces most conducive to solar collection were covered in the panels, the resulting grid could produce 13.3 trillion kilowatt-hours of energy. In comparison, in 2012 the U.S. used 3.8 trillion kilowatt-hours of electricity, according to the Energy Information Administration. The Solar Road Panels have embedded heat strips that would melt snow and ice in the winter (Image: Solar Roadways). Brusaw, who's been visited by utility companies to discuss power grid configurations, imagines a "cable corridor" or underground conduit that would replace existing above-ground power lines. "Currently, our prototype parking lot feeds that energy into our load center and we use the power in our building. Since the energy produced by our panels is used locally, the need for long-distance transmission lines (and their transformers) diminishes," Brusaw wrote in an email to Computerworld. "We'd slowly turn our infrastructure to a decentralized power grid, where the bulk of the power would be used close to where it is generated." The solar panel's internal electronic connectors are hermetically sealed, Brusaw said, and external connectors would be protected in weather-proof material and filled with "an anti-corrosion gel." Embedded LED lighting in the panels would mark roadway lane divisions and other traffic indicators, and a heating element in the surface (like the defrosting wire in the rear window of a car) would prevent the buildup of snow and ice in northern climates. That means no more plows or road salt. Also, because the panels are heated, there would be no damage from frost heaves. "Roads won't experience the freeze-thaw cycle," he said. "The Federal Highway Administration contracted with us to design a road system that could pay for itself over time. State DOTs no longer have the money to maintain the current road system." Roads get dirty, but Brusaw said they discovered through experimentation that even an extremely dirty panel produced only 9% less energy than a clean panel. "And that would only be temporary - until it rained or a good wind picked up," he said. In keeping with the Brusaws' commitment to sustainable practices, current roadways and parking lots would not be ripped up and replaced, but used as a base for the new modular panels where possible. Interstate highways would be lit at night from beneath (Image: Solar Roadways). Speed bumps on the road to deployment Tom Leyden, who has worked in the solar power industry for 25 years and is currently CEO of photovoltaic battery maker Solar Grid Storage, said Solar Roadways' technology appears viable, but he questioned whether it's too costly. "I don't think this is the cost-effective solution people are looking for at this point," Leyden said. "There are other more cost effective ways to deploy solar." Leyden, who was formerly vice president of project development at SolarCity, one of the nation's largest solar-cell manufacturers, said the focus in the solar industry today is less on technology and more on reducing the cost of capital, or beating the cost of electricity from utilities through solar panel installation.
Last year, solar power was second only to natural gas in generating new electricity in the U.S., so the market is evolving, Leyden said. Over the last decade, solar power installations averaged 66% growth annually. "There have been many fringe ideas out there," he said. "In Europe, they looked at solar sound barriers along highways. We've considered putting solar cells in space, where you get higher efficiency. All those ideas could be doable technically, but it's a matter of how do you get these projects financed." Brusaw argued that comparing the cost of an asphalt street to a Solar Roadway is an apples to "fruit basket" comparison. For an accurate cost comparison between current systems and the Solar Roadways system, you'd have to combine the costs of current roads -- including snow removal, line repainting, pothole repair, etc. You'd also have to consider power plants and the coal and nuclear material used to power them, as well as the power poles and relay stations required to transmit the power. As Brusaw notes on his website, the "Solar Roadway system... provides all three." Solar Roadways' latest video explaining the technology. Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "You could one day be driving on energy-generating smart streets" was originally published by Computerworld.
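As a rough sanity check on the figures quoted above (31,000 square miles of paved surface and 18.5%-efficient cells), here is a back-of-the-envelope estimate. The solar-resource value is an assumption on my part — roughly 4.5 kWh per square metre per day is a common ballpark for average U.S. insolation — so the result is indicative only, not Solar Roadways' own calculation.

```python
# Back-of-the-envelope estimate of annual energy from covering U.S. paved
# surfaces with solar panels. The insolation figure is an assumed ballpark;
# real output depends on location, soiling, spacing and conversion losses.

SQ_MILES = 31_000                  # paved surface area quoted in the article
SQ_METERS_PER_SQ_MILE = 2.59e6
EFFICIENCY = 0.185                 # cell efficiency quoted in the article
INSOLATION_KWH_M2_DAY = 4.5        # assumed average U.S. solar resource
US_ANNUAL_USE_KWH = 3.8e12         # 2012 U.S. electricity use (EIA, per article)

area_m2 = SQ_MILES * SQ_METERS_PER_SQ_MILE
annual_kwh = area_m2 * INSOLATION_KWH_M2_DAY * EFFICIENCY * 365

print(f"Estimated annual output: {annual_kwh / 1e12:.1f} trillion kWh")
print(f"Multiple of 2012 U.S. electricity use: {annual_kwh / US_ANNUAL_USE_KWH:.1f}x")
```

Even with generous derating for losses, the estimate lands in the same order of magnitude as the 13.3 trillion kWh figure quoted earlier, several times annual U.S. consumption, which is the point Brusaw is making.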
Wednesday's 24-hour worldwide test of IPv6, the next-generation Internet addressing standard, is sure to yield valuable data and some unexpected results. Government agencies and other public entities that are participating in World IPv6 Day could also see some effects, such as citizens who have trouble accessing public-facing websites. But fear not. The transition to IPv6 — Internet protocol version 6 — will likely take several years, if not a decade. There's still time to prepare for the new 128-bit standard, which will support trillions of unique IP addresses. In February, the Internet Corp. for Assigned Names and Numbers, one of the nonprofits that coordinate IP distribution, announced that all IPv4 addresses had been distributed and that IPv6 would be the new standard going forward. The test on June 8 is a starting point. Google, Facebook and other online heavyweights have publicly committed that they will participate, as will several federal agencies and a few municipal governments and universities. "We want to find holes," said Timothy Winters, who studies IPv6 as senior manager of the University of New Hampshire InterOperability Laboratory, which tests data communications technology. "If the day goes perfectly that's great, but I fully suspect that we're going to find issues — and I hope we do because then we can solve them." Better to deal with problems now than during a full-scale deployment down the road, he said. What might happen? One possibility is that a website visitor coming in through an IPv6-aware device might get a timeout notice and be unable to access content on a website that's supporting IPv6 — if the user's device or router is misconfigured or their Internet service provider isn't supporting v6. "That's the real disaster scenario because what happens is your packets are going nowhere," Winters said. A firewall will eat those data packets up. For the government agencies that are testing IPv6-enabled websites Wednesday, that could mean at least a few citizens won't be able to get to a government webpage. Rob Barnes, a division manager in Fresno, Calif.'s IT department, said he has read that about 1 percent of website users could fall into this "black hole" situation. Last month the Fresno city government set up a test page in anticipation of the test. If incoming IPv6 traffic proves to be significant, the city might have to begin considering how to support IPv6 full time, he said. The city will also have 20 workstations running IPv6 on June 8 so that staff can also start testing outside websites. The test will give enterprises and website operators a good look for the first time at how many Internet users could use IPv6 if it were turned on everywhere. Cybersecurity will also be examined. During the past few days, there have been rumblings that the test has taken on a level of significance with high-profile hackers, said Carl Herberger, vice president of security solutions at Radware. He said he's concerned about the possibility of a significant security breach related to the test event. While the security industry backs the new standard, Herberger said, there are vulnerabilities that have yet to be addressed. One issue is that IPv6 is a "heavy" protocol that requires four times the processing power, which in effect makes it a force multiplier for those attempting denial-of-service attacks.
Other potential problems that could crop up, Winters said, are server load balancing issues for IPv4 versus IPv6 traffic, as well as the discovery of consumer-level routers — the kind available at electronics stores — that advertise they're IPv6-enabled but really aren't or perform poorly. Government agencies, in particular, should be thinking about identifying their legacy systems that can only communicate over IPv4, and formulating plans to bring them onto v6, Winters said. He said this can be done in one of two ways: translating existing v4 addresses to IPv6, or tunneling them over a v4 connection. Also, governments (and all organizations) should only be buying hardware that supports the new standard, he said. Participants in World IPv6 Day are eager to see what the day will bring. More than 400 entities have publicly announced they are onboard. N.C. State has obtained a significant amount of IPv6 address space and already is running a Cisco dual stack, according to William Brockelsby, the university's lead network architect. Testing on Wednesday will be confined to a lab rather than the university's entire network. Ed Furia, network design engineer at Indiana University, isn't expecting too many problems. The school has been running a dual-stack wired network for the past eight years for research purposes, and internal users are running completely on IPv6. But the university's public-facing applications and websites were brought onto v6 only recently. Winters said governments still operating exclusively on IPv4 won't see much difference on Wednesday. But there still could be a few hidden problems. Some agencies might not be supporting IPv6 today on their upstream connection, he said, but many of them have some equipment on their network that supports the new standard. If you don't turn off that functionality, it could lead to network problems. More will be known Wednesday. "I'm sure there are other issues we're just not aware of," Winters said.
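For operators who want a quick look at the failure mode Winters describes, a simple dual-stack probe can show whether a hostname resolves and connects over IPv6 as well as IPv4. The sketch below uses only the Python standard library; the hostname is a placeholder, and a production check would of course do more than open a single TCP connection.

```python
# Quick dual-stack probe: try to open a TCP connection to a host over IPv4
# and over IPv6 and report which succeed. A v6 failure alongside a v4 success
# is the kind of asymmetry that caused "black hole" timeouts for some users.
import socket

HOST = "example.com"   # placeholder hostname
PORT = 80
TIMEOUT = 5            # seconds


def can_connect(family: int) -> bool:
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False   # no address record of this family
    for fam, socktype, proto, _, addr in infos:
        try:
            with socket.socket(fam, socktype, proto) as sock:
                sock.settimeout(TIMEOUT)
                sock.connect(addr)
                return True
        except OSError:
            continue
    return False


if __name__ == "__main__":
    print(f"IPv4 reachable: {can_connect(socket.AF_INET)}")
    print(f"IPv6 reachable: {can_connect(socket.AF_INET6)}")
```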
Recently, I stumbled upon a question related to the behavior of a given program. That program was Mozilla Firefox and the behavior in question was how profile directory names were generated. Through this post, I will cover how to approach this question and how to solve it with available resources (source code). How are Firefox profile directory names generated? The Answer (and the road to get there) To answer this question, we first have to understand which artifacts we are examining. In this case, we are dealing with Firefox profiles. Those are located in a user's AppData folder. By navigating AppData, we eventually find C:\Users\someUser\AppData\Roaming\Mozilla\Firefox\Profiles. The Firefox 'Profiles' folder showing the directory for the profile named "default." We see a folder for the only Firefox profile used on the system. Reasonably, this profile is named "default" by default. But what about that seemingly random prefix? Let's see if we can glean anything from trying to create a profile in Firefox. First, let's open up the Profile Manager by typing "firefox.exe -p" in the 'Run...' dialog (Windows key + R). We can confirm that there is only one profile and it is named "default." When we try to create a profile, we see the following window: Great. We can actually see where it is going to be saved. And no matter what we enter in the text input field, the prefix stays the same. This tells us that the folder for this new profile isn't generated based on the profile name we enter. But there are other possibilities, such as the folder name being based on the current time. There are many other tests we could run, but we actually don't need to -- the source code for Firefox is freely available online. Once we download and extract the source code, we can try to find the function that handles the generation of the profile's folder name. Uncompressed, the Firefox source code is about 585MB. That's a lot of code to review. A better way to sift through all of this data is to either index it all and search it or to just recursively grep the root folder. I decided on the latter. To find out where to look first, we can try to find a constant, unique phrase around the text in question. In the above image, the string "Your user settings, preferences and other user-related data will be stored in:" is right before the path name with which we are concerned. So let's grep for that and see if we can find anything. There are many ways to grep data, but this was a quick and dirty job where I wasn't doing any parsing or modifying, so I used AstroGrep. I went ahead and searched for a watered-down version of the aforementioned unique phrase: "will be stored in." The results showed that the file named CreateProfileWizard.dtd contained this unique phrase (there were many files that were responsive, but based on the phrase's context and the filename of the file in which the phrase was found, we can determine relevancy). A snippet of the responsive "CreateProfileWizard.dtd" file containing our unique phrase. Now, it's just a matter of tracing this phrase back to its source. So we grab another unique phrase that is relevant to our discovery, such as "profileDirectoryExplanation," and see if we can find any more relevant files. Grepping for it comes back with more results -- one of which is createProfileWizard.xul.
I didn't see much in this file at first glance (though there is quite a bit of good info, such as "<wizardpage id="createProfile" onpageshow="initSecondWizardPage();">" -- which will be seen soon), so I decided to see what else was in the same directory as this file. There, I found createProfileWizard.js. A snippet of the "createProfileWizard.js" file showing the functions used to generate the profile directory names. Skimming the code, we can see that a salt table and Math.random() are used to determine the 8-character prefix. At line 68, we see the concatenation of kSaltString ('0n38y60t' in the animated gif), a period ('.'), and aName (our text input field variable -- in our example, "abcdefg" or "4n6k"). In the end, the point is that having source code available makes research more streamlined. By looking at the Firefox source, we were able to determine that the profile directory names aren't based on a telling artifact. Using what is available to you is invaluable. By having a program's source code [and approaching an issue with an analytic mindset], verifying program behavior is made so much easier. I'll end with this: open source development is GOOD -- especially for forensics. Whether it's a tool meant to output forensic artifacts or just another Firefox, we are able to more confidently prove why we are seeing what we're seeing. -Dan Pullega (@4n6k) * Special thanks to Tom Yarrish (@CdtDelta) for spawning this post.
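The JavaScript in createProfileWizard.js is Mozilla's own, but the effect of the logic described above is easy to reproduce. The sketch below is a rough Python re-implementation of the idea, not the Firefox code itself: an 8-character salt drawn at random from a salt table, joined to the profile name with a period. The salt table contents here are an assumption (digits plus lowercase letters, consistent with examples like '0n38y60t'); check createProfileWizard.js for the authoritative table.

```python
# Rough Python re-implementation of the directory-name scheme described above:
# an 8-character random salt from a salt table, then a period, then the
# profile name. The salt table below is an assumption for illustration.
import random
import string

SALT_TABLE = string.digits + string.ascii_lowercase
SALT_LENGTH = 8


def profile_dir_name(profile_name: str) -> str:
    salt = "".join(random.choice(SALT_TABLE) for _ in range(SALT_LENGTH))
    return f"{salt}.{profile_name}"


if __name__ == "__main__":
    # e.g. "x0f4k2qa.default" -- the prefix is random, not derived from the name
    print(profile_dir_name("default"))
    print(profile_dir_name("4n6k"))
```

The forensic takeaway is unchanged: the prefix is generated randomly at creation time, so it encodes neither the profile name nor a timestamp.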
Astronauts on the International Space Station began unloading cargo from the SpaceX Dragon capsule on Monday, a day after the commercially delivered capsule was attached to the station. Tom Marshburn, a space station astronaut and flight engineer, opened the hatch to the Dragon on Sunday, enabling Commander Kevin Ford of NASA and Canadian Space Agency Flight Engineer Chris Hadfield to enter the cargo craft. Ford and Hadfield began unloading 1,268 pounds of Dragon's cargo early Monday. Over the course of the next few weeks, astronauts will load 2,668 pounds of used items and experiments onto Dragon to be brought back to Earth on March 25. The docking, originally scheduled for Saturday, was pushed back a day while NASA and SpaceX engineers repaired a problem with the Dragon's thruster system. NASA and SpaceX launched the Dragon spacecraft atop a Falcon 9 rocket Friday morning from Cape Canaveral Air Force Station in Florida. However, shortly after the 10:10 a.m. Friday launch, a malfunctioning propellant valve brought down three of Dragon's four thrusters. It was late afternoon Friday before engineers got all of the thrusters, which enable the spacecraft's maneuvering and attitude control, working. At 5:27 a.m. ET on Sunday, Dragon caught up with the station and moved to within 10 meters of it. Astronauts then used one of the station's robotic arms to grab the capsule, pull it in and attach it to the Earth-facing port of the Harmony module. The capsule is scheduled to spend 22 days attached to the station. This current mission is the second of 12 SpaceX flights contracted by NASA to resupply the space station. It also will be the third trip by a Dragon capsule to the orbiting laboratory. After SpaceX made a demonstration flight in May 2012, it launched the first official resupply mission last October, delivering 882 pounds of supplies. Another successful commercial launch is an important milestone for NASA, which has depended on commercial flights since retiring the agency's fleet of space shuttles in the summer of 2011. For the foreseeable future, NASA will need commercial missions to ferry supplies, and perhaps one day astronauts, to the space station, while the space agency focuses on developing robotics and big engines in preparation for missions to the moon, asteroids and Mars. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "After robotic arm grabs SpaceX Dragon, unloading begins" was originally published by Computerworld.
What is Business Process Management? Business Process Management (BPM) is the epitome of the popular business cliché: "Work smarter, not harder." It is the practice of using computer software to assist a business with conducting its work. It allows employees to work smarter by letting the BPM system handle logistics and rudimentary processing, freeing users to focus on more high-value tasks and exception cases. The use of BPM ensures that work is handled consistently and that items arrive where they need to be, when they need to be there. Automation gives employees specific tasks, reducing time spent trying to figure out what actually needs to be done. BPM helps remove the "busy work" from your employees' task lists. Implementing BPM benefits a company in numerous ways: it ensures business rules are consistently applied to items moving through the system and allows an organization to track where items are in a process at any given time. If something gums up the works, notifications go out to the appropriate individuals. Employees and other end users benefit the most from BPM because they no longer need to remember all the rules that apply to certain processes—the system takes care of that for them.
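As a toy illustration of what "letting the system handle the logistics" means in practice, the sketch below routes work items through a fixed sequence of steps and flags anything that stalls. It is a deliberately minimal, hypothetical example, not a description of any particular BPM product; the step names and the three-day threshold are made up.

```python
# Deliberately minimal workflow sketch: items move through a fixed sequence
# of steps, the "engine" decides what happens next, and anything left waiting
# too long triggers a notification. Illustrative only.
from dataclasses import dataclass, field

STEPS = ["submitted", "review", "approval", "done"]
MAX_IDLE_DAYS = 3


@dataclass
class WorkItem:
    name: str
    step: str = "submitted"
    idle_days: int = 0
    history: list = field(default_factory=list)

    def advance(self):
        """Apply the routing rule: move to the next step and reset the idle clock."""
        nxt = STEPS[STEPS.index(self.step) + 1]
        self.history.append((self.step, nxt))
        self.step, self.idle_days = nxt, 0


def overdue(items):
    """Items that have sat too long at one step and need a notification."""
    return [i for i in items if i.step != "done" and i.idle_days > MAX_IDLE_DAYS]


if __name__ == "__main__":
    invoice = WorkItem("invoice-1042")
    invoice.advance()                 # submitted -> review
    invoice.idle_days = 5             # simulate it sitting in review
    for item in overdue([invoice]):
        print(f"notify: {item.name} stuck at '{item.step}' for {item.idle_days} days")
```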
Connecting New Users – Internet Safety First I am all for Connecting New Users, i.e. the World as long as we also Communicate the Risks and Dangers Associated with Internet Connectivity to our Children, Families, Other Users and Online Businesses - Internet Crime: Security Threats, Scams, Fraud - Ransomware, DDoS Attacks - Keeping Your Children Safe Online - Children Family, Cyber Security - Identty Theft - Predators, Paedophiles - Child Abuse, Exploitation, Grooming, Trafficking, Prostitution - Harassment, Bullying Cyberbullying, Suicide - Trolling, Flaming, Shaming - Sexting, Pornography, Child Porn, Revenge Porn – Children under 10 years - Net Addiction: Pornography, Gaming, Gambling – Children Under 10 years - Social media scams, Dating Scams, Phishing, Malware - The DarkNet We spend millions; perhaps more collectively, to grow the Internet; but FAIL to give any priority to Internet Safety Most of the following Alliances and Organisations FAIL to even consider Internet Safety Online as an Objective As Internet Safety Online Takes a Back Seat to New Users Growth; Who Suffers, Who Is Responsible? Facebook’s Internet Org - Making the Internet Affordable Internet.org is a Facebook-led initiative bringing together technology leaders, nonprofits and local communities to connect the two thirds of the world that doesn’t have internet access. - Only 1 out of every 3 people can go online - Why aren’t more people connected? - Devices are too expensive - Service plans are too expensive - Mobile networks are few and far between - Content isn’t available in the local language - People aren’t sure what value the internet will bring - Power sources are limited or costly - Networks can’t support large amounts of data - Together we can remove these barriers and give the unconnected majority of the world the power to connect Source: Internet Org Google’s Alliance for Affordable Internet (A4AI) The Alliance for Affordable Internet (A4AI) is the world’s broadest technology sector coalition. We want everyone, everywhere, to be able to access the life-changing power of the Internet affordably. Our goal is to achieve the UN Broadband Commission target of entry-level broadband priced at less than 5% of monthly income, thereby enabling billions more people to come online. To achieve this vision, we’ve assembled a powerful coalition of the willing. Through A4AI, public sector, private sector and not-for-profit organisations are coming together to create policy and regulatory solutions that drive down the cost of Internet access around the world. We know that affordability remains the primary obstacle to Internet access throughout the developing world. Experience shows that policy and regulatory reform are the best tools to unlock technological advances and dramatically reduce the cost to connect. Through a combination of advocacy, research and knowledge-sharing, A4AI drives policy change by seeking to create the conditions for open, competitive and innovative broadband markets. Key pillars of our strategy: - Consensus is key. That’s why all Alliance members have agreed on a set of policy and regulatory best practices that guide our work. - Diverse viewpoints are essential. A4AI encourages a multi-stakeholder approach to policy reform, bringing together key players from the public, private and not-for-profit sectors to co-create solutions. - There is no one-size-fits-all approach. We form powerful local coalitions to develop solutions tailored to local realities in each of our member countries. 
- Robust evidence and insights underpin progress. We invest heavily in qualitative and quantitative research, ensuring decisions are based on facts, not hunches. Outernet – Connecting the UnConnected Outernet aims to provide data to the net unconnected Syed Karim outlined his vision for satellite-beamed “libraries” of information - Can an entire library be put in your pocket? Most people would say yes. All you need is a mobile phone with access to the internet. - But what about for the many people in the world that lack internet connectivity? The answer is still yes – at least according to Syed Karim, who explained how at TEDGlobal. - The entrepreneur had been invited to the human ingenuity-themed event in Rio de Janeiro, Brazil, to speak about his company, Outernet. - The business aims to address the fact that about two-thirds of the world’s population still has no internet access. - “When you talk about the internet, you talk about two main functions – communication and information access,” he told the BBC. - “It’s the communication part that makes it so expensive.” United Nations’ International Telecommunication Union ITU) - ITU (International Telecommunication Union) is the United Nations specialized agency for information and communication technologies – ICTs. United Nations’ ITU Key Areas of Action - Climate Change - Digital Divide - Emergency Telecommunications - Gender equality - Youth and Academia The Internet Crime Fighters Organization - Connecting New Users – Internet Safety First - “It’s About CARING: Increase Awareness, Educate, Protect You, Family, and Business” - You may not, or may not know that you have a problem; but your friends may have and need your share - We would like to see Internet Safety Online as a Primary Objective of all growth initiatives and participants - Making the Internet Safe for our Children Family, Users and Business; Before “Making the Internet Affordable” (Internet Org) - Social media may be the only way to share since parental controls and browser settings my block some content from those that need it Thanks for reading Connecting New Users This Site is Blocked By Some Browsers, WOT And Parental Controls Triggered By TERMS and TOPICS of Internet Crime; Child Porn, Pornography Addiction, Sexting, Sextortion, Sexual Harassment. Children as young as 9 years old are Watching Porn and Sexting. Use the POWER of Social Media SHARING to HELP INCREASE AWARENESS of these important topics for Parents, Friends and our Children
What is IT Benchmarking? IT benchmarking is the process of using standardized software, representing a known workload, to evaluate system performance. Benchmarks are designed to represent customer workloads, such as database or Web applications. They enable a variety of hardware and software configurations to be compared. Many benchmarks are integrated with cost components, so price and performance can be evaluated. Performance benchmarks can be likened to government mileage estimates for automobiles: actual performance in a customer environment with a customer workload will be different. Just because a particular database benchmark says a configuration can support 5,000 concurrent users or 8,000 transactions per second does not mean that is what a customer will experience with their own configuration. Some planners consider it a rule of thumb that actual results are unlikely to exceed published results. The major components of a benchmark are: 1) a workload, with associated metric(s); 2) a set of conditions, commonly called "run rules"; and 3) reporting requirements. Predict Performance with Benchmarking For performance analysts and capacity planners, benchmarks can enhance the ability to estimate system hardware requirements and predict performance. Commercial capacity planning software bases its what-if analysis of performance scenarios on published benchmark results. The number of possible benchmarks is limited only by the imagination, but they fall into three general categories: 1) industry-standard; 2) vendor-oriented; and 3) customer-sponsored or internal benchmarking. Industry-Standard Benchmarking ISBs (industry-standard benchmarks) are developed, maintained and regulated by independent organizations. These benchmarks typically execute on a wide variety of hardware and software combinations. The most well-known ISB organizations are SPEC (the Standard Performance Evaluation Corporation) and the TPC (Transaction Processing Council). Typically, hardware and software vendors are heavily represented in the membership of these organizations. The groups solicit input from members and the IT community when benchmarks are created and updated to reflect changes in the marketplace. Some common ISBs are: - TPC-C, representing a database transaction workload - SPEC jAppServer, representing a multitier, Java 2 Platform, Enterprise Edition application server workload - SPEC CPU2006, representing CPU-intensive workloads - SPC (Storage Performance Council) benchmarks, representing storage-intensive workloads
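The three components listed above (a workload with a metric, run rules, and reporting) show up in even the smallest harness. The sketch below is a generic illustration of that structure; the workload function and the run-rule values are placeholders, not part of any industry-standard benchmark.

```python
# Tiny benchmark harness illustrating the components named above: a workload
# with a metric (operations per second), run rules (a warm-up phase and a
# fixed measurement window), and a small report. Placeholder workload only.
import time

WARMUP_SECONDS = 1.0      # run rule: let caches settle before measuring
MEASURE_SECONDS = 3.0     # run rule: fixed measurement window


def workload():
    """Placeholder unit of work standing in for a transaction."""
    sum(i * i for i in range(1000))


def run_benchmark():
    deadline = time.perf_counter() + WARMUP_SECONDS
    while time.perf_counter() < deadline:
        workload()                      # warm-up, not counted in the metric

    ops = 0
    start = time.perf_counter()
    deadline = start + MEASURE_SECONDS
    while time.perf_counter() < deadline:
        workload()
        ops += 1
    elapsed = time.perf_counter() - start
    print(f"window: {elapsed:.1f}s, metric: {ops / elapsed:.0f} ops/sec")


if __name__ == "__main__":
    run_benchmark()
```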
The conventional office can be a fixed and rather limiting place. Workers must all meet in the same location and start at the same time. A modern office, however, is far more flexible and promotes business on the move. It allows remote staff to work the hours that suit them. All this is achievable thanks to new technologies, such as cloud computing. Here is an overview of how this modern form of computing benefits businesses on the move. What is cloud computing? Cloud computing is a form of digital computing that is fast becoming the de facto platform for businesses both big and small, according to this article on Forbes. Unlike conventional computing, where data is stored on physical servers on site, cloud computing takes place over the internet. The technology is offered by a number of trusted companies, such as McLaren Software, and can benefit businesses in a number of ways. No fixed servers Conventional computing necessitates that you save your information on a fixed server, which is probably stored in a room down the hall from a main office. To access the data you need, you have to use a computer that is physically hooked up to this server, a factor that can severely impinge on your ability to work on the move. Today, however, with cloud computing, digital information is saved on remote servers. These servers, which are maintained and run by a third-party hosting company, can be accessed remotely. This allows you to get to the data you need wherever you want; all you need is an internet connection. In essence, this makes the office wherever you are. Increased security on the move In addition to making fixed servers a thing of the past, cloud computing also allows you to do away with other items of hardware that could limit the ability to carry out business on the move. Items such as hard drives, USB keys and cables are used in conventional computing to allow data to be carried around. However, while these items can make business on the move possible, they can also be lost, stolen or damaged, which could have huge implications for a business. Data stored on the cloud is far more secure; it can only be accessed by authorised personnel and is backed up on several remote servers. In the very rare event that cloud data is lost, the recovery process is very straightforward and fast. Cloud security is also updated automatically, so a business is given the most up-to-date security as soon as it becomes available. Other safety benefits of cloud computing are discussed here. In a conventional office, workers need to be using the same equipment in order to collaborate with each other. Cloud computing is technology neutral, however, which means that remote workers can use the systems that best suit them without fear of issues with compatibility. This can be a great benefit to a flexible business, which may have employees using disparate systems on opposite sides of the world.
The US Department of Energy (DOE) will be the most likely recipient of the initial crop of exascale supercomputers in the country. That would certainly come as no surprise, since according to the latest TOP500 rankings, the top three US machines all live at DOE labs – Sequoia at Lawrence Livermore, Mira at Argonne, and Jaguar at Oak Ridge. These exascale machines will be 100 times as powerful as the top systems today, but will have to be something beyond a mere multiplication of today's technology. While the first exascale supercomputers are still several years away, much thought has already gone into how they are to be designed and used. As a result of the dissolution of DARPA's UHPC program, the driving force behind exascale research in the US now resides with the Department of Energy, which has embarked upon a program to help develop this technology. To get a lab-centric view of the path to exascale, HPCwire asked three of the top directors at Argonne National Laboratory — Rick Stevens, Michael Papka, and Marc Snir — to provide some context for the challenges and benefits of developing these extreme-scale systems. Rick Stevens is Argonne's Associate Laboratory Director of the Computing, Environment, and Life Sciences Directorate; Michael Papka is the Deputy Associate Laboratory Director of the Computing, Environment, and Life Sciences Directorate and Director of the Argonne Leadership Computing Facility (ALCF); and Marc Snir is the Director of the Mathematics and Computer Science (MCS) Division at Argonne. Here's what they had to say: HPCwire: What does the prospect of having exascale supercomputing mean for Argonne? What kinds of applications, or application fidelity, will it enable that cannot be run with today's petascale machines? Rick Stevens: The series of DOE-sponsored workshops on exascale challenges has identified many science problems that need an exascale or beyond computing capability to solve. For example, we want to use first principles to design new materials that will enable a 500-mile electric car battery pack. We want to build end-to-end simulations of advanced nuclear reactors that are modular, safe and affordable. We want to add full atmospheric chemistry and microbial processes to climate models and to increase the resolution of climate models to get at detailed regional impacts. We want to model controls for an electric grid that has 30 percent renewable generation and smart consumers. In basic science we would like to study dark matter and dark energy by building high-resolution cosmological simulations to interpret next-generation observations. All of these require machines that have more than a hundred times the processing power of current supercomputers. Michael Papka: For Argonne, having an exascale machine means the next progression in computing resources at the lab. We have successfully housed and managed a steady succession of first-generation and otherwise groundbreaking resources over the years, and we hope this tradition continues. As for the kinds of applications exascale would enable, expect to see more multiscale codes and dramatic increases in both the spatial and temporal dimensions. Biologists could model cells and organisms and study their evolution at a meaningful scale. Climate scientists could run highly accurate predictive models of droughts at local and regional scales. Examples like this exist in nearly every scientific field.
HPCwire: The first exascale systems will certainly be expensive to buy and, given the 20 or so megawatts power target, even more expensive to run over the machine’s lifetime – almost certainly more expensive than the petascale systems of today. How is the DOE going to rationalize spending increasing amounts of money to fund the work for essentially a handful of applications? Do you think it will mean there will be fewer top systems across the DOE than there have been in the past? Marc Snir: There is a clear need to have open science systems as well as NNSA systems. And though power is more expensive and the purchase price may be higher, amortization is spread across more years as Moore’s Law slows down. We already went from doubling processor complexity every two years to doubling it every three. This may also enable better options for mid-life upgrades. A supercomputer is still cheap compared to a major experimental facility, and yields a broader range of scientific discoveries. Stevens: DOE will need a mix of capability systems — exascale and beyond — as well as many capacity systems to serve the needs of DOE science and engineering. DOE will also need systems to handle increasing amounts of data and more sophisticated data analysis methods under development. The total cost, acquisition and operating, will be bounded by the investments DOE is allowed to make in science and national defense. The push towards exascale systems will make all computers more power efficient and therefore more affordable. Papka: The outcome of the science is the important component. Research being done on DOE open science supercomputers today could lead to everything from more environmentally-friendly concrete to safer nuclear reactor designs. There is no real way to predict or quantify the advancements that any specific scientific discovery will bring. An algorithm developed today may enable a piece of code that runs a simulation that leads to a cure for cancer. The investment has to be made. HPCwire: So does anyone at Argonne, or the DOE in general, believe money would be better spent on more petascale systems and fewer exascale systems because of escalating power costs and perhaps an anticipated dearth of applications that can make use of such systems? Snir: It is always possible to partition a larger machine; however, it is impossible to assemble an exascale machine by hooking together many petascale machines. The multiple DOE studies on exascale applications in 2008 and 2009 have clearly shown that progress in many application domains depends on the availability of exascale systems. While a jump of a factor of 1,000 in performance may seem huge, it is actually quite modest from the viewpoint of applications. In a 3D mesh code, such as used for representing the atmosphere in a climate simulation, this increase in performance enables refining meshes by a factor of less than 6 (1000^(1/4) ≈ 5.6), since the time scale needs to be equally refined. This assumes no other changes. In fact, many other changes are needed when precision increases (that is, to better represent clouds) or when ensemble runs are used to quantify uncertainty. It is sometimes claimed that many petascale systems may be used more efficiently than one exascale system since ensemble runs are “embarrassingly parallel” and can be executed on distinct systems. However, this is a very inefficient way of running ensembles. 
One would input all the initialization data many times, and one would not take advantage of more efficient methods for sampling the probability space. Another common claim heard is that “big data” will replace “big computation.” Nothing could be further from the truth. As we collect increasingly large amounts of data through better telescopes, better satellite imagery, and better experimental facilities, we need increasingly powerful simulation capabilities. You are surely familiar with the aphorism: “All science is either physics or stamp collecting.” What I think Ernest Rutherford meant by that is that scientific progress requires the matching of deductions made from scientific hypotheses to experimental evidence. A scientific pursuit that only involves observation is “stamp collection.” As we study increasingly complex systems, this matching of hypothesis to evidence requires increasingly complex simulations. Consider, for example, climate evolution. A climate model may include tens of equations and detailed description of initial conditions. We validate the model by matching its predictions to past observations. This match requires detailed simulations. The complexity of these simulations increases rapidly as we refine our models and increase resolution. More detailed observations are useful only to the extent they enable better calibration of the climate models; this, in turn, requires a more detailed model, hence a more expensive simulation. The same phenomenon occurs in one discipline after another. It is also important to remember that research on exascale will be hugely beneficial to petascale computing. If an exascale consumes 20 megawatts, then a petascale system will consume less than 20 kilowatts and become available at the departmental level. If good software solutions for resilience are developed as part of exascale research, then it becomes possible to build petascale computers out of less reliable and much cheaper components. Papka: As we transition to the exascale era the hierarchy of systems will largely remain intact, so the advances needed for exascale will influence petascale resources and so on down through the computing space. Exascale resources will be required to tackle the next generation of computational problems. HPCwire: How is the lab preparing for these future systems? And given the hardware architecture and programming models have not been fully fleshed out, how deeply can this preparation go? Snir: Exascale systems will be deployed, at best, a decade from now – later if funding is not provided for the required research and development activities. Therefore, exascale is, at this stage, a research problem. The lab is heavily involved in exascale research, from architecture, through operating systems, runtime, storage, languages and libraries, to algorithms and application codes. This research is focused in Argonne’s Mathematics and Computer Science division, which works closely with technical and research staff at the Argonne Leadership Computing Facility. Both belong to the directorate headed by Rick Stevens. Technology developed in MCS is now being deployed on Mira, our Blue Gene/Q platform. The same will most likely be repeated in the exascale timeframe. The strong involvement of Argonne in exascale research increases our ability to predict the likely technology evolution and prepare for it. It increases our confidence that exascale is a reachable target a decade from now. 
Preparations will become more concrete 4 to 6 years from now, as research moves to development, and as exascale becomes the next procurement target. Stevens: While the precise programming models are yet to be determined, we do know that data motion is the thing we have to reduce to enable lower power consumption, and that data locality (both vertically in the memory hierarchy and horizontally in the internode sense) will need to be carefully managed and improved. Thus we can start today to think about new algorithms that will be “exascale ready” and we can build co-design teams that bring together computer scientists, mathematicians and scientific domain experts to begin the process of thinking together how to solve these problems. We can also work with existing applications communities to help them make smart choices about rewriting their codes for near term opportunities such that they will not have to throw out their codes and start again for exascale systems. Papka: We learn from each system we use, and continue to collaborate with our research colleagues in industry. Argonne, along with Lawrence Livermore National Laboratory, partnered with IBM in the design of the Blue Gene/P and Q. Argonne has partnerships with other leading HPC vendors too, and I’m confident that these relationships with industry will grow as we move toward exascale. The key is to stay connected and move forward with an open mind. The ALCF has developed a suite of microkernels and mini- and full-science DOE and HPC applications that allow us to study performance on both physical and virtual future-generation hardware. To address future programming model uncertainty, Argonne is actively involved in defining future standards. We are, of course, very involved in the MPI forum, as well as in the OpenMP forum for CPUs and accelerators. We have been developing benchmarks to study performance and measure characteristics of programming runtime systems and advanced and experimental features of modern HPC architectures. HPCwire: What type of architecture is Argonne expecting for its first exascale system — a homogeneous Blue Gene-like system, a heterogeneous CPU+accelerator-based machine, or something else entirely? Snir: It is, of course, hard to predict how a top supercomputer will look ten years from now. There is a general expectation that future high-end systems will use multiple core types that are specialized for different types of computation. One could have, for example, cores that can handle asynchronous events efficiently, such as OS or runtime requests, and cores that are optimized for deep floating point pipelines. One could have more types of cores, with only a subset of the cores active at any time, as proposed by Andrew Chien and others. There is also a general assumption that these cores will be tightly coupled in one multichip module with shared-memory type communication across cores, rather than having an accelerator on an I/O bus. Intel, AMD and NVIDIA all have or have announced products of this type. Both heterogeneity and tight coupling at the node level seem to be necessary in order to improve power consumption. The tighter integration will facilitate finer grain tasking across heterogeneous cores. Therefore, one will be able to largely handle core heterogeneity at the compiler and runtime level, rather than the application level. 
The execution model of an exascale machine should be at a higher level – dynamic tasking across cores and nodes – at a level where the specific architecture of the different cores is largely hidden, in the same way that the specific architecture of a core (x86 versus Power, for example) is largely hidden from the execution model viewed by programmers and most software layers now. Therefore, we expect that the current dichotomy between monolithic systems and CPU-plus-accelerator-based systems will not be meaningful ten years from now. Stevens: To add to Marc’s comments, we believe there will be additional capabilities that some systems might have in the next ten years. One strategy for reducing power is to move compute elements closer to the memory. This could mean that new memory designs will have programmable logic close to the memory such that many types of operations could be offloaded from the traditional cores to the new “smart memory” systems. Similar ideas might apply to the storage systems, where operations that now require moving data from disk to RAM to CPU and back again might be carried out in “smart storage.” Finally, while current large-scale systems have occasionally put logic into the interconnection network to enable things like global reductions to be executed without using the CPU functional units, we could imagine that future systems might have a fair amount of computing capability in the network fabric, again to try to reduce the need to move data more than necessary. I think we have learned that tightly integrated systems like Blue Gene have certain advantages: fewer types of parts, the lowest power consumption in their class, and very high metrics such as bisection bandwidth relative to compute performance, which let them perform extremely well on benchmarks like Graph 500 and Green500. They are also highly reliable. The challenge will be to see if in the future we can get systems that combine the strengths needed to be affordable, reliable, and programmable while keeping power consumption low. HPCwire: How about the programming model? Will it be MPI+X, something more exotic, or both? Snir: Both. It will be necessary to run current codes on a future exascale machine – too many lines of code would be wasted otherwise. Of course, the execution model of MPI+X may be quite different in ten years than it is now: MPI processes could be much lighter-weight and migratable, the MPI library could be compiled and/or accelerated with suitable hardware, etc. On the other hand, it is not clear that we have an X that can scale to thousands of threads, nor do we know how an MPI process can support such heavy multithreading. It is clear, however, that running many MPI processes on each node is wasteful. It is also still unclear how current programming models provide resilience and help reduce energy consumption. We do know that using two or three programming models simultaneously is hard. Research on new programming models, and on mechanisms that facilitate the porting of existing code to new programming models, is essential. Such research, if pursued diligently, can have a significant impact ten years from now. 
Our research focus in this area is to provide a deeper stack of programming models, from DSLs to low-level programming models, thus enabling different programmers to work at different levels of abstraction; to support automatic translation of code from one level to the next lower level, but ensure that a programmer can interact with the translator so as to guide its decisions; to provide programming models that largely hide heterogeneity – both the distinction between different types of cores and the distinction between different communication mechanisms, that is, shared memory versus message passing; to provide programming notations that facilitate error isolation and thus enable local recovery from failures; and to provide a runtime that is much more dynamic than what is currently available, in order to cope with hardware that continuously changes due to power management and frequent failures. Stevens: An interesting question in programming models is whether we will get an X or perhaps a Y that integrates “data” into the programming model — so we have MPI + X for simulation and MPI + Y for data-intensive computing — such that we can move smoothly to a new set of programming models that, while they retain continuity with existing MPI codes and can treat them as a subset, will provide fundamentally more power to developers targeting future machines. Ideally, of course, we would have one programming notation that is expressive for the applications, or a good target to compile domain-specific languages to, and at the same time can be effectively mapped onto a high-performance execution model and ultimately real hardware. The simpler we can make the X’s or Y’s, the better for the community. A big concern is that some in the community might be assuming that GPUs are the future and waste considerable time trying to develop GPU-specific codes which might be useful in the near term but probably not in the long term, for the reasons already articulated. That would suggest that X is probably not something like CUDA or OpenCL. HPCwire: The DOE exascale effort appears to have settled on co-design as the focus of the development approach. Why was this approach undertaken and what do you think its prospects are for developing workable exascale systems? Papka: It’s extremely important that the delivered exascale resources meet the needs of the domain scientists and their applications; therefore, effective collaboration with system vendors is crucial. The collaboration between Argonne, Livermore, and IBM that produced the Blue Gene series of machines is a great example of co-design. In addition to discussing our system needs, we as the end users know the types of DOE-relevant applications that both labs would be running on the resource. Co-design works, but requires lots more communication and continued refinement of ideas among a larger-than-normal group of stakeholders. Snir: The current structure of the software and hardware stack of supercomputers is more due to historical accidents than to principled design. For example, the use of a full-bodied OS on each node is due to the fact that current supercomputers evolved from server farms and clusters. A clean sheet design would never have mapped tightly coupled applications atop a loosely coupled, distributed OS. 
The incremental, ad-hoc evolution of supercomputing technology may have reduced the incremental development cost of each successive generation, but it has also created systems that are increasingly inefficient in their use of power and transistor budgets and increasingly complex and error-prone. Many of us believe that “business as usual” is reaching the end of its useful life. The challenges of exascale will require significant changes both in the underlying hardware architecture and in the many layers of software above it. “Local optimizations,” whereby one layer is changed with no interaction with the other layers, are not likely to lead to a globally optimal solution. This means that one needs to consider jointly the many layers that define the architecture of current supercomputers. This is the essence of co-design. While current co-design centers are focused on one aspect of co-design, namely the co-evolution of hardware and applications, co-design is likely to become increasingly prevalent at all levels: co-design of hardware, runtime, and compilers, for example. This is not a new idea: the “RISC revolution” entailed hardware and compiler co-design. Whenever one needs to effect a significant change in the capabilities of a system, it becomes necessary to reconsider the functionality of its components and their relations. The supercomputer industry is also going through a “co-design” stage, as shown by Cray’s sale of its interconnect technology to Intel. The division of labor between various technology providers and integrators ten years from now could be quite different than it is now. Consequently, the definition of the subsystems that compose a supercomputer and of the interfaces across subsystem boundaries could change quite significantly. Stevens: I believe that we will not reach exascale in the near term without an aggressive co-design process that makes visible to the whole team the costs and benefits of each set of decisions on the architecture, software stack, and algorithms. In the past it was typically the case that architects could use rules of thumb from broad classes of applications or benchmarks to resolve design choices. However, many of the tradeoffs in exascale design are likely to be so dramatic that they need to be accompanied by an explicit agreement between the parties that they can work within the resulting design space and avoid producing machines that might technically meet some exascale objective but be effectively useless to real applications.
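Snir's refinement arithmetic above is easy to make concrete. The following back-of-the-envelope sketch is an editorial illustration, not part of the interview; it assumes an explicit time-stepping code on a 3D mesh whose time step must shrink in proportion to the mesh spacing (a CFL-type constraint), which is why a 1,000-fold jump in performance buys only about a 5.6-fold increase in resolution.

```python
# Back-of-the-envelope scaling: how much finer can a mesh get with 1,000x more flops?
# Assumes an explicit time-stepping code on a d-dimensional mesh where the time step
# must shrink in proportion to the mesh spacing (CFL-type constraint), so cost grows
# as refinement**(d+1).

def refinement_factor(speedup: float, spatial_dims: int = 3) -> float:
    """Factor by which the mesh spacing can be refined for a given speedup."""
    return speedup ** (1.0 / (spatial_dims + 1))

for speedup in (1e3, 1e6):
    r = refinement_factor(speedup)
    print(f"{speedup:.0e}x flops -> mesh refined by about {r:.1f}x in each dimension")
# 1e+03x flops -> mesh refined by about 5.6x in each dimension
# 1e+06x flops -> mesh refined by about 31.6x in each dimension
```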
<urn:uuid:d1716723-5e01-457c-9935-b7c2b7c57726>
CC-MAIN-2017-09
https://www.hpcwire.com/2012/06/21/exascale_computing:_the_view_from_argonne/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00378-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946999
4,490
2.859375
3
A group of silkworm moths, coached by researchers at the University of Tokyo, just took a driving test. Instead of their moms' old minivans, however, they were given another machine: a robot. The idea of all this wasn't so much to test the robomoths' driving capabilities -- moths are notoriously aggressive drivers, after all, and their tendency to leave their turn signals on for miles on end is well-documented -- but rather to test the creatures' ingrained tracking behaviors. The idea from there was to (potentially) apply those natural impulses to man-made robots. Moths track smells effortlessly; for a robot, though, that kind of impulse is difficult to engineer. But an autonomous device that is capable of sensing smells and then tracking them to their sources -- say, to identify environmental spills and leaks -- could prove hugely valuable. So researchers made a makeshift robot with a styrofoam ball that functions, effectively, like a trackball mouse. They then attached moths to the device (this was, for the moths, ostensibly the worst part of the test), converting the robot into a comically large entomological exoskeleton. The moths were then let loose. Their destination? Lady moths. Or, in this case, a female sex pheromone that acted as a fairly cruel simulation of an actual lady moth -- a smell whose source was located at the opposite end of a simple obstacle course. As the moth walked toward the pheromone, the foam ball rolled accordingly, and sensors -- attached to the robot's drive motors -- fired off signals according to those movements.
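The article gives no implementation details, but the control idea (read the ball's rotation, turn it into wheel commands) is simple to sketch. The snippet below is an illustrative guess at that mapping for a generic two-wheeled robot; the sensor values, gain and motor interface are invented and are not the Tokyo team's actual hardware.

```python
# Illustrative mapping from trackball motion to differential-drive wheel speeds.
# dx/dy would come from the optical sensors reading the foam ball; the motor
# interface is a hypothetical stand-in, not the researchers' real setup.

def ball_to_wheels(dx: float, dy: float, gain: float = 1.0) -> tuple[float, float]:
    """dy = forward roll of the ball, dx = sideways roll (positive = rightward)."""
    forward = gain * dy        # forward roll drives both wheels equally
    turn = gain * dx           # sideways roll becomes a wheel-speed difference
    left = forward + turn      # to turn right, speed up the left wheel...
    right = forward - turn     # ...and slow down the right wheel
    return left, right

# Example: the moth walks mostly forward while drifting slightly to the left.
left, right = ball_to_wheels(dx=-0.2, dy=1.0)
print(left, right)   # 0.8 1.2 -> the robot curves gently to the left
```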
<urn:uuid:d49510bc-43bb-4090-a9ea-bfa0ce0a503d>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2013/02/good-science-bad-your-nightmares-moths-drive-robots/61138/?oref=ng-skybox
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00554-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969893
337
3.078125
3
InfiniBand goes the distance - By Joab Jackson - Dec 23, 2008 Researchers at the Energy Department's Oak Ridge National Laboratory have shown that InfiniBand can be used to transport large datasets via a dedicated network thousands of miles in length with a throughput unmatched by high-speed TCP/IP connections. In a test setup, researchers were able to achieve an average throughput of 7.34 gigabits/sec between two machines at each end of the 8,600-mile optical link. In contrast, the throughput of such traffic using a tweaked high-throughput version of TCP, called HTCP (Hamilton TCP), was 1.79 gigabits/sec at best. Oak Ridge researcher Nageswara Rao presented a paper on the group's work, "Wide-Area Performance Profiling of 10GigE and InfiniBand Technologies," at the SC08 conference last month. Increasingly, DOE labs are finding they need to move large files over long distances. In the next few months, for instance, CERN's Large Hadron Collider will start operation, generating petabytes of data that will cross the Atlantic Ocean to DOE labs and U.S. academic institutions. Rao said difficulties abound with large data transfers over high-speed wide-area networks, including packet conversion from storage networks and the complex task of TCP/IP tuning. "The task of sustaining end-to-end throughput…over thousands of miles still remains complex," the researchers wrote in the paper. Although InfiniBand interconnects are widely used in high-performance computer systems, they aren't usually deployed to carry traffic long distances. Instead, the traffic is typically converted into TCP/IP packets at the edge of each endpoint, either by 10 Gigabit Ethernet (10GigE) or some other protocol, and converted back to InfiniBand at the other end. However, a few vendors, such as Obsidian Research and Network Equipment Technologies, have started offering InfiniBand-over-wide-area devices, which allow the traffic to stay in InfiniBand for the whole journey. Oak Ridge officials wanted to test how well those long-distance InfiniBand connections could work in comparison to some specialized forms of TCP/IP over 10 Gigabit Ethernet. Using DOE’s experimental circuit-switched test-bed network UltraScienceNet, the researchers set up a 10 Gigabit optical link that stretched 8,600 miles round-trip between Oak Ridge — which is outside Knoxville, Tenn. — and Sunnyvale, Calif., via Atlanta, Chicago and Seattle. At each endpoint, they set up a series of Obsidian Research's Longbow XR InfiniBand switches, which run InfiniBand over the wide area. The network was a dual OC-192 Synchronous Optical Network, which could support a throughput as fast as 9.6 gigabits/sec. Overall, the researchers found that InfiniBand worked well at transferring large files across great distances via a dedicated network. For shorter distances, HTCP ruled: It could convey 9.21 gigabits/sec over 0.2 mile, compared to InfiniBand’s 7.48 gigabits/sec. But as the distance between the two endpoints grew, HTCP's performance deteriorated. In contrast, InfiniBand’s throughput stayed pretty steady as the mileage increased. However, Rao said HTCP was more resilient on networks that carry additional traffic. That is not surprising because TCP/IP was designed for time-sharing networks — that is, networks that carry traffic between multiple endpoints. Tweaking TCP/IP to take full advantage of a dedicated network takes considerable work and still might not produce optimal results, Rao added. 
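Part of why sustaining TCP throughput over such a distance is so hard is the bandwidth-delay product: the sender must keep the whole "pipe" full of unacknowledged data. A rough, illustrative calculation, using only the link length and OC-192 rate quoted above plus a textbook figure for the speed of light in fiber:

```python
# Rough bandwidth-delay product for the 8,600-mile dedicated circuit.
# The ~2e8 m/s propagation speed (about 2/3 of c in fiber) is a textbook
# approximation, not a measurement from the Oak Ridge paper.

link_miles = 8600                 # round-trip length of the test circuit
link_m = link_miles * 1609.34
v_fiber = 2e8                     # m/s; traversing the loop once ~= the round-trip time
rtt_s = link_m / v_fiber

rate_bps = 9.6e9                  # usable OC-192 rate quoted in the article

bdp_bytes = rate_bps * rtt_s / 8
print(f"RTT ~ {rtt_s * 1e3:.0f} ms, bandwidth-delay product ~ {bdp_bytes / 1e6:.0f} MB")
# -> roughly 69 ms and ~83 MB that a sender must keep in flight to fill the pipe,
# which is why untuned TCP windows (and any packet loss) cripple long-haul throughput.
```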
In conclusion, the researchers found that InfiniBand "somewhat surprisingly [offers] a potential alternate solution for wide-area data transport." The Defense Department and DOE’s High Performance Networking Program supported the research. Joab Jackson is the senior technology editor for Government Computer News.
<urn:uuid:fc70995e-dd77-4abe-b746-17f18e40d18b>
CC-MAIN-2017-09
https://gcn.com/Articles/2008/12/23/InfiniBand-goes-the-distance.aspx?Page=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00078-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941866
834
2.8125
3
The transistor should become part of wireless communications chips running at about 150GHz in about two years and promises better Internet connectivity and lower power consumption, according to IBM. Microchips typically hold millions of transistors. The speed of transistors is determined by how fast electrons pass through them, which in turn is dependent on the material used and the distance the electrons must travel. For its 350GHz transistor, IBM used its silicon germanium, or SiGe, bipolar technology. In bipolar transistors electrons travel vertically, as opposed to horizontally in standard complementary metal oxide semiconductor (CMOS) transistors. IBM reduced the height of the transistors to shorten the path for the electrons. Germanium added to silicon speeds the electrical flow, improves performance and reduces power consumption, IBM said.
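The link between a shorter electron path and a faster transistor can be illustrated with a simple transit-time estimate. The numbers below are generic textbook values chosen for illustration, not IBM's actual SiGe device parameters:

```python
# Illustrative transit-time estimate: f_T ~ 1 / (2 * pi * tau), with tau ~ distance / velocity.
# The saturation velocity and path lengths are generic values, not IBM's device parameters.
import math

V_SAT = 1e5   # m/s, rough electron saturation velocity in silicon (1e7 cm/s)

def cutoff_frequency_hz(transit_distance_m: float) -> float:
    tau = transit_distance_m / V_SAT
    return 1.0 / (2.0 * math.pi * tau)

for d_nm in (100, 45, 25):
    f_ghz = cutoff_frequency_hz(d_nm * 1e-9) / 1e9
    print(f"{d_nm:>4} nm transit path -> roughly {f_ghz:.0f} GHz")
# Shortening the electrons' vertical path is what pushes operating frequencies
# into the hundreds of gigahertz.
```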
<urn:uuid:1e788b45-b897-404f-a521-654d30ce45ba>
CC-MAIN-2017-09
http://www.computerweekly.com/news/2240048252/IBM-speeds-up-transistor-for-wireless-chips
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00254-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916495
187
3.53125
4
Typically the stuff of mystery, a real flying saucer could appear over the Hawaiian island of Kauai later this week, but it won't be coming from outer space. The rocket-powered, saucer-like craft is part of a NASA project that could aid missions to other planets. The craft, called the Low-Density Supersonic Decelerator (LDSD), looks something like a cross between a flying saucer and a large doughnut. The flight test will be the first time engineers get to test it out in conditions that are similar to those in which it is designed to operate. It will be carried by a high-altitude helium balloon to about 120,000 feet, which is more than three times as high as commercial airliners fly. Once there, it will be dropped from the balloon, and if all goes according to plan, four rocket motors will fire and gyroscopically stabilize the saucer. Then, a rocket engine will send it even higher, to the edge of the stratosphere. Engineers expect it to hit Mach 4 (four times the speed of sound) when it gets to 180,000 feet, at which point newly developed atmospheric braking systems should deploy. "Our goal is to get to an altitude and velocity which simulates the kind of environment one of our vehicles would encounter when it would fly in the Martian atmosphere," NASA engineer Ian Clark said in a statement. Engineers envision using the LDSD's experimental deceleration technologies on future spacecraft that will travel to Mars and other planets. Such craft need a way to rapidly decelerate after entering the atmosphere and before hitting the surface. The flight will take place between 8:15 a.m. and 9:30 a.m. local time (18:15 to 19:30 GMT) on a day between June 28 and July 1, or on July 3, the space agency said Tuesday. It will be televised on NASA TV and carried on Ustream. Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams.
<urn:uuid:add2778c-2890-4b65-af8b-149691613fce>
CC-MAIN-2017-09
http://www.computerworld.com/article/2491209/data-center/nasa-to-send-flying-saucer-on-first-flight-this-week.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00606-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95258
459
3.0625
3
What's a "smart city"? It's a fair question, but a hard one to answer. Many larger municipalities have embraced the "smart city" concept in recent years, but definitions of the term -- and examples of the ways technology is being used to make cities "smart" -- run the gamut. Mayors and city CIOs usually talk about using sensors to, say, wirelessly manage streetlights and traffic signals to lower energy costs, and they can provide specific returns on investment for such initiatives -- x millions of dollars saved over x amount of time, for example. Other examples include using sensors to monitor water mains for leaks (and thereby reduce repair costs), or to monitor air quality for high pollution levels (which would yield information that would help people with asthma plan their days). Police can use video sensors to manage crowds or spot crimes. Or sensors might determine that a parking lot is full, and then trigger variable-message street signs to direct drivers to other lots. Smart cities as places for fun Those are some of the countless practical examples. But smart cities can also be fun. In Bristol, England, a custom-built infrared sensor system was added to street lamps for a few weeks in late 2014 to record the shadows of pedestrians walking by. The shadows were then projected back through the streetlights for others walking by later to see. Called "Shadowing" and developed by Jonathan Chomko and Matthew Rosier, the initiative was intended as a public art installation. A winner of a Playable City Award, "Shadowing" helps illustrate how broad and elusive the definition of "smart city" has become. That's a good thing. "A smart city shouldn't just save money, but should also be attractive and fun to live in," said Carl Piva, vice president of strategic programs at TM Forum, a global nonprofit association with 950 member organizations whose aim is to guide research into digital business transformation, including smart city initiatives. "Being a smart city is more than being efficient and involves turning it around to make it fun," Piva said. The Bristol "Shadowing" project was discussed at a recent forum in Yinchuan, China, attended by politicians and technology experts from around the world, Piva said. It was introduced by Paul Wilson, managing director of Bristol Is Open, a joint venture of the Bristol City Council and the University of Bristol that's devoted to creating an "open, programmable city region" made possible by fast telecom networks and the latest software and hardware. "Many smart city projects don't have immediate ROI attached," Piva said. "My personal reflection is that technology of the future will become more and more invisible to individuals, and the best success criteria will be people not really even noticing the technology. For the time being, that means seeing a lot of technology trying to talk to us or engage with us in various ways. Every city mayor and everybody running for election is now invested in making his city smart. You sort of need to attract businesses and want to attract individuals with talent and make it a prosperous place, to make it livable and workable." Piva admitted that "smart city" is a broad concept and a lot to take in, especially for average taxpayers who must foot the bill for smart city projects. "It's a topic very high up on everybody's mind, and it's a question of which pathway you use to get there," he said. "Different leaders focus in different directions." 
Piva said he has noticed that some cities want to focus on building technology communities, which seems to be a significant part of what Kansas City, Mo., is doing with an innovation corridor coming to an area with a new 2.2-mile streetcar line. Other cities, especially in Brazil, are using technology to focus on fostering tourism, Piva said. "The common element of smart cities is the citizen and the need to have citizens involved and feel at home," he explained. Over and over, city officials talk about the smart city as needing to provide "citizen engagement." China's focus on smart cities China, which has multiple cities with more than 10 million residents each, has pushed forward with a variety of smart technologies, some that might rankle Americans because of the potential privacy risks they raise. Piva said there are nearly 300 pilot smart city projects going on in a group of municipalities in the middle of the vast nation. "If you jump on a bus, you may encounter facial recognition, which will be used to determine whether you have a bus permit," he said. The city of Yinchuan has reduced the size of its permitting work force from 600 employees to 50 by using a common online process accessible to citizens who need anything from a house-building permit to a driver's license, Piva said. While Yinchuan's payback on new permitting technology is easy to determine, "a lot of these ROIs are really hard to calculate," Piva admitted. A stark contrast to Yinchuan's smart city initiative, which has a concrete monetary ROI, is in Dubai. Officials in that United Arab Emirates city are building a "happiness meter," which will collect digital inputs from ordinary citizens on their reactions to various things. It could be used to evaluate the combined impact of the cleanliness of streets and the effectiveness of security checkpoints with an assortment of other measures. In some cities, citizen inputs regarding happiness may come from smartphones. But they also could come from digital polling stations. For example, users of airport bathrooms might click a happy face button at a kiosk if they thought the bathrooms were clean. The theory behind happiness meters is that, if municipal officials can capture data from citizens about what it's like to live in a city, "people will be more successful and take care of the community better," Piva said. However, he acknowledged, "it's a hard ROI to measure and takes lots of different touchpoints." A working definition of smart city Ask just about any city official or technologist working for a city, and you are likely to get many different examples of a smart city. A strict definition is even harder to nail down. Jack Gold, an analyst at J. Gold Associates, took a stab at a comprehensive definition but only after first jabbing at the broad ways the concept is used. "'Smart city' is one of those all-encompassing terms that everyone defines however they want," he said.
<urn:uuid:d227cb34-5881-43b3-a87c-cf6d62c1f388>
CC-MAIN-2017-09
http://www.cio.com.au/article/585801/just-what-smart-city/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00126-ip-10-171-10-108.ec2.internal.warc.gz
en
0.966821
1,323
2.625
3
60GHz: A Frequency to Watch It's now likely that 60GHz will become the next big frequency in the wireless world, with both short-range and wider area applications ahead for the tiny beams of this unlicensed millimeter radio technology. The frequency -- part of the V-Band frequencies in the US -- is considered among the millimeter radio (mmWave) bands. Millimeter wave radios ride on frequencies from 30GHz to 300GHz. Until recently, 60GHz has typically been used for military communications. (See 60GHz Giddyup.) Recent acquisitions by massive technology players indicate growing interest in the technology and the associated patents. Qualcomm Inc. (Nasdaq: QCOM) bought Wilocity recently to combine 60GHz WiGig technology with WiFi. Google (Nasdaq: GOOG) bought Alpental, a startup that, according to one of its founders, is using 60GHz to develop a "hyper scalable mmWave networking solution for dense urban nextGen 5G & WiFi." (See Qualcomm Advances WiGig With Wilocity Buy and Google Buys Alpental for Potential 5G Future.) Why 60GHz, and why now? Here are a few pointers for you. WiGig: A new short-range wireless specification -- using the Institute of Electrical and Electronics Engineers Inc. (IEEE) 802.11ad specification -- that can link devices at up to 7 Gbit/s over a distance of up to 12 meters. That's 10 times faster than the current 802.11n WiFi, though with less range. This makes the technology ideal for wirelessly delivering high-definition video in the home. The Wi-Fi Alliance is expecting WiGig-certified products to arrive in 2015. (See Wi-Fi Alliance, WiGig Align to Make WiFi Super Fast.) Wireless backhaul: Particularly for small cells, operators can use the 60GHz radios to connect small cells to a fiber hub. (See More Startups Target Small-Cell Backhaul.) Wireless bridges: These are useful for providing extra capacity at events, ad-hoc networks, and private high-speed enterprise links. (See Pushing 60.) Wireless video: Some startups have jumped the gun on the WiGig standard and plowed ahead with their own 60GHz video connectivity using the Sony-backed WirelessHD standard. A global unlicensed band exists at 57-64GHz. It is largely uncongested compared to the 2.4GHz and 5GHz public bands currently used for WiFi. (See FCC to Enable Fast Streaming With New 60GHz Rules.) There's also a lot of it. "The 60 GHz band boasts a wide spectrum of up to 9GHz that is typically divided into channels of roughly 2GHz each," Intel Corp. (Nasdaq: INTC)'s LL Yang wrote in an article on the prospects for the wide-area and short-range use of the technology. Spectrum availability is "unmatched" by any of the lower-frequency bands. The spectrum is now open and approved for use across much of the world. This includes the US, Europe, and much of Asia, including China. Here's a spectrum map from Agilent on the band's global availability. As we've already seen, 60GHz technology is expected to offer blazing wireless transmission speeds. Issues with 60GHz No technology is ever perfect, right? Transmissions at 60GHz have less range for a given transmit power than 5GHz WiFi, because of path loss as the electromagnetic wave moves through the air, and 60GHz transmissions can struggle to penetrate walls. There is also a substantial RF oxygen absorption peak in the 60GHz band, which gets more pronounced at ranges beyond 100 meters, as Agilent notes in a paper on the technology. Using a high-gain adaptive antenna array can help make up for some of these issues with using 60GHz for wider area applications. 
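The range penalty can be put in numbers with the standard free-space path loss formula plus a rough allowance for oxygen absorption. The sketch below uses textbook formulas and a commonly quoted ballpark of about 15 dB/km for the oxygen peak; it is illustrative only and not drawn from Agilent's paper or any vendor data:

```python
# Free-space path loss at 5 GHz vs 60 GHz, plus a rough oxygen-absorption term.
# FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55; the ~15 dB/km oxygen figure
# is a commonly quoted ballpark for the 60 GHz band, not a vendor measurement.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def loss_60ghz_db(distance_m: float) -> float:
    oxygen_db = 15.0 * distance_m / 1000.0   # ~15 dB per km near the absorption peak
    return fspl_db(distance_m, 60e9) + oxygen_db

for d in (10, 100, 500):
    print(f"{d:>4} m:  5 GHz {fspl_db(d, 5e9):6.1f} dB   60 GHz {loss_60ghz_db(d):6.1f} dB")
# The ~21.6 dB gap from frequency alone (20*log10(60/5)) is why 60 GHz links rely on
# high-gain, narrow-beam antennas to claw the range back.
```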
Some vendors have also argued that there are potential advantages for the technology over omnidirectional systems. "The combined effects of O2 absorption and narrow beam spread result in high security, high frequency re-use, and low interference for 60GHz links," Sub10 Systems Ltd. notes. Next time, we'll look at some of the key private and startup companies looking to ride the 60GHz wave. — Dan Jones, Mobile Editor, Light Reading
<urn:uuid:9eba09e4-6936-4780-90b5-9ede75f5235b>
CC-MAIN-2017-09
http://www.lightreading.com/mobile/backhaul/60-ghz-a-frequency-to-watch/d/d-id/709910?_mc=RSS_LR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00123-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925382
899
2.625
3
The same concepts that have led to open source rockin' the software world have spawned the beginning of a revolution in biotech. An organization called Biofab, funded by the NSF and run through teams at Stanford and Berkeley, is applying open development approaches to creating building blocks (BioBricks™ from the BioBricks Foundation) for the bio products of the future. Now, the first of those building blocks (based on E. coli) are just rolling off the production line. This, according to the organizers, represents "a new paradigm for biological research." At its basis, Free and Open Source Software (FOSS) is about sharing and collaborating. The purpose of the open source licenses of which RMS (as Richard Stallman is known to fellow hackers) conceived was to ensure that users of software could have the freedom to use, modify and share the software as they wished. What has evolved is an enormous stock of freely available building blocks (about half a million by Black Duck's count) that make for faster, better, cheaper creation of software. This goal is behind Biofab: to create biological building blocks that can be assembled into an unimaginable plethora of applications. Somewhat in contrast to the philosophical grassroots motivations that have gotten software development to this point, it's being driven by economic motivations, and there's some real money behind the project from the outset. It can cost tens of millions of dollars to create a single microbe that can do useful work because the current process is like creating a software application using machine language. The geniuses behind Biofab are clearly modeling much of what they do on the FOSS model. Stanford's Drew Endy, Biofab director, talks about what they are building as a "biological operating system." In fact the name (Biofab International Open Facility Advancing Biotechnology) is oddly recursive like GNU (which stands for GNU's Not Unix). OK, but biology and software? You can't download even microscopic bugs over the internet, can you? Well, since Watson and Crick discovered the double-helix, you kinda can. Sequences of genes in nucleic acids are known as "genetic codes" and are basically the programs that run the cells with which they are associated, and the BioBricks Foundation describes a BioBrick as follows: Each distinct BioBrick™ standard biological part is a nucleic acid-encoded molecular biological function (e.g., turn on/off gene expression), along with the associated information defining and describing the part. Sure sounds like software to me. In fact it's enough like software that the BioBricks guys figured out they needed a license, and developed a very OSI-like one called The BioBrick Public Agreement. I gave it a quick read and it reads very much like a software license, a fairly permissive one with no reciprocal clauses that I could see. (By the way, it would violate the OSI requirement of unrestricted use with a "do no harm" clause, but that seems like a good thing given the bioweapon potential.) The microbiology community sits where the software community was a few decades ago: a few big corporations with a big stake in keeping technology proprietary, and a grassroots movement out to open and shake things up a bit. My guess is that with the analogous trail having been blazed in the software world, things will unfold more quickly in biology. I'll be monitoring from the sidelines to see how it all plays out. 
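To push the software analogy one step further, here is a toy sketch of how a catalog of standardized parts might be modeled in code. The field names, example sequences and the naive composition rule are invented for illustration; they are not the BioBricks Foundation's actual registry schema or assembly standard.

```python
# Toy model of standardized biological "parts" -- invented fields and sequences,
# not the real BioBricks registry schema or assembly standard.
from dataclasses import dataclass

@dataclass
class Part:
    part_id: str    # catalog identifier (hypothetical)
    sequence: str   # nucleic-acid sequence, the "source code" of the part
    function: str   # human-readable description of what the part does
    license: str    # terms the part is shared under

    def compose(self, other: "Part") -> "Part":
        """Naive composition: concatenate sequences, much like chaining library calls."""
        return Part(
            part_id=f"{self.part_id}+{other.part_id}",
            sequence=self.sequence + other.sequence,
            function=f"{self.function}, then {other.function}",
            license="BioBrick Public Agreement",
        )

promoter = Part("P001", "TTGACA...", "turn on gene expression", "BioBrick Public Agreement")
reporter = Part("R001", "ATGCGT...", "produce a fluorescent protein", "BioBrick Public Agreement")
device = promoter.compose(reporter)
print(device.part_id, "->", device.function)
```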
Near the end of composing this blog, I stumbled across an outstanding book, Biobazaar: The Open Source Revolution and Biotechnology, by biologist/lawyer Janet Hope. The author delves deeply into what she calls the "irresistible analogy" between open source software and microbiology. Biology aside, it is well worth the read just for the concise history of open source she provides in Chapter One and the detailed treatment of open source licenses in Chapter Five.
<urn:uuid:1e2d5e59-713b-4fc1-b098-8a436f47e536>
CC-MAIN-2017-09
http://www.networkworld.com/article/2229234/opensource-subnet/great-news---now-you-can-download-your-very-own-e--coli-bacterium.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00295-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951375
796
2.96875
3
There is a much-derided and apocryphal story that former US vice president Al Gore once claimed to have invented the Internet. In fact, he simply said that he had promoted the development of the technology in Congress – a claim that has been corroborated by the Internet’s actual inventors, Vint Cerf and Bob Kahn. Future historians may look back at Gore’s former running mate Bill Clinton and credit him with a similar role in developing so-called smart city technology – the use of networks, sensors and analytics to make cities more efficient, productive and habitable. Back in 2005, through his philanthropic organisation the Clinton Foundation, the former US president challenged network equipment maker Cisco to use its technical know-how to make cities more sustainable. As a result, Cisco dedicated $25 million over five years to research the topic, spawning what it called the Connected Urban Development programme. This involved working with the cities of San Francisco, Amsterdam and Seoul on pilot projects to prove the technology’s potential. In 2010, when Cisco’s pledge to the Clinton Foundation expired, it launched its Smart and Connected Communities division in order to commercialise the products and services that it had developed during the programme. “By June 2010, we had decided to bring the lessons we learned during the Connected Urban Development experiment to other cities around the world,” explains Gordon Feller, Cisco’s director of urban innovations. Meanwhile, IT giant IBM was hatching similar plans. In 2008, it launched its Smarter Planet initiative, a broad programme to investigate the application of “instrumentation, interconnectedness and intelligence” (i.e. sensors, networks and analytics) to some of the world’s most pressing issues. The following year, IBM’s Smarter Cities programme focused on using that combination of technologies in an urban setting. A few years into their respective smart city strategies, IBM and Cisco each have a number of projects under their belts that demonstrate both the range of problems that can be addressed with the “three I’s” and the scale of the economic opportunity on offer. IBM’s pioneer project was in the Brazilian city of Rio de Janeiro, where it set up an experimental emergency response centre. This allows the authorities to view information collected by the various city services such as the police, traffic management and energy utilities, plus new data from a specially designed sensor network, and take rapid decisions as a result. “One issue they have in Rio is landslides, which are caused by sudden rainfall in a certain area,” explains Rashik Parmar, president of IBM’s Academy of Technology. “Using data from some of the new weather sensors we installed, they are now able to predict where there is a high risk of landslide 24 hours in advance. And because all the emergency services are represented in the response centre, they can plan their response much faster.” The Integrated Operations Centre software that IBM developed for the Rio project is now a commercially available product. Closer to home, another project saw IBM analyse fuel poverty in Glasgow. By analysing heat dissipation in and around housing estates, it found that exhaust heat from nearby industrial facilities could be redirected in order to warm people’s homes. IBM’s smart city strategy is underpinned by its recent focus on information management and analytics. 
Through acquisition and internal R&D, the company has armed itself with analytical algorithms and data processing technologies that are essential for making sense of seas of sensor data, Parmar says. Cisco’s smart city projects so far range from brownfield projects – such as a partnership with the Metropolitan Transportation Authority in New York to improve rail and station monitoring – to greenfield sites such as Songdo, an entirely new, sustainable city being built on reclaimed marshland in South Korea. According to Feller, much of the technological work involved in Cisco’s smart city projects is focused on older cities. Unlike the world of business communications where the Internet protocol has become standard, the telecommunications networks that underpin cities are typically based on incompatible legacy protocols. “Cities demand to use their existing infrastructure,” explains Feller. “They are not ready to tear out all of their conduits and pipes, so we have to be able to build networks that connect the existing infrastructure to newer technologies.” Whether it’s a matter of integrating networks owned by multiple authorities or persuading numerous stakeholders to share their data, smart city projects are wholly reliant on collaboration. This is true of IBM’s and Cisco’s commercial strategies. Both companies will typically partner with engineering firms, telecommunications operators or facilities management companies – and in some cases each other – to deliver a complete service. “We always take an ecosystem approach,” explains Cisco’s Feller. “In some cases, there might be just a few partners. In the case of Songdo, our partners include the local economic development agency, the national government, the local steel giant and an urban developer.” IBM’s Parmar says that engineering companies are increasingly aware of the benefits of including ‘smart’ technology in their urban development projects. “They are realising that there is competitive advantage in taking some of the Smarter City ideas and presenting them in their own way,” he says. “But when it comes to figuring out where the intersection of IT and engineering lies, they typically do not have the skills.” Managing the complexity of supplier-side partnerships is as nothing compared with the political machinations of city management, however. Both Parmar and Feller identify winning political backing as one of the critical challenges facing smart city projects. Feller says this can be especially difficult in the developed, democratic nations. “Compared with the cities in China – where you have strong mayors who can respond to an opportunity as it arises – in the US and Europe we have these fragmented, collaborative systems of power that are frankly more time-consuming. “However, we also find that this kind of political process often leads to a higher quality of commitment from all the parties, and therefore higher-quality projects,” he adds. Parmar remarks that what appears to be a centralised approach to city leadership in the Far East does not always work as effectively as one might expect. “Just because everyone in the meeting is nodding their head, it doesn’t necessarily mean they will do anything,” he says. Nevertheless, strong leadership is a must, he adds. “We’ve found that you need to have a city leader who has the personal ability to inspire, motivate and drive decision-making.” Another challenge, albeit one that both companies profess to have under control, is privacy. 
By wiring up the city with sensors and sophisticated analytics, technology companies are in danger of creating an unprecedented apparatus of surveillance, both legitimate and criminal. “Privacy issues are called out very early in our Smarter Cities engagements,” says Parmar. “The last thing we want after a huge project is for the newspapers to splash with ‘IBM shares personal data’. Reputation is key in this space, so we’re very cautious.” Cisco, meanwhile, points to its engagements with enterprise and defence clients as proof that it can handle sensitive data appropriately. “We have always been focused on finding ways to tap the power of the network without violating the anonymity of the end-user or mingling data streams that need to be kept separate,” says Feller. “It’s hard work, but it’s something we are absolutely dedicated to.” Neither Cisco nor IBM breaks out its smart city revenue or income figures, but whether or not their projects have turned a profit yet is immaterial – they are clearly playing a long game. Both companies are bidding to embed IT infrastructure in the environment in which 50% of the world’s population now lives. The technology foundations they lay today may well impact city living for decades to come.
<urn:uuid:5993c967-847c-4966-8bb2-b946dd7a24f4>
CC-MAIN-2017-09
http://www.information-age.com/ibm-cisco-and-the-business-of-smart-cities-2087993/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00647-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952712
1,686
2.515625
3
There are many ways to keep your computer secure. Your own behavior affects it a lot, and we at F-Secure are happy to help protect you with our products. But there are also many tools that can improve your security even if that wasn’t their initial purpose. Melissa and Sean described how you can use separate browsers to lower the risk of human error. Virtualization is another technology that can improve security as a side effect. It’s like the separate browsers idea, but takes it a lot further. Read on to learn more. Virtualization in computing means simulating something with software. What we are talking about here is creating a whole virtual computer inside a real computer. It’s complex under the hood, but there are luckily easy products that can be used by almost anyone. This technology is, by the way, used extensively in the software industry. Huge numbers of virtual computers can be used to process data or test software. A large portion of the Internet is also provided by virtual servers. But how can this improve my security? Most malware is made for profit, and interfering with your on-line banking is a common payload. But what if you run your on-line banking on a separate computer? Buying another machine costs money and consumes space, but that can be solved by using a virtual computer instead. That virtual machine would only be used for banking, nothing else. A malware infection could happen if your guard is down and you open a malicious file in the mail, or if you surf to a site which is infected with a drive-by download. Both cases could infect your real computer, but the malware can’t see what you are doing with the bank inside the virtual machine. One could also use the opposite strategy: use a virtual machine when doing something risky, like looking for downloads on shady servers. A previously made snapshot can easily be restored if something bad hits the virtual machine. An additional benefit is that this gives you an excellent opportunity to play around with different operating systems. Install Linux/Windows/OS X just to become familiar with them. Do you have some hardware whose driver won’t work in your new machine? No problem, install a virtual machine with an older operating system. OK, sounds like a good idea. But can I do it? Here’s what it takes. I’m not going to provide detailed instructions for this. That depends too much on which virtualization product and operating system you use. And besides, it would be like reinventing the wheel. You will find plenty of step-by-step instructions by Googling for what you want to do, for example “install Linux in VirtualBox”. But for your convenience, here’s an overview of the process, and a small scripted sketch follows at the end of this post. Edited to add: It is of course a good habit to exercise the same basic security measures inside virtual machines as in real computers. Turn on the operating system’s update function, install your anti-virus program and make sure your browser is kept up to date. Doing just banking with the virtual machine reduces the risk a lot, but this is good advice even in that case. And needless to say, the virtual machine’s armor is essential if you use it for high-risk tasks. Thanks Dima for providing feedback.
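As a concrete illustration of that overview, here is a minimal sketch that drives VirtualBox's VBoxManage command line from Python to create a dedicated "banking" virtual machine and take a snapshot of it. The VM name, memory size and OS type are arbitrary examples, disk and installer-ISO attachment are left out, and other virtualization products will have their own equivalent commands.

```python
# Minimal sketch: create and snapshot a dedicated "banking" VM via VirtualBox's
# VBoxManage CLI. Names and sizes are arbitrary; attaching a virtual disk and the
# installer ISO (and actually installing the OS) is omitted for brevity.
import subprocess

VM = "banking-vm"   # hypothetical VM name

def run(*args: str) -> None:
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("VBoxManage", "createvm", "--name", VM, "--ostype", "Ubuntu_64", "--register")
run("VBoxManage", "modifyvm", VM, "--memory", "2048", "--cpus", "2", "--nic1", "nat")
# ...attach a disk and the OS installer ISO here, then install the operating system...
run("VBoxManage", "snapshot", VM, "take", "clean-install")   # restore point to roll back to
run("VBoxManage", "startvm", VM, "--type", "gui")
```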
<urn:uuid:553f302b-46d7-4223-98b9-0a1540849166>
CC-MAIN-2017-09
https://safeandsavvy.f-secure.com/2013/12/13/virtual-computer-real-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00523-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938053
723
3.03125
3
Specialty fiber is constructed with a non-cylindrical core or cladding layer; examples include polarization maintaining fiber (PM fiber) and fiber designed to suppress whispering gallery mode propagation. The Special Points of PM Fiber PM fiber is a specialty optical fiber with strong built-in birefringence, preserving the properly oriented linear polarization of an input beam. Polarization maintaining fiber (PM fiber) is a special type of single mode fiber. Normal single mode fibers are capable of carrying randomly polarized light. However, PM fiber is designed to propagate only one polarization of the input light. PM fibers contain a feature not seen in other fiber types: besides the fiber core, there are stress elements in the fiber. These appear as two circles in PANDA PM fiber, an elliptical cladding in elliptical-clad PM fiber, and two bow-ties in bow-tie PM fiber. Polarization-maintaining PANDA fiber (left) and bow-tie fiber (right) Why Are PM Fibers Needed? Optical fibers always exhibit some degree of birefringence, even if they have a circularly symmetric design, because in practice there is always some amount of mechanical stress or other effect which breaks the symmetry. As a consequence, the polarization of light propagating in the fiber gradually changes in an uncontrolled (and wavelength-dependent) way, which also depends on any bending of the fiber and on its temperature. This problem can be fixed by using a polarization-maintaining fiber, which is not a fiber without birefringence, but on the contrary a specialty fiber with a strong built-in birefringence. Provided that the polarization of light launched into the fiber is aligned with one of the birefringent axes, this polarization state will be preserved even if the fiber is bent. The physical principle behind this can be understood in terms of coherent mode coupling. The propagation constants of the two polarization modes are different due to the strong birefringence, so that the relative phase of such copropagating modes rapidly drifts away. Therefore, any disturbance along the fiber can effectively couple both modes only if it has a significant spatial Fourier component with a wavenumber which matches the difference of the propagation constants of the two polarization modes. If this difference is large enough, the usual disturbances in the fiber are too slowly varying to cause effective mode coupling. A disadvantage of using PM fibers is that an exact alignment of the polarization direction is usually required, which makes production more cumbersome. Also, propagation losses are higher than for standard fiber, and not all kinds of fibers are easily obtained in polarization-preserving form. PM fibers should not be confused with single-polarization fibers, which can guide only light with a certain linear polarization. PM fibers are also rarely used for long-distance transmission, because of their high price and higher attenuation compared with standard single mode fiber cable. Polarization maintaining optical fiber (PMF) has become more and more widely applied in optical fiber communication and optical fiber sensing systems because of its strong ability to maintain the linear polarization of light and its good compatibility with ordinary single mode optical fiber. Much work has been done, and significant progress made, on the theoretical design, fabrication technology and parameter measurement of PMF.
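The strength of the built-in birefringence is often expressed as a polarization beat length, the distance over which the two polarization modes accumulate a full wavelength of relative phase slip. A short illustrative calculation, using typical textbook values rather than any particular fiber's datasheet:

```python
# Polarization beat length L_B = wavelength / B, where B is the modal birefringence
# (the difference in effective index between the two polarization modes).
# The B values are typical textbook figures, not a specific fiber's datasheet.

wavelength_m = 1550e-9   # common telecom wavelength

for name, B in [("ordinary single mode fiber", 1e-6), ("PANDA PM fiber", 5e-4)]:
    beat_length_m = wavelength_m / B
    print(f"{name}: B = {B:.0e} -> beat length ~ {beat_length_m * 1e3:.1f} mm")
# The millimetre-scale beat length of PM fiber is what makes ordinary bends and
# temperature gradients spatially too "slow" to couple the two polarization modes.
```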
A traditional method for checking water height after a flood involved manually marking water levels on a post from a rowboat. Samuel Cox had a better idea: Flood Beacon. He assembled a microprocessor, accelerometer, ultrasonic sensors, rechargeable battery, cellular GSM and GPS into a floating shell created on a 3D printer. The end result is a device that can measure water turbulence via the accelerometer, and water depth with the ultrasonic sensors. The data, which is sent to Xively, an IoT-specific cloud firm, can be viewed on a mobile app. It took Cox about six weeks and less than $700 to produce a working prototype, something that would have cost more than $10,000 before 3D printing and the advent of do-it-yourself hardware development.
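A minimal sketch of the beacon's reporting loop is shown below. The endpoint URL, payload field names and sensor-reading stubs are hypothetical placeholders for illustration; they are not Xively's actual API or Cox's firmware.

```python
# Hypothetical sketch of a flood-beacon reporting loop. The URL, field names and the
# sensor stubs are invented for illustration only.
import time
import requests

FEED_URL = "https://example-iot-cloud.invalid/feeds/flood-beacon-01"  # placeholder

def read_depth_cm() -> float:
    """Stand-in for an ultrasonic time-of-flight measurement."""
    return 142.5  # pretend reading

def read_turbulence_g() -> float:
    """Stand-in for RMS acceleration reported by the accelerometer."""
    return 0.08   # pretend reading

for _ in range(3):  # the real device would loop indefinitely
    payload = {
        "timestamp": time.time(),
        "water_depth_cm": read_depth_cm(),
        "turbulence_g": read_turbulence_g(),
    }
    try:
        requests.post(FEED_URL, json=payload, timeout=10)  # push over the cellular link
    except requests.RequestException:
        pass  # a real device would buffer the reading and retry later
    time.sleep(300)  # report every few minutes
```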
Sixty years after bulky "rabbit ears" TV antennas, engineers are designing tiny custom ones that can fit inside wearable devices. Some of the wearables will be implanted inside the human body for medical purposes, posing challenges for antennas that carry a Bluetooth wireless signal through skin, muscle and bone to reach out to a smartphone or other device. Other antennas will run in smartwatches or even through entire sets of clothing, allowing them to stretch over a greater distance.

"When wearables started appearing three years ago, we started exploring how we can cram the radios inside to make them user friendly," said Arun Venkat, principal antenna engineer at Cambridge Consultants in Boston. The company, based in Cambridge, England, provides outsourced research and development. Venkat holds a doctorate in electrical engineering and has helped design 18 custom antennas for wearables over the past eight years. The list of products includes some wearable tags to help the U.S. Army track its soldiers.

"The human body affects radios, which is key to making a successful device," he said in an interview. "It's important to know what the body does to the radio signal and how much range will result to have a good device."

In that sense, Venkat is as much a biomedical engineer as an antenna engineer. If a person wears a smartwatch on his left wrist that transmits heart rate data to a smartphone attached to his right arm, designers have to take the size of the user into account, along with other factors, such as how perspiration will interfere with a radio signal.

In a recent demonstration, Cambridge showed how it will be possible to implant a device in a person's back to stimulate nerves for back pain at a greater depth than before. When a device is implanted at a depth greater than 4 centimeters, the Bluetooth signal traveling over a 2.4 GHz pathway can fall off, making a connection to a smartphone difficult. The demonstration by Cambridge showed that a custom-designed antenna could allow the signal to travel up to 2 meters when the device is implanted 6 centimeters under the skin. Bluetooth Low Energy signals normally travel 10 to 15 meters in an open space.

"Skin has electrical properties, both electric and magnetic, and it's important to balance the two to get a good signal out of the body," Venkat said. "It matters if the skin is wet or dry."

Through 5 mm of skin, a Bluetooth signal will lose 3 decibels on average. That compares to the loss of 6 decibels for a Bluetooth signal going through a standard concrete wall with no rebar inside, Venkat said, reciting such data as easily as his ABCs. With the human body, there are different radio signal losses for bone, fat and muscle that have to be taken into account in antenna design.

"It's a big mishmash of signal loss in a body," Venkat said. "If you implant something in a muscle layer, the signal reflects off the fat layer before it gets out and you lose a lot of that as heat. It's such a difficult process with so many different factors."

Venkat said he's absolutely confident that wearable devices aren't radiation hazards. "Bluetooth Low Energy is so weak that it's not going to impact health," he said. Wearables, like smartphones, have to meet Specific Absorption Rate standards for the amount of radio frequency energy absorbed, as set by the Federal Communications Commission.
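The loss figures Venkat quotes lend themselves to a rough link-budget estimate. The sketch below combines the article's roughly 3 dB per 5 mm of skin with textbook free-space path loss at 2.4 GHz; the transmit power, the receiver sensitivity and the linear scaling of tissue loss with depth are simplifying assumptions made for this illustration, not numbers from Cambridge Consultants.

```python
# Rough, illustrative link budget. Tx power, Rx sensitivity and linear tissue-loss
# scaling are assumptions; the 3 dB per 5 mm skin figure comes from the article.
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

TX_POWER_DBM = 0                 # assumed BLE transmit power
RX_SENSITIVITY_DBM = -90         # assumed receiver sensitivity
SKIN_LOSS_DB_PER_MM = 3 / 5.0    # ~3 dB per 5 mm of skin, per the article

def link_margin_db(depth_mm: float, range_m: float, freq_hz: float = 2.4e9) -> float:
    tissue_loss = SKIN_LOSS_DB_PER_MM * depth_mm   # crude linear extrapolation
    path_loss = free_space_path_loss_db(range_m, freq_hz)
    return TX_POWER_DBM - tissue_loss - path_loss - RX_SENSITIVITY_DBM

# A 6 cm deep implant talking to a phone 2 m away still shows a small positive margin,
# roughly consistent with the demonstration described above.
print(f"{link_margin_db(depth_mm=60, range_m=2.0):.1f} dB")
```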
Venkat and Cambridge work with clients, some of them small startups, interested in designing wearable computers amid pressure to lower manufacturing costs and the final cost to consumers. With many smartwatches now selling for $150 to $200, there's plenty of pressure to cut costs to reach more buyers.

Many times, to keep costs down, wearables are designed with low-cost off-the-shelf antennas without the flexibility or efficiency of a custom antenna that takes advantage of the materials and design of the wearable, he said. Custom antennas potentially raise the materials price in a wearable by 10% to 15%, he estimated. Today, a small off-the-shelf antenna made of copper or ceramic could cost from 30 cents to 60 cents when purchased in bulk quantities of 10,000 or more units. Venkat estimated that 90% of wearables on the market today rely on off-the-shelf, or reference-design, antennas, mostly made by smaller companies. Companies like Samsung, which already produce smartwatches and smartbands, have the capacity to design custom antennas to fit the wearables being sold.

Some wearable antennas will be printed from copper on the inside of a wearable case to make the antenna larger and more efficient for wireless transmission, he predicted. For instance, a wearable shirt computer could have an antenna running the length of the shirt.

With better antennas, other benefits can result. A fitness buff wearing a smartwatch might not need to carry the smartphone while working out nearby, taking advantage of the maximum range of Bluetooth. And runners or bikers might even be able to communicate with Bluetooth Low Energy beacons arrayed along a path. "The antenna will start to dominate how well that sensor information gets out," he said. "If you make more efficient use of the space inside a wearable, you can maximize performance."

Matt Hamblen covers mobile and wireless, smartphones and other handhelds, and wireless networking for Computerworld. This story, "Inside the world of designer antennas for wearables," was originally published by Computerworld.
While most people are busy singing the praises of HTML5 and all the nifty things the markup language can do, there are some who are warning about the dangers of using the technology. While only a tiny fraction of mobile apps are currently built using HTML5, those applications are said to pose a threat to both iOS and Android devices. By 2016, industry analysts believe that about half of all mobile applications will be built using HTML5. Those same industry analysts say that means quite a few mobile phones are going to be at risk from a new Cross-Device Scripting (XDS) attack that is coming fast and hard.

Researchers from Syracuse University -- Xing Jin, Tongbo Luo, Derek G. Tsui and Wenliang Du -- have said that anyone running HTML5-based applications is going to be at risk of malicious code injection. The researchers say attackers can inject malicious code through a number of different channels, including Wi-Fi scanning and SMS messaging. Even scanning 2D barcodes, Bluetooth pairing and playing MP3 and MP4 files can put the malicious code onto phones through the HTML5 apps.

The problem for those using HTML5-based apps is that not only can their devices be compromised, but attacked phones can also transmit the malicious code to others through SMS messages to their contacts. The contacts don't need to have the same kind of phone as the people who are carriers. It turns out that the benefit of HTML5 is also the danger: because the code can run on any device, any device can catch certain viruses that normally wouldn't be transmitted from an iPhone to a Galaxy S.

Du said his research team has come across one HTML5 app that has been downloaded by more than 1 million people. He said he couldn't say which one it was, but the developer has been notified and is looking for a fix. While that developer doesn't seem to want to have any exploits out there, other developers might actually be looking to take advantage of people through their applications, and that's a problem.

Edited by Cassandra Tucker
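The attack channels listed above all come down to untrusted strings (an SSID, an SMS body, a scanned barcode, a media tag) reaching the HTML5 layer as markup. Below is a minimal, hypothetical Python sketch of the standard mitigation -- escaping external input before it is rendered. The example strings are invented, and this is not the specific exploit the Syracuse researchers describe.

```python
# Minimal sketch of the mitigation: treat every externally supplied string as data and
# escape it before it is handed to the HTML5/WebView layer. Example strings are invented.
from html import escape

def render_wifi_list(ssids: list[str]) -> str:
    """Build an HTML fragment for display inside an HTML5 app without executing payloads."""
    items = "".join(f"<li>{escape(ssid)}</li>" for ssid in ssids)
    return f"<ul>{items}</ul>"

scanned = ["HomeNetwork", "<script>alert('owned')</script>"]  # hostile SSID example
print(render_wifi_list(scanned))
# The script tag is rendered as inert text instead of being executed by the WebView.
```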
Internet Explorer is a browser developed by Microsoft. Windows Internet Explorer (formerly Microsoft Internet Explorer) is based on the Trident rendering engine. Internet Explorer Collection contains multiple IE versions, which are standalone so they can be used at the same time. This is useful for web developers.

Conditional Comments are important for web developers because they use them to target specific versions of Internet Explorer. In the standalone versions, Conditional Comments work exactly the same as in the native versions. The original version number is shown correctly in the User Agent string.

- Updated Internet Explorer Developer Toolbar from 1.00.2188.0 to 1.00.2189.0
- Added the Firebug Web Development Extension for Internet Explorer
- Improved compatibility with Windows 7
- Improved the installer
- Improved the uninstaller
- Improved default settings
- Minor improvements

Reviewing 188.8.131.52 (May 6, 2014)
Strongly advise against using this browser since the latest version found a problem with the protection of personal data

Reviewing 184.108.40.206 (Oct 7, 2011)
I already have multiple IE installed for versions up to IE6. I could not get IE 7 standalone to work because of a bug with the form input fields, so I downloaded this one just for IE7. I didn't install any of the other IEs, but have no doubt that they would work as well. Make sure you guys check the compatibility chart to see if your version of Windows is compatible with the browser you want to test.

Reviewing 220.127.116.11 (Sep 15, 2011)
Contains a trojan.

Reviewing 18.104.22.168 (Sep 7, 2011)
Installed on Windows 7, and my experience is:
- It doesn't install IE9 (?).
- The machine already had IE8 installed, so it didn't install that.
- Can't get IE7 on Win7, yeah, ok, I get that.
- So it installed two versions of IE6, neither of which work, they just crash on launch. I tried running them as Administrator, no luck.
So I started out with a machine with IE8. And now I have a machine that still has IE8, but also two broken installs of IE6. tldr; Don't bother.

Reviewing 22.214.171.124 (Aug 26, 2011)
Nice idea but in my opinion create a pack with only the last 3 IEs. That's all any user would really care about. I have already posted a review. I have used this tool in the past on Win XP and it is very useful. But now I have Windows 7, and when I try to install this tool, it is not possible to select IE 7 and IE 8, and so not possible to install these two versions. These are the most needed ones for development and still important. Why can I not install these two versions on Win 7 64-bit? I can install IE 1.5, 4, 5 and 5.5, but I'm less interested in these versions.
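Because each standalone version reports its original version number in the User Agent string, server-side code can still tell the releases apart. The sketch below is a hypothetical illustration of that idea; the sample User-Agent strings are typical examples, not ones captured from this package.

```python
# Hypothetical sketch: extracting the advertised IE major version from a User-Agent
# string. Sample strings are typical examples only.
import re

SAMPLE_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1)",
    "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",  # IE 11 drops "MSIE"
]

def ie_version(user_agent: str):
    """Return the IE major version advertised in a User-Agent string, or None."""
    match = re.search(r"MSIE (\d+)", user_agent)
    if match:
        return int(match.group(1))
    match = re.search(r"Trident/.*rv:(\d+)", user_agent)  # IE 11 style token
    return int(match.group(1)) if match else None

for ua in SAMPLE_AGENTS:
    print(ie_version(ua), "->", ua)
```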
Here's how you can rearrange tiles in Windows 8.
1. To move a tile, drag it up or down.
2. Drag the tile anywhere you want.
3. You can arrange your tiles in any way that you like.
4. You can group tiles however you like as well.
For more, see the original article at the link below.
Windows 8 Tips | Microsoft
Population Health Management: All You Need to Know

What is Population Health Management?

PHM can be broadly defined as "operational processes designed to foster health and quality improvements while managing costs (McAlearney, 2003)." Population health management is thus the aggregation of patient data across multiple health information technology resources, the analysis of that data into a single, actionable patient record, and the actions through which care providers can improve both clinical and financial outcomes. The idea is to produce a healthcare service that does not start and finish at the hospital door, but instead intertwines all aspects of community and primary care. Some key drivers in this definition include the following:
- Lifestyle Management
- Demand Management
- Disease Management
- Care Coordination

KLAS describes four pillars of population management:
- Data Aggregation / Collection: Collection of data from various sources to bridge the gap between providers
- Risk Stratification / Analysis: Segmenting patients to prioritize interventions and manage high utilizers
- Care Coordination / Reporting: Efficient use of resources for better clinical outcomes
- Outreach / Communication: Patient engagement and education for those who are at high risk

Why It Matters

Population health management has many advantages:
- Better health outcomes: The ultimate goal is to improve the quality of care while managing costs
- Preventing diseases: IT solutions that track and monitor patient health to manage care
- Closing care gaps: When physicians have real-time access to patient data, they can address patient needs in a timely manner. When all of the systems, such as laboratory, EHR and billing systems, are integrated, providers can address gaps in service quality
- Cost savings for providers: PHM helps reduce costs by improving clinical outcomes

Conclusion: Automation is crucial to ensuring that every patient receives appropriate preventive, chronic and transitional care. Automation can also help organizations perform PHM efficiently so that they can make the transition from fee-for-service to accountable care while enhancing financial and organizational sustainability. EHRs and automation tools should be used to support these essential PHM functions:
- Population identification
- Identification of care gaps
- Patient engagement
- Care management
- Outcomes measurement
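As a concrete illustration of the risk stratification pillar described above, the sketch below segments a small panel of patients by a precomputed risk score and builds an outreach work list. The records, field names and thresholds are hypothetical and are not drawn from any particular EHR or vendor.

```python
# Illustrative sketch of risk stratification: tier patients by a precomputed risk score
# and surface those needing outreach. Data, fields and thresholds are invented.

patients = [
    {"id": "P001", "risk_score": 0.82, "open_care_gaps": ["HbA1c test overdue"]},
    {"id": "P002", "risk_score": 0.35, "open_care_gaps": []},
    {"id": "P003", "risk_score": 0.58, "open_care_gaps": ["BP follow-up"]},
]

def stratify(patient: dict) -> str:
    score = patient["risk_score"]
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

# Work the list from highest risk downward; include anyone with an open care gap.
for p in sorted(patients, key=lambda p: p["risk_score"], reverse=True):
    tier = stratify(p)
    if tier == "high" or p["open_care_gaps"]:
        print(f'{p["id"]}: tier={tier}, gaps={p["open_care_gaps"]}')
```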
The recent wave of high-profile data breaches and Internet of Things attacks has put organizations -- especially financial institutions -- under extraordinary pressure to ensure that their systems are secure and their data is protected. For banks and other financial institutions, this means conducting regular security assessments of their systems in order to check for vulnerabilities, and to comply with the Payment Card Industry Data Security Standard (PCI DSS). A part of these assessments may include a network penetration test (pentest).

Two types of pentests are black box and white box testing. No prior knowledge of the corporate system is given to the third-party tester in a black box scenario. The black box method is an accurate simulation of how a hacker would typically see a network and attempt to break into it, so it tends to be the preferred method of testing. In a white box test, the tester is given information about the network, systems on it, and source code files in order to identify weaknesses from any of the available information.

For banks and financial institutions, a pentest is useful for gaining security assurance of critical web-facing systems. Additionally, it is also one technique used within an assessment for meeting PCI DSS compliance when operating an online payments system. Regardless of why it's conducted, a pentest requires specific certifications and documentation. The PCI Security Standards Council (SSC) manages programs that facilitate the assessment of compliance with the PCI DSS. It has created certifications for:
- Qualified Security Assessors (QSA): companies or individuals that are qualified by the PCI SSC to perform PCI DSS assessments
- Approved Scanning Vendors (ASV): vendors qualified to scan PCI ecosystem networks for vulnerabilities. They may use QSAs or approved appliances to perform the scanning
- Internal Security Assessors (ISA): a certification for larger members of the PCI ecosystem wishing to perform their own internal audits

Pentests should not be taken lightly. If done improperly, they can have serious negative impacts on a network. A poor pentest can cause systems to crash or, worse, can result in compromise of the systems by unauthorized attackers. Only highly qualified and trusted individuals should perform pentests. This is why the PCI SSC requires strict training and standardization for pentesting as part of a security assessment.

There are three phases of an assessment: assessment, remediation and reporting. Assessment involves identifying cardholder data, taking an inventory of IT assets, reviewing the business processes involved in payment card processing, and then analyzing them for vulnerabilities that could expose cardholder data. Pentesting is done in this phase. Remediation involves mitigating the vulnerabilities identified in the assessment phase. Reporting involves compiling and submitting required remediation validation records and compliance reports. Reporting is also the official mechanism by which banks and financial institutions verify compliance with the DSS to their respective card brand. A minimal report on compliance, or ROC, should include the following: an executive summary, a description of the scope of work and approach taken, and details of the reviewed environment.
This could include network diagrams, a description of the bank environment, the network segmentation used, the entities requiring compliance with PCI DSS, the version of PCI DSS used to conduct the assessment, contact information, the report date, quarterly scan results performed by an ASV, and findings and observations.

Because a pentest is an attempt to crack or expose the security of a system, it is not considered a full security audit. After conducting a pentest, it's important to remember that the results are temporary and represent just one view of a system's security at a single moment in time. System vulnerabilities and weaknesses can change and/or newly appear shortly after a pentest is conducted. Cyber crooks and the techniques they employ are constantly changing, costing banks and businesses billions of dollars each year.

Despite its drawbacks and limited applications, pentesting is a useful exercise for ensuring better bank security. If done safely and properly, it's a small price to pay for banks and financial institutions to keep their web-facing systems properly secured. What's more, compliance through pentesting further provides bank customers with peace of mind and continued faith in the PCI ecosystem by helping to keep their financial data protected. When was the last time you tested your network security?

Timber Wolfe is a Principal Security Engineer (BSCE, ECSA, LPT, CEH, CHFI) at TrainACE, a hacking and cyber security training and content organization that employs security trainers and researchers to develop bleeding edge training classes and helpful security resources.
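As a small, hypothetical illustration of the kind of reconnaissance that happens early in the assessment phase described above, the sketch below performs a plain TCP connect check against a few common ports. The hostname and port list are placeholders, and in practice such checks are run only against systems that fall inside the engagement's written, PCI-scoped authorization.

```python
# Hypothetical reconnaissance fragment: a plain TCP connect check. The target hostname
# and ports are placeholders; run only against systems you are authorized to test.
import socket

TARGET = "scope-defined-host.example"      # placeholder; set by the engagement scope
PORTS = [22, 80, 443, 1433, 3306]          # a few common service ports

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if is_open(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port} {state}")
```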
The U.S. military is working to create a chip that can be implanted in a soldier's brain to connect it directly to computers that can deliver data on an enemy's position, maps and battle instructions. The implanted chip would essentially create soldier cyborgs that would be safer and better fighters.

"Today's best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem," said Phillip Alvelda, manager for DARPA's Neural Engineering System Design program. "Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics."

DARPA announced this week that it formed the new program to develop a neural interface -- a system working at the intersection of a biological nervous system and a device -- that would create "unprecedented signal resolution and data-transfer bandwidth" between the human brain and the digital world. Think of the neural interface as a translator that would turn digital signals into electrochemical language that the brain can read, and vice versa.

DARPA, the Defense Advanced Research Projects Agency, which focuses on emerging technology for the military, wants the neural interface to be no larger than one cubic centimeter, about the volume of two nickels stacked back to back.

There are already some neural interfaces approved for use with humans, but they've proven to be imprecise and provide a user with a cacophony of information, more like noise than helpful data. DARPA said it wants to improve the technology so that the system can communicate clearly and individually with any of up to one million neurons in a specific region of the brain.

For years now, industry players have said that mixing biology and machines is going to create the most powerful technology in the future. More than six years ago, an Intel researcher said that computer chips implanted in people's brains will control computers -- eliminating the need for a keyboard and mouse -- by 2020. And nearly four years ago, researchers at Northwestern University reported that they enabled a paralyzed hand to move by using a device to deliver a message from the brain directly to the necessary muscles. Just last September, a professor of electrical engineering and computer science at the University of California at Berkeley told Computerworld that in just 10 years we may have computer sensors in the walls of our house, in our furniture -- and in our brains.

The military's new research project will involve scientists from various fields, including neuroscience, synthetic biology, low-power electronics, photonics, and medical device packaging and manufacturing. The scientists will need to work on advanced mathematical and neuro-computation techniques to translate between biological-electrochemical language and digital ones and zeros. The military is hoping industry players will become partners in the research, offering prototyping and manufacturing services and intellectual property. To pull researchers and partners together, DARPA is hosting a two-day meeting in Arlington, Va., on Feb. 2-3.

This story, "U.S. military wants to create cyborg soldiers" was originally published by Computerworld.
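The "translator" role described above is, at its lowest level, a signal-processing problem: turning noisy analog voltages into discrete digital events and back. The sketch below shows the forward direction on a purely synthetic trace with a simple amplitude threshold; the sampling rate, noise level and threshold are invented for illustration and have nothing to do with DARPA's actual design.

```python
# Purely illustrative: detect "spikes" in a synthetic noisy trace by amplitude
# thresholding. All parameters are invented for this example.
import numpy as np

rng = np.random.default_rng(0)
fs = 30_000                                   # assumed sampling rate, samples/s
t = np.arange(0, 0.1, 1 / fs)                 # 100 ms of data
trace = rng.normal(0, 5, t.size)              # background noise (arbitrary units)
trace[[500, 1200, 2500]] += 60                # three synthetic "spikes"

threshold = 5 * trace.std()                   # simple amplitude threshold
spike_samples = np.flatnonzero(trace > threshold)
spike_times_ms = spike_samples / fs * 1e3

print(f"Detected {spike_samples.size} events at {np.round(spike_times_ms, 2)} ms")
```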
Back before Apple Inc. made computers that fit in your pocket, it made computers that fit on your desk. Some were big-box machines, others were not-so-portable portables and still others were -- literally -- cube-shaped. But the first Macintosh, the one that started Apple's rise to iconic status, is to the computer industry what the wheel was to cave men. It was launched during the Super Bowl on Jan. 22, 1984 -- in a minute-long commercial directed by Ridley Scott that became a classic of its own -- and went on sale two days later. It was the first of a string of Apple computers that would captivate users for the next quarter of a century.

Much has changed in technology over the course of the past 25 years, with Apple often at the center of the advances we now take for granted. To celebrate the Mac's 25th anniversary, I looked back over the years and picked 10 Apple computers that altered the company's course and changed the way the world works and communicates. My first pick, naturally, is the first Mac.

The Macintosh (1984)

The original Mac, with its compact all-in-one design, innovative mouse and user-friendly graphical user interface (GUI), changed the computer industry. Like the wheel, the Mac just made things convenient for the rest of us. Most computers in the early 1980s were controlled exclusively through text commands, limiting their audience to true geeks. True, Apple had released a GUI with the introduction of the $9,995 Lisa in 1983, but the Mac, priced at $2,495, was the first computer to capture the attention of everyday people, who could now use a computer without learning an entirely cryptic command-line language.

The mouse, coupled with a user interface that closely followed the physical "desktop" metaphor, allowed users to tackle tasks unheard of for rival computers using its two included applications: MacWrite and MacPaint. Thus was born desktop publishing. Coupled with the PostScript software licensed from Adobe Systems Inc., Apple was able to also sell the Apple LaserWriter, which helped bring about WYSIWYG design, allowing artists to output precisely what was on the Mac's 9-in. black-and-white screen.

In case you forgot, the first Mac came with 128KB of RAM and zipped along with an 8-MHz processor. Reviewers were not always friendly, but the stories of those who helped bring it to life, collected at Folklore.org, offer a fascinating look at the first computer to capture mainstream attention.

The PowerBook 100 series (1991)

On Oct. 21, 1991, Apple unveiled its new portable lineup, which included the PowerBook 100, 140 and 170. These "good, better and best" models, the culmination of a joint venture between Apple and Sony Corp., featured a 10-in. monochrome screen and yielded a design that became the blueprint for all subsequent laptop designs from all computer manufacturers.

Apple's earlier attempt at a portable Macintosh -- aptly named the Macintosh Portable -- weighed in at a not-so-portable 16 lb. But the Macintosh Portable did introduce the trackball to mobile computing, in this case located to the right of the keyboard. The PowerBook line placed the keyboard back toward the LCD screen, allowing room for users to rest their palms. It also conveniently allowed Apple to locate the trackball at the center of the palm rest. That made it easy for either left- or right-handed users to operate the machine.
The PowerBook series also introduced Target Disk Mode, which allowed the laptop to be used as a hard drive when connected to another Macintosh using the built-in SCSI port. It also came in a fashionable dark gray, breaking from the standard beige of the PC industry. The PowerBook 100 series brought in $1 billion in revenue for Apple in its first year, and its impact is still felt to this day. If you're using a laptop with a trackball or track pad between your palms, you can thank the PowerBook 100 design. (If you've got a track pad, you can thank the PowerBook 500. In 1991, that particular model was still three years away.)
In the aftermath of the 2008 financial crisis, regulators across the globe began assessing the root causes and seeking regulatory prophylaxis to mitigate the risk of recurrence. Much attention was ultimately focused on the regulation of over-the-counter derivatives, particularly swaps. In this context, swaps are financial contracts that typically involve the exchange of one stream of cash flows for a different stream of cash flows. Nexidia's phonetically-based technology helps reduce inaccuracies caused by factors such as background noise and foreign languages. Also, because phonetic searches are not tied to dictionary recognized text, it eases the processing and searching of industry jargon, proper nouns and other non-standard language.
Stuxnet, Duqu Date Back To 2007, Researcher Says

Two pieces of malware likely were developed by the same team on the same platform, along with similar variants, according to Kaspersky Lab.

The origins of the dangerous Stuxnet computer virus that targeted Iran's nuclear power program last year could date back as far as 2007, according to new research. Stuxnet and the related Duqu virus discovered earlier this year share a similar architecture and may have been developed by the same team of developers--along with other pieces of malware--several years ago, according to a security researcher at Kaspersky Lab.

Researchers have dubbed the platform "Tilded" because its authors tend to use file names which start with "~d," said Alexander Gostav, head of Kaspersky's Global Research and Analysis Team, in a blog post.

"There were a number of projects involving programs based on the 'Tilded' platform throughout the period 2007-2011," Gostav said. "Stuxnet and Duqu are two of them--there could have been others, which for now remain unknown."

Researchers discovered the connections between the pieces of malware and their origins by examining their drivers, he said. Gostav warned that the Tilded platform is continuing to develop, and more modifications of the viruses are likely to be a threat in the future.

Stuxnet was first discovered in June 2010 when it attacked software and equipment used by various organizations facilitating and overseeing Iran's nuclear program. The virus was especially worrisome for researchers because of its unprecedented complexity; it contains more than 4,000 functions, which is comparable to the code in some commercial software.

Researchers at the Budapest University of Technology and Economics' CrySyS lab discovered Duqu this past September, saying the malware appears to have been designed to steal industrial control design documents. After examining Duqu, researchers at Symantec said it was nearly identical to Stuxnet. Both viruses attack Microsoft Windows systems using zero-day vulnerabilities -- application flaws that have not yet been publicly disclosed or patched.

Superworms like Stuxnet and Duqu--which seem to have been created to target the critical infrastructure and control systems of particular countries--are of great concern for federal cybersecurity officials who are working to prevent such dangerous threats to the U.S. power grid and other essential facilities.
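Purely as an illustration of the "~d" naming detail mentioned above, the sketch below flags files whose names start with that prefix for closer inspection. This is a naive triage heuristic invented for this example; real identification of Tilded-family malware rests on driver analysis and signatures, and the directory path here is only a placeholder.

```python
# Naive, illustrative triage only: flag files whose names begin with "~d" for a closer
# look. Not a real detection method; the path is a placeholder.
from pathlib import Path

SUSPECT_DIR = Path(r"C:\Windows\System32")   # placeholder location to triage

def tilde_d_candidates(root: Path):
    """Yield files whose names start with '~d' (case-insensitive)."""
    if not root.is_dir():
        return
    for path in root.iterdir():
        if path.is_file() and path.name.lower().startswith("~d"):
            yield path

for candidate in tilde_d_candidates(SUSPECT_DIR):
    print(candidate)   # hand these to a proper malware analyst or AV engine
```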
The Napa Valley Unified School District has been using a hybrid diesel-electric school bus for nearly a year and has seen significant benefits. With the diesel-electric bus, the school district has been able to reduce its greenhouse gas emissions and double the gas mileage it gets compared to its diesel-only buses. As a result, the school district saves about $5,000 in fuel costs for the hybrid bus.

While most diesel-only powered school buses achieve an average of six to seven miles per gallon, Ralph Knight, transportation director at Napa Valley School District, was surprised to learn just how much fuel the hybrid diesel-electric school bus could save. "Fuel costs are a major concern to me," said Knight. "Cutting annual fuel costs in half for this bus is a major advantage -- both for taxpayers' wallets and for the environment."

The fuel efficiency of the hybrid bus was close to 13 miles per gallon -- nearly double the fuel efficiency of a typical diesel school bus. Based on the 13,000 miles the hybrid bus traveled during the 2007-08 school year, annual fuel costs for a standard school bus would be just under $10,000 at $4.87 per gallon. Conversely, fuel for the hybrid bus costs approximately $5,000 at the same price per gallon. Traveling about 65 miles per day, the hybrid bus typically transports roughly 60 children each morning and 60 each afternoon through a mixed route of highway and city driving.

Even the community has started to recognize the impact the bus could have on the environment and is excited about it. "The children are excited to be riding one of the first hybrid school buses in the nation," said Knight. "The parents have also commented on the positive environmental benefits of the bus."

Drivers also enjoy driving the bus. To the driver, it operates much like a standard school bus. However, the diesel engine receives assistance from an electric motor at certain points during acceleration and deceleration. The hybrid drive system on Napa Valley's bus is recharged by plugging it into a standard outlet at night or between morning and afternoon routes.

The word in the industry has gotten out. Knight says he has fielded calls from school districts all over the country asking him about the performance of this new bus. "I've told them the truth," said Knight. "I'm very pleased with the hybrid school bus."

One of the other advantages of the bus hasn't really been "seen." The exhaust of the hybrid school bus is smokeless, with dramatically reduced emissions compared to older buses operating in California. In fact, emissions of particulate matter have been reduced by up to 90 percent.

"There's a host of new technologies incorporated into the hybrid school bus that provide the improvement in fuel economy and reduction in emissions," said David Hillman, marketing director at IC Bus. "With a year of customer experience in Napa, and the additional experience gained from hybrid buses at customers throughout the U.S. and Canada, we have shown that hybrid technology is a viable solution for bus operators in North America. The volume provided by our current customer base has allowed us to reduce our prices by $30,000 to $40,000. We encourage further efforts to provide federal and state funding, such as the California Proposition 1B funds, to help offset purchase prices and provide the opportunity for more school districts and bus operators to implement this environmentally vital technology."
In the case of Napa's hybrid unit, PG&E provided $30,000 to help with the purchase of the plug-in hybrid school bus. An additional $30,000 to fund the bus was provided by the U.S. EPA through the Clean School Bus USA program and the West Coast Collaborative, a public-private partnership to reduce diesel emissions. Schools in California can also put funds allocated by Proposition 1B toward the purchase of a hybrid school bus. Proposition 1B funding to support hybrid purchases, distributed to districts through the California Air Resources Board, can be up to $40,000 per bus.
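The fuel-savings arithmetic quoted in this story works out as follows; the sketch uses the rounded figures reported above (roughly 6.5 mpg for a diesel-only bus, about 13 mpg for the hybrid, 13,000 miles per year and $4.87 per gallon).

```python
# Worked example of the annual fuel-cost comparison using the story's rounded figures.
ANNUAL_MILES = 13_000
PRICE_PER_GALLON = 4.87

def annual_fuel_cost(mpg: float) -> float:
    return ANNUAL_MILES / mpg * PRICE_PER_GALLON

diesel_cost = annual_fuel_cost(6.5)    # ~ $9,740
hybrid_cost = annual_fuel_cost(13.0)   # ~ $4,870

print(f"Diesel-only: ${diesel_cost:,.0f}")
print(f"Hybrid:      ${hybrid_cost:,.0f}")
print(f"Savings:     ${diesel_cost - hybrid_cost:,.0f}")
```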
Preservation Road: Visual and audio records say more than text ever could

The hundreds of tourists who line up outside the National Archives to see the original U.S. Constitution and Declaration of Independence are a daily testament to how much Americans cherish their official history. What they see inside also shows the ingenuity of modern preservation science and the lengths to which the U.S. government goes to protect its founding documents in perpetuity.

The declaration, as signed by Thomas Jefferson, Ben Franklin, John Hancock and others, is sealed in a bronze, bulletproof glass case, protected with humidified helium to block oxygen and other irritants, plus a filter to screen out harmful light. It's then lowered 22 feet into a vault every night. It's also watched by a $3 million camera and a computerized system designed by NASA's Jet Propulsion Laboratory that can detect deterioration invisible to the human eye, such as minute changes in readability because of ink flaking or fading.

Those charters of freedom might receive tourists' devotion — in addition to a lot of technological attention — but they don't hog all of the federal government's preservation efforts. The National Archives and Records Administration also is home to billions of pages of material that is considered important enough to be kept as permanent records of the nation's history. NARA gets those records from all federal agencies under individually agreed-upon terms and timetables. It then puts them through a variety of processes to preserve them and ultimately make them accessible to the public now and in the future.

Increasingly, those vital records are not just texts of agency memos, national economic reports, census documents or e-mail messages. They are pictures, sounds and moving images stored in various formats on audio and visual files. They capture everything from NASA astronauts' first walk on the moon to documentary images of the endangered American bald eagle, videotape of President Bill Clinton's Senate impeachment trial, audio files of oral arguments before the Supreme Court, moving images captured by Landsat and other satellites orbiting the earth, and — someday soon — full-motion video of battlefield conditions in Afghanistan.

NARA's audiovisual holdings include 360,000 reels of film, 225,000 sound recordings and more than 110,000 videotapes. And millions more are on the way as the digital revolution takes hold throughout the federal government. Those recorded sources, be they analog or digital, will tell the story of U.S. policy-making and information gathering to the ages in ways that plain text never could. And their preservation needs also present challenges.

"As wide of a technical variety as exists today — from professional broadcast media to low-resolution capture devices such as iPhones — you're likely to find it somewhere in the federal government," said Leslie Waffen, chief of NARA's Motion Picture, Sound and Video Branch.

Although it's becoming easier for agencies to produce audiovisual content, it's becoming more difficult to establish policies, procedures and technical guidelines for how they preserve the content — partially because the technology used to produce audiovisual content is constantly evolving.
Moreover, the preservation of nontextual records requires extra care, state-of-the-art equipment and special storage conditions to ensure that future generations will be able to replay them and experience them the way their original viewers did. Even then, the ever-accelerating digital revolution has promulgated many standards and formats for different types of records, making it exceedingly difficult to settle on standards that will stand the test of time.

"If you give me a handful of digital files now, I can't guarantee that they're going to be around tomorrow, let alone a hundred years from now," said Richard Green of the International Association of Sound and Audiovisual Archives.

And here's the rub: There's no enforcer in a position to impose such standards should they be resolved. Mission-driven government officials produce content to meet their agencies' immediate, present-day needs, not those of archivists. Content from, say, a video recording device on an Air Force drone bomber must serve warfighting purposes first and foremost, even though those images might eventually become a permanent record at NARA.

Through its relationships with agencies — particularly the biggest producers of audiovisual materials, such as NASA and the Defense Department — NARA tries to influence the format in which agencies submit records. However, officials say, they have only so much influence, and their power is limited.

"You could say, 'Well, we want this particular format and this way to capture video,'" said Jason Love, supervisory audio/video preservation specialist at NARA's Special Media Preservation Lab. "But when you look at the military and they're putting these small, small recorders in the cockpits of jet fighters or in tanks, a lot of that technology is based around needs."

Setting Standards for Digital Formatting

Although the digital world ushers in new recording and capturing possibilities, it also raises questions and perils for archivists. That's where Carl Fleischhauer, program officer at the Library of Congress' National Digital Information Infrastructure and Preservation Program, comes in.

Fleischhauer leads a working group of federal agencies that focuses on developing federal digitization guidelines for audiovisual materials. The group is part of the Federal Agencies Digitization Guidelines Initiative, and in addition to representatives from the Library of Congress and NARA, its members include Voice of America, the Smithsonian Institution, the Defense Visual Information Directorate and the Government Printing Office. A separate working group is dedicated to still images.

The audiovisual group's goal is to disseminate information about standards and practices for the digital reformatting of audiovisual materials by federal agencies, according to the group's charter. "We want to define a set of specifications for the creation of digitized content that are as much as possible in common between different federal agencies…in part to guide our own work when we do it internally but especially with vendor relationships and the acquisition of services," Fleischhauer said.

The group has its work cut out for it. Unlike still images, for which work on digitization standards began years ago, efforts to develop standards for digital moving images are more recent, and formats vary.
If digitization standards for still images could be called mature, Fleischhauer said, he would describe the agreed-upon practices for digitizing sound recordings as semimature, while conventions for video recordings are less advanced. His group is heading toward the publication of an advisory guideline for sound recordings but is nowhere near that point for video recordings.

"Part of the reason it is challenging is that there are any number of practices in place for digitizing video," he said, adding that the technical solutions that broadcasters have used aren't considered by some to be appropriate for preservation-quality reformatting. "The point is, we haven't hit a point of consensus about this, and that's where that exploration continues."

Fleischhauer said there are many questions about how to store and digitize video files because the file sizes are huge, and storage space and transmission time are considerations. There also is disagreement about file formats.

For example, for decades, the film stock used to make motion pictures was fairly constant, and there was a steady supply of it for people such as Audrey Amidon, a motion picture preservation specialist at NARA, to make copies for preservation. "Mostly our goal is always to stabilize the original, to inspect it and assess its condition, store [it] in the proper housing and keep it in the right environment so that it will last as long as possible — and if necessary, we copy it to a new piece of film stock [that] should last another hundreds of years," Amidon said. "The day that we will no longer be able to do that is coming soon."

Changes in how movies are produced mean that companies that have traditionally produced film stock will not be able to continue to do so, experts say. Those changes aren't limited to film. Other analog formats are also becoming obsolete, and agencies increasingly digitally produce sound, video and images that might eventually be considered permanent records.

A common thread running through NARA's diverse audiovisual holdings is that agency originators generally didn't create them with archivists in mind. The recordings' formats are often more a reflection of the needs of the agency than NARA's preference. Jeffrey Reed, supervisory preservation imaging specialist at NARA's photographic imaging lab, said that although NARA officials have conversations with agencies about preferred formats, they don't wield enforcement power. "Even though there might be a format that's ideal for us to receive and keep things in, we really aren't in a position to dictate that to the people who are creating the records," Reed said. "But when asked, we'd love to be able to recommend to them, 'If you start out this way, it'll make our lives a lot easier.'"

Meanwhile, Waffen said, the digital revolution has made it even more crucial for archivists to pay attention to the state of incoming records. "When we were an analog archive, you had less choices and you had a little more time before obsolescence kicked in," he said. "When you get into digital formats coming from agencies to the archive, then you're an active archive and you have to constantly be vigilant. There's refreshing and migration and all kinds of digital asset management that comes into play, as opposed to a more passive archive that can let something sit on the shelf for a while."

NARA employees say proprietary software makes it harder for archivists to maintain or play back records.
The agency might think it was getting a great deal on software, but records that require hard-to-acquire licensed software can quickly become a headache. Dan Rooney, an archivist at NARA's Special Media Services division, said he would ask chief information officers to think more about long-term viability and stay away from proprietary formats, storage systems, servers and tape-writing capabilities. "Those are all things that are going to be problematic for NARA once they start transferring their records over here," he said.

Drawing the Line on Metadata

The working groups' recommendations are voluntary for agencies, but officials hope that if the audiovisual or still-image group issues a guideline, agencies will insist that their employees follow it as much as possible, Fleischhauer said. Issues on the audiovisual working group's to-do list include writing a useful, explanatory, broad guideline for sound recordings; consulting with experts to compare different digital video formats; and coming up with guidance for maintaining metadata about technical aspects of recordings.

Fleischhauer said a common theme in the group's discussions concerns the metadata, or technical or content data, of the records. "It's a line-drawing exercise, and not everybody agrees about where to draw the line on how much metadata to embed," he said. For example, how much data about the digital formats should be included in the video of the moon landing, which NARA has worked with NASA to restore?

The debate over metadata is especially important because such information can restore context that got lost in the digitization process. "Are we providing enough metadata that there is enough context that somebody could really understand those things about an individual item?" Rooney asked. "I agree that some of the context is kind of lost, and it suffers in just a purely online delivery. It's a little colder."

But even with great metadata and digitization technology, original analog materials can't be completely replicated. NARA's Waffen said he worries that analog material might not be saved in the long term. "I think you need to retain the original material as much as you can to truly be called an archive," Waffen said. "There's something in the intrinsic value of the older material and the formats it's on that I would hate to see lost."
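As a small illustration of the technical metadata under discussion, the sketch below pulls the basic parameters of a digitized sound file using Python's standard wave module. The file name is a placeholder, and real preservation workflows capture far richer metadata (provenance, checksums, transfer equipment) than these four fields.

```python
# Read basic technical metadata from an uncompressed WAV file with the standard library.
# The file name is a placeholder for illustration.
import wave

PATH = "digitized_oral_history.wav"   # placeholder file name

with wave.open(PATH, "rb") as wav:
    metadata = {
        "channels": wav.getnchannels(),
        "sample_rate_hz": wav.getframerate(),
        "bit_depth": wav.getsampwidth() * 8,
        "duration_s": wav.getnframes() / wav.getframerate(),
    }

for key, value in metadata.items():
    print(f"{key}: {value}")
```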
Political candidates aren't the only ones hoping to sway voters this election season; plenty of other groups are engaging in campaigns -- and those efforts are increasingly driven by big data, even at smaller organizations with limited resources.

The Sierra Club, for instance, doesn't have the resources of a national presidential campaign. Unlike the Obama for America campaign, it can't hire a small army of predictive analysts and data scientists to model every aspect of its election-year strategy. But the nonprofit is using data mining to identify swing voters who are most likely to be motivated by its environmental message -- and who are most likely to be moved to vote for the candidates the club has endorsed in the 2012 election.

What politicians know about you

By combining voter records with their own donor lists, consumer databases and online information, political campaigns and advocacy groups can assemble highly detailed databases with hundreds of fields, or bits of information, that describe each voter's party affiliation, likes and dislikes, socio-economic background and more. This data comes from sources such as these:

- State voter registration rolls. These lists include voters' names and addresses, as well as registration status, party affiliation and voting history.
- Consumer databases. Available for purchase from third-party marketing firms, these databases contain socio-economic and demographic data, including details such as information about people's hobbies, interests, lifestyles and magazine subscriptions.
- Campaign donor and volunteer databases. These repositories include people's names, street addresses, email addresses, contribution histories (with the dollar amounts of their donations) and volunteer activities. They might even contain information about civic actions people have taken, such as signing petitions.
- Miscellaneous online sources, including websites and social networks like Facebook, Groupon, Twitter and LinkedIn. Records of people's activities on social media sites represent a rich source of psychographic data (interests, hobbies, lifestyles). That information is integrated with data from traditional sources through a process called cookie matching, by scraping sites for information or by encouraging voters to self-identify, which they do when they, for example, like a campaign or advocacy group's page or click through to the campaign's website and respond to a request for a donation.

For the Sierra Club and organizations like it, the objective isn't to win at any cost, but to win cost-effectively. "We target both people who might sit the election out unless sufficiently motivated and folks who may be undecided with a message that will be effective," says Sierra Club political director Cathy Duvall. In this way, the organization doesn't waste resources reaching out to voters who are already on board or those who are unlikely to be persuaded. "We have a more clean shot at the voters we want, and in most cases the return on investment is immediate," she says.

It's a two-step process, Duvall explains. Analysts apply data mining techniques against a massive database that provides very detailed profiles of its own members as well as "look-alikes" who fit the profile of swing voters. From there they develop models that predict which voter profiles will be most likely to respond positively to a campaign message and which type of issue will be most likely to move them to action.
"In some instances, we can take this research a level deeper through real-world experimentation," Duvall says. To accomplish this, Sierra staffers try out a range of specific messages on test groups to determine which will be the most effective before launching the campaign to the target audience. "We can see which messages are moving the voters. Before we could do cross-tabs and see the broad categories of people who might be moving, but with data mining we can go much deeper." The 2012 election is shaping up to be the year of the data-driven, big data campaign. Political operatives in virtually every campaign, and across the political spectrum, are applying data mining techniques to mountains of new information from online sources that offer unparalleled insights into voter interests and habits. For example, armed with more data, analysts can predict more accurately how individuals are likely to vote and whether they are Republicans or Democrats. The ability for niche groups "to communicate only with people likely to support their cause didn't exist four years ago," says Patrick Ruffini, president of Engage DC. As they combine online data -- including social media posts -- with traditional data sources such as consumer databases, analysts can target groups of voters that fit very detailed profiles and choose the messages that will be the most likely to achieve the desired response. This sort of analytics work, known as microtargeting, was already under way during the last presidential election cycle. But since then, the amount of information available about individual voters has exploded. Campaigns have become more sophisticated in its use, and the tent has expanded, with smaller advocacy groups and campaigns coming on board. "That ability for niche groups, such as the Sierra Club, to communicate only with people likely to support their cause didn't exist four years ago," says Patrick Ruffini, president of Engage DC, a firm that handles online advertising and analytics work for the Republican National Committee and individual Republican candidates. In search of the like-minded Many of the voters the Sierra Club wants to reach aren't in its own member database, so Duvall works with Catalist, a consortium of progressive organizations that maintains a 500-terabyte database of information describing both registered and unregistered U.S. voters. "Our database is about civic behavior and transactions, what issues you care about, what causes you support, whether you tend to vote or not, and so on," says Catalist CEO Laura Quinn. Catalist matches up the Sierra Club's member database with its own data and provides access to the full database, which combines state voter registration lists with commercial consumer data that includes demographic (race, gender, age, income) and psychographic (interests, hobbies, lifestyles) information on individuals and households. Catalist buys commercial consumer data from traditional data aggregators and reporting agencies such as Acxiom and Equifax. Voter lists come from the states. For those states that don't release voter registration data, Catalist has developed models that predict, at the household level, who is likely to be Republican or Democrat and how they're likely to vote -- something it couldn't do in 2008. "Our database is about civic behavior and transactions, what issues you care about, what causes you support, whether you tend to vote or not, and so on," says Catalist CEO Laura Quinn. 
Yair Ghitza, senior scientist at Catalist, explains further: "Our clients determine the likelihood that someone is going to vote, care about certain issues or has leanings on certain issues, their partisanship and ideologies, and the actions they're most likely to engage in when they take civic action," he says. Aristotle Inc. offers a similar service to trade associations and campaigns, including both presidential campaigns, according to CEO John Aristotle Phillips. Its database of more than 700 data fields, which describe the traits of more than 85 million registered voters, is used for both fundraising and get-out-the-vote initiatives. Clients use it to create models that find people who are similarly minded or likely to contribute. Aristotle then helps them deliver a targeted message to individuals who match the criteria through various channels, including TV, direct mail, email and social media. The more sophisticated campaigns were doing this in the last election cycle, says Aristotle. "What we're seeing in 2012 is much more effective use of real-time access to these databases. You know as contributions are coming in who else to email of a similar demographic," he says. "Digital is no longer a separate division in campaigns," says Patrick Hynes, president of Hynes Communications, a consultancy specializing in online and new media communications strategy that currently serves as an adviser to the Romney campaign. "It's cross-portfolio -- everyone has to work in a digital environment." But the next election cycle, he says, will be all about mobile. "Mobile will be first in the minds of everyone" -- for everything from polling to press releases, sentiment measurement and fundraising, he says. Mobile is already changing the game, particularly in the area of door-to-door campaigning, where canvassers are increasingly taking advantage of mobile apps and the Square mobile payment service. Square offers a small card reader that attaches to a smartphone, enabling the user to accept payments anywhere, at any time. Canvassers who use the device can take campaign donations right on voters' doorsteps. As campaign volunteers go door to door, they might rely on mobile apps for customized messages about specific households. They could look at profiles that not only indicate whether an individual is a Republican or a Democrat, but also offer guidance about how much of a donation to ask for based on the person's past history of campaign donations. In addition, canvassers can use apps to capture details of interactions with voters and upload that information to the campaign database, thereby providing continuous, real-time feedback. "The Obama campaign has taken it up a notch," says Engage's Ruffini. "They're recording what people say when they knock on doors. They make thousands of phone calls every night. They do text analysis, and then make decisions on TV and ad spending." (Obama for America did not return calls asking for comment.) On the Republican side, mobile apps improve the efficiency of door-to-door campaigning, because they can tell canvassers exactly which doors to knock on in rural areas to reach the party faithful, independents and swing voters, says Hynes. Because Democrats tend to live in urban areas, Democratic campaign workers can be effective by canvassing entire neighborhoods.
However, "Republicans live in the suburbs and exurbs, so it's been harder to go door to door," says Hynes, adding that mobile is helping to level the playing field. Coming up, in part 2 of this story: Merging online and offline data, privacy issues and more. Read more about big data in Computerworld's Big Data Topic Center. This story, "Campaign 2012: Mining for voters" was originally published by Computerworld.
<urn:uuid:35c74514-d424-4d0b-814a-335290f33433>
CC-MAIN-2017-09
http://www.itworld.com/article/2719660/big-data/campaign-2012--mining-for-voters.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00203-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959441
2,116
2.796875
3
The System Bus
The System Bus is one of the four major components of a computer. This logical representation is taken from the textbook. The system bus is used by the other major components to communicate data, addresses, instructions, and control signals.
The CPU: A High–Level Description
Just how does the CPU interact with the rest of the system?
1. It sends out and takes in data (in units of 8, 16, 32, or 64 bits).
2. It sends out addresses indicating the source or destination of such data. An address can indicate either memory or an input/output device.
3. It sends out signals to control the system bus and arbitrate its use.
4. It accepts interrupts from I/O devices and acknowledges those interrupts.
5. It sends out a clock signal and various other signals.
6. It must have power and ground as input.
The CPU Interacts Via the System Bus
The System Bus allows the CPU to interact with the rest of the system. Each of the logical pinouts on the previous figure is connected to a line in the system bus. Ground lines on the bus have two purposes:
1. To complete the electrical circuits.
2. To minimize cross–talk between the signal lines.
Here is a small bus with three data lines (D2, D1, D0), two address lines (A1, A0), a system clock (F) and a voltage line (+ V). In our considerations, we generally ignore the multiple grounds and the power lines.
Notations Used for a Bus
Here is the way that we would commonly represent the small bus shown above. The big “double arrow” notation indicates a bus made up of a number of lines; our author calls this a “fat arrow”. Lines with similar function are grouped together, and their count is denoted with the “diagonal slash” notation. From top to bottom, we have:
1. Three data lines D2, D1, and D0.
2. Two address lines A1 and A0.
3. The clock signal for the bus, F.
Power and ground lines usually are not shown in this diagram.
Computer Systems Have Multiple Busses
Early computers had only a single bus, but this could not handle the data rates. Modern computers have at least four types of busses:
1. A video bus to the display unit.
2. A memory bus to connect the CPU to memory, which is often SDRAM.
3. An I/O bus to connect the CPU to Input/Output devices.
4. Busses internal to the CPU, which generally has at least three busses.
Often the proliferation of busses is for backward compatibility with older devices.
Backward Compatibility in PC Busses
Here is a figure that shows how the PC bus grew from a 20–bit address through a 24–bit address to a 32–bit address while retaining backward compatibility.
Backward Compatibility in PC Busses (Part 2)
Here is a picture of the PC/AT bus, showing how the original configuration was kept and augmented, rather than totally revised. Note that the top slots can be used by the older 8088 cards, which do not have the “extra long” edge connectors.
Notation for Bus Signal Levels
The system clock is represented as a trapezoidal wave to emphasize the fact that it does not change instantaneously. Here is a typical depiction. Others may be seen, but this is what our author uses. Single control signals are depicted in a similar fashion, except (of course) that they may not vary in “lock step” with the bus clock.
Notation for Multiple Signals
A single control signal is either low or high (0 volts or 5 volts). A collection, such as 32 address lines or 16 data lines, cannot be represented with such a simple diagram. For each of address and data, the diagram shows only two states: the address or data is valid, or the address or data is not valid. For example, consider the address lines on the bus.
Imagine a 32–bit address. At some time after T1, the CPU asserts an address on the address lines. This means that each of the 32 address lines is given a value. When the CPU has asserted the address, it is valid until the CPU ceases assertion.
Reading Bus Timing Diagrams
We need to depict signals on a typical bus. Here we are looking at a synchronous bus, of the type used for connecting memory. This figure, taken from the textbook, shows the timings on a typical bus. Note the form used for the Address Signals: between t0 and t1 they change value. According to the figure, the address signals remain valid from t1 through the end of t7.
Read Timing on a Synchronous Bus
The bus protocol calls for certain timings to be met. Two examples:
1. The maximum allowed delay for asserting the address after the clock pulse.
2. TML, the minimum time that the address is stable before MREQ is asserted.
Read Sequences on an Asynchronous Bus
Here the focus is on the protocol by which the two devices interact. This is also called the “handshake”. The bus master asserts MSYN and the bus slave responds with SSYN when done.
Attaching an I/O Device to a Bus
The figure shows a DMA Controller for a disk attached to a bus. It is only slightly more complex than a standard controller. Each I/O Controller has a range of addresses to which it will respond. Specifically, the device has a number of registers, each at a unique address. When the device recognizes its address, it will respond to I/O commands sent on the command bus.
A number of I/O devices are usually connected to a bus. Each I/O device can generate an Interrupt, called “INT”, when it needs service. The CPU will reply with an acknowledgement, called “ACK”. The handling by the CPU is simple. There are only two signals:
INT: some device has raised an interrupt.
ACK: the CPU is ready to handle that interrupt.
We need an arbitrator to take the ACK and pass it to the correct device. The common architecture is to use a “daisy chain”, in which the ACK is passed from device to device until it reaches the device that raised the interrupt.
Details of the Device Interface
Each device has an Interrupt Flip–Flop that is set when the device raises the interrupt. Note that the interrupt line is grounded out as a signal to the CPU. The ACK comes from the left of the figure and is trapped by the AND gate. The device identifies itself by a “vector”, a pointer to the address of the device controller that will handle the I/O.
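The asynchronous handshake described above can be made concrete with a short simulation. The sketch below is illustrative only: it is software modelling hardware signals, and the class and function names are invented for the example. It follows the MSYN/SSYN sequence from the notes: the master asserts an address and MSYN, the slave replies with data and SSYN, and each side then releases its signal.

# Illustrative simulation (Python) of the asynchronous-bus handshake above.
class AsyncBus:
    def __init__(self):
        self.address = None
        self.data = None
        self.msyn = False   # master synchronization signal
        self.ssyn = False   # slave synchronization signal

def slave_memory(bus, memory):
    """Slave side: when MSYN is seen, place data on the bus and raise SSYN."""
    if bus.msyn and not bus.ssyn:
        bus.data = memory.get(bus.address)
        bus.ssyn = True
    elif not bus.msyn and bus.ssyn:
        # The master has dropped MSYN, so the slave drops SSYN and releases the bus.
        bus.data = None
        bus.ssyn = False

def master_read(bus, memory, address):
    """Master side: drive one full read handshake and return the data."""
    bus.address = address
    bus.msyn = True             # 1. assert the address, then MSYN
    slave_memory(bus, memory)   # 2. slave responds with data and SSYN
    value = bus.data            # 3. master latches the data
    bus.msyn = False            # 4. master drops MSYN ...
    slave_memory(bus, memory)   # 5. ... and the slave drops SSYN
    bus.address = None
    return value

if __name__ == "__main__":
    ram = {0x0040: 0xDEADBEEF}
    bus = AsyncBus()
    print(hex(master_read(bus, ram, 0x0040)))   # prints 0xdeadbeef

In real hardware the two sides run independently and the "wait for the other signal" steps are electrical, not function calls; the point of the sketch is only the ordering of MSYN and SSYN.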
<urn:uuid:10a0cac8-528b-4933-92df-0b97b0d61a17>
CC-MAIN-2017-09
http://edwardbosworth.com/My5155_Slides/Chapter12/SystemBusFundamentals.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00379-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925029
1,472
3.828125
4
At the OFC 2014 Conference in San Francisco, Huawei introduced a 400G WDM prototype for ultra-long-haul transmission that employs Faster Than Nyquist (FTN) technology to increase the 400G transmission distance beyond 3000 km. Huawei said its FTN technology breaks the limit of the Nyquist sampling theorem, which defines a maximum transmission speed for a fixed channel bandwidth, by leveraging spectrum compression and signal distortion compensation algorithms. This enables the FTN technology to achieve long-haul transmission of high baud-rate signals on existing networks (similar to transmitting a large file that has been compressed, which requires less bandwidth than transmitting the original file). The technology enables two types of 400G WDM solutions based on 100 GHz channel spacing — a short-haul solution and the ultra-long-haul solution unveiled at OFC 2014. The ultra-long-haul solution, which applies to backbone transmission, adopts 2SC-PDM-QPSK modulation and supports a record transmission distance of over 3000 km. The short-haul solution, which applies to metropolitan area network (MAN) transmission, adopts 1SC-PDM-16QAM modulation and allows high-quality 400G transmission over a single carrier wave. Huawei's single-carrier 400G solution was recently tested on EXATEL's live network in Poland. “Huawei has been investing heavily in the high-speed WDM field. The new FTN technology introduced in the prototype exceeds traditional technological limits to deliver ultra-long-haul 400G transmission, breaking new ground for the 400G industry,” said Jack Wang, President of the Huawei transmission network product line.
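For readers who want the quantitative background, the limit being referred to is the classical Nyquist signalling rate. The formulas below are standard signalling theory and are not taken from Huawei's announcement; the acceleration factor tau is a generic faster-than-Nyquist parameter, not a figure Huawei disclosed.

\[
R_s \le 2B \qquad \text{(maximum ISI-free symbol rate over an ideal channel of bandwidth } B\text{)}
\]
\[
R_s^{\mathrm{FTN}} = \frac{2B}{\tau}, \qquad 0 < \tau < 1 \qquad \text{(symbols sent every } \tau T \text{ seconds, where } T = 1/(2B)\text{, accepting controlled ISI)}
\]

In this reading, the receiver-side "signal distortion compensation algorithms" mentioned in the announcement would broadly be what removes the intentional intersymbol interference that packing symbols faster than the Nyquist interval introduces.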
<urn:uuid:b068264c-347c-41de-8a46-c5f311128b11>
CC-MAIN-2017-09
http://www.convergedigest.com/2014/03/huaweis-400g-wdm-prototype-promises.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00431-ip-10-171-10-108.ec2.internal.warc.gz
en
0.856674
337
2.59375
3
Updated rules to avoid phishing scams
By now, you’ve probably seen an example or two of a phishing attempt. Maybe it was an email message that asked you to quickly follow a mysterious URL to “verify your account” or “confirm billing information.” Once you have clicked on the link or supplied personal information, the phisher is able to access your accounts; with that access in hand, your chances of becoming a victim of identity theft triple. A phishing phone call may appear to be from a familiar source, a highly recognizable business or a survey conductor attempting to gain personal and financial information. Recent reports indicate that phishing (one of the oldest computer scams) is still one of the fastest-growing forms of fraud, and one of the most successful. As consumers and employees, it’s important to be able to identify a phishing scam to protect not only our personal and financial data, but also the company data we can access. Below is a list of general rules to help you avoid phishing scams:
- Be cautious when opening emails that manipulate you emotionally. Phishers understand human psychology and will use all sorts of tricks to get you to open or respond to their messages. Emails promising free gifts, warning you that your account has been suspended, or carrying an urgent security warning that seems to come from your computer technician should all be treated as suspect if they ask for inappropriate information (like your Social Security number or usernames and passwords).
- Never respond to emails that request personal or financial information. Your bank or your employer will never ask you for bank account details, Social Security number or passwords by email. The email requesting this information may look absolutely legitimate – it can have the right logo, even the right design and typeface, of a reputable company – or it may even seem to be from someone you personally know and trust. Still, always delete these without replying or taking any action. If ever in doubt, call the bank or the person the email is supposedly from to verify that they sent it.
- Never go to your bank’s or a vendor’s website by clicking on a link included in an email. Do not click on hyperlinks or links attached in emails, as they could take you to fraudulent websites that lure you into “logging in” to your bank or other high-value e-commerce account. These fraudulent websites might look absolutely genuine, but what you are really doing is handing over the keys to your accounts to criminals. Type the URL directly into your browser whenever you want to visit a financial or e-commerce website.
- Check that the websites you visit are secure. If the websites you visit are on secure servers, their addresses should start with https:// (the “s” stands for “secure”) rather than the usual http://. Never enter personal or financial information except into an https web page.
- Keep your computer secure. Phishing emails often contain spyware and keyloggers (programs that can record your keystrokes and what you do online) or create a back door to allow attackers into your computer. Make sure you have antivirus software and that it’s up to date to catch these malicious programs before they can do harm.
At CenturyLink, we encourage customers to be aware of and to report suspicious activity to [email protected]. Read the full article originally published on our Bright Ideas blog “Five Tips to Avoid Falling for Phishing Tricks.”
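As a small illustration of the "check the link before you trust it" advice above, the sketch below (not a CenturyLink tool; the class name and heuristics are invented for the example) scans the links in an HTML email body and flags two warning signs: a target that is not HTTPS, and visible link text that names a different domain than the one the link actually points to.

# Minimal sketch (Python): flag suspicious links in an HTML email body.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self._audit(self._href, "".join(self._text).strip())
            self._href = None

    def _audit(self, href, text):
        target = urlparse(href)
        if target.scheme != "https":
            self.findings.append(f"not HTTPS: {href}")
        # Only compare domains when the visible text itself looks like a URL or domain.
        if "." in text and " " not in text:
            shown = urlparse(text if "://" in text else "//" + text)
            if shown.hostname and target.hostname and shown.hostname != target.hostname:
                self.findings.append(
                    f"link text shows {shown.hostname} but points to {target.hostname}")

if __name__ == "__main__":
    body = '<a href="http://evil.example.net/login">www.yourbank.com</a>'
    auditor = LinkAuditor()
    auditor.feed(body)
    print(auditor.findings)

Real mail filters use far richer signals, but even these two checks would catch the classic "display one domain, link to another" trick described in the rules above.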
<urn:uuid:494bd260-8773-4b9c-8281-5f2717b17717>
CC-MAIN-2017-09
http://news.centurylink.com/blogs/security/updated-rules-to-avoid-phishing-scams
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00131-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931035
730
2.609375
3
Efforts to reuse the heat that data center operators work so hard to extract are nothing new. But officials in Seattle are hoping to take things one step further. The Seattle Office of Sustainability & Environment is developing a plan to reuse waste heat from nearby data centers and other sources to power a so-called “district heating” system that would deliver sustainable hot water and heat to buildings in the city’s South Lake Union and Denny Triangle neighborhoods. The city is working with tenants, local heating utility Seattle Steam, and Corix, a Vancouver, British Columbia-based provider of sustainable utility infrastructures, on the plan. While the project is only in a discussion phase, a full analysis should be completed by late November, at which point the plan would be presented to the mayor’s office, and potentially the city council. Read the rest of this article on Network Computing.
<urn:uuid:372ba054-1e5c-43d9-8f8c-a81a704473ce>
CC-MAIN-2017-09
http://www.networkcomputing.com/data-centers/seattle-plans-warm-city-data-center-waste-heat/1935830278
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00551-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937995
187
2.8125
3
Anatomy of the Linux kernel History and architectural decomposition Given that the goal of this article is to introduce you to the Linux kernel and explore its architecture and major components, let's start with a short tour of Linux kernel history, then look at the Linux kernel architecture from 30,000 feet, and, finally, examine its major subsystems. The Linux kernel is over six million lines of code, so this introduction is not exhaustive. Use the pointers to more content to dig in further. A short tour of Linux history While Linux is arguably the most popular open source operating system, its history is actually quite short considering the timeline of operating systems. In the early days of computing, programmers developed on the bare hardware in the hardware's language. The lack of an operating system meant that only one application (and one user) could use the large and expensive device at a time. Early operating systems were developed in the 1950s to provide a simpler development experience. Examples include the General Motors Operating System (GMOS) developed for the IBM 701 and the FORTRAN Monitor System (FMS) developed by North American Aviation for the IBM 709. In the 1960s, Massachusetts Institute of Technology (MIT) and a host of companies developed an experimental operating system called Multics (or Multiplexed Information and Computing Service) for the GE-645. One of the developers of this operating system, AT&T, dropped out of Multics and developed their own operating system in 1970 called Unics. Along with this operating system came the C language; the new operating system was later rewritten in C, which made operating system development portable. Twenty years later, Andrew Tanenbaum created a microkernel version of UNIX®, called MINIX (for minimal UNIX), that ran on small personal computers. This open source operating system inspired Linus Torvalds' initial development of Linux in the early 1990s (see Figure 1). Figure 1. Short history of major Linux kernel releases Linux quickly evolved from a single-person project to a world-wide development project involving thousands of developers. One of the most important decisions for Linux was its adoption of the GNU General Public License (GPL). Under the GPL, the Linux kernel was protected from commercial exploitation, and it also benefited from the user-space development of the GNU project (of Richard Stallman, whose source dwarfs that of the Linux kernel). This gave Linux useful applications such as the GNU Compiler Collection (GCC) and various shells. Introduction to the Linux kernel Now on to a high-altitude look at the GNU/Linux operating system architecture. You can think about an operating system from two levels, as shown in Figure 2. Figure 2. The fundamental architecture of the GNU/Linux operating system At the top is the user, or application, space. This is where the user applications are executed. Below the user space is the kernel space. Here, the Linux kernel exists. There is also the GNU C Library (glibc). This provides the system call interface that connects to the kernel and provides the mechanism to transition between the user-space application and the kernel. This is important because the kernel and user application occupy different protected address spaces. And while each user-space process occupies its own virtual address space, the kernel occupies a single address space. The Linux kernel can be further divided into three gross levels. 
At the top is the system call interface, which implements the basic functions such as write. Below the system call interface is the kernel code, which can be more accurately defined as the architecture-independent kernel code. This code is common to all of the processor architectures supported by Linux. Below this is the architecture-dependent code, which forms what is more commonly called a BSP (Board Support Package). This code serves as the processor and platform-specific code for the given architecture. Properties of the Linux kernel When discussing the architecture of a large and complex system, you can view the system from many perspectives. One goal of an architectural decomposition is to provide a way to better understand the source, and that's what we'll do here. The Linux kernel implements a number of important architectural attributes. At a high level, and at lower levels, the kernel is layered into a number of distinct subsystems. Linux can also be considered monolithic because it lumps all of the basic services into the kernel. This differs from a microkernel architecture where the kernel provides basic services such as communication, I/O, and memory and process management, and more specific services are plugged into the microkernel layer. Each has its own advantages, but I'll steer clear of that debate. Over time, the Linux kernel has become efficient in terms of both memory and CPU usage, as well as extremely stable. But the most interesting aspect of Linux, given its size and complexity, is its portability. Linux can be compiled to run on a huge number of processors and platforms with different architectural constraints and needs. One example is the ability for Linux to run on a processor with a memory management unit (MMU), as well as on those that provide no MMU. The uClinux port of the Linux kernel provides for non-MMU support. Major subsystems of the Linux kernel Now let's look at some of the major components of the Linux kernel using the breakdown shown in Figure 3 as a guide. Figure 3. One architectural perspective of the Linux kernel System call interface The SCI is a thin layer that provides the means to perform function calls from user space into the kernel. As discussed previously, this interface can be architecture dependent, even within the same processor family. The SCI is actually an interesting function-call multiplexing and demultiplexing service. You can find the SCI implementation in ./linux/kernel, as well as architecture-dependent portions in ./linux/arch. Process management is focused on the execution of processes. In the kernel, these are called threads and represent an individual virtualization of the processor (thread code, data, stack, and CPU registers). In user space, the term process is typically used, though the Linux implementation does not separate the two concepts (processes and threads). The kernel provides an application program interface (API) through the SCI to create a new process (fork, exec, or Portable Operating System Interface [POSIX] functions), stop a process (kill, exit), and communicate and synchronize between them (signal, or POSIX mechanisms). Also in process management is the need to share the CPU between the active threads. The kernel implements a novel scheduling algorithm that operates in constant time, regardless of the number of threads vying for the CPU. This is called the O(1) scheduler, denoting that the same amount of time is taken to schedule one thread as it is to schedule many. 
The O(1) scheduler also supports multiple processors (called Symmetric MultiProcessing, or SMP). You can find the process management sources in ./linux/kernel and architecture-dependent sources in ./linux/arch). Another important resource that's managed by the kernel is memory. For efficiency, given the way that the hardware manages virtual memory, memory is managed in what are called pages (4KB in size for most architectures). Linux includes the means to manage the available memory, as well as the hardware mechanisms for physical and virtual mappings. But memory management is much more than managing 4KB buffers. Linux provides abstractions over 4KB buffers, such as the slab allocator. This memory management scheme uses 4KB buffers as its base, but then allocates structures from within, keeping track of which pages are full, partially used, and empty. This allows the scheme to dynamically grow and shrink based on the needs of the greater system. Supporting multiple users of memory, there are times when the available memory can be exhausted. For this reason, pages can be moved out of memory and onto the disk. This process is called swapping because the pages are swapped from memory onto the hard disk. You can find the memory management sources in ./linux/mm. Virtual file system The virtual file system (VFS) is an interesting aspect of the Linux kernel because it provides a common interface abstraction for file systems. The VFS provides a switching layer between the SCI and the file systems supported by the kernel (see Figure 4). Figure 4. The VFS provides a switching fabric between users and file systems At the top of the VFS is a common API abstraction of functions such as open, close, read, and write. At the bottom of the VFS are the file system abstractions that define how the upper-layer functions are implemented. These are plug-ins for the given file system (of which over 50 exist). You can find the file system sources in ./linux/fs. Below the file system layer is the buffer cache, which provides a common set of functions to the file system layer (independent of any particular file system). This caching layer optimizes access to the physical devices by keeping data around for a short time (or speculatively read ahead so that the data is available when needed). Below the buffer cache are the device drivers, which implement the interface for the particular physical device. The network stack, by design, follows a layered architecture modeled after the protocols themselves. Recall that the Internet Protocol (IP) is the core network layer protocol that sits below the transport protocol (most commonly the Transmission Control Protocol, or TCP). Above TCP is the sockets layer, which is invoked through the SCI. The sockets layer is the standard API to the networking subsystem and provides a user interface to a variety of networking protocols. From raw frame access to IP protocol data units (PDUs) and up to TCP and the User Datagram Protocol (UDP), the sockets layer provides a standardized way to manage connections and move data between endpoints. You can find the networking sources in the kernel at ./linux/net. The vast majority of the source code in the Linux kernel exists in device drivers that make a particular hardware device usable. The Linux source tree provides a drivers subdirectory that is further divided by the various devices that are supported, such as Bluetooth, I2C, serial, and so on. You can find the device driver sources in ./linux/drivers. 
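Before moving on to the architecture-dependent code, a small user-space sketch may help make the SCI and the process-management services described above concrete. It is written in Python for brevity rather than C; os.fork(), os.wait() and os.write() are thin wrappers over the corresponding system calls, and the raw syscall() example assumes the x86-64 syscall number for getpid (39), which differs on other architectures.

# User-space view of services reached through the system call interface.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
SYS_getpid = 39  # x86-64 only; see asm/unistd.h for other architectures

def demo():
    os.write(1, b"about to fork\n")            # write(2) via the SCI
    pid = os.fork()                            # process creation via the SCI
    if pid == 0:
        msg = f"child: getpid() via raw syscall = {libc.syscall(SYS_getpid)}\n"
        os.write(1, msg.encode())
        os._exit(0)                            # exit(2)
    finished, status = os.wait()               # wait(2): synchronize with the child
    os.write(1, f"parent: reaped child {finished}\n".encode())

if __name__ == "__main__":
    demo()

Everything above crosses from user space into the kernel through the SCI; the scheduler, memory manager, and VFS described earlier all do their work on the kernel side of that boundary.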
While much of Linux is independent of the architecture on which it runs, there are elements that must consider the architecture for normal operation and for efficiency. The ./linux/arch subdirectory defines the architecture-dependent portion of the kernel source contained in a number of subdirectories that are specific to the architecture (collectively forming the BSP). For a typical desktop, the i386 directory is used. Each architecture subdirectory contains a number of other subdirectories that focus on a particular aspect of the kernel, such as boot, kernel, memory management, and others. You can find the architecture-dependent code in ./linux/arch. Interesting features of the Linux kernel If the portability and efficiency of the Linux kernel weren't enough, it provides some other features that could not be classified in the previous decomposition. Linux, being a production operating system and open source, is a great test bed for new protocols and advancements of those protocols. Linux supports a large number of networking protocols, including the typical TCP/IP, and also extension for high-speed networking (greater than 1 Gigabit Ethernet [GbE] and 10 GbE). Linux also supports protocols such as the Stream Control Transmission Protocol (SCTP), which provides many advanced features above TCP (as a replacement transport level protocol). Linux is also a dynamic kernel, supporting the addition and removal of software components on the fly. These are called dynamically loadable kernel modules, and they can be inserted at boot when they're needed (when a particular device is found requiring the module) or at any time by the user. A recent advancement of Linux is its use as an operating system for other operating systems (called a hypervisor). Recently, a modification to the kernel was made called the Kernel-based Virtual Machine (KVM). This modification enabled a new interface to user space that allows other operating systems to run above the KVM-enabled kernel. In addition to running another instance of Linux, Microsoft® Windows® can also be virtualized. The only constraint is that the underlying processor must support the new virtualization instructions. This article just scratched the surface of the Linux kernel architecture and its features and capabilities. You can check out the Documentation directory that's provided in every Linux distribution for detailed information about the contents of the kernel. - GNU GPL - The GNU C Library, or glibc - Kernel command using Linux system calls - Access the Linux kernel using the /proc filesystem
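As a companion to the last pointer above, a few lines of user-space code are enough to read kernel state through the /proc virtual file system. The sketch below assumes the common Linux layout of these files, which can vary slightly between kernel versions.

# Reading kernel state through /proc (Linux only).
def kernel_version():
    with open("/proc/version") as f:          # e.g. "Linux version 4.15.0 ..."
        return f.read().strip()

def mem_total_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])   # value is reported in kB
    return None

def loaded_modules(limit=5):
    with open("/proc/modules") as f:
        return [line.split()[0] for line in f][:limit]

if __name__ == "__main__":
    print(kernel_version())
    print(mem_total_kb(), "kB RAM")
    print(loaded_modules())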
<urn:uuid:71aea265-1515-4588-bd0f-3d7ee16ba873>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/linux/library/l-linux-kernel/?S_TACT=105AGX59&S_CMP=GR&ca=dgr-btw01LKernalAnatomy
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00127-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927827
2,632
3.53125
4
NASA to test Google 3D mapping smartphones - By Kathleen Hickey - Mar 27, 2014 Project Tango, Google’s prototype 3D mapping smartphone, will be used by NASA to help the international space station (ISS) with satellite servicing, vehicle assembly and formation flying spacecraft configurations. NASA’s SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites) are free-flying bowling-ball-sized spherical satellites that will be used inside the ISS to test autonomous maneuvering. By connecting a smartphone to the SPHERES, the satellites get access to the phone’s built-in cameras to take pictures and video, sensors to help conduct inspections, computing units to make calculations and Wi-Fi connections to transfer data in real time to the computers aboard the space station and at mission control, according to NASA. The prototype Tango phone includes an integrated custom 3D sensor, which means the device is capable of tracking its own position and orientation in real time as well as generating a full 3D model of the environment. “This allows the satellites to do a better job of flying around on the space station and understanding where exactly they are,” said Terry Fong, director of the Intelligent Robotics Group at Ames. Google handed out 200 smartphone prototypes earlier this month to developers for testing the phone’s 3D mapping capabilities and developing apps to improve these capabilities. The customized, prototype Android phones create 3D maps by tracking a user’s movement throughout the space. Sensors “allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time combining that data into a single 3D model of the space around you,” Google said. Mapping is done with four cameras in the phone, according to a post on Chromium. The phone has a standard 4MP color backside camera, a 180 degrees field-of-view (FOV) fisheye camera, a 3D depth camera shooting at 320×180@5Hz and a front-facing camera with a 120 degree FOV, which should have the same field of view as the human eye. The cameras are using Movidius’ Myriad 1 vision processor platform. Previous visual sensor technology was prohibitively expensive and too much of a battery drain on the phones to be viable; new visual processors use considerably less power. Myriad 1 will allow the phone to do motion detection and tracking, depth mapping, recording and interpreting spatial and motion data for image-capture apps, games, navigation systems and mapping applications, according to TechCrunch. The cameras, together with the processor, let the phone track and create a 3D map of its surroundings, opening up the possibility for easier indoor mapping, a problem facing urban military patrols and first responders. Kathleen Hickey is a freelance writer for GCN.
<urn:uuid:61221c98-bd7e-4515-9b27-6ba0fad8fd23>
CC-MAIN-2017-09
https://gcn.com/articles/2014/03/27/nasa-google-tango.aspx?admgarea=TC_Mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00303-ip-10-171-10-108.ec2.internal.warc.gz
en
0.894461
602
2.828125
3
Traditional cyber security is proving an increasingly inadequate response to the modern cyber threat landscape. It’s no longer sufficient to suppose that you can defend against any potential attack; you must accept that an attack will inevitably succeed. An organisation’s resilience to these attacks – identifying and responding to security breaches – will become a critical survival trait in the future. Cyber resilience is a key principle underpinning ISO 27001, and the wider issue of ICT’s role in business continuity is covered by ISO 27031. Continue reading as we explain a cyber resilience strategy in more detail. Figures from the Department for Business, Innovation and Skills (BIS) 2015 Information Security Breaches survey show that 90% of large organisations and 74% of small organisations suffered a data breach in 2014. Now that suffering a breach is almost inevitable, cyber security methods can no longer be completely relied upon to secure an organisation’s operations. The only sensible response is to adopt a robust cyber resilience strategy. Cyber resilience = cyber security + business resilience Cyber resilience is a broader approach, which encompasses cyber security and business resilience, and aims not only to defend against potential attacks but also to ensure your survival following a successful attack. An effective approach to cyber resilience is twofold: Ensure your cyber security is as effective as possible without compromising the usability of your systems. Ensure you have robust business continuity plans in place that cover your information assets so that you can resume normal operations as soon as possible if an attack is successful. Two International Standards provide the main guidance you need: ISO27001, which details the implementation of an information security management system (ISMS); and ISO22301, which details the implementation of a business continuity management system (BCMS). Within the bounds of the broader ISO22301 standard, it is also worth considering the guidance in ISO27031, which applies specifically to information and communication technology business continuity, and the requirements of ISO27001 and ISO22301 are mutually compatible. Cyber Essentials Scheme The Cyber Essentials Scheme was developed by the UK Government to help businesses deal with the business-critical issue of cyber security and cyber resilience. The scheme provides a set of controls that organisations can implement to achieve a basic level of cyber security. Withstand up to 80% of cyber attacks by obtaining certification to Cyber Essentials from as little as £300 >> ISO 27001 offers a cohesive approach, recognising that effective cyber security is a cultural as much as a technological issue, and addresses people, processes and technology. An ISMS helps you coordinate your security efforts across your organisation, will ensure that your systems are as safe as possible, and will reassure your customers, suppliers, shareholders and stakeholders that you are following international best-practice guidelines. For more detailed information about ISO 27001, please click here >>. For all products and services relating to ISO 27001, please visit our webshop. Business continuity for information and communication systems is fundamental to an ISMS. ISO 27031 (Guidelines for ICT Readiness for Business Continuity) provides detailed and valuable guidance on how this critical aspect should be tackled. 
While development of a broad business resilience strategy should fit within an organisation's enterprise risk management framework, you should not delay dealing with cyber resilience simply because a wider business resilience strategy is yet to be developed. If you’re not in a position to implement a standard-based approach, there are other means of addressing your cyber resilience requirements. Published by GCHQ, the 10 Steps to Cyber Security framework sets out a simple approach to handling cyber risk to help secure your information and ensure your business thrives in the Internet Age. IT Governance can carry out a robust assessment of your performance in each of the ten areas, providing you with a tailored and usable action plan that will help you close the gap between recognised good practice and what you’re actually doing. The 20 Critical Controls is a set of additional controls developed for organisations involved in critical national infrastructure and has much to offer larger organisations. Of the 20 controls, there are five “critical tenets”. IT Governance can provide a range of cyber resilience solutions to help you ensure your organisation is best placed to mitigate unexpected situations or events. Visit the following pages for more information:
<urn:uuid:c46c3ea6-2517-4b4b-9efa-f6d04bb112d9>
CC-MAIN-2017-09
https://www.itgovernance.asia/cyber-resilience
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00072-ip-10-171-10-108.ec2.internal.warc.gz
en
0.920994
880
2.890625
3
For some people, deconstructing a computer can lead to a tangled mess of wires. Yet others are finding that by rewiring or recycling their Macs they're stumbling upon some very creative projects. More than just information storage facilities, computers also function as brains. A group consisting of more than a dozen students from the Humboldt University of Berlin and the Institute of Cognitive Science at the University of Osnabruck set up an Artificial emotion Project part of artificial intelligence, to study how to create emotion in machines. One aspect of the project was to create a WALL-E-type robot, whose "brain" consisted of a Mac mini. Joscha Bach, PhD, who is currently working in a technology startup company in Berlin, Germany, was part of the project. "We had developed components for a so-called cognitive architecture, called a 'MicroPsi'. This is a computer model of how humans perceive, act, have emotions and make plans. We looked for ways to test our software. And we came upon the idea of a fleet of little robots," he says. " MiniPsi was the logical title for the Mac mini version." The Mac mini was a small enough computer that the group could use it to create a small robot that ran on wheels. However, the downside was its demanding power supply, Bach says. Yet in the end, the Mac mini-based robot completed its task. "Our little robot navigated the lab floor, could build maps of its environment and search for light sources," Bach says. "We have learned quite a few things about adapting our software to the demands of real-time, real-world navigation." Not every Mac mini ends up being a research project's robot brain. Some are turned into the masterminds behind car stereo systems. Though there are plenty of mp3 player options for car stereos out there, Mike Welch, director of Technical Services at Emblemax, wasn't satisfied by their two to three hours of music life. Welch wanted a small case computer and two years ago a Mac mini was his only option for a computer this compact. The rest of the parts came from eBay, Home Depot, MP3Car and a nearby junkyard. Welch used his experience from a former University of Maryland class called "Future Truck", through which students upgraded a Ford Explorer to run on alternative fuel and electricity. The rest of Welch's knowledge came from online research, particularly at MP3Car, where he found information about building car stereos with Mac minis. Though some rewiring was involved, like linking the Mac mini's power supply to the automatic locks and ignition, Welch says that the hardest part of the project was modifying his junkyard found bezel— a radio's front that hides the internal parts. Unfortunately, getting the bezel to fit the Mac mini took three weeks—of alternatively applying epoxy and sanding to get the bezel's dimensions just right—so the completed project looked generic enough to deter thievery. Although he's enjoying his car's Mac mini—which holds 220 to 230 albums, a GPS system, and is set up for both television/movie watching and video game use—Welch says he won't install a similar system for anyone else for less than $3,000. Nobody said technology comes without a price though, especially the most innovative designs. Although not part of first generation tablet computers, Axiotron's Modbook was unveiled at the January 2007 Macworld Conference & Expo and released a year later. The Modbook is Axiotron's combination of Apple's MacBook, with Wacom's digitized pen technology. 
The Modbooks don't have a hard keyboard, but do have a software keypad for those who can't give up their keyboards. Though there aren't any shortcut keys on the keypad, the Modbook does offer InkWell software that allows users' handwriting to be converted into text via its handwriting recognition capability. The Modbook is great option for tight spaces, like on airplanes, and for artists who want to draw with a more "traditional" tool. And MacBook owners have the option of having their existing computers converted into Modbooks. The tablet computers Modbook and Modbook Pro have been updated for 2009. Still made sans hard covers, like traditional laptops have, the Modbooks have to be treated very carefully to protect the screen from damage that could perhaps ruin the entire machine. But what does a person do when a computer is so badly deteriorated or out of date that even reformatting isn't worth it? Recycle. Even if a Mac can't be used in its entirety anymore, at least its parts can be. Jake Harms, a well known iMacquarium creator from Nebraska, has shipped his creations all over the world. Harms runs a wedding photography and videography business and claims that he's always been creative. "I've always built things," he says. But just how did Harms get into the hobby of making iMacquariums? Harms saw potential for a non-working iMac he knew someone was throwing away. He did a Google search for information on making unique aquariums and came across aquariums made from televisions and other types of computers, but no specifications on how to create one. After he made his first one last February, a lot of people became interested in it so he made 50. Harms fulfilled his goal to sell out within the year, having sold his last iMacquarium just three weeks ago. He was commissioned to make a one-of-a-kind aquarium for a hospital raffle aimed at child cancer patients. Harms created a tank complete with a light that shone up through translucent gem gravel, an iPod charger on the side of the Mac and a matching keyboard and mouse set. Harms created two prototypes; the second became the model for all of his tanks. He did all the work himself, aside from using a local plastic manufacturer to create the specially shaped tanks to fit the not-quite-square shape of the iMac. He got his iMacs from local recyclers for under $10, removed or sealed the electrical parts, added a mount for the water tank and carved out extra plastic to fit the tank. After putting extra silicone around the tank joints, he set up the tanks in his bathtub for a few days to test leakage. Another key step was to polish each iMac to make it look like new. Harms has received numerous e-mails asking for advice, and he's willing to give away just about any of his tips and tricks—except for the dimensions of his water tanks. "It was a lot of extra work to get that thing set up!" he says. There's currently a waiting list for his iMacquariums. More alternative uses for Mac computers are popular these days. Apple fans have turned pieces of their old Macs into items like furniture and accessories, such as jewelry and backpacks. Mac usage has never been so outragous.
<urn:uuid:52892589-2bd3-4ab3-a6ec-bdec58427a50>
CC-MAIN-2017-09
http://www.cio.com/article/2431158/apple/don-t-want-to-throw-out-your-old-mac-computer--you-aren-t-alone-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00424-ip-10-171-10-108.ec2.internal.warc.gz
en
0.973271
1,447
3.125
3
FPipe v2.1 - Port redirector.
FPipe is a source port forwarder/redirector. It can create a TCP or UDP stream with a source port of your choice. This is useful for getting past firewalls that allow traffic with source ports of, say, 23 to connect with internal servers.
Usually a client has a random, high numbered source port, which the firewall picks off in its filter. However, the firewall might let Telnet traffic through. FPipe can force the stream to always use a specific source port, in this case the Telnet source port. By doing this, the firewall 'sees' the stream as an allowed service and lets the stream through.
FPipe basically works by indirection. Start FPipe with a listening server port, a remote destination port (the port you are trying to reach inside the firewall) and the (optional) local source port number you want. When FPipe starts it will wait for a client to connect on its listening port. When a listening connection is made, a new connection to the destination machine and port with the specified local source port will be made - creating the needed stream. When the full connection has been established, FPipe forwards all the data received on its inbound connection to the remote destination port beyond the firewall.
FPipe can run on the local host of the application that you are trying to use to get inside the firewall, or it can listen on a 3rd server somewhere else. Say you want to telnet to an internal HTTP server that you just compromised with MDAC. A netcat shell is waiting on that HTTP server, but you can't telnet because the firewall blocks it off. Start FPipe with the destination of the netcat listener, a listening port and a source port that the firewall will let through. Telnet to FPipe and you will be forwarded to the netcat shell. Telnet and FPipe can exist on the same server, or on different servers.
*** IMPORTANT ***
Users should be aware of the fact that if they use the -s option to specify an outbound connection source port number and the outbound connection becomes closed, they MAY not be able to re-establish a connection to the remote machine (FPipe will claim that the address is already in use) until the TCP TIME_WAIT and CLOSE_WAIT periods have elapsed. This time period can range anywhere from 30 seconds to 4 minutes or more depending on which OS and version you are using. This timeout is a feature of the TCP protocol and is not a limitation of FPipe itself. The reason this occurs is because FPipe tries to establish a new connection to the remote machine using the same local IP/port and remote IP/port combination as in the previous session and the new connection cannot be made until the TCP stack has decided that the previous connection has completely finished up.
The connection terminology used in the program and in the following documentation can be shown in the form of the following diagram.
Local Machine <----------> FPipe server <---------> Remote machine
This is the usage line as reported by typing "FPipe", "FPipe -h" or "FPipe -?".
FPipe v2.1 - TCP/UDP port redirector.
FPipe [-hvu?] [-lrs <port>] [-i IP] IP
-?/-h - shows this help text
-c - maximum allowed simultaneous TCP connections. Default is 32
-i - listening interface IP address
-l - listening port number
-r - remote port number
-s - outbound source port number
-u - UDP mode
-v - verbose mode
Detailed option descriptions
-h or -?
Shows the usage of the program as in the above text.
-c
Specifies the maximum number of simultaneous TCP connections that the program can handle. The default number is 32. If you are planning on using FPipe for forwarding HTTP requests it might be advisable to raise this number.
-i
Specifies the IP interface that the program will listen on. If this option is not used FPipe will listen on whatever interface the operating system determines is most suitable.
-l
Specifies the FPipe listening server port number. This is the port number that listens for connections on the FPipe machine.
-r
Specifies the remote port number. This is the port number on the remote machine that will be connected to.
-s
Specifies the outbound connection local source port number. This is the port number that data sent from the FPipe server machine will come from when sent to the remote machine.
-u
Sets the program to run in UDP mode. FPipe will forward all UDP data sent to and received from either side of the FPipe server (the machine on which FPipe is running). Since UDP is a connectionless protocol the -c option is meaningless with this option.
-v
Verbose mode. Additional information will be shown if you set the program to verbose mode.
IP
Specifies the remote host IP address.
To best illustrate the use of FPipe here is an example.
fpipe -l 53 -s 53 -r 80 192.168.1.101
This would set the program to listen for connections on port 53 and when a local connection is detected a further connection will be made to port 80 of the remote machine at 192.168.1.101 with the source port for that outbound connection being set to 53 also. Data sent to and from the connected machines will be passed through.
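To make the mechanism concrete, here is a minimal sketch of the same idea in Python. It is not FPipe's source code, and the function names are invented for the example: it listens on a local port, opens the outbound connection from a pinned source port (the equivalent of -s), and shuttles bytes in both directions. Pinning the source port runs into the same TCP TIME_WAIT limitation described above, and binding low ports such as 53 requires administrative rights.

# Minimal port redirector sketch (Python); illustrates the FPipe idea only.
import socket
import threading

def pump(src, dst):
    """Copy bytes from src to dst until the connection closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def redirect(listen_port, source_port, remote_host, remote_port):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", listen_port))
    server.listen(1)
    client, _ = server.accept()

    outbound = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    outbound.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    outbound.bind(("0.0.0.0", source_port))   # pin the source port, like -s
    outbound.connect((remote_host, remote_port))

    # Shuttle data in both directions until either side closes.
    threading.Thread(target=pump, args=(client, outbound), daemon=True).start()
    pump(outbound, client)

if __name__ == "__main__":
    # Equivalent in spirit to: fpipe -l 53 -s 53 -r 80 192.168.1.101
    redirect(listen_port=53, source_port=53,
             remote_host="192.168.1.101", remote_port=80)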
<urn:uuid:f3d76303-56ee-4db7-8e79-f55d6a717326>
CC-MAIN-2017-09
https://www.mcafee.com/sg/downloads/free-tools/fpipe.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00000-ip-10-171-10-108.ec2.internal.warc.gz
en
0.887201
1,131
3.0625
3
Google and IBM announced today that the two companies have partnered to offer millions of dollars in resources to universities in order to promote cloud computing projects. The companies say that the goal is to improve students' knowledge of parallel computing practices and better prepare them for increasingly popular large-scale computing that takes place in the "real world," such as search engines, social networking sites, and scientific computational needs. Both Google and IBM plan to provide several hundred computers as part of the initiative, which will be a combination of IBM BladeCenter and System x servers and Google machines. The servers will run a variety of open-source software, which students will be able to access through the Internet to test parallel programming projects. Additionally, the companies—in conjunction with the University of Washington—have made available a Creative Commons-licensed university curriculum to focus on parallel computing, and IBM has developed Eclipse-compatible open -source software that will aid students in developing programs for clusters running with Hadoop. Currently, only a select group of universities are piloting the program. That list includes the University of Washington, Carnegie-Mellon University, Massachusetts Institute of Technology, Stanford University, University of California at Berkeley, and University of Maryland. The companies hope to expand the program in the future and grow the cluster to over 1,600 processors. As an example of one of the projects that has already been performed on the cluster, Google says that University of Washington students were able to use the cluster to scan the millions of edits made to Wikipedia in order to identify spam and organize news by geographic location. The idea for the program came from Google senior software engineer Christophe Bisciglia, who said that while interviewing close to a hundred college students during his time at Google, he had noticed a consistent pattern. The pattern was that, despite the extreme talent of these potential job candidates, they "sort of stalled" when asked to think about algorithms the way that Google does. "They just didn't have the background to think about computation in a fundamentally distributed manner," Bisciglia told Ars. Biscliglia then began working with his alma mater, the University of Washington, to develop the curriculum in his 20-percent time (the paid time that Google allows employees to work on their own projects) to better prepare students for a changing industry. Bisciglia said that the ultimate purpose of the program is to "start closing the gap between industry and academia," and that there's a need to "break the single-server mindset." With such an explosion of content and users on the Internet, he said, no one machine is going to be powerful to meet any company's needs, but students haven't had much of an opportunity until now to explore parallel computing before being dumped into the real world. "Our goal is that, once we shake out the bugs and understand the needs of the schools and communities, we can bring on more schools as we learn more and make more resources available," he told Ars. Of course, the cloud computing initiative isn't designed just to offer resources to students—Google and IBM have a vested interest in making sure that students at these top universities keep coming to their companies after graduating. 
Google doesn't make much of an effort to hide this fact, either: "In order to most effectively serve the long-term interests of our users, it is imperative that students are adequately equipped to harness the potential of modern computing systems and for researchers to be able to innovate ways to address emerging problems," said Google CEO Eric Schmidt in a statement. The pairing may seem odd upon first blush, but both Google and IBM recognize that they bring two sets of expertise to the table that can make the project succeed. IBM's experience in running data centers, combined with Google's obvious experience in running web apps on giant clusters, complement each other.
<urn:uuid:00bb9077-8c1e-4f2e-98b6-3f31d1164800>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2007/10/google-and-ibm-team-on-cloud-computing-initiative-for-universitiesgoogle-and-ibm-team-on-cloud-computing-initiative-for-universities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00176-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965062
777
2.6875
3
Microsoft has released a quick fix for a vulnerability in older versions of its Internet Explorer browser that is actively being used by attackers to take over computers. The vulnerability affects IE versions 6, 7 and 8. The latest versions of the browser, 9 and 10, are not affected. The company occasionally issues quick fixes as a temporary protective measure while a permanent security update is developed if a vulnerability is considered particularly dangerous. Microsoft issued an advisory on Saturday warning of the problem, which involves how IE accesses "an object in memory that has been deleted or has not been properly allocated." The problem corrupts the browser's memory, allowing attackers to execute their own code. The vulnerability can be exploited by manipulating a website in order to attack vulnerable browsers, one of the most dangerous types of attacks known as a drive-by download. Victims merely need to visit the tampered site in order for their computer to become infected. To be successful, the hacker would have to lure the person to the harmful website, which is usually done by sending a malicious link via email. Security vendor Symantec described such a scenario as a "watering hole" attack, where victims are profiled and then lured to the malicious site. Last week, one of the websites discovered to have been rigged to delivered an attack was that of the Council on Foreign Relations, a reknowned foreign policy think tank. The attack delivers a piece of malware nicknamed Bifrose, a malware family first detected around 2004. Bifrose is a "backdoor" that allows an attacker to steal files from a computer. Symantec wrote that the attacks using the IE vulnerability appear to be limited and concentrated in North America, indicating a targeted attack campaign. Since the attacks already under way before the vulnerability was discovered, Symantec said it "suggests a high level of sophistication requiring access to resources and skills which would normally be outside most hackers capabilities." Send news tips and comments to [email protected]. Follow me on Twitter: @jeremy_kirk
After reading last week's blog post about submarine cables, we know that as of 2014 there are 285 communications cables at the bottom of the ocean, and 22 of them are not yet in use. These unused cables are called "dark cables," or "dark fiber" in the case of fiber cable (we will mainly talk about dark fiber in this post). In fact, beyond the unused submarine fiber cables, any optical fiber that has been laid but is not currently being used for fiber-optic communications is called dark fiber.

As bandwidth requirements continue to climb, increasing demands will be placed on optical fiber. In order to save money and meet the increasing demand for bandwidth, companies and institutions are turning their attention to secure and private "dark fiber" networks. By leasing segments of a dark fiber network, businesses can use their own IT staff and networking equipment to connect their multiple locations to create a private network, or to connect to other network hubs to gain access to various high-speed internet providers. The following infographic provides a visual explanation of dark fiber and how it can help institutions maintain the fastest transmission speeds, as well as save money by scaling the amount of bandwidth purchased to more closely match a business's specific requirements, while preserving an unlimited potential for growth.
DARPA out to break the 1 THz barrier for solid-state receivers

The Defense Advanced Research Projects Agency (DARPA) is on a mission to create the first solid-state transistor-based receiver that will achieve gain at frequencies of over 1 terahertz (THz). Earlier this year researchers came closer than ever before, making one that worked at 0.85 THz. Since the previous milestone was 0.67 THz, they can definitely say that they are making progress.

This is all part of DARPA's Terahertz Electronics program. The goal of this well-titled program is to make high-performance integrated circuits that operate at frequencies exceeding 1.0 THz. Just so you know, this is up toward the far-infrared region of the electromagnetic spectrum, where frequencies are well above the radio waves that mobile devices use. That region still gets use in the military though, in programs like DARPA's Video Synthetic Aperture Radar (ViSAR), which will be used by aircraft to perform accurate reconnaissance in overcast conditions. This part of the spectrum is called the sub-millimeter wave (sub-MMW) frequency band, operating above 300 GHz, where wavelengths are shorter than 1 mm, DARPA says.

Making use of those frequencies to date has required "frequency conversion," that is, multiplying frequencies to make them manageable. But conversion carried complications of its own, such as requiring large-footprint devices to operate in those frequencies, DARPA said. The Defense Department agency is aiming to use the frequencies for imaging, radar, spectroscopy, and some communications systems. Even though civilians' smart phones and tablets won't directly benefit from advances in the THz Electronics program, there's nothing to say that breakthroughs in mobile technology won't come from DARPA's advances in this area.

Posted by Greg Crowe on Oct 18, 2012 at 9:39 AM
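The "sub-millimeter" label follows directly from the relationship between wavelength and frequency, λ = c/f. A quick back-of-the-envelope check of the figures quoted above (plain Python, assuming nothing beyond the numbers in the article):

```python
c = 299_792_458  # speed of light in m/s

for f_hz in (300e9, 850e9, 1e12):        # 300 GHz, 0.85 THz, 1 THz
    wavelength_mm = c / f_hz * 1000      # metres -> millimetres
    print(f"{f_hz / 1e9:5.0f} GHz -> {wavelength_mm:.2f} mm")

# 300 GHz -> 1.00 mm, 850 GHz -> 0.35 mm, 1000 GHz -> 0.30 mm:
# everything above 300 GHz really is shorter than a millimetre.
```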
For the second day in a row, an asteroid is buzzing past Earth. NASA announced that an asteroid measuring about 25 feet across will pass safely past Earth today at 4:21 p.m. ET. The asteroid, dubbed 2014 EC, is expected to approach the Earth at a distance six times closer than the moon. The news comes just a day after another asteroid -- this one named 2014 DX110 -- whizzed by Earth closer than the distance between the Earth and the moon. "This is not an unusual event," said Paul Chodas, a senior scientist at NASA's Jet Propulsion Laboratory, in a statement. "Objects of this size pass this close to the Earth several times every year." The approaching asteroid will not be visible to the unaided eye. The Virtual Telescope Project has been trying to track the asteroid but has been unable to capture images of the asteroid because of poor weather and cloud cover. If conditions improve, the group will post images or video to its web site. The Catalina Sky Survey, based near Tucson, Ariz., first spotted the asteroid on Wednesday, according to NASA. Its closest distance is expected to be about 38,300 miles above the Earth's surface. Scientists are increasingly interested in studying asteroids to help protect the planet in the event of a possible devastating collision. They also want to learn whether the makeup of asteroids might offer clues to the birth of the universe. Last year, NASA unveiled a plan to study near-Earth asteroids by 2025. The plan includes finding a nearby asteroid that weighs about 500 tons but would be 25 or 30 feet long. NASA would pull the asteroid into orbit around Earth and then land astronauts on its surface to study it. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about government/industries in Computerworld's Government/Industries Topic Center. This story, "Another Asteroid Buzzes By Earth Today" was originally published by Computerworld.
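The "six times closer than the moon" figure checks out against the numbers NASA quoted, assuming the usual average Earth-moon distance of roughly 238,900 miles (an assumption; the article does not state it):

```python
moon_distance_miles = 238_900   # approximate average Earth-moon distance (assumed)
asteroid_miles = 38_300         # 2014 EC's closest approach, as quoted by NASA

print(round(moon_distance_miles / asteroid_miles, 1))  # ~6.2 times closer than the moon
```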
Explore new firmware that’s helping organizations see attacks as they happen. Security attacks are constantly evolving, making it harder for your intrusion detection and prevention systems (IDS and IPS, respectively) to keep pace. Network security professionals have been contending with a variety of vulnerabilities for years, and while most network devices will drop certain types of malicious packets, it’s still important to keep an eye on the source of the threat to ensure you are detecting and preventing future malware variants. Gaining visibility to IP fragmentation attacks has been a difficult security challenge. Until now. IP fragmentation occurs when the data being transmitted over a network connection exceeds the receiving network’s maximum transmission unit (MTU) and must be broken down into smaller fragments. Once the fragments reach their destination, they are reassembled. This process can be very beneficial to networks by: • Enforcing traffic ordering by allowing priority packet fragments to get processed first • Providing compatibility to network devices with a lower MTU than the size of the packet going over the network • Optimizing the network’s overall performance However, IP fragmentation also introduces opportunities for hackers to inject malicious data into your network, bypassing certain security devices like firewalls or your IDS or IPS. Most of these devices have been designed with measures to mitigate this type of attack, but two common strategies used by hackers have been found to successfully bypass these security features. How Hackers Use IP Fragmentation to Get Into Your Network 1. Tiny Fragments: Some security devices filter out unwanted traffic by looking at specific parts of the packet header, however, with the use of Tiny Fragments, hackers can break the header, which can be as large as 60 bytes long, into fragments as small as 8 bytes. Since the entire packet isn’t available for the device to analyze, the fragments could pass through, including malicious data. 2. Overlapping Fragments: This method takes advantage of the reassembly process once fragments are filtered through the IDS. Pieces of malicious data are initially transmitted with “safe” criteria in the header along with random data, allowing them through the filters. Later on, the remainder of the attack is sent under the same criteria, allowing it to once again pass through undetected, then override the random data to complete the attack. A strong security architecture needs to be able to identify and stop attacks, but also identify their source and prevent future iterations or variations of the original attack. Datacom Systems has recently released Version 1.4 firmware for several of our VERSAstream™ network packet brokers, allowing access to fragmented packets that will provide connected monitoring tools with enhanced visibility. This will help monitoring tools like firewalls, IDS, and IPS to better detect instances of attacks that are using tiny or overlapping fragmented packets. These typically unseen packets can now be visualized and their sources identified to prevent future security issues.
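To make the mechanics concrete, here is a minimal sketch of how an IP payload gets carved into fragments (plain Python for illustration only; it is not Datacom's firmware or any particular IDS). Fragment offsets are stored in 8-byte units, which is exactly what the Tiny Fragments technique abuses: a filter that inspects only the first fragment may see as little as 8 bytes of a header that can run to 60 bytes.

```python
def fragment(payload: bytes, fragment_size: int):
    """Split a payload into (offset_in_8_byte_units, more_fragments_flag, chunk) tuples.

    Every fragment except the last must carry a multiple of 8 bytes, because the
    IP header expresses fragment offsets in units of 8 bytes.
    """
    step = (fragment_size // 8) * 8                 # round down to an 8-byte boundary
    fragments = []
    for start in range(0, len(payload), step):
        chunk = payload[start:start + step]
        more = start + step < len(payload)          # MF flag: more fragments follow
        fragments.append((start // 8, more, chunk))
    return fragments

# A 60-byte header-plus-data payload squeezed through tiny 8-byte fragments:
for offset, more, chunk in fragment(b"X" * 60, 8):
    print(f"offset={offset:2d}  MF={int(more)}  bytes={len(chunk)}")

# An Overlapping Fragments attack goes one step further: a later fragment reuses
# an offset already seen, so the data the end host reassembles differs from the
# data the inspection device approved.
```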
In this PGP encrypted hard drive recovery case study, the client had used full-drive encryption to secure the data on their laptop. With Symantec PGP whole disk encryption, the entirety of their hard drive was password-protected. PGP encryption, also known as "Pretty Good Privacy" encryption, was invented by Phil Zimmerman in 1991. Technology companies such as Symantec offer software that uses this strong encryption method to protect users' data. PGP encryption helps protect the data on your hard drive from unwanted access. But it doesn't protect your hard drive from any physical or logical damage.

When this client's laptop failed to boot up one day, the client removed the hard drive. They found that the drive grew very hot when they tried to power it on, and could not get it to detect on another machine. The client quickly contacted our recovery client advisers here at Gillware Data Recovery and sent the hard drive to our data recovery lab.

PGP Encrypted Hard Drive Recovery Case Study: Laptop Not Booting
Drive Model: Hitachi HTS725050A7E630
Drive Capacity: 500 GB
Operating System: Windows
Situation: Laptop became very hot and wouldn't boot
Type of Data Recovered: User Word and Excel documents
Binary Read: 67.2%
Gillware Data Recovery Case Rating: 9

Firmware and Parts Compatibility Issues
When our data recovery engineers inspected the client's hard drive in our cleanroom, they found that the drive's read/write heads had crashed. There was some moderate damage to the drive's platters as well. The drive needed its read/write heads replaced. Even when two hard drives share the same model number, they are both still special snowflakes. Each hard drive has to be calibrated in the factory for its unique tolerances and minor defects separately. The calibration makes sure the drive's internal components work properly, according to its unique differences. The calibration data is stored in a ROM chip on the drive's control board. A hard drive will never truly behave optimally if it has another drive's read/write heads inside it. This is simply because the drive's calibrations just do not line up with the unimaginably tiny variations between the two sets of read/write heads. This can make finding suitable donor parts frustrating.

This hard drive was particularly uncooperative with our engineers. Normally, when a hard drive powers on, its read/write heads find the firmware, read it, and store the data in the drive's RAM before continuing its normal operations. The drive's new read/write heads wouldn't do this properly. They could read the firmware, but our engineers had to manually load it into the drive's RAM. Due to adaptive drift, it took multiple sets of donor heads to read this hard drive. As a repaired hard drive continues to operate, its operating conditions change. When the conditions shift too far, the hard drive's replacement parts become incompatible, and must be themselves replaced. Eventually, after multiple donors had been used on this drive and the drive's condition had continued to degrade, we had gotten all we could get: 67% of the drive's binary.

Symantec PGP Decryption
Symantec PGP whole drive encryption encrypts the entire hard drive (hence the name). Well, almost all of it. The only part of the drive that remains unencrypted is a small portion at the beginning of Sector 0 that tells anything talking to the drive how it's encrypted. There's no way to decrypt the drive on the fly, unfortunately, which puts our engineers in a bind when the drive is damaged.
There isn’t any way to target used areas of the disk, because there is no way to discern encrypted data from encrypted zeroes. When a drive is damaged to the point where a full (or near-full) disk image isn’t possible, the situation is very worrying for our engineers. There’s no way of knowing how much stuff we’ve gotten. If the tiny parts of the disk that contain the encryption metadata couldn’t be recovered, then we can’t decrypt the recovered data, even with the correct password. And so our logical engineer Cody took the encrypted disk image out of our cleanroom, used the client’s password to decrypt the disk, crossed his fingers, held his breath, and waited. As a byproduct of its design, Symantec PGP whole disk encryption actually takes a very long time to undo. Our engineers are, unfortunately, at its mercy. Cody began the decryption process on a Friday morning. By the end of that day, about five percent of the disk had been decrypted. It wasn’t until the next Tuesday that the process finished. PGP Encrypted Hard Drive Recovery – Conclusion Cody reviewed the results of this data recovery as soon as the operation finished. The results were very good. Imaging the drive had been a shot in the dark due to the encryption. But our engineers had gotten 99.8% of the drive’s file definitions. Of the 99.8% of the files we knew about on the disk, the vast majority had been completely recovered. All of the client’s critical data was there. We rated this PGP encrypted hard drive recovery case a 9 on our ten-point data recovery case rating scale.
The launch of Sputnik in October 1957 had forced the United States to dramatically accelerate its space program. Its first successful satellite, Explorer I, entered orbit on 31 January 1958. The space program had advanced considerably by the 1970s. Between January 1971 and May 1975, for example, a civilian program launched a series of "Intelsat IV" communication satellites from Cape Canaveral, each over forty times heavier than Explorer I. Yet though the space race had prompted this acceleration of US satellite technology, it would be the nuclear arms race that would in part create the conditions for the expansion of ARPANET to its first international node by satellite.

In 1957 the US had conducted the "Rainier" test, its first underground nuclear detonation. The blast was detected by seismic devices across the globe and proved the value of seismology in nuclear detection. The Limited Test Ban Treaty of 1963 banned open-air testing in favor of safer underground tests and made the speedy processing of seismic data critically important. In June 1968 the US and Norway concluded an agreement to cooperate in building a large seismic detection facility at Kjeller, near Oslo. The facility, called NORSAR, was built in 1970 and began sending seismic data to the US Seismic Data Analysis Center in Virginia via the Nordic satellite station in Tanum, Sweden. ARPA decided to "piggy-back" on the original NORSAR satellite link, connecting the NORSAR facility to the ARPANET in June 1973. University College London (UCL) was connected to the ARPANET the following month via landline to NORSAR. The ARPANET connection at UCL was used by researchers at centers across the UK working on diverse subjects including computer aided design and network analysis.

International interest in packet-switched networking was growing. Robert Kahn, now at ARPA, believed that satellite networking could support the connection of US and international networks. ARPA began to investigate the possibility of creating an Atlantic satellite network. In 1974 the UK Post Office agreed to cover the UK's costs for a satellite connection to the US, and in September 1975 ARPA initiated the SATNET Atlantic networking program using civilian Intelsat IV earth stations in the US and the UK. In late 1977, the Norwegian Defense Establishment was also linked via a separate earth station in Tanum. By May 1979 ARPANET access from the UK was provided almost exclusively over SATNET and the landline connection via Tanum was removed at the end of the year. Thus by the mid 1970s ARPA had built three functioning networks: ARPANET, PRNET and SATNET, using cable, radio and satellite. Now it remained to network the different networks.

By 1973 a new protocol was required to internetwork the ARPANET, PRNET and SATNET. After organizing the 1972 International Conference on Computer Communication at which the ARPANET was demonstrated in Washington, Robert Kahn had joined ARPA (now named DARPA) where, following some unrelated projects, he resumed his work on packet networking.
In the spring of 1973 he approached Vint Cerf, one of the group of graduate students that had developed the NCP protocol for the ARPANET, and outlined the need for a new internetworking protocol that would allow computers to communicate together across cable, radio and satellite networks. Cerf had just become an Assistant Professor at Stanford and ran a series of seminars to tease out the problem. He drew together a group of researchers who would later hold key positions in the networking and computer industries. Participants included Robert Metcalfe, who was representing the Xerox PARC research center, and Gerard Lelann, who was visiting Cerf's lab from Cyclades, the French packet network project. Cerf's group continued the inclusive, open approach of drafting RFCs. They were influenced by the example of Cyclades, which had adopted a centrifugal approach in which data transmission was not regulated by the equipment of the network itself but by the computers sending and receiving the data at its edges. At the Xerox company's PARC research center, Robert Metcalfe was working on something similar. Xerox had just unveiled a revolutionary new computer called the Alto. The Alto had a mouse, graphical display, a desktop and system of windows and folders for storing files. The machine was two decades ahead of its time and represented a paradigm shift in computing that the senior management of Xerox failed spectacularly to capitalize upon. Metcalfe was working on a network to connect many Altos in an office, developing "Ethernet" and Local Area Networking (LAN). He grew impatient with the consensus approach that Cerf, Lelann and others were taking and decided to move ahead on his own. In 1973 Metcalfe's PhD thesis had refined the Hawaiian Aloha method. He now applied these mathematical improvements to develop a system informally named Alto Aloha, which became PUP (PARC Universal Packet). PUP adopted the same centrifugal datagram approach of Cyclades, and its network was dumb to the extent that it was merely a system of cables. Unlike the ARPANET, where the IMP machines controlled many of the network's functions, the PUP network had no capability to control transmission or flow of data, or to verify delivery or repeat transmission of lost or partially delivered packets. Instead the software protocols running on the connected host computers would control the network. As a later PARC memo on the specifications of the PUP noted: Pup communication is end-to-end at the packet level. The inter-network is required only to be able to transport independently addressed Pups from source to destination. Use of higher levels of protocol is entirely the responsibility of the communicating end processes. This moved control over the operation of the network from the connecting infrastructure to the actual devices participating in the network themselves. This was a centrifugal approach, and it suited the requirements of the network of networks that ARPA had in mind. The NCP (Network Control Protocol) that controlled communications on the original landline ARPANET was not appropriate for radio and satellite networking. Instead, a new internetworking protocol would give each connected "host" computer a far greater degree of responsibility for control of the network. 
The new protocol, which would run on each host computer connected to the network, would not only establish connections between hosts, it would assume the functions that the dedicated Interface Message Processor (IMP) computers had performed: verifying safe delivery of packets, retransmitting them where necessary and controlling the rate of data flow. Simply put, to allow data to flow across a network that included landline, satellite and radio connections, a new protocol would take a much more flexible approach to communication control. In May 1974, Cerf and Kahn published an outline of the new Transmission Control Protocol (TCP): a simple but very powerful and flexible protocol which provides for variation in individual network packet sizes, transmission failures, sequencing, [and] flow control.

This internetworking protocol is, in a technical sense, the essence of the Internet, and in its priorities and functions can be discerned the cardinal characteristics of the new medium. TCP is centrifugal by necessity, as one of its designers notes: We wanted as little as possible at the center. Among other reasons, we knew quite well that it's much easier to scale a system that doesn't have choke points in the middle.

Some of the enthusiasm for the centrifugal approach of relying on the host computers themselves and abandoning the IMPs may have arisen from social rather than technical reasons. The IMP machines connecting host computers at each participating facility to the ARPANET were controlled by BBN, ARPA's main contractor, which gave BBN control over the network itself. From their central offices BBN engineers could remotely update, repair and monitor the use of IMPs across the network. Increasingly, researchers preferred the idea of a dumb network controlled by a community of computers using a common protocol without the IMP standing between the host and the network. It is a small irony that the IMPs were originally introduced in 1969 not to control the network but to convince the ARPA-funded researchers that connecting to the ARPANET would not directly impose a burden on the processing power of their host computers. Support for the removal of the IMPs only five years later was a sign of how far networking had come.

Much as Paul Baran's original, decentralized network prioritized survivability over other considerations, the TCP prioritized robustness over accountability and control. Billing and accounting, which would have been foremost in the minds of commercial designers, were entirely absent from the ARPA internetworking protocol. TCP was also heterogeneous by nature. It was designed so that machines on different networks using different technologies could seamlessly communicate as though they were on the same network. Various networks were bridged by so-called "gateway" machines that maintained routing tables with the addresses of computers on their own local networks. TCP underwent several revisions, and following a meeting in January 1978 between Cerf and two researchers, Jon Postel and Danny Cohen, at the University of Southern California, it was split into two parts to streamline the functions of the gateway computers. TCP would handle communication between computers and an additional Internet Protocol (IP) handled internetwork connections between networks. The combination of TCP and IP would keep the gateway computers from duplicating functions already performed by host computers within local networks.
What remained was to make sure that internetworking actually worked in real world conditions. Cerf, who had joined ARPA in 1976, oversaw a series of practical internetworking tests. A particularly ambitious test was conducted on 22 November 1977. As the SRI packet radio van drove along the California freeway, the radio equipment onboard broadcast data packets via PRNET to a gateway machine that connected to ARPANET. Travelling across the ARPANET by cable, the packets sent from the van reached a gateway machine on the East Coast of the United States that connected to SATNET. From this point the packets were relayed by orbiting satellite to Goonhilly Downs in the UK, and thereafter back via ARPANET to California. To monitor the fidelity of the network's transmission, a screen in the van generated patterns from the data it was receiving. Errors in the data transmission would be immediately clear from flaws in the pattern. Yet the system performed so well that whenever the signal was blocked by bridges and other objects that the van's radio could not penetrate the pattern would only pause and then resume when the signal returned. There were no errors. This test had spanned three networks and two continents. Cerf recalls, "the packets were travelling 94,000 miles round trip . . . We didn't lose a bit!" Another test put network computers aboard aircraft from the Strategic Air Command to simulate wartime conditions: . . . airborne packet radio in the field communicating with each other and to the ground using airborne systems to sew together fragments of the Internet that had been fragmented by nuclear attack. Here was proof that internetworking was possible between radio, satellite and landline networks across the globe and under adverse conditions. An optimistic observer might have thought that now, after proof had been offered, the telephone companies would embrace TCP/IP and bring the Internet to the masses. Instead, however, the telephone industry rushed to develop its own standard. It was keen to maintain its central control over the network. How the TCP/IP Internet suite of protocols differed from and eventually overcame the alternative put forward by the telephone companies says much about the character of the Internet and the nature of enterprises that are best suited to prosper upon it.
Architectural manifesto, Adopting agile development, Part 4 Using user stories to define project requirements From the developerWorks archives Date archived: December 15, 2016 | First published: July 01, 2008 In Part 4 of this series, learn about how to define requirements in an agile environment. In all software development projects, everything is based on requirements. Because agile development emphasizes spoken communication over written documents and welcomes changes late in development, traditional methods of writing requirements might not be adequate. In this article, learn about agile requirements and how user stories can help describe them. This content is no longer being updated or maintained. The full article is provided "as is" in a PDF file. Given the rapid evolution of technology, some steps and illustrations may have changed.
Secure Shell (SSH) is a cryptographic network protocol for secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. It connects, via a secure channel over an insecure network, a server and a client running SSH server and SSH client programs, respectively. The protocol specification distinguishes between two major versions that are referred to as SSH-1 and SSH-2. The best-known application of the protocol is for access to shell accounts on Unix-like operating systems, but it can also be used in a similar fashion for accounts on Windows. It was designed as a replacement for Telnet and other insecure remote shell protocols such as the Berkeley rsh and rexec protocols, which send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet.

You can use your Android phone, a remote computer, an iPad or almost anything else to log in to an SSH server and execute commands as if you were sitting at that workstation. So let's see how you can install an SSH server (we will be using openSSH-server here) on Kali Linux. After this guide you will be able to do the following:
- Install Kali Linux remote SSH – openSSH server
- Enable Kali Linux remote SSH service on boot
- Change Kali default ssh keys to avoid MITM attacks
- Set MOTD – a Message of the Day with a nice ASCII banner
- Troubleshoot and fix the "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED" error during an SSH session
- Change the SSH server port for extra safety

Step 1: Install Kali Linux remote SSH – openSSH server
Issue the following command on a Kali Linux terminal to install it:
root@kali~:# apt-get install openssh-server
The next logical step is to start the SSH server:
root@kali~:# service ssh start
It works, but there's a problem. If you restart your Kali Linux machine, the SSH server will be disabled. So we will ensure that the SSH server remains up and running all the time (even after a restart). Please note that if you don't want this to happen, then skip Step 2 and move to Step 3. Why? Because if you enable the SSH server on your machine, your machine will be available via the internet, and anyone who knows your password (or can guess it, if it's just '123' or 'password') can break into your machine. So use a secure password, and if you're not sure, skip to Step 3 for now. Anyway, moving on.

Step 2: Enable Kali Linux remote SSH service
Now we are about to enable the SSH service and keep it running the whole time (the change won't get lost after a reboot). First of all, remove the existing run levels for SSH:
root@kali~:# update-rc.d -f ssh remove
Next, load the SSH defaults into the run levels:
root@kali~:# update-rc.d -f ssh defaults
Check whether the SSH service is up and running:
root@kali~:# chkconfig ssh
If you don't have chkconfig installed, install it via:
root@kali~:# apt-get install chkconfig
You can run chkconfig to see a lot more too:
root@kali~:# chkconfig -l ssh
(or)
root@kali~:# chkconfig -l
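The excerpt stops before the key-regeneration and port-change steps promised in the list above. For completeness, a minimal sketch of what those steps usually look like on a Debian-based system such as Kali is shown below, in the article's own prompt style; the backup directory name and the port number 10022 are just examples, so adjust them to taste.

```
root@kali~:# cd /etc/ssh/
root@kali~:# mkdir default_kali_keys
root@kali~:# mv ssh_host_* default_kali_keys/     # keep the stock keys as a backup
root@kali~:# dpkg-reconfigure openssh-server      # generate fresh, unique host keys
root@kali~:# nano /etc/ssh/sshd_config            # change "Port 22" to e.g. "Port 10022"
root@kali~:# service ssh restart
```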
Education Department sets up new Web site for data display
ED Data Express showcases dropout rates, achievement, demographics - By Alice Lipowicz - Aug 11, 2010
The Education Department has created a new interactive Web site to showcase its data on student achievement, dropout rates and other educational measures in a single location, officials have announced. The ED Data Express Web site allows visitors to collect data from multiple sources to create individual reports. For example, people could compare graduation rates at high schools in their state. Users can also create charts and graphics with the data. This is the first time the data has been made accessible from a single Web page. The site was created to conform with the goals of the department's open-government plan, which was developed to meet the White House's aims for transparency and accountability, Education officials said.
"Robust data gives us the road map to reform," Education Secretary Arne Duncan said. "This new Web site will give parents and educators reliable, accurate and timely data that they can use to evaluate reforms."
Users can get data from the department's program offices, the National Center for Education Statistics and the College Board. The information includes the results of state tests and the National Assessment of Educational Progress, graduation rates, budget figures, and demographics. In June, the department created a separate site, Data.ed.gov, that offers large datasets to the public and program developers. It allows advanced users to download entire datasets for their exploration. In April 2009, the White House created Data.gov as a single source for federal data and encouraged departments to make more data available to the public.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
Intel is taking the first steps to implement thin fiber optics that will use lasers and light as a faster way to move data inside computers, replacing the older and slower electrical wiring technology found in most computers today. Intel's silicon photonics technology will be implemented at the motherboard and rack levels and use light to move data between storage, networking and computing resources. Light is considered a much faster vehicle to move data than copper cables. The silicon photonics technology will be part of a new generation of servers that will need faster networking, storage and processing subsystems, said Justin Rattner, Intel's chief technology officer, during a keynote at the Open Compute Summit in Santa Clara, California, on Wednesday. At the conference, Intel and server maker Quanta Computer are showing a prototype server rack architecture that is capable of moving data using optical modules. The server uses an Intel silicon switch and supports the chip maker's Xeon and Atom server chips. The new rack architecture with silicon photonics is a result of more than a decade of research in Intel's laboratories, Rattner said. He said silicon photonics could enable communication at speeds of 100G bps (bits per second), and transfer data at high speeds while using lesser power compared to copper cables. The technology could also consolidate power supplies and fans in a data center, reducing component costs. Intel's research revolved around the production of devices needed to implement silicon photonics at the rack level, including modulators and detectors. The company is now producing silicon photonics modules that can transfer data at 100G bps, and is offering it to a few clients for testing. Silicon photonics could potentially redefine server designs, Rattner said. With the high-speed bandwidth, processing and storage units could be decoupled from servers and stored in separate boxes. Once the infrastructure with silicon photonics is in place, server designs could change even more, Rattner said. Intel is working with Facebook to define new server technologies that will lead to the decoupling of computing, networking and storage resources. The high-bandwidth connection offered by silicon photonics will be key in bringing the rack technologies to reality, and the processor, switch and other modules need to work together on power management, protocol support, load balancing and handshakes to make high-speed data transfers possible. Critical to this step is "the introduction of silicon photonics in not just the inter-rack fabric, but also the intra-rack fabric," Rattner said. Intel is already using fiber optics with its Thunderbolt connector technology, which like USB 3.0, shuffles data between host devices and peripherals. At last week's International CES show in Las Vegas, Corning announced Thunderbolt Optical Cables that can stretch up to 100 meters. Intel is being aggressive with pushing silicon photonics into the data center, said Jason Waxman, general manager of the cloud platforms group, in an interview. He said it could be in use in fewer than five years, but did not commit to a timeline. There are multiple protocols that could be supported for high-speed data transfers, including InfiniBand, Ethernet and PCI-Express, Waxman said. Intel said it will implement the InfiniBand networking technology inside its chips, which could enable faster data transfers. 
It is only a matter of time until copper wires are replaced by fiber optics, said Dean McCarron, principal analyst at Mercury Research. "Over time you will see the server communication infrastructure -- which includes switches -- to include photonics," McCarron said. High-speed communication networks already use optical technology, and so far the bandwidth in servers has been adequate, McCarron said. But with more data flowing through networks, there is a growing demand to crank up the speed of connections, which is where silicon photonics comes into play. "We're going to keep seeing continued demands for the interconnect. It is a foregone conclusion we will have to go to photonics," McCarron said. Initial implementations may be expensive, and there may be a need to introduce protocols that could enable high-speed data transfers over fiber optics. "Eventually the signalling gets far too complex, and the move to photonics makes sense," McCarron said. "The motivation is how do you economically get to higher speeds."
Port Numbers – How does the Transport layer identify conversations?
Computers today are equipped with a whole range of different applications. Almost all of these applications are able to communicate across the network in some way, using the Internet to send and receive information, fetch updates, or confirm a user's purchase. Consider that all of these applications may at times be simultaneously receiving and sending e-mail, instant messages, web pages, and VoIP phone calls. In this situation the computer is using one network connection to keep all of this communication running. But how is it possible that this computer is never confused about choosing the right application to receive a particular packet? We are talking about a computer that processes two or more conversations at the same time for two or more running applications.
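The answer the article is building toward is that the transport layer labels every segment with source and destination port numbers, so the operating system can hand each arriving segment to the socket that asked for it. A minimal sketch of two services sharing one machine (plain Python with arbitrary, hypothetical port numbers):

```python
import socket

# Two listening sockets on the same IP address, told apart only by port number.
# The OS demultiplexes new TCP connections by destination port, and established
# connections by the full (src IP, src port, dst IP, dst port) tuple.
web = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mail = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

web.bind(("127.0.0.1", 8080))   # stand-in for a web server
mail.bind(("127.0.0.1", 2525))  # stand-in for a mail server

web.listen()
mail.listen()

print("Segments addressed to port 8080 reach the first socket;")
print("segments addressed to port 2525 reach the second. Same host, no confusion.")
```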
Database: IBM Databases: Top 10 Innovations of the Past 50 Years IBM and the Birth of Database Software Before introducing DB2, IBM led the way for the database software industry by developing innovations to organize data for large and complex government projects, starting in 1966 with the IBM Information Management System or IMS/DB, which was a hierarchical database created for the Apollo space program. On April 3, IBM announced new database software to help clients make faster decisions and better understand relationships between disparate types of data for improved decision making. With DB2 10, IBM has introduced new features like time travel query and multi-temperature data management, as well as enhancements to enterprise-level performance, security, workload management, monitoring, high availability and resiliency. Over the last four decades, DB2 has become a true business analysis tool, allowing businesses to do what they need better and faster compared with their competition. Here, eWEEK showcases how DB2 has evolved to support key technology needs and milestones. In the era of big data, organizations are struggling to gain insights from information assets to transform business operations and be competitive in their industries. The challenge is compounded by new high-performance applications that require instant access to massive amounts of data along with new data types from social networks, sensors and mobile devices, along with the overall exponential increase in data generated by business applications. To help clients meet these challenges, IBM is unveiling DB2 10 and InfoSphere Warehouse 10 software that easily integrates with big data systems, automatically compresses data into tighter spaces to prevent storage sprawl, and slices information from the past, present and future to eliminate expensive application code.
January 24, 2017
Algebra was invented in Persia nearly one thousand years ago. It is one of the fundamental branches of mathematics and its theories are applied in many industries. Algebra ranges from solving for x to complex formulas that leave one scratching one's head. If you are interested in learning linear algebra, then you should visit Sheldon Axler's Web site. Along with an apparent love for his pet cat, Axler is a professor of mathematics at San Francisco State University. On his Web site, Axler lists the various mathematics books he has written and contributed to. It is an impressive bibliography, and his newest book is titled Linear Algebra Abridged. He describes the book as:
Linear Algebra Abridged is generated from Linear Algebra Done Right (third edition) by excluding all proofs, examples, and exercises, along with most comments. Learning linear algebra without proofs, examples, and exercises is probably impossible. Thus this abridged version should not substitute for the full book. However, this abridged version may be useful to students seeking to review the statements of the main results of linear algebra.
Algebra can be difficult, but as Axler wrote above, learning linear algebra without proofs is near impossible. However, if you have a grounded understanding of algebra and are simply looking to brush up or study linear principles without spending a sizable chunk on the textbook, then this is a great asset. The book is free to download from Axler's Web site, along with information on how to access the regular textbook.
Whitney Grace, January 24, 2017

November 23, 2016
It is inevitable that in college you were forced to write an essay. Writing an essay usually requires the citation of various sources from scholarly journals. As you perused the academic articles, the thought probably crossed your mind: who ever reads this stuff? Smithsonian Magazine tells us who in the article "Academics Write Papers Arguing Over How Many People Read (And Cite) Their Papers." In other words, themselves. Academic articles are read mostly by their authors, journal editors, and students forced to cite them for assignments, the study's authors write. In perfect scholarly fashion, many academics do not believe that their work has a limited scope. So what do they do? They decided to write about it and have done so for twenty years. Most academics are not surprised that most written works go unread. The common belief is that it is better to publish something rather than nothing, and it could also be a requirement to keep their position. As they are prone to do, academics complain about the numbers and their accuracy:
It seems like this should be an easy question to answer: all you have to do is count the number of citations each paper has. But it's harder than you might think. There are entire papers themselves dedicated to figuring out how to do this efficiently and accurately. The point of the 2007 paper wasn't to assert that 50 percent of studies are unread. It was actually about citation analysis and the ways that the internet is letting academics see more accurately who is reading and citing their papers. "Since the turn of the century, dozens of databases such as Scopus and Google Scholar have appeared, which allow the citation patterns of academic papers to be studied with unprecedented speed and ease," the paper's authors wrote.
Academics always need something to argue about, no matter how minuscule the topic.
This particular article concludes on the note that someone should get the numbers straight so academics can move on to another item to argue about. Going back to the original thought, a student forced to write an essay with citations probably also thought: the reason this stuff does not get read is that it is so boring.

October 11, 2016
The lawless domain just got murkier. Apart from illegal firearms, passports, drugs and hitmen, you can now procure a verifiable college degree or diploma on the Dark Web. Cyber criminals have created a digital marketplace where unscrupulous students can purchase, or gain the information necessary to obtain, unfair and illegal academic credentials and advantages. The certificates for these academic credentials are near perfect. But what makes this cybercrime more dangerous is the fact that hackers also manipulate the institution's records to make the fake credential appear genuine. The article adds:
A flourishing market for hackers who would target universities in order to change grades and remove academic admonishments
This means that under-performing and completely non-performing students undertaking an educational course need not worry about low grades or absenteeism. Just pay the hackers and you have a perfectly legitimate-looking degree that you can show the world. And the cost of all this? Just $500-$1000. What makes this particular aspect of the Dark Web horrifyingly interesting is the fact that anyone who procures such an illegitimate degree can enter the mainstream job market with perfect ease and no student debt.

August 29, 2016
Think Amazon is the only outfit which understands the concept of strategic pricing, bundling, and free services? Google has decided to emulate such notable marketing outfits as ReedElsevier's LexisNexis and is offering colleges a real deal on the use of for-fee online services. Who would have thought that Google would emulate LexisNexis' law school strategy? I read "Google Offers Free Cloud Access to Colleges, Plays Catch Up to Amazon, Microsoft." I reported that a mid-tier consulting firm anointed Microsoft as the Big Dog in cloud computing. Even in Harrod's Creek, folks know that Amazon is at least in the cloud computing kennel with the Softies. According to the write up:
Google in June announced an education grant offering free credits for its cloud platform, with no credit card required, unlimited access to its suite of tools and training resources. Amazon and Microsoft's cloud services both offer education programs, and now Google Cloud wants a part in shaping future computer scientists — and probably whatever they come up with using the tool.
The write up points out:
Amazon and Microsoft's cloud services offer an education partnership in free trials or discounted pricing. For the time being, Microsoft Azure's education program is not taking new applications and "oversubscribed," the website reads. Amazon Web Services has an online application for its education program for teachers and students to get accounts, and Google is accepting applications from faculty members.
How does one avail oneself of these free services? Sign up for a class and hope that your course "Big Band Music from the 1940's" qualifies you for free cloud stuff.
Stephen E Arnold, August 29, 2016

August 22, 2016
The article on Inside HPC titled IBM Partners with University of Aberdeen to Drive Cognitive Computing illustrates the circumstances of the first Scottish university partnership with IBM.
IBM has been collecting goodwill and potential data analysts from US colleges lately, so it is no surprise that this endeavor has been sent abroad. The article details:
In June 2015, the UK government unveiled plans for a £313 million partnership with IBM to boost big data research in the UK. Following an initial investment of £113 million to expand the Hartree Centre at Daresbury over the next five years, IBM also agreed to provide further support to the project with a package of technology and onsite expertise worth up to £200 million. This included 24 IBM researchers, stationed at the Hartree Centre, to work side-by-side with existing researchers.
The University of Aberdeen will begin by administering the IBM cognitive computing technology in computer science courses in addition to ongoing academic research with Watson. In a sense, the students exposed to Watson in college are being trained to seek jobs in the industry, for IBM. They will have insider experience and goodwill toward the company. It really is one of the largest nets cast for prospective job applicants in industry history.
Chelsea Kerwin, June 22, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden/Dark Web meet up on August 23, 2016. Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

August 3, 2016
Does your business need a mentor? How about any students or budding entrepreneurs you know? Such a guide can be invaluable, especially to a small business, but Google and Bing may not be the best places to pose that query. Business magazine Inc. has rounded up "Ten Top Platforms for Finding a Mentor in 2016." Writer John Boitnott introduces the list:
"Many startup founders have learned that by working with a mentor, they enjoy a collaboration through which they can learn and grow. They usually also gain access to a much more experienced entrepreneur's extensive network, which can help as they seek funding or gather resources. For students, mentors can provide the insight they need as they make decisions about their future. One of the biggest problems entrepreneurs and students have, however, is finding a good mentor when their professional networks are limited. Fortunately, technology has come up with an answer. Here are nine great platforms helping to connect mentors and mentees in 2016."
Boitnott lists the following mentor-discovery resources: Music platform Envelop offers workshops for performers and listeners. Mogul focuses on helping female entrepreneurs via a 24/7 advice hotline. From within classrooms, iCouldBe connects high-school students to potential mentors. Also for high-school students, iMentor is specifically active in low-income communities. MentorNet works to support STEM students through a community of dedicated mentors, while the free, U.K.-based Horse's Mouth supports a loosely-organized platform where participants share ideas. Also free, Find a Mentor matches potential protégés with adult mentors. SCORE supplies tools like workshops and document templates for small businesses. Cloud-based MentorCity serves entrepreneurs, students, and nonprofits, and it maintains a free online registry where mentors can match their skill sets to the needs of inquiring minds. Who knew so much professional guidance was out there, made possible by today's technology, and much of it for free? For more information on each entry, see the full article.
Cynthia Murrell, August 3, 2016

August 2, 2016
The article on TheNextWeb titled Teenagers Have Built a Summary App that Could Help Students Ace Exams might be difficult to read over the sound of a million teachers weeping into their syllabi. It's no shock that students hate to read, and there is even some cause for alarm over the sheer amount of reading that some graduate students are expected to complete. But for middle schoolers, high schoolers, and even undergrads in college, there is a growing concern about the average reading comprehension level. This new app can only make matters worse by removing a student's incentive to absorb the material and decide for themselves what is important. The article describes the app:
"Available for iOS, Summize is an intelligent summary generator that will automatically recap the contents of any textbook page (or news article) you take a photo of with your smartphone. The app also supports concept, keyword and bias analysis, which breaks down the summaries to make them more accessible. With this feature, users can easily isolate concepts and keywords from the rest of the text to focus precisely on the material that matters the most to them."
There is nothing wrong with any of this if it is really about time management instead of supporting illiteracy and lazy study habits. This app is the result of the efforts of 18-year-old Rami Ghanem, built using optical character recognition software. Ghanem is, not coincidentally, a product of the era of No Child Left Behind: years of teaching to the test and forgetting the lesson, of rote memorization in favor of analysis and understanding. Yes, with Summize, little Jimmy might ace the test. But shouldn't an education be more than talking point mcnuggets?
Chelsea Kerwin, August 2, 2016

July 21, 2016
Is big data good only for the hard sciences, or does it have something to offer the humanities? Writer Marcus A Banks thinks it does, as he states in "Challenging the Print Paradigm: Web-Powered Scholarship is Set to Advance the Creation and Distribution of Research" at the Impact Blog (a project of the London School of Economics and Political Science). Banks suggests that data analysis can lead to a better understanding of, for example, how the perception of certain historical events has evolved over time. He goes on to explain what the literary community has to gain by moving forward:
"Despite my confidence in data mining I worry that our containers for scholarly works — 'papers,' 'monographs' — are anachronistic. When scholarship could only be expressed in print, on paper, these vessels made perfect sense. Today we have PDFs, which are surely a more efficient distribution mechanism than mailing print volumes to be placed onto library shelves. Nonetheless, PDFs reinforce the idea that scholarship must be portioned into discrete units, when the truth is that the best scholarship is sprawling, unbounded and mutable. The Web is flexible enough to facilitate this, in a way that print could never do. A print piece is necessarily reductive, while Web-oriented scholarship can be as capacious as required.
"To date, though, we still think in terms of print antecedents. This is not surprising, given that the Web is the merest of infants in historical terms. So we find that most advocacy surrounding open access publishing has been about increasing access to the PDFs of research articles. I am in complete support of this cause, especially when these articles report upon publicly or philanthropically funded research.
Nonetheless, this feels narrow, quite modest. Text mining across a large swath of PDFs would yield useful insights, for sure. But this is not ‘data mining’ in the maximal sense of analyzing every aspect of a scholarly endeavor, even those that cannot easily be captured in print.”

Banks does note that a cautious approach to such fundamental change is warranted, citing the development of the data paper in 2011 as an example. He also mentions Scholarly HTML, a project that hopes to evolve into a formal W3C standard, and the Content Mine, a project aiming to glean 100 million facts from published research papers. The sky is the limit, Banks indicates, when it comes to Web-powered scholarship.

Cynthia Murrell, July 21, 2016

July 19, 2016

Deep learning is another bit of technical jargon floating around, and it is tied to artificial intelligence. We know that artificial intelligence is the process of replicating human thought patterns and actions through computer software. Deep learning is…well, what specifically? To get a primer on what deep learning is, as well as its many applications, check out “Deep Learning: An MIT Press Book” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Here is how the Deep Learning book is described:

“The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free. The print version will be available for sale soon.”

This is a fantastic resource to take advantage of. MIT is one of the leading technical schools in the nation, if not the world, and the information that is sponsored by them is all but guaranteed to round out your deep learning foundation. Also, it is free, which cannot be beaten. Here is how the book explains the goal of machine learning:

“This book is about a solution to these more intuitive problems. This solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined in terms of its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge that the computer needs.”

If you have time take a detour and read the book, or if you want to save time there is always Wikipedia.

June 15, 2016

The Dark Web and deep web can often get misidentified and confused by readers. To take a step back, Trans Union’s blog offers a brief read called The Dark Web & Your Data: Facts to Know, which helpfully addresses some basic information on these topics. First, a definition of the Dark Web: sites accessible only when a physical computer’s unique IP address is hidden on multiple levels. Specific software is needed to access the Dark Web because that software encrypts the machine’s traffic and conceals its IP address. The article continues,

“Certain software programs allow the IP address to be hidden, which provides anonymity as to where, or by whom, the site is hosted. The anonymous nature of the dark web makes it a haven for online criminals selling illegal products and services, as well as a marketplace for stolen data.
The dark web is often confused with the “deep web,” the latter of which makes up about 90 percent of the Internet. The deep web consists of sites not reachable by standard search engines, including encrypted networks or password-protected sites like email accounts. The dark web also exists within this space and accounts for approximately less than 1 percent of web content.”

For those not reading news about the Dark Web every day, this seems like a fine piece to help brush up on cybersecurity concerns relevant at the individual user level. Trans Union is on the pulse in educating its clients, as banks are an evergreen target for cybercrime and security breaches. It seems the message from this posting to clients can be interpreted as one of the “good luck” variety.

Megan Feil, June 15, 2016
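To make the "specific software" point concrete, here is a minimal Python sketch (not taken from the TransUnion piece) of routing an ordinary web request through a local Tor client, so the destination server sees a relay's address rather than the user's own. The SOCKS port, the echo-service URL, and the assumption that a Tor client is already running locally are all illustrative choices.

```python
# Sketch: routing a request through a local Tor SOCKS proxy so the destination
# site sees a relay's IP address instead of the machine's own. Assumes a Tor
# client is already running locally on its default SOCKS port (9050) and that
# requests was installed with SOCKS support (pip install "requests[socks]").
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"  # "socks5h" resolves DNS through the proxy too

def fetch_apparent_ip(proxies=None):
    """Ask a public echo service which IP address it sees for this client."""
    response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
    response.raise_for_status()
    return response.json()["origin"]

if __name__ == "__main__":
    print("Direct request, IP seen by the server:", fetch_apparent_ip())
    tor_proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    print("Via Tor, IP seen by the server:", fetch_apparent_ip(tor_proxies))
```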
Japan's Internet infrastructure has remained surprisingly unaffected by last week's devastating earthquake and tsunami, according to an analysis by Internet monitoring firm Renesys. Most Web sites are operational and the Internet remains available to support critical communication functions, Renesys CTO James Cowie wrote in a blog over the weekend.

In the immediate aftermath of the earthquake off the Japanese coast, about 100 of Japan's 6,000 network prefixes -- or segments -- were withdrawn from service. But they started reappearing on global routing tables just a few hours later. Similarly, traffic to and from Japan dropped by about 25 gigabits per second right after the Friday quake, but returned to normal levels a few hours later. And traffic at Japan's JPNAP Layer 2 Internet exchange service appears to have slowed by just 10% since Friday, according to Renesys. "Why have we not seen more impact on international Internet traffic from this incredibly devastating quake? We don't know yet," Cowie wrote.

An unknown number of people were killed and whole cities devastated by what was one of the worst earthquakes in over 100 years. The quake, which initially measured 8.9 on the Richter scale, generated a huge tsunami that inundated parts of Japan and put almost the entire Pacific coastline on a tsunami alert. The effects of the quake, in terms of human loss and economic damage, are expected to be huge. The quake also disrupted electricity supplies and knocked two nuclear power plants out of commission.

One reason Internet connectivity appeared to fare better could be that undersea cables remained relatively untouched by the quake, unlike in 2006 when an earthquake in Taiwan resulted in a large number of major cable breaks, Cowie said. This time, the only noticeable breaks were in two segments of Pacnet's EAC submarine cable system, Cowie said. The system, which is owned by a consortium of six companies, is designed to provide up to 1.92 terabits per second of capacity across the Pacific. The breaks led to outages in several networks in Japan, the Philippines and Hong Kong.

Sections of the Pacific Crossing undersea cabling system connecting the U.S. to Asia also appear to have been damaged. A note posted on Pacific Crossing's Web site this morning noted that two of the cables are currently out of service. Pacific Crossing's cable landing station in Ajigaura, Japan was evacuated as a result of the tsunami. No information is available about when restoration efforts will resume, Pacific Crossing said. Renesys noted that "lingering" problems with landing station equipment could generate new problems over the next few weeks.

Even so, "it's clear that Internet connectivity has survived this event better than anyone would have expected," Cowie said. He noted that Japan's attempts to build a "dense web" of domestic and international Internet connectivity appear to have paid off. "At this point, it looks like their work may have allowed the Internet to do what it does best: route around catastrophic damage and keep the packets flowing, despite terrible chaos and uncertainty."

Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan or subscribe to Jaikumar's RSS feed. His e-mail address is [email protected].

Read more about internet in Computerworld's Internet Topic Center. This story, "Japan's Internet Largely Intact After Earthquake, Tsunami" was originally published by Computerworld.
Biometrics is the science and supporting technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics (such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand measurements), primarily for authentication purposes.

Security and convenience. Authentication by biometrics is becoming increasingly common in corporate and public security systems, consumer electronics and point of sale (POS) applications. In addition to security, the driving force behind biometric verification is convenience. Biometrics is most commonly used as a form of identity access management and access control (both physical and logical access).

The overall proposition for the adoption of biometrics in people's lives is very compelling, as the technology allows for a far greater level of convenience and security in most places where it can be applied. Given that implementation of this technology can begin to remove the need for carrying third-party identification documents (passports, driver's licenses, credit/debit cards, etc.), currency and keys, the notion that we can live more comfortably and more safely is real.
A new type of internet cookie threatens users' privacy and security by tracking their online behaviour for advertising management, profiling, and other reasons, the EU's cyber security agency Enisa warns.

Describing the latest breed of cookies (small pieces of data, stored by the browser, that help to regulate a user's visit to a website), Enisa says the advertising industry has led the drive for new, persistent and powerful cookies, with privacy-invasive features for marketing practices and profiling. It says both the user's browser and the origin server must assist informed consent, and that users should be able to manage their cookies easily.

Enisa says the new cookies support user identification in a "persistent manner". They do not have enough "transparency" in how they are being used, so it is hard to quantify their security and privacy implications, it says. Enisa says informed consent should guide the design of systems using cookies and that their use and the data stored in cookies should be transparent to users. "All cookies should have user-friendly removal mechanisms which are easy to understand and use by any user," Enisa said. It says storage of cookies outside browser control should be limited or banned, and that users should have an alternative service channel if they do not accept cookies.

Enisa executive director Udo Helmbrecht said these next-generation cookies need to be as transparent and user-controlled as regular HTTP cookies. "This would safeguard the privacy and security aspects of consumers and business alike," he said.
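As a small illustration of the transparency and easy-removal properties the report asks for, here is a hedged sketch, using only Python's standard library, of a server-set cookie with an explicit, limited lifetime and standard restricting attributes. The cookie name, value, and attribute choices are examples, not recommendations drawn from the Enisa report.

```python
# Sketch: building a Set-Cookie header with an explicit, limited lifetime and
# standard privacy/security attributes, using only the Python standard library.
# The cookie name, value, and attribute choices are illustrative.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600        # expires after one hour instead of persisting
cookie["session_id"]["path"] = "/"
cookie["session_id"]["secure"] = True         # only sent over HTTPS
cookie["session_id"]["httponly"] = True       # not readable from page scripts
cookie["session_id"]["samesite"] = "Lax"      # limits cross-site sending (Python 3.8+)

# The origin server would send this string in its HTTP response headers.
print(cookie["session_id"].OutputString())
```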
I ran across an article on Information Week via HPCWire about NASA's new supercomputing application. The application, which will be unveiled later today, is powered by the world's sixth most powerful computer (according to the Top 500 ranking in 2009) and will be called the NASA Earth Exchange (NEX for short). This application will allow scientists from all over the world to log into NEX and use its computing power virtually to perform computational research. According to Information Week, even scientists who are not computer wizards would be able to utilize this user-friendly scientific application.

The article also mentioned some interesting political implications resulting from a government-owned and globally shared supercomputer like NEX. Foreign scientists often lack the U.S. government security clearance to access such tools and data as on NEX. The scientific community has always prided itself on being cosmopolitan: its concerns are for the advancement of science, not for any political end. Nevertheless, that's not always the case (e.g., scientists' close collaboration with government objectives to develop the nuclear bomb in the 1940s). A new challenge for the U.S. government will be to balance global scientific collaboration with national security concerns.

The Information Week article links to an interesting website called Gov 2.0 Expo, which is a conference that examines the role of information technology in helping make government more efficient, accountable, responsive and transparent. We've covered this topic on the blog before and will continue to follow it, as it provides an interesting perspective on the age-old problem of government responsibility.
The demise of the mainframe computer has been predicted for nearly 20 years. First, the mini-computer from vendors like Computervision, Data General, DEC, Honeywell, Hewlett Packard, IBM, Prime, and Wang Computer challenged the dominance of the mainframe and expanded computing to midsized enterprises. Advances in x86-based micro-computer technologies all but eliminated the mini-computer market, and now these same devices are threatening the existence of the mainframe. The technologies where the mainframe dominated are migrating to the smaller systems:
- High availability.
- Shared storage.
- Single point of management.
While the mainframe, in particular the one remaining star of the field, the IBM System z, continues to have advantages over distributed x86-based systems, advances in technology at the "low end" have now reached the point of seriously threatening the mainframe world for all but legacy applications.
With the rate of innovation, it’s challenging to keep up with emerging technology. Although the price of 3D printers is coming down, most households don’t have one yet. So it was surprising to come across news that makes it appear almost as if 3D printing is old news. Meanwhile DARPA wants self-destructing tech a bit like messages in Mission Impossible. A potential way to go about this might be shapeshifting 4D-printed technology. DARPA -- Your device will poof in 5 seconds When you purchase electronics, you hope it lasts longer than the warranty. In fact, you probably hope it lasts until it’s practically obsolete and has been replaced with a newer, faster and all-around better product. What if electronics simply disappeared when no longer needed . . . as in poof, dissolving into the environment? Those wild DARPA scientists created a Vanishing Programmable Resources (VAPR) program earlier this year “with the aim of revolutionizing the state of the art in transient electronics or electronics capable of dissolving into the environment around them.” In a post called, “This web feature will disappear in 5 seconds,” DARPA explained that electronics used on the battlefield are often scattered around and could possibly be captured and reverse-engineered by the enemy "to compromise DoD’s strategic technological advantage." The electronics still need to be rugged and maintain functionality, but “when triggered, be able to degrade partially or completely into their surroundings. Once triggered to dissolve, these electronics would be useless to any enemy who might come across them.” DARPA program manager Alicia Jackson said, “The commercial off-the-shelf, or COTS, electronics made for everyday purchases are durable and last nearly forever. DARPA is looking for a way to make electronics that last precisely as long as they are needed. The breakdown of such devices could be triggered by a signal sent from command or any number of possible environmental conditions, such as temperature.” How could anyone pull off DARPA’s James Bond-esque self-destructing tech that would more or less vaporize when triggered? The Self-Assembly Lab at MIT may be heading in that direction with shapeshifting technology, otherwise called 4D printing. 3D printing is old news: Welcome to 4D printing & shapeshifting tech 3D printing is exciting and has become increasingly more sophisticated, but what if a 3D-printed object could morph into another object? Computer scientist, MIT Department of Architecture faculty member and TED senior fellow Skylar Tibbits is working in collaboration with Stratasys Inc. on 3D printing with a twist; the goal is to make it more adaptive and more responsive so it can change from one thing into another thing. During a TED talk called “The Emergence of 4D Printing," Tibbits stated, “The idea behind 4D printing is that you take multi-material 3D printing -- so you can deposit multiple materials -- and you add a new capability, which is transformation, that right off the bed, the parts can transform from one shape to another shape directly on their own. And this is like robotics without wires or motors. 
So you completely print this part, and it can transform into something else.” A 3D-printed object would have a “program embedded directly into the materials” that would allow it “to go from one state to another.” Describing a potential 4D-printed object in a CNN video, Tibbits said it has invented within it a “potential energy, that activation, so it can transform on its own.” He gave potential environmental activation examples of water, heat, vibration, sound or pressure.

Tibbits suggested that 4D printing could have practical applications in fashion, or perhaps in sports equipment, and in “things that need to respond as the conditions are changing.” If you can't take eight and a half minutes to watch the TED video, then the following links will jump you directly to the cool demonstrations. In the first demonstration, Tibbits showed “a single strand dipped in water that completely self-folds on its own into the letters M I T.” In the second, a single strand self-folds into a three-dimensional cube without human interaction. He said, “We think this is the first time that a program and transformation has been embedded directly into the materials themselves. And it also might just be the manufacturing technique that allows us to produce more adaptive infrastructure in the future.” He also suggested that a potential use of self-assembly 4D tech in space might include a highly functional system that can transform into another highly functional system.

In an example about more adaptive infrastructure in the future, Tibbits stated, “Let's go back to infrastructure. In infrastructure, we're working with a company out of Boston called Geosyntec. And we're developing a new paradigm for piping. Imagine if water pipes could expand or contract to change capacity or change flow rate, or maybe even undulate like peristaltics to move the water themselves. So this isn't expensive pumps or valves. This is a completely programmable and adaptive pipe on its own.” That could be great, but our infrastructure is currently a bit of a mess and very hackable . . . the same as many embedded medical devices. Let's hope that if 4D printing is used in infrastructure, it will be more secure.

Detect heart-rate via webcam and app

Lastly, while less sensational, this next one is more attainable for most geeks. If you have ever wanted to detect your heart-rate using a webcam and a Python app, then you are in luck. The project is on GitHub and interested parties might want to start with the README. The Changelog states: webcam-pulse-detector is a cross-platform Python application that can detect a person's heart-rate using their computer's webcam. I could write 1,000 words about it, or just show you this:
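For a sense of how this kind of webcam pulse detection works in principle, here is a minimal sketch of the underlying idea (remote photoplethysmography): the average green-channel brightness of a patch of skin flickers slightly with each heartbeat, and the dominant frequency of that signal gives a rough heart rate. This is not the webcam-pulse-detector project's code; it assumes OpenCV and NumPy are available and uses a fixed region of interest instead of the project's face tracking.

```python
# Sketch of the general idea behind webcam pulse detection, not the actual
# webcam-pulse-detector code: sample the mean green-channel brightness of a
# skin region over time, then take the dominant frequency in the plausible
# heart-rate band as the estimated pulse.
import time
import cv2
import numpy as np

def estimate_bpm(seconds=15, roi=(200, 100, 120, 80)):
    x, y, w, h = roi
    capture = cv2.VideoCapture(0)            # default webcam
    samples, timestamps = [], []
    start = time.time()
    while time.time() - start < seconds:
        ok, frame = capture.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        samples.append(patch[:, :, 1].mean())   # OpenCV frames are BGR; index 1 = green
        timestamps.append(time.time() - start)
    capture.release()

    if len(samples) < 2:
        raise RuntimeError("not enough frames captured")

    signal = np.asarray(samples) - np.mean(samples)
    fps = len(samples) / timestamps[-1]
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2

    # Only consider frequencies in a plausible human heart-rate band (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0

if __name__ == "__main__":
    print("Estimated heart rate: %.0f bpm" % estimate_bpm())
```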
How cloud storage could catch up with big data - By John Moore - Apr 17, 2012 Cloud computing has managed to make the world’s already colossal appetite for data storage even more voracious. Last year, IDC, an IT market research firm, cited public cloud-based service providers, from Amazon Web Services to YouTube, as the most significant drivers of storage consumption in the past three years. The government sector contributes as well: IDC noted that the private clouds of government and research sites compare in scope and complexity to their public cloud counterparts. The so-called big data problem has surfaced in the past two years to rank among the primary IT challenges. Technologies such as the Apache Hadoop distributed computing framework and NoSQL databases have emerged to take on the challenge of very large — and unwieldy — datasets. And now another technology, already at work behind the scenes, could grow in importance in the coming years. Erasure coding has been around since the 1980s, but until recently its use in storage circles has mainly been confined to single storage boxes as a way to boost reliability more efficiently. Now erasure coding is moving into distributed storage. Its application becomes trickier here, but industry executives and storage researchers believe erasure coding — particularly in conjunction with increasingly popular techniques such as object-based storage — will play a growing role in cloud storage. Potential government adopters include Energy Department labs and other agencies with vast data stores. Why it matters When it comes to storage, everything is getting bigger, whether it’s an individual disk, a storage system or a cloud-based repository. Erasure coding, an error-correcting algorithm, plays a role across this range of ever-growing storage platforms. Vendors most commonly use erasure coding to boost the resiliency and performance of their Redundant Array of Independent Disks (RAID) storage systems, said Bob Monahan, director of management information systems at DRC, a consulting and IT services firm. But it’s the use of erasure coding as an alternative to data replication that is attracting new interest in this storage mechanism. In many traditional cases, redundancy is achieved by replicating data from primary storage devices to target arrays at the data center or an off-site location. Mirroring data in that way provides protection but also consumes lots of storage, particularly when organizations make multiple copies of data for greater redundancy. The approach becomes particularly unwieldy for organizations that deal with petabytes or more of data. Erasure coding offers an alternative way to achieve redundancy while using less storage space, said Russ Kennedy, vice president of product strategy, marketing and customer solutions at storage vendor Cleversafe, which uses erasure codes in its object-based storage solutions. Organizations that rely on replication might make three or four copies of data — one copy at another location then a copy of the copy to be safe and so on. In comparison, the overhead to make a sufficiently fault-tolerant copy with erasure coding is less than double the size of the original volume, Kennedy said. Jean-Luc Chatelain, executive vice president of strategy and technology at DataDirect Networks, said financial concerns are driving interest in erasure coding among customers who don’t want to replicate data two or three times. 
DataDirect takes advantage of erasure coding in its RAID system, file storage offerings and Web Object Scaler product for cloud storage. The prospect of saving space and money hasn’t been lost on the cloud community. The major providers are on their way to adopting erasure coding, said James Plank, a professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee. His research focuses on erasure codes in storage applications. “Pretty much every cloud installation you can think of is either using erasure coding or converting to erasure coding,” he said, citing Amazon, Google and Microsoft as examples. “They are using erasure coding for fault tolerance because the disk space savings is huge.” There’s a bandwidth benefit as well. “While the big savings today would come from reduced capacity requirements, the big win, from my standpoint, is the two- or threefold reduction in bandwidth [compared to what is] used during replication,” said Galen Shipman, group leader of the Technology Integration group at Oak Ridge National Laboratory’s National Center for Computational Sciences. Erasure coding might have implications for the nascent cloud, but the technology has been around the storage block a few times. In a storage setting, the technique encodes data into fragments from which the original data can be reconstructed. For example, erasure coding is the underlying technology of Cleversafe’s dispersed storage method, which takes a data object (think of a file with self-describing metadata) and chunks it into segments. Each segment is encrypted and cut into 16 slices and dispersed across an organization’s network to reside on different hard drives and servers. If the organization has access to only 10 of the slices — because of disk failures, for instance — the original data can still be put back together, Kennedy said. Numerous experts see erasure coding paired with object-based storage as a good option for achieving more fault-tolerant repositories with petabytes and even exabytes of capacity. Government clouds and data centers have yet to jump on erasure coding, apart from agencies using RAID storage devices that embed the technique. “It is less well understood and therefore less mature in commercially available solutions,” Monahan said. “As it becomes more mature, the use cases for when it is more appropriate will drive implementation scenarios.” Performance is another limitation. Shank Shome, a storage engineer at Agilex Technologies, said the impact of erasure coding on storage performance has yet to be fully explored. He added that reading the data back from an erasure-coded system is generally fast, but the real performance cost lies in writing the data to storage. “If the data is generally static with very few rewrites, such as media files and archive logs, creating and distributing the data is a one-time cost,” Shome said. “If the data is very dynamic, the erasure codes have to be re-created and the resulting data blocks redistributed.” Erasure coding also runs into problems with high-performance computing. One complication arises when data is being written simultaneously from many sources and at a high rate, said Robert Ross, a computer scientist at DOE’s Argonne National Laboratory and a senior fellow at the University of Chicago’s Computation Institute. That activity requires a level of coordination that isn’t easy with current approaches. 
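To make the slice-and-reconstruct idea in the Cleversafe example above concrete, here is a deliberately simplified sketch. Production systems of the 16-slices, any-10-suffice variety use Reed-Solomon-style codes over finite fields; this toy version uses a single XOR parity slice (RAID-5 style), so it survives only one lost slice, but it shows the same principle of rebuilding data from the surviving pieces instead of keeping full replicas.

```python
# Simplified illustration of the erasure-coding idea described above. Real
# systems such as the 16-slice/10-needed scheme use Reed-Solomon style codes;
# this sketch uses one XOR parity block, so it can only survive the loss of a
# single slice, but the reconstruct-from-survivors principle is the same.
from functools import reduce

def split_blocks(data: bytes, k: int):
    """Split data into k equal-sized blocks, padding the tail with zeros."""
    size = -(-len(data) // k)                # ceiling division
    padded = data.ljust(size * k, b"\x00")
    return [padded[i * size:(i + 1) * size] for i in range(k)]

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4):
    """Return k data blocks plus one parity block (k + 1 slices total)."""
    blocks = split_blocks(data, k)
    parity = reduce(xor_blocks, blocks)
    return blocks + [parity]

def reconstruct(slices: list, missing_index: int) -> bytes:
    """Rebuild the one missing slice by XOR-ing all surviving slices."""
    surviving = [s for i, s in enumerate(slices) if i != missing_index and s is not None]
    return reduce(xor_blocks, surviving)

if __name__ == "__main__":
    original = b"erasure coding trades a little parity for a lot of resilience"
    slices = encode(original, k=4)
    lost = 2                                 # pretend slice 2 sits on a failed disk
    slices[lost] = None
    slices[lost] = reconstruct(slices, lost)
    recovered = b"".join(slices[:4]).rstrip(b"\x00")
    assert recovered == original
    # Overhead comparison: triple replication stores 3.0x the data,
    # a 16-slice / 10-needed erasure code stores only 16/10 = 1.6x.
    print("recovered OK; replication overhead 3.0x vs erasure ~%.1fx" % (16 / 10))
```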
In general, storage experts believe erasure coding faces the biggest obstacle in frequently accessed “hot data.” Accordingly, they believe a key initial use case lies in protecting data that has cooled enough to move to long-term storage. Monahan said the benefits of erasure coding are “higher local availability at a lower cost and highly available dispersed archival systems that are an order of magnitude less expensive than traditional systems.” The trick is knowing when to use replication to get data out of a system quickly and when to use erasure coding to create more economical, resilient long-term storage, Ross said. “Both have important roles moving forward in high-performance computing,” he added.

The Oak Ridge lab is now exploring the use of erasure coding for the Oak Ridge Leadership Computing Facility. That facility already uses RAID 6 systems from DataDirect Networks. Shipman said erasure coding could play a significant role in two distributed storage systems: a Lustre parallel distributed file system and the large-scale archival High Performance Storage System, which uses replication for data integrity and resiliency. “Erasure coding will likely emerge as a viable alternative to replication due to savings in the media and bandwidth consumed for replication,” Shipman said. He acknowledged the computational demands of the more advanced erasure-coding techniques but said ongoing research on algorithms aims to minimize that cost.

Next steps: Updating the storage toolbox

As data storage needs continue to grow and cloud-based models introduce new options for distributed systems, agencies should constantly re-evaluate their storage strategies. Specifically, they should:
- Monitor current storage options. Erasure coding might not be at the top of your agenda today, but if your storage growth is outpacing your budget, it probably makes sense to add the technology into the mix of current or near-term future options.
- Assess likely use cases. Beyond data archiving, erasure coding could prove useful for maintaining and protecting large quantities of sensor-derived data. For example, Cleversafe recently signed GeoEye, a provider of high-resolution satellite imagery, as a customer.
TechCrunch has posted an interesting Q&A with veteran IT innovator Marc Andreessen on "the future of enterprise." It includes an insightful (if not wholly original) summary of the evolution of the computer industry, in which Andreessen makes the case that government used to fund and develop the big advances in computing, which would then make their way to private companies and, ultimately, consumers. Now, Andreessen argues, the situation is reversed.

Of course, many federal officials have been saying things like this for years -- that government used to lead the way in technology innovation, and now must struggle with antiquated procurement processes just to be able to take advantage of the latest advances. But in a bring-your-own-device world, the playing field might actually be leveled, with people and organizations of all sizes getting the newest tools at the same time.

Here's how Andreessen characterizes the shift:

So the computer industry started in 1950 and basically ran for 50 years with the same model, which was a model where all of the new computers, all the new technology, all the new software started out being sold for the highest prices to the biggest organizations. So originally the customer was the Department of Defense. It was the first customer for the computer. In fact, one of the big first computers was called SAGE, which was a missile defense, the first missile-defense computer, which was like one of the first computers in the history of the world which got sold to the Department of Defense for, I don’t know, tens and tens of millions of dollars at the time. Maybe hundreds of millions of dollars in current dollars.

And then five years later computers became — they dropped half in price and then the big insurance companies could buy them, and that’s when Thomas Watson, who ran IBM at the time, was quoted as saying, “There’s only a market need in the world for five computers.” The reason that wasn’t crazy when he said it is because there were only five organizations that were big enough to buy a computer. So that’s how it started. And then IBM came along and productized the mainframe, and then all of a sudden big normal companies — manufacturing companies and banks — could start to buy computers. And then DEC came along and came out with the minicomputer, and then all of a sudden smaller companies could start to buy computers. And then the PC came out and then all of a sudden individuals could start to buy computers. But the PC only ever got to hundreds of millions of people. It never got to billions of people. Now, the smartphone has come out and it can get to billions of people. And so it has always been this kind of trickle-down model for 50 years.

We think that basically about 10 years ago the model flipped. And so we think that the model flipped to a model where, today, where the most interesting and advanced new technology now comes out for the consumer first. And then small businesses start to use it. And then medium-size businesses start to use it, and then large businesses start to use it, and then eventually the government starts to use it.
Over the last forty years, the world of computing has gone through a series of important shifts to get us to where we are today.
1970s: The birth of personal computing, characterized by a focus on the device and what it could do.
1980s: The birth of local networking, characterized by a focus on linking devices and accessing shared resources.
1990s: The birth of the Internet for the masses, characterized by a focus on electronic communications and commerce.
2000s: The birth of mobile computing, characterized by whole new classes of devices, and a focus on virtual identity and community.
Now we find ourselves in the second decade of the 21st century (the 2010s…two-thousand-teens?) and computing continues to evolve at a tremendous pace. So what is the focus now? With entire identities and relationships built in the ether, today's focus is increasingly on data access and protection. It's no longer about the hardware, or the network, or the application. It's about data, and the ability to access that data anytime, anywhere.

Today's computing infrastructure needs to support this paradigm, which brings us to the idea of Adaptive Infrastructure. Adaptive Infrastructure provides fast and reliable access to data and the applications that rely on that data. When a problem occurs, you better be able to get things running again in very short order. To help put the importance of data in perspective, the National Archives & Records Administration reports that 93% of companies that lose access to their data for 10 days or more file for bankruptcy within one year; 50% file for bankruptcy immediately. An Adaptive Infrastructure helps you avoid this fate by providing the following:
Given the command route add host lion tiger 1, which two statements are true? (Choose two.)

Which daemon can configure a default router dynamically?

Given the following host names and IP addresses: myhost 18.104.22.168, printsvr 22.214.171.124. The machine myhost needs to send data to printsvr. The routing table on myhost has no entry for printsvr. The routing table on myhost has no entry for the 126.96.36.199 network. Which entry does the routing algorithm look for next?

You have a host with two hme network interfaces. There are no routing daemons running. Which two files could cause this? (Choose two.)

Two hosts in the same network are connected to different subnets. Which routing method is used to transmit packets between these hosts?

Given the current origin is gvon.com, which two zone file lines correctly delegate the domain training.gvon.com. to a server called centauri.gvon.com? (Choose two.)

A dump of the DNS cache reveals that a workstation name is cached as www.gvon.com.gvon.com. The correct name in the cache should be www.gvon.com. The current origin is gvon.com. Which zone line would cause this incorrect cache entry?

Which local file performs the same function as DNS for the local host only?

Which two statements about the named.conf file are true? (Choose two.)
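Several of the questions above hinge on the order in which the routing code consults its table: an exact host route first, then a route for the destination's network, then the default route. The sketch below expresses that lookup order in Python; the table entries and addresses are invented for illustration and are not the exam's answers.

```python
# Sketch of the lookup order the routing questions above are probing: a host
# route for the exact destination is preferred, then a route for the
# destination's network, and finally the default route.
import ipaddress

def choose_route(destination: str, routing_table: dict) -> str:
    dest = ipaddress.ip_address(destination)

    # 1. Exact host route
    if destination in routing_table["hosts"]:
        return f"host route via {routing_table['hosts'][destination]}"

    # 2. Route for the destination's network
    for network, gateway in routing_table["networks"].items():
        if dest in ipaddress.ip_network(network):
            return f"network route via {gateway}"

    # 3. Default route, if one exists
    if routing_table.get("default"):
        return f"default route via {routing_table['default']}"

    return "no route to host"

example_table = {
    "hosts": {"192.0.2.10": "192.0.2.1"},
    "networks": {"198.51.100.0/24": "192.0.2.254"},
    "default": "192.0.2.1",
}

print(choose_route("192.0.2.10", example_table))    # host route
print(choose_route("198.51.100.7", example_table))  # network route
print(choose_route("203.0.113.5", example_table))   # default route
```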
Japanese researchers have developed a new type of lithium-ion conductor that could help prevent the kind of lithium-ion battery fires that grounded the Boeing 787 Dreamliner aircraft last year. While easily rechargeable, lithium-ion batteries contain flammable organic solvents that present a risk of fire, as seen in a recall of Apple MacBook Pro replacement batteries. Many researchers have been trying to develop a battery based on a solid-state material that can conduct lithium ions. A team at Tohoku University in northern Japan has used lithium borohydride (LiBH4), an agent used in organic chemistry processes, to create a conductor that could become the basis for a new solid-state battery.

The team focused on the rock-salt-type crystal structure of LiBH4 as a potential conductor, but it was only stable under high-temperature, high-pressure conditions. To make it stable at room temperature, the researchers used a process called doping to add small amounts of LiBH4 to potassium iodide, an inorganic compound used to iodize table salt. The result, as described in a study published in the journal APL Materials, proved to be a pure lithium-ion conductor despite its low lithium-ion content. The researchers dubbed it a "Parasitic Conduction Mechanism" because the LiBH4 acts as a kind of "parasite" instead of a host material.

"This parasitic conduction mechanism has a possibility to be applied in any lithium-ion conductors," Hitoshi Takamura, an associate professor at Tohoku University who led the study, wrote in an email interview. "This mechanism can take place if a small amount of lithium ion can be doped to any oxides, sulfides, halides and nitrides to be a host framework." The risk of fire in a battery using the mechanism would be much reduced because the amount of LiBH4 is limited, he added.

The battery is still in the research phase. While the researchers have not estimated how much such batteries might cost, Takamura said the lower fire risk would enable a simplified and cheaper cell structure. The team plans to enhance the prototype battery's lithium-ion conductivity and explore various doping agents that could optimize the lithium-ion pathways.
At a recent public demonstration, the Commonwealth Scientific and Industrial Research Organization (Australia's national science agency) showed off new wireless technology that is capable of streaming data over a wireless link at speeds faster than gigabit Ethernet. During a live demonstration, a CSIRO team successfully transmitted 16 streams of 480p video over a single 6Gbps link at the same time. The signal travelled over a 250-meter link in the 85GHz frequency band without dropping any frames.

The 85GHz frequency used in the demo has the advantage of being largely unused by other applications, but that high frequency also makes it unsuitable for anything other than line-of-sight transmissions, as signals at such high frequencies cannot penetrate buildings. By way of contrast, WiMAX equipment will operate in either the 2-11GHz (unlicensed) or 10-66GHz (licensed) bands.

According to Dr. Jay Guo, director of CSIRO's Wireless Technologies Laboratory, the demonstration used only a portion of the link's available bandwidth. Guo said that the CSIRO was aiming for direct connections of 12Gbps down the road. The technology is geared towards replacing or supplementing current wired links, perhaps where laying fiber is too expensive or infeasible for other reasons. As long as there is a good line-of-sight connection, the technology demonstrated by the CSIRO appears capable of easily surpassing gigabit Ethernet speeds wirelessly, if it can be translated into real-world conditions.
A SIP call session between two phones is established as follows:
- The calling phone sends out an INVITE.
- The called phone sends an informational response, 100 – Trying, back.
- When the called phone starts ringing, a response 180 – Ringing – is sent back.
- When the called party picks up the phone, the called phone sends a response 200 – OK.
- The calling phone responds with ACK – acknowledgement.
- Now the actual conversation is transmitted as data via RTP.
- When the person called hangs up, a BYE request is sent to the calling phone.
- The calling phone responds with a 200 – OK.
It’s as simple as that! The SIP protocol is logical and very easy to understand.
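As a quick illustration, the sketch below simply replays that message sequence between two toy endpoints. It is not a SIP stack and sends nothing over the network; it only makes the ordering of requests and responses explicit.

```python
# A toy replay of the SIP exchange listed above. This is not a real SIP stack;
# it simply walks both phones through the same sequence of requests and
# responses to make the ordering explicit.
def sip_call_session():
    flow = [
        ("caller", "callee", "INVITE"),
        ("callee", "caller", "100 Trying"),
        ("callee", "caller", "180 Ringing"),   # called phone starts ringing
        ("callee", "caller", "200 OK"),        # called party answers
        ("caller", "callee", "ACK"),
        ("both",   "both",   "RTP media"),     # the conversation itself
        ("callee", "caller", "BYE"),           # called party hangs up
        ("caller", "callee", "200 OK"),
    ]
    for sender, receiver, message in flow:
        print(f"{sender:>6} -> {receiver:<6} : {message}")

sip_call_session()
```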
You most likely have your proprietary software thoroughly tested, QAed and reviewed via static code analysis on a regular basis. But what about the open source components? Open source components may have a direct impact on the quality of your software or service. Security vulnerabilities in open source components are discovered from time to time, and while often fixed very quickly, you need to make sure that you know of them when they are discovered and can apply the right measures when necessary. Here are 3 measures you should take to control the risks that open source components may introduce to your software:

1 – Know what’s in your software. Open source components are part of your software, so you need to know which ones are embedded in your software, at all times. What makes this task hard is that most open source components have dependencies (components they use) – in fact, we researched the 300K open source components (out of a database of millions) that are most commonly used by our customers, and discovered that on average, each has 7.1 dependencies. So if you have 50 open source components that you know of in your software, the actual number is probably much higher. Knowing what you are using is an essential step on the path to full control of your software.

2 – Control what’s being added into your software. Checking what open source components are added in real time allows you to check whether they conform to your open source license and risk policy, and to decide whether you want to use them before too much effort is put into developing your software around these components. You just don’t want to spend precious development resources on components that you cannot use because their license is too restrictive. There are usually plenty of alternatives with friendly licenses your development team can consider if they know that they should.

3 – Track security vulnerabilities and stale libraries. Another reason to check open source components as they are added to your software is security vulnerabilities. As we mentioned above, since open source software is like any other software, it too may contain security vulnerabilities. The good news: open source components are tested, used and fixed by an entire community. Our research of over 6,000 projects proves that if open source components are properly managed and regularly patched, most projects (98% of them) would not include an unfixed security vulnerability.

Controlling what’s in the software and what’s being added to it is the first step on the path to secure software. Being able to create a full detailed report in a click, having full visibility and transparency, and making all these part of a foolproof process that does not rely on the development team are key to successful management of open source usage.

About the author: Rami Sass (@whtsrc) is CEO and Founder of WhiteSource. WhiteSource helps engineering executives to effortlessly manage the use of open source components in their software. Open source components make up a significant part of commercial software but are often undermanaged. WhiteSource fully automates all open source management needs, reducing risks, and guaranteeing the continuity and integrity of open source component management.

Editor’s Note: The opinions expressed in this article are solely those of the contributor, and do not necessarily reflect those of Checkmarx.
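As a rough illustration of measures 1 and 3 above, the sketch below reads a pinned Python requirements file and flags any component whose exact version appears in a list of known-vulnerable releases. The package names, versions, and advisory entries are placeholders, and a real pipeline would query an actual vulnerability database rather than a hard-coded dictionary.

```python
# Sketch of the "know what you use, then check it" idea from the article:
# read a pinned requirements file and flag any component whose version appears
# in a list of known-vulnerable releases. The package names, versions, and the
# advisory dictionary are placeholders for illustration only.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2016-0001: fixed in 1.2.1",
    ("otherlib", "0.9.3"): "EXAMPLE-2016-0042: fixed in 0.10.0",
}

def parse_requirements(path: str):
    """Yield (name, version) pairs from lines like 'examplelib==1.2.0'."""
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            yield name.lower(), version

def audit(path: str):
    findings = []
    for name, version in parse_requirements(path):
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name} {version}: {advisory}")
    return findings

if __name__ == "__main__":
    for finding in audit("requirements.txt"):
        print("VULNERABLE:", finding)
```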
Medical schools and health care trainers are using advanced gaming technologies to convey what it’s like to practice high-pressure, critical-care medicine - By John Pulley - Apr 16, 2007 The Serious Games Initiative In 1972, the big medical stories included the surgeon general’s warning that exposure to secondhand smoke was a serious risk to public health. The development of durable lithium batteries dramatically increased the popularity of pacemakers. And the U.S. government stopped requiring people to receive smallpox vaccinations. But the most significant medical development of the year — in terms of its long-term implications — might have been the introduction of Pong. Atari’s video arcade game was an instant hit. A paragon of simplicity, Pong consisted of a long line representing a net, two shorter lines for paddles, a square ball and digital numbers for keeping score. Players manipulated knobs that controlled the up and down motion of virtual paddles, which batted a blip of light across a crude, low-definition video screen. The game had the right combination of graphics, plausible physics and difficulty — not too hard or too easy — to engage users in a new and powerful way. And although no one knew it at the time, Pong was a harbinger of a significant technological shift in the way certain professionals today are trained. Since then, the potential for electronic games to radically change the nature of teaching — for the medical profession in particular — has reached critical mass. Today, medical schools are experimenting with multimillion-dollar game-based software systems that simulate the working conditions and decision-making skills required at some of the biggest hospitals in the world. Until recently, tinkerers and game enthusiasts typically developed games for medical education. The games’ sophistication varies from simple applications such as online crossword puzzles and “Jeopardy”-type quiz shows to games that — in the spirit of the TV series “Survivor” — rate players according to how well they perform challenging tasks relative to their peers. “Over the past two or three years, it has become a very active area,” said Parvati Dev, director of the Stanford University Medical Media and Information Technologies lab, which seeks to advance medical education by using information technology in innovative ways. “People are realizing that a lot of learning requires you to experience a situation before you touch a real patient,” Dev said. “Today, we do book learning and learning in the hospital. Simulation, games and virtual reality provide an intermediate learning experience that can be very realistic, very safe for you and the patient, and which gives you a lot of practice.” The use of games in medical education might also be one of the best ways to spark learning in a generation of digital natives — students for whom cell phones, video games and iPods are indispensable tools of daily life. William Brescia, the newly appointed director of instructional technology at the University of Tennessee Health Science Center, said he had barely unpacked when students began arriving at his office. “I had 10 students knocking on my door saying, ‘How can we do this or that with technology?’ ” he said. “They have to pass the boards. They have to remember a lot of stuff. 
Wouldn’t it be great if we had a game to take them through this stuff over and over so that they could practice?”Sweat and fear The most sophisticated medical education games are now multimillion-dollar, graphic-intensive, first-person applications created by the commercial gaming industry. At the leading edge of this category is Pulse, a $10 million virtual learning lab developed by Texas A&M University-Corpus Christi and game developer BreakAway. The Yale University School of Medicine, the Johns Hopkins School of Medicine and the National Naval Medical Center are testing the game, which replicates in minute detail the operations of the National Naval Medical Center’s intensive care unit. Like other sophisticated electronic learning systems, the game seeks to maximize the time students spend on specific tasks by engaging them with compelling story lines and believable virtual environments. “The Pulse project represents an important convergence of gaming technology, educational theory and clinical need,” said Dr. Kirk Shelley, associate professor of anesthesiology and medical director of ambulatory surgery at Yale. “The increasing complexity of the critical-care environment mandates that we find innovative ways to rapidly train health care providers.” A key innovation is the games’ ability to convey the intense atmosphere of the medical theater. Writing in an online forum, Tim Holt, a research assistant at Oregon State University who worked on the Pulse project, vouched for its emotional realism. “We want people to sweat, be scared, feel challenged, get pissed, but try again when the patient dies,” he said. Immune Attack, developed by Brown University and the University of Southern California under the auspices of the Federation of American Scientists, also reminds its players that medicine is a matter of life or death. The game teaches immunology by placing learners on an electronic playing field where a teenage prodigy with a rare immunodeficiency disorder must teach his immune system how to function “or die trying.”Virtual field hospital Duke University hosts one of the more sophisticated projects to simulate serious medical situations. The university’s Human Simulation and Patient Safety Center functions like a flight simulator for doctors-in-training. Using computer-controlled mannequins that exhibit the symptoms of medical emergencies — a patient whose airway is swelling shut, for example — the simulations force students to make decisions before they face such situations in the real world. The center has partnered with Virtual Heroes, a gaming company best known for creating the America’s Army game the military uses as a recruitment tool, to explore adding 3-D and virtual reality solutions. Virtual Heroes is also developing HumanSim, a medical training tool that will integrate advanced game technology. Duke’s new game is set in a field hospital’s emergency room, and the situations that arise will have relevance to military and civilian medical teams. The game will connect players via a network that allows them to interact in the virtual environment. “We focused on team training because [poor communication] has such a huge impact on injury and death caused by health care workers,” said Dr. Jeffrey Taekman, the center’s director. The Defense Department also faces its share of mistakes with deadly consequences. 
To minimize risks, DOD uses game technology to train military employees to respond to a range of events, from soldiers reacting to a woman and child approaching a military checkpoint in Afghanistan to military doctors dealing with a critically injured soldier. “DOD is the only [arena] where every day your job is to train. They know mistakes are costly,” said Kay Howell, vice president of information technologies at the Federation of American Scientists. “Similarly, the medical field spends a lot of money on education and training — and mistakes are costly. The return on investment in gaming could be extremely high” for medical education. When designing such games, developers must maintain a careful balance between entertainment and seriousness, experts say. “If you don’t do it right, somebody might die,” Brescia said. “You don’t want a fun game that gives out faulty information.” Health business games Not all health-related games are about treatment. Some developers are using games to help health care professionals learn to make business and management decisions. At the University of Virginia, for example, doctors and technologists have come together to create a game that teaches the economic implications — for doctors, patients and society — of various treatment options. The game has several disease scenarios, including gastroesophageal reflux disease, chronic high blood pressure and emphysema. For each one, doctors choose from among several courses of treatment that vary in terms of direct costs, recuperation time, insurance coverage, out-of-pocket expenses and physician payment. “The treatment options are a tossup in terms of efficacy, but there are big differences in the economic outcomes to physicians, patients and society as a whole,” said John Jackson, director of educational technology at the University of Virginia’s Office of Medical Education. The game runs on open knowledgeware that users can customize to suit their needs. To date, the University of Colorado, the University of California at San Francisco and Temple University have adopted the game. Dr. John Voss, an associate professor of medicine at the University of Virginia, wrote the game’s algorithms. “People really like the interactive graphics,” Voss said. “The quickest way to put medical students to sleep is to lecture them about health care economics.”Pedantic pushback Of course, not everyone is enthusiastic about games — even serious ones — having a prominent place in medical education. Sophisticated new applications can be expensive to develop, and it’s not clear who will pay those costs. In addition, the medical community has a reputation for adopting IT at a glacial pace, in marked contrast to its willingness to embrace sophisticated tools for diagnosing and treating diseases. And although promising, most medical education games lag far behind commercial ones in their ability to engage players on a deep level. “The medical games I’ve seen so far have what I call a very short story,” Brescia said. “We need to think about realistic medical stories. The story is what drives the player to finish the games.” “My impression of medical education is that they are struggling, like lots of education people, with issues of how new media affect them,” said Ben Sawyer, co-founder of the software development firm Digitalmill. 
“You add gaming and you get even more pushback because to say that a game would be a serious tool for something like cutting open a human body may sound like anathema to some.” Such resistance notwithstanding, Sawyer said he is encouraged that universities are bringing together medical professionals and game designers. He was pleased to hear a doctor at a medical conference express his desire to have Sam Fisher as a virtual patient. If you haven’t met Fisher, a field agent for the super-secret government agency Third Echelon, you can make his acquaintance in the virtual world of Tom Clancy’s Splinter Cell computer game.
Being an information security enthusiast/professional, I am often asked how one can go about hiding their IP address while on the Internet. Here’s the analogy I give them: What would happen if you were to give a fake address when making a pizza delivery order? Simple — you wouldn’t get a pizza. While you could pretty easily fool the pizza place, you ultimately wouldn’t accomplish much. The very thing that you wanted (the pizza) would get sent to some other guy’s house, and you’d go hungry.

It’s the same with the Internet. You need a valid IP address in order to receive web pages and email — just like you need a valid house address in order to receive a pizza delivery or a package from the post office. If you didn’t have an address, people simply wouldn’t know how to bring you the things you wanted.

One popular way of obscuring your IP address is by using a proxy. But proxies don’t break the rules; they just add a bit of complexity. Using our pizza delivery analogy, a proxy is like calling your buddy at his house and having him order your pizza. Then, when the pizza gets there, he brings it to YOU. But guess what? You still need an address. If your buddy didn’t know where you lived he couldn’t bring you your pizza. So you hid from the pizza place, but not from everyone.

The key here is that the pizza shop didn’t hear your voice (what browser you’re using), your accent (your operating system), or get your mailing address (your IP). They got your friend’s information instead, and that’s what people like about proxies. Just remember that proxies aren’t magical; they simply add extra hops in the middle. Each person still has an address (you, your buddy, and the pizza place). So don’t think of it as “hiding” or “becoming invisible”; this isn’t how the Internet works. The Internet needs to know your IP address or else you can’t use it. If you were truly hidden, nobody would be able to bring you the stuff you asked for — whether that was a pizza or an email from a friend. [Jan 2007]
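To show the "buddy in the middle" idea in code, here is a minimal sketch of a TCP relay: it accepts a client connection, forwards the bytes to the real destination, and carries the reply back. The destination sees the relay's address, yet the relay still has to hold the client's connection open, just as the buddy still needs to know where you live to deliver the pizza. The port and destination are illustrative, and this is a sketch rather than a usable proxy.

```python
# A minimal "buddy in the middle" sketch of the proxy idea above: a tiny TCP
# relay that accepts one client connection, forwards the request to the real
# destination, and carries the reply back. The relay must keep the client's
# socket (its "address") open to deliver the response, just as the buddy in
# the analogy needs to know where you live. Host and port are illustrative.
import socket

LISTEN_PORT = 8888
DESTINATION = ("example.com", 80)   # the "pizza shop"

def relay_once():
    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", LISTEN_PORT))
        server.listen(1)
        client, client_addr = server.accept()    # the relay learns the client's address here
        with client:
            request = client.recv(65536)         # read the client's request (single read, sketch only)

            with socket.create_connection(DESTINATION) as upstream:
                upstream.sendall(request)        # the destination sees the relay's address
                while True:
                    chunk = upstream.recv(65536)
                    if not chunk:
                        break
                    client.sendall(chunk)        # the reply still needs a path back to the client

if __name__ == "__main__":
    print(f"Relaying one connection on 127.0.0.1:{LISTEN_PORT} to {DESTINATION[0]}")
    relay_once()
```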