Dataset schema:
text: string, length 234 to 589k
id: string, length 47
dump: string, 62 classes
url: string, length 16 to 734
date: string, length 20
file_path: string, length 109 to 155
language: string, 1 class
language_score: float64, 0.65 to 1
token_count: int64, 57 to 124k
score: float64, 2.52 to 4.91
int_score: int64, 3 to 5
Photo Gallery & Information about New 7 Wonders of the World The Taj Mahal (1630 A.D.) Agra, India Symbol of Love & Passion! This immense mausoleum was built on the orders of Shah Jahan, the fifth Muslim Mogul emperor, to honor the memory of his beloved late wife. Built out of white marble and standing in formally laid-out walled gardens, the Taj Mahal is regarded as the most perfect jewel of Muslim art in India. The emperor was later deposed by his son and imprisoned and, it is said, could then see the Taj Mahal only through his small cell window. The Pyramid at Chichén Itzá (before 800 A.D.) Yucatan Peninsula, Mexico Symbol of Worship & Knowledge! Chichén Itzá, the most famous Mayan temple city, served as the political and economic center of the Mayan civilization. Its various structures - the pyramid of Kukulkan, the Temple of Chac Mool, the Hall of the Thousand Pillars, and the Playing Field of the Prisoners - can still be seen today and demonstrate an extraordinary commitment to architectural space and composition. The pyramid itself was the last, and arguably the greatest, of all Mayan temples. Christ Redeemer (1931) Rio de Janeiro, Brazil Symbol of Welcoming & Openness! This statue of Jesus stands some 38 meters tall atop the Corcovado mountain overlooking Rio de Janeiro. Designed by Brazilian Heitor da Silva Costa and created by French sculptor Paul Landowski, it is one of the world's best-known monuments. The statue took five years to construct and was inaugurated on October 12, 1931. It has become a symbol of the city and of the warmth of the Brazilian people, who receive visitors with open arms. The Great Wall of China (220 B.C. and 1368 - 1644 A.D.) China Symbol of Perseverance & Persistence! The Great Wall of China was built to link existing fortifications into a united defense system and better keep invading Mongol tribes out of China. It is the largest man-made monument ever built, and it is claimed, though disputed, to be the only one visible from space. Many thousands of people must have given their lives to build this colossal construction. Machu Picchu (1460-1470), Peru Symbol of Community & Dedication! In the 15th century, the Incan Emperor Pachacútec built a city in the clouds on the mountain known as Machu Picchu ("old mountain"). This extraordinary settlement lies halfway up the Andes Plateau, deep in the Amazon jungle and above the Urubamba River. It was probably abandoned by the Incas because of a smallpox outbreak and, after the Spanish defeated the Incan Empire, the city remained 'lost' for over three centuries. It was rediscovered by Hiram Bingham in 1911. Petra (9 B.C. - 40 A.D.), Jordan Symbol of Engineering & Protection! On the edge of the Arabian Desert, Petra was the glittering capital of the Nabataean empire of King Aretas IV (9 B.C. to 40 A.D.). Masters of water technology, the Nabataeans provided their city with great tunnel constructions and water chambers. A theater, modelled on Greek-Roman prototypes, had space for an audience of 4,000. Today, the Palace Tombs of Petra, with the 42-meter-high Hellenistic temple facade of the El-Deir Monastery, are impressive examples of Middle Eastern culture. The Roman Colosseum (70 - 82 A.D.) Rome, Italy Symbol of Joy & Suffering! This great amphitheater in the centre of Rome was built to give favors to successful legionnaires and to celebrate the glory of the Roman Empire.
Its design concept endures to this very day: some 2,000 years later, virtually every modern sports stadium still bears the imprint of the Colosseum's original design. Today, through films and history books, we are even more aware of the cruel fights and games that took place in this arena, all for the joy of the spectators.
<urn:uuid:8697ef32-6ea3-4cf3-85d6-3bc01db1e421>
CC-MAIN-2017-09
http://www.knowledgepublisher.com/article-391.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00083-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944701
970
2.71875
3
Know the 4 denial of service types that can threaten the Domain Name System - By William Jackson - Jan 25, 2013 With the number of denial of service (DOS) attacks growing overall, a variety of techniques are being used to take advantage of the Domain Name System’s openness to direct attacks against DNS servers and even against targets that do not maintain a DNS server. The asymmetrical nature of DNS queries — the response often is much greater than the query — can turn the system against itself by amplifying attack traffic. With the number of attacks on the rise, security experts have recommended that organizations change their approach to defending against DOS attacks. Radware’s Global Application & Network Security Report for 2012 cited a 170 percent increase in DNS denial of service attacks from 2011 to 2012 and described four types of attacks targeting or using DNS. Basic DNS flood This is much like a brute-force DOS attack against any server, using high volumes of traffic to overpower a DNS server. This can use UDP (User Datagram Protocol) packets, which are accepted by DNS servers and do not require a connection, making it easy to spoof the IP address and hide the identity of the attacking computers. Even though this is a brute force attack, the attack resources needed are relatively small, since just 10 PCs generating 1,000 DNS requests per second could swamp the capacity of a typical DNS server. Additional computers could be used to further distribute and hide the source of the attack. Reflective DNS attack This technique actually manipulates DNS servers into directing attack traffic at a target through the use of spoofed IP addresses. Requests are sent to a third-party DNS server or servers using the address of the intended target. Replies are sent to the target server, which can be overwhelmed by the volume of DNS traffic. The volume of attack traffic is increased because a DNS reply typically is three to 10 times larger than the request. This amplification can be increased another tenfold by using specific DNS requests that require longer answers. The attacker remains hidden behind the DNS servers that are sending replies to the target. Recursive DNS attack This leverages the hierarchical nature of DNS, which Radware calls the most sophisticated and asymmetric type of DNS attack. When a recursive DNS server receives a request to resolve a domain name that it does not have cached, it sends out queries to other DNS servers, hoping to get an answer that can be returned. By sending multiple recursive requests for domain names not cached by the target server, an attacker can force the target to send out many requests of its own and wait for responses, quickly using up processing power, memory and bandwidth. Because of the low amount of traffic needed to generate a recursive attack, it often can fly under the radar of defenses that are tuned to high volumes of traffic. Garbage DNS attack This is a volume-based attack using large UDP packets to overwhelm network pipes, which takes advantage of the fact that DNS is a necessity. Because availability on the Internet requires the Domain Name System, organizations will not block the targeted DNS port at the router level, giving a clear shot at the target for a distributed DOS attack. William Jackson is a Maryland-based freelance writer.
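To make the query/response asymmetry described above concrete, the sketch below compares the wire size of a DNS query with the size of the answer it gets back; the ratio is the amplification factor that reflective attacks exploit. It is a minimal illustration, not a tool from the article: it assumes the third-party dnspython package, and the resolver address and domain are placeholders you would substitute with ones you are permitted to query.

```python
# Minimal sketch: measure DNS amplification (response size / query size).
# Assumes the third-party "dnspython" package; RESOLVER_IP and QNAME are placeholders.
import dns.message
import dns.query

RESOLVER_IP = "8.8.8.8"   # placeholder: a resolver you are permitted to query
QNAME = "example.com"     # placeholder domain

# Compare query size with response size for a few record types; TXT/ANY answers
# tend to be longer (note that many resolvers now refuse ANY queries outright).
for rdtype in ("A", "TXT", "ANY"):
    query = dns.message.make_query(QNAME, rdtype, want_dnssec=True)
    response = dns.query.udp(query, RESOLVER_IP, timeout=2.0)
    q_len, r_len = len(query.to_wire()), len(response.to_wire())
    print(f"{rdtype:>4}: query {q_len} bytes, response {r_len} bytes, "
          f"amplification x{r_len / q_len:.1f}")
```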
<urn:uuid:19f3f9ac-f736-48ef-89ff-6f47fc96aeff>
CC-MAIN-2017-09
https://gcn.com/articles/2013/01/25/4-denial-of-service-types-threaten-dns.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00435-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930297
656
2.6875
3
The telescope doors are open on a NASA spacecraft that could give scientists clues to space weather that affects communication systems, electronics and power networks. Less than a month after launching the Interface Region Imaging Spectrograph (IRIS), NASA opened its telescope door on Wednesday. Simply uncovering the spacecraft's telescope lens was a significant milestone for the project, according to NASA. IRIS is a solar telescope designed to give scientists information about how material gathers, moves and heats up in the Sun's lower atmosphere. Better understanding this part of the sun's atmosphere, which sits below the corona, could help scientists understand and model phenomena like the ejection of solar material -- something that, if it's large enough, can damage electronic circuits, power distribution networks and communications systems on Earth. IRIS was launched on June 27 from a Pegasus rocket that was dropped from the belly of an L-1011 TriStar aircraft flying above the Pacific Ocean, about 90 miles off the central coast of California. The solar telescope is in the midst of a 60-day test period that began at launch. The first 30 days, which ends July 27, consists of spacecraft system checks, according to the space agency. The team will use the next 30 days for initial solar observations to fine tune the telescope's instruments. If the tests go well, NASA plans to move IRIS into science mode by Aug. 26. IRIS is set to focus on two parts of the lower solar atmosphere that exhibit an unusual effect: temperatures in the region are believed to be around 6,000 Kelvin near the Sun's surface and heat up to around a million Kelvin at the top of the region. That's different than our conventional experience with heat sources, where temperatures rise as the source is approached. "There's not been a push to look at this region because the atomic physics in this region is very, very, very complicated," Alan Title, IRIS principal investigator at Lockheed Martin, said in a statement last month. In the last decade, hopes have risen that computer models can accurately handle the complex information expected to come in about the Sun's lower atmosphere. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is [email protected].
<urn:uuid:11e86732-a875-410d-a698-9657ed6926ae>
CC-MAIN-2017-09
http://www.computerworld.com/article/2484104/emerging-technology/nasa-telescope-opens-eye-to-spy-on-the-sun.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00135-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939821
495
3.421875
3
GCN LAB IMPRESSIONS Einstein was wrong? Fire up the Falcon! - By John Breeden II - Sep 27, 2011 This article has been updated to correct the value of the speed of light, which, as several comments point out, was listed incorrectly. Let me start off by saying that I'm a huge fan of Albert Einstein. He put forth almost all of his theories about how the universe works doing nothing other than working things out in his head. In his head! There he was, working as a junior examiner in the Swiss Patent Office, scribbling down the laws that make up the universe in his free time. Amazing. He didn't have fancy scientific instruments, and yet he blazed a new trail that we've followed for more than 100 years. The level of intelligence he possessed must have been greater than that of anyone who ever lived. And, incidentally, many of his theories have been put to real-world tests, and all have stood up to modern scrutiny. So it was with mixed feelings that we got the first inklings that one of his most important theories might be wrong. Basically, Einstein's theory of special relativity states, among other things, that nothing can move faster than the speed of light in a vacuum, which is 299,792,458 meters per second. But recently, something did. At least we think it did. Although it has yet to be verified, the data looks pretty clear-cut. Scientists at the European Organization for Nuclear Research (CERN), the largest physics lab in the world, shot neutrinos, which are subatomic particles, 454 miles from Switzerland into Italy. This was a race of neutrinos against light. Given that nothing can move faster than light, it was a shock when the neutrinos arrived 60 billionths of a second sooner. The margin of error in the instruments has been calculated at 10 billionths of a second, so the speedy neutrinos handily defeated the speed of light. Nobody wants to contradict Einstein, and thus almost everything we believed about physics over the past century, so scientists are moving very slowly to verify these results, which could take six months. An American lab that can run similar tests is scheduled to be brought back online in a few years, so these tests and others might even be repeatable outside of CERN. Personally, I think this is going to be verified. And I'm happy about it. Despite my admiration for Einstein, I never really liked the theory of special relativity. That one theory made both "Star Wars" and "Star Trek" impossible, because no real space exploration, and certainly no space empires, could ever exist without faster-than-light travel. Take our own little backwoods solar system. The closest star to us is Proxima Centauri, at 4.2 light years away. Under Einstein's rules, 4.2 years is the shortest time in which we could ever get there, even if we could move at the speed of light. And it's a red dwarf star, so we probably wouldn't find anything interesting there anyway. There are currently 26 known stars within 12 light years of Earth, but it's not like they are all along a road in the same direction. We would need 26 different spacecraft to get to them all within 12 years at the speed of light. Otherwise, there would be a lot of time-consuming backtracking added to those numbers. It's not like we are anywhere close to even baseline speed-of-light travel. But if this experiment is proven correct, it means that there are no laws to hold us back. The universe is our oyster, just waiting for us to develop the technology to properly explore it. So you can have your rocket-powered flying cars.
I want my Millennium Falcon. John Breeden II is a freelance technology writer for GCN.
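The headline numbers above are easy to sanity-check with a little arithmetic. The back-of-the-envelope sketch below uses only the figures quoted in the article (a 454-mile baseline and a 60-nanosecond early arrival); it is a rough illustration, not the experiment's own analysis.

```python
# Back-of-the-envelope check using the figures quoted above:
# a 454-mile baseline and neutrinos reported 60 billionths of a second early.
c = 299_792_458.0             # speed of light in a vacuum, m/s
distance_m = 454 * 1609.344   # 454 miles in meters (about 730.6 km)
early_s = 60e-9               # 60 ns

light_time = distance_m / c                 # roughly 2.44 milliseconds
neutrino_time = light_time - early_s
excess = light_time / neutrino_time - 1     # fractional speed excess

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"implied speed excess: {excess:.2e} (about 2.5 parts in 100,000)")
```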
<urn:uuid:385ad2c6-e6cc-477f-8c79-7f2fc3b20b54>
CC-MAIN-2017-09
https://gcn.com/articles/2011/09/27/if-einstein-wrong-about-speed-of-light.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00311-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969586
813
2.65625
3
A new worm is targeting x86 computers running Linux and PHP, and variants may also pose a threat to devices such as home routers and set-top boxes based on other chip architectures. According to security researchers from Symantec, the malware spreads by exploiting a vulnerability in php-cgi, a component that allows PHP to run in the Common Gateway Interface (CGI) configuration. The vulnerability is tracked as CVE-2012-1823 and was patched in PHP 5.4.3 and PHP 5.3.13 in May 2012. The new worm, which was named Linux.Darlloz, is based on proof-of-concept code released in late October, the Symantec researchers said Wednesday in a blog post. "Upon execution, the worm generates IP [Internet Protocol] addresses randomly, accesses a specific path on the machine with well-known ID and passwords, and sends HTTP POST requests, which exploit the vulnerability," the Symantec researchers explained. "If the target is unpatched, it downloads the worm from a malicious server and starts searching for its next target." The only variant seen to be spreading so far targets x86 systems, because the malicious binary downloaded from the attacker's server is in ELF (Executable and Linkable Format) format for Intel architectures. However, the Symantec researchers claim the attacker also hosts variants of the worm for other architectures including ARM, PPC, MIPS and MIPSEL. These architectures are used in embedded devices like home routers, IP cameras, set-top boxes and many others. "The attacker is apparently trying to maximize the infection opportunity by expanding coverage to any devices running on Linux," the Symantec researchers said. "However, we have not confirmed attacks against non-PC devices yet." The firmware of many embedded devices is based on some type of Linux and includes a Web server with PHP for the Web-based administration interface. These kinds of devices might be easier to compromise than Linux PCs or servers because they don't receive updates very often. Patching vulnerabilities in embedded devices has never been an easy task. Many vendors don't issue regular updates and when they do, users are often not properly informed about the security issues fixed in those updates. In addition, installing an update on embedded devices requires more work and technical knowledge than updating regular software installed on a computer. Users have to know where the updates are published, download them manually and then upload them to their devices through a Web-based administration interface. "Many users may not be aware that they are using vulnerable devices in their homes or offices," the Symantec researchers said. "Another issue we could face is that even if users notice vulnerable devices, no updates have been provided to some products by the vendor, because of outdated technology or hardware limitations, such as not having enough memory or a CPU that is too slow to support new versions of the software." To protect their devices from the worm, users are advised to verify if those devices run the latest available firmware version, update the firmware if needed, set up strong administration passwords and block HTTP POST requests to -/cgi-bin/php, -/cgi-bin/php5, -/cgi-bin/php-cgi, -/cgi-bin/php.cgi and -/cgi-bin/php4, either from the gateway firewall or on each individual device if possible, the Symantec researchers said.
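As a companion to the blocking advice above, here is a minimal sketch that scans a web server access log for the kind of POST requests the worm sends. The log path and the combined log format are assumptions, not part of the report; blocking should still be done at the firewall or on the device itself, as the researchers recommend.

```python
# Minimal sketch: flag POST requests aimed at the php-cgi paths Linux.Darlloz probes.
# The log path and "combined" log format are assumptions, not part of the report.
import re

WATCHED_PATHS = ("/cgi-bin/php", "/cgi-bin/php5", "/cgi-bin/php-cgi",
                 "/cgi-bin/php.cgi", "/cgi-bin/php4")

# e.g. 203.0.113.7 - - [27/Nov/2013:10:00:00 +0000] "POST /cgi-bin/php?... HTTP/1.1" 200 512
LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')

def suspicious_requests(log_path="/var/log/nginx/access.log"):
    hits = []
    with open(log_path, errors="replace") as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m:
                continue
            path = m.group("path").split("?", 1)[0]   # drop the query string
            if m.group("method") == "POST" and path.startswith(WATCHED_PATHS):
                hits.append((m.group("ip"), m.group("path")))
    return hits

if __name__ == "__main__":
    for ip, path in suspicious_requests():
        print(f"possible Darlloz-style probe from {ip}: {path}")
```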
<urn:uuid:3835489b-101f-43db-8b3e-67290656c7b5>
CC-MAIN-2017-09
http://www.computerworld.com/article/2486333/malware-vulnerabilities/this-new-worm-targets-linux-pcs-and-embedded-devices.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00011-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936657
702
2.609375
3
Ransomware attacks cause downtime, data loss, possible intellectual property theft, and in certain industries a ransomware attack is now looked at as a possible data breach. Ransomware is vicious malware that locks users out of their devices or blocks access to files until a sum of money or ransom is paid. There are different variants of ransomware; some ransomware is designed to attack Windows PCs while other strains infect Macs and even mobile devices. Ransomware is highly effective because the encryption or locking of the files is practically impossible to undo without paying the ransom. September 2013 is when ransomware went pro. Ransomware is malware that usually gets installed on a user's workstation (PC or Mac) using a social engineering attack where the user gets tricked into clicking on a link or opening an attachment. Once the malware is on the machine, it starts to encrypt all data files it can find on the machine itself and on any network shares the PC has access to. Next, when a user wants to access one of these files they are blocked, and the system admin who gets alerted by the user finds two files in the directory that indicate the files are being held for ransom and explain how to pay the ransom to decrypt the files. Strains of ransomware come and go as new cyber mafias muscle into the "business". Some examples are CryptoLocker, CryptoWall, Locky and TeslaCrypt. Ransomware is a very successful criminal business model. As an illustration, CryptoWall has generated over 320 million dollars in revenues. Once these files are encrypted, the only way to get them back is to restore a recent backup or pay the ransom. Problem is, backups often fail. Storage Magazine reports that over 34% of companies do not test their backups, and of those tested, 77% found that tape backups failed to restore. According to Microsoft, 42% of attempted recoveries from tape backups in the past year have failed. The ransom is usually about $500 within the first deadline, and when that deadline expires, the ransom doubles. The criminals demand payment in untraceable crypto-currencies like Bitcoin, a new kind of money (call it a digital currency) that many victims have to learn about for the first time. Recently, sophisticated cyber gangs have begun to penetrate whole networks, infect all machines at the same time, and extort tens of thousands of dollars. Many more ransomware strains are expected. These are only the early days, and as we said, it's a very successful criminal business model with many copycats. New strains of ransomware regularly get spotted in the wild, and cybercrime is furiously innovating in both the technical and social engineering areas. "You could spend a fortune purchasing technology and services, and your network infrastructure could still remain vulnerable to old-fashioned manipulation." — Kevin Mitnick Here Is The Timeline The first ever ransomware virus was created in 1989 by Harvard-trained evolutionary biologist Joseph L. Popp. It was called the AIDS Trojan, also known as the PC Cyborg. Popp sent 20,000 infected diskettes labeled "AIDS Information – Introductory Diskettes" to attendees of the World Health Organization's international AIDS conference. The AIDS Trojan was "generation one" ransomware and relatively easy to overcome. The Trojan used simple symmetric cryptography and tools were soon available to decrypt the file names. But the AIDS Trojan set the scene for what was to come.
Note, this timeline has been created with approximate dates and does not claim to be fully complete. It does show the explosive growth though. 17 years after the first ransomware malware was distributed, another strain was released, but this time it was much more invasive and difficult to remove than its predecessor. In 2006, the Archiveus Trojan was released, the first ever ransomware virus to use RSA encryption. The Archiveus Trojan encrypted everything in the MyDocuments directory and required victims to purchase items from an online pharmacy to receive the 30-digit password. June 2006 - GPcode, an encryption Trojan which spread via an email attachment purporting to be a job application, used a 660-bit RSA public key. At the same time GPcode and its many variants were infecting victims, other types of ransomware circulated that did not involve encryption, but simply locked out users. WinLock displayed pornographic images until the users sent a $10 premium-rate SMS to receive the unlocking code. Two years after the initial GPcode virus was created, another variant of the same virus called GPcode.AK was unleashed on the public using a 1024-bit RSA key. Mid-2011 - The first large-scale ransomware outbreak, as ransomware moves into the big time due to the use of anonymous payment services, which made it much easier for ransomware authors to collect money from their victims. There were about 30,000 new ransomware samples detected in each of the first two quarters of 2011. July 2011 - During the third quarter of 2011, new ransomware detections doubled to 60,000. January 2012 - The cybercrime ecosystem comes of age with Citadel, a toolkit for distributing malware and managing botnets that first surfaced in January 2012. Citadel makes it simple to produce ransomware and infect systems wholesale with pay-per-install programs, allowing cybercriminals to pay a minimal fee to install their ransomware viruses on computers that are already infected by other malware. Due to the introduction of Citadel, ransomware infections surpassed 100,000 in the first quarter of 2012. Cyber criminals begin buying crime kits like Lyposit—malware that pretends to come from a local law enforcement agency based on the computer's regional settings, and instructs victims to use payment services in a specific country—for just a share of the profit instead of for a fixed amount. March 2012 - Citadel and Lyposit lead to the Reveton worm, an attempt to extort money in the form of a fraudulent criminal fine. Reveton first showed up in European countries in early 2012. The exact "crime" and "law enforcement agency" are tailored to the user's location. The threats are "pirated software" or "child pornography". The user would be locked out of the infected computer and the screen taken over by a notice informing the user of their "crime" and instructing them that to unlock their computer they must pay the appropriate fine using a service such as Ukash, Paysafe or MoneyPak. April 2012 - Urausy Police Ransomware Trojans are some of the most recent entries in these attacks and are responsible for Police Ransomware scams that have spread throughout North and South America since April of 2012. July 2012 - Ransomware detections increase to more than 200,000 samples, or more than 2,000 per day. November 2012 - Another version of Reveton was released in the wild pretending to be from the FBI's Internet Crime Complaint Center (IC3). Like most malware, Reveton continues to evolve.
July 2013 - A version of ransomware is released targeting OSX users that runs in Safari and demands a $300 fine. This strain does not lock the computer or encrypt the files, but just opens a large number of iframes (browser windows) that the user would have to close. A version purporting to be from the Department of Homeland Security locked computers and demanded a $300 fine. July 2013 - Svpeng: This mobile Trojan targets Android devices. It was discovered by Kaspersky in July 2013 and originally designed to steal payment card information from Russian bank customers. In early 2014, it had evolved into ransomware, locking the phones and displaying a message accusing the user of accessing child pornography. By the summer of 2014, a new version was out targeting U.S. users, using a fake FBI message and requiring a $200 payment, with variants being used in the UK, Switzerland, India and Russia. According to Jeremy Linden, a senior security product manager for Lookout, a San Francisco-based mobile security firm, 900,000 phones were infected in the first 30 days. August 2013 - A version masquerading as fake security software known as Live Security Professional begins infecting systems. September 2013 - CryptoLocker is released. CryptoLocker is the first cryptographic malware spread by downloads from a compromised website and/or sent to business professionals in the form of email attachments made to look like customer complaints, and it was controlled through the Gameover ZeuS botnet, which had been capturing online banking information since 2011. CryptoLocker generates a 2048-bit RSA key pair, uploads it to a command-and-control server, and uses it to encrypt files with certain file extensions, deleting the originals. It would then threaten to delete the private key if payment was not received within three days. Payments initially could be received in the form of Bitcoins or pre-paid cash vouchers. With some versions of CryptoLocker, if the payment wasn't received within three days, the user was given a second opportunity to pay a much higher ransom to get their files back. Ransom prices varied over time and with the particular version being used. The earliest CryptoLocker payments could be made by CashU, Ukash, Paysafecard, MoneyPak or Bitcoin. Prices were initially set at $100, €100, £100, two Bitcoins or other figures for various currencies. November 2013 - The ransom changes. The going ransom was 2 Bitcoins, or about $460; if victims missed the original ransom deadline, they could pay 10 Bitcoins ($2,300) to use a service that connected to the command and control servers. After paying for that service, the first 1024 bytes of an encrypted file would be uploaded to the server and the server would then search for the associated private key. Early December 2013 - 250,000 machines infected. An analysis of four Bitcoin accounts associated with CryptoLocker found that 41,928 Bitcoins had been moved through those accounts between October 15 and December 18. Given the then-current price of $661, that would represent more than $27 million in payments received, not counting all the other payment methods. Mid December 2013 - The first CryptoLocker copycat software emerges, Locker, charging users $150 to get the key, with money being sent to a Perfect Money or QIWI Visa Virtual Card number. Late December 2013 - CryptoLocker 2.0 – Despite the similar name, CryptoLocker 2.0 was written using C# while the original was in C++, so it was likely done by a different programming team.
Among other differences, 2.0 would only accept Bitcoins, and it would encrypt image, music and video files, which the original skipped. And, while it claimed to use RSA-4096, it actually used RSA-1024. However, the infection methods were the same and the screen image very close to the original. Also during this timeframe, CryptorBit surfaced. Unlike CryptoLocker and CryptoDefense, which only target specific file extensions, CryptorBit corrupts the first 212 or 1024 bytes of any data file it finds. It also seems to be able to bypass Group Policy settings put in place to defend against this type of ransomware infection. The cyber gang uses social engineering to get the end-user to install the ransomware using such devices as a rogue antivirus product. Then, once the files are encrypted, the user is asked to install the Tor Browser, enter their address and follow the instructions to make the ransom payment – up to $500 in Bitcoin. The software also installs cryptocoin mining software that uses the victim's computer to mine digital coins such as Bitcoin and deposit them in the malware developer's digital wallet. February 2014 - CryptoDefense is released. It used Tor and Bitcoin for anonymity and 2048-bit encryption. However, because it used Windows' built-in encryption APIs, the private key was stored in plain text on the infected computer. Despite this flaw, the hackers still managed to earn at least $34,000 in the first month, according to Symantec. April 2014 - The cyber criminals behind CryptoDefense release an improved version called CryptoWall. While largely similar to the earlier edition, CryptoWall doesn't store the encryption key where the user can get to it. In addition, while CryptoDefense required the user to open an infected attachment, CryptoWall uses a Java vulnerability. Malicious advertisements on domains belonging to Disney, Facebook, The Guardian newspaper and many others led people to sites that were infected with CryptoWall, which then encrypted their drives. According to an August 27 report from Dell SecureWorks Counter Threat Unit (CTU): "CTU researchers consider CryptoWall to be the largest and most destructive ransomware threat on the Internet as of this publication, and they expect this threat to continue growing." More than 600,000 systems were infected between mid-March and August 24, with 5.25 billion files being encrypted. 1,683 victims (0.27%) paid a total of $1,101,900 in ransom. Nearly 2/3 paid $500, but the amounts ranged from $200 to $10,000. Koler.a: Launched in April, this police ransom Trojan infected around 200,000 Android users, 3⁄4 in the US, who were searching for porn and wound up downloading the software. Since Android requires permission to install any software, it is unknown how many people actually installed it after download. Users were required to pay $100 – $300 to remove it. May 2014 - A multi-national team composed of government agencies managed to disable the Gameover ZeuS Botnet. The U.S. Department of Justice also issued an indictment against Evgeniy Bogachev, who operated the botnet from his base on the Black Sea. iDevice users in Australia and the U.S. started seeing a lock screen on their iPhones and iPads saying that it had been locked by "Oleg Pliss" and requiring payment of $50 to $100 to unlock. It is unknown how many people were affected, but in June the Russian police arrested two people responsible and reported how they operated.
This didn't involve installing any malware, but was simply a straight-up con using people's naiveté and features built into iOS. First, people were scammed into signing up for a fake video service that required entering their Apple ID. Once they had the Apple ID, the hackers would create iCloud accounts using those IDs and use the Find My iPhone feature, which includes the ability to lock a stolen phone, to lock the owners out of their own devices. July 2014 - The original Gameover ZeuS/CryptoLocker network resurfaced, no longer requiring payment using a MoneyPak key in the GUI; instead, users must install Tor or another layered encryption browser to pay them securely and directly. This allows malware authors to skip money mules and improve their bottom line. Cryptoblocker – July 2014: Trend Micro reported a new ransomware that doesn't encrypt files that are larger than 100MB and will skip anything in the C:\Windows, C:\Program Files and C:\Program Files (x86) folders. It uses AES rather than RSA encryption. On July 23, Kaspersky reported that Koler had been taken down, but didn't say by whom. August 2014 - Symantec reports crypto-style ransomware has seen a 700 percent-plus increase year-over-year. SynoLocker appeared in August 2014. Unlike the others, which targeted end-user devices, this one was designed for Synology network attached storage devices. And unlike most encryption ransomware, SynoLocker encrypts the files one by one. Payment was 0.6 Bitcoins and the user has to go to an address on the Tor network to unlock the files. This was discovered in midsummer 2014 by Fedor Sinitsyn, a security researcher for Kaspersky. Early versions only had an English language GUI, but then Russian was added. The first infections were mainly in Russia, so the developers were likely from an eastern European country, not Russia, because the Russian security services quickly arrest and shut down any Russians hacking others in their own country. Late 2014 - TorrentLocker – According to iSight Partners, TorrentLocker "is a new strain of ransomware that uses components of CryptoLocker and CryptoWall but with completely different code from these other two ransomware families." It spreads through spam and uses the Rijndael algorithm for file encryption rather than RSA-2048. Ransom is paid by purchasing Bitcoins from specific Australian Bitcoin websites. Early 2015 - CryptoWall takes off and replaces CryptoLocker as the leading ransomware infection. April 2015 - CryptoLocker is now being localized for Asian countries. There are attacks in Korea, Malaysia and Japan. May 2015 - It's heeere. Criminal ransomware-as-a-service has arrived. In short, you can now go to this TOR website "for criminals by criminals", roll your own ransomware for free, and the site takes a 20% kickback of every Bitcoin ransom payment. Also in May 2015, a new strain shows up that is called Locker and has been infecting employees' workstations but sat there silently until midnight May 25, 2015, when it woke up. Locker then started to wreak havoc in a massive way. May 2015 - New "Breaking Bad-themed ransomware" gets spotted in the wild. Apart from the Breaking Bad theme, CryptoLocker.S is pretty generic ransomware. It is surprising how fast ransom Trojans have developed. A year ago every new strain was headline news; now it's on page 3. This version grabs a wide range of data files and encrypts them using a random AES key, which is then encrypted using a public key.
June 2015 - SANS InfoSec forum notes that a new version of CryptoWall 3.0 is in the wild, using resumes of young women as a social engineering lure: "resume ransomware". June 2015 - The FBI, through their Internet Crime Complaint Center (IC3), released an alert on June 23, 2015, that between April 2014 and June 2015 the IC3 received 992 CryptoWall-related complaints, with victims reporting losses totaling over $18 million. Ransomware gives cybercriminals almost a 1,500% return on their money. July 2015 - KnowBe4 released the first version of their Ransomware Hostage Rescue Manual. Get the most informative and complete hostage rescue manual on ransomware. This 20-page manual is packed with actionable info that you need to prevent infections, and what to do when you are hit with malware like this. You also get a Ransomware Attack Response Checklist and Prevention Checklist. July 2015 - An Eastern European cybercrime gang has started a new TorrentLocker ransomware campaign where whole websites of energy companies, government organizations and large enterprises are being scraped and rebuilt from scratch to spread ransomware using Google Drive and Yandex Disk. July 2015 - Security researcher Fedor Sinitsyn reported on the new TeslaCrypt V2.0. This family of ransomware is relatively new; it was first detected in February 2015. It's been dubbed the "curse" of computer gamers because it targets many game-related file types. September 2015 - An aggressive Android ransomware strain is spreading in America. Security researchers at ESET discovered the first real example of malware that is capable of resetting the PIN of your phone to permanently lock you out of your own device. They called it LockerPin, and it changes the infected device's lock screen PIN code and leaves victims with a locked mobile screen, demanding a $500 ransom. September 2015 - The criminal gangs that live off ransomware infections are targeting small and medium-sized businesses (SMBs) instead of consumers, a new Trend Micro analysis shows. The reason SMBs are being targeted is that they generally do not have the same defenses in place as a large enterprise, but are able to afford a 500 to 700 dollar payment to get access to their files back. The Miami County Communication Center's administrative computer network system was compromised with a CryptoWall 3.0 ransomware infection which locked down their 911 emergency center. They paid a 700 dollar Bitcoin ransom to unlock their files. October 2015 - A new ransomware strain spreads using remote desktop and terminal services attacks. The ransomware is called LowLevel04 and encrypts data using RSA-2048 encryption; the ransom is double the usual $500, demanding four Bitcoin. Especially nasty is how it gets installed: brute-force attacks on machines that have Remote Desktop or Terminal Services installed and have weak passwords. October 2015 - The nation's top law enforcement agency is warning companies that they may not be able to get their data back from cyber criminals who use CryptoLocker, CryptoWall and other malware without paying a ransom. "The ransomware is that good," said Joseph Bonavolonta, the Assistant Special Agent in Charge of the FBI's CYBER and Counterintelligence Program in its Boston office. "To be honest, we often advise people just to pay the ransom." October 2015 - Staggering CryptoWall ransomware damage: $325 million. A brand new report from the Cyber Threat Alliance showed the damage caused by a single criminal Eastern European cyber mafia.
The CTA is an industry group with big-name members like Intel, Palo Alto Networks, Fortinet and Symantec and was created last year to warn about emerging cyber threats. November 2015 - CryptoWall v4.0 is released; it displays a redesigned ransom note, uses new filenames, and now encrypts a file's name along with its data. In summary, the new v4.0 release now encrypts file names to make it more difficult to determine important files, and has a new HTML ransom note that is even more arrogant than the last one. It also gets delivered with the Nuclear Exploit Kit, which causes drive-by infections without the user having to click a link or open an attachment. November 2015 - A ransomware news roundup reports a new strain with a very short 24-hour deadline, researchers cracking the Linux.Encoder strain, and British Parliament computers getting infected with ransomware. Here is a graph made by the folks at Bromium over the years 2013-2015 as an illustration: January 2016 - A stupid and damaging new ransomware strain called 7ev3n encrypts your data and demands 13 bitcoins to decrypt your files. A 13 bitcoin [almost $5,000] ransom demand is the largest we have seen to date for this type of infection, but that is only one of the problems with this ransomware. In addition to the large ransom demand, the 7ev3n crypto-ransom malware also does a great job trashing the Windows system that it was installed on. DarkReading reports on a Big Week In Ransomware. February 2016 - Ransomware criminals infect thousands with a weird WordPress hack. An unexpectedly large number of WordPress websites have been mysteriously compromised and are delivering the TeslaCrypt ransomware to unwitting end-users. Antivirus is not catching this yet. February 2016 - It's Here. New Ransomware Hidden In Infected Word Files. It was only a matter of time, but some miscreant finally did it. There is a new ransomware strain somewhat amateurishly called "Locky", but this is professional-grade malware. The major headache is that this flavor starts out with a Microsoft Word attachment which has malicious macros in it, making it hard to filter out. Over 400,000 workstations were infected in just a few hours, data from Palo Alto Networks shows. Behind Locky is the deadly Dridex gang, the 800-pound gorilla in the banking Trojan racket. March 2016 - MedStar receives a massive ransomware demand. The MedStar hospital chain was hit with ransomware and has received a digital ransom note. A Baltimore Sun reporter has seen a copy of the cybercriminal's demands. "The deal is this: Send 3 bitcoins — $1,250 at current exchange rates — for the digital key to unlock a single infected computer, or 45 bitcoins — about $18,500 — for keys to all of them." April 2016 - News came out about a new type of ransomware that does not encrypt files but makes the whole hard disk inaccessible. As if encrypting files and holding them hostage is not enough, cybercriminals who create and spread crypto-ransomware are now resorting to causing a blue screen of death (BSoD) and putting their ransom notes at system startup—as in, even before the operating system loads. It's called Petya and is clearly Russian. April 2016 - The Ransomware That Knows Where You Live. It's happening in the UK today, and you can expect it in America tomorrow [correction- it's already happening today]. The bad guys in Eastern Europe are often using the U.K. as their beta test area, and when a scam has been debugged, they go wide in the U.S.
So here is what's happening: victims get a phishing email that claims they owe a lot of money, and it has their correct street address in it. The phishing emails tell recipients that they owe money to British businesses and charities when they do not. April 2016 - Hello mass spear phishing, meet ransomware! Ransomware is now one of the greatest threats on the internet. Also, a new ransomware strain called CryptoHost was discovered, which claims that it encrypts your data and then demands a ransom of .33 bitcoins to get your files back (~140 USD at the current exchange rate). These cybercrims took a shortcut, though: your files are not encrypted but copied into a password-protected RAR archive. April 2016 - The Future of Ransomware: CryptoWorms? Cisco's Talos Labs researchers had a look into the future and described how ransomware would evolve. It's a nightmare. They created a sophisticated framework for next-gen ransomware that will scare the pants off you. Also, a new strain of ransomware called Jigsaw starts deleting files if you do not pay the ransom. April 2016 - Ransomware on pace to be a $1 billion business in 2016. CNN Money reports that new estimates from the FBI show that the costs from so-called ransomware have reached an all-time high. Cyber-criminals collected $209 million in the first three months of 2016 by extorting businesses and institutions to unlock computer servers. At that rate, ransomware is on pace to be a $1 billion a year crime this year. Here is a graph created by the folks at Proofpoint which illustrates the growth of new strains in Q1, 2016: Late April 2016 - Scary new CryptXXX ransomware also steals your Bitcoins. Now here's a new hybrid nasty that does a multitude of nefarious things. A few months ago the 800-pound Dridex cyber gang moved into ransomware with Locky, and now their competitor Reveton follows suit and tries to muscle into the ransomware racket with an even worse criminal malware multitool. At the moment CryptXXX spreads through the Angler Exploit Kit, which infects the machine with the Bedep Trojan, which in turn drops information stealers on the machine, and now adds professional-grade encryption, appending a .crypt extension to the filename. Here is a blog post that looks at the first 4 months of 2016 and describes an explosion of new strains of ransomware.
<urn:uuid:65728d9b-6df6-4d26-8f16-7ea5418584ba>
CC-MAIN-2017-09
https://blog.knowbe4.com/a-short-history-evolution-of-ransomware
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00187-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94705
5,670
2.921875
3
A greenhouse is a structure developed for growing plants in a controlled environment. These structures capture the incoming visible solar radiation and retain heat to provide a favorable environment for plant growth. Traditionally, the majority of greenhouses used soil as the base for growing plants. However, hydroponic or soil-less horticulture has recently started gaining popularity in the greenhouse industry. The greenhouse market is analyzed by different types and on the basis of technologies used. The type segments considered for the market estimation of greenhouses include hydroponic and non-hydroponic techniques. The greenhouse market can be segmented by ingredients and sub-markets. Ingredients of this market are polylactic acid, polyhydroxyalkanoate (PHA), dispersants, polyethylene (PE), superabsorbents, solvents, plastic films and sheets, and amphoteric surfactants. Sub-markets of this market are permanent greenhouse, macro tunnels, and low tunnels. Key Questions Answered - What are the market estimates and forecasts; which of the greenhouse markets are doing well and which are not? What makes our report unique? - This report provides the most granular segmentation on permanent greenhouse, macro tunnels, and low tunnels. - This report provides market sizing and forecasts for the greenhouse market, along with the drivers, restraints, and opportunity analysis for each of the micro markets. Audience for this report - Global greenhouse companies - Manufacturing companies - Traders, distributors, and suppliers - Governmental and research organizations - Associations and industry bodies - Technology providers Along with the market data, you can also customize MMM assessments that meet your company's specific needs. Customize to get comprehensive industry-standard and deep-dive analysis of the following parameters: - Demand estimation for greenhouse equipment for each specific country/geographic region - Prioritization of the equipment after classifying it based on value, application, and frequency - Intricate research to study the dominant market forces that steer the product's regional growth by analyzing its supply and demand - To recognize the highly sought and most suited equipment specifications for each crop type and regional market Supply Chain Analysis - To identify the least-cost green supplier sourcing across the world based on a set of qualifying criteria - To identify the potential distribution pockets based on selective benchmarks - To study the efficiency of the distribution patterns of the competitor's product range with greenhouse equipment - To detect and flag supply chain disruptions from procurement through distribution, and reduce the risk with possible alternative solutions - A very critical study of the competitor's strengths and capabilities, upcoming strategic moves, and their business innovations for this sector to single out effective solutions - Analysis of the effectiveness of the competitor's product and service portfolio in any desired location - Analysis of the market share of the products in your preferred state/region in order to converge on their success factors - To provide a smooth legal passage through the regulatory framework of local authorities by analyzing the barriers in conducting trade - To analyze the duty and tax regulations in the procurement and assembling of components in a preferred region/country in case of setting up a manufacturing plant - In-depth trend analysis of the functional patterns of greenhouse equipment in your preferred choice of
region/state/country - To provide information on crop suitability and industrial innovations for greenhouse applications and its future prospects Social Connect Forum - To discern the sustainable and customer-friendly novelties - To understand the product's stage and rate of adoption through customer opinions - Using quality function deployment, expert views, and ideas that are considered to redesign and validate a promising product for the future
<urn:uuid:bf3c1a2f-ecc6-45d1-b8d5-d9a857a26ffc>
CC-MAIN-2017-09
http://www.micromarketmonitor.com/market-report/greenhouse-reports-2983600293.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00007-ip-10-171-10-108.ec2.internal.warc.gz
en
0.894242
849
2.671875
3
Nearly fourteen billion years ago, the big bang transformed the universe from a hot, dense state into a totally different new world. Every moment something new is happening. As a small part of this new world, humanity is now also experiencing a big bang. However, this time it is about information, which is changing people's lives tremendously and is known as big data. Big data is no longer a strange word to us. It is generally used to describe a massive volume of both structured and unstructured data that is so large and complex that it can hardly be processed by traditional database and software techniques. It also refers to high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making. Usually data scientists break big data into four dimensions. Since the appearance of the Internet, the way people communicate has changed a lot, and the information produced has increased enormously, with the characteristics of volume, variety and velocity. The right use of big data allows analysts to spot trends and gives niche insights that help create value and innovation much faster than conventional methods. To better understand the benefits of big data, there are several things that big data can do to improve people's daily life. Apparently big data has tremendous possibilities. However, as data accumulates at exponentially increasing rates, making use of that information won't be easy. Besides difficulties like understanding the data, displaying meaningful results and dealing with outliers, the use of big data also faces challenges like data transmission and data storage, which are related to optical communication and are affecting the optical communication industry. Most of the information produced today is transmitted via the Internet. Big data needs to store and transmit that information, which means the transport network for big data must be efficient and have high transmission capacity. In addition, there is another thing that stays closely with big data – "Cloud", which refers to a type of computing that relies on sharing computing resources from applications to data centers over the Internet instead of your own computer's hard drive. Cloud stores everyone's cold, hard data like a big hard drive in the sky. And now, big data will store all the warm and fuzzy relationships between those data sets, a kind of social media for bits and bytes. According to research, the application of Cloud, which also requires high-speed transmission capacity, is currently the biggest driver of continued growth in optical. All in all, big data needs a big network, and optical networking is now the best solution to satisfy these demands. Some companies today also manufacture optical communication products for big data. In this way, big data and the optical communication industry promote each other. 57.6% of organizations surveyed say that big data is a challenge. We currently only see the beginnings of big data applications, which are just the tip of the iceberg. The great potential and possibilities are still there to be explored.
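To put a rough number on the claim that big data needs a big network, the sketch below estimates how long it takes to move a bulk dataset over links of different capacities. The dataset size and the 80 percent usable-throughput figure are hypothetical, chosen only to illustrate the scale.

```python
# Rough illustration (hypothetical numbers) of why big data needs high-capacity links:
# time to move one bulk dataset at different line rates.
dataset_tb = 100                          # hypothetical dataset size, in terabytes
dataset_bits = dataset_tb * 1e12 * 8

for name, gbps in [("1 Gb/s", 1), ("10 Gb/s", 10), ("100 Gb/s optical", 100)]:
    seconds = dataset_bits / (gbps * 1e9 * 0.8)   # assume ~80% usable throughput
    print(f"{name:>16}: {seconds / 3600:6.1f} hours")
```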
<urn:uuid:b091b5b5-885b-4568-85db-94aa76b1a180>
CC-MAIN-2017-09
http://www.fs.com/blog/when-information-explosion-meets-big-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00127-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948591
600
3.578125
4
Sony and Lego are working together on creating a new generation of products that bridge the gap between toys and video games. A team of researchers at Sony Computer Science Laboratories in Tokyo is embedding tiny motors, cameras, and actuators into Lego blocks. One demonstration uses two small motorized Lego platforms, one of which is computer-controlled and will relentlessly pursue the other, which can be maneuvered using a wireless PlayStation controller or by hand. (See a video of the project and others on YouTube.) Any combination of blocks can be built up on the platforms, and more platforms can be added. The system could be used to create Lego battles, or simply play tag. The research team has also added actuators that can cause Lego structures to crumble on demand, and camera blocks that can beam first-hand video of the action to tablets and smartphones. "Lego is concerned about losing kids to video games," said Ganta Kondo from Sony's research and development division. "We want to keep the size small, but add interactive games." The project is currently in the experimental stages, and there are no concrete plans for consumer products from the collaboration. Current problems include the short battery life that comes with tiny components and accurately keeping track of the active components. The research was shown as part of an open house at the Sony Lab. Another project was an application that can stretch and twist video of objects in real time, allowing for instance restaurants to be enlarged based on their popularity. Other research includes a small quadcopter that is controlled by the physical movements of its operator, who wears a Sony head-mounted video display, and an Android-based server for remote control of home appliances.
<urn:uuid:b6411709-b918-49c1-9f12-14aadfe5d2b9>
CC-MAIN-2017-09
http://www.networkworld.com/article/2166611/data-center/sony-and-lego-collaborating-on-toy-research.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00303-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949701
344
3.125
3
CVE Terminology and FAQ What is CVE? CVE is a list of information security vulnerabilities and exposures that aims to provide common names for publicly known problems. The goal of CVE is to make it easier to share data across separate vulnerability capabilities (tools, repositories, and services) with this "common enumeration." Please visit http://cve.mitre.org/about/faqs.html for more information What is a "Vulnerability?" An information security "vulnerability" is a mistake in software that can be directly used by a hacker to gain access to a system or network. What is an "Exposure?" An information security exposure is a mistake in software that allows access to information or capabilities that can be used by a hacker as a stepping-stone into a system or network. What is a CVE Identifier? CVE Identifiers (also called "CVE names," "CVE numbers," "CVE-IDs," and "CVEs") are unique, common identifiers for publicly known information security vulnerabilities. Each CVE Identifier includes the following: Who owns CVE? The MITRE Corporation maintains CVE and this public Web site, manages the compatibility program, and provides impartial technical guidance to the CVE Editorial Board throughout the process to ensure CVE serves the public interest. Please visit cve.mitre.org for more information
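As a small, hedged illustration of the naming scheme, the snippet below checks strings against the CVE-YYYY-NNNN pattern (the sequence part may have four or more digits); it is only a sketch, not an official MITRE tool.

```python
# Small illustration: validate strings against the CVE Identifier naming pattern
# (CVE-YYYY-NNNN, where the sequence part has four or more digits).
import re

CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

for candidate in ("CVE-2012-1823", "CVE-2014-123456", "cve-2012-1823", "CVE-12-34"):
    print(candidate, "->", "valid" if CVE_ID.match(candidate) else "invalid")
```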
<urn:uuid:979cec8f-85a1-4e88-aae0-30138ec13e23>
CC-MAIN-2017-09
https://www.cvedetails.com/cve-help.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00355-ip-10-171-10-108.ec2.internal.warc.gz
en
0.89875
275
2.96875
3
The new strain of bird flu infecting and killing people in China is on the move. All of the reported cases had been contained to relatively few hotspots, but the first reported case of a human infection outside mainland China arrived Wednesday, and that's got the world's top scientists pretty worried about this H7N9 strain—even if it's not being transmitted from person to person. A 53-year-old man from Taiwan recently returned from a trip to mainland China and showed signs of being infected with the new bird flu virus three days after landing home. (Taiwan is technically part of the larger Republic of China, but also its own country. It's complicated.) He's in critical condition and remains quarantined at a Taiwanese hospital, where he's been since April 16. "This is the first confirmed H7N9 case in Taiwan who was infected abroad," Taiwan's Health Minister Chiu Wen-ta told reporters. This is also very concerning because the World Health Organization just finished its own investigation into the H7N9 virus and it's worried the strain could be even worse than SARS or the H5N1 bird flu. Remember SARS and the first bird flu? They weren't fun. "This is one of the most lethal influenza viruses we have seen so far," the WHO's assistant director-general for health security, Dr. Keiji Fukuda, said at a briefing Wednesday. "This is an unusually dangerous virus for humans," he added.
<urn:uuid:8fc56827-9265-466d-be59-8714b547b854>
CC-MAIN-2017-09
http://www.nextgov.com/health/2013/04/bird-flu-has-spread-beyond-china-and-its-one-most-lethal-ever/62754/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00055-ip-10-171-10-108.ec2.internal.warc.gz
en
0.981217
300
2.640625
3
When a computer system is able to reduce tax expenditures by millions of dollars and generate as much as $27 in savings for every dollar spent on expenses, you would think state officials would be scrambling to use it. But that's not the case with the Public Assistance Reporting Information System, better known as PARIS, a four-year-old system designed to reduce improper payments in public assistance programs. So far, only 16 states are using PARIS to identify individuals or families who may be receiving benefit payments from TANF (Temporary Assistance for Needy Families), Medicaid or Food Stamps in more than one state, according to a report by the General Accounting Office (GAO). In February 2001, PARIS identified almost 33,000 instances in which improper payments were potentially made to individuals who appeared to reside in more than one state. Just under half of the potential improper payments involved Medicaid benefits; the rest involved some combination of TANF, Medicaid and Food Stamps. So far, four states and the District of Columbia have collected data on the benefits of the interstate matching system and have documented $16 million in savings. According to the GAO, their analysis suggests PARIS could help other states save program funds by identifying and preventing future improper payments. Among the 34 states not participating when the GAO released its report in September were California, Texas, Michigan and Ohio, all of which account for a significant portion of welfare expenditures.
Lack of Information
Each year, the United States spends approximately $230 billion on public assistance, Medicaid and Food Stamps. Millions are lost annually when individuals and families receive duplicate benefit payments from more than one state. Part of the problem has been the lack of information sharing between federal agencies that run the welfare programs and states that administer them. In 1997, the Department of Health and Human Services started PARIS so states could share eligibility information and identify improper benefit payments. PARIS works by comparing states' benefit recipient lists with one another using individual Social Security numbers, as well as name and address information. Computers at the Defense Manpower Data Center search for matches, and any hits are forwarded to the appropriate state, where staff can take steps to verify the information and decide whether to cut off benefits. Few states have taken the time to compare the program's costs to the benefits, but studies of its benefits clearly indicate that computer matching saves tax dollars. For example, Pennsylvania estimated that PARIS uncovered more than $2.8 million in savings in its TANF, Medicaid and Food Stamp programs. Maryland said that it saved $7.8 million in the Medicaid program during the first year PARIS was in operation. Kansas estimated that PARIS produced a savings-to-cost ratio of about 27 to 1. According to the GAO, if states used data from all three public assistance programs in their matching activities (not all do), the net savings could outweigh the costs of PARIS. On average, the savings-to-cost ratio would be 5 to 1. Based on data provided by the three states, approximately 20 percent of match hits end up being valid. In addition to the savings generated by participation in PARIS, states also gain from the program's internal controls that help ensure public assistance payments are only made to or on behalf of people who are eligible for them.
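To make the matching idea concrete, the sketch below shows, in miniature, how recipient rolls from two states could be compared on a shared identifier. The field names and records are invented for illustration; the real PARIS match runs at the Defense Manpower Data Center against full state files and is followed by manual verification before any benefits are cut off.

```python
# Illustrative sketch only: field names and records are invented, and a real
# match would be followed by caseworker verification of residency/eligibility.
from collections import defaultdict

def find_interstate_matches(state_rolls):
    """state_rolls maps a state code to a list of recipient records.

    Returns identifiers that appear on the rolls of more than one state,
    along with the states involved, so staff can investigate further.
    """
    seen = defaultdict(set)                 # ssn -> set of states claiming it
    for state, records in state_rolls.items():
        for record in records:
            seen[record["ssn"]].add(state)
    return {ssn: states for ssn, states in seen.items() if len(states) > 1}

rolls = {
    "PA": [{"ssn": "123-45-6789", "name": "J. Doe", "program": "Medicaid"}],
    "MD": [{"ssn": "123-45-6789", "name": "J. Doe", "program": "TANF"},
           {"ssn": "987-65-4321", "name": "A. Smith", "program": "Food Stamps"}],
}
print(find_interstate_matches(rolls))       # the shared SSN maps to both states
```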
Even with the success so far, PARIS has been limited in its effectiveness. Most notably, only one-third of the states participate, leaving a large portion of the public assistance population not covered by the matching system. Second, PARIS has been hampered by coordination and communication problems among its participants. Third, some participating states give PARIS low priority, resulting in many duplicate payments left unresolved. Finally, the system suffers from the fact that it can't prevent duplicate payments from occurring, but can only identify and stop those that have already taken place.
<urn:uuid:66a13bab-07c3-4e99-8ad5-88e31cdca685>
CC-MAIN-2017-09
http://www.govtech.com/e-government/Shunning-PARIS.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00475-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964821
776
2.515625
3
Once again, our friends at sister brand Dr. Dobbs have provided me with yet more interesting blog fodder. In his editor's note in the daily Dr. Dobbs Update email, editorial director Jonathan Erickson discusses a breakthrough by Australian researchers who claimed to have developed the first "bug free" embedded software. Erickson notes that it's great to say you'd like to have bug free applications, but what happens when the underlying operating system is infested with bad code? Windows XP consists of approximately 40 million lines of code. Try finding the error in that tangle! To solve this problem, researchers at Open Kernel Labs (OK Labs) and Australia's National Information and Communications Technology Research Centre (NICTA) examined the correctness, reliability and security of the microkernel technology underlying OKL4, OK Labs' virtualization platform for mobile devices. Erickson says their approach was to "create a mathematical method for proving the correctness of the underlying source code, using formal logic and programmatic theorem checking. The verification process eliminated a wide range of exploitable errors, such as design flaws and common code-based errors, buffer overflows, null-pointer dereferences, memory leaks, arithmetic overflows, and exceptions." Once the work was done, OK Labs claimed it created "the world's first 100 percent verified 'bug free' embedded software." The researchers said this helped establish a new level of software security and reliability for mission-critical applications, such as aerospace and defense. Additionally, this same verification process can be applied to business-critical applications in mobile telephony, business intelligence, and even mobile financial transactions. Erickson drills down a bit deeper: "All in all, the researchers verified approximately 7,500 lines of source code, proving over 10,000 intermediate theorems in over 200,000 lines of formal proof. The verified code base-the seL4 kernel (short for 'secure embedded L4')-is a third-generation microkernel, comprising 8,700 lines of C code and 600 lines of assembler, that runs on ARMv6 and x86 platforms. According to OK Labs, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. In this case, 'functional correctness' means that the implementation always strictly follows a high-level abstract specification of kernel behavior. This includes traditional design and implementation safety properties (such as the kernel will never crash, and it will never perform an unsafe operation). It also proves that programmers can predict precisely how the kernel will behave in every possible situation." I'm not pretending to understand the minutiae of programming, but the implications of these findings seem to indicate a potential for greater reliability in mobile payment systems. OKL4 is OK Labs' virtualization application for mobile platforms. It sounds like virtualization for mobile is in its early stages, from what the company's website says, but still presents an interesting case for how mobile applications will evolve. OK Labs claims OKL4 enables mobile OEMs and semiconductor suppliers to incorporate "must-have features" into new mobile designs more quickly and less expensively. The idea is to reduce costs through hardware consolidation, allowing device manufacturers to "create smartphones at featurephone prices." So I decided to contact OK Labs and see exactly how this would help m-financial services.
According to Rob McCammon, the VP of product management, Open Kernel Labs, the main benefit will come from strong security for the mobile channel. The operating system tends to be the most attractive target to hackers in any computerized system. They exploit certain software bugs that can compromise the security of the OS, he says. Once the operating system is compromised the rest of the software in the system is made vulnerable. "Mobile financial transactions require and benefit from strong security," he says. "Stronger security can lower the risk of financial loss from fraud or theft. Additionally, confident users of systems can lead to higher (and more secure) transactions." He concludes, "The completion of this research demonstrates that it is possible to create an operating system kernel or hypervisor that is free of a wide range of bugs. The presence of bugs in a system opens the door to attacks on a mobile phone's privileged mode software. The research shows that a higher level of security and confidence can be provided than was previously thought possible." The company hopes to bring this secure and verified "Microvisor" to market in its virtualization platforms for mobile OEMs, mobile network operators, and IT managers building mobile-to-enterprise applications. Since mobile financial services is still in its early stages (certainly in the U.S.), there haven't been many reports of exploits outside of eavesdropping on NFC signals in contactless payment transactions. This research, if broadly embraced, might help financial institutions start off on the right foot when developing mobile applications-with the security baked in from the start.
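To illustrate in miniature what "the implementation always strictly follows a high-level abstract specification" means, consider the toy below. It checks a tiny "implementation" against its "spec" by brute force over every input; the seL4 work, by contrast, used machine-checked mathematical proof in an interactive theorem prover, which covers all possible states rather than a finite test. The example is purely illustrative and has nothing to do with the actual kernel code.

```python
# Toy illustration only: a "specification" and an "implementation" of saturating
# addition on 8-bit values, checked exhaustively. A real functional-correctness
# proof (as in the seL4 work) is a machine-checked proof over all possible
# states, not a finite test like this one.
def spec_saturating_add(a: int, b: int) -> int:
    """Abstract spec: the mathematical sum, clamped to the 8-bit range."""
    return min(a + b, 255)

def impl_saturating_add(a: int, b: int) -> int:
    """'Implementation': low-level style arithmetic with wrap-around detection."""
    s = (a + b) & 0xFF
    return 255 if s < a else s          # result wrapped around -> saturate

assert all(
    impl_saturating_add(a, b) == spec_saturating_add(a, b)
    for a in range(256) for b in range(256)
), "implementation deviates from its specification"
print("implementation matches the specification on all 65,536 inputs")
```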
<urn:uuid:e9d74e28-7e27-45dc-8357-7d8f7504b21f>
CC-MAIN-2017-09
http://www.banktech.com/channels/aussies-claim-to-have-developed-bug-free-os-for-mobile-platforms/d/d-id/1293115
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00651-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931916
1,003
2.5625
3
Multimode fibers are identified by the OM (“optical multimode”) designation outlined in the ISO/IEC 11801 standard. Today, this evolution continues with the development of OM4 fiber as the industry prepares itself for speeds of 40 and 100 Gb/s. OM3 and OM4 are both laser-optimized multimode fiber (LOMMF) and were developed to accommodate faster networks such as 10, 40, and 100 Gbps. Both are designed for use with 850-nm VCSELs (vertical-cavity surface-emitting lasers) and have aqua sheaths. When the 10 Gigabit Ethernet (10GbE) standard was released in 2002, the fiber optic links of 10GBASE-SR were standardized to at least 300 meters over Optical Multimode 3 (OM3) fiber. OM3 is the leading type of multimode fiber being deployed today in the data center, but it isn’t the best fiber anymore. Optical Multimode 4 (OM4) fiber was standardized in 2009 by the Telecommunications Industry Association. OM4 is now the latest and greatest multimode fiber, and the IEEE is setting the supported distance of 10GbE to at least 400 meters. Given the premium for single-mode transceivers, OM4 fiber is the best option for the small percentage of users needing to run 10 Gb/s over links between 300 and 550 meters (or the even smaller percentage who anticipate running 40 or 100 Gb/s between 100 and 150 meters). OM4 cable can be regarded as an improvement on the existing OM3 standard. The key performance differences lie in the bandwidth specifications, for which the TIA standard stipulates the following three benchmarks: effective modal bandwidth of at least 4,700 MHz-km at 850 nm; overfilled modal bandwidth of at least 3,500 MHz-km at 850 nm; and overfilled modal bandwidth of at least 500 MHz-km at 1,300 nm. Both rival single-mode fiber in performance while being significantly less expensive to implement. When the 40GbE and 100GbE standards were released in 2010, OM4 was designed into the standard, supporting a reach of 150 meters, while OM3 fiber supports 100 meters. Ninety percent of all data centers have their runs under 100 meters, so it really just comes down to cost. Laser-optimized multimode fiber is recognized as the medium of choice to support high-speed data networks. OM4 fiber, in effect, defines the next-generation multimode optical fiber for high-speed fiber optic transmission. Economically, the cost of an OM4 fiber solution is much lower than using single-mode because of the price of the optoelectronics: at 40 Gbps the single-mode optics cost approximately 3 times more, and at 100 Gbps approximately 10 times more. Unlike the multimode solution, the single-mode solution does not use multiple strands and lasers to reach 40/100 Gbps; instead, it leverages CWDM.
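The distance figures quoted above can be condensed into a simple selection helper. The sketch below uses only the reach numbers mentioned in this article; real link budgets also depend on the transceiver, connector losses, and the exact IEEE clause, so treat it as illustrative rather than a design tool.

```python
# Reach figures (meters) as quoted in the text above; actual supported distance
# depends on the transceiver, connector losses, and the specific IEEE standard.
REACH_M = {
    ("OM3", 10):  300,
    ("OM4", 10):  400,   # some engineered OM4 links reach roughly 550 m
    ("OM3", 40):  100,
    ("OM4", 40):  150,
    ("OM3", 100): 100,
    ("OM4", 100): 150,
}

def fiber_options(speed_gbps: int, link_length_m: float):
    """Return the multimode fiber grades whose quoted reach covers the link."""
    return [grade for (grade, speed), reach in REACH_M.items()
            if speed == speed_gbps and link_length_m <= reach]

print(fiber_options(10, 350))   # ['OM4']
print(fiber_options(40, 120))   # ['OM4']
print(fiber_options(10, 250))   # ['OM3', 'OM4']
```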
<urn:uuid:46e00cd9-3b6b-4abc-b067-0ff8907fa5db>
CC-MAIN-2017-09
http://www.fs.com/blog/om4-fiber-for-high-speed-applications.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00527-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940707
623
2.796875
3
1. DOMAIN NAME SERVICE
Domain Name Service (“DNS”) is a cornerstone capability/requirement in any use of the Internet. Domain names and DNS servers are essential to the proper functioning of anyone who uses or provides services via the Internet. As specified by Internet RFCs 1034 and 1035, “There must be a valid Internet Domain Name attached to any network connected to the Internet.” As such, Customer must have a registered Internet Domain Name before Electric Lightwave Holdings, Inc. and its wholly owned subsidiaries (“Company”) can host primary DNS for Customer’s network, or provide secondary DNS for Customer’s network.
2. ROUTING ABILITY ON THE INTERNET
Customer acknowledges and recognizes that the Internet is a world-wide interconnection of privately owned networks and as such, the ability to route or transmit or receive messages, data and/or files is limited to the capabilities of the various systems and the individual policies of the network owners. COMPANY will maintain its own network in its sole discretion, and in a fashion that will provide the necessary bandwidth to carry Customer’s contracted traffic in an efficient manner. COMPANY will filter non-aggregated routes at a level that is consistent with best engineering practices and enhances COMPANY’s network stability. While COMPANY strives to deliver as near error-free transmission and access Services as reasonably possible, it accepts no responsibility for failure of routes, connections, packet loss or router/server rejections that are beyond its control. COMPANY, from time to time, purchases network access from other national service providers to facilitate its own deployed backbone networks. Because the information flow and network traffic changes dynamically, COMPANY may find it necessary to rebalance its own backbone to provide efficient routing capabilities. These changes may impact the routing paths that a Customer’s information uses to enter or exit COMPANY’s network. For these reasons, COMPANY does not guarantee specific network entrance or exit points.
3. DEMONSTRATION OF A WORKING CONNECTION
COMPANY will use the following methods to demonstrate that its Internet data network is functioning between COMPANY equipment and Customer’s equipment, as specified. These methods will determine whether COMPANY has met its obligations to provide a working interconnection with COMPANY’s routing equipment: (a) Internet Access Services. (i) If Customer has no terminating equipment installed at Customer’s end of the circuit, Customer or COMPANY will provide an electrical loopback at the furthest reasonable point. COMPANY will transmit a properly framed signal to the loopback and will monitor the returned data for proper timing and framing. This demonstrates a functioning circuit. (ii) If Customer installs a CSU/DSU, COMPANY will send a loopup command to the CSU/DSU and will perform the same tests as in 6(a)(i) above, provided the CSU/DSU responds to the loopup command. (iii) If Customer has a working router attached to the CSU/DSU, COMPANY will perform the tests in 6(a)(ii), and COMPANY will send datagrams to the router and watch for them to be echoed back without errors. If the physical link tests good and the datagrams return without error, then COMPANY has met its obligation for connectivity between Customer’s location and COMPANY’s terminating equipment.
4. DEMONSTRATION OF ROUTING IN COMPANY’S AUTONOMOUS SYSTEM
COMPANY requires that Customer use a static routing protocol according to the specifications contained in RFC1812.
BGP4 routing protocol may be used if approved by COMPANY’s Data Engineering department in writing prior to implementation and use of the BGP4 protocol. If BGP4 is approved, Customer will be allowed to transit Customer’s approved autonomous system number across COMPANY’s network. Requests to transit any additional autonomous system numbers across COMPANY’s network may be approved on a case-by-case basis and for a fee to be determined at the time of request. COMPANY will broadcast its BGP4 information to its network neighbors according to specifications contained in RFC1267. Customer may request that COMPANY respond to route failures. If the failure is caused by Customer’s network, the Customer will be charged time and materials at COMPANY’s prevailing rates.
5. RIGHTS AND OBLIGATIONS OF CUSTOMER
(a) Customer shall, at Customer’s expense, undertake all necessary preparation required to comply with COMPANY’s installation and maintenance instructions. Customer is responsible for obtaining IP addresses prior to order completion. IP addresses may be obtained either from ARIN at ARIN.net directly or via COMPANY. Clients must either complete the appropriate ARIN template located at the Internet address http://www.arin.net/library/templates/net-isp.txt for ISPs, http://www.arin.net/library/templates/net-end-user.txt for other users, or follow the instructions located on the Internet at http://www.electriclightwave.com/ip. All IP address space allocated or assigned by COMPANY is non-portable. Renumbering IP networks is considered a part of normal network management activities. All costs associated with all such renumbering activities, whether voluntary or involuntary, are solely the responsibility of Customer. Customer’s failure to obtain IP addresses prior to the installation and testing of Services does not release Customer from its obligation to accept such Services. In addition, if COMPANY supplies routers or other equipment to Customer as part of COMPANY Services (“Equipment”), Customer shall be responsible for the costs of relocation of such Equipment once installed by COMPANY, and shall provide to COMPANY and suppliers of communications lines reasonable access to Customer’s premises to maintain such Equipment or to perform any acts required by this Agreement. (b) Customer shall maintain a deliverable hostmaster@[Customer’s Internet Domain Name] mailbox, and agrees to actively review said mailbox on a regular basis. (c) Customer shall maintain a deliverable postmaster@[Customer’s Internet Domain Name] mailbox, and agrees to actively review said mailbox on a regular basis. (d) Customer shall maintain a deliverable abuse@[Customer’s Internet Domain Name] mailbox, and agrees to review and respond to messages received no less frequently than once per business day. (e) Customer understands further that the Internet contains unedited materials, some of which are sexually explicit or may be offensive to some people. Customer and End users and authorized users access such materials at their own risk. COMPANY has no control over and accepts no responsibility whatsoever for such materials. (f) Neither COMPANY nor its affiliates warrants that the Service will be uninterrupted or error free or that any information, software or other material accessible on the Service is free of viruses, worms, Trojan horses or other harmful components. (g) COMPANY has no obligation to monitor the Service.
However, Customer agrees that COMPANY has the right to monitor the Service electronically from time to time and to disclose any information as necessary to satisfy any law, regulation or other governmental request, to operate the Service properly, or to protect itself or its subscribers. As provided above, COMPANY will monitor the transmission of the Service. However, COMPANY will not monitor the content of any of the Service, including, but not limited to, any private electronic-mail messages. COMPANY reserves the right to refuse to post or to remove any information or materials, in whole or in part, that are in violation of this Agreement. (h) COMPANY does not guarantee sequential delivery of datagrams. Packet loss and latency are inherent in IP design. COMPANY will use reasonable efforts to maintain delivery of streaming media through User Datagram Protocol (“UDP”).
<urn:uuid:8b5c805c-f2ce-4f18-a154-6c0302fe125e>
CC-MAIN-2017-09
http://support.electriclightwave.com/knowledge-base/internet-access-policy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00047-ip-10-171-10-108.ec2.internal.warc.gz
en
0.898446
1,644
2.6875
3
Milky Way in High Resolution / February 7, 2012 A team of scientists led by the Max Planck Institute for Astrophysics (MPA) has produced the highest-resolution map of the Milky Way's magnetic field by using more than 41,000 measurements from 26 projects, as reported in Gizmag.com. Each of the map’s 41,330 individual data points represents a Faraday depth measurement, which is a value of magnetic field strength along a particular line of sight. “Polarized light from radio sources in space is observed for the Faraday effect, which describes the rotation of the plane of polarization,” Gizmag.com reported. “The degree and direction of rotation are determined, and from this the magnetic field strength in a given direction is established.” Above is a view of the Milky Way by Bala Sivakumar from our point of view; below is the actual map, whose red areas indicate the parts of sky where the magnetic field points toward the observer and blue areas indicate parts of sky where the magnetic field points away from the observer.
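The quantity behind each data point can be made a little more concrete. By the standard textbook relation (not spelled out in the article), the polarization angle rotates by Δχ = RM·λ², where RM is the rotation measure in rad/m²; surveys estimate RM by fitting the measured angle against λ² across several frequencies. The snippet below simply evaluates that relation for an assumed RM at two observing frequencies.

```python
import math

def rotation_angle_deg(rm_rad_per_m2: float, freq_hz: float) -> float:
    """Polarization rotation (degrees) from the textbook relation
    delta_chi = RM * wavelength**2, for a given rotation measure RM."""
    wavelength_m = 299_792_458.0 / freq_hz
    return math.degrees(rm_rad_per_m2 * wavelength_m ** 2)

# Example: an assumed RM of 50 rad/m^2 observed at two radio frequencies.
for freq in (1.4e9, 4.8e9):            # 1.4 GHz and 4.8 GHz
    print(f"{freq / 1e9:.1f} GHz: {rotation_angle_deg(50, freq):.1f} deg")
```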
<urn:uuid:cb506b5f-2634-4feb-9092-514b9cf6b4cf>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-Milky-Way-in-High-Resolution-02042012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00047-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935673
228
3.46875
3
With the coming of the digital age, corporate workforces are becoming accustomed to new and different types of learning. ELearning Industry reports that 41% of global Fortune 500 companies use some sort of educational technology to instruct employees. Because the average age of the online learner, as reported by Aurion Learning, is 34, traditional learning methodologies aren’t successful at promoting knowledge retention. For learning to be effective online, it needs to be highly interactive and participant-centered to keep everyone involved and engaged with the content. Information should not be pushed in a continuous stream of PowerPoint slides, but in a collaborative, interactive way that will allow the learners to pull the information they need from the training session and provide their own thoughts simultaneously.
<urn:uuid:c967e51f-8e7b-4768-b8e5-a5113a5f6364>
CC-MAIN-2017-09
https://blog.inxpo.com/type/image/page/29/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00099-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929586
145
2.6875
3
DARPA seeks to shape young minds
Educational program targets grade schoolers
By Henry Kenyon - Oct 12, 2010
The Defense Department’s research and development agency has started an initiative to increase the number of computer science graduates in the United States. The three-year, $14.2 million program will use a variety of online tools and educational approaches to guide interested middle and high school students into pursuing computer science careers. Managed by the Defense Advanced Research Projects Agency, the computer science-science, technology, engineering and mathematics (CS-STEM) program’s goal is to expand the talent pool of applicants available for jobs to support secure DOD networks and to accelerate computer science innovation. The interest in increasing the number of CS graduates has national security implications, DARPA officials said. According to the agency, since 2002, the number of U.S. college graduates with computer science or related degrees has dropped by 58.5 percent. To reverse these numbers, CS-STEM will focus on creating interesting activities and opportunities for middle and high school students that will increase in complexity as they progress through their education. CS-STEM is built around three components: a distributed online community, an online robotics academy, and an extracurricular online community for students. The first component, known as “Teach Ourselves,” developed by the University of Arizona, will be a distributed online community of students and teachers. This environment is intended to introduce students to the knowledge economy, which is a matrix where students can track their progress through a variety of subjects. The second component is the Fostering Innovation through Robotics Exploration (FIRE) online robotics academy. Developed by Carnegie Mellon University, it will allow students to sharpen their skills at solving complex problems by educating them with algorithmic thinking skills, engineering processes, math, and programming knowledge. DARPA officials said FIRE is intended to significantly improve the educational value of robotics competitions by developing online cognitive tutors and simulation tools designed to use robots and programming to teach major mathematics concepts. The final part of CS-STEM is an extracurricular online community for middle and high school students. It will use ongoing, age-appropriate practice, activities and competitions, educational content, discussion boards, mentoring and role models to develop skills and foster interest in CS-STEM careers. Collaborative activities, puzzles, games, webisodes, workshops and other content will be used to attract students to the site on a daily basis.
<urn:uuid:8cfecd56-c6f0-4237-a46f-fd37751b14cd>
CC-MAIN-2017-09
https://gcn.com/articles/2010/10/12/darpa-launches-computer-science-education-program.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00391-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943325
510
2.9375
3
Energy & Climate Change Strategy EMC’s primary GHG emissions arise from the generation of the electricity needed to run our business—including our supply chain—and power our products. Therefore, our energy and climate change strategy focuses on the following key areas: - Reducing emissions from our own operations by: - Decreasing the demand for energy - Maintaining a highly efficient infrastructure - Optimizing logistics routes and modes to decrease carbon intensity and footprint - Designing and operating our data centers and facilities for energy efficiency - Identifying opportunities to adopt renewable energy sources that are economically and environmentally sound - Engaging suppliers in measuring and reporting - Collaborating with suppliers in taking measures to reduce emissions - Working with the IT industry to develop standards for reporting supply chain emissions - Supplying energy-efficient products - Developing innovative approaches to manage the exponential growth of data in their operations - Delivering services to help customers implement the most energy-efficient solutions for their businesses - Supplying information solutions to optimize business functions, accelerate research, leverage data assets, and enhance public infrastructure American Business Act on Climate Pledge In 2015, EMC joined the American Business Act on Climate Pledge to encourage global leaders to reach an international climate change agreement at COP 21 in December 2015. In addition to EMC’s commitment (see below), Chief Sustainability Officer Kathrin Winkler participated in a White House panel on climate change as part of the announcement. In support of our goal to achieve 80% absolute reduction in greenhouse gas emissions by 2050 in accordance with the 2007 Bali Climate Declaration, EMC Corporation pledges to: - Realize a 40 percent absolute reduction of global Scope 1 and 2 GHG emissions below 2010 levels by 2020 - Obtain at least 20 percent of global grid electricity needs from renewable sources by 2020 - Have all hardware and software products achieve increased efficiency in each subsequent version by 2020 - Reduce energy intensity of storage products 60 percent at a given raw capacity and 80 percent for computational tasks from 2013 to 2020 In 2015, EMC continued our engagement with the Renewable Energy Buyers’ Alliance (formerly the Corporate Renewable Buyers’ Partnership) sponsored by the World Wildlife Fund, World Resources Institute, Rocky Mountain Institute, and Business for Social Responsibility. In addition, EMC joined the Business Renewables Center, an initiative of the Rocky Mountain Institute, to learn from and share best practices with other corporate purchasers of renewable energy. Climate Change Policy Statement Our Goals and Performance We began measuring our GHG emissions in 2005. Since then, our energy intensity by revenue – the amount of global GHG we emit per $1 million we earn – has declined by more than 46 percent, from 32.99 to 17.55 metric tons. In 2015, EMC purchased 40,000 MWh of Green-e® Energy certified Renewable Energy Certificates (RECs) which helped us achieve our 2015 target. - 40 percent reduction of global Scopes 1 and 2 GHG emissions per revenue intensity below 2005 levels by 2015. 
Achieved in 2012, 2013, 2014, and 2015 - 20 percent of global electricity needs served by renewable sources by 2020 (excluding VMware) - 40 percent absolute reduction of global Scopes 1 and 2 GHG emissions below 2010 levels by 2020 (excluding VMware) - 50 percent of global electricity needs to be obtained from renewable sources by 2040 (excluding VMware) - 80 percent absolute reduction of global Scopes 1 and 2 GHG emissions below 2000 levels by 2050 (excluding VMware) Determining Our Goals To set our long-term goals, we began with the imperative to achieve an absolute reduction of at least 80 percent by 2050 in accordance with the Intergovernmental Panel on Climate Change’s (IPCC’s) Fourth Assessment Report recommendations. We then modeled various reduction trajectories; our goal was to identify a solution that would be elastic enough to adjust to changes in our business, while achieving a peak in absolute emissions by 2015, in accordance with recommendations from the 2007 Bali Climate Declaration. Our model was based on the Corporate Finance Approach to Climate-stabilizing Targets (C-FACT) proposal presented by Autodesk in 2009. The model calculates the annual percentage reduction in intensity required to achieve an absolute goal. We selected this approach because intensity targets better accommodate growth through acquisitions (in which net emissions have not changed but accountability for them has shifted), and aligns business performance with emissions reductions performance rather than forcing tradeoffs between them. Setting an intensity trajectory also drives investment beyond one-time reductions to those that can be sustained into the future. The C-FACT system, however, is “front-loaded” as it requires a declining absolute reduction in intensity each year. EMC developed a variant of the model that requires reductions to be more aggressive than the previous year. This makes better economic sense for the company as it takes advantage of the learning curve for alternative fuels as they become more efficient and cost effective. Please see the “Trajectory Diagram” in this section for more information. While EMC put much thought into setting our long-term goals, some stakeholders felt that they were too distant for most people to conceptualize. In response to this feedback, in 2014, we established our new 2020 targets to mark progress. The basis of our mid-term targets is an understanding of the contribution that businesses must make to GHG mitigation to avoid dangerous climate change, as described in the CDP and World Wildlife Fund report “3% Solution.” We believe these mid-term goals are aggressive and aspirational, particularly given the anticipated growth in our business. However, we also realize the potential for a combination of escalating effects of climate change and a lack of collective action could require that all businesses, including EMC, accelerate their mitigation plans. We will continue to monitor conditions and adjust our targets accordingly. Energy Management and Renewable Energy EMC’s reduction targets will best be achieved through a holistic approach to all aspects of energy management – including supply, demand and procurement. We continue to explore strategies for meeting our renewable energy goals by investigating renewable energy options that are economically and environmentally sound. In 2015, our efforts included: - Establishing cross-functional representation for a Global Energy Team to drive long-term energy strategy for EMC. 
This body is tasked with long-term planning of our energy supply, demand and procurement in all of our four global theaters – Asia Pacific and Japan (APJ), Europe, Middle East and Africa (EMEA), Latin America, and North America. - Retaining a new Global Energy Advisory firm to assist our Global Energy Team with mapping out a strategy for global energy management. This service includes identifying global renewable energy programs that we can investigate as part of our renewable energy and emission reduction goals. - Implementing a new tool for managing our global carbon accounting and reporting. The tool can also be expanded to assist us in global water accounting and reporting. - Breaking ground on a solar field project in Bellingham, Massachusetts. EMC, working in conjunction with Borrego Solar, is installing three 650 kilowatt ground-mounted solar photovoltaic (PV) arrays totaling 1.95 megawatts on EMC-owned property. The system is comprised of more than 6,000 solar PV panels, and is expected to generate 2,520,000 kilowatt hours of energy. The solar farm project is expected to be completed by mid-2016. - Conducting more detailed research on other opportunities for solar photovoltaic (PV) energy generation in the U.S., including potential hosting of more solar PV generation facilities, becoming a consumer of solar PV generated off-site through purchased power agreements (PPAs), and other possible solar PV models. These efforts are continuing into 2016. - Continuing to investigate other potential alternative energy purchasing in the U.S., India, Ireland and other locations where we have large global facilities and a greater proportion of Scope 2 GHG emissions. During 2015, EMC purchased 40,000 MWh of Renewable Energy Certificates (RECs) in support of renewable energy generated in the U.S. The RECs purchased supported renewable electricity delivered to the national power grid by alternative energy sources. The RECs are third-party verified by Green-e Energy to meet strict environmental and consumer protection standards. The 40,000 MWh represents 7 percent of the grid electricity consumed at all EMC facilities in the U.S. during 2015. Although we purchased fewer RECs in 2015, this aligns with our long-term strategy to invest in on-site renewable projects. Also in 2015, in alignment with our position on national climate policy, EMC approved the use of an internal price on carbon. This price has been initially set at $30 per metric ton CO2e, and will be reviewed periodically to adjust for market and regulatory conditions. The intention of this shadow price is to educate and inform business decision makers about the expected future costs associated with GHG emissions. As an initial step, training has been deployed to employees in our finance organizations who support major electricity-consuming or -producing capital expenditures such as new buildings, lease renewals, data center relocation, and lab consolidation. Reporting & Accountability We are committed to reporting our progress transparently and disclosing our GHG emissions annually to CDP. To learn more, see our 2016 CDP Climate Change questionnaire response. Our Ireland Center of Excellence (COE) has continued to participate in the European Emissions Trading Scheme (ETS), which is a cap and trade Scope 1 emissions program that has now entered the third trading phase from 2013 to 2020. 
This COE has consistently remained within its operating allowance for the previous phases since 2005, but phase three of trading has, as expected, proved to be challenging, and the Ireland COE produced 2,594 metric tons of CO2e against an allowance of 2,550. Previous years of strong performance against our allowance ensured that we have more than adequate additional spare allowances available to cover this excess. Further energy reduction projects commissioned during 2015 within the Ireland COE have brought our total thermal rated input to below 20 MW. As a result, we now fall outside of the criteria to be a member of the EU ETS. We will, however, continue to monitor and drive reductions in our CO2e emissions.
- EMC 2016 CDP Climate Change Information Request Response
- EMC 2015 CDP Climate Change Response
- EMC 2014 CDP Climate Change Response
- EMC 2013 Investor CDP Response
- EMC 2012 Investor CDP Response
EMC RECOGNIZED FOR CLIMATE DISCLOSURE AND GHG MANAGEMENT
CDP 2015 S&P 500 Climate Disclosure Leadership Index (CDLI)
For the seventh time, EMC was included on the CDLI, earning a score of 100 for the depth and quality of the climate change data disclosed to investors and the global marketplace. To learn more, read the Press Release. In addition, EMC was recognized as a world leader in supplier action on climate change by CDP in 2015, securing a position on the CDP Supplier Climate A List.
Scope 3 Emissions
At EMC, we continually strive to increase the breadth and depth of our GHG reporting. In our 2015 CDP Climate Change questionnaire response, we reported estimated global corporate emissions for eight of the 15 categories of Scope 3 emissions based on the WRI Greenhouse Gas Protocol Corporate Value Chain (Scope 3) Accounting and Reporting Standard. The following five reported categories represent the greatest opportunity to drive improvement and minimize emissions through our own actions and influence.
In 2015, the GHG emissions associated with business travel were 145,726 metric tons CO2e, including VMware. We track global corporate business travel miles from commercial flight and rail via our corporate travel booking tool. In addition, we estimate the GHG emissions associated with global business travel car rentals and global hotel stays based on data provided by our Travel department. The methodology for calculating the emissions associated with business travel is aligned with the GHG Protocol Corporate Accounting and Reporting Standard. We continually seek to reduce GHG emissions associated with employee business travel by implementing advances in technology, business processes and resource management. We apply technology to allow us to perform changes remotely to customer technical environments, resulting in reduced emissions from travel. To learn more, visit Employee Travel & Commuting. As of the publication of this report, our 2015 global GHG emissions from employee commuting have not yet been estimated. Please refer to EMC’s 2015 CDP Climate Change response for updated information. EMC maintains a comprehensive employee commuter services program focused on minimizing single-occupancy vehicles and unnecessary local employee travel. To learn more about our employee commuting programs, visit Employee Travel & Commuting.
Direct Tier 1 Suppliers
In 2015, the GHG emissions associated with EMC’s direct material suppliers were 211,809 metric tons CO2e. This reflects Scope 1 and Scope 2 GHG emissions data reported by direct Tier 1 suppliers comprising 96 percent of our annual spend.
Using economic allocation, we use their data to calculate our share of their GHG emissions. Because this allocation approach requires access to supplier revenues, a small number of private companies are excluded from the analysis. The total reported metric tons of CO2e is extrapolated to provide an estimated figure for 100 percent of our direct materials supplier emissions. To learn more, visit Supply Chain Responsibility. EMC’s Global Logistics Operations generated approximately 76,989 metric tons CO2e in 2015, a 26 percent reduction in absolute carbon footprint from 2014. We attribute this reduction to various factors including our continuous efforts to shift to lower-emitting modes of transport. This number covers inbound, outbound, interplant, and customer service transportation and logistics, but excludes in-country goods freighting for Brazil, Japan, and Russia. In 2015, we collected data related to carrier operations representing 92 percent of our logistics spend and extrapolated total emissions proportionately based on the reports we received. To learn more, visit Logistics. Use of Sold Products Environmental Lifecycle Analyses conducted prior to 2012 confirmed our expectations that more than 90 percent of lifecycle impacts are due to electricity consumed during the product use phase. EMC estimates that the lifetime GHG emissions from use of EMC products shipped to customers during 2015 will be approximately 3,392,352 metric tons CO2e, including VMware. This value represents our customers’ Scope 2 GHG emissions from the generation of electricity that is powering our equipment. To learn more about how we provide ongoing information to end-use customers about how to use our products more efficiently, visit Our Products.
<urn:uuid:33883bcf-94ab-4b87-bb4c-51881b201eed>
CC-MAIN-2017-09
https://www.emc.com/corporate/sustainability/operations/energy-climate-change-strategy.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00139-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931175
3,032
2.546875
3
Photo: Press conference on the introduction of the ADA Restoration Act, July 26, 2007. On July 26, 1990, a landmark piece of legislation became law. The Americans with Disabilities Act (ADA) was signed by President George H.W. Bush, who called it "the world's first comprehensive declaration of equality for people with disabilities." What was first intended to increase access to physical spaces such as government offices, improve employment opportunities and create real social integration, now includes the technological realm as well. The ADA addresses the need to make telephone communications services accessible to individuals who have impaired hearing or speech. Yet according to a Brown University study, only 54 percent of federal Web sites and 46 percent of state Web sites meet the World Wide Web Consortium (W3C) disability guidelines. "When the original version of the ADA was enacted 17 years ago, the Internet was not a factor," explains Ron Graham, writer of the Access Ability blog. "However, as times and technologies evolve, so should our laws." Today marks the 17th anniversary of the ADA. In those 17 years, much has improved for people with disabilities -- many have gained better employment, and accessibility has become more prevalent in everyday life. Yet for some, the glimmer of hope has been snuffed out simply because of advancements in medicine and court decisions. The Supreme Court has been chipping away at the protections of the ADA, leaving millions of citizens vulnerable to a narrow interpretation of the law. Take Tony Coelho for example: Coelho has epilepsy, and was on the receiving end of discrimination much of his life. In 1999 the Supreme Court decided, in Sutton v. United Airlines, that the effects of "mitigating measures" (such as taking medication) are required to be considered in determining if someone has a disability under the ADA. So because of the improvements in anti-seizure medications and other medical devices, Coelho and others with epilepsy are no longer protected under the Act. The irony is that the Americans with Disabilities Act was written by Coelho. People with other disorders and disabilities such as diabetes, muscular dystrophy and those who use hearing aids are also "not disabled enough" to be protected under the ADA -- people who were intended to be. "The rulings that have been handed down under current guidelines allow employers to say somebody is too disabled to do a job, but not disabled enough to be covered by the ADA," said Graham. "Just because a person manages his/her disability with prosthetics, hearing aids, or medications does not mitigate the existence of the disability or its impact on the total functionality of that individual."
Restoration
Today, the ADA Restoration Act of 2007 was introduced in Congress. This bi-partisan legislation means to restore the initial meaning of the ADA, ensuring that being "not disabled enough" no longer hampers the lives of people with varying degrees of disability. "The language change in the ADA Restoration Act removes the hurdle of people claiming discrimination having to first prove the degree of disability, thus allowing the discrimination claim to be heard, not being collectively dismissed because the claimant failed to demonstrate they were a party covered by the law," continued Graham. "The Supreme Court's interpretation has created a vicious circle for Americans with disabilities," said Congressman Jim Sensenbrenner, co-sponsor of the Act.
"It has created a broad range of people who benefit from 'mitigating measures' such as improvements in medicine, who still experience discrimination from employers, yet have been labeled 'not disabled enough' to gain the protections of the ADA. This is unacceptable." House Majority Leader Steny H. Hoyer pointed out the issues the bill intends to improve: "Among other things, the bipartisan House bill -- which already has more than 130 co-sponsors -- will restore the original intent of the ADA by: "The fact is," Hoyer continued, "the Supreme Court has improperly shifted the focus of the ADA from an employer's alleged misconduct, and onto whether an individual can first meet -- in the Supreme Court's words -- a 'demanding standard for qualifying as disabled.'" In an official proclamation current President Bush stated: "On the anniversary of the Americans with Disabilities Act (ADA), we celebrate our progress towards an America where individuals with disabilities are recognized for their talents and contributions to our society ... I call on all Americans to celebrate the vital contributions of individuals with disabilities as we work towards fulfilling the promise of the ADA to give all our citizens the opportunity to live with dignity, work productively, and achieve their dreams."
<urn:uuid:0c869c52-f400-4a64-98cf-7759f19365fd>
CC-MAIN-2017-09
http://www.govtech.com/health/Americans-With-Disabilities-Act-Sees-Possible.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00315-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965176
931
3.1875
3
Bluetooth Low Energy (BLE) was incorporated into the Bluetooth 4.0 specification in 2010 and experienced rapid uptake, including all the major operating systems, many of the smartphones and tablets we use today, plus a new breed of devices like fitness bands and simple RF tags. BLE excels at sharing small packets of data, referred to as attributes, over a low energy link, and is frequently used for health monitoring, proximity detection, asset tracking, and in-store navigation. Access points contain an integral Bluetooth radio and dedicated antenna, providing superior coverage and convenience to support these applications. In other words, no other hardware is required; it’s all built-in. BLE, as its name suggests, is designed to sip power, enabling some dedicated beacons to run for years on a single battery, opening up new practical applications at low cost. Bluetooth is also an efficient standard when it comes to radio interference. Operating in the 2.4 GHz ISM band, it uses frequency hopping technology to circumvent interference problems often seen in this band. Cell sizes can be tuned to the application requirement, with potential range comparable to 2.4 GHz WiFi, even taking into account the low power requirements of the standard. BLE support enables customers to begin developing practical applications for BLE devices. These can be broadly categorized into ‘push’ applications, where the AP informs an aware device that it is in a certain location, or ‘pull’ applications, where the AP listens for beacons and uses this information to assist with asset tracking and control through the dashboard. Location analytics based on BLE generally work on an opt-in basis, with the consumer enticed via an app which leverages location for mutual benefit.
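A common example of such a 'push' beacon is Apple's iBeacon advertisement, which packs a 16-byte UUID plus major/minor identifiers and a calibrated TX power into the manufacturer-specific field of a BLE advertising packet. The sketch below parses that layout from raw bytes; the frame at the bottom is fabricated for illustration, and a real deployment would obtain the bytes from a BLE scanning API.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific data of an iBeacon advertisement.

    Layout (25 bytes): Apple company ID 0x004C (little-endian), beacon type
    0x02, payload length 0x15, a 16-byte proximity UUID, 2-byte major and
    2-byte minor (both big-endian), and a signed calibrated TX power at 1 m.
    """
    if len(mfg_data) != 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None                       # not an iBeacon frame
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack(">HHb", mfg_data[20:25])
    return {"uuid": proximity_uuid, "major": major, "minor": minor,
            "tx_power_dbm": tx_power}

# Fabricated example frame: UUID of all 0x11 bytes, major=1, minor=42, -59 dBm.
frame = b"\x4c\x00\x02\x15" + b"\x11" * 16 + b"\x00\x01\x00\x2a" + b"\xc5"
print(parse_ibeacon(frame))
```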
<urn:uuid:ea859a9d-91f2-4226-b810-32cf2907aa2c>
CC-MAIN-2017-09
https://gtacknowledge.extremenetworks.com/articles/Q_A/What-is-BLE-support
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00315-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925927
349
2.609375
3
A year ago, President Obama and 46 other heads of state launched the Open Government Partnership, an initiative designed to increase transparency within governments around the world. The group’s declaration said that all adherent countries would: - Promote openness, because more information about governmental activities should be timely and freely available to people; - Engage citizens in decision-making, because this makes government more innovative and responsive; - Implement the highest standards of professional integrity, because those in power must serve the people and not themselves; and - Increase access to new technologies because of their unprecedented potential to help people realize their aspirations for access to information and a more powerful voice in how they are governed. According to OpenTheGovernment.org, a watchdog group, the administration has made substantial progress in areas ranging from public participation in government to whistleblower protections for personnel. Patrice McDermott, the organization’s executive director, said the Obama administration put a great deal of work into the initiative, but it needed to do more to fulfill international commitments. The administration aims to implement the full tenets of the partnership by January 2013. But OMBWatch noted that the administration has made little significant progress in some important areas, including efforts to modernize policies for managing public records and reduce response times for Freedom of Information Act requests. NextGov has reported previously that the administration’s claims of transparency don’t always reflect the experience of journalists, lawmakers and other citizens seeking information about government actions and activity. What’s more, unreliable and faulty data sets render some of the transparency rhetoric meaningless. Correction: This story originally misspelled Patrice McDermott's name. It has been corrected.
<urn:uuid:0fbabc04-fc06-405e-b981-263a96a326f3>
CC-MAIN-2017-09
http://www.nextgov.com/big-data/2012/09/open-government-partnership-marks-first-anniversary/58295/?oref=ng-HPriver
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00612-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939945
342
2.890625
3
"Transforming our nation's grid has been compared in significance with building the interstate highway system or the development of the Internet. These efforts, rightly regarded as revolutionary, were preceded by countless evolutionary steps."-- U.S. Department of Energy The Smart Grid: An Introduction West Virginia Gov. Joe Manchin visited the Research Ridge Test Facility -- a "smart grid" initiative in Morgantown -- that demonstrates technologies to support electricity reliability and energy efficiency. Developed by electricity provider Allegheny Energy and technology company Augusta Systems, the facility demonstrates how a smart grid can link all of the various technologies -- from the customer's air conditioner to the utility's substation -- into a single, more efficient network. "Smart grid is a hot topic at the national level as utilities attempt to modernize their electricity delivery systems," said Manchin in a release. "The Research Ridge Test Facility is a great example of how smart grid innovation is occurring in West Virginia. This project demonstrates how smart grid can benefit customers in our state and throughout the nation with improved electricity delivery and cost-effective management of energy usage." Microsoft Corp. announced it has developed a reference architecture that can serve as the basis for development of the "integrated utility of the future." The Microsoft Smart Energy Reference Architecture (SERA) is Microsoft's first comprehensive reference architecture that addresses technology integration throughout the full scope of the smart energy ecosystem, according to a company release. SERA helps utilities by providing a method of testing the alignment of information technology with their business processes to create an integrated utility. This is the second utility offering to be released from Microsoft in four months, following the announcement of Microsoft Hohm, an online application developed to enhance the experience of utilities' customers and provide further insight into the supply and demand of residential energy use.
<urn:uuid:ef39b9e1-e821-4cf6-913d-12d1acb7ff06>
CC-MAIN-2017-09
http://www.govtech.com/technology/Smart-Grid-Update.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00312-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931485
365
2.546875
3
University of Oklahoma associate professor Amy McGovern is working to revolutionize tornado and storm prediction. McGovern’s ambitious tornado modeling and simulation project seeks to explain why some storms generate tornadoes while others don’t. The research is giving birth to new techniques for identifying the likely path of twisters through both space and time. McGovern’s work was recently detailed in an article by Scott Gibson, science writer for the National Institute for Computational Sciences at the University of Tennessee, Knoxville. A simulated radar image of a storm produced by CM1: the hook-like appendage in the southwest corner is an indication of a developing vortex. This latest round of updates adds to an earlier report that was released in May 2011. That NICS article was published after a severe weather event spawned nearly 200 tornadoes, devastating large swaths of the southern United States. The disaster left 315 dead and caused billions of dollars worth of damage. McGovern says she and the other researchers “hope that with a more accurate prediction and improved lead time on warnings, more people will heed the warnings, and thus loss of life and property will be reduced.” Part of the challenge has been devising a realistic and reliable model and coming up with usable input data. A numerical model known as CM1 has proven valuable to researchers, allowing them to focus more on the science and less on the workflow. The team has also made strides with regard to storm simulations. Rather than model storms that actually happened, they base their models on the conditions that are required for a tornado to take form. They start with a bubble of warm air, which sets off the storm-building process. Then they introduce equations and parameters that factor into the storm’s development. Getting the friction right is especially challenging as even the grass on the ground can affect this variable. The research team is working with the National Weather Service to implement an early storm warning system, called Warn-on-Forecast. The goal of the project is to inform the public of impending storms with 30-60 minutes of lead time. Getting this level of accuracy requires a high-resolution model, and that takes a lot of computing power. The researchers want to figure out “what actually generates the tornado, and the only way you can confirm that is to make the high-resolution simulations,” McGovern explains. “Those are not feasible to do all across the U.S. right now on a Warn-on-Forecast basis. We are running on 112 by 112 kilometer domain; now scale that up to the U.S. and ask it to run in real time. We’re not quite there yet.” They’re using the University of Tennessee’s Kraken supercomputer to run the simulations and the UT’s Nautilus supercomputer to analyze them. “The biggest thing that Nautilus does for us right now is process the data so that we can mine it, because we’re trying to cut these terabytes of data down to something that’s usable metadata,” McGovern reports. “I am able to reduce one week of computations down to 30 minutes on Nautilus, and post-processing time is reduced from several weeks to several hours.” The researchers expect to have a more precise storm prediction system in place by December.
<urn:uuid:b55396bc-40e8-4507-8b37-2a68cef669b7>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/04/23/revolutionizing_tornado_prediction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00608-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94116
702
3.453125
3
By: Candid Wueest, Software Engineer at Symantec Virtual machines (VMs) have been used for many years and are popular among researchers because malware can be executed and analyzed on them without having to reinstall production systems every time. These tests can be done manually or on automated systems, with each method providing different benefits and drawbacks. Every artifact is recorded, and a conclusion is made to block or allow the application. For similar reasons, sandbox technology and virtualization technology have become common components in many network security solutions. The aim is to find previously unknown malware by executing the samples and analyzing their behavior. However, there is an even bigger realm of virtual systems out there. Many customers have moved to virtual machines in their production environment, and a lot of servers are running as VMs, performing their daily duties with real customer data. This leads to a common question when talking to customers: "Does malware detect that it is running on a virtual system and quit?" It is true that some malware writers try to detect whether their creation is running on a VM by using tricks such as: - Checking certain registry keys that are unique to virtual systems - Checking whether helper tools like VMware Tools are installed - Executing special assembler code and comparing the results - And more. In some rare cases we have encountered malware that does not quit when executed on a VM, but instead sends false data. These "red herrings" might ping command-and-control servers that never existed or check for random registry keys. These tactics are meant to confuse the researcher or have the automation process declare the malware a benign application. Malware authors want to compromise as many systems as possible, so if malware does not run on a VM, it limits the number of computers it could compromise. So, it should not come as a surprise that most samples today will run normally on a virtual machine and that detection features can be added if the cybercriminal wishes to do so. In order to answer the initial question with some real data, we selected 200,000 customer submissions since 2012 and ran them each on a real system and on a VMware system and compared the results. For the last two years, the percentage of malware that detects VMware has hovered around 18 percent. On average, roughly one in five malware samples will detect virtual machines and abort execution. This means that malware still detects whether it is running on a VM, but only in a minority of cases. Symantec recommends that virtualized systems be properly protected in order to keep them safe from threats. Symantec engineers are always on the lookout for new techniques that malware authors may employ to bypass automated analysis. With the combination of various proactive detection methods, like reputation-based detection, we can ensure maximum security for our customers.
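As an illustration of the first trick in that list, the sketch below shows how a program might look for registry keys commonly associated with VMware guests. This is a hypothetical, heavily simplified Python example for a Windows guest, not code from Symantec's analysis systems, and the key paths are typical examples rather than an authoritative list.

```python
# Minimal sketch: look for registry entries commonly present in VMware guests.
# Windows-only (uses winreg); the key paths below are illustrative examples.

VMWARE_HINTS = [
    (r"SOFTWARE\VMware, Inc.\VMware Tools", None),          # VMware Tools install key
    (r"HARDWARE\DESCRIPTION\System", "SystemBiosVersion"),  # BIOS string may contain "VMWARE"
]

def looks_like_vmware() -> bool:
    try:
        import winreg
    except ImportError:
        print("winreg unavailable (not Windows); skipping check")
        return False
    for subkey, value_name in VMWARE_HINTS:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
                if value_name is None:
                    return True  # mere presence of the key is a strong hint
                data, _ = winreg.QueryValueEx(key, value_name)
                text = " ".join(data) if isinstance(data, list) else str(data)
                if "VMWARE" in text.upper():
                    return True
        except OSError:
            continue  # key or value not present; try the next hint
    return False

if __name__ == "__main__":
    print("VMware guest suspected" if looks_like_vmware() else "No VMware hints found")
```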
<urn:uuid:ff9254e2-4202-4f09-92b6-41f68b262fb1>
CC-MAIN-2017-09
http://www.itbestofbreed.com/sponsors/symantec/best-tech/does-malware-still-detect-virtual-machines
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00184-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941218
565
2.53125
3
Black Box Explains... Speaker wire gauge Wire gauge (often shown as AWG, for American Wire Gauge) is a measure of the thickness of the wire. The more a wire is drawn or sized, the smaller its diameter will be. The lower the wire gauge, the thicker the wire. For example, a 24 AWG wire is thinner than a 14 AWG wire. A lower AWG means a longer usable transmission distance and better signal integrity. As a rule of thumb, power loss decreases as the wire size increases. When it comes to choosing speaker cable, consider a few factors: distance, the type of system and amplifier you have, the frequencies of the signals being handled, and any specifications that the speaker manufacturer recommends. For most home applications where you simply need to run cable from your stereo to speakers in the same room—or even behind the walls to other rooms—16 AWG cable is usually fine. If you're considering runs of more than 40 feet (12.1 m), consider using 14 AWG or even 12 AWG cable. Both offer better transmission and less resistance over longer distances. You should probably choose 12 AWG cable for high-end audio systems with higher power output or for low-frequency subwoofers. To terminate your cable, choose gold connectors. Because gold resists oxidation over time, gold connectors wear better and offer better performance than other connectors do.
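To put numbers behind that rule of thumb, the sketch below estimates the fraction of amplifier power dissipated in the speaker cable for a few gauges. The resistance figures are typical published values for copper wire at room temperature, and the 8-ohm speaker load and 50-foot run are assumptions chosen only for illustration.

```python
# Rough estimate of cable loss vs. wire gauge for a speaker run.
# Assumptions: 8-ohm speaker, 50 ft one-way run (100 ft round trip),
# typical copper resistance in ohms per 1000 ft at about 20 C.
OHMS_PER_1000FT = {12: 1.588, 14: 2.525, 16: 4.016, 24: 25.67}

speaker_ohms = 8.0
run_feet = 50.0

for awg, r_per_kft in sorted(OHMS_PER_1000FT.items()):
    cable_ohms = 2 * run_feet * r_per_kft / 1000.0   # out and back
    loss_fraction = cable_ohms / (cable_ohms + speaker_ohms)
    print(f"{awg:>2} AWG: cable resistance {cable_ohms:.3f} ohms, "
          f"about {loss_fraction * 100:.1f}% of power lost in the cable")
```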
<urn:uuid:0179aa53-eecf-4e93-a8b2-525e1c2be136>
CC-MAIN-2017-09
https://www.blackbox.com/en-au/products/black-box-explains/black-box-explains-speaker-wire-gauge
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00536-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915401
304
2.546875
3
ISO Approves Ada 2012 Programming Language Standard Ada is a structured, statically typed, imperative, wide-spectrum and object-oriented high-level computer programming language, extended from Pascal and other languages. It has strong built-in language support for explicit concurrency, offering tasks, synchronous message passing, protected objects and nondeterminism. Ada was originally designed by a team led by Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense from 1977 to 1983 to supersede the hundreds of programming languages then used by the DOD. The programming language was named after Ada Lovelace, a mathematician who is sometimes regarded as the world's first programmer because of her work with Charles Babbage. She was also the daughter of the poet Lord Byron. Fittingly, the Ada 2012 standard announcement comes just days after Lovelace's Dec. 10 birthday. Ada was originally targeted at embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics in the early 1990s, improved support for systems, numerical, financial and object-oriented programming (OOP). Ada is designed for the development of very large software systems. Ada packages can be compiled separately, and their specifications can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts. The Ada programming language is designed for large, long-lived applications—and embedded systems in particular—where reliability and efficiency are essential. ISO and the International Electrotechnical Commission (IEC) are the two primary organizations for international standardization. They resolve the problem of overlapping scope by forming a Joint Technical Committee, JTC1, to deal with all standardization in the scope of information technology. JTC1 deals with its large scope of work by subdividing the responsibility among a number of subcommittees. SC22—which deals with programming languages, their environments and system software interfaces—is the parent body of WG9. In turn, SC22 subdivides its scope of work among several Working Groups. WG9 is responsible for the "development of ISO standards for programming language Ada." That gives you ISO/IEC JTC1/SC22/WG9. The formal approval of the standard was issued Nov. 20 by ISO/IEC JTC 1, and the standard was published Dec. 15. A technical summary of Ada 2012, together with an explanation of the language's benefits and a set of links to further information, is available at www.ada2012.org, which the Ada Resource Association maintains. The language revision, known as Ada 2012, was under the auspices of ISO/IEC JTC1/SC22/WG9 and was conducted by the Ada Rapporteur Group (ARG) subunit of WG9, with sponsorship in part from the ARA and Ada-Europe.
<urn:uuid:23a87f01-0e4b-45c0-95d8-7ce70ca2e021>
CC-MAIN-2017-09
http://www.eweek.com/developer/iso-approves-ada-2012-programming-language-standard-3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00532-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950432
599
2.859375
3
GCN LAB IMPRESSIONS If you need cool servers, open a window - By John Breeden II - Feb 08, 2012 A couple months ago I reported on a facility in Stockholm that was using the heat from a bank of liquid cooled servers to heat an entire research facility. Using the heat from our electronics for a useful purpose instead of letting that energy go to waste, or treating it as something bad that needs to be mitigated, is a smart way of thinking. It’s taking something we produce anyway, even if by circumstances or accident, and turning it into something useful. That’s a pretty cool story, but most of the time we experience the opposite problem. The heat from computers builds up in a server room or data center, and the high temperature makes the machines run less efficiently. And if you don’t do something about the problem, it will keep getting worse. A variety of cooling devices have been invented to help out, like the Tripp Lite SRCOOL12k, which we use to cool down our own GCN Lab. But no matter which method of cooling you use, it’s going to take a lot of electricity, and the First Law of Thermodynamics states that you can’t cool a place without generating an equivalent amount of heat, given that heat is energy. And that heat needs to be vented somewhere, which could result in the heat problem happening somewhere else in your building. A lot of times the heat removed from a server room is vented outside, unless you happen to work in Stockholm where it can keep your offices nice and toasty instead. However, what if the opposite were true? What if outside air could be brought into a server room to cool it down, and give it a nice fresh meadow-like smell at the same time? That’s kind of what scientists at the Energy Department are thinking about with their soon to be constructed Berkeley Computing Lab. The lab will hold the world’s newest supercomputer, so there will likely be a lot of heat generated there. Apparently, the breezy atmosphere surrounding San Francisco bay is perfect for cooling down the servers. Scientists estimate the outside air can be used to cool computers 90 percent of the time without any extra devices being brought online, which is impressive given how much heat an industrial supercomputer can probably produce. This is a good thing and will save energy, but I wonder if they have taken every factor into consideration. There will certainly be humidity problems that will have to be addressed, necessitating some other equipment be brought online even if they are just pumping air into the building to cool it. Furthermore, even though San Francisco has a cool climate, the region also sometimes experiences scorching summer days, especially in El Nino years. If the DOE were truly serious about using the natural climate to cool the computer, they could pick a spot in Alaska where it’s a lot colder and the air is also dryer. Then again, they wouldn’t have a beautiful castle overlooking the San Francisco Bay to work in, which must be a nice perk. They might have fewer earthquakes though, which would be a big plus in my book. John Breeden II is a freelance technology writer for GCN.
<urn:uuid:e68e6586-4b64-4e9d-9c08-670467aeb898>
CC-MAIN-2017-09
https://gcn.com/articles/2012/02/08/impressions-energy-cool-server-window-berkley-supercomputer.aspx?admgarea=TC_BigData
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00532-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949669
673
2.5625
3
Free Online NFPA, IBC, and ADA Codes and Standards
Author: Brian Rhodes, Published on Nov 29, 2016
Finding applicable codes for security work can be a costly task, with printed books and PDF downloads costing hundreds or thousands of dollars. However, a number of widely referenced codes are available free online if you know the right places to search. This post provides links to a number of free code resources common to security, including:
- NFPA 70
- NFPA 72
- NFPA 80
- NFPA 101
- International Building Codes (IBC)
NFPA Online Free
The NFPA provides the standards used as the code basis for multiple aspects of security integration, including the National Electrical Code, authoritative life-safety guidelines for access control, and multiple related standards for fire alarms, fire walls, and fire doors. The NFPA provides free online reference access to the latest versions of all its standards after free registration is completed. The most relevant NFPA standards used in security include:
NFPA 70: NEC, The National Electrical Code
In most of North America, the most comprehensive guide is NFPA 70, most commonly called the 'NEC' or National Electrical Code. While the scope of the code mainly applies to high-voltage electrical work of more than 100 volts, security work and devices like PoE or small-gauge cabled hardware using less voltage are also given prime attention. We examine the NEC in detail in our Low Voltage Codes and Video Surveillance note, but the code itself can be accessed here:
- NFPA 70: National Electrical Code (registration required)
NFPA 101: Life Safety
One of the most important guidelines for electronic access is NFPA 101, the foundation behind how to install access control and still preserve safe egress. We examine those elements closely in our Codes Behind Access Control post, but free access is available here:
- NFPA 101: Life Safety Code (registration required)
NFPA 80: Fire Door Modifications
Because fire doors have important functions to prevent the spread of fire and to withstand direct flames for some time, modifying them for electronic access use is limited. In most cases, NFPA 80 describes the extent and size of cutouts or holes allowed in a fire door, or the acceptable behavior of that hardware given the location of the door. The link below offers direct access to the section:
- NFPA 80: Standard for Fire Doors and Other Opening Protectives (registration required)
International Building Code
Central to legal building design, and to retrofit systems like access control, the IBC is often cited by local jurisdictions as the authority on how to construct systems safely. As we cover in our Building Occupancy Codes and Access Control Tutorial and our Codes Behind Access Control notes, the actual version that is adopted can vary by year, with verbiage and citations changing between versions. Below are the most common versions cited today:
Finally, codes that govern how to implement access controls, intercoms, and even workstation design can be found in the Americans with Disabilities Act, which we cover in our Disability Laws, ADA and Access Control note. The most recent versions of those guidelines and mandates can be accessed here:
Fair Use Copyright Applies Here
In general, free online code resources are read-only, and users are not able to download, notate, or print copies for offline circulation. If users want this, then standards and codes are available for purchase, often at prices ranging from ~$100 for a single standard to upwards of $5000 for a full set of comprehensive codes.
For example, NFPA explains: "These online documents are "read-only" - they cannot be downloaded or printed, because NFPA relies on the revenues from individuals who purchase copies of these documents to fund our mission. But these "read only" documents are available to anyone who wants to familiarize themselves with a code or check a requirement." Under terms of 'Fair Use', citation and republishing of excerpts for public commentary or criticism is allowed, but wholesale republishing of the codes or standards can only be done under conditions given by the authoring agency.
<urn:uuid:d4b1c6f1-ce13-4109-b3bd-5cb904cc61ee>
CC-MAIN-2017-09
https://ipvm.com/reports/free-codes-standards-online
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00056-ip-10-171-10-108.ec2.internal.warc.gz
en
0.874942
899
2.515625
3
The Xenon's cache hierarchy: when is a cache not a "cache"? Level 1 and level 2 caches sit logically (if not physically) between the CPU and main memory, replicating the contents of main memory and reducing the amount of time that the CPU has to wait on code and data. The complex process of moving code and data between main memory and different levels of the cache hierarchy happens out of the sight of the programmer and is under the control of the hardware, which also concerns itself with keeping the information that is stored and replicated at different levels of the hierarchy in sync. Thus the two hallmarks of a "cache" as it's conventionally understood are: a) that it temporarily stores (or "caches") information near the CPU and b) that it's managed by the CPU and not by the programmer. Xenon's L1 and L2 caches can function in the conventional manner described above, but they can also function quite differently. More specifically, Xenon invests programmers with an unprecedented level of control over how their applications use the caches. Insofar as they can fall under the explicit control of the programmer, the Xenon's caches, and its L2 cache in particular, can function remarkably like the "local storage" that's attached to each of the Synergistic Processing Elements (SPEs) in IBM's Cell processor. (For more on the Cell and its SPEs, see Part I and Part II of my Cell coverage.) Producing, consuming, and buffering In the kinds of traditional real-time rendering applications described previously in this article, main memory and the GPU have a producer-consumer relationship: main memory produces vertex data and the GPU consumes it. In the Xbox 360's procedural synthesis scheme, the GPU is still a consumer, but the role of the producer that feeds it is played by the Xenon CPU. The Xenon can produce "decompressed" vertex data in greater volumes and at a much higher rate than traditional main memory could, which is the whole point of procedural synthesis. Often, however, the Xenon will produce vertex data faster than the GPU can consume it. When Xenon overproduces vertex data, that data has to be stored somewhere while it waits for consumption by the GPU. Fortunately, the Xenon's PPEs have a very fast data store that is both near at hand and sits directly between the PPEs and the GPU. This data store is the Xenon's shared L2 cache. If the Xenon's L2 cache were a "normal" L2, then constant stream of vertex data output from even a single data generation thread would soon fill up the cache and begin to squeeze out the data associated with other threads. As new vertex data is written into the cache, the cache would think that this data was "recently used" and is therefore likely to be used again shortly. The cache would have no way of knowing that this data would not be used again, so it would fill up cache block after cache block with vertex data that it would keep around needlessly until a compulsory miss caused by another thread forced a block of it out. In other words, even one data generation thread running wide open would cause a 1MB cache to "thrash," thereby degrading the performance of the host thread and of the whole system. The Xenon's designers solved this problem by allowing each data generation thread to use the cache in a special way. The Xenon's L2 is arranged as an n-way set associative cache (the value of n hasn't been publicly disclosed as of yet). 
A programmer can place a thread in write streaming mode, which means that the system will wire down (to use traditional virtual memory terminology) or lock (to use Microsoft's terminology) a set of cache lines and attach that set directly and exclusively to a particular thread. This set is then initialized as a FIFO queue, so that it can act as a write buffer to store the output of a data generation thread. The data generation thread feeds its vertex data output directly into this FIFO queue (bypassing the L1 cache entirely), and the GPU reads that vertex data directly from this queue using a modified DMA protocol. Thus the write buffer logically couples a single data generation thread to the GPU by acting as a conduit for vertex data. In the Xenon's L2 cache, an arbitrary number of the sets can be locked and turned into FIFO queues for private, exclusive use by individual data generation threads. The sets that aren't locked look to the Xenon like normal L2 cache. This means that non-write-streaming threads can use the non-locked L2 cache space normally, as can threads that are write-streaming. A write-streaming data generation thread that has its own private locked set can also access the pool of generally available, non-locked L2 cache just like any other thread, but with the exception that it can't use too much of it. A write-streaming thread is not allowed to get greedy and use too much L2, because the system will restrict its L2 usage so that it plays nicely with the other running threads. In sum, write streaming allows the programmer to carve up the L2 cache into small chunks of private, local storage shared between each thread and the GPU. This local storage, in the form of a FIFO queue, acts kind of like a pipe that transports data directly through the L2 without allowing that data to spill over and pollute the rest of the L2.
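Conceptually, this write-streaming arrangement is a bounded producer-consumer queue: the data generation thread pushes vertex data into a fixed-size buffer and the GPU drains it, with the fixed capacity keeping the producer from polluting the rest of the cache. The Python sketch below models only that queuing behavior; it is not Xbox 360 code and uses none of the actual Xenon or GPU interfaces.

```python
# Conceptual model of a locked-set write buffer: a bounded FIFO between a
# producer (the data generation thread) and a consumer (standing in for the GPU).
import queue
import threading

fifo = queue.Queue(maxsize=8)   # fixed capacity, like a locked set of cache lines
SENTINEL = None

def producer():
    for i in range(32):
        fifo.put(f"vertex-batch-{i}")   # blocks when the buffer is full
    fifo.put(SENTINEL)                  # signal end of stream

def consumer():
    while True:
        item = fifo.get()               # blocks when the buffer is empty
        if item is SENTINEL:
            break
        print("consumed", item)         # a real GPU would read the data via DMA

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```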
<urn:uuid:5ee17cae-04f9-4f2a-868b-cf0c30352924>
CC-MAIN-2017-09
https://arstechnica.com/features/2005/05/xbox360-1/5/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00476-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952398
1,138
2.890625
3
This is a demo of how we can create web-based services that consume and analyze data, and present that data in interesting ways. Step 1: We will attempt to calculate your location based on your IP address. Step 2: We'll ask you to confirm that the location we guessed is correct, or to try a different method to find where you are. Step 3: We reward you by showing recent Tweets around this location, which you can interact with directly on the map or in a list. To start, we attempt to calculate your location based on your IP address. The IP address we receive from your web browser depends on your computer or mobile device, the broadband, cable or cellular network you are attached to, and a number of other network devices that sit between your device and our server. Due to the many variables, the IP address we receive is often not accurate. Based on the IP address we receive, we perform a lookup against a database of over 2.5 million IP address ranges. Your IP address should fall within one of these ranges. For each block of numbers, we have a latitude and longitude, plus some data that references country, state or province, and city names for that point. If the IP address we receive and the IP address information we have in our database are correct, we show you your current location. Did we calculate your location correctly using your IP address? Take a look at the map and the address. If we guessed your location within about a mile, tell us Yes. If not, tell us No and we'll try a different method. Since we didn't make a good calculation of your location based on your IP address, we will try asking your web browser for a location. If you are on a computer attached to a wired or Wi-Fi network, this often works reasonably well. If you are on a mobile device with GPS location services, this may be quite accurate. Click a marker on the map to view the tweet, or scroll through the tweet list and click an entry to see where on the map it was located.
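The range lookup described above can be sketched as a binary search over sorted range start addresses: convert the visitor's IP address to an integer, find the last range that starts at or below it, and check that the address falls inside that range. The code below is a hypothetical illustration with made-up sample data, not the demo's actual database or implementation.

```python
# Minimal sketch of an IP-to-location lookup over sorted address ranges.
# The sample ranges and coordinates below are made up for illustration.
import bisect
import ipaddress

# Each entry: (range_start_int, range_end_int, latitude, longitude, label)
RANGES = sorted([
    (int(ipaddress.ip_address("8.8.8.0")), int(ipaddress.ip_address("8.8.8.255")),
     37.39, -122.08, "Mountain View, CA, US"),
    (int(ipaddress.ip_address("81.2.69.0")), int(ipaddress.ip_address("81.2.69.255")),
     51.51, -0.09, "London, GB"),
])
STARTS = [r[0] for r in RANGES]

def locate(ip_text):
    ip_int = int(ipaddress.ip_address(ip_text))
    idx = bisect.bisect_right(STARTS, ip_int) - 1   # last range starting at or below the IP
    if idx >= 0:
        start, end, lat, lon, label = RANGES[idx]
        if start <= ip_int <= end:
            return lat, lon, label
    return None  # address not covered by any known range

print(locate("8.8.8.8"))     # (37.39, -122.08, 'Mountain View, CA, US')
print(locate("192.0.2.10"))  # None
```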
<urn:uuid:306bf57b-d26d-4510-ac5e-abf660ca64fd>
CC-MAIN-2017-09
https://services.consected.com/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00228-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956853
428
2.875
3
The Secure Boot security mechanism of the Unified Extensible Firmware Interface (UEFI) can be bypassed on around half of computers that have the feature enabled in order to install bootkits, according to a security researcher. At the Hack in the Box 2014 security conference in Amsterdam, Corey Kallenberg, a security researcher from nonprofit research organization Mitre, also showed Thursday that it's possible to render some systems unusable by modifying a specific UEFI variable directly from the OS, an issue that could easily be exploited in cybersabotage attacks. UEFI was designed as a replacement for the traditional BIOS (Basic Input/Output System) and is meant to standardize modern computer firmware through a reference specification that OEMs and BIOS vendors can use. However, in reality there can be significant differences in how UEFI is implemented, not only across different computer manufacturers, but even across different products from the same vendor, Kallenberg said. Last year, researchers from Intel and Mitre co-discovered an issue in UEFI implementations from American Megatrends, a BIOS vendor used by many OEMs, Kallenberg said. In particular, the researchers found that a UEFI variable called Setup was not properly protected and could be modified from the OS by a process running with administrative permissions. Modifying the Setup variable in a particular way allowed the bypassing of Secure Boot, a UEFI security feature designed to prevent the installation of bootkits, which are rootkits that hide in the system's bootloader and start before the actual OS. Secure Boot works by checking if the bootloader is digitally signed and on a pre-approved whitelist before executing it. Bootkits have been a serious threat for years. In 2011, security researchers from Kaspersky Lab said TDL version 4, a malware program that infects the computer's master boot record (MBR), had infected over 4.5 million computers and called it the most sophisticated threat in the world. McAfee reported in 2013 that the number of malware threats that infect the MBR had reached a record high. Aside from bypassing Secure Boot, the unprotected Setup variable can also be used to "brick" systems if the attacker sets its value to 0, Kallenberg revealed Thursday for the first time. If this happens, the affected computer will not be able to start again. Recovering from such an attack would be hard and time consuming because it involves reprogramming the BIOS chip, which requires manual intervention and specialized equipment, the researcher said. The attack could be launched from the OS by malware running with administrative privileges and could potentially be used to sabotage an organization's computers. It wouldn't be the first time when such destructive attacks occur. In April 2012 Iran's oil ministry and the country's state-owned oil companies were hit with data-wiping malware that deleted information from some of their systems. A few months later, in August, a hacker group used malware to disable 30,000 workstations belonging to Saudi Aramco, the national oil company of Saudi Arabia. Aside from the Setup variable issue, which American Megatrends has since fixed, Kallenberg presented another way to bypass Secure Boot which stems from OEMs not using a security mechanism called SMI_LOCK in their UEFI implementations. 
The absence of this protection feature allows code running inside the kernel, like a system driver, to temporarily suppress SMM (System Management Mode) and add a rogue entry into the list of pre-approved bootloaders trusted by Secure Boot. When the system is then rebooted, the malicious bootloader would be executed with no Secure Boot error. Researchers from Mitre developed a Windows software agent called Copernicus that can analyze the system's BIOS/UEFI and report the different security features it uses. They ran the tool in their organization across 8,000 systems and found that around 40 percent of them did not use the SMI_LOCK protection. Those results are not representative across many OEMs, because Mitre uses a large number of Dell systems in particular, Kallenberg said, but the effort was expanded outside of the organization and around 20,000 systems have now been scanned. The goal is to scan 100,000 systems and then release a report, the researcher said. Another issue is that even if OEMs release BIOS updates with SMI_LOCK protection in the future and customers install them, Kallenberg estimates that on 50 percent of systems it would be possible for an attacker to flash back an older, unprotected BIOS version from inside the OS. Pulling off the SMM suppression attack requires access to kernel mode, also known as ring 0, but that's not necessarily a big issue for attackers, Kallenberg said. If an attacker manages to execute malware as a user process with administrative privileges -- for example by exploiting a vulnerability in other software applications or in the operating system itself -- he could install an older digitally signed driver from a legitimate hardware manufacturer that's known to have a vulnerability. Since the driver runs in kernel space, he could then exploit the vulnerability to execute malicious code with ring 0 privileges, the researcher said. There are also other chipset dependent ways to suppress SMM and modify the Secure Boot whitelist that are still being investigated, but are not ready to be publicly disclosed yet, Kallenberg said. Over the past year OEMs have started to pay a lot more attention to BIOS security research and have started to react, Kallenberg said. "I think we're finally at a place where you'll see OEMs take this more seriously." Investigating BIOS security issues in the past was very hard for researchers because specialized BIOS debugging equipment that makes analysis easier costs US$20,000, Kallenberg said. However, the openness of UEFI allows more researchers to look into these problems and, hopefully, over the next few years the most obvious vulnerabilities will be identified and fixed, he said.
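Before worrying about bypass techniques, administrators often just want to confirm whether Secure Boot is enabled at all. The sketch below reads the SecureBoot UEFI variable through Linux's efivarfs interface; it is a simple Python status check, not the Copernicus agent described above (which targets Windows), and it assumes a Linux system booted in UEFI mode with efivarfs mounted.

```python
# Read the SecureBoot UEFI variable via Linux efivarfs.
# Layout: the first 4 bytes are variable attributes, the next byte is the value
# (1 = Secure Boot enabled, 0 = disabled).
from pathlib import Path

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled():
    try:
        data = SECUREBOOT_VAR.read_bytes()
    except (FileNotFoundError, PermissionError):
        return None  # not UEFI, efivarfs not mounted, or insufficient privileges
    return len(data) >= 5 and data[4] == 1

state = secure_boot_enabled()
print({True: "Secure Boot: enabled",
       False: "Secure Boot: disabled",
       None: "Secure Boot state unavailable"}[state])
```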
<urn:uuid:b3462413-4336-4557-b43e-7ad645873a7d>
CC-MAIN-2017-09
http://www.csoonline.com/article/2304849/new-attack-methods-can-brick-systems-defeat-secure-boot-researchers-say.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00524-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947101
1,214
2.640625
3
The World Wildlife Fund (WWF) recently recognized IBM as an early leader in setting greenhouse gas emission reduction goals. In its new report Corporations: Unlikely Heroes in America’s Quest for Climate Action, WWF describes how companies like IBM know they can save money by becoming more efficient, securing less-expensive renewable energy, and reducing market volatility risk while lowering emissions. IBM has been taking aggressive actions against climate change since the 1990s, and our commitment remains to this day. Among these actions is our call to reduce carbon dioxide (CO2) emissions associated with our electricity and fuel consumption. Key ways that we have achieved our CO2 emissions reductions goals have included increasing our purchases of renewable electricity, and executing a wide range of energy conservation and efficiency projects throughout our operations. IBM is working with utility providers, wind and solar developers, and our industry peers around the globe to exceed our goal to purchase 20 percent of our electricity from renewable sources by 2020. This goal does not include the credits for renewable energy that IBM receives automatically via grid purchases. We also have developed analytics and cognitive based IT solutions to enable utilities and grid operators to better integrate renewable electricity generated from wind and solar projects into the electricity grid. Taken together, IBM’s initiatives and partnerships are helping us operate more efficiently and cost-effectively while maintaining our leadership role as stewards of the environment. Wayne Balta is IBM’s Vice President for Corporate Environmental Affairs and Product Safety. You can learn more about IBM’s environmental sustainability leadership here. Share this post:
<urn:uuid:a43d2ce3-c406-4b6b-b38d-20aadb71b9b8>
CC-MAIN-2017-09
https://www.ibm.com/blogs/citizen-ibm/2017/02/balta_ibm_early_co2_leader.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00524-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951064
319
2.671875
3
Recent accidents involving Tesla cars may have been a setback for self-driving cars, but Nvidia believes a fast computer under the hood could make autonomous cars and cabs truly viable. The company's new Drive PX 2 model is a palm-sized computer for autonomous cars that will marry mapping with artificial intelligence for automated highway and point-to-point driving. The computer's horsepower will help a car navigate, avoid collisions and make driving decisions. The Drive PX 2 could be attractive to companies like Uber, which want to deploy autonomous cars as taxis. The computer is also targeted at car makers looking to develop fully or partially autonomous cars, which would typically need human intervention. The computer is already ticketed for use in a self-driving car from Chinese company Baidu. The car will rely on the computer and cloud-based AI for car control, navigation, self parking and collision avoidance. The car is more a demonstration vehicle to highlight emerging autonomous car technologies. In late August, Nvidia and Baidu announced they would jointly create a "cloud-to-car autonomous car platform" for car makers. It would use Baidu’s cloud services and mapping technology with Nvidia's self-driving hardware and programming tools. Automotive and tech companies are developing fully autonomous vehicles. The U.S. government is keeping a close watch, encouraging the development of autonomous cars and asking companies to keep driver safety in mind. However, legal questions about self-driving cars remain. Standards are also being established by mapping and automotive companies so cars can exchange critical sensor data -- like road conditions and weather -- via the cloud. The sensor data, if freely available for autonomous cars to reference, could contribute to safer driving. The new Nvidia computer is like a small PC, and combines computing power and algorithms to help cars navigate, self park, avoid obstacles and recognize signals, signs and lanes. It has a Tegra chip that combines one six-core CPU code-named Parker and one Pascal-based GPU. It is a watered down version of the original Drive PX 2 announced at CES, which sits in the trunk of a car and is far more powerful with two CPUs and two GPUs. Over time, the Drive PX 2 can be trained to provide better location and contextual awareness. Cars will get better at recognizing objects and can make better decisions on a wider variety of situations over time. The Drive PX 2 trains autonomous cars by analyzing data gathered from cameras, GPS, ultrasonic sensors, radar, lidar and other components. If the car can't recognize an object or find its way to a location, it can refer to a larger knowledge base in the cloud. Nvidia said it will be possible to transfer a training model and data gathered by Drive PX 2 to the cloud, which will strengthen the knowledge base. The autonomous car is central to Nvidia's wide-ranging deep learning strategy, in which the company's GPUs are being used for image recognition, natural language processing and scientific simulations. The GPUs are also being used to navigate drones and robots. Nvidia is off to a good start in the autonomous car market. Volvo is developing an autonomous car based on the original Drive PX 2 introduced at CES, and the chip maker is already working with Ford, Audi and BMW on autonomous vehicle technologies. Intel is Nvidia's biggest competitor in the car market. 
BMW hopes to put an autonomous car on the road by 2021, for which Intel and Mobileye are supplying the hardware and software technologies. The single-chip version of the Drive PX 2 will ship in the fourth quarter. Car makers, automotive suppliers and researchers are developing autonomous car prototypes based on various Drive PX computers. The Parker CPU has four 64-bit Cortex-A57 cores and two homegrown Denver 2.0 CPU cores, both based on ARM architectures.
<urn:uuid:f5a015e4-d451-4e24-9568-2e765769838b>
CC-MAIN-2017-09
http://www.itnews.com/article/3119218/nvidias-updated-drive-px-2-computer-will-drive-autonomous-cabs.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00048-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947902
783
2.578125
3
Basic Linux commands you need to know for CompTIA's new A+ As is the norm every few years, the CompTIA A+ exams (currently numbers 220-801 and 220-802) are being updated to reflect the current technology environment. The new exams (to be named 220-901 and 220-902) are expected by the end of the year and — for the most part — they expand what is currently being tested: adding more content than subtracting. One of the topics being added is that of basic Linux commands. The most recent iterations of the exams focused on Microsoft Windows, but with administrators regularly encountering Linux-based servers it makes sense that the coverage is being enlarged to require them to know a few command-line tools that work there. The commands they have honed in on, in domain order, are:
● pwd vs passwd
Because of space limitations, we will look at approximately the first half of the commands this month and the remainder next month. Before getting to the commands, though, it is important to understand some basics. With Linux, there is a shell that serves as an interpreter between the user and the OS: this is often bash, but it can be one of several others as well. Because a shell interprets what you type, knowing how the shell processes the text you enter is important. All shell commands have the following general format (assuming the command takes options — some commands have no options):
command [option1] [option2] . . . [optionN]
On a command line, you enter a command, followed by zero or more options (or arguments). The shell uses a blank space or a tab to distinguish between the command and options. This means you must use a space or a tab to separate the command from the options and the options from one another. If an option contains spaces, you put that option inside quotation marks. For example, to search for a name in the password file, enter the following grep command (grep is used for searching for text in files):
grep "Emmett Dulaney" /etc/passwd
When grep finds the line with the name, it prints that matching entry from /etc/passwd. If you create a user account with your username, type the grep command with your username as an argument to look for that username in the /etc/passwd file. In the output from the grep command, you can see the name of the shell (/bin/bash) following the last colon (:). Because the bash shell is an executable file, it resides in the /bin directory; you must provide the full path to it. The number of command-line options and their format depend on the actual command. Typically, these options look like -X, where X is a single character. For example, you can use the -l option with the ls command. The command lists the contents of a directory, and the option provides additional details. Here is a result of typing ls -l in a user's home directory:
drwxr-xr-x 2 edulaney users 48 2015-09-08 21:11 bin
drwx------ 2 edulaney users 320 2015-09-08 21:16 Desktop
drwx------ 2 edulaney users 80 2015-09-08 21:11 Documents
drwxr-xr-x 2 edulaney users 80 2015-09-08 21:11 public_html
drwxr-xr-x 2 edulaney users 464 2015-09-17 18:21 sdump
If a command is too long to fit on a single line, you can press the backslash key (\) followed by Enter. Then, continue typing the command on the next line. For example, type the following command. (Press Enter after each line.)
cat \
/etc/passwd
The cat command then displays the contents of the /etc/passwd file. You can concatenate (that is, string together) several shorter commands on a single line by separating the commands with semicolons (;).
For example, the following command … cd; ls -l; pwd … changes the current directory to your home directory, lists the contents of that directory, and then shows the name of that directory. You can combine simple shell commands to create a more sophisticated command. For example, suppose that you want to find out whether a device file named sbpcd resides in your system's /dev directory because some documentation says you need that device file for your CD-ROM drive. You can use the ls /dev command to get a directory listing of the /dev directory and then browse through it to see whether that listing contains sbpcd. Unfortunately, the /dev directory has a great many entries, so you may find it hard to find any item that has sbpcd in its name. You can, however, combine the ls command with grep and come up with a command line that does exactly what you want. Here's that command line:
ls /dev | grep sbpcd
The shell sends the output of the ls command (the directory listing) to the grep command, which searches for the string sbpcd. That vertical bar (|) is known as a pipe because it acts as a conduit (think of a water pipe) between the two programs — the output of the first command is fed into the input of the second one. There are literally hundreds, if not thousands, of Linux commands that exist within the shell and the system directories. Fortunately, CompTIA asks that you know a much smaller number than that. The following table lists the Linux commands by category.
Linux Commands by Category
|Managing Files and Directories|
|cd||Change the current directory|
|chmod||Change file permissions|
|chown||Change the file owner and group|
|ls||Display the contents of a directory|
|mv||Rename a file and move the file from one directory to another|
|pwd||Display the current directory|
|dd||Copy blocks of data from one file to another (used to copy data from devices)|
|grep||Search for regular expressions in a text file|
|apt-get||Download files from a repository site|
|ps||Display a list of currently running processes|
|shutdown||Shut down Linux|
|vi||Start the visual file editor|
|passwd||Change the password|
|su||Start a new shell as another user (the other user is assumed to be root when the command is invoked without any argument)|
|sudo||Allows you to run a command as another user (usually the root user)|
|ifconfig||View and change information related to networking configuration|
|iwconfig||Similar to ifconfig, but used for wireless configuration|
Becoming the Root/Superuser
When you want to do anything that requires a high privilege level (for example, administering your system), you have to become root. Normally, you log in as a regular user with your everyday username. When you need the privileges of the superuser, though, use the following command to become root:
su -
That's su followed by a space and the minus sign (or hyphen). The shell then prompts you for the root password. Type the password and press Enter. After you've finished with whatever you want to do as root (and you have the privilege to do anything as root), type exit to return to your normal username. Instead of becoming root by using the su - command, you can also type sudo followed by the command that you want to run as root. In some distributions, such as Ubuntu, you must use the sudo command because you don't get to set up a root user when you install the operating system. If you're listed as an authorized user in the /etc/sudoers file, sudo executes the command as if you were logged in as root.
Type man sudoers to read more about the /etc/sudoers file. Every time the shell executes a command that you type, it starts a process. The shell itself is a process, as are any scripts or programs that the shell runs. Use the ps ax command to see a list of processes. When you type ps ax, bash shows you the current set of processes. Here are a few lines of output from the command ps ax --cols 132. (The --cols 132 option is used to ensure that each command is shown in its entirety.)
PID TTY STAT TIME COMMAND
1 ? S 0:01 init
2 ? SN 0:00 [ksoftirqd/0]
3 ? S< 0:00 [events/0]
4 ? S< 0:00 [khelper]
9 ? S< 0:00 [kthread]
19 ? S< 0:00 [kacpid]
75 ? S< 0:00 [kblockd/0]
115 ? S 0:00 [pdflush]
116 ? S 0:01 [pdflush]
118 ? S< 0:00 [aio/0]
117 ? S 0:00 [kswapd0]
711 ? S 0:00 [kseriod]
1075 ? S< 0:00 [reiserfs/0]
2086 ? S 0:00 [kjournald]
2239 ? S<s 0:00 /sbin/udevd -d
. . . lines deleted . . .
6374 ? S 1:51 /usr/X11R6/bin/X :0 -audit 0 -auth /var/lib/gdm/:0.Xauth -nolisten tcp vt7
6460 ? Ss 0:02 /opt/gnome/bin/gdmgreeter
6671 ? Ss 0:00 sshd: edulaney [priv]
6675 ? S 0:00 sshd: edulaney@pts/0
6676 pts/0 Ss 0:00 -bash
6712 pts/0 S 0:00 vsftpd
14702 ? S 0:00 pickup -l -t fifo -u
14752 pts/0 R+ 0:00 ps ax --cols 132
In this listing, the first column has the heading PID and shows a number for each process. PID stands for process ID (identification), which is a sequential number assigned by the Linux kernel. If you look through the output of the ps ax command, you see that the init command is the first process and has a PID of 1. That's why init is referred to as the mother of all processes. The COMMAND column shows the command that created each process, and the TIME column shows the cumulative CPU time used by the process. The STAT column shows the state of a process: S means the process is sleeping, and R means it's running. The symbols following the status letter have further meanings; for example, < indicates a high-priority process, and + means that the process is running in the foreground. The TTY column shows the terminal, if any, associated with the process. The process ID, or process number, is useful when you have to forcibly stop an errant process. Look at the output of the ps ax command and note the PID of the offending process. Then, use the kill command with that process number to stop the process. For example, to stop process number 8550, start by typing the following command:
kill 8550
In Linux, when you log in as root, your home directory is /root. For other users, the home directory is usually in the /home directory; for example, the home directory for a user logging in as edulaney is /home/edulaney. This information is stored in the /etc/passwd file. By default, only you have permission to save files in your home directory, and only you can create subdirectories in your home directory to further organize your files. Linux supports the concept of a current directory, which is the directory on which all file and directory commands operate. After you log in, for example, your current directory is the home directory. To see the current directory, type the pwd command. To change the current directory, use the cd command. To change the current directory to /usr/lib, type the following:
cd /usr/lib
Then, to change the directory to the cups subdirectory in /usr/lib, type this command:
cd cups
Now, if you use the pwd command, that command shows /usr/lib/cups as the current directory. These two examples show that you can refer to a directory's name in two ways: Absolute or Relative.
An example of an absolute pathname is /usr/lib, which is an exact directory in the directory tree (think of the absolute pathname as the complete mailing address for a package that the postal service will deliver to your next-door neighbor). An example of a relative pathname is cups, which represents the cups subdirectory of the current directory, whatever that may be (think of the relative directory name as giving the postal carrier directions from your house to the one next door so the carrier can deliver the package). If you type cd cups in /usr/lib, the current directory changes to /usr/lib/cups. However, if you type the same command in /home/edulaney, the shell tries to change the current directory to /home/edulaney/cups. Use the cd command without any arguments to change the current directory back to your home directory. No matter where you are, typing cd at the shell prompt brings you back home. The tilde character (~) is an alias that refers to your home directory. Thus, you can change the current directory to your home directory also by using the command cd ~. You can refer to another user's home directory by appending that user's name to the tilde. Thus, cd ~superman changes the current directory to the home directory of superman. A single dot (.) and two dots (..), often referred to as dot-dot, also have special meanings. A single dot (.) indicates the current directory, whereas two dots (..) indicate the parent directory. For example, if the current directory is /usr/share, you go one level up to /usr by typing the following:
cd ..
You can get a directory listing by using the ls command. By default, the ls command, without any options, displays the contents of the current directory in a compact, multicolumn format. To tell the directories and files apart, use the -F option (ls -F). The output will show the directory names with a slash (/) appended to them. Plain filenames appear as is. An at sign (@) appended to a filename indicates that this file is a link to another file. (In other words, this filename simply refers to another file; it's a shortcut.) An asterisk (*) is appended to executable files (the shell can run any executable file). You can see even more detailed information about the files and directories with the -l option. The rightmost column shows the name of the directory entry. The date and time before the name show when the last modifications to that file were made. To the left of the date and time is the size of the file in bytes. The file's group and owner appear to the left of the column that shows the file size. The next number to the left indicates the number of links to the file. (A link is like a shortcut in Windows.) Finally, the leftmost column shows the file's permission settings, which determine who can read, write, or execute the file. This column shows a sequence of nine characters, which appear as rwxrwxrwx when each letter is present. Each letter indicates a specific permission. A hyphen (-) in place of a letter indicates no permission for a specific operation on the file. Think of these nine letters as three groups of three letters (rwx), interpreted as follows: Leftmost group: Controls the read, write, and execute permission of the file's owner. In other words, if you see rwx in this position, the file's owner can read (r), write (w), and execute (x) the file. A hyphen in the place of a letter indicates no permission. Thus, the string rw- means the owner has read and write permission but not execute permission.
Although executable programs (including shell programs) typically have execute permission, directories treat execute permission as equivalent to use permission: A user must have execute permission on a directory before he or she can open and read the contents of the directory. Middle group: Controls the read, write, and execute permission of any user belonging to that file's group. Rightmost group: Controls the read, write, and execute permission of all other users (collectively thought of as the world). Thus, a file with the permission setting rwx------ is accessible only to the file's owner, whereas the permission setting rwxr--r-- makes the file readable by the world. Most Linux commands take single-character options, each with a hyphen as a prefix. When you want to use several options, type a hyphen and concatenate (string together) the option letters, one after another. Thus, ls -al is equivalent to ls -a -l as well as ls -l -a. Next month, we will look at the rest of the basic Linux commands CompTIA wants you to be familiar with for the upcoming A+ certification exams.
<urn:uuid:97438bee-924b-4944-8911-5d00f0687522>
CC-MAIN-2017-09
http://certmag.com/basic-linux-commands-need-know-comptias-new/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00576-ip-10-171-10-108.ec2.internal.warc.gz
en
0.883116
3,705
3.78125
4
Compression is very important. While resolution gets the attention, compression is critical and can be a silent killer - both for quality and bandwidth. Regardless of resolution, all surveillance video is compressed. And even if 2 cameras have the same resolution, their compression levels can be much different. [See our compression / quality tutorial for background.] Thankfully, compression in H.264 is standardized on a scale of 0 to 51, as shown in the image below: However, camera manufacturers almost never disclose Q levels used. Instead, they use a variety of homemade scales and naming systems. Here is a sample of ones we tested inside: So you can have 2 manufacturer's cameras with the same resolution but significantly different compression levels, and therefore varying image quality and bandwidth consumption. An industry first, IPVM has analyzed each of these manufacturers and answered these key questions: - What is the real H.264 quantization level for each camera manufacturer's default settings? How do they vary? Who defaults the lowest and highest? - To normalize the H.264 quantization levels so that each manufacturer had the same compression, what camera settings should be used? - How does the range of compression levels used for each manufacturer map to H.264 quantization levels? - What is the impact of bandwidth as H.264 quantization / compression levels are varied for different manufacturers? If you really care about image quality and optimizing bandwidth / storage use, this is a critical report.
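A useful rule of thumb for H.264 encoders is that bitrate roughly halves for every increase of about 6 in the quantization parameter, which is why two cameras at the same resolution can consume very different bandwidth. The sketch below applies only that rule of thumb to show relative bitrates; the baseline bitrate, the QP values, and the exact exponent are illustrative assumptions, not measurements from this report.

```python
# Relative bitrate estimate from the H.264 quantization parameter (QP),
# using the rule of thumb that bitrate roughly halves per +6 QP.
# The baseline (QP 28 at 2.0 Mbps) is an assumed reference point, not measured data.
BASELINE_QP = 28
BASELINE_MBPS = 2.0

def estimated_mbps(qp):
    return BASELINE_MBPS * 2 ** ((BASELINE_QP - qp) / 6.0)

for qp in (22, 26, 28, 32, 36):
    print(f"QP {qp}: ~{estimated_mbps(qp):.2f} Mbps (relative to QP {BASELINE_QP})")
```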
<urn:uuid:f1dc957a-716c-40ff-bdb8-c53873a3feec>
CC-MAIN-2017-09
https://ipvm.com/reports/ip-camera-compression-comparison
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00044-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931249
300
2.734375
3
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) was meant in part to help maintain the privacy and security of medical records that contain Protected Health Information (PHI). HIPAA grew out of the understanding being that when large amounts of PHI move into electronic formats, the risks of large-scale data breaches increase significantly. An issue with HIPAA is that it was so broadly written, and often narrowly interpreted. With that, its implementation has led to many cases of unintended consequences. One example is the ubiquitous patient status board. These boards have long played an important role in coordinating and communicating about patient care in hospitals. In busy wards, the status board often serves as the central access point for operational and patient-related information. Status boards have transitioned from dry-erase whiteboards to real-time electronic boards. Irrespective of the type of board, HIPAA mandates that the information on the board, which long included the patients' names and other medical data, can no longer include that in the public view. The narrow, yet definitively accepted HIPAA interpretation means that the regulation does not delineate between a 50GB PHI database, and a localized white-board with 15 names on it. Strictly speaking, status boards can contain PHI in the limited situation where only those involved with the listed patients’ PHI have a need to see it for Treatment, Payment and Health Care Operations (TPO). So, during an active meeting, putting names on a white board is OK, or if the room is restricted to only those who provide TPO activities for the listed patients. Otherwise, patient PHI should not be on white boards. The HIPAA privacy and security requirements prohibit displaying patients’ names in public via these status boards. Patients though can sign a waiver allowing their names to be displayed on the status board. But for most institutions, HIPAA has made use of status boards much harder. Another issue is when it comes to mental health and substance-abuses issues; HIPAA has placed significant constraints on healthcare providers. Medical staff often cannot proactively reach out to family members because HIPAA patient privacy rules prevents them from telling family members about mental health issues and addiction issues if the patient is a legal adult, without their express consent. It should be noted though that the HIPAA final rule in section 45 C.F.R. § 164.502, allows healthcare providers to disclose protected health information for treatment purposes without patient consent in some limited cases. HIPAA also stymies a physician’s ability to reply to negative online reviews and social media interactions. Sites such as vitals.com, healthgrades.com, RateMDs.com and countless others exist where patients (and trolls) can post a review of a physician. Yelp has reviews for restaurants, and also reviews for doctors in all major cities. A similar situation arises with teachers, as the Family Educational Rights and Privacy Act (FERPA) also limits how teachers can reply to teacher rating sites. While restaurants often reply to negative reviews, physicians who attempt a direct reply to a patient’s social media posting may be in violation of HIPAA. The cruel reality is that a patient can post just about anything they want about a physician. But that same physician may be violating HIPAA if they reply to their patient via social media. 
It is also important to note that just because a patient blogs about their condition or tweets about their medical status, that in no way means they are waiving their HIPAA rights. As to the quality of these reviews, Niam Yaraghi, a fellow at The Brookings Institution, writes that patients are often neither qualified nor capable of evaluating the quality of the medical services that they receive. How can a patient with no medical expertise know that the treatment option they received was the best available one? How can a patient's family, whose relative died, know that the physician had provided their loved one with the best possible medical care? If patients are not qualified to make medical decisions and rely on physicians' medical expertise to make such decisions, then how can they evaluate the quality of such decisions and know that their doctor's decision was the best possible one? In the next blog post, I'll conclude with some action items, in addition to sage advice from Rebecca Herold.
<urn:uuid:f5fd2e10-fca7-4791-8b4b-d635571897a3>
CC-MAIN-2017-09
http://www.csoonline.com/article/3014612/security-awareness/physicians-and-social-media-where-there-s-no-second-opinion.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00220-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960214
924
2.578125
3
Law enforcement technology may not have reached the point where officers are replaced by cyborgs (think RoboCop), but new automated devices and robots are making public safety efforts more efficient and significantly less dangerous. According to experts, unmanned ground robots, 3-D technology and various scientific developments are slowly but steadily changing how police, tactical and rescue personnel spend their time and do their jobs.

Four-wheeled drones (that have more in common with Mars rovers than screenwriter Ed Neumeier's RoboCop character) are increasingly being used to extend the eyes and ears of police and military personnel. A variety of companies are producing these robots, which are designed to keep people out of harm's way. For instance, a line of ground and maritime robots from iRobot, a robot designer and manufacturer, is focused on achieving mission objectives such as observation and investigation. The company's small unmanned ground vehicles have been used by bomb squads and SWAT teams to gather information prior to raids.

Knob Moses, head of iRobot's Government and Industrial Robots Division and a retired Navy supply officer, said giving people the ability to defuse bombs and investigate scenes with a remote presence that features audio and video feeds is a huge safety benefit. Whether it's a hostage situation or a drug lab, the ability to see and hear what's going on from a distance improves situational awareness and saves lives. But Moses cautioned that robotics must improve before the devices can truly be force multipliers. He explained that although iRobot's machines can cut wires and lift certain items, the extent of that manipulation is less than that of a 7-year-old's fingers, so the robots really can't replace human hands for complex tasks. Current-generation robots also can't right themselves if they tip over, he said, so a person would need to go and get the machine. "What we really would like to be able to do is have a robot go into a building ... know there are stairs there and climb [them]," Moses said, adding that if a robot's audio and visual signals are lost, he'd also like to see a machine be able to automatically return to the last place it had communications. "In terms of enhancing what a law enforcement or military unit can do, [a robot] is definitely a good tool ... but you still really would not want to use it for more sensitive operations," Moses admitted.

The 30-pound iRobot SUGV is a tactical mobile robot that gathers situational awareness data for soldiers and public safety officials in dangerous situations. Various payloads and sensors may be added to extend its usefulness. The robot is used for surveillance and reconnaissance, bomb disposal, checkpoint inspections and explosive detection. Environmental conditions aren't much of a hindrance, as the SUGV is designed to operate on both rough terrain and city streets.

Automation is changing public safety in other ways too. Take the seemingly mundane task of verifying a person's identity. Now fingerprints can be checked against law enforcement databases in 60 seconds on the roadside using mobile devices connected wirelessly to internal systems. This is replacing the time-consuming task of hauling suspects to a police station for a standard fingerprint check.
Although mobile fingerprint technology has been around for years, its use is now common in law enforcement, including the Los Angeles Police Department (LAPD) where officers utilize Blue Check fingerprinting devices as a part of their regular routine. Blue Check may not be a “James Bond-type” futuristic tool, said LAPD Officer Steve Dolan, but it has significantly impacted officers’ jobs. “As you look at the technology, it is not just what kind of cool gadget it is,” Dolan said. “It is how efficient you can get workable data in a meaningful and timely manner to the officers, and this [fills that] gap.” Mobile fingerprinting technology also helps with public relations: A routine traffic stop where there are questions about a person’s identity now doesn’t have to frustrate a citizen with a drive to the station, Dolan explained. “Now we can do it in just a couple of minutes, and the quality of contact between the officer and citizen is much better,” he said. “Therefore your experience with the police department is more favorable, regardless of whether you get a ticket or not.” This field-based biometric technology can be viewed as a force multiplier, because it makes officers more efficient. In addition, the technology improves officer and public safety by letting officers understand who they’re dealing with in a timelier manner. Other technologies also provide officers with more accurate information in a faster time frame. For instance, 3-D technology is making accident and homicide investigations more dynamic. Though not quite at the level seen on popular TV dramas like CSI, 3-D scanners enable law enforcement personnel to capture data from crime scenes and create a model that takes a judge or jury on a virtual walk-through of the event. The Central Virginia Regional Crash Team — a multijurisdictional unit composed of law enforcement agencies in the city of Bedford, Bedford and Franklin counties, the towns of Vinton and Rocky Mount, as well as the Virginia State Police and National Park Service — received two Leica HDS4500 3-D scanners in August for use during investigations. Capt. Jim Bennett of the Bedford Police Department said officers investigating a crash scene typically rely on tape measures and other equipment to draw two-dimensional diagrams and perform calculations to determine the various factors that led to the accident. But 3-D scanner technology not only produces three-dimensional images, it creates those images much faster and with less manpower. For typical accident investigations, Bennett said six people are sent to the scene, but the 3-D scanners’ efficiency means that a crew of two do the same job. “While it takes us hours or days to take these measurements, [the 3-D scanner] can do a 360-degree scan of a scene in six to eight minutes,” he said. The process is not much different than what’s seen in a science fiction movie, where a complete room scan is done by a laser, Bennett said. The scanner measures the entire scene in eighth-inch increments. Those data points are downloaded onto a computer, where software uses them to reconstruct a 3-D model of the scene. In addition to providing the model for investigators and people in a courtroom to visualize certain elements during a trial, the technology lets officers respond in court if their version of an incident is challenged by opposing legal counsel. 
If there was a murder scene and the suspect was 6 feet 4 inches tall, and a defense attorney offered an unknown suspect of differing height and build as the murderer, Bennett said officers can input that data into the 3-D model and use the information to confirm or deny the plausibility. “You can go back and re-create [scenarios] and come up with a truer picture of what may have happened, or you can dispel a theory,” Bennett said. While many people associate technology with gadgets, Dustin Haisler, director of government innovation for Spigit, a software developer that focuses on idea collection and management, said crowdsourcing and collaboration techniques can make law enforcement more effective. Software programs that collect the thoughts and ideas of officers from a “bottom-up” perspective, he says, help break down some of the barriers associated with strict hierarchies in law enforcement agencies. Haisler — former assistant city manager of Manor, Texas — made extensive use of crowdsourcing during his public-sector career. Spigit’s solutions provide a Web-based platform where chatter from officers — in the form of ideas or suggestions — can be captured and then voted up or down by the officers’ peers and others. Haisler explained that the process is completely transparent so that a chief or decision-maker can see the idea, see how well it may be accepted, and ultimately make a decision on the idea. “We are using behavioral science to make this process fun in an agency, but also to allow this chain of command that is traditionally within an agency to really be broken down,” Haisler said. “We operate on the premise of allowing anyone, whatever their role is within an agency, to submit their recommendations or to help validate and comment.” This way, he says, it grows through this process into something more actionable. The open data movement being embraced globally is another advancement Haisler sees growing in importance for law enforcement agencies. While there are countless streams of data being compiled by databases and published online, Haisler said he believes citizens and police officers adding “contextual intelligence” to the data will add value to it and help investigators solve complex crimes. Haisler also said most police officers serving a community know where crime hot spots are, but interactive use of open data might solve the deeper questions about the root cause of some crimes. “It is probably going to be driven by allowing even citizens to look at open crime maps and use them like a virtual bulletin board,” he said. “[Right now], they can see it, but they can’t say, ‘Here is a piece of information about it or something you need to know.’ Allowing that information to get back to officers can help them better do their jobs.”
<urn:uuid:d7197e9e-2e6c-4dae-9ba4-786e108dea31>
CC-MAIN-2017-09
http://www.govtech.com/technology/RoboCop-Revisited-How-Automation-Is-Transforming-Public-Safety.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00572-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952463
1,982
2.71875
3
Snow Leopard's answer to the concurrency conundrum is called Grand Central Dispatch (GCD). As with QuickTime X, the name is extremely apt, though this is not entirely clear until you understand the technology. The first thing to know about GCD is that it's not a new Cocoa framework or similar special-purpose frill off to the side. It's a plain C library baked into the lowest levels of Mac OS X. (It's in libSystem, which incorporates libc and the other code that sits at the very bottom of userspace.) There's no need to link in a new library to use GCD in your program. Just #include <dispatch/dispatch.h> and you're off to the races. The fact that GCD is a C library means that it can be used from all of the C-derived languages supported on Mac OS X: Objective-C, C++, and Objective-C++. GCD is built on a few simple entities. Let's start with queues. A queue in GCD is just what it sounds like. Tasks are enqueued, and then dequeued in FIFO order. (That's "First In, First Out," just like the checkout line at the supermarket, for those who don't know and don't want to follow the link.) Dequeuing the task means handing it off to a thread where it will execute and do its actual work. Though GCD queues will hand tasks off to threads in FIFO order, several tasks from the same queue may be running in parallel at any given time. This animation demonstrates. You'll notice that Task B completed before Task A. Though dequeuing is FIFO, task completion is not. Also note that even though there were three tasks enqueued, only two threads were used. This is an important feature of GCD which we'll discuss shortly. But first, let's look at the other kind of queue. A serial queue works just like a normal queue, except that it only executes one task at a time. That means task completion in a serial queue is also FIFO. Serial queues can be created explicitly, just like normal queues, but each application also has an implicit "main queue" which is a serial queue that runs on the main thread. The animation above shows threads appearing as work needs to be done, and disappearing as they're no longer needed. Where do these threads come from and where do they go when they're done? GCD maintains a global pool of threads which it hands out to queues as they're needed. When a queue has no more pending tasks to run on a thread, the thread goes back into the pool. This is an extremely important aspect of GCD's design. Perhaps surprisingly, one of the most difficult parts of extracting maximum performance using traditional, manually managed threads is figuring out exactly how many threads to create. Too few, and you risk leaving hardware idle. Too many, and you start to spend a significant amount of time simply shuffling threads in and out of the available processor cores. Let's say a program has a problem that can be split into eight separate, independent units of work. If this program then creates four threads on an eight-core machine, is this an example of creating too many or too few threads? Trick question! The answer is that it depends on what else is happening on the system. If six of the eight cores are totally saturated doing some other work, then creating four threads will just require the OS to waste time rotating those four threads through the two available cores. But wait, what if the process that was saturating those six cores finishes? Now there are eight available cores but only four threads, leaving half the cores idle. 
With the exception of programs that can reasonably expect to have the entire machine to themselves when they run, there's no way for a programmer to know ahead of time exactly how many threads he should create. Of the available cores on a particular machine, how many are in use? If more become available, how will my program know? The bottom line is that the optimal number of threads to put in flight at any given time is best determined by a single, globally aware entity. In Snow Leopard, that entity is GCD. It will keep zero threads in its pool if there are no queues that have tasks to run. As tasks are dequeued, GCD will create and dole out threads in a way that optimizes the use of the available hardware. GCD knows how many cores the system has, and it knows how many threads are currently executing tasks. When a queue no longer needs a thread, it's returned to the pool where GCD can hand it out to another queue that has a task ready to be dequeued. There are further optimizations inherent in this scheme. In Mac OS X, threads are relatively heavyweight. Each thread maintains its own set of register values, stack pointer, and program counter, plus kernel data structures tracking its security credentials, scheduling priority, set of pending signals and signal masks, etc. It all adds up to over 512 KB of overhead per thread. Create a thousand threads and you've just burned about a half a gigabyte of memory and kernel resources on overhead alone, before even considering the actual data within each thread. Compare a thread's 512 KB of baggage with GCD queues which have a mere 256 bytes of overhead. Queues are very lightweight, and developers are encouraged to create as many of them as they need—thousands, even. In the earlier animation, when the queue was given two threads to process its three tasks, it executed two tasks on one of the threads. Not only are threads heavyweight in terms of memory overhead, they're also relatively costly to create. Creating a new thread for each task would be the worst possible scenario. Every time GCD can use a thread to execute more than one task, it's a win for overall system efficiency. Remember the problem of the programmer trying to figure out how many threads to create? Using GCD, he doesn't have to worry about that at all. Instead, he can concentrate entirely on the optimal concurrency of his algorithm in the abstract. If the best-case scenario for his problem would use 500 concurrent tasks, then he can go ahead and create 500 GCD queues and distribute his work among them. GCD will figure out how many actual threads to create to do the work. Furthermore it will adjust the number of threads dynamically as the conditions on the system change. But perhaps most importantly, as new hardware is released with more and more CPU cores, the programmer does not need to change his application at all. Thanks to GCD, it will transparently take advantage of any and all available computing resources, up to—but not past!—the optimal amount of concurrency as originally defined by the programmer when he chose how many queues to create. But wait, there's more! GCD queues can actually be arranged in arbitrarily complex directed acyclic graphs. (Actually, they can be cyclic too, but then the behavior is undefined. Don't do that.) Queue hierarchies can be used to funnel tasks from disparate subsystems into a narrower set of centrally controlled queues, or to force a set of normal queues to delegate to a serial queue, effectively serializing them all indirectly. 
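The funneling arrangement just described maps directly onto dispatch_set_target_queue(). The fragment below is only a sketch (the queue labels and the count of three subsystem queues are invented for illustration): several independently created queues are re-targeted at a single serial queue, so work submitted to any of them ends up executing one task at a time.

```c
#include <dispatch/dispatch.h>

/* Three "subsystem" queues that delegate to one serial "bottleneck" queue.
 * Tasks may be submitted to any subsystem queue independently, but the
 * serial target indirectly serializes their execution. */
static dispatch_queue_t subsystem[3];
static dispatch_queue_t bottleneck;

static void build_hierarchy(void)
{
    /* A NULL attribute creates a serial queue. */
    bottleneck = dispatch_queue_create("com.example.bottleneck", NULL);

    for (int i = 0; i < 3; i++) {
        subsystem[i] = dispatch_queue_create("com.example.subsystem", NULL);
        /* Re-target: anything dequeued from subsystem[i] now runs by way
         * of the bottleneck queue, one task at a time. */
        dispatch_set_target_queue(subsystem[i], bottleneck);
    }
}
```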
There are also several levels of priority for queues, dictating how often and with what urgency threads are distributed to them from the pool. Queues can be suspended, resumed, and cancelled. Queues can also be grouped, allowing all tasks distributed to the group to be tracked and accounted for as a unit. Overall, GCD's use of queues and threads forms a simple, elegant, but also extremely pragmatic architecture.
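To make the pieces described above concrete, here is a minimal, self-contained sketch written against the plain-C libdispatch interface, using the function-pointer variants such as dispatch_group_async_f rather than blocks. The queue label and task counts are arbitrary; the program enqueues a few tasks on a concurrent global queue and on a programmer-created serial queue, then uses a group to wait for all of them.

```c
#include <dispatch/dispatch.h>
#include <stdint.h>
#include <stdio.h>

/* Work function: the plain-C entry points take a function pointer plus a
 * context pointer rather than a block. */
static void do_work(void *ctx)
{
    printf("task %ld running\n", (long)(intptr_t)ctx);
}

int main(void)
{
    /* A concurrent global queue: tasks leave it in FIFO order but may run
     * in parallel on however many threads GCD decides to hand out. */
    dispatch_queue_t concurrent =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* A serial queue we create ourselves: one task at a time, so
     * completion order is FIFO as well. */
    dispatch_queue_t serial =
        dispatch_queue_create("com.example.serial", NULL);

    /* A group lets us track all submitted tasks as a unit. */
    dispatch_group_t group = dispatch_group_create();

    for (intptr_t i = 0; i < 3; i++)
        dispatch_group_async_f(group, concurrent, (void *)i, do_work);
    for (intptr_t i = 3; i < 6; i++)
        dispatch_group_async_f(group, serial, (void *)i, do_work);

    /* Block until every task in the group has finished. */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

    dispatch_release(group);
    dispatch_release(serial);
    return 0;
}
```

Built on Snow Leopard or later with no extra libraries (libSystem supplies libdispatch), the tasks sent to the global queue may interleave, while the tasks sent to the serial queue always run one at a time in submission order.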
<urn:uuid:893abb3b-5acd-4b87-aa62-d1108365205f>
CC-MAIN-2017-09
https://arstechnica.com/apple/2009/08/mac-os-x-10-6/12/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00448-ip-10-171-10-108.ec2.internal.warc.gz
en
0.962556
1,597
2.859375
3
The seventh-grade classroom at Aptos Middle School buzzed with animated kids, many of whom whispered to friends and shot curious looks at the visitors scattered around their classroom. Local politicians, the superintendent of schools and media visited the classroom of 25 kids last Friday to watch a special lesson designed to teach children how to protect themselves online. All 55,000 elementary to high school students in San Francisco got the lesson on the same day, part of fulfilling a new requirement to a U.S. law called the Children's Internet Protection Act (CIPA). To get federal funding, public schools have to instruct students how to protect their privacy, avoid cyberbullying and practice ethical behavior online. You can watch an IDG News Service video of the classroom here. The U.S. is the only nation that requires online safety instruction at public schools, but other countries may soon join it. The European Commission is mulling over a law that would mandate educating kids about online safety, and in the UK, newly appointed adviser on childhood, Claire Perry, is talking about the need to make online safety part of the public school curriculum. But much of what is taught about online safety is not rooted in evidence, according to Stephen Balkam, CEO of the Family Online Safety Institute, a global organization based in Washington, D.C. "There's precious little research on the effectiveness of online safety education," said Balkam, who also worries that a fear-based message about the dangers online can overlook the Internet's many benefits. The U.S. Department of Education cites a 2008-2009 poll that shows 28 percent of students reported being bullied at school, while 6 percent were bullied online. The next public event to draw attention to kids' use of the Internet is coming up on Feb. 5, when the EU and U.S. will observe Safer Internet Day.
<urn:uuid:a2cc0c7a-9591-4fa4-9d14-3a4ee08cfc1e>
CC-MAIN-2017-09
http://www.cio.com/article/2388957/internet/a-lesson-in-cyberbullying.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00568-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957205
383
3.171875
3
Explanation of the difference between creating a backup and disk cloning

The Backup operation of Acronis software creates an image file for backup and disaster recovery purposes, while the Disk Clone tool simply copies or moves the entire contents of one hard disk drive to another. Here's how both tools work and when you should use them.

When you create a backup with Acronis True Image or Acronis Backup, you get a compressed .tib file containing an exact copy of your hard disk, a disk partition, or individual files or folders (you make this choice when you create an image archive). If you create a backup of a disk or partition, this backup contains everything that resides on the selected disk or partition, including the operating system, applications and all files and folders. You can save this image to any supported storage device and use it as a backup or for disaster recovery purposes.

When you use the Disk Clone tool, you copy all contents of one hard disk drive onto another hard disk drive: as a result, both the source and the target disk have the same data. This function allows you to transfer all the information (including the operating system and installed programs) from a small hard disk drive to a large one without having to reinstall and reconfigure all of your software. The Disk Clone operation is not generally used as a backup strategy, as it offers little flexibility. In general, disk cloning is a one-time operation designed to clone one disk to a different one for the purpose of migrating to a larger hard drive or to a new machine.

A backup operation offers greater flexibility as a backup strategy:
- Backups can be scheduled (e.g. regular automatic backups that require no user interaction).
- Backup changes can be appended incrementally or differentially (i.e. after a full backup, subsequent backups will take less time and occupy less space than the first one).
- Backups allow you to keep several versions of the backed-up data, and you can restore to one of the previous versions (e.g. you can keep backups from one, two and three weeks ago on the same disk and recover the version from the moment that you need).
- Backups can be mounted and searched through (e.g. if you want to quickly find, view and copy a file from them).

Either way (backup and recovery of the entire disk, or disk clone) you can transfer the whole operating system and installed programs to a new disk. (A minimal code sketch of the incremental idea appears after the list below.)

See also:
- Backing up with Acronis True Image 2016
- Cloning with Acronis True Image 2016
- Acronis True Image 2014: Cloning Disks
- True Image 2013 by Acronis: Cloning Basic Disks
- Acronis Disk Director 12: Cloning Basic Disks
- Cloning Laptop Hard Disk
- Resizing Partitions during Disk Cloning
- Transferring a System from IDE to SATA Hard Disk and Vice Versa
- Creating a Sector-By-Sector Backup with Acronis Products
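To illustrate the incremental idea referenced above, here is a deliberately tiny C sketch. It is not how Acronis .tib archives actually work; the file names, the 4 KiB block size, and the naive offset-plus-raw-block delta format are all invented for the example. The point is simply why a follow-up backup can be far smaller than the first one: only blocks that differ from the previous image are written out.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK 4096  /* arbitrary block size for the illustration */

int main(void)
{
    FILE *current  = fopen("disk.img",        "rb"); /* hypothetical source image */
    FILE *previous = fopen("full-backup.img", "rb"); /* last full backup          */
    FILE *delta    = fopen("incremental.bin", "wb"); /* changed blocks only       */
    if (!current || !previous || !delta) { perror("open"); return 1; }

    unsigned char cur[BLOCK], prev[BLOCK];
    long offset = 0;
    size_t n;
    while ((n = fread(cur, 1, BLOCK, current)) > 0) {
        size_t m = fread(prev, 1, BLOCK, previous);
        /* Write a block only if it is new or its contents changed. */
        if (m != n || memcmp(cur, prev, n) != 0) {
            fwrite(&offset, sizeof offset, 1, delta);
            fwrite(cur, 1, n, delta);
        }
        offset += (long)n;
    }

    fclose(current);
    fclose(previous);
    fclose(delta);
    return 0;
}
```

A clone, by contrast, is a full copy every time, which is why it yields exactly one restore point and no history.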
<urn:uuid:5d34df74-7c99-45a9-acdc-a8144b246d26>
CC-MAIN-2017-09
https://kb.acronis.com/content/1540
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00092-ip-10-171-10-108.ec2.internal.warc.gz
en
0.90118
620
2.703125
3
Google, Microsoft and Mozilla announced on Jan 3 that they are revoking trust in two digital certificates accidentally issued by Turkish certificate authority (CA) TURKTRUST. When you start talking about another CA fiasco, there are many people whose eyes glaze over when reading technical details because they know it's bad, but really have no idea why it is so dangerous for digital certificates that are considered trusted to end up being untrusted. The root problem with a bad digital certificate is that it is a certified lie that allows people to easily be compromised by bad actors, cybercriminals or by Big Brother in your browser. Silently in the background of most browsers, new digital certificates from valid CAs are accepted. Many people feel fairly safe when they see the padlock in their browser window which allegedly indicates an SSL-enabled secure connection for private communications like banking or email. But an eavesdropping attacker who can obtain a fake digital certificate can successfully impersonate every encrypted website you visit without you knowing that you are not on the genuine site. By using a fraudulent certificate, an eavesdropper can quietly launch a man-in-the-middle (MITM) attack to watch or record all encrypted web traffic while the user is clueless that it's happening. In other words, there is nothing private or secure about your encrypted web browsing. Adam Langley, Google Software Engineer, wrote that "Chrome detected and blocked an unauthorized digital certificate for the "*.google.com" domain" on Christmas Eve. "We investigated immediately and found the certificate was issued by an intermediate certificate authority (CA) linking back to TURKTRUST, a Turkish certificate authority. Intermediate CA certificates carry the full authority of the CA, so anyone who has one can use it to create a certificate for any website they wish to impersonate." On Christmas, Google blocked that intermediate CA and alerted TURKTRUST and other browser vendors. TURKTRUST told Google that in August 2011, "they had mistakenly issued two intermediate CA certificates to organizations that should have instead received regular SSL certificates." On Dec 26, Google pushed "another Chrome metadata update to block the second mistaken CA certificate and informed the other browser vendors." Langley added, "Given the severity of the situation, we will update Chrome again in January to no longer indicate Extended Validation status for certificates issued by TURKTRUST." Regarding Mozilla Firefox, Michael Coates wrote that Mozilla is suspending the inclusion of a TURKTRUST root certificate. "There are currently two TURKTRUST root certificates included in Mozilla's CA Certificate program. TURKTRUST had requested that a newer root certificate be included, and their request had been approved and was in Firefox 18 beta. However, due to the mis-issued intermediate certificates, we decided to suspend inclusion of their new root certificate for now." Although there is a technical discussion on the Mozilla developers security policy group, Coates has the best non-technical description of just how dangerous the issue can be. He described the impact as: An intermediate certificate that is used for MITM allows the holder of the certificate to decrypt and monitor communication within their network between the user and any website. 
Additionally, if the private key to one of the mis-issued intermediate certificates was compromised, then an attacker could use it to create SSL certificates containing domain names or IP addresses that the certificate holder does not legitimately own or control. An attacker armed with a fraudulent SSL certificate and an ability to control their victim's network could impersonate websites in a way that would be undetectable to most users. Such certificates could deceive users into trusting websites appearing to originate from the domain owners, but actually containing malicious content or software.

Microsoft also issued a "Fraudulent Digital Certificates Could Allow Spoofing" security advisory: "Microsoft is aware of active attacks using one fraudulent digital certificate issued by TURKTRUST Inc., which is a CA present in the Trusted Root Certification Authorities Store." All supported Microsoft Windows releases are affected by this issue. "This fraudulent certificate could be used to spoof content, perform phishing attacks, or perform man-in-the-middle attacks against several Google web properties." So "Microsoft is updating the Certificate Trust list (CTL) and is providing an update for all supported releases of Microsoft Windows that removes the trust of certificates that are causing this issue."

Meanwhile, Microsoft has allegedly received a "crushing blow." After investigating for 19 months, the FTC closed the Google antitrust review with nothing more than a slight slap on the wrist. The tech giant agreed to stop scraping rivals' content and to allow its competitors access to some of its mobile patents. David Drummond, Google's chief legal officer, wrote, "The conclusion is clear: Google's services are good for users and good for competition." Meanwhile, ReadWriteWeb wrote, "This is a crushing blow to Microsoft, which has spent millions of dollars on lobbyists and phony grassroots groups over the past several years hoping to land Google in hot water."

In other Microsoft news, the websites affected by watering hole attacks that exploit the critical zero-day hole in IE now also include energy manufacturer Capstone Turbine Corp as well as political sites. It may come to light that more websites are also hosting this IE zero-day exploit, which allows attackers to gain control of machines running fully patched versions of Internet Explorer 6, 7 and 8. Of Microsoft's two critical fixes coming on Patch Tuesday, one will close vulnerabilities in Windows XP through Windows 8 and Windows Server 2003, 2008, 2008 R2 and 2012; you should plan on reboots. But for now, users will have to stick with the Band-Aid quick fix for the zero-day exploiting IE. That, or switch browsers.
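Chrome caught the bogus *.google.com certificate because it pins the keys it expects for Google domains and compares what a server presents against that list during the TLS handshake. As a rough and much-simplified illustration of the comparison step only, the C sketch below uses OpenSSL's SHA256() to fingerprint a DER-encoded certificate file and check it against a stored value. The all-zero pinned digest is a placeholder, and real pinning operates on the certificate or public key received over the wire rather than on a file on disk.

```c
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder: in a real client this would be the known-good SHA-256
 * fingerprint of the expected certificate (or, better, its public key). */
static const char pinned_hex[] =
    "0000000000000000000000000000000000000000000000000000000000000000";

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s cert.der\n", argv[0]); return 2; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    /* Read the whole DER-encoded certificate into memory. */
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    unsigned char *der = malloc((size_t)len);
    if (!der || fread(der, 1, (size_t)len, f) != (size_t)len) {
        fprintf(stderr, "read error\n");
        return 2;
    }
    fclose(f);

    /* Hash the certificate and render the digest as lowercase hex. */
    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256(der, (size_t)len, md);
    free(der);

    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);

    if (strcmp(hex, pinned_hex) == 0) {
        puts("fingerprint matches the pin");
        return 0;
    }
    puts("FINGERPRINT MISMATCH: do not trust this certificate");
    return 1;
}
```

Linking against libcrypto (for example with -lcrypto) is required; the exit code distinguishes a match from a mismatch so the check can be scripted.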
<urn:uuid:a30ffe4c-f9c9-4369-b4c5-b8cf0936fcb1>
CC-MAIN-2017-09
http://www.networkworld.com/article/2223772/microsoft-subnet/chrome--firefox--ie-to-block-fraudulent-digital-certificate.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00444-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93232
1,338
2.578125
3
Fred Baker, Cisco Systems In today's Internet, site multihoming—an edge network configuration that has more than one service provider but does not provide transit communication between them—is relatively common. Per the statistics at www.potaroo.net, almost 40,000 Autonomous Systems are in the network, of which about 5,000 seem to offer transit services to one or more customers. The rest are in terminal positions, possibly meaning three things. They could be access networks, broadband providers offering Internet access to small companies and residential customers; they could be multihomed edge networks; or they might be networks that intend to multihome at some point in the future. The vast majority, on the order of 75 percent, are multihomed or intend to multihome. That is but one measure; you do not have to use Border Gateway Protocol (BGP) routing to have multiple upstream networks. Current estimates suggest that there is one multihomed entity per 50,000 people worldwide, and one per 18,000 in the United States. We also expect site multihoming to become more common. A current proposal in Japan suggests that each home might be multihomed; it would have one upstream connection for Internet TV, and one or more other connections provided by Internet Service Providers (ISPs), operating over a common Digital Subscriber Line (DSL) or fiber-optic infrastructure. That scenario has one multihomed entity for every four people. Why do edge networks multihome? Reasons vary. In the Japanese case just propounded, it is a fact of life—users have no other option. In many cases, it is a result of a work arrangement, or a strategy for achieving network reliability through redundancy. For present purposes, this article considers scaling targets derived from a world of 10 billion people (circa 2050), and a ratio of one multihomed entity per thousand people—on the order of 10,000,000 multihomed entities at the edge of the Internet. Those estimates may not be accurate 40 years from now, but given current trends they seem like reasonable guesses. RFC 1726 , the technical criteria considered in the selection of what at the time was called IP Next Generation (IPng), did not mention multihoming per se. Even so, among the requirements are scalable and flexible routing, of which multihoming is a special case. When IPv6 was selected as the "next generation," multihoming was one of the topics discussed. The Internet community has complained that this particular goal was not fulfilled. Several proposals have been proffered; unfortunately, each has benefits, and each has concerns. No single perfect solution is universally accepted. In this article, I would like to look at the alternatives proposed and consider the effects they have. In this context, the goals set forth in RFC 3582 are important; many people tried to state what they would like from a multihoming architecture, and the result was a set of goals that solutions only asymptotically approach. The proposals considered in this article include: BGP Multihoming involves a mechanism relatively common in the IPv4 Internet; the edge network either becomes a member of a Regional Internet Registry (RIR) [APNIC, RIPE, LACNIC, AFRINIC, ARIN] and from that source obtains a Provider-Independent (PI) prefix, or obtains a Provider-Allocated (PA) prefix from one provider and negotiates contracts with others using the same prefix. 
In any case, it advertises the prefix in BGP, meaning that all ISPs—including, in the PA case, the provider that allocated it—must carry it as a separate route in their routing tables. The benefit to the edge is easily explained, and in the case of large organizations it is substantial. Consider the case of Cisco Systems, whose internal network rivals medium-sized ISPs for size and complexity. With about 30 Points of Attachment (PoAs) to the global Internet, and at least as many service providers, Cisco has an IPv6 /32 PI prefix, and hundreds of offices to interconnect using it. One possible way to enumerate the Cisco network would be to use the next five bits of its address (32 /37 prefixes) at its PoAs, and allocate prefixes to its offices by the rule that if their default route is to a given PoA, their addresses are derived from that PoA. By advertising the PoA's /37 and a backup /32 into the Internet core at each PoA, Cisco could obtain effective global routing. It would also obtain relative simplicity for its internal network—only one subnet is needed on any given Local-Area Network (LAN) regardless of provider count or addressing, and routing can be optimized independently from the outside world.

The problem that arises with PI addressing, if taken to its logical extreme, is that the size of the routing table explodes. If every edge network obtains a PI prefix—neglecting for the moment both BGP traffic engineering and the kind of de-aggregation suggested in Cisco's case—the logical outcome of enumerating the edge is a routing table with on the order of 10^7 routes. The memory required to store the routing table, and in the Secure Interdomain Routing (SIDR) case the certificates that secure it, is one of the factors in the cost of equipment. The volume of information also affects the time it takes to advertise a full routing table, and in the end the amount of power that a router uses, the heat it produces, and a switching center's air conditioning requirements. Thus both the capital cost of equipment used in transit networks and the cost of operations would be affected. In effect, the Internet becomes the "poster child" for the Tragedy of the Commons.

Steve Deering proposed the concept of exchange-based addressing at the IETF meeting in Stockholm in 1995, under the name Metropolitan Addressing. In this model, prefixes do not map to companies, but to Internet exchange consortia, likely regional. One organizing principle might be to associate an Internet exchange with each commercial airport worldwide, about 4,000 total, resulting in a global routing table on the same order of magnitude in size. Edge networks, including residential networks, within that domain obtain their prefix from the exchange, and it is used by any or all ISPs in the region. Routes advertised to other regions, even within the same ISP, are aggregated to the consortium prefix.

The benefits to the edge network in exchange-based addressing are similar to the benefits of PI addressing for a large corporation. In effect, the edge networks served by an exchange consortium behave like the "departments" of a "user consortium," and they enjoy great independence from their upstream providers. They can multihome or move between providers without changing their addressing, and on a global scale the routing table is contained to a small multiple of the number of such consortia.
However, the benefit to users is in most cases a detriment to their ISPs; the ISPs are forced to maintain routes to each user network served by the consortium—or at least routes for their own customers and a default route to the exchange. Thus, the complexity of routing is moved from the transit core to the access networks serving regional consortia. In addition, if there is no impediment to a user flitting among ISPs, users can be expected to flit, imposing business costs. The biggest short-term effect on the ISP might well be the reengineering of its transit contracts. In today's Internet, a datagram sent by users to their ISPs is quickly shuttled to the destination’s ISPs, which then carry it over the long haul. In an exchange-based network, there is no way to remotely determine which local ISP or ISP instance is serving a given customer. Hence, the sender's ISP carries the datagram until it reaches the remote consortium, whence it switches to the access network serving the destination. One could argue that a "sender-pays" model might have benefits, but it is very different from the present model. The edge network has problems, too. If the edge network is sufficiently distributed, it will have services in several exchange consortia, and therefore several prefixes. Although there is nothing inherently bad about that, it may not fit the way a cloud computing environment wants to move virtual hosts around, or miss other requirements. Level 3 Multihoming: Shim6 The IETF's shim6 model starts from the premise that edge networks obtain their prefixes from their upstream ISPs—PA Addressing. If a typical residential or small business does so, there is no question of advertising its individual route everywhere; the ISP can route internally as its needs to, but globally, the number of ISPs directs the size of the routing table. If that is, as potaroo suggests, on the order of 10,000, the size of the routing table will be on the same order of magnitude. The benefit to the ISP should be obvious; it does not have to change its transit contracts, and although there will be other concerns, it does not have the routing table ballooning memory costs or route exchange latencies. However, as exchange-based addressing moves operational complexity from the transit core to the access network, shim6 moves such complexities to the edge network itself and to the host in it. If a network has multiple upstream providers, each LAN in it will carry a subnet from each of those providers—not one subnet per LAN, but as many as the providers of the host's LAN will use. At this point, the ingress filtering of RFC 3704 at the provider becomes a problem at the edge; the host must select a reasonable address for any session it opens, and must do so in the absence of specific knowledge of network routing. A wrong guess can have dramatic effects; a session routed to the wrong provider may not work at all, and an unfortunate address choice can change end-to-end latency from tens of milliseconds to hundreds or worse by virtue of backbone routing. Application layer referrals and other application uses of addresses also have difficulties. Although the address a session is using will work both within and without the network, if a host has more than one address, one of the other addresses may be more appropriate to a given use. Hence, the application that really wants to use addresses is saddled with finding all of the addresses that its own host or a peer host might have. There is also an opportunity. 
TCP today associates sessions with their source and destination addresses. The shim6 model, implemented in the Stream Control Transmission Protocol (SCTP) and Multipath TCP (MPTCP), allows a session to change its addresses, meaning that a session can survive a service provider outage. Doing the same in TCP requires the insertion of a shim protocol between IP and TCP; at the Internet layer, the address might change, but the shim tracks the addresses for TCP.

There are, of course, ways to solve the outstanding problems. For simple cases, RFC 3484 [3, 4] describes an address-selection algorithm that has some promise. In the Japanese case, a residential host might use link-local addresses within its own network, addresses appropriate to the television service on its TV and set-top box, and an ISP's prefix for everything else. If there is more than one router in the residential LAN serving more than one ISP, exit routing can be accomplished by having the host send data using an ISP's source address to the router from which it learned the prefix. When the network becomes more complex, though, we are looking at new routing protocols that can route based on a combination of the source and the destination addresses, and we are looking at network management methodologies that make address management simpler than it is today, adding and dropping subnets on LANs—and as a result renumbering networks—without difficulty. It also implies a change to the typical host implementing the shim protocol. Those technologies either do not exist or are not widely implemented today.

Identifier-Locator Network Protocol

The concept of separating a host's identity from its location has been intrinsic to numerous protocol suites, including the Xerox Network Systems (XNS), Internetwork Packet Exchange (IPX), and Connectionless Network Service (CLNS) models. In the IP community, it was first proposed in Saltzer's ruminations on naming and binding, RFC 1498, and in Noel Chiappa's NIMROD routing architecture, RFC 1992. In short, a host (or a set of applications running on a host, or a set of sessions it participates in) has an identifier independent of its network topology, and sessions can change network paths by simply changing the topological locations of their endpoints. Mike O'Dell, in Internet Drafts in 1996 and 1997 called 8+8 and GSE, suggested an implementation of this scenario using the prefix in the IPv6 address as a locator and the interface identifier as an identifier. One implication of the GSE model is the use of a network prefix translation between an edge network and its upstream provider: whatever prefix the edge network uses internally, in the transit backbone the locator appears to be a PA prefix allocated by the ISP in question. As a result, the routing table, as in shim6, enumerates the ISPs in the network—on the order of 10,000.

The Identifier-Locator Network Protocol (ILNP) takes the solution to fruition, operating on that basic model and adding a Domain Name System (DNS) Resource Record and a random number nonce to mitigate on-path attacks that result from the fact that the IPv6 Interface Identifier (IID) is not globally unique. As compared to the operational complexities and costs of PI Addressing, Exchange-Based Addressing, and shim6, ILNP has the advantage of being operationally simple. Each LAN has one subnet; no edge network renumbering is required when adding or changing providers; and, as noted, the cost of the global routing table does not increase.
Additionally, it is trivial to load-share traffic across points of attachment to multiple ISPs, because the locator is irrelevant above the network layer. And unlike IPv4/IPv4 Network Address Port Translation (NAPT), the translation is stateless; as a result, sessions using IP Security (IPsec) Encapsulation Security Protocol (ESP) encryption can cross it. In this case, the complexities of the network are transferred to the application itself, and to its transport. The application must, in some sense, know all of its "outside" addresses. It can learn them, of course, by using its domain name in referrals and other uses of the address; in some cases however, the application really wants to know the address itself. If it is communicating those addresses to other applications—the usual usage—the assumption that its view of its address is meaningful to its remote peer is, in the words of RFC 3582 , Unilateral Self-Address Fixing (UNSAF), and the concerns raised in RFC 2993 are the result. To mitigate those concerns, ILNP excludes the locator from the TCP and User Datagram Protocol (UDP) pseudo-headers (and as a result from the checksum). The implication of ILNP is, as a result, that TCP and UDP must be either changed or exchanged for other protocols such as Stream Control Transmission Protocol (SCTP) or Multipath TCP (MPTCP), and that applications must either use DNS names when referring to themselves or other systems in their network—sharply dividing between the application and network layers—or devise a means by which they can determine the full set of their "outside" addresses. Network Prefix Translation, Also Known as NAT66 Like ILNP, Network Prefix Translation (NPTv6) derives from and can be considered a descendant of the GSE model. It differs from ILNP in that it defines no DNS Resource Record, defines no end-to-end nonce, and requires no change to the host, especially its TCP/UDP stacks. To achieve that, the translator updates the TCP/UDP checksum in the source and destination addresses. If the ISP prefix is a /48 prefix, this prefix allows for load sharing of sessions across translators leading to multiple ISPs; if the ISP prefix is longer, such as a /56 or /60, the checksum update must be done in the IID, and as a result load sharing can be accomplished only across translators between the same two networks. Like ILNP and unlike IPv4/IPv4 NAPT, the translation is stateless; as a result, sessions using IPsec ESP encryption can cross it. The complexities of the network are again transferred to the application itself, but not to its transport. The application must, in some sense, know all of its "outside" addresses. Using its domain name in referrals and other uses of the address can determine these addresses; in some cases, however, the application really wants to know the address itself. If it is communicating those addresses to other applications—the usual usage—the assumption that its view of its address is meaningful to its remote peer is, again in the words of RFC 3582 , "UNSAF," and some of the concerns raised in RFC 2993 result. The implication of NPTv6 is that applications must either use DNS names when referring to themselves or other systems in their network—sharply dividing between the application and network layers—or devise a means by which they can determine the full set of their "outside" addresses. However, the IPv6 goal of enabling any system in the network to communicate with any other given administrative support is retained. 
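The stateless, checksum-neutral translation at the heart of NPTv6 can be illustrated in a few lines of C. This is a toy, not a conforming RFC 6296 implementation: it assumes a /48-to-/48 mapping, folds the entire adjustment into the fourth 16-bit word of the address (the subnet field), and ignores the 0xFFFF special case. What it demonstrates is the core trick: rewrite the prefix, then compensate elsewhere in the address using one's-complement arithmetic so that the address's contribution to the TCP/UDP pseudo-header checksum is unchanged, which is why the translator needs no per-session state and why ESP-encrypted sessions can cross it.

```c
#include <stdint.h>
#include <stdio.h>

/* One's-complement 16-bit addition with end-around carry. */
static uint16_t ones_add(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + (uint32_t)b;
    return (uint16_t)((sum & 0xFFFF) + (sum >> 16));
}

/* Rewrite a /48 prefix (words 0-2 of the address) and adjust word 3 so the
 * one's-complement sum of the whole 128-bit address is unchanged. */
static void translate(uint16_t addr[8],
                      const uint16_t internal[3],
                      const uint16_t external[3])
{
    uint16_t delta = 0;
    for (int i = 0; i < 3; i++) {
        delta = ones_add(delta, internal[i]);            /* what we remove   */
        delta = ones_add(delta, (uint16_t)~external[i]); /* minus what we add */
        addr[i] = external[i];                           /* rewrite prefix   */
    }
    addr[3] = ones_add(addr[3], delta);                  /* compensate here  */
}

int main(void)
{
    /* Example: internal fd01:203:405:1::1 mapped to the 2001:db8:1::/48 prefix. */
    uint16_t addr[8]           = { 0xfd01, 0x0203, 0x0405, 0x0001, 0, 0, 0, 0x0001 };
    const uint16_t internal[3] = { 0xfd01, 0x0203, 0x0405 };
    const uint16_t external[3] = { 0x2001, 0x0db8, 0x0001 };

    translate(addr, internal, external);
    for (int i = 0; i < 8; i++)
        printf("%04x%c", addr[i], i == 7 ? '\n' : ':');
    return 0;
}
```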
From the perspective of this author, the choice of multihoming technology will in the end be an operational choice. The practice of multihoming is proliferating and will continue to do so. There is a place for provider-independent addressing; it may not in reality make sense for 40,000 companies, but it probably does for the largest edge networks. At the other extreme, shim6-style multihoming makes sense in residential networks with a single LAN; as described earlier, there are simple approaches to making that work through reasonable policy approaches. For the vast majority of networks in between, policy suggestions that do not substantially benefit the network or users who implement them do not have a good track record. Hence, while Exchange-Based Addressing materially assists in edge network problems, there is no substantive reason to believe that the transit backbone will implement it. Similarly, although shim6 materially helps with the capital and operational expenses of operating the transit backbone, it is not likely that edge networks will implement it. We also have a poor track record in changing host software. For example, SCTP is in many respects a superior transport protocol to TCP—it allows for multiple streams, it is divorced from network layer addressing, and it allows endpoints to change their addresses midsession. In a 2009 "Train Wreck" workshop at Stanford University, in which various researchers argued all day in favor of the development of a new transport with requirements much like those of SCTP, the research community acted as if ignorant of it when the protocol was brought up in conversation. NPTv6 is not a perfect solution, but this author suspects that it will be operationally simple enough to deploy and manage and close enough to the requirements of edge networks and applications that it will, in fact, address the topic of multihoming.
<urn:uuid:a1b40a58-9b37-4a1e-ae8d-5d59bf8dd437>
CC-MAIN-2017-09
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-52/142-multihoming.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00620-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946924
4,055
3
3
IBM's Watson cognitive computer will soon be helping Mayo Clinic enroll patients in clinical trials in an effort to increase the speed of new discoveries while offering patients more and better treatment possibilities. The collaboration will begin with research studies in cancer. "Ultimately, we believe Watson will help advance scientific discoveries into promising new forms of care that clinicians can use to treat all patients," Rhodin says. "Through this effort, Mayo Clinic can consistently offer more medical options to patients and conclude clinical trials faster." "In an area like cancer, where time is of the essence, the speed and accuracy that Watson offers will allow us to develop an individualized treatment plan more efficiently so we can deliver exactly the care that the patient needs," adds Steven Alberts, M.D., chair of medical oncology at Mayo Clinic. Filling Clinical Trials Is a Data-Intensive Task According to the Center for Information and Study on Clinical Research Participation, $95 billion is spent on medical research in the U.S. each year, but only six percent of clinical trials are completed on time. One reason for that is the data-intensive nature of clinical trial recruitment. Clinicians and researchers typically have to manually cross reference patient data with criteria for thousands of available clinical trials -- there are nearly 170,000 clinical trials in progress worldwide at any given time and more than 8,000 of them in progress at the Mayo Clinic alone. Big Blue says Mayo clinicians could instead leverage Watson's natural language processing and data analytics capabilities to quickly sift through millions of pages of clinical trial and patient data to complete the process in seconds. IBM and Mayo experts are training a version of Watson, geared specifically for working with the Mayo Clinic, to analyze patient records and clinical trial criteria to determine appropriate matches for patients. As the pilot gets underway, Watson will learn more about the clinical trials matching process, becoming more efficient. IBM and Mayo experts are currently feeding Watson's corpus of knowledge with all clinical trials at the Mayo Clinic and in public databases like ClinicalTrials.gov. IBM says Watson may also be able to help locate patients for hard-to-fill trials, like those involving rare diseases. This is a big deal, IBM says, because many clinical trials aren't completed due to a lack of adequate enrollment -- only five percent of Mayo Clinic patients currently take part in trials, despite well-organized efforts. Nationally, the rate is even more dismal at three percent. Mayo hopes Watson's help will allow it to include up to 10% of its patients in clinical trials. "With shorter times from initiation to completion of trials, our research teams will have the capacity for deeper, more complete investigations," says Nicholas LaRusso, M.D., a Mayo Clinic gastroenterologist and the project lead for the Mayo-IBM Watson collaboration. "Coupled with increased accuracy, we will be able to develop, refine and improve new and better techniques in medicine at a higher level." Mayo clinicians and clinical trial coordinators will begin piloting Watson as a patient enrollment tool in early 2015. IBM notes it is also discussing other potential Watson applications with Mayo Clinic. 
IBM is also bringing Watson's cognitive computing capabilities to bear in a number of other life sciences and healthcare research collaborations, including the following: - MD Anderson Cancer Center, which is using Watson to help oncologists create individualized treatment plans for leukemia patients - New York Genome Center, which is using a version of Watson designed for genomic analysis to advance personalized cancer treatment - Baylor College of Medicine, which is leveraging Watson to automate the process of reviewing comprehensive scientific data and formulating hypotheses - Johnson & Johnson, which is exploring how Watson can help advance comparative effectiveness studies to determine which drugs are most effective for subsets of the population - Memorial Sloan-Kettering Cancer Center which is working with IBM to co-develop a Watson-powered app that will help oncologists anywhere develop personalized treatment options for cancer patients. Follow Thor on Google+ This story, "Mayo Clinic turns to IBM's Watson to fill clinical trials" was originally published by CIO.
<urn:uuid:5060b710-6988-419a-bf39-154e1b9d7282>
CC-MAIN-2017-09
http://www.itworld.com/article/2694387/data-center/mayo-clinic-turns-to-ibm-s-watson-to-fill-clinical-trials.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00620-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94011
838
2.609375
3
Manage and Maintain Physical and Logical Devices These questions are based on 70-290 – Managing and Maintaining a Microsoft Windows Server 2003 Environment Self Test Software Practice Test Objective: Manage and maintain physical and logical devices. Sub-objective: Troubleshoot server hardware devices. Single answer, multiple-choice You are a network administrator for your company. A Windows Server 2003 computer named SQL1 runs SQL Server 2000 and hosts a database with customer data. All hard disks on SQL1 are dynamic disks. Database files are located on a RAID-5 volume, and transaction logs are located on a mirrored volume. Employees access the database through a custom client application. SQL1 experiences a catastrophic hardware failure. After analyzing the nature of the failure, you discover that repairing the failed computer would not be cost-efficient. The hard disks have not been damaged, and no data loss has occurred. You have a spare computer, and you decide to use it instead of the failed computer. You install Windows Server 2003 and SQL Server 2000 on the new computer, and then you move the disks that host the RAID-5 volume with the customer database files and the disks that host the mirrored volume with the transaction log files to the new computer. You must ensure that users can access the customer database without making any changes to the client application. What should you do next? - Reactivate the disks. - Repair the disks. - Import the foreign disks. - Initialize the disks. C. Import the foreign disks. To ensure users can access the customer database without making any changes to the configuration of the client application, you should assign the new computer the same name as that of the failed computer. When dynamic disks are moved between computers, the volumes on those disks usually retain their original drive letters. However, the disks that have been moved from the failed computer in this scenario have a different signature than the existing disks on the new computer. Therefore, the moved disks will be marked as foreign disks, and the volumes on those disks will not be accessible. To make the volumes with the database and transaction log files accessible on the new computer, you should import the foreign disks. Note that all of the disks that constitute a multi-disk volume should be moved between computers and imported at the same time. Otherwise, the volume will not function properly. You would need to reactivate an existing disk if the disk became temporarily unavailable due to a transient hardware problem or data corruption. Disks cannot be repaired; only RAID-5 volumes can. For example, if one of the disks in a RAID-5 volume failed and there was sufficient unallocated space on another disk, then you could repair the volume by replacing the RAID-5 region on the failed disk with another region, which would be automatically created in the unallocated space. When a new disk is added to a computer, it is necessary to initialize it before you can start using the new disk. Initializing involves writing a master boot record and a signature on the new disk. In this scenario, the disks that host the customer database are not new; they already contain a signature. 
Windows Server 2003 Online Help, Contents, “Disks and Data,” “Managing Disks and Volumes,” “Disk Management,” “Concepts,” “Understanding Disk Management,” “The Disk Management Window.” Windows Server 2003 Online Help, Contents, “Disks and Data,” “Managing Disks and Volumes,” “Disk Management,” “How To…,” “Manage Disks,” “Move disks to another computer.” Windows Server 2003 Online Help, Contents, “Disks and Data,” “Managing Disks and Volumes,” “Disk Management,” “Troubleshooting.”
<urn:uuid:56ad981f-c012-451c-b9a1-f0c11edbea37>
CC-MAIN-2017-09
http://certmag.com/manage-and-maintain-physical-and-logical-devices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00616-ip-10-171-10-108.ec2.internal.warc.gz
en
0.894056
817
2.84375
3
Japan's space agency said Friday that sensitive information on a new long-range rocket project may have been stolen by a computer virus. The Japan Aerospace Exploration Agency, or JAXA, said it found evidence that a single employee's computer was infected by a virus that collected information and transmitted it externally. The agency said it was still unclear what information had been sent, but the computer in question contained specifications and operation information on its Epsilon rocket program, as well as several related rockets. An investigation is underway into what information was leaked and whether other computers have also been infected. The employee reportedly worked at the agency's facilities in Tsukuba, northeast of Tokyo. JAXA said it detected a virus on the computer on Nov. 21 and immediately disconnected the computer from the network. The agency said that it carried out additional investigations and discovered on Wednesday that information had been leaked. The Epsilon rocket is a three-stage rocket powerful enough to put payloads of up to 1,200 kg into low Earth orbit, and could be used for military purposes. It was meant to be a more advanced and lower-cost version of Japan's existing rockets. The rocket is unique both for its physical build and its newly designed launch system, which is meant to allow remote system checks and launches from a laptop computer connected over the Internet. An initial launch using the new rocket is planned for next year, carrying a new space telescope. JAXA apologized for the leak and said it will investigate further to determine exactly what was leaked, and will adopt stricter security to prevent similar incidents. In March, JAXA said it had concluded an investigation into an incident last year in which an employee infected a computer with a virus by clicking on software sent in a targeted email. That incident, which occurred in July 2011, resulted in the leak of non-secret image data as well as about 1,000 email addresses stored on the computer.
<urn:uuid:e21b7127-4a58-4dab-9fe6-d51413efb020>
CC-MAIN-2017-09
http://www.cio.com/article/2390021/data-breach/japan-space-agency--virus-may-have-stolen-space-rocket-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00492-ip-10-171-10-108.ec2.internal.warc.gz
en
0.978607
396
2.65625
3
At some point in our lives, we’ve all gone through some online account creation process and created a password. Frequently, we’re required to choose a password that includes something like at least one capital letter, one number, et cetera. The stricter the criteria, the more layers of security we think we’ve added to our passwords. However, that’s not actually the case.
The math behind password security
To understand why, let’s do some math. (It will be really simple, I promise.) One method of attempting to defeat a password is to simply try all possible character combinations. This tactic is referred to as a “brute force” attack. It goes without saying that a password with fewer possible combinations is easier to brute force than a password with more possibilities. In cryptography, the set of all possible combinations is referred to as the ‘keyspace’. Statistically, an attacker brute forcing a password must, on average, try about half of the keyspace before finding the right combination. Now, suppose I’m getting an account for a site that only requires two-character passwords. Since there are a total of 95 printable ASCII characters, that means we have a total of 95 x 95 possible combinations. Out of those 9,025 combinations, suppose an attacker could try five passwords per second. To try half the possible combinations would take about 900 seconds. But what if the site decides to enforce a policy that a specific position in the password, say the second character, must be a number? That requirement reduces the possible number of passwords to only 950 (95 x 10). The same attacker would be able to try half the possible passwords in only 95 seconds. Of course, no site would allow two-character passwords (I hope). The math, however, is similar with longer passwords.
Other password pitfalls
That’s also not to say that a password should be something as simple as ‘walrus’ – walrus is a common word found in the dictionary and is subject to a “dictionary attack”. A dictionary attack is an attempt to gain access to a system by trying common words and passwords (such as ‘letmein’, ‘password123’, et cetera). Beyond these common pitfalls, there are other ways that a password can be less secure – or easy to guess. Things like a spouse’s name, your date of birth, mailing address, et al. are examples of what not to use in a password.
The building blocks of a good password
Longer passwords are better, and contrary to the common misconception, they don’t have to be difficult to remember. A simple phrase can be easier to remember, but long enough to be difficult to brute force. Something like ‘ILikeDrinkingWhiskey,ButNotMoreThan5Shots.’ mixes upper and lower case, special characters (comma, period, et cetera), and numbers but, due to one particularly eventful evening, is very easy for me to remember (although that’s not my actual password). It’s also a good practice to reset your passwords frequently. For example, when getting started with a new server, we recommend immediately resetting your password and following these guidelines:
- Use at least 8 characters.
- Use a combination of upper-case letters, lower-case letters, and numbers.
- Avoid words or names, especially your name or the name of your business.
- Avoid a password that shares the same characters as the previous password. For example, changing “Ccodero1” to “codero2” is not a safe practice.
The moral of the story is that a password should use as many different characters from as many different character groups as possible.
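To make the arithmetic above concrete, here is a small Python illustration (my own sketch, not part of the original article); the 95-character alphabet and the rate of five guesses per second are taken straight from the example above.

# Back-of-the-envelope keyspace math for brute-force attacks.
def seconds_to_crack(alphabet_size: int, length: int, guesses_per_second: float = 5.0) -> float:
    """Expected time to hit a password by brute force: half the keyspace."""
    keyspace = alphabet_size ** length
    return (keyspace / 2) / guesses_per_second

# Two-character password drawn from all 95 printable ASCII characters:
print(seconds_to_crack(95, 2))        # ~902 seconds, roughly the 900 quoted above
# Same length, but one position restricted to the 10 digits:
print((95 * 10 / 2) / 5)              # 95 seconds -- a smaller keyspace is weaker
# A 42-character passphrase like the whiskey example dwarfs both:
print(seconds_to_crack(95, 42))       # astronomically large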
No password will make any system perfectly secure, but by using the tips above, you can make things as hard on the attackers as possible. After all, passwords are one of your lines of defense keeping your dedicated server environment secure. Of course, sometimes even the best-laid plans go awry, and the most thoughtful passwords get lost in the rush of day-to-day life. If you ever lose your Codero Cloud password, you can recover it by following these steps, or by chatting with one of our experts.
<urn:uuid:5ad7d139-bb39-47ca-b16a-e4a53a0eff3a>
CC-MAIN-2017-09
http://www.codero.com/blog/password-security-best-practices-and-tips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00085-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930187
912
3.734375
4
To help itself, Sun may finally relax its grip on the net's programming language. The way Sun's Java programming language has been developed is something of a paradox. Sun owns the core code behind Java, which is used to create applications that can run across an almost-unlimited number of computers in a "distributed" environment, such as the Internet. But a majority of the underlying technology for the version most widely used by corporations, Java 2 Enterprise Edition, came from outside Sun. This code was developed by the "Java Community Process," sort of a standards group created by Sun and other Java proponents. In fact, a lot of that technology actually comes from Sun's rival, IBM. Yet Sun may not be able to have it both ways. Java has become too big for Sun to maintain by itself, say key programmers at the company. Meanwhile, major partners are loath to contribute more intellectual property while still having to pay Sun licensing fees for using Java technology. Those partners keep pushing Sun to make Java an "open source" piece of programming. Now, according to a number of technologists inside and close to Sun, the company is preparing to do just that. If it did, that would lower the costs of developing Java applications. Users could modify Java code to their own needs, without having to rely on, and pay, Sun for upgrades. "Open source" doesn't mean "free software." Despite what Microsoft or SCO Group might tell people, it doesn't mean "flesh-eating bacteria" either. It just means allowing others to build new features atop that code, provided that those features can be reviewed by the original owner of the code, and made freely available to everyone else who uses it. The original owner then can decide what is included in the next upgrade. Until now, at least publicly, Sun has resisted the pressure to open up Java, jealously guarding its licensing revenue. Instead, it has turned several Java-related pieces of technology into open-source projects, such as the NetBeans Java development tool. Sun also has launched java.net, a collaboration site for open-source development with Java, and offered up "millions of lines of code" as open-source software, including the Java programming interfaces for Web services and integration with XML (Extensible Markup Language). Sun has used these open-source projects to turn customers into co-developers. Over the long run, that lowers the cost of software maintenance. If a feature is important enough to someone, that someone will develop it and maintain it, and that feature can be incorporated back into the core product. Sun, presumably, can still sell that product, with the bells and whistles that come along with packaged software, like professional customer support. So what's keeping Sun from taking the plunge? Lawyers. Sun is still fighting its lawsuit against Microsoft for violating the terms of its Java license. While the injunction that required Microsoft to ship Sun's version of Java with Windows was recently overturned, there's still the actual lawsuit itself to be fought over Microsoft's violation of its licensing agreement with Sun. Sun's lawyers will undoubtedly be very careful about the wording of any open-source license, and about what pieces of Java get placed under it, to protect Sun's ability to continue making money off Java. When the dam breaks, Sun may not stop at just opening up the source code of Java. Sun recently bought the rights to some elements of SCO Group's Unix operating system, to help it run its own version of Unix, called Solaris, on Intel processors.
Sun released the source code for Solaris for noncommercial use back in 1999; with its intellectual property rights now secured, Sun could conceivably take Solaris completely into open source for commercial use as well. That would leave Sun right back where chief executive officer Scott McNealy is most comfortable: in the hardware business.
<urn:uuid:a2cc4eed-086b-44d8-95bf-f60513f08d28>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Application-Development/Will-Java-Roast-on-An-Open-Fire
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00137-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959729
795
2.6875
3
When future historians look back on 2011, they’ll certainly conclude that we were a society obsessed with video games, minicomputers masquerading as phones and an endless supply of online distraction. But in a few years, many technologies developed in service of these functions may be repurposed in extraordinarily sensible ways. Motion control, for example, is driving a revolution in video gaming, but may soon help doctors diagnose patients via video conference. Augmented reality, used on smartphones to track down bars, might soon make police officers smarter and safer. In two decades, unmanned aerial vehicles plying the skies might be mundane. The following five emerging technologies are poised to go from amazing to ordinary — and the change will most certainly benefit us. Whether you play video games or not, you’ve no doubt heard of the Nintendo Wii. Launched in 2006, the video game console sparked a revolution in interactive entertainment. Now Sony, Microsoft and others have leapt into the motion control market with more powerful and accurate motion controllers. In Microsoft’s case, the premise of the Xbox 360 maker’s new Kinect peripheral is that you are the controller. The technology not only opens the door for innovative video games, but also can transform how people work in the classroom, the operating room or even on the battlefield. “Right off the bat, areas outside of gaming that have sparked the most interest for the use of Kinect and our natural user interfaces are health care and education,” said Chris Niehaus, director of innovation for Microsoft Public Sector. Kinect uses a 3-D depth sensor and a highly sensitive microphone to isolate a user’s movements and voice. This allows Kinect to respond to both gestures and verbal commands. “I think public safety would be one you would think about right away for that sort of biometric recognition ability,” he said. “In the next few months, you’ll be seeing more announcements and pieces of our technology coming forward around speech recognition.” Niehaus said Microsoft is refining the Kinect technology’s sensitivity to pick up subtle movements like hand tremors and fluttering eyelids — a capability that will make Kinect technology a tool for doctors conducting telemedicine. “If [a doctor] is doing a video conference with someone in the living room, the Kinect sensor is not only providing a video link so that you’re seeing and talking to the other person, but it’s also watching different movements to determine if those movements are indicative of pain or side effects,” Niehaus said. “That’s going to assist with early diagnosis and evaluation.” Microsoft, he says, has talked with the U.S. Department of Defense about using the technology for rehabilitation therapy for wounded veterans. On the education front, Niehaus said there’s interest from schools to create interactive curriculum using Kinect. “There is a big trend toward gamification [adding game mechanics to otherwise traditional activities] and personalized learning,” he said. “There are some education-based games already available for the Xbox — and a lot of them are really STEM (science, technology, engineering and math) focused.” For example, 20 Chicago-area public school districts are experimenting with Xbox and Kinect in their classrooms and after-school programs, Niehaus said. “We’re getting a lot of support from organizations like Get up and Move, Play 60 and different nonprofit programs that are focusing on getting kids up and moving, active and keeping them engaged.
When you combine that with education, it is really taking off.” A common problem on the front lines — be it in war, a disaster or any other emergency — is a lack of communication. In the years since walkie-talkies made their debut, technology has evolved, making it easier for soldiers and first responders to communicate. But most communications improvements have hinged on fixed, physical infrastructure to transmit voice and data over distance. In remote areas, this usually requires personnel to erect radio repeater towers atop geographical high points to facilitate communication over a wide area. In a battle, such personnel are vulnerable to the enemy. In a forest fire, crews risk getting caught behind fire lines. What if that troublesome tower could be replaced with a balloon? You’d have what Chandler, Ariz.-based Space Data calls a balloon-borne repeater platform — a floating communications hub that can be deployed in minutes by personnel miles from danger. The company offers what are essentially weather balloons loaded with radio repeater gear. The StarFighter model, already used by troops in Afghanistan, facilitates two-way radio communication up to 500 miles while floating at 80,000 feet, safely away from enemy fire. The StarFighter soon may be used by emergency responders. “It’s a platform that you really can put any type of communication on,” said Gerald Knoblach, CEO of Space Data. “It takes 15 to 20 minutes to prepare it for launch, and the platform rises at about 1,000 feet per minute, so it gets to 90,000 feet in an hour and a half, and then it levels off there and starts relaying voice and data traffic across this big footprint.” Knoblach said the balloons stay aloft for about 12 hours and operate at one-tenth the cost of communications aircraft. The range, responsiveness and interoperability of the balloons might make them ideal for emergency responders who suddenly find gaps in their communication networks after a disaster. “This is tailored for wide communications and really complex terrain,” Knoblach said. “We all see how fast phones are becoming smaller and more capable; we can take all that kind of consumer technology and put it inside this, and every year get more capacity.” Augmented reality apps are popular for iPhones and Androids. Generally such apps ask users to point their in-phone camera at a horizon, and the software then overlays the image with restaurant and bar information, or it provides walking directions and other details. A few apps use augmented reality to make a video game of the real world, using large smartphone screens to place digital bad guys on an otherwise normal cityscape. It’s one of those nascent technologies that gets many people excited about future possibilities. Public safety officials say augmented reality can make public safety personnel more effective while keeping them safer, something Motorola is exploring. “We spend a lot of time trying to understand the needs of public safety officers and folks in the federal government police and security forces. And it is not just understanding what their needs are today, but also understanding what they are tomorrow,” said Curt Croley, Motorola’s senior director of Innovation and Design.
“What piqued our curiosity, and what we are very much watching, is the augmented reality space.” The confluence of data analytics, high-speed wireless data and sophisticated end-user devices is enabling significant developments in augmented reality, a lot of which is being developed in the consumer world, said Craig Siddoway, Motorola’s director of Advanced Radio Concepts, Innovation and Design. “And we can learn a lot from that. The challenge here is to really allow [a police officer] to focus on what he is trying to do, and that obviously changes under certain conditions.” An approach might be to provide officers with lightweight glasses that flash different colors in the officer’s peripheral vision indicating danger, or display simple data gleaned from a license plate. The trick is to provide data via augmented reality that improves situational awareness without overwhelming or distracting an officer. “The context always has to be, first and foremost, the safety of the officer,” Siddoway said. “If he is at a traffic stop, there might be a covert alert that is either a vibration, audible via earpiece or something visual by glasses that suggests, ‘Heads up. Something is going on.’” Other public safety applications for augmented reality include speech or facial recognition to find suspects in a crowd. Building inspectors could be equipped with 3-D maps of a structure. These capabilities could all be made available in a smart handheld device or even a heads-up display. But there’s only so much data a human can process at once. “There are variables that we have to understand,” said Motorola CTO Paul Steinberg. “How much information can you present before users shift their focus from something that is more important?” While no amount of augmented reality is going to lead to a real-life Robocop, the future of augmented reality is so bright you’ve got to wear data-analyzing, situationally aware shades. Photo: Augmented reality technology may safeguard officers while making them more effective. Photo courtesy of Motorola Smart infrastructure, intelligent transportation systems, even the so-called “Internet of things” — all add up to an environment that’s more than meets the eye. But there’s at least one common fixture few of us give a second thought to, yet it’s uniquely positioned to deliver an array of high-tech services — the humble streetlight. A company called Illuminating Concepts transforms typical streetlights into highly intelligent network nodes that do more than fend off darkness. The Farmington Hills, Mich., company launched a product called Intellistreets that adds lighting control, wireless communication, audio, video and digital signage to any standard streetlight. Ron Harwood, president and founder of Illuminating Concepts, said Intellistreets can help cities save energy and enhance citizen safety, while even turning a small profit. For instance, restaurants could pay to run advertising messages on downtown intersections equipped with digital signage. Cities also could use visual or audio messages for emergency communications or to guide citizens to emergency evacuation routes. “It’s unbelievable how much more the cities can communicate with pedestrians,” Harwood said. The wireless mesh network capability of Intellistreets also means the streetlights could display — or tell — people bus or train schedules, information on Amber Alerts, that an emergency vehicle is approaching, or help reroute drivers during road closures.
Outfitting a streetlight with Intellistreets costs about $500, according to Harwood. Each fixture operates individually and includes a microprocessor, a dual-band radio system, audio amplifier, digital sound processor, video output and HD video card. He said the technology is an affordable option to implement smarter streetlights. “Los Angeles and Seattle are spending a lot of money in retrofitting streetlights, and departments of transportation in all 50 states are experimenting with LED fixtures,” Harwood said. “There is a lot of awareness in the cities around retrofitting, but for many, there’s just too little money available for it to happen.” Most of us are familiar with unmanned aerial vehicles (UAVs) — at least the variety used by American military forces to wage war in the air without risking pilots’ lives. But some might wonder why UAVs aren’t being used for mundane activities. It’s because UAVs have been federally regulated since their inception, meaning the marketplace hasn’t had the freedom to conjure up new ideas for these revolutionary machines, said James Grimsley, president and CEO of Norman, Okla.-based Design Intelligence Inc., a company that develops technology for unmanned aerial systems. “We call them unmanned aircraft, and we’re not describing them in terms of potential, we’re describing them in terms of what we see is missing, which is the man,” Grimsley said. “But that’s going to be changing in the next two to five years.” That change will be possible thanks to an evolution in how the Federal Aviation Administration (FAA) regulates UAVs. The FAA’s website states, “To address the increasing civil market and the desire by civilian operators to fly UASs [unmanned aircraft systems], the FAA is developing new policies, procedures, and approval processes.” But the agency says these changes aren’t anticipated until at least 2015. There are many potential uses for UAVs, Grimsley said, including package delivery. Think for a moment about sending a package overnight. It often means the package is put aboard a piloted airplane. It might then be loaded onto a truck and driven miles to a remote destination. “UPS charges you $15 to deliver a package, and they have to deliver it overnight regardless of the cost for them,” Grimsley said. “If we had planes that could handle 10 or 20 pounds of cargo that would fly to these small areas and regional hubs, we could move mail and very small cargo and packages. Small vehicles don’t require big airports, they don’t require the infrastructure planes do, and they’re cheaper and safer.” Grimsley points out all the problems that accompany manned flight just to deliver packages: safety devices, life-support systems, and the destruction that can occur if a large plane crashes. By using UAVs, these problems could be circumvented, and things like organ delivery could be streamlined. UAVs may also soon be used as communications relays. Instead of incurring the high cost of launching a satellite, solar-powered UAVs could stay aloft for years and serve the same function as orbiting satellites. Another practical use for UAVs, Grimsley said, would be monitoring municipal assets. “Cities often buy large amounts of equipment that are all over the place, like tractors and trucks. Those things can be stolen, and it can take quite a while before the government will even realize they’re gone,” he said.
“They can be implanted with RFID tags, and you could have a UAV flying around mapping all of these vehicles, and when one shows that it’s no longer within the map, you can go looking for it.” In the end, the development of the next generation of UAVs will primarily be driven by safety. Just as NASA came to accept robots as far superior for exploration in terms of safety, cost and efficiency, so too will everyday UAVs come to be accepted. “We typically think of the sexy and exciting things first, but they don’t necessarily turn into big financial opportunities,” Grimsley said. “What turns into big opportunities are mundane things like delivering mail, cargo, packages — almost a sort of railroad-in-the-sky type thing. That’s what will really turn into major drivers and economic opportunities.”
<urn:uuid:aa73b21d-60b1-44bb-a023-b58e592a1b62>
CC-MAIN-2017-09
http://www.govtech.com/technology/Five-Emerging-Technologies-Soon-to-Hit-the-Government-Market.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00013-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942011
3,076
2.59375
3
Intel Debuts Wireless Chip Semiconductor giant Intel announced Wednesday that it has produced the technology to bring many futuristic gizmos to today's market. Currently, a wireless device is made up of several different chips, each dedicated to its own task, such as communications, memory and processing. Intel plans to integrate each of these aspects into one main semiconductor, enabling users to take advantage of faster performance in smaller wireless Internet devices. The company also stated intentions of bringing these new chips to consumers by the middle of next year. According to Al Fazio, principal engineer for Intel's technology and manufacturing group, "What we're doing here is to take those various components and build them on a single process technology, all produced on a single wafer, a single chip and therefore all done in one facility." With Intel's newly developed technology, many new devices can be created. Such things as video watch phones, cell phones combined with personal digital assistants and Internet-ready PDAs are within reach. While the technology is still experimental, Intel is confident about the capabilities of its research. Developers claim that the new chips will be up to 5 times more powerful than those seen in current wireless devices, and are capable of operating at speeds up to 1 gigahertz. The new chip technology also significantly cuts down on battery usage. Cell phones which utilize the new semiconductor can run up to one month without requiring a charge. What do you think: is this a major step forward for wireless devices, or just another technology doomed to experience the same fate as Bluetooth?
<urn:uuid:5571eab5-16a4-4dda-a9ba-fcfe2ef7187c>
CC-MAIN-2017-09
https://betanews.com/2001/05/17/intel-debuts-wireless-chip/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00013-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953873
320
2.6875
3
DARPA's dielets and the prominence of provenance The Defense Advanced Research Projects Agency is looking for proposals to develop a “dielet,” an electronic component to authenticate the provenance of other electronic components. DARPA wants to use the tool to help track counterfeit electronic parts infiltrating the military supply chain. DARPA’s project turns on the concept of provenance and the metadata necessary to capture it. Provenance metadata is a cornerstone of metadata collection for every digital object in an organization. Here’s why. The most common definition of provenance is the origin or source of something. Provenance is also the history of ownership of an object, and it is especially used to establish the authenticity of works of art. Likewise, data provenance covers the provenance of computerized data. There are two main aspects of data provenance: ownership and usage. Metadata is used to capture the provenance of a particular data item that could be an individual file, archive, data set or data package. For example, a Word document may have a long chain of ownership via editors and reviewers before becoming a finished product. Capturing that lineage information establishes the provenance of that particular data object. The purpose of the lineage metadata is to establish the authenticity of an artifact by understanding its chain of ownership. In fact, the art community has a long history of using provenance in order to establish the authenticity of artwork. Without the provenance metadata, there is no trust. The second important use case for provenance is to capture an artifact’s change process to be able to reconstruct it at any point in its lifecycle. This is a provenance requirement that I am currently designing a system to capture. To design this type of metadata there are several data standards to choose from. In 2013, the W3C created a provenance data model and representations in the Web Ontology Language (OWL), Dublin Core terms (a popular metadata vocabulary) and an Extensible Markup Language (XML) schema. This standard is compatible with linked data models being used in several government transparency initiatives. Establishing lineage, pedigree and provenance is so important that DARPA is paying to deliver it even for our computing hardware. Your organization should follow suit in order to deliver trust, reconstruction and high quality for your information consumers. Michael C. Daconta ([email protected] or @mdaconta) is the Vice President of Advanced Technology at InCadence Strategic Solutions and the former Metadata Program Manager for the Homeland Security Department. His new book is entitled The Great Cloud Migration: Your Roadmap to Cloud Computing, Big Data and Linked Data. Posted by Michael C. Daconta on Apr 09, 2014 at 10:39 AM
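As a footnote to the column above, here is a rough illustration in Python of the lineage idea it describes (a hypothetical sketch of my own, not DARPA's dielet design and not the W3C PROV data model itself): each revision of an artifact records who touched it, what was done, and which version it was derived from, so the chain of ownership can be walked back to the original.

# Toy lineage-style provenance metadata and a function to reconstruct the chain.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProvenanceRecord:
    artifact_id: str                     # e.g., a file hash or document version ID
    agent: str                           # who produced this version
    activity: str                        # what was done (edit, review, merge)
    timestamp: str                       # when it happened
    derived_from: Optional[str] = None   # previous version in the lineage, if any

def lineage(records: List[ProvenanceRecord], artifact_id: str) -> List[ProvenanceRecord]:
    """Walk the derived_from chain from a given version back to the original source."""
    by_id = {r.artifact_id: r for r in records}
    chain = []
    current = by_id.get(artifact_id)
    while current is not None:
        chain.append(current)
        current = by_id.get(current.derived_from) if current.derived_from else None
    return chain

records = [
    ProvenanceRecord("doc-v1", "author", "create", "2014-04-01T09:00Z"),
    ProvenanceRecord("doc-v2", "editor", "edit", "2014-04-03T14:00Z", derived_from="doc-v1"),
    ProvenanceRecord("doc-v3", "reviewer", "review", "2014-04-05T11:00Z", derived_from="doc-v2"),
]
for record in lineage(records, "doc-v3"):
    print(record.artifact_id, record.agent, record.activity)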
<urn:uuid:17750203-4440-419a-b97a-77466bca85d2>
CC-MAIN-2017-09
https://gcn.com/blogs/reality-check/2014/04/data-provenance.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00309-ip-10-171-10-108.ec2.internal.warc.gz
en
0.900829
584
2.734375
3
Processor.com: Improve Your Understanding Of Cloud Computing Cloud computing can be confusing, with multiple models, different types of cloud services, and perceived money-saving features that may be too good to be true. When comparing the available cloud services, you need to find a solution that will fit your data and applications needs but won’t cause you to compromise on security or cost. Also, be sure the service you sign up for is a bona fide cloud computing environment. Cloud Computing Definitions The National Institute of Standards and Technology (NIST) states that a cloud environment should enable “ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources.” Users should be able to monitor the environment and receive “rapid elasticity” and “on-demand self-service,” according to NIST. Cloud computing has three service models (SaaS, PaaS, and IaaS) with four deployment models (private, community, public, and hybrid).
<urn:uuid:f1c6014b-0c91-45c2-9889-209982f8befe>
CC-MAIN-2017-09
https://www.infotech.com/research/it-processorcom-improve-your-understanding-of-cloud-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00485-ip-10-171-10-108.ec2.internal.warc.gz
en
0.894386
224
2.78125
3
Welcome to Internap’s Big Data Video series. First, let’s cover the Big Data Basics. What is meant by the term, Big Data? Why is it important? And what are some common Big Data use cases? Big data is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. Big data is also defined by three characteristics: volume, variety, and velocity. Volume refers to the enormous amount of data being stored. It is a characteristic of a big data project or application that uses, potentially, petabytes of data and tens of millions of transactions per hour. For example, Twitter alone generates more than 7 terabytes of data every day. Variety refers to the wide range of data types used as part of the analytical and decision-making process. Many of these unstructured or semi-structured data sets don’t fit into typical organizational schemas. For example, tweets, social media blurbs, security camera images, weather reports and the like are all examples of data that can be highly variable. Velocity is the speed at which information arrives, is processed, and is delivered in an actionable presentation. Within a big data scenario, data streams with real-time or near real-time analysis requirements are not uncommon, and they can be far faster than transactional streams. The combination of these elements requires significantly more flexibility in organizing, processing and analyzing than traditional approaches can deliver. In the 1970s, data management systems were primitive, very structured, typically relied on mainframes and lacked complex relational capabilities. In the ’80s and ’90s, data became more usable via the development of multifaceted relational databases. Fortunately, a number of tools and processes have been developed to address big data processing and analysis needs within the past several years. These include MapReduce, a large-scale parallel processing system developed and patented by Google; distributed file systems like HDFS; NoSQL databases; and on-demand virtual and bare-metal infrastructure as a service. What is big data being used for? Oftentimes, companies are trying to use as much applicable data as is available to answer why something happened, predict what’s going to happen next, or determine which questions to ask. One common use scenario involves marketers using big data to understand consumer purchasing behavior. For example, when you’re at your local grocery store and you scan your savings card, an abundance of information is captured, such as what you bought, what time it was, whether it was on sale, what complementary products were also purchased, the time of the year, and so on, and then marketers attempt to use this information to put their products in front of you at the most advantageous time and price. Our own IP architecture group here at Internap provides a real-life, close-to-home example of Big Data in action. Managed Internet Route Optimizer™ (MIRO), our proprietary IP routing algorithm, captures more than 1 trillion path and performance data points over the course of a 90-day period. These include real time, NetFlow, and SMTP data, latency and jitter statistics, as well as path plotting decisions. Through the use of a commercial Hadoop distribution in our own AgileSERVERS, we’re able to scale out to address exponential data growth, while efficiently processing and analyzing tremendous amounts of high-velocity variable data.
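For readers who have not seen MapReduce before, here is a deliberately tiny, single-machine sketch of the idea in Python (an illustration only; real systems such as Hadoop run the map, shuffle and reduce phases in parallel across many nodes and a distributed file system like HDFS).

# Minimal MapReduce-style word count: map -> shuffle/group -> reduce.
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def map_phase(doc: str) -> List[Tuple[str, int]]:
    # Emit an intermediate (key, value) pair for every word in the document.
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    # Group all intermediate values that share the same key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups: Dict[str, List[int]]) -> Dict[str, int]:
    # Collapse each group to a single result -- here, a word count.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data is big", "velocity variety volume"]
intermediate = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(intermediate)))   # {'big': 2, 'data': 1, 'is': 1, ...}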
In our next video, we’ll provide more detail on the different tools, processes and infrastructure architectures used in big data applications, and tell you which ones fit and which ones don’t. Watch next: Why is Big Data Important?
<urn:uuid:4e31a70c-b685-47cb-990d-6e0752d133f9>
CC-MAIN-2017-09
http://www.internap.com/resources/video-big-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00009-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935506
776
3.421875
3
$25 computer could help create the techies of tomorrow It’s always great to see efforts to bring young people into the world of computers and programming. It keeps the public sector and industry alike from worrying about not having enough new people to carry the torch. Along those lines, one of the most innovative projects in recent years has been the Raspberry Pi, a single-board computer that can be purchased for just $25. There is also a kit that is a little more expensive, but allows the entire computer to be constructed from scratch, sort of like a modern-day Erector Set. If only they had something like that when most of us were growing up. How cool would it have been to find a Raspberry Pi underneath the tree on Christmas morning? The Pi is impressive because of its ability to bring young people into the world of computing -- the Raspberry Pi Foundation promotes it for teaching basic computer science in schools -- as well as the potential for parental bonding, but it’s also a surprisingly robust computer. The Pi runs on a 700 MHz ARM processor that can be overclocked up to 1 GHz. The overclocking is one experiment you can perform with it that doesn’t void the warranty. It also has 256MB of RAM, upgradable to 512MB. To save money, the Pi doesn’t have a hard drive, using an SD card for booting and storage. You can easily attach a portable drive and use that as your storage medium if you want. It can run on a variety of operating systems, including its own Raspbian, plus Debian GNU/Linux, Fedora, Arch Linux ARM, RISC OS and FreeBSD. Basically, you just attach a mouse, keyboard and monitor and you have a pretty neat little computer that you can fool around with, experiment on, or actually use productively. The unit is selling quite well by all reports, but perhaps the biggest measure of success is that just before the holiday, the foundation announced the grand opening of the Raspberry Pi App Store. The store can be accessed easily through the Raspbian OS or through a standard browser on a non-Pi computer. Most of the apps are free, though some have to be purchased. They include productivity suites, utility programs and even games. This will add yet another element to the educational value of the Pi, encouraging users to program their own apps, which can then be distributed or sold in the store. In addition to professionally programmed titles like the “Storm in a Teacup” game from Cobra Mobile, I’ve already seen news reports about modern-day lemonade stand-type businesses being set up by kids who plan to create and sell their wares within the Pi community. I’ll tell you the truth: I was a little skeptical when plans to create the Raspberry Pi were announced. But I’m really pleased that it’s doing so well. I wonder how many children will get a Raspberry Pi as a gift over the holidays, and how many of those kids will go on to become the great hardware makers and software programmers of tomorrow? Government has worried for years about a coming IT talent drain. Maybe something like the Pi can help. Good luck, kids! I’ll look for your apps in the store. Posted by John Breeden II on Dec 21, 2012 at 9:39 AM
<urn:uuid:c1886a15-acfd-4d7a-bdcf-d39490e26d8b>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2012/12/raspberry-pi-could-help-create-the-techies-of-tomorrow.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00009-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96287
691
2.90625
3
Researchers across scientific disciplines are clamoring for exascale systems that can handle bigger, more complex models. When it comes to the climate modeling and weather forecasting business, researchers are finding promise in using new HPC architectures, such as the one used in the Green Flash cluster, to get closer to the exascale goal. Green Flash is a specialized supercomputer designed to showcase a way to perform more detailed climate modeling. The system uses customized Tensilica-based processors, similar to those found in iPhones, and communication-minimizing algorithms that cut down on the movement of data, to model the movement of clouds around the earth at a higher resolution than was previously possible, without consuming huge amounts of electricity. The computational and power-consumption problems that had to be overcome to get the higher resolution climate models are clearly explained in this Berkeley Science Review article. In short, scientists are eager to improve upon the current cloud climate modeling systems, which have a resolution of 200 km. A model that’s composed of a grid with data points that are 1 km to 2 km apart would be much more useful, and would result in much more accurate weather forecasts and a greater understanding of the science behind climate modeling. However, the computational demands involved in high resolution climate modeling don’t increase linearly–they increase geometrically. Not only is the mesh in the grid much more compact, but more “time steps” are required to keep the equations from falling apart. Dr. Michael Wehner, a researcher at LBL, ran the numbers and found that the 2 km model requires 1 million times as many FLOPs as the 200 km model. Translated into real world figures, such a high-resolution system would require 27 petaflops of sustained capacity, and a peak capacity of 200 petaflops, according to the BSR story. This theoretical system–bigger than anything ever actually built–would require 50 to 200 megawatts of power to run, which is comparable to the electric demands of an entire city. Its power bill would be hundreds of millions of dollars a year. Clearly, a different approach was needed. Instead of building a general purpose supercomputer, Wehner and others with LBL, UC Berkeley’s Electrical Engineering and Computer Science Department, and the RAMP (Research Accelerator for Multiprocessors) project decided to try a customized system, where hardware and software are designed together. The design came together with Green Flash, which combines energy-efficient Tensilica processors with communication-minimizing algorithms. Currently, Green Flash, which has been called “the iPod supercomputer,” is running 4 km models. The combination is predicted to yield the capability to run the 2 km cloud model on a system with only 4 megawatts of power, which is 12 to 40 times smaller than a conventional supercomputer would need to run the same model. This approach does have its downsides, however. Because Green Flash was designed specifically for climate modeling workloads, it won’t work with other types of HPC applications, such as analyzing genes or financial transactions. (In fact, it doesn’t even work with all the different climate modeling systems that are in use.) It’s not nearly as flexible as other supercomputers in the LBL stable, such as Hopper, BSR notes in its story. 
However, when one considers the energy wall that's imposed when taking the generic approach, the custom-built approach to designing the next generation of supercomputers to solve specific HPC problems may be part of the solution to the exascale equation.
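The geometric scaling Wehner describes can be sketched in a few lines of Python (my own back-of-the-envelope arithmetic, using only the resolutions quoted above; real climate codes also refine the vertical grid and the physics, which this ignores).

# Rough cost scaling for a finer horizontal climate-model grid.
def relative_cost(coarse_km: float, fine_km: float) -> float:
    refinement = coarse_km / fine_km      # e.g., 200 km -> 2 km is a factor of 100
    horizontal_cells = refinement ** 2    # finer mesh in both horizontal directions
    time_steps = refinement               # smaller cells need proportionally shorter steps
    return horizontal_cells * time_steps

print(relative_cost(200, 2))   # 1,000,000 -- the "million times as many FLOPs" quoted above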
<urn:uuid:5d50aae5-c494-4d2b-93a1-97679021fb42>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/07/18/green_flash_heralds_potential_breakthrough_in_climate_modeling_exascale_design/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00009-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951451
748
3.25
3
NASA's rover Curiosity is set to spend the next several days using the camera on its mast in a search for the next route to travel on Mars. Scientists are already on the lookout for a rock on the surface of Mars to try out the rover's hammering drill, which sits at the end of its robotic arm. The tool will be used to drill into a rock and collect the resulting powder. The Curiosity rover is three months into what NASA hopes will be a two-year mission to find signs of whether the planet has or has ever had the ability to support life. The rover has already found signs that the planet may once have been capable of supporting organic life. In September, the nuclear-powered, SUV-sized super rover found evidence of a "vigorous" thousand-year water flow on the surface of Mars. It was a key discovery because water is one of the key elements needed to support life. NASA scientists also may be sitting on what they're calling a potentially important discovery. John Grotzinger, NASA's principal investigator for the Mars rover Curiosity mission, last week told NPR.org that the agency is getting 'interesting' results from the rover's SAM instrument, an onboard chemistry lab. Grotzinger told NPR that NASA is holding off on discussing the results until the findings are confirmed. "We're getting data from SAM as we sit here and speak, and the data looks really interesting," Grotzinger told NPR. "This data is gonna be one for the history books." On Sunday, the rover completed what scientists are calling a touch-and-go rock inspection before turning and driving toward its next target. Curiosity reached out and examined a rock with its Alpha Particle X-Ray Spectrometer, taking two 10-minute readings of the chemical elements in the rock. The rover then stowed the arm and traveled 83 feet. It was the first time the rover did this kind of scientific exam and traveled in the same day. "We have done touches before, and we've done goes before, but this is our first 'touch-and-go' on the same day," said Curiosity Mission Manager Michael Watkins of NASA's Jet Propulsion Laboratory. "It is a good sign that the rover team is getting comfortable with more complex operational planning, which will serve us well in the weeks ahead," Watkins added. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed. Her e-mail address is [email protected].
<urn:uuid:080c1c0f-7642-4c7c-82ca-00ed224aa23f>
CC-MAIN-2017-09
http://www.computerworld.com/article/2493369/emerging-technology/nasa-s-mars-rover-gets-thanksgiving-mission.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00185-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949504
549
3.390625
3
The Norwegian town of Rjukan is located in a valley between steep hills -- so steep, in fact, that direct sunlight cannot reach the town for five months of the year. But three sets of giant mirrors, with a surface area of 538 square feet, are poised to shed some light on Rjukan during its dark months. The mirrors, located on surrounding mountains, will get their first real-world test in September, the month in which the darkness descends on the town. The mirrors will be remotely controlled via a computer at the town hall, in order to reflect sunlight into a 2,150-square-foot area of the town square. The idea was first considered more than 100 years ago, but the technology to turn the idea into reality did not yet exist. The installation cost the town of more than 3,000 residents approximately $850,000. The mirrors will be powered with solar and wind energy.
<urn:uuid:58cedfbb-0c31-45e7-8887-d833f87b395e>
CC-MAIN-2017-09
http://www.govtech.com/Photo-of-the-Week---Giant-Mirrors-Light-Norwegian-Valley.html?flipboard=yes
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00361-ip-10-171-10-108.ec2.internal.warc.gz
en
0.975171
194
2.625
3
Microsoft has partnered with a San Francisco-based company to encode information on synthetic DNA to test its potential as a new medium for data storage. Twist Bioscience will provide Microsoft with 10 million DNA strands for the purpose of encoding digital data. In other words, Microsoft is trying to figure out how the same molecules that make up humans' genetic code can be used to encode digital information. While a commercial product is still years away, initial tests have shown that it's possible to encode and recover 100 percent of digital data from synthetic DNA, said Doug Carmean, a Microsoft partner architect, in a statement. Using DNA could allow massive amounts of data to be stored in a tiny physical footprint. Twist claims a gram of DNA could store almost a trillion gigabytes of data. Finding new ways to store information is increasingly important as people generate more and more data in their daily lives, and as millions of connected IoT sensors start to come online. It's also important for Microsoft, which operates one of the biggest public cloud platforms. Finding more efficient ways to store data could reduce its costs, and DNA-based storage has the potential to last longer than existing media. “Today, the vast majority of digital data is stored on media that has a finite shelf life and periodically needs to be re-encoded. DNA is a promising storage media, as it has a known shelf life of several thousand years, offers a permanent storage format and can be read for continuously decreasing costs,” said Twist CEO Emily Leproust in a statement. It remains to be seen what the results of the collaboration will be, but it's a further step towards making DNA-based computers a practical reality.
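To see why DNA is such a dense medium, here is a toy Python sketch of the underlying idea (an illustration only, not Microsoft's or Twist's actual encoding scheme, which must also cope with synthesis errors and avoid problematic base patterns): two bits map onto one of the four bases, so any byte string can be written as a strand of A, C, G and T and read back again.

# Toy 2-bits-per-base DNA encoding and decoding.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
assert decode(strand) == b"hello"
print(strand)   # 'CGGACGCCCGTACGTACGTT' -- four bases per byte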
<urn:uuid:b2a4bc96-ab16-4cb6-bc12-c4688ba14127>
CC-MAIN-2017-09
http://www.itnews.com/article/3062694/microsoft-is-making-big-data-really-small-using-dna.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00361-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948742
345
2.921875
3
Memory and Storage Lots of Memory The standard and maximum amount of RAM is a common entry in printer spec sheets, but unless you know what it's used for (holding print jobs in a queue, rasterizing additional pages while other pages print, or something else altogether), it doesn't tell you much. More important, it doesn't tell you what you'll gain from adding the maximum amount. Hard drives will almost always show up in a spec sheet if a printer includes one, whether as standard or as an option. As with memory, however, drives can be used for any number of different functions. Unless the spec sheet tells you what the printer uses the drive for, it's not telling you anything useful. Prints from USB Key Printing files from a USB key is a useful convenience, but it's important to know which file formats the feature works with. Printing JPG files, for example, won't be as useful in most offices as printing PDF files. On the other hand, printing JPGs may be the better choice for businesses that use photos, including, for example, real estate.
<urn:uuid:a75e4eaa-faba-4abc-987d-9534ba5b5ea4>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Printers/Playing-Fast-and-Loose-with-Printer-Specs/4
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00129-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937054
223
2.75
3
Children learn by playing. Smart people have known this for eons. Plato suggested that in order to teach, we should “Not keep children to their studies by compulsion but by play.” So if you want to teach a child to grow up and become a coder, the best way – according to some mighty philosophers – would be to play a game that teaches the concept. Later, when their minds are ready to sit down and work, they can learn the details. According to Ralph Waldo Emerson, “The child amidst his baubles is learning the action of light, motion, gravity, muscular force.” That is the sneaky plan behind a new game, Robot Turtles, from ThinkFun. Robot Turtles is a board game. No computers. No code. No screen. Just a playing board, pieces, and cards. But by playing it, with an adult, children – as young as preschoolers and up to about age eight – learn basic principles of coding. From commands to subroutines to the idea that the computer will do only exactly as you tell it. In fact, that last is the role the parent plays. “This game gives the adult a very specific role,” explains Bill Ritchie, President of ThinkFun. “That of the computer. And it’s very important that the parent not cheat. Because that is the concept you are teaching through play: The computer will do exactly as the child tells it.” If you have ever played a board game with a child, you know that not cheating is not as easy as it sounds. Children want to bend the rules, win when they didn’t, or give you points you didn’t earn. They are charming about it. And they are often persistent. It’s very tempting to let them have their way. It is only a game, after all, right? With this game, though, not cheating is integral to the game and you have to take your job …er play… seriously. The game started as a Kickstarter campaign and, after raising $630,000, completely sold out there (over 25,000 copies). But ThinkFun bought it up and is bringing it out, updated and more affordable ($25), this summer. If you preorder it at ThinkFun you get a free special edition expansion pack. For Bill Ritchie this game, of all the games ThinkFun has brought to the market, holds a special meaning. Ritchie’s brother was Dennis Ritchie, the creator of the C programming language and co-creator of the UNIX operating system. Bill is bringing the game to market in honor of his brother. “Dennis was considered by many to be the greatest coder of all time,” says Bill. “Robot Turtles would have been his favorite ThinkFun game yet!”
<urn:uuid:cd199cfe-5e36-448a-8427-6711f8c172cf>
CC-MAIN-2017-09
http://www.itworld.com/article/2700530/careers/teach-a-child-to-code.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00533-ip-10-171-10-108.ec2.internal.warc.gz
en
0.978854
594
3.0625
3
Block Traffic between Two VLANs, but Only in One Direction: How Do You Do That? VLANs and VLAN configurations are useful in all kinds of different ways, and sooner or later this particular configuration will be useful for every network administrator out there. It was a big challenge to resolve this tricky communication security requirement. The problem does not seem like a big deal, but when you try to make it work you see that it is. The goal was to make a unidirectional communication filter between two VLANs. The request was to allow VLAN 10 to access VLAN 20, but not the opposite. The computers in VLAN 10 needed to access resources in VLAN 20 normally, but computers in VLAN 20 had to be prevented from accessing VLAN 10. Actually, there is a simple solution. I needed a lot of time to get to it, and I didn't get to the solution by myself; it was team work, you could say. So it's worth sharing. There is a special type of access list called reflexive. This kind of access list will allow traffic from one VLAN to another only if the communication was established in the other direction first. It can't be applied to IP traffic as a whole, only to each protocol separately, so you will need more rows in the ACL to allow TCP, ICMP and so on, but it will solve the problem. Here is how it is done. Let's say that you have two VLANs, VLAN 10 and VLAN 20:
VLAN 10 interface = 10.10.10.1 /24
VLAN 20 interface = 10.10.20.1 /24
VLAN 10 can access VLAN 20, but VLAN 20 can't access VLAN 10. That was the whole problem: to allow access in only one direction. To be able to do so, you need to let traffic from VLAN 10 go to VLAN 20, but you also need to let this communication go back to VLAN 10 in order to close the bidirectional communication loop. Almost every communication needs to get back to its source in order to make the circle functional. But if you simply allow all traffic back to VLAN 10, you will allow communication in both directions, and this is the problem we can solve using reflexive ACLs. We will make an extended named ACL called EASYONE:
ip access-list extended EASYONE
permit tcp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 established
- The word established at the end of this ACL row means that TCP traffic from VLAN 20 to VLAN 10 will only be allowed when it belongs to a communication that was started from VLAN 10, in other words return traffic.
permit icmp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 echo-reply
- This echo-reply row will allow VLAN 20 to reply to ping and other ICMP requests coming from VLAN 10.
deny ip 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255
permit ip any any
- The deny row will block all other traffic from VLAN 20 directed to VLAN 10, while permit ip any any still lets VLAN 20 reach, say, the gateway and go further to the Internet and other VLANs.
Finally, we will apply the ACL EASYONE to the VLAN 20 L3 interface:
interface vlan 20
ip access-group EASYONE in
To conclude, here is the config without comments; it's indeed easy now that it's done:
ip access-list extended EASYONE
permit tcp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 established
permit icmp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 echo-reply
deny ip 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255
permit ip any any
exit
interface vlan 20
ip access-group EASYONE in
The credit for the solution goes to my mentor and friend Sandra, who did the configuration and the lab for it; more than that, she came up with the established keyword at the end of the ACL and the whole reflexive ACL solution.
When IP Addresses Conflict Q: Every now and then, I get a pop-up on my work computer that reads, “There is an IP address conflict with another system on the network.” This happens at times when I need my network the most, but, in order to recover, a reboot is required. So far I have been unable to find out what is causing it. A: An IP address conflict is a network problem that is caused by having two network devices (computers, printers, routers, firewalls, etc.) with the same IP address on the same broadcast domain, local area network (LAN) or virtual local area network (VLAN) at the same time. The IP protocol, version 4, requires every network device to have a unique identifier — an IP address — in order to function properly. When two network devices have the same identifier, the traffic that needs to get to them will be inconsistent. It might be that all the traffic will end up at only one of them, or that one packet will go to one device and another packet to the other. This is unacceptable and can cause major disruption to the transferred information, and therefore network devices that implement the IP protocol are programmed to detect and avoid those conflicting conditions. The detection mechanism is usually based on an Address Resolution Protocol (ARP) probe: the host sends a broadcast ARP probe packet when its interface is configured, whether the address is assigned manually or dynamically through DHCP. At this point, a host with a conflicting address will reply to the probe packet and will cause the newly configured host to stop using its newly assigned IP address. When this happens in a DHCP environment, an address renewal or a reboot will cause the conflicting device to request a new address and eliminate the conflicting condition. According to your description, this is probably the case. RFC 5227 covers this mechanism in depth for readers who want to dive deeper. An IP address conflict is almost always a configuration mistake. Software bugs are the theoretical exception, but I have yet to see one that causes this. It can be a mistake made by the network administrator in a DHCP-based network or by another user if the addresses are allocated manually in your network. In a DHCP environment, it can be caused by having two DHCP servers on the same network, or by long address lease times combined with hosts that do not have a battery-backed clock to keep track of time while they are turned off, like a VMware guest OS in suspended mode. Another common DHCP scenario is when an excluded range is not reserved for devices that have a static IP in a DHCP-controlled subnet. In a statically or manually addressed network, a user might assign an address that is already in use and create a conflict by doing that. The way to detect and fix these conflicts depends on the network size. In a small office environment — up to 10 network devices — you can check the settings on all of them and find the conflicting device. In larger networks, it can take days to cover all network devices, and a better approach is to use the media access control (MAC) address to track down the conflicting device. The first step is to find the two conflicting MAC addresses. You can usually find this info in the event log of a Windows-based device — sometimes the error message itself will contain the conflicting device’s MAC address. Once the address is identified, you can use the network switches to track down a network device by running a query against the layer two forwarding table and finding out what port is connected to the conflicting address.
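For instance, on many Cisco switches that query takes roughly the following shape (the MAC address here is a made-up example, and older IOS releases spell the keyword mac-address-table rather than mac address-table):

show mac address-table address 0011.2233.4455

The output lists the VLAN and the switch port where that address was learned, which is the port to chase next.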
In a Cisco-switching-based network, the command “show mac address-table” will display the entire layer two forwarding database, and by using the question mark you can learn which additional parameters will narrow the output to only the information you want, because in a large network the results can run to multiple pages. Avner Izhar, CCIE, CCVP, CCSI, is a consulting system engineer at World Wide Technology Inc., a leading systems integrator providing technology and supply chain solutions. He can be reached at editor (at) certmag (dot) com.
A race to restore the voices of the past - By William Jackson - Feb 08, 2012

In the 1880s, Alexander Graham Bell and his associates working in Washington performed some of the earliest experiments with optical transmission and sound recording. Nearly 130 years later, a team of physicists, curators and preservationists is using high-resolution digital imaging to tease the sound out of these and a handful of other experimental recordings. Six recordings, created by Volta Laboratory Associates between 1880 and 1885, are among nearly 200 recordings Bell deposited with the Smithsonian Institution. They were played back in 2011 using IRENE (for Image Reconstruct, Erase Noise, Etc.), an imaging workstation developed by physicists at the Energy Department’s Lawrence Berkeley National Laboratory and programmed to interpret the imaged grooves. The results are not exactly broadcast quality, said Carlene Stephens, a curator at the Smithsonian’s National Museum of American History. “You have to suspend your 21st-century sensibilities of what is good-quality sound,” she said. But considering that these recordings were made using a variety of techniques on media including glass, copper, brass and wax, and that no playback equipment was ever created for them, the recovery is remarkable.

IRENE: Key to unlocking mute recordings

The Library of Congress preservation program works with millions of items, terabytes of data and a full spectrum of formats. The library now has two IRENEs, a 2-D and a 3-D version, and is using them to preserve recordings from its collection that are in danger of being lost or can no longer be readily played. The Library of Congress is the largest library in the world, with more than 147 million items in its collections, and preserving and accessing these recordings is a major challenge. “The bulk of our collections are not books,” said Dianne Van der Reyden, the library’s director for preservation. The library’s Packard Campus for Audio-Visual Conservation in Culpeper, Va., is home to more than 4 million items, including millions of analog audio recordings in a large variety of formats and media and in varying conditions of preservation. “Few people are researching what to do with this material,” Van der Reyden said. Some media, such as shellac discs, are relatively hardy and can last nearly forever. But some recordings lack playback equipment, some are too fragile to be played, and some are physically deteriorating. “This is a ticking time bomb,” she said. “We’re in danger of losing much of our culture.”

High-energy physics and sound preservation

Because of the danger of losing some recordings forever, “we have really embraced digital preservation,” said Peter Alyea, the library’s digital conversion specialist. Reformatting old analog recordings to an archival standard can not only preserve them for the foreseeable future but also make them easily available for listening without additional wear and tear to the original. But some way is needed to recover sound from obsolete, damaged or fragile media without damaging them. Carl Haber, a scientist at Lawrence Berkeley, became intrigued by this challenge in 2000. “I’m a physicist,” Haber said. “I work in high-energy physics, and my particular area of interest and expertise is instrumentation” for data collection. He and fellow scientist Earl Cornell put their heads together, and “we saw immediately that there was some relevance of the techniques we were using” to audio conservation.
After some brainstorming and “Saturday experimenting” on the concept, Haber and Cornell came up with promising results and approached the Library of Congress. The library and several other institutions contributed some funding, and Haber, Cornell and the occasional grad student spent the next several years developing IRENE. Additional support came from DOE, the National Archives and Records Administration, the University of California, the Institute of Museum and Library Services, the National Endowment for the Humanities, the Andrew W. Mellon Foundation, and the John Simon Guggenheim Memorial Foundation. Conceptually, the idea was simple: Use a scanner to produce a high-resolution digital image of the grooves in a record, cylinder or other recording media, showing the details in three dimensions. Then, clean the images up to compensate for imperfections, wear, damage or errors in the imaging. “With that information, we have algorithms that can calculate how the needle would move through that,” Haber said. Those virtual movements then can be used to duplicate the sounds that would be produced by a real needle or stylus.

The evolution of IRENE

“The first demonstration was pretty easy,” Haber said. “It followed fairly directly.” But the devil was in the details, and making it work well on objects of different shapes with recordings in different formats without constantly having to tweak the software was more complex. Although it is not yet as user-friendly as it could be, IRENE has become a versatile tool. “As long as there is something like a groove, we have parameters built into it to adjust for the basic things that characterize it, such as size, depth, et cetera,” Haber said. By 2006, the first 2-D IRENE was installed at the Library of Congress for production use. The 3-D version was installed in 2009. It is capable of producing images of grooves in three dimensions, providing more information on the depth and height of grooves. IRENE has now been used to digitally read and copy hundreds of rare recordings, preserving them digitally while gathering information on the technology’s strengths and weaknesses and its possible uses. “We are in the middle of a broad study of how well we can tune IRENE,” Alyea said. It has proved capable of accurately imaging and reproducing many types and shapes of recordings, and because it does this noninvasively it does not risk the damage that could be done by playing a disc or cylinder with a traditional stylus. It is not perfect, however. “It doesn’t get quite the fidelity you would get out of a turntable and stylus,” Alyea said. So, for stable, robust recordings such as shellac discs and for more recent high-fidelity disc recordings from the 1960s onward, it probably makes more sense to continue playing them the traditional way.

Simple tech, complex art

For the time being, getting the best results out of IRENE remains something of an art. “In some ways it’s fairly simple technology, but it is also complex,” Alyea said. “Sometimes it works really well,” but wear and variations in different types of recordings can degrade results and require additional tuning or tweaking of the software. The software is being refined so that it does not have to be tweaked and tuned for each type of recording and can be used more easily by people without technical expertise. Over the past year, the library has collected 4 terabytes of data with IRENE from recordings of many different formats and conditions.
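The "virtual stylus" idea Haber describes above can be sketched in a few lines of code. The toy example below is not the IRENE software and skips everything that makes the real problem hard (groove tracking, dirt and damage, calibration, equalization); it simply assumes you already have a clean trace of a groove's lateral position extracted from the images and differentiates it to approximate what a stylus would produce on a lateral-cut recording. All names and numbers are invented.

# Toy "virtual stylus": turn a groove lateral-displacement trace into audio.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 48_000  # assumed output sample rate in Hz

def groove_to_audio(displacement_um):
    """Approximate audio from groove displacement (micrometres per sample),
    assuming a lateral-cut record where the signal follows stylus velocity."""
    velocity = np.gradient(displacement_um)   # numerical derivative
    velocity -= velocity.mean()               # remove any DC offset
    peak = np.max(np.abs(velocity))
    return (velocity / peak if peak else velocity).astype(np.float32)

if __name__ == "__main__":
    # Synthetic stand-in for measured groove positions: a 440 Hz wobble.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    fake_groove = 25.0 * np.sin(2 * np.pi * 440 * t)   # 25 micron excursion
    wavfile.write("virtual_stylus.wav", SAMPLE_RATE, groove_to_audio(fake_groove))

Even this toy version shows why the real work lies in the imaging and cleanup stages rather than in the final conversion to sound.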
One of the possible additional uses of IRENE is to analyze recordings before they are played with a stylus, so that technicians can tell in advance what the best shape and style of stylus would be. This currently is often determined by trial and error, which adds to the wear and tear on old recordings. “We’ve had some successes and some failures,” Alyea said. “It certainly is getting better.” One of IRENE’s successes came in 2008 when it was able to recover the contents from what is believed to be the first sound recording, made on smoked paper in France in the 1860s, well before Thomas Edison’s invention of the phonograph in 1877. This recording was made as an experiment to show that sound travels in waves and was never intended to be played back. Using two-dimensional imaging, IRENE was able to read the tracings on the paper and reproduce the sound. When Stephens read of this, she thought of the 200 Volta Lab recordings locked away at the Smithsonian. “This is what I’ve been waiting for 35 years” to hear, she said.

Bell's Volta Lab

The Volta Laboratory was established in the Georgetown area of Washington by Alexander Graham Bell, his chemist cousin Chichester Bell, and Charles Sumner Tainter in 1880. Over the next several years, they experimented with the transmission and recording of sound. Using revenue from the lab, Bell was able in 1887 to found the Volta Bureau, an institute to aid people with speech and hearing disabilities. The 1880s was a period of innovation in recording and intense competition among Bell, Edison and Emile Berliner. To document their work to support patent claims, these inventors deposited about 400 early recordings with the Smithsonian along with notes and other records of the experiments. Some of the documentation also is housed in the Library of Congress. Some of the Volta experiments were commercially successful. The graphophone, which recorded on a wax-covered cylinder, became a popular business tool for dictation and eventually evolved into the Dictaphone. “But most of the recordings in the collection predate any kind of commercially available recording or playback systems,” Stephens said. “They were mute artifacts,” well-cared-for but incapable of being played. IRENE, with its noninvasive, format-agnostic approach, offered hope of unlocking the old recordings, and Stephens approached the Library of Congress about the project. The library collaborated with Haber on a pilot program to demonstrate whether it was possible to recover the audio. “Nothing about it was easy,” Stephens said. “The challenge was to tune the equipment for the nonstandard formats.”

Deciphering the earliest recordings

One of the earliest recordings recovered was from a photographic glass disc made in 1884. Bell had experimented with different techniques to modulate a beam of light by width and intensity, which was recorded in a spiral on the disc. IRENE was able to treat the images like the grooves on a record. The disc’s label identified it as a man saying “barometer,” but it took a while to identify the sound because the man was saying the word one syllable at a time — “ba-ro-me-ter.” A wax recording of Hamlet’s soliloquy on a brass cylinder was easier to identify, Stephens said. “Right up front you could hear ‘To be or not to be.’ ” A recording of "Mary Had a Little Lamb" was a little harder to make out but still possible to hear.
Six of the old Bell recordings have been played back in the feasibility study, and now Stephens would like to see a broader program to recover early recordings in the Smithsonian’s collection. “I don’t think it is feasible to do all of them,” she said. “Some are too fragile or too partial to get sound from.” She estimated that about half of the 200 Volta recordings are good prospects for playback and some of the others are possibilities. Some experts maintain that Bell made some recordings himself. “We would love to get confirmation of a recording of Alexander Graham Bell’s voice,” Stephens said. There are no known samples of his voice today, so identifying it on a recording would not be easy. But the Library of Congress has the lab notes of Bell and his cousin Chichester, and the Smithsonian has Tainter’s notes. “It’s a matter of collating the information.”
Frame Relay, part 1 Network Consultants Handbook - Frame Relay by Matthew Castelli

Frame Relay is a Layer 2 (data link) wide-area networking (WAN) protocol that operates at both Layer 1 (physical) and Layer 2 (data link) of the OSI networking model. Although Frame Relay internetworking services were initially designed to operate over Integrated Services Digital Network (ISDN), the more common deployment today involves dedicated access to WAN resources.

NOTE: ISDN and Frame Relay both use the signaling mechanisms specified in ITU-T Q.933 (Frame Relay Local Management Interface [LMI] Type Annex-A) and American National Standards Institute (ANSI) T1.617 (Frame Relay LMI Type Annex-D).

Frame Relay is considered to be a more efficient version of X.25 because it does not require the windowing and retransmission features found with X.25. This is primarily due to the fact that Frame Relay services typically are carried by more reliable access and backbone facilities. Frame Relay networks are typically deployed as a cost-effective replacement for point-to-point private line, or leased line, services. Whereas point-to-point customers incur a monthly fee for local access and long-haul connections, Frame Relay customers incur the same monthly fee for local access, but only a fraction of the long-haul connection fee associated with point-to-point private line services. The long-haul charges are typically usage-based across the virtual circuit (VC).

NOTE: The long-haul fee associated with point-to-point private (leased) line services is sometimes known as the inter-office connection fee. Service providers generally file a tariff with the FCC regarding these fees, comprising a base cost plus a per-mile charge.

NOTE: X.25 was designed for use over less reliable transmission media than what is available in the marketplace today. Due to this unreliable nature, X.25 took on the error detection and correction (windowing and retransmission) mechanisms within the protocol stack. This resulted in higher overhead on the network, yielding less available bandwidth for data throughput.

NOTE: Frame Relay is a packet-switched technology, enabling end nodes to dynamically share network resources.

Frame Relay was standardized by two standards bodies: internationally by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and domestically by ANSI.

Frame Relay Terms and Concepts

Frame Relay is a frame-switched technology, meaning that each network end user, or end node, will share backbone network resources, such as bandwidth. Connectivity between these end nodes is accomplished with the use of Frame Relay virtual circuits (VCs). Figure 15-1 illustrates the components of a Frame Relay WAN. Table 15-1 defines the common and relevant Frame Relay terms.

Table 15-1: Frame Relay Terms and Definitions

|Bc||Committed burst. Negotiated tariff metric in Frame Relay internetworks. The maximum amount of data (measured in bits) that a Frame Relay internetwork is committed to accept and transmit at the committed information rate (CIR). Bc can be represented by the formula Bc = CIR × Tc.|

|Be||Excess burst. Negotiated tariff metric in Frame Relay internetworks. The number of bits that a Frame Relay internetwork will attempt to transfer after Bc is accommodated. Be data is generally delivered with a lower probability than Bc data because Be data is marked as discard eligible (DE) by the network.|

|BECN||Backward explicit congestion notification.
This bit is set by a Frame Relay network in frames traveling in the opposite direction of frames that are encountering a congested path. Data terminal equipment (DTE) receiving frames with the BECN bit set can request that higher-level protocols take flow control action as appropriate, such as the throttling back of data transmission.|

|CIR||Committed information rate. Rate at which a Frame Relay network agrees to transfer information under normal conditions, averaged over a minimum increment of time. CIR, measured in bits per second (bps), is one of the key negotiated tariff metrics.|

|DCE||Data communications equipment. The DCE provides a physical connection to the network, forwards traffic, and provides a clocking signal used to synchronize data transmission between DCE and DTE.|

|DE||Discard eligible. If the Frame Relay network is congested, DE-marked frames can be dropped to ensure delivery of higher-priority traffic; in this case, CIR-marked frames.|

|DLCI||Data-link connection identifier. Values used to identify a specific PVC or SVC. In Frame Relay internetworks, DLCIs are locally significant. In a Frame Relay LMI extended environment, DLCIs are globally significant because they indicate end devices.|

|DTE||Data terminal equipment. Device at the end of a User-Network Interface (UNI) that serves as either a data source or destination.|

|FECN||Forward explicit congestion notification. Bit set by a Frame Relay network to inform the DTE receiving the frame that congestion was experienced in the path from origination to destination. The DTE that is receiving frames with the FECN bit set can request that higher-level protocols take flow control action as appropriate, such as throttling back data transmission.|

|LMI||Local Management Interface. Set of enhancements to the basic Frame Relay specification. LMI includes support for keepalive mechanisms, verifying the flow of data; multicast mechanisms, providing the network server with local and multicast DLCI information; global addressing, giving DLCIs global rather than local significance; and status mechanisms, providing ongoing status reports on the switch-known DLCIs.|

|NNI||Network-to-Network Interface. Standard interface between two Frame Relay switches that are both located in either a private or public network.|

|PVC||Permanent virtual circuit. Frame Relay virtual circuit that is permanently established (does not require call-setup algorithms).|

|SVC||Switched virtual circuit. Frame Relay virtual circuit that is dynamically established via call-setup algorithms. Usually found in sporadic data transfer environments.|

|Tc||Tc is a periodic interval that is triggered anew when data arrives at the network. If there is no data traffic when Tc has elapsed, a new interval does not begin until new data traffic is sent to the network.|

|UNI||User-Network Interface. Frame Relay interface between a Frame Relay switch in a private network (such as a customer premise) and a public network (such as a service provider). Sometimes referred to as a Subscriber Network Interface (SNI).|

Figure 15-2 illustrates some of the Frame Relay terminology used. The remainder of the terms will be illustrated where appropriate throughout this chapter. Part 2 of this chapter will cover Frame Relay components.
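To close this installment with a worked example of the Bc and Be definitions in Table 15-1 (the numbers are illustrative only; actual values are negotiated with the carrier): suppose a PVC is provisioned with CIR = 128 kbps and Tc = 1 second on a 256-kbps access line. Then Bc = CIR × Tc = 128,000 bits may be offered in each interval with the network's delivery commitment. Any additional bits offered in that interval, up to roughly (access rate − CIR) × Tc = 128,000 bits, fall under Be: the network will attempt to carry them, but they are marked DE and are the first to be dropped if congestion occurs. Traffic beyond Bc + Be within the interval is typically discarded at the ingress switch.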
It’s something tech-savvy grandchildren have warned their elders about for years: Microsoft’s Internet Explorer has long been ridiculed as a “lame duck” Web browser, although version 11 was a step in a more modern direction. Now, a grave and widespread vulnerability in the application makes the argument for switching to a different browser about more than just aesthetics. The United States Computer Emergency Readiness Team (CERT) identified the security flaw and published a report Saturday. The flaw allows Internet Explorer versions 6 through 11 to be exploited remotely, possibly causing a complete compromise of a user’s machine. More than half of all computers run Internet Explorer, according to market share data. Hackers can exploit the bug through Adobe Flash, according to CERT, and they’ve already begun taking advantage of it. “Although no Adobe Flash vulnerability appears to be at play here, the Internet Explorer vulnerability is used to corrupt Flash content in a way that allows (address space layout randomization) to be bypassed via a memory address leak,” reads the report on the CERT website. “This is made possible with Internet Explorer because Flash runs within the same process space as the browser. Note that exploitation without the use of Flash may be possible.” After getting a user to view a malicious HTML document – either as an email attachment or by luring the user to click through to a phony website – online ne’er-do-wells can execute code on the victim’s system without detection or security flags. The vulnerability allows hackers to bypass Windows authentication because their code is hiding behind existing memory addresses. An intruder would have the same rights as the user who was duped, according to Microsoft, so accounts with restricted system access – a child’s account, for example – give an exploiter less to work with. Although Microsoft has not yet fixed the vulnerability, there are myriad workarounds. For starters, users can simply install a different browser. The two most popular options are Google Chrome and Mozilla Firefox. Most browser- and client-based email applications, including Microsoft Outlook, Outlook Express, Windows Mail, Google Mail and Mozilla Thunderbird, open HTML attachments securely, disabling scripting functionality that could be used maliciously. Not all email applications do this, however. The sound advice is to be more vigilant about suspicious emails and links to potentially malicious websites. According to cybersecurity firm FireEye Inc., IE users can also enable “Enhanced Protected Mode” to break the exploit or simply disable the Adobe Flash plug-in in their browser, though the latter greatly limits the online content users will be able to view. According to Microsoft, any Internet Explorer version being used on a system that is running the Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2 operating systems is, by default, safe. ©2014 The Tribune-Democrat (Johnstown, Pa.)
It’s Tax Identity Theft Awareness Week in the United States, which means that tax-related identity theft happens often enough to get the government’s attention. The Federal Trade Commission (FTC) and Internal Revenue Service (IRS) have provided the public with a number of resources to help raise awareness of this issue, and the FTC has published a short description of how tax identity theft works. For many people, the term “hacking” means that a criminal has broken through a firewall to get access to a network. The firewall is one of the easiest security concepts for people to understand, and often is thought of as the guard at the gate who provides entry based on a list of authorized visitors or other criteria. It helps that the term “firewall” originated outside of IT as a literal physical wall that was meant to prevent a fire from spreading, so the word itself was already in the public vernacular before the Internet was popular. ‘Firewall’ is also one of the oldest Internet security terms, having been formally introduced by academia in the 1980s. Because of the history and context of the term, it makes sense that people tend to think that the firewall is what gets “broken” in a hack. Modern firewalls are much more than a gate that allows traffic in and out based on simple rules. The latest firewalls provide several other functions, such as DHCP, secure VPNs, link balancing, and more. As business needs have evolved with the rise of branch offices, remote workers, and SaaS applications, the network firewall has evolved to keep pace, aggressively protecting the network perimeter and providing the services needed to enable the business it protects. So far in our series we’ve talked about ransomware, threat vectors, and the technologies that we use to protect you. Now let’s take a look at email and why it’s the biggest and most exploited threat vector of all. The weakest point of security in any organization is its users, whether due to a lack of awareness or to security fatigue. Attackers know this, and they target users through email because, with a working email address, a malicious but well-crafted attack can easily get in front of a vulnerable employee. Attackers are also very determined, so they will continue to pursue a target-rich environment until they find a gap in defenses. A recent Consumer Affairs article reports that as many as one-third of AV scanners failed to find malware samples in a two-month test. That’s why attackers keep trying, even when they know a company has anti-virus protection in place. Barracuda Essentials for Office 365 customers can now access security training at no cost thanks to a new partnership between Barracuda and KnowBe4. The training focuses on helping users identify potential threats like phishing and ransomware. For full details, see the press release here. KnowBe4, one of the world’s most popular integrated “new school” security awareness training and simulated phishing platforms, is used by more than 6,500 organizations worldwide. Because users are often the weakest link in a security system, KnowBe4 offers educational courses and simulations that help users become an additional layer of security for the company. According to the FBI, Business Email Compromise (BEC) is now a $3.1B business. The FBI defines BEC as “a sophisticated scam targeting businesses working with foreign suppliers and/or businesses that regularly perform wire transfer payments.
The scam is carried out by compromising legitimate business e-mail accounts through social engineering or computer intrusion techniques to conduct unauthorized transfers of funds.” This has also become known as spear phishing. I spend a lot of time talking with customers about their business and how they run their IT infrastructure to meet those business needs. Traditionally, IT’s primary role has been to deploy and manage infrastructure and applications that drive the business. Because of the evolving threat landscape, IT has been forced into the position of protecting users from themselves. As the Senior Vice President and General Manager of the Security Business at Barracuda, I would like to personally explain the recent incident that significantly degraded our email security service and impacted our customers. The facts of the incident: At 7pm PST on Tuesday, November 1, 2016, the Barracuda Essentials for Email Security Service began experiencing an unusually high volume of unsolicited inbound DNS responses appearing to be from thousands of globally distributed hosts. This traffic, which was spurious and polymorphic, impacted email delivery, the message log, and quarantine logs. Our real-time monitoring system immediately identified the increased traffic, and we quickly began deploying defensive measures to address the surge. These measures restored mail flow through the day as we mitigated the impact of the increased traffic load. While mail delivery was delayed for some customers, no email was lost in this incident. Furthermore, Barracuda threat scanners remained operational, and the UI was accessible throughout the troubleshooting process. Normal delivery has resumed, and any email that was temporarily delayed has been successfully processed. At this time, all systems remain fully operational. We are closely monitoring the situation and implementing additional measures to strengthen our infrastructure. Some customers using the Barracuda Essentials for Email Security solution are experiencing delays in incoming email delivery and in accessing the message log and end-user quarantine interfaces. Other parts of the administrative user interface are fully accessible. We are actively working on the problem. Our initial investigations revealed an unusually high volume of inbound connections from multiple unverified source IPs. We are in the process of sanitizing this traffic. As a result, the quality of service is gradually improving. Our first priority is to restore the services to full capacity. We will provide more information as it becomes available. Thank you for your patience. For more information on this issue, visit our Essentials for Email Security peer support forum here. You can also follow our updates on our status page at status.barracuda.com.
BPM 101: Introduction to BPM [Video] This video is brought to you by BPM Basics and sponsored by Appian. BPM stands for Business Process Management. BPM can be defined as a management approach to continuously improve processes and achieve organizational objectives through a set of methodologies. BPM has multiple input types, including people and processes. People can be internal or external and can cross function and department boundaries. A process requires a series of actions to achieve a certain objective. BPM processes are continuous but also allow for ad hoc action. Processes can be simple or complex based on the number of steps, the number of systems involved, and so on. They can be short or long running. Longer processes tend to have multiple dependencies and a greater documentation requirement. BPM has multiple outputs. These include analytics via a dashboard or reports, case information updates, and phone or email alerts. The end goal of the process is to achieve desirable outcomes and bring the process to an end. An IT request process is provided as an example of BPM.
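To make the idea of a process as "a series of actions with people as inputs and analytics as outputs" more tangible, here is a deliberately simple sketch of an IT request process in code. It is not from the video and is not how a BPM suite actually models processes; the step names, roles and fields are invented purely for illustration.

# A toy model of an IT request process: ordered steps, an actor for each
# step, and an audit trail that could feed dashboards, reports or alerts.
from datetime import datetime, timezone

STEPS = {                      # step name -> role responsible for it
    "submit_request": "requester",
    "manager_approval": "manager",
    "it_fulfillment": "it_team",
    "notify_requester": "system",
}

def run_it_request(requester, item):
    """Walk one request through every step and return its audit trail."""
    trail = []
    for step, role in STEPS.items():
        trail.append({
            "step": step,
            "actor": requester if role == "requester" else role,
            "item": item,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
    return trail               # the "analytics" output of this toy process

if __name__ == "__main__":
    for entry in run_it_request("alice", "new laptop"):
        print(entry)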
Femtocell is quickly becoming a buzzword in the mobile space, and it also impacts wireless data users one way or another. Basically, femtocell is a term used to describe a small base station or a signal strengthener/repeater, but there are many factors to consider other than just the technical ones when it comes to femtocell. Of course, that does not mean that the entire concept of micro/relay towers is even technically sound in all cases. What is Femtocell Really? Wireless signals can be fickle things, and troubleshooting wireless problems can be a serious undertaking. We here at High Speed Experts routinely receive requests to help troubleshoot wireless problems, and have created a three-part wireless troubleshooting guide. Our guide covers signals that come from laptops, desktops, and wireless routers, and that means two things: shorter distances and higher power. Simply put, distance and low-power requirements both make transmission difficult; add in high throughput and the problem is rather severe. In fact, 3G and 4G data standards are really amazing considering the power profiles offered by most portable devices and cellular phones. A femtocell tower is basically a mini-relay station that can be installed in a property or in a vehicle. The idea is that by providing a much closer tower, with far more power available to it than a cellular phone or smartphone battery can supply, signals will be better. Of course, theory and reality collide in many cases, which leads people to ask… Does Femtocell Really Work? There are many reports of femtocell stations that do not do what they say they will do, and this could be for many reasons. It could truly be that some of the early micro/relay stations do not work as advertised or are in need of a firmware upgrade, a common problem for early tech adopters. It could be that some consumers have cellular phones that are not designed to be compatible with femtocell towers. Some networks do not support femtocell technology, and some only support very specific femtocell technology. Add to this the laundry list of technical standards and compliance issues, and it is entirely possible that most of the horror stories are the result of early adopters, unprepared firmware, and other issues that are easily explained. Despite the fact that it is possible to easily attribute so many possible complaints to one reason or another, the early roll-out of femtocell technology has left a bad taste in the mouths of many. This is particularly bad for those looking into femtocell technology in order to boost their mobile broadband performance and/or reliability. The Argument Against Femtocell The most obvious and common argument against femtocell would be that service providers should be responsible for building out networks and ensuring connectivity. This argument certainly has merit, but there are some technical considerations that make it unreasonable for network providers to offer affordable service that has very high quality levels and no dead spots. The reason has to do with the incredible cost of even smaller cellular relay towers. While it would be possible in theory for carriers to build out their networks in such a way, in practice the result would probably be a network that costs too much. That cost would slow R&D and be passed on to consumers. In short, it might seem ironic, but poor service helps make mobile broadband and cellular plans affordable.
That does not mean that all carriers should be let off the hook for poor service, but that there might be a place for femtocell technology. The Argument For Femtocell The aforementioned argument for femtocell is simple: cellular providers have an obligation to provide great service over a wide area, but they cannot ensure high-quality, high-performance connectivity everywhere. In the future, consumers willing to offer open access to their mini-relay towers might receive compensation in some form, which could help networks build out quicker and more thoroughly (or "deeper," in network lingo). A similar arrangement is being used in many British and French cities with regard to open WiFi access, and has proven quite effective.

Is Femtocell Right For You?

If, after reading this article, you feel femtocell might be for you, then you need to do the following:
- Call your wireless service provider and ask them which standard(s) they support.
- Ensure that your mobile device/phone supports femtocell technology.
- Buy an appropriate micro/relay tower.
- Ensure that your new femtocell tower is running the appropriate firmware. Note the use of the word appropriate instead of latest. This may take a little research.
- Learn where to place your new femtocell tower for best effect.
Sending data in plain text just doesn't cut it in an age of abundant hack attacks and mass metadata collection. Some of the biggest names on the Web--Facebook, Google, Twitter, etc.--have already embraced default encryption to safeguard your precious data, and the next-gen version of the crucial HTTP protocol will only work for URLs protected by HTTPS. Mark Nottingham, chair of the HTTPbis working group developing the HTTP 2.0 protocol for the Internet Engineering Task Force, made the announcement early Wednesday in a World Wide Web Consortium mailing list. "I believe the best way that we can meet the goal of increasing use of TLS [Transport Layer Security] on the Web is to encourage its use by only using HTTP/2.0 with https:// URIs," Nottingham wrote. Leaping through hoops for backward compatibility Non-encrypted HTTP URLs would continue to use the current HTTP protocol, though Nottingham says the HTTP 2.0 protocol will still need to formally define how the protocol handles unencrypted URLs. That's because the "HTTP 2.0 requires HTTPS" line becomes a bit blurry further down in Nottingham's announcement. "To be clear--we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption," he wrote. "However, for the common case--browsing the open Web--you'll need to use https:// URIs and if you want to use the newest version of HTTP." Michael Sweet, a senior printing engineer at Apple and chair of the IEEE's printer working group, wondered if those exemptions could hamper the overall effectiveness of the proposal. "... I honestly don't see how this WG can actually enforce/mandate https:// and still allow http:// URIs," Sweet wrote. "So long as unencrypted URIs are supported by HTTP/2.0, the best you can do is make security recommendations since TLS is not REQUIRED for the open web." While mandating HTTPS for HTTP 2.0 will surely spur more widespread use of encryption in an age of pervasive surveillance, it won't magically make all Internet traffic inherently spy-proof, and it wouldn't be a completely painless rollout. (Here's hoping SSL certificate authorities are ready for a rush of new customers!) Nor is the proposal proceeding unchallenged. "I strongly believe that support for unencrypted HTTP/2.0 is still needed and useful, particularly when you are routing it over an already 'secure' channel to a resource-constrained device," Sweet wrote as part of his response to Nottingham. "...I also believe that HTTP/1.x has been so successful because of its ease (and freedom) of implementation. But [in my opinion] restricting [HTTP 2.0's] use to https:// will only limit its use/deployment to sites/providers that can afford to deploy it and prevent HTTP/2.0 from replacing HTTP/1.1 in the long run." Other HTTPbis members have expressed reservations about the "HTTPS Everywhere" declaration. The IETF's decision to (mostly) mandate HTTPS encryption isn't the end-all and be-all of the discussion, however. There are other encryption proposals on the table, and Nottingham said that "As HTTP/2 is deployed, we will evaluate adoption of the protocol and might revisit this decision if we identify ways to further improve security." One more step on a long road to a faster, safer Web The HTTPbis Working Group's last call for HTTP 2.0 is slated to occur in April 2014, before submitting HTTP 2.0 to the Internet Engineering Steering Group in November 2014 for consideration as a formal standard.
The current HTTP 2.0 draft implementation was inspired by Spdy, an open protocol that's compatible with standard HTTP and uses TLS encryption almost universally. When it goes live, the IETF hopes for HTTP 2.0 to include Spdy-esque features like header compression, connection multiplexing, and (it now seems) default encryption--all of which would make the World Wide Web faster and more secure. Microsoft, meanwhile, released an HTTP 2.0-friendly "Katana" server implementation in April 2013 to help convince the IETF to include some of its proposed changes in the final HTTP 2.0 protocol. For a long and insightful discussion on the pros and cons of mandating HTTPS for HTTP 2.0, check out this Reddit thread. This story, "Next-Gen HTTP 2.0 Protocol Will Require HTTPS Encryption (Most of the Time)" was originally published by PCWorld.
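A practical footnote, not from the article: the mechanism through which a client and server ultimately agree to speak HTTP/2 over TLS is ALPN, negotiated during the handshake. The short Python sketch below asks a server which protocol it selects; the host name is only an example, and it assumes Python 3.5 or later built against an OpenSSL version with ALPN support.

# Minimal ALPN probe: connect over TLS, offer h2 and http/1.1, report the winner.
import socket
import ssl

HOST, PORT = "www.example.com", 443              # example host; substitute your own

context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])   # offer HTTP/2 first

with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        chosen = tls_sock.selected_alpn_protocol()   # e.g. "h2" or "http/1.1"
        print(HOST, "negotiated:", chosen or "no ALPN protocol")

If the server answers "h2", it is willing to speak HTTP/2 over that encrypted connection; servers that never adopted the protocol will answer "http/1.1" or nothing at all.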
Last week, the U.S. Department of Homeland Security's Science and Technology Directorate began a pilot of an interoperable communications system with the District of Columbia Office of the Chief Technology Officer (OCTO). The Radio Over Wireless Broadband (ROW-B) project will demonstrate how to connect existing wireless radio systems with advanced broadband technologies, such as laptops and smart phones. In addition to traditional, handheld or vehicle-mounted radios, emergency responders are increasingly using separate, wireless broadband systems to communicate. Wireless broadband services are often supplied by a commercial cellular service provider. Because the radio and broadband systems serve specific and different needs, they were not designed to communicate with each other. The lack of interoperability between these two systems may compromise emergency response operations when responders using a broadband system are unable to communicate with responders using a radio system. That's why the pilot is so important. "The ROW-B pilot represents an important milestone in our efforts to advance interoperability progress," said Dr. David Boyd, director of the DHS' Science and Technology Directorate's Command, Control and Interoperability Division. "The capability to communicate among radio and broadband system users will significantly improve emergency response operations by allowing non-radio users to communicate with response units in the field." During July-August 2008, the ROW-B pilot connected OCTO's existing land mobile radio system--wireless radio systems that are either handheld or mounted in vehicles--with broadband devices using the Bridging Systems Interface. This allows a single user to reach multiple users through talk groups on a city-operated 700MHz broadband network. By allowing users to create talk groups in real-time, this technology saves critical response time. ROW-B also will use GIS technology to identify the location of other vehicles, equipment, and responders. GIS databases display these locations on maps that include important information such as roads, buildings, and fire hydrants--enabling emergency responders to access the locations of critical resources, and to form dynamic talk groups based on proximity.
The Hiller Advanced Research Division (A.R.D.) incorporated a five foot fiberglass round wing, (ducted fan) with twin counter rotating coaxial propellers powered by two 44hp/4000 rpm, four cylinder opposed, two-cycle, Nelson H-59 Engines. The Nelson engine was the first two-cycle engine certified by the FAA for aircraft use. Utilizing the Bernoulli principle, 40% of the vehicle's lift was generated by air moving over the ducted fan's leading edge. The remaining 60% of lift was generated by thrust from the counter rotating propellers. Of the six Flying Platforms that were built, the (ONR) vehicle is on exhibit at the Hiller Aviation Museum, and the National Air & Space Museum.
It's spring break, and high school students are eager to put away their books, binders, pencils and... iPads? High school classrooms, teaching techniques and the very way students learn may receive a tech infusion in the near future. Already some schools across the country, most notably the Los Angeles Unified School District (LAUSD), are bringing in tablets and other technology. "Technology is becoming pervasive in the classroom and playing a strategic role," says Carolyn April, senior director of industry analysis at CompTIA. CompTIA surveyed teachers, admins and students late last year and found that the idea of technology in the classroom is exciting for everyone. All tallied, 58 percent of schools with 1,000 or more students use some education technology, compared to 45 percent of smaller schools. Putting Tech to the Test Three out of four teachers believe technology positively impacts the education process. Many teachers are under pressure to raise test scores, April says, and so they see the use of technology as helping them hit their goals and improve student achievement. Three out of four principals and vice principals believe technology plays an important role in recruitment, particularly among millennials holding newly minted teaching credentials. April says young teachers who grew up with technology simply expect to work with tools such as tablets, not so much chalkboards. Most importantly, nine out of 10 students believe the use of technology in the classroom will be crucial in helping them get jobs down the road. It'll also change the way they learn. Technology can remake the classroom experience from listening to lectures to learning interactively. When most people think about technology in the classroom, they think about the tablet or, more narrowly, the iPad. Indeed, CompTIA's survey found that the tablet is the number one technology that schools plan to invest in within the next few years. Educational Technology Is More Than Tablets But it's important to note that the tablet is only the tip of the iceberg, especially when considering the many cloud services in the education market. There's classroom management software and online curriculum for teachers, game-based learning software for students, and wireless network infrastructure and backend software, such as mobile device management, tying everything together. Technology in the classroom isn't easy to do, and early adopters have had a rough learning curve. Educators across the country who dream of iPads are surely cringing as they watch missteps in the massive iPad rollout at LAUSD. In the summer of 2013, LAUSD began a $1.3 billion effort to put an iPad in the hand of every student, teacher and admin. Since then, there have been rumors of stacks of iPads collecting dust, students using iPads inappropriately, L.A. schools Supt. John Deasy resigning under pressure over his close ties with Apple and Pearson (which provided the online curriculum) and, most recently, the FBI seizing documents related to the contract bidding process, the Los Angeles Times reported. Then there's this alarming stat from an outside firm hired by LAUSD to assess the project's progress last fall: Only one teacher out of 245 classrooms visited was using Pearson's online curriculum, the Los Angeles Times reported. Four out of five high schools reported that they rarely used the iPads. Apparently, the rush to get iPads in people's hands outpaced what teachers and students should do with them.
Development and training lagged behind, a common problem in large scale technology deployments. In fact, CIOs say change management and user training are often the biggest hurdles in a project. You Can’t Stop the March of Tech Nevertheless, the troubled LAUSD iPad project hasn't dampened enthusiasm to bring technology to the classroom. "I don't think this one case is going to stop the march of time when it comes to different types of devices in the classroom," April says, adding, "If you're seeing pushback, it's because [teachers] are being handed a bunch of tools but not taught how to use them... that's not a technology problem." This story, "Can tech help teachers teach and students learn?" was originally published by CIO.
The Food and Drug Administration plans to apply the same strict regulations to mobile apps as it does to medical devices, such as blood pressure monitors, if those apps perform the same functions as stand-alone or computer-based devices. The FDA has developed a “tailored” approach to regulation of mobile apps that would allow use of some apps without oversight, according to Dr. Jeffrey Shuren, director of the FDA’s Center for Devices and Radiological Health. “Some mobile apps carry minimal risks to consumers or patients, but others can carry significant risks if they do not operate correctly,” he said. “The FDA’s tailored policy protects patients while encouraging innovation.” The FDA said that “if a mobile app is intended for use in performing a medical device function (i.e. for diagnosis of disease or other conditions, or the cure, mitigation, treatment, or prevention of disease), it is a medical device, regardless of the platform on which it is run,” in a guidance document for industry and its staff released Monday. The agency said its oversight approach to mobile apps “is focused on their functionality, just as we focus on the functionality of conventional devices, with oversight not determined by the platform.” Bakul Patel, senior policy advisor to Shuren, said the agency would regulate a mobile medical app that helps measure blood pressure by controlling the inflation and deflation of a blood pressure cuff (a blood pressure monitor), just as it regulates traditional devices that measure blood pressure. But, he said, a mobile app that doctors or patients use to log and track trends with their blood pressure would not be regulated as a device. Mobile medical apps that recommend calorie or carbohydrate intakes to people who track what they eat are also not within the current focus of the FDA’s regulatory oversight. “While such mobile apps may have health implications, FDA believes the risks posed by these devices are low and such apps can empower patients to be more engaged in their health care,” the agency said. The agency said that, based on industry estimates, 500 million smartphone users worldwide will be using a health care application by 2015; by 2018, 50 percent of the more than 3.4 billion smartphone and tablet users will have downloaded mobile health applications. These users include health care professionals, consumers, and patients. The FDA emphasized it won’t regulate the sale or ordinary use of smartphones and tablets, allaying concerns that the agency would try to regulate all mobile gadgets. The new regulations do not cover mobile electronic health record apps. Mobile apps, the FDA said, can help people manage their own health and wellness, promote healthy living, and gain access to useful information when and where they need it, and the agency “encourages the development of mobile medical apps that improve health care and provide consumers and health care professionals with valuable health information.”
Assisting the GPS unit GPS was designed around devices operated outdoors, and that's why there's room for improvement. This is where AGPS, and services that can also use GPS data, come in. A standalone GPS is most often used outdoors, where there's often a good line of sight to several satellites. Because its most common use is in an automobile, the receiver's position doesn't change while it's powered off; and it's used over a period of time, so an initial fix on location is often less critical.

[Photo: A Mini with an in-dash, GPS-enabled Nokia N800]

"Generally speaking, where you leave your car is where you'll find it the next time you turn on your GPS," said Don Fuchs, the head of chipmaker Broadcom's GPS hardware and services division. With a handset, you might turn it off in New York and power it back up in another city. "You have such a tremendous time difference, and initial position difference, that the [unassisted] GPS can take a very, very long time to start: 4, 5, 6 minutes," he said. Likewise, standalone GPS receivers have far better antennas than cellphones and other mobile devices. Fuchs said that a typical outdoor GPS has a 25 mm square antenna that costs $1 in total to add to a receiver. A cell phone antenna might be 3 to 4 mm by 1 mm, and add 1.5 cents to the bill of materials for a device. "The performance of a 1-cent antenna is pretty much scalable to what you get on the $1 antenna," he said. AGPS offers a way to work with a fraction of a GPS satellite transmission, a shortage of satellites, and an error-ridden reception, yet still manage to get an accurate fix on a location quickly. Other systems use GPS data (even preprocessed raw data or incomplete data) and combine it with other inputs to find a location in difficult places, typically indoors. As handsets have gained in intelligence, so, too, has GPS assistance been modified and expanded. How the network knows where your handset is AGPS originally arose in the late 1990s as a response to a then-future and now-current FCC mandate that required cellular operators to provide a caller's location to 911 operation centers. Some carriers opted to use a cell-tower based approach that provides fairly coarse accuracy; Sprint and Verizon chose the GPS path, assuming they'd be able to offer other services eventually to make up for the cost. There's also another factor in the rise of AGPS. Sprint (now Sprint Nextel) and Verizon both use CDMA cellular systems, which rely on GPS for timing for network purposes; AT&T and T-Mobile use the competing GSM standard, which is far more popular worldwide. Each CDMA base station has a GPS receiver, and it was an easy step for CDMA developer Qualcomm to enhance a GPS receiver in a handset with the GPS data that operators were already receiving with fixed receivers that were part of network base stations. (What has now become AT&T Wireless, along with T-Mobile, opted for U-TDOA [Uplink Time Difference of Arrival], a method of multilateration that allows a network to compute a handset user's coordinates based on the fixed and known locations of cellular base stations. No special hardware is required on the cellular phone, and no GPS satellites are consulted.) GPS chips in this early timeframe were expensive, and they had the various warm and cold start problems described earlier. It wouldn't be tolerable for an E911 call to require several minutes to provide a location, or to fail to provide a location at all.
And because many E911 calls could originate inside buildings, some way to supplement and aid GPS was required. (The FCC requires that a certain percentage of areas across each carrier's network provide accuracy within certain ranges; the required areas and location accuracy will continue to be tightened over time.)
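The U-TDOA approach mentioned above is, at its core, a multilateration problem: given the known coordinates of several base stations and the measured differences in signal arrival times, solve for the handset's position. The sketch below illustrates the idea in two dimensions with a simple least-squares fit; the station coordinates, the flat geometry, and the absence of measurement noise are all simplifying assumptions made for illustration, not how a carrier's location platform actually works.

```python
# A minimal 2-D sketch of the multilateration idea behind U-TDOA: given base
# stations at known coordinates and the differences in signal arrival times,
# solve for the handset position with least squares. This is an illustrative
# toy (flat geometry, synthetic noise-free timings), not a carrier
# implementation; all names and values are hypothetical.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Known base-station coordinates (metres) and the true handset position,
# used here only to synthesize arrival-time differences for the demo.
stations = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 6000.0], [4000.0, 5000.0]])
true_pos = np.array([1200.0, 2500.0])

# Time-difference-of-arrival at each station, relative to station 0.
dists = np.linalg.norm(stations - true_pos, axis=1)
tdoa = (dists - dists[0]) / C

def residuals(p):
    d = np.linalg.norm(stations - p, axis=1)
    return (d - d[0]) / C - tdoa

estimate = least_squares(residuals, x0=np.array([0.0, 0.0])).x
print("estimated handset position:", estimate)   # ~ [1200, 2500]
```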
<urn:uuid:06add1d0-5eb3-4119-8b4e-fcf5628786a1>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2009/01/assisted-gps/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00086-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965293
814
2.6875
3
American space agency NASA has built an Artificial Intelligence (AI) platform capable of aiding firefighters when they enter a burning building. The platform, called Audrey, is the product of a partnership between the agency's Jet Propulsion Lab (JPL) and the Department of Homeland Security (DHS). This project forms part of the Next Generation First Responder program, which aims to identify ways firefighters, police and paramedics can stay safe while in the field.
Mass of data from AI platform
Audrey collects data about heat, gases and other signs of danger to help first responders get through the flames safely and quickly, so they can reach and rescue victims. To make the AI platform possible, the designers used several technologies developed by NASA and the Department of Defense. It's been in the works for nine months. Mark James, lead scientist of the Audrey project at JPL, explained that the platform works with mobile devices and fire equipment. "As a firefighter moves through an environment, Audrey could send alerts through a mobile device or head-mounted display," he said.
Integrated with IoT
What makes it innovative is the fact that it's not limited to one user and can track an entire team of firefighters. It sends recommendations to individuals on how they can work together more effectively. It's been designed to work alongside the Internet of Things, utilising devices and sensors that communicate with each other. For example, wearable tech attached to a firefighter's jacket could provide information on their location. The cloud plays a pivotal role here and allows Audrey to watch situations as they develop. It can analyse them and predict the exact resources that'll be needed next, saving the firefighters much-needed time. The platform was demonstrated in June at the Public Safety Broadband Stakeholder Meeting, which was held by the Department of Commerce. During the presentation, it utilised several sensors and made safety recommendations, with field tests to follow within a year. John Merrill, NGFR program manager for the DHS Science and Technology Directorate, said the technology improves the skillset of first responders in the field and helps them build new strengths. "The proliferation of miniaturized sensors and Internet of Things devices can make a tremendous impact on first responder safety, connectivity, and situational awareness," Merrill said. "The massive amount of data available to the first responder is incomprehensible in its raw state and must be synthesized into useable, actionable information."
A guardian angel
Edward Chow, manager of JPL's Civil Program Office and program manager for Audrey, said: "When first responders are connected to all these sensors, the AUDREY agent becomes their guardian angel. Because of all this data the sensor sees, firefighters won't run into the next room where the floor will collapse."
"Most A.I. projects are rule-based. But what if you're only getting part of the information? We use complex reasoning to simulate how humans think. That allows us to provide more useful info to firefighters than a traditional A.I. system."
<urn:uuid:b36d068c-d736-498c-a6ed-5b3b4334e825>
CC-MAIN-2017-09
https://internetofbusiness.com/nasa-builds-ai-platform-firefighters/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00086-ip-10-171-10-108.ec2.internal.warc.gz
en
0.92833
645
2.90625
3
Question: Do you remember the Amiga computer? I was an early fan of this wonderful machine and operating system. Starting with AmigaOS 2.0, a macro language called ARexx (with an "A" as in "Amiga") was added. The language was derived from REXX ("REstructured eXtended eXecutor"), initially developed by IBM. The power of ARexx lay in its ability to communicate with other applications using scripts. If an application had an ARexx port, it was possible to interact with it. A nice example was the automation of repetitive tasks (like image conversions). And that was in 1990! Since that wonderful period, we have seen a lot of other solutions developed to interact with applications. On UNIX, you can interact with a process using named pipes. There are also commercial solutions like the OPSEC ("Open Platform for SECurity") protocol developed by Check Point. Did you hear about the IF-MAP protocol? It was announced in 2008 by the Trusted Computing Group (TCG): "Trusted network connect–part of the Trusted Computing Group–published its Interface for Metadata Access Point protocol on April 28 to provide a common framework for sharing event metadata. This means there's finally a way for security and network devices from a variety of vendors to communicate, and thus make better assessments on whether to grant or deny access to everything from PCs to switches." Modern IT infrastructures become more and more complex and integrate several types of devices. In most cases, devices cannot talk to each other. If an IDS alert is sent to the network administrator, his job is to take actions like blocking the suspicious traffic with a firewall. And what if it were fully automatic? That's the purpose of IF-MAP. Initially developed as a strong base for a NAC solution, IF-MAP is flexible enough to be used as an information resource by any networked device. Some commercial manufacturers have already tested it. We can imagine many interactions between network components, such as:
- Shut down a switch port if suspicious activity is detected by an IDS.
- Rate-limit Internet connectivity for an IP address detected on P2P networks.
- Change a user profile (from full Internet access to restricted access) if suspicious traffic is detected.
- Temporarily allow some applications through a firewall (e.g., patch downloads).
- … (just let your imagination work!)
IF-MAP relies on a client-server approach where clients can retrieve or publish metadata about assets. The MAP server keeps an instantaneous view of the network. Today, open source applications are moving to IF-MAP! There is an ongoing project at a German university called IRON, or "Intelligent Reaction on Network Events". Based on well-known open source applications (Nagios, Snort, ISC DHCP and netfilter), students are busy extending them to support IF-MAP. The goal of this project is to mitigate today's threats like malware infection, data leakage or internal attacks. The project started in September and sounds promising! Some source code is already available. Project homepage: IRON.
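IF-MAP itself is an XML-based protocol carried over SOAP/HTTPS, with publish, search and subscribe operations. The snippet below is only a toy, in-memory model of that publish/retrieve idea — an IDS publishing metadata about a host and a firewall client querying it to decide on an action. Every class and method name in it is invented for illustration and is not taken from the IF-MAP specification.

```python
# A toy, in-memory model of the IF-MAP publish/retrieve idea: clients publish
# metadata about network assets to a MAP server, and other clients query it to
# decide on an action. The real protocol is XML/SOAP over HTTPS with publish,
# search and subscribe operations; every name below is invented for
# illustration and is not the actual IF-MAP API.
from collections import defaultdict

class ToyMapServer:
    def __init__(self):
        # identifier (e.g. an IP address) -> list of metadata dictionaries
        self._metadata = defaultdict(list)

    def publish(self, identifier: str, metadata: dict) -> None:
        self._metadata[identifier].append(metadata)

    def search(self, identifier: str) -> list:
        return list(self._metadata[identifier])

# An IDS publishes an event about a suspicious host...
mapserver = ToyMapServer()
mapserver.publish("10.0.0.42", {"type": "event", "name": "p2p-traffic",
                                "source": "snort", "confidence": "high"})

# ...and a firewall client later queries the MAP server and reacts.
for entry in mapserver.search("10.0.0.42"):
    if entry["type"] == "event" and entry["confidence"] == "high":
        print("would rate-limit or block 10.0.0.42 based on:", entry["name"])
```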
<urn:uuid:86425c2b-88d1-44fb-8eb8-8786993dc42d>
CC-MAIN-2017-09
https://blog.rootshell.be/2009/12/09/protect-your-infrastructure-with-iron/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00314-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94393
654
2.9375
3
When a Canadian producer of high-quality technical outerwear could no longer afford the high cost of labor in Canada, it decided to head south. The company chose El Salvador, which offered a relatively large industry of approximately 150 textile and apparel companies and a large pool of experienced workers including operators, mechanics and supervisors — many looking for work after some companies had closed during the recession. San Salvador, El Salvador, also offered wage rates similar to those found in China ($0.90/hour) without the time-zone changes and long travel times. Additionally, the factory was to be located in a Free Zone offering no duty and good security. Three years in, and the company was growing and adding customers, but it did face one particular challenge that was a direct result of the pleasant climate. Temperatures of 75 degrees to 90 degrees will put a smile on the face of many a Canadian, but they were wreaking havoc with the company's lamination processes. Lamination, essentially a gluing process, was being used to apply pocket closures, patch pockets and bottom hems to the company's product. (Lamination of pockets and zippers costs more than sewing, but it provides lighter weight and water-repellent seams.)
NASA, we have a problem
During the laminating process, glue affixed to a paper backing (made by Bemis) was melted using a hot press. Upon completion of the process, a tandem press (cold press) was used to cool, or set, the glue to hold the component being attached in place. Failure to completely cool the laminated element would keep it from setting properly. To cool the water for the cold press, the industry traditionally — for low-volume manufacturing — has placed a frozen two-liter soft-drink bottle into a container of water, recycling the water as it becomes warm. The process worked well in Canada, but in the warm climate of El Salvador, the frozen bottles were thawing too quickly, leaving the water to warm up too fast. With up to 10 pairs of presses in use for lamination, significant time was required to change out the melted water bottles for frozen ones. Specifically, the company needed to replace two bottles per hour per pair of presses, for a total of 20 per hour. Automated cooling systems for high-volume production, with a price tag in the $150,000 range, were not an option. The company sought to design a more effective and economical cooling system for the cold press to improve the quality of lamination while reducing the labor and time required to replace the water bottles.
One small step for cold presses
That's where Jim Prim comes in. One evening over dinner with Prim, Barbee, a consultant to the industry working with the outerwear company, shared the problem they were trying to tackle. Prim, a NASA systems engineer who helped develop features on the first Lunar Lander, and Barbee, who was in El Salvador developing a two-year textile college (a project initiated by North Carolina State University (NCSU)), came up with an initial solution utilizing a household freezer adjusted to a temperature just above freezing, and a coolant loop. While the first prototype did not work (the loop did not permit an adequate flow of water), the concept proved to be the right approach, and solved the problem. 
One giant leap for efficiency
The cost of the solution came to less than $450 for four workstations and was comprised of a small household freezer, half filled with water cooled to a temperature just above freezing, with insulated tubes drawing the water from the freezer, passing it through the cold press and recycling it back to the freezer. One operator was able to manage two presses. The solution not only saved time and money by eliminating the significant amount of labor previously required to constantly replace frozen bottles, but it also improved the process by providing a steady temperature throughout.
Completed cooling system
Prim, who trained the first seven U.S. astronauts, including John Glenn and Neil Armstrong, notes that not all innovative solutions have to be "rocket science," nor do they have to cost a lot of money. In working to lighten the load of the Lunar Lander so that it could return to Earth, Prim also found a common-sense solution that didn't require a big budget. The Lunar Lander was comprised of two sections, one meant for ascending and the other for landing. The latter was to be left on the moon. The ascent module was too heavy to return from the moon to Earth. Prim came up with the idea to place everything not needed on the return flight in the bottom of the lander module and leave it on the moon. Stowing the equipment in the lander section lightened the load enough for the ascent module to take off and return to Earth. Prim's commonsense philosophy combined with Barbee's engineering techniques led to an innovative solution that was developed and implemented quickly and at low cost.
Gene Barbee is a professional engineer registered in Canada and an international consultant with projects around the world. He is an adjunct professor at NC State University, and was a visiting lecturer in the College of Textiles for four years. Jim Prim retired from the civil service after 18 years with NASA and seven years with the Dept. of Energy.
<urn:uuid:4c515c9d-9efa-44ad-b681-ad25ab917adf>
CC-MAIN-2017-09
http://apparel.edgl.com/magazine/February-2013/-In-Apparel-and-In-Spaceships,-Big-Solutions-Can-Come-in-Small-Sizes83331
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00490-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965893
1,076
2.859375
3
The island of Sriharikota, off the east coast of India, was the site of the launch of the country's first mission to Mars on Tuesday, Nov. 5. Approximately 45 minutes later, the 3,000-pound "Mangalyaan" orbiter separated from the rocket, embarking upon an elliptical path around Earth. From there, after about three weeks' time, it will begin its accelerated path toward the red planet. The orbiter needs to travel 485 million miles in a period of roughly 300 days in order to reach an orbit around Mars in September 2014. "The biggest challenge will be precisely navigating the spacecraft to Mars," said K. Radhakrishnan, chairman of the Indian Space Research Organisation. "We will know if we pass our examination on Sept. 24, 2014." Photo courtesy of El Economista.
<urn:uuid:d1bbd778-65da-4f1c-ba5d-d44735b1691d>
CC-MAIN-2017-09
http://www.govtech.com/Photo-of-the-Week-Indias-Mission-to-Mars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00490-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936147
176
3.265625
3
Social engineering – they know what you're thinking
Social engineering is the practice of manipulating people to obtain confidential information to commit fraud. A social engineer will commonly use the telephone or Internet to trick people into revealing sensitive information. They will collect basic pieces of information and then use these seemingly insignificant pieces of data to appear "credible." This allows them to gather increasingly substantial information.
Recognizing the characteristics
Be aware of the methods used for social engineering:
• Phone solicitation: caller fakes a survey to collect information.
• Phishing: e-mails seeking information or validation of e-mail address.
• Spam: e-mails that contain malicious software, such as worms or viruses.
• Dumpster diving: thieves looking for sensitive information in the trash.
Recognizing the signs
Here are some signs that you may have a social engineer on the phone. The caller:
• Refuses to give their contact information.
• Rushes you for quicker responses.
• Intimidates you.
• Speaks in a muffled or difficult-to-understand voice.
What do I do?
If you suspect that you have a social engineer on the phone, hang up without offering personal information. Call the company's corporate headquarters and ask to speak to a supervisor to try and verify that the person who called you is, in fact, a representative of the company. For additional tips on Internet security, visit: http://news.centurylink.com/resources/tips/centurylink-consumer-security-tips-online-security
<urn:uuid:fa6c53f5-53af-457b-908d-c4ea0c61395f>
CC-MAIN-2017-09
http://news.centurylink.com/blogs/security/social-engineering-they-know-what-youre-thinking
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00190-ip-10-171-10-108.ec2.internal.warc.gz
en
0.88823
330
2.8125
3
Undoubtedly, corporations are realising the benefits of IP voice systems. Voice over internet protocol (VoIP) can bring substantial cost savings and productivity enhancements to a business by transforming its circuit-switched networks to IP packet switching networks and running voice and data applications over a single infrastructure. However, businesses need to be aware that there are potential risks involved, and they need to take the necessary steps to protect their interests. When voice and data are merged onto a single network, voice becomes an application on the network and is, therefore, exposed to the same threats as data applications. These threats include infrastructure and application-based attacks, denial-of-service (DoS) attacks, eavesdropping, toll fraud and protocol-specific attacks. However, with the right procedures in place, VoIP security risks and threats can be managed and mitigated--maximising the benefits of VoIP while minimising exposure.
Infrastructure and application-based attacks
In VoIP, voice is essentially an application on the data network, fine-tuned to maintain voice-quality performance. VoIP equipment and end-point devices such as IP phones are becoming standardised and commoditised just like other data components such as PCs--meaning that VoIP is just as vulnerable to cyber-attacks. Hackers can exploit voice devices to disrupt normal network service and/or perform criminal actions such as data theft. IT managers need to maintain current patch levels on all IT and network equipment and applications, and have appropriate anti-virus software installed and up-to-date. Virtual local area networks (VLANs) can also be implemented and used to protect voice traffic from data network attacks. By implementing application gateways between trusted and untrusted zones of the network, a VLAN will complement the protection offered by corporate firewalls.
Denial-of-service (DoS) attacks
A DoS attack occurs when someone deliberately floods a particular network with so much illegitimate traffic that it blocks legitimate traffic. Obviously, if your voice traffic is being transmitted over the same network, a DoS attack will have significant impact on business operations. DoS attacks are difficult to stop and prevent, but proper intrusion prevention practices, special network devices and proper patch updates can minimise the risk of exposure. In order to prevent data network problems from affecting voice traffic, voice and data traffic should logically be separated from administrative traffic. Traffic shaping can also provide another layer of protection and control for the network.
Eavesdropping
Intercepting data traffic is a trivial endeavour for most hackers, so it stands to reason that with voice and data convergence, the same can be said for voice traffic over the network. Many tools are freely available to collect packets associated with VoIP conversations and reassemble them for illicit purposes. Two measures that can be taken to prevent eavesdropping include isolating VoIP traffic using virtual private networks (VPNs) and applying encryption on voice packets. However, IT managers need to carefully evaluate the use of encryption of VoIP as it can increase latency in the network. Encryption of voice data could be selectively applied based on business requirements; for example, encryption and decryption can be used only for those conversations over untrusted networks. 
When choosing a managed service provider, companies should ensure that appropriate security protocols are actively used by the potential provider to ensure secure conversations within the network.
Toll fraud
Just as with traditional voice systems, toll fraud cannot be ignored when considering VoIP systems. Using toll fraud, attackers gain unauthorised access to a private branch exchange (PBX) call-control system to make long-distance or international calls, which can mean significant financial impact to the business. Poor implementation of authentication processes could allow calls from unauthorised IP phones and/or allow unauthorised use of the VoIP network. Companies need to impose proper control for access to VoIP systems, including gateways and switches, in order to avoid the occurrence of toll fraud. Centralisation of management and configuration control is also recommended.
Protocol-specific attacks
Since VoIP was developed on an open standard, the protocols that support communications are well known and thus vulnerable to probing for their weaknesses and security flaws. Session initiation protocol (SIP) is gaining popularity -- SIP is a session and call-control protocol, components of which are used by standards-based IP PBX and IP telephony systems. In addition to the standard IP vulnerabilities, SIP brings additional risks. SIP is a text-based protocol, like the common HTTP and SMTP. Therefore attackers can easily monitor and analyse traffic and then transition into various application-level attacks. Attacks can include impersonation of registration for system access, unauthorised access to corporate directory information, taking control of calls to disrupt business and also placing unsolicited calls and voice messages. Obviously, in a malicious attack, this could be highly detrimental to a business. IT managers need to be aware of these vulnerabilities and thus implement strong authentication and authorisation processes.
IP voice security
While convergence and VoIP implementations are fast becoming mainstream among multinational corporations, they are, at the same time, posing serious security challenges. Whether you are planning to build your own converged network or utilise the services of a managed service provider, the primary goal should be the implementation of VoIP security that is properly built and validated, with ongoing management support. Security has to be managed through proactive monitoring, event management, remediation and regular follow-up to ensure a stable and reliable corporate communications infrastructure. However, with the right security in place, VoIP can be a valuable asset to a company.
This story, "IP voice security: are you susceptible or strong?" was originally published by CSO Online (Australia).
<urn:uuid:010de8be-8015-4e5f-9bb3-3f39bc0ab45c>
CC-MAIN-2017-09
http://www.itworld.com/article/2726525/security/ip-voice-security--are-you-susceptible-or-strong-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00190-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935945
1,151
2.640625
3
Four months after an explosion tore through its signature Falcon 9 rocket during fueling, destroying the rocket and its multimillion-dollar cargo in just 93 milliseconds, SpaceX says it has isolated what went wrong and is ready to fly again. If the Federal Aviation Administration issues the company a license, likely following the completion of a full-scale engine test scheduled for Jan. 3, the company will launch 10 communications satellites for Iridium on Jan. 8, including an attempt to land the reusable first stage of the rocket on a sea-going robotic platform. In a statement, Elon Musk's space company said the problem had to do with special tanks inside the rocket, known as composite over-wrapped pressure vessels, or COPVs. Made of carbon fiber and lined with aluminum, the tanks are designed to hold cold helium under incredibly high pressure. They're fastened inside larger containers of super-cooled liquid oxygen consumed to propel the rocket. In a series of tests, SpaceX found "buckles"—spaces between the aluminum liner and the carbon-fiber wrapping of the COPVs—where the liquid oxygen could pool; at the super-low temperatures SpaceX is working in, the oxygen could even become a solid. As the pressure in the tank increases, oxygen trapped in those buckles could be ignited if the carbon fibers crack or rub together to generate friction, an even higher risk when the oxygen becomes a solid. For upcoming launches, the company's engineers will re-configure the helium COPVs to keep them warmer, and also load propellant according to a method the company has previously used without incident more than 700 times. But over the long term, SpaceX acknowledged it will need to re-design its tanks to keep these "buckles" from occurring at all. This could pose a problem for the company's goal of making its rockets largely reusable to drive down the costs of space access. In 2016, SpaceX began a new pre-flight fueling process that allowed it to use even colder liquid oxygen in its rockets; because the cold liquid oxygen is so dense, more can be stored in a tank of the same volume, allowing the rocket to fly further, including returning to Earth after a mission. The innovative fueling process was considered important to creating full reusability of the rocket, with Musk telling MIT students in 2014 that "when the propellants are cooled close to their freezing temperature to increase the density, we could definitely do full reusability." COPV technology has long been seen as useful for rocket construction, but engineers at NASA and other companies have encountered problems when the organic material in the carbon interacts and even combusts with the liquid oxygen frequently used as a propellant. SpaceX appeared to have solved these problems and takes great pride in its carbon-wrapping technology; Musk's company is currently testing an enormous COPV intended for use in the company's mooted inter-planetary vehicle.
Successfully tested the prototype Mars tank last week. Hit both of our pressure targets – next up will be full cryo testing. pic.twitter.com/GGTlgUQCRY — SpaceX (@SpaceX) November 16, 2016
A successful launch in the days ahead would be a boon for SpaceX, which has a crowded manifest of commercial launches to attend to after missing the last quarter of 2016. The company also promises to debut its new Falcon Heavy rocket this year and is planning for a fall test flight of its Dragon 2 spacecraft, which the company hopes to be the first private vessel to carry astronauts in 2018. 
SpaceX watchers had been expecting the company to return to flight soon; it had originally anticipated a mid-December launch date. Over the weekend, there was another preview when Iridium reported its satellites had been loaded into the shell that will protect them when the rocket takes off.
<urn:uuid:81000549-c79d-4a04-b739-5e52944c5fcd>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2017/01/spacex-says-it-figured-out-why-its-rocket-exploded-and-will-fly-again-within-days/134283/?oref=ng-trending
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00610-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954616
798
2.96875
3
By: Joel Carleton, CSID Director of Cyber Engineering
We've all heard that it's important to pick long, complicated passwords. What you may not realize is why this becomes crucial in the context of a breach. While ensuring you don't pick from some of the most common passwords is important, it's still not enough. Some background information on how passwords work: while we still see websites storing passwords unencrypted (in this case, if you are part of a breach, the complexity of your password makes no difference), it is most common for websites to protect your password with a one-way hash. Put simply, this is a method that takes your password and transforms it into a long string of characters that is then stored in the website's database. The website does not know your original password. When you log in to the website it applies the transformation and compares the long string to what it has stored in the database. If they match, then it knows you have entered the correct password. When a company is breached, a common result is the selling and/or sharing of that company's user accounts. They could be publicly disclosed, shared in criminal forums and chat rooms, or sold to the highest bidder. The breached company may have taken steps to secure your account credentials, but the strength of your password can be your best friend or worst enemy. When a breach happens on a website where the passwords have been hashed, the criminal steals a list of user ids/emails and associated hashed passwords. They do not yet have your original password. The criminal has to crack the hash to recover the original password. While there are many sophisticated techniques at the criminals' disposal, one of the most popular is referred to as the "brute force" method. Every possible password is tried. Given the short and simple passwords that are routinely used, the criminal can quickly crack the majority of the hashed passwords. To find out just how simple it is to reverse a weak password hash, try to Google the hash of a common password, "d8578edf8458ce06fbc5bb76a58c5ca4". It's pretty easy to see what the original password is even without using brute force guessing software. Let's assume you've chosen something more complicated. For passwords with 6 characters, how many brute force guesses are necessary? Assuming your password at least has mixed upper and lower case letters, there are 19 billion possible passwords. There are two things that make cracking this type of password trivial for the criminal:
- They do not have to attempt to log in to the website for each of their guesses. It would be impossible to make the necessary number of attempts to log in. They are able to make as many guesses as they want without anyone knowing what they are doing because they have the hashed password.
- Computers are very good at making very fast guesses. An average computer with an upgraded graphics card can make 500 million guesses a second. A 6-character password like this can be guessed in 38 seconds or less. Adding numbers and the full set of non-alphanumeric characters, the password can now be guessed in 26 minutes or less.
Parting advice: the easiest way to make your passwords better is to make them longer (at least 9 characters). If you still use only alphanumeric characters but your password is 10 characters, a criminal would need over 18,000 days to crack it. Hopefully he won't have this much time on his hands and will move on to an easier target!
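To make the arithmetic above concrete, here is a minimal sketch in Python. The MD5 digest is computed only because it matches the example hash quoted in the post, and the 500-million-guesses-per-second rate is the article's own assumption; a real site should be using a slow, salted scheme such as bcrypt or Argon2 rather than a fast unsalted hash.

```python
# A quick sketch of the brute-force arithmetic above, assuming the article's
# figure of 500 million guesses per second. MD5 is used only because it
# matches the example hash quoted in the post; production systems should use
# a slow, salted scheme (bcrypt/scrypt/argon2) instead.
import hashlib

print(hashlib.md5(b"qwerty").hexdigest())   # the common password behind the example hash

GUESSES_PER_SECOND = 500_000_000

def time_to_exhaust(alphabet_size: int, length: int) -> float:
    """Seconds to try every password of the given length (worst case)."""
    return alphabet_size ** length / GUESSES_PER_SECOND

print(time_to_exhaust(52, 6))              # upper+lower case, 6 chars: ~40 seconds
print(time_to_exhaust(95, 6) / 60)         # all printable ASCII, 6 chars: ~25 minutes
print(time_to_exhaust(62, 10) / 86_400)    # alphanumeric, 10 chars: ~19,000 days
```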
<urn:uuid:347505c4-1a83-4474-8423-7632fe605f73>
CC-MAIN-2017-09
https://www.csid.com/2012/05/password-complexity-why-it-makes-a-difference-in-a-breach/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00610-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948699
744
3.1875
3
At first, GPUs could be used for a very narrow range of tasks (try to guess what), but they looked very attractive, and software developers decided to tap their power by offloading part of the computation to graphics accelerators. Since a GPU cannot be used in the same way as a CPU, this required new tools, which did not take long to appear. This is how CUDA, OpenCL and DirectCompute originated. The new wave was named ‘GPGPU’ (general-purpose computing on graphics processing units) to designate the technique of using GPUs for general-purpose computing. As a result, people began to use a number of completely different microprocessors to solve some very common tasks. This gave rise to the term “heterogeneous parallelism”, which is actually the topic of today’s discussion.
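As a concrete taste of heterogeneous dispatch, the sketch below runs the same array computation on a GPU when one is available and falls back to the CPU otherwise. The choice of CuPy as the GPU backend (with NumPy as the fallback) is an assumption made purely for illustration; the paragraph above names CUDA, OpenCL and DirectCompute as the lower-level interfaces.

```python
# A minimal sketch of heterogeneous dispatch: run the same array computation
# on a GPU via CuPy when one is available, otherwise fall back to NumPy on the
# CPU. CuPy mirrors the NumPy API, so the kernel body stays identical; the
# library choice is an assumption for this sketch, not prescribed by the text.
import numpy as np

try:
    import cupy as cp          # GPU backend (requires a CUDA-capable device)
    xp = cp
except ImportError:            # no GPU stack installed: stay on the CPU
    xp = np

def saxpy(a, x, y):
    """a*x + y, expressed once for either backend."""
    return a * x + y

x = xp.arange(1_000_000, dtype=xp.float32)
y = xp.ones_like(x)
result = saxpy(2.0, x, y)

# Move the result back to host memory if it lives on the GPU.
host_result = cp.asnumpy(result) if xp is not np else result
print(host_result[:5])
```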
<urn:uuid:11fbfa67-cc73-4aa3-8a20-8050e48eb297>
CC-MAIN-2017-09
https://hackmag.com/author/yurembo/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00186-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967015
167
3.6875
4
Supercomputer architectures have evolved considerably over the last 20 years, particularly in the number of processors that are linked together. One aspect of HPC architecture that hasn't changed is the MPI programming model. To get around the bottleneck that MPI poses to exascale computing, developers are banking on the new GPI programming model to unlock the potential of future parallel architectures. GPI, which stands for Global Address Space Programming Interface, takes an entirely different approach from MPI to enabling communication among processors in a supercomputer. The model implements an asynchronous communication paradigm that's based on remote completion, according to a story in Phys.org. Each processor in a parallel HPC system can directly access all data, regardless of where it resides and without affecting other parallel processes. This gives GPI the potential to scale beyond what's possible with MPI, and to fully exploit today's highly parallel clusters of multicore systems, using traditional HPC programming languages, such as C and Fortran. The effort to create GPI was spearheaded by Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM. Lojewski was working on an HPC problem involving seismic data, and the existing methods weren't working. "The problems were a lack of scalability, the restriction to bulk-synchronous, two-sided communication, and the lack of fault tolerance," Lojewski tells Phys.org. "So out of my own curiosity I began to develop a new programming model." The GPI model, which was first unveiled at the ISC 2010 conference in Hamburg, continues to be developed by dozens of developers around the world, including Rui Machado from Fraunhofer ITWM and Dr. Christian Simmendinger from T-Systems Solutions. Together with Lojewski, Machado and Simmendinger were awarded the Joseph von Fraunhofer prize this year for their work. GPI is also finding its way into production as development continues. According to Simmendinger, the European aerospace industry worked with the German Aerospace Center (DLR) to port an aerospace HPC program called TAU to use GPI. The results have been impressive. "GPI allowed us to significantly increase parallel efficiency," Simmendinger tells Phys.org. GPI is not a drop-in replacement for MPI, and requires developers to port applications to use the new low-level API. Squeezing the most benefit from GPI also requires applications to be multi-threaded, which may bring additional work. But based on early reports, GPI has a promising future as the communications layer for tomorrow's exascale supercomputers.
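MPI's classic pattern is two-sided: a send must be matched by a receive, and both sides synchronize. The GPI model described above is one-sided — a process writes directly into another process's memory and signals completion with a notification, without the target participating in the transfer. The snippet below mimics that write-plus-notify idea on a single machine using Python shared memory; it is purely a conceptual illustration, and none of the names correspond to the actual GPI (or GASPI) API, which is a C/Fortran interface running over RDMA-capable networks.

```python
# Conceptual illustration only: mimics GPI-style one-sided "write + notify"
# semantics with Python shared memory on one machine. Real GPI/GASPI uses RDMA
# over the interconnect; all names here are illustrative, not the GPI API.
from multiprocessing import Process, Event, shared_memory
import numpy as np

SEGMENT = "gpi_demo_segment"   # hypothetical shared segment name

def writer(ready: Event):
    shm = shared_memory.SharedMemory(name=SEGMENT)
    target = np.ndarray((4,), dtype=np.float64, buffer=shm.buf)
    target[:] = [1.0, 2.0, 3.0, 4.0]   # one-sided "write" into the segment
    ready.set()                         # notification: remote completion
    shm.close()

def reader(ready: Event):
    shm = shared_memory.SharedMemory(name=SEGMENT)
    data = np.ndarray((4,), dtype=np.float64, buffer=shm.buf)
    ready.wait()                        # proceed only after the notification
    print("received:", data.copy())
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=SEGMENT, create=True, size=4 * 8)
    ready = Event()
    procs = [Process(target=writer, args=(ready,)),
             Process(target=reader, args=(ready,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    shm.unlink()
```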
<urn:uuid:1b98b38e-0868-4b1c-846d-1cff8232c892>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/06/19/developers_tout_gpi_model_for_exascale_computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00538-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94184
568
3.078125
3
The words ‘facilitate’, ‘facilitation’ and ‘facilitator’ are in common use in a wide variety of contexts. Unfortunately, there can be some confusion because these words’ meanings seem, at best, fluid and, at worst, malleable. Facilitation is a basic life skill which is much used in the business world. There, it can be used – profitably – in:
- identifying issues
- resolving issues
- encouraging productive interaction
- developing accurate objectives
- defining the scope of change projects
- strategic planning
- encouraging and empowering contributions within a safe, non-threatening environment
- engaging stakeholders
Facilitation can support organizations as they face difficulties. It can enable people to work in a collaborative and participative way to tackle key issues and also to make fundamental decisions. Effective facilitation can make the difference between a poor decision and a brilliant one. It can make the difference between a solution that has all kinds of hidden problems and one that is robust and can be made to work. Obviously, facilitators have a key part to play in all this. They provide a way to deliver answers to complex issues without necessarily being a subject matter expert. They need to use the most appropriate model, tool and/or technique to get the most helpful answer, allowing groups to make decisions and reach a lasting, robust agreement which has commitment and buy-in. This gives organizations an effective participative change management toolkit. Moreover, they can use facilitation as a core management process and, possibly, as a core role within the organization. This strategy requires organizations to build a facilitation-friendly culture to discover and then promote best practice internally – and one of the key ways to manage and embrace this change in approach to facilitation is to invest in structured and accredited facilitation. We will soon be launching a qualification called Facilitation to help managers get the most out of their teams. For further information on Facilitation, visit our website.
<urn:uuid:73684636-5a0e-43b9-87e0-904f0fd4df8d>
CC-MAIN-2017-09
http://blog.apmg-international.com/index.php/2013/01/24/facilitation-in-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00130-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944528
412
2.765625
3
NASA's NuSTAR Takes Collaboration Into the Stars
By Samuel Greengard | Posted 2016-05-04
A cloud-based collaboration platform enables a global team of astrophysicists and other scientists to boldly go where knowledge sharing has never gone before. Few fields generate as much research matter as astronomy and astrophysics. Incredibly complex mathematical equations, vast quantities of data and mountains of analysis lead to detailed scientific papers. However, the ability to share and collaborate on these projects is essential. "There are enormous challenges related to keeping research and papers in sync," says Brian Grefenstette, a research scientist in the Space Radiation Lab at the California Institute of Technology (Caltech) and a member of NASA's NuSTAR project. The initiative, which processes data collected by the Nuclear Spectroscopic Telescope Array (NuSTAR) X-ray telescope, connects a group of about 150 scientists in 10 working groups scattered across the globe—including the United States, the United Kingdom, Germany, Japan and India. In the past, the group relied heavily on email to exchange crucial documents, including PDFs and PowerPoint decks. "As all the information goes back and forth, people insert comments, and everything eventually winds up coalesced into a paper that is submitted to a journal like Nature or The Astrophysical Journal," Grefenstette says. This approach created some cosmic headaches, including information that has sometimes disappeared into a black hole. Since its inception, the NuSTAR group has exchanged upward of 25,000 emails, and it has thousands of files in its archive. In the past, emails sometimes bounced because servers rejected attachments that were too large. Simply put, the universe of data was becoming completely unmanageable. "Manually exchanging files and information simply was not feasible," Grefenstette explains. "It required far too much time sorting everything out and moving papers along."
Cloud Collaboration Offers a Quantum Leap in Efficiency
The NuSTAR group began exploring options that could lead to a quantum leap in efficiency. It selected cloud collaboration solutions provider Huddle. The technology offered features such as a central portal, a simple interface, strong project management features, versioning and syncing, a whiteboard and strong security. "We are now able to organize, view and process information far more effectively," Grefenstette says. In fact, the team is now able to use the portal to view every new observation as it is recorded, changed and commented on at every step of the process. The collaboration software also helps the team manage calendars, discussion threads, file management and notifications. "It presents a very robust environment with Web 2.0 features," he says. Teams can access data across time zones, devices and platforms. The result has been stellar, as the scientists are able to work faster and more effectively. "We have witnessed huge improvements in efficiency and achieved gains in output," Grefenstette reports. In fact, the NuSTAR group has produced more than 100 published scientific papers—an incredibly large number for the astrophysics field and for academia in general. Finally, the cloud-based software has greatly simplified IT and administration requirements. "We see incremental updates to the interface and constant improvements in features," he notes, adding that there was little resistance to the change. 
"People immediately recognized that this was a giant step forward," Grefenstette says. "The platform makes it much easier to get to a finished scientific paper."
<urn:uuid:df85076b-8b4b-4fde-8cba-51c7a52c2ec3>
CC-MAIN-2017-09
http://www.baselinemag.com/messaging-and-collaboration/nasas-nustar-takes-collaboration-into-the-stars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00306-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955169
720
2.515625
3
Diminishing resources and a growing world population underscore the absolute necessity of protecting our environment for the benefit of future generations. This will require us to rethink our existing patterns of resource use to globally establish a shared frame of reference and take concerted action. We have no other choice than to act collectively to achieve sustainability. Major forces at work would appear to support a general movement toward a sustainable society. A sustainable society is one that can fulfill its needs without experiencing catastrophic setbacks in the foreseeable future. The principles of a sustainable society are interrelated and mutually supporting, giving equal consideration to social development, economic development and preservation of the environment. The very concept of sustainability itself is defined as the confluence of these three constituent parts. eGovernment is at the forefront of this global movement. It is driven by a real awareness of the need to redirect a significant portion of the benefits of growth toward better human development. Digital modernization of processes used by public bodies and private enterprise will play an active role in striking a new balance between current and future needs.
Lesson 1: The myth of return on investment
Return on investment can't be reasonably expected from a movement that seeks to transform society over several generations. Although eGovernment programs have a major role to play in the emergence of a sustainable society, significant short-term budgetary gains are unrealistic. In fact, the lack of quantifiable gains from sustainable development often leads to disappointing results or even the failure of publicly funded programs. This is because performance targets are either not feasible or set without a relevant analytical frame of reference. eGovernment initiatives modernize a state by endowing it with the technical infrastructure necessary to enable future government processes. France has already made a successful foray into eGovernment with online income tax filing, paperless medical expense claims and online VAT filing and payment. While measurable cost-benefit analysis has yet to be carried out, tremendous progress has been made through increased administrative productivity, lowered processing costs and an overall reduction in the country's carbon footprint.
Lesson 2: The myth of progress from green IT
Green IT can act as a powerful catalyst to close the social divide and promote human development. Even so, transformation is a process brought about by human endeavor, not technological achievement. Technology can actually widen the gap between the most affluent and the most disadvantaged due to unequal access to technological means, social differences and inequalities, and uneven penetration of usage. That is why bringing the Internet into villages, for example, is not enough to transform them. The Drishtee project is a social enterprise focused on information and communication technologies. It provides a kiosk-based platform to deliver IT training and micro-financing and enable eCommerce in over 4,000 villages in rural India. Facilitating access to technology was not enough to achieve the social and economic transformations initially hoped for, however. Although it gave many agricultural communities access to expertise, the Drishtee project did not prove sustainable because deployment was not well rooted locally. 
Lesson 3: Green IT is really about people and their roots
Green IT is particularly effective when the right approach is used to introduce it into people's everyday lives. Change is easier to implement when modernization does not alter frames of reference and meaning. As one example, the eco-development of sugar cane in the Philippines demonstrates that green IT can be a powerful way to leverage sustainable development policy. As the 11th largest producer of sugar cane worldwide, the Philippines enacted a law on biofuels in 2006, opening the way for the production of ethanol for fuel. Commercial production of sugar cane as the main source of raw material is aimed at helping the country to diversify its energy use and ensure energy security. Innovative new technologies were required to make Philippine sugar and biofuels more competitive, particularly on the world market. Convinced of its effectiveness, farmers eagerly adopted drip irrigation, which raised production yield considerably while ensuring sustainable use of water resources.
Lesson 4: Healthcare, a hotbed for green IT development
Telemedicine is one promising area where green IT will have a fundamental impact on personal and social well-being. A top priority for any emerging sustainable society is the ability to provide a better standard of healthcare to an ever-growing number of patients. Using telecommunication and information technologies, telemedicine makes it possible to provide remote assistance to medically dependent persons and perform detailed diagnosis in more isolated regions that specialists cannot visit. Telemedicine enables people to overcome the geographical and socio-economic barriers that isolate medically underequipped rural regions by providing them with access to healthcare services through multimedia technologies. Advanced satellite technologies also make it possible to project the spread of major diseases such as malaria, dengue fever and cholera, as well as other diseases responsible for millions of deaths worldwide. These epidemics can now be tracked by satellite sensors and monitored through tele-epidemiology.
Lesson 5: Green IT, the key to sustainable traceability
The society of the 21st century will be one that is fundamentally mobile and traceable. For world health authorities, broad traceability can help determine the causes of food contamination, reducing the potential danger to the public at large. For private enterprise, traceability can help companies comply with standards and promote development to increase competitiveness and improve quality management. Traceability can also contribute to an efficient digital world for public authorities and citizens. Electronic identity, signature, time stamping and archiving are essential to ensuring legal protection and redress. In this regard, eID forms the key link in the chain of trust. When requested, the government of Portugal is required to disclose any electronic records maintained on individual citizens. Belgian citizens can also contest whether their government has the right or legal obligation to maintain certain records. In all instances, allowing traceability to serve citizens' interests fosters civic behavior and self-regulation.
Lesson 6: The right to be different in a sustainable society
Based on the harmonious co-existence of diversity, the sustainable society is a social model that places people, human development and personal well-being at the center of how society is to be structured. 
Naturally, such a society believes that inspiring people to fulfill their potential is essential. This in turn means encouraging individuals to stand out from the crowd. With the advent of new technology, eID takes on special significance. By managing the relationship between individual identity and all secondary identities without risk, eID contributes to the emergence of new, multi-layered identities. Rooted in the citizen's social, hereditary and professional titles, eID enables management of secondary identities for each circle of trust to which the citizen belongs. Over and above national and regional borders, what really defines individuals are the circles of trust they belong to, in which they can freely express a multi-layered identity in all its richness and complexity.
Lesson 7: There can be no green IT without green spirit
Behind efforts to fight climate change rages a debate on how best to create a more sustainable, balanced society and close the very divides that current development trends threaten to widen. Social innovation and new business-minded ideas are key to boosting productivity, both in the public sector and in philanthropic endeavors. Green technology already enables thousands of local projects to work more efficiently and on a grander scale. New behaviors and methods—particularly those involving emerging technologies or tools—can only take root in society when used by locally based intermediaries, however. These individuals are well placed to ensure that transformations catch on and achieve results. This green spirit is embodied by the United Kingdom's Big Society, which fosters social action, community engagement and public sector reform through a framework of policies and strategies. The initiative's expansion of cooperative social activity aims to benefit individuals, communities and society.
Lesson 8: Green IT, a powerful tool for sustainable governance
To effect lasting change, sustainable governance must transform uncertainty into opportunity, creating an exponential capacity for innovation and new initiatives. National governments realize that a sustainable society model for the post-industrial era still needs to be created. A new governmental organization is required at all levels to reconcile central mandates with local objectives. Green IT's ease in facilitating point-to-point exchanges will prove instrumental to achieving cooperation at a local level rather than strictly conforming to policy set forth by central authorities. Support from public authorities provides the critical resources needed to create communities of interest and local systems of government enabled by green IT. Like the UK's Big Society, the Obama administration's Office of Social Innovation and Civic Participation uses public policy as a catalyst of local civic engagement. The recently created agency promotes the use of new communications technology to identify and fund innovative community solutions with demonstrated results.
Lesson 9: Building a sustainable society means doing one's part
The sustainable society is not merely a concept. It is rooted in ideal social values that determine the highest priorities for the enablement of human development. Both the 2010 Deutsche Post DHL study on green business trends and the third annual National Geographic/GlobeScan Consumer Greendex demonstrate that the general public is more than ready to take voluntary action to enter the era of sustainable development. 
Consumers would appear to have regained control of the social value system by taking action to do their part to promote sustainability. Two-thirds of consumers surveyed say they would make purchasing choices based on the sustainable development policy of a company or brand. Respondents also said they expect greener alternatives to be available at the same price as conventional services in the near future. For their part, 56% of businesses surveyed believe consumers prefer greener solutions to cheaper ones.
Lesson 10: From window dressing to genuine green spirit
Public authorities and businesses are coming to realize that unsubstantiated claims are just window dressing for an increasingly sophisticated public and clientele. Today's citizen has high expectations of concrete action to achieve sustainability goals. To effect the cultural change necessary to achieve a sustainable society, organizations must move beyond empty claims to produce products and services that can herald the society of the future. For public authorities, eID programs and eGovernment services are vital because they go to the very heart of the bond between citizens and public services. These increasingly widespread services enable citizens to exercise their rights and become more involved in administrative procedures. eID is destined to become the standard-bearer and symbol for bridge-building between citizens and public services. Indeed, it represents renewed attention to local needs, the hope for a brighter future, and the emergence of a new society over the long term.
<urn:uuid:e857be89-4543-4e76-b23a-9160db9ccee4>
CC-MAIN-2017-09
http://www.gemalto.com/govt/inspired/green-it-egov-slide
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00006-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926684
2,154
3.25
3
Data Masking and Encryption Are Different
A common misconception within the data community is that encryption is considered a form of data masking. Even worse, some erroneously identify the two as one and the same. Data masking and data encryption are two technically distinct processes. Encryption at the field level is considered a data masking function. There are many similarities between data masking and data encryption, although the differences are substantial. Each of them is designed to ensure data protection, which can be substantially improved when both are used in synergy. Fundamental difference: for encryption, reversibility is required, and for masking, reversibility is a weakness. If a masking algorithm is reversible, then it is potentially weak, as the output still contains the original information but in a different form. The best methods for data masking are those that are not based on the original data at all. Data masking hides data elements that users of certain roles should not see and replaces them with similar-looking fake data — typically values that still meet the format requirements of the consuming system, so a test system continues to work with the masked results. Masking ensures vital parts of personally identifiable information (PII), like the first 5 digits of a social security number, are obscured or otherwise de-identified. There is a niche for dynamic data masking, which can transform the data on the fly based on the user role (privileges). It is used to secure real-time transactional systems and speeds up data privacy and compliance implementation and maintenance. Data masking does not encrypt information. We can see all data records in their native form and no decryption key is necessary. But we will see only what we are allowed to see today and not a byte more. And tomorrow we may see even less, if the rules change overnight. The best ciphers can be cracked (maybe in a million years using today's technology), while masked data cannot be unmasked. The resulting data set does not contain any references to the original information. That makes it absolutely useless for the attackers. Data encryption involves converting and transforming data into scrambled, often unreadable, ciphertext using mathematical calculations and algorithms. Restoring the message requires a corresponding decryption algorithm and the original encryption key. Data encryption is the process of transforming information by using some algorithm (a cipher) to make it unreadable to anyone except those possessing a key. It is widely used to protect files on local, network or cloud disk drives, to secure network communications, and to protect web/email traffic.
When would we choose to use data masking versus data encryption?
Data masking is often used by those who need to test with sensitive data or perform research and development on sensitive projects. Companies commonly request production data for testing. Because this sensitive data is passed through many hands, it is at great risk of theft or misuse. Through the process of redacting (stripping, covering over, or removing) the important elements of the data set, items such as names, addresses and patient information are protected. This process, however, is often irreversible. Common terms such as anonymization and de-identification also refer to such processes that irreversibly sever the identifying information in the data set. 
They prevent future identification of the original data even by the people conducting the research or testing. For example, one cannot discern or re-identify a social security number that presents with its first 5 digits covered by X's. Data encryption is often used to protect data that is transferred between computers or networks so that it can be later restored. Data like this, whether in transit or at rest, can be extremely vulnerable to a breach. Conversion of data into non-readable gibberish (or even format-preserving ciphertext, which is hard to crack) creates highly secure results. The only way to gain access to the data is to unlock it with a key or password which only those authorized can access.
Security perspective: From a data security point of view, the best masking solution is random generation, as it is completely independent of the underlying data. Encryption does not constitute good masking. We do not need reversibility. We can abandon the concept of one input, one output (a 1-to-1 map) and the concept of determinism. Abandoning these two core principles of encryption allows for more secure data masking solutions. To summarize our examination of encryption and masking: if we want to protect our production data from unauthorized access, but the actual content of the data still matters in its current context, then use encryption. However, if we need to use our production data in a test environment, where the actual content of the data is meaningless, then we use masking. Not only is masking more secure than encryption for this purpose, we may also find it to be a much more efficient process. It may be easy to think of data masking and data encryption as the same things, since they are both data-centric means of protecting sensitive data. However, it is their inherent procedures and purposes that differentiate them. Both technologies are relatively easy to implement, as long as we know what to do and how to do it right. An investment in a secure environment will preserve company reputation and customer loyalty for years to come.
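A minimal sketch of the distinction, in Python: masking irreversibly replaces the sensitive part of a value, while encryption scrambles it in a way the key holder can reverse. The Fernet recipe from the `cryptography` package is simply one convenient, well-known choice for the encryption half of the example; nothing in the article prescribes it.

```python
# A minimal sketch of the distinction discussed above: masking irreversibly
# replaces the sensitive part of a value, while encryption scrambles it in a
# way that a key holder can reverse. Fernet (from the `cryptography` package)
# is used here only as a convenient, well-known example of encryption.
from cryptography.fernet import Fernet

def mask_ssn(ssn: str) -> str:
    """Keep only the last four digits; the rest cannot be recovered."""
    return "XXX-XX-" + ssn[-4:]

ssn = "123-45-6789"
print(mask_ssn(ssn))                 # XXX-XX-6789, safe for a test data set

key = Fernet.generate_key()          # must be stored and protected
f = Fernet(key)
token = f.encrypt(ssn.encode())      # unreadable without the key...
print(f.decrypt(token).decode())     # ...but fully reversible with it
```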
<urn:uuid:aa96d8b2-d969-45ca-926c-72aeefa71305>
CC-MAIN-2017-09
https://blogs.informatica.com/2015/10/05/data-masking-and-encryption-are-different/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00534-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936058
1,075
3.109375
3
In nearly every research discipline, the number of scientific instruments available to add to the stream of data input has been climbing. While this has spurred any number of software developments in recent years, without adequate hardware processing capabilities to handle the deluge there is no way to exploit the possibilities that lie in the incoming data. Accordingly, a number of research institutions are finding new ways to handle the data deluge, both by reinventing grid-based paradigms and by looking to cloud computing models to extend already stretched computational resources.

Astronomy is one of several fields suffering from the glut of data brought about by more streamlined, complex, and numerous instruments, and, not surprisingly, researchers are looking to grid and cloud models to handle it. Researchers Nicholas Ball and David Schade discussed the concept of astroinformatics in detail, stating that “in the past two decades, astronomy has gone from being starved for data to being flooded by it. This onslaught has now reached the stage where the exploitation of these data has become a named discipline in its own right… This naming follows in analogy from the already established fields of bio- and geoinformatics, which contain their own journals and funding.”

Canada’s astronomy community, like those of other nations with advanced astronomy research programs, is looking for ways to approach its big data problem in an innovative way that combines elements of both grid and cloud computing. These efforts could reshape current views of astroinformatics processing and help the country move toward its goal of becoming a global center for advancements in astronomical research.

The Canadian Advanced Network for Astronomical Research (CANFAR) is behind an ongoing project, in conjunction with CANARIE (a national research network organization), to create a cloud-based platform to support astronomy research. The effort is being led by researchers at the University of Victoria in British Columbia in conjunction with the Canadian Astronomy Data Centre (CADC) and with participation from 11 other Canadian universities. The goal of the project is to “leverage customized virtual compute and storage clouds, providing astronomers with access to many datasets and resources previously constrained by their local hardware environment.” The CANFAR platform will take advantage of CANARIE’s high-speed network and a number of open source and proprietary cloud and grid computing tools to allow the country’s astronomy researchers to better handle the vast datasets being generated by global observatories. It will also be propelled by the storage and compute capabilities of Compute Canada and by the expertise of the Herzberg Institute of Astrophysics and the National Research Council of Canada.

CANFAR is driven forward by a number of objectives that support its mission to create a “global machine” to help researchers further their astronomy goals. The creators of the project stated, “All of the necessary components exist to support science but they don’t work well together in that mission.
The type of service layer that is needed to support a high level of integration of these components for astronomy does not exist and needs to be invented, installed, and operated.”

What CANFAR Can Do

The value proposition of CANFAR is that it will enable astronomers to process the data from astronomical surveys using a wide array of custom software packages and, of course, to widen the set of computational resources available for these purposes. A report on the project described CANFAR as “an operational system for the delivery, processing, storage, analysis, and distribution of very large astronomical datasets” and as a project that pulls together a number of Canadian entities, including the Canadian National Research Network (CANARIE), Compute Canada’s extensive grid and storage capabilities, and the CADC data center, to create a “unified storage and processing system.” The report also describes the CANFAR project’s technical details, stating that it has “combined the best features of the grid and cloud processing models by providing a self-configuring virtual cluster deployed on multiple cloud clusters” that takes elements from grid-based services as well as a number of cloud services, including “Condor, Nimbus or OpenNebula, Eucalyptus or Amazon EC2, Xen, VOSpace, UWS, SSO, CDP and GMS.”

The researchers behind the CANFAR project noted that when considering different virtualization options they evaluated both Xen and KVM, but settled on Xen because of its wider popularity at the time and because it was the only one that facility operators had used on an experimental basis in the past. On the scheduler front, there were complexities because the CANFAR virtual cluster needed a batch job processing system that would provide the functionality of a grid cluster, making both Grid Engine and Condor natural options. The team settled on Condor, however, because upon examination of the environment they found that using Grid Engine would mean modifying the cluster configuration any time a VM was added or removed. The team selected Nimbus as the “glue between cloud clusters,” which “examined the workload in the Condor queue and used resources from multiple cloud clusters to create a virtual cluster suitable for the current workload,” and used the Nimbus toolkit as the primary cloud technology behind the cloud scheduler. The team also developed support for OpenNebula, Eucalyptus, and EC2, but decided on Nimbus because it was open source and permitted the “cloud workload to be intermixed with conventional batch jobs unlike other systems.” The research team behind CANFAR stated that they believed “that this flexibility makes the deployment more attractive to facility operators.”

With Linux as the operating system and an emphasis on interoperability and open source, CANFAR will be a proving ground for the use of these scheduling and cloud-based management tools on large datasets. Alongside other projects that follow similar interoperability and open source paradigms (although with different packages), such as NASA’s Nebula cloud, there will likely be a number of exciting proof-of-concept reports emerging over the course of the next year.
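The scheduling loop described above (watch the batch queue, boot worker VMs on whichever cloud has capacity, retire them when the queue drains) can be summarized in a few lines of Python. This is a simplified sketch of the general idea, not CANFAR's actual cloud-scheduler code; every name here (the queue and cloud objects, boot_worker_vm, retire_idle_vm) is invented for illustration and merely stands in for the real Condor and Nimbus interfaces.

```python
# Simplified sketch of a Condor/Nimbus-style cloud scheduler loop.
# All object and method names are hypothetical placeholders.
import time

def balance(queue, clouds, vm_image):
    """Grow or shrink the virtual cluster to match the batch workload."""
    idle_jobs = queue.count_idle()                       # jobs waiting for a worker slot
    idle_vms = sum(c.count_idle_vms() for c in clouds)   # workers with nothing to do

    if idle_jobs > 0:
        # Boot one worker VM on the first cloud with free capacity.
        for cloud in clouds:
            if cloud.has_capacity():
                cloud.boot_worker_vm(vm_image)   # the VM joins the Condor pool on startup
                break
    elif idle_vms > 0:
        # No waiting work: release resources back to the cloud providers.
        for cloud in clouds:
            cloud.retire_idle_vm()

def run(queue, clouds, vm_image, poll_seconds=60):
    """Poll the queue forever, rebalancing the virtual cluster each cycle."""
    while True:
        balance(queue, clouds, vm_image)
        time.sleep(poll_seconds)
```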
CANARIE’s vision for the project is that it will also “provide astronomers with novel and more immediate hands-on and interactive ways to process and share very large amounts of data emerging from space exploration.” In addition to helping researchers better manage the incredible amounts of data filtering in from collection sites, the project’s goals are also tied to aiding collaboration opportunities among geographically dispersed scientists. As the CANFAR team noted, “a schematic of contemporary astronomy research shows that the system is essentially a networked global array of infrastructure with scientists and telescopes as I/O devices.” Slides describing some of the current research challenges and potential benefits, as well as some of the context for the project, can be found here.
<urn:uuid:f8b985e4-4670-4dae-a12f-3633fd23c5e7>
CC-MAIN-2017-09
https://www.hpcwire.com/2011/01/17/canada_explores_new_frontiers_in_astroinformatics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00534-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942952
1,439
2.875
3
Refer to the exhibit. What will Router1 do when it receives the data frame shown? (Choose three.)
Refer to the exhibit. Which three statements correctly describe Network Device A? (Choose three.)
Which layer in the OSI reference model is responsible for determining the availability of the receiving program and checking to see if enough resources exist for that communication?
Which of the following describes the roles of devices in a WAN? (Choose three.)
Refer to the exhibit. Host A pings interface S0/0 on router 3. What is the TTL value for that ping?
A network administrator is verifying the configuration of a newly installed host by establishing an FTP connection to a remote server. What is the highest layer of the protocol stack that the network administrator is using for this operation?
Refer to the exhibit. After HostA pings HostB, which entry will be in the ARP cache of HostA to support this transmission?
A network interface port has collision detection and carrier sensing enabled on a shared twisted pair network. From this statement, what is known about the network interface port?
A receiving host computes the checksum on a frame and determines that the frame is damaged. The frame is then discarded. At which OSI layer did this happen?
Which of the following correctly describe steps in the OSI data encapsulation process? (Choose two.)
<urn:uuid:5279a64b-6c35-4140-b65e-b5d349cf2e16>
CC-MAIN-2017-09
http://www.aiotestking.com/cisco/category/exam-200-120-cisco-certified-network-associate-ccna-update-april-24th-2016/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00058-ip-10-171-10-108.ec2.internal.warc.gz
en
0.90926
283
3
3
We all know that teaching others is a time honored activity. Parents teach their children about the world. Master craftsmen teach their apprentices the tools, tricks, and techniques of the trade. Many athletes become coaches, and many technologists mentor new, less experienced members of the profession. As you may expect, this training is of great value to those being trained: the children, the apprentices, the young athletes, and the less seasoned techies. What is the true value to the teacher? The answer to this question is the thrust of my topic. Teaching others in the workplace, as either a professional instructor or as a helpful mentor, is in many ways more valuable to the teacher than to the student.

I believe that most IT professionals would agree that, in the workplace, we should all help, mentor, and guide those with less experience than ourselves. After all, it's good for the company and it's the right thing for us to do as human beings. That said, teaching your technical skills to others also has the following personal advantages:
1. The people you teach will become your internal company supporters
2. It expands your knowledge of the topic
3. It helps to position you as a thought leader within the company
4. It helps you define a very positive professional brand
5. It helps maximize your value to the company

The reason that the people you teach will become your internal supporters is:
• You have personally taken the time to help and/or mentor fellow techies. This type of kindness is generally greatly appreciated.
• You will be viewed as an internal expert and thought leader. This type of reputation brings respect, which will help you create a personal following within the company.
• If you helped someone once, you will most likely be willing to help them again. That said, they will hope you are successful and do well within the company in case they need your assistance again at a future time.

The reason that it expands your knowledge of the topic is:
• When you teach/explain something you know to someone, it makes you put your thoughts into words. These mental gymnastics make you think more deeply about the topic.
• When you teach/explain something you know to someone, you actually listen to what you are saying, which reinforces your knowledge of the topic.
• Student questions very often cause you to think about the topic from different perspectives, which can provide a previously unforeseen understanding of the topic.
• Student questions and/or preparation for your instruction may cause you to do additional research on the topic, and therefore to learn new things you may not otherwise have come to know.

The reason that it helps you to position yourself as a thought leader within the company is:
• You become known internally as a person who deeply understands a specific topic and, as a result, has the answers to important questions.
• You become viewed as a person who is willing to share his/her knowledge.

The reason it helps you define a very positive professional brand is:
• From #1, teaching/helping/mentoring others helps you create internal company contacts that you can call on to ask a favor as the need arises. This ability allows you to get things done internally that others within the company, without your contacts, may not be able to move forward. Therefore, you gain the reputation as a person who knows how to get things done within the company.
• From #2, you are deepening your knowledge in your area of expertise.
The more you learn, the better position you are in to share your expanded knowledge and build your personal brand as an internal expert.
• From #3, being viewed as an internal thought leader, that is to say, not only knowledgeable but willing to share that knowledge and help/teach others, is a great reputation to have. This combination can make you the "go to person" inside the company on a specific technology. Then, as the "go to person", the more people that come to you for help, the more your reputation and personal brand will grow.

The reason it helps maximize your value to the company is:
• You are helping others within the company get their work done.
• You are helping grow the overall technical knowledge within the company.
• Because of your internal contacts, you have the ability to get things done that others within the company cannot seem to complete.

In closing, teaching, helping, and mentoring others in the company is great for you personally, for those you help, and for your company in general. That said, the next time someone in your company asks for your assistance, please help them; in the long run, the person you may be helping most is yourself. Taking these classes will not only provide you with insights that will help you professionally, it will also enhance your resume and help you get that next interview. If you have any questions about your career in IT, please email me at [email protected] or find me on Twitter at @EricPBloom. Until next time, work hard, work smart, and continue to grow.
<urn:uuid:96bb26e6-6fdb-403d-ad4d-6197708863ac>
CC-MAIN-2017-09
http://www.itworld.com/article/2718454/careers/5-reasons-teaching-technology-to-others-is-good-for-you.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00478-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965697
1,083
2.6875
3
Women in IT often end up feeling like "lone wolves," not only because 83% of the technology workforce is male, but also because many feel that their skills are under-valued. As a result, 40% more women than men choose to leave their jobs after 10 years in the industry, according to Belinda Parmer, CEO of campaigning agency Lady Geek, which aims to make technology and games more accessible to women. Speaking at FDM's Women in IT event at Glaziers Hall, Parmer said that although women are some of the most enthusiastic consumers of technology, they are still not adequately represented in the technology workforce. This is partly an image problem, she said, claiming that the biggest issue for young girls is that they do not see technology as a creative career. "We need to shatter the perception that people who work in technology are pizza-guzzling IT nerds who can't get a girlfriend and never see sunlight." This can partly be achieved through better IT education, but even those who do overcome the stereotype and succeed in breaking into the IT industry are often not made to feel wanted.

Parmer controversially pointed to the studies of Professor Simon Baron-Cohen, who came up with a method of classifying people on the basis of empathy (the drive to identify a person's thoughts and feelings) and systems (the drive to analyse or construct a system). According to Baron-Cohen's theory, more women are empathisers and more men are systemisers. However, Parmer claims that technology companies currently employ about 85% systemisers, meaning that the empathisers find it difficult to fit in. "The systemiser devises a structure that suits the systemiser, and often the values that an empathiser brings - collaboration, communication, etc. - don't feel valued in this systemiser culture," said Parmer. "The truth is, there has never been a more important time for companies to understand the empathisers, and how to deliver a truly fantastic customer experience." She said that companies need to learn that having empathetic employees can benefit their business, and recommends training staff (both male and female) to be empathetic. The focus on empathy sparked some unease among women who attended the event. Some attendees felt it could be construed as a more sophisticated version of the gender stereotyping that they were concerned to overcome.

Lyn Grobler, VP and CIO of Functions at BP, said that another way to make female technology workers feel valued is to enable more flexible working practices that allow women to stay in work for longer if they choose to have families. "We've got to find some role models that are quite senior in the organisation who will start working in this way, so that more junior people will start seeing it as acceptable," said Grobler. "There's still this perception that if you're not there full time, you're not serious about your career. So we've got past the HR model, the next one is looking for role models." Angela Morrison, CIO of Direct Line Group, said that using technologies such as video conferencing and secure networks to enable flexible working is also an important tool in driving women beyond the middle management level. "I have had some great middle management females working for me, but none of them wanted to step up. They said no, I'm very happy, because where I sit in the organisation I am in control of what I do. So when I want to go home I can just pack up and go home. If I go up one level I lose control," she said. "We do need to drive woman beyond middle management.
As a working society we need to address the way we operate." Parmer added that the way job applications are written has a big influence on who applies for them. Men tend to apply for jobs if they have 50% of the skills, whereas women feel that they need a 90% skill match in order to apply. By changing the language to make the job sound more accessible, more women are likely to apply, she said. The European Commission published new figures at the World Economic Forum in Davos last week, showing an increase in the number of women on boards to 15.8%, up from 13.7% in January 2012. The boost comes after the European Commission introduced a 40% objective for women on boards based on merit in November. "The proof is in the pudding: regulatory pressure works," said Vice-President Viviane Reding, the EU's Justice Commissioner. "Companies are finally starting to understand that if they want to remain competitive in an ageing society they cannot afford to ignore female talent."
<urn:uuid:006740d6-a293-43ed-824c-cdcbaca68bb5>
CC-MAIN-2017-09
http://www.itworld.com/article/2715713/it-management/women-in-it-should-not-feel-like--lone-wolves---says-lobby-group.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00354-ip-10-171-10-108.ec2.internal.warc.gz
en
0.970232
958
2.625
3
What ever happened to IPv5? On the road to IPv6, everyone seems to have overlooked a mile marker - By Brian Robinson - Jul 31, 2006 In all of the debate about the transition from IPv4 to IPv6, few people ever seem to ask about whatever happened to IPv5. Like Formula 408 and Special J cereal, some wonder whether an IPv5 ever existed. If so, what happened to it? Indeed, IPv5 did exist. To understand why there is a jump from Version 4 to Version 6, we must review some Internet history. The timeline stretches back to the early 1970s when the Advanced Research Projects Agency — now the Defense Advanced Research Projects Agency — began fleshing out its fledgling ARPAnet. That eventually turned into NSFnet, a network the National Science Foundation operated primarily for government scientists. That, in turn, grew into the modern Internet. IP was first developed as a counterweight to TCP, which was the first complete set of protocols developed for ARPAnet. TCP is the transport layer of the Internet, layer four in the Open System Interconnection’s seven-layer reference model. It manages network connections and data transport. IP is layer three, the component that enables addressing and routing, among other activities. For several years, there was only TCP, which scientists were developing as a host-level, end-to-end protocol and a packaging and routing protocol. By the late 1970s, however, people realized they were trying to do too much with a single protocol. IP was created to handle packaging and routing functions. The engineering world rarely discards anything, however. TCP development alone included two versions of the protocol. So by the time developers decided to split the work and create IP, the TCP line had already reached its third version. When the first complete set of TCP/IP protocols were announced in 1980, it was the fourth iteration, hence IPv4. So IPv4 was actually the first standardized version of IP. But as early as the 1970s, people realized the network would not be able to handle future requirements, so engineers created the Internet Stream (ST) Protocol to experiment with voice, video and distributed simulation via the network. Separate development of ST eventually led to ST2 in the 1990s. IBM, NeXT, Apple Computer and Sun Microsystems used that version in their commercial networking products. ST2, which offered connection-based communications and guaranteed quality of service, was considered a great advance over IP and was formally designated IPv5. By the time that happened, however, the idea of the next generation of the Internet, or IPng, had already started to percolate. IPng work began in 1994. Instead of moving smoothly through the ST2-based IPv5 to this next-generation Internet, people working on the upgrades decided to improve IPv4 and add everything they thought would be needed for the future Internet. That meant skipping from IPv4 to IPv6. Brian Robinson is a freelance writer based in Portland, Ore.
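As a small aside on where these version numbers actually live: every IP packet begins with a 4-bit version field, and the value 5 was assigned to the ST/ST2 stream protocol described above, which is why the successor to IPv4 became IPv6. The Python snippet below is a minimal illustration of reading that field, not production parsing code.

```python
# Minimal sketch: the 4-bit "version" field at the start of every IP packet header
# is where the version numbers discussed in this article actually live.
VERSION_NAMES = {
    4: "IPv4",
    5: "ST / ST2 (the experimental stream protocol assigned version 5)",
    6: "IPv6",
}

def ip_version(packet: bytes) -> str:
    """Return the IP version of a raw packet from the high nibble of its first byte."""
    version = packet[0] >> 4
    return VERSION_NAMES.get(version, f"unknown (version {version})")

# A typical IPv4 header starts with 0x45 (version 4, header length 5 words).
print(ip_version(bytes([0x45])))   # -> IPv4
print(ip_version(bytes([0x60])))   # -> IPv6
```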
<urn:uuid:8367ad9e-c38f-484d-8f02-a22e408cbf88>
CC-MAIN-2017-09
https://fcw.com/articles/2006/07/31/what-ever-happened-to-ipv5.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00406-ip-10-171-10-108.ec2.internal.warc.gz
en
0.968547
626
3
3
The fingerprinting program that began last month for visitors and non-U.S. citizens entering the U.S. certainly underscores the need for a comprehensive authentication system to help strengthen our borders. While this program will begin to enhance the security of our nation, the debate surrounding domestic identity verification programs is still not adequately addressing one of the most visible threats aimed at our National Security, namely the growing problem of Identity Fraud. As we all know, the hijackers used false identifiers and false identification documents as their tickets to board the ill-fated planes on 9/11. In immediate response, our government took important steps to develop effective identity verification systems. Biometrics, facial recognition and other scanning devices are currently undergoing tests at various airports and other secure places in the U.S. There has also been much talk of implementing some type of National Identification card. As these efforts continue, however, we also need to be mindful of some of the many challenges that exist in developing a comprehensive solution to effectively combat Identity Fraud.

When you stop and think for a moment, a biometric, which is an authentication system based on one's physical attributes, isn't effective if the identity is stolen and utilized at the point of registration. Additionally, how can we administer an authentic national identity card system or biometric if the applicant has already taken one of the thousands of Social Security numbers reported stolen over the past couple of years, or just as disturbing, created a new identity from the plethora of websites easily accessible online? Furthermore, how do we identify or authenticate someone if we do not have the data? So, given these obstacles, the critical task at hand is figuring out how we go about answering the question that lies at the fundamental root of the Identity Fraud problem: How do I really know you are who you say you are?

One possible approach is through a knowledge-based system, better known as identity authentication. Unlike a biometric, which is based on one's physical attributes, "authentication" would identify someone by obtaining from them specific information that is general in nature and unique only to that individual when analyzed. The logic behind this model is predicated on the theory that an imposter may know some of the information pertaining to a real individual, but he or she will likely not know all of an individual's identifying information, especially when the information is modeled. Domestically, similar knowledge-based identity verification systems are already being used in the financial world, for example, to verify someone applying for credit cards. On the international front, properly authenticating visitors coming into the U.S. presents more of a challenge because we have not yet obtained the necessary public information that exists globally, mostly in written form, on non-U.S. citizens in their respective countries. However, the means to get this information does exist and the technology structure needed to integrate it electronically is already in place. As we continue to strengthen and secure our homeland, we cannot ever lose sight of the fact that Identity Fraud is more than just a National Security or terrorism issue. It is also a global commerce issue. In fact, under the newly enacted USA Patriot Act, banks and other financial institutions are required to have in place systems to properly authenticate their customers.
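As a toy sketch of the knowledge-based approach described above: compare an applicant's answers against independently sourced records and accept the identity only when enough separate attributes match. The attribute names, the threshold, and the record source below are all invented for illustration; real systems model far more data than this.

```python
# Illustrative sketch of knowledge-based identity authentication.
# Attribute list, threshold, and records are hypothetical examples.
def authenticate(claimed: dict, records: dict, threshold: float = 0.75) -> bool:
    """Return True if enough of the claimed attributes match the records on file."""
    checks = ["date_of_birth", "prior_address", "lender_name", "phone_number"]
    matches = sum(
        1 for field in checks
        if claimed.get(field) and claimed[field] == records.get(field)
    )
    return matches / len(checks) >= threshold

on_file = {"date_of_birth": "1970-02-14", "prior_address": "12 Oak St",
           "lender_name": "First Bank", "phone_number": "555-0100"}

# An imposter may know one or two facts about a real person, but rarely all of them.
imposter = {"date_of_birth": "1970-02-14", "prior_address": "99 Elm Ave"}
print(authenticate(imposter, on_file))   # False
```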
No doubt, people already living in or trying to enter the U.S. for the worst imaginable purposes have tried, and will continue to try, to change their identities. To that end, it is imperative that we grow trust in the identity authentication process to avoid the adverse consequences of a fallen economy or further incidents of terrorism.
<urn:uuid:3583652d-a19b-4868-889f-c242efbc4017>
CC-MAIN-2017-09
http://www.banktech.com/core-systems/lexisnexis-is-identity-fraud-an-identity-crisis/d/d-id/1289121
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00050-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946974
724
2.671875
3
Practice: Create geolocation components
From the developerWorks archives. Date archived: January 4, 2017 | First published: October 07, 2011
The use of geographical assets to determine where someone or something is located, and then selling that specific set of information to anyone who wants to use it, is the essence of geolocation. It is the creation of quality, utility, and value for customers, while at the same time creating economic and financial benefits for stakeholders that drive business. Geolocation, particularly in the mobile environment, provides opportunity for both the general populace and businesses of all types. This content is no longer being updated or maintained. The full article is provided "as is" in a PDF file. Given the rapid evolution of technology, some steps and illustrations may have changed.
<urn:uuid:e8698219-8e00-40e1-8015-7fa8000037d3>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/library/wa-geolocation-pr/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00050-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934658
161
2.65625
3
New competition for sovereign nations Until the 1960s, Europe had few commercial radio stations. Broadcasting was a government monopoly in many European countries, and listening options were limited to a few staid government stations such as the BBC. But as Erwin Strauss tells it, that changed when enterprising "pirate radio" ships began dropping anchor off the shores of European countries and blasting the latest pop music in violation of those countries' laws. The governments were not amused, but because the ships were in international waters, there was little they could do. Most European governments began refusing the pirate radio ships access to their harbors, but the ships were able to find harbor elsewhere. European governments finally succeeded in shutting down the pirate radio stations in the late 1960s by passing laws prohibiting their subjects from doing business with the broadcasters—including purchasing advertising from them. But the episode created a political constituency for private radio stations and the broadcast of more pop music. In the UK, for example, private, commercial radio broadcasting was finally legalized in the early 1970s. History has many examples of hierarchical institutions being disrupted by technological advances. The invention of the printing press helped undermine the authority of the Catholic Church. Today, the Internet is undermining traditional copyright industries. An audacious new project aims to achieve a similar result by creating new competition for the world's sovereign nations. The Seasteading Institute, the brainchild of two Silicon Valley software developers, aims to develop self-sufficient deep-sea platforms that would empower individuals to break free of the cozy cartel of 190-odd world governments and start their own autonomous societies. They envision a future in which any group of people dissatisfied with its current government would be able to start a new one by purchasing some floating platforms—called seasteads—and build a new community in the open ocean. History is littered with utopian schemes that petered out after an initial burst of enthusiasm, something the Seasteading Institute's founders readily acknowledge. Indeed, they chronicle these failures in depressing detail on their website. With names like the Freedom Ship, the Aquarius Project, and Laissez-Faire City, most of these projects accomplished little more than receiving a burst of publicity (and in some cases, raising funds that were squandered) before collapsing under the weight of their inflated expectations. There are many reasons to doubt that the Seasteading Institute will realize its vision of floating cities in the sea; but there are at least two reasons to think that seasteading may prove to be more successful than past efforts to escape the grasp of the world's governments. First, the project's planners are pragmatic—at least by the standards of their predecessors—pursuing an incrementalist strategy and focusing primarily on solving short-term engineering problems. Second, they recently announced a half-million dollar pledge from PayPal co-founder Peter Thiel, giving them the resources to begin serious engineering and design work. While there are many obstacles to be overcome before they will have even a functioning prototype—to say nothing of a floating metropolis—their project doesn't seem as obviously hopeless as most of the efforts that have preceded it. 
Ars talked to Seasteading Institute co-founder Patri Friedman about the seasteading project and the engineering and political problems it will face. Friedman describes himself as an "enthusiastic libertarian," a description that would fit Thiel as well. Their libertarian convictions are a major source of inspiration for the seasteading project. Frustrated with the size and intrusiveness of existing governments, they view the sea as a frontier that will allow people more freedom to experiment with different ways of organizing human societies. They depict government as an industry that suffers from unreasonably high barriers to entry. At present, experimenting with a new form of government requires winning an election or starting a revolution, prohibitively expensive options for small groups. As a result, they argue, even democratic governments are insufficiently responsive to their customers, the voters. The seasteaders seek to lower barriers to entry in the government business in order to create more competition and choice. While his own convictions run in a libertarian direction, Friedman is quick to emphasize that the core value of seasteading is the diversity it would allow. Not all seasteading communities would be libertarian. For example, some seasteads might be founded by environmentalists interested in more sustainable lifestyles. Others might be owned by religious communities seeking to separate themselves from the broader society. Still others might be founded as egalitarian workers' communes. A key advantage of seasteads is what Friedman calls "dynamic geography," the fact that any given seasteading unit is free to join or leave larger units within seasteading communities. Seasteading platforms would likely band together to provide common services like police protection, but with the key difference that any platform that was dissatisfied with the value it was receiving from such jurisdictions could leave them at any time. He argues that this would "move power downward," giving smaller units within society greater leverage to ensure the interests of their members are being served.
<urn:uuid:6418d905-d4b9-4d85-b198-882dcd4e9198>
CC-MAIN-2017-09
https://arstechnica.com/features/2008/06/seasteading-engineering-the-long-tail/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00050-ip-10-171-10-108.ec2.internal.warc.gz
en
0.9701
1,045
2.71875
3
This story has been updated. Persistent mental health conditions -- anxiety, depression and sleep disorders -- along with neck, back, and joint pains among Afghanistan and Iraq war veterans may someday "be recognized as signature scars of the long war" that began with the U.S. invasion of Afghanistan in 2001, the Armed Forces Health Surveillance Center reported today. The center, which conducts epidemiological and health surveillance studies for the Defense Department, said in the April issue of its Medical Surveillance Monthly Report that "mental disorders were the only illness/injury category for which hospitalization rates markedly increased." They were 8 percent higher in 2006 than 2002 and more than twice as high in 2012. The center examined health care provided to active duty service members from 2001 through 2012.

The long-war study came up with several unanticipated findings, most notably that "no major categories of illnesses or injuries accounted for marked increases in rates of hospitalizations or ambulatory visits among U.S. military members until the fifth year (2006) of the war period." The Center said its findings have important implications, including that "there may be dose-response relationships between the cumulative exposure of a military forces to war fighting and the natures and magnitudes of their health care needs." The Center said that clinical manifestations, such as mental and musculoskeletal disorders, resulting from continuous exposure to combat may not appear for several years. War-related stresses may increase over time: "Prolonged exposures to war fighting may be chronic and resistant to treatment. If so, clinical manifestations of the war may persist among many war veterans long after war fighting ends," the report said. This means that veterans of the Afghanistan and Iraq wars could require long-term care, the Center said. "To the extent that some adverse health effects of prolonged periods of war fighting may be persistent and resistant to treatment, medical care may be needed by large numbers of war veterans long after war fighting ends." It concluded, "If so, someday persistence of anxiety, depression, sleep disorders, neck, back, and joint pains, headache, and various ill-defined conditions among Afghanistan/Iraq war veterans may be recognized as signature scars of the long war."

In April 2008, the nonprofit RAND Corp. estimated the cost of dealing with post-combat stress and psychological illnesses of Afghanistan and Iraq veterans at $6.2 billion for just the first two years after troops return home. These costs include direct medical care, lost productivity and suicides. The long-war report backs up a study done by the center last November that showed mental disorders accounted for 63 percent of excess hospitalizations for active duty troops from October 2001 through June 2012 compared to pre-war rates. The center also reported in its April surveillance report that in 2012, mental disorders topped the list as the cause of hospitalizations for active duty troops -- 16,175, or 26 percent, out of a total of 85,901 hospital admissions. "Adjustment reactions (including post-traumatic stress disorder) and episodic mood disorders were associated with more hospitalizations among active component members than any other specific condition," the report said. "Together, these two conditions accounted for 18 percent and 20 percent of all hospitalizations of males and females (excluding pregnancy/delivery), respectively," it said.
Alcohol dependence accounted for 11 percent of hospitalizations in 2012. Hospitalizations for mental health disorders have jumped by more than 50 percent since 2008. The Center said this sharp increase likely reflects repeated deployments, prolonged exposure to combat stress, improved mental health screening and a heightened awareness by commanders and families about mental health issues.
<urn:uuid:0f294548-f12a-4289-be00-f06fd76b9a26>
CC-MAIN-2017-09
http://www.nextgov.com/health/2013/04/poor-mental-health-signature-scar-afghanistan-and-iraq-wars/62757/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00226-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951837
763
2.546875
3
Is the Matrix next? A team of scientists at MIT has created a computer simulation of the universe that is being called the most accurate ever. The computer simulation, developed by a team led by Mark Vogelsberger, begins shortly after the creation of the universe and continues until the current age. A description of the simulation was published Wednesday in the journal Nature, and the creation is being acclaimed because it is the best depiction yet of the scale of the whole universe as well as of the details of individual galaxies. The team employed Illustris simulation software, and began the journey at a relatively young age for the universe -- a mere 12 million years after the Big Bang. It then had to cover an additional 13 billion years to bring it to the present.

350 Million Light-Years
To go with that enormous time range is the unimaginable range of space, about 350 million light-years across. One light-year is about 10 million million kilometers. This kind of enormous time and space range has never been captured before in a single simulation. The smallest details shown are about 1,000 light-years across, which the scientists expect to get down to several light-years across -- but it could take a decade more of refining the simulation. And the simulation also tackles the range of elements, from the simplest -- hydrogen and helium in the early years -- to heavier and more numerous elements later. There are also more than 40,000 different types of galaxies, such as elliptical and disk galaxies. One scientist not on the team, University of Maryland astronomer Michael Boylan-Kolchin, noted in a news article in Nature that the realism in the images of the galaxies was previously "possible only for simulation of individual galaxies." The team members said that the simulated images of galaxies resembled actual images of those galaxies, as depicted by the Hubble Space Telescope. They pointed to similarities in density of objects, colors, sizes and morphologies. But there are some differences, of course, and the scientists will be tweaking their theories to refine the simulation further.

Six Months of Calculations
For instance, while the galaxies' proportions seem more or less correct, the stars in galaxies that are much smaller than our Milky Way appear in the simulation to be older than they should be. The team indicated this is because the simulation created those galaxies too soon in the life of the universe. The simulation required six months of supercomputer calculations, which the scientists estimated would have taken a regular desktop computer about 2,000 years to generate. The imagery was derived from laws of physics and theories of how the universe evolved, and one of the purposes is to test how well some of those theories seem to have worked in creating structures and images that resemble the current universe. One of the many problems in recreating this universe is the depiction of dark matter and dark energy, because their actual makeup is still not determined. The model simulates their behavior, with dark matter helping matter in the young universe to assemble, and dark energy helping to propel the universe's expansion.
<urn:uuid:d6944fcf-198f-45ec-aeb6-a4296c24b870>
CC-MAIN-2017-09
http://www.cio-today.com/article/index.php?story_id=92576
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00402-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954746
617
3.53125
4
Researchers from the University of Strathclyde in Scotland have begun developing a new Light Fidelity (known as 'Li-Fi') technology that could turn ordinary LED lights into a sophisticated wireless communications network like WiFi. The technology works much like infrared remote controls, only much more powerful. This is not a new idea: German scientists created an 800Mbps-capable wireless network using nothing more than normal red, blue, green and white LED light bulbs back in 2011. But the University of Strathclyde is leading a group of UK universities, funded by the Engineering and Physical Sciences Research Council (EPSRC), to develop its own version of the technology.

How it works
LED lights flicker on and off thousands of times a second. By altering the length of the flickers you can introduce digital communications. Most of the other teams are developing Li-Fi LEDs of around 1mm² in size, while at Strathclyde University they plan to use micron-sized LEDs. This is because the smaller micron-sized LEDs are able to flicker on and off around 1,000 times quicker than the larger LEDs, offering faster data transfers and taking up less space. The LEDs could be used as lights, say for a screen displaying information, at the same time as providing internet. Professor Martin Dawson of Strathclyde University explains the potential benefits: "This is technology that could start to touch every aspect of human life within a decade. Imagine a LED array beside a motorway helping to light the road, displaying the latest traffic updates and transmitting internet information wirelessly to passengers' laptops, netbooks and smartphones. This is the kind of extraordinary, energy-saving parallelism that we believe our pioneering technology could deliver." There are, however, drawbacks. For example, light is easily blocked by things like walls. This could be a great breakthrough in internet communications, but don't expect to see it implemented any time soon. What do you think? Is Li-Fi technology something you'd like to see for the future of wireless communications?
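To make the flicker-based signalling described above concrete, here is a toy Python sketch of simple on-off keying, in which each data bit maps to an LED on or off state. This is purely illustrative; real Li-Fi systems use far more sophisticated modulation and run at rates no software loop could achieve.

```python
# Toy sketch of the idea behind Li-Fi modulation: data bits become LED on/off
# states far too fast for the eye to notice (simple on-off keying).
def to_bits(data: bytes):
    """Yield the bits of each byte, most significant bit first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def led_states(data: bytes):
    """Map each bit to an LED state: 1 -> ON, 0 -> OFF."""
    return ["ON" if bit else "OFF" for bit in to_bits(data)]

print(led_states(b"Hi")[:8])   # the first transmitted byte as flicker states
```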
<urn:uuid:a7ae3cdc-1dcc-4be3-b2b6-22bdb9a4b85a>
CC-MAIN-2017-09
https://www.gradwell.com/2013/02/13/led-lights-set-to-deliver-wireless-internet-communications/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00578-ip-10-171-10-108.ec2.internal.warc.gz
en
0.910018
454
3.46875
3
Researchers and business analysts have witnessed two parallel paths of development in analytics technology over the last few decades: databases and other datastores as repositories of information to be analyzed (together with an engine for querying that information); and analytics applications that provide users with mechanisms for easily generating business insights from their data. These two layers of the analytics technology stack have been trying to keep up with each other for over thirty years, leading to ever-increasing sophistication and scalability in analytics capabilities. The history of databases properly begins in the 1960s, but the sort of relational databases that we would recognize really got going in the 1970s, substantially driven by work at IBM and at the University of California, Berkeley. By the early 1980s there was an abundance of commercial databases including DB2 and Oracle, and the market matured during the 1990s as large numbers of software vendors produced increasingly complex products built on top of these data platforms, such as customer relationship management and enterprise resource planning applications. Offline, the data generated by all these new applications created a rich source of new analytics. In the 1980s Teradata pioneered the commercial development of large-scale data warehouses through massively parallel processing (MPP), in which data queries are executed in parallel on a number of independent or semi-independent servers; and companies like banks and telcos started to create rich repositories of financial audit trails and customer histories. In recent years, there has been a well-documented explosion in the size of data to be managed as hosted applications, web-sites and finally mobile applications served ever larger audiences. Internet companies like Google and Facebook have created a need for rapid and concurrent access to more dynamic data structures, and in the rush to fill the gap left by the traditional databases there was suddenly a number of open-source NoSQL systems such as Cassandra and HBase. Meanwhile, for handling data with an almost extreme level of flexibility, the Hadoop framework that essentially came out of Google and Yahoo! represented an effort to reliably and efficiently handle massive loads of data processing across many low-end computers. At its heart is the MapReduce paradigm, an analog of basic SQL aggregation queries but able to work with almost any data. It has since multiplied into a variety of commercial and open-source versions that are increasingly becoming the default platform for large-scale ETL and analytics. Newer MPP databases such as Vertica and Greenplum then tried to leapfrog Teradata by emulating the scalability, cost and flexibility of Hadoop and NoSQL, and even creating their own hybrid platforms. Meanwhile the less traditional technologies have been in a race to provide simple SQL interfaces and application layers to mask the complexity of MapReduce and NoSQL. 
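To illustrate why MapReduce is described as an analog of basic SQL aggregation, here is a minimal word-count sketch in Python. The two-phase structure (map, then reduce) is the real paradigm; the single-process implementation and the sample data are simplifications made for this example.

```python
# Sketch of the MapReduce idea, using word counting as the classic example.
# The same result could come from a SQL aggregation such as:
#   SELECT word, COUNT(*) FROM words GROUP BY word;
from collections import defaultdict

def map_phase(document: str):
    """Emit (key, value) pairs: one ('word', 1) pair per word."""
    for word in document.lower().split():
        yield word, 1

def reduce_phase(pairs):
    """Sum the values for each key; on a real cluster this runs per partition."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog"]
pairs = [pair for doc in docs for pair in map_phase(doc)]   # shuffle step omitted
print(reduce_phase(pairs))   # {'the': 2, 'quick': 1, ...}
```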
Sitting atop these data platforms (to a large extent) is the world of analytics tools and applications, a spectrum of technologies which is itself split broadly into two areas: descriptive analytics, or business intelligence, which generally offers the ability to summarize datasets through basic aggregations and grouping, with facilities for drilling down into areas of interest; and advanced or predictive or inferential analytics, which apply statistical and mathematical modeling techniques to data to draw conclusions that aren’t already explicitly present in the data, such as correlations or time series predictions.

It was advanced analytics that got off to an earlier start through academic projects. The grandfather of them all, SAS, started as an academic project to analyze agricultural data for the US government, centered at North Carolina State, and was then founded in 1976 as a commercial entity. Similarly, SPSS started at the University of Chicago as a statistical package for the social sciences (hence the name) and became a successful commercial venture in the 1980s. Meanwhile, at Bell Labs in 1976 a group in statistical computing invented the S language for data analysis and graphics. It never really took off, but it did inspire two professors at the University of Auckland in New Zealand to create R, an open-source implementation of the S language that has become enormously popular in the last two decades.

On the other end of the analytics spectrum, business intelligence (or BI) got going in the 1980s with QUIZ from Cognos, and basic reporting and OLAP tools aimed at particular functions or verticals (e.g. Express from IRI for marketing, Comshare’s System W for financial applications) and the first ROLAP application, Metaphor, designed for CPG. BI and reporting reached a broader audience with general applications like Crystal in 1991, Actuate in 1993, and so on. The powerful combination of BI applications and data warehouses started in the late 1980s and exploded in the 1990s, with Essbase providing the template, followed by a move to hybrid MOLAP/ROLAP in the mid ’90s (MicroStrategy, BusinessObjects). The database vendors have responded with OLAP extensions to SQL that make it even easier to aggregate and analyze data within the database. Since then, BI has followed two broad trends: to make it easy and to make it scale. There is an increasingly visual and accessible approach; applications are now largely web-based and use lots of simple drag-and-drop metaphors; there is the broad appeal of tools like Tableau; and then there is widespread support for MPP databases; and new vendors like Platfora and Datameer are trying to see if BI can be made to fit on top of Hadoop. After twenty years, BI has arguably reached a level of maturity. Now appears to be the time for advanced analytics to catch up with BI, and there’s a broad acceptance that the methods of mathematical modeling and statistical analysis should be a standard part of the analytics arsenal for every organization, and for a much broader set of users. The parallel with BI is clear: make the solutions scale by integrating with the most powerful data platforms, taking advantage of even more advanced SQL extensions and open-source analytics libraries for Hadoop; and reduce the complexity and use visual metaphors to make it more accessible.
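A small sketch of the descriptive-versus-predictive distinction drawn above: descriptive analytics summarizes what is already in the data, while predictive analytics fits a model and infers something that is not. The NumPy dependency and the sales figures are assumptions made for this example.

```python
# Minimal sketch of descriptive vs. predictive analytics (NumPy assumed available).
import numpy as np

monthly_sales = np.array([102, 108, 115, 119, 127, 131], dtype=float)

# Descriptive (BI-style): summarize what is already in the data.
print("average:", monthly_sales.mean(), "max:", monthly_sales.max())

# Predictive (advanced analytics): fit a model and infer something not
# explicitly present in the data; here, a naive linear forecast of month 7.
months = np.arange(1, len(monthly_sales) + 1)
slope, intercept = np.polyfit(months, monthly_sales, deg=1)
forecast_next = slope * 7 + intercept
print("forecast for month 7:", round(forecast_next, 1))
```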
In general, as databases and other data platforms have increased in complexity and power, the analytics applications have raced to take advantage of them – and then generated yet more advanced requirements. The query layers and the parallelism (in particular) of modern platforms can now support sophisticated analytics applications on even the largest datasets. This is really the definition of ‘big data’: a powerful array of analytics capabilities from reporting to mathematical modeling, made accessible to the broadest possible audience, fueled by unlimited data.
<urn:uuid:c279e14d-22e8-4637-aff2-a2f447528ffd>
CC-MAIN-2017-09
http://alpinedata.com/a-brief-history-of-analytics-technologies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00570-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949041
1,339
2.640625
3