text stringlengths 234–589k | id stringlengths 47 | dump stringclasses 62 values | url stringlengths 16–734 | date stringlengths 20 ⌀ | file_path stringlengths 109–155 | language stringclasses 1 value | language_score float64 0.65–1 | token_count int64 57–124k | score float64 2.52–4.91 | int_score int64 3–5
---|---|---|---|---|---|---|---|---|---|---|
I'm a Lumberjack and I'm OK
Prof M. E. Kabay, PhD, CISSP-ISSMP
One of the more arcane aspects of running a multiuser computer system is logging. No, not the kind of logging that the Monty Python character sings about in his famous (or notorious) song about British Columbia – this kind of logging refers to keeping records of many different types of events on the system. Operating systems on mainframes, minicomputers, and some LANs (or LANs equipped with appropriate monitoring software), for example, can record events such as logons, file access, and resource utilization.
Logging serves many functions. Basically, logging keeps a record of who was doing what to which data on what systems at which times. Simply knowing that logging is taking place can contribute to self-regulation: knowing that actions are monitored can reduce harmful behavior. Log files provide information for controlling the system; for example, system administrators and managers can limit access to certain resources in response to observations of abuse, and they can change parameters and resources in response to trends. Log files also serve a fundamental purpose in forensic investigations.
In the early days of computing, system managers had to decide how often to close log files; disk space was expensive. These days, however, disk space is not much of an issue. In 1980, as I've mentioned in other commentaries, a 120 MB hard disk the size of a washing machine cost US$25,000 – about US$75,000 in current value and roughly US$625/MB. In contrast, today as I write in early 2009, a 1 TB Maxtor hard disk the size of a book costs less than $200, or roughly US$0.0002/MB (~1/50th of a penny per MB). So the price per megabyte was ~3,276,800 times greater 30 years ago, which works out to a compounded decline of about 39% per year (each year's price was roughly 61% of the previous year's). The main issue today with respect to logging to disk is simply preventing data loss if the system or the logging process crashes; one approach is to disable buffered I/O and force immediate writes to the disk. Since one can and should dedicate a storage device exclusively to logging, performance should not be a problem as long as the logging process is separate and does not hold up what is being monitored.
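As a concrete illustration of forcing each record to disk immediately, here is a minimal Python sketch (the file name and record format are invented for the example); it flushes the stream and calls os.fsync() after every record, so a crash of the logging process loses at most the record being written:

```python
import os
import time

def append_log_record(path, message):
    """Append one log record and force it through to the storage device."""
    record = "%s %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), message)
    with open(path, "a", encoding="utf-8") as log:
        log.write(record)
        log.flush()             # empty the user-space buffer
        os.fsync(log.fileno())  # ask the OS to write the data to the device now

append_log_record("audit.log", "logon user=RALPH terminal=LDEV42")
```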
The system manager must decide how long to archive log files. Usually, there are legal requirements which can guide the establishment of definite policies. These policies must be monitored and enforced to avoid serious problems such as contempt of court citations if records are deleted in violation of those requirements. For legal and functional purposes, log files must be archived in environmentally sound and secure storage facilities; normally, these are off site just like those for other system backups.
Each operating system can have particular variations in log file structure; you should look for log-file analysis tools that are specific to your environment. A search engine such as GOOGLE provides a wealth of references for the string “operating system” “log file analysis”; my search located 13,400 hits in 0.94 seconds, and dozens of products were listed right at the top of the results. The Exefind Software Search service alone, which was located by that search, listed five pages of names and descriptions of log-analysis utilities.
Such analytical software is necessary because it is usually impossible to examine all the records - there could be millions. One needs to be able to break out the unusual events. Using appropriate software, one can set filters to scan for unusual conditions. Some systems define baseline events – that is, the norm – and spot unusual ones using statistical methods. Human beings can scan the exception reports and look for patterns; more sophisticated software systems can use artificial intelligence (AI) to help spot patterns and anomalies.
AI systems may use accumulated observations and statistical methods to spot outliers that signal unusual and perhaps dangerous events. For example, suppose that no more than one user logon in 1,000 over the last year has used an ID from the accounting department between the hours of midnight and six in the morning – so why did “Ralph” try to log on at 3:30 in the morning? And if the real “Ralph” from accounting has not had to try his password more than twice in the last 1,523 logons, then how come this “Ralph” tried 18 passwords at 3:30 in the morning before giving up?
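A toy version of that kind of baseline test is easy to sketch in Python (the thresholds, field names, and the account profile below are invented for illustration, not taken from any real product):

```python
# Hypothetical per-account baselines distilled from a year of log records.
baseline = {
    "ralph": {"off_hours_logon_rate": 0.001, "max_failed_tries": 2},
}

def is_suspicious(user, hour, failed_tries):
    """Flag a logon that deviates sharply from the account's recorded history."""
    profile = baseline.get(user)
    if profile is None:
        return True          # an account with no history at all is itself unusual
    if hour < 6 and profile["off_hours_logon_rate"] < 0.005:
        return True          # overnight logons are far outside this account's norm
    if failed_tries > profile["max_failed_tries"]:
        return True          # far more password failures than ever recorded before
    return False

print(is_suspicious("ralph", hour=3, failed_tries=18))   # True: worth a closer look
```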
In the early days of computing, users normally were charged for every aspect of resource utilization; for example, they might accumulate charges of $0.00001 per disk I/O and $0.00002 per process initiation. Users received itemized bills showing their resource utilization – typically monthly; such bills promoted efficient use of resources. I remember personally being alerted by the Vice President of Operations at Mathema, the big data center where I worked in the 1980s, that one of our clients was generating reports that had been three to four times more expensive in the last few months than in previous years. He told me to find out why and to help reduce the costs. Searching the log files for changes in their resource utilization, I noted that their total disk I/O charges had been climbing rapidly in recent months; further analysis showed that both online access and batch-job I/O during their order reports had gone out of line. I investigated their databases and found that they had not repacked their most-used detail data set for a long time. As a result, records for line items in individual orders were no longer placed contiguously according to the primary index, order number. Thus printing out lists of all the order details forced the database to make multiple reads over the data set. I told the client's database administrator to repack the data set so that related records would end up in the same data block, reducing the number of I/Os in reads using the order number by the blocking factor (records per block). The client's costs for their monthly global report dropped from $1,200 back down into the $300 range.
Chargeback systems can also play a direct role in improving system security by increasing user involvement. Any user being charged more than expected can alert system management to the anomaly, which may be the result of hacking or of malware.
If log files are to be useful in forensic work, they must be protected against unauthorized alterations. There are many ways of doing so, including checksums, digital signatures, and encryption. Checksums are hash totals, generated using a standard algorithm, which are appended to each record; any change to a record made without recomputing the checksum using the right algorithm allows immediate identification of the corrupted record. In addition, if the checksum is computed by including the checksum from the preceding record, an attacker would have to recompute the checksums for every single record following an insertion, deletion, or modification of the original records.
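A minimal sketch of such record chaining, using SHA-256 from Python's standard library as a stand-in for whatever algorithm a real logging product would use:

```python
import hashlib

def chain_records(records):
    """Return (record, checksum) pairs in which each checksum also covers the previous one."""
    chained, previous = [], ""
    for record in records:
        digest = hashlib.sha256((previous + record).encode("utf-8")).hexdigest()
        chained.append((record, digest))
        previous = digest
    return chained

for record, checksum in chain_records(["03:30 logon ralph", "03:31 18 failed passwords", "03:32 logoff"]):
    print(checksum[:16], record)
```

Because each checksum depends on its predecessor, tampering with any record invalidates every checksum that follows it.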
Once a log file has been closed, one can use public key cryptography to sign the entire file with an authorized signing key. No one without access to the signing key will be able to generate a valid digital signature for a modified log file.
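With an RSA key pair, the signing step might look roughly like this (a sketch using the third-party Python cryptography package; the key and log file names are placeholders):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the authorized signing key (assumed to be an unencrypted RSA private key in PEM format).
with open("log-signing-key.pem", "rb") as key_file:
    signing_key = serialization.load_pem_private_key(key_file.read(), password=None)

# Sign the closed log file and store the signature alongside it.
with open("audit-2009-02.log", "rb") as log_file:
    log_bytes = log_file.read()

signature = signing_key.sign(log_bytes, padding.PKCS1v15(), hashes.SHA256())

with open("audit-2009-02.log.sig", "wb") as sig_file:
    sig_file.write(signature)
```

Anyone holding the corresponding public key can then verify that the archived file has not been altered.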
Large systems often made use of memory dumps, which in the 1980s were still frequently called core dumps. These core dumps were files containing the entire contents of active memory (RAM) and were very useful for debugging and forensics. There were two types of dumps: some were obtained through diagnostic utilities (debuggers) in real time and others were captured after abnormal system shutdown using special utilities.
System level DEBUG utilities gave administrators complete access to RAM, thus allowing a total bypass of the system security. These were extremely powerful and therefore dangerous tools; they allowed users with root privileges to copy or alter any portion of memory, to access system tables by name and make changes such as stopping processes, altering priorities and so on. It was critically important to control access to these tools; often, it was forbidden for any one user to initiate the system debug without the presence of a colleague.
Early core dumps were so tiny by today's standards that they could actually be printed to paper; for example, the maximum memory size of an HP 3000 Series III in 1980 was 2 MB. A perfectly normal PC today can easily have 2 GB of RAM or more – quite impossible to print and read manually. Even in the 1980s, however, we much preferred to work directly on electronic versions of the core dumps (usually mag tapes) and do our analysis on our terminals using the analytical utilities.
A core dump can be a major security vulnerability. It contains cleartext versions of vast amounts of confidential – and possibly otherwise encrypted – data; it can include I/O buffers such as input from keyboards and files or output to displays and files. It would be a disaster to release a core dump to unauthorized personnel. There is even a serious question about whether vendors should be permitted to see memory dumps.
This informal overview should get you interested in finding out about logging in your own case-study organization or industry. Ask your system and network managers about their practices and discuss the costs and benefits of current standards and possible enhancements with them.
And we haven’t even touched the issue of application logging. . . .
Sincere thanks to MSIA Administrative Director Elizabeth Templeton for proofreading.
Jones, T., M. Palin, & F. Tomlinson (1969). I'm a Lumberjack and I'm OK. Monty Python Show.
Official video at < http://pythonline.com/node/18116931 >.
Lyrics at < http://www.metrolyrics.com/im-a-lumberjack-lyrics-monty-python.html >.
Students may be surprised to learn that the data center manager would want to reduce income from the charges to a client. However, we took a long-term view of our relationship with our clients; our job was to help them make the best of our services so that they would stay with us as long as it made sense and then buy computer systems from us (we were an authorized HP distributor) when it was time for them to become autonomous.
A detail data set might have many indexes, but one of them was supposed to be identified as the most frequently used one and could be designated the primary index. In a special operation called a data set condense or repack, all the records would be rewritten so that they would fall into place according to the increasing value of their primary index. Thus if the order number were the primary index, as many records as possible for order #1245 would be placed one after another – perhaps in the same block (the unit of physical I/O; i.e., what got read into a memory buffer by the disk driver). Thus reading all the records for a specific order number would be measurably faster from a packed data set because the blocking factor would determine the total number of I/Os required to read the entire file. If there were, say, 100,000 records packed 10 records per block, only 10,000 I/Os would be required instead of 100,000. Since the drives of that time could not reasonably complete more than 40 I/Os per second, the difference between 100,000 I/Os and 10,000 I/Os was the difference between roughly 41 minutes just for physical I/O and about 4 minutes. CPU overhead would normally add at least as much time, so the difference in job time could be 80 minutes for the unpacked data set versus 8 minutes for the properly packed data set.
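The arithmetic is easy to check (a quick Python sketch using the figures assumed in the note above):

```python
ios_per_second = 40          # roughly what drives of that era could sustain
records = 100_000
blocking_factor = 10         # records per block after repacking

unpacked_minutes = records / ios_per_second / 60                  # one I/O per record
packed_minutes = records / blocking_factor / ios_per_second / 60  # one I/O per block

print(f"unpacked: {unpacked_minutes:.1f} min, packed: {packed_minutes:.1f} min")
# unpacked: 41.7 min, packed: 4.2 min
```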
Here’s one of my favorite stories about tech support from the days when I was in charge of the Phone In Consulting Service (PICS) at Hewlett-Packard in the early 1980s. Here’s an excerpt from a real conversation with a customer from that time:
Customer: The HP3000 crashed 10 minutes ago.
Mich: So did you take a dump?
Customer: [long pause] Yes, but I don’t see what that has to do with the computer.
Mich: NO, a core dump!
Customer: Oh – a core dump – yeah, sure, I did that.
Suppose we use 256 characters x 88 lines = 22,528 bytes/page. Then 1 MB would take ~46 pp and 2 GB would take ~95,000 pp. If the manual inspection rate were 1 minute per page (and that would be fast), it would take ~66 days to read the dump once.
| <urn:uuid:3ecefab9-0f8f-4432-9375-a54345e1d4a2> | CC-MAIN-2017-09 | http://www.mekabay.com/opsmgmt/logging.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00446-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.962123 | 2,589 | 2.546875 | 3 |
Briody C. (Dublin Institute of Technology | Center for Elastomer Research), Duignan B. (Dublin Institute of Technology | Center for Elastomer Research), and 3 more authors.
Polymer Testing | Year: 2012
Compressive creep gradually affects the structural performance of flexible polymeric foam material over extended time periods. When designing components, it is often difficult to account for long-term creep, as accurate creep data over long time periods or at high temperatures is often unavailable. This is mainly due to the lengthy testing times and/or inadequate high temperature testing facilities. This issue can be resolved by conducting a range of short-term creep tests and applying accurate prediction methods to the results. Short-term creep testing was conducted on viscoelastic polyurethane foam, a material commonly used in seating and bedding systems. Tests were conducted over a range of temperatures, providing the necessary results to allow for the generation of predictions of long-term creep behaviour using time-temperature superposition. Additional predictions were generated, using the Williams-Landel-Ferry (WLF) time-temperature empirical relations, for material performance at temperatures above and below the reference temperature range. Further tests validated the results generated from these theoretical predictions. © 2012 Elsevier Ltd. All rights reserved.
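For readers unfamiliar with the technique, the WLF relation estimates a shift factor a_T by which a creep curve measured at temperature T can be moved along the logarithmic time axis relative to a reference temperature. A small Python sketch (the constants C1 = 17.44 and C2 = 51.6 are the familiar textbook "universal" values, not parameters fitted in this study):

```python
def wlf_log_shift(T, T_ref, C1=17.44, C2=51.6):
    """Return log10 of the WLF shift factor a_T for temperature T versus reference T_ref."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

# A curve measured 20 degrees above the reference temperature shifts almost
# five decades toward shorter times:
print(wlf_log_shift(T=313.0, T_ref=293.0))   # about -4.87
```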
| <urn:uuid:33ddcac6-dec9-44aa-b4f3-2cadba1698b0> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/center-for-elastomer-research-2662847/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00146-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.886706 | 273 | 2.578125 | 3 |
Architecture: Though its name suggests otherwise, Intel's CISC (Complex Instruction Set Computer) architecture is easier to audit for security holes than the RISC (Reduced Instruction Set Computer)-based chips from Motorola, said Lurene Grenier, a software vulnerability researcher and Mac PowerBook user in Columbia, Md.
"With Complex Instruction Set instructions, there are more of them, and they do more for you. It's just simpler to read and write to CISC systems and get them to do something," she said.
Those differences make it easier for vulnerability experts and exploit writers to understand and write exploit code for systems that use the Intel architecture, and remove a big barrier to writing exploits for Mac systems, analysts agree.
"OS X will become more popular as prices drop. I think you have a variety of malicious folks who know the Intel chip set and instruction set. Now that Mac OS X runs on that, people can port their malware and other things over to OS X quickly and easily," said David Mackey, director of security intelligence at IBM.
"If I want to pop some box, Mac on a Motorola chip is a barrier," says Josh Pennell, president and CEO of IOActive Inc. in Seattle.
The population of individuals who can reverse-engineer code and read and write Assembly language is small, anyway.
Within that tiny population, there are far more who can do it for CISC as compared to RISC-based systems, Grenier said.
"There are payloads and shell code written for PowerPC, but there are far fewer people who can or care to write it," Grenier said.
Tools: Hackers need tools to help them in their work, and more of them exist for machines using Intel's x86 than Motorola's PowerPC, experts agree.
Popular code disassembly tools like IDA Pro work for programs that run on both Intel and PowerPC, but there's a richer variety of tools, such as shell code encoders and tools for scouring code, that work with the Intel platform than for PowerPC, Grenier said.
"There are tools that are not written for PowerPC because there's not the user base or the interest," she said.
Windows, Linux and Unix all use the x86 architecture, and exploit writers interested in those platforms have developed more tools to help them over the years.
Those tools, in turn, speed development of exploit code for buffer overflows and other kinds of vulnerabilities that require knowledge of the underlying architecture, Grenier said.
"I don't think [Intel] will make Mac more or less secure. But there will be a ton more exploits coming out for Mac," Grenier said.
| <urn:uuid:909f0ed5-1ead-494a-86b5-93f3b9ef15c7> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Security/Apples-Switch-to-Intel-Could-Allow-OS-X-Exploits/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00322-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947572 | 577 | 2.75 | 3 |
Robert J. Lefkowitz of the Howard Hughes Medical Institute at Duke University and Brian K. Kobilka from the Stanford University School of Medicine have won the Nobel Prize in chemistry. Lefkowitz and Kobilka worked on a group of biochemicals called G-protein-coupled receptors.
These receptors sit in the cell-membrane and allow small molecules outside the cell to manipulate the response of the cell. G-proteins play a role in vision, smell, mood regulation, and the immune system. They also help control cell density—that is, the G-proteins stop cells from packing themselves in too tightly. This wide range of roles makes them very important in biology, so understanding them is worthy of recognition.
As I understand it, the protein sits in the cell-membrane. A very small part of the protein is exposed to the surrounding environment, while the rest hangs inside the cell. When a small molecule docks to outside, it changes the charge balance, causing the part of the protein inside the cell to change its structure. This change in structure allows different proteins inside the cell to dock and begin a cascade of reaction.
The cool thing about this is that the internal changes depend on what molecule has docked on the outside, allowing the same protein to communicate many different messages.
Lefkowitz and Kobilka worked together to track down this path of activity and discovered the core similarity at the heart of cell signaling.
| <urn:uuid:07dce84e-fcd5-4b60-9511-c790bdbb64e6> | CC-MAIN-2017-09 | https://arstechnica.com/science/2012/10/nobel-prize-for-the-heart-of-cell-signaling/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00143-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939788 | 303 | 3.65625 | 4 |
Data doesn't always have to be big data to have a huge effect on the way we live. Whether it's big data or small data, data management is increasingly a thorny issue.
For NASA, data management is at the heart of a key challenge: improving human-to-robot communication to make space exploration easier and safer.
"When you look at the way NASA and other agencies around the world have done work and exploration in space, often it's been divided between pure human exploration on the one side and on the other side is purely robotic exploration," says Terry Fong, director of the Intelligent Robotics Group at the NASA Ames Research Center in Moffett Field, Calif.
"We're interested in the middle ground of having humans and robots working together," Fong says. "How can you combine humans with robots?"
For the past three years, Fong and his team have been pursuing that question with the Human Exploration Telerobotics (HET) project. They want to pave the way for robots that can perform many of the routine, in-flight maintenance tasks that are time-consuming, highly repetitive and often dangerous for astronauts to perform manually.
The team is also investigating ways to improve the ability of astronauts to remotely control robots on a planetary surface. By doing so, the HET project aims to improve and hasten human space exploration missions to new destinations.
Two of the robots the project is working with—Robonaut 2 and SPHERES (short for Synchronized Position Hold, Engage, Reorient Experimental Satellites)—are aboard the International Space Station and controlled by operators on the ground.
Two others—the K10 planetary rover and ATHLETE (All-Terrain, Hex-Limbed Extra-Terrestrial Explorer robot)—operate at NASA field centers, controlled by astronauts on the International Space Station.
Android and Linux in the Final Frontier
The HET robots make extensive use of open source software and platforms. For instance, Android and Linux are used for most of their computing. The SPHERES robots, free-flying mobile sensor platforms about the size of a volleyball, use an Android "Nexus S" smartphone for data processing (it was the first commercial smartphone certified by NASA to fly on the space shuttle and cleared for use on the International Space Station).
Collecting data is the easy part. Aggregating and transmitting it are trickier. In space exploration, robots have to work remotely in extreme conditions, operated over highly constrained communication networks. After all, to work, data must move bidirectionally across a link that is fundamentally intermittent.
As a result, although some telerobots are now in use on Earth, they aren't well-suited for space operations. New, advanced designs and control modes are necessary. Despite variations in purpose, technology and design, all the HET robots must be equipped for both high-speed (local) and low-bandwidth, delayed (satellite) communications.
Dealing with that challenge requires a common, flexible, interoperable data communications interface that can readily integrate across each robot's disparate applications and operating systems.
"This is really what the Internet of Things is all about: machines that generate data and need to receive data in the form of commands telling them what to do," says Curt Schacker, chief commercial officer of Real-Time Innovations (RTI), a specialist in real-time software infrastructure that has worked closely with NASA on HET.
"If you look at a factory floor or hospital or aircraft engine, you're looking at potentially hundreds of thousands of discrete sources of data," Schacker says. "How do we move that data from its source to its destination in a coherent, managed way so that decisions can be made on that data?"
"The killer app that I hear the most about with the Internet of Things is predictive analytics," Schacker says. "There's an interesting characteristic of machines in that they will tell you that they're going to break. Vibrational patterns change, temperatures change, orientations change. We know this because when we do post-processing of machines that have broken, there was information available that, if we would have had that information, we would have known the problem would occur. The problem is how do you get that information and can you get in such a way that you can act on it?"
DDS Open Standard Helps NASA Create a 'Space Internet'
That's where a middleware technology called Data Distribution Service for Real-Time Systems (DDS) comes in. Developed by RTI in the U.S. and the Thales Group in France, DDS is an open standard intended to enable scalable, real-time, dependable, high performance and interoperable data exchanges between publishers and subscribers.
It is coming to play a large role in the Internet of Things, and is currently used on smartphone operating systems, transportation systems and vehicles, software-defined radio and so forth. It forms the core of NASA's Disruption Tolerant Networking software, which allows it to compensate for intermittent network connectivity and delays when sending data between computers on the ground and robots in space (or vice versa). NASA and other space agencies are using it to create what it calls a "Space Internet."
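A rough feel for what "compensating for intermittent connectivity" means at the middleware level can be had from a toy Python sketch (this is an illustration of the idea only, not the DDS or RTI Connext API): the writer keeps unacknowledged samples in a bounded history buffer and re-sends them whenever the link comes back, so the application never has to handle the outage itself.

```python
import collections

class ReliableWriter:
    """Toy publisher that hides an intermittent link behind a bounded history buffer."""

    def __init__(self, link, history_depth=100):
        self.link = link                                   # any object with send(sample) -> bool
        self.pending = collections.deque(maxlen=history_depth)

    def write(self, sample):
        self.pending.append(sample)
        self.flush()

    def flush(self):
        # Re-send buffered samples in order; stop at the first failure and retry later.
        while self.pending:
            if not self.link.send(self.pending[0]):
                break                                      # link is down; keep the sample
            self.pending.popleft()

class FlakyLink:
    """Stand-in for a satellite hop that drops every other transmission."""
    def __init__(self):
        self.up = False
    def send(self, sample):
        self.up = not self.up
        if self.up:
            print("delivered:", sample)
        return self.up

writer = ReliableWriter(FlakyLink())
for reading in ["battery=97%", "wheel_temp=-40C", "position=crater_rim"]:
    writer.write(reading)
writer.flush()   # a later retry delivers anything still pending
```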
"Space is an incredibly rigorous production environment," says Stan Schneider, CEO and founder of RTI, which NASA turned to for its Connext DDS software. "As a direct result, the software needed to support mission-critical telerobotics communication applications must meet stringent requirements and become certified before use in this rugged environment. RTI Connext DDS solutions are tolerant to the time delay and loss of signal that can occur with signals bouncing between the space station, satellites and land-based devices."
"You've typically got very limited bandwidth," adds Schacker. "It's not like you're sitting on 100 Gigabit Ethernet. You've got delays on the links. And the links can be lossy. Sometimes you drop packets. DDS lends itself extremely well to that problem. The capabilities to deal with these issues are built into the middleware itself."
Fong says that the International Space Station is orbiting the earth about once every 90 minutes. Although satellites in geosynchronous orbit around the earth give NASA's control stations "pretty good coverage," data must generally travel from the space station to a satellite to a ground station and then to a control station through multiple hops.
"That's a lot of bounces," he says. "And there are times when we lose the signal for periods of minutes--up to 15, 20 or 30 minutes. The geometry is changing all the time. The communication link end-to-end is both intermittent as well as jittery in terms of time delay."
Thor Olavsrud covers IT Security, Big Data, Open Source, Microsoft Tools and Servers for CIO.com. Follow Thor on Twitter @ThorOlavsrud. Email Thor at [email protected]
| <urn:uuid:87e5bd66-3870-4da0-9d9e-6f9a8d8d0c05> | CC-MAIN-2017-09 | http://www.cio.com/article/2383689/data-management/nasa-turns-to-open-source-middleware-for-human-to-robot-communications.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00615-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940183 | 1,460 | 3.265625 | 3 |
Cloud Computing Enables Business Scalability And Flexibility
The most common meaning of the term cloud computing refers to the delivery of scalable IT resources over the Internet as opposed to hosting and operating those resources locally. Cloud computing enables your company to react faster to the needs of your business, while driving greater operational efficiencies.
Cloud computing has a great impact on business thinking. It facilitates a change in the way companies operate, by offering shared and virtualized infrastructure that is easily scalable. It is also changing how we manage these resources. The challenge is no longer about how many physical servers a company has, but more about being able to manage these virtual resources.
Cloud computing offers businesses flexibility and scalability when it comes to computing needs:
- Flexibility. Cloud computing allows your employees to be more flexible – both in and out of the workplace. Employees can access files using web-enabled devices such as smartphones, laptops and notebooks. The ability to simultaneously share documents and other files over the Internet can also help support both internal and external collaboration. Many employers are now implementing “bring your own device (BYOD)” policies. In this way, cloud computing enables the use of mobile technology.
- Scalability. One of the key benefits of using cloud computing is its scalability. Cloud computing allows your business to easily upscale or downscale your IT requirements as and when required. For example, most cloud service providers will allow you to increase your existing resources to accommodate increased business needs or changes. This will allow you to support your business growth without expensive changes to your existing IT systems.
- Impact of scalability on managed data centers. Because of the highly scalable nature of cloud computing, many organizations are now relying on managed data centers where there are cloud experts trained in maintaining and scaling shared, private and hybrid clouds. Cloud computing allows for quick and easy allocation of resources in a monitored environment where overloading is never a concern as long as the system is managed properly. From small companies to large enterprise companies, managed data centers can be an option for your business.
Private cloud computing is a solution for scalable, customized and secure resources where control has to reside with your internal IT department.
Beyond the improvements on business flexibility and scalability, cloud computing has fundamentally changed the way we pay for resources. In the past, tasks that required considerable processing power or space needed significant capital investments in the necessary hardware. Now, cloud computing allows these users to purchase scalable space for heavy duty data crunching on demand, paying for only what they use.
Moreover, a new report by the Software & Information Industry Association, which includes opinions of 49 technology CXOs and VPs, notes that cloud computing’s impact grows even more significant when coupled with mobile and big data analytics.
From infrastructure, to mobility, virtualization, and on-demand applications, the true value of cloud is making its presence known throughout the entire IT ecosystem. Companies will continue to make infrastructure choices, either with or without cloud capabilities, but cloud solutions will continue to offer competitive advantages over traditional solutions.
By Ricks Blaisdell/ RicksCloud
| <urn:uuid:801c7f0b-d9ab-4ce9-ac1d-3054235b59c2> | CC-MAIN-2017-09 | https://cloudtweaks.com/2012/09/cloud-computing-enables-business-scalability-and-flexibility/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00315-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929109 | 632 | 2.515625 | 3 |
From The Editor
Today's Internet is comprised of numerous interconnected Internet Service Providers (ISPs), each serving many constituent networks and end users. Just as individual regional and national telephone companies interconnect and exchange traffic and form a global telephone network, the ISPs must arrange for points of interconnection to provide global Internet service. This interconnection mechanism is generally called "peering," and it is the subject of a two part article by Geoff Huston. In Part I, which is included in this issue, he discusses the technical aspects of peering. In Part II, which will follow in our next issue, Mr. Huston continues the examination with a look at the business arrangements (called "settlements") that exist between ISPs, and discusses the future of this rapidly evolving marketplace.
In the early 1990s, concern grew regarding the possible depletion of the IP version 4 address space because of the rapid growth of the Internet. Predictions for when we would literally run out of IP addresses were published. Several proposals for a new version of IP were put forward in the IETF, eventually resulting in IP version 6 or IPv6. At the same time, new technologies were developed that effectively slowed address depletion, most notably Classless Inter-Domain Routing (CIDR) and Network Address Translators (NATs). Today there is still debate as to if and when IPv6 will be deployed in the global Internet, but experimentation and development continues on this protocol. We asked Robert Fink to give us a status report on IPv6.
We've already discussed the historical lack of security in Internet technologies and how security enhancements are being developed for every layer of the protocol stack. This time, Marshall Rose and David Strom examine the state of electronic mail security. We clearly have a way to go before we see "seamless integration" of security systems with today's e-mail clients.
Our first Letter to the Editor is included on page 46. As always, we would love to hear your comments and questions regarding anything you read in this journal. Please contact us at [email protected]
-Ole J. Jacobsen, Editor and Publisher, [email protected]
| <urn:uuid:d4572aee-3e9d-4e0a-8ed3-76a2670c6295> | CC-MAIN-2017-09 | http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents/from-editor.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00015-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948963 | 447 | 2.765625 | 3 |
Although progress in battery capacity is snail-like compared to the progress in energy consumption of the devices powered by batteries, rechargeable batteries have come a long way the past decade or two. All current Apple laptops, iPods, and the iPhone use lithium ion or lithium polymer rechargeable batteries. Apart from spontaneous combustion (which is rare), the main problem with rechargeable batteries is that they lose their capacity over time. This is especially worrisome with devices like the iPod and the iPhone where the battery isn't easily replaceable. So what can we do to keep our lithium batteries in good health until old age?
Apple has a battery page that has some general information about the way lithium batteries are charged and discharged. They stress that you should charge your batteries early and often, rather than let them drain (almost) completely. For the purpose of battery wear, several partial charge cycles count as one full charge cycle. A battery should still have 80 percent capacity after the equivalent of 500 full charge cycles.
Apple also has pages that go into detail about battery issues for notebooks (they carefully avoid calling them "laptops"), the iPod family, and the iPhone. One recommendation is that you "keep the electrons flowing" by working on the battery from time to time—at least once a month. And you need to run on the battery until the device sleeps once every 30 cycles or so to make sure that the charge indicator stays in sync with the actual battery capacity.
For each type of device, there is an optimum temperature range that the batteries like, which is 32 to 95 degrees Fahrenheit (0 to 35 Celsius) for the iPhone and the iPods, and 50 to 95 F (10 to 35 C) for laptops. I can personally attest to the fact that when it's freezing outside, an iPod's battery charge will seem to disappear into the cold, thin air. Although the batteries will work fine under more tropical conditions, the problem with that is that they age prematurely. According to BatteryUniversity.com, it's an open secret that lithium batteries start losing their capacity as soon as they're manufactured, but this process is greatly accelerated by two factors: a high temperature and a full charge. This is probably the reason why laptop batteries can be completely worn out after a few years: the battery is completely charged most of the time, and the insides of a computer generate a lot of heat.
When I got a PowerBook almost four years ago, I wanted to keep the battery in as good a shape as possible, so I tried to avoid going through too many charge cycles. I averaged less than three a month. But I still only have 1600 milli-amp-hours left, which is less than half the original capacity. The second battery that I got 2.5 years ago, however, which has seen about two charge cycles a month, is still at 4200 mAh, or 95% of its original capacity. Apple's laptop batteries have a voltage rating of 10.8 V, so multiply the mAh capacity by 10.8 and divide by 1000 to get rid of the milli, and you'll have the capacity in Watt hours.
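The conversion is trivial to script (a quick sketch using the capacities quoted above):

```python
def capacity_wh(mah, volts=10.8):
    """Convert a battery's milliamp-hour rating into watt-hours."""
    return mah * volts / 1000

for label, mah in [("worn original battery", 1600), ("refrigerated spare", 4200)]:
    print(f"{label}: {capacity_wh(mah):.1f} Wh")
# worn original battery: 17.3 Wh
# refrigerated spare: 45.4 Wh
```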
This difference probably has something to do with advances in battery manufacturing, but I'm attributing most of it to the fact that I keep this battery in the refrigerator with about half a charge when it's not in use, while the original battery occupies the battery slot in the PowerBook. Moral of the story: the toasty underside of your laptop will prematurely age the battery anyway, so trying to avoid charge cycles is probably not worth the effort. If you have a second battery, store it in the fridge (not the freezer) at 40 percent charged when you're not using it. But most of all: keep your electronics out of the sun and especially out of parked cars.
| <urn:uuid:e03b3fd2-dff4-480e-a9cf-2e3527ca4f53> | CC-MAIN-2017-09 | https://arstechnica.com/apple/2007/08/on-treating-your-laptop-ipod-and-iphone-batteries-right/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00367-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967506 | 772 | 3.0625 | 3 |
The Internet has functioned well for decades with minimal regulation of either access or edge providers. The Federal Communications Commission’s open-Internet order replaces that stable equilibrium with an asymmetric regime that is inherently unstable and antithetical to investment and innovation.
The FCC’s order classifies access providers as common carriers and imposes on them “bright line” rules against blocking, throttling and paid prioritization. At the same time, it specifically excludes edge providers from both that utility classification and those rules. Access providers must be “just and reasonable.” Edge providers have no such obligation.
In effect, the order places sole responsibility for ensuring the smooth flow of Internet traffic on the access providers, while depriving them and the FCC itself of the ability to influence the behavior of the edge providers. That is a highly unrealistic and dangerous approach to the way the Internet works today.
In the traditional telephone network, the phone company made it possible for its subscribers to communicate by connecting them directly to one another. In today’s Internet, broadband Internet access providers take part in a multistage communication process with edge providers that combines transmission and information services at each stage. Access providers and edge platforms all use combinations of transmission, caching and computing functions to ensure that an individual’s communication reaches its desired targets over the Internet.
For example, individuals wishing to communicate with friends, family, gamers or followers over the Internet will use their access providers to reach Skype or Facetime or Facebook or Tumblr or Twitch or Twitter or any of a number of other communications platforms. A videographer who wishes her video to be seen by a global audience will send it via her access provider to a platform like YouTube or Netflix, so that they can relay that video to those spectators. They, in turn, may rely on other edge platforms, such as content delivery networks, peering and transit providers, and backbone networks.
For the individual who wishes to communicate, both the access and edge platforms are equally vital. To have one’s message blocked, degraded or delayed at any point in the path is equally devastating. Nevertheless, the FCC’s order allows all of the edge platforms to block, throttle and prioritize at will, while demanding that the access providers that must partner with them ensure that all legal traffic reach its destination undeterred.
Not only does the FCC ignore the key role edge platforms play in facilitating an individual’s communications, the FCC ignores the power edge providers can exercise. The FCC’s stated rationale for creating this asymmetry by imposing its “bright line” rules and common-carrier regulations on the access providers is its assumption that they can act as gatekeepers that may have the ability and incentive to act in ways that will advantage themselves. If true, this reasoning applies equally to predicting the conduct at the edge. The Internet ecosystem is full of powerful edge providers that can act as gatekeepers and may have both the motivation and ability to advantage themselves.
Conversely, the FCC ignores the deleterious effect regulation has on access providers. The FCC exempted the edge providers from its rules because it believes their claim that they will not be able to innovate and invest if they are regulated. That is, of course, a valid argument but is all the more true of the highly capital-intensive access providers. Regulation at either level is detrimental to investment and innovation.
The FCC’s asymmetric order destabilizes the balance that has allowed the Internet to thrive and to be open. It hurts consumers by emboldening edge providers to increase their already enormous power to stifle communications even as it discourages access providers from innovating and investing. A better option would be legislation that creates a new framework that ensures an open Internet without threatening investment and innovation throughout the Internet ecosystem. Only Congress can assure a truly open and vibrant Internet for all Americans.
Anna-Maria Kovacs is a visiting senior policy scholar at Georgetown University’s Center for Business and Public Policy. She has covered the communications industry for more than three decades as a financial analyst and consultant.
This story, "Restoring a vibrant and open Internet" was originally published by Computerworld.
| <urn:uuid:e558bd44-8376-4ab9-a5f9-5bd617ea7646> | CC-MAIN-2017-09 | http://www.itnews.com/article/2926723/internet/restoring-a-vibrant-and-open-internet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00011-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.923833 | 848 | 2.53125 | 3 |
Robert Kosara, a visual analysis researcher at Tableau Software, argues that if you want to learn how to tell stories with data, look at the Web. If you examine the right projects, you will find a classroom full of useful exhibits that use data to tell stories and examine questions—and provide lessons for both decision-makers and analytics professionals.
In a video that accompanies this article, Kosara reviews the lessons from four data visualization projects. These presentations include two from The New York Times (about political parties’ views on a jobs report in September 2012, and discussions about carbon emissions in the U.S. and China at the time of the Copenhagen climate conference).
In the third exhibit, Kosara discusses a chart-filled blog post by Elon Musk of Tesla Motors responding to a Times test drive of his company’s electric car. The fourth visualization is a Washington Post examination of National Rifle Association contributions to candidates for Congress. All use data to tell a story. And each shows how interpretations of data can vary depending on one’s point of view.
Data storytelling is still fairly new, Kosara says, and it pays to study the work of others to glean ideas of what techniques work well and to analyze why. “Looking at what the media are doing in particular when it comes to visual representation and the narrative of the building of the story, can be quite effective and especially because these are things that are available,” he says.
Below are summaries of the data visualizations with website links to the originals, accompanied by excerpts from Kosara’s video presentation.
How Political Parties in the U.S. Viewed a Jobs Report
The visualization: “One Report, Diverging Perspectives,” New York Times, October 5, 2012.
Context: In October 2012, one month before the presidential election, the U.S. reported that employers added 114,000 jobs in September. The jobless rate dipped to 7.8 percent. Coming one month before the presidential election, the news sparked debate about the Obama Administration’s economic policies.
What the visualization shows: The visualization presents three panels: a middle view where the September jobs numbers are put in context with monthly job creation and the unemployment rate. On the left is a view of “how a Democrat might see things” and on the right there is a tab to show “how a Republican might see things.”
Kosara’s take: “It shows something that I find quite common in data reporting or when people talk about data is that there are different ideas about the same data,” he says. “The New York Times here makes a very interesting point about how the same numbers, in this case this is the unemployment numbers that came out late last year, how they can be seen differently, the same numbers can be seen differently, by Democrats or Republicans.”
He adds: “If you were to present this data somewhere in a presentation to a decision-maker, you might either want to try and show both sides, and try and not to take sides, which is what this example is showing. Or if you want to be clear about which side you think the interpretation should go. Then you need to make a point why one side is the right one. And of course that should be something done in business but not in the reporting.”
Different Ways of Counting Carbon Emissions
The visualization: “Copenhagen: Emissions, Treaties and Impacts,” New York Times, December 5, 2009.
Context: Before the international climate conference in Copenhagen in 2009, policymakers discussed stemming the growth of carbon dioxide emissions and how to balance that goal with economic growth in developing countries.
What the visualization shows: This presentation charts data about carbon emissions growth in the U.S., Europe, China and India, and how they can be calculated—by geography, on a per capita basis, or per dollar of GDP. If you examine emissions as a function of GDP, China’s growing economy produces less carbon dioxide per dollar. If you examine emissions by total metric tons, however, China is projected to produce far more pollutants than the U.S. (Note: Kosara’s comments focus on one section of the visualization; the Times project also looks at the Kyoto Protocol climate treaty and projected effects of climate change.)
Kosara’s take: “These are different views, again of the same data as in the previous example. And they are interesting because they are shown quite nicely and the structure walks you through these different views and kind of gives you a sense of why these different views exist and how they impact what the decision would be going forward.
“So this is an interesting template, an interesting idea, for how to present information when you are looking at decision making and picking the path forward.”
A Disputed Test Drive of an Electric Car
The use of data: Charts presented in a blog post, “A Most Peculiar Test Drive,” by Tesla Motors Chairman Elon Musk on the company’s blog, Feb. 13, 2013.
Context: In “Stalled Out on Tesla’s Electric Highway,” on Feb. 8, New York Times reporter John M. Broder published an account of a test drive he took of the Tesla Model S, which ended with his having to hire a tow truck when the car ran out of battery charge. Five days later, Musk published his blog using data from sensors on the car to refute the Times story. Among many reader comments and online discussion about the story, Broder posted his point-by-point response to Musk’s blog. The newspaper’s public editor also weighed in, writing that while Broder “left himself open to valid criticism by taking what seem to be casual and imprecise notes” about his trip, he took the test drive in good faith.
What the visualization shows: Musk’s piece is a criticism of The Times article that includes annotated charts that show what sensors on the Model S recorded, measuring conditions such as speed and distance, cabin temperature, battery charge levels and estimated range based on battery charge, among other data.
Kosara’s take: The charts using data are effective because of the way they are annotated and part of a coherent argument, Kosara says. “It’s interesting that the way they are making this point here is that they are saying well, there are these points that were made in the story, but we have actual data to show that some of these are not actually true, or at least we can [say that the reporter] was just not taking proper notes about what he was doing.”
This approach created a lot of sympathy for Tesla, Kosara notes. “But of course it is also a bit dangerous because there are different interpretations of the same data and if you give the journalists a bit more benefit of the doubt that not all of these numbers were exactly correct, you can see that some of the patterns [the reporter] describes are still visible in the data.”
He concludes: “This is a nice example of using data in a very public way to make a very strong point, and really provides the evidence behind that point.”
The Gun Lobby’s Influence
The visualization: “How the NRA exerts influence on Congress,” The Washington Post, Jan. 15, 2013.
Context: With the gun control debate on the national agenda in the wake of the December 14, 2012 shootings at an elementary school in Newtown, Conn., The Washington Post examined campaign finance data to create a display of candidates, including incumbent members of Congress, who received and did not receive contributions from the National Rifle Association (NRA).
What the visualization shows: The presentation shows dots to represent each candidate for office and then moves and changes the size of those dots depending on whether the candidate won election, how much each received from the NRA, their party affiliation, whether they are in the House or Senate, and how the NRA rates the voting records on gun-related issues. A user can follow a particular lawmaker through the various views.
Kosara’s take: “This is an interesting way of walking through a fairly complex transformation of data to look at different ways of slicing and dicing the data. Looking the parties, looking at winners and losers, looking at the Senate versus the House and so on.”
Kosara says this presentation has lessons for business visualizations. “That’s also a common thing that you would do in a lot of business cases, where you want to present data, not just as one view, as one set of numbers. But there are different ways of looking at data that are not necessarily even about different interpretations, but just different ways of breaking the data down into smaller pieces and then trying to figure out which of those are actually interesting, which of those are useful, to make decisions and so on. You very often need to do that,” Kosara says.
He adds: “But to actually turn that into a reasonably cohesive story is very difficult and so this is a good example of how this can be done.”
Michael Goldberg is editor of Data Informed. Email him at [email protected].
Editor’s note: The original version of this story referred to Robert Kosara giving a talk at an event on marketing analytics and customer engagement. That event is now a series of webinars, and information about those speakers is available here.
| <urn:uuid:dc00a8af-1e60-4f81-9d19-633d796c54ba> | CC-MAIN-2017-09 | http://data-informed.com/tableau-softwares-robert-kosara-on-using-data-to-tell-a-story/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00363-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950645 | 1,998 | 2.671875 | 3 |
- Source code management on UNIX and Linux systems
- Why Mercurial?
- Creating and using Mercurial repositories
- Getting help in Mercurial
- Checking repository status
- Adding files to a repository
- Committing changes
- Pushing changes to a remote repository
- Pulling changes from a remote repository
- Undoing changes in Mercurial
- Downloadable resources
- Related topics
Managing source code with Mercurial
A powerful, flexible system for managing project source code
Source code management on UNIX and Linux systems
Identifying and tracking the changes made by multiple developers and merging them into a single, up-to-date codebase makes collaborative, multi-developer projects possible. Version control system (VCS) software, also referred to as revision control system (RCS) or source code management (SCM) software, enables multiple users to submit changes to the same files or project without one developer's changes accidentally overwriting another's.
Linux® and UNIX® systems are knee-deep in VCS software, ranging from dinosaurs such as the RCS and the Concurrent Versions System (CVS) to more modern systems such as Arch, Bazaar, Git, Subversion, and Mercurial. Like Git, Mercurial began life as an open source replacement for a commercial source code management system called BitKeeper, which was used to maintain and manage the source code for the Linux kernel. Since its inception, Mercurial has evolved into a popular VCS system that is used by many open source and commercial projects. Projects using Mercurial include Mozilla, IcedTea, and the MoinMoin wiki. See Related topics for links to these and many more examples.
VCS systems generally refer to each collection of source code in which changes can be made and tracked as a repository. How developers interact with a repository is the key difference between more traditional VCS systems such as CVS and Subversion, referred to as centralized VCS systems, and more flexible VCS systems such as Mercurial and Git, which are referred to as distributed VCS systems. Developers interact with centralized VCS systems using a client/server model, where changes to your local copy of the source code can only be pushed back to the central repository. Developers interact with distributed VCS systems using a peer-to-peer model, where any copy of the central repository is itself a repository to which changes can be committed and from which they can be shared with any other copy. Distributed VCS systems do not actually have the notion of a central, master repository, but one is almost always defined by policy so that a single repository exists for building, testing, and maintaining a master version of your software.
Mercurial is a small, powerful distributed VCS system that is easy to get started with, while still providing the advanced commands that VCS power users may need (or want) to use. Mercurial's distributed nature makes it easy to work on projects locally, tracking and managing your changes via local commits and pushing those changes to remote repositories whenever necessary.
Among modern, distributed VCS systems, the closest VCS system to Mercurial is Git. Some differences between Mercurial and Git are the following:
- Multiple, built-in undo operations: Mercurial's backout and rollback commands make it easy to return to previous versions of specific files or previous sets of committed changes. Git provides a single built-in revert command with its typical rocket-scientist-only syntax.
- Built-in web server: Mercurial provides a simple, integrated web server that makes it easy to host a repository quickly for others to pull from. Pushing requires either ignoring security or a more complex setup that supports Secure Sockets Layer (SSL).
- History preservation during copy/move operations: Mercurial's copy and move commands both preserve complete history information, while Git does not preserve history in either case.
- Branches: Mercurial automatically shares all branches, while Git requires that each repository set up its own branches (either creating them locally or by mapping them to specific branches in a remote repository).
- Global and local tags: Mercurial supports global tags that are shared between repositories, which make it easy to share information about specific points in code development without branching.
- Native support on Windows platforms: Mercurial is written in Python, which is supported on Microsoft® Windows® systems. Mercurial is therefore available as a Windows executable (see Related topics). Git on Windows is more complex—your choices are msysGit, using standard git under Cygwin, or using a web-based hosting system and repository.
- Automatic repository packing: Git requires that you explicitly pack and garbage-collect its repositories, while Mercurial performs its equivalent operations automatically. However, Mercurial repositories tend to be larger than Git repositories for the same codebase.
Mercurial and Git fans are also happy to discuss the learning curve, merits, and usability of each VCS system's command set. Space prevents that discussion here, but a web search on that topic will provide lots of interesting reading material.
Creating and using Mercurial repositories
Mercurial provides two basic ways of creating a local repository for a project's source code: either by explicitly creating a repository or by cloning an existing, remote repository:
- To create a local repository, use the hg init [REPO-NAME] command. Supplying the name of a repository when executing this command creates a directory for that repository in the specified location. Not supplying the name of a repository turns the current working directory into a repository. The latter is handy when creating a Mercurial repository for an existing codebase.
- To clone an existing repository, use the hg clone REPO-NAME [LOCALNAME] command. Mercurial supports the Hypertext Transfer Protocol (HTTP) and Secure Shell (SSH) protocols for accessing remote repositories. Listing 1 shows an example hg command and the resulting output produced when cloning a repository via SSH.
Listing 1. Cloning a Mercurial repository via SSH
$ hg clone ssh://codeserver//home/wvh/src/pop3check
wvh@codeserver's password:
destination directory: pop3check
requesting all changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 12 changes to 12 files
updating to branch default
12 files updated, 0 files merged, 0 files removed, 0 files unresolved
remote: 1 changesets found
Note: To use the HTTP protocol to access Mercurial repositories, you must either start Mercurial's internal web server in that repository (hg serve -d) or use Mercurial's hgweb.cgi script to integrate Mercurial with an existing web server such as Apache. When cloning via HTTP, you will usually want to specify a name for your local repository.
After you create or clone a repository and make that repository your working directory, you're ready to start working with the code that it contains, add new files, and so on.
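If you set up repositories often, the create/add/commit sequence described here can also be scripted. The following is a minimal, illustrative Python sketch (it is not part of the original article) that simply shells out to the hg executable, which is assumed to be installed and on your PATH; the repository name, file name, and user string are placeholders:

import subprocess
from pathlib import Path

def hg(*args, cwd=None):
    # Run an hg subcommand and return its captured output.
    result = subprocess.run(["hg", *args], cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

repo = Path("demo-repo")
hg("init", str(repo))                      # same as: hg init demo-repo

(repo / "README.txt").write_text("A scratch repository for trying Mercurial.\n")
hg("add", "README.txt", cwd=repo)          # track the new file

# Pass an explicit user so the example works even without a ~/.hgrc file.
hg("commit", "-m", "Initial commit",
   "--user", "Firstname Lastname <user@example.com>", cwd=repo)

print(hg("log", cwd=repo))                 # show the new changeset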
Getting help in Mercurial
Mercurial's primary command is hg, which supports a set of sub-commands that are similar to those in other VCS systems. To see a list of the most common commands, execute the hg command with no arguments, which displays output similar to that shown in Listing 2.
Listing 2. Basic commands provided by Mercurial
Mercurial Distributed SCM
basic commands:
 add        add the specified files on the next commit
 annotate   show changeset information by line for each file
 clone      make a copy of an existing repository
 commit     commit the specified files or all outstanding changes
 diff       diff repository (or selected files)
 export     dump the header and diffs for one or more changesets
 forget     forget the specified files on the next commit
 init       create a new repository in the given directory
 log        show revision history of entire repository or files
 merge      merge working directory with another revision
 pull       pull changes from the specified source
 push       push changes to the specified destination
 remove     remove the specified files on the next commit
 serve      export the repository via HTTP
 status     show changed files in the working directory
 summary    summarize working directory state
 update     update working directory
use "hg help" for the full list of commands or "hg -v" for details
This short list displays only basic Mercurial commands. To obtain a full list, execute the hg help command.
Tip: You can obtain detailed help on any Mercurial command by executing the hg help COMMAND command, replacing COMMAND with the name of any valid Mercurial command.
Checking repository status
Checking the status of your files is one of the most common operations in any VCS system. You can use the hg status command to see any pending changes to the files in your repository. For example, after creating a new file or modifying an existing one, you see output like that shown in Listing 3.
Listing 3. Status output from Mercurial
$ hg status
M Makefile
? hgrc.example
In this case, the Makefile file is an existing file that has been modified (indicated by the letter M at the beginning of the line), while the hgrc.example file is a new file that isn't being tracked (indicated by the question mark (?) at the beginning of the line).
Adding files to a repository
To add the hgrc.example file to the list of files that are being tracked in this repository, use the hg add command. Specifying one or more file names as arguments explicitly adds those files to the list of files that are being tracked by Mercurial. If you don't specify any files, all new files are added to the repository, as shown in Listing 4.
Listing 4. Adding a file to your repository
$ hg add
adding hgrc.example
Tip: To automatically add all new files and mark any files that have been removed for permanent removal, you can use the hg addremove command.
Checking the status of the repository shows that the new file has been added (indicated by the letter A at the beginning of the line), as shown in Listing 5.
Listing 5. Repository status after modifications
$ hg status
M Makefile
A hgrc.example
Checking in changes is the most common operation in any VCS system. After making and testing your changes, you're ready to commit those changes to the local repository.
Before committing changes for the first time
If this is your first Mercurial project, you must provide some basic information so that Mercurial can identify the user who is committing those changes. If you do not do so, you'll see a message along the lines of abort: no username supplied... when you try to commit changes, and your changes will not be committed.
To add your user information, create a file called .hgrc in your home directory. This file is your personal Mercurial configuration file. You need to add at least the basic user information shown in Listing 6 to this file.
Listing 6. Mandatory information in a user's .hgrc file
[ui]
username = Firstname Lastname <[email protected]>
Replace Firstname and Lastname with your first and last names; replace [email protected] with your email address; save the modified file.
You can set default Mercurial configuration values that apply to all users (which should not include user-specific information) in the /etc/mercurial/hgrc file on Linux and UNIX systems and in the Mercurial.ini file on Microsoft Windows systems, where this file is located in the directory of the Mercurial installation.
The standard commit process
After creating or verifying your ~/.hgrc file, you can commit your changes using the hg commit command, identifying the specific files that you want to commit or committing all pending changes by not supplying an argument, as in the following example:
$ hg commit Makefile hgrc.example
committed changeset 1:3d7faeb12722
As shown in this example output, Mercurial refers to all changes that are associated with a single commit as a changeset.
When you commit changes, Mercurial starts your default editor to enable you to add a commit message. To avoid this, you can specify a commit message on the command line using the -m "Message.." option. To use a different editor, you can add an editor entry in the [ui] section of your ~/.hgrc file, following the editor keyword with the name of the editor that you want to use and any associated command-line options. For example, after adding an entry for using emacs in no-window mode as my default editor, my ~/.hgrc file looks like that shown in Listing 7.
Listing 7. Additional customization in a user's .hgrc file
[ui]
username = William von Hagen <[email protected]>
editor = emacs -nw
Tip: To maximize the amount of information that Mercurial provides about its activities, you can add the verbose = True entry to the [ui] section of your Mercurial configuration file.
Pushing changes to a remote repository
If you are using a clone of a remote repository, you want to push those changes back to that repository after committing changes to your local repository. To do so, use Mercurial's hg push command, as shown in Listing 8.
Listing 8. Pushing changes via SSH
$ hg push
wvh@codeserver's password:
pushing to ssh://codeserver//home/wvh/src/pop3check
searching for changes
1 changesets found
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 2 changes to 2 files
Pulling changes from a remote repository
If you are using a clone of a remote repository and other users are also using that same repository, you want to retrieve the changes that they have made and pushed to that repository. To do so, use Mercurial's hg pull command, as shown in Listing 9.
Listing 9. Pulling changes via SSH
$ hg pull
wvh@codeserver's password:
pulling from ssh://codeserver//home/wvh/src/pop3check
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 0 changes to 0 files
(run 'hg update' to get a working copy)
remote: 1 changesets found
As shown in the output, the hg pull command only retrieves information about remote changes; you must run the hg update command to apply the associated changes to your local working directory. The update command reports how the working directory has changed, as shown in Listing 10.
Listing 10. Updating your repository to show changes
$ hg update
0 files updated, 0 files merged, 1 files removed, 0 files unresolved
Undoing changes in Mercurial
Mercurial provides the following built-in commands that make it easy to undo committed changes:
- hg backout CHANGESET: Undoes a specific changeset by creating a new changeset that reverses it. Unless you specify the --merge option when executing this command, you have to merge that new changeset into your current revision before you can push it back to a remote repository.
- hg revert: Returns to the previous versions of one or more files by specifying their names, or returns to the previous version of all files by specifying the --all option.
- hg rollback: Undoes the last Mercurial transaction, which is commonly a commit, a pull from a remote repository, or a push to this repository. You can only undo a single transaction.
See the online help for all of these commands before attempting to use them!
Mercurial and other distributed source code management systems are the wave of the future. Mercurial is open source software, and pre-compiled versions of Mercurial are available for Linux, UNIX, Microsoft Windows, and Mac OS® X systems. This article highlighted how to use Mercurial to perform a number of common VCS tasks, showing how easy it is to get started with Mercurial. Beyond these basics, Mercurial provides many more advanced commands and configuration options to help you manage your source code and customize your interaction with a Mercurial installation.
- The Mercurial home page is a great starting point for getting information about Mercurial, and it also provides links to many other sources of related information.
- Mercurial: The Definitive Guide by Bryan O'Sullivan is the definitive work on Mercurial. The complete text of the book is available online in HTML and epub formats, which should hold you until your paper copy comes in the mail.
- Joel Spolsky's hginit.com site provides a great introductory tutorial for using and working with Mercurial.
- Projects using Mercurial include Mozilla, IcedTea, and the MoinMoin wiki. See the list at the Mercurial site for many more examples.
- The Linux developerWorks zone and AIX and UNIX developerWorks zone provide a wealth of information relating to all aspects of Linux, AIX, and general UNIX systems administration and expanding your AIX, Linux, and UNIX skills.
- New to Linux or New to AIX and UNIX? Visit the appropriate New to page to learn more.
- Browse the technology bookstore for books on this and other technical topics.
- The Mercurial wiki's Download page provides links to compiled versions of Mercurial for all supported platforms.
- TortoiseHg provides a shell extension and command-line Mercurial applications for Microsoft Windows systems.
- The MercurialEclipse plug-in provides support for Mercurial within the Eclipse Integrated Development Environment.
- FogCreek Software's Kiln provides free trials and student/start-up versions of its online, Mercurial-based hosting service that is similar to online Git hosting services such as GitHub, Repo.Org.Cz, and so on.
- BitBucket provides free hosting for Open Source projects in its online, Mercurial-based hosting service, as well as paid hosting plans for larger groups of developers.
- Explore msysGit, which is Git for Windows.
|
<urn:uuid:88fedab1-8a35-413a-8a54-1900799b1139>
|
CC-MAIN-2017-09
|
http://www.ibm.com/developerworks/aix/library/au-mercurial/index.html?ca=drs-
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00539-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.851762 | 3,898 | 2.8125 | 3 |
I just read the WordPress article about World IPv6 Day, and many of the comments in response expressed that they only had a very basic understanding of what an IPv6 Internet address actually is. To better explain this issue, we have provided a 10-point FAQ that should help clarify in simple terms and analogies the ramifications of transitioning to IPv6.
To start, here’s an overview of some of the basics:
Why are we going to IPv6?
Every device connected to the Internet requires an IP address. The current system, IPv4, was standardized in the early 1980s and was designed with a 32-bit address space of about 4 billion addresses. At the time, the Internet was an experiment and there was no central planning for anything like the commercial Internet we are experiencing today. The official reason we need IPv6 is that we have run out of IPv4 addresses (more on this later).
Where does my IP address come from?
A consumer with an account through their provider gets their IP address from their ISP (such as Comcast). When your provider installed your Internet, they most likely put a little box in your house called a router. When powered up, this router sends a signal to your provider asking for an IP address. Your provider has large blocks of IP addresses that were allocated to them, ultimately, by IANA (the Internet Assigned Numbers Authority).
If there are 4 billion IPv4 addresses, isn’t that enough for the world right now?
It should be, considering the world population is about 6 billion. We can assume for now that private access to the Internet is a luxury of the economic middle class and above. Generally you need one Internet address per household and only one per business, so it would seem that perhaps 2 billion addresses would be plenty at the moment to meet the current need.
So, if this is the case, why can’t we live with 4 billion IP addresses for now?
First of all, industrialized societies are putting (or planning to put) Internet addresses in all kinds of devices (mobile phones, refrigerators, etc.). So allocating one IP address per household or business is no longer valid. The demand has surpassed this considerably as many individuals require multiple IP addresses.
Second, the IP addresses were originally distributed by IANA like cheap wine. Blocks of IP addresses were handed out in chunks to organizations in much larger quantities than needed. In fairness, at the time, it was originally believed that every computer in a company would need its own IP address. However, since the advent of NAT/PAT in the early 1990s, most companies and many ISPs can easily stretch a single IP to 255 users (sharing it). That brings the actual number of users that IPv4 could potentially support to well over a trillion!
Yet, while this is true, the multiple addresses originally distributed to individual organizations haven't been reallocated for use elsewhere. Most of the attempted media scare surrounding IPv6 is based on the fact that IANA has given out all the centrally controlled IP addresses, and the IP addresses already given out are not easily reclaimed. So, despite there being plenty of supply overall, it's not distributed as efficiently as it could be.
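The arithmetic behind those claims is easy to check. Here is a small, illustrative Python sketch using the standard ipaddress module; the 10.0.0.0/8 block shown is one of the standard private ranges that NAT hides behind a single public address:

import ipaddress

print(2 ** 32)    # the entire IPv4 space: 4,294,967,296 addresses

# One RFC 1918 private block that NAT can hide behind a single public IP.
private = ipaddress.ip_network("10.0.0.0/8")
print(private.num_addresses)    # 16,777,216 internal hosts

print(2 ** 128)   # the IPv6 space: roughly 3.4 x 10**38 addresses
print(ipaddress.ip_address("2001:db8::1").version)    # 6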
Can’t we just reclaim and reuse the surplus of IPv4 addresses?
Since we just very recently ran out, there is no big motivation in place for the owners to give/sell the unused IPs back. There is currently no mechanism or established commodity market for them (yet).
Also, once allocated by IANA, IP addresses are not necessarily accounted for by anyone. Yes, there is an official owner, but they are not under any obligation to make efficient use of their allocation. Think of it like a retired farmer with a large set of historical water rights. Suppose the farmer retires and retains his water rights because there is nobody to whom he can sell them back. The difference here is that water rights are very valuable. Perhaps you see where I am going with this for IPv4? Demand and need are not necessarily the same thing.
How does an IPv4-enabled user talk to an IPv6 user?
In short, they don’t. At least not directly. For now it’s done with smoke and mirrors. The dirty secret with this transition strategy is that the customer must actually have both IPv6 and IPv4 addresses at the same time. They cannot completely switch to an IPv6 address without retaining their old IPv4 address. So it is in reality a duplicate isolated Internet where you are in one or the other.
Communication is possible, though, using a dual stack. The dual-stack method is what allows an IPv6 customer to talk to IPv4 users and IPv6 users at the same time. With a dual stack, the Internet provider will match up IPv6 users so they talk over IPv6 when both ends are IPv6 enabled. However, IPv4 users CANNOT talk to IPv6 users, so the customer must maintain an IPv4 address; otherwise they would cut themselves off from 99.99+ percent of Internet users. The dual-stack method is just maintaining two separate Internet interfaces. Without maintaining the IPv4 address at the same time, a customer would isolate themselves from huge swaths of the world until everybody had IPv6. To date, less than 0.0026 percent of the traffic on the Internet has been IPv6, and even that was measured during a short test experiment; the rest is IPv4.
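You can see the dual stack from a client's point of view with a few lines of Python. This is only an illustrative sketch (the hostname is a placeholder, and it needs a working network connection): a dual-stack host advertises both IPv4 (A) and IPv6 (AAAA) addresses, and the resolver returns both.

import socket

host = "www.example.com"    # placeholder hostname
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        host, 443, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])    # a dual-stack host lists both kinds of address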
Why is it so hard to transition to IPv6? Why can’t we just switch tomorrow?
To recap previous points:
1) IPv4 users, all 4 billion of them, currently cannot talk to new IPv6 users.
2) IPv6 users cannot talk to IPv4 users unless they keep their old IPv4 address and a dual stack.
3) IPv4 still works quite well, and there are IPv4 addresses available. However, although the reclamation of IPv4 addresses currently lacks some organization, it may become more economically feasible as problems with the transition to IPv6 crop up. Only time will tell.
What would happen if we did not switch? Could we live with IPv4?
Yes, the Internet would continue to operate. However, as the pressure for new and easy to distribute IP addresses for mobile devices heats up, I think we would see IP addresses being sold like real estate.
Note: A bigger economic gating factor to the adoption of the expanding Internet is the limitation of wireless frequency space. You can’t create any more frequencies for wireless in areas that are already saturated. IP addresses are just now coming under some pressure, and as with any fixed commodity, we will see their value rise as the holders of large blocks of IP addresses sell them off and redistribute the existing 4 billion. I suspect the set we have can last another 100 years under this type of system.
Is it possible that a segment of the Internet will split off and exclusively use IPv6?
Yes, this is a possible scenario, and there is precedent for it. Vendors, given a chance, can eliminate competition simply by having a critical mass of users willing to adopt their services. Here is the scenario: (Keep in mind that some of the following contains opinions and conjecture on IPv6, the future, and the motivation of players involved in pushing IPv6.)
With a complete worldwide conversion to IPv6 not likely in the near future, a small number of larger ISPs and content providers turn on IPv6 and start serving IPv6 enabled customers with unique and original content not accessible to customers limited to IPv4. For example, Facebook starts a new service only available on their IPv6 network supported by AT&T. This would be similar to what was initially done with the iPad and iPhone.
It used to be that all applications on the Internet ran from a standard Web browser and were device independent. However, there is a growing subset of applications that only run on Apple devices. Just a few years ago it was a foregone conclusion that vendors would make Web applications capable of running on any browser and any hardware device. I am not so sure this is the case anymore.
When will we lose our dependency on IPv4?
Good question. For now, most of the push for IPv6 seems to be coming from vendors using the standard fear tactic. However, as is always the case, with the development of new products and technologies, all of this could change very quickly.
|
<urn:uuid:7c834215-c8ec-49c2-9940-e6f325ccd5a7>
|
CC-MAIN-2017-09
|
https://netequalizernews.com/category/topics/ipv6-topics/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00483-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.962882 | 1,723 | 3.15625 | 3 |
Question 4) Test Yourself on CompTIA i-Net+.
Objective : Internet Security
SubObjective: Understand and be able to Describe the Various Internet Security Concepts
Single Answer Multiple Choice
E-mail messages can be easily forged on the Internet. What can you use to be certain that a sender is who he or she claims to be? The correct answer is certificates (digital signatures); the explanations below cover each of the answer choices.
Certificates (digital signatures) are primarily used to verify the identity of a sender, but they can also be used to ensure that data, such as documents, arrives at the destination untampered with.
An Access Control List (ACL) is a series of conditions that determine if network traffic should be passed on or blocked. Firewalls and routers use ACLs to filter out unwanted traffic.
Secure Sockets Layer (SSL) is a protocol used for secure Internet transactions.
Firewalls filter unwanted traffic from private networks.
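To make the ACL idea concrete, here is a toy rule evaluator in Python. The rules, addresses, and ports are invented purely for illustration and do not follow any real product's syntax; real firewalls and routers evaluate much richer conditions:

import ipaddress

# Each rule: (action, source prefix, destination port). First match wins;
# traffic that matches no rule is blocked by default.
RULES = [
    ("allow", "192.0.2.0/24", 443),   # trusted subnet may reach HTTPS
    ("deny",  "0.0.0.0/0",    23),    # block telnet from anywhere
    ("allow", "0.0.0.0/0",    80),    # anyone may reach the web server
]

def filter_packet(src_ip, dst_port):
    for action, prefix, port in RULES:
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(prefix) and dst_port == port:
            return action
    return "deny"

print(filter_packet("192.0.2.7", 443))    # allow
print(filter_packet("203.0.113.5", 23))   # deny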
These questions are derived from the Self Test Software Practice Test for CompTIA Exam #IK0-002: i-Net+.
|
<urn:uuid:f8229b17-b56a-4bda-b4df-8ed36c854a7c>
|
CC-MAIN-2017-09
|
http://certmag.com/question-4-test-yourself-on-comptia-i-net/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00007-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.826946 | 211 | 2.96875 | 3 |
Wi-Fi networks offer rich environment for spread of worms
- By William Jackson
- Jan 30, 2009
An international team of computer scientists has demonstrated in the lab that it is possible for overlapping Wi-Fi networks in densely populated areas to support the rapid spread of malicious code that could infect an entire city in a matter of weeks.
The study, results of which are published in the Jan. 26 issue of the Proceedings of the National Academy of Sciences, showed that by exploiting known security weaknesses tens of thousands of routers could be infected in as little as two weeks from a single point of infection.
“Within six to 24 hours, you could take control of the largest part of the network,” said Alessandro Vespignani, professor at Indiana University’s School of Informatics in Bloomington, one of the study’s authors. “The good news is that this type of network can be protected.”
Securing a relatively small number of routers and endpoints could effectively stem the spread of malware and localize the epidemic, said Vespignani, who also works at the Complex Networks Lagrange Laboratory at the Institute for Scientific Interchange in Turin.
Also contributing to the study were Hao Hu of IU’s School of Informatics and Physics Department; Steven Myers of IU’s School of Informatics; and Vittoria Colizza of Turin’s Complex Networks lab.
The fact that Wi-Fi routers are vulnerable to malicious code comes as no surprise, Vespignani said. “The surprise here is the extent of the proximity network you create with Wi-Fi.”
Wi-Fi is wireless local-area networking based on the IEEE 802.11 family of standards. Routers and access points typically have a range of 10 to 80 meters, depending on their power and local conditions.
The fact that the networks are built on interoperable standards is both a strength and a weakness.
“If two routers are within that range, they communicate,” Vespignani said. Again, this is not a surprise. But the number of networks located close enough to communicate with each other in densely populated urban areas such as New York City or Chicago is greater than expected. “Basically, the entire city is a connected component. People didn’t expect that the network created by proximity would be that large.”
That is a problem because although Wi-Fi security features are available, many routers and access points are not securely configured.
“A lot of people just take the routers out of the box and deploy them completely open,” Vespignani said. Many routers that are configured to use security are not using the latest and strongest protocols.
The team conducted the study using mathematical models that simulate the spread of infectious diseases. This idea is not new. “Biological models have been used to study computer viruses and worms for a long time,” Vespignani said.
The simulations were run against mapping databases created by “war driving” through urban areas with Wi-Fi and Global Positioning Systems equipment to identify and accurately locate open networks. “This was made from real data,” Vespignani said, but the infections were carried out only in simulation.
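To get a feel for how such a model works, here is a deliberately simplified Python sketch in the same spirit (it is not the researchers' model or data): hypothetical routers are scattered at random over a square kilometer, linked when within radio range, and a basic susceptible-infected process spreads from a single seed. Every parameter is invented for illustration.

import random

random.seed(1)
N, RANGE_M = 500, 80.0    # hypothetical router count and radio range in meters
routers = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(N)]

def neighbors(i):
    xi, yi = routers[i]
    return [j for j, (xj, yj) in enumerate(routers)
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RANGE_M ** 2]

# Assume 40 percent of routers are properly secured and cannot be infected.
secured = set(random.sample(range(N), int(0.4 * N)))
seed = next(i for i in range(N) if i not in secured)
infected = {seed}

P_INFECT = 0.3    # per-step chance of hopping to an unsecured neighbor in range
for step in range(50):
    newly = set()
    for i in infected:
        for j in neighbors(i):
            if j not in infected and j not in secured and random.random() < P_INFECT:
                newly.add(j)
    infected |= newly

print(len(infected), "of", N, "routers infected after 50 steps")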
“Is it possible to write this worm? Yes,” Vespignani said. “But we didn’t want to try this, even in the lab.”
Why hasn’t such an outbreak occurred already? “I don’t know,” he said. But much of the attention of the hacker and security world has been focused on the continuing game of exploit-and-defend being played out on the Internet, and only in recent years has the density of Wi-Fi networks reached the point that they could support an epidemic outbreak. Writing malicious code for Wi-Fi routers also is more difficult than for standard computers and servers because of their limited memory and the need to make the malware work in firmware.
The success of mathematical models in predicting the behavior of biological viruses gives a high degree of confidence to the results of this study, Vespignani said. “I am confident in the methodology.” But, he added, “what will happen in the real world will be different” from the simulation, depending on local conditions. “A lot of what will happen will depend on the characteristics of the worm. But the mathematical model is a good approach.”
The purpose of the study is not to frighten people away from Wi-Fi but to alert them to the need for security, he said. By adequately securing as few as 60 percent of Wi-Fi routers, using strong passwords and WPA (Wi-Fi Protected Access) rather than WEP (Wired Equivalent Privacy) protocols, an infection could be stopped before it was able to spread throughout an entire ecosystem.
William Jackson is a Maryland-based freelance writer.
|
<urn:uuid:6e7172ce-d654-4d84-a69a-01d95f295279>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2009/01/30/wifi-malware.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00179-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.960772 | 1,039 | 2.890625 | 3 |
Global science portal using federated search
- By Trudy Walsh
- Jun 29, 2007
A new portal that crosses both international and database boundaries was launched recently for people interested in scientific sources that are unavailable through commercial search engines such as Google.
WorldWideScience.org was developed by the Energy Department and the British Library, along with science and technology organizations in Australia, Brazil, Canada, Denmark, France, Germany, Japan and the Netherlands. It employs federated search technology, a search method that simultaneously executes a query against an array of databases, then aggregates and ranks the results, and gives users a single entry point for searching far-flung science portals in parallel with only one query.
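In outline, the federated pattern is simple: fan the query out to every source at once, then merge and rank whatever comes back. The Python sketch below illustrates the idea only; the source functions are stand-ins, not the portal's real database connectors.

from concurrent.futures import ThreadPoolExecutor

def search_source_a(query):    # stand-in for one national science database
    return [{"title": "Result A1", "score": 0.9}, {"title": "Result A2", "score": 0.4}]

def search_source_b(query):    # stand-in for another
    return [{"title": "Result B1", "score": 0.7}]

SOURCES = [search_source_a, search_source_b]

def federated_search(query):
    # Query every source simultaneously rather than one after another.
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        result_lists = list(pool.map(lambda fn: fn(query), SOURCES))
    # Aggregate and rank the combined hits.
    merged = [hit for hits in result_lists for hit in hits]
    return sorted(merged, key=lambda hit: hit["score"], reverse=True)

for hit in federated_search("ocean carbon flux"):
    print(hit["score"], hit["title"])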
'Scientific research results are archived globally in a plethora of sources, many unknown and unreachable through [the] usual search engines,' said Raymond Orbach, Energy's undersecretary for science. 'This international partnership will open up this vast reservoir of knowledge in a rapid and convenient manner, something that will add great value to our existing knowledge.'
WorldWideScience.org follows the model of Science.gov, the searchable portal for science databases of federal science agencies. WorldWideScience.org was developed and is maintained by Energy's Office of Scientific and Technical Information, which also played a central role in the development of Science.gov. The participating countries contributed databases that can be searched through the portal.
Trudy Walsh is a senior writer for GCN.
|
<urn:uuid:ec518793-dead-4bd9-a2e1-2cd22b524e5b>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2007/06/29/global-science-portal-using-federated-search.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00355-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.923072 | 300 | 2.703125 | 3 |
E-Voting Takes Another Hit
A group of computer scientists has shown how voting results held in electronic voting machines can be changed using a novel hacking technique. It's yet another reason why we need a verifiable, auditable paper trail for electronic voting machines.
The technique they used, dubbed return-oriented programming, was first described by Hovav Shacham, a professor of computer science at UC San Diego's Jacobs School of Engineering. Shacham is also an author of a study that detailed the attack on voting systems, presented earlier this week at the 2009 Electronic Voting Technology Workshop / Workshop on Trustworthy Elections (EVT/WOTE 2009).
From a statement:
To take over the voting machine, the computer scientists found a flaw in its software that could be exploited with return-oriented programming. But before they could find a flaw in the software, they had to reverse engineer the machine's software and its hardware, without the benefit of source code.
Essentially, return-oriented programming is a technique that chains together short pieces of code already present on the system to carry out the attacker's computation, rather than injecting new code. In this demonstration, the researchers exploited a buffer overflow and then used return-oriented programming to take control of the machine.
The team of scientists involved in the study included Shacham, as well as researchers from the University of Michigan and Princeton University. The hacked voting system was a Sequoia AVC Advantage electronic voting machine.
Shacham concluded that paper-based elections are the way to go. I wouldn't go that far, but he did:
"Based on our understanding of security and computer technology, it looks like paper-based elections are the way to go. Probably the best approach would involve fast optical scanners reading paper ballots. These kinds of paper-based systems are amenable to statistical audits, which is something the election security research community is shifting to."
I'd settle for verifiable paper-based audit trail.
Professor Edward Felten, a long-time observer of electronic voting systems also commented:
"This research shows that voting machines must be secure even against attacks that were not yet invented when the machines were designed and sold. Preventing not-yet-discovered attacks requires an extraordinary level of security engineering, or the use of safeguards such as voter-verified paper ballots," said Edward Felten, an author on the new study; Director of the Center for Information Technology Policy; and Professor of Computer Science and Public Affairs at Princeton University.
In February 2008, Felten demonstrated how he was able to access several electronic voting systems at multiple locations in New Jersey.
|
<urn:uuid:78f438ce-4789-4186-93ce-5c6682d93812>
|
CC-MAIN-2017-09
|
http://www.darkreading.com/risk-management/e-voting-takes-another-hit/d/d-id/1082270
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00055-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.958667 | 573 | 2.734375 | 3 |
A document from the seemingly inexhaustible trove delivered by former NSA contractor Edward Snowden shows that the NSA can easily break the old and weak algorithm still used to encrypt billions of calls and text messages all over the world.
Developed in 1987, the A5/1 privacy algorithm – more commonly known as the GSM algorithm – has been cracked repeatedly by researchers, the last time in late 2009 by Karsten Nohl, chief scientist at Security Research Labs, and his team.
Despite that fact, it is still widely used by cellphones relying on the second-generation (2G) GSM technology. Sometimes even if the phone shows that a 3G or 4G network is used for the call, a 2G network is used in the background – especially where the connection is of inadequate quality.
More than 80 percent of cellphones worldwide use weak or no encryption for at least some of their calls, Nohl commented for The Washington Post. In addition, hackers can trick phones into using the less secure 2G networks.
This NSA capability would not be such an important piece of news were it not for its recently revealed ability and propensity to collect phone call data and intercept phone calls around the world – most notably those made by high-ranking foreign government officials such as the German Chancellor Angela Merkel.
“The extent of the NSA’s collection of cellphone signals and its use of tools to decode encryption are not clear from a top-secret document provided by former contractor Edward Snowden. But it states that the agency ‘can process encrypted A5/1’ even when the agency has not acquired an encryption key, which unscrambles communications so that they are readable,” noted WaPo’s reporter Craig Timberg and independent security researcher Ashkan Soltani.
“Experts say the agency may also be able to decode newer forms of encryption, but only with a much heavier investment in time and computing power, making mass surveillance of cellphone conversations less practical.”
But implementing better forms of encryption is pricey, and that is likely the main reason why cellphone service providers haven't jumped on the bandwagon before. Since the revelation of the tapping of Merkel's phone, two leading German carriers have stated that they will be implementing (still breakable, but stronger) A5/3 encryption for their 2G networks.
Let’s hope other providers – in Germany and in the rest of the world – will follow suit.
|
<urn:uuid:2e72379b-ed89-41f9-a446-4ca830518124>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2013/12/16/nsa-can-easily-decrypt-private-cell-calls/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00055-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.948843 | 504 | 2.53125 | 3 |
The National Cancer Institute has taken the first concrete step to make its wealth of genomic cancer information available to a broad base of researchers worldwide -- potentially speeding up cancer research significantly.
The ultimate goal for the project is to build one or more computer clouds filled with data from the institute’s Cancer Genome Atlas that outside researchers can tap into with new data mining and analysis tools. Using that information, scientists say, they’ll be able to learn vastly more about how cancers develop and spread, spot hidden similarities between tumors on different parts of the body and improve treatments.
A presolicitation document posted Monday aims to prepare universities and research labs to bid for the chance to create one of three pilot clouds. Information gleaned from those clouds might be used to create a new cancer cloud, managed by the government, a university consortium, or the private sector, or one of the clouds might develop into a full-scale model, according to the posting.
Because the types of cancer data and the tools used to mine it differ so greatly, it’s likely there will have to be at least two cancer clouds after the pilot phase is complete, George Komatsoulis, director and chief information officer of the National Cancer Institute’s Center for Biomedical Informatics and Information Technology, told Nextgov in August.
The Cancer Genome Atlas contains half a petabyte of information now, the equivalent of about 5 billion pages of text. By 2014 officials expect that figure will grow to 2.5 petabytes of genomic data drawn from 11,000 patients.
Just storing and securing that information would cost an institution $2 million per year, Komatsoulis said, a price tag that’s prohibitive for many small colleges, universities and other research institutions. By putting the data in the cloud and allowing researchers to access it remotely, perhaps on a pay-as-you-go model, the cancer institute could massively expand the number of researchers working on tough genomic problems, he said.
The federal government is working with Amazon on a separate initiative to put the Thousand Genomes Project in the company’s Elastic Compute Cloud where researchers could access the data set and only pay for the computing they use.
The cancer institute plans to hold an online conference in December to help institutions prepare their proposals to build one of the three clouds.
The institute is gathering ideas for how the clouds should be organized on the crowdsourcing website Ideascale.
|
<urn:uuid:38881a73-059e-4513-8189-3bd15699df75>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/cloud-computing/2013/11/bidding-process-begins-cancer-curing-computer-clouds/74561/?oref=ng-dropdown
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00107-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.910304 | 500 | 2.796875 | 3 |
Reducing the Risk: Eliminating Medical Errors in Healthcare [Infographic]
Medical errors cause more than 250,000 deaths annually, a problem that can be addressed by leveraging modern technologies to coordinate care efforts and ensure stakeholders all have the data they need to keep patients healthy. Simple data management errors can lead to mistakes that put patients at risk.
The healthcare sector has long dealt with disparate data management and distribution across industry segments and distinct care provider environments. As a result, different records management procedures are often in place within hospital networks, let alone between different organizations. Throw in a continued dependence on paper records in some settings and entrenched silos within care environments and you are left with a confusing data climate for clinical staff to navigate.
The widespread move to electronic medical record systems (EMRs) has proven to be a step in the right direction when it comes to improving data quality in the healthcare industry. However, this has only been the beginning. Many EMRs do not integrate with others, and plenty of hospital networks are still working to integrate relevant patient data across the various data management systems they use. Even getting a digital image of a test result to a physician in another department can be a problem, especially when regulatory guidelines must be taken into account.
All of these factors create an environment in which healthcare companies need tools to consolidate, standardize and automate data sharing across departments. Rules-based systems can maintain compliance for organizations, giving them the combination of control and flexibility they need to wrangle data and prevent errors.
Want to learn how? Check out the infographic below to get more details on why data plays such a major role in medical errors and learn what you can do to nip the problem in the bud.
For even more information read the Stop the Silent Killer: How the Right Medical Tech can Eliminate Medical Errors eBook.
|
<urn:uuid:209c1f3e-4a6f-4a8b-a5f9-54b64fd203dd>
|
CC-MAIN-2017-09
|
http://www.appian.com/blog/technology-insights/reducing-the-risk-eliminating-medical-errors-in-healthcare-infographic
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00051-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.937866 | 371 | 3.0625 | 3 |
Want Stronger Passwords? Try Bad Grammar
Beware passwords built using too many pronouns or verbs, Carnegie Mellon security researchers say. String together nouns instead.
Want to build a better password? Stick to nouns and adjectives, and avoid verbs and pronouns.
That finding comes from a research paper written by Ashwini Rao, a Ph.D. student at Carnegie Mellon University, and two colleagues, titled "Effect of Grammar on Security of Long Passwords." Rao will present the paper at next month's Association for Computing Machinery's Conference on Data and Application Security and Privacy (CODASPY 2013) in San Antonio, Texas.
"Use of long sentence-like or phrase-like passwords such as 'abiggerbetterpassword' and 'thecommunistfairy' is increasing," the researchers said in their paper. But could some types of passphrases be inherently less secure than others?
To find out, they built a grammar-aware passphrase cracker and preloaded it with a dictionary for speech, as well as an algorithm for recognizing the types of sequences that are typically used to generate passphrases, such as "noun-verb-adjective-adverb."
Passphrases are an often-recommended technique for creating complex passwords. Instead of creating a password like "sheep," for example, security experts recommend a passphrase such as "[email protected]" Simply put, passphrases can add complexity -- making passwords harder to crack or guess for an attacker -- while still being easy to remember.
But as is always the case with passwords, some types of passphrase complexity are better than others. The researchers tested their proof-of-concept software against 1,434 passwords, each containing 16 or more characters, and found that by taking grammar into consideration, they were able to crack three times as many passwords as current state-of-the-art cracking tools. In addition, their tool alone cracked 10% of the password data set.
Cue their resulting finding: The strength of long passwords does not increase uniformly with length.
Based on their research, good grammar comes at a security cost because some grammatical structures are scarcer than others, which in passphrases makes them easier to crack. In terms of scarcity, the researchers noted that pronouns are far fewer in number than verbs, verbs fewer than adjectives, and adjectives fewer than nouns.
The bottom line: Emphasize nouns and adjectives in passphrases. For example, the researchers found that the five-word passphrase "Th3r3 can only b3 #1!" was easier to crack that this three-word passphrase: "Hammered asinine requirements." Meanwhile, they found that the passphrase "My passw0rd is $uper str0ng!" was 100 times stronger than "Superman is $uper str0ng!" and in turn that phrase was 10,000 times stronger than "Th3r3 can only b3 #1!"
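The reasoning is easy to reproduce with a back-of-the-envelope calculation: the size of the guess space for a passphrase template is roughly the product of the vocabulary sizes of its word classes, so templates heavy on pronouns or verbs shrink it dramatically. The Python sketch below uses illustrative vocabulary counts, not the figures from the paper:

import math

# Illustrative part-of-speech vocabulary sizes (not the study's numbers).
POOL = {"noun": 20000, "adjective": 8000, "verb": 5000, "adverb": 3000, "pronoun": 30}

def guess_space(template):
    combos = math.prod(POOL[part] for part in template)
    return combos, math.log2(combos)

for template in (["pronoun", "verb", "adjective", "noun"],
                 ["adjective", "adjective", "noun"],
                 ["noun", "noun", "noun", "noun"]):
    combos, bits = guess_space(template)
    print("-".join(template), f"{combos:.2e} combinations, about {bits:.0f} bits")

Even though the first template has four words and the second only three, the pronoun and verb slots are drawn from such small pools that they add only a few bits, while a template of four nouns is far stronger.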
The researchers said their findings should be applied to help users select more secure passwords. They also said the research may apply not just to passphrases but to securing other types of structures such as postal addresses, email addresses and URLs present within long passwords -- or perhaps even encrypted sets of data.
"I've seen password policies that say, 'Use five words,'" said report author Rao in a statement. "Well, if four of those words are pronouns, they don't add much security."
To keep those recommendations in perspective, complex passwords won't immunize people's passwords against every type of attack -- especially if the passwords get stored insecurely.
But in cases where attackers obtained stored passwords that have been hashed or encrypted, the more complex the password, the longer it may take for an attacker to decrypt the password, if it can be decoded at all. Any delay can buy website owners time to spot the related breach, notify customers and immediately expire all current passwords, thus helping to contain the breach.
|
<urn:uuid:9cc0870f-a517-492a-b1f1-6459760a2182>
|
CC-MAIN-2017-09
|
http://www.darkreading.com/risk-management/want-stronger-passwords-try-bad-grammar/d/d-id/1108425?cid=sbx_bigdata_related_mostpopular_encryption_big_data&itc=sbx_bigdata_related_mostpopular_encryption_big_data
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00403-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953906 | 925 | 3.203125 | 3 |
A new HPC center will be launched by year's end in Massachusetts. The Massachusetts Green High-Performance Computing Center (MGHPCC) will be outfitted with terascale hardware and aims to deliver its computing in an environmentally friendly way. Beyond promises of power efficiency and a reduced carbon footprint, the center is deviating from typical facility models. It will act as a shared resource between multiple universities, requiring users to develop new strategies of implementation.
Last week, IEEE Spectrum likened the facility to a Thanksgiving table of HPC for colleges. University members include MIT, Harvard University, Boston University, Northeastern University and the University of Massachusetts system. The $95 million center will be located in the town of Holyoke and provide the necessary infrastructure to house and remotely access compute resources. This includes power, network and cooling systems. The universities will provide their own hardware and migrate research computing to the center.
Francine Berman, professor of computer science at Rensselaer Polytechnic Institute (and former SDSC director), equated the design to building a city instead of a skyscraper. She expects to see social challenges between member parties. “As hard as the technical, computational, power, and software problems are, social engineering of the stakeholders is dramatically difficult,” said Berman. Potential conflicts may involve the center’s ability to produce groundbreaking research and papers versus expanding its user base, which typically receives less of a spotlight.
John Goodhue, MGHPCC’s executive director, wants the facility to be simple and highly accessible to its users. This provides somewhat of a challenge as the universities currently house compute resources locally. Transitioning to an external facility adds a number of layers between the users and their equipment. He seems confident that the model will work though, saying the center will provide “ping, power and pipe” for its members.
The challenge, he says, is making the physical hardware behave as a set of private local machines for the various users. But thanks to high bandwidth network pipes and machine virtualization, that should now be possible.
The facility is undertaking an ambitious mission. Assuming it can meet the technical needs of its users, the center will also need to deal with universities that share different priorities. If the project turns out to be a success, it could become the model for future collaborative efforts.
|
<urn:uuid:53716ee3-f124-4582-888f-a0c546994d20>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2012/05/29/massachusetts_offers_a_new_model_for_academic_hpc/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00099-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.947361 | 474 | 2.65625 | 3 |
NASA has married augmented reality to heads-up displays and come up with a system to let pilots see through fog, glare, darkness and other conditions that contribute to the single most common factor in airline crashes.
Global positioning systems tell pilots exactly where they are, but don't have any information about rolling terrain, buildings, mountains and other obstacles that are as deadly as they are hard to see in challenging conditions.
"If pilots are not familiar with the airport, they have to stop and pull out maps," said Trey Arthur, an electronics engineer at NASA Langley Research Center in Virginia. "This display, in the new world where these routes are going to be digital, can tell them what taxiway they're on, where they need to go, where they're headed, and how well they're tracking the runway's center line."
The augmented-reality headset fits over one eye and display what looks like the actual runway, taxiways and other directional data as the plane is on the ground as well as the runway centerline and terrain details during the approach to landing.
The headset is a new product in itself, but doesn't use any GPS or terrain data that aren't already available, according to NASA. The system works in ways similar to the heads-up displays fighter pilots use, but incorporates data from high-precision efforts to map the Earth's surface, such as the Space Shuttle's Radar Topography Mission in 2000.
It has been tested in a unique NASA plane with two cockpits – one normal, with windows, the other totally enclosed so the "blind" pilot has to fly using only data from the augmented reality system.
The setup helped keep test pilots from crashing, but was primarily designed to verify the quality of NASA's terrain and location data, the accuracy of the augmented reality display and the ability of pilots to fly using only a virtual picture of the real world.
NASA has been working on Synthetic Vision systems for civilian use since 1993. This headset, which is designed for airliners, could also be adapted to help drivers in cars, though the challenge of gathering enough detailed terrain data to make it practical for cars is a huge challenge.
The challenge of landing a plane, not to mention the downside of failing to notice the terrain into which you're about to fly, is incalculably higher. The number of locations, and the level of detail about airports, needed to make the device useful for pilots is far lower than what would be required to give drivers the same level of detail for the tens of thousands of miles of roads in the U.S.
If the headset ever makes it into the consumer market, it won't be for quite a while.
The headset does not yet even have an official name. NASA is still looking for commercial business partners who can bring the headset to market aimed specifically at pilots. The auto version will have to wait until later.
|
<urn:uuid:1c802597-b0a4-484a-9a7b-d3cb9286a8b3>
|
CC-MAIN-2017-09
|
http://www.itworld.com/article/2730782/consumer-tech-science/nasa-set-to-market-headset-that-lets-pilots--and-drivers---see-through-fog.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00623-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.958685 | 659 | 3.296875 | 3 |
Who uses virtualization, and why?
Virtualization is used widely in small to large data centers by corporations, government entities, service providers, and ISVs, as well as within SMB IT environments.
Fundamental to cloud computing, virtualization enables IT organizations to pool and share resources across multiple users and deploy quickly without overprovisioning. More efficient resource utilization results in lower equipment, space, and power and cooling costs. Virtualization also helps reduce complexity and management overhead, improve application availability and disaster recovery, and increase IT security.
How this technology works
Virtualization in IT environments encompasses several forms including:
Server virtualization utilizes a thin software layer called a hypervisor to create “virtual machines” (VMs), an isolated software container with an operating system and application inside. The VM is called a guest machine and is completely independent, allowing many to run simultaneously on a single physical “host” machine. The hypervisor allocates host resources (CPU, memory) dynamically to each VM as needed.
Storage virtualization has many forms, spanning block, file, disk and tape. Physical storage is hidden and presented as logical volumes, including different mediums (e.g. tape as disk). Storage virtualization enables pooling devices and provisioning capacity to users as logical drives. Advanced solutions enable arrays to be managed as one logical unit and capacity provisioned from one logical pool.
Thin provisioning allocates shared physical resources (memory, CPU, disk) based on need rather than on the amount that appears to be available. This allows more resources to be allocated than are physically available – called oversubscription – and avoids resources being left unused. A small numeric sketch of oversubscription appears after this list.
Network virtualization enables network resources (hardware and software) to be deployed and managed as logical vs. physical elements. Multiple physical networks can be consolidated into a single logical network, or a single physical network can be segmented into separate logical networks. Network virtualization also includes software emulating switching functionality between virtual machines.
Virtual Desktop Infrastructure (VDI) decouples the desktop from the physical machine. In a VDI environment, the desktop O/S and applications reside inside a virtual machine running on a host computer, with data residing on shared storage. Users access their virtual desktop from any computer or mobile device over a private network or internet connection.
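As noted above under thin provisioning, the oversubscription arithmetic is simple to sketch in Python. The figures below are invented purely for illustration:

# Invented figures: a 10 TB physical pool backing thinly provisioned volumes.
PHYSICAL_TB = 10
provisioned_tb = {"mail": 4, "analytics": 6, "archive": 8}     # logical sizes presented to hosts
written_tb = {"mail": 1.2, "analytics": 2.5, "archive": 0.8}   # space actually consumed

provisioned = sum(provisioned_tb.values())
consumed = sum(written_tb.values())
print(f"Provisioned: {provisioned} TB ({provisioned / PHYSICAL_TB:.0%} of physical capacity)")
print(f"Actually consumed: {consumed} TB ({consumed / PHYSICAL_TB:.0%} of physical capacity)")
# Oversubscription is safe as long as real writes stay well below the pool size,
# which is why thin-provisioned arrays raise alerts long before the pool fills.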
Benefits of virtualization
IT virtualization provides numerous benefits, including:
- Rapid application deployment
- Higher application service levels and availability
- Greater utilization of infrastructure investments
- Fast and flexible scalability
- Lower infrastructure, energy, and facility costs
- Less administrative overhead
- Anywhere access to desktop applications and data
- Enhanced IT security
|
<urn:uuid:cf914f26-ee58-43ed-adff-bf35f23e6925>
|
CC-MAIN-2017-09
|
https://www.emc.com/corporate/glossary/virtualization.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00147-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.89713 | 540 | 3.28125 | 3 |
Smart city deployments are well under-way across the globe, but in India some problems around cost and governmental control are starting to emerge.
The aspirations of India’s Prime Minister Narendra Modi to push forward a smart cities programme suffered a serious setback when the country’s Congress said in December that the smart cities concept did not fit in with the country’s constitution.
The mismatch of Modi’s plans and the constitution comes because of amendments made by Rajiv Gandhi’s government in the 1980s stipulating that only municipal corporations and local bodies can make decisions on development works. Senior party leader Narayan Rane asserted that Modi’s plans involve private bodies bypassing an elected body to take decisions on redevelopment.
The decision may embarrass UK Prime Minister David Cameron whose November trade deal with India included national and state partnership between the UK’s Department of International Development and the Indian Ministry of Urban Development for national and state-led support for the development of smart and sustainable cities. Three cities were named: Indore, Pune and Amaravati.
This isn’t the only difficulty smart city development is likely to face in the coming years – with cost in particular an issue.
Also this week, the Nagpur Municipal Corporation says it will recoup more than two-thirds of its smart city development costs from citizens. In the UK, where local government finances are becoming ever tighter and local revenue raising opportunities restricted, such a move would likely be unpalatable even if it was possible. An obvious alternative – raising money from private sources, has its own difficulties in terms of, for example, ownership, revenue sharing, and priority setting.
The UK’s Centre for Cities identified key issues affecting the development of smart cities in 2014, and a year later said while progress had been made, the fundamental barriers remained unchanged. These include confusion about what ‘smart’ really means – it isn’t always about big, shiny new projects, but often about doing what we already do smarter. Smarter working includes things making better use of existing data, integrating ‘smart’ into core strategy, and more central government devolution of both decision making powers and financial control.
The good news is that real world smart city projects keep coming, and administrations do see the benefit of a more localised and integrated approach.
In the US, President Obama’s Smart Cities initiative will invest US$160 million (£108.4 million) in smart cities. One of the focus points is working with individuals, entrepreneurs, and non-profits interested in harnessing IT to tackle local problems and work directly with city governments. Meanwhile, closer to home, Manchester’s new CityVerve project, recently awarded £10 million from the UK Government’s Internet of Things competition, will use IoT to improve services for citizens in healthcare, energy and environment, culture and community across the city.
Source: https://internetofbusiness.com/red-tape-threatens-to-dumb-down-smart-cities/
In this conclusion of a two-part article, Oliver Rist covers what you need to know to develop a forensic-based response plan, evidence handling and documentation, and forensic tools and intrusion detection.
Articles by Oliver Rist
The science of finding, gathering, analyzing and documenting any sort of evidence is typically defined as 'forensics.' That discipline has branched off into a new specialty, that of 'computer forensics.' Network managers and corporate security teams don't need to be dedicated computer forensics specialists, but they do need to be at least acquainted with the edges of this discipline in order to effectively interact with law enforcement officials at the 'scene' of a computer crime. Oliver Rist reports.
Source: http://www.enterprisenetworkingplanet.com/author/79080/Oliver-Rist/2002-02
Structure the Unstructured
The insights researchers are able to gain when conversing informally are extremely rich, because human brains are adept at making contextually-relevant associations of which a structured database is incapable. For example, a human would know immediately that the words "auto," "automobile" and "car" mean the same thing, or that a past experiment may be "kind of" similar to the one being conducted in a current project. This is what the lunch table of the past delivered.
But what happens when your organization's head pharmacologist is in Boston, the lead chemist is in Beijing, and the available information base involves an enormous breadth of sources and data formats? Those contextually-relevant associations are not so easy to make.
Until organizations are able to "structure" (that is, categorize) the vast quantities of unstructured content at their disposal, they will miss out on a monumental amount of knowledge. This is where less rigid categorization technologies such as advanced semantic search and text analytics come in. But they have to be sophisticated enough to handle the highly-complex nature of scientific data. For instance, a molecule may be represented by name, by an ID number or as an image, so your search solution must be "scientifically aware" enough to recognize these variations.
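As a rough illustration of what such normalization looks like in practice, here is a small Python sketch; the synonym table, canonical identifier and sample documents are invented, and a real system would draw on curated ontologies and structure-aware matching rather than a hand-built dictionary:

```python
# Map surface forms to canonical concepts so a query on any variant finds the same records.
SYNONYMS = {
    "auto": "car", "automobile": "car", "car": "car",
    "acetylsalicylic acid": "CHEM:50-78-2",   # hypothetical canonical ID
    "aspirin": "CHEM:50-78-2",
    "2-acetoxybenzoic acid": "CHEM:50-78-2",
}

DOCUMENTS = {
    "patent-123": "Formulation containing acetylsalicylic acid for pain relief",
    "lab-note-9": "Aspirin tablets showed improved dissolution in trial 4",
    "memo-77":    "Fleet upgrade: replace every automobile older than ten years",
}

def canonical(term: str) -> str:
    return SYNONYMS.get(term.lower(), term.lower())

def search(query: str) -> list:
    """Return document IDs whose text contains any surface form of the query concept."""
    target = canonical(query)
    variants = [t for t, c in SYNONYMS.items() if c == target] or [target]
    return [doc_id for doc_id, text in DOCUMENTS.items()
            if any(v in text.lower() for v in variants)]

print(search("aspirin"))  # finds both the patent and the lab note
print(search("car"))      # finds the memo via "automobile"
```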
Consider a company that needs to search a vast amount of unstructured content, ranging from external patents and journal articles to their own internal documents and research databases. The company needs to identify and extract information relevant to a key project. Using a scientifically-aware text analysis application capable of recognizing chemical structures and biological sequences, researchers would be able to query the content and quickly pinpoint the most relevant information. They would be able to do this without having to know exactly how the data is represented. Without this capability, the time and cost constraints involved in leveraging unstructured content would be too high and, most importantly, critical insights would be missed.
Source: http://www.eweek.com/c/a/Cloud-Computing/How-to-Mine-Scientific-Business-Intelligence-in-the-Cloud/2
Black Box Explains…Digital Visual Interface (DVI) connectors
DVI (Digital Visual Interface) technology is the standard digital transfer medium for computers, while the HDMI interface is more commonly found on HDTVs and other high-end displays.
The Digital Visual Interface (DVI) standard is based on transition-minimized differential signaling (TMDS). There are two DVI formats: Single-Link and Dual-Link. Single-link cables use one TMDS-165 MHz transmitter and dual-link cables use two. The dual-link cables double the power of the transmission. A single-link cable can transmit a resolution of 1920 x 1200 vs. 2560 x 1600 for a dual-link cable.
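A back-of-the-envelope check of those limits, assuming roughly 12 percent blanking overhead (an approximation for reduced-blanking timings), shows why 1920 x 1200 fits on a single link while 2560 x 1600 needs dual-link:

```python
SINGLE_LINK_MPIX_PER_S = 165.0   # one TMDS link clocked at 165 MHz carries ~165 Mpixels/s
BLANKING_OVERHEAD = 1.12         # approximate extra clocks for horizontal/vertical blanking

def required_mpix(width: int, height: int, refresh_hz: int = 60) -> float:
    """Approximate pixel rate a display mode demands, including blanking."""
    return width * height * refresh_hz * BLANKING_OVERHEAD / 1e6

for w, h in [(1920, 1200), (2560, 1600)]:
    need = required_mpix(w, h)
    links = 1 if need <= SINGLE_LINK_MPIX_PER_S else 2
    print(f"{w}x{h}@60: ~{need:.0f} Mpixels/s -> {'single' if links == 1 else 'dual'}-link")
```

The 1920 x 1200 mode needs roughly 155 Mpixels/s, just under the single-link ceiling, while 2560 x 1600 needs well over 165 Mpixels/s and must be split across two links.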
There are several types of connectors: DVI-D, DVI-I, DVI-A, DFP, and EVC.
• DVI-D is a digital-only connector for use between a digital video source and monitors. DVI-D eliminates analog conversion and improves the display. It can be used when one or both connections are DVI-D.
• DVI-I (integrated) supports both digital and analog RGB connections. It can transmit either a digital-to-digital signal or an analog-to-analog signal. It is used by some manufacturers on products instead of separate analog and digital connectors. If both connectors are DVI-I, you can use any DVI cable, but a DVI-I cable is recommended.
• DVI-A (analog) is used to carry a DVI signal from a computer to an analog VGA device, such as a display. If one or both of your connections are DVI-A, use this cable. If one connection is DVI and the other is VGA HD15, you need a cable or adapter with both connectors.
• DFP (Digital Flat Panel) was an early digital-only connector used on some displays.
• EVC (also known as P&D, for Plug & Display), another older connector, handles digital and analog connections.
Source: https://www.blackbox.com/en-au/products/black-box-explains/black-box-explains-digital-visual-interface-(dvi)-connectors
Buzz of the week | The space economy
In the coming months, presidential candidates will talk about all the things the federal government does wrong, and it is often easy to focus on problems, but 50 years ago, the government made an important step – one might say one large step for humanity. It created NASA.
The space agency was created in 1958 in response to the October surprise months earlier: the Soviet Union's launch of Sputnik 1.
In celebration of the agency's upcoming golden anniversary, NASA Administrator Michael Griffin gave a lecture last week about the agency and what he termed the space economy, which he said drives innovation. "NASA opens new frontiers and creates new opportunities, and because of that, [NASA] is a critical driver of innovation," Griffin said. "We don't just create new jobs, we create entirely new markets and possibilities for economic growth that didn't previously exist."
"We see the transformative effects of the space economy all around us through numerous technologies and life-saving capabilities," Griffin said. "We see the space economy in the lives saved when advanced breast cancer screening catches tumors in time for treatment, or when a heart defibrillator restores the proper rhythm of a patient's heart. We see it when GPS, the Global Positioning System developed by the Air Force for military applications, helps guide a traveler to his or her destination. We see it when weather satellites warn us of coming hurricanes, or when satellites provide information critical to understanding our environment and the effects of climate change."
Washington Post writer Joel Achenbach noted that one of the results of Sputnik has nothing to do with space. It was the creation of the Pentagon's Defense Advanced Research Projects Agency, a technology think tank that went on to develop a computer network called ARPAnet. "ARPAnet evolved into the Internet," he wrote.
Perhaps it is the law of unintended consequences, but NASA's upcoming 50th anniversary seems like a good opportunity to reflect on the realm of the possible and wonder how what we are doing today might affect how we live tomorrow.

The Buzz contenders

#2: VA's nine-hour ordeal
The Veterans Affairs Department experienced a nine-hour outage at a data-processing center in Sacramento, Calif., last month that crippled the clinical information systems at 17 medical facilities, including the San Francisco VA Medical Center. Dr. J. Ben Davoren, director of clinical informatics at the center, called the outage "the most significant technological threat to patient safety VA has ever had." Ouch. Regional backup systems were unavailable or overwhelmed at four of the medical centers, he said. Although lawmakers have praised VA's efforts to standardize and aggregate its clinical information systems at regional centers, Davoren said physicians have had concerns that the regionalization of IT resources would create new points of failure that could not be controlled by the sites experiencing the impact, as happened last month.

#3: Judge disappoints JPL employees

Twenty-eight employees at NASA's Jet Propulsion Laboratory are preparing to appeal a judge's decision last week not to issue a temporary injunction that would have prevented JPL from conducting background checks. JPL is obligated under Homeland Security Presidential Directive 12 to conduct background checks before issuing new identification badges to employees and contractors who work in federal buildings. But the employees say the checks are unnecessarily invasive. The California Institute of Technology, which manages JPL for NASA, said anyone who does not comply with HSPD-12 requirements will be locked out of the lab beginning Oct. 27.

#4: A new meaning for lowest bid

How does a small-business contractor stay in business when it submits a winning bid of zero? That's the question some people might be asking since the General Services Administration selected Global Computing Enterprises (GCE) to provide federal contract and grant data for a new database, which must be online and accessible to the public by Jan. 1, 2008. As it turns out, the company had a slight advantage: a contract worth real money to operate GSA's Federal Procurement Data System-Next Generation. However, GCE recognized that GSA needs more than data, so the enterprising company forged ahead and created a prototype of the new database (available at www.ffata.org). Could it be GCE's way of letting GSA kick the tires before making its next contract award?

#5: Cheaper by the tens of thousands

The Agriculture Department took advantage of steep volume discounts last week when it issued the first task order under the governmentwide SmartBuy encryption contract. USDA placed an order for 180,000 Safeboot Device Encryption licenses for its 29 agencies. In announcing the SmartBuy encryption contract this summer, Tom Kireilis, director of the SmartBuy program, touted it as a sweet deal in which agencies could get discounts of 85 percent on orders of more than 100,000 licenses for encryption products. That's a smart buy.
Source: https://fcw.com/articles/2007/10/06/buzz-of-the-week--the-space-economy.aspx
German researchers are helping to push the limits of large-scale simulation. Using the IBM “SuperMUC” high-performance computer at the Leibniz Supercomputing Center (LRZ), a cross-disciplinary team of computer scientists, mathematicians and geophysicists successfully scaled an earthquake simulation to more than one petaflop/s, i.e., one quadrillion floating point operations per second.
The collaboration included participants from Technische Universitaet Muenchen (TUM) and Ludwig-Maximillians Universitaet Muenchen (LMU) working in partnership with the Leibniz Supercomputing Center of the Bavarian Academy of Sciences and Humanities.
The effort hinged on retooling the SeisSol earthquake simulation code to harness more than one hundred thousand cores and over one petaflops of computing power. The popular simulation software is used in the study of rupture processes and seismic waves beneath the Earth’s surface. The goal of this geophysics project was to simulate earthquakes as accurately as possible, paving the way for improved predictive efforts. The project faced a limiting factor, however, in that the computational element was challenging even for a leadership-class system like the 3-petaflops SuperMUC, one of the world’s fastest.
To push beyond this barrier, Dr. Christian Pelties at the Department of Geo and Environmental Sciences at LMU teamed up with Professor Michael Bader at the Department of Informatics at TUM. The duo formed workgroups focused on optimizing the SeisSol program, tuning it for the parallel architecture of “SuperMUC.” The result was an impressive five-fold speedup and a new record on the SuperMUC.
In a virtual experiment, the team simulated the vibrations inside the geometrically complex Merapi volcano, located on the island of Java. The supercomputer chewed through the problem at 1.09 quadrillion floating point operations per second. And this wasn’t just a momentary peak, SeisSol maintained this high performance level during the entire three hour simulation run, incorporating all of SuperMUC’s 147,456 “Sandy Bridge” processor cores.
The official news release from LRZ asserts that this was only possible due to the extensive optimization and the complete parallelization of the 70,000 lines of SeisSol code, enabling peak performance of up to 1.42 petaflops. This corresponds to 44.5 percent of Super MUC’s theoretically available capacity (3.185 petaflops), making “SeisSol one of the most efficient simulation programs of its kind worldwide,” according to the institution.
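The efficiency figures quoted above can be reproduced from the numbers in the article; the short Python calculation below simply derives the fraction of peak and the per-core sustained rate:

```python
peak_system_pflops = 3.185      # SuperMUC theoretical peak, per the article
peak_seissol_pflops = 1.42      # SeisSol peak performance during the run
sustained_pflops = 1.09         # sustained over the three-hour Merapi simulation
cores = 147_456                 # all Sandy Bridge cores used

# ~44.6% when rounded to one decimal; the article quotes 44.5 percent from unrounded figures
print(f"Fraction of machine peak: {peak_seissol_pflops / peak_system_pflops:.1%}")

# Roughly 7.4 GFlop/s sustained per core
print(f"Sustained per core: {sustained_pflops * 1e15 / cores / 1e9:.1f} GFlop/s")
```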
“Thanks to the extreme performance now achievable, we can run five times as many models or models that are five times as large to achieve significantly more accurate results. Our simulations are thus inching ever closer to reality,” observes project lead Dr. Christian Pelties. “This will allow us to better understand many fundamental mechanisms of earthquakes and hopefully be better prepared for future events.”
“Speeding up the simulation software by a factor of five is not only an important step for geophysical research,” notes co-lead Professor Michael Bader of the Department of Informatics at TUM. “We are, at the same time, preparing the applied methodologies and software packages for the next generation of supercomputers that will routinely host the respective simulations for diverse geoscience applications.”
The researchers are planning to extend the project to simulate rupture processes at the meter scale as well as the seismic waves that propagate for hundreds of kilometers. The work has the potential to help humanity better prepare for these often damaging, and even deadly, natural forces.
SuperMUC, which was the world’s fourth fastest when it debuted in 2012, employs Intel Xeon processors running in IBM System x iDataPlex servers and has a LINPACK performance of 2.897 petaflops. It touts an innovative warm-water cooling system developed by IBM called Aquasar. The system currently holds the tenth spot on the TOP500 list and LRZ expects to double its performance in 2015.
The US Sequoia supercomputer (with 20 petaflops peak capability) may have trail-blazed the sustained petascale application front, but this level of scale is essential if exascale timelines are to be met. The earthquake simulation project will be highlighted at the International Supercomputing Conference in Leipzig, Germany this June with the session title: “Sustained Petascale Performance of Seismic Simulation with SeisSol on SuperMUC.”
Source: https://www.hpcwire.com/2014/04/15/earthquake-simulation-hits-petascale-milestone/
Self-driving vehicles, traffic lights that adjust based on vehicle flow, bike sharing and smart pavement that provides public Wi-Fi access—those are just a few of the ideas for making cities smarter.
Not only have municipalities embraced the smart city concept—using technology to manage a city’s assets, improve the efficiency of services, reduce consumption of resources, reduce costs and improve the quality of life—but many are making it a reality.
The U.S. Department of Transportation has joined in to help cities implement smart city ideas, and it is offering a $40 million grant to the winner of its Smart Cities Challenge. The prize will go to the city that has the best plan for integrating innovative technologies such as self-driving cars, connected vehicles and smart sensors into their transportation network.
Recently the seven finalists made their final pitches, offering their ideas for the future of transportation. Below is a look at what they propose, including their video pitches.
The winner, which will be announced this month, will also receive $10 million from Vulcan Inc., as well as expertise, software and products from several companies involved in transportation, communication and environmental technologies.
Austin, Texas—Connected and automated vehicles, smart stations, mobility marketplace
Thanks to Austin’s economic energy, which includes a job market that grew during the 2008 recession while other cities’ shrank, the city has more people than ever living there. In fact, it has twice as many people as it did 30 years ago. Add to that the visitors who flock there for its music and social scene, and you have an urban mobility mess.
To fix the problem, the city has proposed several smart city ideas, including the following:
- Connected and automated cars: This builds on work Google is already doing with its self-driving cars. Austin’s proposal includes a prototype autonomous shuttle from the airport to the nearby smart station.
- Smart stations: These transit hubs bring together a variety of transportation services to help visitors. They serve as centers for deploying autonomous and connected cars, taxis and urban freight.
- Mobility marketplace: The idea is to connect travelers to the best mobility option for them and provide integrated payment options, real-time travel information via an app or a kiosk.
Columbus, Ohio—Real-time transportation systems, smart transportation corridors
Columbus has a three-pronged approach for its future: to be a beautiful city, a healthy city and a prosperous city. That includes connecting all of its neighborhoods, designing safer streets, ensuring all residents have access to quality and affordable transportation, and reducing consumption and emissions.
To achieve those things, it has proposed 12 smart city initiatives, including the following:
- Real-time information about traffic and parking conditions and transit options to minimize traffic issues associated with major events or incidents.
- Smart corridors to improve transit service and efficiency. This may include traffic information boards and electronic signs warning of incidents and providing detours.
- Expanded usage of electric and smart vehicles.
Denver—Mobility on demand, electric vehicles
Denver’s goal is to connect more with less. As its organizers wrote in the city’s proposal, “By connecting users, systems, and infrastructure with technology and information, our Smart City Program will generate fewer emissions, with fewer injuries and fatalities, and provide more transportation options and a higher quality of life.”
The smart city initiatives it proposed to achieve that include the following:
- Enterprise data management ecosystem: The system will incorporate data from many sources to provide a real-time picture of travel in the city, including where people are moving and how they’re getting there.
- Mobility on demand enterprise: Using information in the data management ecosystem, the city wants to create an app and interactive kiosks that integrate all public and private transportation providers. The goal is to help people decide the best transportation option for them.
- Transportation electrification: The city wants to add 103 electric vehicles for its fleet, including buses, taxis and car-sharing vehicles. To support them, the city plans to install 120 charging stations citywide.
Kansas City, Missouri—Autonomous shuttles, pedestrian mobility apps, smart street lighting
In recent years, Kansas City has experienced a city-wide revitalization. It has a thriving technical sector, a vibrant arts scene and the championship baseball team Kansas City Royals. It is no longer a fly-over city, and the growth in population and visitors is affecting the ability to easily travel in and around it.
To improve the situation, the city has proposed several smart city initiatives, including the following:
- Autonomous shuttles will be tested at the city’s airport with the potential to also run downtown.
- Pedestrian mobility apps will inform drivers if a pedestrian with a disability is crossing the street and will give the person more time to cross the street.
- Sensors will be used to monitor and improve mobility, emissions and safety, as well as provide smart street lighting.
Pittsburgh—Intelligent freight management, autonomous shuttle network, travel and accident reporting app
When Pittsburgh’s highway system was built, it provided a quick way into the business district but it created problems that the city is trying to fix today—bisected neighborhoods and increased asthma rates of people living near the highways, to name a couple. Plus, streets originally built for horse carts need to accommodate cars, busses, freight vehicles, bicycles and pedestrians.
The city hopes smart city initiatives can help resolve the problems and has proposed several ideas. Its proposals include:
- Surtrac: This real-time adaptive signal control system will monitor traffic and control lights on the streets that feed into downtown Pittsburgh. As part of the system, traffic lights would use sensors to identify transit and freight vehicles and allow them to move through the signals more quickly, thereby reducing pollution that would occur while the vehicles are idling at a stop light. (A simplified sketch of this kind of priority logic follows this list.)
- Autonomous shuttle: The city proposes using Second Avenue, one of the main streets that goes into the city, to test the use of self-driving autonomous transit vehicles. A charging canopy would be included.
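As a deliberately simplified illustration of the transit and freight priority idea described in the Surtrac item above (this is not Surtrac's actual algorithm; the vehicle classes, bonus seconds and caps are invented), the logic might look something like this:

```python
def green_extension(detected_vehicles: list, base_green_s: float = 20.0) -> float:
    """Extend a phase's green time when transit or freight vehicles are detected."""
    bonus = 0.0
    for vehicle in detected_vehicles:
        if vehicle in ("bus", "freight"):
            bonus += 5.0                        # give priority classes a few extra seconds
    return min(base_green_s + bonus, 45.0)      # never exceed a maximum green time

# One signal cycle: sensors report an approaching bus and two cars
print(green_extension(["car", "bus", "car"]))   # 25.0 seconds of green
```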
Portland, Oregon—Transportation app, Wi-Fi travel kiosks, electric shuttle buses
Portland is at the forefront of alternative modes of transportation (it’s hard to find a more pro-bicycle community), but it still has issues. Transportation safety for those bicyclists, as well as pedestrians and car drivers, is one challenge. It also wants to reduce its carbon emissions by 80 percent.
City organizers say the smart city initiatives proposed will address those. The core of the plan includes an app that would allow residents to compare transportation options—walking, driving, taking public transit or using a ride service—and pay for the option they choose within the app.
Other initiatives include:
- Building traffic sensors and signals that can receive and transmit data, and installing technology in fleet vehicles to collect data on traffic conditions.
- Installing Wi-Fi-enabled kiosks that provide internet access and travel information at transit stops.
- Adding electric neighborhood shuttle buses to its transit service. The goal is that eventually these vehicles will drive autonomously.
San Francisco—Shared connected automated vehicles
San Francisco has been part of the international network of smart cities since 2011 and has been sharing its practices with its sister cities: Paris and Barcelona. It has used technology to make its building operations more efficient, reduce energy use, streamline its waste management system and expand its transportation system.
But city officials recognize that they can do more. The city’s new proposal is to expand and integrate mobility services across the city to incorporate shared connected automated vehicles. It would be a network of shared vehicles, accessible by an app. Officials say the network will eliminate traffic congestion, improve traffic safety, reduce emissions and noise, and more.
“Combined, shared mobility, public transit and CAV technology can reduce demand for street space and parking so that public right of way can be repurposed over time for walking, cycling, open space and to create more affordable housing,” the city’s proposal says.
This story, "Smart City Challenge: 7 proposals for the future of transportation" was originally published by Network World.
Source: http://www.itnews.com/article/3084455/internet-of-things/smart-city-challenge-7-proposals-for-the-future-of-transportation.html
Each year, 850 billion gallons of raw sewage overflow into U.S. streams and rivers. If that amount was poured over New York City, it would create a pool 9 inches deep, said Luis Montestruque, CEO of EmNet, a startup designer of wastewater control systems.
Much of the overflow is caused by combined sanitary and stormwater sewer systems, which are prone to floods during storms; the channels for sewage and storm water runoff are only partially separated. South Bend, Ind., one of more than 700 U.S. cities grappling with the issue, is implementing a unique solution to direct sewage and rainwater to unused parts of its sewer system, preventing unnecessary spills.
The system, called CSOnet, is a "cyber-physical system" because it integrates computation with control, said Michael Lemmon, University of Notre Dame professor of electrical engineering. It watches and alters its own world, similar to how a traffic controller monitors traffic congestion and orchestrates light timing. "Probably what's unique about this is it includes actuation, so that we're actually controlling something," Lemmon said.
Engineers from Notre Dame, Purdue University, EmNet and the city of South Bend began work in 2004 to create a wireless sensor actuator network (WSAN) for the city's wastewater system.
"The advantage of CSOnet is that the intelligence is distributed throughout the system," Montestruque said. "This allows CSOnet to use local data more efficiently and robustly than conventional centralized systems."
A computer network communicates over wireless radio and is integrated into system components called "nodes" -- flow sensors, pressure sensors and smart valves that act in a feedback loop to efficiently store sewage and rainwater.
The system incorporates engineering innovations and is garnering interest for its cost-effective control of wastewater overflows. "It is arguably the largest permanently installed urban-scale wireless sensor network and one of the first cyber-physical systems in the world," Montestruque said.
During a storm, sensors in manhole covers detect high water levels and calculate the amount of available sewer storage space, Montestruque said. The system sends a command signal to valves, pumps and gates to prevent overflows and maximize conveyance capacity. When the storm passes, sewage is slowly released into a wastewater treatment facility.
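In outline, that feedback loop resembles the following Python sketch. It is a hypothetical simplification for illustration only; the node names, level thresholds and valve rule are invented and are not EmNet's actual control algorithm:

```python
# Hypothetical sensor readings: fraction of pipe/basin depth currently filled.
LEVELS = {"basin-12": 0.40, "trunk-07": 0.85, "intercept-3": 0.55}

HIGH, LOW = 0.80, 0.50   # invented thresholds, not EmNet's

def valve_commands(levels: dict) -> dict:
    """Return a valve setting (0 = closed, 1 = fully open) for each monitored node."""
    commands = {}
    for name, level in levels.items():
        if level >= HIGH:
            commands[name] = 1.0   # nearly full: pass flow downstream to avoid a local overflow
        elif level <= LOW:
            commands[name] = 0.2   # plenty of room: hold water back and use this segment as storage
        else:
            commands[name] = 0.6   # in between: release gradually
    return commands

print(valve_commands(LEVELS))
# {'basin-12': 0.2, 'trunk-07': 1.0, 'intercept-3': 0.6}
```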
CSOnet's citywide installation was completed in February 2008 and has 110 wireless sensors installed throughout almost 40 square miles of South Bend. "This will allow the city to understand the details of the inner works of the sewer in preparation for the control phase," Montestruque said.
The real-time sensor information is collected by an EmNet server, for monitoring and archiving purposes, and is regularly accessed on the Internet by work crews that inspect for sewer changes or node malfunctions, said Gary Gilot, South Bend's director of public works.
EmNet will install 10 smart valve controllers to reduce dry weather overflows and flooding, and maximize storage in basins. That project is slated for completion in summer 2009. Beyond that, control may be extended to 30 other sites, he said.
The projected cost for CSOnet is $4 million, Gilot said.
Sewage overflows into nearby streams and rivers often occur during heavy rainfall, when excess water floods pipes in combined sewer systems. The resultant discharge, imbued with biological and chemical contaminants, is called a combined sewer overflow (CSO) event.
The overflows are toxic and can result in hefty fines for cities. Under the Clean Water Act of 1972, the U.S. Environmental Protection Agency (EPA) requires cities to monitor and reduce sewer overflows, and prepare long-term control plans. Fines are levied for not implementing and following a plan, or because of overflow events.
"It's a huge problem; it's basically like a federally unfunded mandate that all these cities are trying to address and are not sure how to," Lemmon said.
South Bend Mayor Steve Luecke estimated in his 2008 State of the City address that the city would need to spend $200 million to $400 million over 20 years to meet EPA guidelines -- and that's in addition to the $120 million now being spent on South Bend's long-term sewer plan.
However, the city could save $110 million to $150 million of that mandate with the use of CSOnet, Luecke said.
Though many newer areas of South Bend have segregated sewer systems, the water and sewage is inevitably combined upon entering the city's older sewer network.
In normal circumstances, the mix is treated at a wastewater treatment plant before its release. But wet weather can put the plant at full capacity. Even dry weather can cause overflows when storm debris plugs sewer lines, thereby flooding other pipes, Gilot said. The resulting EPA fine is $27,500 for each incident. With the monitoring system in place, Gilot said the city has already detected and corrected many potential dry weather overflows.
He said public works estimates that 2 billion gallons of sewage per year empties into South Bend's waterways, which includes the St. Joseph and Wabash rivers. Cities like South Bend drain the sewage mix into natural waterways to prevent it from backing up into homes and businesses.
The EPA estimates that reducing CSO events by 85 percent nationwide would cost $50 billion using traditional technology, Montestruque said. Traditional fixes include building new separated sewer systems, expanding wastewater treatment plants and building large reservoirs, or holding tanks, to temporarily hold sewage, as was done in Chicago.
"These solutions are highly unpopular because this is taxpayer money," Montestruque said. Taxpayers can't use or see sewer improvements, so it's hard to justify spending money on them, he said.
A 2005 estimate for totally segregating South Bend's sewage and storm water was $650 million, Gilot said. Water rates would have to rise 80 percent to amass the necessary capital.
Research and Development
CSOnet originated when Lemmon and Jeffrey Talley, an associate professor of civil and environmental engineering at Notre Dame, began discussing South Bend's water management needs with city officials.
In 2004, Talley led a research and development team to create a sensor and control prototype for stopping the city's sewer overflow problem. After he landed a $1 million grant from the Indiana 21st Century Research and Technology Fund, Purdue University and environmental engineering firm Greeley and Hansen joined the project. Granger, Ind.-based EmNet was founded in 2004 to commercialize the research from Notre Dame and Purdue.
Successful test runs at St. Mary's Lake near Notre Dame paved the way for a pilot in November 2005. The pilot, a small retention basin deployment with six sensors and one controller, prevented an estimated 6 million gallons of sewage from entering the St. Joseph River that month and increased the basin's capacity by 110 percent for about 1 cent per gallon, Montestruque said.
A subsequent study conducted by environmental engineering firm Malcolm Pirnie determined that a citywide installation would reduce CSOs up to 30 percent, Montestruque said.
Although there's been an occasional node malfunction, Gilot said, the system is robust because there's no single point of failure.
"We could see this had potential early," Gilot said. "[The pilot] showed that real-time control logic and communications worked and that the system was robust under tough, real-world conditions."
Battery-operated nodes and wireless communication make for fast implementation and less up-front cost. "It's very unique in that it doesn't require any structure to be in place beforehand," Montestruque said, "It's usable the moment it is installed."
Node-to-node communication that's linked to a hierarchical information structure lets the electronically simple system consume less power. The nodes also rely on highly efficient hardware and middleware to synchronize sleep and awake cycles, extending battery life to two to three years. Purdue is working on the ability to reprogram the nodes wirelessly.
Additionally, algorithms help optimize CSOnet's functions. "One allows communication using mesh network technology; another algorithm is responsible for utilizing sensor information to determine the optimal set points for the valves that control flows; yet another is responsible for energy management," Montestruque said.
The mesh network technology lets neighboring nodes communicate using many paths, thus bypassing obstacles.
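A toy illustration of that multipath idea appears below; the connectivity graph is invented, and real mesh protocols also account for link quality, retries and routing overhead:

```python
from collections import deque

# Hypothetical connectivity graph: each node can reach several neighbors by radio.
LINKS = {
    "manhole-A": ["manhole-B", "manhole-C"],
    "manhole-B": ["manhole-A", "gateway"],
    "manhole-C": ["manhole-A", "manhole-D"],
    "manhole-D": ["manhole-C", "gateway"],
    "gateway":   ["manhole-B", "manhole-D"],
}

def find_route(src: str, dst: str, blocked=frozenset()) -> list:
    """Breadth-first search for any working path, skipping blocked (obstructed) nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(find_route("manhole-A", "gateway"))                         # shortest path via manhole-B
print(find_route("manhole-A", "gateway", blocked={"manhole-B"}))  # falls back via C and D
```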
The composite fiberglass manhole cover used to house the antenna and sensor adds to CSOnet's sturdiness by enabling radio signals to propagate easily from inside the sewer system. Another advantage of the retrofitted manhole cover is its easy installation.
There's already been interest in South Bend's CSOnet, including from places as far away as India, Lemmon said. Gilot predicts interest will increase after the control phase ends and CSOnet can display its full value.
"There's a big potential here for this to have significant impact that goes beyond the horizon of academics," Lemmon said.
Source: http://www.govtech.com/public-safety/Wireless-Sensors-Reduce-Flooding-in-Indiana.html?page=2
Dialogue Toolkit (NGSS)
An Education Partnership • UC Davis • Sacramento State
A Framework for Thinking about Dialogue
There are numerous student dialogue techniques for use in the classroom. In instruction we can be strategic about selecting dialogue protocols for specific purposes. One way to increase the strategic aspect of our approach to dialogue is to consider a framework to help guide our technique selection process. In other words, we benefit by having specific goals for student-to-student dialogue, considering those goals when selecting a technique to employ, and thinking about all of this within a framework that recognizes the classroom context.
Here is a frame with four aspects to consider when incorporating student-to-student dialogue in the classroom.
A. Set the task for talk (What engaging activity or event precedes dialogue?)
B. Regulate the time for talk
C. Structure the dialogue interaction
D. Attend to the accountability – define the product or goal (oral, written, drawing, other representation, etc.)
Using Dialogue Protocols
Possible Goals for Student Dialogue:
• Eliciting prior knowledge
• Practicing vocabulary
• Putting ideas on the table
• Creating hypotheses or potential explanations
• Constructing arguments with evidence
• Processing text
• Giving students a chance to modify their thinking, express ideas or gain ideas
Choosing a Dialogue Protocol:
Be strategic in matching a dialogue protocol to an instructional goal. Different protocols are good for different goals. Here is a breakdown of some dialogue protocols in different categories of use:
Students in groups
- Eliciting prior knowledge / putting ideas on the table / brainstorming hypotheses or explanations: Talking Sticks, Dialogue Dots, Paraphrase Passport
- Constructing arguments with evidence: Dialogue Dots, Paraphrase Passport, Four Corners
- Processing text: Say Something, Paraphrase Passport, Four Corners

Students in pairs
- Eliciting prior knowledge / putting ideas on the table / brainstorming hypotheses or explanations: Paired Verbal Fluency; Give One, Get One
- Getting students on their feet: Quiz, Quiz, Trade
In many dialogue situations it is often a benefit to provide students with stems for starting their responses. The stems serve as a starting point or focusing tool for the dialogue and assists in keeping student talk on topic. Stems are included with some of the dialogue protocols on the following pages. Here is a general list of dialogue stems organized into categories, with a few stems for general use. Stems need not be tied to a particular protocol and you can mix and match them as the need arises.
Expressing an Opinion:
I think/believe that… It seems to me that… In my opinion…
I don’t agree, because…
I got a different answer… I see it another way…
I agree with (a person) that… My idea is similar to/related to
My idea builds upon ____’s idea that …
Asking for Clarification:
What do you mean?
Will you explain that again? I have a question about…
That’s an interesting idea
I hadn’t thought of that
I see what you mean
Partner and Group
We decided/agreed that… We concluded that…
Our group sees it differently
We had a different approach
Soliciting a Response:
What do you think? Do you agree?
We haven’t heard from ____ yet.
How did you get that answer?
I guess/predict/imagine that… Based on….. I infer that…
I hypothesize that…
Offering a Suggestion: Maybe we could… What if we…
Here’s something we might try…
I discovered from that… I found out from that…
______pointed out to me that…
______shared with me that….
So, you are saying that…
In other words, you think… Your thought is that…
Holding the Floor: As I was saying… Finishing my thought… I was trying to say…
Stems for general use:
“As I read this I was thinking …” “After doing that I think …” “After reading that I …”
“This makes sense to me because …”
“This doesn’t make sense to me because…”
“One question I have about this is …”
This protocol provides a method for grouping students that can save you time when making transitions into dialogue or other work in your classroom. Seasonal Partners and Clock Partners work in the same way. For Seasonal Partners, students divide a paper into quadrants and select partners so they can pair up later.
It helps to go through the process for students and model it as you proceed. You would say, “Draw a line down the middle of your paper. Next, draw a line across your paper halfway down the page. You now have four quadrants.” Drawing this out on a chart paper while saying the directions helps the process. At this point, model the labeling of the quadrants, beginning with spring, then summer, fall and winter (see below). Next, model how to
populate the quadrants with partners’ names, saying, “Find a spring partner, put their name in your spring quadrant and then put your name in their spring quadrant.” Modeling the process helps tremendously. Now tell students, “Do the same with a summer partner, then a fall partner and finally a winter partner.”
Clock Partners work much the same way as Seasonal Partners. But in Clock Partners there is a partner for each hour on the clock, so students find one o’clock, two o’clock, … up to 12 o’clock partners. This provides you with many options for pairing students.
A particularly creative eighth-grade teacher we worked with has her students choose new Seasonal Partners for each unit and uses terms from the science for groupings. For instance, for an astronomy unit her students had galaxy partners (spiral, elliptical, dwarf and lenticular), and for an ecology unit students had producer, consumer, omnivore and carnivore partners.
After partners are established you will notice some patterns. Students tend to populate the upper-left quadrant in Seasonal Partners with a best friend, and the lower-right quadrant tends to be someone they don’t know as well as the partners in the other three quadrants. This pattern holds for the other partnering protocols, with first options going to more familiar classmates and later options to less familiar classmates.
Say Something is a straightforward protocol and is effective in small groups. The Say Something protocol is useful in getting a lot of thinking on a topic out in a short period of time and it is effective in deeply exploring concepts or ideas.
Following an experience (science activity, lab, reading, video clip, or lecture, etc.), students (in their groups) take turns saying something about their thoughts on the topic. One student starts off (teachers may designate that person – i.e. the student sitting closest to the door) and makes a comment about the experience, activity, reading, or information while the other group members listen. Then the next group member makes a comment reacting to or adding something to the original comment or adding a new idea. The process proceeds until each group member has had an opportunity to contribute something to the original comment. The intent is to have all comments relate in some way to the topic under consideration as a means of thoroughly exploring ideas and understandings.
In Say Something, one person talks at a time, and the chance to “say something” proceeds around the group until all have had an opportunity. Generally, when students respond in Say Something, they do one or more of the following: make an observation or comment, clarify something, make an inference, make a connection or ask a question.
In every classroom there are students who are, for whatever reason, unwilling to share their thinking. One possibility is to offer students the option of saying something such as “I have nothing to add to the topic of – insert the topic here – at this time.” Having them mention the topic helps keep them listening.
In setting up the Say Something protocol a useful technique for training the students is to ask, “What would it look like and sound like if we are following the protocol?” The response for what we would see should include things like people sitting in small groups, one person’s mouth moving in each group and others looking at the speaker. The response for what we would hear would include a few voices at any one time and comments about the topic being considered.
Here are some useful stems that can be used in the Say Something protocol.
I noticed that … I think that …
I saw (heard, smelled)
This is good because … This is hard because … This is confusing because …
This makes sense because …
Now I understand … No, I think it means …
At first I thought …, but now …
I agree with you, and … What this means is …
I predict that … I bet that … Based on these data I think … One thing I think is …
I wonder if …
This reminds me of …
This process is
This is similar to
This … makes me think of … It also …
This … is like …
How did …
In what ways is
… like … What might happen if …
Do you think that
What evidence supports …
In other words, are you saying …
The Final Word protocol helps groups hear each other, think, and focus on information they have read. It is an excellent protocol for working on listening skills and processing written text. With this protocol it helps to keep groups small – three to five students, and of consistent size across the class. This protocol takes a while to execute as each person in the group will be commenting then listening to all other group members comment.
Again, it helps to talk with the students about what will be heard in the classroom if everyone is following the protocol – i.e. “We should hear one conversational voice from each group at
a time and no interruptions or cross-talk.” Making these expectations clear helps everyone adhere to the process.
Group members can take brief notes or write down their thoughts while others are speaking –
this helps support individual writing that may come later.
Final Word Directions
1. Students read the passage, selection or article (often in a Silent Sustained Reading) and while reading they underline, highlight or take notes on two or three statements from the reading that makes them think or makes an impression on them, or that they wonder about.
2. Once everyone in the group is finished reading and underlining or taking notes, one student reads one of his or her underlined statements – no additional comments, they just read the statement.
3. The person sitting to the speaker’s right then makes a comment about the statement that the first student read. The comment has to relate to the original statement and it can be almost anything the statement makes them think about. The other group members listen quietly.
4. After each person finishes their comment, the process proceeds around the group in round robin fashion until it gets back to the person who read the original statement.
5. The first speaker then has the opportunity to say something about the statement and the comments heard from the other group members. This person gets the “Final Word” on their original statement.
6. The process repeats with the second person and so on until everyone in the group has had a turn.
Stems for use in the Final Word protocol: “After hearing that I wonder …”
“It seems that what that statement means is …” “It is interesting that you …”
“I always thought that …”
Paired Verbal Fluency is a dialogue protocol that produces an intensive exchange of knowledge between two people in a short amount of time. It can be useful to have a brief “silent think time” before the talking begins. You can do this before pairing, after pairing or even by having students writing down a few thoughts before pairing.
Paired Verbal Fluency Directions
The teacher decides on the idea, concept, topic, reading or process to be discussed. Students pair up and are instructed to think about what they know about this topic (or idea, concept, process, reading, etc.).
Designate person A and person B. Have person A raise a hand and reiterate that they will be speaking first. Then have person B raise their hand and remind them that their first task is to listen to their partner (this clarifies the roles and expectations).
Round 1: Partner A speaks and partner B listens for 20 seconds. Pause then switch, B speaks and A listens for 20 seconds.
Round 2: Partner A adds to the topic, partner B listens for 30 seconds. Pause then switch, then B adds to the topic and A listens for 30 seconds.
Round 3: Partner A summarizes or adds and partner B listens for 45 seconds. Pause then switch, then B summarizes or adds and A listens for 45 seconds.
Below is a tool for adding structure and written product to Think-Pair-Share. An added benefit is that students end up with a more usable product. Additionally, the organizer helps cure a common problem with Think-Pair-Share – the tendency for the think part to be very abbreviated. The organizer slows the process a bit and supports thinking, as well as provides a scaffold for future writing.
My name: Partner’s name:
Think – my thoughts or understanding at this point
Pair – what I understand my partner is telling me
Share – our common understanding after talking, what we can share with others or what was most important in our dialogue
Quiz, Quiz, Trade is an active and engaging technique for review. It allows students to craft questions, check answers and cover recently addressed curriculum.
Quiz, Quiz, Trade is also a good technique for working on questioning skills. By collecting and posting questions, teachers can work with the class to differentiate between types of questions – lower order or recall and higher order or questions requiring figuring something out.
Quiz, Quiz, Trade Directions
1. Students write one question about the material on a Post-it Note, piece of paper or index card. They write the answer on the other side (teachers should remind them to use their own knowledge, the book or other resources to help them write a thoughtful question and check their answer).
2. Each student should find a partner.
3. Students ask a question, then wait for an answer. The partner doesn’t give an answer right away.
4. Next, the student’s partner will ask a question. Then each partner tries to answer the other partner’s question.
5. The students share the correct answers with each other – the teacher emphasizes how important it is for each of them to understand the question and answer.
6.The two students exchange follow-up questions and then find a new partner.
Repeat Steps 3-6 until each student has spoken with the designated number of partners.
The Talking Sticks protocol helps even out the contributions in a group, gets more students contributing and moderates the pace of dialogue.
Talking Sticks Directions
In Talking Sticks, each student in the group places his or her pencil/pen in the middle of the table (These are the “talking sticks.” An alternative is to have other items be the talking sticks
– one teacher we’ve observed uses colored craft sticks). To speak, a student must pick up their talking stick and hold it while they talk. Once finished, they set their talking stick in front of them and they are not allowed to comment again until all the other group members have had a turn (picked up their talking stick from the center of the group and commented). If a group member does not want to comment, he or she may pick up the talking stick and say, “pass.”
Paraphrase Passport builds understanding in three ways. First, it requires attentive listening. Second, it requires restatement of anther person’s statement in the form of a paraphrase. Third, it allows ideas and understanding to be verbalized and added to. Other benefits of using Paraphrase Passport include lessening the occurrence of one person dominating the dialogue and increasing involvement by all students.
Paraphrase Passport can be supported by modeling the protocol for the class and by having one person in each group designated to facilitate the process – they listen for paraphrasing going on and get group members to paraphrase if they are not paraphrasing. Students will need practice at paraphrasing, as they generally are not used to listening carefully enough to be able to paraphrase or to effectively construct a paraphrase statement. However, these are two very powerful skills in communication. Paraphrase Passport used effectively and repeatedly builds the skill of listening to understand.
This protocol takes practice. It generally is not a norm of Western culture to paraphrase before adding comments or ideas. It helps to keep the groups to about four students each so there are not too many thoughts to react to. Paraphrase Passport can be supported by talking about what it should sound like – one voice from each group heard at any one time; what it should feel like – speakers feeling as if they were heard; and what it should look like – three people looking at one speaker in each group.
Paraphrase Passport Directions
1. Form groups of three or four students*.
2. One student begins by making a comment related to the topic.
3. The next person to speak must paraphrase the first comment before stating their comment.
4. Repeat the process (students paraphrase the student before them and then add their own comment) and continue for a predetermined time, or until the topic has been thoroughly discussed.
* It helps if there is one person in each group designated to facilitate this process – to listen for paraphrasing and remind group members to paraphrase.
3-2-1 is a great activity for review, for getting feedback about student understanding (formative assessment), for getting students out of their seats and for sharing ideas. 3-2-1 allows students to think about what they know and speak about what they know as well as what they have questions about. Doing 3-2-1 on index cards and collecting them as an exit ticket gives teachers a formative assessment that can inform instruction for the next day.
Think about ____. Students should write down the following:
3 things that they learned
2 questions they have
1 new idea or connection
Students can find a partner and take turns sharing their lists.
The 3-2-1 items (learning, questions and connections) have to be exactly that. Teachers could adapt the items to fit their curriculum. So a teacher may go with 3 things learned, 2 things the students already knew and 1 way to apply the new knowledge or some other permutation.
The Dialogue Dots protocol is another tool for increasing participation and involvement. It is reminiscent of Talking Sticks and Final Word in that all group members have an opportunity to contribute to the dialogue. Dialogue Dots encourages listening and deeper thinking by moderating the pace of the conversation and ensuring that one person at a time is speaking in each group.
There are several variations that can be employed – use stickers or colored markers/colored pencils, or have students actually write out their statements using their unique colored pencil on a share sheet. Teachers can quickly and easily monitor participation in groups by circulating throughout the class and looking at the colored dots, marks or comments.
Dialogue Dots Directions
1. Form groups. Each student in the group is given a sheet of different colored dots – the teacher can also substitute colored markers or colored pencils.
2. Provide an index card, sheet of paper or chart paper for the group.
3. A student starts by making a comment related to the topic and then placing a sticker on the index card or paper.
4. Other group members take turns making comments and placing their sticker on the card or paper until each student in the group has made one comment.
5. Once everyone has made a comment, repeat the process.
The Four Corners protocol affords several beneficial outcomes. First, it gets students up and moving around in a directed manner. Second, it allows for some student-to-student dialogue on what they think about something. Remember the importance of stance? Third, it has two built-in components of accountability – the recording of group thinking on the chart and the extension of writing a paragraph about their reasoning in support of their position. And lastly, it has the built-in potential for an entire class dialogue.
Four Corners Directions
1. Hang a sheet of chart paper in each of the room’s four corners.
2. Label the first chart “Strongly Agree,” the second “Agree,” the third “Disagree” and the fourth “Strongly Disagree.”
3. State the issue or controversy and have the students stand in the corner that best reflects their position on the issue.
4. Direct students in each corner to have a brief conversation about their positions. Then have them work as a group to list on the chart paper four or more reasons for their position.
5. Each group then presents their ideas to the class.
6. Organize a debate in which those who strongly agree make a point by stating one of their arguments.
7. Then those who strongly disagree can offer a counterpoint from their poster and state the reason for their argument.
8. Next, those who agree can respond by explaining why they agree, but do not strongly agree, with the argument.
9. Finally, those who disagree can contribute by telling why they disagree, but do not strongly disagree, with the argument.
10. Continue steps six through nine for as many of the remaining statements as you wish.
11. As an extension, each student can use the statements to help them write a paragraph supporting their position.
Walkabout Review, as the name implies, is another excellent protocol for reviewing content. It gets students up, moving around and talking to one another about their knowledge and understanding in a structured manner. It has students listen and produce writing, so they are learning in multiple modalities and it allows for multiple perspectives or understandings to be considered.
Teachers can adapt Walkabout Review to have as few or as many partners interviewed as they wish. Teachers can also change the task in each box. In this example they are recollections, organization (how the content might be classified or grouped) and connections being made. To further adapt Walkabout Review, Bloom’s Taxonomy can be used as a source for thinking about other actions to put in the interview boxes.
Walkabout Review Directions
1. Students should have an in-depth interview with a partner about a topic, concept, or skill.
2. In the interview, partners should ask what they recall about the topic, concept or skill under consideration. Partners further should ask how they are organizing or categorizing the content and finally, what connections they are making to other knowledge.
3. Have each student fill out one column of the chart per interview.
4. Have each student interview a partner about the topic, concept or skill.
5. Have the students change partners and repeat the process.
Give One, Get One is a versatile protocol that can be used nearly anywhere in a unit of instruction. If the goal is to reveal initial thinking, it can be used at the beginning of a unit or lesson. If the goal is to check emerging understanding, it can be used during the unit or lesson, or if the goal is review, it can be used at the end of the unit or lesson. Once again, it is a protocol that gets students up, moving and exchanging thinking in a manner that has structure and accountability.
The protocol works on the skills of listening and paraphrasing as well as getting a lot of different perspectives and understandings circulating in the classroom. Teachers can peruse the Give One, Get One papers to assess students’ understanding on a topic.
Give One, Get One Directions
1. Construct the Give One, Get One table.
2. Have your students fill in three thoughts of their own about the concept, topic or activity.
3. Have the students pair off as partners A and B.
4. Partner A shares one of his or her thoughts. Partner B paraphrases partner A’s comment, then adds it to their table.
5. Partner B shares one thought with partner A. Partner A paraphrases partner B’s comment, then adds it to their table.
6. Students then find new partners and repeat steps 3 to 5 with a new thought.
7. The students pair off again and repeat steps 3 to 5 with their final thought.
The table has two columns: "My initial thoughts" and "Paraphrases of my partners' thoughts."
Odd One Out
Science Formative Assessment (Page Keeley)
Odd One Out combines seemingly similar items and challenges students to choose which item in the group does not belong. Students are asked to justify their reasoning for selecting the item that does not fit with the others (Naylor et al., 2004).
Odd One Out provides an opportunity for students to access knowledge and analyze relationships between items in a group. By thinking about similarities and differences, students are encouraged to use their reasoning skills in a more challenging and engaging way. The technique can be used to stimulate small-group dialogue after students have had the chance to think through their own ideas. As students discuss their ideas in a group, they may modify their thinking or come up with ways to further test or research their ideas.
Odd One Out can be used at the beginning of instruction to find out what students know, during development of conceptual understanding to examine reasoning, or near the end of instruction to inform the teacher about how well students are reasoning with ideas, concepts and vocabulary.
For each example set below, students identify which item is the odd one and explain why it is the odd one:
• Force, Inertia, Object, Friction
• Integer, Multiplier, Coefficient, Factor
Summary Protocol is an effective method for engaging students with text. Benefits of Summary Protocol include group members supporting one another and using dialogue to scaffold the task of making sense and taking notes. In using the protocol, groups of three work best, though there may be times when that needs to be adjusted. The teacher selects an appropriate amount of text for students to summarize based on the skill level in the classroom. It is highly beneficial to model the Summary Protocol process and practice it before students engage in it on their own.
Here are the directions for Summary Protocol:
• Form groups of three. One person will keep the group on-task.
• Read one paragraph silently (leader makes sure all group members know where paragraph starts and ends). Each person reads silently to him or herself, and only when everybody has finished with the paragraph does the group progress to the next step.
• Group discusses the content of the paragraph. All group members should contribute (the leader facilitates this).
• Group comes to consensus about one or two main ideas.
• Talk about how to write the main idea(s).
• Each group member writes down the main idea(s) on his or her own paper.
• Repeat for each paragraph of the reading.
Teachers must be strategic when selecting groups, and they should use their students' reading abilities to construct the most supportive groups. Likewise, they can best judge the amount, length or level of text that their students can handle. Additionally, Summary Protocol is a technique that can be used strategically to help students move toward more independent reading and summarizing.
When Summary Protocol operates efficiently, observers should see and hear groups alternating between silent reading, group dialogue and group writing (the latter may include some dialogue as well). This pattern should repeat itself for each paragraph in the reading.
The Summary Protocol process generates student notes on the assigned reading. Teachers can use the student work as a formative assessment to inform their instruction. Teachers who have used Summary Protocol notice that when first employing this technique, the students' summary notes tend to be very similar, but over time the content of the summaries diverges as the students gain skill in interpreting, comprehending and summarizing text.
Sacramento Area Science Project
http://sasp.ucdavis.edu
Austrian start-up SmaXtec has invented a tool for farmers to remotely monitor the health and wellbeing of their livestock or IoT cows.
Taking advantage of the Internet of Things (IoT), the company is placing connected sensors inside cows' stomachs to transmit health data over Wi-Fi. The monitor will track the cow's health and send a text message to the farmer when she is pregnant.
It’s already being used in two dozen countries across the world, according to Bloomberg.
Typically, it’s hard to tell if a cow is unwell or when it might give birth. If a farmer suspects either of those things, they will usually have to herd the cow into a cattle crush for a vet to check it over.
SmaXtec supposedly gets around some of these problems by placing weighted sensors – about the size of a hotdog – through the cow's throat and into its four stomachs. The device then transmits up-to-the-minute data about the cow's temperature, the pH of her stomach, movement, and activity. A base station in the barn picks up the signals and uploads all of the data to the cloud.
If the cow falls ill, the system e-mails the vet, supposedly before the cow is obviously sick. And when a cow is pregnant, a text message will be sent to the farmer and his team so that they can act accordingly.
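SmaXtec has not published the logic behind its alerts, so the sketch below is purely illustrative: it shows how a base station might turn raw bolus readings into notifications. The threshold values, field names and notify() stand-in are my own assumptions, not anything taken from the product.

```python
# Illustrative only: thresholds, field names and notify() are assumptions,
# not SmaXtec's actual rules.
NORMAL_TEMP_RANGE = (38.0, 39.5)   # assumed healthy body temperature, deg C
NORMAL_PH_RANGE = (5.8, 6.8)       # assumed healthy rumen pH window

def notify(recipient: str, message: str) -> None:
    """Stand-in for the e-mail/SMS gateway the real system would use."""
    print(f"-> {recipient}: {message}")

def check_reading(cow_id: str, temp_c: float, rumen_ph: float) -> None:
    """Flag readings that fall outside the assumed healthy ranges."""
    if not (NORMAL_TEMP_RANGE[0] <= temp_c <= NORMAL_TEMP_RANGE[1]):
        notify("vet", f"Cow {cow_id}: temperature {temp_c:.1f} C out of range")
    if not (NORMAL_PH_RANGE[0] <= rumen_ph <= NORMAL_PH_RANGE[1]):
        notify("farmer", f"Cow {cow_id}: rumen pH {rumen_ph:.1f} out of range")

check_reading("A017", temp_c=40.2, rumen_ph=6.1)   # triggers a vet alert
```

The real system presumably works from trends rather than single readings, but the basic shape of sensor-to-cloud-to-alert is the same.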
SmaXtec claims the device should have around four years of battery life, and can predict with 95 percent accuracy when a cow is pregnant.
A global farming opportunity
So, is this device really better than a farmer’s instinct and expertise? “It’s easier, after all, to look at the situation from inside the cow than in the lab,” SmaXtec’s co-founder Stefan Rosenkranz told Bloomberg.
And while the technology might not be able to assess exactly what is making the cow sick, Helen Hollingsworth, a veterinary nurse at Molecare Veterinary Services (SmaXtec's distributor in the UK), pointed out that things like temperature alarms "make you go and check earlier than you otherwise would. If you can detect illness early, you can start antibiotics earlier and ultimately use less."
Roughly 350 farms across two dozen countries are using this technology, SmaXtec said. The devices have been implanted into 15,000 cows in Britain alone in the last six years. The setup costs of $600 for the network and between $75 and $400 per cow are covered by the company or its distributors, Bloomberg reported. Farmers therefore pay a simple charge of $10 per cow per month to use the service.
SmaXtec is targeting industrial operations in China, the Middle East and the U.S., but it also has an eye on the 90 million cattle on dairy farms all over the world. It will have stiff competition from the likes of Telefonica and Cattle-Watch, but it seems the age of IoT cows is well on its way.
This is not the first IoT case study to involve cows however, with Fujitsu’s head of insurance Nick Dumonde telling IoB at the Internet of Insurance in June how farmers are increasingly using such technologies to monitor the health of pregnant cows.
An application that uses big data to enable clinicians to track superbug transmission in their region has reached 100,000 downloads in less than a month.
The app was created to track antibiotic resistant bacteria or "superbugs" such as e.coli and staph.
Every year more than 2 million people in the U.S. contract infections that are resistant to antibiotics, and at least 23,000 people die as a result, according to a report from the Centers for Disease Control and Prevention (CDC).
Many clinicians who work outside of hospitals are unaware of the bacteria types pervasive in their regions.
Epocrates wrote the app and collaborated with its parent company, AthenaHealth, an e-health record services and medical billing company, to use its big data warehouse. AthenaHealth's cloud-based clinical database of 15 million patient records has antimicrobial susceptibility data connected to geolocated information about bacteria types and resistance patterns.
The new app provides geographical data on the superbugs and information on the appropriate prescriptions to offer for antibiotic resistant strains. The Bugs + Drugs app's results can be viewed for bacteria by specimen types and is paired with Epocrates' free online drug reference app to help with prescribing decisions.
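Epocrates has not described its aggregation pipeline in detail. As a rough sketch of the general idea, the snippet below groups hypothetical lab results by region and organism and reports the share that were resistant; all of the data and field names are made up for illustration.

```python
from collections import defaultdict

# Hypothetical lab results: (region, organism, resistant_to_first_line_drug)
results = [
    ("New York", "E. coli", True),
    ("New York", "E. coli", False),
    ("New York", "S. aureus", True),
    ("Chicago",  "E. coli", False),
    ("Chicago",  "E. coli", False),
]

counts = defaultdict(lambda: [0, 0])   # (region, organism) -> [resistant, total]
for region, organism, resistant in results:
    key = (region, organism)
    counts[key][0] += int(resistant)
    counts[key][1] += 1

for (region, organism), (resistant, total) in sorted(counts.items()):
    print(f"{region:9s} {organism:10s} {100 * resistant / total:5.1f}% resistant "
          f"({resistant}/{total} isolates)")
```

In the real product, the same kind of tally would be refreshed continuously as new lab data points arrive and then surfaced to clinicians by location.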
"It is sometimes challenging to judge what the resistance is for a presenting bacteria. By displaying resistance patterns in my area, Epocrates Bugs + Drugs helps ensure that I prescribe only one course of antibiotic and avoid medications that have shown to be resistant," said Dr. Tolbert, a family physician in Burlington, Ky., said in a statement. "Epocrates helps me create a successful treatment plan, which provides higher levels of patient compliance and satisfaction."
The majority of bacterial searches in the app are in higher-risk metropolitan areas, such as New York, Los Angeles, and Chicago, according to Epocrates. Susceptibility patterns associated with organisms including e.coli and staphylococcus aureus -- commonly found in urine and skin infections -- are among the most viewed by the app's users.
Abbe Don, vice president of user experience at Epocrates, said the company adds more than 6,000 antibacterial lab data points to its cloud network every day to generate fresh results.
"This is a great example of how aggregated [electronic health record] data can be utilized, and clinicians have clearly responded to it," Don said. "The app has now been accessed more than 300,000 times since launch, with an average of approximately 4,000 uses a day."
This article, Superbug tracking app hits top medical spot in Apple App Store, was originally published at Computerworld.com.
Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed . His e-mail address is [email protected].
Increasingly, advanced computing technologies are embraced for their ability to bolster economic competitiveness. The advantages owed to a well-thought out and well-funded HPC strategy are embodied in the adage: to outcompute is to outcompete. One realm where the benefits of going digital have been especially prominent is product design.
Over the last decade or so, there has been a concerted effort in the United States and abroad to bring digital manufacturing tools into the hands of manufacturers who have traditionally been underserved. Often the companies, small to medium-sized shops, who would gain the most from these tools can least afford them. Thus, programs and technologies that lower the barrier to entry are sorely-needed as are campaigns that raise awareness of these benefits, thus helping to overcome cultural resistance associated with navigating the digital divide.
A recent article from London-based publishing house Raconteur Media makes it clear just how transformative this technology can be.
“While supercomputers aren’t mandatory for simulating the workings of products under development, there’s no doubt that the combination of high-powered computers and advanced visualisation technology is rapidly transforming the use of 3D product simulation and visualisation within product lifecycle management (PLM) and product design generally,” the author writes.
Swansea-based medical supply company Calon Cardio-Technology is focused on developing smaller and more efficient blood pumps for use as an alternative to heart transplants in patients with chronic cardiac failure. Designing these artificial hearts so that they are both safe and effective requires a thorough understanding of how blood flows through the pump. This is a sophisticated computational fluid dynamics problem that is beyond the scope of most desktop computers.
Calon researchers are carrying out their design work with the help of Swansea University’s Advanced Sustainable Manufacturing Technologies centre. New designs are simulated on a supercomputer cluster managed by HPC Wales. Using a well-equipped desktop computer, each 3D simulation, requiring a “mesh” of approximately two million elements, would take two to three days to complete, but the supercomputer shortens that time significantly.
“We need to model about a dozen scenarios for each pump design, after which we tweak and refine it,” explains Calon’s chief technology officer Graham Foster. “With a supercomputer, a scenario takes two to three hours, not two to three days, and we can send the dozen scenarios as a single batch. We’re saving time and also cost.”
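Taking the figures quoted above at face value, a quick back-of-the-envelope calculation shows why batching matters. The sketch below simply multiplies out the dozen scenarios per design; it ignores queueing and assumes the cluster runs the whole batch concurrently, which is my simplification rather than anything Calon has stated.

```python
scenarios_per_design = 12
desktop_hours_each = 2.5 * 24   # "two to three days" per scenario on a desktop
cluster_hours_each = 2.5        # "two to three hours" per scenario on the cluster

desktop_total = scenarios_per_design * desktop_hours_each   # run serially
cluster_total = cluster_hours_each                          # assumed fully parallel batch

print(f"Desktop, run serially:   ~{desktop_total:.0f} hours (~{desktop_total / 24:.0f} days)")
print(f"Cluster, run as a batch: ~{cluster_total:.1f} hours")
print(f"Rough speed-up:          ~{desktop_total / cluster_total:.0f}x")
```

Even if the batch is not perfectly parallel, the gap between weeks of serial desktop runs and an afternoon on a cluster is what shortens each design-and-refine cycle.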
Despite the competitive advantages conferred by HPC and a number of programs aimed at opening up access, the technology still remains out of reach to many “ordinary businesses with ordinary computers.”
This is problematic because as Peter Vincent, a lecturer in aerospace aeronautics at Imperial College London, points out, there are “a class of computational fluid dynamics problems that are currently intractable at an industrial level.” He adds that “they can be solved by supercomputers in national laboratories, but not at an industrial level.”
Hybrid computing using accelerators (NVIDIA GPUs) or coprocessors (Intel Phi) may offer one path to these much-needed FLOPS without breaking the bank. GPU maker NVIDIA is working with Imperial College to super-charge desktop computers with NVIDIA’s latest-generation Tesla parts. Each of these chips has about 2,500 compute “cores” while most desktop computers have perhaps two or four. The UK’s most powerful GPU-powered supercomputer, Emerald, has 372 Tesla M2090 GPUs.
“With the right computer algorithm and the right kind of simulation problem, there can be a five-to-tenfold increase in speed for a given expenditure on computing power,” observes Dr. Vincent on the topic of GPU-accelerated computing. “That can bring many more problems within reach.”
Lisson S. (University of Tasmania), MacLeod N. (CSIRO), McDonald C. (CSIRO), Corfield J. (CSIRO), and 14 more authors. Agricultural Systems, 2010.
Bali cattle (Bos javanicus) account for about one quarter of the total cattle population in Indonesia and are particularly important in the smallholder farming enterprises of the eastern islands. The population of Bali cattle is declining in most areas of Eastern Indonesia because demand for beef cattle exceeds the local capacity to supply these animals. Indonesian agencies recognise that new strategies are required to improve the productivity of Bali cattle and to address major constraints relating to animal husbandry and nutrition. To date, the adoption of cattle improvement technologies has been historically slow in Indonesia, as is the case elsewhere. This paper reports on key findings from a long-term study conducted between 2001 and 2009 with smallholder households from six villages in South Sulawesi and Central Lombok, to develop and test an approach for evaluating and increasing the adoption of cattle and forage improvement technologies. The approach is based on the principles of farming systems and participatory research and involved four main steps: (1) benchmarking the current farming system; (2) identifying constraints to cattle production and strategies to address them; (3) desktop modelling of the production and economic impacts of selected strategies; and (4) on-farm testing of the most promising strategies with 30 participant smallholder households. The approach was found to be successful based on: (1) sustained adoption of a package of best-bet technologies by the 30 participating households; (2) evidence of positive production, social and economic impacts; and (3) significant diffusion of the cattle improvement technologies to other households in the project regions. © 2010.
By Art Reisman
Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.
Article Updated March 2012
As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.
Exactly what is deep packet inspection?
All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.
The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.
When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).
Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shapers, layer-7 traffic shaping, etc.
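To make the header/payload distinction concrete, here is a small, self-contained Python sketch that builds a toy IPv4 packet and splits it into the addressing information an ordinary router needs and the payload a DPI device additionally reads. It is illustrative only; real inspection gear works on live captures with far more protocol awareness.

```python
import socket
import struct

def parse_ipv4(packet: bytes):
    """Split a raw IPv4 packet into routing info (header) and payload."""
    # First 20 bytes: minimal IPv4 header (no options).
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    ihl = (ver_ihl & 0x0F) * 4          # header length in bytes
    header = {
        "src": socket.inet_ntoa(src),   # all a router needs to forward the packet
        "dst": socket.inet_ntoa(dst),
        "protocol": proto,
    }
    payload = packet[ihl:]              # what a DPI box additionally reads
    return header, payload

# Build a toy packet: a 20-byte header followed by an "e-mail" payload.
body = b"Subject: lunch plans"
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(body), 0, 0, 64, 6, 0,
                  socket.inet_aton("192.0.2.1"), socket.inet_aton("198.51.100.7"))
header, payload = parse_ipv4(hdr + body)
print(header)    # ordinary routing: addresses only
print(payload)   # deep packet inspection: the content itself
```

Routing only ever needs the first dictionary; DPI is controversial precisely because it also reads the second value.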
How is deep packet inspection related to net neutrality?
Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.
Why do some Internet providers use deep packet inspection devices?
There are several reasons:
1) Targeted advertising — If a provider knows what you are reading, they can display content advertising on the pages they control, such as your login screen or e-mail account.
2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem as less desirable such as Bittorrent and other forms of peer-to-peer. Bittorrent traffic can overwhelm a network with volume. By detecting and redirecting the Bittorrent traffic, or slowing it down, a provider can alleviate congestion.
3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.
When is it appropriate to use deep packet inspection?
1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.
2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.
3) Intrusion detection and prevention — It is one thing to be acting as an ISP and to eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your right to look through your peephole and not let shady characters into your home. In a private business it is a good idea to use deep packet inspection in order to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.
4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don't read it, but computer scanners do), their motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.
Does content filtering use deep packet inspection?
For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions that are within their right.
What about spam filtering? Does that use deep packet inspection?
Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, hence it is generally done with permission.
What is all the fuss about?
It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.
For example, this is an excerpt from a recent PC world article:
Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.
Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:
Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.
— Digital Daily, March 10, 2008. Read the full article here.
Later in 2008, the FCC came down hard on Comcast.
In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.
By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.
— Wired.com, August 1, 2008.Read the full article here.
To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.
University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.
— Wired.com, May 22, 2008. Read the full article here.
However, it looks like things are going the other way in the U.K., as Britain's Virgin Media has announced it is dumping net neutrality in favor of targeting BitTorrent.
The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.
— The Register, December 16, 2008. Read the full article here.
Canadian ISPs confess en masse to deep packet inspection in January 2009.
With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.
— Tech Spot, January 21, 2009. Read the full article here.
In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit the how ISPs could track users. Online privacy advocates spoke out in support of such legislation.
In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.
— Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.
The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.
7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.
— Kyle Brady, July 27, 2009. Read the full article here.
[February 2011 Update] The Egyptian government used DPI to filter elements of its Internet traffic, and this act in itself became the news story. In the accompanying video report, Al Jazeera takes the opportunity to put out an unflattering piece on Narus, the company that makes the DPI technology and sold it to the Egyptians.
While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.
Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
Cyber Attacks and VoIP
Voice over Internet Protocol (VoIP) is a way to take audio signals and transmit them in digital format via the internet, turning an internet connection into a way to make essentially free worldwide calls. With VoIP becoming increasingly attractive to businesses of all sizes because of its cost-effectiveness and scalability, it becomes imperative that business owners understand the security risks and, more importantly, how to combat them. Security risks are mostly taken care of by the VoIP host, where a dedicated team of security experts will work to keep your network safe and secure. However, there are still a number of risks that you should be aware of, and some relatively easy ways to combat them and keep your system as secure as possible.
To best protect your VoIP phone system, you will need to ensure that the computers and other hardware are all secure. One of the most effective ways to do this is to set up an SIP firewall. SIP (Session Initiation Protocol) regulates packets of voice data as it passes between two endpoints on a network – a SIP-based firewall monitors and regulates these voice packets and filters out any traffic that looks suspicious. This is a particularly important area today, when according to Cisco, toll fraud is “prized by a global armada of phone pirates, who are unrelenting in their attacks.” Chris Kruger of Cisco discussed a case depicting the dangers of toll fraud and disregarding security as a top priority:
“Unfortunately, a business decided they needed voice security after the fact… During a few hours one morning, a rogue user had easily accessed the call control in the SIP gateway and generated several thousands of dollars in calls to Eastern Europe.”
So you can see the inherent dangers in failing to take security seriously. Luckily, with top VoIP providers, there will often be security measures in place that will combat threats that a firewall would combat. For example, RingCentral provides top-class network protections that are optimized for handling voice and data. It also provides a continuous monitoring program from their team of security experts, in order to flag potential disruptions, data breaches, and fraud.
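Commercial SIP firewalls and the provider-side protections described above are far more sophisticated than anything shown here, but as a rough illustration of the filtering idea, the sketch below rate-limits SIP INVITE attempts per source address. The window length and threshold are arbitrary values chosen for the example.

```python
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 60   # assumed observation window
MAX_INVITES = 20      # assumed per-source limit before traffic is dropped

recent_invites = defaultdict(deque)   # source IP -> timestamps of recent INVITEs

def allow_invite(source_ip: str, now: Optional[float] = None) -> bool:
    """Return False once a source exceeds the INVITE rate limit."""
    now = time.time() if now is None else now
    history = recent_invites[source_ip]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()              # drop timestamps outside the window
    if len(history) >= MAX_INVITES:
        return False                   # looks like toll-fraud probing
    history.append(now)
    return True

# Simulate a burst of call attempts from one address.
verdicts = [allow_invite("203.0.113.9", now=float(i)) for i in range(25)]
print(verdicts.count(False), "INVITEs rejected out of", len(verdicts))
```

A real SIP firewall would also validate message contents, destinations and authentication, but the core idea of watching signalling traffic and cutting off suspicious sources is the same.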
Restricting Access and Securing Passwords
Avoxi lists the restriction of unnecessary access to your network as one of the most important ways to keep your VoIP network secure. Allowing open access to all users on your system leaves your system incredibly vulnerable, especially if passwords are not secure, so business owners may need to think long and hard about who they allow to have access to certain privileges when setting up their VoIP.
RingCentral recommends that vendors should implement (at the very least) a stringent set of “strong password policies” as well as SSO (single sign-on) to alleviate log-in fatigue. SSO is a session- and user-authentication service that allows users to use one set of login credentials to gain access to multiple applications without further prompting for authentication.
However, RingCentral is more than aware of the security challenges that are presented by SSO. If a user’s primary password is discovered or changed by hackers, they could have access to multiple resources and applications. Hence the need for a strong password policy. Passwords are incredibly easy to secure with minimal effort; industry standards suggest an 8–16 letter combination of symbols, numbers, and upper- and lowercase letters. In addition to this, passwords should be changed/updated every 2–3 months to further reduce the risk of a security breach.
In order to aid this, many VoIP hosts will provide some form of authentication guidelines or policy. For example, RingCentral provides Duo Access Gateway prompts for two-factor authentication before access will be granted to the VoIP service. It also allows admins to control and enforce a unique policy for each individual SSO application, checking the user, device, and network before allowing access to the application.
Monitoring Network Activity
Just as consumers monitor their accounts for strange activity, so should businesses with regards to network activity and billing. While the measures already mentioned like restricting access and using firewalls will dramatically reduce your risk of a security breach, regular monitoring can provide another safety net if other measures fail. Call logs should also be frequently audited and monitored, as many hackers will attempt to use a VoIP to make international and often costly calls. Avoxi recommends that you schedule specific periods of time to analyze call records on a regular basis – thus giving you comprehensive insight into your own business, while maintaining a security standard at the same time.
Ensuring your VoIP provider has sufficient remote monitoring technology is a major part of this strategy. Remote monitoring can help to identify problems before damage becomes irreversible, or at times before anything can be done at all.
You should ensure that your service provider will provide protections built into the service layer, and offers counsel on how to best avoid human error leading to toll fraud. The RingCentral platform provides security settings that can help to detect toll fraud and service abuse, as well as a dedicated staff for monitoring use and service.
A hosted VoIP can provide so many benefits to a business, such as cutting call costs and offering a modern and competitive system. However, there are inherent security risks. In order to avoid unnecessary breaches to the system, it is key to eliminate all possibility of human error, by restricting access, ensuring there is a stringent password policy, and monitoring activity on the system. By working in collaboration with your VoIP host, you have the best chance of fostering a secure and safe network from which to operate your business.
Sponsored Series By RingCentral
By Josh Hamilton
Apple has been awarded the patent for a MacBook shell that would combine smart glass and solar cells to generate power.
Apple today was awarded a patent for a MacBook that would be powered with solar cells (photovoltaics), meaning your laptop could be powered or at least recharged through light.
The patent, titled "Electronic device display module," describes a two-sided display for the lid of a portable computer, such as Apple's MacBook. The front of the lid facing the user would still sport the typical display screen, but the rear would serve as more than just a cover.
The patent describes a rear plate made of "electrochromic glass," also known as "smart glass" or "switchable glass."
"Electrochromic glass, which is sometimes referred to as electrically switchable glass, may receive control signals (e.g., voltage control signals) from control circuitry," the patent submission states. "The control signals can be used to place electrochromic glass in either a transparent (light-passing) state or a translucent (light-blocking) state."
A sketch of the proposed laptop, which uses an electrochromic glass back panel that can be made opaque or translucent through a small electrical charge. The back panel would have solar cells embedded in it to collect power from light.
In the light-blocking state, the interior of the MacBook's display would be hidden from the exterior view; the rear panel would appear opaque or translucent.
In the light-passing state, the rear panel would appear clear and allow images or other light output from status light-emitting diodes or other light sources, Apple stated.
The solar cells would be placed under the electrochromic glass layer on the rear plate.
"For example, photovoltaic cells may be interposed between a glass layer (rear plate) and liquid crystal display structures for display," the patent states. Photo voltaic cells produce electricity when exposed to light.
When the laptop is near a light source, the light rays would pass through the electrochromic glass that forms the rear plate.
The solar cells would take in light as it passed through the glass, converting it into electrical power at a rate of 10 milliwatts or more.
Apple proposed that the solar cells would be capable of producing from 100 milliwatts to 1 watt "or even more" in order to charge the laptop's battery or power the computer while it is in use.
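To put those figures in perspective, here is a rough, hypothetical calculation; the battery capacity is my assumption, not a number from the patent. Even at the optimistic end of the quoted range, light harvesting at these rates would top up a laptop battery very slowly.

```python
battery_wh = 50.0                  # assumed laptop battery capacity in watt-hours
for harvest_watts in (0.1, 1.0):   # the patent's quoted 100 mW to 1 W range
    hours = battery_wh / harvest_watts
    print(f"At {harvest_watts:4.1f} W it would take ~{hours:.0f} hours "
          f"(~{hours / 24:.1f} days) to refill a {battery_wh:.0f} Wh battery")
```

That suggests the design is better suited to trickle-charging and powering small status features than to running the machine outright.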
The rear smart glass could also be used to display Apple's logo by incorporating an additional light emitting diode layer as well as backlighting.
"To ensure that display is evenly illuminated, the back light unit that provides backlight for display... may be provided with light-emitting diodes that are arranged along more than one of the edges of the light guide layer in the back light unit," Apple stated.
Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected].
This story, "Apple patents solar-powered MacBook" was originally published by Computerworld.
Three researchers from North Carolina State University have developed a software protocol that better manages high traffic loads on a Wi-Fi router when too many users connect, the university said on Tuesday.
The IEEE's 802.11 specification allows clients connected to a Wi-Fi access point to share the same transmission channel, but downlink traffic eventually surpasses uplink traffic, causing packet loss and the access point to be saturated, the researchers wrote in an abstract.
"Various factors result in performance degradation of Wi-Fi for large audiences, and from our analysis of network traces, we observed traffic asymmetry being the major culprit," the researchers wrote.
The researchers' protocol, called WiFox, monitors the traffic and implements a "priority" mode when a router is in danger of being overloaded with traffic, according to the university. WiFox can be incorporated into routers as a software update.
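The WiFox paper describes its mechanism in far more detail than this summary. Purely as a conceptual sketch, and not the authors' algorithm, the snippet below shows the general shape of such a scheme: watch the access point's downlink queue and grant it temporary priority, with hysteresis, when the backlog gets too deep. The thresholds are invented for illustration.

```python
class PriorityToggle:
    """Toy model: give the AP channel priority while its queue backs up."""

    def __init__(self, high_mark: int = 100, low_mark: int = 20):
        self.high_mark = high_mark   # packets queued before priority kicks in
        self.low_mark = low_mark     # backlog must drain below this to revert
        self.priority = False

    def update(self, queued_packets: int) -> bool:
        if not self.priority and queued_packets >= self.high_mark:
            self.priority = True     # AP is saturating: let it drain the queue
        elif self.priority and queued_packets <= self.low_mark:
            self.priority = False    # backlog cleared: back to normal contention
        return self.priority

toggle = PriorityToggle()
for backlog in (10, 60, 150, 90, 15):
    print(backlog, "queued ->", "priority" if toggle.update(backlog) else "normal")
```

The hysteresis (separate high and low marks) is what keeps such a scheme from flapping between modes as the queue hovers around a single threshold.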
WiFox was tested with between 25 and 45 clients connected to an access point. The router was able to respond an average of four times faster than a network not using the protocol.
Interestingly, throughput increased with more users on the network. The researchers found throughput increased by 700 percent with 45 users, and by 400 percent with 25 users.
The paper, "WiFox: Scaling WiFi Performance for Large Audience Environments," was written by doctoral students Jeongki Min and Arpit Gupta, and Injong Rhee, a professor of computer science, at the university.
A one-page summary of their research has been released, and they will present the paper next month at the ACM CoNEXT 2012 conference in Nice, France scheduled from Dec. 10-13.
Looking to take a giant step toward taking part in low Earth orbit transportation, exploration and the servicing of orbiting space structures, the European Space Agency said today it would team with Thales Alenia Space Italia to begin building an experimental spacecraft for launch in 2013.
Planning for the ESA's IXV Intermediate eXperimental Vehicle (IXV) has been in the works for about two years and follows on the agency's Atmospheric Reentry Demonstrator flight which took place in 1998.
More on space: Eight hot commercial space projects
The IXV will be reusable, more maneuverable and able to make precise landings, the ESA stated. Its success will provide Europe with valuable know-how on reentry systems and flight-proven technologies that are necessary to support the Agency's future ambitions, including return missions from low Earth orbit, the agency stated.
According to the ESA, the IXV will be launched into a suborbital trajectory on ESA's small Vega rocket from Europe's Spaceport in French Guiana. It will then return to Earth as if from a low-orbit mission, to test and qualify new European critical reentry technologies such as advanced ceramic and ablative thermal protection.
Path Traversal, also known as Directory Climbing and Directory Traversal, involves the exploitation of sensitive information stored insecurely on web servers. This vulnerability constantly shows up in globally recognized vulnerability references such as the SANS Top 25 Most Dangerous Software Errors and the OWASP Top 10.
There are two primary security mechanisms available today in web servers:
Access Control Lists (ACLs) – These are basically whitelists that the web server’s administrator uses to monitor access permissions. These lists are used in the authorization process. Only users with permissions can access, modify or share sensitive files and information.
Root Directory – This directory is located in the server file system and users simply can’t access sensitive files above this root. One such example is the sensitive cmd.exe file on Windows platforms, which rests in the root directory that not everyone can access.
Path Traversals are made possible when access to web content is not properly controlled and the web server is compromised. This is basically an HTTP exploit that gives malicious attackers unauthorized access to restricted directories. They are eventually able to manipulate the web server and execute malicious commands outside its root directory/folder.
These attacks are usually executed with the help of injections such as Resource Injections, and are often automated with crawlers. A typical attack proceeds as follows:
The following URLs show how the application deals with the resources in use:
In these examples it may be possible to insert a malicious string as the variable parameter to access files located outside the web publish directory.
http://some_site.com.br/get-files?file=../../../../some dir/some file
http://some_site.com.br/../../../../some dir/some file
The following URL shows an example of UNIX/Linux password file exploitation, where the attacker climbs out of the publish directory to reach the system password file:
http://some_site.com.br/get-files?file=../../../../etc/passwd
Important: On a Windows system the malicious attacker can navigate only within the partition that hosts the web root, while on Linux the attacker can navigate and access the whole disk.
Ways of mitigating the risk of Path Traversal include validating and sanitizing all user-supplied file names, resolving paths to their canonical form before using them, serving files only from a designated publish directory, and running the web server with the least privileges it needs.
CxSAST detects data flows that are vulnerable to Path Traversal by following all user input that is used in a file creation or file reading context. If the input is not validated or sanitized (for example, by rejecting "..\" or "../" sequences) before being used, CxSAST flags the data flow as vulnerable to Path Traversal. The developers can then implement the aforementioned remediation techniques.
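As a hedged illustration of the validation-and-canonicalization remediation described above (not Checkmarx's implementation), the Python sketch below resolves a user-supplied file name against the publish directory and refuses anything that escapes it. The directory path is hypothetical.

```python
from pathlib import Path

WEB_ROOT = Path("/var/www/site/files").resolve()   # hypothetical publish directory

def read_published_file(user_supplied_name: str) -> bytes:
    """Serve a file only if its canonical path stays inside WEB_ROOT."""
    candidate = (WEB_ROOT / user_supplied_name).resolve()
    # Path.is_relative_to() requires Python 3.9+.
    if not candidate.is_relative_to(WEB_ROOT):
        raise PermissionError("path traversal attempt blocked")
    return candidate.read_bytes()

# read_published_file("report.pdf")                 # allowed: resolves inside the root
# read_published_file("../../../../etc/passwd")     # raises PermissionError
```

Canonicalizing before checking is the important step: simply searching the raw string for "../" can be bypassed with encodings or mixed separators.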
ALEXANDRIA, VA--(Marketwire - Oct 25, 2012) - Adults age 65 and older are more likely to have diabetes than any other age group, but researchers and clinicians have the least amount of data regarding how best to treat this population, a consensus report published jointly by the American Diabetes Association and American Geriatrics Society concludes. The report, written by a Consensus Panel of diabetes experts from multiple disciplines, will be published simultaneously online Oct. 25 in Diabetes Care and in the Journal of the American Geriatrics Society. The report outlines what diabetes experts know about older adults with diabetes, how the disease affects them differently than younger adults, what can be done to prevent or treat it and how best to fill the critical gaps in evidence to better address their needs.
"With our nation's aging population, it becomes increasingly important for us to understand how diabetes is impacting older adults," said Geralyn Spollett, MSN, ANP-CS, CDE, President, Health Care & Education, American Diabetes Association. "We know a great deal about how to help middle-aged adults prevent and manage diabetes, but little about those in their later years, who are far more likely to be diagnosed and to suffer from the serious and life-threatening complications associated with this disease."
In February 2012, the American Diabetes Association convened a Consensus Development Conference on Diabetes and Older Adults (defined as those aged 65 years or older) to hear from researchers and other experts on what is known, and not known, about this population. The consensus report highlights what was learned in the following areas: the epidemiology and pathogenesis of diabetes in older adults; evidence for preventing and treating diabetes and its most common comorbidities in older adults; current guidelines for treating older adults with diabetes; issues that need to be considered in individualizing treatment recommendations; consensus recommendations for treating older adults with or at risk for diabetes; and how gaps in the evidence can be filled.
"One important issue is that older people are a very heterogeneous population, which means that recommendations cannot simply be based on age. One 75-year-old may have newly diagnosed diabetes but otherwise be quite healthy and lead a very active life, while another may have multiple diseases, dementia and longstanding diabetes with complications. It's critical to consider overall physical and cognitive function, quality of life and patient preferences when developing a treatment plan with an older patient," said Jeffrey B. Halter, MD, a member of the consensus panel, director of the Geriatrics Center at the University of Michigan, and past president of the American Geriatrics Society.
As people get older, insulin resistance increases and pancreatic islet cell function decreases, placing them at greater risk for the development of type 2 diabetes. The epidemic of type 2 in the United States, while clearly associated with the increase in overweight and obesity, is also greatly exacerbated by the aging of the population. In fact, the Centers for Disease Control and Prevention estimates that, even if diabetes incidence leveled off, prevalence rates would still double over the next 20 years as our population ages.
More than 25 percent of adults age 65 or older have diabetes, and roughly half have prediabetes. Older adults with diabetes also have the highest rates of diabetes-related lower limb amputations, heart attacks, vision problems and kidney failure of any age group, with rates even higher for those over the age of 75. Yet, the report noted, this group has not been included in most diabetes treatment trials, particularly those with comorbidities or cognitive impairment.
The panel, when developing consensus recommendations for clinical care, used a framework of considering older adults with diabetes in one of three groups: those in relatively good health; those with complex medical histories that might make self-care difficult; and those with significant comorbid illness and functional impairment, with different screening and treatment recommendations for each group. It also recommended further research be done that takes into account the complexity of issues facing older adults and that studies include patients with multiple comorbidities, dependent living situations and geriatric syndromes to get the most complete picture of the needs and challenges of frail or complex patients.
The American Diabetes Association Consensus Development Conference was supported by a planning grant from the Association of Specialty Professors (through a grant from the John A. Hartford Foundation), by Educational Grants from Lilly USA, LLC and Novo Nordisk Inc., and sponsorships from the Medco Foundation and Sanofi.
Diabetes Care, published by the American Diabetes Association, is the leading peer-reviewed journal of clinical research into one of the nation's leading causes of death by disease. Diabetes also is a leading cause of heart disease and stroke, as well as the leading cause of adult blindness, kidney failure, and non-traumatic amputations.
The American Diabetes Association is leading the fight to Stop Diabetes and its deadly consequences and fighting for those affected by diabetes. The Association funds research to prevent, cure and manage diabetes; delivers services to hundreds of communities; provides objective and credible information; and gives voice to those denied their rights because of diabetes. Founded in 1940, our mission is to prevent and cure diabetes and to improve the lives of all people affected by diabetes. For more information please call the American Diabetes Association at 1-800-DIABETES (1-800-342-2383) or visit www.diabetes.org. Information from both these sources is available in English and Spanish.
The American Geriatrics Society (AGS) is a not-for-profit organization of over 6,000 health professionals dedicated to improving the health, independence and quality of life of all older people. The Society provides leadership to healthcare professionals, policy makers and the public by implementing and advocating for programs in patient care, research, professional and public education, and public policy. For more information please visit www.americangeriatrics.org.
The Journal of the American Geriatrics Society is a comprehensive and reliable source of monthly research and information about common diseases and disorders of older adults. The journal is published by Wiley-Blackwell on behalf of the American Geriatrics Society. For more information, please visit http://wileyonlinelibrary.com/journal/jgs.
HOW IT WORKS
The return of the sneaker net
- By John Breeden II
- Sep 13, 2012
The other day I happened to use the term “sneaker net” and found that a young colleague didn’t know what I was talking about. True, it’s been a while since the heyday of sneaker nets, but they’re still around and, in fact, are even starting to come back. So for the young folks out there who never had to deal with dial-up, here’s a primer on the term.
What it is: "Sneaker net" is a somewhat comical term that came into fashion in the 1980s when bandwidth was low. It became easier and quicker to simply put a file onto a disk and then walk it over to a new computer to transfer it. This was mostly done by techies wearing sneakers, hence the term. To the ear, it sounds a little bit like Ethernet.
Once bandwidth got plentiful, sneaker nets fell out of fashion. Even the Usain Bolt of techies couldn’t outrace a 10/100 megabits/sec connection for short distances and small files.
But now that files are getting to be hundreds of gigabytes or even terabytes in size, the sneaker net is back. So get those running shoes ready again.
Examples: The SETI@home project, which searches for signs of extraterrestrial life, uses a form of sneaker net to transport massive amounts of data gathered by the radio telescope in Arecibo, Puerto Rico. Data is put onto magnetic tapes and then mailed to Berkeley, Calif., for processing. So it’s the mailman’s sneakers being used.
One of the most radical examples of sneaker net came from employees of a South African company who got tired of the slow transmission speeds they were getting from their provider. In 2009, they tried to transfer 4GB of data 60 miles between cities using an ADSL line. At the same time, they put the data on a key drive and used a carrier pigeon to carry it the same distance, so it was sort of a pigeon toe net. The bird made the flight in 1 hour, 8 minutes, and it took another hour to transfer the data off the memory stick. Only 4 percent of the data had been transferred the traditional way by the time the pigeon was done.
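To see why physical transport can still win, it helps to work out the effective throughput: payload size divided by total transit time. The sketch below is illustrative only; the pigeon numbers follow the story above, while the courier example is an invented comparison.

```python
def effective_mbps(payload_gigabytes: float, transit_hours: float) -> float:
    """Effective throughput, in megabits per second, of physically moving data."""
    megabits = payload_gigabytes * 8 * 1000   # gigabytes -> megabits
    return megabits / (transit_hours * 3600)  # hours -> seconds

# The pigeon: a 4GB memory stick, ~1 hour 8 minute flight plus ~1 hour to copy the data off.
print(f"pigeon net : {effective_mbps(4, 2.13):8.1f} Mbit/s")    # roughly 4 Mbit/s

# A hypothetical courier driving a 1TB drive across town in two hours.
print(f"courier net: {effective_mbps(1000, 2.0):8.1f} Mbit/s")  # faster than gigabit Ethernet
```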
Bottom Line: Although they went out of fashion as bandwidth increased, sneaker nets started to come back into vogue as file sizes rose. As long as a techie with a pair of sneakers, or a pigeon in some cases, can get the job done faster than a digital transfer, sneaker nets will live on.
John Breeden II is a freelance technology writer for GCN.
|
<urn:uuid:5fe78e1a-4c48-43b9-b0b4-3c6133c00076>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2012/09/13/how-it-works-sneaker-net.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00400-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.960185 | 579 | 2.6875 | 3 |
Massive Solar Eruption Showers Earth
/ January 24, 2012
A massive eruption on the sun occurred late on Jan. 22, 2012, causing the strongest radiation storm from the sun since 2005. The sun is, in essence, showering the Earth with high-powered solar particles traveling at about 5 million mph. Though those of us on the Earth’s surface aren’t affected, the eruption forced Delta Air Lines to redirect certain high-flying airplanes to prevent loss of radio communication and avoid exposing pilots and passengers to excessive radiation.
Shown above is a photo of the sun toward the end of the eruption — around 1:15 a.m. on Tuesday, Jan. 24. Below is a photo of the sun in the early hours of Monday, Jan. 23, 2012.
Photos courtesy of the NASA Solar and Heliospheric Observatory
|
<urn:uuid:0b82cbaf-ef9a-4e93-8216-7c1e49cd8e65>
|
CC-MAIN-2017-09
|
http://www.govtech.com/photos/Photo-of-the-Week-Massive-Solar-Eruption-Showers-Earth-01242012.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00452-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.883509 | 176 | 3.5625 | 4 |
The most common mistakes that jeopardize our online digital safety include giving too much information to scam emails and Web sites (known as phishing), choosing weak passwords, reusing the same password across multiple sites, and failing to keep anti-virus/anti-spyware software up to date.
Weak passwords include anything easily guessed, like a family member's name or a birthday, or common, ordinary words. Hackers can use a “dictionary attack,” where they run a program that quickly tests every word in the dictionary to see if it is your password. To prevent anyone else from accessing your accounts, you should choose a strong password, one that is unique for every online account. A strong password has at least eight letters and numbers, uses some capital letters and symbols ($!* etc.), cannot be easily guessed and is changed regularly.
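Those guidelines are easy to turn into a quick self-check. The sketch below only mirrors the rules above and is not a real strength meter, which would also check against breached-password lists and common patterns.

```python
import re

def looks_strong(password: str) -> bool:
    """Rough check against the guidelines above: length, letters and numbers,
    capital letters, and symbols. Passing here is necessary, not sufficient."""
    return all([
        len(password) >= 8,
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),   # symbols such as $ ! *
    ])

print(looks_strong("fluffy1984"))          # False: guessable, no capitals or symbols
print(looks_strong("T4k3-th3-L0ng-W4y!"))  # True by these simple rules
```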
For important online services, check whether your provider offers multi-factor authentication. Bank of America and PayPal, for example, let users use a one-time password (OTP) token to further secure access to their online accounts and confirm transactions.
|
<urn:uuid:e7329635-e530-45cc-a269-16bb3b054570>
|
CC-MAIN-2017-09
|
https://www.justaskgemalto.com/us/what-are-the-most-common-mistakes-people-make-that-jeopardize-their-digital-safety/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00152-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.929312 | 222 | 3.3125 | 3 |
Samsung's new DDR3 memory has shrunk transistor size from 25nm to 20nm, and uses 25% less power, meaning your next laptop or tablet could have longer battery life.
Samsung this week announced a breakthrough in the evolution of smaller, more efficient DRAM memory. The company has produced its first 20nm, 4Gbit DDR3 DRAM.
As with all semiconductor technology, the smaller the transistors, the more capacity can go into a form factor. The smaller the circuitry, the cheaper it is to manufacture chips with the same or even greater capacity.
The DDR3 is used in desktops, notebooks, ultrabooks and tablets where Samsung said "customers will see a significant power savings," as well as cost savings.
Samsung's new 20nm DRAM. (Photo: Samsung)
"While we cannot speak for [device manufacturers], we believe that customers will benefit from a significantly lower total cost of ownership," a Samsung spokesman said in an email reply to Computerworld.
Samsung is already providing the new DDR3 chips to some manufacturers, and it expects that they will be available in computing devices later this year.
Over the past five years, Samsung has shrunk its DRAM transistors from 50nm (in 2009) to 30nm (in 2010) to 25nm (in 2011) to the 20nm process technology today.
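As a first-order illustration of why those shrinks matter, ideal cell area scales with the square of the feature size. This ignores real-world factors such as cell layout and yield, so treat it as back-of-the-envelope arithmetic only.

```python
# Idealized scaling only: actual DRAM cell area depends on the cell design,
# not just the quoted process node.
nodes_nm = [50, 30, 25, 20]
baseline = nodes_nm[0]

for nm in nodes_nm:
    relative_area = (nm / baseline) ** 2
    print(f"{nm:2d} nm: ~{relative_area:.2f}x the 50 nm cell area, "
          f"~{1 / relative_area:.1f}x the bits in the same silicon")
```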
While NAND flash has the lead in the race toward single-digit nanometer processes (it has been at 19nm for some time), DRAM circuitry is decidedly more difficult to shrink.
NAND flash dies of different sizes. (Image: Micron)
With DRAM memory, each cell consists of a capacitor (where the electrical charge is held) and a transistor that are linked to one another, whereas with NAND flash memory each cell only has a transistor. So DRAM technology requires both the transistor and capacitor to shrink.
Samsung said it was able to refine its DRAM design and manufacturing technologies and came up with a "modified double patterning and atomic layer deposition."
In micro-circuitry, such as DRAM or NAND flash, dense repeating nanostructures are required. Atomic layer deposition (ALD) is a technique for depositing those nanostructures using thin films with precise uniformity. Double patterning, simply put, is a method for doubling the number of features.
Samsung said its modified double patterning technology is a milestone. By enabling 20nm DDR3 production using current photolithography equipment, it has established a new core technology for the next generation of 10nm-class DRAM production.
Samsung also successfully created ultrathin dielectric layers of cell capacitors with an unprecedented uniformity, which has resulted in higher cell performance.
Applying new process innovations, Samsung's new 4Gbit 20nm DDR3 has improved manufacturing productivity by more than 30% over that of the preceding 25 nanometer DDR3, and more than twice that of 30nm-class DDR3, the company said.
"Also our new 20nm 4Gb DDR3-based modules can save up to 25% of the energy consumed by equivalent modules fabricated using the previous 25 nanometer process technology," Samsung said.
This article, Samsung achieves DDR3 size, calls it efficiency breakthrough, was originally published at Computerworld.com.
Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected].
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Samsung achieves DDR3 size, calls it an efficiency breakthrough" was originally published by Computerworld.
|
<urn:uuid:7f4db87c-9fbe-437f-b0fb-55ace2049905>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2175138/network-storage/samsung-achieves-ddr3-size--calls-it-an-efficiency-breakthrough.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00572-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.928168 | 791 | 2.9375 | 3 |
The VoIP Peering Puzzle, Part 7: Implementing ENUM
In our last two tutorials, we examined the theory and principles of ENUM, the Electronic Numbering system that has been developed by the Internet Engineering Task Force (IETF), in concert with other standards organizations, such as the ITU-T. We first looked at some of the underlying protocols, such as the IETF's Domain Name System (DNS) and the ITU-T's E.164. Next, we looked at the ENUM protocol, as defined in RFC 3761 (see ftp://ftp.rfc-editor.org/in-notes/rfc3761.txt), and looked at its operation in more detail.
If a thorough understanding of the ENUM protocol was all that was required for the telecommunications industry to migrate to a new numbering system, then all of us might be looking for other ways to spend our working hours, since the protocol itself is straightforward. The implementation of that protocol is quite another story, however.
How, for example, do you build a database large enough to store all of the telephone numbers and IP addresses currently in use, and also keep up with the ongoing moves, adds, and changes that occur to that database?
Moreover, since this topic involves the Public Switched Telephone Network (PSTN) and the Internet, both of which are global entities, public policy questions also factor into these discussions. For example, how can you assure that the records stored in that ENUM registry are secure? If one carrier obtained access to their competition's portion of that database, could you envision some marketing arbitrage in the works? Who "owns" the address, the carrier or the end user? Should government organizations, such as those that regulate the communications and commerce of a country, also participate in the ENUM development and implementation process?
And further complicating the implementation challenge is the fact that there are two different forms of ENUM under consideration, and defined in a recent IETF Internet Draft titled Infrastructure ENUM Requirements (see http://www.ietf.org/internet-drafts/draft-ietf-enum-infrastructure-enum-reqs-03.txt). First is Infrastructure ENUM (sometimes called provider or carrier ENUM):
Infrastructure ENUM is defined as the use of the technology in RFC3761 by the carrier-of-record for a specific E.164 number to map a telephone number into a URI that identifies a specific point of interconnection to that service provider's network that could enable the originating party to establish communication with the associated terminating party. It is separate from any URIs that the end-user, who registers their E.164 number, may wish to associate with that E.164 number.
This is in contrast to User ENUM:
User ENUM, defined in RFC3761, in which the entity or person having the right to use a number has the sole discretion about the content of the associated domain and thus the zone content. From a domain registration perspective, the end user number assignee is thus the registrant.
In other words, infrastructure and user ENUM differ in terms of the entity that is performing the address mapping, the registrant of any services associated with that number, and possibly other functions.
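As a reminder of the mechanics covered in the earlier tutorials, both variants rely on the same RFC 3761 mapping from an E.164 number into a DNS domain, whose NAPTR records then carry the associated URIs. Below is a minimal sketch of that mapping; the function name and example number are illustrative, and an infrastructure ENUM tree may sit under an apex domain other than e164.arpa.

```python
def e164_to_enum_domain(e164_number: str, apex: str = "e164.arpa") -> str:
    """RFC 3761 'First Well Known Rule': keep only the digits of the E.164
    number, reverse them, separate them with dots, and append the ENUM apex."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + apex

print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```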
Field trials are under way using both variants of this technology; these are documented by the ITU-T at http://www.itu.int/ITU-T/inr/enum/trials.html. Countries noted include Austria, China, Finland, France, Japan, the Netherlands, Poland, Sweden, the United Kingdom, and the United States.
Of special interest are the notes on the ITU site regarding the United States' participation, which has now involved the Department of Commerce, the Department of State, and the Federal Communications Commission, with terms such as national sovereignty, competition, innovation, privacy, security, and interoperability being bandied about in their correspondence.
So will the politicians get deeply involved in the development of ENUM, or leave that work to the engineers? Our next tutorial will continue our exploration of ENUM implementation and look at some of the field trials that are underway in the United States.
Copyright Acknowledgement: © 2006 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons.
|
<urn:uuid:54a66016-aaeb-4b04-b2c8-4a69c6ea22f6>
|
CC-MAIN-2017-09
|
http://www.enterprisenetworkingplanet.com/print/unified_communications/The-VoIP-Peering-Puzzle151Part-7-Implementing-ENUM-3648911.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00272-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.929603 | 939 | 3.171875 | 3 |
The U.S. government launched a new agriculture data community on Monday during a special Group of 8 conference in Washington aimed at using open data to produce better crops and more efficient markets.
The two-day G-8 Conference is focused on better sharing of information that can improve agricultural performance internationally, ranging from data about plants’ genetic makeup to information about climate change and the computer code that underlies new farming technology.
The conference is also aimed at spurring entrepreneurs to use open agriculture data to develop Web and mobile applications for farmers and researchers.
Officials hope sharing more agricultural data will have the same effect as opening up weather and Global Positioning System data, which produced billion dollar industries.
The agriculture community will join other data sharing communities on the U.S. government information repository Data.gov on topics such as energy, manufacturing and oceans.
“This is one very important step in President Obama’s effort to ensure the direct results of our scientific research are available for the public and industry and the entire scientific community,” Agriculture Secretary Tom Vilsack said. “It will create a data ecosystem to fuel innovation and meet agricultural challenges we all face.”
The G-8 is composed of representatives from eight of the world’s wealthiest nations.
As an example of agriculture fueled by data sharing, Microsoft founder and global development advocate Bill Gates cited the Next Generation Cassava Breeding project, which aims to significantly reduce the amount of time it takes to bring new breeds of the cassava plant to market. The project would, in turn, drastically improve yields for African farmers.
Vilsack cited mobile apps used by Kenyan farmers such as the State Department-sponsored iCow, which helps farmers determine when their cows are most fertile and allows them to track metrics such as daily milk production.
One of the greatest challenges for agriculture innovators and data collectors is the lack of solid metrics from developing nations in Africa and elsewhere, said World Bank Vice President Rachel Kyte. Just one in four African countries publishes basic national crop data, she said.
|
<urn:uuid:f78b41c5-ccb5-4ecf-8e79-4d2fddff6201>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/mobile/2013/04/g-8-seeks-apps-agriculture/62866/?oref=ng-dropdown
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00272-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.921912 | 420 | 2.8125 | 3 |
For future computing, look (as always) to Star Trek
"The step after ubiquity is invisibility," Al Mandel used to say and it’s true. To see what might be The Next Big Thing in personal computing technology, then, let’s try applying that idea to mobile. How do we make mobile technology invisible?
Google is invisible and while the mobile Internet consists of far more than Google it’s a pretty good proxy for back-end processing and data services in general. Google would love for us all to interface completely through its servers for everything. That’s its goal. Given its determination and deep pockets, I’d say Google -- or something like it -- will be a major part of the invisible mobile Internet.
The computer on Star Trek was invisible, relying generally (though not exclusively) on voice I/O. Remember she could also throw images up on the big screen as needed. I think Gene Roddenberry went a long way back in 1966 toward describing mobile computing circa 2016, or certainly 2020.
Voice input is a no-brainer for a device that began as a telephone. I very much doubt that we’ll have our phones reading brainwaves anytime soon, but they probably won’t have to. All that processing power in the cloud will quickly have our devices able to guess what we are thinking based on the context and our known habits.
Look at Apple’s Siri. You ask Siri simple questions. If she’s able to answer in a couple words she does so. If it requires more than a few words she puts it on the screen. That’s the archetype for invisible mobile computing. It’s primitive right now but how many generations do we need for it to become addictive? Not that many. Remember the algorithmic Moore’s Law is doubling every 6-12 months, so two more years could bring us up to 16 times the current performance. If that’s not enough then wait awhile longer. 2020 should be 4096 times as powerful.
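Spelling out that arithmetic, using the column's own assumption of a doubling every six months:

```python
def performance_multiple(years: float, months_per_doubling: float = 6.0) -> float:
    """How many times faster things get after `years` of steady doublings."""
    return 2 ** (years * 12 / months_per_doubling)

print(performance_multiple(2))   # 16.0   -- "two more years ... 16 times"
print(performance_multiple(6))   # 4096.0 -- out to 2020 at the same pace
```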
The phone becomes an I/O device. The invisible and completely adaptive power is in the cloud. Voice is for input and simple output. For more complex output we’ll need a big screen, which I predict will mean retinal scan displays.
Retinal scan displays applied to eyeglasses have been around for more than 20 years. The seminal work was done at the University of Washington and at one time Sony owned most of the patents. But Sony, in the mid-90s, couldn’t bring itself to market a product that shined lasers into people’s eyes. I think the retinal scan display’s time is about to come again.
The FDA accepted 20 years ago that these devices were safe. They actually had to show a worst case scenario where a user was paralyzed, their eyes fixed open (unblinking) with the laser focused for 60 consecutive minutes on a single pixel (a single rod or cone) without permanent damage. That’s some test. But it wasn’t enough back when the idea, I guess, was to plug the display somehow into a notebook.
No more plugs. The next-generation retinal scan display will be wireless and far higher in resolution than anything Sony tested in the 1990s. It will be mounted in glasses but not block your vision in any way unless the glasses can be made opaque as needed using some LCD shutter technology. For most purposes I’d like a transparent display but to watch an HD movie maybe I’d like it darker.
The current resting place for a lot of that old retinal scan technology is a Seattle company called Microvision that mainly makes tiny projectors. The Sony patents are probably expiring. This could be a fertile time for broad innovation. And just think how much cheaper it will be thanks to 20 years of Moore’s Law.
The rest of this vision of future computing comes from Star Trek, too -- the ability to throw the image to other displays, share it with other users, and interface through whatever keyboard, mouse, or tablet is in range.
What do you think?
|
<urn:uuid:cd73d6be-2556-42e6-b234-8b42fa39d7c2>
|
CC-MAIN-2017-09
|
https://betanews.com/2014/08/14/for-future-computing-look-as-always-to-star-trek/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed+-+bn+-+Betanews+Full+Content+Feed+-+BN
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00272-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.946275 | 860 | 2.5625 | 3 |
Many people confuse optical fiber adapters with optical splitters, or assume they are the same product under two different names. They are in fact two different kinds of products, and this article explains how to tell them apart; after reading it, I hope no one will mix the two up again.
A fiber adapter is the connecting part that joins two optical fiber connectors, or an optical fiber to an equipment interface. The product series includes FC, SC, ST, LC and MTRJ types. Fiber adapters are widely used in optical distribution frames (ODF), fiber optic communication equipment, instrumentation and so on, and they offer superior, stable and reliable performance.
According to working mode, fiber optic adapters are divided into simplex and half-duplex types. Simplex communication means that, at any time during the communication process, information flows in only one direction, from A to B, as in radio or television broadcasting; the simplex mode is rarely used now. Half-duplex communication means that information can flow from A to B and also from B to A, but only in one direction at a time.
An optical splitter, also called a fiber coupler, is a component that distributes an optical signal among multiple optical fibers. It belongs to the family of passive optical components and is applied in telecommunications networks, cable TV networks, subscriber circuit systems and LANs; among fiber optic components, it is one of the most widely used passive devices.
Optical splitters can be divided into standard couplers (such as the 1x2 dual-branch unit, which splits the optical signal power into two outputs), star/tree couplers, and wavelength-division multiplexers (WDM; when the wavelengths are separated at high density, that is, with narrow wavelength spacing, the device is a DWDM).
There are three main manufacturing methods: fused (sintered), micro-optics, and optical waveguide. In the fused method, two optical fibers are melted and stretched together so that their cores come close enough for optical coupling to occur. The fusion-drawing machine is one of the most important pieces of production equipment, and although the key part of the process is done by machine, workers still inspect and package the device after fusing.
As the description above shows, fiber adapters and optical splitters are very different things. Both are commonly used in optical links, but they should not be confused with each other.
|
<urn:uuid:86497d9d-b401-4afa-8178-6a5e7878b629>
|
CC-MAIN-2017-09
|
http://www.fs.com/blog/how-to-distinguish-optical-fiber-adapter-and-optical-splitter.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00324-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.940168 | 506 | 2.90625 | 3 |
Understanding Logical Processors
Logical processors subdivide a server's processing power to enable parallel processing. Shown here is a server with two physical processors with a view of how the OS recognizes the resulting logical processors.
A physical processor—also referred to as a CPU, a socket, or occasionally as a package—is a chip that is visible on a computer's circuit board. Most modern physical processors have two or more cores, which are independent processing units. Typical servers will have multiple physical processors with at least four or as many as 10 cores in each.
A logical processor is perceived by Windows as a processor, and each logical processor is capable of executing its own stream of instructions simultaneously, to which the OS can in turn assign simultaneous independent units of work. Windows Server enables each core to appear as a logical processor, so the server shown here, which has two quad-core physical processors, can have eight logical processors. Some processors support a technology called symmetric multithreading (which Intel calls "hyperthreading"), which enables a core to execute two independent instruction streams simultaneously. If the technology were enabled here, the result would be 16 logical processors.
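You can see the distinction directly from software: the operating system reports logical processors, while physical core and socket counts have to come from elsewhere. A small sketch in Python follows; psutil is a third-party package and is only an assumption here.

```python
import os

# os.cpu_count() returns the number of logical processors the OS exposes:
# 8 for the two quad-core sockets described here, or 16 with SMT/hyperthreading enabled.
print(f"logical processors: {os.cpu_count()}")

try:
    import psutil  # optional third-party package
    print(f"physical cores: {psutil.cpu_count(logical=False)}")
except ImportError:
    print("physical core count: not available from the standard library alone")
```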
While SQL Server 2012 offers licensing that is per-core, that licensing is based on physical cores. The number of logical cores is irrelevant to the per-core licensing costs when licensing physical servers, and instead only plays a role in the number of logical processors that Windows and SQL Server can technically support.
Virtual machines (VMs) are licensed based on the concept of a "virtual core," which is a processor as viewed by the VM guest OS. Logical processors have a potential effect in their licensing, as Microsoft has stated that assigning a virtual core to more than one thread at a time (two or more logical processors) or assigning a logical processor to more than one virtual core at a time may incur additional core license charges.
|
<urn:uuid:5b2fdc20-cf91-4b27-9484-d916dcdc7e2f>
|
CC-MAIN-2017-09
|
http://www.directionsonmicrosoft.com/licensing/30-licensing/3420-sql-server-2012-adopts-per-core-licensing-model.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00197-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.951695 | 383 | 3.953125 | 4 |
Last night, while watching President Obama tout America’s manufacturing comeback, millions of Americans collectively blurted out “3-D what?!”
Yes, the president said “3-D printing” in his State of the Union address. In fact, he said “A once-shuttered warehouse is now a state-of-the art lab where new workers are mastering the 3-D printing that has the potential to revolutionize the way we make almost everything.”
And it does. A 3-D printer is just what it sounds like -- it’s a machine that “prints” 3-dimensional objects. The technology, which been anointed the driver of a “third industrial revolution,” deposits thin layers of material -- like plastic, glass, and ceramics -- over and over again to form complete objects in a single go. The process is called additive manufacturing, which stands in opposition to subtractive manufacturing, the traditional process in which objects are produced at factories by making small parts out of larger pieces of material, like sheets of metal. By allowing people to print objects, rather than drive to a store to buy an object made far away, the technology promises to end the system of large factories and long supply chains in the markets for many goods -- and to transform the global economy. Already, scientists are working to figure out how print meals in space and print a moon base with moon dust.
The lab mentioned by the president, the National Additive Manufacturing Innovation Institute in Youngstown, Ohio, was funded in part with $30 million from five federal agencies. The federal government is starting to get acquainted with 3-D printing in a handful of other ways as well, which is good news, because the technology is going to have wide-reaching policy implications in the years to come. Here’s how:
Currently, the executive branch is well ahead of Congress in anticipating the disruptive effects of 3-D printing. Aside from the 3-D printing lab mentioned by the president, the Commerce Department is working to develop universal standards for many aspects of additive manufacturing processes by next year, and the Army has already deployed 3-D printers in the field in Afghanistan.
But last month, Congress, too, entered the brave new world of 3-D printing after gun enthusiast Cody Wilson uploaded a video of himself on YouTube firing a semiautomatic rifle loaded with a homemade high-capacity magazine. The plastic magazine, manufactured on a 3-D printer, was designed to send a message: Congress, and the Obama administration, can try to ban such magazines, but technology is outpacing efforts at gun control.
Within days, Rep. Steve Israel, D-N.Y., proposed banning 3-D printed gun magazines and firearms that could evade metal detectors as part of a renewal of the Undetectable Firearms Act. “We have this new technology that allows criminals and terrorists to buy cheap 3-D printers, use them to literally manufacture firearm components that can fire bullets, and bring them onto airplanes,” he told National Journal last week. “I want to make it harder for the bad guys.”
Of course, a technology that does for objects what the Internet has done for information will present plenty more challenges for regulatory frameworks designed for an economic system in which the production of goods is centralized. Consider how the Internet and CD-burners changed music, then movie piracy from minor annoyances to industry-shaping forces. Now imagine a future Napster for Hermes handbags, iPhones, and proprietary industrial parts. This may be the first time 3-D printing is the subject of legislation, but it certainly won’t be the last.
There’s already a specific policy proposal taking shape for protecting intellectual property of 3-D printing technology itself. The rapid pace of innovation expected in 3-D printing could require a more agile form of protection, and Attorney William Cass suggests the U.S. could adopt a European-style “utility model” as an option for inventors. The utility model offers all of the rights and protections of a patent but can be obtained more quickly and cheaply, and it only undergoes exhaustive evaluation if challenged in court. The House and Senate Judiciary committees could see the utility model on their agendas in the future: Adoption of the model would require an act of Congress, according to Cass.
3-D printing also holds implications for the half-trillion dollars in annual defense appropriations. Banning Garrett, director of the Atlantic Council’s Strategic Foresight Initiative, predicts that rather than purchasing physical equipment and replacement parts, much military spending will be redirected to the purchase of designs. Spare parts will be printed at the point of use as the need arises. “That’s going to hugely reduce the long-term costs of weapons systems,” Garrett said. The technology is also likely to prompt a rethinking of military strategy, he said, as shorter supply lines and the ability to reequip in the field make the world’s military forces more nimble.
3-D printing could also create new anti-terror challenges, as groups might shed the need for state sponsors to keep them armed and supplied, according to a 2011 Atlantic Council paper Garrett coauthored. Garrett though, warned that lawmakers risk stifling the technology if they overreact to its downsides.
3-D printing also appears poised to bring about a global trade rebalancing, as the new economics of manufacturing rewards high-skill workforces like that of the U.S. and make supplies of cheap labor in countries like China less relevant. The committees will also have to adapt U.S. policy to the changing physical footprint of the global trade in goods and parts. “Instead of pushing molecules around, we’re going be pushing bits around,” said Tom Campbell, a professor at Virginia Tech who studies additive manufacturing and coauthored the Atlantic Council paper.
Campbell would like to see Congress take a largely hands-off approach to 3-D printing itself. “The last thing I want to do is have the government clamp down on new rules or laws that impede innovation,” he said. But he does believe countries such as Germany are gaining a competitive edge in certain aspects of the technology, and he sees a need for more government funding for basic research on applications of additive manufacturing that remain in the theoretical stage — like printing 3-D human organs. America, welcome to the future.
|
<urn:uuid:d097f2c7-3c73-46c0-981f-7e3551569a71>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/emerging-tech/2013/02/obama-touts-future-manufacturing-3-d-printing/61293/?oref=ng-dropdown
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00441-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.952667 | 1,335 | 2.578125 | 3 |
Extending the reach of the classroom
As podcasting and YouTube have hit the mainstream, the academic world has started to explore the use of these resources as a way to help their students get more out of their classroom experiences. But the new technology can raise as many issues as it solves: what material do you put online? How do you provide students access to it? How does the material relate to what's presented in the classroom?
To get a greater sense of how the new technology is working out, Ars talked with three professors who are taking different approaches to making video material available online:
Richard McKenzie, professor of economics at the University of California-Irvine and coauthor of a textbook on microeconomics. Dr. McKenzie uses his own textbook in the classroom, and started producing short video segments that answered some of students' frequently asked questions. There are now over 60 of these videos; they're included as a DVD supplement to his textbook, available as iPod-compatible downloads from his web site; the shorter ones are now available on YouTube.
David Miller, professor of psychology at the University of Connecticut. Dr. Miller teaches courses in animal behavior and general psychology on the Storrs campus. As a supplement to his lectures in General Psychology, he maintains a collection of "precasts" and "postcasts" that prepare students for the lecture and reinforces the more difficult concepts, respectively. Audio-only content is available for the animal behavior course. All material is also available via iTunes.
Dr. Bradley Olwin, professor in the Department of Molecular, Cellular, and Developmental Biology at the University of Colorado. He teaches a developmental biology course in which podcasts of his lectures, as well as slides and lecture notes, are made available via an internal server at the university. A separate UC web system distributes pre-lecture questions and problem sets.
Video and lecture material: what goes where
For Olwin, who provides his lectures as podcasts, the videos are a way to help the students focus on the information being presented during lectures, rather than focusing on note taking. Students can then go back and review the videos and notes he provides to make sure they get everything straight.
In contrast, Miller isn't a big fan of this approach, which he calls "coursecasting." Instead, he views video and audio content as a way to expand his teaching opportunities. "I am in favor of extending the classroom experience," he said, "and podcasting affords me that opportunity. I expect (hope) that students will listen to the Precasts before each lecture because they are designed to give students an idea of particularly important points to watch for." Miller also provides postcasts that cover difficult concepts and recordings of student-led discussions.
The video material plays a similar role for Dr. McKenzie, who started recording the videos in response to being asked the same question by several students in quick succession. By recording a "best case" explanation, McKenzie can spend less time in class on these challenging topics. "Students report spending an average of 4-5 hours a week," he said, "which means I have effectively doubled my lecture time." The videos give his class a chance to make sure they get the key concepts right: "Students report playing and replaying them until they get the material down."
McKenzie had also experimented with lecturecasting, but found that students didn't use those videos very much. Instead, he's focused his videos on two aspects of teaching: basic concepts that students should know coming into class, and complex arguments that might take some repetition for students to come to grips with.
Although he calls his videos "informal [and] nonscripted," Dr. McKenzie does take some pains to make sure the videos are high quality, as he will retake portions or edit them before making them available. Miller views some of his content as a way of correcting for parts of his lectures that might not have gone over smoothly: "That is one of the main functions of both the postcasts and the discussions. Anything I want to correct or expand upon will happen in these two podcasts." Olwin's use of lecturecasting ensures that his content is somewhat informal, as "the podcasts are the informal recording of the entire class: questions, discussions, jokes and all." Because they're capturing an unedited, live performance, they leave no space for corrections: "corrections or misconceptions would be revisited in [a future] class," Dr. Olwin noted.
|
<urn:uuid:2c190159-bca3-4f5a-b396-9b38d0d62795>
|
CC-MAIN-2017-09
|
https://arstechnica.com/features/2007/04/moving-beyond-podcasts/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00441-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.970392 | 909 | 2.640625 | 3 |
Quartz: A Third Generation Display Layer
Quartz was first demonstrated at WWDC in May of 1999, and believe it or not, no new abilities were demonstrated at MWSF 2000. Instead, Jobs showed an actual application of Quartz, and it had a much greater impact than the jargon-filled tech demonstration from 1999. But what is Quartz?
Quartz is what I call a third generation display layer. I'm loosely defining a "display layer" as an operating system's functional interface used by application programmers to draw to the screen. The "third generation" qualifier is also a loose categorization of my own making. I've chosen to define the three generations of display layer technology in order to illustrate the most important changes over the years.
First Generation: Simple Screen Element Addressing
Operating systems with first generation display layers can barely claim to provide a display layer at all. Applications written for such an OS need to specify the position and color of every single element on the screen. The most well-known example is DOS. DOS programs wrote to the screen by specifying a row, column, color, and character. Similar systems include the Apple II, Unix terminals, and virtually every mainframe and minicomputer from the 60's and 70's.
One kind of first generation display layer would require applications to write text by specifying each pixel in each letter (e.g. (0,0), (1,0), (2,0), (1,1), (1,2), (1,3) for a tiny letter "T" below). This is the bitmapped based method.
[Image: the coordinates above plotted on a pixel grid to form a tiny letter "T"]
This is obviously too much to ask of application developers nowadays, and I'm not aware of any such operating systems still being developed. For this reason, most operating systems with first generation display layers use a second method, the "character based" display. An application programmer may specify row 5, column 10, color red, and a capital letter "T" and the operating system renders the character in that position and color (perhaps with some rudimentary style choices like bold or underlined).
Second Generation: Lines, Shapes, and Fonts
Second generation display layers provide primitives (definition here) much more complex than simple screen element addressing (although that ability is retained, of course). For example, a line may be drawn by simply specifying the endpoints (rather than by specifying the position of every screen element in the line as would be necessary in a first generation display layer). Geometric shapes usually have their own primitives: circles may be drawn by specifying a center and a radius, rectangles may be drawn by specifying their upper-left corner, height, and width, and so on. The particular primitives implemented may vary, but the idea is the same: image generation no longer requires the programmer to specify every screen element. He can simply specify a few key screen elements (center, corner, etc.) and some pixel dimensions and the display layer will do the rest.
Most second generation display layers also provide primitives for color, stroke, and fill. For example, the circle below could be drawn by specifying a circle centered at (100,100), radius 80, stroke color red, stroke thickness 5, and fill color blue. A simplified, fictional example of such a function call appears below the image.
circle(100, 100, 80, red, 5, blue);
Drawing text is done in a similar fashion. Parameters include screen position, color, size, style, and typeface (font). The second generation display layer handles the details: reading the font definition file (be it bitmapped or vector) and rendering the characters, complete with basic line and character spacing.
Pre-defined bitmaps may also be loaded into memory and drawn to the screen at specified positions. And all screen drawing may be done in overlapping layers, with the necessary clipping handled by the display layer.
Operating systems with second generation display layers include Windows, classic Mac OS, and the X Window System. All include essentially the same drawing and text primitives (with X being substantially more "primitive" about its primitives than the other two).
In the context of a display layer, a "primitive" is the most basic image element, such as a line, arc, or circle, from which more complex images may be constructed. An application programmer can make a single API call to create any shape for which the display layer provides a primitive function. ↩
|
<urn:uuid:f39f0fe4-cf26-4c42-822c-14332c424df6>
|
CC-MAIN-2017-09
|
https://arstechnica.com/apple/2000/01/macos-x-gui/2/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00493-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.889818 | 947 | 2.921875 | 3 |
NOAA interactive map tracks Gulf oil spill
Online mapping tool allows public to follow closures, spill trajectory
- By Alice Lipowicz
- Jun 17, 2010
The public can now track Deepwater Horizon-BP oil spill recovery data online via a near-real-time interactive map at a new Web site created by the National Oceanic and Atmospheric Administration.
The GeoPlatform Web site, launched June 15, includes regularly updated geospatial data from several federal and state agencies on the oil spill trajectory, closed fishery areas, impact on wildlife and Gulf resources, daily position of research ships, and affected shorelines.
Data published on the site is received from NOAA, the U.S. Coast Guard, the Environmental Protection Agency, the U.S. Fish and Wildlife Service, the U.S. Geological Survey, the Homeland Security Department, NASA and several states.
“This Web site provides users with an expansive, yet detailed geographic picture of what’s going on with the spill,” Jane Lubchenco, NOAA administrator, said in the news release June 15. “It’s a common operational picture that allows the American people to see how their government is responding to the crisis.”
NOAA worked with the University of New Hampshire's Coastal Response Research Center to develop the Web-based geospatial platform designed for federal and local response activities and adapted for public use.
A separate public Web site, Deepwater Horizon Response sponsored by federal agencies, has been operating for several weeks offering news, announcements and information about the disaster. It is operated by the Deepwater Horizon Unified Command, which consists of DHS and the Defense and Interior departments, as well as other federal agencies, BP and other private entities.
In addition, the Unified Command recently set up a Deepwater Horizon Response Facebook page that links to its other Web site.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
|
<urn:uuid:2d836eac-a4da-4a34-a86d-60d091032141>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2010/06/16/noaa-debuts-online-interactive-map-to-track-gulf-oil-spill.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00493-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.93212 | 405 | 2.515625 | 3 |
The blockchain is the underlying technology that enables the bitcoin cryptocurrency to exist. A foundational component of this technology is its complex cryptosystem. The blockchain cryptosystem relies on public key algorithms based on elliptic curves and on message digest functions like SHA-256 and RIPEMD-160. When you create a bitcoin wallet, under the hood you are creating an elliptic curve key pair based on the secp256k1 curve. The key pair has a private key and a public key. The private key is the one you keep secret, and it allows you to sign transactions. For example, when you send bitcoins to someone, you sign the transaction with your private key and then announce it to the network. The miners will pick up your transaction, verify that its signature is valid and broadcast it to the network until enough miners have validated the transaction, thus achieving consensus. The checks and balances of the blockchain ledger are updated and, when consensus is achieved, your transaction is written in “digital stone”.
On the other hand, the public key is the one used to create your bitcoin wallet address. The public key allows you to receive bitcoins. However, your bitcoin wallet address is not your raw elliptic curve public key. There are additional steps performed in order to create an address. First, a digest of your public key is computed using SHA-256 followed by RIPEMD-160. Second, a byte with the network id is prepended to this string. Third, a checksum of this string is computed by performing SHA-256 twice, and the first 4 bytes of the result are appended to the string produced in the second step. This string is encoded in Base58, and that is your bitcoin wallet address. The picture below illustrates these steps in a non-automated way.
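Those steps translate almost line for line into code. The sketch below uses only Python's standard library, assumes the OpenSSL build behind hashlib exposes RIPEMD-160, and expects an already-serialized public key as input.

```python
import hashlib

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check_encode(payload: bytes) -> str:
    """Append a 4-byte double-SHA-256 checksum, then encode in Base58."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    encoded = ""
    while n > 0:
        n, rem = divmod(n, 58)
        encoded = BASE58_ALPHABET[rem] + encoded
    # Each leading zero byte (e.g. the 0x00 mainnet version byte) becomes a leading '1'.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + encoded

def pubkey_to_address(pubkey: bytes, version: bytes = b"\x00") -> str:
    """hash160 = RIPEMD-160(SHA-256(pubkey)); prefix the network id byte,
    then Base58Check-encode the result."""
    sha = hashlib.sha256(pubkey).digest()
    h160 = hashlib.new("ripemd160", sha).digest()  # requires RIPEMD-160 in OpenSSL
    return base58check_encode(version + h160)

# Example: the uncompressed public key corresponding to private key 1
# (its x and y coordinates are the secp256k1 generator point).
g_uncompressed = bytes.fromhex(
    "0479be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"
    "483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8")
print(pubkey_to_address(g_uncompressed))  # prints the well-known address for private key 1
```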
There are many ways to store your bitcoins as well as to create wallets. One of the early methods for creating bitcoin wallets was known as brain wallets. Unfortunately, this user-friendly method allowed you to enter a password or passphrase which was then hashed using an algorithm such as SHA-256 and used as the seed to generate your private key. Due to its popularity and ease of use, many brain wallets were created over the last few years with weak passwords or passphrases, effectively turning the wallet address hashes stored in the blockchain into hashed representations of those passwords or passphrases. This weak way of generating your private key allowed attackers to steal your bitcoins just by doing password cracking against the hashes stored in the blockchain.
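In code, the brain wallet scheme is almost trivially short, which is exactly the problem: anyone who can guess the passphrase can regenerate the private key. A sketch follows; the passphrase shown is a famously reused example.

```python
import hashlib

passphrase = "correct horse battery staple"   # famously reused, and famously drained
private_key = hashlib.sha256(passphrase.encode("utf-8")).digest()
print(private_key.hex())

# The matching public key is the secp256k1 point private_key * G; deriving it
# requires an elliptic curve library (see the dictionary-attack sketch below).
```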
Although this attack has been known for years, it only became widely known recently due to the work of Ryan Castellucci. On 7 August 2015, at DEFCON 23, Ryan released the results of his work cracking brain wallets in conjunction with a tool called BrainFlayer: a proof-of-concept cracker for cryptocurrency brain wallets and other low entropy key algorithms. You can view Ryan's talk here. Two months after the initial release of BrainFlayer, he released a faster version. This was the result of work done by Nicolas Courtois and Guangyang Song of the International Association for Cryptologic Research (IACR) and University College London, together with Ryan, to optimize the speed of computing the secp256k1 bitcoin elliptic curve. This is detailed in the paper Speed Optimizations in Bitcoin Key Recovery Attacks. Furthermore, this year at the Financial Cryptography and Data Security conference, Marie Vasek presented another paper, The Bitcoin Brain Drain: A Short Paper on the Use and Abuse of Bitcoin Brain Wallets. That paper published the results of evaluating 300 billion passwords against the blockchain hashes; its findings on 884 brain wallets that held funds at some point in time suggest they may have been drained by active attackers.
So, how do you perform such an attack?
The attempt to recover a password just by knowing its hashed representation can be made using mainly three techniques. The dictionary attack is the fastest method and consists of hashing each dictionary word and comparing the result with the password hash. Another method is the brute force attack, which is the most powerful one, but the time it takes to recover the password might render the attack unfeasible; this of course depends on the complexity of the password and the chosen algorithm. Finally, there is the hybrid technique, which consists of combining words from a dictionary with word mangling rules.
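Applied to brain wallets, the dictionary technique boils down to deriving a key pair from each candidate passphrase and checking whether its hash160 appears among the funded addresses extracted from the blockchain. The sketch below assumes the third-party ecdsa package for the secp256k1 math and reuses the hashing steps shown earlier; BrainFlayer does the same job far faster in C, uses a bloom filter instead of a Python set, and checks the compressed public key form as well.

```python
import hashlib
import ecdsa  # third-party package, assumed available, for secp256k1 point multiplication

def passphrase_to_hash160(passphrase: str) -> bytes:
    """Brain wallet passphrase -> private key -> uncompressed public key -> hash160."""
    priv = hashlib.sha256(passphrase.encode("utf-8")).digest()
    sk = ecdsa.SigningKey.from_string(priv, curve=ecdsa.SECP256k1)
    pubkey = b"\x04" + sk.get_verifying_key().to_string()
    return hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()

# hash160 values of funded addresses, extracted from allBalances (step 3 below).
targets = set()  # e.g. {bytes.fromhex("..."), ...}

with open("wordlist.txt", errors="ignore") as wordlist:
    for line in wordlist:
        candidate = line.rstrip("\n")
        if passphrase_to_hash160(candidate) in targets:
            print(f"hit: {candidate}")
```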
With that being said let’s go over the steps needed to perform this attack against the Blockchain hash160 hashes using a dictionary. This is done in 6 steps:
- Bootstrap the Blockchain.
- Parse the Blockchain by running Blockparser and get allBalances.
- Extract hash160 strings.
- Create a Bloomfilter.
- Run BrainFlayer with your favorite dictionary.
- Use Addressgen to generate key pair.
The first step is to bootstrap the blockchain. To perform this, we need to download, install and run the bitcoin software on a system connected to the Internet. The system then becomes a node and part of the peer-to-peer blockchain network. The first task performed by the node is to download the entire database of records, i.e., the public transaction ledger, and verify it using the transaction engine. In other words, the node downloads the entire blockchain and verifies the validity of all blocks by performing a series of checks. This needs a lot of bandwidth and computing power. As I write this, the blockchain size is 90.633 GB and contains 438556 blocks. The data contains every transaction that has been made in the blockchain since the genesis block was created on the 3rd of January 2009 at 18:15:05 GMT. Downloading the entire blockchain took me more than 72 hours. The image below illustrates the steps needed to download, install and run the bitcoin software.
Then, the picture below illustrates the steps needed to configure and run the bitcoin software. You can view the progress by executing the getblockchaininfo command and checking the number of blocks that have already been downloaded.
After downloading the entire blockchain, we move on to the second step: parsing the blockchain. The tool that performs this heavy lifting is called Blockparser, a powerful open-source utility written in C++ that was created by Znort987. The tool doesn't seem to be maintained anymore but is still able to perform its work. When Blockparser performs the parsing, it creates and keeps the index in RAM, which means that with the current size of the blockchain you need enough RAM to parse it in a reasonable amount of time. The tool can perform various tasks, but for this exercise we are interested in the allBalances command. To perform the parsing, I used a system with 64 GB of RAM and the process was smooth. I tried it on a system with 32 GB and stopped it due to the heavy swapping that was happening. The allBalances command produced a 30 GB text file. The image below exemplifies these steps.
The third step is to extract the hash160 strings from the allBalances output. We are interested in the hash160 field because it contains the hashed representation of the bitcoin public key. Below you can see the output of allBalances.
In the fourth step, we create a bloom filter with the tool hex2blf, which is part of the brainflayer toolkit. We also need to create a binary file containing all the hashes, sorted, to be used alongside the bloom filter; this reduces the false positives.
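For readers unfamiliar with bloom filters: they are compact, probabilistic sets in which k hash functions set k bits per item, so lookups can return false positives but never false negatives. That is why the sorted file of real hashes is kept around to confirm candidate hits. The toy sketch below only illustrates the idea; hex2blf builds a much larger, carefully tuned filter.

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: k salted SHA-256 hashes set k bits per item."""

    def __init__(self, size_bits: int = 1 << 20, k: int = 4):
        self.size = size_bits
        self.k = k
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for salt in range(self.k):
            digest = hashlib.sha256(bytes([salt]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter()
bf.add(bytes.fromhex("00" * 20))
print(bytes.fromhex("00" * 20) in bf)  # True
print(bytes.fromhex("ff" * 20) in bf)  # almost certainly False
```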
In the fifth step, we launch brainflayer using our favorite dictionary against the bloom filter file we generated in the previous step. If there is a match, you will see the password or passphrase and the corresponding hash. In the output of cracked passwords you will see a C or U in the second column, indicating whether the key is compressed or uncompressed. In the image below you can see these steps.
The sixth and last step is to create the elliptic curve key pair from the recovered password or passphrase. This can be done using the tool Addressgen, created by sarchar. Addressgen is a utility written in Python 3 that generates private keys and their corresponding addresses using secp256k1. This utility will allow you to generate the ECDSA key pair, which can be used to take over the wallet.
Financial gain is a significant incentive for people to perform all kinds of activities in an attempt to steal your coins. If you are interested in attacks against the blockchain, I would suggest looking at the different papers written by Professor Nicolas Courtois and available on his website. On a different note, there are other researchers who are brute forcing the entire bitcoin private key keyspace in order to find private keys for addresses that have funds. One project, code-named Large Bitcoin Collider, is a distributed effort with a pool where people can contribute computing power. The thread on the Bitcointalk forum is quite interesting, and the author states the following aim for the project: “allow the Bitcoin community to actually have a better shot at risk assessment of this threat vector. Right now, the math says the danger is negligible. Should there at some point be evidence or indication of the contrary, then it’s still better to have a project like this for analysis/experimentation of this concrete attack vector”. The author also writes that the project is a derivative of brainflayer and supervanitygen. Moreover, brainflayer can also perform a brute force attack, searching sequentially through the entire private key space. This would be unfeasible to complete in a reasonable time frame, but it is worth viewing the talk “Stealing Bitcoin with Math”, given at HOPE XI by Filippo Valsorda and Ryan, where among other things they show how quickly brain wallets get drained, attacks against newer brainwallet implementations, and other attacks against the Elliptic Curve Digital Signature Algorithm (ECDSA).
Mastering Bitcoin – Unlocking Digital Cryptocurrencies by Andreas M. Antonopoulos
|
<urn:uuid:9a1cfb0e-1e41-47c1-86e6-a701822264de>
|
CC-MAIN-2017-09
|
https://countuponsecurity.com/tag/brainwallet-cracking/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00545-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.942807 | 2,042 | 3.390625 | 3 |
BGP table hitting 512k limit in older routers (posted 2014-08-19)
It's never a good sign when the regular press reports about BGP-related issues, such as last week: Browsing speeds may slow as net hardware bug bites (BBC). The problem is that the BGP table has started to hit the 512k FIB limit in some older routers. Numerous outages and slowdowns were reported to be caused by this, but it's unclear to which degree that's accurate.
So what's a FIB and why would it be limited to 512k (524288) prefixes?
BGP routers actually have (at least) three different tables where IP address prefixes are stored, along with a next hop address: the BGP RIB, the main routing table / RIB and the FIB. The BGP Routing Information Base collects all information received over BGP that's not immediately filtered out. So if you have two ISPs, they will both send all the prefixes for all networks in the world that are currently reachable—which means that your router has two copies of every prefix, one with a next hop address pointing to ISP A and one with a next hop pointing to ISP B.
For each prefix, BGP then decides which path is better and sends the prefix plus next hop address pointing to either A or B to the main routing table. The main routing table also holds non-BGP routing information. In large networks, all the internal stuff and routes for customers can add up to thousands or tens of thousands of routing table entries.
Finally, a Forwarding Information Base (FIB) is constructed from the main routing table that is used to actually forward packets to the router identified by the next hop address. Some routers use regular RAM to store the FIB, others use a Ternary Content Addressable Memory. RAM sizes are pretty large these days and typically don't have a fixed limit, as it's shared by many processes running on a router. But TCAMs are special memories with a tiny bit of processing power. Basically, you can show a prefix to a TCAM and then the TCAM will tell you the address where that prefix is stored—you don't have to search through the memory one step at a time. This means TCAMs are very fast, but they are also more expensive than RAM and they run fairly hot. So TCAM sizes are limited.
Nothing new under the sun
Cisco 6500 and 7600 modular routers/switches used to have supervisor modules with a TCAM limit of 256k. And then in 2008 the routing table grew to 256k, so people had to upgrade in a hurry. If they bought new supervisor modules that can handle 512k, they got six years of use out of those, hence Geoff Huston's statement that "Nothing in BGP looks like it's melting".
Because different networks have different numbers of internal prefixes and there are also slight differences between the number of prefixes each ISP announces to its customers, different people get bitten by the issue at different times. Also, TCAMs are often partitioned into different parts: one for the IPv4 routing table, one for the IPv6 routing table, one for MPLS, one for filtering... In some cases, simply changing the partitioning is enough to get by for a while. Alternatively, it's always possible to filter BGP prefixes. As Randy Bush says:
❝half the routing table is deagg crap. filter it.❞
The trouble is, you then lose connectivity to the filtered prefixes, and there is no obvious way to filter out only the prefixes that are unnecessary deaggregation. If your network is non-huge, the solution is to use a default route pointing to your ISP (or one of your ISPs) as a safety net. What I used to do many years ago, when using severely underpowered routers to run BGP, was simply filter out all AS paths longer than five ASes from both our ISPs. Then, if one ISP had a long path and the other a short one, I'd still have the short path, which I'd want to use anyway. If neither had a short path, chances were it was a non-critical prefix far away, so handling it through a possibly non-optimal default route was unlikely to be problematic.
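Here is a rough Python model of that filtering idea; it is not router configuration, just an illustration, and the prefixes and AS paths are made up.

```python
def accept_route(as_path, max_len=5):
    # Drop anything with an AS path longer than max_len; those destinations
    # will be reached through the default route instead.
    return len(as_path) <= max_len

routes = {
    "192.0.2.0/24":    [64500, 64496],                               # kept
    "198.51.100.0/24": [64510, 64505, 64506, 64507, 64508, 64509],   # filtered
}

kept = {prefix: path for prefix, path in routes.items() if accept_route(path)}
print(kept)
print("everything else follows 0.0.0.0/0 (the default) toward the ISP")
```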
Large networks, however, don't have anyone they can point a default route to, so they have to have more recent routers, and they pretty much always do. Still, it's not unheard of for older network equipment to live out its final years in far-away corners of big networks, so they could still have minor issues.
Although I always warn people during my training courses that they should buy routers big enough to hold the prefixes they'll need for some years to come, I really should have been more explicit and posted a warning here on this site. At least Cisco did: The Size of the Internet Global Routing Table and Its Potential Side Effects.
Geoff Huston expects the IPv4 table to hit 1 million prefixes in 2019 and recommends buying routers that can handle at least 2 million. Unfortunately, it's not always obvious how many prefixes a router can handle, especially if the TCAM is used for more than just the IPv4 FIB. So make sure you know what the limits are before you spend your money. Also, keep an eye on the weekly routing table report so you can take action when the BGP table starts creeping up to your routers' limits.
|
<urn:uuid:b3dbde0a-e3fc-4d83-a886-0e185e1b4b7f>
|
CC-MAIN-2017-09
|
http://www.bgpexpert.com/article.php?article=145
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00489-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.959797 | 1,128 | 2.65625 | 3 |
It’s commonly understood these days that the more factors of identification a user has to provide to an authentication system, the more trustworthy and secure that authentication likely is. Single-factor authentication is usually accomplished by providing something you know, like a password or PIN.
As two-factor authentication became more and more mainstream, the two factors involved have usually been something you know and something you have, like a credit card, a crypto-key USB device, a code generated every so often by an electronic card you keep in your wallet, a smart card that can respond directly to cryptographic challenges, or an RFID or other radio device. The most common use of two-factor authentication is how bank customers authenticate to an ATM: they must provide something they have, their bank card, and something they know, its PIN.
As cheap ways to collect biometrics have begun to emerge, these two factors have begun to shift from something you know and something you have to something you know and something you are. The something you are, generally referred to as biometrics, includes things like your finger or palm print, iris pattern, voice print, or even your DNA. Using something you are to authenticate is obviously less expensive than providing users with something they need to have; however, some more advanced authentication systems now require all three factors for authentication.
Enter the fourth factor of authentication: somewhere you are.
But how do you reliably prove where you are if you’re not authenticating physically in person? And how strong of an identifier is your location anyway?
Some authentication systems, such as those used in online banking and other web applications where the number of users being authenticated makes providing all of them with a hardware device or crypto-card cost prohibitive, have already begun to require a kind of hybrid factor between the second and fourth factors mentioned above. If you have previously been properly authenticated, the web application may create a cookie or some other identifier on your computer system that it can retrieve later, essentially turning your computer or web browser into both something you have and somewhere you are. The cookie itself is the something you have, and these are now generally tied to the network source address of your computer, which is somewhere you are. If this cookie no longer exists, or no longer matches your network source address, the authentication system may ask for additional identifying information to further validate your identity. While not flawless, this approach is a step in the right direction.
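A highly simplified Python sketch of that hybrid check might look like the following; the function names and flow are hypothetical, and a production system would use signed, expiring tokens and more context than a raw address comparison.

```python
import hashlib, hmac, secrets

SERVER_KEY = secrets.token_bytes(32)   # kept secret on the server

def issue_device_cookie(user_id, source_ip):
    # Bind "something you have" (the cookie) to "somewhere you are"
    # (the network source address observed when the device is enrolled).
    msg = f"{user_id}|{source_ip}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def needs_step_up_auth(user_id, source_ip, presented_cookie):
    # If the cookie is missing, or it no longer matches the current source
    # address, ask the user for additional identifying information.
    if presented_cookie is None:
        return True
    expected = issue_device_cookie(user_id, source_ip)
    return not hmac.compare_digest(expected, presented_cookie)

cookie = issue_device_cookie("alice", "203.0.113.7")
print(needs_step_up_auth("alice", "203.0.113.7", cookie))   # False: device recognized
print(needs_step_up_auth("alice", "198.51.100.9", cookie))  # True: step-up required
```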
As GPS devices continue to become smaller and cheaper, the something you have may also begin to double as the somewhere you are, or more specifically, somewhere it is. It stands to reason that if you are authenticating and you have the something you have there with you, then the somewhere it is is equivalent to the somewhere you are. After authenticating to your bank from your home, perhaps via three-factor authentication, you could have this device transmit where it is for use as a fourth factor of identification, further strengthening your provable identity. Authentication systems could also potentially be programmed with the geographical boundaries of a secured area, like a military base or campus, and only allow authentication from wireless devices if they are located within those boundaries.
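As a toy illustration of that geofencing idea, the Python sketch below accepts an authentication attempt only if the reported GPS position falls within a configured radius of a secured site; the coordinates and radius are made-up values.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in meters (haversine formula).
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical secured area: a center point plus an allowed radius.
SITE = {"lat": 38.8977, "lon": -77.0365, "radius_m": 1500}

def location_factor_ok(reported_lat, reported_lon):
    # The "somewhere you are" factor passes only inside the boundary.
    return distance_m(reported_lat, reported_lon,
                      SITE["lat"], SITE["lon"]) <= SITE["radius_m"]

print(location_factor_ok(38.8980, -77.0360))   # inside the boundary
print(location_factor_ok(40.7128, -74.0060))   # far outside, reject
```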
While a user’s location cannot definitively identify a single user, it can, however, provide both context information and relationship information, similar to the concept of authentication groups used in username and password systems. If a user is authenticating from a physically secured area within a military base that only officers with a certain clearance are allowed to access, the location can contextually prove group association, but not who the individual user is.
Thus far, I have only found one company claiming to provide a four-factor authentication system, Priva Technologies. Their Cleared Security Platform, however, does not use somewhere you are as the fourth factor, but rather some proprietary challenge-response between the ClearedKey hardware device (something you have) and the authentication system. Without any detail published about how this works, it is hard to tell if this is truly a fourth factor, or if it falls under the second factor in that it is a property of the hardware device and thus a more robust something you have.
It is also my opinion that the Cleared Security Platform does not even use true three-factor authentication. When authenticating, you really only provide the primary authentication system with something you have, the ClearedKey, and something you know, your password. The third factor, something you are, is actually provided to a separate, secondary authentication system in the ClearedKey itself, presumably preventing the ClearedKey from operating and being used to authenticate to the primary authentication system if the user didn’t biometrically authenticate to it first. Priva markets this behavior as a way to avoid the expense and complexity of maintaining a centralized biometrics database connected to the primary authentication system, which is a fine argument and an attractive goal; however, it technically splits the authentication in two, turning it into a single-factor authentication to the ClearedKey followed by a two-factor authentication to the Cleared Security Platform, since the biometric data itself is never actually sent.
It is important to note that for an authentication system to truly be multi-factor, the factors it requires must come from different categories of identification described above: something you know, something you have, something you are, and somewhere you are. An authentication system requiring two separate passwords or two separate crypto keys is not employing two-factor authentication.
|
<urn:uuid:3b8e937b-7cd9-4920-b671-397e9bbd9d11>
|
CC-MAIN-2017-09
|
https://blog.dustintrammell.com/2008/11/21/four-factor-authentication/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00365-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.943729 | 1,142 | 2.921875 | 3 |
Mobile Technology and Globalization
Mobile technology has not only improved business; it has also transformed personal relationships. Technology is one of the leading factors in the evolution of globalization. But how exactly does mobile technology enable globalization? Here are a few facts.
To begin with, I would like to point out something that may seem obvious but helps achieve a better understanding of the globalization process enabled by mobile devices. Today, the use of the mobile phone is closely related to the use of other modern communication technologies such as the computer, the Internet and e-mail.
Technology has evolved to such an extent that mobile phones are used not only for family or business calls, but they are now a way of living. Mobile phones have become an important tool we can use anytime and anywhere and enable us to do various tasks all at the same time. As a portal for communications, messaging, entertainment and information, mobile phones have become the manifestation of the “digital age”.
Currently, the trend is to expand the traditional cloud services to the mobile world, adapting the cloud services to meet the requirements of the mobile environment. Therefore, new programs are being created just for the mobile technologies. In regards to this aspect, I believe that location-based cloud tools have a high potential for success.
Some major ways in which mobile technologies enable globalization are:
Expanding the productive opportunities of certain types of activities by enhancing social networks, reducing risks associated with employment seeking, and enabling freelance service work;
Transforming and reinforcing the social and economic ties of micro-entrepreneurs and making local economies more efficient;
Enabling mobility of businesses;
Extending the capabilities of mobile phones to entrepreneurship, banking, e-learning, and health delivery systems;
Increasing the frequency of people’s contact with friends, family, and existing business connections and facilitating new contacts with business partners, suppliers, and customers, no matter where they are;
Allowing for organization of activities on the fly.
To conclude my argument, mobile technology will continue to influence and enable globalization. And with the rapid expansion of mobile technologies, people from anywhere around the world will be able to benefit from their advantages. However, there is one last aspect I should bring into discussion: if you use multiple devices, you should always keep in mind security. It is advisable that you use a tool which protects all mobile or desktop operating systems.
Furthermore, the responsibility of the IT department is to implement strong policies for security and data privacy. All employees must be responsible, especially in terms of mobile devices, by making sure that a device does not get lost.
By Rick Blaisdell / Rickscloud
|
<urn:uuid:73b75789-8751-4f1c-97da-3af695187dbd>
|
CC-MAIN-2017-09
|
https://cloudtweaks.com/2012/08/mobile-technology-and-globalization/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00065-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.936512 | 539 | 2.65625 | 3 |
NOAA and partners have launched a comprehensive, user-friendly online resource featuring the latest scientific research conducted within three West Coast national marine sanctuaries.
The Web site, http://sanctuarysimon.org, integrates scientific monitoring data from Gulf of the Farallones, Cordell Bank and Monterey Bay national marine sanctuaries -- three contiguous, federally protected marine areas off California's northern central coast. Developed by the Sanctuary Integrated Monitoring Network (SIMoN), the site makes a wealth of information about the region's marine ecosystem instantly and easily accessible.
"This new SIMoN Web site is a dynamic portal that provides the public and decision-makers with valuable information about one of the planet's richest and most diverse marine ecosystems," said William J. Douros, the sanctuary system's west coast regional director. "This innovative resource will greatly enhance our ability to identify natural and human-induced changes in the marine and coastal ecosystems that our sanctuaries protect."
The site's photo gallery also offers users access to more than 2,800 free, high-quality still and video images, sounds and graphics. Visitors can view the sanctuaries' incredible diversity of marine life, including fishes, seabirds and marine mammals, and explore a wide variety of habitats ranging from kelp forests to submarine canyons. Other sections of the site examine the physical characteristics of the area, including geology, oceanography and water quality.
SIMoN was created in partnership with the regional science and management community to integrate scientific research and long-term monitoring data to provide information needed for effective management and a better understanding of the sanctuary and its resources. With nearly 100 contributing partners already, sanctuarysimon.org will be continuously updated and enriched as additional partners in science and education join the project. Researchers from all over the world can contribute information, which will be authenticated and incorporated into the site's verified pages.
The numerous collaborators involved in the SIMoN project include the U.S. Geological Survey, Monterey Bay Aquarium Research Institute, California Department of Fish and Game, Pt. Reyes Bird Observatory and Cascadia Research Collective.
Stretching from the waters off Bodega Head south to Cambria near San Luis Obispo, Gulf of the Farallones, Cordell Bank and Monterey Bay national marine sanctuaries encompass approximately 7,130 square miles of ocean and estuarine waters.
|
<urn:uuid:5eee1b89-6739-4c8f-a9b1-a9470890ca9b>
|
CC-MAIN-2017-09
|
http://www.govtech.com/e-government/New-NOAA-Online-Research.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00485-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.888714 | 491 | 2.921875 | 3 |
Apple takes steps to improve data center energy efficiency
Monday, Nov 25th 2013
While data centers provide support for a wide variety of critical technological applications and systems, these facilities are also notorious for their electricity consumption.
Data centers require vast amounts of energy to power not only their servers and other hardware, but also the systems utilized to keep these heat-producing devices cool. However, in recent years, data center designers and operators have taken steps to improve the energy efficiency and sustainability of these facilities to significantly mitigate their impact on the surrounding environment.
Because much of the energy consumed by data centers comes as a result of cooling systems, many organizations utilize server room temperature monitoring systems as a means to keep critical devices cool while keeping a watchful eye on their energy consumption. Utilization of temperature monitoring within a data center can allow facility operators to prevent servers from overheating as well as reduce the amount of electricity used in the process.
Additionally, owners and operators of data centers also use renewable energy sources as a means to improve the environmental footprint of their facilities. Apple recently changed the game of renewable energy with their Maiden, N.C., data center.
Providing its own green power source
According to a report by GigaOM contributor Katie Fehrenbacher, Apple was one of the driving forces behind bringing renewable energy resources to North Carolina. During construction planning for its new data center site, the company looked into potential local sources of renewable energy, and when it found that there weren't sufficient resources available, Apple decided to create its own.
Fehrenbacher stated that the local utility provider, Duke Energy, believed that customers would not want to pay the premium that came along with greener energy sources. For this reason, the utility made little effort to provide the area with renewable power sources. However, after Apple's unprecedented move, Duke became more willing to provide environmentally friendly options.
In order to power its newest facility, Apple spent a significant amount to build two solar panel farms and a fuel cell farm. Apple's website stated that the company's 100-acre onsite solar photovoltaic array is the largest in the nation that was created and is owned by the end user. The 20 megawatt facility can produce 42 million kilowatt hours of clean energy, and its second solar farm will go online in late 2013.
The company's fuel cell energy source is a 10 megawatt facility that provides an additional 83 million kilowatt hours of renewable power. In total, the company's clean resources provide 167 million kilowatt hours of electricity, equivalent to the amount needed to power 17,600 homes for one year, stated Apple.
In addition to utilizing clean electricity to power its technology, Apple has also made other efforts to create a greener data center. The facility uses a chilled water storage system and draws upon outside air in addition to cooling system management.
Apple's efforts represent an important step in the push for more energy efficient data centers. While many organizations may not have the resources to create their own renewable energy sources, they can make small efforts, such as temperature monitoring, to improve the power consumption of their facilities.
|
<urn:uuid:75fe8976-fae0-409e-be24-edb1a145af4f>
|
CC-MAIN-2017-09
|
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/apple-takes-steps-to-improve-data-center-energy-efficiency-544979
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00009-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.957543 | 643 | 2.765625 | 3 |
Originally published November 15, 2011
When Confucius was asked what he would do first if he were in power, he responded: “Cleanse the definitions of terms we use!”
According to Confucius, nothing is so destructive for peace, justice and prosperity as confusing names and definitions.
To illustrate the question of what is real and what is not, Plato tells a story about a cave. In the middle of a cave, a number of prisoners sit against a small wall, chained there since childhood. They face one of the walls of the cave. Behind them there is a huge campfire that they cannot see. People walk between the campfire and the prisoners. All the prisoners can see is the shadows of these people on the cave’s wall in front of them. And because of the echo within the cave, even the sounds the people make seem to come from the direction of the shadows. To the prisoners, these shadows are the real world.
Now let’s assume, Plato continues, that one prisoner is released from his chains, gets up and walks around. At first, he will not recognize anything in this new reality, but after time he would adapt. He would understand more about the new world, and perhaps even understand how people walking alongside a campfire cast shadows on the wall. What would happen if he returned to the other prisoners and told them what he has learned? They would ignore him, ridicule him, and if not for their chains, probably kill him.
What Plato is trying to tell us is that a philosopher is like a prisoner freed from the cave, trying to understand reality. On a deeper level, Plato explains that the words we have for things refer to concepts in our mind; in other words, the shadows. We perceive reality through these concepts. Plato’s Cave is an old story, but still told in many variations. For instance, Plato’s Cave provides the philosophical basis for the film The Matrix, in which Morpheus explains to Neo: “How do you define real? If you're talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.”
Plato’s Cave showed that what was reality to the prisoners was nothing but a shadow, covering the “true reality.” For centuries, philosophers have tried to break free of their chains and find the truth. The Enlightenment philosophers believed the world was a large “machine”, and it was man’s purpose to uncover the laws of nature through reason and understand how the world turns. Immanuel Kant spoke of the Categorical Imperative in his search of universal principles to decide what’s right and wrong. Throughout the Middle Ages, truth was a religious principle. Plato believed that everything we saw was just a reflection of an underlying concept, which Kant later called the thing as such (Ding an sich). Every tree is an example of the concept of treeness, every person an example of humanity, every chair an example of chairness.
But the postmodernists try to break away from the idea of truth. In their view, there can only be perception. Everything we perceive comes to us through our senses. What we see, what we feel, what we hear, and so forth. Perceptions can be communicated and shared, but this only means that reality is a social construct and can change anytime. Because perceptions are shared through language, truth and reality are culturally dependent. Think of the legend that Eskimos have nine different words for snow, or that doctors, accountants, lawyers and IT specialists have rich jargons to describe their truths and realities in much more detailed terms than those in other professions. Philosophers that shaped postmodernism include Søren Kierkegaard, Friedrich Nietzsche and Martin Heidegger. Although postmodernism has its critics as well, it is the dominant way of thinking today.
If we look through the postmodern lens, what more would Plato’s cave reveal? Let’s expand Plato’s thought experiment. To my knowledge, Plato never said the prisoners were not able or not allowed to talk. Let them talk, and have them describe what they see. The prisoners sitting on the ends of the row may describe the shadows close to them as very long, while the prisoners in the middle would characterize them as short. The cave is warm to those sitting close to the fire, but cold to those sitting farther away from it. Each would tell a different story. And, just for the sake of argument, let’s bring in time travel and introduce a video camera into ancient Greece. We’ll allow all prisoners to record their view of reality and share those recordings. Whose recording is true? They are all true.1 If they are smart enough, they will detect a pattern if they each describe their reality from left to right. In fact, let’s take the experiment one step further, and allow the prisoners to turn around. They can see the fire and all the people moving through the cave, but are still chained to the wall. They would still each describe a slightly different view on reality.
In short, postmodernists wouldn’t describe truth in terms of the shadows on the wall and their underlying reality; they would describe it in relative terms, that is, relative to other perceivers, regardless of whether they are looking at the shadows or at the real people. In other words, truth is not in the objects we examine, not in the things as such, but in the eye of the beholder.
Although we live in the postmodern world, IT professionals (and many other business professionals as well) are firmly entrenched in classic times. In the tradition of Plato and Kant, there must be a universal underlying truth to things, and all we have to do is apply reason to uncover it. Sure, it may change over time, but hopefully only to move even closer to the “true truth.”
It is in the field of information management that this classic attitude is most visible. Professionals concerned with defining key performance indicators, putting together organizational taxonomies and building data warehouses have been looking for a single version of the truth since the advent of the information management discipline. It seems that most organizations have fundamental alignment issues in defining the terminology they use. In fact, I have formulated a “law” that describes the gravity of the problem: The more a term is connected to the core of the business, the more numerous are its definitions. There might be ten or more definitions of what constitutes revenue in a sales organization, what a flight means to an airline or how to define a customer for a mobile telephone provider.
Few have been successful in reaching one version of the truth. Business managers have fiercely resisted. Machiavelli might have pointed toward political motives of business managers since a single version of the truth would limit their flexibility to choose the version of the truth that fits their story best. However, IT professionals say business managers should see that the benefits of satisfying their own goals are less important than the satisfaction of contributing to the success of the overall organization. In fact, ignoring less important needs for the benefit of higher pleasures is a hallmark of human civilization. So much for civilization if we can’t even achieve this in the workplace.
Are IT specialists fighting windmills like Don Quixote? As I’ve discussed, the philosophers disagree whether there is a single objective truth or not.2 Postmodernists go only as far as to suppose joint observations, but others come to the aid of the classical IT professional. The American philosophical school of pragmatism states that we can call a statement true when it does all the jobs required of it. It fits all the known facts; matches with other well-tested theories, experiences and laws; withstands criticism; suggests useful insights and provides accurate predictions. If this is all the case, what stops us from calling it "true"?
But let’s stick to postmodernism for a while. To explain the failure of reaching a single version of the truth, postmodernists would point to the IT professionals themselves – they are simply misguided.
There actually is a very elegant and simple solution for the "one version of the truth" problem. “What is it?” I can hear you ask. I'll share that with you in my next article.
Recent articles by Frank Buytendijk
|
<urn:uuid:01fab940-82a2-401c-ab6b-a42a0d80401c>
|
CC-MAIN-2017-09
|
http://www.b-eye-network.com/view/15680
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00237-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.963753 | 1,746 | 2.953125 | 3 |
802.11n Unlocks the Potential of the 5 GHz Band
The IEEE 802.11 wireless LAN standards have long included service on multiple frequency bands—2.4 GHz and 5 GHz.
However, largely due to the disappointing coverage of existing 5 GHz products (802.11a), the use of 5 GHz wireless LANs has been limited to a few high-capacity enterprise networks, consumer networks, and wireless backhaul for metropolitan area networks. 5 GHz radio signals just do not propagate as well—particularly indoors, through walls—as 2.4 GHz radio signals. It's basic physics.
For most enterprises, a 5 GHz 802.11a infrastructure required too many access points. When 802.11g came along and delivered the 54 Mbps data rate of 802.11a in the 2.4 GHz band, most networks stayed at 2.4 GHz due to the better coverage of 802.11g access points. One consequence of this is the overloading of the 2.4 GHz band. With only three non-overlapping 802.11 channels in this band, Wi-Fi networks are increasingly contending with their neighbors as well as microwave ovens, cordless phones, and other devices that share this spectrum.
802.11n to the Rescue
802.11n supports both the 2.4 GHz and 5 GHz bands. It has a single MAC protocol that operates over physical layers for multiple frequency bands. And it dramatically improves the range, coverage and throughput in both frequency bands.
| Frequency Band (GHz) | Independent 20 MHz Channels | Possible 40 MHz Channels | Notes |
| --- | --- | --- | --- |
| 5.47–5.75 | 10 | 5 | Indoor/outdoor, dynamic frequency selection and power control |
802.11n makes use of the legacy 2.4 GHz band and constructs three largely non-interfering 20 MHz channels, or one 20 MHz channel and one 40 MHz channel. It is backward-compatible with 802.11b/g stations and channelization. 802.11n also makes use of the existing 802.11a channel set in the 5 GHz band (5.15–5.25, 5.25–5.35, and 5.75–5.85 GHz) to construct twelve non-overlapping 20 MHz channels or as many as six non-overlapping 40 MHz channels. In addition, 802.11n will take advantage of new worldwide regulatory changes making the 5.47–5.75 GHz band available for unlicensed WLAN use.
If 802.11n users could only tap the potential of the 5 GHz band, they would have access to 25 channels in the combined bands—potentially delivering over seven gigabits per second of raw wireless capacity in an enterprise network.
The realization of that potential requires 802.11n to do what seems impossible—substantially increase the range of 5 GHz operation to match the range of 802.11g, while delivering the performance advantages of 802.11n. Novarum decided to test Draft 802.11n products to see if the performance improvements of multiple antennas, smart radios, and multiple spatial streams offered by 802.11n would be enough to overcome coverage limitations of 802.11a.
The Test Setup
To make the comparison, we selected a few standard 802.11g clients (The same clients we use for the Novarum Wireless Broadband Review of metropolitan wireless networks), several after-market 2.4 GHz 802.11n clients, a classic 802.11g access point, the new Apple dual-band draft 802.11n clients (embedded in Intel based MacBooks) and the new dual-band Airport Extreme N access point.
Our testing location is a classic San Francisco Victorian house with four floors and many small rooms. To illustrate the effects of wall and floor penetration, we picked seven locations of gradually increasing distance and numbers of walls and floors between the access point and the client test location. This residence has always needed several 802.11g 2.4 GHz access points to provide adequate Wi-Fi coverage. We used our standard Chariot delay, upstream throughput, downstream throughput test scripts from the Novarum Wireless Broadband Review to capture the data in a consistent fashion.
The pure 802.11n 5 GHz connections (between Apple's MacBook and Airport Extreme access point) had at least three times the throughput of the legacy 802.11g system in all but one location (where all systems performed equally). 802.11n in the 5 GHz band also delivered twice the throughput of 802.11n in the 2.4 GHz band in these tests.
These tests, while not exhaustive, illustrate the potential of 802.11n in the 5 GHz band. The extended range provided by 802.11n overcomes many of the real world deployment challenges of 5 GHz 802.11a networks. Novarum testing of Draft 802.11n products shows that 802.11n operating at 5 GHz will have similar range to legacy 802.11g networks in the 2.4 GHz band—at the maximum data rates.
The combined performance benefits of 802.11n enable more practical enterprise deployments at 5 GHz. The result is that 802.11n will be able to operate effectively across many more channels and therefore deliver much higher capacity in a given area.
While some have proposed that 802.11n will allow enterprise networks to operate with fewer APs, we think a better deployment strategy is to use the same AP density as current 802.11g networks but operate the entire 802.11n network in the 5 GHz band. Legacy 802.11 b/g clients and guest network access should stay in the 2.4 GHz band served by legacy APs or new 802.11n APs operating in legacy mode. With legacy 802.11 a/b/g clients and unknown guest clients isolated in the 2.4 GHz band, the 802.11n network at 5 GHz can operate at the highest possible performance. Our testing shows that the range and coverage of 802.11n will be sufficient to deliver maximum data rates across an enterprise with the access point density that we expect from 802.11g networks.
802.11n will accelerate the transition of wireless LANs from the 2.4 GHz band to the 5 GHz band, and users will benefit from the additional capacity that is available at 5 GHz.
Article courtesy of Wi-Fi Planet
|
<urn:uuid:b344a335-45ec-41f4-930a-781d3d1d6759>
|
CC-MAIN-2017-09
|
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3695106/80211n-Unlocks-the-Potential-of-the-5-GHz-Band.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00237-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.906436 | 1,333 | 3 | 3 |
Finally, a legitimate health benefit to excessive drinking! Well, the author of a new study showing that the more alcohol seriously injured patients had in their blood, the less likely they were to die in the hospital might prefer we not put it that way. "This study is not encouraging people to drink," University of Chicago injury epidemiologist Lee Friedman said in a statement. That, of course, is because drunk people are (among other things) more prone to getting hurt, what with their stumbling around and driving drunk and getting belligerent and what have you. "However," he said, "after an injury, if you are intoxicated there seems to be a pretty substantial protective effect. The more alcohol you have in your system, the more the protective effect." Friedman, an assistant professor of environmental and occupational health sciences at UIC, conducted his study by analyzing Illinois Trauma Registry data for about 190,000 patients treated at trauma centers from 1995 through 2009 and tested for blood-alcohol content upon admission. From UIC:
The study examined the relationship of alcohol dosage to in-hospital mortality following traumatic injuries such as fractures, internal injuries and open wounds. Alcohol benefited patients across the range of injuries, with burns as the only exception.
The benefit extended from the lowest blood alcohol concentration (below 0.1 percent) through the highest levels (up to 0.5 percent).
"At the higher levels of blood alcohol concentration, there was a reduction of almost 50 percent in hospital mortality rates," Friedman said. "This protective benefit persists even after taking into account injury severity and other factors known to be strongly associated with mortality following an injury."
Obviously, as Friedman emphasizes, this isn't a green light to a life of inebriation. Heavy alcohol consumption is extremely dangerous in the short- and long-term. So other than highlighting an interesting phenomenon, how can science use this research? One way, Friedman suggests, is that by understanding the protective effects of booze in situations of bodily trauma, "we could then treat patients post-injury, either in the field or when they arrive at the hospital, with drugs that mimic alcohol." The study is available on the website of the journal Alcohol and will appear in the December issue of the print edition.
|
<urn:uuid:2fc23e71-9f81-4eef-8a7b-b9a33bc5baed>
|
CC-MAIN-2017-09
|
http://www.itworld.com/article/2718310/enterprise-software/injured-hospital-patients-more-likely-to-survive-if-they-re-drunk--research-shows.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00181-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.963973 | 456 | 2.734375 | 3 |
With the prospect of increasing amounts of data being collected by a proliferation of internet-connected devices and the task of organizing, storing, and accessing such data looming, we face the challenge of how to leverage the power of the cloud running in our data centers to make information accessible in a secure and privacy-preserving manner. For many scenarios, in other words, we would like to have a public cloud which we can trust with our private data, and yet we would like to have that data still be accessible to us in an organized and useful way.
One approach to this problem is to envision a world in which all data is preprocessed by a client device before being uploaded to the cloud; the preprocessing signs and encrypts the data in such a way that its functionality is preserved, allowing, for example, for the cloud to search or compute over the encrypted data and to prove its integrity to the client (without the client having to download it). We refer to this type of solution as Cryptographic Cloud Storage.
Cryptographic cloud storage is achievable with current technologies and can help bootstrap trust in public clouds. It can also form the foundation for future cryptographic cloud solutions where an increasing amount of computation on encrypted data is possible and efficient. We will explain cryptographic cloud storage and what role it might play as cloud becomes a more dominant force.
Applications of the Cryptographic Cloud
Storage services based on public clouds such as Microsoft’s Azure storage service and Amazon’s S3 provide customers with scalable and dynamic storage. By moving their data to the cloud customers can avoid the costs of building and maintaining a private storage infrastructure, opting instead to pay a service provider as a function of its needs. For most customers, this provides several benefits including availability (i.e., being able to access data from anywhere) and reliability (i.e., not having to worry about backups) at a relatively low cost. While the benefits of using a public cloud infrastructure are clear, it introduces significant security and privacy risks. In fact, it seems that the biggest hurdle to the adoption of cloud storage (and cloud computing in general) is concern over the confidentiality and integrity of data.
While, so far, consumers have been willing to trade privacy for the convenience of software services (e.g., for web-based email, calendars, pictures etc…), this is not the case for enterprises and government organizations. This reluctance can be attributed to several factors that range from a desire to protect mission-critical data to regulatory obligations to preserve the confidentiality and integrity of data. The latter can occur when the customer is responsible for keeping personally identifiable information (PII), or medical and financial records. So while cloud storage has enormous promise, unless the issues of confidentiality and integrity are addressed many potential customers will be reluctant to make the move.
In addition to simple storage, many enterprises will have a need for some associated services. These services can include any number of business processes including sharing of data among trusted partners, litigation support, monitoring and compliance, back-up, archive and audit logs. A cryptographic storage service can be endowed with some subset of these services to provide value to enterprises, for example in complying with government regulations for handling of sensitive data, geographic considerations relating to data provenance, to help mitigate the cost of security breaches, lower the cost of electronic discovery for litigation support, or alleviate the burden of complying with subpoenas.
For example, a specific type of data which is especially sensitive is personal medical data. The recent move towards electronic health records promises to reduce medical errors, save lives and decrease the cost of healthcare. Given the importance and sensitivity of health-related data, it is clear that any cloud storage platform for health records will need to provide strong confidentiality and integrity guarantees to patients and care givers, which can be enabled with cryptographic cloud storage.
Another arena where a cryptographic cloud storage system could be useful is interactive scientific publishing. As scientists continue to produce large data sets which have broad value for the scientific community, demand will increase for a storage infrastructure to make such data accessible and sharable. To incent scientists to share their data, a publication forum for data sets could be established in partnership with hosted data centers. Such an interactive publication forum would need to provide strong guarantees to authors on how their data sets may be accessed and used by others, and could be built on a cryptographic cloud storage system.
Cryptographic Cloud Storage
The core properties of a cryptographic storage service are that control of the data is maintained by the customer and the security properties are derived from cryptography, as opposed to legal mechanisms, physical security, or access control. A cryptographic cloud service should guarantee confidentiality and integrity of the data while maintaining the availability, reliability, and efficient retrieval of the data and allowing for flexible policies of data sharing.
A cryptographic storage service can be built from three main components: a data processor (DP), that processes data before it is sent to the cloud; a data verifier (DV), that checks whether the data in the cloud has been tampered with; and a token generator (TG), that generates tokens which enable the cloud storage provider to retrieve segments of customer data. We describe designs for both consumer and enterprise scenarios.
A Consumer Architecture
Typical consumer scenarios include hosted email services or content storage or back-up. Consider three parties: a user Alice that stores her data in the cloud; a user Bob with whom Alice wants to share data; and a cloud storage provider that stores Alice’s data. To use the service, Alice and Bob begin by downloading a client application that consists of a data processor, a data verifier and a token generator. Upon its first execution, Alice’s application generates a cryptographic key. We will refer to this key as a master key and assume it is stored locally on Alice’s system and that it is kept secret from the cloud storage provider.
Whenever Alice wishes to upload data to the cloud, the data processor attaches some metadata (e.g., current time, size, keywords etc…) and encrypts and encodes the data and metadata with a variety of cryptographic primitives. Whenever Alice wants to verify the integrity of her data, the data verifier is invoked. The latter uses Alice’s master key to interact with the cloud storage provider and ascertain the integrity of the data. When Alice wants to retrieve data (e.g., all files tagged with keyword “urgent”) the token generator is invoked to create a token and a decryption key. The token is sent to the cloud storage provider who uses it to retrieve the appropriate (encrypted) files which it returns to Alice. Alice then uses the decryption key to decrypt the files.
Whenever Alice wishes to share data with Bob, the token generator is invoked to create a token and a decryption key which are both sent to Bob. He then sends the token to the provider who uses it to retrieve and return the appropriate encrypted documents. Bob then uses the decryption key to recover the files. This process is illustrated in Figure 1.
Figure 1: (1) Alice’s data processor prepares the data before sending it to the cloud; (2) Bob asks Alice for permission to search for a keyword; (3) Alice’s token generator sends a token for the keyword and a decryption key back to Bob; (4) Bob sends the token to the cloud; (5) the cloud uses the token to find the appropriate encrypted documents and returns them to Bob. At any point in time, Alice’s data verifier can verify the integrity of the data.
An Enterprise Architecture
In the enterprise scenario we consider an enterprise MegaCorp that stores its data in the cloud; a business partner PartnerCorp with whom MegaCorp wants to share data; and a cloud storage provider that stores MegaCorp’s data. To handle enterprise customers, we introduce an extra component: a credential generator. The credential generator implements an access control policy by issuing credentials to parties inside and outside MegaCorp.
To use the service, MegaCorp deploys dedicated machines within its network to run components which make use of a master secret key, so it is important that they be adequately protected. The dedicated machines include a data processor, a data verifier, a token generator and a credential generator. To begin, each MegaCorp and PartnerCorp employee receives a credential from the credential generator. These credentials reflect some relevant information about the employees such as their organization or team or role.
Figure 2: (1) Each MegaCorp and PartnerCorp employee receives a credential; (2) MegaCorp employees send their data to the dedicated machine; (3) the latter processes the data using the data processor before sending it to the cloud; (4) the PartnerCorp employee sends a keyword to MegaCorp’s dedicated machine ; (5) the dedicated machine returns a token; (6) the PartnerCorp employee sends the token to the cloud; (7) the cloud uses the token to find the appropriate encrypted documents and returns them to the employee. At any point in time, MegaCorp’s data verifier can verify the integrity of MegaCorp’s data.
Whenever a MegaCorp employee generates data that needs to be stored in the cloud, the employee sends the data together with an associated decryption policy to the dedicated machine for processing. The decryption policy specifies the type of credentials necessary to decrypt the data (e.g., only members of a particular team). To retrieve data from the cloud (e.g., all files generated by a particular employee), an employee requests an appropriate token from the dedicated machine. The employee then sends the token to the cloud provider, who uses it to find and return the appropriate encrypted files, which the employee decrypts using his credentials.
If a PartnerCorp employee needs access to MegaCorp’s data, the employee authenticates itself to MegaCorp’s dedicated machine and sends it a keyword. The latter verifies that the particular search is allowed for this PartnerCorp employee. If so, the dedicated machine returns an appropriate token which the employee uses to recover the appropriate files from the service provider. It then uses its credentials to decrypt the file. This process is illustrated in Figure 2.
Implementing the Core Cryptographic Components
The core components of a cryptographic storage service can be implemented using a variety of techniques, some of which were developed specifically for cloud computing. When preparing data for storage in the cloud, the data processor begins by indexing it and encrypting it with a symmetric encryption scheme (for example the government approved block cipher AES) under a unique key. It then encrypts the index using a searchable encryption scheme and encrypts the unique key with an attribute-based encryption scheme under an appropriate policy. Finally, it encodes the encrypted data and index in such a way that the data verifier can later verify their integrity using a proof of storage.
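The Python sketch below shows one possible shape for such a data processor pipeline, assuming a library such as `cryptography` is available for the symmetric encryption step; the searchable-encryption, attribute-based-encryption and proof-of-storage steps are reduced to placeholders (expanded a little in the sketches further below), so this is an outline rather than a secure implementation.

```python
from cryptography.fernet import Fernet  # pip install cryptography
import hashlib, hmac, os

def process_for_upload(document: bytes, keywords, index_key: bytes, policy: str):
    # 1. Encrypt the document under a fresh, unique symmetric key.
    file_key = Fernet.generate_key()
    ciphertext = Fernet(file_key).encrypt(document)

    # 2. "Searchably" encrypt the index: store only keyed hashes of keywords
    #    (a stand-in for a real SSE scheme; see the index sketch below).
    enc_index = [hmac.new(index_key, kw.encode(), hashlib.sha256).hexdigest()
                 for kw in keywords]

    # 3. Wrap the file key under the decryption policy. A real system would
    #    use attribute-based encryption here; this placeholder just records
    #    the policy next to the (still unprotected) key.
    wrapped_key = {"policy": policy, "key": file_key}

    # 4. Encode for later integrity checks (placeholder tag; see the
    #    proof-of-storage sketch below).
    integrity_tag = hashlib.sha256(ciphertext).hexdigest()

    return {"data": ciphertext, "index": enc_index,
            "wrapped_key": wrapped_key, "tag": integrity_tag}

blob = process_for_upload(b"quarterly report", ["finance", "urgent"],
                          index_key=os.urandom(32), policy="team:finance")
print(blob["index"], blob["tag"][:16])
```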
In the following we provide high level descriptions of these new cryptographic primitives. While traditional techniques like encryption and digital signatures could be used to implement the core components, they would do so at considerable cost in communication and computation. To see why, consider the example of an organization that encrypts and signs its data before storing it in the cloud. While this clearly preserves confidentiality and integrity it has the following limitations.
To enable searching over the data, the customer has to either store an index locally, or download all the (encrypted) data, decrypt it and search locally. The first approach obviously negates the benefits of cloud storage (since indexes can grow large) while the second scales poorly. With respect to integrity, note that the organization would have to retrieve all the data first in order to verify the signatures. If the data is large, this verification procedure is obviously undesirable. Various solutions based on (keyed) hash functions could also be used, but all such approaches only allow a fixed number of verifications.
A searchable encryption scheme provides a way to encrypt a search index so that its contents are hidden except to a party that is given appropriate tokens. More precisely, consider a search index generated over a collection of files (this could be a full-text index or just a keyword index). Using a searchable encryption scheme, the index is encrypted in such a way that (1) given a token for a keyword one can retrieve pointers to the encrypted files that contain the keyword; and (2) without a token the contents of the index are hidden. In addition, the tokens can only be generated with knowledge of a secret key and the retrieval procedure reveals nothing about the files or the keywords except that the files contain a keyword in common.
Symmetric searchable encryption (SSE) is appropriate in any setting where the party that searches over the data is also the one who generates it. The main advantages of SSE are efficiency and security while the main disadvantage is functionality. SSE schemes are efficient both for the party doing the encryption and (in some cases) for the party performing the search. Encryption is efficient because most SSE schemes are based on symmetric primitives like block ciphers and pseudo-random functions. Search can be efficient because the typical usage scenarios for SSE allow the data to be pre-processed and stored in efficient data structures.
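To give a flavor of how a searchable index can work, here is a deliberately simplified Python sketch in which the search token for a keyword is a keyed hash. Real SSE schemes hide considerably more information (for example, repeated search patterns) and are more involved than this.

```python
import hashlib, hmac, os

MASTER_KEY = os.urandom(32)          # Alice's secret, never sent to the cloud

def trapdoor(keyword: str) -> str:
    # Search token: only someone holding the master key can produce it.
    return hmac.new(MASTER_KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_encrypted_index(docs: dict) -> dict:
    # docs maps file_id -> list of keywords; the cloud only ever sees opaque
    # tokens mapped to file ids, never the keywords themselves.
    index = {}
    for file_id, keywords in docs.items():
        for kw in keywords:
            index.setdefault(trapdoor(kw), []).append(file_id)
    return index

# Client side: build and upload the index alongside the encrypted files.
cloud_index = build_encrypted_index({"doc1": ["urgent", "budget"],
                                     "doc2": ["urgent", "travel"]})

# Server side: given a token, return the matching (encrypted) file ids.
token = trapdoor("urgent")            # produced by Alice's token generator
print(cloud_index.get(token, []))     # -> ['doc1', 'doc2']
```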
Another set of cryptographic techniques that has emerged recently allows the specification of a decryption policy to be associated with a ciphertext. More precisely, in a ciphertext-policy attribute-based encryption scheme each user in the system is provided with a decryption key that has a set of attributes associated with it. A user can then encrypt a message under a public key and a policy. Decryption will only work if the attributes associated with the decryption key match the policy used to encrypt the message. Attributes are qualities of a party that can be established through relevant credentials such as being an employee of a certain company or living in Washington State.
Proofs of Storage
A proof of storage is a protocol executed between a client and a server with which the server can prove to the client that it did not tamper with its data. The client begins by encoding the data before storing it in the cloud. From that point on, whenever it wants to verify the integrity of the data it runs a proof of storage protocol with the server. The main benefits of a proof of storage are that (1) they can be executed an arbitrary number of times; and (2) the amount of information exchanged between the client and the server is extremely small and independent of the size of the data.
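Below is a toy Python spot check in the spirit of a proof of storage: the client keeps per-block tags computed under a secret key and later challenges randomly chosen blocks. In this simplified version the server returns the challenged blocks themselves, whereas real proof-of-storage schemes use homomorphic tags so the response stays small no matter how large the file is.

```python
import hashlib, hmac, os, random

BLOCK = 4096

def make_tags(key: bytes, data: bytes):
    # Client-side encoding: one MAC tag per block, retained by the client.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def audit(key: bytes, tags, server_blocks, challenged_indices):
    # Verify the blocks the server returned for the challenged indices.
    for i, blk in zip(challenged_indices, server_blocks):
        expected = hmac.new(key, str(i).encode() + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

key = os.urandom(32)
data = os.urandom(5 * BLOCK)                      # stand-in for the stored file
tags = make_tags(key, data)                       # kept by the client

challenge = random.sample(range(len(tags)), 2)    # random blocks to audit
honest_reply = [data[i * BLOCK:(i + 1) * BLOCK] for i in challenge]
print(audit(key, tags, honest_reply, challenge))  # True

tampered_reply = [b"\x00" * BLOCK for _ in challenge]
print(audit(key, tags, tampered_reply, challenge))  # False
```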
Trends and future potential
Extensions to cryptographic cloud storage and services are possible based on current and emerging cryptographic research. This new work will bear fruit in enlarging the range of operations which can be efficiently performed on encrypted data, enriching the business scenarios which can be enabled through cryptographic cloud storage.
About the Authors
Kristin Lauter is a Principal Researcher and the head of the Cryptography Group at Microsoft Research. She directs the group’s research activities in theoretical and applied cryptography and in the related math fields of number theory and algebraic geometry. Group members publish basic research in prestigious journals and conferences and collaborate with academia through joint publications, and by helping to organize conferences and serve on program committees. The group also works closely with product groups, providing consulting services and technology transfer. The group maintains an active program of post-docs, interns, and visiting scholars. Her personal research interests include algorithmic number theory, elliptic curve cryptography, hash functions, and security protocols.
Seny Kamara is a researcher in the Crypto Group at Microsoft Research in Redmond and completed a Ph.D. in Computer Science at Johns Hopkins University under the supervision of Fabian Monrose. At Hopkins, Dr. Kamara was a member of the Security and Privacy Applied Research (SPAR) Lab. He spent the Fall of 2006 at UCLA’s IPAM and the summer of 2003 at CMU’s CyLab. His main research interests are in cryptography and security, and his recent work has been in cloud cryptography, focusing on the design of new models and techniques to alleviate security and privacy concerns that arise in the context of cloud computing.
|
<urn:uuid:6b24a0e3-7feb-46fd-a7ca-43461eb7efed>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2011/03/11/considerations_for_the_cryptographic_cloud/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00181-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.913988 | 3,282 | 2.859375 | 3 |
NCTA last week filed an ex parte letter with the FCC regarding the so called “white space” devices and interference with cable systems. While much of the media coverage of these devices has focused on interference with broadcast signals, an often overlooked aspect is the negative impact they can have on cable systems.
The good news, however, is we believe there are steps that can be taken by device manufacturers and the FCC to mitigate those concerns and bring these devices to market. The use of white spaces holds promise for new wireless services. And while we support use of this innovative technology, the FCC must first ensure that no harm is done to millions of cable customers.
White space devices, for those unfamiliar, identify and use unused TV channels for transmission of data. They identify the TV channels in use in a given area, and use the unused TV channels within that area for data transfer.
Broadcasters and makers of wireless devices such as microphones are concerned that the devices may not properly identify used TV channels and cause interference with everything from over the air television reception to concert hall sound systems. Testing currently underway gives a certain amount of legitimacy to this fear. Some devices improperly identified every frequency as being in operation or improperly identified frequencies in use as not in use.
Beyond these issues, however, cable subscribers have unique interference issues that can arise from white space devices, and they have gone largely unreported.
- Cable television systems have no ‘white spaces.’ Cable systems use all of the channels in the broadcast television band for the delivery of programming and other services to their customers. As consumers with TVs connected directly to cable (without a set-top) tune up and down the dial, they may experience significant interference as they tune past channels utilized by white space devices.
- The proposed unlicensed TV band devices pose a significant threat to cable’s reception of distant over-the-air television programming at headends. If a white space device operates between a distant broadcast facility and a cable headend, the device may not recognize the distant signal and may prevent the cable headend from receiving it at all.
In many cases, the most serious cable-related concerns about white space devices have to do with the power of the devices. Higher-power "fixed" white space antennas could impact consumers with cable-ready TVs as far as three miles away from the antenna.
The use of white spaces is just one of the innovative solutions that cable and other industries are exploring to provide consumers more access to content when and where they want it. These efforts are exciting, but we should ensure that any new technology does not interfere with the right of consumers to enjoy the services to which they already subscribe.
To help resolve some of these technical challenges, we have proposed some steps that will mitigate the interference from this new technology. These solutions include:
- Restrict the operation of portable devices to a maximum of 10 mW and prohibit transmissions in the VHF channels given the high probability of direct pickup interference to TV receivers.
- Prohibit operations, at a minimum, on channels 2- 4.
- Restrict the operation of fixed devices to at least 400 feet from the external walls of residential buildings.
- Prohibit operation of fixed devices in VHF channels.
- Require spectrum coordination before operation of portable devices on channels adjacent to those being received at headends.
- Of the suggested methods by which fixed and portable devices might automatically determine channel availability, it appears that auto-location (GPS or equivalent), combined with regular access to a reliable database containing geographically-indexed lists of available channels, has the potential to provide the flexibility and reliability required to protect headend reception.
By incorporating these recommendations on the operation of white space devices, we believe the benefits of the technology can be balanced against the probable impact it will have on millions of cable television customers.
|
<urn:uuid:5fbddac7-cbc1-42e8-8ca8-401146b02829>
|
CC-MAIN-2017-09
|
https://www.ncta.com/platform/broadband-internet/broadband/ncta-ex-parte-letter-on-white-space-devices/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00233-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.941922 | 787 | 2.796875 | 3 |
State and local governments and allied community and civic groups made an extraordinary effort to ensure the highest level of participation in the 2000 U.S. census. The stakes were high: The results are used to determine congressional district boundaries and the allocation of hundreds of billions of federal program dollars.
An added, but somewhat underutilized benefit is that the results are available for governments and the public to use however they wish. In fact, census summary results should prove as useful to state and local constituencies as they do to the federal government, informing all sorts of programming and planning decisions. So what data is available, and where can it be found?
The first census results, the populations of the states, were released in December 2000. They have been followed by other population and housing statistics based on a survey of 100 percent of American households. Congressional redistricting data sets mandated by federal law were released in March 2001, followed by Summary File 1, a series of 286 detailed summary-data tables. Summary File 2, 47 tables produced for 250 iterations of race, ancestry and ethnicity, are slated for release this fall.
Data set releases are complemented by statistical briefs, which analyze particular topics and geographic areas, and by demographic profiles, which provide a concise summary of key statistics. Complete information on data products -- data sets, briefs and profiles -- and the release schedule are available online.
Census 2000 summary data sets are released on CD, DVD and the Internet. They are available for download and for interactive search, query and mapping via the Census Bureau's American FactFinder (AFF) Web site. The AFF site, which was built by IBM Global Services on contract to the Census Bureau, is an excellent tool for exploratory analysis, offering statistical tables and thematic maps and interfaces suitable for a range of users from school children to subject-matter experts.
AFF launched in early 1999, offering data from the Census 2000 Dress Rehearsal conducted in April 1998, the 1997 economic census and from the early phases of the Census Bureau's new American Community Survey. In light of early experiences, the bureau and IBM have enhanced the site's appeal, designing a cleaner interface without frames or cookies, refocusing on geographic areas rather than on particular surveys or data sets and adding convenient features such as an address locator mapping street addresses to census geographic areas. The site was also rebuilt to support thousands of simultaneous users. AFF is coded with Java servlets running in the IBM WebSphere application server accessing an Oracle 8i data warehouse. It runs on IBM RS/6000 SP clusters, one serving internal Census Bureau users and the other serving the public.
Production and Analysis
Census 2000 summary data set production poses difficult technical problems, compounded by extreme accuracy, accountability and security needs and a strict release schedule. While data set users will be able to fruitfully work with the data using common desktop software, the steps that the production team went through may be instructive for power users.
The bureau's analysis system runs on an eight-processor IBM RS/6000 M80 with 16GB memory and a four-terabyte disk storage system. The SuperSTAR analytical software suite from Space-Time Research of Melbourne, Australia, forms the heart of the system, providing a graphical user interface for Census Bureau users to compose tables and a fast tabulation engine. Although SuperSTAR is similar to many online analytical processing tools, it may be unique in its combination of ease-of-use and suitability for both ad-hoc and large-scale analysis of microdata classified according to large, hierarchical dimensions. For instance, census data are summarized according to geographic hierarchies with multiple branches that include up to eight levels between the highest and lowest levels and up to 750,000 elements in the case of Texas, the state with the largest number of census geographic areas.
The analysis system uses SAS for data preparation and output processing, both of which involve extensive checks of data consistency and accuracy. Although scripting tools such as Perl or Python could fill these roles, SAS is more "data aware." In addition, SAS macro language can be and is used fairly easily to write dynamic programs driven by parameter files that describe data products.
The Unix platform offers the highest level of reliability given a heavy, heterogeneous processing load. A data set like Summary File 1 takes about two months to compute on a fully loaded machine running SuperSTAR tabulations, Java/JDBC code building SuperSTAR databases, SAS programs, shell scripts, interactive sessions, monitoring utilities and nightly backups and supporting remote SuperSTAR query clients.
For census data users, understanding the data sets is the first hurdle. Refer to the data-product documentation, available online as a PDF file, to find out about statistical-table contents, data set record layout and field definitions. The product documentation also provides value lists, survey accuracy, geographic coverage and other important background information. Unfortunately, a "geographic identifier" is the only metadata -- data describing the data sets and their contents -- provided by the Census Bureau in a format suitable for direct loading to analysis tools, so users will need to build data dictionaries and extraction, transformation and loading procedures.
Power users will need to master data concepts and data set formats, and find capable analysis tools. Both SuperSTAR and SAS can handle sparse output data sets with tens of thousands of fields and millions of records. (Sparse data has a very high proportion of zero values.) But not every tool has this capability: Tables in an Oracle 9i database, for example, are limited to 1,000 fields. You can successfully work with census summary data using desktop spreadsheet and database tools -- the data sets are partitioned into segments of 256 or fewer fields to ease the way for desktop-tool users -- but you'll have to be careful to load only subsets that cover your topical and geographic interests.
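As an illustration of that kind of subsetting, here is a small Python sketch using the pandas library. The file name, column positions and field names are hypothetical placeholders rather than the real Summary File 1 layout; the actual positions come from the data dictionary you build from the product documentation.
import pandas as pd
# Hypothetical example: pull three fields from one data segment by position.
segment = pd.read_csv(
    "sf1_segment01.csv",        # placeholder file name for one delimited segment
    header=None,                # segment files carry no header row
    usecols=[4, 86, 87],        # e.g. logical record number plus two counts
    dtype=str,
)
segment.columns = ["LOGRECNO", "P001001", "P002001"]
# Keep only the geographic areas of interest, identified by logical record number.
wanted = {"0000001", "0000002"}   # placeholder record numbers
subset = segment[segment["LOGRECNO"].isin(wanted)]
print(subset)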
Many difficult usage issues are out of product-documentation scope, including establishing comparability between 2000 and 1990 results. The biggest comparability challenges are dealing with redrawn geographic boundaries, changes in racial classifications and figuring out how to map 1990 numbers to 2000 numbers. The 2000 survey was the first decennial census to allow individuals to be reported as belonging to more than one race.
Although planning for the 2010 census has already started, there's more to look forward to before the next decennial round. Notably, the Census Bureau has created the American Community Survey (ACS), a "continuous measurement" instrument designed to replace the decennial-census long form. ACS began with a 1996 demonstration survey of four localities; a 1999 survey of 31 sites kicked off a phase comparing ACS and Census 2000 results. The full program will launch in 2003 with a sample of more than three million households covering every county in the United States. Once sufficient data is accumulated, ACS will facilitate creation of small-geographic-area demographic statistics that will be of great use to local governments. And the Census Bureau will conduct the next economic census, a survey of American businesses, in 2002. Both ACS and economic census results are disseminated through American FactFinder alongside decennial census data.
Census 2000 summary data should prove invaluable to state and local organizations and to the public. There's a lot available already and more to come, yours to explore and download.
|
<urn:uuid:c6c6c871-97fc-4ce9-b44d-1bfe5e8f80da>
|
CC-MAIN-2017-09
|
http://www.govtech.com/magazines/gt/Making-Sense-of-the-Census.html?page=1
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00105-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.91805 | 1,479 | 2.6875 | 3 |
Russia on Wednesday took a step toward protecting private data by ratifying the so-called Convention 108, established in 1981 and legally binding in 45 countries.
The convention was set up to safeguard the privacy of private citizens. It comprises rules ensuring that data is processed fairly and through procedures established by law, for a specific purpose; that information is stored for no longer than is required for this purpose; and that individuals have a right to have access, rectify or erase their data.
Signatory states must also set up an independent authority to ensure compliance with data protection principles and to help prevent any abuses.
The decision to implement the convention was sent to the Council of Europe by the Permanent Representative and Ambassador of the Russian Federation to the CoE, Alexander Alekseev. It will enter into force on Sept. 1, making Russia the 46th member of the convention.
According to the CoE, the text is drafted in a technologically neutral style and has the potential to become a global standard regardless of technological advances. However it is currently being updated to ensure that its data protection principles are still valid for new tools and practices.
|
<urn:uuid:50206668-e47b-4097-9a47-834209793e14>
|
CC-MAIN-2017-09
|
http://www.itworld.com/article/2710588/security/russia-signs-international-privacy-pact.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00105-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.96371 | 228 | 2.578125 | 3 |
Last year I wrote an article on the consumerization of energy in which I predicted that “Distributed energy technologies… will soon be able to provide electricity at costs and reliability levels that are competitive with grid power. For the first time in 100 years these technologies will enable consumers to bypass their local electric utility company.” This article examines what has and hasn’t changed in the intervening year.
Let's start with one of the biggest factors driving this trend, one that hasn't changed, which is the availability of abundant cheap natural gas in the U.S. Natural gas spot prices generally stayed below $3.50/mmBtu in 2012, reaching a low of $1.95/mmBtu in April, and prices are expected to remain low for the foreseeable future according to EIA forecasts. Shale gas continues to revolutionize the U.S. energy industry, driving the shift from coal to natural gas in traditional power generation, spurring interest in LNG exports, reviving plans for natural gas vehicles and making it more difficult for renewable energy to compete with fossil fuels without subsidies. Distributed generation technologies like fuel cells and turbines that use natural gas as a fuel will continue to benefit from low fuel prices, making these resources increasingly attractive from an energy cost perspective.
The first and most important actual change in 2012 was the impact of severe weather events on grid reliability - specifically the impact from Hurricane Sandy which devastated large portions of New York and New Jersey. It's fair to say that Sandy was the watershed event that is forcing utilities, governments and energy consumers to rethink the concept of reliability in an era of increasingly destructive and more frequent severe weather events. Many articles and studies have pointed out that the grid is not adequately designed, built and operated to withstand these types of storms. However, measures to harden the grid by burying or relocating transmission and distribution infrastructure are very costly and may not resolve the issue. Indeed, a recent New York Times article pointed out that ConEdison "expects to spend as much as $450 million to repair damages to its electric grid in and around New York City." Typical residential bills "would have to rise by almost 3 percent for three years to cover those expenses alone. Putting all of its electric lines underground would cost around $40 billion, the company estimates. To recover those costs, electric rates would probably have to triple for a decade or more." However, a much greater use of distributed energy resources integrated as part of smart buildings and community micro-grids could be a much better solution to reliability in the face of severe weather events. For example, New York University's recently commissioned combined heat and power (CHP) plant remained operational during Hurricane Sandy while the surrounding areas of lower Manhattan lost power. NYU's CHP plant uses natural gas and steam turbines to provide electricity to 22 buildings and heat to 37.
Another change worth noting has been the increasing popularity of fuel cells. Like NYU's CHP plant, at least two major fuel cell installations remained operational during Hurricane Sandy. Delmarva Power had one such installation and stated that its "Bloom Energy Servers in New Castle, Delaware rode through Hurricane Sandy without incident and continued to feed power to the regional power grid despite all the challenges the storm presented." The other installation was a UTC Power PureCell system installed at 1211 Avenue of the Americas that powers part of News Corp. headquarters. It is notable that UTC Power is being acquired by another fuel cell company, ClearEdge Power, creating a fuel cell solution provider capable of serving a range of residential, small business and large enterprise customers. Also notable is the traction that Bloom Energy has been gaining, particularly with mission critical facilities like data centers. In 2012 Bloom Energy signed a landmark deal with eBay wherein the fuel cells will be the primary energy source for its new data center in Utah. Bloom also announced additional deals with AT&T, making it Bloom's largest non-utility customer.
My (unofficial) prediction is that the combination of low natural gas prices, severe weather events and advances in fuel cell and CHP technologies will be the primary drivers going forward for the consumerization of energy.
|
<urn:uuid:170e7b8f-dbe5-4b4e-a553-f6bce3e92a55>
|
CC-MAIN-2017-09
|
https://idc-community.com/energy/smart-grid/theconsumerizationofenergyrevisited
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00457-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.961641 | 840 | 2.609375 | 3 |
A valid password requires a mix of upper and lower case letters, digits, and other characters. You can use a 7-character long password with characters from at least three of these four classes, or a 6-character long password containing characters from all the classes. An upper case letter that begins the password and a digit that ends the password do not count towards the number of character classes used. It is recommended that the password does not contain the username.
A passphrase must contain at least 3 words, be 8 to 40 characters long, and contain enough different characters.
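To make the password rule concrete, here is a small Python sketch that applies the length and character-class logic exactly as described above. It is only an illustration of the stated policy, not the actual pam_passwdqc implementation, and the sample passwords are made up.
def character_classes(password):
    # Per the rule above, a leading upper-case letter and a trailing digit
    # do not count toward the number of character classes used.
    core = password
    if core and core[0].isupper():
        core = core[1:]
    if core and core[-1].isdigit():
        core = core[:-1]
    classes = set()
    for ch in core:
        if ch.islower():
            classes.add("lower")
        elif ch.isupper():
            classes.add("upper")
        elif ch.isdigit():
            classes.add("digit")
        else:
            classes.add("other")
    return len(classes)
def password_ok(password):
    n = character_classes(password)
    return (len(password) >= 7 and n >= 3) or (len(password) >= 6 and n == 4)
print(password_ok("xQ5&rt7"))   # True: 7 characters drawn from several classes
print(password_ok("Secret1"))   # False: only lower case remains once the leading capital and trailing digit are discounted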
The /etc/security/login.map file contains the authentication rules for ESX/ESXi. Refer to this file to determine which file to edit in the workaround. By default it contains entries like:
vpxuser : system-auth-local
* : system-auth-generic
These entries mean that the vpxuser account is authenticated with system-auth-local, while system-auth-generic is used to authenticate all other users. If system-auth-generic is not present on the system, the /etc/security/login.map file typically lists system-auth instead.
Caution: Modifying password restrictions may reduce the security of your VMware environment.
VMware ESX 4.x uses the PAM module pam_passwdqc.so. For additional information about this module and the different syntax, see the pam_passwdqc man page.
To disable the restriction:
Edit the /etc/pam.d/system-auth-generic file and locate this line:
password required /lib/security/$ISA/pam_passwdqc.so min=8,8,8,7,6 similar=deny match=0
Change it to:
password required /lib/security/$ISA/pam_passwdqc.so min=0,0,0,0,0 similar=deny match=0
password required pam_cracklib.so try_first_pass retry=3
VMware ESXi/ESX 4.1 and ESXi 4.0 use the pam_passwdqc.so module to check password strength. By default, it uses these parameters:
pam_passwdqc.so retry=3 min=8,8,8,7,6
To modify these settings on an ESX/ESXi 4.1.x host:
For more information on Tech Support Mode, see the VMware Knowledge Base.
Open the /etc/pam.d/system-auth file using a text editor. For example, to open the file with the vi editor, run this command:
vi /etc/pam.d/system-auth
Note: You are changing the min values to match the password policy you want to enforce. For additional information about this module and the different syntax, see the pam_passwdqc man page.
chmod +t /etc/pam.d/system-auth
To modify these settings on an ESXi 5.0 host:
password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=8,8,8,7,6
password requisite /lib/security/$ISA/pam_passwdqc.so retry=N min=N0,N1,N2,N3,N4
retry=3: A user is allowed 3 attempts to enter a sufficient password.
N0=12: Passwords containing characters from one character class must be at least twelve characters long.
N1=10: Passwords containing characters from two character classes must be at least ten characters long.
N2=8: Passphrases must contain words that are each at least eight characters long.
N3=8: Passwords containing characters from all three character classes must be at least eight characters long.
N4=7: Passwords containing characters from all four character classes must be at least seven characters long.
password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=12,10,8,8,7
|
<urn:uuid:9f2a512a-4e8b-4d58-81b3-0b4def603c93>
|
CC-MAIN-2017-09
|
https://www.247rack.com/dashboard/knowledgebase.php?action=displayarticle&id=75
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00225-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.789351 | 860 | 2.625 | 3 |
If you search around online, you might find different technical documents describing deduplication and how it works, but most of these documents are fairly technical. I thought it might be helpful to describe deduplication in a way that would make sense to anyone who wants to understand what it does.
NetApp’s deduplication (also referred to also referred to as A-SIS) is a storage efficiency feature. Storage efficiency simply means that NetApp uses this offering to help you maximize the amount of available free space on your storage system. Which in turn means that you spend less money on disk drives.
Without going into a huge amount of technical detail, I will give you an example. Let's say you have a version-controlled, 10-page document and there are 10 versions of that document on your storage system. If each page were 1 MB in size, each document would be a total of 10 MB in size. Multiply 10 MB by 10 documents and that's 100 MB of total space used to store multiple versions of the same document.
If only one page is different between each version of the document, and you only saved changes for each version and not the entire document, then the first document would be 10 MB and each revision would be 1 MB, making your total storage needs 19 MB instead of 100 MB. The process of reducing the total storage space required for these documents from 100 MB to 19MB is deduplication.
Deduplication looks at each version of the document, saves only the unique content from each revision, and uses metadata to point to the original content that these documents have in common. So when you retrieve a unique version of the file, the file system returns the shared data from the original file along with the unique content from the version of the file you requested.
All of this is really done at block level not file level and there is a lot of additional technical detail as to exactly what happens, but in layperson terms that is how deduplication can help you maximize your available storage space. Keep an eye out for our future blog posts as we explain each of NetApp’s advertised storage efficiencies.
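As a toy illustration of that block-level idea, the Python sketch below fingerprints fixed-size blocks and stores each unique block only once. The 4 KB block size and SHA-256 fingerprints are arbitrary choices for the example, not a description of how NetApp's A-SIS actually works; the file sizes mirror the 100 MB to 19 MB scenario above.
import hashlib
import os
BLOCK_SIZE = 4096  # arbitrary block size for this illustration
def dedupe_blocks(files):
    # Return (logical_bytes, physical_bytes) after block-level deduplication.
    stored = set()            # fingerprints of blocks that have already been written
    logical = physical = 0
    for data in files:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            logical += len(block)
            fingerprint = hashlib.sha256(block).digest()
            if fingerprint not in stored:   # only previously unseen blocks consume space
                stored.add(fingerprint)
                physical += len(block)
    return logical, physical
MB = 1024 * 1024
shared = os.urandom(9 * MB)                                  # content common to every version
versions = [shared + os.urandom(1 * MB) for _ in range(10)]  # ten 10 MB versions
logical, physical = dedupe_blocks(versions)
print(logical // MB, "MB logical ->", physical // MB, "MB physical")  # 100 MB logical -> 19 MB physical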
Check out the de-duplication calculator at http://www.dedupecalc.com/ to see the potential cost and space savings you can achieve by using deduplication in your environment.
|
<urn:uuid:b3e6fca8-5379-4cd8-8e96-b19a2cc773ff>
|
CC-MAIN-2017-09
|
http://www.fastlaneus.com/blog/2010/04/14/a-non-technical-explanation-of-netapp-deduplication/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00097-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.9342 | 488 | 2.515625 | 3 |
NASA took a big step this week in its effort to launch a spacecraft to Mars this fall.
Just as NASA marked the one-year anniversary of its rover Curiosity's arrival on the Martian surface, the space agency took a big step this week in its effort to launch a spacecraft to Mars this fall.
NASA's Mars Atmosphere and Volatiles Evolution (MAVEN) spacecraft has been moved into a clean room at the Kennedy Space Center in Florida and now is going through final preparations for a scheduled November launch.
The spacecraft is undergoing detailed testing and fueling prior to being moved to its launch pad, NASA said.
The mission, which will focus on studying Mars' atmosphere, climate history and potential habitability, has a 20-day launch period that opens Nov. 18.
Maven's work is expected to help scientists reconstruct the Red Planet's past climate.
"Maven is not going to detect life," Bruce Jakosky, a planetary scientist and Maven's principal investigator, said in a written statement. "But it will help us understand the climate history, which is the history of its habitability."
Recent discoveries of evidence of ancient water flows on Mars, along with the discovery of chemicals needed to sustain life as we know it in Martian soil, have scientists saying that Mars used to be a blue planet - much like Earth. Today, however, Mars is a cold, dusty planet.
One question to be answered is what happened to change Mars and if the same thing could happen on Earth.
NASA noted that its scientists hope Maven will give them clues to help them understand how the loss of Mars' atmospheric gas may have played a part in changing the planet's climate.
"We're excited and proud to ship the spacecraft right on schedule," said David Mitchell, Maven project manager at NASA's Goddard Space Flight Center. "But more critical milestones lie ahead before we accomplish our mission of collecting science data from Mars. I firmly believe the team is up to the task. Now, we begin the final push to launch."
The Maven spacecraft was transported to the Kennedy Space Center from Buckley Air Force Base in Aurora, Colo., on Friday, Aug. 2, aboard a U.S. Air Force C-17 cargo plane. According to NASA, Lockheed Martin Space Systems in Littleton, Colo. designed and built the spacecraft and is responsible for testing, launch processing, and mission operations.
"It's always a mix of excitement and stress when you ship a spacecraft down to the launch site," said Guy Beutelschies, Maven program manager at Lockheed Martin. "It's similar to moving your children to college after high school graduation. You're proud of the hard work to get to this point, but you know they still need some help before they're ready to be on their own."
NASA's Mars Atmosphere and Volatiles Evolution (MAVEN) spacecraft will study the planet's atmosphere, climate history and potential habitability. (Image: NASA)
This story, "NASA preps for November launch of Maven spacecraft for Mars" was originally published by Computerworld.
|
<urn:uuid:9318a13e-9e8e-4e1b-b7d3-4468c0105008>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2167254/data-center/nasa-preps-for-november-launch-of-maven-spacecraft-for-mars.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00449-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.952964 | 712 | 3.296875 | 3 |
Cell phones, appliances, cars. Batteries are everywhere! For the most part, if a device contains a battery, it is considered an electronic.
Responsible and environmentally sound battery handling is key in electronics recycling. But, how can you know how to properly dispose of something if you aren’t aware of its affects on the environment?
Batteries are produced from a combination of heavy metals, typically lead, mercury, cadmium or nickel. They react with an electrolyte to produce electricity.
Battery recycling is one of the most successful recycling businesses in the world, and one of the most harmful when done irresponsibly.
The heavy metals are what makes batteries a danger to the environment.
Responsible recyclers understand the aftereffects of battery lead being absorbed into drinking water: lead poisoning. It doesn't stop there. Incinerating batteries releases the dangerous toxins in their heavy metals into the air.
Some companies believe that by sending e-waste abroad to developing countries the problem disappears. Unfortunately, countries like Mexico have disposal standards only one-tenth as strict as those in the U.S. The lax regulations lead to growing lead emissions onto factory floors and into the air.
If electronics recyclers can’t simply throw them out, burn them or send them away, what CAN they do to responsibly recycle batteries?
Along with mobile phones, batteries have improved significantly over the years. Old batteries, typically made before 1997, should ALWAYS be responsibly recycled. They can contain as much as ten times the mercury of newer versions made after Congress mandated a mercury “phase-out,” according to an article about battery disposal.
Rechargeable batteries, most commonly in cell phones, tablets and laptops, are of greater concern nowadays because they still contain heavy metals detrimental to the environment.
Compliance with government regulations is essential for electronics recyclers and ITAD organizations. Many states have initiated laws that make responsible battery recycling mandatory. Fortunately, lead, the leading component in batteries, is easily recycled into new batteries or other lead products. An article by Green Living reported that “roughly 60 to 80 percent of the components in a brand-new battery are created from recycled lead and plastics.”
HOBI President Craig Boswell will share his knowledge on safe battery handling at the ISRI Annual Convention on April 11th.
|
<urn:uuid:f48870e4-5545-432a-a662-0bf0247f9b2a>
|
CC-MAIN-2017-09
|
https://hobi.com/safe-battery-disposal/safe-battery-disposal/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00446-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953926 | 488 | 3.890625 | 4 |
In much the same way the internet transformed the banking industry through the introduction of online banking, big data stands to revolutionize how loans are handled. Think about the headaches you have to go through when getting a loan for a car, home, or new business. There’s bound to be piles of paperwork you need to fill out, some of which may require a law degree to fully understand. That’s not to mention the discussions you’ll need to have with loan officers and the many visits you’ll need to pay to the bank. Much of that is changing now as big data gets used for approving people for loans. It’s a new way to evaluate risk that helps give people with no credit the chance to get those loans that can change their lives.
When speaking of online lending using big data, it’s important to point out that one of the major benefits not only means getting approved for a loan when you otherwise might not. With big data, loans can get approved much more quickly, bypassing the hours usually needed in the more traditional way. It’s a quick process that in some instances can actually lead to lower interest rates when compared to market averages. Needless to say, it’s an appealing option that many will be drawn to, especially younger generations used to conducting their business in digital form.
Banks and lending institutions are not charities, though. When dealing with loans, the name of the game is risk. These organizations want to evaluate how likely you are to repay the money they give you, plus interest. That’s where big data plays its biggest role. Whereas most banks and credit unions will look at your credit score to determine how risky lending money to someone is, many startup lending companies use different — and some would say unorthodox — methods. With a combination of big data and machine learning capabilities, they can figure out the likelihood you’ll repay a loan through some unlikely factors that you may not have thought of.
Have you ever given much thought to the time of day you ask for a loan? What about how many emails you send out every day? Did you know that your Facebook friends could determine how likely you are to repay a loan? Some of these items may seem unimportant, but through big data analytics, experts have found that they provide signs of whether lending money to a person or business is risky. As more institutions embrace big data and Hadoop on cloud, they’re finding that these seemingly innocuous elements may be more accurate than the usual credit score in determining repayment reliability. If it takes you a while to input an email address, for example, big data has shown that may be indicative that you’re using a new email for the express purpose of applying for the loan (which is usually not a good sign). And they’ve also found that Apple users are less risky to lend to. Make of that what you will.
Based off of these and other factors, people are able to be approved for online loans, even if they don’t have a credit score. From this perspective, it’s easy to see how big data can be seen as a blessing for those who would be shut out of the process under normal circumstances. As helpful as these types of loans can be, it’s important to note that for now, most of them are designed for the short-term, and even though interest rates can be lower, some of those same factors may lead to loans with higher rates. It’s all done in a case by case basis, but when these alternative lenders approve up to 60 percent of loans for small business when the average is 20 percent for other organizations, the option can’t be dismissed.
The use of big data in this way, however, is not without controversy. Being denied a loan simply because you're friends on Facebook with the wrong people strikes many as absurd. Not to mention that this delves into personal details more than usual, which may feel like an intrusion on privacy. Despite these concerns, big data is clearly the future for loans. Big banks are no stranger to big data analytics, so it's likely only a matter of time before they adopt similar processes.
Rick Delgado is a technology commentator and freelance writer.
|
<urn:uuid:f697ca85-6898-47da-9c4e-2e67fd0a15b4>
|
CC-MAIN-2017-09
|
http://data-informed.com/how-big-data-will-transform-the-lending-industry/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00146-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.95734 | 905 | 2.609375 | 3 |
If you write software that handles text, you should expect to have your fundamental assumptions about how text works shaken up about once per decade.
It’s happening again around now. Suddenly, a large volume of text contains characters that not all software is ready to support. Three years ago, these characters didn’t exist, but they’ve been adopted so quickly that, on Twitter, they’re now more common than hyphens.
These characters aren’t produced by geeks who are trying to push your software to the limit, or by foreigners who won’t speak proper (insert your native language here). They are produced by non-technical users who are speaking your language and who have probably never heard of Unicode. They’re just trying to say things like:
The steady march of progress 🐌
People used to write programs that assumed that all text was in English (or Russian, or Japanese, or whatever language the programmer spoke). Then they bolted on extensions to their language’s character set to account for a few other languages. All of these extensions were, of course, incompatible with each other.
The need to share text worldwide made this unsustainable. In the 2000s, many programmers finally transitioned to using Unicode and representing it using UTF-8. And then, except for the need to drag some recalcitrant programmers kicking and screaming into the 21st century, all was well. Now every computer can represent any text in any language in a consistent way. Right?
There are MORE characters‽ 😧
What I’ve seen is that people who used to assume incorrectly that there were only 256 possible characters, now assume incorrectly that there are only 65,536 possible characters. This was true in Unicode for a brief window of time, but it quickly expanded far beyond that, once people realized that you can find more than 65,536 different characters in East Asia alone. The first 65,536 characters are called the “Basic Multilingual Plane”, and the regions above that are referred to informally as “astral planes” inhabited by “astral characters”.
There is a lot of software out there that does not understand astral characters. Even in my preferred programming language, Python, the support for astral characters is inconsistent up until version 3.3 (which not many people use yet).
I’ve seen discussions among developers who are competent at using Unicode that dismiss astral characters as somebody else’s problem. A StackOverflow question got a response along the lines of, “But seriously, what are you using astral characters for? Are you a linguist studying dead languages or something?”
Welcome to 2013. The Unicode standard added a large range of pictographic characters in 2010 called “emoji”. Emoji were originally used on Japanese cell phones, but now they’re used at a higher rate outside of Japan, mostly because they’ve been embraced by iOS. Almost all emoji are outside of the Basic Multilingual Plane.
Some chat software has supported replacing ASCII smileys with pictures for a long time (often unexpectedly and irritatingly), but that’s not as good a solution as having them be real Unicode text. When they’re Unicode, you can copy and paste them, save them, and generally use them in any way you would expect text to work. If they’re images that are added in by the software, you can’t be sure how they’ll work.
If you write or depend on software that thinks the Basic Multilingual Plane is all there is, then emoji will break your code. The code may replace them with nothing, or meaningless boxes, or garbage characters (mojibake), or it may even crash.
Emoji are no longer rare edge cases. On Twitter — which is of course not representative of everything, but is a really good public sample of how people communicate worldwide — emoji in the astral plane appear in 1 out of 20 tweets, and more frequently than 1 in 600 characters. You can see for yourself on emojitracker, a site that catalogues all the emoji on Twitter as people tweet them.
To put that in perspective:
- Astral characters representing emoji are, in total, more common than hyphens.
- They’re also more common than the digit 5, or the capital letter V.
- They are half as common as the # symbol. Yes, on Twitter.
- The character 😂 alone is more common than the tilde.
- “That’s not possible,” you might say. “I have a ~ key right here on my keyboard, and I don’t have a 😂 key.” But iPhones do have a 😂 key. It’s easier to find than ~.
👉 Why this is important
People are expressing their emotions in a single character, in a way that is understandable in any language. It’s apparently 1/600 of the way people want to communicate. Don’t just throw that away.
If you think you’re too serious and professional to worry about emoji, consider business software such as Campfire, Trello, and GitHub, which have all added some joy to their user experience with excellent support for emoji (including shortcuts to help type them on desktop computers).
If your main interest in text is to consume it — for example, if you too are using the Twitter streaming API — then losing astral characters means you’re losing a lot of content.
And if you’re particularly unprepared for astral characters, they may crash your code. If your code crashes given unexpected input, that’s a denial-of-service attack in the making. No matter what you think of emoji, you need to fix that anyway.
Can you imagine software whose developers decided that supporting the capital V (which is also about 1 character in 600) wasn’t worth it? After all, it’s not like you really need that letter unless you’re some nerdy fan of obscure characters. If you know someone named Victoria, why not get used to calling her “victoria” or maybe “Wictoria” to make the programmer’s life easier?
You might not notice right away that this hypothetical software was bad at handling capital Vs. Perhaps the software would even let you type ‘Vernor Vinge’, but when anyone else sees that text it would say ‘�ernor �inge’ or ‘ï¼¶ernor ï¼¶inge’. And if you typed the capital V when you were searching for something, you might just not get any results.
💥 How is this going wrong?
A symptom of this is that, when you need to interoperate with the large amount of code that uses UTF-8, you might think you’re producing UTF-8, but you’re really producing CESU-8, the nonstandard encoding that results from using UTF-8 on top of UTF-16. CESU-8 looks just like UTF-8, except that every astral character is messed up. (Astral characters are four bytes long in UTF-8, and six bytes long in CESU-8.)
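To see the difference concretely, here is a short Python 3 sketch that encodes the same astral character both ways; the surrogate pair is built by hand purely for illustration.
# U+1F639 as real UTF-8 versus CESU-8 (UTF-8 applied to the UTF-16 surrogates).
cat = "\U0001F639"
utf8 = cat.encode("utf-8")
print(utf8.hex(), len(utf8))        # f09f98b9 4
# Build the UTF-16 surrogate pair by hand, then UTF-8-encode each surrogate.
code = ord(cat) - 0x10000
high, low = 0xD800 + (code >> 10), 0xDC00 + (code & 0x3FF)
cesu8 = b"".join(chr(s).encode("utf-8", "surrogatepass") for s in (high, low))
print(cesu8.hex(), len(cesu8))      # eda0bdedb0b9 6
# A strict decoder accepts only the four-byte form.
try:
    cesu8.decode("utf-8")
except UnicodeDecodeError:
    print("the six-byte CESU-8 form is not valid UTF-8")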
From what I have seen in testing ftfy, CESU-8 is more common in the wild than legitimate UTF-8. Nobody really intends to produce CESU-8, so that indicates a large amount of code that has not been tested on astral characters.
This points to almost entirely the reason that astral characters are hard. There’s a right way and a wrong way to handle them that look really similar. It’s like the problem with plugging in a USB connector, except you don’t even notice when you’ve done it upside down.
Understand your tools 🔨
Emoji aren’t really different from other Unicode characters. You shouldn’t have to care whether something is in an astral plane or not, just like you no longer care whether a particular character is ASCII or not.
The libraries you already depend on to handle Unicode strings should be able to do their job. Although Unicode has assigned meanings to more astral characters recently, the standards that tell you how to encode and decode these characters have not changed since 1996.
(Okay, UTF-8 changed in 2003, but it was only to explicitly forbid characters with code points above 0x10ffff, making the number of possible astral characters smaller. This is of no concern to you unless you’re writing a UTF-8 decoder from scratch.)
Programming languages and libraries have had over 17 years to adopt Unicode and get it right. The fact that bugs remain reflects the assumption that astral characters are unimportant and only weird people use them, which I hope the statistics in this post can help to dispel.
They say “it is a poor craftsman who blames his tools”, but go ahead and blame your tools, because it seems they really have earned much of the blame. A string representation in which the Basic Multilingual Plane works, but weird things happen to other Unicode characters, is a leaky abstraction. As a programmer who works with strings in 2013, then, you need to find out whether these abstractions are leaking or not, and figure out the right way to use your system’s Unicode representations so that they work correctly.
☑ Just check your code
The answer to all of these issues is to put astral characters in your test cases. If your code supports Unicode and doesn’t support astral characters, then either your code or code you depend on is making a bad assumption. That code may be hiding other bugs as well.
So, try giving your code input that includes the character 😹. (That’s U+1F639 CAT FACE WITH TEARS OF JOY. In UTF-8, its bytes are F0 9F 98 B9.) Does it come out unharmed? When you write it to a file in UTF-8, does it come out as the same four bytes (six is right out)? If not, you’ve got a bug to either fix or report. If your code explodes given the input 😹, you should find out before your users do, by putting it in your test cases.
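A minimal version of that check in Python 3 might look like this; the file name is just a scratch path for the test.
# Round-trip check for U+1F639 CAT FACE WITH TEARS OF JOY.
cat = "\U0001F639"
encoded = cat.encode("utf-8")
assert encoded == b"\xf0\x9f\x98\xb9"      # exactly the four bytes F0 9F 98 B9
assert encoded.decode("utf-8") == cat      # and they decode back to one character
# Writing and reading a file should not mangle it either.
with open("emoji_test.txt", "w", encoding="utf-8") as f:
    f.write(cat)
with open("emoji_test.txt", encoding="utf-8") as f:
    assert f.read() == cat
# This prints 1 on Python 3.3 and later; on older narrow builds the same
# character was stored as a surrogate pair and counted as 2.
print(len(cat))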
😕 So, uh, why were all those weird boxes in this blog post?
I included an emoji character in every sub-heading. In practice, they were probably only visible if you’re on Mac OS or iOS, or maybe on Windows 8.1 if you’re using the beta version or if you’re here from the future. Other recent versions of Windows and Linux are prepared to display emoji, but come without any fonts that can actually do so.
Apparently even some up-to-date versions of Google Chrome will not display emoji, on an OS that otherwise supports them. Seriously, Google?
If your browser itself isn’t getting in the way, you should be able to see the characters if you install a free font called “Symbola”. They won’t be beautifully rendered, but at least they’ll be there. You can get this font from a page called Unicode Fonts for Ancient Scripts.
And what a telling anachronism that title is! You need a page that was intended to be about “ancient scripts” to get emoji that were invented in the last decade. The universe of text is very different than it was three years ago.
|
<urn:uuid:23803da1-a9f0-44a6-8f40-f4b241f9d634>
|
CC-MAIN-2017-09
|
https://blog.luminoso.com/2013/09/04/emoji-are-more-common-than-hyphens/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00146-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.940035 | 2,439 | 2.671875 | 3 |
1940s motion simulator rehabbed for tech testing at sea
The Navy is repurposing a piece of training equipment from World War II and converting it into a facility for testing and simulating how radar and other tactical communication systems would operate at sea.
The 96,000 pound “motion table” platform, rechristened the Ship Motion System (SMS), was originally used in the 1940s to simulate ship motion for training machine gun operators for action at sea.
Now the Naval Research Lab and the Office of Naval Research are mounting an effort to restore the system to test how radar, tactical electronic warfare, communications, optical sciences and remote sensing would operate with rolling and pitching on the deck of a Navy ship maneuvering at sea.
To use the SMS with today’s high precision systems, NRL will upgrade its control and monitoring systems. The foundation and two main decks will be reused. The hardware will be replaced with state-of-the-art equipment including motion control and monitoring, according to a Navy notice.
NRL engineers Richard Perlut and Chuck Hilterbrick are leading the effort to refurbish the system at the NRL Chesapeake Bay Detachment, on the shore of the Chesapeake in Calvert County, Md.
The NRL uses the site for research in radar, electronic warfare, optical devices, materials, communications and fire research. It says the facility is ideal for the SMS project, as well as for experiments involving simulating targets of aircraft and ships.
Posted by GCN Staff on Jan 17, 2014 at 8:55 AM
|
<urn:uuid:369e3a12-cdd2-42ab-927f-5bf42af47c19>
|
CC-MAIN-2017-09
|
https://gcn.com/blogs/pulse/2014/01/navy-ship-motion-system.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00146-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.943529 | 320 | 3.03125 | 3 |
Surprise, Surprise. Federal Agencies Not Protecting The Information They Collect About You
There are many policies, mandates, and laws that govern personally identifiable and financial information for federal agencies. So just how many federal agencies are living up to their responsibilities? You guessed it: not many.
When it comes to maintaining the privacy of information government agencies collect about U.S. citizenry, there are two overarching laws. These are the Privacy Act of 1974 as well as the E-Government Act of 2002. Each of these laws mandates that federal agencies protect personal information.
Other laws and mandates that come into play, depending on the nature of the agency and the information stored, include the Federal Information Security Management Act of 2002, aka FISMA -- which sets forth a good baseline for security policies; the Health Insurance Portability and Accountability Act, aka HIPAA; as well as the California Database Breach Disclosure law, which is largely known as SB 1386, and now similar laws are in force in more than 40 other states.
You'd think federal agencies would have clearly heard the message: citizens want their personal information maintained securely and responsibly. And so does the legislature. If they've heard the message, they certainly haven't listened. If there's one area where the federal government could set an example, you'd think it would be in implementing solid IT security. But it hasn't set such an example.
That's why in 2006, and once again last year, the Office of Management and Budget recapped federal agency IT security and privacy responsibilities that should be followed.
Unfortunately, here are the findings from the latest Government Accountability Office report on the status of federal agencies when it comes to protecting your personal information:
Of 24 major agencies, 22 had developed policies requiring personally identifiable information to be encrypted on mobile computers and devices.
Fifteen of the 24 agencies had policies to use a "time-out" function for remote access and mobile devices, requiring user re-authentication after 30 minutes of inactivity.
Fewer agencies (11) had established policies to log computer-readable data extracts from databases holding sensitive information and erase the data within 90 days after extraction.
Several agencies indicated that they were researching technical solutions to address these issues.
At first blush, these results might not seem so bad. After all, 22 of 24 agencies have developed "policies requiring personally identifiable information to be encrypted on mobile computers and devices."
That's a start. But the devil is in the implementation and enforcement of polices. Anyone can set a policy requiring data be encrypted. Just as anyone can set a policy to live within a budget, lose weight, quit smoking, or start exercising. Follow-through is the tough part.
And that's the rub here, according to the GAO: "Gaps in their [federal agency] policies and procedures reduced agencies' ability to protect personally identifiable information from improper disclosure."
Also, I'd like to pose a question: Why does citizen personally identifiable information need to be on a notebook or "other mobile device" at all?
Is it too much to ask, when working with sensitive information, that workers and consultants actually sit at a workstation, in an office, where the network and system can be kept highly secured? And if they need remote access, why not use a thin device so the data stays in the database, and isn't left at a worksite ... or on a table in Starbucks.
|
<urn:uuid:15858a1f-7867-4edf-a88a-81bbf13c5759>
|
CC-MAIN-2017-09
|
http://www.darkreading.com/risk-management/surprise-surprise-federal-agencies-not-protecting-the-information-they-collect-about-you/d/d-id/1065029
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00198-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.941572 | 734 | 2.546875 | 3 |
Article by Ken Lee
Flash to the Future – The Next Generation of Non-Volatile Memory
Quickly, name a gadget that didn't even exist in the year 2000 but has since transformed our culture and the way we live today. 99% of you probably answered with MP3 player, tablet computer, GPS, e-Reader or smart phone.
What do all of these devices have in common? For one thing, they are all super portable, and that feature is due to the unbridled success of Flash memory.
The development and maturation of NAND Flash as an affordable, non-volatile, solid-state data storage solution has helped usher in an era of mobile technology. It has allowed us to invent gadgets that simply would not have worked if incorporated with spinning platter drives.
However, after a decade of revolutionizing the technology industry, Flash is reaching its limits for further development. In order to increase Flash's maximum capacity manufacturers have been shrinking the distance between transistors on flash chips over the years.
In 2000 Flash technology was manufactured using a 180-nanometer process. Today, modern NAND Flash cells are for the most part manufactured using a 32nm process, with some bleeding-edge manufacturers moving to a 24nm process. The only issue with this is that as Flash continues to shrink it becomes less and less reliable. As of April 2011, the theoretical minimum Flash cell size is 19nm. Beyond that point, the stability of Flash becomes highly suspect.
As we approach the limit of Flash technology, it is prudent to look towards the future and consider a couple of promising non-volatile, solid-state storage technologies that may succeed Flash memory in our mobile devices.
Like Flash, Phase-change memory (aka PRAM or PCM) is non-volatile, solid-state computer memory; meaning that it retains information when powered off and has no moving parts.
Unlike Flash which works by changing the electronic charge stored within gates to set a bit as a 1 or 0, PCM uses an electric current to produce heat which switches a chalcogenide glass between crystalline and amorphous states to set a bit as a 1 or 0.
PCM has several significant advantages over Flash:
- PCM can effectively write data 30x faster. The memory element can change the state of a single bit from a 1 to 0. In Flash if a bit is set to 0, the only way it can be changed to a 1 is by erasing an entire block of bits.
- PCM can be scaled to far smaller cell geometries than Flash without any loss of reliability.
- PCM is more durable than Flash. Flash cells degrade quickly because the burst of voltage across the cell causes degradation. Once cells begin degrading, they leak electric charge causing corruption and loss of data. Flash memory is rated for about 5000 writes per sector, and most devices employ wear leveling to make them stable up to 1 million write cycles. PCM also degrades with use due to thermal expansion and metal migration but at a much slower rate. Theoretically PCM should endure up to 100 million write cycles.
- PCM is suitable for use in more environments than Flash. Because Flash relies on trapped electrons to store information, it is susceptible to data corruption due to exposure to radiation. PCM exhibits a higher resistance to radiation and therefore can be used in space and military applications.
Magnetoresistive RAM (aka MRAM) is another non-volatile, solid-state technology that has been in development since the mid-1990s. MRAM stores data using magnetic charge as opposed to electrical charge. MRAM is composed of pairs of minuscule ferromagnetic plates which make up the memory cells.
Each cell consists of two magnetic layers separated by an insulating layer. Each cell can be manipulated by an induced magnetic field which sets the polarity of the magnetic layers in parallel orientation or in an anti-parallel orientation. The different orientations determine whether the bit is set to a 1 or a 0.
MRAM has many significant advantages over Flash and PCM:
- MRAM can be read and written to faster, and can be done on a much smaller scale. Like PCM, single bits can be changed from 1 to 0 without having to erase an entire block.
- MRAM degrades substantially slower than either Flash or PCM.
- MRAM could replace all memory in the future, making it a universal storage technology. It should offer speeds close to that of SRAM, with densities approaching that of DRAM, while being able to store information when power is removed like Flash or EEPROM.
- Like PCM, MRAM also exhibits a higher resistance to radiation and therefore can be used in space and military applications that Flash is not suited for.
It should be noted that we can only speculate on when manufacturers will have to stop using Flash as the primary storage media in their products. Consider that in 2002, many experts assumed that Flash cells would not be stable when scaled past 45nm and predicted that Flash technology would need to be replaced by 2010. We know now that those predictions proved to be false.
Many experts today believe that technological breakthroughs, like implementing graphene, will allow the technology to be scaled down to 10nm without loss of stability.
If this is true, Flash may still be the dominant memory in mobile devices for many years to come. Even though emerging technologies like PCM or MRAM are vastly superior to Flash in many ways, PCM and MRAM are much more expensive to manufacture than Flash.
As long as Flash remains a viable storage medium, the incentives are too few and the production costs too high for manufacturers to rush devices that use next-generation memory into the market.
Ken Lee is a product manager at Kanguru Solutions specializing in data storage and duplication equipment.
Cross-posted from Kanguru Blog – Technology on the Move!
|
<urn:uuid:f26276d9-6833-445b-b07e-4e99411e35e0>
|
CC-MAIN-2017-09
|
http://www.infosecisland.com/blogview/17308-The-Next-Generation-of-Non-Volatile-Memory.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00198-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.94221 | 1,209 | 3.375 | 3 |
In this article you will learn how to set and check the hostname of a machine.
Note: These instructions will work with almost all Linux / Unix distributions.
What is a hostname?
A hostname is the name of your machine or computer. Microsoft Windows users generally know it as the computer name they see over the network. Hostnames are used primarily for administrative tasks, application configuration, identification, IP resolution and more.
On Linux / Unix you set the hostname either during installation of the operating system or manually afterwards. If you did not choose or enter any hostname during installation, your hostname will most likely be “localhost.localdomain”.
To find out what your current hostname is, run the following command:
# hostname
Most likely, if you did not choose or enter any hostname during installation you will get output as “localhost.localdomain”
You can also easily see your hostname in the shell prompt when you log in, which looks something like this:
[root@localhost ~]#
In the above prompt, “localhost” tells you that you still have the default hostname.
Now let’s come back to the “hostname” command. The hostname command has many switches, each with different usage and output, which you can study using the “man” (manual) command:
# man hostname
Now let’s talk about some of the important hostname switches that are used in the system administration world:
# hostname -a
The above command displays the complete hostname that is currently set. In some cases the plain "hostname" command may or may not display the complete hostname; the "-a" switch forces it to display the complete hostname.
# hostname -f
The above command displays your complete FQDN (fully qualified domain name).
# hostname -i
The above command displays the IP address associated with the hostname you have set. For example, if your hostname is "server.myhost.com", it will return the IP address associated with that name.
You can override or define the IP address for the hostname you set in the /etc/hosts file, which will be explained later.
To set a hostname:
When you set a hostname there are two primary options: a temporary change and a permanent change.
Temporary Hostname Changes
A temporary change of hostname will not survive the next reboot.
# hostname something.yourdomain.com
# hostname something
How to make your hostname permanent:
To change or set your hostname permanently, so that it remains even after the next reboot, follow the procedure below; it differs between Linux / Unix distributions.
Note: We will cover CentOS and Debian
To set your hostname under CentOS:
Choose your favorite editor, as we need to edit a configuration file; ours is "vim". However, you may use any file editor, whether it is gedit (a graphical text editor), vi, or nano.
# vim /etc/sysconfig/network
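On a typical CentOS system the file contains something like the following:
NETWORKING=yes
HOSTNAME=localhost.localdomain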
The contents of the file will be something like the above. You need to change the HOSTNAME= part. For example, if I want my hostname to be "something.yourdomain.com", after editing it will look like the below:
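Assuming the same typical layout, the edited file would read:
NETWORKING=yes
HOSTNAME=something.yourdomain.com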
We save the file and we have changed our hostname permanently to “something.yourdomain.com”
Now, the above change will be reflected only after you reboot your machine. For mission-critical servers that cannot be rebooted for various reasons, we can run the command below.
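A common choice is simply to set the running hostname with the same hostname command used earlier, which takes effect immediately:
# hostname something.yourdomain.com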
Now, if you are a Debian / Ubuntu desktop or server user, you can follow the procedure below to change your hostname.
# vim /etc/hostname
Edit the hostname file in the /etc/ directory.
If you did not set the hostname during installation, it will contain "localhost.localdomain". Delete that and, without the quotes, enter the hostname you want to set.
For example, if you want to set "ubuntu.iscool.com", enter it without the quotes and save the file.
Now, the above change will be reflected only after you reboot your machine. For mission-critical servers that cannot be rebooted for various reasons, run the command below.
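On Debian / Ubuntu, a common way is to load the new name straight from the file you just edited:
# hostname -F /etc/hostname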
|
<urn:uuid:dd5228d4-ed75-4b49-8925-a3933df5c0aa>
|
CC-MAIN-2017-09
|
http://www.codero.com/knowledge-base/content/10/307/en/how-to-set-the-hostname-of-a-machine.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00618-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.852431 | 1,045 | 3.515625 | 4 |
Best practices for web development
Before we dive into the best practices, let's define some terminology and find out what a "Best Practice" is.
Coding Style—A coding style is a set of guidelines that specify how your code should look. The guidelines are helpful to ensure your code is easy to read and maintain. Coding styles can include such things as indentation, using tabs vs spaces, line length, naming of variables, and more.
Coding standard—A coding standard is a set of conventions regulating how your code should be written. These conventions usually include code style and formatting but can also further define how a variable should be treated and used.
Best practice—A best practice is a method or technique for coding that has consistently shown results superior to those achieved with other means. Best practices can include a combination of styles and standards.
Why should I use best practices?
There are a number of advantages to using best practices in your app development:
- Increases performance by using less CPU and bandwidth
- Improves cross-browser compatibility
- Provides maintainable code, which is important for large projects and teams
- Improves quality by making readable code, allowing for peer review and refactoring
- Helps automate tasks for continuous integration, such as build scripts and automated tests
- Makes debugging easier
- Allows new people to contribute to a project
- Saves time
- Reduces costs
Last modified: 2013-10-02
|
<urn:uuid:ddf27004-0df2-4351-8f7f-5c65853ae1f0>
|
CC-MAIN-2017-09
|
http://developer.blackberry.com/bbos/html5/documentation/best-practices_for_web_development.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00142-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.9189 | 298 | 2.640625 | 3 |
Sometimes we want to perform coverage analysis of the input file: to find areas of the program not exercised by a set of test cases. These test cases may come from a test suite, or you could be trying to find a vulnerability in the program by ‘fuzzing’ it. Feedback in the form of a list of ‘not-yet-executed’ instructions would be a nice addition to blind fuzzing.
The straightforward way of creating such an analyzer in IDA would be to use the built-in instruction tracer. It would work for small (toy-size) programs but would be too slow for real-world programs. Besides, multi-threaded applications cannot be handled by the tracer.
To tell the truth, we do not really need to trace every single instruction. Noting that the instruction at the beginning of a basic block gets executed would be enough. (A basic block is a sequence of instructions without any jumps into the middle). Thanks to the cross-reference and name information in IDA, we can discover them quite reliably, especially in the compiler generated code.
So, a more clever approach would be to set a breakpoint at the beginning of each basic block. We would keep each breakpoint in place until it fires. As soon as a breakpoint gets triggered, we remove it and let the program continue. This gives us a tremendous speed boost, but the speed is still not acceptable. Since an average program contains many thousands of basic blocks, just setting or removing breakpoints for them is too slow, especially over a network link (for remote debugging).
To make the analyzer work even faster, we have to abandon IDA-controlled breakpoints and handle them ourselves. It seems difficult and laborious. In practice, it turns out to be very easy. Since we do not have ‘real’ breakpoints that have to be kept intact after firing, the logic becomes very simple (note that the most difficult part of breakpoint handling is resuming the program execution after it: you have to remove the breakpoint, single step, put the breakpoint back and resume the execution – and the debugged program can return something unexpected at any time, like an event from another thread or another exception). Here is the logic for simple one-shot breakpoints:
if we get a software breakpoint exception and its address is in the breakpoint list
    remove the breakpoint by restoring the original program byte
    update EIP with the exception address
    resume the program execution
This algorithm requires 2 arrays: the breakpoint list and the original program bytes. The breakpoint list can be kept as vector<bool>, i.e. one bit per address.
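As a rough illustration, here is a minimal, self-contained C++ sketch of that bookkeeping. It is not the plugin's actual code: a plain byte vector stands in for the debugged process's memory, and a real plugin would use the debugger's read/write primitives and hook its exception events instead.

#include <cstdint>
#include <unordered_map>
#include <vector>

static const uint8_t BPT_OPCODE = 0xCC;           // x86 software breakpoint ("int 3")

// One-shot coverage breakpoints: a bit per address plus the saved original bytes.
struct OneShotBreakpoints {
    uint64_t base = 0;                            // start of the monitored segment
    std::vector<bool> active;                     // breakpoint list, one bit per address
    std::unordered_map<uint64_t, uint8_t> saved;  // original program bytes

    // Plant a breakpoint at the first instruction of a basic block.
    void plant(std::vector<uint8_t> &mem, uint64_t ea) {
        if (active.size() < mem.size())
            active.resize(mem.size());
        saved[ea] = mem[ea - base];               // remember the original byte
        mem[ea - base] = BPT_OPCODE;
        active[ea - base] = true;
    }

    // On a software breakpoint exception: if the address is ours, restore the byte,
    // mark the block as executed, rewind EIP, and let the program resume.
    bool handle(std::vector<uint8_t> &mem, uint64_t exc_ea, uint64_t &eip) {
        if (exc_ea < base || exc_ea - base >= active.size() || !active[exc_ea - base])
            return false;                         // not one of our breakpoints
        mem[exc_ea - base] = saved[exc_ea];       // restore the original program byte
        active[exc_ea - base] = false;            // this basic block is now "executed"
        eip = exc_ea;                             // update EIP with the exception address
        return true;                              // caller resumes the program
    }
};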
Anyway, enough details. Here are some pictures.
This view of the imported function thunks gives as live view of executed functions from Windows API (green means executed):
If you continue to run the program, more and more lines will be painted green.
In the following picture we see what instructions were executed and what were not:
We see that the jump at 40158A was taken and therefore ESI was always 1 or less.
If we collapse all functions of the program (View, Hide all), then this bird’s-eye view will tell us about the executed functions.
The last picture was taken while running IDA in IDA itself. We see the names of user-interface functions. It is obvious that I pressed the Down key but haven’t tried to press the Up/Left/Right keys yet. I see how the plugin can be useful for in-house IDA testing…
Here is the plugin: http://www.hexblog.com/ida_pro/files/coverit.zip
As usual, it comes with the source code. IDA v5.0 is required to run it.
There are many possible improvements for it:
- Track function execution instead of basic block execution.
- Create a nice list of executed/not-executed functions.
- Create something like a navigation band to display the results (in fact it is not very difficult, just create a window and draw on it pixel by pixel or, rather, rectangle by rectangle)
- Count the number of executions. Currently the plugin detects only the fact of the instruction execution but does not count how many times it gets executed. Counting will slow things down but I’m sure that it can still be made acceptably fast.
- Monitor several segments/dlls at once. The current version handles only the first code segment coming from the input file (so called loader segment). It can be made to monitor the whole memory process excluding the windows kernel code that handles exceptions.
- Port to other platforms and processors. For the moment the code is MS Windows-oriented (the exception and breakpoint codes are hardcoded). Seems to be easy.
- Make the plugin to remember the basic block list between runs. This will improve the startup speed of subsequent runs.
- Add customization dialog box (the color, function/bb selector), in short, everything said above can be parameterized.
This plugin demonstrates how to do some tricky things in IDA: how to refresh the screen only when it is really necessary, to hook low-level debugger functions, to find basic blocks, etc.
Have fun and nice coverage!
|
<urn:uuid:7e6b893b-758d-4542-ba9e-1307f5027cf3>
|
CC-MAIN-2017-09
|
http://www.hexblog.com/?p=34
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00318-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.935302 | 1,108 | 2.53125 | 3 |
In 1981, hardware giant IBM unveiled the IBM Personal Computer. While the name (along with other terms like “microcomputer” and “home computer”) had already been in use, most machines had limited compatibility and therefore limited value in the work environment. Within a few years of the PC’s release, the popularity of the product caused other companies to release clones of the system, ensuring widespread compatibility and standardization. Within the next two decades, the PC became as ubiquitous as the television, the telephone and the automobile. It was seen as a life necessity, to the degree that experts soon began defining PC ownership as the divide between the “haves” and the “have nots.” Simply put, the PC is everywhere.
So why does Mark Dean, a CTO for IBM Middle East and Africa as well as one of the developers of the first IBM PC, say that we’re moving into a “post-PC era?”
“I […] have moved beyond the PC as well,” he says, explaining that he now uses a tablet device as his primary computer. “While PCs will continue to be much-used devices, they’re no longer on the leading edge of computing. They’re going the way of the vacuum tube, typewriter, vinyl records, CRT and incandescent bulbs.”
Many insiders have been wondering when the tipping point will occur, when sales of mobile devices will outpace those of personal computers. What many don’t realize is that it’s already happened. Smartphones and tablets combined for a whopping 487.7 million units shipped in 2011, compared to 414.6 million PCs. The PC era, it seems, is over.
Or is it? Before we forget, over 400 million PCs were shipped in 2011, a year that saw a weak overall economy and an Asian market that was affected by natural disaster. Microsoft posted revenues of $17.41 billion for Q1 2012. The 6 percent increase over last year’s numbers led Wall Street analyst Josh Olsen to quip, “Perhaps the demise of the PC is not as great as everyone is anticipating here.”
Are PCs still the cutting edge of the technology world? Probably not. At their core, they’re a 30+ year old piece of technology that has already innovated the business world as much as it ever will. But the same could be said of the automobile, which fundamentally hasn’t changed since the first Model T rolled off the assembly line.
Perhaps this is a case of apples and oranges. After all, people use mobile devices far differently than they use PCs. Tablets and smartphones are fundamentally media consumers; PCs are fundamentally media producers. This article, for example, was written on a PC. And as long as the PC continues to do what it does so well, it’s safe to say that it’s here to stay.
– Dan Lothringer
Dan is a contributing writer for VideoConferencingSpot.com
|
<urn:uuid:aad067d1-5673-4654-8a43-45e83a2d3061>
|
CC-MAIN-2017-09
|
http://www.lifesize.com/video-conferencing-blog/reports-of-the-pcs-death-are-greatly-exaggerated/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00194-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.966473 | 634 | 2.515625 | 3 |
Heartbleed has dominated tech headlines for a week now. News outlets, citizen bloggers, and even late-night TV hosts have jumped on the story, each amping up the alarm a little more than the last one. But while it's true Heartbleed is a critical flaw with widespread implications, several security experts we've spoken with believe the sky-is-falling tone of the reporting is a bit melodramatic.
"While this is technically a big deal,' the exposure that this has received by the media is overblown," says Greg Foss, senior security research engineer for LogRhythm, "especially when compared to other serious vulnerabilities that are responsibly disclosed every day, which few outside of the security community ever hear about."
So what do you need to worry about? Read on for the hype and the reality behind three of the most common claims to come out of the heartbleed hysteria.
The hype: The entire Internet has been compromised and it's open season for hackers.
The reality: You're probably not a target.
The Heartbleed vulnerability exists in OpenSSL, a common implementation of the SSL protocol used to secure communications on the Internet. It doesn't matter which browser or device you're using--if you are connecting to, or interacting with, sites and services that are using a vulnerable version of OpenSSL, any data you transmit is at risk of compromise.
That's certainly serious, but the patch for Heartbleed has been available since the vulnerability was publicly disclosed, and most affected sites and applications have already taken corrective action. The remaining sites and consumer-oriented Internet-of-things devices that rely on OpenSSL are at greater risk now that the flaw is public, but attackers generally focus on easy targets with high value. Your home router is most likely not worth the time and effort.
The hype: You're at great risk of being hacked.
The reality: Your risk is minimal if you're taking basic security measures.
CloudFlare tests confirmed it's possible to use the Heartbleed vulnerability to capture a server's private encryption key. Because this could enable an attacker to spoof a connection, create a malicious site that appears legitimate, or decrypt communications they've collected, sites and services need to be aware of it.
But there are two important caveats to consider. First, obtaining the private key requires a number of requests that any IDS/IPS (intrusion detection system / intrusion prevention system) should detect. In theory, an attacker shouldn't be able to steal the private keys, because alarm bells would go off and the IT admin would take steps to block those attempts.
Second, the leakage of a private key doesn't necessarily increase risks to the average consumer. "If you're a regular user of public Wi-Fi, then the risk is greatly increased," says Tyler Reguly, security research manager for Tripwire. "[But] if you're using your home computer on your own connection or your phone's data plan, the risk is minimized quite a bit. The odds that attackers have stored packet captures of your interactions that they can go back and decrypt is incredibly unlikely."
The hype: You must change all of your passwords
The reality: You should, but not yet
It's true that the Heartbleed vulnerability has existed for a couple years, and there's a fair chance that your passwords have been exposed or compromised. However, it's pointless to change your password on a vulnerable site before it has confirmed that the service is patched.
Tom Cross, director of security research at Lancope, says passwords were likely only exposed if users logged in to a vulnerable site after the vulnerability was made public. The odds of that are lower than the alarm around Heartbleed might suggest, because only 11 to 17 percent of websites are estimated to have been vulnerable, and most of them rapidly deployed the necessary patch.
The problem here is knowing when a vulnerable site has been fixed. Not all companies are being forthright about remediating the bug.
"Unless your vendors have specifically announced they have patched and reset their certificates, it wouldn't be a bad idea to change your password now and then again in a month," says Andrew Storms, director of DevOps for CloudPassage. "Everyone should remember two important best practices: use unique passwords on each site and change your password on a regular basis."
The real risk is crying wolf
As far as these experts are concerned, more dangerous than the Heartbleed vulnerability itself is the distorted expectations the media has created in its wake.
"Everyone talks about educating users, but this assumption puts the onus on the security industry," says Reguly. "If we cry wolf with every vulnerability, we're doing end users a disservice." Other security issues deserve as much or more concern, Reguly adds. "This is a critical issue that must be fixed, but for the average consumer the latest Flash and IE zero-days still pose a greater risk than Heartbleed."
This story, "Heartbleed: Security Experts Reality-Check the 3 Most Hysterical Fears" was originally published by PCWorld.
|
<urn:uuid:8008ea91-030b-4b5c-a3d7-0db8ad8c4471>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2377021/security0/heartbleed--security-experts-reality-check-the-3-most-hysterical-fears.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00370-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.960602 | 1,047 | 2.515625 | 3 |
We've said it before and we'll say it again: a world with driverless cars isn't that far off. Europeans have been exploring driverless platoons called "road trains" as the future of car commuting. Computer scientists are thinking about how to reconfigure intersections and traffic lights to make driverless city driving more efficient. American cities may soon compete to build driverless infrastructure.
So the biggest roadblock facing a driverless world isn't necessarily technology, but legality. The question of who would be liable in the event of an accident — the human "driver," the car's owner, the manufacturer — remains an open and deferred one. The Wall Street Journal recently reported that only three state legislatures have weighed in on the matter. Only one, Nevada, has given driverless cars the legal green light, and even then only for testing purposes.
Over at the blog Marginal Revolution, economist Tyler Cowen points to a recent research paper by Bryant Walker Smith, a fellow at Stanford Law School, who has made the legality of driverless cars his bailiwick. In offering "the most comprehensive discussion to date of whether so-called automated, autonomous, self-driving, or driverless vehicles can be lawfully sold and used on public roads in the United States," Smith argues that driverless cars are "probably legal."
|
<urn:uuid:36723b2e-f8fb-42db-8ae5-208cfe7a0acd>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/emerging-tech/2013/03/why-driverless-cars-are-probably-legal/62157/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00314-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.963599 | 268 | 2.578125 | 3 |
Researchers turn to nature for help in constructing nanoscale circuits
NANOTECHNOLOGY, which involves creating and using tools measured in billionths of a meter, holds great promise for applications such as medicine and quantum computing, but producing the devices in usable quantities in reasonable time remains a challenge.
Researchers at the University of Maryland's A. James Clark School of Engineering are working to enlist nature's help to produce nanocircuits economically.
'While we understand how to make working nanoscale devices, making things out of a countable number of atoms takes a long time,' said Ray Phaneuf, associate professor of materials science and engineering. 'Industry needs to be able to mass-produce them on a practical time scale.'
That's where nature comes in. 'Nature is very good at making many copies of an object' through self-assembly, Phaneuf said. But nature knows how to make only a limited range of patterns for these complex structures, such as shells or crystals. Phaneuf's work focuses on the use of templates to teach nature some new tricks.
'The idea of using templates is not new,' Phaneuf said. 'What is new is the idea of trying to convince nature, based on the topography of the template, that it should assemble objects in a particular place,' atom by atom.
One application for the process could be quantum computing. A host of schemes propose harnessing the quantum states of atomic particles to do complex calculations. One involves assembling pairs of quantum dots ' tiny semiconductors containing from one to 100 particles with elementary electric charges ' to create the qubits used in quantum calculations.
Assembling the billions of dots in the precise patterns needed for massively parallel computing may be possible, but, Phaneuf said, 'it may not be doable within the age of the universe' with current techniques.
'Nature already knows how to assemble quantum dots,' he said. 'We are working on the step before self-assembly, the self-organization of the substrate,' which will act as the template for the dots.
The silicon substrate is etched into steps using lithography, but it is difficult to reach the level of precision required at the atomic scale using lithography alone. Heat and cold can be used to add or subtract atoms on the surfaces and precisely shape the step patterns. The steps can also be shaped. The step patterns, which are stiff, tend to straighten out under heat but are limited by the surrounding patterns in how much they can straighten.
'We play this stiffness off with the repulsive interaction between steps' to create the sizes and shapes needed, Phaneuf said.
The result is a substrate that can be reused many times as a template for growing nanostructures with silicon and gallium arsenide for computer and cell phone components.
'It still is in the development stages,' he said.
'There is still quite a lot to do before we make practical devices out of it. I don't think we're quite ready to make transistors on the chips.'
And the market for the end products has not yet developed. You are not likely to find any deals on quantum computers from Dell or HP in the ads of your Sunday supplements this weekend. More-immediate applications for this technology are likely to be biochips used in biology and medicine.
|
<urn:uuid:63dc8661-c842-4f06-b4bf-e04dc71da2d5>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2007/12/07/natural-order.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00314-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.927718 | 689 | 3.1875 | 3 |
What is bad data? Some people consider it a technical phenomenon, like missing values or malformed records, but bad data includes a lot more.
In the Bad Data Handbook, data expert Q. Ethan McCallum has gathered 19 colleagues from every corner of the data arena to reveal how they’ve recovered from nasty data problems.
From cranky storage to poor representation to misguided policy, there are many paths to bad data. Bottom line? Bad data is data that gets in the way. This book explains effective ways to get around it.
Among the many topics covered, you’ll discover how to:
- Test drive your data to see if it’s ready for analysis
- Work spreadsheet data into a usable form
- Handle encoding problems that lurk in text data
- Develop a successful web-scraping effort
- Use NLP tools to reveal the real sentiment of online reviews
- Address cloud computing issues that can impact your analysis effort
- Avoid policies that create data analysis roadblocks
- Take a systematic approach to data quality analysis.
|
<urn:uuid:a88214b7-74e1-489c-a20a-de39622709bc>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2013/01/03/bad-data-handbook/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00014-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.880519 | 220 | 3.046875 | 3 |
When an orbiting star gets too close to a galaxy’s central supermassive black hole, it eventually gets torn apart by the immense gravitational forces, a phenomenon known as a “tidal disruption.” Although black holes cannot be seen directly, since their dense mass means that not even light can escape, the inhaled star produces a brief burst of light. It’s difficult, however, to distinguish these very rare events from other bright light events, like supernovas.
Now with the help of comprehensive sky surveys and powerful supercomputers, astrophysicists are closer to understanding and even predicting this amazing phenomenon. Researchers from Georgia Institute of Technology and the Max Planck Institute in Germany are using a mix of computational and theoretical models to describe the dynamics of black hole events, such as tidal disruptions or the merging of two supermassive black holes.
A major part of the challenge of finding a star-sucking black hole is knowing where to look in the vast universe with its billions of galaxies. Through a multitude of astronomical surveys performed over the years, a more precise picture of the universe is emerging.
A turning point came when astrophysicists noticed that some galaxies thought to be inactive would suddenly light up at the center.
“This flare of light was found to have a characteristic behavior as a function of time,” said Tamara Bogdanovic, Assistant Professor of Physics at the Georgia Institute of Technology, a lead researcher on the study. “It starts very bright and its luminosity then decreases in time in a particular way. Astronomers have identified those as galaxies where a central black hole just disrupted and ‘ate’ a star. It’s like a black hole putting up a sign that says: ‘Here I am.'”
Bogdanovic and her colleagues employ a range of techniques – including observational, theoretical and computational methods – to understand these distant events. Simulations are especially valuable to decoding the signatures of tidal disruptions, explains Bogdanovic. Where before it was thought that this was a fairly uniform class of events, new evidence and better data suggest a diversity in candidate appearance.
The computational work for the project has been carried out on National Science Foundation-funded supercomputers like Stampede at the Texas Advanced Computing Center and Kraken at the National Institute for Computational Sciences – both part of the XSEDE environment. Georgia Tech’s Keeneland supercomputer was also involved.
Scientists know that tidal disruptions are rare cosmic occurrences – calculations show that galaxies like the Milky Way would experience a disruption like this once every 10,000 years. The telltale light flare, however, lasts only a few years. Both the rarity of the event and the discrepancy in timescale underscore the challenge of spotting a tidal disruption, and illustrate the need for further multi-galaxy sky surveys.
“Calculating the messy interplay between hydrodynamics and gravity is feasible on a human timescale only with a supercomputer. Because we have control over this virtual experiment and can repeat it, fast forward, or rewind as needed, we can examine the tidal disruption process from many perspectives. This in turn allows us to determine and quantify the most important physical processes at play,” says Roseanne Cheng of the Center for Relativistic Astrophysics at Georgia Tech, who coauthored a paper on black holes with Bogdanovic.
Astrophysicists are on the verge of a breakthrough in better techniques, of which supercomputing is a major part, that will lead to the ability to verify more bright light events as tidal disruptions. Currently, only a few dozen flares have been singled out as potential black hole event candidates. As more data from astronomical surveys flow in, cases are expected to rise sharply, perhaps to hundreds per year.
“[The] huge difference…means that we will be able to build a varied sample of stars of different types being disrupted by supermassive black holes,” states Bogdanovic.
|
<urn:uuid:b19a81bc-bf3e-492d-b6ba-b56a17bf2941>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2014/04/16/decoding-black-holes/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00366-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.936979 | 820 | 4.34375 | 4 |
The California Solar power boom is certainly well on its way. In April, San Francisco, the home of cable cars and the Golden Gate Bridge, will be known as the first American city to require all new buildings (up to 10 stories) have solar panels installed to provide heat and/or electricity. This should come as no surprise, since San Francisco has long been known for its progressive stance on issues such as the conservation and the environment. Its municipalities passed similar mandates in 2013. City supervisor Scott Weiner shared on social media that “this legislation will help move us toward a clean energy future and toward our city’s goal of 100 percent renewable energy by 2025.” He also specified that this new legislation is merely a continuation of the state of California’s legislation that all buildings up to 10 stories dedicate 15% of their rooftop space to solar panels.
Based on the infographic discovered via SEIA, California is currently the #1 solar-powered state in the U.S., with a shocking 13,241 MW of installed solar capacity, enough to power 3,319,000 homes. This makes other states pale in comparison, with the runner-up being Arizona, at 2,087 MW of solar capacity, enough to power 223,000 homes. California's use of solar panel energy has caused the job market to explode with 75,598 solar-power-related jobs. Interestingly, Massachusetts is not far behind California in this with 15,095 solar-power-related jobs, raising the question of whether Massachusetts will soon rise above California in the ranks of states using the most solar energy. North Carolina also seems to be playing catch-up in the solar power energy game, having installed 1,143 MW in solar capacity in 2015, though still not making the top 10.
By Jonquil McDaniel
|
<urn:uuid:5c8da1c1-2a2c-4f39-8c04-bea332fa19ea>
|
CC-MAIN-2017-09
|
https://cloudtweaks.com/2016/09/infographic-solar-power-california/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00066-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.944019 | 357 | 2.65625 | 3 |
In 1996, Deep Blue – IBM’s chess-playing supercomputer – beat Grand Master and world chess champion Garry Kasparov under standard tournament rules for the first time. Kasparov was not pleased.
In the subsequent decades, chess players at every level have had a growing variety of chess machines to play against and hone their skills. Most of them still lose – even against a chess engine running on a smartphone.
The power of these computers is a demonstration of Moore’s Law in action: the proposition that chip performance would double every 18 months.
Chess machines have ever-more processing power and can crunch numbers and data at ever-greater speeds, and use enhanced processing power to search through all possible future moves to find the best next one.
But until now, the way they play chess has not changed. By the time IBM made adjustments to Deep Blue, its brute processing power meant it could evaluate 200 million separate positions every second, and it had earned a position in the TOP500 list of the most powerful supercomputers.
Kasparov was realistically looking at no more than five positions a second. But he kept up with the computer even if he didn’t win the match.
But that has all changed – and the consequences reach much further than the esoteric world of chess grand masters. Researchers at Imperial College London have created an artificial intelligence (AI) machine called Giraffe that has taught itself to play chess.
Using deep learning, the new machine evaluates the board in the way that humans do. Rather than using brute strength to assess all possible positions, it uses AI to evaluate the next tricky move.
Deep learning algorithms are filtering through to all kinds of areas. Facebook can automatically tag you in a picture because deep learning algorithms have taught it the difference between you and your best friend. Amazon and Netflix use deep learning to make more accurate suggestions for books, films and music you might like. And LinkedIn finds possible contacts with almost spooky levels of accuracy – all because of deep learning.
These are the most obvious examples of deep learning in play today – but they’re just the start. Consider self-driving cars as an example. In order to work safely, the car has to observe the world around it and process information as a human does. It needs to be able to distinguish between a tree and a traffic light.
But it is impossible to imagine a situation in which a human could programme a car to identify every single type of tree that it might encounter, in every season and at every light level.
A deep learning algorithm, however, can look at millions of images, without tags or text, and through sophisticated classification of data and context learn the concept of a tree – and so identify one in real life. Just as humans do.
What separates deep learning from pattern recognition and previous analytic capabilities is its ability to learn, understand and apply context.
One of the more obvious examples is in the area of linguistics. As anyone who has attempted to learn a foreign language knows, there’s more to it than translating vocabulary word for word.
Meaning often comes from idiom and colloquialisms, very precise word use and even pronunciation. Online translation services struggle with these at the moment. But machines can learn to speak, read, write and simultaneously translate any number of languages and alphabets.
There is a similar approach in medical science, where deep learning algorithms can perform advanced diagnostics and prognosis for some of the most obstinate diseases, and even accurately predict a patient’s lifespan – all from understand the context of the images in front of them.
What’s interesting about this – and the language example – is that developers of the algorithms need have no medical or linguistic knowledge themselves.
In effect, deep learning removes the limitations of human capability from the development of technology. Computers can learn to do things that humans can’t do. Chess experts taught Deep Blue to play chess. Giraffe teaches itself. A computer can speak Chinese, even if its programmer can’t.
Naturally this presents opportunities as well as challenges. Health outcomes could be improved everywhere, particularly in areas where doctors are rarely available. International relations – in both the political and commercial sphere become easier and smoother.
At a business level, deep learning will create far greater understanding of who customers really are. In the area of finance, deep learning will improve techniques for detecting and preventing fraud and scams. In retail, truly personalised services could become the norm.
Improved diagnostics could be applied to physical or remote infrastructure, as well as human bodies, for more efficient and safer manufacturing or energy generation, for example.
In this world, advanced programmers and data scientists are going to command a premium. Deep learning algorithms are changing our reality and becoming more commonplace by the year.
Computers that can read, write, speak and learn are going to feature in all kinds of spheres that were previously considered immune from the automation trend.
Sourced from Aingaran Pillai, CTO and founder of Zaizi
|
<urn:uuid:fdf4e72e-28dd-4226-8655-1f4b60a15a11>
|
CC-MAIN-2017-09
|
http://www.information-age.com/how-deep-learning-algorithms-are-changing-lives-and-business-123460759/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00538-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.951174 | 1,049 | 3.21875 | 3 |
Best government Web sites: Cancer.gov
- By Joab Jackson
- Aug 24, 2008
In government, everyone is a specialist in some field or other. But when communicating your expertise to the outside world, it is important to make what you know as easy to understand and as free of jargon as possible. A group of volunteers from government agencies even banded together to create a Web site, plainlanguage.gov, devoted to helping fight bureaucratese on agency Web sites.
LEAD STORY: 10 great .gov Web sites
One site whose creators have taken this to heart is Cancer. gov, managed by the National Cancer Institute.
'The developers attend to the principles of plain language,' Haller said. In the last ACSI survey, it garnered a score of 80, the threshold for what ForeSee calls superior. The site includes descriptions of what cancer is, how to prevent it, how to treat it, and how to screen for it ' all of it in simple, easy-to-understand language. 'Cancer is a term used for diseases in which abnormal cells divide without control and are able to invade other tissues,' the site states. Its explanations for the various types of cancer, such as lung cancer or breast cancer, are organized by name or where they can be found in the body. For someone for whom cancer has become an issue, Cancer.gov can be a welcome, and even life-saving, start.
Joab Jackson is the senior technology editor for Government Computer News.
|
<urn:uuid:3e74141b-97fb-45b2-8d4d-20a91a5f0e1e>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2008/08/24/best-government-web-sites-cancergov.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00054-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.941553 | 310 | 2.671875 | 3 |
Getting and Understanding the Bigger Picture
Simple, elegant, direct . . . graph visualizations can be informative, too. Without knowing anything about the underlying data and without even trying, you see relationships and patterns. The nodes and links could be Facebook connections, router-to-router links in a network, a transportation distribution hub, the path of a call through a call center, or the connections between objects in a compiled grammar. All are networks—a set of entities and the connections between them—and all can be analyzed through graph visualizations.
There is information in the connections. A glance is enough to identify nodes with the most links, nodes straddling different subgroups, and nodes isolated by their lack of connections. Corporations might look at a graph to verify that marketing and sales are communicating, urban planners to monitor the interconnectedness, or isolation, of neighborhoods, biologists to discover interactions between genes, and network analysts to monitor security.
Graph creation made easy
Graph visualizations are everywhere. They need to be. Without them, there would simply be no way to make sense of the vast amounts of data collected.
Fortunately, graphs are easy to make thanks to automatic graph-drawing programs that take information describing a node and the node’s connections and automatically lay out a topology, handling the low-level details of how to arrange nodes so they don’t overlap and obscure one another. Feed in the data and get a picture.
Aesthetics is important not so much for looks . . . but for readability.
A multitude of graph drawing programs is available, many offering add-ons such as editors to manually change the node layout and adjust colors, shapes, and backgrounds to achieve visually appealing graphs.
Aesthetics is important not so much for looks—though some visualizations can be stunning to look at—but for readability. Links that intersect and nodes that overlay one another result in poor readability, and graph visualization programs work hard to minimize the number of link intersections and give enough whitespace around each node to make it stand out from its neighbors.
One method to ensure a good distribution of nodes, force-directed layout, endows each node with a repelling force to push away neighboring nodes that are too close while spring-like attachments on links work to keep connected nodes clustered together.
The nodes themselves find their own optimal position in turn. First one node employs its repelling force, moving too-close nodes further away. Then it’s the turn of the next node, and then another until each has had a chance to position itself. But the sequence needs to be repeated since later nodes may intrude into the space of a previous node; many iterations may be needed until a state of equilibrium is reached.
Force-directed layouts are computationally heavy but work well for graphs of 100-200 nodes; however, at larger scales (100,000 nodes and more), the scheme breaks down under the weight of all the calculations. As the number of nodes increases, the number of calculations increases quadratically. For n nodes, the number of calculations is on the order of n². What works for small graphs won't work for large ones.
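For illustration only, here is a minimal C++ sketch of a single iteration of this kind of force-directed scheme (a Fruchterman-Reingold style step, not Graphviz's actual implementation; the constants and names are arbitrary):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Node { double x, y, dx, dy; };
typedef std::pair<std::size_t, std::size_t> Edge;

// One iteration: every pair of nodes repels, every edge pulls its endpoints
// together, and each node then moves a bounded distance. Many iterations are
// run until the layout reaches equilibrium.
void force_directed_step(std::vector<Node> &nodes, const std::vector<Edge> &edges,
                         double k /* ideal edge length */, double max_step) {
    for (std::size_t i = 0; i < nodes.size(); ++i)
        nodes[i].dx = nodes[i].dy = 0.0;

    // Repulsive forces over all pairs: this is the O(n^2) part that breaks down
    // when the graph grows to hundreds of thousands of nodes.
    for (std::size_t i = 0; i < nodes.size(); ++i)
        for (std::size_t j = i + 1; j < nodes.size(); ++j) {
            double dx = nodes[i].x - nodes[j].x, dy = nodes[i].y - nodes[j].y;
            double d = std::sqrt(dx * dx + dy * dy) + 1e-9;
            double f = (k * k) / d;                       // repulsion ~ k^2 / distance
            nodes[i].dx += f * dx / d;  nodes[i].dy += f * dy / d;
            nodes[j].dx -= f * dx / d;  nodes[j].dy -= f * dy / d;
        }

    // Spring-like attraction along edges keeps connected nodes clustered together.
    for (std::size_t e = 0; e < edges.size(); ++e) {
        Node &a = nodes[edges[e].first], &b = nodes[edges[e].second];
        double dx = a.x - b.x, dy = a.y - b.y;
        double d = std::sqrt(dx * dx + dy * dy) + 1e-9;
        double f = (d * d) / k;                           // attraction ~ distance^2 / k
        a.dx -= f * dx / d;  a.dy -= f * dy / d;
        b.dx += f * dx / d;  b.dy += f * dy / d;
    }

    // Move each node a limited amount so the layout settles gradually.
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        double d = std::sqrt(nodes[i].dx * nodes[i].dx + nodes[i].dy * nodes[i].dy) + 1e-9;
        nodes[i].x += (nodes[i].dx / d) * std::min(d, max_step);
        nodes[i].y += (nodes[i].dy / d) * std::min(d, max_step);
    }
}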
At a company like AT&T, which operates some of the world’s largest networks, the problem of scale is a perennial issue. AT&T must often develop custom solutions to address the scale issue. This is one reason it maintains a sophisticated and wide-ranging research effort.
For the visualizations needed for its large data collections, AT&T adapted its own network visualization program, Graphviz, which had originally been developed for graphing software objects. (Because all networks have a similar mathematical definition, there is little difference between rendering software objects and router-to-router links in communications networks, apart from the scale factor.) For the program to handle layouts with millions of nodes entailed key advancements in network visualization, including the introduction of advanced geometric and numerical optimization algorithms. Entirely new approaches including stress majorization and multilevel optimization were also required.
Additional algorithmic enhancements were needed to ensure a readable, useful visualization, and Graphviz pioneered techniques in linear programming for aesthetic node placement, drawing curved edge splines around obstacles through numerical optimization, and robust overlap removal methods to preserve the overall structure of layouts while also making it possible to read labels even in tightly packed diagrams.
AT&T has made Graphviz available as open source software, and many other programs incorporate Graphviz as a visualization service for applications as diverse as databases and data analysis, bioinformatics, software engineering, programming languages and tools, and machine learning.
So how does Graphviz handle large datasets?
The first step is to reduce the size of the graph. It takes a few steps since it’s an iterative process in which each step halves the number of nodes. One million nodes becomes 500,000, which then becomes 250,000, all the way down to a manageable 50 or fewer nodes. At this point it’s a small graph and can employ force-directed techniques.
Of course, the way in which the nodes are reduced has to be done carefully and above all uniformly to preserve the original overall structure. This is not easy, and certain substructures, such as star graphs with an inordinate number of connections, must be identified and handled as special cases.
Various filtering and aggregating techniques are used to identify nodes that can be deleted without destroying the overall structure. One method is graph coarsening, which identifies perfectly matched node-link arrangements that can be collapsed without altering the overall topology.
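As a simplified illustration of the idea (a greedy edge matching, which is only one of several possible coarsening strategies and not Graphviz's actual multilevel code), a single coarsening pass can be sketched in C++ as:

#include <cstddef>
#include <utility>
#include <vector>

// Collapse matched pairs of adjacent nodes into single coarse nodes, roughly
// halving the graph. Repeating the pass yields the small graph that a
// force-directed layout can handle directly.
std::vector<std::size_t> coarsen_once(std::size_t n,
                                      const std::vector<std::pair<std::size_t, std::size_t> > &edges) {
    const std::size_t UNMATCHED = (std::size_t)-1;
    std::vector<std::size_t> group(n, UNMATCHED);    // coarse-node id for each fine node
    std::size_t next_id = 0;
    for (std::size_t e = 0; e < edges.size(); ++e) {
        std::size_t u = edges[e].first, v = edges[e].second;
        if (u != v && group[u] == UNMATCHED && group[v] == UNMATCHED)
            group[u] = group[v] = next_id++;         // collapse this matched pair
    }
    for (std::size_t i = 0; i < n; ++i)
        if (group[i] == UNMATCHED)
            group[i] = next_id++;                    // unmatched nodes carry over as-is
    return group;                                    // maps fine nodes to coarse nodes
}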
Once a good layout is found through a force-directed algorithm, the graph is then built up again to its original size. For a graph the size of 1 million nodes, the entire process takes around 15 minutes; considering the number of calculations carried out and the range of operations, this is exceedingly fast. This time can be reduced significantly if a slight reduction in layout quality is allowed.
Large graphs and their information
Laying out large graphs is a hard problem, but viewing them is also hard. Large graphs won't display within the confines of a computer screen. Viewing a small part of the whole at a time is a solution, but not when it's important to see how changes in one part of the graph affect other parts. In communications networks such as those maintained by AT&T, investigating how various elements within the network interact and affect other parts is sometimes the whole point.
The Graphviz proposal is an interactive graph viewer, Smyrna, designed to handle large graphs. In addition to panning and zooming, Smyrna provides a topological fisheye view that magnifies an area of interest while keeping the entire graph in view to maintain the needed context. This is done by collapsing nodes outside the focal area in a way that the overall structure is maintained, with more distortion occurring in the nodes farthest from the focal area. The calculations for collapsing nodes are done in real time to ensure smooth transitions as the viewer navigates through the scene.
With Smyrna, the viewer can also navigate through three dimensions, which gives additional space to examine a graph's structure. Occlusions are more of a problem with 3D displays, though it's easy to manipulate the graph to see behind nodes. This ability to directly manipulate the graph structure, to turn it and move individual nodes involves the viewer with the graph in a way not possible with 2D graphs.
If you can move an individual node, the next natural step is to click a node for information (node information is very rich in Graphviz), which Smyrna supports. Clicking a node can access underlying data if the visualization associates an action with each node, such as opening a web page where the information can be found.
But more useful for accessing information and analyzing the data, is the ability in Smyrna to write queries and filter graphs based on attribute information and graph structure. Queries could locate all nodes sharing similar attributes, such as all people with the same last name or birthday, or locate all ISPs that recently handled a certain IP address.
Filtering a graph reduces a large graph to just the part that is relevant to the particular query.
The distinction between visualization and interface is being blurred.
Where until now, node-link layouts have emphasized representing relationships and have too often been the end of a process (feed in the data, see what it looks like, go back to the data), the ability to directly access data, write queries, and closely investigate individual nodes shows the potential for visualizations to become part of the analysis process itself and to answer questions such as "Why do some nodes cluster together?" and "Which nodes in the entire graph share the same attributes?"
The distinction between visualization and interface is blurring, and the line between visualization and analysis is being crossed.
For visualizations to truly be part of the analysis, there remain some hard problems to solve. One, networks change constantly, and understanding where these changes occur is an important part of network analysis. But finding changes in large graphs is difficult, for two fundamental reasons.
One, the sheer size of a visualization containing millions of nodes makes it hard to see changes, especially those happening in obscure areas that are not often investigated.
Second, the way in which large graphs are generated, where the initial node placement may be random, can cause a large graph to look different every time it's generated, even when the data changes are small and limited to a few nodes. People build a mental map of what a graph looks like, based on how they first see it and look for changes by comparing the new layout with their mental map. Minor tweaking to add or delete a few nodes causes many nodes to self-adjust, giving the potential that the visualization may look much different from the mental map even though the changes are small. New or better algorithms are needed to preserve a node layout so small changes don't result in big changes, and do so in very fast computing time.
Other problems include handling small-world graphs, in which a high number of nodes are closely related, a common feature of social and communications networks. The resulting tight proximity and the number of shared links make layout particularly difficult. One solution being explored is to represent small-world subgraphs as a single, large node that expands when clicked.
There is no lack of incentive to find solutions to these and other problems of large graphs. Visualizations may be the only way to tackle the analysis of extra-large datasets, which will only become more numerous and bigger as data collection techniques multiply. Solutions will be found, and the only questions are what those solutions will be, and how soon.
Getting the Big Picture
Undirected layouts are used when there is no inherent ordering of the nodes. Graphs arising from communication and online social networks are undirected.
Circular layouts depict connections of nodes closely related to one another, such as those on a LAN.
Radial layouts depict networks focused on some central node, as arises in some social networks.
Matrixes can be visualized using the same node-link graphs. Yifan Hu of the Labs graphed over 2000 sparse data sets using Graphviz.
Reducing the number of edge crossings is key to graph readability, and most graph layout algorithms consider the task of arranging nodes to reduce crossings.
But there are other ways of approaching the readability problem.
What if you start with a different premise, such as reducing the amount of ink? Reducing the number and length of links would reduce the amount of ink.
In this exercise, the nodes were sorted around the circle in a way that reduces the length of edges. This produced layouts with fewer crossings than when heuristics are used to explicitly reduce crossings. And following an idea based on the work of Danny Holten (Eindhoven University of Technology), lines that traverse the interior share the same path when possible, reducing the perceived number of intersections as well.
The resulting increase in white space reduces the clutter.
But can the graph be made clearer still?
The solution. Forget about saving ink, and try something new.
Rerouting links around the outside of the circle minimizes link crossings dramatically.
It uses more ink of course, but in graphs, clarity is all.
|
<urn:uuid:5b7756fe-6418-4b05-8721-816532af5bee>
|
CC-MAIN-2017-09
|
http://www.research.att.com/articles/featured_stories/2009/200910_more_than_just_a_picture.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00230-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.940921 | 2,573 | 3.0625 | 3 |
After not faring well in the DARPA Robotics Challenge, NASA is hoping its Valkyrie humanoid robot will do much better working on Mars.
The space agency has given two robotic prototypes to scientists at MIT and Northeastern University to build software and put together hardware adjustments that will make the robots autonomous and dexterous enough to aid human astronauts or even take their place on "extreme space" missions.
"Advances in robotics, including human-robotic collaboration, are critical to developing the capabilities required for our journey to Mars," said Steve Jurczyk, associate administrator for NASA's Space Technology Mission Directorate. "We are excited to engage these university research groups to help NASA with this next big step in robotics technology development."
The robots are expected to launch into space to work on asteroids or Mars as precursors to human missions, setting up habitats, producing drinkable water and fuel to get the astronauts back home again. NASA also expects to use robots as human assistants, collaborating on projects, using tools and taking over more dangerous projects.
NASA's Valkyrie robot participated in the DARPA Robotics Challenge, a competition designed to get roboticists from around the world working on humanoid robots that could be used to assist after disasters. The idea was to have a robot that could go into a damaged building or a dangerous area instead of sending humans into such hazardous conditions. In the competition, the robot was required to climb stairs, open doors, turn off valves and search for victims.
Many of the robots did well, walking over rubble, driving and getting out of a car and drilling holes in walls.
Others, like NASA's Valkyrie, didn't do nearly as well. They were unable to climb stairs and ladders and took several minutes to take one step. Some simply fell over. The Valkyrie didn't make it to the final round of competition.
DARPA and robotics scientists said there is still work to be done, but they have made great headway into building autonomous robots that can perform basic tasks on their own.
In five or 10 years, they expect robots to be ready to help in disaster situations.
Of the teams that competed in the DARPA challenge, NASA chose MIT, whose team placed sixth in the competition, to work on the robotic space project. Northeastern didn't compete, but its robotics group is now headed by Taskin Padir, who led the seventh-place team from Worcester Polytechnic Institute during the DARPA competition.
Both MIT and Northeastern will receive up to $250,000 a year for two years and will have access to onsite and virtual technical support from NASA. Both university teams will also participate in its upcoming Space Robotics Challenge.
This new competition, while creating opportunities to benefit the robotics industry as a whole, will focus on creating the robotics technology needed to launch Mars missions.
At this point, with MIT and Northeastern involved, NASA is focused on upgrading the robot. In 2016, the challenge will begin with a competition among teams that have built software for a simulated robotic situation. After that, there will be a physical competition.
This story, "NASA needs robotic upgrades for work on Mars" was originally published by Computerworld.
|
<urn:uuid:bcaf384b-6ae3-4e47-a3c6-925d1c0c02da>
|
CC-MAIN-2017-09
|
http://www.itnews.com/article/3007393/robotics/nasa-needs-robotic-upgrades-for-work-on-mars.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00574-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.961074 | 645 | 2.9375 | 3 |
While firms are up on their toes in their respective bids to take their companies to the cloud--touted to be the future of computing--a visiting MIT (Massachusetts Institute of Technology) professor suggested Tuesday a complementary technology that will harness new silos of computing power, by way of the crowd.
Called "crowd computing," this relatively new approach to computing is described as billions of human beings connected to the Internet that analyze, synthesize, inform, perceive, and provide opinion of data using only the power of the human brain.
If it is not patently obvious, such a mechanism is currently in place in the prime examples of social media and Wikis, both of which have grown in popularity in recent years.
Crowd computing, explained Srini Devadas, professor of electrical engineering and computer science at the MIT Computer Science and Artificial Intelligence Laboratory, will complement the cloud as one of two burgeoning infrastructures that will enable the world to become more "collectively intelligent."
Such a feat is essential in addressing various human concerns, such as predicting and mitigating the effects of natural disasters. "It helps if we can have a competent, orchestrated response to these disasters," he emphasized.
In the case of earthquakes, for example, a concerned individual can tap on various data available through the cloud in order to predict the impacts of ground movement, and draft strategies for evacuation and relief operations by using the wisdom of the crowd.
Such an initiative was employed by citizens during the onslaught of typhoon Ondoy (international name: Ketsana) to direct disaster management efforts to the places where they were needed most.
But the system is far from being perfect. In the case of crowd computing, Devadas said there needs to be a significant improvement in the way current technology systems moderate opinion, resolve conflicts, and check facts.
With cloud computing, on the other hand, the MIT professor urged providers to maintain the security and privacy of their offerings. "The challenge also lies in determining how to write parallel applications in order to access billions of processors to give quick answers to queries," he pointed out.
While Devadas stressed that these challenges need to be addressed in the next ten years to make the technology viable, he predicted that it's possible the solutions for these issues will become available by 2020.
With this concerted push towards collective intelligence, Devadas noted that humans are not going to be eventually replaced by machines--as many would-be doomsayers would prefer to believe--but would instead create "a symbiotic relationship between software and humans."
Devadas is in the country to deliver a lecture before employees of global processing firm Accenture, IT professionals, and students, as part of the BPO company's Accenture Solutions Delivery Academy (ASDA) program, a training module for their employees crafted in partnership with the prime technology education institution.
The training and certification program, launched in 2006, is afforded to new recruits, preferably during the first five years on the job, in order to encapsulate and prove the Accenture experience to the employees, and to ensure that professionals are competent enough to handle tasks in their respective fields.
The process involves formal training--which could take from as little as 18 months to as much as two years--an on-the-job training experience, and an exam that will test the mettle of the employee against what was taught in the first two steps.
At the end of the program, a certification will be conferred to the employee by MIT and Accenture, depending on the specific track he took.
Currently, the ASDA program offers four areas for certification: application developer, application designer, application tester, and application test designer.
As of June 30, there are at least 22,222 employees who have enrolled in the program, and 4,375 certified professionals.
In the Philippines, 2,166 Accenture employees have signed up for the program. It has certified 200 individuals: 181 are developers, while the remaining 19 are designers.
This story, "Step Aside, Cloud: 'Crowd Computing' the Future of IT, Too" was originally published by Computerworld.
|
<urn:uuid:8b69976a-7147-4af5-aa06-21876de5fdb6>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2416674/internet/step-aside--cloud---crowd-computing--the-future-of-it--too.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00098-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.958348 | 849 | 3.03125 | 3 |
When it comes to ushering in the next generation of computer chips, Moore's Law is not dead but merely evolving, say some of the more optimistic scientists and engineers cited in a recent New York Times article from science writer John Markoff. Despite numerous proclamations foretelling Moore's Law's imminent demise, there are those who remain confident that a new class of nanomaterials will save the day. Materials designers are investigating metals, ceramics, polymers and composites that organize via “bottom up” rather than “top down” processes as the substrate for future circuits.
Moore’s Law refers to the observation put forth by Intel cofounder Gordon E. Moore in 1965 that stated that the number of transistors on a silicon chip would double approximately every 24 months. The prediction has lasted through five decades of faster and cheaper CPUs, but it’s run out of steam as silicon-based circuits near the limits of miniaturization. While future process shrinks are possible and 3D stacking will buy some additional time, pundits say these tweaks are not economically feasible past a certain point. In fact, the high cost of building next-generation semiconductor factories has been called “Moore’s Second Law.”
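As a rough, purely illustrative back-of-envelope exercise (the 24-month doubling period is taken from the paragraph above; the 1971 starting point of roughly 2,300 transistors for an early microprocessor is an assumption added here, not a figure from the article), a few lines of Python show how quickly a fixed doubling schedule compounds:

# Back-of-envelope Moore's Law projection: transistor count doubling
# every 24 months. The 1971 baseline of ~2,300 transistors is used
# purely as an illustrative starting point.

def projected_transistors(start_year, start_count, target_year, months_per_doubling=24):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(1971, 2300, year):,.0f}")

Even under this idealized schedule the count grows by roughly a factor of a thousand every two decades, which is why even small deviations from the doubling period matter so much economically.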
With the advantages of Moore's Law-type progress hanging in the balance, semiconductor designers have been forced to innovate. A lot of the buzz lately is around “self-assembling” circuits. Industry researchers are experimenting with new techniques that combine nanowires with conventional manufacturing processes, setting the stage for a new class of computer chips that continues the price/performance progression established by Moore's Law. Manufacturers are hopeful that such bottom-up self-assembly techniques will eliminate the need to invest in costly new lithographic machines.
“The key is self assembly,” said Chandrasekhar Narayan, director of science and technology at IBM’s Almaden Research Center in San Jose, Calif. “You use the forces of nature to do your work for you. Brute force doesn’t work any more; you have to work with nature and let things happen by themselves.”
Moving from silicon-based manufacturing to an era of computational materials will require a concerted effort and a lot of computing power to test candidate materials. Markoff notes that materials researchers in Silicon Valley are using powerful new supercomputers to advance the science. “While semiconductor chips are no longer made here,” says Markoff referring to Silicon Valley, “the new classes of materials being developed in this area are likely to reshape the computing world over the next decade.”
|
<urn:uuid:28bd9c46-e2b5-484c-83e5-1c7f669eee2e>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2014/01/10/moores-law-post-silicon-era/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00098-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.935885 | 561 | 3.625 | 4 |
This resource is no longer available
Enabling Scientific Breakthroughs at the Petascale
When the University of Illinois' National Center for Supercomputing Applications (NCSA) kicked off, the project managers knew they'd need a scalable parallel file system capable of meeting the staggering requirements of one of the most powerful supercomputers in the world.
To find out where the NCSA leaders found such a storage solution, access this paper, which parallels modern breakthroughs in science with breakthroughs in storage - and see how modern storage systems have allowed for many scientific developments. Read on to learn about the high-performance computing storage solution capable of enabling scientists across the US to conduct research with unprecedented sustained performance.
|
<urn:uuid:3d5f5eea-6d73-4a85-8287-ec80260f2630>
|
CC-MAIN-2017-09
|
http://www.bitpipe.com/detail/RES/1349204001_770.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00326-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.908594 | 142 | 2.625 | 3 |
California Natural Resources Agency Secretary Mike Chrisman and State Chief Information Officer Teri Takai yesterday announced the launch of a new Cal-Atlas Geospatial Clearinghouse Web site to help government agencies better coordinate their geospatial efforts and allow public access to geospatial data. The innovative approach to technology will allow the general public to access maps, data and information that have not previously been accessible on a single site or from a single source. Geospatial data is information based on geographic locations or characteristics.
The new Web site will centralize a variety of data and information, and Cal-Atlas provides a number of important Web-accessible services.
Digital maps linked to related information via GIS technologies provide unique capabilities that have gained public awareness. Commonly known tools include ArcGIS Explorer, Google Earth and Maps, Microsoft Virtual Earth, Yahoo Maps and NASA World Wind. The maps and information available on Cal-Atlas will help users answer important questions: where to go in case of an emergency, where a new road might be routed, and where the best places are for different activities and recreation.
Using maps to see where things are in relation to each other is also a key to being able to plan and deliver more effective public services. Cal-Atlas is an effort to make sure that agencies have accurate, complete and up-to-date GIS data. It will also help organizations to coordinate their activities, avoid duplication of effort and ensure that they make the most of their data investments. More maps or links to maps will be offered as state agencies roll out their own interactive map based Web sites.
Governor Schwarzenegger last year called for the creation of a GIS task force to develop a statewide strategy to enhance the technology for environmental protection, natural resource management, traffic flow, emergency preparedness and response, land use planning and health and human services. The task force issued a report to the governor recommending, among other things, that the state should have a single office to oversee and coordinate its use of this technology.
Late last year, state CIO Teri Takai told Government Technology she wanted to create a chief geographic officer position.
|
<urn:uuid:69d6d679-5b0b-49aa-8be4-f5dec1b7cfed>
|
CC-MAIN-2017-09
|
http://www.govtech.com/policy-management/Geospatial-Coordination-Web-Site-Launched-by.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00271-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.924219 | 445 | 2.625 | 3 |
To help the attendees of my Brucon White Hat Shellcode workshop, I wrote a new program to generate simple shellcode. I’m releasing it now.
People regularly ask me for malware so they can test their security setup. First, that’s a bad idea, and second, you can do without.
Why is using malware a bad idea? It’s dangerous and not reliable. Say you use a trojan to test your sandbox. You notice that your machine is not compromised. But is it because your sandbox contained the trojan, or because the trojan failed to execute properly? It might surprise you, but there’s a lot of unreliable malware out in the wild.
So how can you reliably test your sandbox without risking infection, or even worse, having malware escape into your corporate network? Just use simple shellcode that creates a file in a location your sandbox should block writes to, like system32.
To generate this shellcode with simple-shellcode-generator.py, create a text file (call it createfile.def) with these 2 lines:
kernel32.dll CreateFileA str 0x0 0x0 0x0 0x2 0x80 0x0
kernel32.dll CloseHandle eax
Each line in this definition file instructs the generator to generate assembler code to lookup the address of the WIN32 API function, and to call it with the arguments you provide. The first column defines the dll that contains the function to call, the second column is the actual API function, and the rest are arguments to this function. The arguments you provide are copied literally into the generated assembler code, except for 3 keywords.
Keyword int is used to represent any DWORD; it results in the generation of a push 0x0.
Keyword str is used to reserve space for a string, and the address of the string is used as argument.
Keyword pint is used to reserve space for a DWORD, and the address of the DWORD is used as argument.
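The definition-file format is simple enough to illustrate with a short parser. The Python sketch below is not the actual code from simple-shellcode-generator.py; it is only an illustration of how each line splits into a DLL name, an API function and its arguments, with str, int and pint treated as the special keywords described above:

# Illustrative parser for the definition-file format described above:
# each line is "<dll> <function> <arg1> <arg2> ...", where an argument
# is either a literal (e.g. 0x80, eax) or one of the keywords
# str / int / pint. This sketch only demonstrates the format; it is not
# taken from simple-shellcode-generator.py itself.

KEYWORDS = {"str", "int", "pint"}

def parse_def_line(line):
    dll, function, *args = line.split()
    parsed = [("keyword" if a in KEYWORDS else "literal", a) for a in args]
    return {"dll": dll, "function": function, "arguments": parsed}

example_def = """kernel32.dll CreateFileA str 0x0 0x0 0x0 0x2 0x80 0x0
kernel32.dll CloseHandle eax"""

for line in example_def.splitlines():
    print(parse_def_line(line))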
To generate our shellcode, issue this command:
simple-shellcode-generator.py -o createfile.asm createfile.def
This generates the following assembler code:
; Shellcode generated by simple-shellcode-generator.py
; Generated for NASM assembler (http://www.nasm.us)
; https://DidierStevens.com
; Use at your own risk
;
; History:
;  2011/09/23: generated

BITS 32

KERNEL32_HASH equ 0x000D4E88
KERNEL32_NUMBER_OF_FUNCTIONS equ 2
KERNEL32_CREATEFILEA_HASH equ 0x00067746
KERNEL32_CLOSEHANDLE_HASH equ 0x00067E1A

segment .text

    call geteip
geteip:
    pop ebx

    ; Setup environment for kernel32.dll
    lea esi, [KERNEL32_FUNCTIONS_TABLE-geteip+ebx]
    push esi
    lea esi, [KERNEL32_HASHES_TABLE-geteip+ebx]
    push esi
    push KERNEL32_NUMBER_OF_FUNCTIONS
    push KERNEL32_HASH
    call LookupFunctions

    ; call to CreateFileA
    push 0x0
    push 0x80
    push 0x2
    push 0x0
    push 0x0
    push 0x0
    lea eax, [STRING1-geteip+ebx]
    push eax
    call [KERNEL32_CREATEFILEA-geteip+ebx]

    ; call to CloseHandle
    push eax
    call [KERNEL32_CLOSEHANDLE-geteip+ebx]

    ret

%include "sc-api-functions.asm"

KERNEL32_HASHES_TABLE:
    dd KERNEL32_CREATEFILEA_HASH
    dd KERNEL32_CLOSEHANDLE_HASH

KERNEL32_FUNCTIONS_TABLE:
KERNEL32_CREATEFILEA dd 0x00000000
KERNEL32_CLOSEHANDLE dd 0x00000000

STRING1:
    db "String 1", 0
You can replace “String 1” (the db string under the STRING1 label, line 57 of the generated source) with the file you want to create: “C:\Windows\System32\testfile.txt”.
This shellcode uses the library sc-api-functions.asm you can find in my shellcode repository.
|
<urn:uuid:45aefbce-3f61-40af-a203-26443de00b6b>
|
CC-MAIN-2017-09
|
https://blog.didierstevens.com/2011/09/23/simple-shellcode-generator-py/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00323-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.714208 | 951 | 2.625 | 3 |
- "Medication for men plagued by hair loss has become a topic of interest in Japan since a drug company began marketing it at the end of last year." March 5th, 2006 – Source
- "An increasing number of companies are apparently turning the Chinese fear of a bald spot into big bucks with some doing so well they are branching out into other countries." February 16, 2006 – Source
- "There is something in the air, or should we say in the hair, these days. Scientific research into hair loss remedies has never been more active or more exciting." June 7, 2006 - Source
- Hair is a complex and delicate part of the body.
- Keeping it healthy and beautiful is a challenge.
- Hair grows everywhere on the body with the exception of lips, eyelids, palms of the hands and soles of the feet.
- Hair is basically a form of skin.
- Hair is made up of a protein called keratin.
- Each shaft of hair is made of two or three inter-twined layers of keratin which grow from a follicle beneath the skin.
- Hair Structure - Source
- Hair Cycle - Source
What causes hair loss?
- Decrease in growth of hair
- Increase in shedding of hair
- Breakage of hair
- Conversion of thick terminal hairs to thin vellus hairs
Both men and women lose hair for similar reasons. Hair loss in men is often more dramatic and follows specific patterns of loss, one of which has been termed “Male Pattern Baldness" or "Androgenetic Alopecia".
Types of alopecia
- Alopecia Areata (AA): Hair loss occurring in patches anywhere on the body.
- Alopecia Totalis (AT): Total loss of the hair on the scalp.
- Alopecia Universalis (AU): Total loss of all hair on the body.
- Alopecia Barbae: Loss of facial hair (for a man) especially in the beard area.
- Alopecia Mucinosa: A type of alopecia which results in scaly patches.
- Androgenetic Alopecia (AGA): Also known as male pattern baldness. It is a thinning of the hair to an almost transparent state, in both men and women. It is thought to be a hereditary form of hair loss.
- Traction Alopecia: Traction alopecia is usually due to excessive pulling or tension on hair shafts as a result of certain hair styles. It is seen more often in women, particularly those of East Indian and Afro-Caribbean origin. Hair loss depends on the way the hair is being pulled. Prolonged traction alopecia can stop new hair follicles from developing and leads to permanent hair loss.
- Anagen Effluvium: This hair loss is generally caused by chemicals such as those used to treat cancer. Initially it causes patchy hair loss, which often then leads to total hair loss. The good news is that when you stop using these chemicals the hair normally grows back (usually about 6 months later). Other drugs also can cause hair loss. Many medicines used to treat even common diseases can cause hair loss.
- Scarring Alopecia: A form of alopecia which leaves scarring on the area of hair loss.
- Telogen Effluvium: A form of hair loss where more hairs than normal fall out. There is a general 'thinning' of the hair. Unlike some other hair and scalp conditions, it is temporary and the hair growth usually recovers. (Source)
- Gradual onset
- Transition from large, thick, pigmented terminal hairs to thinner, shorter, indeterminate hairs and finally to short, wispy, non-pigmented vellus hairs in the involved areas
- Characterized by a receding hairline and/or hair loss on the top of the head
- Genetic predisposition
- Hormonal effect of androgen
- Reduction of blood circulation around hair follicle
- Deactivation of hair matrix cells
Some facts from Japan
- Market size: ¥ 30 Billion
- Number of products: more than 100
(JICST-EPlus - Japanese Science & Technology)
IP activity over the years
The graph indicates:
- Number of patents filed every 5 years (except for first 7 years).
- First solution proposed in 1973
- Filing trend indicates steep rise in activity recently.
- Active assignees
Assignees currently active with more than 5 patents to their credit during 2000-2005.
- Warner with 9 patents,
- Bristol with 6 and
- Abbott with 5.
Compositions of treatment for these causes are identified and categorized as follows:
- Anti-androgens (Finasteride) source
- Vasodilators (Minoxidil) source
- Double action (Anti-androgen + Vasodilator)
- Hair matrix cells activator
|Cause||Treatment approach||Pathways affected|
|Hormonal effect of androgen||Anti-androgens||Testosterone pathway|
|Reduction of blood circulation around hair follicle||Vasodilators (eg. Minoxidil)||NO/cGMP Pathway|
|Deactivation of hair matrix cells||Hair matrix cells activator||
- Anti-androgens are used in hormone therapy.
- Anti-androgens are designed to affect the hormones made in the adrenal glands. They don't stop the hormones from being made, but they stop them from having an effect leading to hair loss.
What causes hair loss?
- Testosterone is reduced to its active metabolite, Dihydrotestosterone (DHT) by the enzyme 5 alpha reductase.
- DHT attaches to androgen receptor sites at the hair follicle.
- DHT causes gradual miniaturization of the follicle, which eventually results in hair loss.
How do anti-androgens treat hair loss?
- Anti-androgens compete with DHT to bind to the androgen receptor.
- Upon binding of anti-androgen in place of DHT, follicle miniaturization is lowered and hair loss prevented.
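The "competition" described above follows the standard competitive-binding picture: DHT and the anti-androgen vie for the same receptor site, so raising the inhibitor concentration lowers DHT occupancy. The short Python sketch below uses the textbook single-site competitive-binding relation with made-up concentrations and affinities purely for illustration; none of the numbers or compounds come from the patents discussed here.

# Illustrative competitive-binding calculation: fraction of androgen
# receptors occupied by DHT in the presence of a competing anti-androgen,
# using the standard single-site relation
#   occupancy = (L/Kd) / (1 + L/Kd + I/Ki)
# All concentrations and affinities below are invented for illustration.

def dht_occupancy(dht_nM, kd_dht_nM, inhibitor_nM, ki_inhibitor_nM):
    l, i = dht_nM / kd_dht_nM, inhibitor_nM / ki_inhibitor_nM
    return l / (1.0 + l + i)

for inhibitor in (0, 10, 100, 1000):   # hypothetical anti-androgen levels, nM
    occ = dht_occupancy(dht_nM=5, kd_dht_nM=1, inhibitor_nM=inhibitor, ki_inhibitor_nM=50)
    print(f"inhibitor {inhibitor:>4} nM -> DHT occupancy {occ:.2f}")

The output falls from about 0.83 occupancy with no inhibitor to about 0.19 at the highest inhibitor level, which is the quantitative sense in which the anti-androgen "competes" DHT off the receptor.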
Functions of Anti-androgen
IP Map for anti-androgen
|Natural extracts||Palmetto berry extract (fatty acids & sterols), Pumpkin seed extract (Vitamins-B, alpha-linolenic acid, amino acids and phytosterols), Quercetin (Flavonoids) and Beta-sitosterol (Rice bran, wheat germ, corn oils and soybeans)||Fatty acids – Inhibit testosterone
Sterols - Mechanism of action unknown.
Quercetin affects the cell growth cycle.
Beta-sitosterol reduces inflammation on the scalp
|Organic compound||New class of 4-cycloalkoxy benzonitrile derivatives and salts||Acts as androgen receptor modulator and blocks formation of DHT.|
|Organic compound||New class of 6-sulfonamido-quinolin-2-one and 6-sulfonamido-2-oxo-chromene derivatives.||The compounds inhibit, or decrease, activation of androgen receptor by androgens.|
APHIOS Corp (2003)
|Natural extracts||Supercritical fluid isolate of Saw Palmetto and Sperol (Serenoa repens berry) and their analogs or derivatives.||Modulates androgenic activity by inhibiting 5.alpha.-reductase activity.|
Fundacion Pablo Cassara (2003)
|Nucleotide||Pharmacologically active oligonucleotides (encompass both DNA and S-DNA bond)||Oligonucleotides inhibit androgen receptor (AR) expression at very low concentrations in skin and hair follicle|
PFIZER INC (2001)
|Organic compound||Thyromimetic compounds (structurally similar to thyronine) with finasteride, or cyproterone acetate||Activates thyroid hormone receptors in hair follicle which in turn promote elasticisation of follicle walls and hair follicle|
|Peptides/nucleic acid||Bradykinin antagonist (peptide of plasma origin from kininogen precursor-kallikrein)||Inhibits synthesis of bradykinin receptors or compounds by binding to B2 receptor|
KAO Corp (1987)
|Natural extracts||Walnut extract (leaves/pericarps) with an organic solvent||Blocks formation of DHT|
- Minoxidil is a "potassium channel opener" that leads to vasodilation.
- The drug is available in two forms. Oral minoxidil is used to treat high blood pressure and the topical solution form is used to treat hair loss and baldness.
What causes hair loss?
- A thick network of tiny veins and arteries line the outer wall of the follicle. Blood pumps through the bulb and hair via this network.
- DHT accumulates in the hair follicles and roots, constricting the blood supply of oxygen and nutrients to the hair roots; which is also seen to possibly contribute towards hair loss.
How does Minoxidil treat hair loss?
- Minoxidil is applied to the scalp topically, where it dilates blood vessels in the scalp and sustains the hair follicles for longer period of time.
- Minoxidil is thought to have a direct mitogenic effect on epidermal cells, as has been observed both in vitro and in vivo. Though the mechanism of its action for causing cell proliferation is not very clear, minoxidil is thought to prevent intracellular calcium entry. Calcium normally enhances epidermal growth factors to inhibit hair growth, and Minoxidil by getting converted to minoxidil sulfate acts as a potassium channel agonist and enhances potassium ion permeability to prevent calcium ions from entering into cells. (Source)
- Minoxidil sulfate (MS) appears to be the active metabolite responsible for hair growth stimulation.
Functions of Vasodilators
IP Map for Vasodilators
|Organic compound||Benzopyran compounds||Rapidly metabolizes, and causes reduced cardiovascular effects as compared to other known potassium channel openers|
LG HOUSEHOLD & HEALTH CARE(2001)
|Natural extracts||Sophora flavescens extract (alkaloids & flavonoids, luteolin-7-glucose and cytosine) Hinokitiol (Taiwan hinoki oil, Aomori, Western Red Cedar oil) and Nicotinamide (Vitamin B complex)||Promotes function of cell activity and dilates blood vessels|
Double action (Anti-androgen + Vasodilator)
- Combination of Vasodilator + Anti-androgen (double action) composition for effective treatment of Male-Pattern Baldness.
What is the problem with using only Anti-androgen therapy?
- Anti-androgen is not effective in addressing the issue of vasoconstriction around hair follicles due to sebum build-up.
- Anti-androgen only prevents binding of DHT to androgen receptors. However, the effect of an impaired oxygen and nutrient supply to the hair follicles due to vasoconstriction still remains and gradually causes hair loss.
What is the problem with using only Vasodilator (or Minoxidil only) therapy?
- Vasodilator or Minoxidil-based products are generally not effective in stopping hair loss as vasodilators (or Minoxidil) do not block the harmful effects of DHT in the scalp and hair follicles.
- Vasodilators or Minoxidil simply dilate blood vessels in the scalp. However, the harmful DHT still gets produced in the body, enters the scalp and hair follicles causing hair loss.
How is the combination of Anti-androgens and Vasodilator (or Minoxidil) effective?
- Anti-androgens target the problem of DHT binding to androgen receptors and prevents follicle miniaturization.
- Vasodilators like Minoxidil cause vasodilation and therefore improve supply of oxygen and nutrients to the hair follicle and roots.
- Combination therapy therefore proves to be much more effective than individual therapy.
Functions of (Anti-androgen + Vasodilators)
IP Map for (Anti-androgen + Vasodilators)
|Peptides||Testosterone blocker or vascular toner (Flutamide, cyproterone acetate, spironolactone, progesterone, or analogs or derivatives) and minoxidil mixed along with non-retinoid penetration enhancer and sunscreen||Inhibits 5.alpha.-reductase activity (block DHT) and increase blood flow on the scalp|
|Peptides||Prostaglandin (polyunsaturated fatty acids) EP-2, EP-3 EP-4 receptor agonist with Minoxidil, 2,4-diaminopyrimidine 3-oxide, and Aminexil, cyclic AMP||Minoxidil (designed to mimic nitric oxide's effects) grows hair via prostaglandin-H synthase stimulation. EP-3 and EP-4 are expressed in anagen hair follicles which induce a reduction in the level of cAMP|
|Natural extract||Hop extract (oil contains terpenes and humulene), Rosemary extract (hydroalcohol), Swertia extract (glycol with a swertiamarin), Silanodiol salicylate (biologically active silicon compound)||Inhibits activity of 5-alpha-reductase, protects follicular cell membranes by neutralizing action of oxidation reaction in tissues, stimulates hair follicles and blood circulation to the hair root, supplies oxygen and nutrients to base of follicle, retains humidity, avoids dehydration of scalp|
Hair matrix cell activator
Hair matrix cell activator is a substance that acts at the matrix cells in the hair follicle preventing their degradation.
What causes hair loss?
- Stem cells are interspersed within the basal layer of the outer root sheath and in an area called the bulge.
- Stem cells migrate to hair matrix where they start to divide and differentiate, under the influence of substances produced by cells of the dermal papilla.
- Perifollicular matrix cells undergo slow degradation which prevents follicle stimulation.
- Hair follicle activation is required for hair growth and thus inhibition of follicle activation eventually leads to hair loss.
How does hair cell matrix activator treat hair loss?
- Hair cell matrix activator slows down and inhibits degradation of the perifollicular matrix.
- This leads to an increase in hair follicle matrix cells that differentiate from progenitor stem cells.
- Matrix activator allows activation of hair matrix cells and therefore follicle stimulation leading to hair growth.
Functions of Hair matrix cell activator
IP Map for Hair matrix cell activator
|Organic compound||(2-substituted oxyphenyl) alkanamide derivative and its salt||Mechanism of action has not been made clear, having excellent hair follicle activating action and regrowth promoting effect|
|Peptides||Metalloprotease (MMP-9) inhibitor (thiol or a hydroxamate) other than chelating calcium ions||Reducing the expression of MMPs (Metalloproteases) in the scalp - slows down or inhibits the degradation of the perifollicular matrix (extracellular matrix surrounding the hair follicle)|
Technology mapping based on patents analyzed
IPMap: Composition nature matrix
|Year||Organic Compound||Natural extracts||Peptides||Nucleotides||Natural extract + Organic comp|
|2004||WARNER (1)||BLOTECH (1)||....||....||KAO (1)|
|2003||WARNER (1)||APHIOS (1)||....||FUNDACION (1)||....|
|2001||PFIZER (1)||LG HEALTH-CARE (1)||....||....||....|
|2000||....||....||L’OREAL (1) / N/A (1)||....||....|
|1999||SHISEDIO (1)||COLOMER (1)||....||....||....|
Focus of patents
|Focus of patents||Patent no.||Rec. no.|
|2-substituted oxyphenyl alkanamide derivative having excellent hair growth effect.||US20020052498||1|
|Thyromimetic compounds, and its role in treating hair loss||US20030007941||2|
|Saw Palmetto berry extract, pumpkin seed extract, sitosterol and quercetin for the treatment and prevention of the biologically detrimental effects of DHT||US20060009430||3|
|4-cycloalkoxy benzonitriles and its use as androgen receptor modulators||US20060009427||4|
|Supercritical fluid isolate of Saw Palmetto, Sperol for inhibition of 5-.alpha.-reductase activity||US20050118282||5|
|New class of quinolin-2-ones and chromen-2-ones andtheir use as androgen receptor antagonists||US20050085467||6|
|Antiandrogen oligonucleotides usable for the treatment of dermatological androgen-related disorders||US20060009429||7|
|Bradykinin antagonists for stimulating or inducing hair growth and/or arresting hair loss||US20030073616||8|
|Extract from walnut leaves and/or pericarps as 5 alpha -reductase inhibitor||EP0279010||9|
|Stimulating hair growth using benzopyrans||US20040157856||10|
|Sophora flavescens extract, Coicis semen extract, clove extract, etc for promoting hair growth, function of cell activity and dilating peripheral blood vessels.||US20050053572||11|
|Compositions to prevent or reduce hair loss||US20060052405||12|
|Prostaglandin EP-3 receptor antagonists for reducing hair loss||US20050123577||13|
|Synergic effect arising from the interaction of active ingredients, consisting of three plant extracts and a synthetic organosilicic compound for prevent hair loss and stimulate hair growth||US6447762||14|
|Metalloprotease inhibitors to induce and/or stimulate the growth||US20040071647||15|
|Method of decreasing sebum production and pore size||US20050277699||16|
|Method for reducing sebum on the hair and skin||US4529587||17|
Distribution of patents
By patent types
By key ingredients
By target disease
Key ingredients vs. Target disease
Mode of administration
Product type vs. Product form
Patents by target diseases
|Target disease/ disorder||Patent no.||Rec. no.|
|Alopecia areata, alopecia pityrodes or alopecia seborrheica, or androgenic alopecia (i.e. male pattern baldness)||US20020052498||1|
|Alopecia areata, male pattern baldness and female pattern baldness||US20030007941||2|
|Androgenic alopecia (i.e. male pattern baldness), prostatic hyperplasia or both.||US20060009430||3|
|Inappropriate activation of the androgen receptor, acne, oily skin, alopecia||US20060009427||4|
|Prostatic hyperplasia, prostatic cancer, hirsutism, acne, male pattern baldness, seborrhea, and other diseases related to androgen hyperactivity||US20050118282||5|
|Alopecia, acne, oily skin, prostrate cancer, hirsutism, and benign prostate hyperplasia||US20050085467||6|
|Androgen-associated hair loss and androgen-skin related disorders.||US20060009429||7|
|Androgenetic or androgenic alopecia or androgeno-genetic alopecia||US20030073616||8|
|Diseases caused by testosterone (male-pattern alopecia)||EP0279010||9|
|Alopecia areata, female pattern hair loss, hair loss secondary to chemotherapy or radiation treatment, stress-related hair loss, self-induced hair loss, scarring alopecia, and alopecia in non-human mammal||US20040157856||10|
|Male pattern alopecia||US20050053572||11|
|Alopecia, androgenic alopecia||US20060052405||12|
|Male pattern alopecia||US6447762||14|
|Androgenetic, androgenic or androgenogenetic alopecia||US20040071647||15|
|Curing other scalp related problems||US20050244362||16|
Pathways associated with hair matrix cell activation
Molecular mediators of hair follicle embryogenesis: Identification of the molecular pathways controlling differentiation and proliferation in mammalian hair follicles provides the crucial link to understanding the regulation of normal hair growth, the basis of hereditary hair loss diseases, and the origin of follicle-based tumors. Homeobox (hox), hedgehog (hh), patched (ptc), wingless (wg)/wnt, disheveled (dsh), engrailed (en), Notch 1 and armadillo/B-catenin genes are all critical for hair follicle development.
- Wnt pathway: Maintains hair-inducing activity of the dermal papilla.
- Hedgehog pathway: Sonic hedgehog (SHH) signaling plays a critical role in hair follicle development. Sonic hedgehog, SHH for short, helps guide hair follicles from a resting stage into growth activity. SHH is particularly important in the embryonic formation of hair follicles.
- STAT pathway
- TGF beta/BMP Pathway: Bone morphogenetic protein (BMP) signaling has been implicated in the regulation of both proliferation and differentiation in the hair follicle. BMP2 is expressed in the embryonic ectoderm, but then localizes to the early hair follicle placode and underlying mesenchyme. BMP4 is expressed in the early dermal condensate. Research results show that BMPs are a key component of the signaling network controlling hair development and are required to induce the genetic program regulating hair shaft differentiation in the anagen hair follicle. Transforming growth factor beta (TGF-beta) inhibits mitogen-induced dermal papilla cell proliferation.
- FGF Pathway: Fibroblast growth factor (bFGF) and platelet-derived growth factor (PDGF) potentiate the growth of dermal papilla cells. It is proposed that these proteins increase the synthesis of stromelysin (an enzyme, matrix metalloproteinase) which acts on the papilla cells and accelerates their growth.
- MAPK Pathway: Mitogen-activated protein kinase (MAPK) activation, increases keratinocyte turnover.
- NOTCH Pathway: Notch-1 is expressed in ectodermal-derived cells of the follicle, in the inner cells of the embryonic placode and the follicle bulb, and in the suprabasal cells of the mature outer root sheath. Delta-1, one of the three ligands is only expressed during embryonic follicle development and is exclusive to the mesenchymal cells of the pre-papilla located beneath the follicle placode, and appears to promote and accelerate placode formation, while suppressing placode formation in surrounding cells. Other ligands, Serrate 1 and Serrate 2, are expressed in matrix cells destined to form the inner root sheath and hair shaft.
Pathways associated with Anti Androgen
Players of WNT inhibition Pathway
|Patent no.||Key compound||Players of inhibition|
|US6743791||Heterocyclic compounds||AKT3, GSK-3, ERK2|
|US20040072836||Aza-oxindole derivatives||GSK3, AKT, PKC|
|WO0056710||3-(Anilinomethylene) oxindoles||GSK3, AKT, PKC|
|WO2003011287||Pyrazolon derivatives||GSK3, ß-catenin|
|US6924141||Lithium chloride, Wnt3/4/ 7||ß-catenin, GSK3, Wnt|
|US6683048||Peptide sequence||a-catenin, ß-catenin|
|US6677116||Peptide sequence LXXLL||ß-catenin|
|US6303576||Peptide sequence LXXLL||ß-catenin|
Role of Pyrazole compounds in Wnt Pathway
- Pyrazole (C3H4N2) refers both to the class of simple aromatic ring organic compounds of the heterocyclic series characterized by a 5-membered ring structure composed of three carbon atoms and two nitrogen atoms in adjacent positions and to the unsubstituted parent compound. Being so composed and having pharmacological effects on humans, they are classified as alkaloids although they are not known to occur in nature.
- Pyrazoles are produced synthetically through the reaction of α,β-unsaturated aldehydes with hydrazine and subsequent dehydrogenation
- Pyrazoles are used for their analgesic, anti-inflammatory, antipyretic, antiarrhythmic, tranquilizing, muscle relaxing, psychoanaleptic, anticonvulsant, monoamineoxidase inhibiting, antidiabetic and antibacterial activities.
- Structurally related compounds are pyrazoline and pyrazolidine.
GSK3 inhibition by pyrazole compounds
|R1=T-Ring D, wherein
T is a valence bond and Ring D = 5-6 membered aryl or heteroaryl ring;
R2 = hydrogen or C1-4 aliphatic and R2'= hydrogen;
R3 = -R, -OR, or -N(R4)2, wherein R = hydrogen, C1-6 aliphatic, 5-6 membered heterocyclyl, phenyl, or 5-6 membered heteroaryl, and L is -O-, -S-, or -NH-; and Ring D is substituted by up to three substituents selected from -halo, -CN, -NO2, -N(R4)2, optionally substituted C1-6 aliphatic group, -OR, -C(O)R, -CO2R, -CONH(R<4>), -N(R4)COR, -N(R4)CO2R, -SO2N(R4)2, -N(R4)SO2R, -N(R6)COCH2N(R4)2, -N(R6)COCH2CH2N(R4)2, or -N(R6)COCH2CH2CH2N(R4)2, wherein R = hydrogen, C1-6 aliphatic, phenyl, 5-6 membered heteroaryl ring, or 5-6 membered heterocyclic ring
|X = R1-A-NR4- or a 5- or 6-membered carbocyclic or heterocyclic ring; A is a bond, S02, C=O, NRg(C=O) or O(C=O) wherein Rg is hydrogen or C1-4 hydrocarbyl optionally substituted by hydroxy or C1-4 alkoxy; Y is a bond or an alkylene chain of 1, 2 or 3 carbon atoms in length;
R1 is hydrogen; carbocyclic or heterocyclic group having from 3 to 12 ring members; or C1-8 hydrocarbyl group optionally substituted by one or more substituents selected from halogen (e.g. fluorine), hydroxy, C1-4 hydrocarbyloxy, amino, mono- or di-C1-4 hydrocarbylamino, and carbocyclic or heterocyclic groups having from 3 to 12 ring members, and wherein 1 or 2 of the carbon atoms of the hydrocarbyl group may optionally be replaced by an atom or group selected from 0, S, NH, SO, S02;
R2 is hydrogen; halogen; C1-4 alkoxy (e.g. methoxy); or a C1-4 hydrocarbyl group optionally substituted by halogen (e.g. fluorine), hydroxyl or C1-4 alkoxy (e.g. methoxy); R3 is selected from hydrogen and carbocyclic and heterocyclic groups having from 3 to 12 ring members; and
R4 is hydrogen or a C1-4 hydrocarbyl group optionally substituted by halogen (e.g. fluorine), hydroxyl or C1-4 alkoxy (e.g. methoxy).
|X is a groupR1-A-NR4-or a 5-or 6-membered carbocyclic or heterocyclic ring;
A is a bond,SO2, C=O, NRg (C=O) or O(C=O) wherein Rg is hydrogen orC14 hydrocarbyl optionally substituted by hydroxy or C1-4 alkoxy;Y is a bond or an alkylene chain of 1,2 or 3 carbon atoms in length;R'is hydrogen; a carbocyclic or heterocyclic group having from 3 to 12 ring members; or a C1-8 hydrocarbyl group optionally substituted by one or more substituents selected from halogen (e. g. fluorine), hydroxy, C1-4 hydrocarbyloxy, amino, mono-ordi-Cl 4 hydrocarbylamino, and carbocyclic or heterocyclic groups having from 3 to 12 ring members, and wherein 1 or 2 of the carbon atoms of the hydrocarbyl group may optionally be replaced by an atom or group selected fromO, S, NH, SO, SO2 ;R2 is hydrogen; halogen;C14 alkoxy (e. g. methoxy); or aC14 hydrocarbyl group optionally substituted by halogen (e. g. fluorine), hydroxyl orC14 alkoxy (e. g. methoxy);R3 is selected from hydrogen and carbocyclic and heterocyclic groups having from 3 to 12 ring members; andR4 is hydrogen or a C1-4 hydrocarbyl group optionally substituted by halogen (e. g. fluorine), hydroxyl or C1-4 alkoxy (e. g. methoxy).
Inhibition by amine derivatives
Patent Number: US6989385 Applicant: Vertex Pharmaceuticals Incorporated Title: Pyrazole compounds useful as protein kinase inhibitors
Patent Number: US7008948 Applicant: Vertex Pharmaceuticals Incorporated Title: Fused pyrimidyl pyrazole compounds useful as protein kinase inhibitors
Patent Number: US6977262 Assignee: Mitsubishi Pharma Corporation Title: Dihydropyrazolopyridine compounds and pharmaceutical use thereof
Patent Number: US6664247 Assignee: Vertex Pharmaceuticals Incorporated Title: Pyrazole compounds useful as protein kinase inhibitors
Patent Number: US2004224944 Assignee: VERTEX PHARMACEUTICALS INC Title: Pyrazole compounds useful as protein kinase inhibitors
GSK-3 Inhibition Mechanism - Phosphorylation
- GSK-3 inhibition targets treatment of chemotherapy-induced alopecia source
- In the canonical Wnt signaling cascade, adenomatous polyposis coli (APC), axin, and GSK3 constitute the so-called destruction complex, which controls the stability of beta-catenin. It is generally believed that four conserved Ser/Thr residues in the N terminus of beta-catenin are the pivotal targets for the constitutively active serine kinase GSK3. GSK3 covalently modifies beta-catenin by attaching phosphate groups (from ATP) to serine and threonine residues. In so doing, the functional properties of the protein kinase's substrate (beta-catenin) are modified.
- In the absence of Wnt signals, glycogen synthase kinase (GSK) is presumed to phosphorylate the N-terminal end of beta-catenin, thus promoting degradation of beta-catenin through subsequent ubiquitination and proteasomal targeting.
- Exposure of cells to Wnts leads to inactivation of GSK-3 through an as yet unclear mechanism. The phosphoprotein Dishevelled is required, after receptor-ligand interaction, to transduce the signal that results in the inactivation of GSK-3. As a result, beta-catenin is dephosphorylated and escapes the ubiquitylation-dependent destruction machinery.
- Unphosphorylated beta-catenin accumulates in the cytoplasm and translocates to the nucleus, where it can associate with the TCF/LEFs and become a transcriptional transactivator.
- Beta-catenin phosphorylation at serine 45 (Ser45), threonine 41 (Thr41), Ser37, and Ser33 is critical for beta-catenin degradation. source
- Regulation of beta-catenin phosphorylation is a central part of the canonical Wnt signaling pathway. source
- Ser-X-X-X-Ser (X is any amino acid) motif is obligatory for beta-catenin phosphorylation by GSK3 (see the sketch after this list). source
- Beta-catenin phosphorylation/degradation and its regulation by Wnt can occur normally in the absence of Thr41 as long as the Ser-X-X-X-Ser motif/spacing is preserved. [http://pubs.acs.org/cgi-bin/abstract.cgi/bichaw/2006/45/i16/abs/bi0601149.html source]
- GSK3 is regulated by phosphorylation.
- Phosphorylation of GSK3beta on Ser9 (Ser21 in GSK3alpha) by protein kinase B (PKB) causes its inactivation and is the primary mechanism responsible for growth factor inhibition of this kinase. Activation of GSK3beta is dependent upon the phosphorylation of Tyr216 (Tyr279 in GSK3alpha). Upon activation, it has been shown to phosphorylate a number of different cellular proteins, including p53, c-Myc, c-Jun, heat shock factor-1 (HSF-1), beta-catenin and cyclin D1. source
- GSK3 is inhibited by phosphorylation of serine-9 or serine-21 in GSK3beta and GSK3alpha, respectively. source
- GSK3's substrate specificity is unique in that phosphorylation of substrate only occurs if a phosphoserine or phosphothreonine is present four residues C-terminal to the site of GSK phosphorylation. source
- A phosphorylation cascade starts from GSK3 itself and initiates it in beta-catenin. source
- Thus our goal is to stop the phosphorylation of the serine and threonine residue of GSK3.
- The figure below illustrates the phosphorylation mechanism of serine and threonine by ATP.
- We can't stop the conversion of ATP to ADP that releases the phosphate group causing phosphorylation.
- We can only block the oxygen atom on serine and threonine, which will in turn stop phosphorylation.
- The two probable ways of blocking the oxygen atom are (a) since oxygen is a Lewis base with strong electron-donating capacity, a strong electron-pair acceptor can readily bind to the oxygen atom and prevent phosphorylation, or (b) breaking the -OH bond with the carbon atom.
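Because the Ser-X-X-X-Ser spacing referenced earlier in this list is just a positional pattern, it can be checked with a few lines of code. The Python sketch below is purely illustrative and is not taken from any of the cited patents; the example peptide is a fragment corresponding to the beta-catenin N-terminal region containing the Ser33, Ser37, Thr41 and Ser45 sites discussed above.

# Minimal sketch: find Ser/Thr positions that have another Ser/Thr four
# residues C-terminal, i.e. the S/T-X-X-X-S/T spacing GSK3 requires for
# primed substrates. The peptide is an illustrative beta-catenin
# N-terminal fragment; positions are 0-based within this fragment.

def gsk3_candidate_sites(sequence, residues="ST"):
    return [i for i in range(len(sequence) - 4)
            if sequence[i] in residues and sequence[i + 4] in residues]

peptide = "SYLDSGIHSGATTTAPSLSG"   # illustrative fragment
print(gsk3_candidate_sites(peptide))
# Prints the positions with the required spacing; the documented sites
# Ser33, Ser37 and Thr41 fall among them, each paired with the Ser/Thr
# four residues toward the C-terminus.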
Serine - pyrazole reaction
- The T-loop of GSK-3 is tyrosine phosphorylated at Y216 and Y279 in GSK-3b and GSK-3a, respectively, but not threonine phosphorylated. Y216/Y279 phosphorylation could play a role in forcing open the substrate (e.g., beta-catenin)-binding site.
- Thus, T-loop tyrosine might facilitate substrate phosphorylation but is not strictly required for kinase activity.
- Stimulation of cells with pyrazole compounds causes inactivation of GSK-3 through phosphorylation (S9 of GSK-3 beta and S21 of GSK-3 alpha), which inhibits GSK-3 activity. This leads to dephosphorylation of substrates (e.g., beta-catenin), resulting in their functional activation and consequently increased hair follicle morphogenesis.
- Phosphorylation of S9/S21 creates a primed pseudosubstrate that binds intramolecularly to the positively charged pocket of GSK-3. This folding precludes phosphorylation of substrates (e.g., beta-catenin) because the catalytic groove is occupied. The mechanism of inhibition is competitive.
- A consequence of this is that primed substrates, in high enough concentrations, out-compete the pseudosubstrate and thus become phosphorylated.
- Thus, small molecule inhibitors modeled to fit in the positively charged pocket of the GSK-3 kinase domain could potentially be very effective for selective inhibition of primed substrates.
Proposed mechanisms to regulate GSK-3 source
- inactivation of GSK-3 through serine phosphorylation
- activation of GSK-3 through tyrosine phosphorylation
- inactivation of GSK-3 through tyrosine dephosphorylation
- Covalant modifications of substrates through priming phosphorylation
- inhibition or facilitation of GSK-3-mediated substrate phosphorylation through interaction of GSK-3 with binding or scaffolding proteins
- targeting of GSK-3 to different subcellular localizations
- differential usage of isoforms or splice variants to alter subcellular localization or substrate specificity
- integration of parallel signals conveyed by a single stimulus.
- Pyrazole compounds with inhibition constant (Ki) of <0.1 mM are a good starting point for developing molecules that can inhibit serine/threonine protein kinase (such as GSK-3) and the proteins they help to regulate. source
Pathway associated with anti-androgen
- Formed by peripheral conversion of testosterone by 5-alpha reductase
- Binds to androgen receptor on susceptible hair follicles
- Hormone-receptor complex activates genes responsible for gradual transformation of large terminal follicles to miniaturized (progressive diminution of hair shaft diameter and length in response to systemic androgens) follicles
Pathway associated with Minoxidil (vasodilators)
Minoxidil is a well-known drug used for the treatment of alopecia. A correlation between Sesquiterpene lactone (Helenalin) produced from Arnica montana and Minoxidil is illustrated in the figure below. Arnica montana, a vasodilator, acts on the NO/cGMP pathway through T-cells, B-cells and epithelial cells and abrogates kappa B-driven gene expression.
Patent activity in China
|Treatment approach||Patent number||Priority year||Assignee/Inventor|
|Hair matrix activator||CN1463693||2002||朱静建|
|Anti-androgen + Vasodilator||CN1150043||1996||梅晓春|
Details of treatment approaches
|Patent number||Patent title||Treatment approach||Composition nature||Composition||Composition action|
YE MINGWEI (CN)
|Chinese herbal medicine decoction for treating blood stasis obstruction type alopecia and its prepn
|Vasodilators||Herbal extract||Astragalus root, prepared rhizome of rehmannia, white peony root, angelica, peach kernel and sufflower||Promote blood circulation|
WANG YAJIE (CN)
|Alopecia areata treating medicine
|Vasodilators||Herbal extract||Pinellia tuber, fleeceflower root, arborvitae seed, chicken's gizzard membrane, prepared rhizome of rehmannia, Poria cocos, Codonopsis pilosula, etc||Promote blood circulation|
TAN RUBIAO (CN)
|Natural Chinese herb composition for treating alopecia and leucotrichia and its application
|Vasodilators||Herbal extract||Ginger, Cinnamomum cassia, myrrh, clove, mace nutmeg, Loranthus mulberry mistletoe, rhizoma dioscoreae, ligustrum japonicum, drynaria, fleece-flower root, and black sesame seeds||Enhances the hair growth and healthier hairs|
|Hair growing preparation containing compound of Chinese medicine and Western medicine
|Hair matrix activator||Mixture of Herbal extracts and western medicine||Persimmon leaf, oriental arbor-vitae leaf, ginseng leaf, yellow qi, fruit of the glossy privet, polygonum multiflorum, Kudzu root, dry ginger; Plus:Minoxidil, Vitamins and derivative, cystine, serine, leucine.||Better and faster hair growth|
|Hair follice activating liquid
|Vasodilators||Herbal extract||Ginseng, Gynostemma (jiaogulan), licorice, Sophora flavescens and hot peppers||Activates the hair follicle and enhances hair growth.|
|Trichogen and its prepn
|Vasodilators||Herbal extract||Ginseng, ganoderma lucidum, Chinese rhubarb, polygonum multiflorum, Chinese prickly ash, ginger, grass seed||Promote blood circulation and enhance hair growth|
|Washing free shampoo for nourishing and growing hair
|Vasodilator||Vitamin composition||Vitamin P (Bioflavonoids), Vitamin B15, Vitamin B2, nicotinic acid, bromogeramine||Stimulate hair growth|
|Efficient low-side effect external use medicine for curing seborrheic baldness
|Anti-androgen + Vasodilator||Mixture of Herbal extracts and organic compounds||Polygonum multiflorum, Ligustrum lucidum, Morus alba, Rehmannia glutinosa, Eclipta prostrata, Saliva miltiorrhiza, Carthamus tinctorius, Cnidium monnieri, Sophora flavescens, Dictamnus dasycarpus, Kochia scoparia, and antioxidants||Inhibit the excess secretion of the sebaceous glands, increase the blood circulation on scalp and enhance the hair growth|
|Channel-stimulating and hair-growing hair shampoo
|Vasodilators||Herbal extract + detergent||Herbal extracts, Penetration media, Detergents.||Increases the blood circulation under the scalp, reduces hair loss|
- Hair loss medication is a very active area of research and intellectual property development.
- One of the most promising areas of development is the area of Anti-androgens.
- The top companies are Merck, L'Oreal and SmithKline.
|
<urn:uuid:fee75bb3-3d5f-4fd6-b507-026486ea281b>
|
CC-MAIN-2017-09
|
https://www.dolcera.com/wiki/index.php?title=Alopecia_-_Hair_Loss&mobileaction=toggle_view_mobile
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00443-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.805913 | 9,737 | 2.53125 | 3 |
Security Alert: When Bots Attack
By Kim S. Nash | Posted 2006-04-06
In moments, hackers with bot code can break into vulnerable computers, turn them into zombies, steal information and spread the infection. While you scramble to secure your network--and the vital data on it--botmasters sell access to your hacked machines.
The malicious code snuck through Auburn University's firewall and onto one of the school's lab PCs in an electronic message. On that September day in 2004, Auburn's network security specialist, Mark Wilson, watched from his computer what happened next.
The message contained a link, an invitation to visit a Web site that the PC's owner, possibly a student, found too enticing to resist. He or she couldn't know that clicking this link would download dirty code, letting it burrow into the PC through an unpatched bug in the Microsoft Windows operating system. It wasn't a straight Trojan or a worm, but a combination of programming malice with far greater potential for harm. It would allow hackers to seize control of the machine and turn it into a "bot," a remote-controlled robot that they could order to send spam or steal data and, most important, turn other vulnerable computers on the university's network into bots just like it.
But the trick worked. Click. Immediately, the Alabama university, like so many other colleges, companies and government agencies, fell victim to what security experts call one of the biggest cybersecurity threats out there: bot attacks. Auburn's network was thrown open to hackers all over the world.
With the back-to-school assault on Auburn, whoever launched the attack was probably after more computers to enlarge his or her botnet, Wilson says. The code exploited a bug in Windows' LSASS, or Local Security Authority Subsystem Service, which is how Microsoft verifies users who log on to the Windows 2000 or Windows XP operating systems. Though Microsoft had released a patch five months earlier, not all of the computers at the 23,000-student school were updated, Wilson says.
Auburn, of course, isn't the only organization to be hit by bots.
On any given day, 3 million to 3.5 million bots are active around the world, says Alan Paller, director of The SANS Institute, a security researcher in Bethesda, Md.—enough to disable all U.S. online retailers three times over. And each day those bots infect 250,000 Internet Protocol addresses, representing hundreds of thousands of Internet-connected devices, according to CipherTrust, a security consultant.
While more than 50% of bot attacks go after home PCs, CipherTrust has also found bots in 40% of large and midsize companies. Caterpillar, CNN, eBay and Microsoft are among the companies that have suffered bot attacks.
And the threat is growing. The number of new successful bot strains—variants in bot code—was up 538% last year alone, says another computer security company, Cybertrust, which tracks the activities of 11,000 hackers and works with the Federal Bureau of Investigation on cybercrime cases.
Consider the government's recent, high-profile case against Jeanson James Ancheta, a 21-year-old hacker from Downey, Calif. Ancheta, who holds a high school equivalency diploma, pleaded guilty in January in U.S. District Court in Los Angeles to building and selling bots, using the profits to build his business, and using his network of thousands of bots to commit crimes.
According to the plea agreement, he had made more than $60,000 and infected at least 400,000 computers, including machines at two U.S. Department of Defense facilities. He also provided bots for intrusion by others. One potential target was electronics giant Sanyo, which declined to comment. So did a spokesman for the DOD's Joint Task Force-Global Network Operations, who refused to discuss details of Ancheta's attack "for security reasons."
"It's a constant battle," says Michael Lines, chief security officer at credit reporting firm TransUnion, the consumer credit report company in Chicago. Lines says TransUnion, with its one terabyte of sensitive financial data, is a frequent bot target, though he will not provide details. "There is no single technology or strategy to [solve] the problem," he says.
Bots may disappear as people clean up their PCs and patch their software so malicious code can't get in, but they are quickly replaced by new bots adapted to exploit different problems, including "zero-day exploits," software bugs for which patches don't yet exist.
One reason bots are such a troubling security concern is that hackers don't have to build their own code to create the intruders—they can download bot toolkits for free on the Internet.
They can even buy access to bots. Ancheta linked a price list to his "botz4sale" online channel, according to the plea. He offered up to 10,000 compromised PCs at a time on the underground hack market for as little as 4 cents each.
Some bots cost more. A PC on a government network, for example, may sell for as much as $40, according to CipherTrust, because it offers access to loads of potentially interesting information. Bots that attack brand-new exploits are also considered more valuable.
Once a bot is created behind a corporate firewall, the person who controls it can mess with company applications by, for example, installing a keystroke logger on the PC to capture passwords as they are typed.
Or by exploiting the right application or operating-system bug, a botmaster can copy, manipulate or delete customer information, personnel records or almost any data on the infected machine.
In Israel, Ruth and Michael Haephrati, age 28 and 44, pleaded guilty in March to several conspiracy and computer crimes involving bots, according to published reports in ComputerWeekly and in Globes, an Israeli news service. They built spying software that they sold to Israeli competitive intelligence companies, which snuck it onto vulnerable computers at their clients' competitors, illegally gathering corporate information, according to the reports.
Ancheta, on the other hand, used his botnet for other moneymaking ventures. He sold or rented bots to people looking for computer power to send spam, or launch denial-of-service attacks to disable specific Web sites, according to his plea. He made a few hundred dollars from each deal.
Like a legitimate technology vendor, Ancheta provided consulting help with his product. Tips included how to perform bits of mischief such as a "synflood," which takes out a Web server by flooding it with bogus requests to connect, according to his plea.
More lucrative for Ancheta was defrauding online advertising companies. Adware companies will pay "partners" for each digital advertisement they install on a PC. The adware monitors the user's activity, such as what terms he searches for at Google or Yahoo, and then displays related pop-up ads. Sometimes ads will just play across the screen, unrelated to anything the user is doing.
Above-board adware partners can, for instance, bundle adware with other software they sell, such as games or screen savers. But when botmasters play this game, they instruct their bots to install ads on machines they've taken over, collecting as much as 40 cents for each successful placement. They sometimes clog a PC so much it can't function.
Check stubs, bank records and files from online payment service PayPal seized by prosecutors show that Ancheta and an unindicted co-conspirator, someone identified in court papers as a juvenile nicknamed "SoBe," took in $58,357.86 this way in less than 12 months.
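As a rough illustration of the scale behind that figure (assuming, purely for the sake of the arithmetic, that every placement paid the top quoted rate of 40 cents; the actual mix of rates is not public), the earnings imply on the order of 146,000 adware installs:

# Back-of-envelope estimate: how many adware placements would be needed
# to earn the $58,357.86 cited in the plea agreement, if every placement
# paid the top quoted rate of $0.40. Real per-install rates varied, so
# this is only a lower-bound illustration.

revenue = 58_357.86
top_rate_per_install = 0.40   # "as much as 40 cents for each successful placement"

min_installs = revenue / top_rate_per_install
print(f"At $0.40 per install: about {min_installs:,.0f} placements")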
In an AOL Instant Messenger conversation between Ancheta and SoBe that was archived in files seized in the case, Ancheta said of the money he made from adware, "It's easy, like slicing cheese." But the cash flow depended on keeping his botnet strong and growing.
Exactly how Ancheta got his bots into computer systems is not known. Some court records so far are sealed, the companies and government agencies named in the case won't talk, and neither will Ancheta's lawyer, who did not respond to Baseline's request to talk with Ancheta.
|
<urn:uuid:1e2b02e5-1216-4fee-9cd2-2a5b6b8c3667>
|
CC-MAIN-2017-09
|
http://www.baselinemag.com/c/a/Projects-Management/Security-Alert-When-Bots-Attack
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00143-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.960164 | 1,712 | 2.578125 | 3 |