Do You Hear What I Hear? Part IX: Queuing Solutions for QoS
Data packets must line up for processing at various points as they move from source to destination. Schemes for assigning priority to different classes of traffic help manage these data flows.
This installment concludes our series on VoIP Quality of Service (QoS). Our previous installments have looked at some of the key factors surrounding the quality of the voice connection:
Part I: Defining QoS
Part II: Key Transmission Impairments
Part III: Dealing with Latency
Part IV: Measuring "Toll Quality"
Part V: Integrated Services
Part VI: Resource Reservation Protocol
Part VII: Differentiated Services
Part VIII: Multiprotocol Label Switching
In several recent tutorials, we examined the Integrated Services (intserv) and Differentiated Services (diffserv) projects of the Internet Engineering Task Force (IETF), which are designed to provide Quality of Service (QoS) capabilities for VoIP and multimedia networks. We also looked at Multiprotocol Label Switching (MPLS), which started as several vendor-proprietary solutions (called tag switching in those early days) that were later melded into an IETF-supported standards effort. To conclude our look at QoS, we will briefly define some of the queuing solutions that are currently being pitched in the router marketplace alongside those standards.
To begin, a queue is a waiting line, something that you have undoubtedly experienced at a toll booth on the highway, at the grocery store, or at a movie theater. Packets on their way from a source to a destination experience queues as well, as they wait their turn to be processed at a router or to enter a stream of data that must make a speed change, such as when a high-speed LAN (such as Fast Ethernet operating at 100 Mbps) interfaces with a lower-speed WAN (such as a T-1 line operating at 1.544 Mbps). The simplest form of queuing is called First In First Out, or FIFO. As the name implies, the arrival order of the element (be that a car, human or packet) determines what gets the first available service. FIFO queuing has no mechanism to prioritize one traffic flow above another, nor does it have a way to allocate at least some of the available resources to each party; whoever gets there first grabs the first available resource or bandwidth.
Several improvements on FIFO queuing have been developed that are often used to improve QoS within router-based internetworks:
- Weighted Fair Queuing (WFQ): defines a resource allocation scheme based upon some type of differentiation in data flows. Each of the flows or traffic classes is assigned a mathematical weight, and then an algorithm allocates the available resources according to that assigned weight. Thus, low volume traffic flows can be given preference over higher volume traffic flows, which would otherwise consume an inordinate share of the network resource.
- Random Early Detection (RED): sometimes called a congestion avoidance scheme, because it takes a proactive approach to managing traffic queues before they fill up. RED makes use of the flow control mechanism built into the Transmission Control Protocol (TCP), which causes a sender to decrease its transmission rate when it learns that packets in that data stream have been dropped because of network congestion. As the queue length increases, additional packets are dropped, sending a further signal to senders to decrease the rate at which packets enter the network. As packets are dropped, the queue size decreases, which in turn decreases the queuing delay and stabilizes network operation before a more serious problem occurs.
- Weighted Random Early Detection (WRED): an enhancement to RED that drops packets based upon a priority scheme. Different traffic flows are identified and assigned values within the Precedence field of the Internet Protocol (IP) header. When network congestion occurs, packets with higher priorities are less likely to be dropped than those with lower priorities. This allows the network to maintain stable queue lengths while giving priority to traffic flows that require a higher QoS; a simple sketch of this drop logic appears after this list.
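To make the queue-management behavior above concrete, here is a minimal sketch of a WRED-style drop decision. The thresholds, precedence values and drop probabilities are invented for illustration; real router implementations compute a weighted moving average of the queue depth and use vendor-specific profiles.

```python
import random

# Illustrative per-precedence profiles: higher IP Precedence values get higher
# thresholds, so their packets are dropped later (and less often) under congestion.
WRED_PROFILES = {
    0: {"min_th": 20, "max_th": 40, "max_p": 0.10},  # routine / best effort
    3: {"min_th": 30, "max_th": 40, "max_p": 0.05},  # business data
    5: {"min_th": 36, "max_th": 40, "max_p": 0.02},  # voice
}

def should_drop(avg_queue_len, precedence):
    """Return True if an arriving packet should be dropped, WRED-style."""
    prof = WRED_PROFILES.get(precedence, WRED_PROFILES[0])
    if avg_queue_len < prof["min_th"]:
        return False                    # little congestion: never drop
    if avg_queue_len >= prof["max_th"]:
        return True                     # queue effectively full: always drop
    # Between the thresholds, drop with a probability that rises linearly.
    span = prof["max_th"] - prof["min_th"]
    p = prof["max_p"] * (avg_queue_len - prof["min_th"]) / span
    return random.random() < p

# At an average depth of 35 packets, best-effort traffic (precedence 0) is much
# more likely to be dropped than voice traffic marked with precedence 5.
print(should_drop(35, 0), should_drop(35, 5))
```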
WFQ, RED and WRED are typically implemented within router-based multimedia networks, and therefore a perusal of your favorite router vendor's website should yield some additional technical and implementation details for further research.
Copyright Acknowledgement: © 2005 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons.
As a technology company, Gradwell is fascinated by the digital world and how telecommunications have progressed since the development and adoption of computers. Experts argue that the very first digital communication goes as far back as the telegraph system at least a century before the internet we all know and love. It’s also fairly obvious that computer science has rapidly developed internet technology since the first packet-switching papers in the 1960s, but we could be here all day talking about that period in history. The core of what drives it all goes as far back as 1679 when historians suggest Gottfried Leibniz identified the binary number system, which is the basis for binary code. And we all know the base of computer data is binary (well, there are some translators and other bits involved).
So, it is all maths, measurements and calculations. We’re not telling you anything you didn’t learn in school; however, the thought did prompt us to think about maths in general. Maths wasn’t always the most interesting lesson at school for some of us. Plenty of daydreams were enjoyed whilst the teacher scribbled various equations on the board, stating that, ‘One day these will come in handy’. Did we believe that phrase? Most of us never gave it a second thought. However, whether you were a fan at school or not, maths is all around us, in every decision we make, in every action that we do.
When you get up in the morning the first thing you do is look at the time on your phone, see numbers that help you then calculate how long you have to shower, get dressed and swallow some breakfast before you leave the house in time to get to work. It’s not only time we think about but measuring temperature too – we open the curtains to check the weather (sun might equate to a short-sleeved shirt in the office, whereas ice could add another five minutes to your journey) and hopping into a cool shower rather than waiting the extra minute for it to heat up gives you that tiny little lie-in in the morning.
On your journey to work you might be stuck in slow traffic and watch the clock as the minutes tick by whilst you get later and later. You’re also mentally calculating the driving distance to work, navigating spatial decisions between cars and of course, if you’d calculated your journey just a bit differently you might already be sitting at your desk, hands clasped around that perfectly measured and steaming cup of coffee. It’s all down to the numbers with a bit of learned behaviour thrown in the mix.
Fast forward to mid-morning and you’re ticking tasks off your daily to-do list. Prioritising your work is often based upon the importance of the task, but mentally it’s a 1, 2, 3 and 4 process to get there. ‘This one will only take me five minutes so I’ll finish that off before I nip to the kitchen to get another glass of water.’ Or ‘well, I’ve got a meeting in an hour so if I really focus between now and then I can get two reports done and dusted if I keep an eye on the time.’ Measuring time is so basic that we don’t even know we’re doing it.
That mid-morning meeting has arrived and no doubt there is at least one mention of an update on performance metrics and you’ll have to show more numbers to identify how things are going in your team.
Lunch has finally arrived and what do you do? Weigh up situations and options based on numbers – a five minute, 1 mile distance walk to the local convenience store might save you counting extra money and give you longer to relax on your break but it will result in a soggy sandwich and stale crisps. On the flip side, approach your sixty minute allotted break a little differently and you might find yourself enjoying a longer distance stroll to a cosy nearby pub, then measuring how many people are there to decide if you can tuck into a tasty pie and chips with your colleagues. Then again, the second option will set you back probably triple what the soggy sandwich costs. Again – it’s all about equations!
Your working day has come to an end so you’re out of the door and racing to your car to try and set off as quickly as possible to beat the traffic. Once you’re home there are more equations at home: measurements and recipes to follow for dinner, trying to squeeze in one more episode of your favourite TV show before bed and, last of all, calculating how much sleep you can get as you reset the alarm on your phone. Then it all starts again the next morning.
How is all of this relevant to what we do here at Gradwell? Without getting more philosophical about what it all means or discussing the mysteries of life, our daily routines consist of a series of equations – that’s the way we live our lives and it’s also the way we do our work. We rely on technology like phones, computers and the internet now but how did all of those things start? It’s simple: numbers, equations and calculations on a piece of paper (or old tablets…nothing like today’s digital notebooks) that grew, changed and evolved over the years to bring us the technological breakthroughs that now enhance our lives.
For more information about how Gradwell can help make your working day a little bit more stress free and efficient using modern technology for connectivity, visit our website at http://www.gradwell.com.
New Zealand's parliament is preparing to vote on a major patent reform bill that will tighten the country's standards of patentability. One of the most significant changes in the proposed bill is a specific patentability exclusion for software. If the bill receives parliamentary approval in its current form, it will broadly eliminate conventional software patents in New Zealand.
The bill was drafted by the Select Commerce Committee, which decided to include the exclusion after reviewing feedback from the software industry. The bill's official summary acknowledges that software patents are detrimental to the open source software development model and have the potential to seriously stifle innovation.
"Protecting software by patenting is inconsistent with the open source model, and its proponents oppose it. A number of submitters argued that there is no 'inventive step' in software development, as 'new' software invariably builds on existing software," the bill summary says. "They felt that computer software should be excluded from patent protection as software patents can stifle innovation and competition, and can be granted for trivial or existing techniques. In general we accept this position."
The Commerce Committee says that the ban on software patents will not block companies from patenting hardware inventions that encompass embedded software. It will be up to the Intellectual Property Office of New Zealand to craft the specific rules for determining what kind of embedded software is patentable.
In a recent speech in London, IBM Chairman, President and CEO Sam Palmisano laid out IBM's vision of the next decade as the decade of smarter systems. Among the many areas IBM is focusing on to enable a smarter planet is smart water management.
In his Jan. 12 speech at the Chatham House, Palmisano explained the idea: "By a smarter planet, we mean that intelligence is being infused into the systems and processes that enable services to be delivered; physical goods to be developed, manufactured, bought and sold; everything from people and money to oil, water and electrons to move; and billions of people to work and live."
Yet, despite water being viewed as cheap and abundant, one in five people on the planet do not have adequate access to safe, clean drinking water under existing water management systems, IBM said.
However, the total amount of water on this planet has not changed; it is the nature of that water that is constantly changing. Everything from where rain falls to the chemical makeup of the oceans is in flux, IBM said. Thus, IBM's efforts are aimed at preserving and protecting clean water for drinking, bathing, electric power, industrial manufacturing, food and the irrigation of crops.
During the winter season and holidays in particular, the combination of cold weather and more home cooking makes this time of year a high-risk season for sewage overflows and leaky pipes. Many water and sewage infrastructures date back to the 1800s and early 1900s, and are overwhelmed by the fats, oils and grease poured into kitchen sinks or other drains, which can cause blockages in city sewer lines, resulting in overflows that pollute the environment. Smarter systems of the type Palmisano describes can help to prevent such overflows.
According to Lux Research, better information about water usage will save utilities money, make water management more efficient and provide one of the simplest solutions to the problem of water scarcity. In fact, Lux estimates that the market for water IT will reach $16.3 billion by 2020.
Analytics, Asset Management
Indeed, to be truly efficient, water utilities and treatment plants need real-time management and analytics systems to track the condition of each critical component, or "asset," including water pumps, valves, collection pipes and electrical equipment, so that potential problems such as a burst water main or a sewage overflow can be quickly identified and resolved. IBM analytical software gives maintenance and operations staff a view of all assets across the utility to help prevent potential water emergencies. And IBM systems tap geospatial data to show exactly where that asset is on a map while describing its condition, cost and maintenance history.
Software lies at the heart of these systems. IBM's acquisition of MRO Software in 2006 enhanced Big Blue's decades-long work in the rail, water and other vertical industries by adding asset management capabilities. IBM attained MRO's Maximo asset management software in that acquisition, and Maximo is a key component of IBM's Smarter Planet initiatives because it helps organizations track each and every asset across their enterprises, spanning both physical and IT assets, IBM said.
Google Takes Unconventional Route with Homegrown Machine Learning Chips
May 19, 2016 Stacey Higginbotham
At the tail end of Google’s keynote speech at its developer conference Wednesday, Sundar Pichai, Google’s CEO, mentioned that Google had built its own chip for machine learning jobs that it calls a Tensor Processing Unit, or TPU.
The boast was that the TPU offered “an order of magnitude” improvement in the performance per watt for machine learning. Any company building a custom chip for a dedicated workload is worth noting, because building a new processor is a multimillion-dollar effort when you consider hiring a design team, the cost of getting a chip to production and building the hardware and software infrastructure for it.
However, Google’s achievement with the TPU may not be as earth shattering or innovative as it might seem given the coverage in the press. To understand what Google has done, it’s important to understand a bit about how machine learning works and the demands it makes on a processor.
Machine learning actually involves two different computing jobs: the learning itself, and the execution of that learning, which is called inference. Generally, for training, companies have turned to GPUs because of the parallelization they offer. For execution, companies are using a range of different architectures, but the big challenge is handling the limits of getting data from memory to the processor. An ideal processor for machine learning would offer great parallelization and increased memory bandwidth. Outside of supercomputing, this is something the chip world hasn’t focused on, because the demand for such workloads hasn’t been there. But with machine learning that is changing.
So for the people eyeing innovations in machine learning chips, the question is whether Google has designed something new that can optimize for highly parallel workloads and also execute quickly on those many small processing jobs without hitting a data bottleneck. Google isn’t saying, but what it has shown off seems more like a refinement of existing architectures than something wholly new.
Norman P. Jouppi, a Distinguished Hardware Engineer at Google, declined to say if it was using TPUs for learning or for execution, but based on the use cases it cited, it is clearly using them to execute its machine learning algorithms. Jouppi says it is using the TPUs for Street View and Inbox Smart Reply, a feature that analyzes your email and offers three choices of response generated by Google’s AI. It was also used in the AlphaGo demonstration.
Most companies pursuing machine learning today have turned to massive parallelization to deliver the performance they need. For example, Facebook is using Nvidia GPUs in the specially designed servers it built just for implementing machine learning. IBM is testing a brain-inspired computing concept for eventual use, but in the meantime it is using an implementation of its Power architecture, along with CPUs and GPUs from Nvidia, to run its cognitive computing efforts.
Nervana Systems, a company building a cloud-based AI service has adapted the firmware on Nvidia GPUs to deliver faster performance (its power consumption is unknown).
With its TPU Google has seemingly focused on delivering the data really quickly by cutting down on precision. Specifically, it doesn’t rely on floating point precision like a GPU does. Jouppi says that the focus on less precision meant it wasn’t using floating point math.
Instead the chip uses integer math, which Google’s VP for Technical Infrastructure Urs Hölzle confirmed for reporters in a press conference. At the time, Hölzle noted the TPU used 8-bit integers. Essentially this means that instead of wasting processing cycles worrying about calculating things out to the umpteenth decimal point, the TPU can let a few slide, which means larger models can be used because of the lower resolution of the data.
This lack of precision is a common tactic for building out neural networks, where accepting probabilities in gigantic data sets tends to generate the right answer enough of the time. But it’s also not incredibly complex from a design perspective.
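As a rough illustration of why dropping floating-point precision helps, the sketch below quantizes weights and inputs to signed 8-bit integers with a per-tensor scale factor and performs the dot product in integer arithmetic. This is a generic example of 8-bit quantization, not a description of Google's undisclosed TPU arithmetic, and the numbers are invented.

```python
import numpy as np

def quantize_int8(x):
    """Map float values onto signed 8-bit integers using a per-tensor scale."""
    peak = np.max(np.abs(x))
    scale = peak / 127.0 if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Invented example data standing in for one neuron's weights and inputs.
weights = np.array([0.42, -1.30, 0.07, 0.95], dtype=np.float32)
inputs = np.array([0.10, 0.80, -0.55, 0.33], dtype=np.float32)

qw, w_scale = quantize_int8(weights)
qx, x_scale = quantize_int8(inputs)

# Accumulate in 32-bit integers, then rescale once at the end.
int_accum = np.dot(qw.astype(np.int32), qx.astype(np.int32))
approx = int_accum * w_scale * x_scale

print("float32 dot product:", float(np.dot(weights, inputs)))
print("int8 approximation: ", float(approx))
```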
“Integer math isn’t something new,” says Kevin Krewell, an analyst with Tirias Research. He is also skeptical about the power savings claims when compared with today’s graphics chips. Jouppi said the TPUs have been in use for at least a year at Google, which means that these processors are best compared not to today’s machine learning chips, but to those built a year ago.
Google didn’t disclose what manufacturing node the TPU is built at, but it’s most likely a 28-nanometer node, which was the standard for a new GPU last year. Now the new Pascal chips from Nvidia are manufactured using a FinFET process at 16 nanometers, which wasn’t available a year ago.
Still, for a company like Google, the value of saving money for a year running its massive machine learning operations may have outweighed the cost of designing its own chips. Jouppi says that these are not processors that Google expects to be obsolete in a year. He also added that the focus wasn’t on the number of transistors, which suggests that moving down the process node to cram more transistors on a chip isn’t as important with this design.
As for the design, Jouppi explained that the decision to do an ASIC as opposed to a customizable FPGA was dictated by the economics.
“We thought about doing an FPGA, but because they are programmable and not that power efficient–remember we are getting an order of magnitude more performance per watt — we decided it was not that big a step up to customization.”
Krewell points out that designing a chip from scratch, even a simple one, can cost $100 million or more. So for Google the question is whether the time to market advantage on more efficient machine learning inference justifies and will continue to justify that cost. Without knowing what node Google is manufacturing at, the size of its operations (when asked what percent of machine learning workloads were running on TPUs, Jouppi said, “I don’t know.”) or the details of the chip itself, it’s hard to say.
Our bet is that this is exactly how Google wants it. Remember this? The company has gained a considerable advantage by investing in its infrastructure, from building its own gear to building actual fiber connections. But with machine learning being the new bedrock for product innovation and service delivery, Google now has to adapt its infrastructure strategy to the new era.
Unfortunately, its competitors have learned from Google’s previous investments in infrastructure, so they are hot on its heels, seeking the same efficiencies. And since Google rarely shares anything it doesn’t have to about its infrastructure until it has already squeezed the economic and technical advantage out of it, the TPU announcement feels a lot like marketing.
Jouppi says the company has no plans to open source its TPU design or license it, and he didn’t say when the company might release more details, although it sounded like Google would eventually release them. Maybe it is waiting for the completion of a newer, better design.
Stacey Higginbotham has spent the last fifteen years covering technology and finance for a diverse range of publications, including Fortune, Gigaom, BusinessWeek, The Deal, and The Bond Buyer. She is currently the host of The Internet of Things Podcast every week and writes the Stacey Knows Things newsletter, all about the internet of things.
In addition to covering momentum in the Internet of Things space, Stacey also focuses on semiconductors, and artificial intelligence.
Q&A: An Introduction to the Scala Programming Language
We explore what the Scala programming language can do for your organization with the language’s inventor.
What is the Scala programming language, how does it work with Java, and what is its role in high-performance computing? We learn from the language’s inventor, Martin Odersky, who is also the chairman and chief architect of Typesafe, which packages Scala, Akka middleware, and developer tools into an open source stack.
Enterprise Strategies: What is Scala?
Martin Odersky: I created Scala as a statically typed programming language to run atop the Java Virtual Machine; Scala smoothly integrates features of object-oriented and functional languages, enabling Java and other programmers to be more productive. Scala is designed to express common programming patterns in a concise, elegant, and type-safe way.
How is Scala different than Java?
Scala is interoperable with Java and runs atop the Java VM, so existing Java code and programmer skills are fully re-usable. However, there are important differences between the two languages: fewer keystrokes to make, type inferencing, function passing, and many other features differentiate Scala from Java.
Scala is statically typed, meaning type checking is performed at compile time as opposed to run time, like Java but with type inference support. This means that the code is deeply analyzed by the Scala compiler to assess what type a particular value is. Scala also does away with "static" altogether; it is replaced by the use of singleton objects. Singleton objects are declared by using the keyword "object" rather than "class".
Also, when writing Scala code, the syntax for method declaration is different. Scala uses "=" before the method body, preceded by the identifier of the method along with the parameter list and return type. Functions are treated like variables and constants.
Another important difference is that Scala supports the use of closures (anonymous functions) which makes longer code simpler and shorter, so many programmers often find writing Scala code easier and more efficient.
How does it help programmers of high-performance computing systems?
Scala is ideal for programmers of high-performance systems because it offers useful solutions for two important challenges: concurrency and parallelism. For example, because Scala encourages developers to avoid shared mutable state, it's much easier to build programs that are reliable while using available computing resources efficiently.
Are there any drawbacks to using Scala versus other programming languages?
One of the great aspects of Scala is that it carries along all of the advantages of the Java language itself while practically blending in important concepts from functional programming. Scala is used most often for application development (such as Web applications, business middleware, and mobile applications) and, unlike Java itself, is also an excellent fit for lightweight scripting scenarios. Scala is not as good a fit for systems-level programming, where C or contemporary systems languages such as Go might be more suitable.
What has led to the rise of high performance computing systems?
Traditional businesses are becoming ever more sophisticated in terms of the data they collect and the analysis they perform, while a new breed of Internet-scale applications such as LinkedIn and Twitter has grown in popularity. At the same time, trends such as the arrival of massively multicore hardware and cloud computing put new resources at the disposal of technologists. Increasingly, every IT organization and startup faces challenges of a scale that previously would have been considered high-performance or scientific computing.
What kinds of industries would benefit from using Scala for their high-performing computing systems?
We see strong commercial demand and adoption in a few sectors. First, global financial services firms are rapidly adopting Scala and complementary frameworks such as Akka event-driven middleware to build the next generation of technology solutions that can keep up with today's massive scale markets in real time. Separately, we see strong demand from consumer-facing Internet firms that need to design their systems to scale rapidly as they grow -- whether they are new applications such as Twitter or traditional businesses gone online such as the Guardian UK.
What's the best way to train a programmer in Scala? Are there prerequisites -- such as a knowledge of Java -- that would help reduce the amount of training needed?
Our company, Typesafe, has created a standard hands-on training curriculum that gets developers up and running quickly with Scala and Akka. We are working with a growing number of partners to deliver these courses in cities around the world, and increasingly through on-site training at customer's facilities. At the same time, there are a growing number of books (http://typesafe.com/resources/books) about Scala (including my own!) and self-service resources such as Twitter's Scala School (http://twitter.github.com/scala_school/).
While many programmers come to Scala with a Java background, we also see many who have been programming with C#, Ruby, Python, or other functionally inspired languages.
What does Typesafe do?
Jonas Bonér, creator of Akka middleware, and I launched Typesafe in May 2011 to create a modern software platform for the era of multicore hardware and cloud-computing workloads. Typesafe provides an easy-to-use packaging of Scala, Akka, and developer tools via the open source Typesafe Stack, as well as commercial support and maintenance via the Typesafe Subscription. Typesafe also provides training and consulting services to accelerate the commercial adoption of Scala and Akka.
Optical fiber transmission systems cannot work without optical transceivers and fiber. Optical transmitters fall into two major categories: light-emitting diodes (LEDs) and laser devices. Although lasers far outperform LEDs, the vast majority of LAN users have found laser transmitters difficult to afford because of their manufacturing cost. A newer type of emitter, the vertical cavity surface emitting laser (VCSEL), solves this problem. VCSELs combine the performance advantages of laser devices, such as high-speed response and a narrow transmission spectrum, with the advantages of LEDs, such as efficient coupling and low cost. Low-cost, high-performance VCSELs paired with multimode fiber optic cable can therefore transmit signals at up to 10 Gb/s.
However, another problem confronts the user: transmission distance. Beyond the transfer rate, fiber optic cabling must also meet distance requirements. Experimental results show that traditional multimode fiber, whether 50 μm or 62.5 μm, can support 10 Gb/s transmission, but only over distances of less than 100 meters, which is not enough for a network backbone.
Multimode fiber transmission bottlenecks – DMD
Why can multimode fiber support 100 Mbps over 2,000 meters but only 550 meters at 1 Gbps? The main reason is the phenomenon of DMD in multimode fiber. Testing shows that when multimode fiber carries an optical pulse, the pulse spreads (broadens) as it travels. When this spreading becomes severe enough that successive pulses overlap, the receiving end can no longer distinguish each optical pulse accurately; this phenomenon is called DMD (Differential Mode Delay). The main cause is that an optical pulse in multimode fiber contains many modal components, and each mode travels a different path through the fiber: for example, a component transmitted in a straight line along the center of the fiber takes a different path from a component that reflects off the fiber cladding. From an electromagnetic point of view, the three-dimensional space within a multimode fiber's core carries a great many modal components (300 to 1,100), and their composition is very complex.
The ISO standard defines a new classification of multimode fiber: at present, OM1 refers to traditional 62.5 μm multimode fiber, OM2 to traditional 50 μm multimode fiber, and OM3 to the new 10 Gigabit fiber. Note that fiber bandwidth is specified in two ways: overfilled launch (OFL) bandwidth is the matching indicator for LED light sources, while laser bandwidth is the matching indicator for the new laser light sources. OM3 fiber is optimized for both at the same time. Another point to note is the choice of transmission wavelength, 850 nm or 1300 nm. The longer wavelength performs better, but the cost of the light-emitting device roughly doubles; therefore, where possible, short-wavelength applications should be chosen to reduce cost. The new VCSEL devices, for example, are designed for short-wavelength environments, whereas standard laser devices are mainly used at the longer wavelength.
DMD testing of OM3 fiber
The DMD test proceeds as follows: a 5 μm single-mode probe is connected to the OM3 fiber under test and launches an optical pulse into it. The probe is then scanned from the fiber axis toward the edge of the core in steps of about 1 μm. At the receiving end, the pulse arriving for each launch position is recorded, and the results are superimposed on a time-domain plot to form the DMD indicator. Arriving pulses show a time difference caused by their different paths, and each pulse also spreads on its own; the two effects add together. The combined result is compared against the standard to judge whether the OM3 fiber meets it.
OM3 fiber performance advantages
Users face pressure to upgrade network applications from 1 Gb/s today to 10 Gb/s in the future, and each user needs to consider carefully how to make that transition smoothly.
In the current 1 Gb/s era, traditional multimode fiber supports distances of no more than 550 meters, while using single-mode fiber also means using expensive laser light-emitting devices. The two cabling systems cost about the same, but at the network equipment level the single-mode option at least doubles the price. In many cases, users whose links run between 500 and 1,000 meters have had no choice but to use laser devices. The new OM3 multimode fiber extends Gigabit Ethernet support to 1,000 meters without the need for expensive laser devices, so at this stage it can bring users significant advantages.
In this post from FiberStore, we explain what Low Smoke Zero Halogen cable is. Low Smoke Zero Halogen (LSZH) cable is an environmentally friendly cable whose plastic materials are free of halogens (F, Cl, Br, I, At) and of substances such as lead, cadmium, chromium and mercury, and which does not emit toxic smoke (such as hydrogen halides, carbon monoxide or carbon dioxide) when it burns. LSZH cable jacketing is composed of thermoplastic or thermoset compounds that emit limited smoke and no halogen when exposed to high sources of heat. Less toxic and slower to ignite, they are a good choice for many internal installations. This cable may be run through risers directly to a convenient network or splicing closet for interconnection.
Why is halogen dangerous?
When products containing halogens are burned, they produce very dangerous gasses. Public awareness of these dangers began after several tragic fires claimed the lives of victims who inhaled deadly halogenated fumes. Many organizations, local authorities and governments have undertaken broad initiatives to eliminate the production of halogenated material. In Asia, the United Kingdom and many European communities, the use of wire and cable containing halogens is highly regulated, and in some areas completely prohibited.
Halogenated compounds are normally very stable. When they burn, however, the halogens separate and become highly reactive, forming very toxic, extremely dangerous and corrosive gasses that can significantly damage organic, inorganic and metallic materials. The hydrogen chloride gas produced from burning PVC, for example, is similar to mustard gas.
Fires involving the combustion of halogenated materials can be devastating. Inhalation of dangerous fumes can cause serious harm or even death to humans. Acid rain and fumes can quickly destroy expensive industrial and computer equipment.
Why use halogen free cable?
Low smoke and halogen free cabling is becoming increasingly necessary to protect against the risk of toxic gas emissions during a fire. Standard RG cables contain halogen insulation. Halogen insulation was first used because it helps protect cables from fire, but if it does ignite, the resulting fumes are highly toxic and a major risk, both to human life and to circuitry in place: critical, for example, in an aircraft.
Halogen free cables are engineered and designed so that emissions during a fire offer low toxicity and low smoke. This type of cabling is increasingly relevant in public sector housing and major new developments. It could be increasingly worthwhile when it comes to elderly housing too, where items such as disabled stair lifts are in use and the risk of additional complications as a result of fire is significant.
Halogen free or zero halogen cabling is used in many areas of the cable and wiring industry, including aircraft, rail and construction. Used to protect wiring, it is proven to limit the amount of toxic gas emitted when it comes into contact with heat.
For more fiber optic cable information, please visit FiberStore.com or contact us.
Throwing the first pitch at a big-league baseball game is something most of us can only dream about. But thanks to technology, one 13-year-old boy named Nick LeGrande will throw out the first pitch for a Major League Baseball game from 1800 miles away. It will be the first time that someone has remotely thrown a pitch via a telepresence robot.
Nick will throw the first pitch of Wednesday night's game at the O.co Coliseum between the Oakland Athletics and the New York Yankees from his home in Kansas City, Missouri, thanks to a telerobotic pitching machine connected through Google Fiber.
Recently, Nick's big-league dreams were put on hold after he was diagnosed with a rare and life-threatening blood disorder known as Aplastic Anemia. Oakland A's reliever Ryan Cook, who will catch the pitch, worked with Google to give Nick this awesome opportunity after learning that Nick's plight stopped him from playing baseball games.
Financial and operational modeling can be used to visualize a variety of business scenarios, based on different business drivers. This allows companies to gain deeper insights into the ways each decision will affect their businesses.
Performing these “what-if” scenarios is imperative to better understanding the impact of each financial decision or investment. In turn, companies make smarter business decisions.
What Is What-If Analysis?
Rather than creating a budget or a forecast based on past results, forward-thinking companies will strategically analyze what “could” happen in the future and then roll the desired results into the corporate plan.
Let’s take an example. One company asks itself, “What would be the impact on our business if we add 10 new sales representatives next quarter?” Asking that simple question translates to a different outcome for different managers.
For instance, what is the impact of this hiring scenario on the controller’s expense budget or forecast? For the head of Sales, how would this decision impact revenue and commissions for the year? For the VP of Marketing, what additional programs would the department need to create to support these new sales representatives?
To effectively answer that “what-if” question, these different users have different drivers they will need to model. The controller will be thinking about payroll drivers—such as FICA, FUTA, and SUTA—while the VP of Marketing will be wondering how many additional marketing programs, and related costs, are needed to increase the pipeline. Fundamentally, these managers have different drivers they will want to model to determine the impact on their budgets or forecasts.
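To make the driver idea concrete, here is a small sketch of how the controller's side of that scenario might be modeled in code. The salary, sales, commission and employer-tax figures are invented placeholders rather than real drivers, and an actual model would pull them from the planning system instead of hard-coding them.

```python
# Hypothetical drivers for the "10 new sales reps" what-if scenario.
NEW_REPS = 10
QUARTERLY_BASE_SALARY = 18_000      # per rep, placeholder value
EXPECTED_QUARTERLY_SALES = 150_000  # per rep, placeholder value
COMMISSION_RATE = 0.05              # placeholder commission driver
EMPLOYER_TAX_RATE = 0.0765 + 0.006 + 0.027  # rough FICA + FUTA + SUTA stand-ins

def hiring_scenario(reps):
    """Roll a simple set of payroll and revenue drivers into one scenario."""
    salaries = reps * QUARTERLY_BASE_SALARY
    revenue = reps * EXPECTED_QUARTERLY_SALES
    commissions = revenue * COMMISSION_RATE
    payroll_taxes = (salaries + commissions) * EMPLOYER_TAX_RATE
    expense = salaries + commissions + payroll_taxes
    return {"revenue": revenue, "expense": expense, "net_impact": revenue - expense}

# Compare the baseline (no hires) with the what-if scenario.
print("baseline:     ", hiring_scenario(0))
print("hire 10 reps: ", hiring_scenario(NEW_REPS))
```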
Why Is Modeling What-If Scenarios Important?
Every business decision entails a degree of risk, and managing risk appropriately can help a business remain competitive and increase growth. By analyzing “what-if” scenarios, companies can significantly reduce the potential risk of business decisions by gaining a deeper understanding of how each decision will impact the overall company financials. This allows businesses to grow more efficiently by making informed, data-supported decisions that are most likely to improve company finances and operations.
Can Spreadsheets Be Used for What-If Analysis?
While “what-if” scenarios can be modeled in spreadsheets, it’s likely that users will encounter many of the same spreadsheet challenges they face when doing a budget or a forecast.
Spreadsheet-based modeling makes for an incredibly tedious and drawn-out process due to the complexity of calculations and the need to track and compare multiple scenarios. It's also prone to manual error and will result in decreased accuracy of any analysis.
To learn more about how spreadsheets can cause havoc on the planning and performance management processes, check out this white paper.
What’s a Better Approach to Modeling?
A better approach is to use cloud-based modeling that is built on an EPM platform, offers multi-dimensional ad-hoc modeling capabilities, and is linked to the financial planning process.
In the previous “what-if” scenario example, users could take their slice of data from the financial plan and manipulate it as needed. Departmental analysis with the desired drivers could be performed, adding new dimensions or new members and calculations, to determine the impact of that “what-if” scenario on the department. Once the analysis is completed, the results can be linked back into the overall corporate plan. The data is consolidated into a single platform, eliminating data inconsistencies and providing one source of the truth.
By performing “what-if” scenario analysis and systematically linking it back to the corporate plan, companies can make more informed decisions and reduce risk. Companies can assess the potential outcomes of the decisions, investments, and other business activities that directly impact the financials.
To learn more about the benefits of cloud-based modeling, check out this recent white paper on managing performance and planning for the future.
6 Ways the IoT Could Impact Data Centers
The Internet of Things (IoT) is a term that refers to the interconnection of physical objects, each of which is reachable at its own physical or IP address. This Gartner report states that, by 2020, the IoT is expected to grow to 26 billion units. This means that all data centers must be prepared for the increase in data, not just in terms of storage but for other aspects as well.
Here are 6 challenges that data centers should prepare for as a result of the IoT:
1. Volumes of data storage: The data of the IoT could come from personal consumers, devices and large enterprises. The combination of these sources could lead to an astronomical growth in the quantity of data that must be stored by a data center. While some amount of scalability is always planned for, in order to be proactive, a data center must plan specifically for the IoT.
2. Data security and privacy: With an increase in the amount of data, the security measures in the data center also need to be strengthened accordingly. The multiple devices used to access data add to the concerns about breach of privacy. These devices may vary from the smallest phone or tablet to a smart kitchen appliance or automobile.
3. Network requirements: Most data centers are equipped for medium-level bandwidth requirements for access to the data. With the IoT, the number of connections and the speed of access would both have to undergo significant improvements to satisfy the growing requirements.
4. Scaling of storage architecture: The increase in the storage requirement could also lead to a challenge in the way the storage and servers are configured. It is recommended that a distributed structure is adopted to make the storage and access most efficient.
5. Multiple locations: Providing a solution for storage of IoT data that comes from multiple locations could be a challenge for a data center at a single location. The trend might need to move towards a collection of connected centers that are administered from a central location.
6. Cost effectiveness: The type of detailed back up data that is possible in the current landscape may no longer be affordable both in terms of storage and in terms of required network bandwidth. This might encourage a need for selective backup with a well-thought out frequency of performing the operation.
At Lifeline Data Centers, we believe that identifying future challenges is the first step towards finding solutions. Contact us to learn more.
This post was originally published on the CA Security Council blog.
The current browser-certification authority (CA) trust model allows a website owner to obtain its SSL certificate from any one of a number of CAs. That flexibility also means that a certificate mis-issued by a CA other than the authorized CA chosen by the website owner, would also be accepted as trustworthy by browsers.
This problem was displayed most dramatically by the DigiNotar attack in 2011 and in a mistaken CA certificate issued by TURKTRUST in 2012. In these cases, certificates were issued to domains that were not approved by the domain owner. Fortunately, the problem was detected in both cases by public key pinning, which Google implemented in Chrome.
So what is public key pinning? Public key pinning allows the website owner to make a statement that its SSL certificate must have one or more of the following:
- A specified public key
- Signed by a CA with this public key
- Hierarchal-trust to a CA with this public key
If a certificate for the website owner’s domain is issued by a CA that is not listed (i.e., not pinned), then a browser that supports public key pinning will provide a trust dialogue warning. Please note that website owners can pin multiple keys from multiple CAs if desired, and all will be treated as valid by the browsers.
The website owner trusts that its chosen, specified CAs will not mistakenly issue a certificate for the owner’s domain. These CAs often restrict who can request the issuance of a certificate for the owner’s specific domains, which provides additional security against mis-issuance of certificates to an unauthorized party.
Unfortunately, the public key pinning that Google implemented in 2011 is not scalable as it requires the public keys for each domain to be added to the browser.
A new, scalable public key pinning solution is being documented through a proposed IETF RFC. In this proposal, the public key pins will be defined through an HTTP header from the server to the browser. The header options may contain a SHA-1 and/or SHA-256 key algorithm, maximum age of pin, whether it supports subdomains, the URI to report errors, and the strictness of the pinning.
An example of a pin would look as follows:
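A representative header, built from the directives described above and shown here with placeholder base64 key hashes (the real values would be hashes of the site's public keys), might be:

```
Public-Key-Pins: pin-sha256="base64PrimaryKeyHashPlaceholder=";
    pin-sha256="base64BackupKeyHashPlaceholder=";
    max-age=5184000; includeSubDomains;
    report-uri="https://example.com/pkp-report"
```

Here max-age=5184000 seconds corresponds to the 60-day pin lifetime discussed below, includeSubDomains extends the pins to subdomains, and report-uri tells browsers where to report validation failures.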
Implementing public key pinning will require website owners to make some critical early decisions, such as how many keys to pin, whether to pin keys for subdomains and what to select as the maximum age of the pin.
The number of keys to pin and the maximum age of the pin will address the issue of acceptability of your website to browsers. Adding more keys to pin will useful, if your key may change due to changes at your CA or in the event of migration from one CA to another. The maximum age means the pin expires after a maximum number of seconds. By limiting the maximum age, any mistake in the pin will expire over a period of time. The proposed RFC indicates that 60 days would be a good maximum age of the pin.
Website owners who use pinning will also have to keep their pinning records up to date to avoid warning messages for replacement certificates that are supported by a key which is not pinned. The benefit of potential warnings to the public about non-authorized certificates may justify this extra effort.
Pinning is also being looked at by Microsoft for the Enhanced Mitigation Experience Toolkit (EMET) and by the Android project for Android 4.2. We will see if other applications will also use pinning.
Update May 1 2015: Public Key Pinning Extension for HTTP is RFC 7469.
Attar A. (Association des Gastro enterologues Oncologues Group), Malka D. (Institute Gustave Roussy), Sabate J.M. (Association des Gastro enterologues Oncologues Group), Bonnetain F. (Association des Gastro enterologues Oncologues Group), and 6 more authors. Nutrition and Cancer, 2012.
Although malnutrition is known to be frequent in cancer patients, it has not been described in a selected population of patients with gastrointestinal malignancies under chemotherapy only. Physician judgment about malnutrition and risk factors for malnutrition were also evaluated. All consecutive in- and outpatients of 11 centers were prospectively enrolled in a cross-sectional 14-day period study and classified according to the French health recommendations [Haute Autorité de Santé (HAS)]. Among 313 patients enrolled in 11 centers (mean age = 63 yr; range = 21-93; 67% male), mainly with colorectal (58%), pancreatic (15%), gastric (11%), and hepatobiliary (10%) primary tumors, the prevalence of malnutrition was 52%. Moderate and severe malnutrition were present in 27% and 25% of cases, respectively. Physicians considered it in 36% and 6% of cases, respectively, thereby misclassifying 134 patients (43%). The agreement between the HAS definition and the physicians' judgment was very low (kappa = 0.30). Most of the patients who were identified as severely malnourished received no nutritional support. Performance status and pancreatic and gastric cancers were independently associated with malnutrition. Malnutrition levels are high, around 50%, and unequally distributed according to the primary tumor. It is still underestimated by physicians. Weight loss remains a clinically relevant, simple, and reliable marker of malnutrition. © 2012 Copyright Taylor and Francis Group, LLC.
The major difference between SDN and traditional networking lies in the concept of controller-based networking. In a software-defined network, a centralized controller has a complete end-to-end view of the entire network, and knowledge of all network paths and device capabilities resides in a single application. As a result, the controller can 1) calculate paths based on both source and destination addresses, 2) use different network paths for different traffic types, and 3) react quickly to changing networking conditions.
In addition to delivering these features, the controller serves as a single point of configuration. This full programmability of the entire network from a single location, which finally enables network automation, is the most valuable aspect of SDN.
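As a toy illustration of what centralized, programmable path selection looks like, the sketch below keeps the whole path table in one place and picks a route per traffic class. The topology, traffic classes and function shown are invented for illustration and do not correspond to any real SDN controller's API.

```python
# Toy, invented topology: candidate paths between two hosts, with per-path properties.
PATHS = {
    ("10.0.0.1", "10.0.2.9"): [
        {"hops": ["s1", "s4", "s7"], "latency_ms": 2, "capacity_mbps": 1_000},
        {"hops": ["s1", "s2", "s3", "s7"], "latency_ms": 9, "capacity_mbps": 10_000},
    ],
}

def select_path(src, dst, traffic_type):
    """Pick a path for a flow: low latency for voice, high capacity for bulk data."""
    candidates = PATHS[(src, dst)]
    if traffic_type == "voice":
        return min(candidates, key=lambda p: p["latency_ms"])
    return max(candidates, key=lambda p: p["capacity_mbps"])

# Because the controller sees every path, different traffic types can take different
# routes between the same endpoints, and reacting to a link failure only requires
# updating this one table.
print(select_path("10.0.0.1", "10.0.2.9", "voice")["hops"])
print(select_path("10.0.0.1", "10.0.2.9", "bulk")["hops"])
```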
When storms attack, Smart Grid could reduce outages, speed recovery
- By William Jackson
- Jul 10, 2012
It’s been a bad 12 months for electric utilities and the customers who depend on them. Hurricane Irene in August 2011 knocked out power to more than 4 million homes and businesses in the eastern United States, some for more than a week, and two months later a freak October snow storm left 3 million customers without power in the Northeast.
And last month a suddenly notorious type of storm — the fast-moving derecho — cut power to 5 million customers across six states from the Midwest to the Mid Atlantic. Some customers in Washington, D.C., stayed without power for a week during the area’s hottest stretch of weather on record.
Forecasters predict that global climate change will make severe weather events more common, but a nationwide program to develop an intelligent power grid could help ease the pain of extended outages.
“There are a number of ways Smart Grid technology would help,” said George Arnold of the National Institute of Standards and Technology.
At the distribution level, smart meter technology at customer sites can give utilities better visibility into problems. At the transmission level, sensors can help identify unstable conditions in high-voltage transmission lines so that cascading failures can be avoided.
And the ability to handle two-way power flow will enable microgrids to generate power locally for campuses or neighborhoods, giving a degree of independence from the larger grid.
The improvements might not be apparent to those who sweltered without power through triple-digit temperatures last week, but the technology has begun making its way into power systems and homes.
“It will be a decades-long process,” said Arnold, who is the national coordinator for Smart Grid interoperability for NIST. “But we have a very significant start.”
The electric grid is a backbone of the nation’s critical infrastructure, providing the power the rest of the systems need to operate. With the current program of updating the grid with automated networking technology, there is a growing concern about the possibility of cyber attacks on this infrastructure. But most cyberattacks against the grid are unlikely to produce effects worse than those already produced by Mother Nature.
“The sorts of cyber attacks that are easy to do would simply trigger breakers, causing only brief outages,” said Scott Borg, director of the U.S. Cyber Consequences Unit, an independent research institute. “These would be less destructive than many storm outages.”
Attacks against the physical components at the core of the grid — the generators, large transformers and cross-country transmission lines — would be much more destructive but also more difficult to carry out.
“Highly sophisticated cyberattacks, prepared by considerable numbers of highly skilled experts, could cause damage that would make the worst storm damage seem trivial,” Borg said. “This is because such attacks could physically destroy large quantities of large, hard-to-replace equipment. The consequences of this could be almost unbelievably bad.”
Borg said utilities actually respond well to power outages caused by weather. They have plenty of experience, and the components being repaired and replaced are relatively easy to work with.
“What gets damaged by storms are mainly local distribution lines, local relays, and smaller transformers,” he said. “When storms cause lots of people to lose power for longer periods, this is usually because lots of things have been damaged in lots of locations.”
But even in these cases, power usually is restored to most homes and business within four days, Borg said. “Outages of three-and-a-half days or less are inconvenient and dangerous in extreme temperatures, but they normally cause remarkably little economic damage.”
One of the challenges of widespread outages that last for a week or more is simply identifying the scope and location of the problems, Arnold said. Today, utilities rely on customer reports to identify outages. “In most cases, they don’t know when people are without power,” he said.
At the distribution level, where most storm damage occurs, smart meters could provide this information. “They can more rapidly get a picture of where the damage is without spending a day driving around,” Arnold said. “They could be much more effective in dispatching crews.”
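As a rough illustration (not any utility's actual system), the "last gasp" outage reports that many smart meters can send as they lose power could be grouped by feeder so dispatchers see where damage is concentrated; the field names below are invented:

from collections import Counter
def rank_feeders(outage_reports):
    # outage_reports: iterable of dicts like {"meter_id": ..., "feeder": ...}
    counts = Counter(report["feeder"] for report in outage_reports)
    return counts.most_common()  # largest clusters first, a crude dispatch priority
reports = [
    {"meter_id": "M100", "feeder": "F-12"},
    {"meter_id": "M101", "feeder": "F-12"},
    {"meter_id": "M102", "feeder": "F-07"},
    {"meter_id": "M103", "feeder": "F-12"},
]
print(rank_feeders(reports))  # [('F-12', 3), ('F-07', 1)]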
If the grid is enabled to handle two-way power flow, local generating sources could be used to help supply power to microgrids. These local generators could supply local customers, and feed excess power into the grid when demand is low. If the grid is damaged, the local generators could help take care of local demand.
“It wouldn’t necessarily provide all of the power needed all of the time,” Arnold said, but local generators could be more economical than static back-up generators that are used only during emergencies.
In the transmission system, phasor measurement units can provide a more granular way to measure and manage loads, so that problems can be spotted and dealt with before a cascading failure causes a widespread blackout.
The Energy Department has made $4.5 billion available through the Smart Grid Investment Grant program, part of the American Recovery and Reinvestment Act. Under the program, more than 11 million smart meters have been installed, a little more than 7 percent of the customer meters in the country, and 313 phasor measurement units have been installed out of a projected 800. Also, 5,741 automated feeder switches and 13,697 substation monitors have been installed.
While this deployment is under way, NIST is spearheading the job of developing technical standards for the new infrastructure. In February it released version two of its Framework and Roadmap for Smart Grid Interoperability Standards, and later this year will update its Guidelines for Smart Grid Cyber Security.
Also later this year, the government and industry Smart Grid Interoperability Panel, which has helped develop the guidelines, will be relaunched as a government-funded non-profit organization.
William Jackson is a Maryland-based freelance writer.
|
<urn:uuid:26cb99b2-f665-4690-9d05-80a8afd634d7>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2012/07/10/smart-grid-power-outages-severe-storms.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00283-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.944008 | 1,300 | 2.609375 | 3 |
Using DNS Logs As a Security Information Source:
Occasionally this list is used as part of research into malware and domain security. Please drop us a note if you find such a reference in an article or presentation; if you are the author, let us know.
Two papers we’ve become aware of:
Understanding the Prevalence and Use of Alternative Plans in Malware with Network Games - http://www.cc.gatech.edu/~ynadji3/docs/pubs/gzaraid2011.pdf
A Demonstration of DNS3: a Semantic-Aware DNS Service – http://iswc2011.semanticweb.org/fileadmin/iswc/Papers/PostersDemos/iswc11pd_submission_106.pdf
The zone and text files are ONLY available from a mirror and are no longer available on the main site. All requests for files on www.malwaredomains.com will be directed to our main mirror, mirror1.malwaredomains.com.
MalNET serves as a low-interaction HTTP server which responds with a '200 OK' for every request. When malware attempts to retrieve http://bad.malwaredomain.com/som/bad/file.exe, MalNET basically says 'yep, OK, here it is' and then does nothing. To make this work you will need to run some sort of blackhole DNS setup in your environment, such as the one on offer from malwaredomains.com. Once you have traffic redirected to your MalNET host, you should be able to see what the malware is trying to download.
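The behavior described above can be approximated in a few lines. The sketch below is not MalNET's actual code, just a minimal stand-in that answers every request with a 200 OK, returns nothing useful, and logs what was asked for:

from http.server import BaseHTTPRequestHandler, HTTPServer
class SinkholeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log what the infected host tried to fetch, then say "yep, OK, here it is".
        print("sinkholed request: %s from %s" % (self.path, self.client_address[0]))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"")  # ...and then do nothing
if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SinkholeHandler).serve_forever()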
Today, we’re happy to announce Google Safe Browsing Alerts for Network Administrators — an experimental tool which allows Autonomous System (AS) owners to receive early notifications for malicious content found on their networks. A single network or ISP can host hundreds or thousands of different websites. Although network administrators may not be responsible for running the websites themselves, they have an interest in the quality of the content being hosted on their networks. We’re hoping that with this additional level of information, administrators can help make the Internet safer by working with webmasters to remove malicious content and fix security vulnerabilities.
To get started, visit safebrowsingalerts.googlelabs.com.
Nice article on SANs:
For example when seed data pulled from Malware Domains is correlated with passive DNS and ASN data, then visualized, it is possible to see how the majority of the authoritative nameservers are hosted in the same network block. This dependence indicates an investment by the aggressor into a particular hosting company and can provide an effective network-level block at relatively low cost. As always, be aware of potential collateral damage when blocking a network portion that may also contain legitimate IP hosting space.
Websense has an eye-opening writeup on how some malware is now using ARP cache-poisoning and making the infected machine into an HTTP proxy server. Poof! Your entire network is poisoned! Castlecops has a writeup from someone in China who has experienced this first hand: Machines which are declared clean by multiple AV products still suffer from the IFRAME. Yikes!
Dancho Danchev's blog lists several domains full of exploits, using "comprehensive multiple IFRAMES loading campaigns":
8v8 (dot) biz uc147 (dot) com 070808 (dot) net qx13 (dot) cn sbb22 (dot) com uuzzvv (dot) com 55189 (dot) net 749571 (dot) com jqxx (dot) org mm5208 (dot) com 68yu (dot) cn 2365 (dot) us loveyoushipin (dot) com yun878 (dot) com xks08 (dot) com
In better news, Shadowserver reports that the 17 Storm Worm domains, including i-halifax.com and i-barclays.com, appear to have all been placed in a status of "NOT DELEGATED" over at nic.ru, preventing A records from being returned when looking up the domains. (Some of the other holiday-related Storm Worm domains still have their NS records.)
|
<urn:uuid:98c3b833-53cb-4152-8cd2-89122fc6a812>
|
CC-MAIN-2017-09
|
http://www.malwaredomains.com/?tag=news
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00459-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.865435 | 917 | 2.53125 | 3 |
Yaz previously posted a news article about a system that used a camera to help detect signs of alcohol impairment, but the cost of that system seems prohibitive. Now there is word of new technology to detect whether the driver of a vehicle may be under the influence. Researchers at TCU have developed a fuel cell which can detect alcohol in the breath of a driver and then send a signal to police who may be cruising by.
Ed Kolesar, leader of the project, explains that the detector is based on a fuel cell run on ethyl alcohol. A pump draws air in from the passenger cabin, and a platinum catalyst converts any alcohol to acetic acid, producing a current proportional to the concentration of alcohol in the air.
If enough alcohol is detected, a transmitter would send a signal to nearby police up to 1 km away. Of course, this would require the police cruiser to be equipped with a receiver for the signal. The article claims the fuel cell would be specific for ethyl alcohol and hard to fool, but I am not convinced. It would be relatively easy to tape over a sensor or possibly vent fresh air directly over the detector to avoid being pulled over. I also wonder how effective the detector would be while cruising around with the windows wide open. As for the cost of the device, prototypes can be made with off-the-shelf components and would cost about $100. Mass production may lower this amount even further.
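As a back-of-the-envelope sketch of that logic (with made-up numbers, since the article gives none), the detector boils down to a roughly proportional reading and a threshold:

CURRENT_PER_PPM = 0.8   # assumed microamps of fuel-cell output per ppm of alcohol vapor
ALERT_PPM = 150.0       # assumed cabin-air concentration that triggers the transmitter
def should_alert(measured_microamps):
    estimated_ppm = measured_microamps / CURRENT_PER_PPM
    return estimated_ppm >= ALERT_PPM
print(should_alert(40.0))   # False: clean cabin air
print(should_alert(200.0))  # True: transmitter would signal a nearby cruiser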
|
<urn:uuid:f2af7081-1dae-446b-9c8c-2f5327aaa4c7>
|
CC-MAIN-2017-09
|
https://arstechnica.com/uncategorized/2002/03/2257-2/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00227-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.961303 | 283 | 2.6875 | 3 |
How does a city work? Interactive model gives Portland answers.
- By Rutrell Yasin
- Aug 09, 2011
How does public transportation affect education? What impact does population density have on public health? Is there a connection between CO2 levels and obesity?
Officials in the City of Portland, Ore., have collaborated with IBM to find answers to those and other questions, developing an interactive model that maps the relationships between the city’s core systems handling the economy, housing, education, public safety, transportation and health care.
The computer simulation lets Portland’s leaders see how city systems work together, how environmental and other factors relate to each other and project the likely results of actions under consideration.
IBM puts its 'smart city' technology in one package
The model was built to support the development of metrics for the Portland Plan, the city’s roadmap for the next 25 years, said Joe Zehnder, chief planner with Portland’s Bureau of Planning and Sustainability.
“We wanted to break down our typical policy or investment silos like transportation, housing, economic development and the environment and look at how policy and investments within any of those areas could play a role in accomplishing a limited and shared set of priorities,” Zehnder said.
Portland basically served as a living laboratory during the year-long collaborative effort to explore how complex city systems behave over time.
IBM approached the Portland officials in late 2009 and kicked off the project in April 2010, when IBM experts met with over 75 Portland-area subject matter experts in a variety of fields to learn about system-interconnection points in the city.
Later, with help from researchers at Portland State University and systems software company Forio Business Simulations, the city and IBM collected approximately 10 years of historical data from across the city to support the model.
The project resulted in a computer model of Portland as an interconnected system that provides planners at the Portland Bureau of Planning and Sustainability with an interactive visual model that lets them navigate and test changes in the city's systems.
“We’ve been trying to model across cities, looking at how transportation relates to public safety or how public safety relates to education," said Justin Cook, IBM’s program manager for Portland. This will help cities set long-term policy goals.
Through a Web browser, the mayor or officials within the Bureau of Planning and Sustainability can access the model from anywhere, Cook said.
They can see an interactive visual map of interconnections and see, for example, what other areas are related to emissions, Cook said. Or they can draw a map between areas to visualize all of the connections.
Finally, they can make changes and test policy positions by doing “what if” scenarios such as “what would happen if the city added more sidewalk miles or more grocery stores per square mile,” Cook said.
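A toy "what if" calculation along these lines might look like the following; the coefficients are invented for illustration and have no relation to the actual Portland/IBM model:

def walking_share(sidewalk_miles, grocery_stores_per_sq_mile):
    base = 0.10  # assumed baseline share of trips made on foot
    return base + 0.00002 * sidewalk_miles + 0.01 * grocery_stores_per_sq_mile
for added_miles in (0, 250, 500):
    share = walking_share(4000 + added_miles, grocery_stores_per_sq_mile=1.5)
    print("add %3d sidewalk miles -> walking share %.3f" % (added_miles, share))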
Meanwhile, IBM introduced new analytics software and services to help cities predict the results of policy decisions and their positive and negative consequences in the future.
System Dynamics for Smarter Cities is designed to help municipal officials reduce the unintended negative consequences of municipal actions on residents, as well as uncover hidden beneficial relationships among municipal policies, IBM officials said.
The software addresses the dynamics across a spectrum of municipal policies and their effect on citizens, such as the associations between:
- Public transportation fares and high school graduation rates.
- Obesity rates and CO2 emissions.
- Average health and attractiveness of the city to businesses.
- Population density and wellness.
- Taxes/fees collected and electricity consumption.
- Farmers markets and economic growth.
Rutrell Yasin is a freelance technology writer for GCN.
|
<urn:uuid:9c799c2e-366d-4015-bebe-93f9f199898f>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2011/08/09/portland-ibm-smart-city-model.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00155-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.935885 | 765 | 2.765625 | 3 |
A spider that survived more than three months aboard the International Space Station has died less than a week into its new life as a celebrity. Nefertiti, a jumping red-backed spider, breathed her last after just four days of living in the Smithsonian's National Museum of Natural History. I was going to write that she "spun her last web," but jumping red-backed spiders actually don't make webs. Rather, as NASA explains, jumping spiders "hunt using their excellent vision to track and stalk prey, jumping and striking with a lethal bite -- similar to cats hunting mice." The point of the experiment, conceived by an Egyptian high school student who won a global YouTube Space Lab contest, was to see if jumping spiders could successfully hunt prey in microgravity. Nefertiti, at least, showed she could, managing to kill and devour fruit flies placed in her sealed environment on the space station. Nefertiti was aboard the space station from July to October. Here's the Smithsonian announcement:
It is with sadness that the Smithsonian’s National Museum of Natural History announces the death of Nefertiti, the “Spidernaut.” “Neffi” was introduced to the public Thursday, Nov. 29, after traveling in space on a 100-day, 42-million-mile expedition en route to and aboard the International Space Station. She was there to take part in a student-initiated experiment on microgravity.
This morning, before museum hours, a member of the Insect Zoo staff discovered Neffi had died of natural causes. Neffi lived for 10 months. The lifespan of the species, Phidippus johnsoni, can typically reach up to 1 year. The loss of this special animal that inspired so many imaginations will be felt throughout the museum community. The body of Neffi will be added to the museum’s collection of specimens where she will continue to contribute to the understanding of spiders.
Guess that rules out a ceremony at Arlington National Cemetery.
|
<urn:uuid:0bbff365-cf7c-4018-80b8-6aed6c22bb97>
|
CC-MAIN-2017-09
|
http://www.itworld.com/article/2716658/enterprise-software/spidernaut-never-got-to-enjoy-its-fame.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00575-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.965966 | 422 | 3.15625 | 3 |
Technology in Action
Sandy shows storm-prediction progress
- By Frank Konkel
- Nov 05, 2012
Hurricane Sandy bears down on the mid-Atlantic on Oct. 29th in this NASA satellite image.
By any account, the super-storm known as Hurricane Sandy was devastating, claiming 113 American lives and causing an estimated $50 billion in economic damage as it wreaked havoc on the East Coast on its way to becoming the second-costliest U.S. storm, behind only Hurricane Katrina.
But the damage could have been much worse – in loss of life and economic costs – were it not for the astoundingly accurate forecasts of Sandy’s path from the National Hurricane Center.
Some of the Center’s model runs pegged the hurricane’s landfall five days out within 30 miles of the actual landfall point. That is 10 times better than the public and emergency personnel bracing for the storm could have hoped for 20 years ago, when the NHC’s historical average error for 72-hour (three-day) hurricane forecasts was about 345 miles. Had that still been the case here, the best forecast would have predicted landfall anywhere from northeastern Connecticut to northern South Carolina.
In fact, if a storm like Sandy had happened more than two decades ago, there’s a good chance meteorologists wouldn’t have predicted landfall at all. Instead they would have used historical data to conclude such a system would take a right turn off a high-pressure ridge into the eastern Atlantic Ocean and away from the coastline, according to Dr. Sundararaman Gopalakrishnan, a senior meteorologist at the National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory in Miami.
“If Hurricane Sandy happened 20 years back, it would almost certainly have been a disaster without much warning,” said Gopalakrishnan, who witnessed predictive computer models as they shifted Sandy’s trajectory west toward New Jersey as more and more data came in.
He recalls thinking “Oh no. This is not good,” he said.
“I’m feeling sad about the deaths caused by this hurricane, but I think if it was not for these kinds of forecasts, there would be much, much more,” Gopalakrishnan said. “I can clearly see that from forecasting this storm. This was a unique situation as Sandy was captured in hybrid circulation from the land. I wish it would have turned to the right (east), but it is better to know than not know.”
The technology behind the forecasts
When NOAA – a scientific agency housed within the Department of Commerce that consists of six agencies, including the National Weather Service – was formed in 1970, meteorologists had satellites and historical data to help predict the weather, but they didn’t have supercomputers like the IBM Bluefire, which is housed in the bottom levels of the National Center for Atmospheric Research in Boulder, Colo. It has a peak speed of 76 teraflops, or about 76 trillion floating-point operations per second.
The accuracy of predictions for the storm beat anything that could have been done as recently as 20 years ago. (Image: NOAA)
Massive amounts of data, however, aren’t worth much without modeling, and the NHC uses more than 30 different models to predict storm intensity. In 2007, the NHC began utilizing the Hurricane Weather Research and Forecasting (HWRF) model, which analyzes big data collected from satellites, weather buoys and airborne observations from Gulfstream-IV and P-3 jets and churns out high-resolution computer-modeled forecasts every six hours.
Over the past year, Gopalakrishnan said, hurricane forecasts have further improved through the use of higher-resolution computer models. The spacing of points on weather maps has been refined from 9 kilometers to 3 kilometers, making HWRF the “highest resolution hurricane model ever implemented for operations in the National Weather Services” according to Richard Pasch, senior hurricane specialist at the NHC.
Forecasting a hurricane takes big data, high-resolution models that incorporate large-scale physics and an accurate representation of initial conditions, and HWRF’s simulations have been successful thus far, Gopalakrishnan said, helping to improve hurricane forecasts by up to 20 percent.
Beginning five days before Sandy’s landfall, the NHC’s HWRF model ran 23 simulations of the hurricane’s expected path – one every six hours, with each simulation taking about 85 minutes to complete.
Simulations and other such data are shared immediately with other federal agencies including the Federal Emergency Management Agency, according to Erica Rule, spokesperson for the NOAA.
“When there is an active storm threatening landfall, there is extremely close coordination,” Rule said.
Not everything about Sandy was predicted as accurately as its landfall location. Critics have questioned why predicted wind speeds in Washington, D.C., were higher than what was experienced, and the storm’s predicted time of landfall was off by a few hours.
Yet these errors in NHC forecasting seem small compared to yesteryear’s predictions.
Today, the NHC’s average miss in hurricane landfall predictions is a scant 100 miles. In 1970, NHC meteorologists missed by an average of 518 miles. Major improvements have been recognized in precipitation estimates and wind speed forecasts as well, which can prove vital in predicting storm surges and imminent flooding.
The increased predictive power decreases what scientists call a hurricane’s “cone of uncertainty,” or the area that includes all the possible paths a hurricane might take.
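Conceptually, the cone is drawn by widening the forecast track by the average historical error at each lead time; the error radii below are made up purely to illustrate the idea, while the real values are recalculated by the NHC from recent seasons:

error_radius_miles = {12: 35, 24: 55, 48: 90, 72: 125, 96: 175, 120: 230}  # illustrative only
def cone_width(lead_time_hours):
    # The cone's diameter at a given lead time is twice the average track error.
    return 2 * error_radius_miles[lead_time_hours]
for hours in sorted(error_radius_miles):
    print("%3d-hour forecast: cone roughly %d miles across" % (hours, cone_width(hours)))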
The better the predicted path of a hurricane, the less uncertainty of where, when and who it might strike, and Gopalakrishnan said “next-generation” efforts in hurricane prediction promise to further reduce uncertainty.
Frank Konkel is a former staff writer for FCW.
|
<urn:uuid:8a51dbc4-a0db-4fca-a9a5-edae75a54193>
|
CC-MAIN-2017-09
|
https://fcw.com/articles/2012/11/05/sandy-hurricane-center.aspx?admgarea=TC_Agencies
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00275-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.956704 | 1,232 | 3.234375 | 3 |
Sugino H.,Asa Medical Association |
Tsumura S.,Asa Medical Association |
Tsumura S.,Tsumura ENT Clinic |
Kunimoto M.,Asa Medical Association |
And 19 more authors.
PLoS ONE | Year: 2015
The Japanese guidelines for acute otitis media in children recommend classifying acute otitis media by age, manifestations and local findings, and also recommend myringotomy for moderate-grade cases with severe local findings, severe-grade cases, and treatment-resistant cases. The heptavalent pneumococcal conjugate vaccine was released in Japan in February 2010. In Hiroshima City, public funding allowing free inoculation with this vaccine was initiated in January 2011, and the number of vaccinated individuals has since increased dramatically. This study investigated changes in the number of myringotomies performed to treat acute otitis media during the 5-year period from January 2008 to December 2012 at two hospitals and five clinics in the Asa Area of Hiroshima City, Japan. A total of 3,165 myringotomies for acute otitis media were performed. The rate of procedures per child-year performed in <5-year-old children decreased by 29.1% in 2011 and by 25.2% in 2012 compared to the mean rate performed in the 3 years prior to the introduction of public funding. A total of 895 myringotomies were performed for 1-year-old infants. The rate of myringotomies per child-year performed for acute otitis media in 1-year-old infants decreased significantly in the 2 years after the introduction of public funding for heptavalent pneumococcal conjugate vaccine compared to all years before introduction (p<0.000001). Our results suggest a benefit of heptavalent pneumococcal conjugate vaccine for acute otitis media in reducing the financial burden of myringotomy. In addition, this vaccine may help prevent acute otitis media with severe middle ear inflammation in 1-year-old infants. © 2015 Sugino et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
|
<urn:uuid:7a295388-7a67-491f-8984-3a223df9bc52>
|
CC-MAIN-2017-09
|
https://www.linknovate.com/affiliation/asa-medical-association-427198/all/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00627-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.925999 | 463 | 2.53125 | 3 |
Whether or not the D-Wave quantum computer is in actuality a quantum computer is a debate that HPCwire has been following since the project hatched in 2007. Early critics of the system claimed that it wasn’t a “real” quantum computer. Since that time, D-Wave has been winning over supporters, including Google, NASA and Lockheed Martin. But others (MIT’s Scott Aaronson for one) remain skeptical. In the course of the continuing controversy, the Washington Post‘s Timothy B. Lee recently took up the matter with D-Wave’s vice president of processor development Jeremy Hilton.
There is some disagreement over what exactly constitutes a quantum computer. The classic model for building a quantum computer is called Gate Model Quantum Computing, but an alternative model was introduced by MIT researchers in the early 2000s. It’s called adiabatic quantum computing, and it’s what D-Wave’s computer is based on. Some critics of D-Wave’s technology contend that it’s not the real deal.
“What D-Wave built is not universal quantum computing,” Hilton readily admits, but he maintains that it’s been proven in the literature – not by D-Wave – that the adiabatic model is an equivalent model of quantum computation.
D-Wave founders went with the adiabatic approach because they thought it had the best chance of enabling real work in a reasonable time frame without getting into the really difficult NP-Hard class of problem-solving.
Asked to be “concrete” in describing their hardware, Hilton responds:
“D-Wave has focused on the superconducting side of things to benefit from the infrastructural advancement the semiconductor has made. The fabrication of superconductors is all [mature] semiconductor technology. We fabricate [our chips] at Cypress Semiconductor. We don’t have exotic tools to make those devices. That was an important aspect for D-Wave, we want to scale up to a high level. If all those problems have already been solved, we’ll be able to take advantage more quickly. [If we had used] ion trap technology, new technologies would have needed to scale up.”
As for why this system should be considered impressive even though it’s only “comparable or slightly better” than classical computing technology, Hilton affirms that even being “in the ballpark of the conventional algorithms in the field was very exciting.” The company and its backers are focused on their future roadmap and on the large improvements they are seeing between generations. For example, transitioning from a 128-qubit to a 512-qubit processor returned a 300,000x improvement in performance. A 1,000-qubit processor is planned for release sometime this year, and a 2,000-qubit processor is on the horizon as well.
“We’re at a point where we see that our current product is matching the performance of state-of-the-art classical computers,” Hilton adds. “Over the next few years, we should surpass them. The ideal is to get into a space that is fundamentally intractable with classical machines. In the short term all we focus on is showing some scaling advantage and being able to pull away from that classical state of the art.”
In the remainder of the Q&A, Hilton uses a hills and valleys metaphor to describe how the D-Wave machine compares with its conventional computing cousins (“entanglement allows those valleys to interact and interfere in a way that allows the system to find its lowest-energy optimization”). He also explains why D-Wave hasn’t focused on Shor’s algorithms (“it’s not an interesting market segment for a business”), and counters claims of secrecy as a historic effect (their early years were focused on building a scalable technology, not publication).
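For intuition only, the "find the lowest valley" idea can be mimicked classically with simulated annealing, as in the sketch below; it says nothing about the quantum effects, such as tunneling and entanglement, that D-Wave's hardware is meant to exploit:

import math, random
def energy(x):
    return 0.1 * x * x + 3.0 * math.cos(2.0 * x)  # a bumpy landscape with many valleys
random.seed(1)
x, temperature = 8.0, 5.0
while temperature > 0.01:
    candidate = x + random.uniform(-1.0, 1.0)
    delta = energy(candidate) - energy(x)
    # Always accept downhill moves; accept uphill moves with a temperature-dependent probability.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.995
print("settled near x = %.2f with energy %.2f" % (x, energy(x)))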
In the final analysis, Mr. Lee asks all the right questions, but the responses, while frank, can come off as frustratingly vague – a paradox befitting the subject matter, perhaps, or something more calculated if you’re a critic.
Richard Feynman has been quoted as saying: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” But he also said: “If you can’t explain it to a six year old, you don’t really understand it.”
|
<urn:uuid:7d6d7901-1e3c-4b96-a724-c06ab3788248>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2014/01/13/everything-wanted-ask-d-wave/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00327-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.950146 | 938 | 2.953125 | 3 |
For Win Wonks, Software Restriction is Good Policy
Some of the most significant threats to your network's security and stability come from worms and trojan malware. These network threats can be transmitted through executable files attached to e-mails, via Internet downloads, or from Web pages. Although virus checkers are increasingly effective at protecting against threats embedded in executable files, there are still other measures you can use to protect the systems on your network. One of the most effective is to simply prevent people from opening unknown executable files in the first place. Windows Server 2003 provides a feature called Software Restriction Policies that allows you to do just that.
How Software Restriction Policies Work
In simple terms, Software Restriction Policies allow you to define what applications can or cannot be run on a computer. The restrictions are configured and applied through Group Policy, so there is no additional software to install, nor is there any major reconfiguration of Active Directory.
Which applications can be run in a "disallowed" environment, or cannot be run in an "unrestricted" environment, is defined through a set of rules. These rules provide different criteria against which a file is evaluated when someone attempts to open it. There are four types of rules that can be created – hash, certificate, path and Internet zone. Each takes a different approach to determining whether a file will be permitted or prevented from running.
- Hash rule – The hash rule works by using a hashed value of the executable file as the criterion to determine whether it should run. When an attempt to start the executable is made, the hash value for the file is calculated and compared to the hash value stored when the rule was created. If the two match, then the file is allowed or disallowed based on the default security level that is configured. If the hash value does not match because, for example, a virus has infected the file since the rule was created, the file will be prevented from opening. (A short sketch of this hash comparison appears after this list.)
- Certificate rule – Certificate rules use digital certificates to determine whether or not a file should be allowed to open. The certificate you use for the rule must be supplied to you by the software creator, and is then used to verify the validity of the executable when you attempt to use it. Like the hash rule, a certificate rule verifies that the software has not been altered or manipulated since it was created.
- Path rule – The path rule uses the location from which the executable is launched as the criteria for determining whether access is permitted. For example, if you were using the "disallowed" default rule, you could provide access to certain folders from which users are permitted to run an executable file. An attempt to run a file from another location would be blocked. To increase the flexibility of path rules, you can use folder variables such as %windir% and %userprofile%.
- Internet zone – The Internet zone rule uses the zones defined in Internet Explorer to identify from where a file is retrieved, and then determine whether a file can be executed. To view the zones and get an idea of locations and objects included in each zone, open Internet Explorer and click Tools > Internet Options. Then choose the Security tab. By configuring rules based on Internet zones, you can override the default setting for the Software Restriction Policy on executables obtained from that zone. So, for example, if you configured an Internet Zone rule when the default security level was "unrestricted," any software run directly from the Internet, perhaps as part of a web page, could be prevented from running. It should be noted, though, that at this time only Windows Installer packages are covered by Internet zone rules.
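To illustrate the idea behind the hash rule (this is not how Windows computes or stores SRP hashes, only the tamper-detection concept, shown here with SHA-256):

import hashlib
def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
def rule_matches(path, hash_recorded_when_rule_created):
    # A modified (for example, infected) file produces a different hash, so it no
    # longer matches the rule that was written for the original binary.
    return file_hash(path) == hash_recorded_when_rule_created
# Hypothetical usage:
# recorded = file_hash(r"C:\Tools\putty.exe")    # stored when the rule is created
# rule_matches(r"C:\Tools\putty.exe", recorded)  # False once the file has been altered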
In some cases it is possible for an executable file to be subjected to more than one rule. When this happens, the rules are applied in order starting with the hash rule, followed by the certificate rule, then the path rule, and finally the Internet zone rule.
Another configurable aspect of Software Restriction Policies is the specification of which file types are covered by the policy. By default a wide range of executable file types such as BAT, COM, VBS, EXE and MSI are included. You can choose to add or remove file types as needed. The advantage of a configurable list is that as new programs become available or new threats are identified, you can add the relevant file types to the list.
A Word of Caution
It should go without saying that implementation of Software Restriction Policies should only take place after considerable testing. The basic premise behind the technology is that it prevents people from opening files that they shouldn't. With this comes the inherent risk that a misconfiguration will result in people not being able to open files that they need.
Also keep in mind that many complex applications, like Microsoft Office, invoke other smaller applications or components as they run. As a result, you should test all aspects of an application, not just the main program.
Although the implementation of Software Restriction Policies can require a reasonable amount of administrative overhead, particularly on a large network, there are few other methods of providing such comprehensive control over what applications can be run and from where. In the next article, we'll look at the actual process of implementing Software Restriction Policies on a Windows Server 2003 system. Until then!
|
<urn:uuid:595067b0-17b7-4c09-b9eb-d3cd8c0247c6>
|
CC-MAIN-2017-09
|
http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3445241/For-Win-Wonks-Software-Restriction-is-Good-Policy.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00503-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.933221 | 1,072 | 3.140625 | 3 |
Researchers at Stanford University have developed a lithium-ion battery that shuts down as it begins to overheat, potentially meaning the types of catastrophic fires seen in hoverboards, laptops and airliners could become a thing of the past.
Lithium-ion batteries are used in just about all portable electronics. They're light, can store a lot of energy and are easily recharged, but they are also susceptible to overheating if damaged. A short circuit in the battery often leads to fire.
In the new Stanford battery, researchers employed a polyethylene film that has embedded particles of nickel with nanoscale spikes. They coated the spikes with graphene, a conducting material, so that electricity can flow over the surface.
But when the temperature rises, the film expands, and at about 70 degrees Celsius (160 degrees Fahrenheit) the conducting spikes no longer touch each other, breaking the circuit and causing the battery to shut down.
Once the battery shuts down, a runaway thermal reaction is avoided and the battery cools, eventually bringing the nickel spikes back into contact and allowing the electricity flow to resume.
"We can even tune the temperature higher or lower depending on how many particles we put in or what type of polymer materials we choose," said Zhenan Bao, a professor of chemical engineering at the university and one of the research team.
The research was also carried out by Stanford engineer Yi Cui and postdoctoral scholar Zheng Chen. Details were published on Monday in the journal Nature Energy.
"Compared with previous approaches, our design provides a reliable, fast, reversible strategy that can achieve both high battery performance and improved safety," Cui said in a statement.
The battery is a second job for the nickel-embedded polyethylene material. Bao, the Stanford professor, used the same material in a wearable sensor she developed to measure body temperature.
|
<urn:uuid:2d174e4b-5adb-4249-9fec-c1adfcac2957>
|
CC-MAIN-2017-09
|
http://www.itnews.com/article/3021017/researchers-develop-a-lithium-ion-battery-that-wont-overheat.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00624-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.951647 | 378 | 3.390625 | 3 |
A pair of robotic legs will be heading to the International Space Station.
That's right. Robot legs.
The 300-pound robot, which had been in the works for 11 years, has 38 PowerPC processors, including 36 embedded chips, which control its joints. Each of the embedded processors communicates with the main chip in the robot.
The robot, whose legless torso has been attached to a stationary pedestal, is expected to eventually take over some basic duties, such as cleaning and maintenance inside the station, freeing up the astronauts to do more critical work, like scientific experiments. NASA scientists hope that one day, with upgrades to the robot's torso, it will be able to work outside the station, aiding astronauts in spacewalks.
To do much of that work, the robot needs legs.
That's where SpaceX, a commercial space flight company that runs cargo missions to the space station, comes into play.
SpaceX is set to carry the robotic legs onboard one of its Dragon cargo craft in its third contracted resupply mission to the space station. The mission had been scheduled for launch on Sunday but was postponed because of a recent fire that damaged radar equipment on the East Coast of Florida.
The damaged equipment sits in the Eastern Range, a facility that supports missile and rocket launches from both the Cape Canaveral Air Force Station and the Kennedy Space Center.
NASA has not given a new date for the mission launch.
However, once the legs arrive at the station and are attached to R2's torso, the robot will have a fully extended leg span of nine feet. That, according to NASA, will give it "great flexibility" to move around the inside and outside of the space station.
Each leg has seven joints and a device on its foot, called an end effector, a tool that enables the machine to use handrails and sockets.
The end effector also has what NASA is calling a vision system, which is designed to help controllers verify and eventually automate each limb's approach to and grasp of an object.
Over the last two and a half years, astronauts have run experimental trials with R2 to see how it functions in space and to put humans at ease with working with the robot.
So far, the robot has been able to correctly press buttons, flip switches and turn knobs. It also has worked with human tools, using an air flow meter and an RFID inventory scanner.
NASA also reported that R2, which can also catch free-floating objects, has communicated using sign language.
The space agency expects to test Robonaut 2's new legs in June. After checking out the joints, the robot will take its first steps in space.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected].
This story, "NASA's Humanoid Robot to Get a Leg Up on Space Station" was originally published by Computerworld.
|
<urn:uuid:918403ff-d24d-470f-9360-74266ff7badd>
|
CC-MAIN-2017-09
|
http://www.cio.com/article/2377544/government/nasa-s-humanoid-robot-to-get-a-leg-up-on-space-station.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00500-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.953796 | 681 | 3.203125 | 3 |
Malware 101: An IT primer on malicious software
This feature first appeared in the Winter 2015 issue of Certification Magazine. Click here to get your own print or digital copy.
Malware is perhaps the most dangerous threat to the security of the average computer system. Research by Microsoft recently estimated that 17.8 percent of computers worldwide were infected by malware during a three-month period. That is an astonishing number that underscores the clear and present danger posed by malicious software on the modern internet.
Information technology professionals must educate themselves about the risks posed by malware and use that knowledge to defend their organizations against the malware threat. In this article, we provide background information on malware and describe ways that you can create a defense-in-depth approach to protecting your computing assets.
What is malware?
Malware is a shorthand term for “malicious software.” While software developers normally create programs for a useful purpose, such as editing documents, transferring files, or browsing the web, some have more malicious intent in mind. These developers create malware that they design to disrupt the confidentiality, integrity, or availability of information and computing systems. The intent of malware varies widely — some malware seeks to steal sensitive information while other malware seeks to join the infected system to a botnet, where the infected system is used to attack other systems.
The three major categories of malware are viruses, worms and Trojan horses. They mainly differ in the way that they spread from system to system. Viruses are malware applications that attach themselves to other programs, documents, or media. When a user executes the program, works with the document, or loads the media, the virus infects the system. Viruses depend upon this user action to spread from system to system.
Worms are stand-alone programs that spread on their own power. Rather than waiting for a user to inadvertently transfer them between systems, worms seek out insecure systems and attack them over the network. When a worm detects a system vulnerability, it automatically leverages that vulnerability to infect the new system and install itself. Once it establishes a beachhead on the new system, it uses system resources to begin scanning for other infection targets.
Trojan horses, as the name implies, are malware applications that masquerade as useful software. An end user might download a game or utility from a website and use it normally. Behind the scenes, the Trojan horse carries a malicious payload that infects the user’s system while they are using the program and then remains present even after the host program exits.
Hackers create new malware applications every day. Many of these are simple variants on known viruses, worms and Trojan horses that hackers alter slightly to avoid detection by antivirus software. Some viruses, known as polymorphic viruses, actually modify themselves for this purpose. You can think of polymorphism as a disguise mechanism. Once the description of a virus appears on the “most wanted” lists used by signature-detection antivirus software, the virus modifies itself so that it no longer matches the description.
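A grossly simplified sketch of signature detection shows why polymorphism is an effective evasion: the scanner below looks only for fixed byte patterns (invented ones, for illustration), so any change to those bytes defeats it:

KNOWN_SIGNATURES = {
    "demo-worm-a": bytes.fromhex("deadbeef4141"),
    "demo-trojan-b": b"EVIL_PAYLOAD_MARKER",
}
def scan(path):
    # Return the names of any known signatures found in the file's bytes.
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in data]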
Malware and the Advanced Persistent Threat
Recently, a new type of attacker emerged on the information security horizon. These attackers are groups known as Advanced Persistent Threats (APTs). APTs differentiate themselves from a typical hacker because they are well-funded and highly talented. The typical APT receives sponsorship from a government, a national military, or an organized crime ring.
While normal attackers may develop a malware application and then set it free, seeking to infect any system vulnerable to the malware, APTs use a much more precise approach. They carefully select a target that meets their objectives, such as a military contractor with sensitive defense information or a bank with sensitive customer records. Once they’ve identified a target, they study it carefully, looking for potential vulnerabilities. They then select a malware weapon specially crafted to attack that particular target in a stealthy manner.
The advanced nature of APTs means that they have access to malware applications that are custom-developed and unknown to the rest of the world. These attacks, known as zero-day attacks, are especially dangerous because signature-based detection systems do not know they exist and are unable to defend against them.
One of the most well-known examples of an APT in action was an attack in 2010 using malware termed “Stuxnet.” In this attack, believed to have been engineered by the U.S. and Israeli governments, malware infected and heavily damaged an Iranian uranium enrichment plant. Analysis of the malware by security researchers later revealed that it was very carefully developed by talented programmers with access to inside information about the enrichment plant.
Defending against malware
Organizations seeking to defend themselves against malware attacks should begin by ensuring they have active and updated antivirus software installed on all of their computing systems. This is a basic control, but one where organizations often fall short. There are many quality signature detection products on the market that will help protect systems against known threats. This base level of protection will easily defend against the majority of attacks.
Of course, these signature detection systems are not effective against zero-day attacks. That’s where more advanced systems come into play. Businesses seeking defense against APT-style attacks should consider implementing advanced malware defense techniques, such as application detonation and browser isolation. Application detonation systems “explode” new software in a safe environment and observe it for signs of malicious activity. New applications are only allowed on endpoint systems after passing this test.
The most common source of malware infection is unsafe web browsing. Users visit a website containing malware and inadvertently download a file containing malicious code that installs on their system. Education and awareness programs can help reduce this threat, but browser isolation systems go a step further. In this approach, users browse the web through an isolation appliance located outside of the network firewall. The isolation appliance handles all of the web processing and presents the user with a safely rendered version of the website. Any code execution takes place on the appliance and never reaches the end user’s system, isolating it from the malware.
If you are unlucky enough to experience a malware infection, you have a few options at your disposal. If it is a straightforward infection, your antivirus software may be able to completely resolve it. If you experience more complex symptoms, you may need to either rebuild the system from scratch or call in a malware removal specialist.
Malware and you
Would you like to become a malware specialist? If you plan to design or implement a malware defense program for your organization, many of the basic security certifications may come in handy. The Security+, SANS GIAC Security Essentials and CISSP certifications all offer basic training in malware prevention, removal and analysis.
If you’re looking to dive more deeply into malware studies, consider pursuing the SANS GIAC Reverse Engineering Malware (GREM) certification. This certification program prepares individuals to perform advanced analysis of malware for forensic investigations, incident response and system administration. Candidates for the credential must successfully pass a two-hour 75-question examination with a score of 70.7 percent or higher.
Malware remains the most common threat to cybersecurity today. Thousands of viruses, worms and Trojan horses exist on the Internet, seeking to quickly pounce on vulnerable systems. You can defend your organization by educating yourself on the threat, installing antivirus software and considering the deployment of advanced malware defense mechanisms.
|
<urn:uuid:817e4635-2988-4b90-98ab-32eb454bec8f>
|
CC-MAIN-2017-09
|
http://certmag.com/malware-101-primer-malicious-software/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00444-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.933648 | 1,511 | 3.59375 | 4 |
Health Care Slideshow: Social Medicine: Is the Internet Transforming Healthcare? By Don Reisinger | Posted 05-13-2011
18 percent of all U.S. Internet users have gone to the Web to find people who have similar ailments.
23 percent of U.S.-based Internet users who suffer from a chronic disease have used the Internet to find others going through the same issue.
Even though the Internet is useful to Americans, 70 percent of adults say they still go to a health professional to get "information, care, or support" for what ails them.
Those looking for information on "weight loss or gain, pregnancy, or quitting smoking" are most likely to surf the Web.
When Americans want an "accurate medical diagnosis," 91 percent said they will go to a doctor or nurse.
Concern for others is one of the more common reasons people surf the Web for health information; 26 percent of those who are caring for someone with an illness look to the Internet to find information.
In the middle of a "medical crisis," people try to find information wherever they can; 85 percent say they take to the Web to learn more about the issue.
Internet users over age 65 are unlikely to look up information about a medical condition online. In fact, just 10 percent of seniors have done so.
When the health concern involves technical issues, professionals are the preferred resource. When the concern involves personal issues of how to cope with a health issue or get quick relief, non-professionals are preferred by most patients.
|
<urn:uuid:eb4f0af7-a4ed-48ed-9c71-3155acf7c888>
|
CC-MAIN-2017-09
|
http://www.cioinsight.com/print/c/a/Health-Care/Social-Medicine-Is-the-Internet-Transforming-Healthcare-752089
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00620-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.941406 | 319 | 2.75 | 3 |
From its Babylonian roots, two primary forces have shaped banking: regulation and globalization. For millennia, sovereigns have held a tight grip over minting power and have used it as a strategic regulatory tool to establish and maintain power. Today's regulations may be more complex, but they also serve many disparate goals, some of them oddly conflicting. By contrast, globalization has primarily exerted an expansionary influence and helped to level the regulatory pressures over time.
Additionally, technology has served as a disruptive force in many industries throughout history. Communication, entertainment, travel, and retail are but a few industries that have experienced continuous disruptions in their primary business model from the impact of various technologies in the past 50 years.
Yet the banking industry -- especially retail consumer banking -- has not experienced any significant disruption from technology. It can be argued that the last major disruptive force in the banking industry was the ATM, which is roughly 50 years old. The Internet, which in that same period has turned many industries topsy-turvy, has had a minimal impact on retail banking. For example, many banks have not fully embraced mobile banking with robust solutions that meet the lifestyles of their consumers.
Some tend to blame technology. Others blame the industry, and yet a third group lays blame on the consumer's shoulders. These denunciations take the familiar forms of technology being cost-prohibitive, the industry being dominated by technophobes, and consumers being afraid to utilize technology where they keep their money. Upon reflection, however, the tension between regulation -- rooted in a national regulator and its government's domestic interests -- and globalization becomes one of the primary forces inhibiting and confusing the role of technology in banking. Regulation inhibits the adoption and use of technology in its most disruptive form. It forces the inventors, creators, and developers of technology to play within a confined arena, rendering disruption a remote possibility.
Globalization, though an overused term, forces nation-states to relinquish absolute control in favor of economic growth through trade. It has evolved into the gravitational pull that brings countries out of their national banking system's shell. It is worth noting that globalization was accelerated after World War II with the creation of the governing bodies of the Bretton Woods system. The IMF, the World Bank, and the GATT laid the groundwork for the modern cross-border trading system that now includes financial services and banking. The relevant question is how technology can become a positive disruptive force within the constraints of these two powerful forces.
The obvious answer is for banks to develop technology or fund startups that show promise. However, this solution faces a collective action problem. For a technology to be effective, it must be adopted by the industry as a whole, offered as a tool without expectation of windfall profit, and be so transparent and ubiquitous that it can be used in Kalamazoo or Katmandu with no fear of compatibility problems. Therefore, why would any one bank invest the capital needed to develop a technology and not be able to utilize it as a lever for its own competitive advantage? This is one reason banks are unlikely to attempt such a project single-handed as they recover from the recent recession and face higher capital requirements, living wills, increased costs, and risk mitigation regimes. The time and capital might at best create a tool to attract some consumers and at worst create a costly distraction for the bank.
One novel solution is the funding of an R&D body that allows its participants to dream and develop big. The idea might seem outlandish, but there is precedent in other industries. In essence, the banking sector needs an Apollo project, an NIH, or a DARPA: a basic research institution with the goal of creating technologies and solutions for banks and their customers in the 21st century. This institution will also need to be international in scope and funding. Fortunately, the vehicle already exists. The Basel Committee on Banking Supervision, which sets the rules for capital requirements, can also be funded and empowered to develop banking technology.
In the aftermath of the global financial crisis of 2008, there is a push among governments and banks to harmonize the main aspects of global banking regulation. Basel III and its capital requirements are one example. Even the Dodd-Frank Act recognizes its own limitations in extraterritoriality and incorporates the concept of harmonization. Moreover, as banks rise from the ashes of that devastating crisis, consolidation will continue to leave the problem of too big to fail ever present, necessitating more cooperation between banks and regulators in developing countercyclical tools and resolution scenarios. The best news is that technology R&D in the banking sector need not be as costly as the Apollo program (estimated to have cost more than $170 billion when adjusted for inflation).
In an environment where global economic and domestic regulatory forces demand greater cooperation between banks and regulators, why not spend a little money on R&D? I can't think of a better place to conduct banking technology research than a little town in Switzerland named Basel.
Behzad Gohari is a Managing Director at The Althing Group, an advisory firm helping clients navigate the capital markets. Over two decades, and through a dozen startups, he has acted as founder, investor, and strategic advisor.
|
<urn:uuid:5762fdc9-a41a-43fa-ae92-03ed8b10321e>
|
CC-MAIN-2017-09
|
http://www.banktech.com/innovation/can-technology-serve-as-a-positive-disruptive-force-in-banking/a/d-id/1297919?piddl_msgorder=thrd
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00144-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.959585 | 1,065 | 2.96875 | 3 |
Using nanotechnology, proteins and a chemical that powers cells in everything from trees to people, researchers have built a biological supercomputer.
The supercomputer, which is the size of a book, uses much less energy than a conventional electronic supercomputer, so it runs cooler and more efficiently, according to scientists at McGill University, where the lead researchers on the project work.
"We've managed to create a very complex network in a very small area," said Dan Nicolau Sr., chairman of the Department of Bioengineering at McGill. "This started as a back-of-an-envelope idea, after too much rum I think, with drawings of what looked like small worms exploring mazes."
Nicolau has been working on the research for more than 10 years with his son Dan Nicolau Jr.; they have been joined by scientists from Germany, Sweden and The Netherlands.
This research advances work on biological computers that has been going on for years.
Last May, scientists at UC Santa Barbara reported that they were working on a circuit designed to mimic the human brain running approximately 100 artificial synapses.
However, while that mimics a living brain, it did not use biological components.
Nearly a decade ago, scientists predicted that within 15 years hybrid computers would be operating with a combination of technology and living organic material.
Now, researchers are taking the work a step further.
"It's exciting in that this was a real long shot to begin with, almost science fiction," said Patrick Moorhead, an analyst with Moor Insights & Strategy. "We don't necessarily need this as long as something like quantum computing comes through, but it's important to have many irons in the fire. With many options, one should pull through."
The biological computer is designed to process data quickly and accurately using parallel networks, much like traditional electronics supercomputers do. The biocomputer uses a chip that is 1.5 centimeters square with etched channels that carry short strings of proteins instead of the usual electrons. The proteins' movements are driven by adenosine triphosphate, a chemical that enables energy transfer between cells.
McGill scientists call adenosine triphosphate the "juice of life."
While the effort shows that the bio-supercomputer can handle complex classical mathematical problems by using parallel computing, researchers say there is "a lot of work ahead" to make it a full-scale functional computer.
"Now that this model exists as a way of successfully dealing with a single problem, there are going to be many others who will follow up and try to push it further, using different biological agents, for example," said Nicolau. "It's hard to say how soon it will be before we see a full-scale bio-supercomputer."
He added that to enable the bio-computer to take on more complex problems, one solution might be to combine the bio-machine with a conventional computer to create a hybrid device.
"Right now we're working on a variety of ways to push the research further," said Nicolau.
Zeus Kerravala, an analyst with ZK Research, called the work a "big step forward" in the goal of creating useful bio-supercomputers. "The goal here is to tackle some of the big issues in society," he added. "A biological computer would run the calculations differently and potentially give us a different way to get at some big answers."
Moorhead called the work a "breakthrough."
"Just the fact that it can do math is a step forward," he said.
This story, "Biological supercomputer uses the 'juice of life'" was originally published by Computerworld.
|
<urn:uuid:3d89672b-8934-4572-92b2-2c42a4f6604d>
|
CC-MAIN-2017-09
|
http://www.itnews.com/article/3040707/computer-hardware/biological-supercomputer-uses-the-juice-of-life.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00020-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.964665 | 754 | 3.984375 | 4 |
There’s no doubt that the increased use of technology in Connecticut classrooms enhances students’ learning. Students can now access information from across the globe on every subject imaginable and interact with individuals from other continents. But herein is where the problem lies; the internet doesn’t come without risk.
As a district technology coordinator in Connecticut, how can you ensure your students are safe online and acting responsibly? Anti-bullying policies and blocking, you may cry. But is this enough?
We’ve all heard about anti-bullying, and the state of Connecticut is not alone in having laws pertaining to the prevention of bullying amongst students, including through technology (known as ‘cyberbullying’). Passed in 2011, Public Act 11-232 requires schools to create a climate of safety.
Connecticut schools are also required to monitor for and report bullying incidents in a set time period to the school climate specialist. At last count, in 2012, over 91% of schools in Connecticut had adopted safe school climate plans.
Having policies in place to detect and prevent bullying is one key aspect of internet safety. But risks online extend far beyond this, from inappropriate and potentially harmful content to communication with individuals offering inappropriate activities and topics. Sometimes students just aren’t aware of the dangers, and while blocking websites can prevent access to certain areas it doesn’t teach them why something poses a risk.
This is where an internet safety monitoring system comes in: constantly monitoring your network for potential issues and flagging them for relevant staff members. Not only can it help identify cyberbullying incidents and even their perpetrators (something which is required by state law), but it can also act as an early warning mechanism for other issues by giving staff a picture of incidents over time.
If you’re interested in taking your anti-bullying policies further to extend into full internet safety, take a look at our network monitoring software and see how it could help you deliver an even safer environment for your students.
|
<urn:uuid:ade14b2d-519e-4032-904e-7934837d5d23>
|
CC-MAIN-2017-09
|
https://www.imperosoftware.com/internet-safety-in-connecticut-taking-anti-bullying-policies-to-the-next-level/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00196-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.947176 | 418 | 3.078125 | 3 |
The Cultural Effects of Video Gaming
Xbox. PlayStation. Wii. When you think of video games, it’s likely that some of these names will pop into your head. It’s also likely you’ll conjure up images of Super Mario stomping on bad guys, or of enemy warriors battling on an alien planet, or of your own James Bond-esque spy mission in a 3D virtual world.
But these days, video games aren’t just for tech geeks. Nor are they purely for entertainment value. In fact, games have been put to use in a new way altogether: as a platform for educational, business and therapeutic purposes.
“The gaming industry itself is maturing — both in its development style and its target product,” said D.J. Kehoe, assistant to the director of the information technology program in the College of Computing Sciences at the New Jersey Institute of Technology (NJIT).
In fact, some games are actually intended to be educational. For instance, last spring Kehoe directed a group of students as they developed a game for Pearson Education. The purpose of the project was to create a gaming environment that would bolster the reading skills of middle- and high-school students.
Further, a congressionally funded training program called the “University XXI program” has built 21st-century gaming simulations to train U.S. soldiers and Air Force recruits.
“We have built up capacity to help analyze and update training materials for the Army, and recently we’ve been looking at training materials and education materials for the Air Force,” said Sheilagh O’Hare, systems analyst at the University of Texas at Austin.
Her team worked with the medical department at Ft. Sam in Houston to update PowerPoint slides from the 1990s to incorporate video game elements and create interactive scenarios, she said.
In fact, the U.S. Army has even begun to leverage games as a way to bolster recruiting in areas where interest has been dwindling. Take Philadelphia, for example. As part of its marketing efforts, the Army has erected a state-of-the-art facility in a mall there. Complete with computers and Xbox games — not to mention a Black Hawk helicopter simulator room — the facility places potential recruits, as well as casual mall-goers, in a virtual war environment.
“There’s a game called ‘America’s Army,’ which is a free-to-play, networked, multiplayer game, and it’s just like going through military. They train you and you go on missions, and it’s supposed to give people an idea of what goes on in the military and the kind of work they’ll be doing,” Kehoe said.
One of the goals of this facility is to expose individuals to war-like conditions at an early age in the hopes that they will be more likely to enlist once they become eligible.
Today, almost seven years to the month after it launched, “America’s Army” has become one of the most popular Web-based computer games in the world.
“It’s good for recruiting,” Kehoe said, “but it’s also good just for entertaining.”
Games as Motivational Tools
Research indicates that younger employees — especially those from the Millennial generation — expect new technologies such as games and social networking to be deployed in the workplace and beyond.
Perhaps for this reason, a number of companies have turned to games to increase productivity, morale and retention, reducing turnover rates and absenteeism — measures that will ultimately impact the bottom line. Take, for instance, the Snowfly Capstone program, a Web-based software program. The idea behind the program is to reinforce positive performance and workplace behaviors, such as showing up to work on time or successfully completing an assigned task.
“We have a point-based incentive program that we set up for our customers, and [they] determine specific behaviors, goals and activities that [they] want to reinforce,” said Tyler Mitchell, vice president of market development at Snowfly. “If [employees] do something worthy of recognition, they receive what we call Snowfly game tokens. Via those game tokens, they can select from 11 to 12 games. [Then you] play a game, win a random number of points, accumulate those points and redeem the points for a variety of rewards.”
These games don’t require skills to win, however.
“Employees don’t win more points because they are good at video games; they win more points because they earn more game tokens. And the more games they play, the more points they accumulate,” Mitchell explained. “Once you earn the right to play a game, you have just as good a chance as anyone else to receive a high point payout.”
In fact, a good percentage of employees choose to take their game tokens home to play with their kids in exchange for chores or homework.
“You’re not losing anything [and] nothing is taken from you,” Mitchell said.
Thus far, the program has been implemented in several industries, including retail, health care, contact centers, banks and financial institutions.
Furthering Cultural Competence
Games can not only teach language skills, they can instill and propagate cultural knowledge, sensitivity and awareness.
Take, for instance, Alelo Inc., a developer of learning products that advance cross-cultural skills. Alelo has devised a series of interactive, game-based, 3-D simulations of real-life social interactions.
“The technology is a combination of artificial intelligence and video-game technology that allows us to create simulated game worlds, as well as online practice environments, where learners can practice conversing in the foreign language that they’re trying to learn,” said W. Lewis Johnson, president and chief scientist at Alelo.
“We’re really focusing on the skills [people] need to engage effectively in face-to-face communications, which includes not just the proper language but also the proper use of language, what forms of politeness are appropriate in different social contexts, body language and other cultural norms and expectations that arise in different situations that people might encounter,” he said.
For instance, in a game called “Mission to Iraq,” people must introduce themselves in accordance with local customs. Or, if they’re invited to an Iraqi’s home, they must understand the proper hospitality norms. Additionally, players must understand how to develop business relationships in other cultures.
“In Iraq — like many other countries — business dealings are relationship-based, so you have to practice small talk in the game in order to be able to develop rapport and [achieve success] with the project or the mission,” Johnson said.
Courses developed by the company are widely used in the U.S. military as well as the British army. Within these groups, the audience could include a number of different people:
- Soldiers on the ground who need to learn basic language skills, such as asking questions or exchanging information.
- Small unit leaders who may have to conduct searches through neighborhoods and would need to know how to address and interact with the occupants of homes.
- Leaders who meet with politicians, bureaucrats or neighborhood leaders to facilitate a particular project.
“All the courses address cultural as well as linguistic competence in varying degrees, depending on the learning objectives of a particular customer,” Johnson said.
As a Business Training Tool
Video gaming also can help executives learn business acumen — albeit in an alternate universe — as evidenced by the intricately designed, massively multiplayer game “EVE Online.”
Set in a science-fiction galaxy tens of thousands of years in the future, players can select professions — ranging from commodities trader to battle fleet commander to pirate — and begin a journey in search of fame, fortune and adventure.
The game allows players to hone a variety of skills, including strategizing and leadership capabilities.
“[A player] can choose to become an industrialist [or] focus on production, mining or trade, and in those professions they are basically training themselves in standard business skills such as production management, logistics, cost-effectiveness of production and market analysis, which are the basic questions of economic management,” said Eyjólfur Guðmundsson, lead economist at CCP, the developer and publisher of “EVE Online.”
“They can train in leadership [through the] management of large corporations and diplomacy in order to have negotiations and discussions with allies about common goals or [finding ways] to outsmart their enemies,” Guðmundsson explained. “All these professions train [people] in decision making, planning and strategic management. It’s a perfect business training platform.”
Guðmundsson is assigned the task of monitoring the in-game economics at any given time. In addition, he publishes reports on the markets and trade that occur within the game, affecting not only the daily operation of the game but also its future development.
In the same way entrepreneurs and executives rely on real-world economic indicators to make astute decisions, those who play “EVE Online” count on such updates and analysis, said the company CEO, Hilmar Pétursson, in a news release.
In fact, “EVE Online” recently broke a record for the most simultaneous online users, mustering more than 50,000 at a given time. That, plus the nearly quarter of a million subscribers, ensures a vibrant online society of gamers.
“We have a small, functioning economy and have even started the first steps toward democracy [through] a council of representatives elected by the players in a democratic election,” Guðmundsson said. “[The council members] discuss future development of ‘EVE’ [with CCP].”
A Therapeutic Device
Repurposing games to make them profitable in the business world is one thing, but research has shown that games can be exceptionally effective in the medical arena as well.
“Initially, the thought was that it’s a form of entertainment, but we realized that video games could be a very effective tool for pain management,” said Kristin Lindsay, coordinator at Child’s Play, a charity with the primary goal of enriching the lives of hospitalized children.
The focus of the nonprofit is to help bring video games to hospitalized children, both for entertainment and also because scientific studies have shown that playing games helps patients heal faster with a reduced need for pain medication.
“There are dozens of medical studies for using video games in various medical applications — including pain management, physical therapy, emotional well-being for long-term hospitalization, and socialization,” Lindsay said.
Video games also allow patients to connect with others, remain socially active and even develop relationships.
“For [some], it’s not just the distraction of the game and the fun of playing it, but knowing that they’re not alone, that other kids are going through the same thing as they are and forming friendships,” Lindsay said.
“These kids are being isolated for medical reasons, and that hurts them psychologically,” she added. “Having that arena where they can play online games and talk to other people, just getting out in a virtual way and being able to interact with other people, is psychologically very important. And that psychological benefit has definite ramifications in terms of pain management.”
Kehoe also is involved with a project in tandem with NJIT’s biomedical engineering program on the premise that robotic arm interfaces will help with the rehabilitation of patients suffering from cerebral palsy, strokes or other physical disabilities.
“We’re creating a gaming environment to keep them interested and to mask what they’re doing with an entertainment feel to it,” Kehoe said. “What they’re doing in real life is moving their arms or hands or whatever they’re working on in specific patterns with measurable results, but what they think they’re doing is playing a game.”
– Deanna Hartley, [email protected]
|
<urn:uuid:0e9cf3ed-501f-4334-af29-3e4bb6f160f8>
|
CC-MAIN-2017-09
|
http://certmag.com/the-cultural-effects-of-video-gaming/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00548-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.957334 | 2,575 | 2.96875 | 3 |
NEW YORK, NY--(Marketwired - Aug 20, 2014) - As the summer comes to a close, many families may plan on taking a late-summer getaway to New York City -- for some a first time visit, and for others an annual ritual.
With the National September 11 Memorial & Museum now open, parents and guardians should consider using a visit to the Museum as an opportunity to teach young kids -- who may not have been born in 2001 -- about what happened that day, says Tom Rogers, children's author and author of Eleven, the story of a boy who turns eleven on September 11th, 2001.
"A family trip to the 9/11 Memorial & Museum can raise important questions in kids' heads," said Rogers. "The Museum houses collections of artifacts and stories of people directly affected, and the 9/11 Memorial's website has even more information to help adults explain 9/11 to the kids in their lives."
Rogers has assembled some guidelines to help adults who would like to visit the 9/11 Memorial & Museum, and explain the events of 9/11 to the kids in their lives. He advises adults to...
- Be aware of emotions -- yours and theirs. Talking about 9/11 can bring up long-buried feelings -- adults should remember that there's no need to be cold, and that it's okay to be sad. "Kids might react differently than you did, and that's okay," added Rogers. "It's hard to imagine, but 9/11 is just history to today's kids -- not a personal memory like it is for us."
- Remember the heroes. "You can't visit the 9/11 Memorial & Museum without being reminded of the bravery and selflessness that we witnessed on that day," said Rogers. "Tell the kids you're teaching about how the worst of times brought out the best in so many of us."
- Just start talking. 9/11 is a tough subject, but if you don't teach them, they'll hear about it from someone else -- and there are a lot of strange theories and misinformed individuals out there. "Children should learn about that difficult time in a place where they feel safe -- with you," Rogers said. "Take your time, keep it simple, and remember to listen to the questions they're asking."
"Alex, the main character of Eleven, wants to be tough -- but he doesn't fully understand what's going on when he learns what happened on September 11th," noted Rogers. "It will be even harder for kids who weren't born yet -- and a visit to the 9/11 Memorial & Museum can help today's kids begin to understand that day's importance."
For more information about Eleven, please visit the website here. To schedule a conversation with Tom Rogers, please contact Eric Mosher of Sommerfield Communications at (212) 255-8386 or [email protected].
About Tom Rogers
Tom Rogers is a novelist and the screenwriter of numerous animated films, including The Lion King 1 1/2, Kronk's New Groove, and Disney's Secret of the Wings and the upcoming Legend of the NeverBeast. Eleven, the journey of a boy who turns eleven on September 11th, 2001, is his first novel for young adults.
|
<urn:uuid:7db3a0e6-f431-496d-87b8-e220b4e5a3ce>
|
CC-MAIN-2017-09
|
http://www.marketwired.com/press-release/family-trip-9-11-museum-memorial-new-york-can-present-opportunities-adults-talk-young-1940064.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00492-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.966026 | 677 | 2.71875 | 3 |
Wombat released its social engineering training module to defend against social engineering threats, including spear phishing and social media-based attacks.
Commonly defined as the art of exploiting human psychology to gain access to buildings, systems or data, social engineering is evolving so rapidly that technology solutions, security policies, and operational procedures alone cannot protect critical resources.
A recent Check Point sponsored survey revealed that 43 percent of the IT professionals surveyed said they had been targeted by social engineering schemes. The survey also found that new employees are the most susceptible to attacks, with 60 percent citing recent hires as being at “high risk” for social engineering.
A combination of social engineering assessments, which stage mock attacks on employees for the purposes of training, and a library of in-depth training modules to educate and reinforce concepts, work together to deliver measurable employee behavior change. Employees who fall for mock attacks are very motivated to learn how to avoid real attacks.
The social engineering training module explains the psychology behind these attacks, and gives practical tips for recognizing and avoiding them, which employees apply immediately during the training to lengthen retention.
The social engineering training module is the latest module available in Wombat’s Security Training Platform that helps companies foster a people-centric security culture and provide security officers with effective education tools.
With the platform, security officers can:
- Take a baseline assessment of employee understanding
- Help employees understand why their security discretion is vital to corporate health
- Create a targeted training program that addresses the most risky employees and/or prevalent behaviors first
- Empower employees to recognize potential threats and independently make correct security decisions
- Improve knowledge retention with short interactive training sessions that work easily into employees’ busy schedules and feature proven effective learning science principles
- Monitor employee completion of assignments and deliver automatic reminders about training deadlines
- Show measurable knowledge improvement over time with easy-to-read reports for executive management.
|
<urn:uuid:5fd7b884-fe84-4238-83d5-72c3197a67fa>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2013/01/15/wombat-unveils-social-engineering-security-training-module/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00016-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.945499 | 388 | 2.53125 | 3 |
The growing use of wireless technology combined with the complexity of many medical devices has raised concerns about how protected they are against information security risks that could affect their safety and effectiveness.
That was the central conclusion of a report issued today by the Government Accountability Office, which called on the Food and Drug Administration, the agency within the Department of Health and Human Services (HHS) that is responsible for ensuring the safety of medical devices such as implantable cardioverter defibrillators or insulin pumps, to tighten the security requirements of medical devices.
From the GAO: "Medical devices may have several such vulnerabilities that make them susceptible to unintentional and intentional threats, including untested software and firmware and limited battery life. Information security risks resulting from certain threats and vulnerabilities could affect the safety and effectiveness of medical devices. These risks include unauthorized changes of device settings resulting from a lack of appropriate access controls. Federal officials and experts noted that efforts to mitigate information security risks need to be balanced with the potential adverse effects such efforts could have on devices' performance, including limiting battery life."
The GAO listed a number of specific threats including:
- Limited battery capacity: The limited capacity of batteries used in certain medical devices hinders the possibility of adding security features to the device because such features can require more power than the battery can deliver. The limited battery capacity makes these medical devices susceptible to an attack that would drain the battery and render the device inoperable.
- Remote access: Although remote access is a useful feature of certain medical devices, it could be exploited by a malicious actor, possibly affecting the device's functionality.
- Continuous use of wireless communication: Wireless communication allows medical devices to communicate; however, it could create a point of entry for unauthorized users to modify the device, especially if the wireless communication is continuously enabled.
- Unencrypted data transfer: Unencrypted data transfer is susceptible to manipulation. For example, a malicious actor could access and modify data that are not securely transmitted, affecting patient safety by altering information used in administering therapy.
- Untested software and firmware: Untested software can be a vulnerability when a security issue in the software or firmware has not been identified and addressed.
- Susceptibility to electromagnetic (e.g., cellular) or other types of unintentional interference: This can cause vulnerabilities that make a device susceptible to unintentional or intentional threats. For example, if these medical devices are not designed with resistance to electromagnetic interference, their functionality can be affected.
- Limited or nonexistent authentication process and authorization procedures: A limited or nonexistent authentication process and authorization procedures could leave certain medical devices susceptible to unauthorized activities, such as changes to the devices' settings. Authentication is the verification of a user's identity-often by requesting some type of information, such as a password-prior to granting access to the device. Authorization is the granting or denying of access rights to a device.
- Disabling of warning mechanisms: Warning mechanisms-such as a vibration or loud tone-could be disabled on certain medical devices. If these mechanisms were disabled, a patient would not be alerted if, for example, unauthorized modifications were made to the device.
- Design based on older technologies: Certain medical devices can be designed using older technologies, such as older versions of software or firmware. Additionally, these devices might not have been designed with security as a key consideration.
- Inability to update or install security patches: The inability to update or install security patches in certain medical devices could prevent identified software defects from being addressed.
According to FDA, most software-related medical device problems occur because devices are using software that has been revised since it was reviewed by FDA.
The GAO report also noted that addressing information security risks for certain medical devices involves additional safety considerations that are not typically necessary for other types of products. For example, incorporating encryption into the medical device could mitigate the information security risk of unauthorized changes to the settings of the device. However, adding encryption to a device could drain its battery more quickly, making it necessary to change the battery more frequently. Changing the battery for active implantable devices, such as a pacemaker, involves undergoing a surgical procedure, which has its own potential health risks. In contrast, two information security researchers we spoke with said that, in their opinion, technology has advanced such that encryption can be added to a medical device without using as much energy as before.
For its part, the FDA said that "in the future the agency intends to enhance its efforts related to information security. For example, officials said the agency will consider information security risks resulting from intentional threats when reviewing manufacturers' submissions for new devices. Officials said that they will consider whether the manufacturer identified the appropriate information security risks resulting from intentional threats and, if applicable, what proposed mitigation strategies the manufacturer included."
FDA officials also told the GAO that the agency is currently planning to review its approach to evaluating software used in medical devices. Officials said the review of its approach will be conducted by a contractor and will involve an analysis of how the agency considers software in medical devices during premarket reviews.
|
<urn:uuid:1342c261-ac9b-40e6-8981-00bc02d82535>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2223222/mobile-security/wireless-medical-devices-face-myriad-security-concerns.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00068-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.95262 | 1,037 | 2.84375 | 3 |
Imagine controlling your phone or computer simply by touching the palm of your hand. An experimental device being developed by researchers at Microsoft and Carnegie Mellon University does exactly that. The device, called OmniTouch, is a wearable system that turns virtually any surface, including your skin, into a functioning touch screen. The project is being led by Carnegie Mellon student Chris Harrison and Microsoft researchers Hrvoje Benko and Andrew Wilson.
|
<urn:uuid:eeedcaa0-24f3-4bab-a374-f1322273250e>
|
CC-MAIN-2017-09
|
http://www.govtech.com/technology/Transforming-Your-Skin-into-a-Touch-Screen.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00488-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.945673 | 103 | 2.625 | 3 |
New data is bringing scientists much closer to proving that a particle discovered in the Large Hadron Collider last year is the elusive Higgs boson.
Scientists with the collider said they have analyzed two and a half times more data than was available to them last summer when they initially announced the particle find. And the more they analyze, the more likely they think the new particle actually is a Higgs boson.
[ QUICK LOOK: The Higgs boson phenomenon ]
"The preliminary results with the full 2012 data set are magnificent and to me it is clear that we are dealing with a Higgs boson, though we still have a long way to go to know what kind of Higgs boson it is," said Joe Incandela, a professor of physics at the University of California, Santa Barbara and a researcher at CERN, the European Organization for Nuclear Research.
Last summer, CERN researchers announced the discovery of a new particle and said early indications pointed to it being the Higgs boson, which has such great mystery and scientific importance that it has been dubbed the God particle.
The Higgs boson is a theoretical sub-atomic particle that is considered to be the reason everything has mass. Basically, without mass -- without the Higgs boson -- there would be no structure, no weight, to anything.
For at least four decades, scientists have hunted for the particle, which has become a cornerstone of physics theory.
Today's announcement was made at the Moriond Conference, a physics event held in Italy.
What scientists are focusing on now is what kind of Higgs boson this newly found particle might be. It could be the Higgs boson predicted by the Standard Model of particle physics, or it could be something different that goes beyond the Standard Model, as some theories suggest.
CERN noted that to figure out whether this is a Standard Model Higgs boson, scientists will measure how quickly it decays into other particles and then compare the results to theoretical predictions.
The Large Hadron Collider, which includes a 17-mile underground loop on the border of France and Switzerland, was shut down last month for a two-year overhaul. The collider won't run any particle collisions until 2015, although the CERN lab is set to be back up in the second half of 2014.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected].
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Scientists closer to declaring discovery the 'God Particle'" was originally published by Computerworld.
|
<urn:uuid:f566f78c-16b1-4b45-b1bd-f0be62fd1f32>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2164404/data-center/scientists-closer-to-declaring-discovery-the--god-particle-.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00364-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.947473 | 567 | 2.796875 | 3 |
The assembler directives do not emit machine language but, as the name implies, direct the assembler to perform certain operations only during the assembly process. Here are a number of directives that we shall discuss.
CSECT Identifies the start or continuation of a control section.
DSECT Identifies the start or continuation of a dummy control section, which is used to pass data to subroutines.
EJECT Start a new page before continuing the assembler listing.
END End of the assembler module or control section.
EQU Equate a symbol to a name or number.
LTORG Begin the literal pool.
PRINT Sets some options for the assembly listing.
SPACE Provides for line spacing in the assembler listing.
START Define the start of the first control section in a program.
TITLE Provide a title at the top of each page of assembler listing.
USING Indicates the base registers to use in addressing.
By definition, a control section (CSECT) is "a block of coding that can be relocated (independent of other coding) without altering the operating logic of the program."* Every program to be executed must have at least one control section. If the program has only one control section, as is usually the case, we may begin it with either a CSECT or START directive. According to Abel, a START directive "defines the start of the first control section in a program"**, though he occasionally contradicts himself. We shall later discuss reasons why a program might need more than one control section. In that case, it is probably best to use only the CSECT directive.
* The definition is taken from page 109 of Programming Assembler Language, by Peter Abel, 3rd Edition, ISBN 0-13-728924-3. The segment in quotes is taken directly from Abel, who also has it in quotes. The source is some IBM document.
** Abel, page 577. But see page 40. Abel has trouble giving a definition.
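To see how these directives fit together, here is a minimal program skeleton. It is only an illustrative sketch: the program name MYPROG, the choice of register 12 as the base register, and the placeholder comment are assumptions, not taken from the notes above.

MYPROG   CSECT                  DEFINE THE ONLY CONTROL SECTION
         BALR  12,0             LOAD ADDRESS OF NEXT INSTRUCTION INTO R12
         USING *,12             TELL THE ASSEMBLER R12 IS THE BASE REGISTER
*        INSTRUCTIONS AND DATA DEFINITIONS GO HERE
         LTORG                  PLACE THE LITERAL POOL NEAR THE END
         END   MYPROG           END OF THE MODULE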
A DSECT (Dummy Section) is used to describe a data area without actually reserving any storage for it. This is used to pass arguments from one program to another. Consider a main program and a subroutine. The main program will use the standard data definitions to lay out the data. The subroutine will use a DSECT, with the same structure, in order to reference the original data. The calling mechanism will pass the address of the original data. The subroutine will associate that address with its DSECT and use the structure found in the DSECT to generate proper addresses for the arguments. We shall discuss Dummy Sections in more detail later.
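As a sketch of the idea, consider the fragment below. The field names, field lengths, and the choice of register 2 are illustrative assumptions, not taken from the notes above.

RECORD   DSECT                  DESCRIBES THE CALLER'S DATA; RESERVES NO STORAGE
RNAME    DS    CL20             20-BYTE NAME FIELD
RAMT     DS    PL4              PACKED DECIMAL AMOUNT FIELD
*        IN THE SUBROUTINE, SUPPOSE REGISTER 2 HOLDS THE ADDRESS PASSED BY THE CALLER
         USING RECORD,2         MAP THE DUMMY SECTION ONTO THE ADDRESS IN REGISTER 2
*        REFERENCES TO RNAME AND RAMT NOW RESOLVE TO DISPLACEMENTS FROM REGISTER 2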
The END statement must be the last statement of an assembler control section. The form of the statement is quite simple. It is

         END   LABEL

where LABEL names the program's entry point. So, our first program had the following structure.

LAB1     START 0
         Some program statements
         END   LAB1

Note that it easily could have been the following.

LAB1     CSECT
         Some program statements
         END   LAB1
The EQU directive is used to equate a name with an expression, symbolic address, or number. Whenever this name is used as a symbol, it is replaced. We might do something such as the following, which makes the symbol R12 equal to 12; the symbol is replaced by that value when the assembler is run.

R12      EQU   12

There are also uses in which symbolic addresses are equated. Consider this example.

PRINT    DC    CL133' '
P        EQU   PRINT           Each symbol references the same address

One can also use the location counter, denoted by "*", to set the symbol equal to the current address. This example sets the symbol RETURN to the current address.

RETURN   EQU   *               BRANCH TO HERE FOR NORMAL RETURN
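In practice, many programs begin with a block of EQU statements that give symbolic names to all sixteen general-purpose registers, so that R0 through R15 can be written instead of bare numbers. This is a common convention rather than anything required by the assembler; the sketch below shows the pattern.

R0       EQU   0
R1       EQU   1
R2       EQU   2
R12      EQU   12              THE BASE REGISTER IN OUR EXAMPLES
R15      EQU   15
*        AND SIMILARLY FOR THE REMAINING REGISTERS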
The Location Counter

As the assembler reads the text of a program, from top to bottom, it establishes the amount of memory required for each instruction or item of data. The Location Counter is used to establish the address for each item. Consider an instruction or data item that requires N bytes for storage. The action of the assembler can be thought of as follows:

1. The assembler produces the binary machine language equivalent of the data item or instruction. This bit of machine language is N bytes long.
2. The machine language fragment is stored at address LC (Location Counter).
3. The Location Counter is incremented by N. The new value is used to store the next data item or instruction.

The location counter is denoted by the asterisk "*". One might have code such as:

SAVE     DS    CL3
KEEP     EQU   *+5

Suppose the symbol SAVE is associated with location X'3012'. It reserves 3 bytes for storage, so the location counter is set to X'3015' after assembling the item. The symbol KEEP is now associated with X'3015' + X'5' = X'301A'.
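The following fragment extends that idea; the starting address X'2000' and the field names are invented purely for illustration.

*        ASSUME THE LOCATION COUNTER STANDS AT X'2000' HERE
FLDA     DS    CL8             FLDA = X'2000'; THE LC ADVANCES TO X'2008'
FLDB     DS    CL4             FLDB = X'2008'; THE LC ADVANCES TO X'200C'
HERE     EQU   *               HERE = X'200C', THE CURRENT VALUE OF THE LC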
The Literal Pool contains a collection of anonymous constant definitions, which are generated by the assembler. The LTORG directive defines the start of a literal pool. While some textbooks may imply that the LTORG directive is not necessary for the use of literals, your instructor's experience is different. It appears that an explicit LTORG directive is required if the program uses literal arguments. The classic form of the statement is as follows, where the "L" of "LTORG" is to be found in column 10 of the listing.

         LTORG *

This statement should be placed near the end of the listing, as in the next example taken from an actual program.
240 * LITERAL POOL
000308 242 LTORG *
000308 00000001 243 =F'1'
000000 244 END LAB1
Here, line 243 shows a literal that is inserted by the assembler.
The PRINT directive controls several options that impact the appearance of the listing. Two common variants are:

PRINT ON,NOGEN,NODATA          WE USE THIS FOR NOW
PRINT ON,GEN,NODATA            USE THIS WHEN STUDYING MACROS

The first operand is the listing option. It has two values: ON or OFF.
ON – Print the program listing from this point on. This is the normal setting.
OFF – Do not print the listing.

The second operand controls the listing of macros, which are single statements that expand into multiple statements. We shall investigate them later. The two options for this operand are NOGEN and GEN.
GEN – Print all the statements that a macro generates.
NOGEN – Suppress the generated code. This is the standard option.

The third operand controls printing of the hexadecimal values of constants.
DATA – Print the full hexadecimal value of the constants.
NODATA – Print only the leftmost 16 hex digits of the constants.
A typical use of the USING directive would be found in our first lab assignment.

         BALR  R12,0           ESTABLISH
         USING *,R12           ADDRESSABILITY

The structure of this pair of instructions is entirely logical, though it may appear quite strange. First note that the USING *,R12 is a directive, so that it does not generate binary machine language code. The BALR R12,0 is an incomplete subroutine call. It loads the address of the next instruction (the one following the USING, since that is not an instruction) into R12 in preparation for a Branch and Link that is never executed. The USING * part of the directive tells the assembler to use R12 as a base register and to begin displacements for addressing from the next instruction. This mechanism, base register and offset, is used by IBM in order to save memory space. We shall study it later.
Directives Associated with the Listing
Here is a list of some of the directives used to affect the appearance of the printed listing that usually was a result of the program execution process. In our class, this listing can be seen in the Output Queue, but is never actually printed on paper. As a result, these directives are mostly curiosities.

EJECT    This causes a page to be ejected before it is full. The assembler keeps a count of lines on a page and will automatically eject when a specified count (maybe 66) is reached; EJECT lets one force an early page break.

SPACE    This tells the assembler to place a number of blank lines between each line of the text in the listing. Typical values are 1, 2, or 3.
SPACE 1  Normal spacing of the lines
SPACE 2  Double spacing; one blank line after each line of text
SPACE 3  Triple spacing; 2 blank lines after each line of text.

TITLE    This allows any descriptive title to be placed at the top of each listing page. The title is placed between two single quotes.
TITLE ‘THIS IS A GOOD TITLE’
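Putting the listing-control directives together, a fragment of a source file might look like the sketch below. The title text and the spacing value are arbitrary choices for illustration only.

         TITLE 'PAYROLL PROGRAM - FIRST VERSION'
         PRINT ON,NOGEN,NODATA
         SPACE 2                REQUEST EXTRA BLANK LINES IN THE LISTING
         EJECT                  FORCE A PAGE BREAK AT THIS POINT IN THE LISTING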
|
<urn:uuid:2f4e8b26-9d8f-426b-bb38-9d99e794c5d4>
|
CC-MAIN-2017-09
|
http://edwardbosworth.com/MY3121_LectureSlides_HTML/AssemblerDirectives.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00404-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.871853 | 1,958 | 3.5625 | 4 |
Concept of FTTx
A simple understanding of FTTx is Fiber to the x, where x can be replaced by H for home, B for building, C for curb, or even W for wireless. It is a technology widely used in today's networks. As we know, compared to copper or digital radio, fiber's high bandwidth and low attenuation easily offset its higher cost. Installing fiber optics all the way to the home or to the user's workplace has always been the goal of the fiber optic industry. With optical fiber running all the way to the subscriber, we can get unprecedented speeds and enjoy more services at home, such as teleworking, tele-medicine, online shopping and so on. Precisely because the demand for bandwidth keeps spiraling upwards, FTTx technology is now very popular and has become imperative.
FTTx Enabling Technologies
According to the different termination points, the common FTTx architectures include the following types:
1. FTTC: Fiber To The Curb (or Node, FTTN)
Fiber to the curb brings fiber to the curb, or just down the street, close enough for the copper wiring already connecting the home to carry DSL (Digital Subscriber Line). FTTC bandwidth depends on DSL performance, and that bandwidth declines over longer distances from the node to the home. Though the cost of FTTC is lower than FTTH for the initial installation, it is limited by the quality of the copper wiring currently installed to or near the home and by the distance between the node and the home. Thus, in many developed countries, FTTC is now gradually being upgraded to FTTH.
2. FTTH Active Star Network
FTTH active star network means that a home-run active star network has one fiber dedicated to each home. It is the simplest way to achieve fiber to the home and offers the maximum amount of bandwidth and flexibility. However, this architecture generally costs more, as it requires both electronics on each end and a dedicated fiber for each home.
3. FTTH PON (Passive Optical Network)
The FTTH PON architecture consists of a passive optical network (PON) that allows several customers to share the same connection, without any active components (i.e., components that generate or transform light through optical-electrical-optical conversion). In this architecture, a PON splitter is usually needed. The PON splitter is bi-directional: signals can be sent downstream from the central office and broadcast to all users, and signals from the users can be sent upstream and combined onto one fiber to communicate with the central office. The PON splitter is an important passive component used in FTTH networks. There are mainly two kinds of passive optical splitters: one is the traditional fused type, known as the FBT coupler or FBT WDM optical splitter, which features a competitive price; the other is the PLC splitter based on PLC (Planar Lightwave Circuit) technology, which has a compact size and suits high-density applications. Because sharing cuts the cost of the links substantially, this architecture is preferred by many when choosing among FTTH architectures.
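To make the effect of the splitter concrete, here is a rough, illustrative power budget using typical vendor figures rather than numbers from this article: a 1:32 PLC splitter introduces roughly 17.5 dB of loss, and standard single-mode fiber attenuates about 0.35 dB per km at 1310 nm, so a 20 km feeder-plus-drop run costs about 20 × 0.35 = 7 dB. The total of roughly 24.5 dB (plus a margin for connectors and splices) must fit within the optical budget of the PON optics, which is why split ratio and reach always trade off against each other.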
There are two major current PON standards: GPON (gigabit-capable PON) and EPON (Ethernet PON). The former uses an IP-based protocol, originally built on ATM protocols but in its latest incarnation using a custom framing protocol, GEM. EPON is based on the IEEE standard for Ethernet in the First Mile, targeting cheaper optical components and native use of Ethernet. In addition, there is BPON (broadband PON), which was the most popular PON application in the beginning. It also uses ATM as the protocol (BPON digital signals operate at ATM rates of 155, 622 and 1244 Mb/s).
The deployment technologies of FTTx generally concern the deployment of fiber optic cables, and the termination of the cable is usually an important part of it. When we terminate fiber optic cable, splicing is one of the necessary steps. Fiber optic splicing includes fusion splicing and mechanical splicing; fusion splicing is now more widely used because of its good performance and easy operation. In addition, cleaving, polishing and end-face cleaning are also important in fiber optic termination. Besides these necessary steps, good connectors, pigtails, fiber terminal boxes (FTB) and the right tool kits are also essential parts of the termination work.
Testing and Commissioning FTTx Network
Though FTTx reduces the cost of using fiber optics, its components still tend to be more expensive than those of other networks. Meanwhile, to ensure the network works well, it is necessary to test and commission it. Testing an FTTx network is similar to other OSP (Outside Plant) testing, but the splitter and WDM add complexity. The commonly used testers include:
VFL – VFL, short for Visual Fault Locator, is a device that can locate breaks, bends or cracks in the fiber glass. It can also locate faults within the OTDR dead zone and identify a fiber from one end to the other. Designed with an FC, SC, ST universal adapter, this red-light fiber tester needs no additional adapters; it can locate faults up to 10 km away in fiber cable, and it is compact and light in weight, with a red laser output.
Power Meter and Light Source – A power meter is used to measure received signal power, while a light source is used to launch a modulated or unmodulated optical wave into the fiber under test. The light source is usually used together with a fiber optic power meter, and the pair acts as an economical and efficient solution for fiber optic network work. It is the most straightforward way to measure fiber loss (see the short worked example after this list).
Optical Time Domain Reflectometer (OTDR) – An OTDR is an optoelectronic instrument used to characterize an optical fiber. It can offer an overview of the whole system under test and can be used for estimating the fiber length and overall attenuation, including splice and mated-connector losses. It can also be used to locate faults, such as breaks, and to measure optical return loss. It is an expensive tester and requires more skill to use.
OCWR (Optical Continuous Wave Reflectometer) – An OCWR is an instrument used to characterize a fiber optic link, wherein an unmodulated signal is transmitted through the link and the resulting light scattered and reflected back to the input is measured. It is useful in estimating component reflectance and link optical return loss.
Optical Fiber Scope – An optical fiber scope is used for inspecting fiber terminations, providing a critical view of the fiber end faces. It can perform visual inspection and examination of the connector end face for irregularities, i.e., scratches, dirt, etc. Magnification can be up to 400x.
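As a simple worked example of the power meter and light source method (the numbers are invented for illustration): if the light source launches −10 dBm into the fiber and the power meter at the far end reads −13.5 dBm, the insertion loss of the link is (−10) − (−13.5) = 3.5 dB. Comparing that figure against the link's designed loss budget tells you whether the installed cabling, connectors and splices are within specification.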
Doubtless, FTTx technology will continue to spread. As required network speeds climb higher and higher, FTTx must keep improving both in technology and in cost savings, and the next generation of PONs, such as 10G GEPON and WDM PON, also plays an important role in FTTx development. Maybe one day we will enjoy FTTD, i.e., fiber to the desk, along with a variety of modern network services.
|
<urn:uuid:30d99476-060a-4d88-9d28-111bf78faa8c>
|
CC-MAIN-2017-09
|
http://www.fs.com/blog/a-comprehensive-understanding-of-fttx-network.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00100-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.931505 | 1,559 | 3.234375 | 3 |
From Earth to the moon
- By Patrick Marshall
- Sep 19, 2007
It may not be as romantic as a full moon on a clear summer night as a breeze blows gently through your lover's hair as you stroll down the beach, but it's still sort of cool. Google Moon, that is.
Google is working on doing for the moon what it has already done for the Earth. Using high-resolution images from NASA, Google is creating maps of the moon (bright side only) that include multimedia content illustrating the various landings since the first in 1969.
Google Moon's imagery is not as comprehensive as that of Google Earth, however. For one, you don't get a full spherical view of the moon. And, since human landings on the moon and available imagery for them are restricted to a relatively small portion of the lunar surface, the fully zoomed-out image is of a band of the bright side of the orb.
But as you zoom in, you'll see patches of higher-resolution imagery as well as astronaut symbols holding flags. Hover over a symbol and you'll get an explanation of what content is contained at that location.
We also found that the more you cruise around the very limited portion of this virtual moon for which there is imagery, the more it all starts to look pretty much the same.
Produced with the cooperation of NASA's Ames Research Center, the tool includes panoramic photographs of lunar landings, audio clips and videos, and descriptions of astronauts' activities.
"NASA's objective is for Google Moon to become a more accurate and useful lunar mapping platform that will be a foundation for future Web-based moon applications, much like the many applications that have been built on top of Google Maps," Chris C. Kemp, director of strategic business development at NASA's Ames Research Center, said in a press release. 'This will make it easier for scientists everywhere to make lunar data more available and accessible."
Patrick Marshall is a freelance technology writer for GCN.
|
<urn:uuid:2a12b854-1d82-4da4-a0f2-61a012142dab>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2007/09/19/from-earth-to-the-moon.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00100-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.949405 | 410 | 3.046875 | 3 |
As a new year approaches we must prepare for new Internet security threats. Every year, new and innovative ways of attacking computer users emerge and continue to increase in volume and severity. To know where we are going it is helpful to look at where we have been. Finding trends in Internet security has become a valuable, if not necessary, action for companies developing software to protect computer users.
Attacks have increased in sophistication and are often tailored to their specific victim. Trend tracking has shown that in 2008, the Web has become a primary conduit for attack activity. According to Symantec’s Top Internet Security Trends of 2008, attackers have become more difficult to track as they have shifted away from mass distribution of a small family of threats to micro distribution of large numbers of threats.
Spam and Phishing
This may be the most well known form of computer breaching, and yet it is still the healthiest and fastest growing of attacks. In 2004, Bill Gates predicted that spam would be resolved in another two years. In 2008, we were seeing spam levels at 76 percent until the McColo incident in November 2008, at which time spam levels dropped 65 percent. The battle with spammers has turned into an all out war and spammers are showing no sign of surrendering.
Spammers take advantage of current events, such as the presidential election, Chinese earthquake, Beijing Olympic Games and the economy. They use these widely socialized issues as headlines to lure people into clicking on a link to malware or sending money for unrealistic charitable campaigns. Social networks are only feeding the beast by making it easier for spam attacks to propagate quickly through a victim’s social network.
Phishing walks hand in hand with spam, as it uses current events to make the bait more convincing. Another phishing tactic particularly noticeable over the last year is offering users a false sense of security by targeting .gov and .edu domains. Although cybercriminals cannot register names under these domains, they find ways to compromise the Web servers to gain control. Once control is gained, the problem becomes harder to fix because the domain cannot simply be deactivated. Lengthy measures are needed to have the organization remove the compromised page from its website and secure its servers. The time it takes to make these changes allows the phished page to remain active and hit more victims.
Fake and Misleading Applications
Fake security and utility programs aka “scareware” promise to secure or clean up a user’s home computer. The applications produce false and often misleading results, and hold the affected PC hostage to the program until the user pays to remedy the pretend threats. Even worse, such scareware can be used as a conduit through which attackers install other malicious software onto the victim’s machine.
Data Breaches
In 2008, the Identity Theft Resource Center (ITRC) documented 548 breaches, exposing 30,430,988 records. The significance of this figure becomes clear when you realize that it took only nine months in 2008 to reach the 2007 total. What is most interesting about data breaches is that most are not malicious in nature. In many cases, inadvertent employee mishandling of sensitive information and insecure business processes are the most common ways that data is exposed. This can be attributed to the increase in mergers, acquisitions and layoffs resulting from the turbulent economic climate of 2008.
What to Watch for in 2009
Looking at the attack trends and techniques malware creators favored in 2008 helps us predict what to expect in 2009. Some of these new attacks are already starting to show up, and users need to be aware of them so that they can stay safe online in 2009.
As we have learned, current events are used as headliners to bait victims. In 2009, it is easy to predict that the economic crisis will be the basis of new attacks. We expect to see an increase in emails promising easy-to-get mortgages or work opportunities. Unfortunately, the people already hit hard by the economy, those who have lost jobs or had homes foreclosed, will also become the primary prey of these scams.
Advanced Web Threats
The number of available Web services is increasing and browsers are continuing to converge on a uniform interpretation standard for scripting languages. Consequently, we expect the number of new Web-based threats to increase. User-created content can host a number of online threats from browser exploits, distribution of malware/spyware and links to malicious websites. The widespread use of mobile phones with access to the Web will make Web-based threats more lucrative. We have already seen attacks disguised as free application downloads and games targeting Smartphones. We expect to see more truly malicious mobile attacks in 2009.
Social networks will enable highly targeted and personalized spam, with attackers phishing for account credentials and using social context to increase the “success rate” of an online attack. In 2009, we expect spam to be upgraded to use proper names and to be carefully segmented by demographic or market. The upgraded spam will resemble legitimate messages and special offers built from personal information pulled from social networks, and may even appear to come from a social networking “friend.” Once a person is hit, the threat can easily spread through his or her social network. Enterprise IT organizations need to be on the alert for these types of attacks because today’s workforce often accesses these tools using corporate resources.

The battle against Internet security threats will continue to rage on, and tactics on both sides will become more sophisticated over time. Although no one can be certain of what the future holds, we can look back and learn from our past to identify trends that help us make educated predictions about where future attacks may be heading.
The US Department of Energy today said it was conditionally committing $2 billion to develop two concentrating solar power projects that it says will offer 500 megawatts of power combined, effectively doubling the nation's currently installed capacity of that type of power.
Concentrated solar systems typically use parabolic mirrors to collect solar energy. Other methods include systems such as a power tower, which uses directed mirrors to concentrate the sun's rays onto a solar receiver at the top of a tall tower. Google in fact recently invested $168 million in such a system.
The new projects are both located in California: the Mojave Solar Project (MSP) in San Bernardino County and the Genesis Solar Project in Riverside County. The projects will both sell power to Pacific Gas and Electric.
According to the DOE, when operational, the 250MW Mojave Solar Project will avoid over 350,000 metric tons of carbon dioxide annually and is anticipated to generate enough electricity to power over 53,000 homes.
The site will be the first US utility-scale deployment of the Solar Collector Assembly (SCA) from the project's vendor, Abengoa Solar. The SCA's features include a lighter, stronger frame designed to hold parabolic mirrors that are less expensive to build and install. The SCA heat collection element uses an advanced receiver tube to increase thermal efficiency by up to 30 percent compared to the nation's first CSP plants, the DOE states. In addition, the advanced mirror technology will improve reflectivity and accuracy. Together, these improvements can permit the collection of the same amount of solar energy from a smaller solar field. Unlike older CSP plants, the Mojave system will operate without fossil fuel back-up systems for generation during low solar resource periods, according to the DOE.
The 250MW Genesis Solar Project, meanwhile, will feature scalable parabolic trough solar thermal technology that has been used commercially for more than two decades. The project is expected to avoid over 320,000 metric tons of carbon dioxide emissions annually and produce enough electricity to power over 48,000 homes, the DOE stated. NextEra Energy is the primary vendor on the project.
-Encourages healthier lifestyle by showing consequences of an unhealthy one -
NEW YORK; May 16, 2006 – Accenture (NYSE: ACN) today unveiled an experimental “mirror” that shows unhealthy eaters what they could look like in the future if they fail to improve their diets.
The device – known as the Persuasive Mirror – stems from an Accenture research initiative aimed at developing technologies that encourage people to maintain healthy lifestyles in order to avoid obesity and related health problems. Plans call for it to be used in upcoming research studies at the University of California, San Diego (UCSD).
“We see great potential in using the technology available via the Persuasive Mirror not only to assess body image but also to determine how body image might be used to affect positive behavioral change,” said Jeannie Huang, M.D., M.P.H., assistant professor in residence at UCSD.
Dr. Huang is also a member of PACE (www.paceproject.org), a multidisciplinary research consortium of more than 40 professionals that conducts a broad array of research aimed at developing tools to help health professionals help their patients make and sustain healthy changes in physical activity, diet and other lifestyle behaviors.
The mirror was developed at Accenture Technology Labs in Sophia Antipolis, France, where researchers strive to embed technologies into ordinary household items, thereby allowing users to gain valuable health information just by going about their daily activities.
Accordingly, the prototype was built to look like a standard bathroom mirror. Operation requires that users do nothing more than look at their “reflections.” But the operational simplicity belies the device’s complex technology.
The “mirror” uses two cameras placed on the sides of a flat-panel display and combines video streams from both cameras to obtain a realistic replication of a mirror reflection. Advanced image processing and proprietary software are used to visually enhance the person’s reflection.
Couch Potatoes Beware
The mirror is fed information from webcams and sensing devices placed around the house, including images of everyday activities. For example, the monitoring system can be configured to spot visits to the refrigerator, treadmill usage, or time spent on the couch.
Software analyzes the data to determine behavior – be it healthy or unhealthy – and how behavior, including overeating, will influence future appearance, including obesity. As a result, a sedentary person, for example, can see his face growing fat before his eyes.
The Persuasive Mirror can also be configured to accept other health-related data. For instance, it can show the consequences of too much time spent in the sun, or calculate the benefits of data provided by devices such as a pedometer worn during a brisk walk or run. Future iterations will also calculate the effects of other unhealthy behaviors such as drinking, smoking or drug use.
“One of the key solutions experts identify for solving the growing problems caused by poor diet, including obesity, inactivity and smoking is a change in personal habits,” said Martin Illsley, director of the Sophia Antipolis facility, one of three research labs operated by Accenture. “This led us to think about using technology as a persuasion tool, specifically how technology can be used to create the kind of motivation and personal awareness that will change unwanted behaviors.”
This is known as the science of captology, defined as the study of computers as persuasive technologies. It includes the design, research and analysis of interactive computing products created for the purpose of changing people’s attitudes or behaviors.
Illsley and his team concluded that for any technology dealing with diet and exercise habits to be persuasive, it needed to be highly visual. They realized that a mirror that projects the image of how the individual’s face and body will look in the future if habits are poor – or, conversely, improve – could best drive home the point.
“We monitor the individual’s habits in terms of diet and exercise and whether or not they smoke or spend time in the sun. And by focusing on the face and body, visually project how he or she will look in the near future,” said Illsley. “The image can punish them if they have not taken good care of themselves, or can reward them if they are following healthy diet plans and have begun to lose weight.”
Intelligent Home Services
The mirror fits into an Accenture Technology Labs initiative called Intelligent Home Services that merges sensor technologies and artificial intelligence to enable a new class of assistive technologies. It makes use of cameras to track activity and artificial intelligence techniques to learn habits automatically so that deviations can be spotted.
Previous prototypes have demonstrated how emerging technologies in the home can bring prolonged independence to the elderly, create a channel for new services and help businesses and governments address the challenge of the aging population.
All of the prototypes offer practical benefits to business. In the case of the mirror, which took 18 months to build, Illsley sees potential benefits for companies in such industries as pharmaceuticals, health care services and insurance.
“While applications exist for entering a photo of an individual and seeing how he or she is expected to look years later, such as those used to find missing children, this concept is completely different. We are not aware of another company or research firm that has done anything similar,” said Illsley.
Illsley cautions that input and monitoring from medical experts is essential. “That’s one reason we’re so excited about working with UCSD. By collaborating with them, we can take this prototype to the next stage and ensure that further development takes place with medical expertise. This will ensure the technologies we have identified are used to help people improve their lifestyle in the best way possible.”
Accenture is a global management consulting, technology services and outsourcing company. Committed to delivering innovation, Accenture collaborates with its clients to help them become high-performance businesses and governments. With deep industry and business process expertise, broad global resources and a proven track record, Accenture can mobilize the right people, skills and technologies to help clients improve their performance. With more than 129,000 people in 48 countries, the company generated net revenues of US$15.55 billion for the fiscal year ended Aug. 31, 2005. Its home page is www.accenture.com.
Disk capacity will rise.
As storage capacity demands continue to increase, new serial technologies will gradually replace current parallel SCSI and ATA storage media during the next two years. Beyond these serial advances, hard-disk- drive vendors are working to develop even-more-advanced technologies to improve the physical data recording capacities of HDD magnetic media.
Many HDD manufacturers, including Hitachi Data Systems Inc., IBM and Seagate Technology LLC, are beginning to explore a new magnetic recording technology called perpendicular data recording. Based on the technology's potential capacity boosts, eWEEK Labs believes this is the shape of HDD storage to come. However, it probably won't emerge for several years.
Beyond perpendicular recording
Seagate is working on two additional technologies that will further increase areal density in recording media:
- Optically Assisted Magnetic Recording will use thermal energy (that is, lasers) to heat the media surface while data is being written, making it possible to write high-coercivity data to counter superparamagnetism.
- Self-Ordered Magnetic Arrays will convert grains of material in a single bit of data into unique bits of information that will orient in arrays that can be read and written with high thermal stability.
In today's "longitudinal" HDD products, data bits are recorded on magnetic media using a recording method in which data bits are placed parallel to the media plane. Current longitudinal recording techniques can carry storage densities beyond 100 gigabits per square inch, but new recording methods will be necessary in the coming years to maintain the growth rate in HDD capacity, according to industry experts.
To achieve higher storage capacity, drive makers must increase the areal density of the magnetic media. Current methods involve making data bits smaller and placing them closer together, but there are several factors that can limit how small the data bits can be made.
As the data bits get smaller, the magnetic energies holding the bits in place also decrease, and thermal energies can cause demagnetiza- tion over time, leading to data loss. This phenomenon is called the superparamagnetic effect. To counter it, HDD manufacturers can increase the coercivity (the magnetic field required for the drive head to write the data on the magnetic media) of the disk. However, the amount of coercivity that can be applied is determined by the type of magnetic material used to make the head and the way data bits are written, and vendors are approaching the upper limits in this area.
Perpendicular recording places data bits perpendicular to the magnetic media surface. The data bits are formed in upward or downward magnetic orientation corresponding to the 1s and 0s of digital data. Perpendicular recording gives hard drives a much larger areal density in which to store data because it can achieve higher magnetic fields in the recording medium.
HDD vendors have been harnessing available technology to double their drive capacities every year, and advances in Serial ATA and SCSI drives will make large storage systems cheaper, faster and more efficient than before.
We therefore don't expect to see HDD systems that use perpendicular recording technology for several years, but this recording method will take future HDD systems to densities many times greater than the current longitudinal recording methods. Some experts estimate this new recording method can create areal density up to the terabit-per-square-inch range.
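To put that estimate in perspective, here is a quick back-of-the-envelope calculation; the usable platter area below is an assumed figure for a 3.5-inch disk, included only to illustrate the scale.

```c
/* Capacity per platter surface = areal density x recordable area.
 * The usable-area value is an assumption for illustration only. */
#include <stdio.h>

int main(void)
{
    const double density_gbit_per_in2 = 1000.0; /* ~1 terabit per square inch */
    const double usable_area_in2      = 9.0;    /* assumed recordable area of a 3.5" platter */

    double gigabits  = density_gbit_per_in2 * usable_area_in2;
    double gigabytes = gigabits / 8.0;

    printf("One platter surface: %.0f Gbit, or roughly %.0f GB (~%.1f TB)\n",
           gigabits, gigabytes, gigabytes / 1000.0);
    return 0;
}
```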
Further out on the horizon, Optically Assisted Magnetic Recording and Self-Ordered Magnetic Array methods could, in turn, eventually outrace perpendicular recording methods (see the technologies listed above).
Imagine being able to store terabytes of data in your iPod or handheld devices; the possibilities are almost endless.
Technical Analyst Francis Chu can be contacted at francis_ [email protected].
Big Data is the buzzword on everyone's lips these days - promising to change the world through deep insights into vast and complex sets of data. But amidst the optimism at the recent IEEE Computer Society's Rock Stars of Big Data symposium at the Computer History Museum in Mountain View, Calif., there were also stark warnings about the dark side of the new technology.
The human/ethical aspects of big data
In fact, Grady Booch, chief scientist at IBM Research and co-founder of Computing: The Human Experience, led off the event with a talk on the Human/Ethical Aspects of Big Data. In front of a couple hundred big-data professionals and interested parties, Booch acknowledged that big data can have "tremendous societal benefits," but made the case that the technology has gotten way out in front of our ability to understand where it's going, and we're likely in for some nasty surprises in the not-so-distant future. He expanded on those thoughts in a private conversation later that afternoon.
Many people worry about governments and corporations misusing big data to spy on or control citizens and consumers, but Booch warned the problem goes much deeper than just deliberate malfeasance: "Even the most benign things can have implications when made public." He cited the case of an environmental group that shared the locations of endangered monk seals near his home in Hawaii -- a seemingly innocuous way to raise awareness. But because monk seals eat fish, Booch said, some local fisherman used the information to try and kill the seals.
Data lasts forever
The problem is that big data doesn't go away once it's fulfilled its original purpose. "Technologists don't give a lot of thought to the lifecycle of data," Booch said. But that lifecycle can extend indefinitely, so we can never be completely be sure who will end up with access to that data. "This is the reality of what we do."
"Our technology is outstripping what we know how to do with our laws," Booch said. "And even today's best legal and technological controls may not be enough." Social, political and other pressures can affect how big data is used, he said, despite laws designed to constrain those uses.
Given how the unprecedented speed of technological change is affecting society, what is considered acceptable use of data is in constant flux and subject to contentious debate. For example, while airplanes have long used "black box" data recorders, those devices are now finding their way into cars. So far, that hasn't raised much debate, but imagine the outcry if we applied the same concept to, say, guns?
Our responsibility: Fix the "stupid things"
"The law is going to do some stupid things," Booch warned, which is why "technology professionals have a responsibility to be cognizant of the possible effects of the data we collect and analyze to raise the awareness of the public and the lawmakers."
"The world is changing in unforeseen ways, and no one has the answer," Booch said. "It's a brave new world and we're all making this up as we go along." But just because something is possible does not necessarily mean we should do it. "We need to at least ask the question: 'Should it be done?'"
Booch conceded that there is no economic incentive to raising these issues. But there are important considerations and consequences that can't be measured on a spreadsheet. "Ask yourself, 'What if the data related to you, or to your parent, or your child? Would that change your opinion and actions?'" If so, Booch said, you have a responsibility to speak out. "If you don't, who will?"
List of Environmental Management Standards and Frameworks
The ISO 14000 Family of Standards
The ISO 14000 family of standards has two main documents, plus one supplementary document attached to the family. ISO 14001 and ISO 14004 make up the core of the ISO 14000 family, while ISO 19011, the guidelines for auditing management systems, is attached because it is the auditing requirements document used to audit an ISO 14001 environmental management system. See http://www.iso.org/iso/iso14000 for more details and access to the standards.
ISO 14001: The most commonly used set of requirements for designing an environmental management system; it includes requirements to establish, document, implement, maintain and continually improve an environmental management system. ISO 14001 provides the information necessary for a company to implement an environmental management system, and an EMS certification against ISO 14001 is recognized worldwide. The requirements are aligned in the format of a PDCA improvement cycle (Plan-Do-Check-Act cycle) to Plan the work of the EMS, Do (implement) the work of the EMS, Check (monitor and review) the work of the EMS against requirements and Act to correct any problems that occur. This will feed back into the next round of planning. For more information on how this works see Plan-Do-Check-Act in the ISO 14001 Standard.
ISO 14004: A standard that can accompany ISO 14001 when implementing an environmental management system, but is not required in order to do so. This document is designed to provide guidelines on principles, systems and support techniques that can be used by an organization to help implement its environmental management system. The standard is a good reference to turn to for ideas on how to make your implementation of ISO 14001 more effective and successful. Unlike ISO 14001, ISO 14004 is not intended for certification, regulatory or contractual use. This means that you cannot certify your EMS to ISO 14004, and the use of ISO 14004 is not intended to be mandated as a legal or contract requirement. For more information on this standard see ISO 14004, which explains the structure in more detail.
ISO 19011: This standard, also published by the International Organization for Standardization, includes the requirements for auditing a management system. The standard defines all the requirements for an audit program as well as how to conduct successful audits. It is used as a resource to train anyone who audits quality and environmental management systems, and the auditors who certify that companies have met the requirements of standards such as ISO 14001 and ISO 9001 are trained using ISO 19011.
Standards for Specific Environmental Management
Included in the ISO 14000 standards that are maintained by the International Organization for Standardization are several other documents that provide guidelines for incorporating very specific elements of environmental management which include:
- ISO 14006: Environmental Management Systems – Guidelines for incorporating ecodesign
- ISO 14020: Environmental labels and declarations – General principles
- ISO 14040: Environmental Management – Life cycle assessment – Principles and Frameworks
- ISO 14050: Environmental Management – Vocabulary
- ISO 14064-1: Greenhouse Gases – Part 1: Specification with guidance at the organization level for quantification and reporting of greenhouse gas emissions and removals
- ISO 14066: Greenhouse Gases – Competence requirements for greenhouse gas validation teams and verification teams
Multihoming ISP Links
Today, problems associated with ISP link availability continue to cause organizations to lose millions of dollars each year. However, deploying a solution that is cost effective and operationally efficient can also be a challenge. The following are four alternatives for facilitating multihomed networks.
1. Border Gateway Protocol
Typically, larger organizations multihome their sites with two links from two separate ISPs, using Border Gateway Protocol (BGP) to route across the links. While BGP can provide link availability in the case of a failure, it is a slow and complex routing protocol. It is costly to deploy because it requires special Autonomous System (AS) numbers from the ISPs and it requires router upgrades to be installed.
BGP is also not well-suited to provide multihoming and intelligent link load balancing. In the case of a failure, ISP cooperation is often required for link recovery. In general, BGP causes long and unpredictable failover times, which will not meet high availability requirements.
2. WAN link load balancing
Also known as multihoming, WAN link load balancing is a session-based process of directing Internet traffic among multiple and varied network connections. It requires a single WAN link controller located at the main site between the gateway modems/routers and the internal network. It intelligently load balances and provides failover for both inbound and outbound traffic among the network connections. Assuming there are two ISP connections, both network connections can be used at the same time. The benefit here is that you don't pay for bandwidth that is only used as a backup for when an outage occurs.
For example, traffic will go through network connection number one. If the WAN link controller detects that connection number one is overtaxed or failed, it will direct users across the second ISP connection. Intelligent WAN link controllers will continuously spread the traffic across the network connections based on the available resources. For example, with two T1s, it will not wait until the first T1 is overutilized before sending traffic out the second WAN; it will make use of both lines evenly.
Having two 1.5Mbps network connections does not mean that a user has 3Mbps of bandwidth available for a single session. You would have 3Mbps of aggregate bandwidth, but only 1.5Mbps of throughput can be dedicated to any individual session, because with WAN link load balancing each session uses only one ISP connection at a time.
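To make the session-pinning behavior concrete, the sketch below shows one simplified way a controller could assign each new session to a single healthy link, weighted by link capacity. It illustrates the general technique only; it is not any vendor's implementation, and the link count, weights and health-check mechanism are assumptions.

```c
/* Simplified session-based WAN link selection: each new session is pinned to
 * one healthy link by hashing its 5-tuple, so a single session never spans
 * links while the aggregate traffic spreads across all healthy links. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_LINKS 2

struct wan_link {
    bool healthy;      /* updated by periodic health checks (e.g., probes) */
    uint32_t weight;   /* proportional to link bandwidth, here in kbps */
};

static struct wan_link links[NUM_LINKS] = {
    { true, 1544 },    /* T1 from ISP A */
    { true, 1544 },    /* T1 from ISP B */
};

/* Returns the index of the link to use for a session, or -1 if all are down. */
int select_link(uint32_t session_hash)
{
    uint32_t total = 0;
    for (int i = 0; i < NUM_LINKS; i++)
        if (links[i].healthy)
            total += links[i].weight;
    if (total == 0)
        return -1;

    uint32_t point = session_hash % total;   /* deterministic per session */
    for (int i = 0; i < NUM_LINKS; i++) {
        if (!links[i].healthy)
            continue;
        if (point < links[i].weight)
            return i;
        point -= links[i].weight;
    }
    return -1; /* not reached */
}
```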
3. Site-to-site channel bonding
Site-to-site channel bonding is a form of WAN link load balancing with a different approach that can increase the total combined network bandwidth of multiple network connections between two locations. This approach requires a WAN link controller at the main site and also at the remote site. Unlike WAN link load balancing, site-to-site channel bonding conducts continuous health checks (up and down status) of the network connections in use, and uses packet-based load balancing to distribute traffic across all network connections. However, with site-to-site channel bonding, two 1.5Mbps network connections will equal approximately 3Mbps, providing all traffic with the combined throughput from the multiple network connections.
4. Multiple ISPs
Organizations can multihome their sites with two WAN links from the same ISP. While this solution may cost less to deploy, it is not very effective, as an outage at the ISP will still cause a network failure, or at least create a bottleneck when both links are unavailable or oversubscribed. For greater WAN redundancy, it is best to have two or more different ISPs and load balance and provide failover for traffic among them.
Returns the geomagnetic field for a location at the specified date.
int wmm_get_geomagnetic_field(const wmm_location_t *loc, const struct tm *date, wmm_geomagnetic_field_t *field)
loc: The geographic location to be used in the calculation of the magnetic field.
date: The date to be used in the calculation of the magnetic field.
field: The geomagnetic field for the given location and date.
Library: libwmm (For the qcc command, use the -l wmm option to link against this library)
The geomagnetic field is returned in field.
If the latitude_deg or longitude_deg values in loc exceed their ranges, they will be changed to fit into their respective range.
Returns: 0 if successful, -1 if an error occurred, or 1 if loc was altered to fit into the magnetic model range.
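A brief calling sketch follows. Only the function signature and the latitude_deg/longitude_deg members are taken from this page; the header path and everything else in the example are assumptions.

```c
/* Hedged usage sketch for wmm_get_geomagnetic_field(). Build with qcc and
 * link using -l wmm. The header path is assumed, and the members of
 * wmm_geomagnetic_field_t are left to the real wmm header. */
#include <stdio.h>
#include <time.h>
#include <wmm/wmm.h>             /* assumed header location for libwmm */

int main(void)
{
    wmm_location_t loc = { 0 };  /* any other members stay zeroed */
    loc.latitude_deg  = 43.65;   /* Toronto, roughly */
    loc.longitude_deg = -79.38;

    time_t now = time(NULL);
    struct tm date;
    gmtime_r(&now, &date);       /* calculation date in UTC */

    wmm_geomagnetic_field_t field;
    int rc = wmm_get_geomagnetic_field(&loc, &date, &field);
    if (rc == -1) {
        fprintf(stderr, "geomagnetic field calculation failed\n");
        return 1;
    }
    if (rc == 1) {
        fprintf(stderr, "note: location was adjusted to fit the model range\n");
    }
    /* Inspect the members of 'field' as defined in the libwmm header. */
    return 0;
}
```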
Last modified: 2014-05-14
- Protect the password in transit from the threat of sniffers or intermediation attacks — Use HTTPS during the entire authentication process. HSTS is better. HSTS plus DNSSEC is best.
- Protect the password in storage to impede the threat of brute force guessing — Never store the plaintext version of the password. Store the salted hash, preferably with PBKDF2. Where possible, hash the password in the browser to further limit the plaintext version’s exposure and minimize developers’ temptation or expectation to work with plaintext. Hashing affects the amount of effort an attacker must expend to obtain the original plaintext password, but it offers little protection for weak passwords. Passwords like p@ssw3rd or lettheright1in are going to be guessed quickly.
- Protect the password storage from the threat of theft — Balance the attention to hashing passwords with attention to preventing them from being stolen in the first place. This includes (what should be) obvious steps like fixing SQL injection as well as avoiding surprises from areas like logging (such as the login page requests, failed logins), auditing (where password “strength” is checked on the server), and ancillary storage like backups or QA environments.
Implementing PBKDF2 for password protection requires two choices: an HMAC function and a number of iterations. For example, WPA2 uses SHA-1 for the HMAC and 4,096 iterations. A review of Apple's OS X FileVault 2 (used for full disk encryption) reveals that it relies in part on at least 41,000 iterations of SHA-256. RFC 3962 provides some insight, via example, of how to select an iteration count. It's a trade-off between the overhead placed on the authentication system (authentication still needs to be low-latency from the user's perspective, and too much delay exposes it to easy DoS) and the work effort forced on an attacker.
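As a concrete illustration of the hashing guidance above, here is a minimal server-side sketch using OpenSSL's PKCS5_PBKDF2_HMAC. The salt length, output length and iteration count are illustrative choices for the sketch, not recommendations from this article.

```c
/* Minimal sketch of server-side password hashing with PBKDF2-HMAC-SHA-256
 * via OpenSSL. Only the salt, iteration count and derived hash are stored;
 * the plaintext password is never written anywhere. */
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

#define SALT_LEN   16
#define HASH_LEN   32
#define ITERATIONS 100000   /* tune so one derivation takes tens of milliseconds */

int hash_password(const char *password,
                  unsigned char salt[SALT_LEN],
                  unsigned char hash[HASH_LEN])
{
    /* Fresh random salt per password defeats precomputed (rainbow) tables. */
    if (RAND_bytes(salt, SALT_LEN) != 1)
        return -1;

    /* Derive the stored verifier. */
    if (PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                          salt, SALT_LEN,
                          ITERATIONS, EVP_sha256(),
                          HASH_LEN, hash) != 1)
        return -1;
    return 0;
}
```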
Finally, sites may choose to avoid password management altogether by adopting strategies like OAuth or OpenID. Taking this route doesn’t magically make password-related security problems disappear. Rather than specifically protecting passwords, a site must protect authentication and authorization tokens; it’s still necessary to enforce HTTPS and follow secure programming principles. However, the dangers of direct compromise of a user’s password are greatly reduced.
The state of password security is a sad subject. Like D minor, which is the saddest of all keys.
Projects are often complex, made up of a large number of moving pieces and bring numerous challenges to those involved. The reality is projects don’t always go the way we want them to. We often find ourselves being asked to do more with less and at a faster pace than we’re comfortable with. On occasion, we are successful in our efforts despite these restrictions.
Many key factors contribute to project success, but three of the fundamentals stand out—stakeholder identification and analysis, effective communication, and identifying project requirements, managing stakeholder expectations and scope of work.
Stakeholders may affect or be affected by the project—through a project decision, activity or outcome—or they may simply perceive themselves to be affected. The impact or perceived impact can be either positive or negative in nature.
Most projects have a large number of stakeholders. Identifying all stakeholders increases the chance of project success. You must secure and document relevant information about their interests, interdependencies, influences, potential involvement, and probable impact on the project definition, execution and final results. After obtaining this information, classify the stakeholders according to their characteristics. This will make it easier to develop a strategy to manage each stakeholder. An increased focus on key relationships is critical to project success.
How do you manage expectations? Through communication. It’s vitally important that there is a well-defined communication plan tailored to fit the project and stakeholders.
One component of critical thinking is to learn by questioning. The Project Management Institute (PMI®) indicates that 90 percent of a project manager's job is communication; however, communication is not limited to just talking. Proper communication involves listening, reading reports, generating reports, filtering information from one group to another, etc. To do this effectively, you need a well-defined communication management plan. The creation of such a plan can be broken down into six questions that need to be asked continually: Who, What, When, Where, How and Why?
- Who needs to be communicated to?
- What needs to be communicated?
- When does that need to take place?
- Where does it need to happen?
- How is it going to happen?
- Why does it need to happen?
Project Requirements, Stakeholder Expectations and Scope of Work
Before beginning your project, make sure you have clarified all goals, objectives and requirements. Obtaining clarity on what is required and ultimately gaining buy-in from the major stakeholders are crucial. These requirements, along with related goals, objectives and deliverables, become the scope of work that must be completed and will be refined over the life of your project. But if you do not start with a solid understanding of what you’re trying to achieve, you might as well not begin at all.
It will be difficult, if not impossible, to achieve success from a stakeholders’ point of view if there is a lack of clarity concerning their project perception and expectations. This is why stakeholders must be identified and their expectations analyzed as early in the project lifecycle as possible.
Keeping the three key steps—identification and analysis of project stakeholders, the creation and use of an effective communication plan, and proper identification of project requirements, stakeholder expectations and accurate decomposition of the scope of work—at the forefront during project planning and execution greatly enhances the ability to achieve success.
For a deeper dive into how to guarantee project achievement, view my white paper, Three Steps to Ensure the Success of Your IT Projects.
After a successful Sunday launch, a cargo spacecraft is on its way to the International Space Station, carrying a Google 3D smartphone, along with a flock of tiny satellites.
Orbital Sciences Corp.'s Cygnus spacecraft, filled with more than 3,000 pounds of food, supplies, hardware and scientific experiments, lifted off from NASA's Wallops Flight Facility in Virginia on Sunday at 12:52 p.m. ET.
The Cygnus cargo craft launched on top of Orbital's Antares rocket.
The Cygnus cargo spacecraft launches on top of the Antares rocket Sunday. The cargo craft is expected to rendezvous with the space station Wednesday morning. (Photo: NASA)
The spacecraft is taking a few days to catch up to the space station. Cygnus is scheduled to rendezvous with the orbiter at 6:39 a.m. Wednesday, when astronauts are expected to use the station's robotic arm to grab hold of the craft.
Besides food, scientific experiments and spare parts, the Cygnus is carrying equipment for Google's Project Tango. Google's 3D mapping technology comes out of Project Tango, the company's effort to create tablets and smartphones that are 3D-enabled.
Next month, astronauts are expected to integrate the 3D technology with its smart Spheres (Synchronized Position Hold, Engage, Reorient Experimental Satellites), free-flying space robots that NASA has been testing on the space station since 2011.
The Spheres have only been operating in a small area of the space station. Once the sensors and cameras that enable Project Tango's 3D navigation are operational inside the floating robots, astronauts will move the Spheres throughout the station, mapping its entire layout.
Once that map is complete, the Spheres can begin using the map to maneuver throughout the space station.
NASA wants to use the flying robots, which are about the size of a volleyball, to perform tasks on the space station. For instance, the Spheres might be used to manipulate a camera to give flight controllers in Houston views of the inside of the station. The flying robots also will carry air quality and noise sensors, saving astronauts from doing the monitoring themselves.
The Cygnus spacecraft also is carrying a fleet of 28 small satellites, called CubeSats, to the space station. The CubeSats, nanosatellites about the size of a loaf of bread, will be deployed from the station's Japanese Experiment Module airlock and will be used to image the entire planet. The images then can be used to identify and track natural disasters and relief efforts. They also will be used for environmental and agricultural monitoring and management.
NASA is upgrading smartphones on space satellites called Spheres with 3D-sensing Android smartphones developed by Google.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
This story, "Google's 3D Mapping Tech Flying to Space Station" was originally published by Computerworld.
Autodesk Volunteers Help Preserve Historic Kosrae Using Reality Capture Software & BIM
Kosrae, an island in the South Pacific located between Guam and the Hawaiian Islands, was settled as early as 1250. But much of the island’s rich history is unknown by the people who live there, as well as the handful of scuba divers who visit annually to explore the pristine coral reefs surrounding the island. That’s why Autodesk is teaming up with KnowledgeWell (a non-profit dedicated to transforming the barriers faced by under-resourced nations into opportunities for successful business and public sector programs) to create a 3D model of the island and help tell its story to the rest of the world.
In 2007, Autodesk’s Pete Kelsey traveled to Kosrae to begin digital documentation of the island. This year, Joe Travis, a geographer and senior manager of technical sales at Autodesk, teamed up with Pete and began working with KnowledgeWell, a nonprofit that delivers business expertise to emerging parts of the world, to continue the project. Travis also worked alongside Chris Moreno, an anthropologist with HDR Inc., a global architecture and engineering firm, to interview the people on the island and better understand the remarkable culture on Kosrae and what is driving the future of the region.
“We are trying to preserve the history of the island and document what exists today,” Travis says. “Our goal is to help the island become recognized as a World Heritage Site and help prepare their infrastructure for increased tourism.” When the project is completed, Kosrae leaders will submit KnowledgeWell-created models with their application to UNESCO to achieve historic preservation status.
To create those models, Autodesk is leading a team in leveraging several forms of “reality capture” software. The first is LiDAR technology donated and delivered by McKim & Creed, which allows them to create 3D models of features on the island. One of the key areas of Kosrae that the team is mapping is an ancient city known as the Lelu Ruins. “We basically take a large piece of survey hardware to scan different features on the island, which shoots out millions of points with a laser beam, and allows us to capture the walls of an ancient city,” Travis says. “We scan these features and create 3D models with our software. With measurements and analysis, we can get a good idea of what the city looked like hundreds of years ago.” This same technique is being used on many features, both ancient and from the WWII era.
Pete Kelsey and KnowledgeWell team members survey the walls of the ancient Lelu Ruins.
A second form of reality captured used on the island was side-sonar scanning to capture large areas under the water. While on Kosrae, the volunteer team and equipment donors from R2Sonic performed multiple passes through one of the main bays to build a 3D model of the ocean floor. This scan proved to be interesting as ship wrecks and airplanes were revealed in the model after the scan was complete. “We’re starting to see a merger of underwater scans and terrestrial datasets,” says Travis.
Personally, Travis is primarily interested in the geographic data under the water — specifically the coral reefs that surround the island. “Kosrae is one of the few areas around the world where the reefs are still thriving,” he says.
To create a realistic digital model of the island’s coral reefs, the team uses another reality capture technique, this time with Autodesk’s 123D Catch software. Underwater, Travis and the team snorkel and scuba around the reefs and take dozens of photos of corals from various angles. Back at the computer, he uploads the photos to the Autodesk cloud. “Then we’re able to process the images so they create 3D models of the feature you’re targeting,” Travis says. “It gives us a true representation of the coral in real life. Over time, we propose that this technology will allow us to see the growth of the reefs or the lack of growth from bleaching or other reasons.”
A massive coral formation captured with an underwater camera by Joe Travis is one of numerous pictures stitched together with Autodesk’s 123D Catch software to create the 3D model.
Joe Travis used Autodesk’s 123D Catch software to transform underwater photographs of coral reefs into 3D images that can be useful for studying the reefs’ growth over time
KnowledgeWell volunteers hope that their documentation of Kosrae’s history using Autodesk tools will not only help educate local residents and launch lasting preservation efforts, but also will boost efforts to attract tourism. Today, Kosrae welcomes about 1,000 tourists each year. The island does not have any chain hotels, shops or restaurants. An Ace hardware store was “the only familiar brand name” Travis saw during a recent visit. The map data created by KnowledgeWell volunteers will allow leaders to determine how the island’s infrastructure would be impacted if they were able to host 5,000 or 10,000 tourists each year.
Travis is also working with the Kosrae Government’s Geographic Information Systems (GIS) team on the island to incorporate historical maps with the most recent mapping data available. Bringing these layers of aerial photography, land parcel information and antique scanned maps together allows the citizens of Kosrae to see their heritage along with 21st century mapping information. Integrating old paper maps with the latest digital mapping data “depicts a good picture of history meeting advanced technology,” Travis says.
Part of the work that Joe Travis and the KnowledgeWell team does involves merging historical maps of Kosrae with current images and GIS data.
In addition to preserving an island’s history, reality capture software technology has numerous uses for government agencies. Travis says this technology could be used to create 3D models of bridges, buildings and even natural features such as glaciers or volcanoes. “You could potentially compare models of landforms over time,” he says. “We have to think bigger in terms of mapping and 3D reality capture now that we have increased technology potential.”
Other government agencies may want to capture an electrical substation or a nuclear power plant. “With reality capture hardware and software applications, you can quickly create a three-dimensional model of almost any structure or natural feature,” Travis says.
An acquaintance of Aaron Smith, the founder of KnowledgeWell, Travis says he has wanted to get involved in the organization for years. “Aaron has asked me for over 10 years to become a volunteer, but the time just hasn’t been right,” he says. “The planets aligned this past fall for me to get involved, and it has been a rewarding experience.”
KnowledgeWell helps meet business needs of governments around the world in need, using donated software and volunteer teams of professionals.
“KnowledgeWell relies on a strong volunteer team who share a mutual motivation to transform the barriers faced by under-resourced nations into opportunities for successful business enterprise. Our volunteers have the ability to shape the future and we are proud that Joe and Autodesk were able to partner with KnowledgeWell and donate their expertise, in order to boost eco-tourism and consideration as a UNESCO World Heritage Site for these pristine Micronesian islands,” added KnowledgeWell Senior Consultant, Aaron Smith.
This article first appeared on Acronym Online, a sister publication to Technically Speaking, focused on computer-aided design and related digital design technologies for the fields of AEC, manufacturing, and GIS.
110 blocks are one type of punch block used to connect sets of wires in a structured cabling system. The “110” designation is also used to describe a type of insulation-displacement connector used to terminate twisted pair cables, which uses a punch-down tool similar to that of the older 66 block. 110 blocks are preferred over 66 blocks in high-speed networks because they introduce less crosstalk, allow much higher density terminations, and meet higher bandwidth specifications. Many 110 blocks are certified for use in Category 5 and Category 6 wiring systems, and even Cat6a. The 110 block provides an interconnection between patch panels and work area outlets.
Modern homes usually have phone service entering the house at a single 110 block, from which it is distributed by on-premises wiring to outlet boxes throughout the home in a series or star topology. At the outlet box, cables are punched down to standard RJ-11 sockets, which fit in special faceplates. The 110 block is often used at both ends of Category 5 cable runs through buildings. In switch rooms, 110 blocks are often built into the back of patch panels to terminate cable runs. At the other end, 110 connections may be used with keystone modules that are attached to wall plates. In patch panels, the 110 blocks are built directly onto the back, where they are terminated. Category 6 110 wiring blocks are designed to support Category 6 cabling applications as specified in TIA/EIA-568-B.2-1, with unique spacing that provides superior NEXT performance.
Both 66 and 110 blocks are insulation displacement connection (IDC) devices, which are key to reliable data connections. 66-clip blocks have been the standard for voice connections for many years. 110 blocks are newer and are preferable for computer work, for one thing, they make it easier to preserve the twist in each pair right up to the point of connection.
1. Although 66-clip blocks historically have been used for data, they are not an acceptable connection for Category 5 or higher cabling. The 110-type connection, on the other hand, offers: higher density (more wiring in a smaller space) and better control (less movement of the wires at the connection). Since more and more homes and businesses call for both voice and data connections, it is easy to see why it makes sense to install 110-type devices in most situations. Most cat5 jacks also use type 110 terminals for connecting to the wire.
2. The 110 block is a back-to-back connection whereas the 66 block is a side-by-side connection. The 110 block is a smaller unit featuring a two-piece construction of a wire block and a connecting block. Wires are fed into the block from the front, as opposed to the side entry on the 66 block. This helps to reduce the space requirements of the 110 block and reduce overall cost. The 110 block’s construction also provides a quiet front, meaning there is insulation both above and around the contacts. Since the quiet front is lacking on the 66 blocks, a cover is often recommended.
3. 110 blocks have a far superior labeling system that not only snaps into place but is erasable. This is particularly important for post-installation testing and maintenance procedures.
110 Connecting Blocks enable you to quickly organize and interconnect phone lines and communication cable, preserve the twists in each pair right up to the connection point. Plus, most networking cable equipment also use 110 type terminals for cable connections.
TORONTO, ON and BOSTON, MA--(Marketwired - November 01, 2016) -
- Canadian Government-funded "Saving Brains" study: 25 percent of developing world's child stunting is associated with poor growth in womb, such as pre-term birth and low birth weight
- Authors prescribe "paradigm shift" from interventions focused solely on children and infants to those that reach mothers and families
In a new Canadian-funded study, Harvard T.H. Chan School of Public Health researchers today rank for the first time a range of risk factors associated with child stunting in developing countries, the greatest of which occurs before birth: poor fetal growth in the womb.
Based on their findings, they prescribe fundamental changes in approaches to remedy stunting, which today largely focus on children, calling for greater emphasis on interventions aimed at mothers and environmental factors such as poor water and sanitation and indoor biomass fuel use.
Funded by the Government of Canada through Grand Challenges Canada's "Saving Brains" program, the study reports that in 2011 some 44 million (36 percent) of two-year-olds in 137 developing countries were stunted, defined as being two or more standard deviations shorter than the global median. About one quarter (10.8 million) of those stunting cases were attributable to full-term babies being born abnormally small.
The findings highlight a need for more emphasis on improving maternal health before and during pregnancy, according to the researchers at Harvard Chan School, who published their work today in PLOS Medicine.
The absence of optimal sanitation facilities that ensure the hygienic separation of human waste from human contact has the second largest impact overall, attributable to 7.2 million stunting cases (16.4 percent), followed in third place by childhood diarrhea, to which 5.8 million cases (13.2 percent) are attributed.
Child nutrition and infection risk factors accounted for six million (13.5 percent) of stunting cases overall.
Teenage motherhood and short birth intervals (less than two years between consecutive births) had the fewest attributable stunting cases of the risk factors that were analyzed -- 860,000 (1.9 percent) of cases overall.
The study concludes that reducing the burden of stunting requires continuing efforts to diagnose and treat maternal and child infections, especially diarrhea, and "a paradigm shift... from interventions focusing solely on children and infants to those that reach mothers and families."
Says lead author Goodarz Danaei, Assistant Professor of Global Health at Harvard Chan School: "These results emphasize the importance of early interventions before and during pregnancy, especially efforts to address malnutrition. Such efforts, coupled with improving sanitation and reducing diarrhea, would prevent a substantial proportion of childhood stunting in developing countries."
"This is a serious problem at every level, from individual to national," he adds. "Early life growth faltering is strongly linked to lost educational attainment and the immense cost of unrealized human potential in the developing world. Stunting undermines economic productivity, in turn limiting the development of low-income countries."
While previous research identified a large number of nutrition-specific risk factors for stunting, such as preterm birth, zinc deficiency, and maternal malaria, the relative contribution of these risk factors had not been consistently examined across countries.
"Our findings provide further evidence that integrated nutrition-sensitive interventions, such as improved water and sanitation, are warranted in addition to nutrition-specific interventions to have an impact on the risk of stunting globally," says senior author and Principal Investigator Wafaie Fawzi, Professor and Chair of the Department of Global Health and Population at Harvard Chan School.
In all, 18 risk factors, selected based on the availability of data, were grouped into five categories and ranked:
- Poor fetal growth and preterm birth,
- Environmental factors, including water, sanitation and indoor biomass fuel use,
- Maternal nutrition and infection,
- Child nutrition and infection, and
- Teenage motherhood and short birth intervals (less than two years between child births).
The researchers map the burden of stunting attributable to these risk factors in the developing world on a website, healthychilddev.sph.harvard.edu, helping policy makers visualize important differences across regions, sub-regions and countries.
"These findings can help regions and countries make evidence-based decisions on how to reduce the burden of stunting within their borders," says Professor Danaei.
Findings at the regional level include:
- Environmental factors, such as water, sanitation and indoor biomass fuel use, are the second leading risk category in South Asia, Sub-Saharan Africa, East Asia and the Pacific.
- Poor child nutrition and infection is the second leading risk category in Central Asia, Latin America and the Caribbean, North Africa and the Middle East.
- Among Sub-Saharan African countries, the prevalence of stunting associated with poor sanitation in Central, East and West Africa is more than double that of southern Africa.
- Childhood diarrhea was associated with almost three times the burden of stunting in Andean and central Latin America compared with tropical and southern Latin America.
- Somalia had the largest prevalence of stunting attributable to discontinued breastfeeding among children 6-24 months of age.
The new study follows the publication of two major studies focused on poor child growth and developmental milestones by the same Canadian-funded "Saving Brains" team at Harvard T.H. Chan School of Public Health.
The first study, published in PLOS Medicine on June 7, 2016, found that one-third of three- and four-year-olds in low- and middle-income countries fail to reach basic milestones in cognitive and/or socio-emotional growth.
The second study, published in The American Journal of Clinical Nutrition on June 29, 2016, found that poor child growth costs the developing world US$177 billion in lost wages and 69 million years of educational attainment for children born each year.
"Knowing the major risk factors for stunting, the global cost of poor child growth, and the number of children missing developmental milestones are key pieces of information in ensuring children not only survive, but thrive," says Dr. Peter A. Singer, Chief Executive Officer of Grand Challenges Canada.
"This kind of information is essential to achieving the targets set out by the Every Woman Every Child Global Strategy for Women's, Children's, and Adolescent's Health. If you are a finance minister, you will want to check out the risk factors for stunting to reduce the toll on human capital and GDP in your country."
The importance of children thriving, not just surviving, is emphasized in the United Nations Sustainable Development Goals and is central to the Every Woman Every Child Global Strategy for Women's, Children's and Adolescent's Health. In 2014, the World Health Assembly set a target to reduce by 40 percent the number of stunted children worldwide by 2025.
The Saving Brains program supports new approaches to ensure children thrive by protecting and nurturing early brain development, providing a long-term exit strategy from poverty. Saving Brains has invested a total of $43 million in 108 innovations and the Saving Brains technical platform that helps to track and accelerate progress against the challenge.
For more information, visit grandchallenges.ca and look for us on Facebook, Twitter, YouTube and LinkedIn.
Grand Challenges Canada
Grand Challenges Canada is dedicated to supporting Bold Ideas with Big Impact® in global health. We are funded by the Government of Canada and we support innovators in low- and middle-income countries and Canada. The bold ideas we support integrate science and technology, social and business innovation -- we call this Integrated Innovation®. Grand Challenges Canada focuses on innovator-defined challenges through its Stars in Global Health program and on targeted challenges in its Saving Lives at Birth, Saving Brains and Global Mental Health programs. Grand Challenges Canada works closely with Canada's International Development Research Centre (IDRC), the Canadian Institutes of Health Research (CIHR) and Global Affairs Canada to catalyze scale, sustainability and impact. We have a determined focus on results, and on saving and improving lives.
Saving Brains is a partnership of Grand Challenges Canada, Aga Khan Foundation Canada, Bernard van Leer Foundation, Bill & Melinda Gates Foundation, The ELMA Foundation, Grand Challenges Ethiopia, Maria Cecilia Souto Vidigal Foundation, Palix Foundation, UBS Optimus Foundation and World Vision Canada. It seeks and supports bold ideas for products, services and implementation models that protect and nurture early brain development relevant to poor, marginalized populations in low- and middle-income countries.
Harvard T.H. Chan School of Public Health
Harvard T.H. Chan School of Public Health brings together dedicated experts from many disciplines to educate new generations of global health leaders and produce powerful ideas that improve the lives and health of people everywhere. As a community of leading scientists, educators, and students, we work together to take innovative ideas from the laboratory to people's lives -- not only making scientific breakthroughs, but also working to change individual behaviors, public policies, and health care practices. Each year, more than 400 faculty members at Harvard Chan School teach 1,000-plus full-time students from around the world and train thousands more through online and executive education courses. Founded in 1913 as the Harvard-MIT School of Health Officers, the School is recognized as America's oldest professional training program in public health.
Table 1: Number of stunting cases (in thousands) in children aged two in 2011 attributable to risk factor groups.
| Risk Factor Group | Number of Attributable Stunting Cases (in thousands) | Risk Factors | Definition |
| --- | --- | --- | --- |
| Fetal growth restriction and preterm birth |  | Preterm, small-for-gestational age | Birth before 37 weeks of gestation and weight <10th percentile for gestational age |
|  |  | Preterm, appropriate-for-gestational age | Birth before 37 weeks of gestation and weight ≥10th percentile for gestational age |
|  |  | Term, small-for-gestational age | Birth at or after 37 weeks of gestation and weight <10th percentile for gestational age |
|  |  | Low birth weight | Birth weight <2500g |
| Environmental factors |  | Unimproved sanitation | Lack of access to safe sanitation in the community (based on WHO/UNICEF Joint Monitoring Programme definition of improved sanitation) |
|  |  | Unimproved water | Lack of access to clean water in the community (based on WHO/UNICEF Joint Monitoring Programme definition of improved water source) |
|  |  | Use of biomass fuels | Use of biomass fuels for cooking and heating |
| Maternal nutrition and infection |  | Maternal short stature | Maternal height <160cm |
|  |  | Maternal underweight | Maternal BMI <18.5 kg/m2 |
|  |  | Maternal malaria | Malaria in pregnancy |
|  |  | Maternal anemia | Maternal hemoglobin <110g/L |
| Child nutrition and infection |  | Childhood zinc deficiency | Deficient zinc intake during childhood based on age- and sex-specific zinc requirements |
|  |  | Childhood diarrhea | Mean number of diarrhea episodes per year during childhood |
|  |  | Non-exclusive breastfeeding | Non-exclusive breastfeeding of infants under six months of age |
|  |  | Discontinued breastfeeding | Discontinued breastfeeding of children 6-24 months of age |
|  |  | HIV infection without highly active antiretroviral therapy (HAART) before 2 years of age | Child HIV infection without initiation of HAART until after two years of age |
| Teenage motherhood and short birth intervals |  | Teenage motherhood | Maternal age at delivery <20 years |
|  |  | Short birth intervals | <24 months between consecutive births |
|
<urn:uuid:dd91207a-019f-40ef-b60a-f96e3fa47851>
|
CC-MAIN-2017-09
|
http://www.marketwired.com/press-release/1-risk-for-child-stunting-in-developing-world-poor-growth-before-birth-2171693.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00417-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.921732 | 2,458 | 2.875 | 3 |
In other words, success or failure at spreading a message through social media looks kind of random. When a social media campaign goes viral, "you don't have to assume it's because some people are influential--you can still get some random thing that gets 50 million views on YouTube," said Filippo Menczer, director of the Center for Complex Networks and Systems Research at the Indiana University School of Informatics and Computing. "Maybe the answer is that there is no answer."
Or, to put it another way, when computer scientists (or social media marketing gurus) try to prove cause-and-effect, they need to do a better job of correcting for the element of chance. "You have to ask, is it just a spurious correlation, where someone was going to win," Menczer said in an interview.
The paper, "Competition among memes in a world with limited attention," was published in Scientific Reports, an open-access journal from the publisher of Nature. Indiana University Ph.D. candidate Lilian Weng was the lead author, with co-authors Menczer, Alessandro Flammini, director of undergraduate studies for Indiana University Informatics, and Alessandro Vespignani, Sternberg distinguished professor of physics, computer science, and health sciences at Northeastern University.
The term meme, coined by the British evolutionary biologist Richard Dawkins in his book The Selfish Gene, describes ideas that reproduce and spread like genes in biological systems. The Indiana University study also alludes to a related theory that ideas and concepts spread like infections, with fads and viral videos breaking out like epidemics. Another theoretical underpinning of the study was the idea of an attention economy in which information is plentiful but human capacity to focus on it is scarce.
The researchers built a computer model of a system with a network structure similar to that of Twitter, and a limited attention span for the software agents simulating the role of users (meaning they would only "remember" or focus on any given meme for a short time). By pumping simulated memes into this virtual Twitterverse and picking winners and losers according to a few simple rules, the computer scientists were able to reproduce the same kind of behaviors seen in the real network. The real-world analysis was based on 120 million retweets connected to 12.5 million users and 1.3 million hashtags.
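The authors' simulation code is not reproduced here, but a toy version of the mechanism they describe (a follower network, a short per-user memory, and occasional injection of brand-new memes) is easy to sketch; all parameter values below are made up for illustration:

```python
# Toy re-creation of the limited-attention meme model described above. It is
# not the authors' code; parameters and network structure are invented.
import random
from collections import Counter, deque

random.seed(1)
N_AGENTS = 500        # simulated users
AUDIENCE_SIZE = 10    # how many users see each post
MEMORY = 5            # items a user can hold in mind ("limited attention")
P_NEW = 0.1           # chance a post introduces a brand-new meme
STEPS = 20_000        # total posts simulated

# Fixed random audience for every agent, and a short rolling "screen" per agent.
audience = [random.sample(range(N_AGENTS), AUDIENCE_SIZE) for _ in range(N_AGENTS)]
screens = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]
shares, next_meme_id = Counter(), 0

for _ in range(STEPS):
    agent = random.randrange(N_AGENTS)
    if random.random() < P_NEW or not screens[agent]:
        meme = next_meme_id                     # post something new
        next_meme_id += 1
    else:
        meme = random.choice(screens[agent])    # re-share something recently seen
    shares[meme] += 1
    for viewer in audience[agent]:
        screens[viewer].append(meme)            # older items silently fall off the screen

print("Top memes by shares:", shares.most_common(5))
print("Memes shared exactly once:", sum(1 for c in shares.values() if c == 1))
```

Run with settings like these, the share counts tend to come out highly skewed: a few memes spread widely while most die after one or two posts, even though no meme in the model is intrinsically better than another.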
"Our question was, can we look at these things to explain why some become very popular--why some YouTube videos go viral and others don't. There might be one dancing cat that gets a million views, but another dancing cat where only two people watch," Menczer said. Other studies have tried to make connections to the time of day an item is posted or the number of connections the person issuing the post has, but those may be misleading, he said.
Like an ecosystem that can only support a limited number of species and individuals within that species, social networks pick winners that go viral and losers that die without a trace according to a brutal process of natural selection, where chance plays a big role, according to the study.
The rules of the game can be summarized as "ideas propagate on the network, and people forget after a while," Menczer said. The researchers found that if they altered the assumptions in the model--for example, by altering the degrees of separation between social media users or their ability to pay attention--their model no longer matched reality. On the other hand, when they stuck to their base model--in which no post was inherently more interesting or popular than the other--the data matched up closely.
This does not really mean that there are no ideas more interesting or people more influential than others, only that the role of those factors is easy to overestimate, Menczer said.
"Of course, there are things that are more objectively interesting than others and if you write about them, probably yes, that will receive a bunch of following. However, even if you don't you might get lucky. And even if you do, someone else might be posting the same thing and get the attention instead of you," he said.
|
<urn:uuid:24c4ded3-e524-4e87-9d52-df27ee424cc4>
|
CC-MAIN-2017-09
|
http://www.networkcomputing.com/networking/science-probes-why-tweets-go-viral/1156935268?cid=sbx_bigdata_related_news_industry_analysis_big_data&itc=sbx_bigdata_related_news_industry_analysis_big_data
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00361-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.954089 | 896 | 2.703125 | 3 |
Super Fast and Super Green
By Mel Duvall | Posted 2007-11-28
New ranking of supercomputers compares their performance against their energy consumption
Can the world's most powerful computers also be green? That was the underlying question behind a new ranking of the world's supercomputers, which compares the number of floating-point operations they can perform per second against the power they consume, or "FLOPS per watt."
The list, compiled by researchers at Virginia Tech, is intended to draw attention to the idea that designers of supercomputers should pay more attention to energy consumption, not just speed or computational prowess.
"In the world of supercomputers, the thinking has traditionally focused entirely on performance," says Virginia Tech associate professor Wu Feng, who spearheaded the Green 500 project with Kirk Cameron. "No one has worried about the power being consumed. Well, the world has changed. There is a lot of concern right now about the amount of power being consumed by computers and data centers in general, and we felt it was time to do something that really challenged the thinking of the (supercomputer) establishment."
In fact, as manufacturers have pursued the goal of building supercomputers that can complete hundreds of trillions of floating-point operations per second, they have inadvertently created computers that consume so much energy and produce so much heat that they require elaborate cooling systems to ensure their proper operation.
The first Green 500 list, which was released last week, will be refined in the months and years ahead, adds Feng. Initially, not all companies with computers on the Supercomputer 500 ranking would or could provide energy consumption metrics for their machines. However, based on the feedback received to date, and the attention the Green 500 is generating, Feng believes the list will be more comprehensive in the future and the methodology more refined.
The top of this year's list is dominated by IBM's Blue Gene supercomputers. In fact, IBM had nine of the top 10 sites. The only non-IBM installation to crack the top 10 was a Dell PowerEdge cluster at Stanford University. In terms of flops per watt, the top installation was an IBM Blue Gene supercomputer installed at the Science and Technology Facilities Council Daresbury Laboratory in Cheshire, England. It achieves 357.23 megaflops per watt.
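The metric behind the ranking is plain arithmetic: sustained floating-point throughput divided by power draw. A hedged sketch (the performance and power figures below are illustrative back-calculations, not numbers published by the Green 500):

```python
# The ranking metric is sustained performance divided by power draw. The
# figures below are illustrative back-calculations, not published Green 500 data.
systems = [
    # (name, sustained gigaflops, power in kilowatts)
    ("Blue Gene (Daresbury)", 10_000, 28.0),
    ("Commodity cluster A", 12_000, 90.0),
    ("Commodity cluster B", 8_000, 45.0),
]

ranked = sorted(
    ((name, (gflops * 1_000) / (kilowatts * 1_000)) for name, gflops, kilowatts in systems),
    key=lambda entry: entry[1],
    reverse=True,
)
for name, mflops_per_watt in ranked:
    print(f"{name:25s} {mflops_per_watt:8.2f} megaflops/watt")
```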
Much of IBM's success is due to the use of more efficient processors. The new generation of Blue Gene supercomputers uses 850 MHz CPUs, compared with the 2GHz CPUs used in most supercomputers.
While this is the Green 500's first year, the genesis for the list dates back to 2001 when Feng was working at the Los Alamos National Laboratory. He was struggling to maintain the reliability of a supercomputer installed at the laboratory, which was overheating due to the high number of power-hungry processors.
That led him to design a new supercomputer, which focused as much on energy efficiency as on computational power. The result was a machine named Green Destiny, which used 240 Transmeta processors, operating at 667 MHz, and sipped only 3.2 kilowatts of power (about the same as two hair dryers).
|
<urn:uuid:fb1c34e2-3ac6-4776-a015-d681fa66177c>
|
CC-MAIN-2017-09
|
http://www.baselinemag.com/c/a/Projects-Management/Super-Fast-and-Super-Green
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00237-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.955255 | 658 | 2.609375 | 3 |
According to a new study, 90% of Americans do not feel safe online.
The report released by McAfee and the National Cyber Security Alliance (NCSA) revealed that 59% of Americans say that their job is dependent on a safe and secure internet.
However, 90% do not feel completely safe from hackers, viruses and malware while online.
"The threat to the safety of Americans online is growing every day and as the survey shows the fear of Americans has also grown to 90 percent," said Gary Davis, vice president of global consumer marketing at McAfee. "It is our responsibility to make sure that consumers are aware of these growing threats so they can be best prepared to defend themselves against these hidden criminals."
Last year 26% of Americans were notified by a business or online service provider that their personal information had been lost or compromised due to a data breach.
The survey of 1,000 adult Internet users found a disparity between online safety perceptions and actual practices involving smartphone security and password protection.
"The Internet is a shared resource for so many of our daily activities which is why protecting it is a shared responsibility," said Michael Kaiser, executive director of the NCSA. "Everyone should take security measures, understand the consequences of their actions and behaviours and enjoy the benefits of the Internet."
A recent study by McAfee revealed that nearly 20% of Americans browse the internet unprotected, while another 12% have zero security protection and 7% have their security software installed but disabled.
"The need for consumers to stay educated is necessary now more than ever with nine in ten Americans using their computers for banking, stock trading or reviewing personal information," said McAfee in a blog post.
|
<urn:uuid:e596d41e-a94c-4b6b-9ee0-e54cbc892edd>
|
CC-MAIN-2017-09
|
http://www.cbronline.com/news/one-in-four-americans-suffered-security-breaches-last-year-021012
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00237-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.960239 | 356 | 2.8125 | 3 |
If smartwatch development can be thought of in terms of dance crazes, the next phase might become the Twist.
Researchers at Carnegie Mellon University have developed a prototype smartwatch that expands user interface possibilities with a tactile face that can be rotated.
[Image: This prototype smartwatch developed at Carnegie Mellon University gives users more input options by being able to tilt, click or twist its face. Credit: Chris Harrison/Carnegie Mellon University]
Instead of just scrolling through a touchscreen, talking or pushing buttons, users can manipulate the face of the prototype to interact with the display in more ways.
Exhibited at the ongoing ACM CHI Conference on Human Factors in Computing Systems in Toronto, the prototype is being billed as a way to overcome the small form factor and input limitations on standard smartwatches, unlocking their powerful computer potential.
Users can twist, pan in two dimensions, tilt or click the prototype's face, as well as use the conventional scrolling or button functions.
Applications include moving and zooming around a map on the display without fingers obscuring the view, clicking the face to snap a photo or twisting the face to adjust the volume on a music player app.
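As a rough illustration of that interaction model (this is not code from the CMU prototype, and all names here are invented), an application-side dispatcher might map the mechanical gestures to actions along these lines:

```python
# Hypothetical sketch of mapping the prototype's mechanical gestures to app
# actions; the names and mappings are invented, not taken from the CMU work.
from enum import Enum, auto

class Gesture(Enum):
    TWIST = auto()   # rotate the watchface
    TILT = auto()    # tip the face toward one edge
    CLICK = auto()   # press the whole face down
    PAN = auto()     # slide the face in the x/y plane

def handle(gesture, amount=0.0):
    if gesture is Gesture.PAN:
        return f"scroll map by {amount} mm"
    if gesture is Gesture.TWIST:
        return f"volume {'+' if amount > 0 else ''}{amount:.0f}%"
    if gesture is Gesture.TILT:
        return "zoom in" if amount > 0 else "zoom out"
    return "camera shutter"          # CLICK

print(handle(Gesture.TWIST, 10))     # volume +10%
print(handle(Gesture.CLICK))         # camera shutter
```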
A demo on YouTube also shows how the prototype can be used to play the first-person shooter "Doom," with panning, twisting and clicking motions serving as controls.
Users also don't have to lift their fingers from the device to re-target an object or menu item.
"Since our fingers are large, and people want smartwatches to be small, we have to go beyond traditional input techniques," Gierad Laput, a PhD student at Carnegie Mellon's Human-Computer Interaction Institute, wrote in an email.
"Digitizing watchface mechanical movements offers expressive interaction capabilities without occluding the screen. It is a simple yet clever idea, and it is easy to implement."
In a related paper, the researchers, including Chris Harrison, assistant professor of human-computer interaction, wrote that the approach is cheap and potentially compact.
Further development of the watch could add functions such as 3D pan, yaw, pitch and roll, enhancing the input options.
The researchers are interested in commercializing the technology but have no concrete plans yet, Laput said.
With Samsung, Sony and other major manufacturers turning out wrist computers, the smartwatch industry was worth about US$700 million in 2013 and is expected to reach $2.5 billion this year, according to Zurich-based research firm Smartwatch Group.
|
<urn:uuid:eeb9df61-02b8-445a-99e6-b5ffca34e7fa>
|
CC-MAIN-2017-09
|
http://www.networkworld.com/article/2176497/smb/researchers-try-new---39-twist--39--on-smartwatches.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00533-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.926609 | 521 | 2.5625 | 3 |
When LIDAR came down to Earth, mapping projects took off
Second of four parts
Sometimes it takes a crisis to spur innovation.
Such was the case with creation of the Red River Basin Decision Information Network (RRBDIN), and its LIDAR mapping of large swaths of North Dakota, Minnesota and central Canada, said Charles Fritz, director of the International Water Institute.
"In the Red River basin we had a major flood back in 1997. That's when Grand Forks went under," Fritz said. "My organization actually was one of the outcomes from that flood." In the wake of the flood, the Federal Emergency Management Agency took action that resulted in the creation of the RRBDIN, which is managed by the International Water Institute with support from the U.S. Army Corps of Engineers, North Dakota State University Extension and other partners.
"When the 1997 flood hit, we had a huge problem because the only things we had to work with were old [U.S. Geological Survey] maps," Fritz said. Even with the obvious need for better information, FEMA and state funding weren't enough to do the job. "At the time, in 2000, we knew LIDAR was available. We were looking at different technologies to acquire better elevation data, but the numbers were astounding. I mean, we were talking about $35 million."
LIDAR, or light detection and ranging, is a mapping technology that bounces light emitted from a laser source, typically aboard an aircraft, and captures the return to build a detailed map of an area. It’s been around for 50 years, but until recently was an expensive and cumbersome technology limited to specialized uses.
By 2005, however, "Costs were coming down, and the technology was improving," Fritz said. "We said OK, let's move on this."
In 2009, the RRBDIN finished its first pass, collecting LIDAR data covering 54,000 square miles. The result was 8 terabytes of data that the team needed to make available in a useful form. Since then, the team has been working to develop and deploy a series of online tools.
"We did not want have a situation where everybody in the basin had to know ArcView if they want to use the LIDAR data, so we put together the online viewer," Fritz said. "There are some really cool tools in there."
RRBDIN's LIDAR viewer allows users to create and customize maps with elevations down to 2-foot contours or spot elevations. There's also a forecast display tool. When the National Weather Service generates a forecast of a flood in Fargo of, say, 38.5 feet, "What does that mean from an inundation and extent standpoint?" Fritz asked. "We can take the LIDAR data, combine it with that 38.5-foot forecast and produce an interactive map that shows the extent of the flooding."
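Under the hood, that kind of display reduces to comparing the forecast stage against a LIDAR-derived digital elevation model (DEM). A minimal sketch with a made-up elevation grid:

```python
# Minimal inundation sketch: compare a forecast river stage against ground
# elevations from a LIDAR-derived DEM. The tiny grid below is made up; a real
# DEM would be loaded from a raster file.
import numpy as np

dem_feet = np.array([            # ground elevations, same datum as the river gauge
    [41.0, 39.5, 38.0, 37.2],
    [40.2, 38.9, 37.5, 36.8],
    [39.8, 38.1, 37.0, 36.1],
])
forecast_stage = 38.5            # e.g., a National Weather Service flood forecast, in feet

flooded = dem_feet <= forecast_stage
depth_feet = np.where(flooded, forecast_stage - dem_feet, 0.0)

print("Flooded cells:\n", flooded)
print("Water depth (ft):\n", depth_feet.round(1))
print(f"Fraction of area inundated: {flooded.mean():.0%}")
```

A production tool would also test hydraulic connectivity to the river channel rather than flooding every low-lying cell, but the core comparison is this simple.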
Fritz said the LIDAR data can be combined with all sorts of other data. It can, for example, be used to determine where and to what extent nutrients will flow in irrigated fields.
"We're talking about efficiencies here, where we can get the most bang for the buck, whether it's a flood damage reduction project or water quality project or natural resource enhancement project," he said. "All of that is predicated on the LIDAR data."
When he started the project, Fritz felt like he was a voice in the wilderness. He spent a year and a half trying to convince people that the project was important to them. "Now," he said, "I can tell you that with no exceptions the people who have experience with the data that we have collected are all saying that it is the best thing since sliced bread. Even most local watershed districts will not start a meeting unless they have that LIDAR viewer open on the table."
While Fritz's team is focused on building more applications to use the data, they are also looking forward to a fresh collection of data. "Since we completed data collection in 2009," he said, "there's been a lot of work, especially in the major metropolitan areas. Fargo, Grand Forks. They put up flood dikes, etc., so now we are talking about how we're going to update the current LIDAR data set."
If flatness is a hydrological challenge in the Red River Basin, rugged terrain is a major challenge in Oregon. Feet-on-the-ground surveys of forest inventories, for example, are particularly time-consuming and, therefore, expensive and quickly outdated.
Scanning from the air is much less costly. "It ends up being about offsetting costs," said John English, LIDAR data coordinator for Oregon's Department of Geology. "It is allowing people to save money on these long surveys."
With two specialists – English and one other scientist collecting and organizing the data – the state of Oregon covers between 5,000 and 7,000 square miles per year at a cost of $3 million to $4 million. So far, the team has collected LIDAR data on 26 percent of the state, focusing first on the more heavily populated western half. The state also has approximately 25 GIS analysts working with the data for a variety of agencies and purposes.
The airborne LIDAR collects eight points per square meter, firing more than 100,000 pulses per second. According to English, that's enough to ensure that even in dense forest, some of the points reach the ground. Comparing the distances of pulses reflected off the treetops with those reflected off the ground allows the team to calculate the heights of trees very accurately. And not only that: the full array of returned pulses allows the team to survey undergrowth, too.
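In simplified terms, the height measurement is the difference between the first return (off the canopy top) and the last return that reaches the ground. A sketch with invented pulse records; real data would come from a LAS/LAZ point cloud:

```python
# First-return vs. last-return height calculation. The pulse records are
# invented; real data would come from a LAS/LAZ point cloud.
pulses = [
    # (first_return_elevation_m, last_return_elevation_m)
    (312.4, 289.1),   # canopy top above the ground beneath a tall tree
    (301.7, 288.9),
    (289.2, 289.0),   # open ground: both returns are essentially the surface
]

for first_m, last_m in pulses:
    canopy_height_m = first_m - last_m     # tree height ~ first return minus ground return
    label = "vegetation" if canopy_height_m > 2.0 else "bare ground"
    print(f"height {canopy_height_m:5.1f} m -> {label}")
```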
Oregon's LIDAR efforts, of course, aren't restricted to forest inventories. In fact, the first use of LIDAR was a joint effort of the state's Department of Geology and Mineral Industries and the U.S. Geological Survey to conduct a landslide study in the Portland area in 2004. They also use the data for habitat analysis assessment.
"People are finding more uses for it," English said. "Municipal mapping of streets, measuring volumes for displaced sediment, flood mapping, hazard mapping. You can even detect wear and tear on roads. It can do a rough survey on everything in an entire city. If you have a house and you know its height, and there's a flood, we can infer how many houses are totaled according to FEMA. Right now we’re producing the most accurate flood-inundation maps ever made."
English says the plan is to scan the entire state. "But we're trying to do it in a methodical way. Primary areas of interest are places where work is being done and where humans interact with the environment,” he said. “So we've covered about 98 percent of the population of Oregon."
PREVIOUS: How LIDAR is revolutionizing maps, geospatial data
NEXT: How LIDAR maps beneath the sea, and in the fourth dimension
|
<urn:uuid:920f3d64-84af-4d1e-b214-5d13ae8decbf>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2013/03/13/when-lidar-costs-decreased-mapping-projects-took-off.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00533-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.967934 | 1,465 | 3.140625 | 3 |
If you use an Apple iPhone, iPad or other iDevice, now would be an excellent time to ensure that the machine is running the latest version of Apple’s mobile operating system — version 9.3.1. Failing to do so could expose your devices to automated threats capable of rendering them unresponsive and perhaps forever useless.
On Feb. 11, 2016, researcher Zach Straley posted a YouTube video exposing his startling and bizarrely simple discovery: manually setting the date of your iPhone or iPad all the way back to January 1, 1970 will permanently brick the device (don't try this at home, or against frenemies!).
Now that Apple has patched the flaw that Straley exploited with his fingers, researchers say they’ve proven how easy it would be to automate the attack over a network, so that potential victims would need only to wander within range of a hostile wireless network to have their pricey Apple devices turned into useless bricks.
Not long after Straley’s video began pulling in millions of views, security researchers Patrick Kelley and Matt Harrigan wondered: Could they automate the exploitation of this oddly severe and destructive date bug? The researchers discovered that indeed they could, armed with only $120 of electronics (not counting the cost of the bricked iDevices), a basic understanding of networking, and a familiarity with the way Apple devices connect to wireless networks.
Apple products like the iPad (and virtually all mass-market wireless devices) are designed to automatically connect to wireless networks they have seen before. They do this with a relatively weak level of authentication: If you connect to a network named “Hotspot” once, going forward your device may automatically connect to any open network that also happens to be called “Hotspot.”
For example, to use Starbucks' free Wi-Fi service, you'll have to connect to a network called "attwifi". But once you've done that, you won't have to manually connect to a network called "attwifi" ever again. The next time you visit a Starbucks, just pull out your iPad and the device automagically connects.
From an attacker’s perspective, this is a golden opportunity. Why? He only needs to advertise a fake open network called “attwifi” at a spot where large numbers of computer users are known to congregate. Using specialized hardware to amplify his Wi-Fi signal, he can force many users to connect to his (evil) “attwifi” hotspot. From there, he can attempt to inspect, modify or redirect any network traffic for any iPads or other devices that unwittingly connect to his evil network.
TIME TO DIE
And this is exactly what Kelley and Harrigan say they have done in real-life tests. They realized that iPads and other iDevices constantly check various “network time protocol” (NTP) servers around the globe to sync their internal date and time clocks.
The researchers said they discovered they could build a hostile Wi-Fi network that would force Apple devices to download time and date updates from their own (evil) NTP time server: And to set their internal clocks to one infernal date and time in particular: January 1, 1970.
The result? The iPads that were brought within range of the test (evil) network rebooted, and began to slowly self-destruct. It’s not clear why they do this, but here’s one possible explanation: Most applications on an iPad are configured to use security certificates that encrypt data transmitted to and from the user’s device. Those encryption certificates stop working correctly if the system time and date on the user’s mobile is set to a year that predates the certificate’s issuance.
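One way to see why a 1970 clock is so destructive: certificate validation checks that the current time falls inside the certificate's notBefore/notAfter window, so an epoch-era date fails every such check. A schematic illustration with made-up validity dates, not Apple's actual implementation:

```python
# Certificate validation requires the current time to fall inside the
# certificate's notBefore/notAfter window, so an epoch-era clock fails the
# check. Schematic only, with made-up dates -- not Apple's implementation.
from datetime import datetime, timezone

not_before = datetime(2015, 6, 1, tzinfo=timezone.utc)   # hypothetical issuance date
not_after = datetime(2017, 6, 1, tzinfo=timezone.utc)    # hypothetical expiry date

def clock_allows_certificate(now):
    return not_before <= now <= not_after

print(clock_allows_certificate(datetime(2016, 4, 15, tzinfo=timezone.utc)))  # True
print(clock_allows_certificate(datetime(1970, 1, 1, tzinfo=timezone.utc)))   # False
```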
Harrigan and Kelley said this apparently creates havoc with most of the applications built into the iPad and iPhone, and that the ensuing bedlam as applications on the device compete for resources quickly overwhelms the iPad’s computer processing power. So much so that within minutes, they found their test iPad had reached 130 degrees Fahrenheit (54 Celsius), as the date and clock settings on the affected devices inexplicably and eerily began counting backwards.
|
<urn:uuid:c89111ae-d5ad-4bab-80c3-e07c0c31b4c7>
|
CC-MAIN-2017-09
|
https://krebsonsecurity.com/tag/apple-1970-bug/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00057-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.940402 | 881 | 2.546875 | 3 |
New research claims the ‘undetected’ malware can wreak havoc.
Malware called Gyges, developed by the Russian intelligence service, has been leaked to cyber-criminals and has also been incorporated into ransomware and online banking Trojan toolkits, threat analysis company Sentinel Labs has claimed.
Gyges mainly targets Windows 7 and 8 users running 32 and 64-bit versions of the platforms.
What makes this sophisticated piece of malware worse is that it is virtually invisible and capable of operating undetected for long periods of time. It also seems to bear the stamp of a state.
However, Sentinel Labs’ research added that with constant monitoring on endpoints, it does become difficult for the otherwise "invisible" malware to hide or evade detection.
In his research paper, Udi Shamir, head of research at Sentinel Labs, said: "We first detected Gyges with our heuristic sensors and then our reverse engineering task force performed an in-depth analysis.
"It appears to originate from Russia and be designed to target government organisations. It comes to us as no surprise that this type of intelligence agency-grade malware would eventually fall into cybercriminals’ hands."
A notable fact about Gyges is that it uses less intrusive techniques and strikes when a user is inactive, in contrast to the more common technique of waiting for user activity.
Sentinel recovered government traces inside the carrier code, which it later connected to previous targeted attacks that used the same characteristics.
"At this point it became clear that the carrier code was originally developed as part of an espionage campaign," Shamir said.
Gyges code can be used for eavesdropping on network activities, key logging, stealing user identities, screen capturing and other espionage techniques, as per the research analysis.
The team also claims that Gyges can be used for money extortion via hard drive encryption (ransomware) and online banking fraud. It can also install rootkits and trojans, create botnets and zombie networks, and target critical infrastructure.
|
<urn:uuid:a23a0fe4-9f17-473c-88e2-2896cb117e41>
|
CC-MAIN-2017-09
|
http://www.cbronline.com/news/cybersecurity/data/russian-state-authored-espionage-malware-up-for-sale-210714-4322631
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00233-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.950615 | 417 | 2.609375 | 3 |
Picking the Connectome Data Lock
Back in 2005, researchers at Indiana University and Lausanne University simultaneously (yet independently) spawned a concept and pet term that would become the hot topic in neuroscience for the next several years—connectomics.
The concept itself isn’t necessarily new, even thought the use of “connectomics” in popular science circles is relatively so. The idea was formed in the late 1980s and has since evolved as described succinctly below:
A hybrid between the study of genomics (the biological blueprint) and neural networks (the "connect"), this term quickly caught on, including with large organizations like the National Institutes of Health (NIH) and its Human Connectome Project.
For instance, the NIH is in the midst of a five-year effort (starting in 2009) to map the neural pathways that underlie human brain function. The purpose is to acquire and share data about the structural and functional connectivity of the human brain to advance imaging and analysis capabilities and make strides in understanding brain circuitry and associated disorders.
And talk about data… just to reconstruct the neural and synaptic connections in a mouse retina and primary visual cortex involved a 12 TB data set (which incidentally is now available to all at the Open Connectome Project).
Mapping the connectome requires a complete mapping process of the neural systems on a neuron-by-neuron basis, a task that requires accounting for billions of neurons, at least for most larger, complex mammals. According to Open Connectome Project, the human cerebral cortex alone contains something in the neighborhood of 10^10 neurons linked by 10^14 synaptic connections.
That number is a bit difficult to digest without context, so how about this: the number of base-pairs in a human genome is 10^9.
The Big Data Challenges of Connectomics
To build out the human connectome at the microscale level required (there are already some macroscale efforts based on MRI), serious data collection, machine-vision and algorithmic hurdles must be addressed.
While there are groups working on new innovations in serial electron microscopes to move ahead on the machine-vision and image-processing side, the sophisticated pattern recognition software tools are still lagging behind.
According to Sebastian Seung, Professor of Computational Neuroscience at MIT and author of the book, Connectome, if we can collectively tap the brainpower of the planet (that’s a total of over 7 billion of us) we might be able to fully understand the brain as a singular unit.
To put the Connectome concept into some kind of context, Seung asks us to envision the crisscrossing network of airline routes on a map. He says that if each city on that map was a neuron, and every flight between two cities was a connection, it would create a diagram that goes pretty far in terms of showing a neuron network.
But in terms of the brain, that network expands to mind-boggling proportions. Now imagine that expanded to 100 billion cities and ten thousand flights from every city. That would be the complexity of the map of the network inside our brains.
Seung says that the value of the Connectome concept can be traced back to an old hypothesis in neuroscience that suggests our memories are somehow encoded in that crisscrossing network diagram—and that each Connectome is unique. "It may be that your particular pattern of connections has a lot of information inside that huge wiring diagram. That might actually contain the information of your memories."
He suggests that memories create the idiosyncratic aspects of our personal identities. Further, your personality, and perhaps even psychiatric disorders, might also be due to some distinctive pattern in brain wiring.
To this end he claims that while his genome might not reveal anything about individuality, a Connectome certainly will—but that individuality might change over time, just as our personalities do.
As Seung says, "That map of your brain is not fixed, but your experiences can alter it. That's one reason why we believe that experiences are leaving the memories inside your brain through the Connectome."
The question is, how can scientists tap the massive data required to reverse engineer the Connectome—to get to the heart of what makes us individuals? This is a delicate, intricate machine. "You've got a machine, you want to take it apart, disassemble it," he says. "Those of you out there who are real super nerds, super engineers, will resonate with the idea of disassembling a machine to figure out how it works."
The difference here is that scientists can't reverse engineer using the rip-and-read method. It's impossible (not to mention gross) to think about pulling the brain apart to home in on the individual neuron level, and even if it was possible, the networks are tightly, impossibly tangled… part of what makes the Connectome idea so interesting in the first place.
Seung has a radical approach to this reverse-engineering problem. His team embeds a (dead) brain in really hard plastic resins and slices it into extremely thin sections–a thousand times thinner than a hair. They then image these brain slices in an electron microscope, which offers extremely high resolution.
What this creates is the closest thing possible to a virtual brain. It offers an image of every neuron and every synapse inside each slice. In order to map out that piece of brain, however, researchers have to trace the trajectory of every branch of a neuron through those images and find those synapses. That turns out to be a very time-consuming task—and an exercise in some seriously data-intensive computing.
Inside the Brain Slice Data
The images from one cubic millimeter of brain amount to about a petabyte of data if you image it at that resolution.
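That petabyte figure is easy to sanity-check with back-of-envelope arithmetic; assuming, purely for illustration, 10-nanometer isotropic voxels stored at one byte apiece:

```python
# Back-of-envelope check on "a petabyte per cubic millimeter", assuming
# (purely for illustration) 10 nm isotropic voxels stored at one byte each.
voxel_edge_nm = 10
nm_per_mm = 1_000_000                            # 1 mm = 10^6 nm

voxels_per_edge = nm_per_mm // voxel_edge_nm     # 100,000 voxels along each axis
voxels_per_mm3 = voxels_per_edge ** 3            # 10^15 voxels in a cubic millimeter
bytes_total = voxels_per_mm3 * 1                 # 1 byte per greyscale voxel

print(f"{voxels_per_mm3:.1e} voxels -> about {bytes_total / 1e15:.0f} PB per cubic millimeter")
```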
As Seung says, “I want to emphasize that I’m talking about really high resolution microscopy here, not an MRI scan like you see in the newspapers. Their cubic millimeter would be one pixel, but here we’re talking about a petapixel.
That data is so overwhelming that we are still struggling to deal with that kind of deluge. One way is to use AI, artificial intelligence, to see the paths in those images. But AI still isn't perfect, so we need people to interact with the AI, to guide it and to correct it."
Seung’s team is inviting the public to join in the brain slice-inspired AI fun via a new site called eyewire.org.
There, potential contributors to the project are faced with an image of the retina, the neural tissue at the back of the eye. The site serves the imagery up in your web browser, and you can join an online community and help the team map out the neural connections. Seung calls it an exciting way of getting everyone involved in a kind of endeavor that formerly only professionals could take part in.
“It’s a game and it’s a game that we’ve all enjoyed at one time in our lives. It’s like a gigantic three-dimensional coloring book. You just have to stay between the lines and color in the branch of a neuron as you follow it through this three-dimensional stack of images. It’s at its preliminary stages.
The first result is that it can be fun. We have addicts, we don’t have a lot of users yet, but we have some people who are very dedicated and excited. We have a sense of community, if you go to the discussion forums, you’ll see all kinds of great comments, and questions, and hilarious things that people think of, creative things, things that enhanced our site. We’ve learned a lot from our users.”
|
<urn:uuid:6fed3039-3ca1-40b5-801c-41a32f7770ae>
|
CC-MAIN-2017-09
|
https://www.datanami.com/2012/05/01/picking_the_connectome_data_lock/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00409-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.943241 | 1,640 | 2.53125 | 3 |
Rootkit.Duqu.A is the current star in the world of malware but, as history shows, that fame will be short-lived. Just like fashion models, modern malware has a lifespan in the media eye of a couple of weeks to a couple of months, tops. They then fade into the shadow of more dangerous and sophisticated tools, according to Bitdefender.
Before Duqu, a multitude of e-threats claimed the award for the most innovative, most dangerous or most pervasive piece of malware in the wild. It is a game that malware creators have played with victims – the computer users – or with their arch-enemies – the AV industry – since computers were too large to fit in a regular room and were anything but “personal.”
Without a doubt, 2010 was known for the emergence of Stuxnet, the first piece of malware specifically designed to sabotage nuclear power plants. It can be regarded as the first advanced tool of cyber-warfare.
However, sophisticated malware has also been put to more “civilian” use. Back in 2008, social networking users befriended Trojan.Koobface, a piece of malware that used to spread via social platforms such as Facebook, Twitter and Hi5. Once infected, users would serve as both vectors of infection for other social network contacts and as human robots to solve CAPTCHA challenges for cyber-criminals, among other things.
If you were old enough to “drive” a computer back in 2004, you probably remember the MyDoom worm, a rapidly-spreading mass-mailer worm apparently commissioned by a spam group to automate sending of unsolicited mail via infected computers acting as relays.
1999 brought another game changer named Melissa, a mass-mailing macro virus, which managed to overload Internet mail systems to the point of shutdown. If most computer users knew they should be careful with exe files, they were completely unaware that opening a Word document would spread the worm to their e-mail contact lists.
The early 90s marked an important milestone for the traditional antivirus industry that was relying on string signatures to statically identify malware. The emergence of Chameleon, an e-threat actually able to mutate its code after each infection in order to trick AV scanners and evade detection, signaled that it was time for the industry to switch to more advanced defense technologies such as heuristics and sandboxing.
If you thought that Rootkit.Rustock and Rootkit.TDSS were packed full with novel technologies, you’re in for a surprise. Boot sector malware has been around since 1986, when two Pakistani computer-shop owners created the Brain Boot Sector virus, a piece of harmless code that was able to camouflage its presence by tampering with the result of disk read requests.
Of course, this list could only end with the great-grand parent of the modern Trojan, the Pervading Animal game. Built on a Univac 1100/42 mainframe that looks like this, the game had primitive artificial intelligence support and was complemented by a “software distribution routine” called PERVADE that would copy the game in the directories of other users of the Univac mainframe. Although the purpose was to allow other users to grab a copy of the game, this method of distribution is what we call today a “classic Trojan Horse attack”.
However, the history of malware – a term that we tend to associate with modern threats such as Bankers or keyloggers – is rife with incidents that allowed viruses to morph from innocent pranks to advanced military weapons.
|
<urn:uuid:2c0825a4-6fc5-4b26-b539-48f626c09136>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2011/10/24/duqu-another-most-advanced-piece-of-malware/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00109-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.962544 | 738 | 2.515625 | 3 |
Bring Your Own Device: Preparing for the influx of mobile computing devices in schools
With over 90 percent of high school students using technology to study or work on homework assignments, students are some of the most enthusiastic and savvy users of mobile computing devices. How can schools continue to advance student knowledge of technology with smaller budgets?
View this informative white paper to learn how schools are adopting bring your own device policies. Students can collaborate on class assignments and various other tasks while advancing their knowledge of technology. Uncover some of the other benefits of BYOD by reading this white paper.
|
<urn:uuid:10713237-0d2a-4004-83ba-3174e710a7ae>
|
CC-MAIN-2017-09
|
http://www.bitpipe.com/detail/RES/1347290709_74.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00581-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.931396 | 122 | 2.921875 | 3 |
Satellite has been thought of as the choice of last resort for connecting cell sites to the core network. The high recurring cost of satellite bandwidth along with the ever-growing demand for more and more bandwidth has kept network planners away from considering satellite backhaul as a viable option.
Mobile Network Operators (MNOs) have market opportunities, and in many cases face legal requirements, to provide nationwide coverage. Universal Service Obligations (which are part of the MNO’s license grant) require the MNO to provide mobile service to distant, sparsely populated communities. Such deployments can be challenging because of the lack of any telecom or power infrastructure to support the cell site equipment. Further, revenue from such sparsely populated communities may not be able to cover the cost of the MNO’s infrastructure. On the other hand, there are many affluent communities needing mobile service (such as oil rigs, mining camps, and vacation resorts) who can afford to pay, but are again hampered by their own unique deployment challenges.
Other proven mobile service opportunities include: cells on wheels (temporary or emergency communications), commercial aircraft, cruise ships, trains, etc.
Satellite technology is a key enabler of mobile technologies, even in the most remote areas. Satellite can provide an instant footprint anywhere on Earth. The technology is distance-insensitive and can backhaul 2G, 3G, 4G, WiFi, and other mobile technologies. Conventional satellites, with their wider footprint, are a good fit for 2G backhaul, catering to hundreds of kbps of traffic. Emerging High Throughput Satellites (HTS) can illuminate the same footprint but with the ability to drop tens of Mbps of capacity for 3G and 4G base stations. HTS capacity can even open up suburban and urban markets for mobile backhaul – for instance, to fill specific coverage voids or along roadways or waterways.
As mobile technology has evolved, the traffic profile has shifted to effectively all data. Social media, video streaming, web browsing, and file sharing dominate the traffic through the internet. More and more of this traffic is carried on mobile networks, which have to be engineered differently than the legacy voice networks. In fact, 3G and 4G traffic is mostly web dominated and asymmetric in nature.
Satellite is ideally suited to carry asymmetric data. The forward direction can carry 100’s of Mbps, multi-casted to the VSAT terminals, while the traffic in the reverse direction can share traffic on common channels. For heavy traffic sites, the VSATs can even support ‘nailed-up’ channels so that the capacity is available exclusively to the VSAT. The bottom line – VSATs and mobile backhaul are a perfect match.
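To make the asymmetry point concrete, here is a rough sizing sketch; every capacity, demand and oversubscription figure below is an illustrative assumption, not a Hughes specification:

```python
# Rough sizing sketch for asymmetric backhaul over a shared HTS beam. Every
# figure below is an illustrative assumption, not a Hughes specification.
beam_forward_mbps = 300     # shared outbound (hub -> sites) capacity in one beam
beam_return_mbps = 60       # shared inbound (sites -> hub) capacity
site_forward_mbps = 8       # busy-hour downlink demand per 3G/4G site
site_return_mbps = 1        # matching uplink demand (roughly 8:1 asymmetry)
oversubscription = 2.0      # statistical multiplexing across sites

sites_by_forward = int(beam_forward_mbps * oversubscription / site_forward_mbps)
sites_by_return = int(beam_return_mbps * oversubscription / site_return_mbps)
print("Sites supportable per beam:", min(sites_by_forward, sites_by_return))
```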
3G started the rush to offer higher and higher speeds for mobile users. As mobile apps took off (Facebook, YouTube, Netflix, Skype, and so on), the demand for higher and higher speed became insatiable. HTS, able to carry hundreds of Mbps of capacity in every beam, offers the ability to satisfy the hunger for mobile speed. Satellite is no longer a choke point for high-speed mobile backhaul. And with the order-of-magnitude increase in satellite capacity at almost no incremental cost for the spacecraft, the cost per bit of HTS service is much lower, eliminating any hurdles for using satellite for mobile backhaul.
Satellite backhaul is a natural fit for mobile backhaul. The technology gives instant connectivity anywhere in the world. The capacity constraint is removed with the advent of HTS. Mobile users can be brought on line in a matter of hours and be a part of the Global community. Mobile operators need not be constrained in expanding their coverage footprints. Let satellite communications bring more and more people together.
|
<urn:uuid:81955de2-3b1f-4add-8bb9-ce29b9aa4f2d>
|
CC-MAIN-2017-09
|
http://www.hughes.com/company/newsroom/stories/cellular-backhaul-the-killer-app-for-high-throughput-satellites?locale=en
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00633-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.923226 | 840 | 2.78125 | 3 |
A paradigm shift is occurring within the agricultural sciences, owing to the genomics-based data explosion and concurrent computational advances. In India, leaders are intent on furthering the burgeoning genomic supercomputing discipline through the establishment of a supercomputing center and a nationwide grid to support bioinformatics and agricultural science.
A recent article in BioSpectrum explores this transformation, noting that the data-intensive nature of genomics is such that it is not well suited to traditional analytical approaches. Thus "the role of bioinformatics emerged as an inter-disciplinary programme, linking computational and mathematical sciences with life sciences," observes the article's author, Rahul Koul.
In New Delhi, ASHOKA – short for Advanced Super-computing Hub for OMICS Knowledge in Agriculture – was established at the Centre for Agricultural Bioinformatics (CABin) as the first supercomputing hub for Indian agriculture. CABin, part of the Indian Agricultural Statistics Research Institute (IASRI), fosters collaborative bioinformatics pursuits at the national and international levels. The center has dedicated high speed connectivity to multiple domain-centric research organizations relating to crops, animals, fisheries and agricultural microbes.
IASRI is working with the agriculture ministry and CDAC, Pune, to build a nationwide grid of supercomputers for agri-science and agricultural planning. The goal of the effort is to have biologists, statisticians and computer scientists working together with a system biology approach geared toward problem-solving. This facility will be open to Indian Council of Agricultural Research (ICAR) members as well as agricultural scientists across India.
The author notes that bioinformatics projects have been underway at ICAR institutions, but these were small scale, isolated efforts. The current initiatives seek to integrate these activities at the national level with an additional emphasis on the field of agriculture. The supercomputing environment that will undergird the research was developed to support the requirements of agricultural bioinformatics and computational biology science.
The subproject "Establishment of National Agricultural Bioinformatics Grid (NABG) in ICAR" is overseen by the National Agricultural Innovation Project (NAIP), part of ICAR. The state-of-the-art datacenter hub, ASHOKA, includes two supercomputers that rank at numbers 11 and 24 on the Indian Institute of Science (IISc) list of top supercomputers in India. The hub has approximately 1.5 petabytes of storage divided into three different types of storage architecture: Network Attached Storage (NAS), Parallel File System (PFS) and archival.
This hub connects to supercomputing systems to form a National Agricultural Bioinformatics Grid that includes the National Bureau of Plant Genetic Resources (NBPGR) in New Delhi and Lucknow, the National Bureau of Agriculturally Important Microbes (NBAIM) in Mau, and the National Bureau of Agriculturally Important Insects (NBAII) in Bangalore. A number of computational biology and agricultural bioinformatics software tools, workflows and pipelines are currently in development. The goal is to provide seamless access to these biological computing resources to scientists across the country.
Head and principal scientist at CABin Dr. Anil Rai explains the impetus for the computational grid and access portal. "We are trying to build this system so that scientists don't have to come here every time for the analysis but to ensure that they can carry out the same while sitting on their desktops," he states. "For that we are building a national bioinformatics portal which is almost 80 percent ready. There is a provision for monitoring of the data results by respective scientists regularly and even SMS alerts to provide quick info on progress are also there. This system will support computational requirements of the biotechnological research in the country. This will also bridge the gap between genomic information and knowledge, utilizing statistical and computational sciences. Further, this will help in establishment of large genomic databases, data warehouse, software & tools, algorithms, genome browsers with high-end computational power to extract information and knowledge from cross-species genomic resources."
Dr. Dinesh Kumar, senior scientist in biotechnology at CABin, adds his support: “It will open up new vistas for downstream research in bioinformatics ranging from modelling of cellular function, genetic networks, metabolic pathways, validation of drug targets to understand gene function and culminating in the development of improved varieties and breeds for enhancing agricultural productivity to many folds.”
|
<urn:uuid:428da615-8d19-4a9f-b4d6-df0c9951e3ad>
|
CC-MAIN-2017-09
|
https://www.hpcwire.com/2013/12/17/supercomputing-bolsters-agricultural-science-india/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00157-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.91548 | 943 | 2.8125 | 3 |
Written by GEOFF MULLIGAN, CHAIRMAN AND CEO, IPSO ALLIANCE
Reposted from Future of Business & Tech
Glance at the wrist of the modern day commuter and you will find fitness trackers and smartwatches monitoring activities and uploading data to the Cloud. A look into the home of a cardiac patient might show tabletop monitoring that sends daily readings into the hands of a specialist in another state. And coming soon are a whole new set of devices and services that will make health care more efficient while saving lives, time and money.
In the growing field of remote healthcare, devices like wireless body fluid analyzers reduce the cost of diagnostic tests, like urinalysis, and enable doctors to treat patients who are otherwise unable to travel. Prototypes of next generation emergency rooms are now being tested with interconnected diagnostic machines. These provide the medical team with a comprehensive view of the patient’s overall health and allow efficient allocation of resources while decreasing potential errors.
Crash to care
The White House SmartAmerica Challenge posed the following scenario: utilize the IoT in order to coordinate first responders in a disaster situation. Locations must be pinpointed and the closest available medical response units should be notified automatically.
Emergency vehicles would need to communicate with traffic lights to provide navigation and traffic management to both responders and the public, ensuring a safe roadmap. Once onsite, EMTs need devices that relay, in real time, vital medical information about the injured parties to hospital personnel, in order to prepare for incoming trauma patients.
IoT systems can now guide first responders not just to the closest hospital, but to one with available space, required medical equipment and necessary specialists. Once inside the hospital, new IoT technology, emerging from programs like OpenICE, will enable medical equipment within the emergency room to intercommunicate and alert the staff if a patient’s vitals are approaching critical levels. For example, the patient’s pulse ox, when compared with blood pressure and respiration rate, can be used to ensure patients are receiving the correct amount of pain medication. The Physician-Patient Alliance for Health and Safety reports that over a four-year period, 700 deaths and 56,000 adverse events could be attributed to Patient Controlled Analgesia. An Internet compatible IoT system of medical devices can reduce these numbers.
Today the IoT encompasses wearable devices and remote monitoring, but researchers are developing the “Ingestible IoT”: a pill with an embedded sensor that will push the boundaries of conventional health care. These “smart pills” can remind you to take your medication (or alert you if your grandmother hasn’t), monitor glucose levels or take photos of your intestinal tract. Combining the IoT with health care will not only drive innovation in services but also reduce costs, increase accuracy and bring services to more of the world’s population.
|
<urn:uuid:e948b6dd-2fca-4741-8feb-820b354f6b37>
|
CC-MAIN-2017-09
|
https://www.amsl.com/blog/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00329-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.907352 | 632 | 2.515625 | 3 |
Kaspersky Lab researchers have recently analysed a piece of malware that works well on all three of the most popular computer operating systems – the only thing that it needs to compromise targeted computers is for them to run a flawed version of Java.
The Trojan is written wholly in Java, and exploits an unspecified vulnerability (CVE-2013-2465) in the JRE component in Oracle Java SE 7 Update 21 and earlier, 6 Update 45 and earlier, and 5.0 Update 45 and earlier.
Once the malware is launched, it copies itself into the user’s home directory and sets itself to run every time the system is booted. It then contacts the botmasters’ IRC server via the IRC protocol, and identifies itself via a unique identifier it generated.
The malware’s sole purpose is to make the infected machine flood specified IP addresses with requests when ordered to do so via a predefined IRC channel. The botmasters simply have to define the address of the computer to be attacked, the port number, the duration of the attack, and the number of threads to be used.
At the time of analysis, the botnet formed by machines “zombified” by this particular Trojan was targeting a bulk email service.
|
<urn:uuid:085d7359-778b-45c1-b9a8-5681f566c438>
|
CC-MAIN-2017-09
|
https://www.helpnetsecurity.com/2014/01/29/java-based-malware-hits-windows-mac-and-linux/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00626-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.911693 | 254 | 2.8125 | 3 |
Is ParalleX This Year’s Model?
By: John Moore
Scientific application developers have masses of computing power at their disposal with today’s crop of high-end machines and clusters. The trick, however, is harnessing that power effectively. Earlier this year, Louisiana State University’s Center for Computation & Technology (CCT) released its approach to the problem: an open-source runtime system implementation of the ParalleX execution model. ParalleX aims to replace, at least for some types of applications, the Communicating Sequential Processes (CSP) model and the well-established Message Passing Interface (MPI), a programming model for high-performance computing. The runtime system, dubbed High Performance ParalleX (HPX), is a library of C++ functions that targets parallel computing architectures. Hartmut Kaiser -- lead of CCT’s Systems Technology, Emergent Parallelism, and Algorithm Research (STE||AR) group and adjunct associate research professor in the Department of Computer Science at LSU -- recently discussed ParalleX with Intelligence in Software.
Q: The HPX announcement says that HPX seeks to address scalability for “dynamic adaptive and irregular computational problems.” What are some examples of those problems?
Hartmut Kaiser: If you look around today, you see that there’s a whole class of parallel applications -- big simulations running on supercomputers -- which are what I call “scaling-impaired.” Those applications can scale up to a couple of thousand nodes, but the scientists who wrote those applications usually need much more compute power. The simulations they have today have to run for months in order to have the proper results.
One very prominent example is the analysis of gamma ray bursts, an astrophysics problem. Physicists try to examine what happens when two neutron stars collide or two black holes collide. During the collision, they merge. During that merge process, a huge energy eruption happens, which is a particle beam sent out along the axis of rotation of the resulting star or, most often, a black hole. These gamma ray beams are the brightest energy source we have in the universe, and physicists are very interested in analyzing them. The types of applications physicists have today only cover a small part of the physics they want to see, and the simulations have to run for weeks or months.
And the reason for that is those applications don’t scale. You can throw more compute resources at them, but they can’t run faster. If you compare the number of nodes these applications can use efficiently -- an order of a thousand -- and compare that with the available compute power on high-end machines today -- nodes numbering in the hundreds of thousands, you can see the frustration of the physicists. At the end of this decade, we expect to have machines providing millions of cores and billion-way parallelism.
The problem is an imbalance of the data distributed over the computer. Some parts of a simulation work on a little data and other parts work on a huge amount of data.
Another example: graph-related applications where certain government agencies are very interested in analyzing graph data based on social networks. They want to analyze certain behavioral patterns expressed in the social networks and in the interdependencies of the nodes in the graph. The graph is so huge it doesn’t fit in the memory of a single node anymore. They are imbalanced: Some regions of the graph are highly connected, and some graph regions are almost disconnected between each other. The irregularly distributed graph data structure creates an imbalance. A lot of simulation programs are facing that problem.
Q: So where specifically do CSP and MPI run into problems?
H.K.: Let’s try to do an analogy as to why these applications are scaling-impaired. What are the reasons for them to not be able to scale out? The reason, I believe, can be found in the “four horsemen”: Starvation, Latency, Overhead, and Waiting for contention resolution -- “SLOW” for short. Those four factors are the ones that limit the scalability of our applications today.
If you look at classical MPI applications, they are written for timestep-based simulation. You repeat the timestep evolution over and over again until you are close to the solution you are looking for. It’s an iterative method for solving differential equations. When you distribute the data onto several nodes, you cut the data apart into small chunks, and each node works on part of the data. After each timestep, you have to exchange information on the boundary between the neighboring data chunks -- as distributed over the nodes -- to make the solution stable.
The code that is running on the different nodes is kind of in lockstep. All the nodes do the timestep computation at the same time, and then the data exchange between the nodes happens at the same time. And then it goes to computation and back to communication again. You create an implicit barrier after each timestep, when each node has to wait for all other nodes to join the communication phase. That works fairly well if all the nodes have roughly the same amount of work to do. If certain nodes in your system have a lot more work to do than the others -- 10 times or 100 times more work -- what happens is 90 percent of the nodes have to wait for 10 percent of the nodes that have to do more work. That is exactly where these imbalances play their role. The heavier the imbalance in data distribution, the more wait time you insert in the simulation.
That is the reason that MPI usually doesn’t work well with very irregular programs, more concretely -- you will have to invest a lot more effort into the development of those programs -- a task not seldom beyond the abilities of the domain scientists and outside the constraints of a particular project. You are very seldom able to evenly distribute data over the system so that each node has the same amount of work, or it is just not practical to do so because you have dynamic, structural changes in your simulation.
I don’t want to convey the idea that MPI is bad or something not useful. It has been used for more than 15 years now, with high success for a certain class of simulations and a certain class of applications. And it will be used in 10 years for a certain class of applications. But it is not well-fitted for the type of irregular problems we are looking at.
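To make the lockstep pattern Kaiser describes concrete, here is a minimal, illustrative sketch of a bulk-synchronous MPI timestep loop in C++. It is not code from HPX or from any particular simulation; the grid size, the stencil and the step count are placeholders chosen only to show the shape of the pattern, in which the boundary exchange after each step forces every rank to wait for its neighbors.

// Illustrative sketch only: the bulk-synchronous "compute, then exchange" MPI pattern.
// Grid size, stencil and step count are placeholders, not values from any real code.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int local_n = 1000;                  // cells owned by this rank
    std::vector<double> u(local_n + 2, 0.0);   // plus two ghost cells

    int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int right = (rank == nranks - 1) ? MPI_PROC_NULL : rank + 1;

    for (int step = 0; step < 100; ++step) {
        // 1. Local timestep computation on this rank's chunk of the data.
        for (int i = 1; i <= local_n; ++i)
            u[i] = 0.5 * (u[i - 1] + u[i + 1]);    // stand-in for the real stencil

        // 2. Boundary (ghost cell) exchange with the neighbors. Each rank blocks
        //    here until its neighbors arrive, so the most heavily loaded rank
        //    paces the entire job.
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[local_n + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[local_n], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

With evenly distributed data the exchange costs little; with a tenfold imbalance, most ranks spend the bulk of each step idle in that exchange, which is the wait time Kaiser describes.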
ParalleX and its implementation in HPX rely on a couple of very old ideas, some of them published in the 1970s, in addition to some new ideas which, in combination, allow us to address the challenges we have to address to utilize today’s and tomorrow’s high-end computing systems: energy, resiliency, efficiency and -- certainly -- application scalability. ParalleX is defining a new model of execution, a new approach to how our programs function. ParalleX improves efficiency by exposing new forms of -- preferably fine-grain -- parallelism, by reducing average synchronization and scheduling overhead, by increasing system utilization through full asynchrony of workflow, and employing adaptive scheduling and routing to mitigate contention. It relies on data-directed, message-driven computation, and it exploits the implicit parallelism of dynamic graphs as encoded in their intrinsic metadata. ParalleX prefers methods that allow it to hide latencies -- not methods for latency avoidance. It prefers “moving work to the data” over “moving data to the work,” and it eliminates global barriers, replacing them with constraint-based, fine-grain synchronization techniques.
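The contrast with that barrier-paced loop can be sketched, loosely, with ordinary C++ futures. The snippet below is only an analogy for the ParalleX ideas of fine-grain tasks and constraint-based synchronization: it uses the standard library rather than HPX's own primitives, and the chunk count and work function are invented for illustration.

#include <future>
#include <numeric>
#include <vector>

// Placeholder for a unit of real work (a stencil update, a graph region, etc.).
double process_chunk(const std::vector<double>& chunk) {
    return std::accumulate(chunk.begin(), chunk.end(), 0.0);
}

int main() {
    std::vector<std::vector<double>> chunks(8, std::vector<double>(1000, 1.0));

    // Launch one asynchronous task per chunk instead of marching all workers in
    // lockstep; a heavily loaded chunk delays only the work that depends on it.
    std::vector<std::future<double>> partials;
    for (const auto& c : chunks)
        partials.push_back(std::async(std::launch::async, process_chunk, c));

    // Synchronization is expressed as a constraint on each individual result
    // rather than as a global barrier across all workers.
    double total = 0.0;
    for (auto& f : partials)
        total += f.get();

    return (total > 0.0) ? 0 : 1;
}

HPX exposes comparable future-based facilities in C++ for expressing this kind of dataflow at scale, but the snippet above should be read as a conceptual sketch rather than as HPX code.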
Q: How did you get involved with ParalleX?
H.K.: The initial conceptual ideas and a lot of the theoretical work have been done by Thomas Sterling. He is the intellectual spearhead behind ParalleX. He was at LSU for five or six years, and he left only last summer for Indiana University. While he was at LSU, I just got interested in what he was doing and we started to collaborate on developing HPX.
Now that he’s left for Indiana, Sterling is building his own group there. But we still tightly collaborate on projects and on the ideas of ParalleX, and he is still very interested in our implementation of it.
Q: I realize HPX is still quite new, but what kind of reception has it had thus far? Have people started developing applications with it?
H.K.: What we are doing with HPX is clearly experimental. The implementation of the runtime system itself is very much a moving target. It is still evolving.
ParalleX -- and the runtime system -- is something completely new, which means it’s not the first-choice target for application developers. On the other hand, we have at least three groups that are very interested in the work we are doing. Indiana University is working on the development of certain physics and astrophysics community applications. And we are collaborating with our astrophysicists here at LSU. They face the same problem: They have to run simulations for months, and they want to find a way out of that dilemma. And there’s a group in Paris that works on providing tools for people who write code in MATLAB, a high-level toolkit widely used by physicists to write simulations. But it’s not very fast, so the Paris group is writing a tool to convert MATLAB to C++, so the same simulations can run a lot faster. They want to integrate HPX in their tool.
ParalleX and HPX don’t have the visibility of the MPI community yet, but the interest is clearly increasing. We have some national funding from DARPA and NSF. We hope to get funding from the Department of Energy in the future; we just submitted a proposal. We expect many more people will gain interest once we can present more results in the future.
John Moore has written about business and technology topics for more than 20 years. Moore’s articles have appeared in publications and on websites, including Baseline, CIO Insight, Federal Computer Week, Government Health IT and Tech Target. Areas of focus include cloud computing, health information technology, systems integration, and virtualization. Moore’s articles have previously appeared in Intelligence in Software.
|
<urn:uuid:6a95058b-8f5c-450b-823b-e2756e0aa373>
|
CC-MAIN-2017-09
|
http://www.intelligenceinsoftware.com/feature/expert_insight/is_parallex_this_years_model/index.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00150-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.943196 | 2,127 | 2.8125 | 3 |
What is the Remote Code Evaluation Vulnerability?
Remote Code Evaluation is a vulnerability that can be exploited if user input is injected into a File or a String and executed (evaluated) by the programming language's parser. Usually this behavior is not intended by the developer of the web application. A Remote Code Evaluation can lead to a full compromise of the vulnerable web application and also web server. It is important to note that almost every programming language has code evaluation functions.
Remote Code Evaluation Explanation and Example
A code evaluation vulnerability can occur if you allow user input inside functions that evaluate code in the respective programming language. This can happen on purpose, for example to expose the language’s mathematical functions in a calculator, or accidentally, because the developer does not expect user-controlled input to reach those functions. Either way, it is generally not advised; using code evaluation is considered bad practice.
Example of Code Evaluation Exploitation
Suppose you want to have a dynamically generated variable name for every user and store that user’s registration date. This is how it could be done in PHP:
eval("\$$user = '$regdate');
Since the username is generally user controlled input an attacker can generate a name like this:
x = 'y';phpinfo();//
The resulting php code would now look like this:
$x = 'y';phpinfo();// = '2016';
As you can see, the variable is now called x and has the value y. After assigning that value to the variable, the attacker can start a new command by using the semicolon (;) and then comment out the rest of the string so that it produces no syntax errors. If this code is executed, the output of phpinfo() will appear on the page. Keep in mind that this is not only possible in PHP but also in any other language with functions that evaluate input.
Stored Remote Code Evaluation Explanation and Example
Unlike the above example this method does not rely on any specific language function, but on the fact that specific files are parsed by the language's interpreter. An example for this would be a configuration file that is included in a web application. Ideally you should avoid using user input inside files that are executed by an interpreter as this can lead to unwanted and dangerous behavior. This kind of exploit technique is often seen in combination with an upload functionality that does not do the sufficient checks on file types and extensions.
Example of Stored Code Evaluation Exploitation
You develop a web application that has a control panel for every user. The control panel has some user-specific settings, such as a language variable that is set from a request parameter and then stored inside a configuration file. An expected input could look like this:
lan=de
The above will then be reflected as $lan = 'de'; inside the configuration file. However, an attacker could now change the language parameter to something like:
lan=de';phpinfo()//
The above would result in the following code inside the file:
$lan = 'de';phpinfo()//';
And the above will be executed when the configuration file is included in the web application, basically allowing the attackers to execute any command they want.
Impacts of the Remote Code Evaluation Vulnerability
An attacker who is able to exploit such a flaw is usually able to execute commands with the privileges of the programming language or the web server. In many languages the attacker can issue system commands; write, delete or read files; or connect to databases.
How to Prevent Remote Code Evaluation
As a rule of thumb you should avoid using user input inside evaluated code. The best option would be to not use functions such as eval at all. It is considered to be a bad practise and can more often than not be completely avoided. You should also never let a user edit the content of files that might be parsed by the respective languages. That includes not letting a user decide the name and extensions of files he or she might upload or create in the web application.
What You Should NOT Do to Prevent Remote Code Evaluation
- Rely on sanitizing user input; this is rarely sufficient, given the large number of possible bypasses of such restrictions.
- Let a user decide the extension or content of files on the web server; use safe practices for secure file uploads instead.
- Pass any user controlled input inside evaluation functions or callbacks.
- Try to blacklist special characters or function names. As with sanitizing, this is almost impossible to implement safely.
|
<urn:uuid:18ce4abf-eeea-4971-9cba-2c2edecbf314>
|
CC-MAIN-2017-09
|
https://www.netsparker.com/blog/web-security/remote-code-evaluation-execution/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00502-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.909845 | 916 | 2.890625 | 3 |
Today’s guest post is brought to you by Andy Wolber. You can find more of Andy’s work in TechRepublic where he writes for the Google in the Enterprise newsletter. We know it’s summer and teachers would prefer to think about the beach as opposed to the best ways to leverage Google Docs. But there’s a good chance that if your school isn’t already on Google Apps, you will be soon. After all, 72 of the top 100 universities have “Gone Google”, according to Google executive Sundar Pichai at Google I/O 2014. (We have a cool infographic that shows the crazy growth of Google Apps for Education.) Here’s the thing. Google Docs is more than just a web-based word processor. The real power of Google Docs is collaboration: the ability for people to write and edit a document together. Here are a few tips to store away for the beginning of the school year. These will help you use the full feature set of Google Docs in your classroom.
- Take notes: type together In the class I teach, students sometimes take notes in a shared Google Doc. One student creates the document, then shares it with the class. Typically, two or three students serve as active note-takers. Every now and then a technically proficient student will add—or correct—clarifying details to the document. (The class covers the use of technology by governments and nonprofit organizations.) A maximum of fifty people can simultaneously edit a Google Doc. [Learn how to share a document from Google.]
- Discuss a document: chat When multiple people access a document while logged in, they can chat in real-time. The chat appears as a sidebar, next to the document. Chat offers a way to discuss document details, but also can serve as an informal “backchannel” chat: a way to comment without interrupting another person. Some students are more comfortable typing than speaking. Chat sessions related to a document disappear when the document is closed, though. You’ll need to select, copy and save any information you want to preserve from the chat session elsewhere. [Learn more about document chat from Google.]
- Provide feedback: insert a comment To provide feedback within a Google Doc, select the relevant section of text, then choose “Insert”, then “Comment”. The selected text will be highlighted, with your comment displayed to the side of the document. Comments work well to identify and discuss issues that might be resolved in a variety of ways. For example, a comment might identify that a paragraph “seems vague”, an “assertion is unsupported”, or a “sentence is awkward”. An entire discussion can occur within an inserted comment. Unlike Chat, all comments and replies are saved with each document. Within a comment, click “Resolve” to close the comment thread and hide the comment. (Select the “Comments” button to view all comments—including resolved comments.) [Learn more about comments from Google.]
- Recommend an edit: “suggesting” mode Switch to “suggesting” mode to recommend a specific change to a document. (Select the drop down arrow next to “Editing” while in a document, then choose “Suggesting”.) Type your suggestions within the document. Suggested changes display in-line with the original text, but in a different color. The owner of the document may either “accept” or “reject” each suggested change. Use “suggesting mode” when an issue has a clear and obvious fix. For example, to suggest that the word “Goggle” be changed to “Google”. Suggestions work well to identify grammar or spelling errors. Any person with “can comment” permission may make a suggestion or insert a comment in a document. [Learn more about how to suggest edits from Google.]
- Find, insert and cite resources: research tool The Research tool offers a way to find, insert, and cite resources without leaving your document. A young writer might use the research tool to find a quote to use as a story prompt, while an academic researcher might use it to cite scholarly research. To use the Research tool, select the “Tools” menu, then “Research”. A search window displays to the right of your document. Type your search term, then view the results. You may narrow the results to a specific type of content, such as “images”, “quotes”, or results from Google Scholar. Place your cursor over the result you want, then insert (or cite) the content in your document. [Learn more about the Research tool from Google.] Using Google Docs in the classroom? If so, what other features have you found useful?
|
<urn:uuid:28994188-1c1e-4bc4-be15-4b7105a5b3f3>
|
CC-MAIN-2017-09
|
https://www.backupify.com/blog/learn-latest-classroom-writing-tips-google-docs
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00446-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.896597 | 1,020 | 2.625 | 3 |
This election season, people started to talk about political cookies – a new expression for the term previously used for targeted or customized ads. These are advertisements that are automatically pushed out based an Internet user's search habits. Computer cookies are files stored on the user’s computer that save the browsing history and behavior on websites s/he visits. This history can be activated by companies to provide a tailored browsing experience as soon as users return to these websites. Every time a user submits information to a website the information is stored. The data in the cookie file is stored locally (and reactivated at return visits) and can also be transferred to another website.
This practice has been around for a while and Google especially has become known for pushing context-relevant ads based on individual email content to Gmail accounts or to search results. Similarly, TV ads are targeting those cable TV subscribers in states that are known to be swing states – or states where pollsters know that there are many undecided voters. Other states, such as NY state – a historically blue state – will likely see very few TV ads.
This is where political cookies come in. A recent ProPublica article revealed that companies such as Microsoft and Yahoo are selling political candidates access to their users’ data:
Microsoft and Yahoo are selling political campaigns the ability to target voters online with tailored ads using names, Zip codes and other registration information that users provide when they sign up for free email and other services.
Based on the users known search and browsing history cookies, in combination with their voter records, political campaigns now have a much better sense of who they should target. These ads then pop up using a network of different sites, including social media platforms, online news sites, etc. Subtle reminders are pushed at a user based on their previous search and browsing behavior and pop up in the ad section of the visited site – instead of the previous practice pushed at them in a targeted email from which users can actively unsubscribe.
Social media companies are heavily using this practice in less subtle ways:
- Facebook’s voter registration application, which also uses voter registration records to invite friends who haven’t yet registered. On election day 2008 Facebook pushed an application out to count the number of people who voted and published it as an update to the newsfeed, increasing awareness among those contacts who hadn’t voted. Washington State is the first state this year to actively use a Facebook application to help people register online.
- Amazon’s election heat map displays political preferences based on its buyers’ purchases of political books.
- Twitter displays user sentiments posted in tweets about both candidates in real time on the Twitter election index and sells trending topics to political candidates or promoted tweets based on the sentiments.
|
<urn:uuid:c36a9a41-dd64-434b-b867-0c79efbd0956>
|
CC-MAIN-2017-09
|
http://www.nextgov.com/technology-news/tech-insider/2012/09/there-public-appetite-political-cookies/57892/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00622-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.952 | 560 | 2.78125 | 3 |
IPv6: An answer to network vulnerabilities?
- By Julian Weinberger
- Aug 19, 2014
On Aug. 15, 2012, in one of the most devastating cyberattacks ever recorded, politically motivated hackers wiped the memory of more than 30,000 computers operated by the Saudi Aramco oil company in an attempt to stop the flow of oil and gas to local and international markets.
The United States took notice of the attack, with then-Defense Secretary Leon Panetta later remarking that a similar attack on critical U.S. infrastructure, including water and electrical facilities, would cause unparalleled destruction and upheaval.
Two years later, despite steps to shore up the nation's network security defenses, cyberattacks remain seemingly ubiquitous and advanced persistent threats (APTs) are starting to exploit a broader range of threat vectors. In May, the January-April 2014 report of the Department of Homeland Security's Industrial Control Systems Cyber Emergency Response Team revealed that a public utility company in the U.S. had been breached by a "sophisticated threat actor." Fortunately, the attack did not disrupt operational capabilities.
However, as the report says, the attack is a wake-up call to government agencies to re-architect and secure their control networks, particularly "security controls employed at the perimeter." To prevent Panetta's worst-case prediction from becoming reality, government agencies must construct an end-to-end, intelligent security environment that includes interconnected components such as firewalls, virtual private networks, role- and attribute-based access control systems, intrusion-prevention systems and antivirus software.
One of the first steps they should take in the next few months, however, is to meet new IP address requirements.
The security benefits of IPv6
The first version of the IP address system, IPv4, was developed in 1981 and only allowed for 4.3 billion unique IP addresses to be assigned to Internet-enabled devices. But now, given that the Internet of Things could potentially encompass 26 billion devices by 2020, IPv4 is no longer sufficient to meet demand. The new IPv6 protocol, with its 128-bit addresses that have more combinations than there are known stars in the galaxy, is slowly being rolled out and is expected to solve that problem.
So what does IPv6 mean for government agencies and their network security efforts?
For nearly four years, federal agencies have been preparing for two IPv6 deadlines issued by previous U.S. CIO Vivek Kundra. The first deadline of Sept. 30, 2012, required agencies to upgrade their public-facing websites to IPv6. By the next deadline of Sept. 30, 2014, federal agencies must have upgraded all internal client applications to the new protocol.
However, the transition to IPv6 isn't strictly for logistical reasons.
According to Elise Gerich, a vice president at the Internet Corporation for Assigned Names and Numbers, "rapid adoption of IPv6 is a necessity" to maintain the economic growth brought forth by the Internet. The White House has made a similar connection, with President Barack Obama declaring cybersecurity threats to be "one of the most serious economic and national security challenges we face as a nation."
IPv6 helps government agencies combat those threats. Unlike its predecessor, the new protocol contains the universal, end-to-end encryption and integrity-checking technology used by the most secure IP Security-based VPNs. It also has secure name-resolution capabilities, rendering man-in-the-middle and naming-based attacks much more difficult to accomplish. Best of all, its advanced security features will work natively for all IPv6 connections on all compatible devices and systems.
IPv6 will reinforce the network security defenses of government agencies, but adoption of the protocol is only the first step.
An in-depth look at defense in depth
To fully protect against breaches like the recent attack on a public utility company, government agencies must take their network security a step further by adopting a robust, multilayered defense-in-depth strategy. The value of this approach has been proven at the enterprise level, where it has been shown to build redundancy into organizations' information security infrastructure to deal with new, emerging threats, such as APTs, that are aided by the increasing variety of devices and operating systems that are accessing networks. Even if there is a breach of a perimeter safeguard, such as a network firewall, other interconnected defense mechanisms -- such as VPNs, access control systems and intrusion-prevention systems -- can work together to repel the attack or prevent it from progressing further into the network.
And how does IPv6 fit in with a defense-in-depth approach? As mentioned, the basic security technology of IPv6 is IPsec, and when used in the context of securing communications with government networks, its security capabilities include virtually unbreakable encryption, secure key exchange, access control and protection against replay attacks, among other features.
By implementing a centrally managed VPN and a network of IPv6-enabled devices as part of a defense-in-depth strategy, an agency is able to limit the vulnerabilities of its network and verify that all endpoints are compliant with the agency's network security policies.
So why take a chance and wait until Sept. 30 to upgrade to IPv6? Attackers won't wait until that deadline -- and neither should government agencies.
Julian Weinberger is an international system engineer at NCP Engineering.
|
<urn:uuid:d6e7eb21-0109-429d-a96b-9b3f20f7e8d9>
|
CC-MAIN-2017-09
|
https://fcw.com/articles/2014/08/19/ipv6-answer-to-network-vulnerabilities.aspx?admgarea=TC_Opinion
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00198-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.949684 | 1,104 | 2.71875 | 3 |
FCC tests if 911 cell-phone calls can be traced indoors
- By William Jackson
- Nov 30, 2012
As telephones have become untethered and phone numbers are no longer tied to an address, the ability of first responders to pinpoint the location of cellular phone calls to 911 has become increasingly important.
The Federal Communication Commission already requires a level of accuracy for locating the origin of cell phone calls made outdoors, but the technologies being used tend to break down inside buildings. An FCC advisory committee is conducting a study in the San Francisco Bay area to establish benchmarks for the accuracy of current location technologies.
The six-week program is being carried out by a working group of the Communications Security, Reliability and Interoperability Council, made up of industry and government 911 stakeholders. Testing began Nov. 15 and is expected to be completed by year’s end, said Norman Shaw, executive director of government affairs at Polaris Wireless, a company that provides 911 location services, and co-chair of the indoor location working group.
A final report on the current state of the art is expected to be finished in March, which could help to decide whether and how location requirements for 911 calls should be extended indoors.
The committee initially recommended that such requirements were not yet practical. “To extend regulation indoors at this time would be premature because we don’t have any data,” Shaw said. Current tests are intended to provide needed data.
A possible solution considered by CSRIC for indoor 911 is use of commercial location-based services, which deliver ads and other services to mobile devices based on their location. Such a scheme might work, Shaw said, “but there are huge impediments to be overcome.”
The impediments are primarily legal and social rather than technical, he said. Companies providing such services are concerned about liability issues if information is inadequate or inaccurate in a life-and-death situation.
The testing of carrier technologies is being held in 80 buildings in the Bay Area, representing dense-urban, urban, suburban-residential and rural environments. Gaining access to buildings and cooperation from major cellular carriers was a major undertaking, Shaw said. There are multiple test sites in many of the buildings, and 100 calls will be made from each test point.
Technologies being tested are Polaris’s radio frequency pattern matching, which uses location-based “signatures” to identify the origin of a call; advanced forward link triangulation as used by Qualcomm; a low-Earth-orbit assist for GPS being proposed by Boeing; and a system of terrestrial GPS repeaters from NextNav.
The ability to locate the origin of calls is important because callers in an emergency sometimes do not know their exact location or might not be able to give it to a 911 operator. Through a service called Enhanced 911, incoming calls from landline phones are automatically accompanied with location data. Carriers are required to provide location data for outdoor cellular calls, usually delivered from a combination of Global Positioning System data and triangulation from carrier cell towers.
Estimates of the percentage of 911 calls coming from cell phones range from 70 percent to more than 80 percent.
William Jackson is a Maryland-based freelance writer.
|
<urn:uuid:e88d9d56-5dfc-4ed1-9f04-37285cfb49b2>
|
CC-MAIN-2017-09
|
https://gcn.com/articles/2012/11/30/fcc-911-cell-phone-calls-traced-indoors.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00374-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.951344 | 660 | 2.59375 | 3 |
Milbrath M.O.,Michigan State University |
van Tran T.,Michigan State University |
van Tran T.,Bee Research and Development Center |
Huang W.-F.,University of Illinois at Urbana - Champaign |
And 4 more authors.
Journal of Invertebrate Pathology | Year: 2015
Honey bees (Apis mellifera) are infected by two species of microsporidia: Nosema apis and Nosema ceranae. Epidemiological evidence indicates that N. ceranae may be replacing N. apis globally in A. mellifera populations, suggesting a potential competitive advantage of N. ceranae. Mixed infections of the two species occur, and little is known about the interactions among the host and the two pathogens that have allowed N. ceranae to become dominant in most geographical areas. We demonstrated that mixed Nosema species infections negatively affected honey bee survival (median survival= 15-17 days) more than single species infections (median survival = 21 days and 20 days for N. apis and N. ceranae, respectively), with median survival of control bees of 27 days. We found similar rates of infection (percentage of bees with active infections after inoculation) for both species in mixed infections, with N. apis having a slightly higher rate (91% compared to 86% for N. ceranae). We observed slightly higher spore counts in bees infected with N. ceranae than in bees infected with N. apis in single microsporidia infections, especially at the midpoint of infection (day 10). Bees with mixed infections of both species had higher spore counts than bees with single infections, but spore counts in mixed infections were highly variable. We did not see a competitive advantage for N. ceranae in mixed infections; N. apis spore counts were either higher or counts were similar for both species and more N. apis spores were produced in 62% of bees inoculated with equal dosages of the two microsporidian species. N. ceranae does not, therefore, appear to have a strong within-host advantage for either infectivity or spore growth, suggesting that direct competition in these worker bee mid-guts is not responsible for its apparent replacement of N. apis. © 2014 Elsevier Inc. Source
Beaurepaire A.L.,Martin Luther University of Halle Wittenberg |
Truong T.A.,Bee Research and Development Center |
Fajardo A.C.,University of the Philippines at Los Banos |
Dinh T.Q.,Bee Research and Development Center |
And 3 more authors.
PLoS ONE | Year: 2015
The ectoparasitic mite Varroa destructor is a major global threat to the Western honeybee Apis mellifera. This mite was originally a parasite of A. cerana in Asia but managed to spill over into colonies of A. mellifera which had been introduced to this continent for honey production. To date, only two almost clonal types of V. destructor from Korea and Japan have been detected in A. mellifera colonies. However, since both A. mellifera and A. cerana colonies are kept in close proximity throughout Asia, not only new spill overs but also spill backs of highly virulent types may be possible, with unpredictable consequences for both honeybee species. We studied the dispersal and hybridisation potential of Varroa from sympatric colonies of the two hosts in Northern Vietnam and the Philippines using mitochondrial and microsatellite DNA markers. We found a very distinct mtDNA haplotype equally invading both A. mellifera and A. cerana in the Philippines. In contrast, we observed a complete reproductive isolation of various Vietnamese Varroa populations in A. mellifera and A. cerana colonies even if kept in the same apiaries. In light of this variance in host specificity, the adaptation of the mite to its hosts seems to have generated much more genetic diversity than previously recognised and the Varroa species complex may include substantial cryptic speciation. Copyright: © 2015 Beaurepaire et al. Source
Forsgren E.,Swedish University of Agricultural Sciences |
Wei S.,Chinese Academy of Agricultural Sciences |
Guiling D.,Chinese Academy of Agricultural Sciences |
Zhiguang L.,Chinese Academy of Agricultural Sciences |
And 5 more authors.
Apidologie | Year: 2015
Populations of Apis mellifera and Apis cerana in China and Vietnam were surveyed in order to study possible pathogen spill-over from European to Asian honeybees. This is the first survey of the prevalence of honeybee pathogens in apiaries in Vietnam, including pathogen prevalence in wild A. cerana colonies never in contact with A. mellifera. The bee samples were assayed for eight honeybee viruses: deformed wing virus (DWV); black queen cell virus (BQCV); sac brood virus (SBV); acute bee paralysis virus (ABPV); Kashmir bee virus (KBV); Israeli acute paralysis virus (IAPV); chronic bee paralysis virus (CBPV); and slow bee paralysis virus (SBPV), for two gut parasites (Nosema ssp.) and for the causative agent for European foulbrood (Melissococcus plutonius). The Vietnamese samples were assayed for Acarapis woodi infestation. No clear evidence of unique inter-specific transmission of virus infections between the two honeybee species was found. However, in wild A. cerana colonies, the only virus infection detected was DWV. With findings of IAPV infections in Chinese samples of A. cerana colonies in contact with A. mellifera, inter-specific transmission of IAPV cannot be ruled out. BQCV was the most prevalent virus in managed colonies irrespective of bee species. We did not detect the causative agent of European foulbrood, M. plutonius in wild or isolated colonies of A. cerana in Vietnam or China; however, low incidence of this pathogen was found in the Asian host species when in contact with its European sister species. No evidence for the presence of A. woodi was found in the Vietnamese samples. © 2014, The Author(s). Source
|
<urn:uuid:8cfc8c5a-def9-4068-a191-d475c8175240>
|
CC-MAIN-2017-09
|
https://www.linknovate.com/affiliation/bee-research-and-development-center-641396/all/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00550-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.916543 | 1,313 | 2.6875 | 3 |
Memory is not to be trusted. It is unpredictable, frustratingly fleeting, and only gets worse with age. It will forever cling tightly to useless facts and yet misplace the freshest, most essential ones. I can remember my childhood phone number, the tune and lyrics to Let It Be, and most of the Gettysburg Address, and yet I can’t seem to recall the office computer password I reset last week. I doubt that I am alone in this.
Memory is such a crucial element of our very existence, so why are we so poor at remembering things? We should have plenty of storage space; anywhere you search you will find that the human brain contains 100 billion neurons, although when pressed on this number, scientists can’t seem to remember where it came from.
It is tempting to believe that this memory-fail has a lot to do with the flood of information coming our way, making it increasingly difficult to find a suitable clearing in that vast forest of brain cells. But this is a problem that pre-dates the flood; the challenge of recollection is as old as recorded history.
As it has with many things that are integral to our culture, innovation has propelled the art of recording information ever onward. From cave walls to hand-written parchment documents to Gutenberg’s printed pages, humans have continued to find new and sometimes better ways to record things that might otherwise be forgotten. Hand-copied documents were painstakingly difficult to reproduce, and thus were presumed to be of superior value. In an age where storage is vast and instantaneous, much less thought is given to what we keep. If you doubt that to be true, just search Facebook for cat pictures.
These days digital storage dominates how we set aside information for later recall, and it has progressed at a phenomenal rate. In the early fifties, while I was learning to walk, the first disk drives were humming away in IBM laboratories, storing around 5 megabytes of data. Today my smartphone houses a chip smaller than a dime that holds 128000 megabytes. It is estimated that globally we will stash 44 zettabytes of data annually by 2020. Once upon a time, it was an insult to say someone had their “head in a cloud”; now we all do.
It is no wonder that our obsession with memory has enabled this explosion of virtual-mental storage space. In many ways, memory is what defines us as individuals, setting us apart from our peers and putting our lives in meaningful context. It guides our decisions, enriches our relationships and confirms our existential selves. These are times when electromechanical devices can reinforce our declining senses and substitute for failed body parts; why shouldn’t we be turning more to technology to fortify our befuddled brains?
In countless ways we are what we remember. The lessons learned, the pages read, the art contemplated and enjoyed, the loves and heartaches experienced – recalling them is what shapes our being. Perhaps memory tools can help reclaim a portion of lives lost to dementia or Alzheimer’s disease? Perhaps the cure for these tragic losses of memory lies not in repairing or propping up those 100 billion or so neurons, but in bypassing them completely and putting all that we have experienced into a safe, reliable holding place, to be recalled randomly at will, much as we were accustomed to doing in better days?
Probably not. There will always be some things that technology can never replace.
Author Profile - Paul W. Smith, a Founder and Director of Engineering with INVENtPM LLC, has more than 35 years of experience in research and advanced product development.
Prior to founding INVENtPM, Dr. Smith spent 10 years with Seagate Technology in Longmont, Colorado. At Seagate, he was primarily responsible for evaluating new data storage technologies under development throughout the company, and utilizing six-sigma processes to stage them for implementation in early engineering models. He is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines, and currently manages the website “Technology for the Journey”.
Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.
|
<urn:uuid:1f3ae470-ca26-427a-85ce-78de9a5f1eb3>
|
CC-MAIN-2017-09
|
http://www.lovemytool.com/blog/2016/03/for-the-memories-by-paul-w-smith.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00494-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.954687 | 876 | 2.734375 | 3 |
Attackers are compromising Linux and Windows systems to install a new malware program designed for launching distributed denial-of-service (DDoS) attacks, according to researchers from the Polish Computer Emergency Response Team (CERT Polska).
The malware was found by the Polish CERT at the beginning of December and the Linux version is being deployed following successful dictionary-based password guessing attacks against the SSH (Secure Shell) service. This means only systems that allow remote SSH access from the Internet and have accounts with weak passwords are at risk of being compromised by attackers distributing this malware.
"We were able to obtain a 32-bit, statically linked, ELF file," the Polish CERT researchers said Monday in a blog post. The executable runs in daemon mode and connects to a command-and-control (C&C) server using a hard-coded IP (Internet Protocol) address and port, they said.
When first run, the malware sends operating system information -- the output of the uname command -- back to the C&C server and waits for instructions.
"From the analysis we were able to determine that there are four types of attack possible, each of them a DDoS attack on the defined target," the researchers said. "One of the possibilities is the DNS Amplification attack, in which a request, containing 256 random or previously defined queries, is sent to a DNS server. There are also other, unimplemented functions, which probably are meant to utilize the HTTP protocol in order to perform a DDoS attack."
While executing an attack, the malware provides information back to the C&C server about the running task, the CPU speed, system load and network connection speed.
A variant of the DDoS malware also exists for Windows systems where it is installed as "C:\Program Files\DbProtectSupport\svchost.exe" and is set up to run as a service on system start-up.
Unlike the Linux version, the Windows variant connects to the C&C server using a domain name, not an IP address, and communicates on a different port, according to the Polish CERT analysis. However, the same C&C server was used by both the Linux and Windows variants, leading the Polish CERT researchers to conclude that they were created by the same group.
Since this malware was designed almost exclusively for DDoS attacks, the attackers behind it are likely interested in compromising computers with significant network bandwidth at their disposal, like servers. "This also probably is the reason why there are two versions of the bot -- Linux operating systems are a popular choice for server machines," the researchers said.
However, this is not the only malware program designed for Linux that was identified recently.
A security researcher from the George Washington University, Andre DiMino, recently found and analyzed a malicious bot written in Perl after allowing attackers to compromise one of his honeypot Linux systems.
The attackers were trying to exploit an old PHP vulnerability, so DiMino intentionally configured his system to be vulnerable so he could track their intentions. The vulnerability is known as CVE-2012-1823 and was patched in PHP 5.4.3 and PHP 5.3.13 in May 2012, suggesting the attack targeted neglected servers whose PHP installations haven't been updated in a long time.
After allowing his honeypot system to be compromised, DiMino saw attackers deploy malware written in Perl that connected to an Internet Relay Chat (IRC) server used by attackers for command and control. The bot then downloaded local privilege escalation exploits and a script used to perform Bitcoin and Primecoin mining -- an operation that uses computing power to generate virtual currency.
"Most servers that are injected with these various scripts are then used for a variety of tasks, including DDoS, vulnerability scanning, and exploiting," DiMino said Tuesday in a blog post that provides a detailed analysis of the attack. "The mining of virtual currency is now often seen running in the background during the attacker's 'downtime'."
DiMino's report comes after researchers from security vendor Symantec warned in November that the same PHP vulnerability was being exploited by a new Linux worm.
The Symantec researchers found versions of the worm not only for x86 Linux PCs, but also for Linux systems with the ARM, PPC, MIPS and MIPSEL architectures. This led them to conclude that the attackers behind the worm were also targeting home routers, IP cameras, set-top boxes and other embedded systems with Linux-based firmware.
|
<urn:uuid:e06b3ef6-2df9-4edb-89b5-a9d44c55ff66>
|
CC-MAIN-2017-09
|
http://www.computerworld.com/article/2486982/malware-vulnerabilities/new-ddos-malware-targets-linux-and-windows-systems.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00070-ip-10-171-10-108.ec2.internal.warc.gz
|
en
| 0.957152 | 918 | 2.734375 | 3 |
Construction has a reputation of lagging even further behind on technological innovation than healthcare. A boom in the meeting of the two though, has brought about advancements new to both industries — most notably, Building Information Modeling (BIM).
3D modeling isn’t a new technology, but historically it has been difficult to apply to the construction industry. Previously, the industry did not believe that computer models were reliable enough to be used in the field for prefabrication of critical building components. If anything needed on-site rework, any savings in time, cost or quality would be lost. Additional issues included:
Now, GPS and laser systems locate BIM information in the field, which means that replacing large medical equipment, like MRIs, can be done by finding the best route through the existing hospital — minimizing impact on the facility, people, and operations.
Special Demands Of Healthcare
The use of BIM is far from just innovation for its own sake. Healthcare facilities are incredibly complex to build, both technically and logistically. Building specifications must be highly detailed and architectural plans tightly coordinated. Beyond that, innovation in technology in healthcare frequently outpaces the time it takes to build actual facilities. The result? Projects are plagued with coordination issues for designers and cost overruns when designs are made obsolete by new technology or healthcare regulation.
The Benefits Of BIM
The true beauty of BIM is that a facility can be designed, built, and tested in a virtual environment, where real-time simulations can be conducted before construction even starts. This allows a project team to find errors, omissions, and conflicts before the construction phase even kicks off, meaning that corrections can be made at minimal cost.
Even after construction is complete, though, BIM offers the benefit of creating a working database of information around the facility that is useful throughout the building’s functional life. Any future upgrades, operational procedures, or scenario planning can be done using BIM technology and the information gathered and created from the beginning of the project’s life.
In The Real World
The world’s tallest children’s hospital (the 23-story Lurie Children’s Hospital in Chicago) used the technology to involve facility personnel and to coordinate construction teams at levels that were previously unheard of.
While the construction industry has not traditionally been associated with high levels of safety either, its partnership with the healthcare construction boom has yielded positive results. According to Steve Whitcraft, a 28-year construction-industry veteran and current director of commercial and healthcare segment groups in the Texas region at Turner Construction Co., “Focus on safety is the inspiration for many improvements in healthcare construction. We are challenging our project teams to produce the same (or better) result, eliminating risk to the patient, the worker and adjacent objects, people or buildings. Just because ‘we have always done it that way’ is not a good enough reason to continue unquestioned.”
Looking to go deeper? Check out the Top 5 Trends In Managed Services For Architecture, Engineering, And Construction.
Clip and save: Tips for a secure WLAN
Deploying a secure wireless LAN involves more than securing the wireless link between the client and the access point. Ultimately, it involves the entire network infrastructure.
The National Institute of Standards and Technology warned in SP 800-48, Wireless Network Security, 'maintaining a secure wireless network is an ongoing process that requires greater effort than that required for other networks and systems.' Including wireless in the network mix means risk assessments should be done more frequently and security controls should be continuously evaluated, NIST said. The agency expects to refresh its wireless security guidance later this year.
The Defense Department, in its draft Wireless LAN Security Framework, noted that security must be designed into the wireless LAN. The final DOD Security Technical Implementation Guide will include requirements and recommendations for configuration and deployment, such as mutual authentication of both access points and end users, and strong encryption that meets government standards. Wireless clients will also have to be certified against Common Criteria protection profiles.
Step-by-step security
The end requirements for each agency will differ, but NIST has laid out the steps that must be taken to ensure that wireless networks are adequately secured. This begins with a risk assessment and a cost-benefit analysis to determine if wireless is feasible and desirable.
Agencies should pay attention to mitigating risks in physical security as well as in system security. This includes identification badging systems and physical access control.
Access points should be configured to ensure that only authorized administrators can access and manage them. Strong passwords should be used and management links should be encrypted as strongly as possible.
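As a small illustration of that recommendation, the sketch below probes an access point's management ports and flags any cleartext services left open. It is a hedged example: the address and port list are assumptions for demonstration, not values taken from NIST's guidance, and a real audit would use a full scanner rather than a handful of TCP checks.

```python
import socket

# Hypothetical management address of the access point under review.
AP_ADDRESS = "192.0.2.10"

# Cleartext management services that should be disabled, and encrypted ones that may remain.
CLEARTEXT_PORTS = {23: "telnet", 80: "http"}
ENCRYPTED_PORTS = {22: "ssh", 443: "https"}


def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def audit_access_point(host):
    for port, name in CLEARTEXT_PORTS.items():
        if port_open(host, port):
            print(f"WARNING: cleartext management service {name} is open on {host}:{port}")
    for port, name in ENCRYPTED_PORTS.items():
        state = "reachable" if port_open(host, port) else "closed"
        print(f"{name} management on {host}:{port} is {state}")


if __name__ == "__main__":
    audit_access_point(AP_ADDRESS)
```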
Physical site surveys are needed to determine where access points are needed and to ensure that the range of access points does not exceed what is necessary. Because eavesdropping cannot be completely prevented, encryption is recommended. Placing the WLAN outside of the firewall so all traffic can be passed through the firewall might also be a good idea.
Policy updates to address software upgrades, patch management and configuration management may be needed to boost the overall security posture of the network. Wireless intrusion detection can be a useful tool in a defense-in-depth strategy and will eventually be required for DOD WLANs.
NIST's recommendations for building a secure WLAN include:
ADOPT a robust ID system for physical access control
DISABLE file and directory sharing on PCs
PROTECT sensitive files with passwords and encryption
INVESTIGATE 802.11 products with the best security strategy and performance history
USE products with Simple Network Management Protocol Version 3 or other encrypted management capabilities
TURN OFF all unnecessary services on wireless access points
TURN OFF power to access points when not in use, if possible
TURN ON the logging capabilities of access points and review logs regularly
CONFIGURE access points to require passwords for management, encrypt management links, use MAC Access Control Lists, change default keys and passwords, and disable remote SNMP
CONDUCT a site survey and strategically place access points
DEPLOY a virtual private network with a firewall between gateways and clients
ESTABLISH comprehensive security policies on use of wireless devices
USE personal firewalls and antivirus software on wireless clients
GET expert help in conducting security assessments after deployment.
To read more of NIST's current recommendations for securing wireless networks, go to www.gcn.com and enter 457 in the GCN.com/box.
A malicious botnet called ‘Nitol’ was interrupted by Microsoft on Sept. 13. ‘Nitol’ was using a Dynamic DNS to enable the infected bot computers to communicate with the hacker’s command and control server.
For background, it is possible to serve a website from a home computer, but the difficulty is that your home Internet service provider assigns you a constantly changing address, known as an Internet Protocol (IP) address. To overcome this problem, there are many services that map a static domain name (e.g., yoursite.com) to your constantly changing IP address. This kind of service is known as Dynamic DNS.
There are also malicious uses for Dynamic DNS. If your computer is infected with malware, in most cases the hacker needs a way to send instructions to that malware in order to carry out an attack. The malware, in turn, needs an IP address in order to communicate back to the hacker's 'command and control' server.
Instead of hard-coding the hacker's IP address in the malware, the malware is only aware of a domain name, which can be resolved into an IP address. The hacker wants to make it difficult to be traced or blocked, so it is very handy if they can quickly change the IP address associated with the domain that the malware is talking to.
In other words, as shown by Nitol, a hacker can quickly change their address, making it very difficult to find a pattern and block the communication.
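A minimal sketch of what that lookup looks like from the infected machine's side: the malware re-resolves its controller's domain before each check-in, so it always reaches whatever IP address the attacker most recently registered with the Dynamic DNS service. The domain below is a placeholder, not a real command-and-control host.

```python
import socket
import time

# Placeholder name; a real bot would carry its operator's dynamic DNS domain instead.
CONTROL_DOMAIN = "c2.example.com"


def current_control_address(domain):
    """Resolve the domain to whatever IP address is registered for it right now."""
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None


if __name__ == "__main__":
    # Re-resolving before every check-in is what lets the operator move the server at will:
    # blocking one IP achieves little once the DNS record is pointed somewhere else.
    for _ in range(3):
        print(f"{CONTROL_DOMAIN} currently resolves to {current_control_address(CONTROL_DOMAIN)}")
        time.sleep(10)  # short pause between lookups for the demonstration
```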
This botnet, and many others, were using a specific Dynamic DNS to redirect messages to their command and control servers. The victims of the 'Nitol' botnet were targeted through computers sold pre-bundled with malware, and Microsoft's work was to disrupt the supply chain causing the spread of the malware. This differs from the more common malware distribution methods of social engineering (e.g., email) and browser drive-by attacks (e.g., Java exploits), but what they almost all have in common is the need to communicate with a command and control server.
Smart buildings, smart grids, smart infrastructure - these terms are bandied about more often than ever. In this image, technology is incorporated into an ordinary pivot irrigation system, making it a smart agricultural device. Notice some sprinkler heads are off while others are active. This is due to broadband wireless monitoring technology, developed by the U.S. Department of Agriculture, that records and transmits soil-moisture levels. This pilot project in Georgia enables farmers to effectively irrigate soil without wasting water.
Dell plans to recycle however many of the 4.1 million recalled batteries that customers turn in (see Dell battery recall not likely to have big environmental impact), but what happens to the other 2 billion lithium ion batteries which will be sold this year? Most will last for 300 to 500 full recharges (one to three years of use) before failing and ending up in your local municipal landfill or incinerator.
Is that a bad thing? No... and yes.
According to the U.S. government, lithium ion batteries aren't an environmental hazard. "Lithium Ion batteries are classified by the federal government as non-hazardous waste and are safe for disposal in the normal municipal waste stream," says Kate Krebs at the National Recycling Coalition. While other types of batteries include toxic metals such as cadmium, the metals in lithium ion batteries - cobalt, copper, nickel and iron - are considered safe for landfills or incinerators (Interestingly enough, lithium ion batteries contain an ionic form of lithium but no lithium metal).
But that doesn't mean Americans should be dumping 2 billion batteries per year into the waste stream.
Europeans have a dimmer view of landfilling lithium ion batteries. "There is always potential contamination to water because they contain metals," says Daniel Cheret, general manager at Belgium-based Umicore Recycling Solutions. The bigger issue is a moral one: the products have a recycling value, so throwing away 2 billion batteries a year is just plain wasteful - especially when so many American landfills are running out of space. "It's a pity to landfill this material that you could recover," Cheret says. He estimates that between 8,000 and 9,000 tons of cobalt is used in the manufacture of lithium ion batteries each year. Each battery contains 10 to 13% cobalt by weight. Umicore recycles all four metals used in lithium ion batteries.
The reason why more lithium ion batteries aren't recycled boils down to simple economics: the scrap value of batteries doesn't amount to much - perhaps $100 per ton, Cheret says. In contrast, the cost of collecting, sorting and shipping used batteries to a recycler exceeds the scrap value, so batteries tend to be thrown away. Unfortunately, the market does not factor in the social cost of disposal, nor does it factor in the fact that recycling metals such as cobalt has a much lower economic and environmental cost than mining raw materials. So we throw them away by the millions.
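A rough back-of-the-envelope check of those figures, assuming an average cell mass of about 45 grams (that mass is an assumption for illustration; the other numbers come from the article):

```python
# Back-of-the-envelope estimate of the cobalt locked up in a year's worth of batteries.
# The 45 g average cell mass is an assumption; the other figures come from the article.
batteries_per_year = 2_000_000_000
assumed_cell_mass_kg = 0.045        # roughly 45 g per cell (assumed)
cobalt_fraction = 0.115             # midpoint of the article's 10 to 13% by weight

total_battery_mass_tonnes = batteries_per_year * assumed_cell_mass_kg / 1000
cobalt_tonnes = total_battery_mass_tonnes * cobalt_fraction

print(f"Batteries discarded: about {total_battery_mass_tonnes:,.0f} tonnes per year")
print(f"Cobalt contained:    about {cobalt_tonnes:,.0f} tonnes per year")
# Roughly 90,000 tonnes of cells and 10,000 tonnes of cobalt, the same order of
# magnitude as the 8,000 to 9,000 tons Cheret cites for annual cobalt use.
```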
As in many areas of environmental protection, the European Union is far ahead of the U.S., having passed a battery recycling law that will require vendors to reclaim for recycling a minimum of 25% of the batteries they manufacture and sell - including lithium ion. It's a shame we can't provide economic incentives to do the same on this side of the pond.
- IT Blogwatch: Now it's Apple's Sony battery aggro (and zoo@home)
- Robert L. Mitchell: Dell battery issues known eight months ago; Sony implies overheating related to Dell charging process
- Robert L. Mitchell: What if they threw a recall and no one came?
- IT Blogwatch: Sony tells of Dell cell hell (and pix o' the day)
- IT Blogwatch: Sony fingered for fires, says Dell (and August's PUP)
- Joyce Carpenter: Mac OS X better than Vista which is better than XP which is better than ...
A penetration test, or pentest for short, is an attack on a computer system carried out with the intention of finding security weaknesses and potentially gaining access to the system, its functionality and its data. A penetration testing Linux is a specially built Linux distro that can be used for analyzing and evaluating the security measures of a target system.
There are several operating system distributions geared towards performing penetration testing. They typically contain a pre-packaged and pre-configured set of tools. This is useful because the penetration tester does not have to hunt down a tool when it is required; doing so mid-engagement can lead to complications such as compile errors, dependency issues and configuration errors, or acquiring additional tools may simply not be practical in the tester's context.
Popular examples are Kali Linux (replacing Backtrack as of December 2012) based on Debian Linux, Pentoo based on Gentoo Linux and BackBox based on Ubuntu Linux. There are many other specialized operating systems for penetration testing, each more or less dedicated to a specific field of penetration testing.
Penetration tests are valuable for several reasons:
- Determining the feasibility of a particular set of attack vectors
- Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
- Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
- Assessing the magnitude of potential business and operational impacts of successful attacks
- Testing the ability of network defenders to successfully detect and respond to the attacks
- Providing evidence to support increased investments in security personnel and technology
The new pentest distros are developed and maintained with user friendliness in mind, so anyone with basic Linux knowledge can use them. Tutorials and how-to articles are publicly available rather than kept within a closed community. The idea that pentest distros are mainly used by network and computer security experts, security students and audit firms no longer applies; everyone wants to test their own network, wireless connection, website or database, and most of the distribution maintainers make it really easy and offer training for those interested.
Now let's have a look at some of the best pentest distros of 2014. Some are well maintained, some are not, but either way they all offer great package lists to play with:
1. Kali Linux (previously known as BackTrack 5r3)
Kali is a complete re-build of BackTrack Linux, adhering completely to Debian development standards. All-new infrastructure has been put in place, all tools were reviewed and packaged, and we use Git for our VCS.
- More than 300 penetration testing tools: After reviewing every tool that was included in BackTrack, we eliminated a great number of tools that either did not work or had other tools available that provided similar functionality.
- Free and always will be: Kali Linux, like its predecessor, is completely free and always will be. You will never, ever have to pay for Kali Linux.
- Open source Git tree: We are huge proponents of open source software and our development tree is available for all to see and all sources are available for those who wish to tweak and rebuild packages.
- FHS compliant: Kali has been developed to adhere to the Filesystem Hierarchy Standard, allowing all Linux users to easily locate binaries, support files, libraries, etc.
- Vast wireless device support: We have built Kali Linux to support as many wireless devices as we possibly can, allowing it to run properly on a wide variety of hardware and making it compatible with numerous USB and other wireless devices.
- Custom kernel patched for injection: As penetration testers, the development team often needs to do wireless assessments so our kernel has the latest injection patches included.
- Secure development environment: The Kali Linux team is made up of a small group of trusted individuals who can only commit packages and interact with the repositories while using multiple secure protocols.
- GPG signed packages and repos: All Kali packages are signed by each individual developer when they are built and committed and the repositories subsequently sign the packages as well.
- Multi-language: Although pentesting tools tend to be written in English, we have ensured that Kali has true multilingual support, allowing more users to operate in their native language and locate the tools they need for the job.
- Completely customizable: We completely understand that not everyone will agree with our design decisions so we have made it as easy as possible for our more adventurous users to customize Kali Linux to their liking, all the way down to the kernel.
- ARMEL and ARMHF support: Since ARM-based systems are becoming more and more prevalent and inexpensive, we knew that Kali’s ARM support would need to be as robust as we could manage, resulting in working installations for both ARMEL and ARMHF systems. Kali Linux has ARM repositories integrated with the mainline distribution so tools for ARM will be updated in conjunction with the rest of the distribution. Kali is currently available for the following ARM devices:
Kali is specifically tailored to penetration testing and therefore, all documentation on this site assumes prior knowledge of the Linux operating system.
2. NodeZero Linux
Penetration testing and security auditing require specialist tools. The natural path leads us to collecting them all in one handy place. However, how that collection is implemented can be critical to how you deploy effective and robust testing.
It is said that necessity is the mother of invention, and NodeZero Linux is no different. Our team is built of testers and developers who have come to the consensus that live systems do not offer what they need in their security audits. Penetration testing distributions have historically used the "live" Linux system concept, which really means that they try not to make any permanent changes to a system: all changes are gone after reboot, and the system runs from media such as discs and USB drives. However, while that may be very handy for occasional testing, its usefulness wears thin when you are testing regularly. It is our belief that "live systems" just don't scale well in a robust testing environment.
Although NodeZero Linux can be used as a live system for occasional testing, its real strength comes from the understanding that a tester requires a strong and efficient system. This is achieved, in our belief, by a distribution that is a permanent installation, benefiting from a strong selection of tools integrated with a stable Linux environment.
NodeZero Linux is reliable, stable, and powerful. Based on the industry leading Ubuntu Linux distribution, NodeZero Linux takes all the stability and reliability that comes with Ubuntu’s Long Term Support model, and its power comes from the tools configured to live comfortably within the environment.
3. BackBox Linux
BackBox is a Linux distribution based on Ubuntu. It has been developed to perform penetration tests and security assessments. It is designed to be fast and easy to use, and to provide a minimal yet complete desktop environment, thanks to its own software repositories, which are always updated to the latest stable versions of the most widely used and best known ethical hacking tools.
BackBox's main aim is to provide an alternative, highly customizable and well-performing system. BackBox uses the lightweight Xfce window manager. It includes some of the most commonly used Linux security and analysis tools, aimed at a wide spread of goals ranging from web application analysis to network analysis, from stress tests to sniffing, and also including vulnerability assessment, computer forensic analysis and exploitation.
The power of this distribution comes from its Launchpad repository core, constantly updated to the latest stable versions of the best known and most used ethical hacking tools. The integration and development of new tools in the distribution follows the open source community's practices, and particularly the Debian Free Software Guidelines criteria.
The BackBox Linux team takes pride in excelling in the following areas:
- Performance and speed are key elements
Starting from an appropriately configured XFCE desktop manager, it offers stability and a level of speed that only a few other desktop managers can match, achieved through extreme tweaking of services, configurations, boot parameters and the entire infrastructure. BackBox has been designed with the aim of achieving maximum performance and minimum consumption of resources.
This makes BackBox a very fast distro and suitable even for old hardware configurations.
- Everything is in the right place
The main menu of BackBox has been well organized and designed to avoid any chaos or mess when finding the tools we are looking for. Every single tool has been selected with care in order to avoid redundancies and tools with similar functionality.
With particular attention to the end user's every need, all menus and configuration files have been organized and reduced to the essential minimum necessary to provide an intuitive, friendly and easy-to-use Linux distribution.
- It’s standard compliant
The software packaging process and the configuration and tweaking of the system follow the standard Ubuntu/Debian guidelines.
Any Debian or Ubuntu user will feel right at home, while newcomers can follow the official documentation and BackBox additions to customize their system without any tricky workarounds, because it is standard and straightforward!
- It’s versatile
As a live distribution, BackBox offers an experience that few other distros can match, and once installed it naturally lends itself to filling the role of a desktop-oriented system. Thanks to the set of packages included in the official repository, it gives the user easy and versatile use of the system.
- It’s hacker friendly
If you'd like to make any changes or modifications to suit your purposes, or maybe add tools that are not present in the repositories, nothing could be easier to do with BackBox. Create your own Launchpad PPA, send your package to the dev team and contribute actively to the evolution of BackBox Linux.
4. Blackbuntu
Blackbuntu is a distribution for penetration testing that was specially designed for security training students and practitioners of information security. Blackbuntu is a penetration testing distribution with the GNOME desktop environment.
Here is a list of security and penetration testing tools, or rather categories, available within the Blackbuntu package (each category has many sub-categories), which gives you a general idea of what comes with this pentesting distro:
- Information Gathering,
- Network Mapping,
- Vulnerability Identification,
- Privilege Escalation,
- Maintaining Access,
- Radio Network Analysis,
- VoIP Analysis,
- Digital Forensic,
- Reverse Engineering and a
- Miscellaneous section.
Because this is Ubuntu based, almost every device and piece of hardware just works, which is great: you waste less time troubleshooting and spend more time working.
5. Samurai Web Testing Framework
The Samurai Web Testing Framework is a live Linux environment that has been pre-configured to function as a web pen-testing environment. The CD contains the best of the open source and free tools that focus on testing and attacking websites. In developing this environment, we have based our tool selection on the tools we use in our security practice. We have included the tools used in all four steps of a web pen-test.
Starting with reconnaissance, we have included tools such as the Fierce domain scanner and Maltego. For mapping, we have included tools such as WebScarab and ratproxy. We then chose tools for discovery, including w3af and Burp. For exploitation, the final stage, we included BeEF, AJAXShell and much more. This CD also includes a pre-configured wiki, set up to be the central information store during your pen-test.
Most penetration tests are focused on either network attacks or web application attacks. Given this separation, many pen testers themselves have understandably followed suit, specializing in one type of test or the other. While such specialization is a sign of a vibrant, healthy penetration testing industry, tests focused on only one of these aspects of a target environment often miss the real business risks of vulnerabilities discovered and exploited by determined and skilled attackers. By combining web app attacks such as SQL injection, Cross-Site Scripting, and Remote File Includes with network attacks such as port scanning, service compromise, and client-side exploitation, the bad guys are significantly more lethal. Penetration testers and the enterprises who use their services need to understand these blended attacks and how to measure whether they are vulnerable to them. This session provides practical examples of penetration tests that combine such attack vectors, and real-world advice for conducting such tests against your own organization.
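To make the blended idea concrete, here is a minimal sketch that chains the two perspectives: a network-level check for open web ports followed by a simple application-level banner grab. The hostname is a placeholder, the checks are reconnaissance only, and the snippet merely stands in for the much richer tooling (nmap, w3af, Burp and so on) that an authorized test would actually use.

```python
import http.client
import socket

# Hypothetical in-scope host; real engagements require written authorization.
TARGET = "testsite.example.com"
WEB_PORTS = [80, 443, 8080, 8443]


def open_web_ports(host):
    """Network-side recon: find which common web ports accept TCP connections."""
    found = []
    for port in WEB_PORTS:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                found.append(port)
        except OSError:
            pass
    return found


def server_banner(host, port):
    """Application-side recon: read the Server header from a plain HTTP response."""
    conn = http.client.HTTPConnection(host, port, timeout=3.0)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server", "unknown")
    except (OSError, http.client.HTTPException):
        return "no response"
    finally:
        conn.close()


if __name__ == "__main__":
    for port in open_web_ports(TARGET):
        if port in (80, 8080):
            print(f"port {port}: Server header = {server_banner(TARGET, port)}")
        else:
            print(f"port {port}: open (TLS port; banner grab skipped in this sketch)")
```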
Samurai Web Testing Framework looks like a very clean distribution, and the developers are focused on what they do best, rather than trying to add everything in one single distribution and thus making support tougher. This is good in a way: if you're just starting, you should start with a small set of tools and then move on to the next step.
6. Knoppix STD
Like Knoppix, this distro is based on Debian and originated in Germany. STD is a security tools collection: hundreds, if not thousands, of open source security tools. It's a live Linux distro (i.e., it runs from a bootable CD in memory without changing the native operating system of your PC), and its sole purpose in life is to put as many security tools at your disposal with as slick an interface as it can.
The architecture is i486 and it runs the following desktops: GNOME, KDE, LXDE and also Openbox. Knoppix has been around for a long time now; in fact I think it was one of the original live distros.
Knoppix is primarily designed to be used as a live CD, but it can also be installed on a hard disk. The STD in the name stands for Security Tools Distribution. The cryptography section is particularly well known in Knoppix STD.
The developers and official forum might seem snobbish (I mean, look at this from their FAQ):
Question: I am new to Linux. Should I try STD?
Answer: No. If you’re new to Linux STD will merely hinder your learning experience. Use Knoppix instead.
But hey, aren't all pentest distro users like that? If you can't take the heat, maybe you shouldn't be trying a pentest distro after all. Kudos to the STD devs for speaking their mind.
7. Pentoo
Pentoo is a live CD and live USB designed for penetration testing and security assessment. Based on Gentoo Linux, Pentoo is provided as both a 32-bit and a 64-bit installable live CD. Pentoo is also available as an overlay for an existing Gentoo installation. It features packet-injection-patched WiFi drivers, GPGPU cracking software, and lots of tools for penetration testing and security assessment. The Pentoo kernel includes grsecurity and PAX hardening and extra patches, with binaries compiled from a hardened toolchain with the latest nightly versions of some tools available.
It's basically a Gentoo install with lots of customized tools, a customized kernel, and much more. Here is a non-exhaustive list of the features currently included:
- Hardened Kernel with aufs patches
- Backported Wifi stack from latest stable kernel release
- Module loading support ala slax
- Changes saving on usb stick
- XFCE4 wm
- Cuda/OPENCL cracking support with development tools
- System updates once you finally have it installed
Put simply, Pentoo is Gentoo with the pentoo overlay. This overlay is available in layman so all you have to do is layman -L and layman -a pentoo.
Pentoo has a pentoo/pentoo meta ebuild and multiple pentoo profiles, which will install all the pentoo tools based on USE flags. The package list is fairly adequate. If you’re a Gentoo user, you might want to use Pentoo as this is the closest distribution with similar build.
8. Weakerth4n
Weakerth4n has a very well maintained website and a devoted community. Built from Debian Squeeze and using Fluxbox as its desktop environment, this operating system is particularly suited for WiFi hacking, as it contains plenty of wireless cracking and hacking tools.
Tools include: WiFi attacks, SQL hacking, Cisco exploitation, password cracking, web hacking, Bluetooth, VoIP hacking, social engineering, information gathering, fuzzing, Android hacking, networking and creating shells.
- OS Type: Linux
- Based on: Debian, Ubuntu
- Origin: Italy
- Architecture: i386, x86_64
- Desktop: XFCE
If you look at their website you get the feeling that the maintainers are active and write a lot of guides and tutorials to help newbies. As this is based on Debian Squeeze, it might be something you would want to give a go. They also released version 3.6 BETA in October 2013, so give it a try; you might just like it.
9. Matriux
Matriux is a Debian-based security distribution designed for penetration testing and forensic investigations. Although it is primarily designed for security enthusiasts and professionals, it can also be used by any Linux user as a desktop system for day-to-day computing. Besides standard Debian software, Matriux also ships with an optimised GNOME desktop interface, over 340 open-source tools for penetration testing, and a custom-built Linux kernel.
Matriux was first released in 2009 under the code name “Lithium”, followed by versions like “Xenon” based on Ubuntu. Matriux “Krypton” then followed in 2011, when we moved the system to Debian. Other versions followed, with Matriux “Krypton” v1.2 and then Ec-Centric in 2012. This year we are releasing Matriux “Leandros” RC1 on 2013-09-27, which is a major revamp of the existing system.
The Matriux arsenal is divided into sections with a broad classification of tools for reconnaissance, scanning, attack, frameworks, radio (wireless), digital forensics, debuggers, tracers, fuzzers and other miscellaneous tools, providing a wide approach to the steps followed in a complete penetration testing and forensic scenario. Many questions were raised about why there is a need for another security distribution when others already exist; we believed in and followed the free spirit of Linux in making one. We have always tried to stay current with tool and hardware support, including the latest tools and compiling a custom kernel to stay abreast of the latest technologies in the field of information security. This version includes a new section of PCI-DSS tools.
Matriux is also designed to run from a live environment like a CD/DVD or USB stick, which can be helpful in computer forensics and data recovery: forensic analysis, investigations and retrieval not only from physical hard drives but also from solid state drives and the NAND flash used in smartphones such as Android and iPhone devices. With Matriux Leandros we also support and work with projects and tools that have been discontinued over time, and we keep track of the latest tools and applications developed and presented at recent conferences.
Features (notable updates compared to Ec-Centric):
- Custom kernel 3.9.4 (patched with aufs, squashfs and xz filesystem mode; includes support for a wide range of wireless drivers and hardware, including the alfacard 0036NH)
- Easy integration with VirtualBox and VMware Player, even in live mode
- MID has been updated to make it easy to install; see http://www.youtube.com/watch?v=kWF4qRm37DI
- Includes the latest tools introduced at Blackhat 2013 and Defcon 2013; build updated until September 22, 2013
- UI inspired by Greek mythology
- New section added: PCI-DSS
- IPv6 tools included
Another great-looking distro based on Debian Linux. I am a great fan of Greek mythology (their UI was inspired by it), so I like it already.
10. DEFT Linux
DEFT Linux is a free GNU/Linux live distribution based on Ubuntu, designed by Stefano Fratepietro for purposes related to computer forensics and computer security. Version 7.2 takes about 2.5 GB.
The DEFT Linux distribution is made up of a GNU/Linux system and DART (Digital Advanced Response Toolkit), a suite dedicated to digital forensics and intelligence activities. It is currently developed and maintained by Stefano Fratepietro, with the support of Massimo Dal Cero, Sandro Rossetti, Paolo Dal Checco, Davide Gabrini, Bartolomeo Bogliolo, Valerio Leomporra and Marco Giorgi.
The first version of DEFT Linux was introduced in 2005, thanks to the Computer Forensics course of the Faculty of Law at the University of Bologna. This distribution is currently used during the laboratory hours of the Computer Forensics course held at the University of Bologna and in many other Italian universities and private entities.
It is also one of the main solutions employed by law enforcement agencies during computer forensic investigations.
In addition to a considerable number of Linux applications and scripts, DEFT also features the DART suite, containing Windows applications (both open source and closed source) for which there is no equivalent in the Unix world.
Since 2008 it has often been among the technologies used by different police forces during investigative activities; today the following entities (national and international) use the suite:
- DIA (Anti-Mafia Investigation Department)
- Postal Police of Milan
- Postal Police of Bolzano
- Polizei Hamburg (Germany)
- Maryland State Police (USA)
- Korean National Police Agency (Korea)
Computer Forensics software must be able to ensure the integrity of file structures and metadata on the system being investigated in order to provide an accurate analysis. It also needs to reliably analyze the system being investigated without altering, deleting, overwriting or otherwise changing data.
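A simple way to demonstrate that requirement is to hash an evidence image before and after examination; identical digests show the analysis did not alter the data. The sketch below uses Python's standard library, and the image path is a placeholder:

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so even large disk images can be hashed."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    image_path = "evidence/disk.dd"       # placeholder path to an acquired image
    baseline = sha256_of(image_path)      # digest recorded at acquisition time
    # ... read-only analysis of the image happens here ...
    after_analysis = sha256_of(image_path)
    print("integrity preserved" if baseline == after_analysis else "IMAGE WAS ALTERED")
```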
There are certain characteristics inherent to DEFT that minimize the risk of altering the data being subjected to analysis. Some of these features are:
- On boot, the system does not use the swap partitions on the system being analyzed
- During system startup there are no automatic mount scripts.
- There are no automated systems for any activity during the analysis of evidence;
- All the mass storage and network traffic acquisition tools do not alter the data being acquired.
You can fully utilize the wide-ranging capabilities of the DEFT toolkit by booting from a CD-ROM or a DEFT USB stick on any system with the following characteristics:
- CD/DVD-ROM drive or USB port from which the BIOS can boot.
- CPU x86 (Intel, AMD or Cyrix) 166 MHz or higher to run DEFT Linux in text mode, 200 MHz to run DEFT Linux in graphical mode;
- 64 Mbytes of RAM to run DEFT Linux in text mode or 128 Mbytes to run the DEFT GUI.
DEFT also supports the new Intel-based Apple architectures.
All in all, it looks and sounds like a purpose-built distro that is being used by several government bodies. Most of the documents are in Italian, but translations are also available. It is based on Ubuntu, which is a big advantage as you can do so much more. Their documentation is written in a clear and professional style, so you might find it useful. Also, if you speak Italian, I guess you have already used it.
11. Caine
Caine is another Italian-born, Ubuntu-based distro.
Caine (an acronym for Computer Aided INvestigative Environment) is a live distribution oriented to computer forensics, historically conceived by Giancarlo Giustini within a Digital Forensics project of the Interdepartmental Research Center for Security (CRIS) at the University of Modena and Reggio Emilia (see the official site). Currently the project is maintained by Nanni Bassetti.
The latest version of Caine is based on Ubuntu Linux 12.04 LTS, MATE and LightDM. Compared to its original version, the current version has been modified to meet the forensic reliability and safety standards laid down by NIST (see the NIST methodologies).
- Caine Interface – a user-friendly interface that brings together a number of well-known forensic tools, many of which are open source;
- Updated and optimized environment to conduct a forensic analysis;
- Semi-automatic report generator, which gives the investigator an easily editable and exportable document summarizing the activities;
- Adherence to the investigative procedure recently defined by Italian Law 48/2008.
In addition, Caine is the first distribution to include forensic scripts inside Caja/Nautilus and all the security patches needed to avoid altering the devices under analysis.
The distro uses several patches specifically constructed to make the system "forensic", i.e., to avoid altering the original device being examined or duplicated:
- Root file system spoofing: patch that prevents tampering with the source device;
- No automatic recovery of corrupted journals: patch that prevents tampering with the source device through journal recovery;
- Mounter and RBFstab: mounting devices simply and via a graphical interface. RBFstab is set to treat EXT3 as EXT4 with the noload option, to avoid automatic recovery of any corrupt EXT3 journal;
- Swap file off: patch that avoids modifying the swap file on systems with limited RAM, preventing alteration of the original computer artifact and the overwriting of data useful for the investigation.
Caine and open source: patches and technical solutions have all been made in collaboration with people (professionals, hobbyists, experts, etc.) from all over the world.
CAINE fully represents the spirit of the open source philosophy: the project is completely open, and anyone could take over the legacy of the previous developer or project manager.
The distro is open source, the Windows side (Nirlauncher/Wintaylor) is open source and, last but not least, the distro is installable, giving the possibility to rebuild it in a new version and ensure a long life for this project.
12. Parrot Security OS
Parrot Security OS is an advanced operating system developed by Frozenbox Network, designed to perform security and penetration tests, do forensic analysis or act anonymously.
Anyone can use Parrot, from the pro pentester to the newbie, because it provides the most professional tools combined in an easy-to-use, fast and lightweight pen-testing environment, and it can also be used day to day.
It seems this distro targets Italian users specifically, like a few others mentioned above. Its interface looks cleaner, which suggests an active development team working on it, something that can't be said about some other distros. If you go through their screenshots page you'll see it's very neat. Give it a try and report back; you never know which distro might suit you better.
13. BlackArch Linux
BlackArch Linux is a lightweight expansion to Arch Linux for penetration testers and security researchers. The repository contains 838 tools. You can install tools individually or in groups. BlackArch is compatible with existing Arch installs.
Please note that although BlackArch is past the beta stage, it is still a relatively new project. [As seen in BlackArch Website]
I've used Arch Linux for some time; it is very lightweight and efficient. If you're comfortable with building your Linux installation from scratch and at the same time want all the pentest tools (without having to add them manually one at a time), then BlackArch is the right distro for you. Knowing the Arch community, your support-related issues will be resolved quickly.
However, I must warn you that Arch Linux (or BlackArch Linux in this case) is not for newbies; you will get lost at step 3 or 4 of the installation. If you're moderately comfortable with Linux and Arch in general, go for it. Their website and community look very organized (I like that) and the project is still growing.
I've tried to gather as much information as I could to compile this list. If you're reading this because you want to pick one of these many penetration testing Linux distributions, my suggestion would be to pick the distribution closest to your current one. For example, if you're an Ubuntu user, pick something based on Ubuntu; if you're a Gentoo user then Pentoo is what you're after; and so forth. Like any Linux distribution list, many people will have many opinions on which is best. I've personally used several of them and found that each puts emphasis on a different area. It is up to the user which one they would like to use (I guess you could try them in VMware or VirtualBox to get a feel).
I know for a fact that there are more penetration testing Linux distributions out there and that I missed some. My research shows these are the most used and best maintained distros, but if you know of other penetration testing Linux distributions that you would like added to this list, let us know via the comments.
As technology is wont to do, drones have grown beyond purely military applications and entered the civilian sector.
The World Health Organization (WHO) has teamed up with Matternet, a California tech company, to explore the use of drones in the delivery of medical supplies to remote regions of Bhutan, according to Health IT Outcomes.
Matternet started getting attention in 2013, after CEO Andreas Raptopoulos' TED talk about the application of drones to the healthcare industry. His talks at the time focused on Africa and Haiti. “Imagine if you are in Mali with a newborn in urgent need of medication — it may take days to come … 1 billion people on Earth have no access to all-season roads.”
The company started with octocopters delivering medicine to the Petionville camp in Port-Au-Prince, Haiti, after the 2010 earthquake, according to Healthcare IT News.
Bhutan, located in South Asia, currently battles a doctor-to-patient ratio of 1:3300. That statistic is challenging on its own, but healthcare delivery in the country also means contending with the obstacles of the Himalayan mountain range. Together, these traits made Bhutan an excellent testing ground for the WHO and Matternet to explore the benefits of delivering medical packages by drone.
According to Raptopoulos, “The beauty of this technology is its autonomy. There’s no pilot needed to fly this vehicle. They fly using GPS waypoints from one landing station to the next. Once they arrive at a landing station, they swap battery and load automatically. This is the heart of our system … we believe that Matternet can do for the transportation of matter what the Internet did for the flow of information.”
According to Quartz, the drones in Bhutan are small quadcopters, capable of carrying loads of about four pounds a distance of 20 km at a time between pre-designated landing stations. Matternet can track the flights in real time, and plans to eventually incorporate automated landing stations that will replace drone batteries, thereby extending flight times. Each drone costs between $2,000 and $5,000. So far, they've functioned without any glitches.
Phil Finnegan, an analyst at the Teal Group (a U.S.-based firm that analyzes the aerospace industry), says the firm sees a potential civil government and commercial market that will reach $5.4 billion over the next ten years, with applications in agriculture and humanitarian efforts.
High-tech companies use the Internet to provide customers with a clean, insulated environment where they can make purchases, do research and chat with friends and strangers.
But behind every antiseptic Web page lies its real-world technological counterpart; and all too often the servers that run Web sites and online services aren't nearly as spic-and-span as the pages they bring to life.
That's especially true in large cities where Internet companies either use servers owned by other firms or hire such firms to house their servers remotely. These server "farms," also known as data centers, use enormous amounts of energy relative to their size. Exodus Communications' operations in the San Francisco Bay area, for instance, consume as much electricity as 12,000 houses.
What's more, server farms can't risk power outages, or the consequent lack of access to the Internet. Therefore, they rely on diesel backup generators, which typically generate more pollution than power plants.
These concerns have led a number of city officials to rethink how their municipalities can accommodate the needs of high-tech firms while ensuring that the resulting energy use and pollution doesn't spiral out of hand. San Francisco, for example, recently passed temporary legislation that requires new server farms to apply for conditional use permits. To receive the permit, a server farm needs to demonstrate that it has minimized air pollution from its backup generators and designed its building to be energy efficient. The Board of Supervisors is now considering permanent guidelines for future server farms.
"The regulation for the conditional use process is pretty balanced and is typical for a lot of development, so were not adding something that developers arent familiar with," said Greg Asay, a legislative aide for San Francisco Supervisor Sophie Maxwell, who sponsored the legislation. San Francisco has 16 server farms in existence or in development that total two million square feet, and many of them lie in Maxwells district.
The legislation was driven by health and energy concerns, said Asay, "and the energy concerns have a health impact as well. We have, in California, a mad rush to build power plants, including here in San Francisco, and if the demand [for electricity] keeps increasing, were going to be stuck with a lot of power plants."
Residents of Maxwells district, which also contains San Franciscos two power plants, already suffer disproportionately high asthma rates. The fear was that without this legislation, their problems would only get worse, whether from the construction of new power plants or from the exhaust of diesel generators that carry server farms through Californias increasingly common blackouts.
The health concerns also became a pressing issue due to residential growth around the farms themselves. "Its tough to find a place where theres not someone [already living nearby]. Even within industrial areas, theres been massive growth in the last few years of these live/work lofts [converted warehouse offices exempt from most housing laws]," said Asay. "Even when an area is not zoned for housing, you still have housing across the street. Its tough to find a whole part of town that we can section off as industrial."
Server Farm Repellant
If San Francisco cant do so, though, server farms might go elsewhere. "You need to be in the proximity of those who own and use the servers, preferably within about 20 minutes to a half-hour, so that if theres a problem, the user will be able to reach the data center very quickly," said John Mogannam, senior vice president of engineering for U.S. DataPort in San Jose. "If somebody wants to build a data center [in San Francisco] and knows that he has to jump through hoops, it wont be hard to find a place in Oakland, South San Francisco, Millbrae or wherever else space is available."
"We actually like the [San Francisco] regulations because it will encourage companies to come locate on our campus," said Mogannam, referring to his companys 174-acre, 10-warehouse complex thats currently under construction in San Jose. "As far as the city of San Francisco is concerned, it will discourage companies from locating data centers there."
Asay isnt so sure. After all, server farms can actually be accessed from anywhere in the world; its mere habit that keeps them close to the companies they serve. Whats more, he says, "Even if San Francisco is first, it wont be long before the rest of the country enacts these types of regulations."
U.S. DataPort, for instance, will be employing a natural gas cogeneration plant in its new San Jose facility at the insistence of both the city and the state. "It eliminates the diesels completely," said Mogannam.
The fallout from San Francisco's new regulations probably won't be apparent in the near future. "The economic downturn is giving us time to figure out whether there will be a ripple effect from the legislation," said Asay. "We have a lot of permits already in, but they might not build out for a couple of years. Right now, there's not much of a demand."
LIVERMORE, Calif., Dec. 22 — Sandia National Laboratories has formed an industry-funded Spray Combustion Consortium to better understand fuel injection by developing modeling tools. Control of fuel sprays is key to the development of clean, affordable fuel-efficient engines.
Intended for industry, software vendors and national laboratories, the consortium provides a direct path from fundamental research to validated engineering models ultimately used in combustion engine design. The three-year consortium agreement builds on Department of Energy (DOE) research projects to develop predictive engine fuel injector nozzle flow models and methods and couple them to spray development outside the nozzle.
Consortium participants include Sandia and Argonne national laboratories, the University of Massachusetts at Amherst, Toyota Motor Corp., Renault, Convergent Science, Cummins, Hino Motors, Isuzu and Ford Motor Co. Data, understanding of the critical physical processes involved and initial computer model formulations are being developed and provided to all participants.
Sandia researcher Lyle Pickett, who serves as Sandia’s lead for the consortium, said predictive spray modeling is critical in the development of advanced engines.
“Most pathways to higher engine efficiency rely on fuel injection directly into the engine cylinder,” Pickett said. “While industry is moving toward improved direct-injection strategies, they often encounter uncertainties associated with fuel injection equipment and in-cylinder mixing driven by fuel sprays. Characterizing fuel injector performance for all operating conditions becomes a time-consuming and expensive proposition that seriously hinders engine development.”
Industry has consequently identified predictive models for fuel sprays as a high research priority supporting the development and optimization of higher-efficiency engines. Sprays affect fuel-air mixing, combustion and emission formation processes in the engine cylinder; understanding and modeling the spray requires detailed knowledge about flow within the fuel injector nozzle as well as the dispersion of liquid outside of the nozzle. However, nozzle flow processes are poorly understood and quantitative data for model development and validation are extremely sparse.
“The Office of Energy Efficiency and Renewable Energy Vehicle Technologies Office supports the unique research facility utilized by the consortium to elucidate sprays and also supports scientists at Sandia in performing experiments and developing predictive models that will enable industry to bring more efficient engines to market,” said Gurpreet Singh, program manager at the DOE’s Vehicle Technologies Office.
Performing experiments to measure, simulate, model
Consortium participants already are conducting several experiments using different nozzle shapes, transparent and metal nozzles and gasoline and diesel type fuels. The experiments provide quantitative data and a better understanding of the critical physics of internal nozzle flows, using advanced techniques like high-speed optical microscopy, X-ray radiography and phase-contrast imaging.
The experiments and detailed simulations of the internal flow, cavitation, flash-boiling and liquid breakup processes are used as validation information for engineering-level modeling that is ultimately used by software vendors and industry for the design and control of fuel injection equipment.
The goals of the research are to reveal the physics that are general to all injectors and to develop predictive spray models that will ultimately be used for combustion design.
“Predictive spray modeling is a critical part of achieving accurate simulations of direct injection engines,” said Kelly Senecal, co-founder of Convergent Science. “As a software vendor specializing in computational fluid dynamics of reactive flows, the knowledge gained from the data produced by the consortium is invaluable to our future code-development efforts.”
Industry-government cooperation to deliver results
Consortium participants meet on a quarterly basis where information is shared and updates are provided.
“The consortium addresses a critical need impacting the design and optimization of direct injection engines,” Pickett said. “The deliverables of the consortium will offer a distinct competitive advantage to both engine companies and software vendors.”
Source: Sandia National Laboratories
Streamline Datacenters for Scalable Computing
Traditional datacenters – server sprawl leads to gross inefficiencies, high costs
Historically, datacenter servers have been built with fixed designs tailored to specific applications and requiring long lead times. These purpose-built systems require complete replacement when the time comes for processor, storage, memory or other component upgrades. They are also usually difficult to redeploy for other applications or use cases, and require individual management. Ultimately they contribute to server sprawl as more application-specific systems are brought online – a bloat that leads to poor utilization, high operating expenses and rampant power and cooling costs.
The Open Compute Project – streamlining scalable, rack scale computing
The Open Compute Project was formed to design and enable the most efficient server, storage and datacenter hardware designs for scalable computing. The focus of the project is on customer-designed, simple, cost-effective open technologies that can be acquired from multiple sources and efficiently deployed. Early proponents include Facebook, Rackspace, university and government labs, and Wall Street banks.
Open Compute servers are simple and modular, enabling a single design to be configured for different purposes and supporting common applications, such as high-performance computing (HPC), general-purpose server and storage-server applications. The modularity of Open Compute allows server hardware components to be disaggregated and easily optimized, paving the way to rack scale computing. Compared to traditional servers, material and shipping costs of Open Compute servers are lower, and upgrades are simpler and more cost-effective. Open Compute servers also support a common management framework across nodes from different suppliers, enable the design of low-cost building block storage and energy-efficient datacenters, and make it easy for IT organizations to better match hardware to the software that it runs.
Currently, the Open Compute Project defines designs and standards for dual-processor server motherboards, high-efficiency self-cooled server power supplies, a simple screw-less server chassis, designs for 42U server racks and 48 volt DC battery cabinets, an integrated DC/AC power distribution scheme, a 100% air-side economizer and an evaporative cooling system to support the servers.
Open Compute transforms the server design and delivery model. Traditionally, original equipment manufacturers (OEMs) define server architectures and deliver the systems to end customers. With Open Compute, end customers define server architectures and original design manufacturers (ODMs) or OEMs deliver the solutions. Open Compute solutions also integrate the latest technologies such as flash cache for fast storage.
New datacenter ecosystem partnerships to emerge
As rapid growth continues in private and public clouds and Web 2.0 deployments, Open Compute will give rise to new ecosystem partnerships between large-scale end customers and tier-one storage solution providers like Broadcom. Like never before, customers will work directly with Broadcom and other providers to deploy innovative products and architectures.
Broadcom central to the Open Compute ecosystem
Broadcom is leveraging its deep history of collaboration in its work with Open Compute. Broadcom's work with ODMs on Open Compute reference designs and implementations is very similar to Broadcom's traditional business with OEMs. In fact, Broadcom helped develop and deliver one of the most popular Open Compute platforms – OpenVault.
Broadcom has two key advantages in an Open Compute-style ecosystem. First, because Broadcom offers the de facto standard for storage products and solutions, it is central to the Open Compute ecosystem. As the industry migrates to hyperscale datacenters, more Open Compute or Open Compute-inspired products will be deployed, and Broadcom will be deeply involved. Secondly, Open Compute is an excellent venue for Broadcom to leverage its collaboration model to work directly with end customers who understand the technology problems they need solved, who need the enterprise expertise of Broadcom to solve them while improving datacenter capital and operating expenditures, and who are uninhibited by legacy products and businesses.
Broadcom offers the broadest portfolio of server storage products and solutions, and brings decades of storage expertise to the design of more efficient, rack scale architectures. Together, Open Compute and Broadcom will help reduce datacenter capital and operating expenditures and enable massive new datacenter architectures.
Broadcom storage silicon is specified in Open Compute servers and directly supports Facebook and other Open Compute server builders. Broadcom is actively engaged in several of the workgroups, and with all Open Compute ODM manufacturers.
While Oracle and MySQL remain top picks for database systems, there are many others available, from big guns like Microsoft SQL Server to the increasingly popular MongoDB. Each has its own strengths and weaknesses, so your latest IT project may find you scratching your head as you try to decide on database software.
If you’re looking for a database platform, you probably already know the basics, but to review: a database is a collection of information of almost any type, organized so that it can be accessed, managed, and updated either by other programs or by users directly. Databases are what allow applications to recall specific data on demand, such as when a social media user looks back at their profile from one year ago.
Databases can be installed on individual workstations or on central servers or mainframes. Applications are as varied as an industry might require; they are used to store and sort transactions, inventory, customer behavior, pictures, video, and more. Most business IT applications will require some form of database.
The first decision you’ll need to make is between desktop and server database. Desktop database management systems are licensed for single users, while server database management systems often include failsafe designs to guarantee they will be always accessible by multiple users and applications.
Some desktop database options include Microsoft Access (included with Office or Office 365 licenses), Lotus Approach, or Paradox. They are pretty inexpensive and use GUIs that make interacting with SQL simple for non-power users.
If you’re reading this blog, chances are you need a server database management solution. Server databases offer greater flexibility, performance, and scalability than a desktop database. Oracle, IBM DB2, Microsoft SQL Server, MySQL, PostgreSQL, and MongoDB are all popular options. MySQL, MongoDB, and PostgreSQL are open source, while the others are closed. Another open source database gaining popularity is Cassandra, originally developed and released by Facebook.
The large vendors like Oracle and IBM have the advantage of longstanding popularity, meaning they now work with a variety of programming languages and operating systems. Microsoft SQL is conveniently integrated into the Windows Server stack and is relatively inexpensive.
Before choosing a vendor, you’ll need to ask yourself a few key questions about how you plan to store and query your data.
One quick way to narrow down your options is to decide whether you need an SQL (Structured Query Language) based database or NoSQL. SQL databases are relational, which means they are sorted into a table and organized by each entry (the row) and its qualities (the columns). It is important to note that you have to predefine these qualities. NoSQL databases can have varying storage types, including document, graph, key-value, and columnar.
The four main NoSQL storage types work as follows:
- Document databases store each record in a document, and documents are grouped into collections. The structure of each document does not have to be the same.
- Graph databases are best suited for data that maps naturally to a graph, such as trends; entries and the information about them are connected by lines (edges).
- Key-value databases associate data as pairs: the key is an attribute that is linked to a value. The resulting associative array, also called a dictionary, is made up of many record entries, each of which contains fields, and the key is used to retrieve the entry from the database.
- Columnar databases organize data into column families, each of which contains rows. The columns do not have to be predefined, and rows do not need to have the same number of columns.
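To make the difference between a predefined relational schema and a flexible document model concrete, here is a minimal Python sketch (the table, field names, and sample values are invented for illustration): the relational table must have its columns declared up front, while the "documents", shown here as plain dictionaries standing in for what a store such as MongoDB would hold, can vary from record to record.

```python
import sqlite3

# Relational (SQL): the columns must be declared up front, and every row
# conforms to that schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contacts ("
    " id INTEGER PRIMARY KEY,"
    " first_name TEXT, last_name TEXT, phone TEXT, email TEXT)"
)
conn.execute(
    "INSERT INTO contacts (first_name, last_name, phone, email) VALUES (?, ?, ?, ?)",
    ("Ada", "Lovelace", "555-0100", "ada@example.com"),
)

# Document (NoSQL): each record is a self-describing document, and two documents
# in the same collection do not need to share the same fields.
contacts_collection = [
    {"first_name": "Ada", "last_name": "Lovelace", "email": "ada@example.com"},
    {"first_name": "Grace", "phones": ["555-0101", "555-0102"], "title": "RADM"},
]
```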
Another important distinction between SQL and NoSQL is ACID compatibility. All SQL databases retain ACID functionality, while many NoSQL options do not. ACID stands for Atomicity, Consistency, Isolation, and Durability. Atomicity means if a transaction has two or more pieces of information, they either all make it into the database or none do. Consistency means if a new entry fails, the data is returned to its previous state before the entry was transacted. Isolation means a new transaction remains separate from other transactions. Durability means data remains in its state even after a system restart or failure.
Note: a transaction refers to any retrieval or update of information.
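As a rough illustration of atomicity, the sketch below uses SQLite through Python's standard library (the table and account names are made up): two related updates are wrapped in a single transaction, so either both are committed or, when a failure is detected partway through, both are rolled back and the data returns to its previous state.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction: commit on success, rollback on any exception
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        cur = conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'carol'")
        if cur.rowcount == 0:
            # The second half of the transfer failed, so abort the whole transaction.
            raise RuntimeError("credit side of the transfer did not apply")
except RuntimeError:
    pass

# Both pieces were rolled back together: alice still has her original 100.
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
```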
SQL databases are generally not scalable across multiple servers, while NoSQL databases are often used in cloud environments because they can scale horizontally across servers, with many platforms automating that scaling. If your data's structure will not change (meaning the categories of each entry are stable, like a contact database of First Name, Last Name, Phone, Address, E-mail, etc.) and you don't expect massive growth, SQL might fit the bill.
Who can forget one of the most iconic lines from the original Star Trek television show: "Warp drive, Mr. Scott. Make it so." Kidding! Just doing a little mash-up there. Before any Trekkie heads explode, here's the real line, delivered by a young William Shatner. All of which is a clumsy, Friday-ish way to lead into this week's meeting in Dallas of space scientists who will discuss possible ways to propel spacecraft faster than the speed of light and probably engage in master-level Star Trek trivia contests. From Yahoo news:
Spacecraft propelled by antimatter harvested by robotic factories on Mercury will be under discussion, as will spacecraft made from hollowed-out asteroids and a laser-beam “highway” to provide energy for ships to “hop” to nearby stars. Some of these technologies may come into being within 20 years, the organizers claim -- but the goal is interstellar travel by 2100, visiting planets such as those found by NASA’s Kepler space telescope.
Maybe it is possible that humans could go interstellar by then. After all, 2100 is more than 86 years from now -- a long time in terms of scientific discoveries and breakthroughs -- but we're still talking about taking nine months to go to Mars. We'll need a quantum leap to get to the interstellar level. Yet I'm not sure what the big rush is. Granted, we're well ahead of schedule in degrading the Earth's environment, but our planet is a long way from being uninhabitable. And while I agree with Dr. Friedwart Winterberg, a theoretical physicist from the University of Nevada, who tells Yahoo, “For the human species and its unique culture to survive the death of the sun, a bridge must be built to other solar systems with earthlike planets," scientists expect our sun to remain unchanged for several billion years. There's time. So while I think it's great that researchers are trying to advance the science of space travel, we're not yet in a doomsday scenario. To use a football analogy, let's work the ball upfield before throwing up a Hail Mary.
Using off-the-shelf gaming technology that tracks brain activity, a team of scientists has shown that it's possible to steal passwords and other personal information.
Researchers from the University of Oxford, University of Geneva and the University of California at Berkeley demonstrated the possibility of brain hacking using software built to work with Emotiv Systems' $299 EPOC neuro-headset.
Developers build software today that responds to signals emitted over Bluetooth from EPOC and other so-called brain computer interfaces (BCI), such as MindWave from NeuroSky. Of course, if software developers can build apps for such devices, so can criminals.
"The security risks involved in using consumer-grade BCI devices have never been studied and the impact of malicious software with access to the device is unexplored," the researchers said in a paper presented in July at the USENIX computer conference. "We take a rst step in studying the security implications of such devices and demonstrate that this upcoming technology could be turned against users to reveal their private and secret information."
The researchers found that the software they built to read signals from EPOC significantly improved the chances of guessing personal identification numbers (PINs), the general area participants in the experiment lived, people they knew, their month of birth, and the name of their bank.
The Emotiv device, used in gaming and as a hands-free keyboard, uses sensors to record electrical activity along the scalp. Voltage in the brain spikes when people see something they recognize, so tracking the fluctuation makes it possible to gather information about people by showing them series of images.
The researchers conducted their experiments on 28 computer science students. In the PIN experiment, the subjects chose a four-digit number and then watched as the numbers zero to nine were flashed on a computer screen 10 times for each digit. While the images flashed before the subjects, the researchers tracked brain activity through signals from the EPOC neuro-headset.
The same form of repetitive showing of images was used in the other experiments, such as a series of bankcards to determine a subject's bank or images of people to find the one they knew.
In general, the researchers' chance of guessing correctly increased to between 20% and 30%, up from 10% without the brain tracking. The exception was in figuring out people's month of birth. The rate of guessing correctly increased to as much as 60%.
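The researchers' actual signal processing is not reproduced here, but the general shape of this kind of attack can be sketched as follows: average the responses recorded after each repetition of a stimulus and rank the candidate digits by the size of the evoked response. The snippet below uses NumPy with randomly generated numbers standing in for real EEG epochs, and the window and amplitude choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: for each candidate digit 0-9, ten "epochs" of simulated EEG,
# 64 samples each. In a real attack these would be recorded from the headset
# while the digits flash on screen.
epochs = {digit: rng.normal(0.0, 1.0, size=(10, 64)) for digit in range(10)}

# Pretend the victim's PIN contains the digit 7: recognition evokes a larger
# response, so a small bump is added to those epochs.
epochs[7][:, 20:30] += 1.5

def score(digit_epochs):
    """Average the epochs, then measure the mean amplitude in an arbitrary
    post-stimulus window (samples 20-30)."""
    evoked = digit_epochs.mean(axis=0)
    return float(evoked[20:30].mean())

ranking = sorted(range(10), key=lambda d: score(epochs[d]), reverse=True)
print("Most likely PIN digits, best guess first:", ranking[:3])
```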
Nevertheless, the overall reliability was not high enough for an attack targeted at a few individuals. "The attack works, but not in a reliable way," Mario Frank, a UC Berkeley researcher in the study, said on Friday. "With the equipment that we used, it's not possible to be sure that you found the true answer."
A criminal would have to build malware that could be distributed to as many people as possible. Such a tactic is used in distributing malware via email, knowing that only a small fraction of recipients will open the attachments. However, that small fraction is enough to create botnets of hundreds of thousands of computers.
With BCI devices, the user base today is too small to launch large-scale attacks. Also, users buy software directly from manufacturers, so it would be difficult for criminals to distribute malware.
However, a security risk could arise in the future, if brain-tracking devices become standard for interacting with computers and online stores are created to sell hundreds of thousands of applications, much like people buy apps for Android smartphones today.
To minimize risk, device manufacturers should start building security mechanisms today, such as limiting the information software can access from the headset to only the data needed to run the app, experts say.
"One thing that could be improved, for instance, is that the device itself does some pre-processing and only outputs the data that is required for the application," Frank said.
Such precautions should be taken today to prevent unnecessary risks in the future, he said.
Secure programming is the last line of defense against attacks targeted toward our systems. This course shows you how to identify security flaws & implement security countermeasures when writing code for Android and iOS mobile devices. Using sound programming techniques and best practices shown in this course, you can produce high-quality code that stands up to attack.
The course covers major security principles when writing Java code for Android and Objective-C code for iOS.
The objectives of the course are to acquaint students with security concepts and terminology, and to provide them with a solid foundation for developing secure software. By course completion, students should be familiar with major secure programming practices and have learnt the basics of security analysis and design.
Demonstrating the Top 10 Mobile Security Attacks
Insecure data storage
Weak server side controls
Insufficient transport layer protection
Client side injection
Poor authorization and authentication
Improper session handling
Security decisions via untrusted inputs
Side channel data leakage
Sensitive information disclosure
Secure Coding Best Practices
Creating files with correct ACLs
Secure memory handling
Secure data storage
Transport level encryption
Storage level encryption
Validating server certificates and avoiding Man-in-the-Middle
Client-side certificate authentication
Application permission isolation
The permission model
Permission types and app restriction
Creating custom permissions
Verifying process permissions during runtime
Securely activating components
Avoiding access to restricted screens
Avoiding hard coded secrets
Obfuscate the program
Detecting common code-level vulnerabilities
Secure device management
Basic knowledge of the Android development platform
Basic knowledge of the iOS development platform
Wednesday January 11, 2017
Twin Killers in Manufacturing
As every production manager is aware, it's not easy to achieve a smooth and continuous flow of products in concert with market demand. What is generally not recognized, however, is that the major obstacles to running an effective operation boil down to two basic phenomena that exist in all manufacturing operations. These two phenomena are referred to as dependent events, which include the interactions between resources and products, and statistical fluctuations, which cause excessive amounts of variation. In order to improve our decision-making ability, it's necessary to understand the nature of both the dependencies and the variation that permeate all production firms. So let's explore both of these phenomena in more detail.
In any manufacturing operation there are numerous dependencies that exist. Dependencies are those sequence of operations or activities in any plant that cannot occur until some other operation takes place and has been completed. These interactions between dependent events play a vital role in the smooth flow of materials through a manufacturing process. If a preceding operation is delayed, then many times the end product will be delayed and shipment to the customer could be late. There are many examples of these dependencies within every manufacturing facility. For example, the production process cannot begin until the raw materials are received, accepted, and sent to the first step in the process. Likewise, the raw materials cannot be received until an order is placed, the raw material is produced, payment is received, etc. And what happens when a quality problem is detected in any step in the process? The root cause of the problem must be found, impacted materials must be reworked (or scrapped) or new materials must be supplied.
The significance of dependencies within the manufacturing process is magnified immensely by another reality, the unavoidable existence of variability in the form of both random events and statistical fluctuations. Random events are those activities occurring at unpredictable times that have a significant disruptive effect on the entire manufacturing process. Random events occur in the process from many different sources. For example, suppose a customer order is suddenly canceled? What problems result from this cancellation?
In any manufacturing facility there are events referred to as statistical fluctuations that wreak havoc on the shop floor. These fluctuations cause higher levels of variability. Typical examples include actual customer orders differing from what has been forecasted, the time required to process materials at a work station differing from the planned time, or even the time to set up a machine differing from the predicted amount.
Although random events and statistical fluctuations are different phenomena, they both cause variability and if the process is poorly controlled, they both send shock waves throughput the system. Production managers are constantly on guard for both of these variation producers that cause disruptions and adjustments to their planned activities. So what’s the impact of dependency and variability?
Dependency and Variability Impact
In order to demonstrate the damaging effects of dependency and variability in a manufacturing system, let’s create a simple analogy. Imagine a basic production line that only produces a single product, uses only one raw material, and has a single sequence of dependent operations. The figure below describes this simple process.
Raw material is received, processed by the first resource, transferred to the second resource, then the third and so on until a finished product is created. Each step is dependent upon the preceding step. And the impact of any variation in time or quality between steps is felt by all downstream operations. Downstream resources fall further and further behind the work schedule as the disruptions and negative fluctuations accumulate throughout the system. One obvious effect is the accumulation of work-in-process inventory which translates into lost or reduced throughput. One important point is that negative variations from the scheduled flow of product will accumulate more readily than positive variations will. The definitive result is that as the flow of products is disrupted, throughput decreases, excess work-in-process forms and operating expenses increases.
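A small simulation helps make the point. The sketch below, written in plain Python with made-up capacities, pushes work through five dependent stations whose per-period output fluctuates randomly around the same average. Even though every station averages 10 units per period, the line's realized throughput comes in below 10 and work-in-process accumulates between stations, because a downstream station can never make up for what an upstream station failed to deliver. (For simplicity, material is assumed to move downstream within the same period.)

```python
import random

random.seed(42)

STATIONS = 5                 # five dependent work stations in series
PERIODS = 1000
AVG_CAPACITY = 10            # every station averages 10 units per period
wip = [0] * STATIONS         # work-in-process waiting in front of each station
finished = 0

for _ in range(PERIODS):
    incoming = AVG_CAPACITY                  # raw material arrives at a steady rate
    for i in range(STATIONS):
        wip[i] += incoming
        # This period's output fluctuates between 5 and 15 (average 10).
        capacity = random.randint(AVG_CAPACITY - 5, AVG_CAPACITY + 5)
        processed = min(wip[i], capacity)    # a station cannot process work it hasn't received
        wip[i] -= processed
        incoming = processed                 # this station's output feeds the next one
    finished += incoming

print(f"Realized throughput: {finished / PERIODS:.2f} units per period "
      f"(every station averages {AVG_CAPACITY})")
print(f"Work-in-process stranded in the line: {sum(wip)} units")
```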
One of the teachings of Lean Manufacturing is the concept of the balanced manufacturing plant, meaning that all process steps are close to having the same processing times. Is this a good idea?
In my next post, we will discuss the balanced plant concept in more detail as well as the effect of disruptions in a balanced plant. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time.
L. Srikanth and Michael Umble, Synchronous Management – Profit-Based Manufacturing for the 21st Century, Volume One – 1997, The Spectrum Publishing Company, Wallingford, CT
Riot Games is putting artificial intelligence to work to improve the sportsmanship of millions of gamers
In this article...
- League of Legends creator Riot Games has been experimenting with AI and predictive analytics to find and stop online trolls and increase sportsmanship
- Since implementing their AI-assisted program, verbal abuse in games has dropped 40 percent
Millions of young online gamers today are accustomed to battling bad guys. But their biggest foes are often their fellow players. Many online gaming sites are rife with creepy bigotry, harassment and even death threats. It's a common issue for many online communities, too, including Twitter, YouTube and Facebook.
So how do you root out the rotten apples? Over the past several years, Riot Games, which produces the immensely popular League of Legends, has been experimenting with artificial intelligence (AI) and predictive analytics tools to find the online trolls and make their games more sportsmanlike. League players are helping spot toxic players and, as a community, deciding on appropriate reactions. Their judgments are also analyzed by an in-house AI program that will eventually—largely on its own—identify, educate, reform and discipline players. The research Riot Games is doing into how large and diverse online communities can self-regulate could be used in everything from how to build more collaborative teams based on personality types to learning how our online identities reflect our real-world identities.
“We used to think that online gaming and toxic behavior went hand in hand," explains Jeffrey Lin, lead game designer of social systems at Riot Games. “But we now know that the vast majority of gamers find toxic behavior disgusting. We want to create a culture of sportsmanship that shows what good gaming looks like."
Achieving that goal presents big challenges. Riot Games has always maintained rules of conduct for players—forbidding use of racial slurs and cultural epithets, sexist comments, homophobia and deliberate teasing—but in the case of League, the volume of daily activity has made it all but impossible to enforce the rules through conventional tools and human efforts. More than 27 million people play at least one League game per day, with over 7.5 million online during peak hours.
That's one reason why Riot Games is putting serious brainpower behind the initiative. Lin, who holds a doctorate in cognitive neuroscience, works with two other Riot doctors—data science chief Renjie Li (Ph.D. in brain and cognitive sciences) and research manager Davin Pavlas (Ph.D. in human factors psychology)—to drive the program forward. Creating the tech foundation for this effort wasn't easy either. A giant data pipeline was needed to turn petabytes of anonymous user data into useful insights on how players behave. Lin's team also collaborated with artists and designers to make sure their work didn't interfere with the look or flow of the game.
Phase 1: The Tribunal
In the first phase of the program—the Tribunal, which launched in 2011—players would report fellow gamers when they felt they broke the rules. Reports were fed into a public case log where other players (called “summoners") were assigned incidents to review. The case often included chat logs, game statistics and other details to help the reviewer decide if the accused should be punished or pardoned. Linn says that most negative interactions come from otherwise well-behaved players who are simply having a bad day and take it out online.
The players use the context of the remark to vote on the degree of punishment for the case, which could range from a modest email “behavior alert" reminding them of a rules infraction and hopefully pointing them to positive play or a lengthy ban. After tens of millions of votes were cast, Riot put the Tribunal “in recess" in 2014 and started to make the pivot toward a new system that could be managed more on its own through AI.
“There are two ways to deal with any type of problem at this scale, and we support both in tandem," says Erik Roberts, head of communications at Riot. “First, put the tools in the hands of the community and second, build machine learning systems that leverage the scale of data—contributed from the community through reports—to combat the problem."
Phase 2: AI
Last year, Riot kicked off testing of its new “player reform" system, one that provides faster feedback and automates parts of the process. It specifically targets verbal harassment, with the system capable of emailing players “reform cards" that mark evidence of negative behavior. Lin's team hand-reviewed the first few thousand cases to make sure everything was going well and the results were astounding: Verbal abuse has dropped 40 percent since the Tribunal and the new AI-assisted evaluation program took over.
Lin believes that more game developers will follow this model—linking cognitive research to better game play and hiring cross-discipline teams dedicated to that purpose. “By showing toxic players peer feedback and promoting a discussion among the community, players reformed," Lin says. “We showed that with the right tools we could change the culture."
To learn how Big Data, automation and artificial intelligence will shape the future, download the HPE white paper “Big Data in 2016.”
As everything around us becomes connected to the Internet, from cars to thermometers to the stuff inside our mobile phones, technologists are confronting a tough new challenge: How does a machine verify the identity of a human being?
In Redwood City, Calif., a start-up called OneID is offering a single sign-on for a variety of Web sites and devices. In a video, Jim Fenton, an engineer with OneID, demonstrated how he used the technology to open his garage door at home.
“The Achilles’ heel of the Internet of things is, how do you secure access to all these things?” said the engineer, Jim Fenton. “If you connect all these things to the Internet you need to have good ways — good from a security standpoint and a convenience standpoint — good ways to control access to things. Having user names and passwords is not a good solution for every device.”
Trouble is, not very many things — online or off — have yet adopted the OneID system, which means Mr. Fenton must still use a lot of user names and passwords. He keeps them in a couple of password managers on his computer, along with an encrypted USB stick. “It’s not fun,” he said.
Read the full article at: http://bits.blogs.nytimes.com/2013/09/10/beyond-passwords-new-tools-to-i...
A striking number of U.S. travelers, while aware of the risks, are not taking the necessary steps to protect themselves on public Wi-Fi and are exposing their data and personal information to cyber criminals and hackers, according to research released today by AnchorFree, the global leader in consumer security, privacy and Internet freedom.
The PhoCusWright Traveler Technology Survey 2013 polled 2,200 U.S. travelers over the age of 18, revealing new insights into travelers’ online behavior and their understanding of cyber risks.
It is estimated that 89 percent of Wi-Fi hotspots globally are not secure. The increased use of smartphones and tablets to access unsecured public Wi-Fi hotspots has dramatically increased the risk of threats. Travelers were three times more likely to use a smartphone or tablet than a laptop to access an unsecured hotspot in a shopping mall or tourist attraction, two times more likely in a restaurant or coffee shop and one and a half times more likely at the airport.
While most travelers are concerned about online hacking, very few know how, or care enough, to protect themselves. Looming threats — from cyber thieves to malware and snoopers — are skyrocketing on public Wi-Fi and travelers need to be vigilant in protecting themselves.
Further to this point, a striking 82 percent of travelers surveyed reported that they suspect their personal information is not safe while browsing on public Wi-Fi, yet nearly 84 percent of travelers do not take the necessary precautions to protect themselves online. The top three concerns cited when using public Wi-Fi are the possibility of someone stealing personal information when engaging in banking or financial sites (51 percent), making online purchases that require a credit or debit card (51 percent) and making purchases using an account that has payment information stored (45 percent). Travelers were less concerned about using email or messaging services on public Wi-Fi (18 percent).
Cyber-security threats are not the only issues people face while traveling. Thirty-seven percent of international travelers —which equates to 10 million U.S. travelers annually— encountered blocked, censored or filtered content including social networks (40 percent) such as Facebook, Twitter and Instagram during their trip. Top websites that were also blocked include video and music websites such as Hulu and YouTube (37 percent), streaming services such as Pandora and Spotify (35 percent), email (30 percent) as well as messaging sites such as Skype and Viber (27 percent).
To avoid the threat of hacking and cyber attacks, more than half of travelers (54 percent) try not to engage in online activities that involve personally sensitive information while one in five (22 percent) avoid using public Wi-Fi altogether because they believe their personal information is at risk. Only 16 percent reported using a VPN such as Hotspot Shield.
Del Valle ISD is a public school district in Texas that serves over 11,000 students on 15 campuses in Southeast Travis County near Austin. The school district is committed to providing innovative educational programs that cultivate critical thinking skills among its students, including mock trial leagues and debate teams. Unlike typical programs, these students do not just compete within their school district, within their state, or even within their own country. These students are able to compete internationally with students like themselves from all over the world through the power of HD video collaboration.
From the outside, Del Valle ISD may look like a typical school system, but on the inside, students are traveling the world and gaining a cultural education that very few people ever have a chance to experience. By partnering with world-class schools in countries such as South Africa, Australia, Nigeria, Iran, Korea, India, Taiwan, and many others, students from Del Valle ISD are able to flex their debate skills and perform mock trial hearings via LifeSize video conferencing solutions.
The idea was born in 2003. Del Valle ISD had distance learning video equipment that was given to them from Region 13, but it was severely underutilized. Programs such as mock trial and debate required students to travel for competitions. In fact, students from as far as Alaska would journey in for these tournaments, resulting in a week’s worth of travel. Since acquiring the video conferencing equipment from Region 13, the school district has continually built upon its distance learning programs and now participates in international educational programs with 225 schools across 75 countries.
Students now have the opportunity to debate about current events and even historical events, as they take opposing sides on topics such as Armenian genocide, the evolution of the Catholic Church, or even Truman’s role in the bombing of Hiroshima. They collaborate with local colleges and even leaders in the judicial system, such as a judge in Australia for debates and mock trial events. Whether they are working on a topic from the YMCA, ICC or National Forensics League, or choosing a topic of their own, these young people compete against their peers across the globe in crystal-clear HD, all from the comfort of their classroom in their hometown.
“I really see this program as a living book,” said Michael Cunningham, project director of World Class Schools and educator for Del Valle ISD. “These students are learning something incredible every day, and expanding their global knowledge in ways that we never thought were possible a decade ago. This is something that a standardized, multiple choice test could never teach. This is where true critical thinking skills are born.”
The program has even extended beyond mock trials and debate competitions; the students also interact with their international peers to learn more about their culture, food and music. Del Valle ISD students have sung Norwegian Christmas carols, learned how to cook exotic foods like antelope and porcupine, and spoken to Dr. Scott Kofmehl, chief of staff to the United States ambassador to Islamabad to learn about having a career in foreign affairs.
“When we first tried using video conferencing in 2003, the quality of the image just wasn’t good enough to use on a daily basis, but the technology has really evolved since then,” said Cunningham. “LifeSize ClearSea provides exceptional HD quality and it’s so easy to use that we share it with schools all over the world.”
“I want schools to realize that you can do this for next to nothing: bring a world-class school to your school for only a little money,” said Cunningham. “With this program, we are enabling our students to open their mind and see what is really going on in their world. Other districts may prefer to spend the money to bring their students on field trips in their local community, but we have chosen to invest in video conferencing to bring the world to our students.”
LOB and XML complex data types have extended the capabilities of DB2 for z/OS in order to support the needs of today's applications. The infrastructure for storing LOB and XML data is similar in nature because both data types use the concept of an auxiliary set of objects.
LOB data type
Support for large objects (LOBs) was introduced with DB2 V6. Its usage, like many other new DB2 features, was slowly adopted. In the beginning, the administration of LOBs was difficult to manage because all of the SQL to create the objects had to be manually coded. While this method provides the most flexibility in regards to naming standards, it can also be error-prone and time consuming.
DB2 V8 delivered the ability to use the CURRENT RULES = 'STD' special register to create LOBs. The CURRENT RULES = 'STD' special register enables DB2 to implicitly create all of the auxiliary objects when the BASE table is created. However, DB2 dictates the naming standard. The table space name, for example, will begin with an "L" followed by seven randomly generated characters. The table name consists of the first five characters of the base table name, the first five characters of the LOB column name, and eight randomly generated characters. These naming conventions are shown in the example in the following figure:
Figure 1. DB2 naming conventions for auxiliary objects when CURRENT RULES = 'STD' is used
Although the CURRENT RULES = 'STD' special register makes life much easier, many DBAs continue to define LOBs by using the manual method, primarily because of their internal naming standards. This flexibility is not available when dealing with XML.
XML data type
Extensible Markup Language (XML) is the by-product of an evolutionary cycle that began with IBM's Generalized Markup Language (GML) in 1969. XML is platform and vendor independent and has evolved into the industry standard for exchanging data across different systems, platforms, and applications. XML is self-describing and easy to extend, which makes it a very flexible data model for all types of data, both structured and unstructured.
Prior to DB2 9, XML documents could be stored in DB2 for z/OS as either a LOB or "shredded" and stored in multiple relational tables. Unfortunately, storing XML documents in this fashion inhibits the ability to take full advantage of the benefits of XML, such as the search capability. Storing XML data in this fashion also makes it very difficult, if not impossible, to re-create a full XML document on demand. DB2 9, however, provides the ability to store XML data in DB2 in a parsed hierarchical format, which enables the exploitation of full XML capabilities within the database engine.
When using DB2 9's pureXML capabilities, DB2 implicitly creates all of the XML auxiliary objects. Some users might think that this is an undesirable restriction because it is highly probable that the naming standard does not meet existing standards. However, remember that the auxiliary objects are not directly accessed through SQL, and the benefits of increased productivity and minimal errors should not be overlooked.
XML uses a DOCID rather than a ROWID; conceptually, both mechanisms are the same. The auxiliary table space inherits the attributes of the table space storing the BASE table. Only one DOCID is required in the base table regardless of the number of XML columns. The definition of the DOCID is generated automatically by DB2 when an XML column is added to the base table; it is named DB2_GENERATED_DOCID and has a data type of BIGINT. The auxiliary table has the same name as the base table with an "X" prefixed to it. The DOCID index on the BASE table adds an I_DOCID prefix to the name of the BASE table, and the NODEID index on the auxiliary table adds an I_NODEIDX prefix to the name of the BASE table. These naming conventions are shown in the following example:
Figure 2. Naming conventions for XML auxiliary objects
Although LOB objects can still be managed manually, the delivery of the CURRENT RULES = 'STD' special register in DB2 V8 vastly improved productivity. This functionality has been extended even further with the automatic implicit creation of XML object types provided by DB2 9.
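As a rough sketch of what this looks like in practice, the statements below are issued through the ibm_db Python driver (the connection string, table, and column names are placeholders, and exact behavior depends on the DB2 version and subsystem settings): setting CURRENT RULES = 'STD' lets DB2 implicitly create the LOB auxiliary objects, while the XML auxiliary objects are created implicitly by DB2 9 in any case.

```python
import ibm_db  # IBM's Python driver for DB2

# Placeholder connection string: substitute a real host, port, and credentials.
conn = ibm_db.connect(
    "DATABASE=TESTDB;HOSTNAME=db2host.example.com;PORT=446;"
    "PROTOCOL=TCPIP;UID=dbauser;PWD=secret;",
    "", "")

# Ask DB2 to implicitly create the LOB table space, auxiliary table, and index.
ibm_db.exec_immediate(conn, "SET CURRENT RULES = 'STD'")

ibm_db.exec_immediate(conn, """
    CREATE TABLE EMP_DOCS (
        EMPNO   CHAR(6) NOT NULL,
        RESUME  CLOB(1M),  -- LOB column: auxiliary objects named as in Figure 1
        PROFILE XML        -- XML column: DB2 adds DB2_GENERATED_DOCID and the
                           -- auxiliary objects named as in Figure 2
    )""")

ibm_db.close(conn)
```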
The following table summarizes how auxiliary objects for LOB and XML data types can be created and how they are named and stored.
Table 1. Creation, naming conventions, and storage of auxiliary objects
| LOB data type | XML data type |
| --- | --- |
| Auxiliary objects can be created manually | Auxiliary objects cannot be created manually |
| Auxiliary objects can be created implicitly by DB2 9 when CURRENT RULES = 'STD' is used | Auxiliary objects can be created implicitly by DB2 9 |
| Naming conventions for auxiliary objects when CURRENT RULES = 'STD' is used (see Figure 1) | Naming conventions for auxiliary objects (see Figure 2) |
| Auxiliary tables are stored as type X in SYSIBM.SYSTABLES | Auxiliary tables are stored as type P in SYSIBM.SYSTABLES |
Adaptive Learning Systems
What are the adaptive learning systems?
An adaptive learning system is a tool for transforming learning from the passive reception of information into an interactive process in which the learner collaborates in his or her own education. Adaptive learning systems are used in education as well as in business training, and thick- and thin-client versions are available from a variety of vendors.
I like to think of the transport layer as the layer of the OSI Model that enables more interesting traffic. While we network engineers may love a lot of the simpler uses of the IP protocol and networks in general, we'd all be jobless without the transport layer.
Layer 4 provides for the transparent transfer of data for users, systems, and applications and reliable data transfer services to the upper levels. Since the vast majority of our network traffic is IP-based nowadays, it's probably easiest to think about layer 4 as it relates to IP traffic specifically.
The transport layer controls the reliability of communications through flow control, segmentation, and error control. Two great examples of transport protocols are TCP (as in TCP/IP) and UDP. Understanding the differences between TCP and UDP really helps when troubleshooting and when trying to understand the results from a packet capture. TCP, or the Transmission Control Protocol, is connection oriented. This means that when a TCP conversation occurs a session is established and that session is used to control and ensure the flow of data between. Once the conversation is finished the session is terminated. UDP, or the User Datagram Protocol, is not connection oriented. It's a simpler and in some ways more elegant protocol and data is transferred in a "best effort" type of style vs. the guaranteed delivery with TCP.
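The difference is easy to see at the socket level. In the Python sketch below (the host names and ports are placeholders), the TCP client must establish a session with connect() before any data moves and tears the session down when it is finished, while the UDP client simply fires a datagram at the destination on a best-effort basis, with no session at all.

```python
import socket

# TCP: connection-oriented. A session is established, used, then terminated.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("server.example.com", 8080))   # three-way handshake sets up the session
tcp.sendall(b"hello over TCP")              # delivery is acknowledged and ordered
reply = tcp.recv(1024)
tcp.close()                                 # session is torn down

# UDP: connectionless. Each datagram is sent "best effort", with no session,
# no built-in acknowledgement, and no retransmission.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("server.example.com", 9090))
udp.close()
```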
Oftentimes, layers 4-7 can be grouped together and thought of as the application layers. Because we work so much with TCP/IP nowadays, even though TCP/IP is a layer 4 stack I sometimes find myself thinking of it as the application layer. Maybe it's because I tend to associate applications with TCP ports. As a matter of fact, when I started writing this post I titled it "Layer 4 of the OSI Model - understanding the application layer," and then I had to go back and correct myself. Even though this is a common mistake and many folks tend to group these levels, understanding the differences between layers 4-7 of the OSI Model will help to enhance your troubleshooting and design skills.
Layer 4 is also sort of the "hot" layer right now. Years ago, layer 3 was talked about a lot as layer 3 switches were new on the market and in high demand. Today, layer 4 switches are available, and application accelerators, WAN accelerators, load balancers, and firewalls all operate at the layer 4 level. In the case of a WAN accelerator, most operate by first identifying the application via TCP port and then breaking down, optimizing, and rebuilding the TCP session as it passes through and between the WAN optimizers.
Last but not least, network management technologies that leverage flow-based network traffic analysis like NetFlow and IPFix leverage the transport layer information including TCP port numbers and session start/stop/duration to identify and measure application traffic.
In a nutshell, while in the old days many network administrators considered anything above layer 3 to be beyond their demarc of responsibility, sagacious network engineers today treat layer 4 and above as part of their territory as well.
Josh Stephens is Head Geek and VP of Technology at SolarWinds, an IT management software company based in Austin, Texas. He shares network management best practices on SolarWinds GeekSpeak and thwack. Follow Josh on Twitter@sw_headgeek and SolarWinds @solarwinds_inc.
Weather prediction in the 50s. Missile intercepts of UFOs in the 60s. Electronic messages in the 70s. We learned about it all through the commercials.
As far back 1956, Remington Rand told us how Univac could collect historical weather data, analyze that "big data" and use it to make predictions -- "all in a matter of minutes." Imagine, data fed into a computer through magnetic tapes to the "memory tanks" that could hold 12,000 units of information. And it gets the prediction right due to its unique ability to check itself for possible errors. Error checking! Who would have thought!
But real peace of mind came from the Sage computer, part of a defense system created by MIT and IBM working with the Air Force. All scheduled air traffic was entered via punch card (error checking anyone?) so that actual flying objects could be compared to expected flying objects. If an unknown object appears on screen, an officer fires a light gun at it. Scared yet? Next an IBM computer calculates the missile trajectory for intercept, from which "there is no escape." Sure do wish he'd mentioned error checking.
Moving into the 70s, office messages can be read on screen while you have your morning coffee. Want to share something with others? Just push a button and the words you see on screen will appear on paper. Another button and you can send the message to similar units in other offices. It's just an experiment, but soon Xerox systems could assist you in managing your most precise resource. No, not your time or even your children. We're talking about your information.
Next time, we'll move on to some great computer ads from the 1980s.
Why cloud data storage is secure — and why it might not be
This feature first appeared in the Fall 2014 issue of Certification Magazine.
Nude photos of well-known actresses certainly garner attention on their own, but this summer they drew attention to an unlikely subject: cloud security. Private photographs of Jennifer Lawrence, Kate Upton and other celebrities posted by hackers on Reddit and other Internet sites brought the security of cloud storage services into the public spotlight. These photographs were allegedly stolen from Apple iCloud accounts that were used to backup iOS device photos automatically. In the wake of the incident, consumers and businesses find themselves asking the same question: Is the cloud as secure as we think it is?
Security Risks in the Cloud
Cloud data storage services provide a way for users to store information online without concerning themselves with the technical details of how and where the data is actually stored. Simply upload data to a trusted service, such as Box, Google Drive, Dropbox or iCloud, and cloud sync software takes care of the rest. The promise made by the provider is that your data will be securely stored across multiple locations and synced to all of your devices. That’s a compelling business case, especially when you take into account that such services are often either inexpensive or completely free.
The benefits of cloud services over local storage are clear — the ability to access data anytime, anywhere and from any device. So what are the security risks? Moving data to a cloud storage solution introduces security concerns that may not exist in a local storage environment. Organizations making the move to cloud storage should understand these risks and take action to mitigate them as much as possible.
First, cloud data storage solutions may be more susceptible to hacking attacks. The open nature of the service means not only that users can access their data from anywhere, but also that attackers have greater opportunity to attempt that same access. In the case of the iCloud celebrity photo disclosures, hackers allegedly used phishing messages to fool victims into revealing their passwords and/or the answers to their security questions. That information was then used to access photos stored in their online accounts. Apple claims that iCloud itself was never breached — the hackers merely entered through normal channels using stolen info — but the open nature of the service facilitated the attack.
In some cases, the provider itself may experience security issues. The web applications and syncing tools used to store and access files in cloud services can have security vulnerabilities, just like any other piece of software. For example, during a four-hour period in June 2011, Dropbox inadvertently allowed access to any account without requiring the correct password. Although Dropbox techs quickly corrected the flaw, it highlighted the fact that the privacy of our information is dependent upon the implementation of strong security controls by cloud providers.
Start with Policy
Every business needs to consider the impact of cloud data storage options on the security of their data. Even if you have no plans to intentionally place your data in the cloud, employees may discover the convenience of consumer cloud services and place corporate data there for easy access. (For example, does anyone at the office use Google Drive to share work documents?) The cloud security journey should begin with a solid set of policies.
The first policy every organization should consider implementing is a Bring Your Own Device (BYOD) policy. This policy should clearly state whether, and to what extent, employees may use personally owned devices on corporate networks and with business information. The permissiveness of this policy will vary with an organization’s risk tolerance, but the bottom line is that it should clearly answer questions such as:
— What types of devices may be used?
— Must employees register BYOD devices with the company?
— What data may be stored, processed and transmitted on the device?
— What security controls are required before a device is used for business purposes?
— Does the organization retain the right to remove data from personally owned devices? Through what mechanism?
Organizations should also address the growing trend of consumerization of technology by adopting a Bring Your Own Cloud (BYOC) policy. Similar to the BYOD policy, the BYOC policy should clearly outline whether employees may use personal cloud accounts to store business information. If this is allowed, then the policy should clearly state the conditions of use.
The final policy element that should be in place for secure use of the cloud is a data classification policy. This policy should outline the different categories of business information and clearly describe what information fits into each category. For example, a company might adopt a classification policy that places all information into categories labeled Highly Sensitive, Sensitive and Public. The Highly Sensitive category might include Social Security Numbers, credit card numbers and similarly restricted data elements. The Public category might include only information explicitly approved for public release, while all remaining information fits into the Sensitive category.
Data classification efforts should directly support the BYOD and BYOC policies. If an organization clearly classifies its data, those classifications may then be used to describe the appropriate use of personally owned devices and personal cloud accounts. For example, a BYOD policy might explicitly state that approved personally owned devices may be used to process Sensitive information but may not store, process or transmit Highly Sensitive information under any circumstances.
Secure Account Access
Organizations choosing to adopt cloud data storage solutions should implement strong security controls to protect access to stored data. This begins, of course, with using strong passwords to protect accounts. The easiest way to do this is integrating the cloud service with your existing authentication system using the Security Assertion Markup Language (SAML). Major cloud providers typically offer SAML integration as an added feature on their enterprise accounts.
Enterprises may also strengthen authentication by adopting multifactor authentication, particularly for those accounts storing sensitive information. In a multifactor authentication approach, the user first provides a password and is then prompted to input a one-time code. That code is provided by a smartphone app, text message, or special keyfob. By providing this code, the user not only proves that he or she has knowledge of the account password but also has possession of a trusted device.
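Many of these one-time codes are produced by the time-based one-time password (TOTP) algorithm standardized in RFC 6238. The sketch below, which uses only Python's standard library and a made-up Base32 secret, shows the basic mechanics: the server and the user's device share a secret, each derives a code from the current 30-second time window, and the login is accepted only if the submitted code matches.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)            # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"   # example Base32 secret, not a real account
print("Code the device displays:", totp(shared_secret))
# The server runs the same computation and compares the two values.
```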
IT administrators should pay careful attention to access controls, just as they would for local storage. Groups and roles should be curated carefully to ensure that membership is appropriate and that users only have access to the information they need to perform their jobs. One particular concern with cloud storage services is their ability to create public file sharing URLs that may allow unauthenticated access to a file or directory. Administrators should ensure that users receive training on the various access controls supported by the cloud storage service and conduct periodic audits to verify the appropriateness of permission settings.
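What such an audit looks like depends entirely on the provider's API. As one concrete illustration, the following sketch uses the AWS boto3 SDK to flag Amazon S3 buckets whose ACLs grant access to all users; other storage services expose different (but analogous) sharing and permission APIs, and a production audit would also need to cover object-level sharing links.

```python
# Illustrative audit of public access grants on Amazon S3 buckets using boto3.
# Other cloud storage services expose analogous sharing/permission APIs.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]
    if public_grants:
        print(f"Bucket {name} has public grants: {public_grants}")
```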
Encrypt Sensitive Information
Encryption is the gold standard for data protection. By using specialized mathematical algorithms to encrypt information, users can be certain that anyone lacking access to the decryption key is unable to decipher the data, even if they somehow gain access to it in encrypted form. When it comes to cloud storage services, there are two ways enterprises may leverage encryption to protect information: encryption in transit and encryption at rest.
Encrypting data in transit protects the contents of files as they travel across the internet between the cloud provider and the end user. Strong encryption in transit is straightforward to implement using Transport Layer Security (TLS), the successor to the older Secure Sockets Layer (SSL) protocol. The most common application of SSL/TLS is secure HTTP (HTTPS), which encrypts web communications. When selecting a cloud storage provider, enterprises should ensure that all communications between end users and the provider are secured with TLS.
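From the client side, the practical rule is that every request to the storage provider should go over HTTPS with certificate verification left enabled. A minimal sketch, assuming a hypothetical API endpoint:

```python
# Minimal sketch: always talk to the storage provider over HTTPS and keep
# certificate verification enabled (it is the default in requests).
import requests

ENDPOINT = "https://storage.example-cloud.com/api/files"   # placeholder URL

def fetch_file_list(token: str):
    if not ENDPOINT.startswith("https://"):
        raise ValueError("Refusing to send credentials over an unencrypted channel")
    # verify=True (the default) makes requests validate the server certificate;
    # never disable it to "work around" certificate errors.
    response = requests.get(
        ENDPOINT,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```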
Encryption may also be used to protect data at rest, while it is stored on the cloud provider's servers. This protects information from prying eyes if an unauthorized individual gains access to those servers. It is important, however, to verify where the encryption keys are stored and who has access to them. In many cloud storage solutions, the architecture requires that the cloud provider hold the encryption keys, which means the provider's employees could theoretically decrypt your information. Solutions that let enterprises manage their own keys remove this exposure, although convenience features that depend on the provider being able to read the data (such as server-side search or preview) may no longer work.
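One way enterprises retain control of keys is to encrypt files client-side before they ever reach the provider, so the provider stores only ciphertext. The sketch below uses the Fernet construction from the widely used cryptography package; the file names are placeholders, and real deployments would also need key storage, rotation, and backup, which are omitted here.

```python
# Minimal sketch of client-side encryption before upload, using the
# `cryptography` package's Fernet construction (AES in CBC mode with HMAC).
# Key management is omitted; file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key under enterprise control
fernet = Fernet(key)

with open("quarterly-report.xlsx", "rb") as f:      # placeholder file
    ciphertext = fernet.encrypt(f.read())

with open("quarterly-report.xlsx.enc", "wb") as f:
    f.write(ciphertext)              # only this encrypted blob is uploaded

# Later, after downloading the blob, decrypt it locally with the same key.
with open("quarterly-report.xlsx.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```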
Encryption is a tricky topic, and security professionals should scrutinize the technical details of any potential vendor's implementation. Look for industry-standard protocols (such as TLS and HTTPS) and algorithms (such as AES and RSA), and pay careful attention to how keys are stored and managed. If a vendor refuses to disclose the details of its encryption implementation, claiming they are proprietary, consider it a red flag.
Cloud data storage solutions provide tremendous benefits to end users and enterprises. They offer flexible access to information in a cost effective manner. Organizations considering the implementation of cloud storage, however, must ensure that they have appropriate policies and controls in place. These measures should aim to implement the same level of protection expected from local solutions in the cloud.