Redundant links are always welcome in a switched topology because they increase the network's availability and robustness. Viewed from a Layer 2 perspective, however, redundant links can create Layer 2 loops. This is because the TTL (Time To Live) field lives in the Layer 3 header: the TTL value is decremented only when a packet passes through a router, so there is no mechanism to "kill" a frame that is stuck in a Layer 2 loop. The result can be a broadcast storm. Fortunately, Spanning Tree Protocol (STP) allows you to keep redundant links while maintaining a loop-free topology, thus preventing the potential for broadcast storms.
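A toy Python sketch of the difference (purely illustrative, not vendor or protocol code): a routed packet is dropped once its TTL reaches zero, while a frame circling a Layer 2 loop is only stopped by an external mechanism such as STP blocking a redundant port.

```python
# Illustrative only: why a routed packet eventually dies but a looping frame does not.

def forward_routed_packet(ttl: int) -> int:
    """Every Layer 3 hop decrements TTL; the packet is dropped when TTL hits 0."""
    hops = 0
    while ttl > 0:
        ttl -= 1
        hops += 1
    return hops                      # bounded by the initial TTL

def forward_switched_frame(stp_blocks_loop: bool, safety_cap: int = 1_000_000) -> int:
    """Switches decrement nothing; only STP (blocking a port) breaks the loop."""
    hops = 0
    while not stp_blocks_loop and hops < safety_cap:
        hops += 1                    # frame keeps circling the redundant links
    return hops

print(forward_routed_packet(ttl=64))                   # 64: dropped after the TTL expires
print(forward_switched_frame(stp_blocks_loop=False))   # hits the safety cap: a broadcast storm
print(forward_switched_frame(stp_blocks_loop=True))    # 0: STP blocked the redundant path
```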
How to Write Clean Code

Even bad code can function. But if code isn't clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. Within this article we will cover the key concepts and points around writing clean code, referenced from the amazing book The Clean Coder: A Code of Conduct for Professional Programmers by Robert Cecil Martin (aka Uncle Bob).

How should we name our objects, functions, variables and methods?

- Names should be chosen well and thoughtfully.
- Communicate your intent. If you require a comment to describe your name, you have chosen a bad name.
- Use pronounceable names. People will need to talk about your code, so make it easy for them.
- Code should read like well-written English: write each line of code so it reads like a sentence. The names you choose should lend themselves to this:
  - Classes are named with a noun.
  - Variables are named with a noun, e.g. user_account_id = 124816
  - Methods/functions are named with a verb.
  - Predicates should return a Boolean, e.g. if x == 2:
- Ensure you stick to the rule of naming within the realms of your scope:
  - Variable names should be extremely short if their scope is extremely small.
  - Variable names should be long if they live in a big, long scope, such as a global variable.
  - Function names should be short if they have a long scope.
  - Function names should be long if they have a short scope.
  - Class names should be short, as in the realms of Python they can be considered public.

You mentioned nouns and verbs - can you explain what they are?

- Noun: a word that refers to a person, place, thing, event, substance or quality, e.g. 'nurse', 'cat', 'party', 'oil' and 'poverty'.
- Verb: a word or phrase that describes an action, condition or experience, e.g. 'run', 'look' and 'feel'.

How should I construct my functions and how should they operate?

As a general rule of thumb, functions should:

- be small, very small
- do just ONE thing

How do you ensure your function is doing ONE thing ONLY? Ensure you can extract no further functions from the original function. Once you can extract no more, you can be sure the function is doing one thing, and one thing only.

- You should not pass Boolean or None types into your function.
- No more than 3 arguments should be passed into your function.
- It is better to raise an exception than to return an error code.
- Custom exceptions are better than returning error codes.
- Custom exceptions should be scoped to the class.

Command Query Separation (CQS)

This is one of my favorite disciplines within functions, as it is a great way to create clean functions that only do ONE thing (see the sketch after this list):

- Functions that change state should not return values, but can throw an exception.
- Functions that return values should not change state.
- It is bad for a single function to know the entire structure of the system.
- Each function should only have limited knowledge of the system.
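A short, hypothetical Python sketch pulling these naming and CQS ideas together (the class, method and exception names are invented for illustration):

```python
class InvalidAmountError(Exception):
    """Custom exception scoped to the account code, instead of an error code."""


class UserAccount:                                  # class named with a noun
    def __init__(self, user_account_id: int, balance: float = 0.0):
        self.user_account_id = user_account_id      # variable named with a noun
        self.balance = balance

    def is_overdrawn(self) -> bool:                 # predicate: returns a Boolean
        return self.balance < 0

    def deposit(self, amount: float) -> None:       # command: changes state, returns nothing
        if amount <= 0:
            raise InvalidAmountError(amount)        # raise, don't return an error code
        self.balance += amount

    def current_balance(self) -> float:             # query: returns a value, changes nothing
        return self.balance
```

Each method does one thing, commands and queries stay separate, and a reader can say every name out loud.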
If you want your code to be clean, then TDD SHOULD be adopted. Let's look at why, along with the laws of TDD and the TDD process.

The benefits of TDD are:

- promotes the creation of more efficient code
- improves code quality
- ensures the minimum amount of code is used
- prevents code regression

There are 3 laws of TDD:

- Write no production code until you have created a failing unit test.
- Write only enough of a test to demonstrate a failure.
- Write only enough production code to pass the test.

The process for TDD is (a minimal worked example appears at the end of this article):

- RED - Create a test and ensure it fails.
- GREEN - Write production code to ensure the test passes.
- REFACTOR - Refactor your code and ensure the tests still pass.

Good architectures are NOT composed of tools and frameworks. Good architectures allow you to defer the decisions about tools and frameworks, such as UIs and databases. But how is this achieved? It is achieved by building an architecture that decouples you from them - by building your application not on your software environment, but based on your use cases. Decoupling your application makes changes far easier than they would be in a single monolithic application. Decoupling also allows clear business decisions to be made about each part of your application, i.e. the time/money spent on the UI, the API and the use case, i.e. the core application.

What is a Use Case?

A use case is a list of actions or event steps to achieve a goal. It is important to note that within our use case no mention is made of databases or UIs. Below is an example:

-- Create Order --

- Data: Customer-id, Customer-contact-id, Payment-information, Shipment-mechanism
- Primary Course: Order clerk issues Create Order command with the above data. System validates all data. System creates the order and determines the order-id. System delivers the order-id to the clerk.
- Exception Course (Validation Error): System delivers an error message to the clerk.

As more use cases are defined, partitioning is required to allow clear separation within your system. This design is also known as EBI (Entity, Boundary, and Interactor).

- Business Objects (Entities) - Entities are application-independent business rules. The methods within the object should NOT be specific to any one system. An example would be a Product object.
- Controllers (Interactors) - Use cases are application specific. Use cases are implemented by interactor objects; it is the goal of the interactor to know how to call the entities to reach the goal of the use case. For our example use case this would be CreateOrder.
- User Interfaces (Boundaries) - A boundary is the interface that translates information from the outside into the format the application uses, as well as translating it back when the information is going out.

Clean Code - Robert Cecil Martin
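To make the RED/GREEN/REFACTOR cycle described above concrete, here is a minimal, hypothetical example using Python's built-in unittest module:

```python
import unittest

# RED: write the failing test first -- before add() exists, running this file
# fails, which satisfies the first law of TDD.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# GREEN: write only enough production code to make the test pass.
def add(a, b):
    return a + b

# REFACTOR: improve names and structure while keeping the test green.

if __name__ == "__main__":
    unittest.main()
```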
How Google Works: Reducing Complexity

By David F. Carr | Posted 2006-07-06

For all the razzle-dazzle surrounding Google, the company must still work through common business problems such as reporting revenue and tracking projects. But it sometimes addresses those needs in unconventional—yet highly efficient—ways. Google's distributed storage architecture for data is combined with distributed execution of the software that parses and analyzes it. To keep software developers from spending too much time on the arcana of distributed programming, Google invented MapReduce as a way of simplifying the process. According to a 2004 Google Labs paper, without MapReduce the company found "the issues of how to parallelize the computation, distribute the data and handle failures" tended to obscure the simplest computation "with large amounts of complex code."

Much as the GFS offers an interface for storage across multiple servers, MapReduce takes programming instructions and assigns them to be executed in parallel on many computers. It breaks calculations into two parts—a first stage, which produces a set of intermediate results, and a second, which computes a final answer. The concept comes from functional programming languages such as Lisp (Google's version is implemented in C++, with interfaces to Java and Python).

A typical first-week training assignment for a new programmer hired by Google is to write a software routine that uses MapReduce to count all occurrences of words in a set of Web documents. In that case, the "map" would involve tallying all occurrences of each word on each page—not bothering to add them at this stage, just ticking off records for each one like hash marks on a sheet of scratch paper. The programmer would then write a reduce function to do the math—in this case, taking the scratch paper data, the intermediate results, and producing a count for the number of times each word occurs on each page. One example, from a Google developer presentation, shows how the phrase "to be or not to be" would move through this process. While this might seem trivial, it's the kind of calculation Google performs ad infinitum. More important, the general technique can be applied to many statistical analysis problems. In principle, it could be applied to other data mining problems that might exist within your company, such as searching for recurring categories of complaints in warranty claims against your products. But it's particularly key for Google, which invests heavily in a statistical style of computing, not just for search but for solving other problems like automatic translation between human languages such as English and Arabic (using common patterns drawn from existing translations of words and phrases to divine the rules for producing new translations).

MapReduce includes its own middleware—server software that automatically breaks computing jobs apart and puts them back together. This is similar to the way a Java programmer relies on the Java Virtual Machine to handle memory management, in contrast with languages like C++ that make the programmer responsible for manually allocating and releasing computer memory. In the case of MapReduce, the programmer is freed from defining how a computation will be divided among the servers in a Google cluster. Typically, programs incorporating MapReduce load large quantities of data, which are then broken up into pieces of 16 to 64 megabytes.
The MapReduce run-time system creates duplicate copies of each map or reduce function, picks idle worker machines to perform them and tracks the results. Worker machines load their assigned piece of input data, process it into a structure of key-value pairs, and notify the master when the mapped data is ready to be sorted and passed to a reduce function. In this way, the map and reduce functions alternate chewing through the data until all of it has been processed. An answer is then returned to the client application. If something goes wrong along the way, and a worker fails to return the results of its map or reduce calculation, the master reassigns it to another computer. As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Jeff Dean. Among other things, these batch routines analyze the latest Web pages and update Google's indexes.
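To make the word-count example described above concrete, here is a minimal, self-contained Python sketch of the two stages. It mimics the map/reduce structure the article describes but uses no framework, and it aggregates counts across the sample pages rather than per page:

```python
from collections import defaultdict

documents = {
    "page1": "to be or not to be",
    "page2": "to do is to be",
}

def map_words(text):
    """Map stage: emit a (word, 1) record for every occurrence -- the 'hash marks'."""
    return [(word, 1) for word in text.split()]

# Run the map stage and group the intermediate results by key (word).
grouped = defaultdict(list)
for text in documents.values():
    for word, one in map_words(text):
        grouped[word].append(one)

def reduce_counts(word, ones):
    """Reduce stage: do the math -- sum the ticks recorded for each word."""
    return word, sum(ones)

word_counts = dict(reduce_counts(w, ones) for w, ones in grouped.items())
print(word_counts)   # {'to': 4, 'be': 3, 'or': 1, 'not': 1, 'do': 1, 'is': 1}
```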
After the collapse of the Interstate 35W bridge in Minneapolis in 2007, which killed 13 people and injured 144, congressional interest in fixing the nation’s bridges escalated. But efforts have not matched concerns: Since the tragedy, the addition of billions of extra federal dollars specifically for the repair and replacement of deficient and obsolete bridges has failed to come to fruition. In the meantime, several states have begun using structural monitoring – a combination of sensors, wireless communication and cloud computing -- to watch the structural integrity of bridges and potentially reduce infrastructure repair costs, enhance safety, and reduce load postings and detours. Today, 73 percent of U.S. road traffic and 90 percent of truck traffic travels over state-owned bridges. The American Association of State Highway and Transportation Officials estimates that nearly one in four of those bridges needs repairs, and it would cost $140 billion to repair all of them. The average age of America’s bridges is 43 years. Traditionally, visual inspections by technicians have formed the basis of U.S. spending on bridge infrastructure -- but it’s an imperfect system. “Visual inspection is subjective and highly variable,” said Peter Vanderzee, CEO of Alpharetta, Ga.'s Lifespan Technologies, which uses special sensors to monitor bridges for stress. “It is rare that visual inspection findings are challenged, even if they are unusual or questionable, simply because the process is expensive and time consuming, and not required by federal regulation.” But structural monitoring utilizing highly accurate sensing devices can enable objective, precise and timely performance data on the condition of bridges, said Vanderzee. Using technology-based monitoring, a state can install a suite of sensors on a bridge or other structure to monitor its structural integrity either once or on a continual basis. Sensor placement is determined by engineers, and data from the structure is collected and sent to a secure data center for analysis. “Based on work we have done, we estimate 30 to 40 percent of bridges evaluated using advanced condition assessment technologies are in better to much better condition than presumed based on visual inspection,” he said. Sensor Tech in South Carolina The South Carolina Department of Transportation (SC DOT) has been using sensors from Lifespan Technologies to monitor the structural behavior of several bridges in the state for more than five years. “One bridge was scheduled to be replaced, but replacement was some time off,” said Lee Floyd, South Carolina’s state bridge maintenance engineer. “We were concerned about the need for constant monitoring, and we felt this technology would help relieve us of some of that.” Utilizing the sensors, data from the bridge is now sent to SC DOT in real-time, saving the state significant inspection costs. The bridge also has a weight restriction, and the sensors help ensure the restriction is not exceeded on a regular basis. “I can sign in from anywhere, anytime and get real-time information,” said Floyd. “I can also set thresholds, so if a certain sensor goes to a certain level, I’ll get an email or a text message.” A few years after the department deployed the technology on one particular bridge, Floyd said he began to see sensors spike after midnight. 
Some investigation revealed that logging trucks were going over the bridge at night in order to avoid the transport police -- who were then deployed to the bridge and ticketed several overweight trucks. South Carolina also deployed sensors on another bridge in the state that was scheduled to be replaced. When replacement plans ran up against opposition given the bridge’s proximity to a historic plantation, sensors were installed to allow SC DOT to continuously monitor the bridge's condition to ensure safety until the replacement can be completed. “The technology doesn’t just help us in avoiding or adjusting restrictions, or avoiding replacement,” said Floyd. “There is a benefit also in postponing replacement because that allows you to better utilize the cash flow of your funds to other bridges. So it really is a combination of all three of those things.” SC DOT also installed sensors on the Arthur Ravenel Jr. Bridge, a cable-stayed bridge over the Cooper River in South Carolina that connects downtown Charleston to Mount Pleasant. The bridge has a main span of 1,546 feet, the third longest among cable-stayed bridges in the Western Hemisphere. Sensors will allow SC DOT to measure the day-to-day performance of the bridge, as well as its performance during high-wind events (the span is designed to endure wind gusts in excess of 300 mph). Overall, Floyd said he would like to eventually expand use of the technology in South Carolina. “There’s a lot of viability in the technology,” he said. “You still have to do your hands-on and visual inspections, but I see this as a complementary tool to help us make better decisions.” Colorado's Yearlong Bridge Sensor Pilot In April, the Colorado Department of Transportation (CDOT) installed sensors under the Williams Canyon Bridge on U.S. 24, allowing CDOT to monitor the structure’s movements from its Denver headquarters. It is the first bridge in Colorado to utilize the technology. “We’re in the testing phase at this point, but it’s important to find out how well it works in order to better manage our bridge infrastructure,” said CDOT staff bridge manager Josh Laipply. “If it proves successful, we’ll install these sensors on other structures throughout the state, allowing us to monitor a variety of characteristics, such as load capacity and bridge movement, which will enhance everything that we’re already doing as part of our inspection program.” Parsons engineering firm is assisting CDOT with the installation of the system under a $150,000 contract. “We selected a load restricted bridge that was relatively complex for this pilot project,” said Mark Nord, CDOT bridge asset management engineer, who adds that they plan to monitor the bridge for a year to see if the technology adds value for the state. As of press time, an initial load test had been completed, but the analysis of the results was not yet available. “Long-term monitoring will allow us to see what loads cross the bridge, how the structure behaves under live loads, and how the bridge is performing,” he said. “The load test will also help us validate the rating. If we get an improvement in rating, that means there are less restrictions on permitted loads that can cross it. And if the bridge is performing like we expect or even better than we expect, then it may delay replacement or change the nature of a rehabilitation project. The idea is to see if it can help us save money and use the funds we have more strategically.” Nord said if the technology works well, it may be expanded in the state. 
“I don’t picture it on every bridge we own,” he said. “It would probably be targeted primarily to bridges that we have questions about.” To date, technology-based structural monitoring has also been utilized by transportation departments in Pennsylvania, Massachusetts and New York, as well as the Canadian Pacific Railroad, according to Vanderzee.
NASA engineers updated the software for a robotic Mars rover, correcting a more than two-month-old computer glitch while the robot hurtled through space on its way to Mars. Late in November, NASA launched its $2.5 billion Mars Science Laboratory. Dubbed Curiosity, the SUV-sized super rover is on an eight-month journey to Mars with a mission to help scientists learn if life ever existed on the Red Planet. However, a problem caused a computer reset on the rover Nov. 29, three days after launch, NASA reported last week. The problem was due to a cache access error in the memory management unit of the rover's computer processor, a RAD750 from BAE Systems. "Good detective work on understanding why the reset occurred has yielded a way to prevent it from occurring again," said Mars Science Laboratory Deputy Project Manager Richard Cook, in a statement. "The successful resolution of this problem was the outcome of productive teamwork by engineers at the computer manufacturer and [NASA's Jet Propulsion Laboratory]." Guy Webster, a spokesman for the JPL, told Computerworld that because of the processor glitch, the rover's ground team was unable to use the craft's star scanner, which is designed for celestial navigation. That technology was not in use for several months, and NASA engineers had to guide the rover through one major trajectory adjustment using alternate means, according to Webster. The fix, which was uploaded to the rover as it traveled through space, changed the configuration of unused data-holding locations, called registers. NASA reported that engineers confirmed this week that the fix was successful and the star scanner is working again. Curiosity, equipped with 10 science instruments, is expected to land on Mars in August. The super rover is set to join the rover Opportunity, which has been working on Mars for more than six years. Opportunity has been working alone since a second rover, Spirit, stopped functioning last year. Curiosity will collect soil and rock samples, and analyze them for evidence that the area has, or ever had, environmental conditions favorable to microbial life. Curiosity weighs one ton and is twice as long and five times heavier than its predecessors. This story, "NASA fixes computer glitch on robot traveling to Mars" was originally published by Computerworld.
Over the course of the nearly forty years I have been working on database systems, there have been many debates and arguments about which database technology to use for any given application. These arguments have become heated, especially when a new database technology appears that claims to be superior to anything that came before. When relational systems were first introduced, the hierarchical (IMS) and network (IDMS) database system camps argued that relational systems were inferior and could not provide good performance. Over time this argument proved false, and relational products now provide the database management underpinnings for a vast number of operational and analytical applications. Relational database products have survived similar battles with object-oriented database technology and multidimensional database systems. Although relational technology survived these various skirmishes, the debates that took place did demonstrate that one size does not fit all and that some applications can benefit by using an alternative approach. The debates also often led to relational product enhancements that incorporated features (e.g., complex data types, XML and XQuery support) from competitive approaches. Some experts argue that many of these features have corrupted the purity and simplicity of the relational model.

Just when I thought the main relational products had become a commodity, several new technologies appeared that caused the debates to start again. Over the course of the next few newsletters, I want to review these new technologies and discuss the pros and cons of each of them. This time I want to look at MapReduce, which Michael Stonebraker (together with David DeWitt), one of the original relational database technology researchers, recently described as "a giant step backwards."

What is MapReduce?

MapReduce has been popularized by Google, which uses it to process many petabytes of data every day. A landmark paper by Jeffrey Dean and Sanjay Ghemawat of Google describes it as follows: "MapReduce is a programming model and an associated implementation for processing and generating large data sets…. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system."

Michael Stonebraker's comments on MapReduce explain MapReduce in more detail: "The basic idea of MapReduce is straightforward. It consists of two programs that the user writes called map and reduce plus a framework for executing a possibly large number of instances of each program on a compute cluster. The map program reads a set of records from an input file, does any desired filtering and/or transformations, and then outputs a set of records of the form (key, data). As the map program produces output records, a "split" function partitions the records into M disjoint buckets by applying a function to the key of each output record. This split function is typically a hash function, though any deterministic function will suffice. When a bucket fills, it is written to disk. The map program terminates with M output files, one for each bucket.
After being collected by the map-reduce framework, the input records to a reduce instance are grouped on their keys (by sorting or hashing) and fed to the reduce program. Like the map program, the reduce program is an arbitrary computation in a general-purpose language. Hence, it can do anything it wants with its records. For example, it might compute some additional function over other data fields in the record. Each reduce instance can write records to an output file, which forms part of the answer to a MapReduce computation."

The key/value pairs produced by the map program can contain any type of arbitrary data in the value field. Google, for example, uses this approach to index large volumes of unstructured data. Although Google uses its own version of MapReduce, there is also an open source version called Hadoop from the Apache project. IBM and Google have announced a major initiative to use Hadoop to support university courses in distributed computer programming.

MapReduce is not a new concept. It is based on the list processing capabilities in declarative functional programming languages such as LISP (LISt Processing). Today's systems implement MapReduce in imperative languages such as Java, C++, Python, Perl, Ruby, etc. The key/value pairs used in MapReduce processing may be stored in a file or a database system. Google uses its BigTable database system (which is built on top of the Google distributed file system, GFS) to manage the data. Key/value pair databases have existed for many years. For example, Berkeley DB is an embedded database system that stores data in a key/value pair data structure. It was originally developed in the 1980s at Berkeley, but it is now owned by Oracle. Berkeley DB can also act as a back end storage engine for the MySQL open source relational DBMS.

Why the Controversy?

Given that MapReduce is not a database model, but a programming model for building powerful distributed and parallel processing applications, why is there such a controversy with respect to relational systems? To answer this question we need to examine the relational model of data in more detail. In a relational model, data is conceptually stored in a set of relations or tables. These tables are manipulated using relational operators such as selection, projection and join. Today, these relational operators are implemented primarily using the structured query language (SQL). How the table data is physically stored and managed in a relational database management system (RDBMS) is up to the vendor. The mapping of relational operators (SQL statements) to the back-end storage engine is handled by the relational optimizer, whose job it is to find the optimal way of physically accessing the data. This physical data independence is a key benefit of the relational model. When using SQL, users define what data they want, not how it is to be accessed. Techniques such as indexing and parallel and distributed computing are handled by the underlying RDBMS. SQL is a declarative language, not an imperative/procedural language like Java and C++, which require a detailed description of any data access algorithms that need to be run. Of course, SQL statements can be embedded in procedural languages. The reverse is also true; SQL can invoke stored procedures and user-defined functions written in a procedural language.
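The declarative/imperative distinction can be shown in a few lines of Python using the standard library's sqlite3 module (the table and values here are invented for illustration):

```python
import sqlite3

# Declarative: describe WHAT you want; the engine decides how to access the data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 30.0), ("bob", 15.0), ("alice", 20.0)])
totals_sql = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer").fetchall()

# Imperative/procedural: spell out HOW to walk the rows and aggregate them.
rows = [("alice", 30.0), ("bob", 15.0), ("alice", 20.0)]
totals_proc = {}
for customer, amount in rows:
    totals_proc[customer] = totals_proc.get(customer, 0.0) + amount

print(totals_sql)    # e.g. [('alice', 50.0), ('bob', 15.0)]
print(totals_proc)   # {'alice': 50.0, 'bob': 15.0}
```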
The concern of Michael Stonebraker is that the use and teaching of MapReduce will take the industry back to the pre-relational times when there was a lack of formalized database schemas and application data independence. MapReduce advocates argue that much of the data processed by MapReduce involves unstructured data that lacks a data schema. They also argue that today's programmers vastly outnumber SQL experts, don't know or don't want to know SQL, find MapReduce much simpler, and prefer to access and analyze data using their own procedural programming. Both camps are correct and both approaches have their benefits and uses. As I said at the beginning of this article, one size does not fit all. The challenge is to understand where each approach fits.

Data Analysis Processing Modes

When accessing and analyzing data there are three types of processing that need to be considered: batch processing of static data, interactive processing of static data, and dynamic processing of in-flight data. A business intelligence environment, for example, involves the SQL processing of static data in a data warehouse. This can be done in batch mode (production reporting) or interactively (on-demand analytical processing). SQL may also be used to analyze and transform data as it is captured from operational systems and loaded into a data warehouse. MapReduce is used to process large amounts of data in batch mode. It is particularly useful for processing unstructured data or sparse data involving many dimensions. It is not suited to interactive processing. It would be very useful, for example, for transforming large amounts of unstructured data for loading into a data warehouse, or for data mining. Neither MapReduce nor SQL is particularly suited to the dynamic processing of in-flight data such as event data. This is why we are seeing extensions to SQL (such as StreamSQL) and new technologies such as stream and complex event processing to handle this need. MapReduce is, however, useful for the filtering and transforming of large event files such as web logs. The next article in this series will look at stream processing in more detail.

MapReduce and Relational Coexistence and Integration

Several analytical RDBMS vendors (Vertica, Greenplum, Aster Data Systems) are offering solutions that combine MapReduce (MR) and relational technology. Vertica's strategy is one of coexistence. With Vertica, MR programs continue to run in their normal operating environment, but instead of routing the output to the MR system, the Reduce program loads output data into the Vertica relational DBMS. The Vertica support works in conjunction with Amazon Elastic MapReduce (EMR). EMR is a web service that provides a hosted Hadoop framework running on the infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). This link shows how to use EMR to process and load a data set from S3 into the Vertica RDBMS running on Amazon EC2. The Vertica solution could be used, for example, to do batch ETL (extract, transform, load) processing where the input is a very large data set (a set of web logs, for example) and the output is loaded into a data warehouse managed by Vertica. Aster and Greenplum have a strategy of integrating the MR processing framework into the RDBMS to take advantage of the benefits of RDBMS technology such as parallel computing, scalability, backup and recovery, and so forth. Greenplum allows developers to write MR programs in the Python and Perl scripting languages.
This support enables MR scripts to use open source features such as text analysis and statistical toolkits. These scripts can access flat files and web pages, and can use SQL to access Greenplum relational tables. Source tables can be read by Map scripts and target tables can be created by Reduce scripts. This architecture allows developers to mix and match data sources and programming styles. It also allows the building of a data warehouse using both ETL (data is transformed before it is loaded into the data warehousing environment) and ELT (data is transformed after it is loaded into the data warehousing environment) approaches. Greenplum MR scripts can also be used as virtual tables by SQL statements – the MR job is run on the fly as part of the SQL query processing. Greenplum's RDBMS engine executes all the code – SQL, Map scripts, Reduce scripts – on the same cluster of machines where the Greenplum database is stored. For more information, see the Greenplum white paper.

Whereas Greenplum tends to emphasize the use of SQL in MR programs, Aster takes the opposite approach of focusing on the use of MR processing capabilities in SQL-based programs. Aster allows MR user-defined functions to be invoked using SQL. These functions can be written in languages such as Python, Perl, Java, C++ and Microsoft .NET (C#, F#, Visual Basic), and can use SQL data manipulation and data definition statements. The Linux .NET support is provided by the Mono open source product. These functions can also read and write data from flat files. Like Greenplum, Aster MR capabilities can be used for loading a data warehouse using both ETL and ELT approaches. Aster, however, tends to emphasize the power of the ELT approach. For more information, see the Aster white paper.

Both Greenplum and Aster allow the combining of relational data with MapReduce-style data. This is particularly useful for batch data transformation and integration applications, and intensive data mining operations. The approach used will depend on the application and the type of developer. In general, programmers may prefer the Greenplum approach, whereas SQL experts may prefer the Aster approach.

What About Performance?

MapReduce supporters often state that MapReduce provides superior performance to relational systems. This obviously depends on the workload. Andrew Pavlo of Brown University, together with Michael Stonebraker, David DeWitt and several others, recently published a paper comparing the performance of two relational DBMSs (Vertica and an undisclosed row-oriented DBMS) with Hadoop MapReduce. The paper concluded that, "In general, the SQL DBMSs were significantly faster and required less code to implement each task, but took longer to tune and load the data." It also acknowledged that, "In our opinion there is a lot to learn from both kinds of systems" and "…the APIs of the two classes of systems are clearly moving toward each other."

MapReduce has achieved significant visibility because of its use by Google and its ability to process large amounts of unstructured web data, and also because of the heated debate between the advocates of MapReduce and relational database technology experts. Two things are clear: programmers like the simplicity of MapReduce, and there is a clear industry direction toward supporting MR capabilities in traditional DBMS systems. MapReduce is particularly attractive for the batch processing of large files of unstructured data for use in a business intelligence system.
My personal opinion is that if MR programs are being used to filter and transform unstructured data (documents, web pages, web logs, event files) for loading into a data warehouse, then I prefer an ETL approach to an ELT approach. This is because the ELT approach usually involves storing unstructured data in relational tables and manipulating it using SQL. I have seen many examples of these types of database applications, and this approach is guaranteed to give database designers heartburn. At the same time, I accept that some organizations would prefer a single data management framework based on an RDBMS. This is one of the reasons why DBMS vendors added support for XML data and XQuery to their RDBMS products. My concern is that relational products and SQL are becoming overly complex, especially for application developers.
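As a toy illustration of the ETL style preferred above -- filtering and transforming raw web-log lines before anything reaches the warehouse -- a sketch might look like the following (the log format and field names are invented for this example):

```python
import csv
import io

# Hypothetical raw web-log lines (the unstructured input to Extract).
raw_log = """\
203.0.113.7 2009-06-01T10:46:11Z GET /products/42 200
203.0.113.9 2009-06-01T10:46:12Z GET /healthcheck 200
203.0.113.7 2009-06-01T10:46:15Z GET /products/42 500
"""

def transform(line):
    """Transform step: parse one log line, drop noise, and reshape the rest."""
    ip, timestamp, method, path, status = line.split()
    if path == "/healthcheck":              # filter before loading
        return None
    return {"ip": ip, "timestamp": timestamp, "path": path, "status": int(status)}

# Extract + Transform happen outside the warehouse...
cleaned = [row for row in (transform(l) for l in raw_log.splitlines()) if row]

# ...and only the cleaned, structured records are Loaded (here written as CSV
# that a warehouse bulk loader could ingest).
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["ip", "timestamp", "path", "status"])
writer.writeheader()
writer.writerows(cleaned)
print(out.getvalue())
```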
Role-Playing: How Truly Effective Is It?

Role-playing is thought to be an excellent technique to enhance people's soft skills; however, its effectiveness is heavily dependent upon the personality traits of an instructor's audience. In fact, role-playing works best with naturally outgoing individuals because it requires students to get up in front of the classroom and act out scenarios. And all too often, people are simply too intimidated or shy to do that in front of their peers. Therefore, it is critical that trainers perform some kind of initial personality and skill evaluation of their students in order to determine whether or not role-playing would be a successful technique to employ. If role-playing is determined not to be the best technique to equip learners with the soft skills to successfully perform their job roles, other techniques must be applied in the classroom. Most often, trainers will take it upon themselves to educate their students through lecture or by example. Trainers are usually vivacious by nature because it is their job to engage their students and, as a result, learners may simply learn by observation. Creating small teams is also considered a good technique to equip people with soft skills; however, this may still be a difficult situation for extremely introverted individuals. But even so, such students need to learn how to relate to others in order to be successful managers or leaders in their jobs. For extremely introverted individuals, it may be beneficial for trainers to spend some one-on-one time with them. In the end, because it is becoming all too common for technically proficient IT professionals to be promoted to managerial positions—whether or not they have the skill sets required—trainers will have to revamp the way in which they teach soft skills and most likely grant more time throughout their classes as well.
Sandia Develops Cognitive Machines

ALBUQUERQUE, N.M. -- A new "smart" machine that could fundamentally change how people interact with computers is being tested at the Department of Energy's Sandia National Laboratories. For the past five years, a team led by Sandia cognitive psychologist Chris Forsythe has been developing cognitive machines that accurately infer user intent, remember experiences with users, and allow users to call on simulated experts to help them analyze situations and make decisions. The initial goal of the work was to create a "synthetic human" -- a software program/computer that could think like a person.

"The benefits from this effort are expected to include augmenting human effectiveness and embedding these cognitive models into systems like robots and vehicles for better human-hardware interactions," said John Wagner, manager of Sandia's Computational Initiatives Department. "We expect to model, simulate and analyze humans and societies of humans for Department of Energy, military and national security applications."

Massive computers that could process large amounts of data were available, said Forsythe. "But software that could realistically model how people think and make decisions was missing," he said. There were two significant problems with previous modeling software. First, the software did not relate to how people actually make decisions -- it followed logical processes, which people don't necessarily do. People make decisions based, in part, on experiences and associative knowledge. Second, software models of human cognition did not take into account factors such as emotions, stress and fatigue. In an early project, Forsythe developed the framework for a computer program that accounted for both sets of factors. Follow-up projects developed methodologies that allowed the knowledge of a specific expert to be captured in computer models and provided synthetic humans with episodic memory -- memory of experiences -- so they might apply their knowledge of specific experiences to solving problems in a manner that closely parallels what people do. "Systems using this technology are tailored to a specific user, including the user's unique knowledge and understanding of the task," said Forsythe.

Work on cognitive machines started in 2002 with a contract from the Defense Advanced Research Projects Agency (DARPA) to develop a machine that can infer an operator's cognitive processes. This capability provides the potential for systems that augment the cognitive capacities of an operator through "discrepancy detection." In discrepancy detection, the machine uses an operator's cognitive model to monitor its own state, detecting discrepancies between the machine's state and the operator's behavior. Early this year, work began on Sandia's Next Generation Intelligent Systems Grand Challenge project. "The goal of this Grand Challenge is to significantly improve the human capability to understand and solve national security problems, given the exponential growth of information and very complex environments," said Larry Ellis, the principal investigator. "It's entirely possible," said Sandia's Forsythe, "that these cognitive machines could be incorporated into most computer systems produced within 10 years." 
-- Sandia National Laboratories

IBM Delivers World's Most Powerful Linux Supercomputer

TOKYO -- Japan's largest national research organization announced at the end of July that it ordered an IBM eServer Linux supercomputer that, when completed, will deliver more than 11 trillion calculations per second, making it the world's most powerful Linux-based supercomputer. It is expected to be more powerful than the Linux cluster at Lawrence Livermore National Laboratory in Livermore, Calif., which is currently ranked the third most powerful supercomputer in the world, according to the independent TOP500 List of Supercomputers. The plan is to integrate the supercomputer with other non-Linux systems to form a massive, distributed computing grid -- enabling collaboration between corporations, academia and government -- to support various research including grid technologies, life sciences, bioinformatics and nanotechnology.

The system -- with a total of 2,636 processors -- will include 1,058 eServer 325 systems. The powerful new supercomputer will help Japan's National Institute of Advanced Industrial Science and Technology (AIST), known worldwide for its leading research in grid technologies, to accelerate research using grid technology for a wide variety of projects. These projects include the search for new materials to be used for superconductors and fuel cell batteries, and the search for new compounds that could be the basis for a cure for various malignant diseases. Each new IBM eServer 325 system delivered to AIST contains two powerful AMD Opteron processors in a 1.75" rack mounted form factor. AIST will run SuSE Linux Enterprise Server 8 on the supercomputer. The grid will incorporate the Globus Toolkit 3.0 and the Open Grid Services Infrastructure. The grid is also planned to link heterogeneous and geographically dispersed computing resources, including servers, storage and data, allowing researchers to collaborate. The eServer 325 systems are designed to run either Linux or Windows operating systems, and the 325 can run both 32-bit and 64-bit applications simultaneously.

-- IBM

FDA Approves Stair-Climbing Wheelchair

WASHINGTON, D.C. -- The U.S. Food and Drug Administration approved a battery-powered wheelchair in August that relies on a computerized system of sensors, gyroscopes and electric motors, which allow indoor and outdoor use on stairs, and on level and uneven surfaces. The FDA expedited review of the product -- the Independence iBOT 3000 Mobility System -- because it has the potential to benefit people with disabilities. An estimated 2 million people in the United States use wheelchairs. FDA Commissioner Mark B. McClellan said, "It can help improve the quality of life of many people who use wheelchairs by enabling them to manage stairs, reach high shelves and hold eye-level conversations." Four-wheel drive enables users to traverse rough terrain, travel over gravel or sand, go up slopes and climb 4-inch curbs. For use on stairs, two sets of drive wheels rotate up and over each other to climb up or down, one step at a time. Because of its unique balancing mechanism, the wheelchair remains stable and the seat stays level during all maneuvers. The user can push a button to operate the wheelchair in several different ways. To climb up stairs, the occupant backs up to the first step, holds onto the stair railing and shifts his weight over the rear wheels, which causes the chair to begin rotating the front wheels over the rear wheels and then down to the first step. 
As the user shifts his weight backward and forward, the chair senses this and adjusts the wheel position to keep his center of gravity under the wheels. The chair ascends stairs backward and descends forward (the user always faces down the stairs). To reach high shelves or hold eye-level conversations with people who are standing, the occupant shifts his weight over the back wheels, so the iBOT lifts one pair of wheels off the ground and balances on the remaining pair. The user then presses a button to lift the seat higher. People must weigh no more than 250 pounds and must have the use of at least one arm to operate the chair. They also must have good judgment skills to discern which obstacles, slopes and stairs to attempt, in order to prevent serious falls. Users must be capable of some exertion when climbing stairs in the wheelchair by themselves. However, for users who cannot tolerate such exertion, there is a feature that allows someone else to hold onto and tilt the chair's back to allow it to climb up or down stairs. Physicians and other health professionals must undergo special training to prescribe the iBOT. The chair must be calibrated to the patient's weight, and patients must be trained in its use and pass physical, cognitive and perception tests to prove they can operate it safely.

The FDA approved the wheelchair based on a review of extensive bench testing of the product conducted by the manufacturer -- Independence Technology of Warren, N.J. -- and on a clinical study of its safety and effectiveness. Approval was also based on the recommendation of the Orthopedic and Rehabilitation Devices Panel of the FDA's Medical Devices Advisory Committee. The firm performed a wide range of tests on the chair, including mechanical, electrical, performance, environmental and software testing. In the pivotal clinical study, 18 patients -- mostly people with spinal cord injury -- were trained to use the iBOT. They test-drove it for two weeks to allow researchers to check maneuverability, falls and other problems compared to those encountered with their regular wheelchairs. They also tested it going up hills, over bumpy sidewalks, crossing curbs, reaching shelves and climbing stairs. Twelve patients could climb up and down stairs alone with the iBOT and the other six patients used an assistant. When these same 18 patients used their regular wheelchairs, one patient could "bump" down stairs, but no one could go up just one step. During the pivotal study, three patients fell out of the iBOT and two fell out of their own wheelchairs. None of the falls occurred on stairs. Two patients experienced bruises while using the iBOT. As a condition for approval, the manufacturer agreed to provide periodic reports to the FDA to document the chair's usage, functioning and any patient injuries. The manufacturer also said the iBOT will be available throughout the next few months in strategically located clinics across the country.

-- The U.S. Food and Drug Administration
As the calendar counts down to the first exascale supercomputer, efforts to resolve the steep technological challenges are increasing in number and urgency. Among the many obstacles inhibiting extreme-scale computing platforms, resilience is one of the most significant. As systems approach billion-way parallelism, the proliferation of errors at current rates just won't do. In recognition of the severity of this challenge, the federal government is seeking proposals for basic research that addresses the resilience challenges of extreme-scale computing platforms. On July 28, 2014, the Office of Advanced Scientific Computing Research (ASCR) in the Office of Science announced a funding opportunity under the banner of "Resilience for Extreme Scale Supercomputing Systems." The program aims to spur research into fault and error mitigation so that exascale applications can run efficiently to completion, generating correct results in a timely manner.

"The next-generation of scientific discovery will be enabled by research developments that can effectively harness significant or disruptive advances in computing technology," states the official summary. "Applications running on extreme scale computing systems will generate results with orders of magnitude higher resolution and fidelity, achieving a time-to-solution significantly shorter than possible with today's high performance computing platforms. However, indications are that these new systems will experience hard and soft errors with increasing frequency, necessitating research to develop new approaches to resilience that enable applications to run efficiently to completion in a timely manner and achieve correct results."

The authors of the request estimate that at least twenty percent of the computing capacity in large-scale computing systems is wasted due to failures and recoveries. As systems increase in size and complexity, even more capacity will be lost unless new targeted approaches are developed. The DOE is specifically looking for proposals in three areas of focus:

1. Fault Detection and Categorization – current supercomputing systems must be better understood in order to prevent similar behavior on future machines, according to DOE computing experts.

2. Fault Mitigation – this category breaks into two parts: the need for more efficient and effective checkpoint/restart (C/R) and the need for effective alternatives to C/R (a rough sketch of the C/R idea appears at the end of this article).

3. Anomaly Detection and Fault Avoidance – using machine learning strategies to anticipate faults far enough in advance to take preemptive measures, such as migrating the running application to another node.

Approximately four to six research awards will be made over a period of three years with award sizes ranging from $100,000 per year to $1,250,000 per year. Total funding up to $4,000,000 annually is expected to be available subject to congressional approval. The pre-application due date is set for August 27, 2014.
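Focus area 2 revolves around checkpoint/restart. As a rough, hypothetical illustration of the idea (real HPC C/R systems checkpoint distributed state across many nodes and far more efficiently), an application-level checkpoint loop might look like this:

```python
import json
import os

CHECKPOINT_FILE = "state.ckpt"    # hypothetical path on stable storage
CHECKPOINT_INTERVAL = 1000        # steps of work between checkpoints

def save_checkpoint(state):
    """Persist the application state -- the expensive part at extreme scale."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)

def load_checkpoint():
    """Resume from the last checkpoint instead of restarting from step 0."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"step": 0, "result": 0.0}

state = load_checkpoint()
while state["step"] < 100_000:
    state["result"] += 1e-6                      # stand-in for one unit of real computation
    state["step"] += 1
    if state["step"] % CHECKPOINT_INTERVAL == 0:
        save_checkpoint(state)                   # after a failure, at most one interval is lost
```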
The rapidly increasing interaction of consumers with social online networks, mobile phones and other intelligent devices has brought about significant lifestyle benefits that are under a serious threat from cybercriminals according to an international virus analyst. Addressing the audience of Kuwait’s ICT Security Forum, Stefan Tanase, Malware Analyst at the EEMEA Research Center, Kaspersky Lab Global Research and Analysis Team, said that in 2009 social networking sites will be used by around 80 per cent of all Internet users, the equivalent of more than one billion people. "The growing popularity of social networking sites has not gone unnoticed by cybercriminals; last year, such sites became a hotbed of malware and spam and yet another source of illegal earnings on the Internet. The Kaspersky Lab collection contained more than 43,000 malicious files relating to social networking sites in 2008 alone," said Tanase. "Malicious code distributed via social networking sites is 10 times more effective than malware spread via email. Social networks have, approximately, a 10 per cent success rate in terms of infection compared to less than 1 per cent for malware spread via email,” he said. Stolen names and passwords belonging to the users of social networking sites can be used to send links to infected sites, spam or fraudulent messages such as a seemingly innocent request for an urgent money transfer. "Generally, users of social networking sites trust other users and accept messages sent by someone on their friends list without thinking; this makes it easy for cybercriminals to use such messages to spread links to infected sites. Various means are used to encourage the recipient to follow the link contained in the message and download a malicious program." According to the Kaspersky Lab expert, major Web 2.0 platforms such as Facebook or Twitter are highly vulnerable to malware attacks and end users need to be aware of the risks and be ready to take precautionary measures to protect themselves. During his presentation, Tanase also highlighted the rapid spread of mobile phone hacking. "In the last week alone we have found five new Trojans which send such money transfer requests without the permission or knowledge of the phone's owner. The goal is to transfer large quantities of small sums in the hope that while individual users might not notice the leak, the overall sum of transfers will be significant. "There is a rise of the number of attacks targeting mobile phones and a more clear shift towards methods for monetization of these attacks." About Kaspersky Lab Kaspersky Lab is the largest antivirus company in Europe. It delivers some of the world’s most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. The Company is ranked among the world’s top four vendors of security solutions for endpoint users. Kaspersky Lab products provide superior detection rates and one of the industry’s fastest outbreak response times for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky® technology is also used worldwide inside the products and services of the industry’s leading IT security solution providers. Learn more at . For the latest on antivirus, anti-spyware, anti-spam and other IT security issues and trends, visit www.viruslist.com.
NASA is making final preparations to launch a robotic probe in early September to study the moon and its atmosphere. Scientists hope the information will help them better understand Mercury, asteroids and the moons orbiting other planets. "The moon's tenuous atmosphere may be more common in the solar system than we thought," said John Grunsfeld, NASA's associate administrator for science. "Further understanding of the moon's atmosphere may also help us better understand our diverse solar system and its evolution." The probe, nicknamed LADEE for Lunar Atmosphere and Dust Environment Explorer, is set to blast off from NASA's Wallops Flight Facility on Wallops Island, Va. at 11:27 p.m. ET Sept. 6 -- two weeks from today. LADEE will lift off on board a U.S. Air Force Minotaur V rocket, which started out as a ballistic missile but was converted into a space launch vehicle. The robotic probe, which is about the size of a small car, will orbit the moon for an expected four- to five-month mission. About a month after launch, the spacecraft will enter a 40-day test phase. During the first 30 days of that period, LADEE will be focused on testing a high-data-rate laser communication system. If that system works as planned, similar systems are expected to be used to speed up future satellite communications. After that test period, the probe will begin a 100-day science mission, using three instruments to collect data about the chemical makeup of the lunar atmosphere and variations in its composition. The probe also will capture and analyze lunar dust particles it finds in the atmosphere. This mission will be the first to launch a spacecraft beyond Earth orbit from NASA's Virginia Space Coast launch facility. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. This story, "To the Moon Or Bust! NASA Preps to Launch Lunar Probe" was originally published by Computerworld.
<urn:uuid:fa896aa4-0777-49b9-aedb-69c6bfaa90c6>
CC-MAIN-2017-09
http://www.cio.com/article/2383062/government/to-the-moon-or-bust--nasa-preps-to-launch-lunar-probe.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00602-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916431
446
3.921875
4
By Cathy Dew on February 9, 2017 In case you missed it, here are the first four blogs in our series on business process analysis templates: This week we look at personas... Personas are detailed profiles of fictional characters that represent a specific segment of users within a targeted demographic. Analysts and designers can create personas that reflect a fully realized example of who is the target audience for a website, intranet or application. Personas help application designers fully envision and understand the different attitudes and behaviors of users within various demographic segments. The goal is for the persona to capture the specific information that is relevant and necessary to reflect the demographic they represent. Depending on your project, some aspects of a persona will be more or less important and some cases, not important at all. Here is a that can be used to build a foundation for your personas: - Persona Name: The idea is for the personas to become part of the project team. Best practices say to give them a standard first name and then use their role or title as the last name, for example, Sandra Sales Manager. Pretty quickly the team will know who Sandra is. - Attributes: Differentiating characteristics which reflect the particular demographic that the persona represents include age, gender, ethnicity, marital status. Depending on the project it may also be important to know their employer, title and salary. - Persona Image: A representative picture is typically used to create a sense of realism and to give designers a firm anchoring point for the persona within their mind. - Goals: What the persona is trying to achieve, relative to your project. (Identify 3-5) - Skills: Specific abilities the persona embodies as it relates to your focus are. Some personas may cross over or combine job tasks and duties. - Behaviors/Desires: The types of behaviors has and statement of what motivates the persona to act a specific way can add to the value of the persona. It is important to understand frustrations and pain points. (Identify 3-5) - Character: Details about the persona’s character add a sense of realism and lead to a better representation of an actual user. This may be motivators like incentives, risk and achievements. Or it could be personality traits like where the persona fits on a scale of introvert/extrovert, passive/active or analytic/creative. - Location/Environment: The surrounding environment that the persona functions within, the environmental factors can have such a large influence on behavior. As you can see there is a lot to choose from when describing a persona. Spend time to consider what is important in the context of your project to help narrow down this list. Personas are not usually more than a page or two in length. They are useful in business analysis process efforts to help you focus on exactly who you are designing for, which almost always is not you. For reference please click here to download a sample persona template. Your personas can help guide product or application design decisions regarding features and user interactions. They can be printed and hung on the wall for all analysts and designers to see and be reminded of, as they complete their day-to-day activities. In this way, the personas become socialized into the surrounding environment. You might think that it doesn’t make a difference. 
But consider “putting on the hat of the customer”, it makes a great deal of difference in how you: Here is an example of the persona development methodology taken from a case study for a client's intranet development project. - First, we surveyed employees to classify the different types of end user of the Intranet. The survey focused on goals, behaviors, attitudes and usage of the intranet. - We followed this up with detailed one-on-one interviews that covered a wide range of employees and divisions. Then we summarized the findings in the form of task scenarios and personas. - Finally, we checked the legitimacy of the personas by distributing them to stakeholders within the organization. Then we revised and finalized the personas according to their comments. One of the other long lasting benefits of developing personas is the collaboration and refinement in understanding that occurs across the project team and who the target customer is and how we can support their goals. Having personas allows the team to more easily separate personal opinions and prejudices and really focus on what would Sally Sale Manager would do. By doing this work early in the development of a project, you can avoid a significant number of design flaws, reducing the necessary development and re-development efforts and costs considerably. In the end, it really is all about the end user or customer. Let personas be active participating members of your project team to keep your project on track and user-centered. If you need help navigating this process, feel free to reach out to 2Plus2! We are always happy to help create personas that drive user-centered design. Call us at (510) 652-7700.
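One possible way to make the persona template above concrete is to capture it as a small data structure. The sketch below is only an illustration: the field names and the sample values for the "Sandra Sales Manager" persona are assumptions for this example, not part of the downloadable 2Plus2 template.
```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Persona:
    """One persona record, following the template fields described above."""
    name: str                   # standard first name + role, e.g. "Sandra Sales Manager"
    attributes: Dict[str, str]  # age, gender, employer, title, salary, ...
    image: str                  # path or URL of a representative picture
    goals: List[str] = field(default_factory=list)      # 3-5 goals
    skills: List[str] = field(default_factory=list)
    behaviors: List[str] = field(default_factory=list)  # motivations, frustrations, pain points
    character: str = ""         # e.g. introvert/extrovert, passive/active, analytic/creative
    environment: str = ""       # the surroundings the persona works in

# Hypothetical example based on the "Sandra Sales Manager" name used in the article.
sandra = Persona(
    name="Sandra Sales Manager",
    attributes={"age": "42", "title": "Regional Sales Manager"},
    image="personas/sandra.png",
    goals=["Find product collateral in two clicks", "Spend less time on status reporting"],
    skills=["CRM power user", "Comfortable presenting to executives"],
    behaviors=["Frustrated by slow intranet search", "Works mostly from the road"],
    character="Extrovert, results-driven",
    environment="Field sales: airports, customer sites, home office",
)

print(f"{sandra.name}: {len(sandra.goals)} goals, {len(sandra.behaviors)} behaviors noted")
```
Keeping personas in a structured form like this makes it easy to print them for the wall or drop them into project documentation, which is exactly how the article suggests they be socialized with the team.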
<urn:uuid:883f90d5-b2a7-42a6-a6a5-d274e2330407>
CC-MAIN-2017-09
https://www.2plus2.com/News/Business-Process-Analysis-Template-3-Personas-Ke.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00602-ip-10-171-10-108.ec2.internal.warc.gz
en
0.9236
1,033
2.71875
3
Over the past few years, artificial hands have come a long way in terms of dexterity. They can grasp, shake hands, point, and, usefully, make the "come hither" gesture. Now, researchers at the Cleveland Veterans Affairs Medical Center and Case Western Reserve University have made significant progress in building a prosthetic hand that provides something like a sense of touch. The hand, which you can see put to use in a demonstration in the video below, has 20 sensitive spots that can perceive other objects' physicality. Implants that connect those spots to nerves in the patients arm have continued to work 18 months after installation, which MIT Technology Review reports, notes is a important milestone since "electrical interfaces to nerve tissue can gradually degrade in performance." Hands are more than tools for manipulating the physical world. They are also tools of perception, reporting sensations such as heat, texture, contact. These two systems, output and input, work together, helping us to know when our grasp is tight or whether we've reached the object on a shelf that's just out of view. The difficulty of building a machine that can perceive tactile information and report it back to the brain has become the roadblock for a truly hand-like prosthetic. The new prosthetic is a step towards creating this feedback loop. And it can do more than sense simple contact. Dustin Tyler, of Case Western, can adjust the device to signal different textures. Igor Spetic, who is using the hand in the above video, "says sometimes it feels like he’s touching a ball bearing, other times like he’s brushing against cotton balls, sandpaper, or hair," according to the Technology Review report. At the heart of the technology is a custom version of an interface known as a cuff electrode. Three nerve bundles in the arm—radial, median, and ulnar—are held in the seven-millimeter cuffs, which gently flatten them, putting the normally round bundles in a more rectangular configuration to maximize surface area. Then a total of 20 electrodes on the three cuffs deliver electrical signals to nerve fibers called axons from outside a protective sheath of living cells that surround those nerve fibers. This approach differs from other experimental technologies, which penetrate the sheath in order to directly touch the axons. These sheath-penetrating interfaces are thought to offer higher resolution, at least initially, but with a potentially higher risk of signal degradation or nerve damage over the long term. And so they have not been tested for longer than a few weeks. Thus far, the device has only been tested in the lab, but researchers are hoping that further development and study could bring it to the market within the next decade.
<urn:uuid:88fe5401-5eea-47a2-9194-a1ef819a41d1>
CC-MAIN-2017-09
http://www.nextgov.com/health/2013/12/va-researchers-help-develop-artificial-hand-can-feel/75311/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00650-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947229
557
3.109375
3
5 Beginning of minerals These are the notes taken on 10/17-10/19. What do you think a mineral is? (Give Examples) Are minerals valuable? Describe the mineral at your table. Agenda: Mineral Basics Characteristics of Minerals mineral a natural, usually inorganic solid that has a characteristic chemical composition, an orderly internal structure, and a characteristic set of physical properties. To be a mineral, a substance must have four characteristics: it must be inorganic-it cannot be made of or by living things; it must occur naturally-it cannot be man-made; it must be a crystalline solid; it must have a consistent chemical composition. Characteristics of Minerals, continued The diagram below shows the four characteristics of minerals. Kinds of Minerals The 20 most common minerals are called rock-forming minerals because they form the rocks that make up Earth's crust. Ten minerals make up 90% of Earth's crust: quartz, orthoclase, plagioclase, muscovite, biotite, calcite, dolomite, halite, gypsum, and ferromagnesian minerals. All minerals can be classified into two main groups-silicate minerals and nonsilicate minerals-based on their chemical compositions. Kinds of Minerals, continued silicate mineral a mineral that contains a combination of silicon and oxygen and that may also contain one or more metals Common silicate minerals include quartz, feldspars, and ferromagnesian minerals, such as amphiboles, pyroxenes, and olivines. Silicate minerals make up 96% of Earth's crust. Quartz and feldspars alone make up more than 50% of the crust. nonsilicate mineral a mineral that does not contain compounds of silicon and oxygen Nonsilicate minerals comprise about 4% of Earth's crust. Nonsilicate minerals are organized into six major groups based on their chemical compositions: carbonates, halides, native elements, oxides, sulfates, and sulfides. (Foldable of nonsilicates using book) Double check understanding What compound of elements will you never find in a nonsilicate mineral? Nonsilicate minerals never contain compounds of silicon bonded to oxygen. Each type of mineral is characterized by a specific geometric arrangement of atoms, or its crystalline structure. crystal a solid whose atoms, ions, or molecules are arranged in a regular, repeating pattern. One way that scientists study the structure of crystals is by using X rays. X rays that pass through a crystal and strike a photographic plate produce an image that shows the geometric arrangement of the atoms that make up the crystal. Crystalline Structure of Silicate Minerals silicon-oxygen tetrahedron the basic unit of the structure of silicate minerals; a silicon ion chemically bonded to and surrounded by four oxygen ions Isolated Tetrahedral Silicates In minerals that have isolated tetrahedra, only atoms other than silicon and oxygen atoms link silicon-oxygen tetrahedra together. Olivine is an isolated tetrahedral silicate. Crystalline Structure of Silicate Minerals, continued What is the building block of the silicate crystalline structure?
The building block of the silicate crystalline structure is a four-sided structure known as the silicon-oxygen tetrahedron, which is one silicon atom surrounded by four oxygen atoms. Ring silicates form when shared oxygen atoms join the tetrahedra to form three-, four-, or six-sided rings. Beryl and tourmaline are ring silicates. In single-chain silicates, each tetrahedron is bonded to two others by shared oxygen atoms. Most single-chain silicates are called pyroxenes. In double-chain silicates, two single chains of tetrahedra bond to each other. Most double-chain silicates are called amphiboles. In the sheet silicates, each tetrahedron shares three oxygen atoms with other tetrahedra. The fourth oxygen atom bonds with an atom of aluminum or magnesium, which joins the sheets together. The mica minerals, such as muscovite and biotite, are sheet silicates. In the framework silicates, each tetrahedron is bonded to four neighboring tetrahedra to form a three-dimensional network. Frameworks that contain only silicon-oxygen tetrahedra are the mineral quartz. Other framework silicates contain some tetrahedra in which atoms of aluminum or other metals substitute for some of the silicon atoms. Quartz and feldspars are framework silicates. The diagram below shows the tetrahedral arrangement of framework silicate minerals. Crystalline Structure of Nonsilicate Minerals Because nonsilicate minerals have diverse chemical compositions, nonsilicate minerals display a vast variety of crystalline structures. Common crystalline structures for nonsilicate minerals include cubes, hexagonal prisms, and irregular masses. The structure of a nonsilicate crystal determines the mineral's characteristics. In the crystal structure called closest packing, each metal atom is surrounded by 8 to 12 other metal atoms that are as close to each other as the charges of the atomic nuclei will allow. P. 116, 1-9
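As a study aid only, the classification in these notes can also be summarized as a simple lookup table. The groupings and example minerals below are the ones named in the notes; the code itself is an illustrative addition, not part of the original lesson.
```python
# Study-aid summary of the silicate classification described in the notes above.
SILICATE_STRUCTURES = {
    "isolated tetrahedra": ["olivine"],
    "ring": ["beryl", "tourmaline"],
    "single chain": ["pyroxenes"],
    "double chain": ["amphiboles"],
    "sheet": ["muscovite", "biotite"],
    "framework": ["quartz", "feldspars"],
}

NONSILICATE_GROUPS = ["carbonates", "halides", "native elements",
                      "oxides", "sulfates", "sulfides"]

def structure_of(mineral: str) -> str:
    """Return the silicate structure class for a mineral named in the notes."""
    for structure, examples in SILICATE_STRUCTURES.items():
        if mineral.lower() in examples:
            return structure
    return "not a silicate example from these notes"

print(structure_of("Quartz"))   # -> framework
print(structure_of("Olivine"))  # -> isolated tetrahedra
```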
<urn:uuid:15b1c894-8e99-4402-a7cd-311e0f0b9927>
CC-MAIN-2017-09
https://docs.com/alison-lane/5253/5-beginning-of-minerals
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00046-ip-10-171-10-108.ec2.internal.warc.gz
en
0.848238
1,257
3.546875
4
Magnetic Resonance Imaging (MRI) is an advanced method to detect pathological changes in the organs inside the body. It uses magnetic fields and radio waves to create detailed high-quality images of individual organs, joints, and tissues. MRI scanners provide different view angles and, in certain cases, can present much better location, extent, and cause of disease compared to conventional methods like X-rays or ultrasound. At a Medical Center in the Netherlands, images from MRI machines need to be provided to a number of different stations for diagnosis and surgical preparation. Instead of the usual prints, these images will be distributed electronically to the radiologists, surgeons, and meeting and operating rooms, along with the electronic patient records. The most important factor for the medical center was a pixelperfect reproduction of the images at all locations because pixel loss could lead to misdiagnosis. Since the hospital campus is quite large, the images must be sent over long distances of up to more than one kilometer. In addition, electromagnetic and RF interferences (EMI/RFI) could affect the electronic transmission and cause a loss of image quality. In total, the new distribution system needed to include four servers providing the images to a total of 22 stations distributed on the campus. Besides the images, keyboard and mouse signals of the user consoles also needed to be extended to allow staff to add notes to the patient files. The required long distance and the environment with high interferences lead to a solution that uses fiber-optic cable as transmission media. Fiber optics technology is based on light pulses and is completely immune to all EMI/RFI interference. In addition, fiber optics allows much greater distances than a CATx infrastructure, without loss of the signal quality. To distribute the images and peripheral data, Black Box suggested the DKM FX Compact Matrix Switch with 32 fiber SFP as the central switch. The DKM FX gives reliable access to high-quality, real-time digital video and a whole host of peripherals across the campus. It routes DisplayPort 1.1 resolutions up to RGB 3:3:3 and HDMI or DVI resolutions with Full HD 1080p. The distributed locations are connected through the DKM FX Modular Extenders that provide the necessary interfaces and signal extension depending on the individual location. Four operating rooms, each equipped with four large HDMI displays, receive the required images through extenders with quad-head video in a pixelperfect quality. Additional keyboard and mouse access allows the team to protocol the operation process. Two meeting rooms receive all required data from the image and patient data servers. Dual-head DKM FX HDMI Extenders give full USB-keyboard/mouse control and display the images on two 40" LCD displays. For patient stations and the patient data archive, the extenders provide high-quality images and USB-keyboard and mouse control as well as USB 2.0 extension for barcode readers and printer access.
<urn:uuid:d9cea583-ef1b-4066-a466-2c7cd0bd0457>
CC-MAIN-2017-09
https://www.blackbox.com/en-us/resources/case-studies/technology-solutions/medical-center
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00518-ip-10-171-10-108.ec2.internal.warc.gz
en
0.911787
592
2.828125
3
The small robots can rectify mistakes made during the demonstration, avoiding a complete catastrophe. Scientists at the Harvard School of Engineering and Applied Sciences demonstrated a flashmob of 1,024 simple robots, collectively known as Kilobots, arranging themselves into complex shapes. The aim of the demonstration was to show how simple behaviour among simple machines can create complex collective behaviour, and it succeeded: the robots collectively formed the letter K and arranged themselves into a starfish shape. Once the initial set of instructions was delivered, the Kilobots did not require any micromanagement or intervention. Only four robots established the original coordinate system, while the rest received a 2D image of the shape they had to mimic; following the procedure, the robots arranged themselves accordingly. Prof Radhika Nagpal of the Harvard School of Engineering and Applied Sciences said: "The beauty of biological systems is that they are elegantly simple — and yet, in large numbers, accomplish the seemingly impossible. "At some level, you no longer even see the individuals; you just see the collective as an entity to itself. We can simulate the behavior of large swarms of robots, but a simulation can only go so far. "The real-world dynamics — the physical interactions and variability — make a difference, and having the Kilobots to test the algorithm on real robots has helped us better understand how to recognize and prevent the failures that occur at these large scales." This technology marks the beginning of the use of collective artificial intelligence, which could eventually be used to coordinate large numbers of simple machines into a single swarm capable of complex tasks.
<urn:uuid:5230dd01-7649-408d-936c-40b574a70db6>
CC-MAIN-2017-09
http://www.cbronline.com/news/enterprise-it/harvard-researchers-demonstrate-flashmob-of-more-than-1000-robots-180814-4346344
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00042-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949208
326
3.671875
4
The ambitious Apollo moon mission era suffered from the economic realities of space exploration, which led to the downsizing of the U.S. space program. For a period of time, no significant advancements were made. The space shuttles grew old, and in 2011, all three were retired after more than thirty years of service. Since then, U.S. astronauts have had to travel to the International Space Station via Russian spacecraft. However, this is likely to change soon in a significant way. Earlier this month, SpaceX, an American aerospace manufacturer and space transport services company founded by Elon Musk, was certified by the United States Air Force to conduct military launches. The move dramatically changed the space industry because until then, the lucrative contracts to launch satellites and other military vehicles into space had been limited to one player—United Launch Alliance—a joint venture of Lockheed Martin and Boeing. However, in a game-changing move, SpaceX promised to launch satellites into orbit at a significantly lower cost. While SpaceX may not be a familiar name, they have flown numerous resupply missions to the International Space Station under contracts with NASA. And, as space travel becomes more affordable, SpaceX imagines a surge in consumer demand to get to space. Accordingly, they are building a spacecraft that can carry human cargo into space to eliminate the reliance that America has on Russian launchers. They are not alone, however. Other startups are joining the commercial space industry because of the growing demand for delivering materials into space and the potential of commercial space travel. In 2004, the founder of Virgin Atlantic, Richard Branson launched Virgin Galactic with the intent of creating a brand new space tourism industry to transport passengers into low Earth orbit for a few hours. Hundreds of people have already paid $250,000 to reserve the thrill ride. Another competitor, Blue Origin—founded by Jeff Bezos of Amazon—is aiming to deliver cheaper access to space for both people and satellites albeit using completely different technology. Similarly, dozens of smaller startups have emerged all over the world with the goal of making an impact in the space industry. If this trend continues and some stability is achieved, the future of manned space travel looks promising. To learn more about this topic or to connect with our IT support team, give us a call at (831) 753 -7677 or send us an email: [email protected]. Alvarez Technology Group, Inc. 209 Pajaro Street Salinas, CA 93901 Toll Free: 1-866-78-iTeamLocal: (831) 753-7677 Fax: (831) 753-7671
<urn:uuid:d026b489-ea09-481b-9fc4-133cd0060a0d>
CC-MAIN-2017-09
https://www.alvareztg.com/more-business-startups-are-shooting-for-space-the-final-frontier/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00042-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939936
544
2.859375
3
NASA Captures Storm at Saturn's North Pole / February 4, 2014 NASA’s Cassini spacecraft has captured a stunning new image of Saturn’s north pole that highlights not only the planet's famous rings, but also a swirling, strangely hexagonal storm brewing in its atmosphere. The odd hexagonal vortex is thought to be about 20,000 miles in diameter, and is essentially a jet stream of 200 mph winds that surround a giant storm, according to NASA officials. "The hexagon is just a current of air, and weather features out there that share similarities to this are notoriously turbulent and unstable," Andrew Ingersoll, a Cassini imaging team member at the California Institute of Technology, said in a statement. "A hurricane on Earth typically lasts a week, but this has been here for decades — and who knows — maybe centuries." Space agency officials said no weather feature they’ve yet discovered in the solar system resembles Saturn’s storm, which they believe may be so long-lasting because of Saturn’s lack of land mass, which normally serves to disrupt wind currents. Saturn’s entirely gaseous form prevents this type of calming effect, according to NASA. The Cassini spacecraft has been sending back images of Saturn since its arrival at the planet in 2004, and was originally launched in 1997. It will be expected to collect data until at least 2017, at which point it will burn up in the ringed planet’s atmosphere.
<urn:uuid:eeb6a7a9-39bb-444c-a055-b86487dc1adc>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-NASA-Captures-Storm-at-Saturns-North-Pole.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00162-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95072
321
3.328125
3
Information Economics Overview Information Economics will help you find the value in your information without losing sight of risk and cost. Our video tells you why information is getting harder and harder to manage. As the information explosion gathers pace, the records and information your business holds is growing exponentially. Customers want more information, regulations are making you keep more information and your people generate and use more information than ever before. So the cost of handling and storing it has never been higher. However, managed properly, you can find and use the value in your information. Information economics is about maximising these opportunities, reducing risks and controlling costs to make sure you achieve ROI – Return on Information. Before you can exploit the value in your data you need to take control, so you know exactly what you have, and exactly where it is. 42% of organisations say the volume of paper is increasing while 36% of firms keep all their information in case it’s needed. However, 71% of companies could reduce storage costs by 30% if they knew what they could destroy and when. By destroying records that are beyond their statutory retention periods, you could cut your physical and digital storage requirements by up to 40%. Once you’re only keeping the information you need, you can maximise its value and minimise its cost. Start by reclaiming your office space. Every filing cabinet you can dispense with gives you space which could have a rental value of around 1,500 USD per year. Storing your most valuable and important records offsite reduces their exposure to loss, damage or theft. Professional document storage means your data is secure, helping you avoid data protection fines of up to 380,000 USD. And with your storage properly organised, research indicates that the effectiveness of workers could increase by 26.5% through improved access. So, to ensure you get a return on information, tackle your document storage and start seeing information differently, then you can plan how to extract maximum value from your data.
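To see how the figures quoted above might combine in practice, here is a back-of-envelope estimate. The example office (50 filing cabinets and 20,000 USD per year of document storage spend) is invented for illustration, and the percentages are simply taken at face value from the text.
```python
# Back-of-envelope "Return on Information" estimate using the figures quoted above.
CABINET_RENT_PER_YEAR = 1500.0   # USD of floor space per filing cabinet (figure from the text)
DESTRUCTION_SHARE = 0.40         # share of records past retention that could be destroyed
STORAGE_COST_REDUCTION = 0.30    # potential storage-cost saving once retention is known

def annual_saving(cabinets: int, storage_spend: float) -> float:
    """Rough annual saving from destroying expired records and reclaiming office space."""
    reclaimed_space = int(cabinets * DESTRUCTION_SHARE) * CABINET_RENT_PER_YEAR
    reduced_storage = storage_spend * STORAGE_COST_REDUCTION
    return reclaimed_space + reduced_storage

# Hypothetical office: 50 filing cabinets, 20,000 USD/year spent on document storage.
print(f"Estimated annual saving: {annual_saving(50, 20_000):,.0f} USD")
```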
<urn:uuid:aae475a4-857f-441c-83eb-27c6dae1d85d>
CC-MAIN-2017-09
http://www.ironmountain.com/Knowledge-Center/Reference-Library/View-by-Document-Type/Demonstrations-Videos/I/Information-Economics-Overview-Video.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00162-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929757
414
2.53125
3
Kaspersky Lab presents the article 'Drive-by Downloads: The Web Under Siege' by Kaspersky Lab security evangelist Ryan Naraine. The article is devoted to the covert downloading of malware from websites without the user’s knowledge, known as drive-by downloads. Cybercriminals deploy exploits on a malicious server. Code to redirect connections to that malicious server is then embedded on Web sites, and lures to those sites are spammed via e-mail or bulletin boards. Whereas before Cybercriminals created malicious websites, now they increasingly compromise legitimate Web sites and either secretly embed an exploit script, or plant redirect code that silently launches attacks via the browser. This makes drive-by downloads an even greater threat. Malware exploit kits, which are sold on underground hacker sites, serve as the engine for drive-by downloads. The kits are fitted with exploits for vulnerabilities in a range of widely deployed desktop applications, including Internet browsers. If an exploit is successful, a Trojan is silently installed that gives the attacker full access to the compromised computer. The attacker can take advantage of the compromised computer in order to steal confidential information or to launch DoS attacks. According to ScanSafe, 74 percent of all malware detected in the third quarter of 2008 came from visits to compromised Web sites. This means that we are in the midst of a large-scale drive-by download epidemic. Over a recent ten-month period, the Google Anti-Malware Team found more than three million URLs initiating drive-by malware downloads. The drive-by download epidemic is largely attributed to the unpatched state of the Windows ecosystem. With very few exceptions, the exploits in circulation target software vulnerabilities that are known – and for which patches are available. The most practical approach to defending against drive-by downloads is to pay close attention to the patch management component of defense. It is also essential to install antivirus software and to keep its databases updated. Importantly, the antivirus product should include a browser traffic scanner to help pinpoint potential problems from drive-by downloads. The full version of the article is available at Viruslist.com and an abbreviated version can be found on the Kaspersky Lab corporate website http://www.kaspersky.com. This material can be reproduced provided the author, company name and original source are cited. Reproduction of this material in re-written form requires the express consent of the Kaspersky Lab Public Relations department. About Kaspersky Lab Kaspersky Lab delivers the world’s most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. Kaspersky Lab products provide superior detection rates and the industry’s fastest outbreak response time for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky technology is also used worldwide inside the products and services of the industry’s leading IT security solution providers.
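As a simplified illustration of the redirect mechanism described above, the sketch below scans a page for the kind of hidden iframe that attackers embed in compromised sites. It is a toy heuristic, not how Kaspersky's browser traffic scanner actually works, and the example URL is fabricated.
```python
# Toy heuristic for spotting hidden redirect code (e.g. an invisible iframe)
# injected into a compromised page. For illustration only; a real scanner
# inspects scripts, obfuscation and live traffic as well.
from html.parser import HTMLParser

SUSPICIOUS_STYLES = ("display:none", "visibility:hidden")

class HiddenIframeFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "iframe":
            style = attrs.get("style", "").replace(" ", "").lower()
            hidden = (any(s in style for s in SUSPICIOUS_STYLES)
                      or attrs.get("width") == "0" or attrs.get("height") == "0")
            if hidden:
                self.findings.append(attrs.get("src", "<no src>"))

page = '<html><body>Hello<iframe src="http://evil.example/exploit" width="0" height="0"></iframe></body></html>'
finder = HiddenIframeFinder()
finder.feed(page)
print("Hidden iframes pointing to:", finder.findings)
```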
<urn:uuid:a9c8821a-cd8d-44de-b12d-d23217710a5f>
CC-MAIN-2017-09
http://www.kaspersky.com/au/about/news/virus/2009/Drive_by_Downloads_The_Web_Under_Siege
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00338-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917181
602
2.578125
3
The idea of infecting BIOS has long been a highly intriguing prospect for cybercriminals: by launching from BIOS immediately after the computer is turned on, a malicious program can gain control of all the boot-up stages of the computer or operating system. Since 1998 and the CIH virus, which could merely corrupt BIOS, malware writers have made little progress on this front. That changed, however, in September when a Trojan was detected that could infect BIOS and as a result gain control of the system. The rootkit is designed to infect BIOS manufactured by Award and appears to have originated in China. The Trojan’s code is clearly unfinished and contains debug information, but Kaspersky Lab analysts have verified that its functionality works. Attacks against individual users The DigiNotar hack. One of the main aims of the hackers who attacked the Dutch certificate authority DigiNotar was the creation of fake SSL certificates for a number of popular resources, including social networks and email services that are used by home users. The hack occurred at the end of July and went unnoticed throughout August while the attacker manipulated the DigiNotar system to create several dozen certificates for resources such as Gmail, Facebook and Twitter. Their use was later recorded on the Internet as part of an attack on Iranian users. The fake certificates are installed at the provider level and allow data flows between a user and a server to be intercepted. The DigiNotar story once again demonstrates that the existing system of hundreds of certificate authorities is poorly protected and merely discredits the very idea of digital certificates. MacOS threats: the new Trojan concealed inside a PDF. Cybercriminals are taking advantage of the complacency shown by many MacOS users. For instance, most Windows users who receive email attachments with additional file extensions such as .pdf.exe or .doc.exe will simply delete them without opening them. However, this tactic proved to be a novelty for Mac users, who are more prone to unwittingly launch malicious code masquerading as a PDF, an image or a doc etc. This mechanism was detected in late September in the malicious program Backdoor.OSX.Imuler.a, which is capable of receiving additional commands from a control server as well as downloading random files and screenshots to the server from the infected system. In this case, the cybercriminals used a PDF document as a mask. Kaspersky Lab detected 680 new variations of malicious programs for different mobile platforms in September. 559 of them were for Android. In recent months there has been a significant increase in the overall number of malicious programs for Android and, in particular, the number of backdoors: of the 559 malicious programs detected for Android, 182 (32.5%) were modifications with backdoor functionality. More and more malicious programs for mobile devices are now making extensive use of the Internet for such things as connecting to remote servers to receive commands. Mobile Trojans designed to intercept text messages containing mTANs used in online banking are becoming increasingly popular among cybercriminals. Following in the footsteps of ZitMo, which has been operating on the four most popular platforms for the last year, is SpitMo which works in much the same way but in tandem with the SpyEye Trojan rather than ZeuS. Attacks via QR codes. At the end of September the first attempted malicious attacks using QR codes were detected. 
When it comes to installing software on smartphones, a variety of websites offer users a simplified process that involves scanning a QR code to start downloading an app without having to enter a URL. Predictably, cybercriminals have also decided to make use of this technology to download malicious software to smartphones: Kaspersky Lab analysts detected several malicious websites containing QR codes for mobile apps (e.g. Jimm and Opera Mini) which included a Trojan capable of sending text messages to premium-rate short numbers. Attacks on corporate networks The number of serious attacks on large organizations that make use of emails in the initial stages is on the increase. In September alone there was news of two major incidents that made use of this tactic. The first, named Lurid, was uncovered by Trend Micro during research by the company’s experts. They managed to intercept traffic to several servers that were being used to control a network of 1,500 compromised computers located mainly in Russia, former Soviet republics and countries in eastern Europe. Analysis of the Russian victims showed that it was a targeted attack against very specific organizations in the aerospace industry, as well as scientific research institutes, several commercial organizations, state bodies and a number of media outlets. The attackers managed to gain access to data by sending malicious files via email to employees in these organizations. Attack on Mitsubishi. News about an attack on the Japanese corporation Mitsubishi appeared in the middle of the month, although research by Kaspersky Lab suggests that it was most probably launched as far back as in July and entered its active phase in August. According to the Japanese press, approximately 80 computers and servers were infected at plants manufacturing equipment for submarines, rockets and the nuclear industry. Malware was also detected on computers at the company’s headquarters. There is now no way of knowing exactly what information was stolen by the hackers, but it is likely that the affected computers contained confidential information of strategic importance. “It is safe to say that the attack was carefully planned and executed,” says Alexander Gostev, Chief Security Expert at Kaspersky Lab. “It was a familiar scenario: in late July a number of Mitsubishi employees received emails from cybercriminals containing a PDF file, which was an exploit for a vulnerability in Adobe Reader. The malicious component was installed as soon as the file was opened, resulting in the hackers getting full remote access to the affected system. From the infected computer the hackers then penetrated the company’s network still further, cracking servers and gathering information that was then forwarded to the hackers’ server. A dozen or so different malicious programs were used in the attack, some developed specifically with the company’s internal network structure in mind.” The war on cybercrime Closure of the Hlux/Kelihos botnet. September saw a major breakthrough in the battle against botnets – the closure of the Hlux botnet. Cooperation between Kaspersky Lab, Microsoft and Kyrus Tech not only led to the takeover of the network of Hlux-infected machines, the first time this had ever been done with a P2P botnet, but also the closure of the entire cz.cc domain. Throughout 2011 this domain had hosted command and control centers for dozens of botnets and was a veritable hotbed of security threats. 
At the time it was taken offline the Hlux botnet numbered over 40,000 computers and was capable of sending out tens of millions of spam messages on a daily basis, performing DDoS attacks and downloading malware to victim machines. Kaspersky Lab currently controls the botnet and the company’s experts are in contact with the service providers of the affected users to clean up infected systems. Detection for Hlux has been added to Microsoft’s Malicious Software Removal Tool, helping to significantly reduce the number of infected machines. More detailed information about the IT threats detected by Kaspersky Lab on the Internet and on users' computers in September 2011 is available at http://www.securelist.com.
<urn:uuid:2fec163b-d68f-43a2-a601-ffe590862c9b>
CC-MAIN-2017-09
http://www.kaspersky.com/au/about/news/virus/2011/Malware_in_September_The_Fine_Art_of_Targeted_Attacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00338-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964462
1,518
2.640625
3
How Far can a Wi-Fi Signal Travel? You’ve most likely noticed that Wi-Fi signals do not travel an infinite distance. The farther a Wi-Fi signal goes, the weaker it gets. This is known as attenuation. The same thing happens to your voice. When you speak or yell, your voice will be nice and loud nearby, but your voice will get weaker (softer) the farther it travels. Exactly how far a Wi-Fi signal can travel depends on several factors: 1. The type of wireless router used: More powerful wireless routers (with more powerful antennas) are capable of transmitting a signal farther. 2. The type of 802.11 protocol used. Here are the transmission ranges (indoors): - 802.11b: 115 ft. - 802.11a: 115 ft. - 802.11g: 125 ft. - 802.11n: 230 ft. - 802.11ac: 230 ft. However, keep in mind that calculating attenuation and wireless ranges indoors can be very tricky. That’s because inside your house, the Wi-Fi signal bounces off obstacles and has to penetrate a variety of materials (like walls) that weaken the signal. What Kinds of Things Can Obstruct a Wireless Signal? Solid items can greatly attenuate (weaken) communication signals. Let’s compare this to your voice again. If you’re speaking to someone in another room, they’ll be able to hear you more clearly if the door between the two rooms is open rather than closed. Likewise, obstructions like walls and doors can dampen a wireless signal, decreasing its range. - A solid wood door will attenuate a wireless signal by 6 dB. - A concrete wall will attenuate a wireless signal by 18 dB. And each 3 dB of attenuation represents a power loss of ½! What Else Can Impact a Wireless Signal? In addition to physical obstructions like walls and doors, radio interference can also impact Wi-Fi signals. For example, various home appliances like microwave ovens, cordless phones, and wireless baby monitors can all interfere with your Wi-Fi network (you can read more about wireless interference here). In addition, if there are too many Wi-Fi networks all using the same wireless channel in the same area, the “noise” can impact your signal. Let’s return to the voice comparison. What happens when you’re trying to speak and someone else starts speaking, turns on the TV, or turns up the radio volume at the same time? It’s much harder for others to hear what you’re saying. How Can Wired/Wireless Extenders Help? If you have a big house, and you’d like to be able to communicate with someone upstairs or in a far room, you might install a home intercom system. This is similar to a wired Wi-Fi extender. These devices use the home’s existing wiring (coax for MoCA-based solutions and electrical wiring for Powerline-based solutions) to extend the wireless network into a far corner of the home. In essence, they carry the wireless signals through a wired connection (where there’s less attenuation and interference) and then send out a strong wireless signal in the new location.
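To make the attenuation figures above concrete, here is a short calculation using the standard decibel-to-power conversion; the door and wall values are the ones given in the article, and only the idea of combining them is added as an example.
```python
# Quick illustration of the "every 3 dB halves the power" rule quoted above.
def remaining_power_fraction(total_db_loss: float) -> float:
    """Convert a loss in decibels to the fraction of signal power that remains."""
    return 10 ** (-total_db_loss / 10)

obstructions = {"solid wood door": 6, "concrete wall": 18}

for name, loss_db in obstructions.items():
    frac = remaining_power_fraction(loss_db)
    print(f"{name}: {loss_db} dB loss -> {frac:.1%} of the power remains")

# Both together: 6 + 18 = 24 dB, i.e. roughly (1/2)**8 of the original power.
combined = sum(obstructions.values())
print(f"door + wall: {combined} dB -> {remaining_power_fraction(combined):.2%} remains")
```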
<urn:uuid:e518a134-fbe6-4e63-9516-f108f1043364>
CC-MAIN-2017-09
http://wifi.actiontec.com/learn-more/wifi-wireless-networking/how-far-can-a-wi-fi-signal-travel/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00038-ip-10-171-10-108.ec2.internal.warc.gz
en
0.896758
700
3.40625
3
When a criminal duo labeled the "High Country Bandits" robbed a series of Arizona and Colorado banks in 2009 and 2010, FBI investigators turned to the owners of local cell phone towers. A federal judge signed a court order authorizing a "tower dump" of call metadata from towers nearby the robbery sites. (Such court orders have a lighter burden of proof than a warrant, which requires probable cause.) The metadata received by investigators contained 150,000 numbers in plain text. Only two phone numbers were present at every crime scene. Investigators traced these back to their owners, the bank robbers, who were arrested and later convicted. In 2012, investigators—many of them local police departments—made over 9,000 such requests for cell tower data. Many were granted without a warrant, at the discretion of a judge. The NSA program known as CO-TRAVELER, which was exposed by the Snowden documents, collects similar data, tapping mobile phone networks with a purported intent of identifying associates of known intelligence targets. "Incidentally," NSA documents say, the program can capture up to 5 billion mobile phone location records per day. In the High Country Bandits case, such techniques obviously proved to be an effective law enforcement strategy. They are also a significant privacy infringement, containing the mobile phone records and potentially GPS data and network history of individuals en masse. Yale computer scientists Aaron Segal, Bryan Ford, and Joan Feigenbaum may have a solution. In a paper presented at an August 18 conference on open communication, the researchers paint an idyllic picture of a potential surveillance environment that's heavy on reach and light on breach. The paper, "Catching Bandits and Only Bandits: Privacy-Preserving Intersection Warrants for Lawful Surveillance," proposes combining a system of checks and balances with cryptographic techniques to let investigators identify records of interest without exposing anyone else's data. Here's how it works: It uses "privacy-preserving" algorithms. When the FBI received the tower dump data from the High Country Bandits case, they tracked down the bank robbers by finding the intersection of all data sets—in other words, the only phone numbers that made calls near each cell phone tower in question. The key to the Yale researchers' protocol is a well-established cryptographic method known as "privacy-preserving" set intersection. "Privacy-preserving" means that the operation works on encrypted information and doesn't reveal anything about the data except the intersecting elements. If the FBI had done things this way, they could have still found the criminals but avoided compromising 150,000 people's information. It creates checks and balances by distributing encryption. NSA surveillance programs like CO-TRAVELER are suspicious because they happen in a private, unchecked sphere. As the Yale researchers write: "In short, the public must simply 'trust' the U.S. government's evidence-free assertions that its mass ingestion and secret processing of privacy-sensitive data are (secretly) lawful and subject to adequate (secret) privacy protections and effective (secret) oversight." Checks on domestic agencies are more extensive. Most tower dumps require a court order and not a warrant, like the High Country Bandits case. If a judge deems a request too expansive, they might demand that the time window be narrowed or that a policy for handling extraneous personal data be specified.
The Yale protocol imposes stronger checks that work by distributing the actual encryption of the data. Once an agency like the FBI receives the data, it's been encrypted three times over: once by a key held by the court that authorized the dump, once by a key held by the FBI themselves, and once by a key held by a legislative organization that oversees all requests for surveillance data. As long as the keys stay secure, it's impossible for any single branch to operate on the data without the cooperation of the other two. But it's no catch-all. The protocol is a step up from the status quo, according to Christopher Soghoian, chief technologist for the American Civil Liberties Union. But under certain conditions, the metadata still wouldn't be totally impermeable, he says. What's more, law enforcement agencies might be slow to adopt the new technique due to red tape. Some privacy advocates oppose any large-scale culling of personal metadata. In a string of critical tweets, security consultant Eleanor Saitta said the paper essentially endorsed over-surveillance. Rather than trying to limit surveillance with clever cryptography, which could eventually be compromised, the government should instead seek to limit its access outright. That argument ignores the fact that big data is here to stay in one form or another, says Bryan Ford, one of the paper's authors. Denying that, he says, is merely "living in a fantasy land."
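For illustration, the core analytic step from the High Country Bandits case looks like this on fabricated toy data. This is the plain-text version that exposed 150,000 numbers; the Yale proposal computes the same intersection over encrypted records, so the non-matching numbers are never revealed. The sketch below is not the researchers' protocol, only the set-intersection idea it protects.
```python
# Intersect the sets of phone numbers seen near each robbery (toy, made-up data).
tower_dumps = {
    "robbery_1": {"555-0101", "555-0144", "555-0199", "555-0123"},
    "robbery_2": {"555-0101", "555-0177", "555-0123", "555-0888"},
    "robbery_3": {"555-0101", "555-0123", "555-0321"},
}

suspects = set.intersection(*tower_dumps.values())
print("Numbers present at every crime scene:", suspects)
# Only the numbers common to all dumps matter to investigators; in the
# privacy-preserving version, nobody ever sees the other records in plain text.
```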
<urn:uuid:b9ccd81b-800f-487a-99ee-a8a50e95f884>
CC-MAIN-2017-09
http://www.nextgov.com/big-data/2014/08/researchers-say-you-can-surveil-everyone-and-see-only-criminals/91892/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00214-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930903
1,029
2.546875
3
Quantum computing could revolutionize the way we interact with information. Such systems would process data faster and on larger scales than even the most super of supercomputers can handle today. But this technology would also dismantle the security systems that institutions like banks and governments use online, which means it matters who gets their hands on a working quantum system first. Just last week I wrote about how a team of researchers in the Netherlands successfully teleported quantum data from one computer chip to another computer chip, a demonstration that hinted at a future in which quantum computing and quantum communications might become a mainstream reality. That still seems a long way off—physicists agree that transmitting quantum information, though possible, is unstable. And yet! The U.S. Army Research Laboratory today announced its own quantum breakthrough. A team at the lab's Adelphi, Maryland, facility says it has developed a prototype information teleportation network system based on quantum teleportation technology. The technology can be used, the Defense Department says, to transmit images securely, either over fiber optics or through space—that is, teleportation in which data is transmitted wirelessly. The DoD says it can imagine using this kind of technology so military service members can securely transmit intelligence—photos from "behind enemy lines," for instance—back to U.S. officials without messages being intercepted. But this kind of technological advance, especially in a government-run lab, is significant for the rest of us, too. Quantum computing would offer unprecedented upgrades to data processing—both in speed and scope—which could enhance surveillance technologies far beyond what exists today. "That's why the NSA in particular is so interested in quantum computers and would like to have one," the physicist Steve Rolston told me last week, "and probably would not tell anyone if they did."
<urn:uuid:04181f16-c862-4668-940c-8076d7ffe1b7>
CC-MAIN-2017-09
http://www.nextgov.com/defense/2014/06/us-army-says-it-can-teleport-quantum-data-now-too/86251/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00390-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950571
367
2.796875
3
Boeing, the largest aerospace company on the planet, builds a lot of planes. The organization manufactures aircraft for airlines and governments in more than 150 countries. And every one of those planes contains thousands of wires that connect its various electrical systems. These complex webs of wires don't weave themselves, and putting all the parts together is a monumental task. Each week, thousands of Boeing's U.S. workers construct "wire harnesses," or "people-size portions of the electrical systems" designed to help them join the various shapes and sizes of wires, according to Kyle Tsai, a research and development (R&D) engineer with Boeing Research and Technology (BRT), the company's central R&D organization. "Wire harnesses are very complex and very dense, and the technicians have to use what are, in essence, roadmaps to find the attachment points and connector pins," Tsai says. "There are so many that it can be information overload at times." Today, Boeing wire-harness techs mostly use PDF-based assembly instructions on laptops to help them find the appropriate wires, cut them to size and then connect the components via wire harnesses. Techs must constantly shift their attention back and forth between on-screen roadmaps and harnesses. And they use a lot of CTRL+F keyboard commands to find specific wire numbers, which means techs have to frequently use their hands to manipulate computers and navigate wire documentation. For 20 years, Boeing had been looking for a hands-free system that used some sort of wearable computer to reduce production time and related errors. The company experimented with an augmented reality (AR) application and "head-mounted, see-through display" called the Navigator 2 as early as 1995, according to the 2008 book Application Design for Wearable Computing. But effective and affordable hardware just didn't exist … until Google released its Glass smartglasses. In the past, "everything was hardware constrained: battery life, screen, weight," according to Jason DeStories, another R&D engineer with BRT. "Now we're in a era where hardware is no longer the constraint." Smartglasses get off the ground at Boeing When Google released the first "Explorer Edition" of Glass in the fall of 2013, DeStories says his manager purchased a few of the early smartglasses and asked him to start tinkering, to see if Glass might be the hardware the company needed. In early 2014, DeStories and his team got to work on a demo application designed specifically for wireless harness assembly. The challenge, according to DeStories: "How do we get that information to the technician right at the time they're doing it, in the shortest manner possible, with the simplest user input?" The goal was to "reduce [technician's] time from intent to action," he says. It didn't take long to come up with a barebones Glass app, and DeStories and his crew started to slowly show it to small groups of wire harness techs. The Glass app was a hit and served as a proof of concept. Later that year, an internal Boeing newsletter detailed the initiative, called "Project Juggernaut," and drew attention across the company. "It piqued a whole lot of interest," DeStories says. "That really forced us to take a step back and realize we needed an enterprise solution, not just this one-off application." Juggernaut meets Skylight To take Project Juggernaut to the next level, Boeing had to find a secure, reliable way to connect Glass to its wire harness database and pull the necessary information in real time. 
"Boeing put a competitive RFP [request for proposal] out into the market that I'm sure all of the Glass at Work partners heard about," says Brian Ballard, CEO and cofounder of APX Labs, maker of the Skylight enterprise platform for smartglasses and a 2016 World Economic Forum Tech Pioneer company. "They described a pain point in their manufacturing process, and we saw it and were like, 'Oh man, that is exactly what we can do.' We started a competitive bid for it and won the work." Boeing and APX Labs started work to integrate Skylight in early 2015, according to Ballard, and between March and November, DeStories and his team traveled to various Boeing locations and showed an early offline version of the Glass app to determine whether or not the concept would stand up to scrutiny on the harness floor. The team also did an internal training event with about 20 people, expecting some pushback. "Out of those 20 people, maybe two were not that excited," he says. "The rest were excited to use it and picked up on it pretty quick." It was clear, that "across the board, it was going to work," according to DeStories. Putting Glass to work on Boeing wire harnesses During the pilot, when a participant showed up for work she'd first visit a lockbox to check out a Glass unit, and then go to her computer to login and authenticate the device on the network, according to DeStories. For authentication, the tech would put on the smartglasses and scan a QR code generated by the system on her computer, which then pushed the wire harness app to the smartglasses. Next, the tech would head to her work station on the assembly floor, grab the next "shop order," and then scan another QR code on the box of components, which provided necessary status updates or notes and told her where to get started, DeStories says. The Skylight app supports touch gestures and voice commands, so a tech could, for example, pick up a box of components, and then begin the process by saying, "OK Skylight. Start wire bundle. Scan order." Next, she might say, "OK Skylight. Local search. 0447," to quickly launch an assembly roadmap for the wire No. 0447 on her smartglasses heads-up display. If she came across a problem she couldn't solve on her own, she could stream her point-of-view video of the wire harness to an expert in another location for assistance. Or she could check to see if there was another assembly video already available for playback on the smartglass display. "When you truly sit in the pilot seat, from the technicians point of view, having a hands-free device where the information is just always in the upper right corner of your eye really starts to make sense," DeStories says.
<urn:uuid:c7177841-378a-444f-9c48-f04cbf761536>
CC-MAIN-2017-09
http://www.itnews.com/article/3095132/wearable-technology/google-glass-takes-flight-at-boeing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00090-ip-10-171-10-108.ec2.internal.warc.gz
en
0.968821
1,347
3.125
3
The humans don't know, it is probably better that way They are glued to their devices all night and all day. Smart homes, bluetooth, and the internet of things, everything is connected these days... so it seems. If Dr. Seuss wrote a book about the Internet of Things, he could have come up with prose similar to the lines above. But he likely wouldn’t have thought to incorporate NFC tags on the pages of the book to help make the story more engaging. Lisa Seacat DeLuca, however, is no mere Dr. Seuss. The author of a new children’s book titled “Internet of Mysterious Things,” Deluca also happens to be the most prolific female inventor in IBM’s history, with over 600 patents filed with the U.S. Patent and Trademark Office. The book, which opens with the four lines above, is “probably one of the nerdiest children’s books,” DeLuca says in a Kickstarter video for the book. “I like to say it is a children’s book with a touch of technology.” She came up with the idea for the story after noticing that her kids—two sets of twins—were asking questions about the Internet of Things. “I have IoT stuff all over the house, and my kids were asking me: ‘how does it work,” DeLuca says, who is a Technology Strategist and Software Engineer for the IBM Commerce, Cognitive Incubation Lab. In her book, DeLuca takes some license in explaining how IoT technologies ranging from activity trackers to connected security alarms work, enlisting the help of unicorns, leprechauns, and zombies to assist in the storytelling. “I wanted the book to be really fun to keep kids engaged,” DeLuca says. Related: The 25 Most Influential Women in IoT The idea for the story itself was born when DeLuca was at a tech conference and observed that nearly everyone was staring at their phones when exiting the event. “They were literally bumping into each other,” she recalls. She began work on the story shortly after that. “I came up with the first page about the humans not knowing about this alternative world [of the Internet of Things] because we are so into our phones and the story came together.” DeLuca hooked up with an illustrator, Adam Record, who had worked on a children’s book for her sister, Sara Crow. Titled “Even Superheroes Have to Sleep,” Crow’s book was picked up by Penguin Random House. As an illustrator, Record has worked on more than 50 children's books and companies like Disney, Harpercollins, and Penguin Books. In about a year, the artist had completed the drawings. DeLuca employed the help of the illustrator for her first self-published children’s book, Constantine Petkun, from Latvia to create animations that launch in an app when tapping a mobile phone over the NFC tags located throughout the book. DeLuca compares the concept to a living book you might find in the Harry Potter series: “Each page can come to life through the help of the NFC tags, but it’s not like you are holding a tablet; it’s a real book,” DeLuca explains. For now, the procedure for adding NFC tags to the book is low tech. “I manually take the tags and stick them on every page,” DeLuca says. “If the book becomes super popular and I raise a ton of money, I am sure I can find a printer that can embed the tags.” When asked about the goals of the book, DeLuca said she had a couple: “I wanted to provide a way for children and their parents to learn together how common everyday technologies work,” she says. 
“And I hoped to use technology in a counterintuitive way that suggests we periodically put down our devices, so we don't miss out on our real lives.” Ultimately, it is hard for a conventional book to compete against a smart device for our attention. The “Internet of Mysterious Things,” however, is unique, says Tamara McCleary, CEO of Thulium.co. “There is no other book like this on the market currently,” McCleary says. “When I read the book, I was struck by how Lisa lures us in willingly with her whimsical prose paired with equally entertaining illustrations, but at the same time she is offering us a more in-depth real-world explanation of complicated technology through her NFC tags and webpages devoted to fun facts and further, more-educational instruction.” McCleary says that Lisa’s role in technology makes her uniquely qualified to write a book like this. “Lisa is such a perfect person to deliver these nuggets of wisdom as she comes from a place of authenticity, both as a mother and a leader in technology,” she says. “I actually know a lot of adults who could use this information to understand the Internet of Things better, and I hope her book is folded into early childhood education. It needs to be on every public library’s shelves.”
<urn:uuid:a4fb20c7-07ac-4039-9da7-bad9daff507a>
CC-MAIN-2017-09
http://www.ioti.com/iot-trends-and-analysis/iot-enabled-children-s-book-rockstar-inventor
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00510-ip-10-171-10-108.ec2.internal.warc.gz
en
0.973824
1,123
2.53125
3
Head-mounted computers like Google Glass are a useful way to view content and interact with the world on the move, but one drawback is the lack of a physical interface on which the user can click, drag or navigate content. Japan's NTT DoCoMo showed a prototype technology at the Ceatec exhibition this week that aims to fix that. The technology essentially takes any paper surface -- a sheet of writing paper or the page of a book -- and with the tap of a finger turns it into a display before your eyes. I got to try out the technology during a demonstration at the show. DoCoMo's prototype glasses are a clunky affair, designed for engineering tests rather than consumers. They have two clear lenses through which I could see the world. The navigation system works with a small motion-sensing ring, worn on the finger you'll be navigating with. I reached down and picked up an ordinary paper notebook, but when I tapped on its cover with my finger the outline of a display appeared. Tapping again brought up a simple interface that looked like it had been projected onto the notebook. From there, I could navigate menu items by swiping icons on the notebook and tapping the ones I wanted to select. Tapping the movie icon played a movie, which appeared to be displayed on the notebook cover. But the image was produced only in the glasses, and to anyone standing nearby it looked like I was tapping away on a blank page. To them I probably appeared slightly mad, but to my eyes it all made sense. So instead of using voice commands or tiny buttons to control the display in the glasses, I could pick up something tangible. It also worked on a small pad of sticky notes, with the menu adapting to the size by showing one icon per screen instead of the grid I saw on the notebook. The interface is among a number of prototypes for head-mounted displays being demonstrated by NTT DoCoMo, Japan's largest cellular operator. A second, called the space interface, allows a person to manipulate virtual objects. In the demonstration, a virtual object hovered in front of me, and I could reach out my hands and pinch the sides and stretch it to make it bigger. A small infrared camera on the glasses keeps track of the wearer's hands and interprets those movements to manipulate the object they appear to be holding. A second demonstration of that technology allowed me to bounce an animated toy bear up and down on my finger. As it descended, I held my hand out to where it was falling and when it reached my finger it bounced back up. I could also bat it from side to side. As with the first system, those nearby see nothing more than a person waving their hands in the air, but to the wearer it all makes sense. A third wearable technology used augmented reality to provide additional information about the world around the wearer. When a person approaches, the head-mounted device uses facial recognition to try to identify them, then projects their name and other details on a display. So rather than scramble to remember someone's name, you can reply with a confident greeting and maybe a remark about the last time you saw them. It feels very futuristic and might raise concerns about privacy. The same system can be used to translate text, on a restaurant menu for example. If you hold the menu up in front of you, the headset reads it and displays the translation. All three technologies are research projects and there are no immediate plans to turn them into commercial products.
Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn's e-mail address is [email protected]
<urn:uuid:c03daad7-b264-4115-ab55-fd46d8d28294>
CC-MAIN-2017-09
http://www.computerworld.com/article/2485328/computer-peripherals/docomo-shows--touch-display--for-computer-glasses.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00086-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949502
776
2.515625
3
Scores of filing cabinets containing thousands of patient medical records are disappearing into the cloud. Use of electronic health records systems in doctors' offices has doubled in recent years, according to a new report released Tuesday by the Centers for Disease Control and Prevention. In 2012, 72 percent of office-based physicians reported using electronic health records, up from 35 percent in 2007, the CDC says. The report finds that adoption of electronic health records was higher among younger physicians compared with older physicians, among primary-care physicians rather than specialty doctors, and among larger practices than smaller. This digital revolution among doctors is driven in part by the stimulus bill, which created a system for incentive payments to Medicare and Medicaid physicians who could use electronic health records to improve patient care. While there are plenty of anecdotes of patients irritated by their doctors looking at a screen during their appointment, early evidence shows using electronic health records can improve health outcomes. Online systems can remind physicians when patients are due for vaccinations and prescription refills, as well as offer a complete snapshot of the patient's health history so that doctors can make more informed decisions about treatment. The Office of the National Coordinator for Health Information Technology is helping guide implementation of the HITECH Act reforms. Led by Karen DeSalvo, the office is currently navigating the process of getting different electronic health systems to talk to each other—a process known as interoperability. "We have made impressive progress on our infrastructure, but we have not reached our shared vision of having this interoperable system where data can be exchanged and meaningfully used to improve care," DeSalvo said at a recent health information-technology conference. With electronic health records systems being put to use in thousands of doctors' offices nationwide, the next step is to be able to transfer patient data across systems, allowing patients with complex conditions to share their medical information with specialty doctors and hospitals.
<urn:uuid:979aca84-d3fc-41b8-8668-019bd88ddd99>
CC-MAIN-2017-09
http://www.nextgov.com/health/2014/05/paper-medical-records-are-vanishing-cloud/84859/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00438-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960132
380
2.75
3
Usually, when we identify trends in computing at IT Business Edge, we are referring to the immediate future-a few months or, at most, a year or two ahead. But two recent pieces of news have implications worth paying attention to in a longer timeframe. And, as often is the case with futuristic computing, the ideas are strange: One is living computers based on rotten food (actually, Escherichia coli, or E. coli), and the other involves famously weird quantum science, in which the computer's individual bits aren't called on to be either a zero or a one, the bedrock process of today's devices. Several sites, including TechEye.net, report on the use of E. coli for computations. The focus of the research is on the composition of logic gates. A logic gate, another fundamental underpinning of computing, takes one or more inputs to produce a single output. Put enough of these together and a full computer task can be completed. There are seven types of logic gates. For instance, in an "and" logic gate (AND gate), all inputs (A and B) must be "true"-signified in computer-ese as a one-for the output also to be considered true. The other three possible combinations for a two-input AND gate (two falses, true and false, false and true) all create a false. OR gates work in a similar fashion, but the requirements to meet the conditions to reach a result of "true" are different. In an OR gate, A or B must be true for the result to be a one. The ones and zeros are created by higher and lower levels of electricity. That's where the advance was made. Researchers at the University of California at San Francisco, possibly doing their research in a poorly kept cafeteria, used genes inserted into E. coli strains as the logic gates. Subsequently, the gates released a chemical signal that enabled them to connect to each other as they would on a circuit board, the story says. The ultimate goal was to create a language that would, in essence, enable code to be written as it is for more traditional logic gates. The other advance was reported by Ars Technica, which describes research published by the Applied Physics Letters by English and Australian researchers. As suggested by the logic gate description above, classical computing is binary: the choices are zero or one. The status of each bit is independent of any other. Anyone who has read anything about the quantum world probably knows what comes next. In quantum computers, the story says, quantum bits (qubits) are one and zero simultaneously. The operations that are done to the qubits don't switch them from ones to zeros or vice versa; rather, they change the probability that the qubit will eventually be in either state. The second, and related, idea is that an operation on one of the qubits impacts all on that string. The story says that mistakes come from two areas. One is the "intrinsic uncertainty" associated with quantum operations. The other is purely physical: The quantum world is weird because it is so small. This makes it tricky (to say the least) to come up with equipment that can poke and prod the qubits without gumming things up. The remainder of the story describes what the team set out to do-which involves directional couplers and interferometers-and what it means. It is too early to tell precisely what these new types of computers would be used for or when the research will show up in products. Quantum science already plays a role in security, but computers based on the approach would be orders of magnitude more complex. 
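For readers who want to see the AND and OR behavior described above in concrete terms, here is a minimal sketch of the two gates as truth tables in Python. This is our illustration, not code from either research project, and the function names are ours.

    def and_gate(a, b):
        # Output is 1 (true) only when both inputs are 1.
        return 1 if (a == 1 and b == 1) else 0

    def or_gate(a, b):
        # Output is 1 (true) when at least one input is 1.
        return 1 if (a == 1 or b == 1) else 0

    # Print the full truth table for both two-input gates.
    for a in (0, 1):
        for b in (0, 1):
            print(f"A={a} B={b}  AND={and_gate(a, b)}  OR={or_gate(a, b)}")

Whether a gate is built from transistors or from engineered E. coli strains, the input/output contract is the same; only the signaling medium (voltage levels versus chemical signals) changes.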
In any case, it is important to have a general idea of what is going on. Scientists have long suspected that Moore's Law on the continuing growth of computing power and reduction in its costs would, like Brett Favre, eventually hit its physical limits. One of these seemingly strange approaches, or both, may allow Moore to play on for decades longer.
<urn:uuid:a79eec80-0c3a-45bc-b08b-80be193fa80b>
CC-MAIN-2017-09
http://www.itbusinessedge.com/cm/blogs/weinschenk/is-quantum-mechanics-or-rotten-food-computings-future/?cs=44741
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00206-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956801
822
3.375
3
NOAA, Navy swimming with the SHARCs - By John Breeden II - Sep 21, 2012 When I wrote a column recently about DARPA’s AlphaDog, and how the mule-like robot could one day help Marines carry and even recharge their gear in the field, a lot of people were impressed. But agencies whose focus is a little bit more aquatic in nature, such as the National Oceanic and Atmospheric Administration and the Navy, countered with information about a more advanced robot they are already using in the field. While the Marines work with their mules, the Navy and NOAA are swimming with SHARCs. SHARCs are Sensor Hosting Autonomous Research Craft created by Liquid Robotics. Whereas the AlphaDog can travel 20 miles on its own with limited user intervention, SHARC Wave Gliders are now routinely swimming the world’s oceans, traveling hundreds of miles and going for up to a year without personal human contact. NOAA launched its first Wave Glider in April 2011 and has since found plenty of other uses for them, including one deployed to monitor Hurricane Isaac. The Naval Postgraduate School recently acquired two Wave Gliders and opened its Sea Web and Wave Glider Laboratory. The secret of the SHARC — actually, the Navy calls them SHARCs; NOAA refers to them as Wave Gliders — is that it has two power systems. The first is an array of solar panels that floats above the water on a surfboard-size keel and is used to power instruments below that can measure just about anything from ocean salinity to the strength of whale songs. But the second part of the setup is what makes the Wave Gliders so amazing. The surface part of the robot that floats is tethered to a submersible that hangs seven meters into the water. When the top part of the robot rises on a swell, it pulls the lower part up too. Fins on the submersible direct the water and force the craft forward, somewhat like the way an airplane moves through the air. Then when the float comes off the wave, the lower part of the robot sinks down, but its fins rotate in the opposite direction, and it gets pushed forward once again. So it can always move forward as long as there is wave action. The solar power system at the top of the robot powers navigation control and can direct its movements and hold a true course, or its path can be changed remotely by a human operator if needed. Right now the Wave Gliders are mostly being used for research, but the possibility of more dangerous work, such as scanning for minefields or even espionage tasks, could be in the cards for an always-on, always moving, low profile craft that can operate independently anywhere in the world without any risk to human life. John Breeden II is a freelance technology writer for GCN.
<urn:uuid:3dbde4fe-3697-49ab-b250-ff2991209b77>
CC-MAIN-2017-09
https://gcn.com/articles/2012/09/21/sharc-robotic-wave-gliders-noaa-navy.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00382-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952718
587
2.671875
3
Device manufacturers could now move AI-level processing from the cloud down to end users, PC Magazine reports, with one New York computer science professor saying the technology means that now "every robot, big and small, can now have state-of-the-art vision capabilities." The article argues that this standalone, ultra-low power neural network could start the creation of a whole new category of next-generation consumer technologies. New Chip Offers Artificial Intelligence On A USB Stick An anonymous reader writes: "Pretty much any device with a USB port will be able to use advanced neural networks," reports PC Magazine, announcing the new Fathom Neural Compute Stick from chip-maker (and Google supplier) Movidius. "Once it's plugged into a Linux-powered device, it will enable that device to perform neural network functions like language comprehension, image recognition, and pattern detection," and without even using an external power supply.
<urn:uuid:6b226c87-3da4-47c9-97e6-0f15de94e8b9>
CC-MAIN-2017-09
http://www.hackbusters.com/news/stories/594947-new-chip-offers-artificial-intelligence-on-a-usb-stick
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00306-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917511
187
2.578125
3
In our first entry on 2012 storage trends, we look at an aspect of flash-based storage, not that flash storage is suddenly a trend--it's reality. What is a trend, though, is the move to flash-dependent systems. Flash-dependent storage systems are systems that are either all-flash or systems where flash plays a critical role in the delivery of data. The first to keep an eye on are flash-only storage systems. These are different from solid-state drives (SSDs) or solid-state storage appliances; they are complete storage systems with typically a robust storage feature set like snapshots and replication. They are designed to compete directly with traditional legacy storage systems instead of augmenting them like other solid-state storage solutions do. Their big differentiator is that there is no mechanical disk storage in them. To keep costs down, these vendors use technologies like thin provisioning, cloning, and deduplication to reduce the cost per gigabyte concern that you would typically have when considering flash-based media. There is no doubt that the above combination of technologies can drive the cost per gigabyte of flash-based media into the realm of a high-performance 15K RPM hard drive-based system. There are flash-dependent systems where flash plays a critical role in the delivery of data to the application but that still have some form of mechanical storage available to them. These systems are typically using flash as a cache, as we discussed in our article "The Advantages Of Storage System Based Caching," or they're using flash as a primary tier. In both cases flash is designed to augment the use of high-capacity SATA-based mechanical hard disk (HDD) drives instead of lower capacity high-performance SAS-based hard drives. The goal of these systems is to deliver better performance than a storage system configured with high-performance hard drives and no flash, but do so at a lower price point. The combination of HDD and flash allows for a much smaller allocation of memory-based storage, which also helps keep prices down. The key difference is the sensitivity to, or likelihood of, a cache- or memory-based tier miss, which would mean that data has to be retrieved from the mechanical hard drive storage tier. In flash-only systems there is no chance of a miss, but there is a likelihood of a higher cost. If you are looking for a new high-performance storage system to solve a broad range of application performance problems, the flash-only systems certainly warrant strong consideration, especially if the data is not cache friendly. If you need a new storage system but your performance needs are more modest, where an occasional access from a mechanical hard drive storage tier and the resulting lower performance is not an issue for you, then a flash-dependent system may be a better option. If your current storage solution performs admirably but you need a performance boost for a very specific application set, then one of the more traditional solid-state storage appliances or servers with installed PCIe-based solid-state storage devices may be more appropriate. In reality we think that most will end up with a combination of server-based caching, typically via a PCIe solid-state storage device, and either a flash-only system or flash-dependent system.
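To see why that sensitivity to a cache or tier miss matters, a rough back-of-the-envelope model helps. The latency figures below are assumptions chosen for illustration, not vendor specifications.

    def effective_latency_ms(hit_ratio, flash_ms=0.1, hdd_ms=8.0):
        # Weighted average of flash-tier hits and mechanical-disk misses.
        return hit_ratio * flash_ms + (1.0 - hit_ratio) * hdd_ms

    for hit_ratio in (1.00, 0.99, 0.95, 0.90):
        avg = effective_latency_ms(hit_ratio)
        print(f"hit ratio {hit_ratio:.2f}: ~{avg:.2f} ms average access")

Even a few percent of misses quickly dominates the average access time, which is one reason flash-only systems appeal for workloads that are not cache friendly.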
<urn:uuid:a5d2faae-c127-48f2-bccf-760d51f898a6>
CC-MAIN-2017-09
http://www.networkcomputing.com/storage/flash-dependent-storage-systems-take-2012/919010245
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00603-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939795
664
2.6875
3
Storage Networking 101: Configuring Disk Arrays
It involves some tedium, but configuring disk arrays is the most critical part of building a SAN. Here's what you need to know.
The most critical, sometimes tedious, part of setting up a SAN is configuring each individual disk array. In this Storage Networking 101, we'll delve into best practices and cover the general concepts you must know before configuring SAN-attached storage. There are three general steps when configuring a disk array:
- First, you create a RAID set. It can be any type of RAID the array supports, and we'll just assume RAID-5 for this article so that we can talk about hot spares.
- Second, you can either slice up the RAID set to present multiple LUNs to a host, or you can create "RAID Groups," as most vendors call them. This is a completely optional step, but it can make your life easier.
- Third, you must assign LUNs to a host.
Create a RAID Set
The first step can be done many ways. Say you have an array that holds 14 disks per tray, and you have four trays. One option is to create two (or more) RAID-5 volumes on each tray. You can then assign part or all of each RAID-5 volume to various hosts. The advantage to this method is that you will know which hosts use what specific disks. If the array and the three additional trays were purchased at the same time, it actually makes more sense to allocate the RAID sets vertically, so that a single tray failure doesn't take out the RAID volume. With only four trays this means you'll have three disks' worth of usable space per 4-disk RAID-5 volume: probably not a good use of space. More often people will create huge RAID-5 sets on the arrays. There's a balance between performance and resiliency that needs to be found. More disks mean better performance, but they also mean that two disk failures at once could take out all of your data. Surprisingly, multiple disk-at-once failures are quite common. When the array starts rebuilding data onto a previously unused disk, it frequently fails.
Configure RAID Groups
The second step causes quite a bit of confusion. Regardless of how you've configured the RAID sets in the array, you'll need to bind some amount of storage to a LUN before a host can use it. The LUN can be an entire RAID-5 set (not recommended), or it can be a portion. The partitioning method ensures that you aren't giving too large a volume to a host. There are many reasons for this:
- Some file systems cannot handle a 1TB or larger volume
- Your backup system probably won't be able to back up a file system that's larger than a single tape
- The important one: more LUNs presented to the host (seen as individual disks by the OS) means that separate I/O queues will be used
Back to the second step: RAID Groups. A 1TB RAID-5 set partitioned into 100GB chunks, for example, will provide 10 LUNs to deal with. If you don't care what nodes use what disks, you can just throw these LUNs into a group with other LUNs. I prefer to keep one RAID group per host, but others see that as limiting flexibility. Some hosts need a dedicated set of disks, where you know that only one host will be accessing the disks. A high-traffic database server, for example, should not have to contend with other servers for I/O bandwidth and disk seeks. If it truly doesn't matter to you, simply create a bunch of LUNs, and assign them to random groups. It is also important to create and assign "hot spare" coverage. Spare disks that are left inside the array are "hot" spares.
They can be "global," so that any RAID volume in the event of a failure uses them, or they can be assigned to specific RAID volumes. Either way, ensure you have a hot spare, if you can afford the lost space. If not, be sure to monitor the array closely—you'll need to replace any failed disk immediately. This is where it gets tricky. Different storage arrays will have different terminology, and different processes for assigning LUNs or groups of LUNs to a host. Assign Your LUNS Step three, "assign LUNs to a host," means that you're going to map WWNs to LUNs on the array. If you didn't, then any host zoned properly could see all the volumes on the array, and pandemonium would ensue. Be cautious about certain cheaper storage arrays, too. They may not even have this feature by default, until you purchase a license to enable it. While the purveyors of limited-use technology call this feature "WWN Masking" or "SAN-Share," the market leaders in the SAN space realize that it's required functionality. The most common approach is to create a "storage group," which will contain "hosts" and "LUNs" (or RAID groups with many LUNs). Whatever diverging terminology is used, the universal concept is that you need to create a host entry. This is done by manually entering in a WWN, or connecting the host and zoning it appropriately so that the array can see it. Most arrays will notice the new initiator and ask you to assign it a name. Once your hosts, and all their initiator addresses, are known to the array, it can be configured to present LUNs to the host. One final note about array configuration. You'll be connecting two HBAs to two different fabrics, and the array will have one controller in each fabric. The host needs to be configured for multipathing, so that either target on the array can disappear and everything will continue to function. We'll dedicate an entire article to host configuration, including multipathing and volume managers, but be aware that the disk array side often needs configuring too. The majority of disk arrays require that you specify what type of host is being connected, and what type of multipathing will be used. Without multipathing, LUNs need to be assigned to specific controllers, so that the appropriate hosts can see them. Once LUNs are assigned to a host, they should be immediately available to the operating system, viewed as distinct disks. Think about this for a moment. You've taken individual disks, and combined them into RAID volumes. Then, you've probably partitioned them into smaller LUNs, which is handled by the disk array's controllers. Now the host has ownership of a LUN, comprised of possibly 10 different disks, but each LUN is smaller than individual disks. The host OS can choose to stripe together multiple LUNs, or even partition individual LUNs further. It's quite fun to think about.
<urn:uuid:73cc3195-51b3-42d2-bdd9-62a638d9ed44>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/netsp/article.php/3698291/Storage-Networking-101-Configuring-Disk-Arrays.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00479-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941685
1,442
2.75
3
Educational games are getting contagious
Do you think you have what it takes to stop a plague in its tracks? The Centers for Disease Control and Prevention has released an iPad app called "Solve the Outbreak" that lets players take on the role of a disease detective sent by the CDC to take control of an outbreak scenario. In real life, new outbreaks happen every day, and the CDC sends out its investigators to determine the causes, so treatment can be initiated. The game rates how well players handle the fictitious situation and is designed to help the public learn about what the CDC does on a daily basis. The CDC's interest in games isn't limited to making its own. The agency has also taken an interest in Plague, Inc. -- a tablet game where players try to create and spread a deadly disease -- and has even asked the game's creator to speak at the CDC offices. Using games and mobile apps to help educate the public about government activities has become increasingly popular. Examples include "America's Army," which has even been made into interactive books and comic books. At the Massachusetts Institute of Technology, researchers are using a "Tron"-like game to find ways of improving network security. Muzzy Lane Software has a game designed to teach people how government works. And the European Space Agency is using a game to help improve its software for controlling robotic space flights. Would-be disease detectives can download the app at the iTunes store and maybe save us from a plague or two.
Posted by Greg Crowe on Mar 20, 2013 at 9:39 AM
<urn:uuid:58c00d28-23ae-409d-a8e8-89d2ff0dc191>
CC-MAIN-2017-09
https://gcn.com/blogs/mobile/2013/03/educational-games-getting-contagious.aspx?admgarea=TC_Mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00479-ip-10-171-10-108.ec2.internal.warc.gz
en
0.970799
335
2.96875
3
First, the authors describe basic usage syntax for inline assembly (inline asm) embedded within C and C++ programs. Then they explain intermediate concepts, such as addressing modes, the clobbers list, and branching stanzas. More advanced topics, such as memory clobbers, the volatile attribute, and locks, are also discussed for those who want to use inline asm in multithreaded applications. See the article here.
<urn:uuid:7f07ed49-14a1-417a-acc0-17551db732b2>
CC-MAIN-2017-09
https://www.ibm.com/developerworks/community/blogs/5894415f-be62-4bc0-81c5-3956e82276f3/entry/a_guide_to_inline_assembly_for_c_and_c2?lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00599-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915549
100
3.171875
3
Swapping out an aging HDD for a new SSD, or moving information from one drive to another location via data archiving solutions, usually requires that the old storage medium be properly scrubbed of its data if it is no longer going to be used. By doing so, consumers and businesses prevent unwanted parties from accessing personal or corporate items on discarded drives. San Jose Mercury News business reporter Steve Johnson examined the media sanitization guidelines formulated by the National Institute of Standards and Technology. In their guidebook, the NIST authors recommended that when discarding one computer for another or migrating data to new drives, users "disfigure, bend, mangle or otherwise mutilate the hard drive so that it cannot be reinserted into a functioning computer." These measures, while seemingly extreme, are foolproof ways to eliminate items like credit card numbers or personally identifiable information. These pieces of data remain in stored files even after a user "deletes" them by dragging them to the desktop recycle bin or using the Delete key, and there is no consensus about the efficacy of drive-cleaning software or repeatedly overwriting old data with new. "While experts agree on the use of random data, they disagree on how many times you should overwrite to be safe," wrote the authors of a report from U.S. Department of Homeland Security's Computer Emergency Readiness team, cited by Johnson. "While some say that one time is enough, others recommend at least three times." When procuring a new SSD to upgrade computer performance, end users can consider destroying the old drive themselves with hardware tools and safety equipment. Before ensuring its destruction, they may want to use a Blu-ray burner to copy old data to a disc, or utilize an archiving solution. Additional hard drive sanitization methods Short of destroying the drive outright, there are several alternatives that may work. In The Courier-Journal, writer Kim Komando recommended formatting the drive by reinstalling Windows. However, this approach may not ensure complete data erasure, especially on high-capacity HDDs that have been heavily fragmented over time. Komando also touted the power of drive-wiping software, but came to a similar conclusion as the NIST coordinators. "If you don't need your hard drive anymore, physically destroying it is the best way to keep your data from falling into the wrong hands," she wrote. "I would still run the Boot And Nuke [erasure] program first, however."
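For readers curious what "overwriting with random data" looks like in practice, here is a minimal sketch for a single file. It is illustrative only: it does not touch free space or an entire drive, the file path shown is hypothetical, and on SSDs wear leveling can leave stale copies of data in cells the operating system cannot address, which is one reason the physical-destruction advice keeps coming up.

    import os

    CHUNK = 1024 * 1024  # write random data 1 MB at a time

    def overwrite_file(path, passes=3):
        # Overwrite the file's contents in place with random bytes, then delete it.
        # One pass may be enough; three is the conservative figure cited above.
        size = os.path.getsize(path)
        for _ in range(passes):
            with open(path, "r+b") as f:
                remaining = size
                while remaining > 0:
                    step = min(CHUNK, remaining)
                    f.write(os.urandom(step))
                    remaining -= step
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    # Example call (hypothetical path):
    # overwrite_file("old_tax_return.pdf")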
<urn:uuid:185749a5-2c2a-421f-ab50-a0e5e1c71177>
CC-MAIN-2017-09
http://blog.digistor.com/ensuring-proper-data-disposal-when-switching-to-ssd-drives/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00299-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94152
508
2.859375
3
A distributed Denial of Service (DDoS) attack is a simple variation of a Denial of Service attack in which the attacker initiates the assault from multiple machines to mount a more powerful, coordinated attack. A DDoS attack is an amplified Denial of Service attack. In DDoS attacks, multiple hosts simultaneously attack the victim server, resulting in a powerful, coordinated Denial of Service attack. This type of attack can even take down large sites such as Yahoo, Amazon and CNN, which are designed to handle millions of requests in a short amount of time. A DDoS attack is executed as follows: an attacker locates vulnerable machines, gains access to them, and installs an attack program. These machines are often referred to as "zombies". When the attacker decides to strike, the attacker commands all the "zombies" to start flooding the victim target. The owners of the "zombies" have no clue that their computers are being used to attack remote systems, and it is more difficult to locate the attacker because the attack program is not running from the attacker's computer. Recently, web servers have also been used to execute DDoS attacks. Web servers provide a more muscular attack platform with higher bandwidth and processing power—one server is the equivalent of 3,000 infected PCs. The concept of DDoS can also be used to achieve other goals, such as stealth scanning (just a few packets from each zombie) and distributed password cracking (using the aggregate processing power). The impact of a DDoS attack includes:
- Application outages
- Brand damage
- Financial loss due to the inability to process financial transactions
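As a rough, purely illustrative calculation (the per-machine bandwidth figures are assumptions, not measurements), the following few lines show why the "one server equals 3,000 infected PCs" ratio matters:

    def flood_gbps(host_count, per_host_mbps):
        # Aggregate attack traffic in gigabits per second.
        return host_count * per_host_mbps / 1000.0

    # Assuming roughly 1 Mbps of attack traffic per infected PC,
    # 3,000 zombies produce about the same flood as one well-connected server.
    print(flood_gbps(3000, 1), "Gbps from 3,000 PCs at 1 Mbps each")
    print(flood_gbps(1, 3000), "Gbps from a single high-bandwidth web server")

Either way, a few gigabits per second of unwanted, coordinated traffic is enough to saturate many victims' links and trigger the outages and losses listed above.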
<urn:uuid:c9425a52-eb01-4fb8-b6f4-26faa3f0a92c>
CC-MAIN-2017-09
https://www.imperva.com/Resources/Glossary?term=distributed_denial_of_service_ddos
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00299-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946409
331
3.359375
3
Happy Earth Day! As we take today to reflect on environmental impact, let's take a look at some solid data to see how much energy and carbon dioxide emissions are really saved by data center energy efficiency and renewable energy use. The recent memo from the Green Grid reintroducing their metrics for data center efficiency provides a great jumping off point to estimate the environmental impact of an average data center. With recent headlines like Apple's shift towards renewables and Google's funding of wind farms, not to mention our own efforts at Green House Data to improve efficiency and overall Power Usage Effectiveness (PUE), many in the data center industry are curious about the actual data at hand. If a company improves 2% of its overall carbon footprint through efficient or renewably powered data centers, what does that actually mean? Is it a large impact or just a PR opportunity?
Summary of Data Center Emission Data and Method
We took a theoretical 10 MW facility and assumed it was operating at capacity for simplicity of math and comparison. We measured this facility's emissions at 1.8 and 1.2 PUE to see how improving operations would affect emissions. Using the Carbon Usage Effectiveness (CUE) metric, which is a combination of PUE and the Carbon Dioxide Emissions Factor (CEF) of a location, we found the differences summarized in the annual emissions figures below.
eGrid Subregions – Find Your Carbon Usage Effectiveness
The United States electricity grid is divided into various subregions called eGrid regions (Emissions and General Resource Integrated Database). This database tracks air emissions rates, net generation, resource mix and more. The EPA provides this information and also the geographical division of eGrids, which we subsequently placed on a Google Map. If you select the region of your data center facility location, you can see the Greenhouse Gas Emissions Factor and CEF to help calculate your own CUE (Carbon Usage Effectiveness). CEF is the Carbon Dioxide Emission Factor, measured in kilograms of CO2 emitted per kilowatt-hour (kg CO2e/kWh), as described by the Green Grid. The EPA reports the Greenhouse Gas Emissions Factor (GGEF) of each eGrid subregion. The formula for CEF is as follows:
CEF = (GGEF / 0.293) / 1000
The GGEF is measured in kg of CO2 emitted per MBtu (million Btu), so dividing it by the number of watt-hours in one Btu (0.293 Wh) gives the amount of CO2 emitted in kg per megawatt-hour; dividing that amount by 1,000 provides the CEF in kg per kWh. As an example, the Rocky Mountain eGrid is RMPA and has a GGEF of 254.6387. Any facility located in the RMPA grid region has the following CEF:
254.6387 kg CO2e/MBtu ÷ 0.293 Wh/Btu = 869.07 kg CO2e/MWh = 0.86907 kg CO2e/kWh
The Annual Emissions of a 10 MW Data Center
To find the total emissions in kg of a 10 megawatt data center facility, we first have to calculate the Carbon Usage Effectiveness (CUE) at both 1.8 and 1.2 PUE. The CEF is multiplied by PUE to get CUE, which is multiplied by annual energy use in kWh to find the total emissions of a data center in kg of CO2.
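Readers who want to script this conversion can do so in a few lines. The sketch below is ours, not the Green Grid's or the EPA's; it simply automates the CEF-to-CUE-to-emissions chain described above and reproduces, to rounding, the RMPA figures worked through by hand just below.

    HOURS_PER_YEAR = 8765.81   # average hours in a year, as used in this post
    WH_PER_BTU = 0.293         # 1 Btu is roughly 0.293 watt-hours

    def cef_kg_per_kwh(ggef_kg_per_mbtu):
        # eGRID factor is kg CO2e per million Btu; dividing by 0.293 gives
        # kg CO2e per MWh, and dividing by 1000 gives kg CO2e per kWh.
        return ggef_kg_per_mbtu / WH_PER_BTU / 1000.0

    def annual_emissions_kg(ggef_kg_per_mbtu, pue, it_load_kw):
        cue = cef_kg_per_kwh(ggef_kg_per_mbtu) * pue
        return cue * it_load_kw * HOURS_PER_YEAR

    rmpa_ggef = 254.6387  # kg CO2e/MBtu for the RMPA subregion
    for pue in (1.8, 1.2):
        kg = annual_emissions_kg(rmpa_ggef, pue, 10_000)
        print(f"PUE {pue}: {kg:,.0f} kg CO2e per year")

Swapping in another subregion's GGEF, a different PUE, or a different IT load is a one-line change.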
CEF * PUE = CUE
CUE * Total Annual Energy Draw = Annual Emissions
For our 10 MW facility, if it were in the RMPA grid region, the calculation would go as follows:
At 1.8 PUE: 0.86907 kg CO2e/kWh * 1.8 = 1.564 CUE; (1.564 CUE * 8,765.81 hours per year) * 10,000 kW = 137,126,485.77 kg annual emissions
At 1.2 PUE: 0.86907 kg CO2e/kWh * 1.2 = 1.043 CUE; (1.043 CUE * 8,765.81 hours per year) * 10,000 kW = 91,417,657.18 kg annual emissions
The chart below shows the annual emissions of a 10 MW facility in each subregion at 1.8 and 1.2 PUE. Although PUE is a variable metric and has been accused of manipulation in the past, if we assume a legitimate measurement, lowering PUE from 1.8 to 1.2 can result in millions of pounds of CO2 saved from the atmosphere. This chart shows the amount of CO2 saved in each grid region (calculated as [Emissions at 1.8 – Emissions at 1.2] * 2.20462 lbs/kg). Once again, this is for a 10 MW facility. The data points to two interesting conclusions, only one of which is really controllable by data center operators: (1) lowering PUE delivers dramatic reductions in carbon footprint and (2) the electrical grid region will significantly impact the emissions level of data centers. For many companies, including Green House Data and our favorite headline-grabbers like Google and Facebook, another weapon in the fight against emissions is renewable energy sources. The big companies are constructing their own renewable generation or investing in large scale privately owned wind farms and solar fields, removing themselves from the grid subregions entirely. Other companies purchase Renewable Energy Credits, meaning they are still impacted by the efficiency of their local, dirty grid, but at least are making an investment to reduce the ultimate carbon footprint of data center operations. In either case, it will be interesting to see if data center operators large and small begin to measure their Carbon Usage Effectiveness and total emissions on a yearly basis. Of course, these calculations are for regular operation pulling off the standard grid only and do not take into account factors like diesel generators, office supplies, executive travel, etc. But the above charts, maps, and formulas can at least help get operators on their way to measuring the carbon footprint of data centers.
Posted By: Joe Kozlowicz
<urn:uuid:164fc7c8-da5e-47e9-ae40-8808c4afd303>
CC-MAIN-2017-09
https://www.greenhousedata.com/blog/the-truth-about-data-center-carbon-emissions-and-pue
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00475-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91264
1,307
2.953125
3
Internet standards and protocols simply aren't all that exciting, or even that interesting, to most people. But they're necessary—especially in the case of IPv6 (Internet Protocol version 6), the much-needed update to the aging IPv4. IPv6 is the successor to IPv4, which, in Internet terms, is pretty ancient as it was created 40 years ago. IPv4 is the addressing scheme used to assign IP addresses to devices that connect to the Internet, and those addresses are quickly running out. IPv4 has approximately 4.3 billion addresses, and with all of the computers, tablets, smartphones, smart TVs, and what have you that can connect to the Internet these days, well, it's easy to see why those addresses are running out.
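The arithmetic behind that squeeze is easy to check: IPv4 addresses are 32 bits long, while IPv6 addresses are 128 bits, so for example:

    ipv4_addresses = 2 ** 32     # roughly 4.3 billion
    ipv6_addresses = 2 ** 128    # roughly 3.4 x 10^38

    print(f"IPv4 address space: {ipv4_addresses:,}")
    print(f"IPv6 offers {ipv6_addresses // ipv4_addresses:,} times as many addresses")

That is why the move to IPv6, however unglamorous, matters for every new device that wants its own place on the Internet.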
<urn:uuid:fdc9a9ff-ba75-4c1d-80d1-23d2b4782988>
CC-MAIN-2017-09
http://www.cio.com/article/2369399/mobile/79625-Techs-20-Biggest-Winners-of-2012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00351-ip-10-171-10-108.ec2.internal.warc.gz
en
0.968455
157
2.984375
3
MIT researchers are using nanotechnology to help doctors detect cancer in their patients sooner, increasing their odds of beating the disease. To diagnose cancer, doctors look for specific proteins secreted by cancer cells. The problem with these biomarkers is that they're difficult to detect, giving the cancer more time to grow and advance before it's discovered and treated. A group of MIT scientists, though, has developed a technology using nanoparticles that can interact with the cancer proteins to create thousands more biomarkers. The nanoparticles basically act as amplifiers. "There's a desperate search for biomarkers, for early detection or disease prognosis, or looking at how the body responds to therapy," Dr. Sangeeta Bhatia, a member of MIT's David H. Koch Institute for Integrative Cancer Research, said in a statement. Amplifying the biomarkers is critically important to cancer diagnosis because the proteins produced by cancer cells often are so diluted in the bloodstream that they're nearly impossible to detect. Stanford University researchers released a report a year ago noting that cancerous ovarian tumors can grow for 10 years before currently available blood tests can detect them. "The cell is making biomarkers, but it has limited production capacity," Bhatia said. "That's when we had this 'A-ha!' moment: What if you could deliver something that could amplify that signal?" The MIT research echoes work being done at Princeton University to make cancer more easily detectable. Princeton scientists reported this summer that they have had a breakthrough in nanotechnology and medicine that could make tests to detect diseases, such as cancer and Alzheimer's, three million times more sensitive. And late in 2008, researchers at Stanford University used nanotechnology in a blood scanner to detect early-stage cancers. "The earlier you can detect a cancer, the better chance you have to kill it," Shan Wang, a Stanford professor of materials science and electrical engineering, said at the time. "This could be especially helpful for lung cancer, ovarian cancer and pancreatic cancer, because those cancers are hidden in the body." Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "MIT uses nanotech to make cancer easier to detect" was originally published by Computerworld.
<urn:uuid:6148462d-85a3-4ad5-b6ee-42d6d0f4f5c6>
CC-MAIN-2017-09
http://www.itworld.com/article/2717126/consumer-tech-science/mit-uses-nanotech-to-make-cancer-easier-to-detect.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00119-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95041
522
3.53125
4
Microsoft encourages us to think of Linux, when we think of it at all, as an also-ran operating system for nerds. The last thing Microsoft wants us to think about is that there are some spaces where Microsoft is a distant number two and Linux is on top. Too bad, Microsoft: there are several such places. One such is HPC (High Performance Computing). At HPC's very highest end, supercomputers, Linux rules. The first computer to bust the petaflop, 1.0 quadrillion calculations per second, barrier? IBM's Roadrunner supercomputer running Linux. Out of the Top 500 supercomputers in the world, over 80% of them are running Linux. Better still, Linux manages to pull this off by largely using off-the-shelf components, unlike the supercomputers of years gone by. Instead of specialized hardware, the Roadrunner uses AMD Opteron and Sony, Toshiba and IBM's Cell processors. Yes, that's the same Cell CPU that's inside your Sony PlayStation 3. Linux has been making the most from the least in supercomputing since 1994, when Thomas Sterling and Don Becker, at NASA Goddard Space Flight Center's CESDIS (Center of Excellence in Space Data and Information Sciences), created the first Beowulf Linux-powered clustered supercomputer. That first system, which was made up of 16 486-DX4 processors connected by channel bonded Ethernet, proved you could deliver supercomputing performance with COTS (Commodity off the Shelf) based systems. I've always regretted that I had left Goddard several years earlier, so I never had a chance to get my name into a footnote of supercomputing and Linux history. HPC's real bread and butter isn't supercomputers though. It's managing, or trying to manage, the madness that is the financial markets. Wall Street runs on Linux. Almost all the major financial markets rely, to one extent or another, on Linux. For example, the CME (Chicago Mercantile Exchange) Group, the world's largest derivatives exchange, just joined the Linux Foundation. Solaris is also a major player in the world's markets. Microsoft, on the other hand, is probably best known for its colossal failure on the London Stock Exchange. What made this especially ironic was that Microsoft had made the London Exchange's decision to go with a .NET and Windows Server 2003 solution part of its "Get the Facts" anti-Linux campaign, including ads with fake headlines from "The Highly Reliable Times." Despite that flop, Microsoft is continuing to try to prove that it has Wall Street cred as a real HPC player. In its latest attempt, Microsoft is making no bones about the fact that Windows HPC Server 2008, scheduled to be released on Nov. 1st, will try to challenge Linux in HPC on Wall Street. I'm not terribly worried about it. When you talk about really high-end performance, you talk about Linux, Solaris and AIX, maybe HP-UX or VMS, or even z/OS on a mainframe. Windows? Windows isn't even in the conversation. Anyone who knows anything about HPC, which it seems didn't include the IT staffers at the London Stock Exchange, knows this. Someone, somewhere out there may have used HPC Server 2008's predecessor, Windows CCS (Compute Cluster Server) 2003, but I've never met them. HPC Server 2008 will probably have more users than CCS, but probably not that many more. After all, while HPC Server 2008 talks a good game with high-speed networking, cluster management tools, advanced failover capabilities, etc., etc., let's face it. It's still Windows. People may be willing to put up with a blue screen of death on their PC.
I don't think stock brokers will be so patient when an entire exchange goes down, especially in a market as volatile as this one. After all, as the London Times reported today, numerous rivals are popping up to challenge the London Stock Exchange, and, in part, that's because of the Exchange's Windows-based system failures. No, when it comes to big-time finances, Linux and Solaris are on top and Windows is an also-ran, and it's going to stay that way.
<urn:uuid:070ee1f3-9a7b-4628-9c3d-ef6a2dbb71bb>
CC-MAIN-2017-09
http://www.computerworld.com/article/2480627/data-center/where-windows-is--2-to-linux.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00347-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954221
881
2.640625
3
NSBRI, Alion aim for software to help keep pilots oriented - By Patrick Marshall - Mar 09, 2009
The National Space Biomedical Research Institute (NSBRI) commissioned a project to study — and develop countermeasures for — astronauts' spatial disorientation during space flight. Alion Science and Technology, which has developed models of human response at different accelerations for the Air Force, will conduct the study under a $1.73 million grant from NSBRI. Scientists have long known that human beings determine their physical position in the world by using a variety of cues. One of the most important pieces of data is the bending of cilia in the inner ear as they are pushed by the movement of fluids. Although that system works pretty well for Earth-bound people, it throws a few curves at pilots and astronauts. According to one recent study, spatial disorientation was a factor in more than 20 percent of all Air Force aircraft incidents that resulted in death, permanent disability, the destruction of an aircraft or property damage exceeding $1 million. Between October 1993 and September 2002, there were 25 high-performance fighter incidents in which spatial disorientation was a causal factor, and those incidents caused 19 fatalities and more than $455 million in damage. The problem is that although computers adequately measure changes in an aircraft's attitude, position and acceleration, there are no reliable sensors available for detecting a pilot's perception of those attributes. And even turning to remotely controlled aircraft does not solve the problem of spatial disorientation because studies have found that operators of remotely guided aircraft are susceptible to effects of spatial disorientation similar to those experienced in a craft. Alion's strategy combines knowledge about human physiology with data about an aircraft's movement and then uses software to predict in real time the disorienting effects that are likely to afflict pilots. Of course, it's not enough to simply know that a pilot might be experiencing disorientation. So Alion will work to develop appropriate means of notifying pilots and delivering corrective information. "Half of the battle is detecting when [an astronaut] is suffering from spatial disorientation, and the other is how to deal with it," said Ron Small, Alion's principal investigator on the project. "Zero gravity in space adds to the confusion with astronauts' visual cues, senses and perceived orientation. By developing countermeasures, we can help the spacecraft to be controlled properly, which not only can ensure the mission is effective but help keep the astronauts safe." Alion already has set up simulators in its work for pilots who navigate within Earth's atmosphere. Through these tests, researchers have found that multiple types of notifications and levels of corrective information could be called for, depending on the situation a pilot might be facing and the demands on his or her attention. For example, if an aircraft is about to stall at low altitude, multiple visual and auditory alerts could be issued. If the aircraft is at a higher altitude and in no immediate danger, more subtle alerts could be warranted. "When our confidence is low or if the risk to the vehicle and crew is low, we do nothing," Small said. "If the pilot recovers or our confidence is low, we do nothing. But, as our confidence in our assessment of [spatial disorientation] increases, we trigger progressively more intrusive countermeasures. 
We start with visual cues that help the pilot recover to straight and level flight.… Next we use audio cues to inform the pilot that [spatial disorientation] is suspected and how to recover.” Small said Alion also has experimented with a tactile vest, developed by the Dutch and the U.S. Navy, to see if it would help break through to the pilot’s active attention and assist with recovery. Alion’s grant runs through August 2011, and the company intends by then to test the system in a lunar lander simulator. Patrick Marshall is a freelance technology writer for GCN.
<urn:uuid:36bea7c1-2cc8-4a0a-b7a6-a99ece60a235>
CC-MAIN-2017-09
https://gcn.com/articles/2009/03/09/update2-spatial-disorientation.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00347-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948381
840
2.8125
3
Making smart buildings smarter
It's Monday morning. You've just parked your car and you're approaching the entrance to your office building. The Wi-Fi router in the lobby detects your smartphone as you come within range, and it sends commands that turn on the computer in your office as well as the lights and heat. By the time you reach your desk, everything is cozy and your computer is ready for work. Just as important, your company is saving big bucks on its utility bills. No, this capability isn't quite available, except perhaps in Bill Gates' house. But we can expect it soon. Researchers have tested using existing IT infrastructure — including smartphones, laptops, wireless routers and wired networks — to track building occupancy in real time and use that information to manage lighting, environmental controls and other services in buildings. Originally proposed in 2008 by Bruce Nordman and Alan Meier, researchers at the Lawrence Berkeley National Laboratory, the idea is called "implicit occupancy sensing," and according to Nordman, it promises huge savings. In some offices with occupancy sensors over every cubicle, they found that "many people are not in their office 50 percent of the time," Nordman said. "They may be on vacation, sick, traveling, in a meeting, lunch, who knows? But if you could have a light on only when someone is sitting there, then you would save 50 percent. And when you save on lighting you also reduce your air conditioning load." Nordman, Ken Christensen of the Department of Computer Science and Engineering at the University of South Florida, and other colleagues from those institutions and the University of Puerto Rico at Arecibo, recently published their findings from testing parts of such a system in a building at the Lawrence Berkeley National Laboratory complex. The infrastructure in their test included smartphones, networked computers, routers and other devices. By monitoring network addresses of devices and requests as well as automatic polling sent across the network, the software developed by the team was able to determine in real time the occupancy of any location in the building. The data, Nordman said, showed that the number of spikes in the network peaked around noon. Activity rose in the morning and fell in the afternoon, revealing patterns of people coming to work, powering up their computers, using them, then powering off. "Not only do we know the number, but we knew exactly which computers were on because we also had their IP and MAC addresses," Nordman added. The system was also able to triangulate the location of specific cell phones by detecting the wireless access points reported as available to the device in addition to its actual connections. One of the major benefits of implicit occupancy sensing is that, unlike dedicated occupancy sensors, it runs on infrastructure that is already in place for other purposes, Nordman said. "You get the data essentially for free," he said. "There is no cost to install or maintain this network, and you can get highly granular data." Compared to buying and installing sensors that typically connect to only a single device — such as a lighting control — and that are not connected to the network, Nordman said, "this is a much more powerful and less expensive way to do things." What's more, the system could be extended to include data from any peripherals connected to networked devices. 
You might, Nordman suggested, schedule computer cameras to take a photograph of all the lights in the ceiling once a week and automatically analyze the images to see if any lights are out and need to be replaced. Nordman acknowledged that the system isn't yet ready for deployment. "I've had no research funding for this topic yet," he said. "But eventually this will be done just because it makes sense." The major challenge in developing more robust systems, Nordman said, is developing standard communications protocols "so that the devices that are producing information can send it to the occupancy engine in the building. Then that engine can distribute the information back to devices, which then can utilize the information to change operations." Posted by Patrick Marshall on Dec 17, 2013 at 8:11 AM
<urn:uuid:2735c863-9443-408b-8120-60ac8c2b0273>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2013/12/smartphones-smart-buildings.aspx?admgarea=TC_Mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00467-ip-10-171-10-108.ec2.internal.warc.gz
en
0.97017
835
2.671875
3
On January 18, 2009, the SANS Internet Storm Center reported the first instances of what is now being described as a DNS DDoS (distributed denial-of-service) attack (see http://isc.sans.org/diary.html?storyid=5713). The attack is simple: the attacker spoofs the victim's source address in a DNS query for '.' (dot) to a DNS server, which then generates a much larger response to be sent to the victim. This is also known as an amplification attack, whereby the attacker's traffic is amplified 10-fold by the natural DNS response. The purpose of the attack is to generate as much traffic as possible to the victim's system (the spoofed address used) or network. It is also quite likely that the owner or administrators of the participating DNS server are completely unaware that their system is being used in this way. In fact, if the queries are successfully answered, then most logging levels will not report this activity at all. The attack takes advantage of certain configurations on the part of the participating DNS server. This includes both BIND and Microsoft DNS servers. For Adonis, the results are as follows:
v5.5.0 and v5.5.1
With recursion enabled: Check that "allow-query-cache" is not set to allow more than "allow-recursion". If they do not conflict, then the server will deny the request and defeat the attack.
With recursion not enabled: Set additional-from-cache no; the server will deny the request and defeat the attack.
v5.1
With recursion enabled: The system will respond to these requests regardless of any other settings. We recommend disabling recursion on external-facing Adonis systems.
With recursion disabled: Set additional-from-cache no, set additional-from-auth no; the server will deny the request and defeat the attack.
You can also see various other mechanisms to detect and protect against this attack on the SANS site (see http://isc.sans.org/diary.html?storyid=5713). Neither ISC nor CERT has issued any advisories, vulnerability notes or other notices, indicating that this is not considered a major problem. Reports on the incidence of attacks have been low in number.
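As an illustration only — the exact statements depend on the BIND release underlying your Adonis version, and the ACL name and address range below are hypothetical — the options discussed above could be expressed in a named.conf along these lines:

```
// Hypothetical named.conf sketch for an external-facing server (BIND 9-era syntax);
// verify option availability against your specific release before applying.
acl trusted { 192.0.2.0/24; };          // example internal range only

options {
    recursion no;                        // recommended for external-facing servers
    additional-from-cache no;            // per the v5.5.x / v5.1 guidance above
    additional-from-auth no;             // per the v5.1 guidance above
    allow-recursion { trusted; };        // if recursion must stay on, scope it tightly
    allow-query-cache { trusted; };      // keep this no broader than allow-recursion
};
```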
<urn:uuid:5bdc2d5b-4980-4de1-be4b-8b23ad2c89d6>
CC-MAIN-2017-09
https://www.bluecatnetworks.com/services-support/customer-care/security-vulnerability-updates/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00643-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930692
479
2.765625
3
We will cover some basics of forwarding UDP broadcast traffic here. If you were wondering what forwarding UDP broadcast traffic actually is, I will try to explain it in a few words. Suppose you have more than one broadcast domain in your local network; let's say you have three VLANs. In normal networking theory, a broadcast initiated by a host inside one VLAN will reach all hosts inside that VLAN, but it will not cross over into another VLAN. Typically the broadcast domain border is a router or a Layer 3 switch VLAN interface. Although this is the desired behavior for most broadcast traffic, there needs to be a way to forward some kinds of broadcast traffic across that border. Why? Here's a simple example. If you use DHCP, and you almost certainly do, you will probably have hosts in different VLANs, and all of them need to get an IP address from DHCP. If forwarding of UDP broadcast traffic didn't exist, you would need one DHCP server in every VLAN. Remember that DHCP uses broadcast traffic in some of its steps. Simple DHCP address leasing: A host that connects to the network will first send a broadcast DHCP discover message in order to find where the server is, or whether a server actually exists. After the DHCP server replies with a unicast DHCP offer, the host will once again use broadcast to send a DHCP request to the server. The server will then acknowledge the IP address lease with a unicast ACK message, and that's it. A minimal configuration example follows.
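The article does not name a specific platform, so the sketch below assumes a Cisco IOS-style Layer 3 switch purely as an example (addresses are made up). On such devices the usual mechanism for relaying client broadcasts like DHCP across the VLAN border is the ip helper-address command:

```
! Example only: relay DHCP broadcasts from hosts in VLAN 10 to a server in another VLAN.
interface Vlan10
 ip address 10.0.10.1 255.255.255.0
 ip helper-address 10.0.1.50     ! client broadcasts are forwarded as unicast to this DHCP server
!
! ip forward-protocol udp <port> ! optionally relay additional UDP services beyond the defaults
```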
<urn:uuid:22719afe-3b6e-4b2d-b5a1-b687cc1cb80d>
CC-MAIN-2017-09
https://howdoesinternetwork.com/tag/broadcast
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00519-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931504
309
3.03125
3
Even before the Internet, computer security was a problem. In the 1983 movie War Games, we saw a young Matthew Broderick hacking his way into the computer that controls the U.S.' nuclear command and control. Today's hackers are the descendants of the phone phreakers of the 1980s, who emulated telephone noises to obtain free long-distance calls. Viruses and worms have been part of the background noise of cyberspace since its earliest days.
So what's new? Well, the numbers tell the tale. In 2000, there were 21,000 reported virus incidents. Three years later, the number was more than six times higher. In 2002, the worldwide cost of worms and viruses was estimated at $45 billion; August 2003 alone saw costs of almost the same magnitude, while the annual cost is rising 300% year over year. Twenty-seven million Americans have been the victims of identity theft in the past five years, but one-third of that total were victimized in the past 12 months. Patches to correct the kind of commercial-software vulnerabilities that hackers target most frequently were once issued at a rate of maybe 10 per month. In 2002, they appeared at a rate of dozens per week. And in 2003, worms that used to take several days to travel around the globe spread to more than 300,000 systems on six continents in less than 15 minutes from launch.
The implications are huge for corporate America. Five years ago, U.S. corporations spent 2% to 3% of their IT budgets on security; now that portion is roughly 8% to 12%. And the worst is, it hasn't helped. In recent months, even the most security-aware companies have been victimized. These include airlines, large banks, electric utilities, investment houses, railroads, and other critical infrastructure enterprises that have developed IT security policies and spent lavishly on defensive technologies. Put simply, it's becoming easier--and more profitable--to be a hacker, and harder--and costlier--to defend an enterprise. We'll describe the problem and offer a 10-point road map to keep your security spending on track.
There was a time when a hacker had to spend years learning how to write and read complex computer code. Once in a great while, a hacker would discover a software vulnerability--read "glitch"--and then spend weeks developing an exploiter, an "attack script," that could take advantage of the vulnerability to permit unauthorized entry into a network. Then, after getting privileges--such as "system administrator" access--the hacker would install a Trojan, or "back door," and spend additional weeks exploring the network.
It's much more sophisticated now. A hacker can go to a Web site to develop a virus or worm by piecing together downloadable scripts that can be easily tailored for specific targets. One recent version of the Sobig virus went after Internet addresses belonging only to banks and other financial institutions. Hackers have programs that scan the Microsoft, Bugtraq, and CERT Web sites to notify them when a new software vulnerability is announced. Then they spring into action, often using prewritten code as a platform for new exploits. Often, within hours, new attack code is developed and tested. Once perfected, the blended virus-worm is launched into cyberspace in a "fire and forget" attack. The attack script takes on a robotic-like life of its own, wandering for months in cyberspace looking for victims. Code Red, a virus-worm launched in July 2001, still comes alive once a month to search for unpatched computers.
The difficulty of defense
IT executives used to think that computer security was about establishing a virtual perimeter, a medieval castle in cyberspace protected by firewalls and "demilitarized zones." To get inside, team members had to stand at the gate and speak their password. It's clear now that concept is about as modern as the walled town. Free software allows just about anyone to crack most passwords in seconds. Many enterprises have opened holes in their firewalls for road warriors who slip into the fortress by tunneling through virtual private networks. Unfortunately, the road warriors are usually using unclean laptops that allow viruses and other malicious code to burrow in along with them. Networks also open the gates to itinerant traders, called consultants, supply-chain partners, or customers, that are allowed on your network and into your systems--that's at the heart of collaboration. Or sometimes you eject a town member from the community, but the person copies the key to the gate before leaving. These are disgruntled employees or those recently fired. So much for the perimeter-defense theory.
Software protection isn't much better. Where you used to run an operating system with 8 million lines of code, you now run one with 40 million lines or more. Now think about a recent Carnegie Mellon University study that found that most computer program writers make an error every thousand lines of code, and you begin to see why there are so many vulnerabilities and patches.
Complexity compounds the problem. The sleepy guard on the wall tower of our medieval fortress can't be relied on to see the attack coming and to respond. Indeed, no longer can your staff alone handle IT security in a 24/7 environment where new attacks can sweep the globe in the time it takes to page you. "Whenever a risk appears in one area, it has the potential to infect and seriously hamper another," says a spokesman at the Chubb Group of Companies in Warren, N.J. "It's imperative that operational, information-technology and financial managers across the enterprise join together to assess and mitigate their organization's most serious exposures." That's the model Chubb follows as it builds up its security team.
IT security audits can no longer be an annual event by an outside team that charges $100,000 and is gone in two weeks. Such audits must be automated and continuous. Knowing the password and yelling it up at the guard shouldn't be enough for him to drop the gate over the moat. Access and authorization to enter the network can't be limited to the password on the yellow stickie under the mouse pad. Perimeters must be created within perimeters, so the enemy can't waltz into the gold vault and the courier who's intercepted outside of town doesn't reveal his message. Firewalls and anti-virus programs have to run on desktops. E-mail, hard disks, and storage should be encrypted.
One approach to cyberspace security involves arrests and rewards to catch the bad guys. Microsoft has offered up to $5 million for information leading to the arrest and conviction of hackers who attack vulnerabilities in its software. This is about .1% of the cost of damages caused by viruses and worms launched against Microsoft software in August 2003. Clearly, helping law enforcement catch hackers is necessary remediation of cyberspace. But it's just a small part of the overall solution. Few hackers have been caught despite the efforts of the FBI, CIA, NSA, and other three-letter entities.
The major attacks of the past few years are, almost without exception, open cases. The real answer lies in designing safer software. Products already exist that help software developers scan code for common errors. New products are coming that let enterprises lock down code that's been tested and certified, preventing any subsequent insertions (back doors) without multiple authorizers' approval. Until they arrive, the best solution is "defense in depth." This mission, however, can't just be handed to the town's watchman. Those who come to trade their wares must accept a little added scrutiny. And the funds allocated to the defense of the town must keep pace with the threat outside the walls.
If we could line up all of the lessons learned since the initial Morris worm attack in 1988, we could draw several conclusions about the challenge:
* Risk managers must integrate IT security across major corporate functions. Human resources, business continuity, and operations don't generally meet around the water cooler, but managing risk demands cooperation across these and other disciplines.
* The challenge is far more complex than initially assumed. Standards are lacking not only in various areas of IT security, but also for calibrating IT performance and financial returns.
* Finally, managing risk demands a long-term strategy. The road map for success is steeped in business process as well as awareness, education, and training. Successful strategies must motivate employees, as well as vendors, suppliers, and others not controlled by the corporation.
Cybersecurity road map
Our security road map has 10 components that operate across corporate functions, technologies, cultures, and business processes. It will help you think of this large-scale implementation in manageable steps as follows:
* Establish a governance structure that resolves complex security risks, educates corporate communities, and involves senior decision makers. Good security starts and ends with governance. These processes also let risk managers resolve complex issues that affect multiple segments of the company, such as integrating IT concerns into outsourcing considerations. One positive trend in the past several years is the creation of corporate security councils comprising representatives from key business functions. The council's role is to review strategic issues unique to cybersecurity and provide input into corporate decisions.
* Create policies for the full scope of IT security; where such materials already exist, ensure that processes are in place to update policy statements and guidelines on an ongoing basis. IT security should incorporate management expectations and orchestrate corporatewide behavior. In creating or updating such policies, consider the following:
- Do policies adequately capture relevant business considerations, such as supply-chain management and business continuity?
- Do they take into account tangential issues such as training, awareness, and resource limitations?
- Are policies enforced consistently; if not, why not?
- Do policies reflect management's orientation toward such issues as tolerance for risk?
- Do policies extend to suppliers, customers, and business partners?
* Develop a risk-assessment program. There's nothing new about having to perform risk assessments. What has changed, however, is the complexity, scope, and cost of these assessments, and that they may not always be timely or conducive to making good business decisions.
Congress now requires federal agencies to perform criticality assessments in addition to threat and vulnerability reviews. Risk managers should consider following suit and establishing their own criticality reviews. * Extend business-continuity and disaster-recovery planning to IT assets. Risk managers should review the extent to which emergency planners have fully integrated IT systems into their recovery strategies. This includes restoration strategies as well as long-term recovery programs. In conducting this review, managers should take a broad-based approach. In the aftermath of 9/11 and the August 2003 blackout, for example, power shortages created computing problems, and many network administrators and computing professionals were unable to get to work. Planning should include these and other contingencies. * Enhance business-case arguments and capital planning for IT goals and objectives. Last August's blackout reinforced the need to recalibrate cybersecurity-investment arguments. According to the Michigan State Public Service Commission, the Slammer attack significantly undermined efforts to restore electric power. More generally, the need to reboot plants and factories after power was restored caused further delays. The Michigan report reveals the importance of planning and funding appropriate IT projects. There will no doubt be follow-on cyberattacks and additional blackouts. Absent changes in the pattern of capital planning, damages can show up as a restatement of earnings or an unwelcome confession in a financial-disclosure statement. When articulating your business case and capital requirements, state the ROI in terms that are meaningful to the board. Balance prudent security with limited resources. * Integrate IT and physical security planning. Risk managers should ensure that security plans include both physical and virtual assets. At many companies, physical and cybersecurity programs are completely separate. In other cases, security planners assume that mainframes should be protected, but fail to extend their planning to other assets that provide essential services. * Let audit professionals enhance controls and compliance objectives. Security planners should use internal and external auditors to help define cybersecurity objectives and review progress against those objectives. In defining objectives, audits should take into account standards and other requirements, such as The Sarbanes-Oxley Act of 2002. * Heighten security vigilance through education, publicity, and training. The best way to capture a fortress is from within. But spies and saboteurs aren't the only danger; corporate dupes are an even greater liability. Take advantage of pre-existing internal communications and education programs to increase preparedness, familiarize employees with security procedures, and improve compliance throughout the enterprise. * Work with corporate counsel to address compliance and liability issues. Sarbanes-Oxley is only part of the story. Cybersecurity requirements for the electric power, banking, and health-care industries are well-known, but regulators are also insisting on rules that mandate IT integration capabilities for things like cross-border trade and port security--both of which require secure electronic messaging and resilient communications. By meeting regularly with corporate legal counsel and including its representative on the corporate security council, risk managers can ensure they're in compliance and avoid being blindsided by new requirements. 
* Prioritize IT assets and the essential services they support. Managing infrastructure risks means prioritizing the business services that are essential to the company and the IT resources on which they depend. In addition to mission-critical information systems, essential infrastructure services include electric power, telecommunications, transportation, banking, and others that the company often takes for granted. Risk managers should identify which information services are vital to the company's core services and place special emphasis on ensuring that those assets are secure.
This 10-point game plan will push the risks and liabilities associated with cybersecurity to the forefront of the corporate agenda and help to dramatically increase your preparedness. But this program won't remove the threat or eliminate the need for strong walls until the technology industry puts better weapons at our disposal. For now, a truly secure enterprise remains the Holy Grail.
Richard Clarke is chairman of Good Harbor Consulting LLC, specializing in homeland security. Lee Zeichner is an attorney and publisher of a newsletter covering risk-management laws and policies. Please send comments on this article to [email protected].
The 90-Day Plan
True cybersecurity requires that financial, IT, and operational managers from across the enterprise--and outside it--come together to assess and guard against their company's most serious risks and exposures. This three-month plan will get you started.
First month: Update, review, and set up new processes
* Update and implement an enterprisewide governance program or begin one if it's not already in place.
* Establish processes for creating cross-enterprise cybersecurity policies and begin a risk-assessment program with definite expectations.
* Review IT disaster-recovery and emergency planning if you haven't done so recently.
Second month: Focus on ROI and objectives
* Calculate capital requirements and security ROI to the extent possible. Then, create processes to protect your most important--and costly--cyberassets.
* Conduct a review of audit and control objectives.
Third month: Fill security holes and spread the word
* Identify areas of security noncompliance. Launch an internal PR campaign to heighten awareness and improve performance at all levels of the business.
* Meet with corporate legal counsel to review compliance requirements and be sure you're up-to-date on new regulations.
* Prioritize and focus on those IT assets that support mission-critical services.
This article originally appeared in the January 2004 issue of Optimize magazine.
<urn:uuid:019da5c1-c5ff-4df2-82db-d5577c55d90c>
CC-MAIN-2017-09
http://www.banktech.com/beyond-the-moat-new-strategies-for-cybersecurity/d/d-id/1289682
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00043-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950563
3,214
2.578125
3
Good news, Earthlings! The day after Valentine's Day a 150-foot-wide asteroid will fly so close to our planet that it will pass through the orbit of several satellites, but experts said on Thursday that it will not hit us. In fact, the so-called DA14 asteroid will be such a close call that the force of Earth's gravity will actually cause the asteroid to ricochet off those orbits, creating more distance between the asteroid and our planet so that the next fly-by won't be so nerve-wracking. At 17,100 miles away, the DA14 will become the largest object ever (on record) to fly so close to Earth and not hit it. Which is really good news since it's traveling eight times faster than a speeding bullet. Scientists say that it could take out a satellite or two, however. It's hard not to hear the theme song to Armageddon in your head when thinking about this sort of thing. Although 17,100 miles is a lot of miles in terms of space distance, for an asteroid half the size of the International Space Station to zip by so closely is a little breath-taking. It also leads us to wonder: At what point do we start talking about sending Bruce Willis and his persnickety pack of oil drillers into space to stop the dang thing? DA14's projected path brings it just one-thirteenth the distance to the moon from Earth, less than seven roundtrip flights from New York to Los Angeles. If it hit us, the resultant explosion would have the force of a 2.5-megaton atomic bomb.
<urn:uuid:0308378e-85d5-4604-bf0f-c7ea50f5fa8c>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2013/02/though-it-might-take-out-couple-satellites-asteroid-will-not-destroy-earth-next-week/61191/?oref=ng-relatedstories
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00391-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951201
327
2.546875
3
Pi Day: How the 'irrational' number pushed the limits of computing
- By William Jackson
- Mar 14, 2012
March 14 is, by act of Congress, officially designated Pi Day. Think about it for a minute and, if you remember your high school math, you will understand why: 3.14 are the first three digits of Pi, which represents the ratio of the circumference of a circle to its diameter. If you are a purist you celebrated this occasion at 1:59 a.m. today (159 being the next three digits). If you use a 12-hour clock you could also celebrate at 1:59 p.m.
The value of Pi, an irrational number that goes on forever, has been figured to more than a trillion digits. We have neither the need nor the space for that here, but the challenge has helped to push the boundaries of computing.
According to Wikipedia, the day was first commemorated by physicist Larry Shaw of the San Francisco Exploratorium in 1988. It was confirmed by Congress in 2009, when House Resolution 224 was passed by a vote of 391 to 10, making Pi Day possibly one of the last issues on which congressional Republicans and Democrats found agreement.
The significance of Pi goes well beyond high school mathematics. "It's one of a few numbers that seem to be fundamental and basic in math," said Daniel W. Lozier, retired lead of the mathematical software group in the Applied and Computational Mathematics Division of the National Institute of Standards and Technology.
Calculating the value of Pi has become something of a competition among mathematicians and has also become an important tool in computing, Lozier said. "It has played a role in computer programming and memory allocation and has led to ingenious algorithms that allow you to calculate this with high precision," he said. "It's a way of pushing computing machinery to its limits."
Because of the large strings of numbers involved, memory is a critical issue in doing the calculations, along with methods for efficient computation. "Pi serves as a test case for mathematical studies in the area of number theory," Lozier said.
One of the landmarks in calculating Pi was a 1962 paper by Daniel Shanks and John W. Wrench Jr. in the Mathematics of Computation. In 1949, it took the early ENIAC computer 70 hours to figure 2,037 digits of Pi. In 1959, an IBM 704 took 4 hours and 20 minutes to calculate it to 16,157 places. The authors of the paper estimated that it would take 167 hours and more than 38,000 words of memory to calculate to 100,000 places, but that could not be done because the IBM 704 did not have that capacity.
Using a new program and a new computer, the IBM 7090, in 1961 they were able to take the value to 100,000 places in 8 hours and 43 minutes, which was 20 times faster. The authors predicted that it would be another five to seven years before computers had the capacity to figure the value to 1 million places.
In 1989, an IBM 3090 was able to take it to 1 billion places, which was pushed to 200 billion by 1999 and to 1.24 trillion places in 2005. The record today is held by Japan's T2K Supercomputer, which figured the value to 2.6 trillion digits in about 73 hours, 36 minutes.
The House resolution on Pi Day notes that “mathematics and science are a critical part of our children’s education, and children who perform better in math and science have higher graduation and college attendance rates.” Unfortunately, it also notes that American children score well behind students in many other countries in science and math, and that the United States has shown only minimal improvement in test scores since 1995. So the House recognized the designation of Pi Day and supports its celebration around the world, encouraging schools and educators to observe the day with “appropriate activities that teach students about Pi and engage them about the study of mathematics.” H.R. 224 is non-binding, however, which is why you probably spend March 14 at work rather than celebrating at home with your family. But whether at work or at home, there are many uses of Pi. The most common, and the one you probably remember from high school, is figuring the area of a circle: A=Pi R-squared, where A is the area and R is the radius. It also can be used to determine the volume of three-dimensional objects, such as a cylinder, which is useful in figuring the displacement of an internal combustion engine. Astronomers use it in figuring orbits and distances. Unfortunately, it is not much use in cryptography, although modern crypto algorithms depend on random numbers. As an irrational number that goes on endlessly without repeating itself, Pi might seem like a good source of a random sequence, but since the value of Pi has been figured to so many decimal places, any sequence chosen from it is likely to be far too predictable to be secure. William Jackson is a Maryland-based freelance writer.
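For readers who want to tinker, here is a small Python sketch, using the third-party mpmath library (nothing to do with the historical programs mentioned above), that computes Pi to a chosen precision and plugs it into the area formula from the article. The radius value is arbitrary.

```python
# A small illustration, not a record-setting computation: compute pi to a
# chosen number of decimal places and use it in the familiar area formula.
from mpmath import mp

mp.dps = 100                  # decimal places of working precision
print(mp.pi)                  # pi to 100 digits

radius = mp.mpf("2.5")        # arbitrary example radius
area = mp.pi * radius**2      # A = pi * r^2
print(area)
```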
<urn:uuid:6c1e70f9-52b5-494e-a62b-f78e4767d1da>
CC-MAIN-2017-09
https://gcn.com/articles/2012/03/14/pi-day-value-pushing-boundaries-of-computing.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00567-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963124
1,082
3.421875
3
How to Be Green with Your Computer Hardware
Many IT organizations today fail to assume sufficient responsibility for the ultimate end-of-life destination of used-up PCs. Companies should take a closer look at the IT manufacturer's policy toward PC and computer hardware takeback before buying. One organization, the Electronic Takeback Coalition, maintains a list of recyclers that have pledged to adhere to certain corporate responsibility standards, including not incinerating e-waste or shipping it to China.
With CIOs focused on achieving the green data center by reducing energy consumed for cooling and shifting to server virtualization to cut power consumption, there remains a dirty little secret many IT organizations would just as soon ignore: where their old PCs end up.
"We estimate that 55 percent of all PCs are in the commercial sector," says David Daoud, research manager for personal computing, PC Tracker, and green IT at International Data Corp., an IT research firm in Framingham, Mass. "In their effort to reduce their impact on the environment, many IT organizations have focused on the data center, but other angles of green IT have been essentially neglected."
The impact of that neglect on the environment worldwide could be huge. An estimated 1.8 billion pounds of PCs are retired worldwide each year, but only about half that amount (865 million pounds) is processed by recyclers, according to a report issued this month by International Data Corp. Although some of the remaining 900 million pounds of computer hardware is rebuilt or reused, much of it is just plain discarded into landfills or incinerated. Furthermore, a huge amount of so-called e-waste is handled by manual laborers working in electronic dumps.
Unfortunately, many IT organizations today fail to assume sufficient responsibility for the ultimate end-of-life destination of their fleets of thousands of used-up PCs. For instance, Daoud says one of the biggest means of disposal for corporations seeking to rid themselves of their rafts of obsolete PCs is to donate them to nonprofit groups. In effect, that means they have washed their hands of the problem. "This part of the traditional IT lifecycle is not so green," Daoud said.
Rather than disposing of PCs or donating them, some companies, he says, may elect to retire them earlier and sell them to other organizations while they are still useful and marketable. But all companies should take a closer look at the IT manufacturer's policy toward takeback before buying. "If you buy the right product in the beginning, it will cost you less to recycle it at the end," Daoud says.
<urn:uuid:c2efa3b9-4939-4aba-9767-009e5b907d10>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Green-IT/How-to-Be-Green-with-Your-Computer-Hardware
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00035-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954556
529
2.53125
3
As a fundamental practice in protecting sensitive data, encryption has long been a focal point in cyber security development and implementation. However, in light of recent news on government surveillance efforts from agencies, as well as cyber espionage attempts by foreign governments, data encryption has been getting a lot of attention. When effectively applied, strong encryption algorithms are trusted to keep prying eyes off of meaningful data. However, companies and government agencies continue to struggle to ensure their data is being adequately protected. Furthermore, cloud providers wishing to do business with the federal government often find themselves unable to offer the assurance that their encryption methods are up to the task. There are a number of roadblocks to reliable encryption, but these challenges are some of the most common. And fortunately, each one has a viable solution. 1. Choosing the right configuration for encryption. According to Johns Hopkins cryptography researcher Matthew Green, many organizations rely on SSL to encrypt sensitive data. And although this protocol is effective, its keys are comparatively small and vulnerable to interception. Green suggests that certain configurations for SSL, such as DHE and ECDHE, can more effectively protect against successful decryption than the RSA configuration. 2. Covering the full lifecycle. For complete protection against surveillance, data needs to be encrypted not just during transfer, but also when it’s at rest and when it’s accessed by applications. Successfully managing encryption across the full lifecycle can mean rewriting software, planning cross-jurisdiction governance and adding processes. Organizations handling the full lifecycle of their data need to invest in methods to ensure encryption at all stages. 3. Key management. The high volume and wide distribution that organizations manage results in the generation of a large number of keys, which need effective management. In addition to wide-ranging governance considerations, this calls for consistent training to ensure privacy at all times. 4. Encryption in the cloud. Organizations employing cloud services for data management have additional challenges in applying encryption since the cloud service provider holds the key to that encrypted data. Solutions for using encryption in the cloud exist. However, getting everything right in the implementation of these solutions is best left to a cloud security expert. 5. FedRAMP. Cloud service providers looking to work with federal government agencies have a particular interest in meeting encryption standards, which by FedRAMP regulations means FIPS 140-2 validation. Since the inception of FedRAMP, this standard has been a major hurdle for many CSPs. However, specialists in the area of FedRAMP compliance have been successful in helping many CSPs gain certification. For cloud services providers, organizations entrusting their data to the cloud, or organizations managing their own end-to-end encryption, Lunarline helps effectively implement encryption across the entire lifecycle. For more information about our products and services, visit Lunarline.com or contact us today.
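As a small illustration of the first point above, a client built on Python's standard ssl module (which wraps OpenSSL) can be restricted to forward-secret key exchange. The cipher string below is one reasonable choice for TLS 1.2-era configurations, not an official recommendation; TLS 1.3 suites are negotiated separately and are forward-secret by design.

```python
# Minimal sketch: build a TLS client context that prefers ephemeral (ECDHE/DHE)
# key exchange over static RSA, so captured traffic cannot later be decrypted
# with the server's long-term private key.
import ssl

ctx = ssl.create_default_context()           # sane defaults: cert validation, modern protocol versions
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")   # restrict TLS 1.2 suites to forward-secret options

# Example use: http.client.HTTPSConnection("example.com", context=ctx)
```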
<urn:uuid:d3f0da6c-8dbc-43cf-bb8f-d8b9905cc0f9>
CC-MAIN-2017-09
https://lunarline.com/blog/2015/09/enterprise-encryption-roadblocks-opportunities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00035-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948866
585
2.59375
3
If humans are ever going to get to Mars and other deep-space destinations, we'll need to figure out how people can live reasonably safely in space for very long durations, possibly indefinitely. In the meantime, we may have to send astronauts into situations we know are dangerous and expose them to unknown environments. How will NASA decide when a mission's potential gains outweigh the risks to human health? The agency asked the independent Institute of Medicine for recommendations, and that guidance came Wednesday in a report offering ethical principles and a framework for making such decisions.
Human space flight has always involved risks and pushed boundaries. Longer missions -- including lengthy stays on the International Space Station -- involve prolonged potential exposure to radiation, chemicals and microgravity as well as unknown elements, NASA Chief Health and Medical Officer Richard Williams told Nextgov. Such exposure can impair vision, reduce bone mass, cause certain types of cancer -- and that's just some of what we currently know.
"You can get exposed at levels higher than your standards will allow, and there may be risks that you don't fully understand and don't have a standard for," he said. The trick is determining the point beyond which further exposure is unacceptable. "We've always done that," Williams said. "For long-duration missions it's a matter of degree of exposure."
NASA can't ethically loosen its health standards in general or create a different set of health standards for long-duration flights, IOM determined. But in rare cases it might make sense to grant exceptions to the current standards, the report said. In all stages of determining if exceptions to health standards are warranted, NASA should consider basic ethical principles. For instance, in reviewing requests for specific exceptions, IOM suggested the space agency could require the proposed missions --
- Be expected to have exceptionally great social value;
- Have great time urgency;
- Have expected benefits that would be widely shared;
- Be justified over alternate approaches to meeting the mission's objectives;
- Establish that existing health and safety standards cannot be met;
- Be committed to minimizing harm and continuous learning;
- Have a rigorous consent process to ensure that astronauts are fully informed about risks and unknowns, meet standards of informed decision making, and are making a voluntary decision; and
- Provide health care and health monitoring for astronauts before, during and after flight and for the astronauts' lifetimes.
Surely, the astronauts themselves take on the greatest risk. "Some astronauts may be willing to accept more risks than others, but that has yet to be seen because we're just getting to the point where we're going to meet or exceed our standards," Williams said. "Accepting some risk is mandatory -- it has to be done."
<urn:uuid:5b6e4227-83f6-4a88-a844-f39776fca2c5>
CC-MAIN-2017-09
http://www.nextgov.com/health/2014/04/nasa-weighs-ethics-astronaut-health-risks-long-missions-deep-space/81787/?oref=ng-skybox
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00083-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935702
594
3.453125
3
There's not a ton of oxygen in space. Because of this, we have to rethink how we go about doing the things that we take for granted here on Earth, like defense against an incoming missile. Back in 2007, China tested its first ASAT (anti-satellite missile) against one of its own dead satellites; it was a resounding success. Because of that, companies like the Raytheon Company have been working to give US space assets a figurative barrel roll defense.
One of the most common defenses against an incoming missile is a flare. Common aircraft use stuff like Magnesium-Viton-Teflon (MVT) flares to create a giant ball of light, heat and radiation that distracts incoming missiles and causes them to hit the flare and not the original target. The trick is to make the missile think it's hitting the target and not a decoy. That's not so easy in space, where the raw materials required to create an exothermic reaction are sparse. As is usually the case, science has the answer, and that answer is quantum dots.
What's a Quantum Dot, Anyway?
A quantum dot is a tiny little nanoparticle made of zinc, cadmium or some other semiconductor material. At such a small size, the quantum dots have some pretty unique electrical properties that you wouldn't see in a more sizable mass of their material; they give off light that the human eye can detect, for instance. If that wasn't cool enough, the color of light they display depends on the size of the particle, making them tunable to whatever color the scientist wants. This is called the "size quantization effect". The dots can be tuned to give off infrared or ultraviolet light as well--light outside the visible spectrum.
Quantum Dots in Space
This brings us back to the defense aspect of this equation. Quantum dots can, based on this, be fine-tuned to mimic the exact radiation signature of the space object they're trying to serve as a decoy for, making them incredibly effective as a flare. The theory is to eject a cloud of quantum dots into space, via a spray from a storage tank or exploding a pack of dots suspended in inert gas. Those dots are fine-tuned to the spectral signature of the spacecraft they're protecting, and the missile hits the cloud instead of the craft.
That's the theory, anyway. You can see more at the Raytheon Company's patent application, which includes the full explanation, including utilization of a ground-based tracking system to warn of a launched missile.
The idea of quantum physics being used in space defense and strategy is one that may seem to sully the very idea of scientific achievement. You have to remember, though, that military application of such technological advancements is generally the way it goes. The Manhattan Project, anyone?
What do you think is next for space defense? I'm holding out for orbital lasers, myself.
This story, "Quantum dots could protect spacecraft from missiles" was originally published by PCWorld.
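For the physically curious, the size quantization effect mentioned above is often approximated with a textbook particle-in-a-box model (an idealization, not anything from Raytheon's patent): shrinking the dot raises the confinement energy, which shifts emission toward shorter, bluer wavelengths.

```latex
% Idealized confinement energy for a dot of characteristic size L and effective mass m*;
% real dots also need the bulk band gap and electron-hole terms, so treat this as a sketch.
E_{\mathrm{confinement}} \approx \frac{\hbar^{2}\pi^{2}}{2\,m^{*}L^{2}}
```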
<urn:uuid:1d0d1bf9-6fd7-4d9b-be64-3db4a41f4db2>
CC-MAIN-2017-09
http://www.itworld.com/article/2731724/consumer-tech-science/quantum-dots-could-protect-spacecraft-from-missiles.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00259-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939731
682
3.578125
4
Disasters of the natural and cyber nature happen every day and many smaller companies are simply not prepared for the immense amount of recovery that it takes to get back to work. As a mid or smaller level company it is essential that a business continuity plan be in place to aid in the recovery of the business after any disaster. In order to create an effective and useful business continuity plan it is first essential to understand what a business continuity plan is and how to create one. Put quite simply, a business continuity plan, also frequently called a disaster plan, is a plan that is created to help streamline the recovery process following a natural disaster or any disaster for that matter that interrupts and potentially halts the function of the business in question. This can be something like a tornado, flooding, a hurricane, a fire, or a cyber attack that cripples your computer system. This type of plan is generally constructed in several different steps in order to be the most effective that it can possibly be. The first part of the process involves taking an in depth survey of your business and what it takes to function on a day-to-day basis. This is not something that is going to be done in a few hours and generally a few days of observation are needed to create an in depth snapshot of what your business needs to function. This step may be things like what inventory do you need, what equipment do you need, how many employees are necessary, what type of communication do you use, what type of utilities, what space you need, etc. This is essentially everything your business needs to function on the most basic level. This is going to provide the stepping stone to create your entire plan and to know what you need for the next parts of the planning process. The next step is to actually construct you plan. This means thinking about where your business will function in the event of a disaster, where you will get the supplies and inventory you need, how you will contact your employees and your customers, and so on. This is going to be the step that takes the longest and it can be broken up into several smaller steps. To help create your plan you should first set up some sort of communication. This can be something like an email address, a computer that is kept off site, a phone number customers can call that will not be affected by disaster and the like. Communication is the most important part of any business and it is important to keep communication open at all times no matter the disaster or the state of your business following this disaster. The next step is to consider how you are going to back up information. This might mean storing information off site, having a cloud account that keeps your information safe, or employing a third party company to store your information remotely. This is going to be a crucial step as well, as lost information can be the end of any company. After all is said and done it becomes necessary to educate your employees about the steps that you have put into place and the strategy that has been created. This is going to help insure that the recovery process is as fast as possible and that your employees are ready and willing to help you get back on track after any disaster.
<urn:uuid:85056fb4-b6ee-4ebe-ab0e-7fcc75ed1d8f>
CC-MAIN-2017-09
https://www.apex.com/what-is-a-business-continuity-plan-and-why-do-you-need-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00435-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963878
645
2.53125
3
When computer hackers attacked Estonia earlier this year -- shutting down numerous Web sites connected to the country's electronic infrastructure, including government, commercial banks, media outlets and name servers -- the event was nothing new in the world of cyber-security. Since the mid-1990s, denial-of-service (DoS) attacks -- generally a computer assault that floods a network or Web site with unnecessary traffic, rendering it slow or completely interrupted -- have caused serious problems for the Internet. DoS attacks are often waged by "botnets," which are networks of computers that have been hijacked by viruses and take part in attacks without their owners' knowledge. Attackers often launch attacks from unallocated IP addresses so the assailants can't be found.
The attack on Estonia has been called "cyber-warfare" and the first time botnets threatened the security of an entire nation. Over the years, similar attacks have closed some of the largest e-commerce companies, such as Amazon.com, eBay and Buy.com, as well as federal and state government Web sites. With an estimated 2,000 to 3,000 DoS attacks daily worldwide, large corporations, small Web-based businesses and governments have been forced to take precautions to defend against DoS attacks or face costly shutdowns and/or the demands of "cyber-extortionists," a new breed of Internet criminal who demands payment in exchange for not launching a DoS attack.
Dark Address Space
In 2003, the federal government established the U.S. Computer Emergency Readiness Team (US-CERT), an arm of the Department of Homeland Security that protects the nation's public and private Internet infrastructure, in response to DoS and other harmful cyber-attacks. To help prevent DoS attacks, or at least warn private and public sectors of impending attacks, US-CERT uses its Einstein program to monitor federal network "dark address space" on the Internet.
Dark address space, which is sometimes referred to as "darknet," is the area of the Internet's routable address space that's currently unused, with no active servers or services. On computer networks, the darknet is made up of addresses held in reserve for future network expansion. Often when DoS and other cyber-attacks occur, blocks of Internet address space, including darknet space, briefly appear in global routing tables and are used to launch a cyber-attack, or send spam, before being withdrawn without a trace. By monitoring all traffic to and from dark space, US-CERT and other cyber-security organizations gain insight into the latest techniques and attacks. The Einstein program provides information about darknet activity originating from state and local government systems, helping notify states of potential cyber-attacks and other malicious activities.
New York is in the process of implementing its own plan to combat cyber-attacks by collecting malicious cyber-attack information directed at the state's IT infrastructure, which can provide early warning intelligence about the nature and characteristics of the attacks. New York state receives warnings of potentially malicious cyber-activity from US-CERT on a daily basis, said William Pelgrin, director of the New York State Office of Cyber Security and Critical Infrastructure Coordination.
His office is working with the University at Albany to create the Multi-State Information Sharing and Analysis Center (MS-ISAC) Darknet Sensor system, which will help New York and other states prevent cyber-attacks by monitoring dark space and other nonallocated IP addresses. A darknet server will be configured to capture all traffic destined for this unused space. The server listens to all traffic directed at the unused address space and gathers the information packets that enter the dark space. "Just the fact that we are seeing state-targeted traffic in federal dark space is definitely worth the investment to deploy this program to monitor state dark space," said Pelgrin. "Our goal is not only to do this for New York state, but for all other states."
The MS-ISAC Darknet Sensor system, which is expected to be implemented by late 2007 or early 2008, will monitor and gather information for all traffic directed through the nationwide darknet, which is considered malicious since no legitimate services are available at dark address spaces. New York's internal and public networks will be analyzed, which is expected to provide invaluable insight into the security of New York's networks and help predict impending network attacks. Pelgrin is also the founder and chair of the MS-ISAC, whose mission is to raise the level of cyber-security readiness and response for state and local governments nationwide. Although the MS-ISAC Darknet Sensor system will be centered in New York, Pelgrin said the system will benefit other states too. "I'm a big believer in sharing information and a collaborative and cooperative approach to my job," Pelgrin said. "I knew from the beginning that geographic borders make no sense in state cyber-security. A cyber-attack in California can have an effect in New York."
An MS-ISAC volunteer member will determine what information on dark space should be shared with other states to prevent cyber-attacks. Alaska and Montana have agreed to join New York's Darknet sensor system, and Pelgrin expects others to join once the program is running. States participating in the program will set up a monitoring system with sensors placed in strategic places on the network to create an early warning system. A monitoring center will interpret and evaluate warnings, which will eventually help accurately evaluate cyber-attacks.
"I think it's a very valiant effort and it's a very useful approach," said Jose Nazario, senior security researcher at Arbor Networks, a network security provider. "I liken the approach of darknet monitoring to throwing a petri dish out there or sticking your finger in the wind; it's a tremendous way to measure all the junk on the Internet and discover, both in terms of known and existing threats, 'Where is it coming from, who's launching them, and who do we need to block or shut down?'" Dark space monitoring is valuable for protecting municipalities since more government infrastructure and resources are being made available online, Nazario said. "Clearly it's very valuable for federal governments," Nazario said. "I would argue that state governments depend just as much on infrastructure not only for their own infrastructure but for their resources, whether business or educational institutions, or other research statewide networks." Nazario said his firm tracks between 2,000 and 3,000 major DoS attacks every day, all of which come from forged addresses.
Although the US-CERT program often warns states of potential cyber-attacks, the program is oriented primarily at the federal level, and states often don't have adequate defense against DoS attacks, according to Pelgrin. With the shared connectivity of the Internet, cyber-attacks can come from anywhere in the world; therefore, a collaborative approach is the best defense for states and organizations worldwide, he added. "Whatever we learn from states and across the world will help New York state, and hopefully what we do will help other states as well," Pelgrin said. Chandler Harris is a regular contributor to Government Technology magazine. He also writes for Public CIO, a bimonthly journal, and Emergency Management and Digital Communities magazines.
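As a toy illustration of the darknet-sensor idea described above (the prefixes and flow records are invented, and a real deployment would read from packet captures or flow exports rather than a list), flagging traffic aimed at monitored dark space can be as simple as a membership test:

```python
# Illustrative only: flag flows whose destination falls inside monitored "dark"
# (unused) address space. Documentation/example prefixes are used here.
import ipaddress

dark_prefixes = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/24")]

flows = [
    ("203.0.113.7", "192.0.2.15", 53),      # lands in dark space -> suspicious
    ("203.0.113.7", "198.51.100.200", 445), # lands in dark space -> suspicious
    ("203.0.113.9", "10.10.1.20", 443),     # allocated destination -> ignore
]

def is_dark(dst):
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in dark_prefixes)

for src, dst, port in flows:
    if is_dark(dst):
        print(f"ALERT dark-space probe: {src} -> {dst}:{port}")
```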
<urn:uuid:43dbfce8-7654-43d6-9a00-b4c194272b28>
CC-MAIN-2017-09
http://www.govtech.com/security/Dark-Spaces.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00255-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948671
1,499
3.0625
3
Researchers have turned a display annoyance into a way to show two different images simultaneously. When an LCD is tilted, colors change and become difficult to see, but with Dual View from Microsoft Research Asia different images and video can be shown. "We're actually exploiting this property by using a special algorithm to render the image in a special way so that we can hide or show different images at different angles," said Xiang Cao, a researcher with Microsoft Research Asia. "Basically making a bug into a feature." To see the Dual View display watch a video on YouTube. In one example, Xiang held a laptop tablet screen, which displayed a game of cards, horizontally. On one side of the screen, a player could see his own cards, but not his opponents'; the other side showed only the opponents' cards. It's not perfect, because it's limited to the optical properties of the display. "You may lose a little bit of contrast or saturation and there are certain angles that work better than others," Xiang said. There are a variety of uses for the technology, from privacy to gaming to even potential 3D applications. There are no immediate plans for commercialization.
<urn:uuid:84f3397f-d149-4832-875e-afbf2170eff9>
CC-MAIN-2017-09
http://www.cio.com/article/2396288/hardware/exploited-display-bug-lets-lcds-show-two-images-simultaneously.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00128-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943738
239
2.859375
3
Feds Seek To Educate Patients On Info Sharing
U.S. Department of Health and Human Services offers guidelines and open-source software that healthcare institutions can use to help patients understand what they are agreeing to.
Using a combination of guidelines and open-source software, the U.S. Department of Health and Human Services is trying to encourage healthcare organizations to obtain "meaningful consent" as part of the process of sharing patient information online. Although the government's bigger push has been for "meaningful use" of electronic health records and the creation of state and regional health information exchanges (HIEs) for sharing those records between institutions, these innovations also raise questions about giving patients more control over how and with whom their information is shared.
As defined by the HHS Office of the National Coordinator for Health Information Technology (ONC), meaningful consent means giving patients more options, along with education on what those options mean. A patient might elect not to have their data shared or only allow sharing under specific circumstances, such as a medical emergency. The eConsent Toolkit announced Tuesday is derived from a pilot project in Western New York state that wrapped up in March, which looked at the use of tablets for interactive education and also as a means for patients to record their choice from among the options offered.
[ Patients want complete access to -- and the option to edit -- their medical records online. Read Patients Seek More Online Access To Medical Records. ]
As explained in a post on the Health Affairs blog, "The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule generally permits, but does not require, covered health care providers to give patients the choice as to whether their health information may be disclosed" -- although states have the ability to make their own privacy rules. The HHS guidelines also talk about the distinction between opt-in rules, where patients must make an affirmative choice to allow sharing, and opt-out rules, where data will be shared unless the patient specifically objects. Although opt-in would do more to protect patient privacy, the published guidelines don't express a preference for one over the other.
What HHS is sharing is a toolkit that providers and HIEs can use as a starting point for developing a meaningful consent program. In addition to a set of guidelines and sample videos for patient education, ONC is releasing an open-source software product called the eConsent Story Engine, which can be used to deliver interactive presentations on different scenarios so patients understand the circumstances in which their information might be shared. The same software can then be used to record their consent, including an electronic signature.
Based on the New York trial, ONC learned that patients wanted to learn more before being asked to make a choice on whether to consent to the sharing of their information. However, patients -- and computer users in general -- will often click OK on a form asking their consent to a long list of terms and conditions without really reading it or understanding it. The point of the interactive software is to make the choice meaningful. The designers of the program tried to present the information in a more interactive, engaging, and clear way before displaying the screen with consent options.
"As patients become more engaged in their health care, it's vitally important that they understand more about various aspects of their choices when it relates to sharing their health in the electronic health information exchange environment," said Joy Pritts, ONC's chief privacy officer, in a statement for the press release. Follow David F. Carr on Twitter @davidfcarr or Google+. His book Social Collaboration For Dummies is scheduled for release in October 2013.
<urn:uuid:3b6abc23-366f-4cb5-b69c-152fffbc71e6>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/feds-seek-to-educate-patients-on-info-sharing/d/d-id/1111573?cid=nl_iw_daily_2013-06-19_html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00424-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948041
755
2.84375
3
What is Data Virtualization? Data virtualization is technology that helps IT organizations more efficiently secure, manage, and deliver application data. Instead of relying on complex, manual processes to control and deliver application data, data virtualization allows IT to automatically deliver virtual copies of production data for non-production use cases. Common scenarios for leveraging data virtualization include application development, testing, reporting, archiving, or data migration. Data Virtualization is VMWare for the Data Layer Just as server virtualization unlocks efficiencies at the compute layer, data virtualization drives similar benefits for data in repositories such as databases, data warehouses, and file systems. Data virtualization technology decouples application data from physical hardware, allowing end users to access storage-efficient virtual data copies through a self-service model. How Does Data Virtualization Work? Data virtualization platforms work in three key steps. First, data virtualization software installed on-premise or in the cloud collects data from production sources and stays synchronized with those sources as they change over time. Next, the data virtualization platform serves as a single point of control for administrators to secure, archive, replicate, and transform data. Finally, it allows users to provision fully-functional virtual data copies that consume significantly less storage than physical copies. Key Benefits of Data Virtualization Speed and Agility Benefit–Legacy infrastructure and high-touch manual processes often bottleneck data delivery. In large organizations, the end-to-end process of provisioning a new copy of production data can take days or weeks. With data virtualization, end users can access full copies of multi-terabyte data sources in minutes. Moreover, data virtualization offers additional control over those copies: users can refresh, bookmark, rewind, integrate, and branch data copies to improve enterprise agility and collaboration. Data Efficiency Benefit–Over 90% of the data in environments used for development, testing, and analytics is redundant. Data virtualization platforms consolidate this data into a single compressed and deduplicated footprint before sharing data blocks across all downstream environments. Rather than making and moving new data blocks, data virtualization solutions intelligently share common data blocks to drive storage efficiency. Data Security Benefit–By automating data delivery, data virtualization solutions reduce administrative touchpoints that drive privileged user access risk. In addition, coupling data virtualization with data masking solutions allows IT to secure and deliver virtual data without exposing confidential information. Key Use Cases of Data Virtualization Application Development / DevOps–Data virtualization can provide read-writeable virtual data copies that can be quickly spun up or torn down. They can be shared among teams or branched and versioned just like code, eliminating dependencies on physical ticketing systems to deliver key data. Test Data Management (TDM)–Data virtualization can complement or replace traditional test data management solutions such as subsetting or synthetic data generation. Fast delivery of full datasets can compress testing cycles and increase software quality. Backup and Disaster Recovery–Continuous data protection, granular recovery-point accuracy, and significantly reduced storage requirements make data virtualization an ideal fit for backups.
Cloud or Datacenter Migration–By more easily provisioning data for testing and validation environments, data virtualization eliminates dependencies between migration teams and production teams, reducing downtime and accelerating migration. Packaged Application Projects–Delivery of high-quality production data to development teams accelerates implementations, customizations, and upgrades for packaged applications such as ERP. Things Data Virtualization Is Not Server Virtualization – Server virtualization transformed data centers by enabling higher utilization of both server infrastructure and IT resources. Data virtualization affects the data underlying enterprise applications, bringing the efficiency of server virtualization to the data layer. Service Virtualization – Service virtualization technologies simulate the behavior of applications that are inaccessible for testing purposes because they are too complex, not yet fully functional, or outside of organizational control. Storage Cloning – Enterprise storage arrays provide efficient, read-write snapshots. However, they lack the functionality and transactional awareness that data virtualization offers to solve the complex data delivery requirements for application projects. Data Federation – These solutions provide an abstraction layer that maps multiple autonomous database systems into a single federated database, often for the purposes of analytics or reporting. While this may also be referred to as "data virtualization" these solutions do not aim to provide similar functionality or benefits. Replication – Replication solutions provide a mechanism to move data from one place to another, often for analytics or data protection purposes. However, they provide neither the storage-efficiency nor the agility benefits of data virtualization. Application Virtualization – encapsulates software from the underlying operating system, allowing it to run consistently regardless of the environment of installation. This provides consistency and ease benefits. Desktop Virtualization – gives end users the experience they need to perform their work, while removing dependencies on a particular PC, sometimes storing key data remotely. In their simplest implementations, they increase ease of use; more sophisticated solutions also decrease data risk. Network Virtualization – creates logical networks that are independent of site-specific constraints, improving data security or making distant sites available for local operations. Storage Virtualization – eliminates dependencies on physical storage, allowing efficient use of resources and flexible management.
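The three-step workflow and the provision/bookmark/rewind controls described above can be pictured with a short sketch. The Python below is a toy model only: the VirtualDataEngine class and its method names are invented for illustration and do not correspond to any vendor's actual product or API.

```python
# Minimal, self-contained sketch of the workflow described above:
# sync from a source, act as a single point of control, provision virtual copies.
# All class and method names are illustrative assumptions, not a vendor API.

class VirtualDataEngine:
    """Keeps shared snapshots of source blocks and hands out lightweight
    virtual copies that point at those blocks instead of duplicating them."""

    def __init__(self):
        self._next_id = 0
        self.snapshots = {}        # snapshot_id -> {block_name: payload}
        self.virtual_copies = {}   # copy_name   -> snapshot_id

    def sync_from_source(self, source_blocks):
        """Step 1: ingest production blocks; later syncs would add only deltas."""
        snap_id = self._next_id
        self._next_id += 1
        self.snapshots[snap_id] = dict(source_blocks)
        return snap_id

    def provision(self, copy_name, snapshot_id):
        """Step 3: a virtual copy is just a pointer to shared blocks, so it is cheap."""
        self.virtual_copies[copy_name] = snapshot_id

    def rewind(self, copy_name, snapshot_id):
        """Point an existing copy back at an earlier snapshot (bookmark/rewind)."""
        self.virtual_copies[copy_name] = snapshot_id

    def read(self, copy_name):
        return self.snapshots[self.virtual_copies[copy_name]]


if __name__ == "__main__":
    engine = VirtualDataEngine()
    snap_monday = engine.sync_from_source({"orders": "...production rows...", "users": "..."})
    engine.provision("dev-team-a", snap_monday)   # minutes, not days or weeks
    engine.provision("qa-nightly", snap_monday)   # shares the same underlying blocks
    print(sorted(engine.read("dev-team-a")))
```

The design point the sketch tries to make is that provisioning is only a pointer assignment; the heavy lifting happens once, during the sync step, which is why virtual copies can be handed out in minutes.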
<urn:uuid:10c8927a-fb16-47ef-81be-144dc1a4ca13>
CC-MAIN-2017-09
https://www.delphix.com/products/data-virtualization-engine/what-is-data-virtualization
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00600-ip-10-171-10-108.ec2.internal.warc.gz
en
0.865295
1,078
2.828125
3
Dutch water experts have teamed up with IBM to launch a new initiative called Digital Delta, which will investigate how to use Big Data to prevent flooding. The Netherlands is a very flat country with almost a quarter of its land at or below sea level, and 55 percent of the Dutch population is located in areas prone to flooding. The government already spends over 7 billion in water management every year, and this is expected to increase 1-2 billion by 2020 unless urgent action is taken. While large amounts of data are already collected, relevant data can be difficult to find, data quality can be uncertain and with data in many different formats, this creates costly integration issues for water managing authorities, according to IBM. The Digital Delta initiative will see Rijkswaterstaat (the Dutch Ministry for Water), local Water Authority Delfland, Deltares Science Institute and the University of Delft using IBM's Smarter Water Resource Management solution to combine data from new and existing water management projects, in order to prepare for imminent difficulties. Delft University of Technology will use IBM Intelligent Operations for Water to access weather predictions, real-time sensor data, topography and information about asset service history to make more informed and timely decisions on maintenance schedules. This will save costs while preventing flooding of tunnels, buildings and streets. Rijkswaterstaat and local water authorities will manage water balance data and share the information centrally through the Digital Delta platform, making it possible for the Dutch water system to optimise the discharge of water and improve the containment of water during dry periods, and prevent damage to agriculture. HydroLogic Research and IBM together with the Delfland Water Board will develop a scalable early flood warning method, through integration of a large amount of real-time measurement data from the water system, as well as weather information and water system simulation models. Meanwhile, Digital Delta will enable Deltares' Next Generation Hydro Software (which facilitates the numerical modeling of rivers, seas and deltas) to access large volumes of data in multiple formats, by maintaining a catalogue of frequently used data and converting it into a standardised form. IBM will use data visualisation and deep analytics to provide a real-time dashboard that can be shared across organisations and agencies. This will enable authorities to coordinate and manage response efforts and, over time, enhance the efficiency of overall water management. With better integrated information, IBM claims that water authorities will be able to prevent disasters and environmental degradation, while reducing the cost of managing water by up to 15 percent. "Aggregating, integrating and analysing data on weather conditions, tides, levee integrity, run off and more, will provide the Dutch government with detailed information that better prepares it to protect Dutch citizens and business, as well as homes, livestock and infrastructure," said Jan Hendrik Dronkers, Director General of Rijkswaterstaat. "As flooding is an increasing problem in many regions of the world, we hope that the Digital Delta project can serve as a replicable solution to better predict and control flooding anywhere in the world." Michael J Dixon, general manager of Global Smarter Cities at IBM added that the implications for this work are global, as cities around the world adopt smarter solutions to better manage the water cycle. 
"With this innovative collaboration, IBM is setting a worldwide example using the power of Big Data, analytics and optimisation to better manage water quality, flood risk and drought impact, while also stimulating new innovations in this crucial area of technology," he said. This story, "IBM Uses Big Data to Improve Dutch Flood Control" was originally published by Techworld.com.
<urn:uuid:8c7e68b6-a02c-4e13-a956-a0f3a80fc8cb>
CC-MAIN-2017-09
http://www.cio.com/article/2384573/big-data/ibm-uses-big-data-to-improve-dutch-flood-control.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00300-ip-10-171-10-108.ec2.internal.warc.gz
en
0.920903
743
3.125
3
The National Science Foundation (NSF) is investing $15 million to create a more robust, agile and secure Internet. The NSF announced on Monday it is splitting the money between three projects that are all aimed at developing better Internet architectures that are able to handle new technologies like the Internet of Things, smart cars and nano devices that are straining an old architecture. The funding is expected to move efforts from the design stage to pilot programs in large-scale, realistic environments. "NSF-funded research has been instrumental in advancing network technologies, beginning with the first large-scale use of Internet technologies to link researchers to the nation's supercomputing centers..., and along the way, helping to transition the network into the self-governing and commercially viable Internet we know today," Farnam Jahanian, head of the Computer and Information Science and Engineering Directorate at NSF, said in a statement. The three projects receiving the funding are led by Carnegie Mellon University, the University of California-Los Angeles and Rutgers University. According to the NSF, those projects are focused on developing new network architectures and networking concepts, such as communications protocols. The projects also are researching societal, economic and legal issues connected to the Internet's effect on society. "These investments are all about developing the next generation of the Internet that encompasses mobility, the Internet of Things and improved quality of service for applications like video and medical applications," said Patrick Moorhead, an analyst with Moor Insights & Strategy. "The Internet was initially created for universities sharing text-based data between mini-computers and mainframes. We are now in a world of billions of connected devices, some small enough to be digested and some delivering real-time video that need a different form of Internet connectivity." Dipankar Raychaudhuri, a professor at Rutgers University and principal investigator with the Internet research project there, said the Internet is at a crucial juncture. "The Internet is at a historic inflection point, with mobile platforms and applications fast replacing the fixed-host/server model, which dominated the Internet since its inception," Raychaudhuri said in a statement. "This fundamental shift presents a unique opportunity to develop an efficient, robust and secure next-generation Internet architecture in which wireless devices and mobile applications are primary drivers of a new design." In the next phase of the three research projects, each is expected to test its system in real-world settings. For example, Carnegie Mellon is set to test its new architecture in two network environments -- a vehicular network being deployed in Pittsburgh and in what the NSF describes as a large-scale video delivery environment. The UCLA team is partnering with Open mHealth, a non-profit healthcare ecosystem, and with UCLA Facilities Management, which is operating a major monitoring system for energy-efficient and secure buildings on the West Coast. As for Rutgers, researchers there are expected to hold three tests of the architecture, including one with a wireless service provider in Madison, Wis.; a content production and delivery network trial; and a context-aware public service weather emergency notification system with end-users in the Dallas/Fort Worth area. Moorhead said he's glad researchers are working on new architectures to support the Internet's new uses.
"To better support those applications, the Internet needs to identify the type of end point, the type of content and apply the right path -- cached, replicated or direct with the appropriate level of security and fallback addressing for reliability," he added. "To increase agility, they want to improve quality of service for end points that are changing conditions during the connection, like a connected car as it races down the highway transmitting and receiving different kinds of data. Prior generations of the Internet relied on the connection and content remaining stable." Moorhead said he doesn't expect to see new Internet architectures in real use for about 10 years. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about internet in Computerworld's Internet Topic Center. This story, "Feds offer $15M for research on the next Internet" was originally published by Computerworld.
<urn:uuid:37f676a2-9857-4f46-a038-b6c8244d8e25>
CC-MAIN-2017-09
http://www.itworld.com/article/2698985/networking/feds-offer--15m-for-research-on-the-next-internet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00300-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939739
896
2.734375
3
The smart home brings many convenient and time-saving features into people's lives: the ability to check the contents of your fridge while you're grocery shopping, to unlock your home's door for the plumber remotely, or to check whether you left the oven on and turn it on as you head home from the supermarket. All of these things have the potential to save time and effort, and our time is very valuable to most of us. The IoT and the smart home revolution show a lot of promise. But with the possibilities, as always, there are also threats. One of the biggest threats comes from the devices themselves: What personal data do they collect and how do they handle it? IoT devices are vulnerable – at least as much as any other internet-connected devices. On this blog we'll discuss security and privacy issues of connected devices and the IoT.
<urn:uuid:6ac3467b-d575-417a-8f49-2efed5480546>
CC-MAIN-2017-09
https://iot.f-secure.com/about/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00168-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957954
188
2.796875
3
Supply and demand tend to move towards equilibrium in markets with prices serving as the mechanism by which that equilibrium comes about. The imbalance between supply and demand exerts a downward or upward pressure on prices, the results of which, in turn, modify amounts supplied and amounts demanded in opposite direction until a price emerges that makes supply and demand equal. Until recently, mathematical economists focused almost exclusively on proving the existence and mapping the structure of a market in equilibrium. The last few years, however, have witnessed an explosion of research into the algorithmic processes by which markets come into equilibrium (or by which game players settle on strategies – which comes to much the same thing.) This study, which usually goes by the name of Algorithmic Game Theory (AGT,) has already had significant commercial impact through its systemization and expansion of our understanding of optimal auction design. As AGT matures further, however, it will also profoundly impact Infrastructure and Application Management (IAM) in a world that revolves around the access of cloud-based services, across mobile interfaces, by users who are embedded within social networks. • First, it will allow enterprises to design better chargeback and internal pricing systems, both with regard to ensuring that pricing schemes do indeed achieve the resource allocation and behaviour modification effects an enterprise has hoped for and with regard to justifying the very principles on which a resource allocation goal is based. • Second, it will support the development of automated dynamic decentralized resource allocation systems by working out the principles by which software agents can coordinate local actions without requiring a powerful, centralized manager of managers to ensure that scarce resources are fairly distributed across multiple business needs. • Third, it will provide the industry with an understanding of how to extract a coherent end to end performance picture across multiple cloud service providers by providing them with a set of incentives to be open without compromising their individual interests. • Fourth, algorithmic markets are themselves a kind of distributing computing model which could be deployed for the purposes of IAM. It is interesting to note that one of the issues that bedevils the algorithms capable of driving markets towards equilibrium is computational complexity. The theory of computational complexity segregates algorithms into classes depending upon how the rate at which resources are consumed grows with the size of algorithm inputs. One famous class of algorithms is called P (for Polynomial) which contains those algorithms where the rate of resource consumption (expressed as time it takes for the algorithm to execute) grows according to a polynomial function. P algorithms are generally considered to be efficient. Another famous class is called NP (for Non-deterministic Polynomial – think of an algorithm with branching paths, each path of which can grow with input according to a polynomial function.) The NP class has many famous members like the algorithm for determining an optimal path through a network and is generally considered to indicate an high degree of inefficiency or resource consumptiveness. It turns out that the complexity or resource consumptiveness of equilibrium discovery algorithms fall somewhere between the level characteristic of P and the level characteristic of NP. 
So while such algorithms are not as hungry as general network path optimization algorithms, they could, in theory, start consuming a lot of compute resources very quickly and without much warning, potentially undermining the second and fourth scenarios mentioned above. On the other hand, markets occur in the real world and get to equilibrium pretty rapidly, so even if the complexity here is a theoretical problem, it could be the case that in practice (in other words, in the neighbourhood of the input sizes we are actually likely to encounter) the chances of a resource-consumption blow-up are small.
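The price-adjustment story at the start of this post — excess demand pushing prices up, excess supply pushing them down — can be sketched in a few lines of code. This is a textbook tâtonnement loop, not an algorithm drawn from the AGT literature; the linear demand and supply curves, the step size and the starting price are all invented purely for illustration.

```python
# Toy tâtonnement: excess demand pushes the price up, excess supply pushes it down.
# The demand and supply curves are assumed linear purely for illustration.

def demand(price):
    return max(0.0, 100.0 - 2.0 * price)   # quantity demanded falls as price rises

def supply(price):
    return max(0.0, 5.0 * price - 10.0)    # quantity supplied rises with price

def find_equilibrium(price=1.0, step=0.05, tolerance=1e-6, max_iters=10_000):
    for _ in range(max_iters):
        excess = demand(price) - supply(price)
        if abs(excess) < tolerance:
            break
        price += step * excess              # price moves in the direction of excess demand
    return price

if __name__ == "__main__":
    p = find_equilibrium()
    print(f"equilibrium price ~ {p:.3f}, quantity ~ {demand(p):.3f}")
```

Even this toy loop hints at the complexity question raised above: convergence depends on the shape of the curves and the step size, and with many interdependent goods the number of iterations needed to settle can grow quickly.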
<urn:uuid:5dba070d-91ab-4d1e-b549-b3da5c3c11a8>
CC-MAIN-2017-09
http://blogs.gartner.com/will-cappell/ai-and-iam-markets-and-algorithms/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00520-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943272
853
2.625
3
What not to do with your data The federal government has long produced data by the truckload, and the open-data initiatives of the Obama administration have put more of it in the public eye than ever before. And although many agencies have moved beyond spreadsheets and CSV files to offer dashboards, maps and other visualization tools, the vast majority of those presentations are not very good. Nathan Yau is trying to change that. His book, "Data Points: Visualization that Means Something," does not focus on agencies in particular, though federal data is discussed and used in dozens of sample charts and graphs. Whether it is census data or a chart comparing the cost of cable television vs. Netflix and other "cord-cutting" options, the challenge remains the same: how to make a visual presentation clear enough to be easily comprehensible, yet informative enough to tease out real insights. Effective visualization is hard, Yau stressed, and requires a mix of math and design skills that few individuals possess. "Data Points" is not a technical how-to guide — though Yau has written that, too, with his 2011 book, "Visualize This." His goal this time is to walk would-be data visualizers through the process of design and analysis, from the ground rules of statistics and visual aesthetics to proven best practices for storytelling and common errors to avoid. Want to know whether to use a pie chart or a bar chart for a particular dataset, and what signals a map's color palette sends to the audience? "Data Points" has the answers. Curious about how to explore and display the correlation between two variables? Yau plots education data from all 50 states 18 ways and shows how different visuals can uncover very different patterns in a single dataset. With a mix of hard rules, best-practice examples, and data-visualization history that dates back to William Playfair and Florence Nightingale, Yau seeks to impart a mindset as much as a skill set. "The mark of a good graph is not only how fast you can read it," he wrote, quoting statistician William Cleveland, "but also what is shows. Does it enable you to see what you could not see before?" Kaiser Fung's new book, meanwhile, dispenses with the aesthetic visual storytelling questions entirely, instead drilling into the dangers of datasets themselves. In "Numbersense: Using Big Data to Your Advantage," Fung warns that "people in industry who wax on about Big Data take it for granted that more data begets more good.... [But] when more people are performing more analyses more quickly, there are more theories, more points of view, more complexity, more conflicts and more confusion. There is less clarity, less consensus and less confidence." In Fung's view, the core problem is not that the creators of a dataset are trying to mislead — though there are plenty of examples of that as well, many of which he has documented over the years on his "Junk Charts" blog. Rather, he said, most consumers of data are essentially innumerate and do not understand basic statistics or the countless judgment calls that go into developing a dataset. To fill those knowledge gaps, "Numbersense" presents eight chapter-length case studies. The consumer price index and monthly unemployment reports are placed under Fung's microscope, as are law school rankings, Groupon's economics, fantasy football stats and multiple firms' marketing efforts. Even the dieter's dreaded body mass index gets deconstructed. 
So although Fung praises the Bureau of Labor Statistics for the "impressive accuracy" of its payroll survey, he shows how the definition of unemployment is at least as important as the tallying process. When does an out-of-work individual slip out of the workforce? Do you have any idea what the "seasonal adjustment" entails? And what happens when an employer simply skips that month's survey? As Fung notes, "Statisticians have a cautionary saying: Absence of evidence is not evidence of absence." At its core, Fung's warning boils down to Mark Twain's frequent dictum that there are three kinds of lies: lies, damned lies and statistics. Yet a basic understanding of data and some healthy skepticism can go a long way, Fung promises. Know where the numbers come from and what assumptions were made in crunching them, and you'll avoid the lion's share of confusion and mischief. As Fung succinctly put it, "The key isn't how much data is analyzed, but how."
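Yau's question of when a bar chart beats a pie chart is easy to demonstrate with a quick script. The snippet below is only an illustration: the category labels and counts are invented, it assumes matplotlib is available, and it simply renders the same numbers both ways so the difference in readability is visible.

```python
# The same invented dataset rendered two ways: comparing near-equal values is easy
# on the bar chart and hard on the pie chart, which is Yau's point about chart choice.
import matplotlib.pyplot as plt

categories = ["A", "B", "C", "D"]        # placeholder labels
counts = [23, 27, 25, 25]                # invented values that are deliberately close

fig, (ax_bar, ax_pie) = plt.subplots(1, 2, figsize=(8, 3.5))

ax_bar.bar(categories, counts, color="steelblue")
ax_bar.set_title("Bar: small differences are visible")
ax_bar.set_ylabel("Count")

ax_pie.pie(counts, labels=categories, autopct="%1.0f%%")
ax_pie.set_title("Pie: nearly equal slices blur together")

plt.tight_layout()
plt.savefig("chart_choice_demo.png")     # write to a file rather than assuming a display
```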
<urn:uuid:5b39e164-9f7a-4134-a5b2-2a26e8aba591>
CC-MAIN-2017-09
https://fcw.com/articles/2013/09/16/bookshelf-what-not-to-do-with-data.aspx?admgarea=TC_ExecTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00044-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955335
1,122
2.703125
3
Various Linux distributions these days have started rolling out fancy GUIs to attract end users. Though this is a good strategy, working on Linux without understanding and using command-line utilities is still not possible. Someone who uses Linux should know at least some basic commands that are required every now and then to accomplish trivial tasks. So, in this article, we will discuss a few commonly used (but must-know) Linux commands with an example for each. 1. Linux ps command This command is used to provide information on the processes currently running on the system.
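Since the walkthrough above is cut short, here is a small example of the kind of process inspection ps enables, driven from Python rather than the shell. It assumes the common `ps aux` column layout on a Linux system and simply lists the five processes using the most memory; the script is illustrative and is not part of the original article.

```python
# Minimal sketch: call `ps aux` and show the five most memory-hungry processes.
# Assumes the typical Linux `ps aux` column layout (USER PID %CPU %MEM ... COMMAND).
import subprocess

result = subprocess.run(["ps", "aux"], capture_output=True, text=True, check=True)
lines = result.stdout.strip().splitlines()
header, rows = lines[0], lines[1:]

def mem_percent(row):
    return float(row.split(None, 10)[3])   # %MEM is the 4th column in `ps aux`

print(header)
for row in sorted(rows, key=mem_percent, reverse=True)[:5]:
    print(row)
```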
<urn:uuid:c24c67ae-1394-4885-a7c7-b302d605ebb6>
CC-MAIN-2017-09
https://www.ibm.com/developerworks/community/blogs/58e72888-6340-46ac-b488-d31aa4058e9c/tags/locate?sortby=0&maxresults=30&lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00044-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923418
108
2.609375
3
When Axel Kohlmeyer, the associate director of the Institute for Computational Molecular Science at Temple University, needed a large cluster of computers on which to run his models of physical molecular interactions, he could not always get time on the supercomputers managed by the major academic centers. "To do this number of calculations for the phase diagrams, you would not have the computer power locally, so you would have to apply to the supercomputing centers, but our research is not necessarily attractive to them, so we would not get the time," Kohlmeyer says. [ For timely data center news and expert advice on data center strategy, see CIO.com's Data Center Drilldown section. ] Yet, thanks to specialized graphics chips, or graphics processing units (GPUs), used by video-game consoles and high-end PCs, Kohlmeyer and his team were able to continue their research work. A server with six of the specialized processors allowed his group to carry on with their computationally intensive work without continually begging CPU cycles from the academic computing facilities. "We can now do problems that we couldn't do before," he says. Business Apps Tap In For technical applications, GPU-based computing is a natural fit. Yet, the specialized servers and supercomputing clusters are also finding their way into data centers in the business world. The reason is easy to see: For many difficult problems, graphics processing unit (GPU) clusters can deliver up to 100 times more calculations than a typical system with the same number of processors, take up less space and use far less power. Among the applications targeted by GPU clusters include seismic processing to calculate the best places to drill for oil and gas. Search companies use GPU clusters to speed up the specialized operations that rank and sort search results. And investment firms use the clusters to speed up their analysis of various financial products. "Two factors are of interest: Whether there is a lot of data involved and whether there is a lot of computation involved," says Sumit Gupta, senior manager for NVidia's Tesla GPU products. "If those are true, the GPUs can likely be involved." One oil-and-gas exploration firm replaced their 2,000-CPU cluster with a 32-GPU cluster, delivering the same performance but requiring less than 4 percent of same volume and 4 percent of the same energy as the company's previous system. In total, GPUs could perform on par with the previous system but at 1/20th the cost. The promise of GPU-based clusters for advanced computing can be seen in the latest Top500 supercomputing rankings released this month. The Chinese Nebulae supercomputer, which uses an estimated 4,600 GPUs and twice that many CPUs, grabbed the No. 2 slot on the list. Three other systems, two from China and one from Japan, also placed in the Top500 list of supercomputers. Such performance gains were made possible by a dramatic shift in the architecture of graphics processors. Where GPUs used to speed up a fixed pipeline of graphical computations, chip architectures have become more generalized. Now each GPU consists of a large parallel array of small processors, says Patricia Harrell, director of stream computing for AMD. "If you look at a graphics processor 10 years ago, you had hardware that was doing something at a fixed step in the pipe line," she says. "Over the years, the hardware became more general purpose and flexible." Academic research has benefitted tremendously from GPU clusters. 
Kohlmeyer's team at Temple University now uses a six-GPU cluster in their data center to run many of their simulations, up to 60 times faster than their previous server, allowing them to quickly test new scenarios. Such small systems could help research groups immensely, NVidia's Gupta says. "Computing today is a bottleneck for science," he says. "We are not providing enough computing today for scientists and it is slowing down innovation." GPUs Can't Solve All Problems With all the advantages for specialized data centers, GPUs will not necessarily solve run-of-the-mill large-scale problems. Calculating large data sets is something at which GPUs excel, but problems that have large data dependencies (and thus a lot of branching instructions) can be problematic. "The challenge really is that people are used to serial processing, that they have solved the problem and written an algorithm to handle the data sequentially," says NVidia's Gupta. Reframing problems to run on massively parallel systems is not easy. Programmers will have to remember techniques that they were told to forget in the 90s, says Kohlmeyer. "It is not realistic to assume that all applications will run well on GPUs, especially not right now," he says. "To some degree you have to rewrite your software and rethink your strategies for efficient parallelism to get the most performance."
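The distinction Gupta and Kohlmeyer draw — large data sets with uniform arithmetic suit GPUs, data-dependent branching does not — can be illustrated on any machine. The sketch below uses NumPy's vectorised operations merely as a stand-in for GPU-style data parallelism; the array sizes, the arithmetic and the branch-heavy loop are all arbitrary choices made for demonstration and say nothing about real GPU speedups.

```python
# Illustration of data-parallel vs. branch-heavy work, with NumPy vectorisation
# standing in for GPU-style parallelism. Sizes and operations are arbitrary.
import time
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(2_000_000)

# Data-parallel: the same arithmetic applied to every element, no data-dependent branches.
start = time.perf_counter()
parallel_result = np.sqrt(data) * 2.5 + 1.0
parallel_elapsed = time.perf_counter() - start

# Branch-heavy serial loop: each step depends on a comparison, which maps poorly to GPUs.
start = time.perf_counter()
kept = []
running = 0.0
for x in data[:200_000]:                 # only a slice, or the loop takes far too long
    if x > running / (len(kept) + 1):    # data-dependent branch
        running += x
        kept.append(x)
serial_elapsed = time.perf_counter() - start

print(f"vectorised pass over 2M values: {parallel_elapsed:.4f}s")
print(f"branchy loop over 0.2M values:  {serial_elapsed:.4f}s")
```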
<urn:uuid:3691ed2f-3091-436c-93b4-fb67cf10e96c>
CC-MAIN-2017-09
http://www.cio.com/article/2417562/data-center/gaming-chips-score-in-data-centers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00092-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957936
1,011
2.515625
3
Mapping Every Road Casualty Across America / November 29, 2011 Between 2001 and 2009, nearly 370,000 people died on U.S. roads, and an interactive map from ITO World – using official data from the National Highway Traffic Safety Administration – lets users dial down and view each and every one. Each dot -- in blue, green, orange, purple and black -- represents a life lost, and details age, year of crash, male or female, and whether multiple persons were killed. Image © ITO World Ltd. Fatality data from FARs (public information). Base mapping © MapQuest 2011, map data © OpenStreetMap and contributors CC-BY-SA.
<urn:uuid:413e5b1d-ba29-4e2f-9270-44ef03752d2a>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-Mapping-Every-Road-Casualty-Across-America-11292011.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00092-ip-10-171-10-108.ec2.internal.warc.gz
en
0.887233
141
2.578125
3
Pushing Parallel Barriers Skyward As much data as there exists on the planet Earth, the stars and the planets that surround them contain astronomically more. As we discussed earlier, Peter Nugent and the Palomar Transient Factory are using a form of parallel processing to identify astronomical phenomena. Some researchers believe that parallel processing will not be enough to meet the huge data requirements of future massive-scale astronomical surveys. Specifically, several researchers from the Korea Institute of Science and Technology Information including Jaegyoon Hahm along with Yongsei University’s Yong-Ik Byun and the University of Michigan’s Min-Su Shin wrote a paper indicating that the future of astronomical big data research is brighter with cloud computing than parallel processing. Parallel processing is holding its own at the moment. However, when these sky-mapping and phenomena-chasing projects grow significantly more ambitious by the year 2020, parallel processing will have no hope. How ambitious are these future projects? According to the paper, the Large Synoptic Survey Telescope (LSST) will generate 75 petabytes of raw plus catalogued data for its ten years of operation, or about 20 terabytes a night. That pales in comparison to the Square Kilometer Array, which is projected to archive in one year 250 times the amount of information that exists on the planet today. “The total data volume after processing (the LSST) will be several hundred PB, processed using 150 TFlops of computing power. Square Kilometer Array (SKA), which will be the largest in the world radio telescope in 2020, is projected to generate 10-100PB raw data per hour and archive data up to 1EB every year.” It may seem slightly absurd from a computing standpoint to plan for a project that does not start for another eight years. Eight years ago, the telecommunications world was still a couple of years away from the smartphone. Now the smartphones talk to us. The big data universe grows even faster, possibly as fast as the actual universe. It is never a bad idea to identify possible paths to future success. Eight years from now, quantum computing may come around and knock all of these processing methods out of the big data arena. However, if that does not happen, cloud computing could potentially advance to the point where it can support these galactic ambitions. “We implement virtual infrastructure service,” wrote Hahm et al in explaining their cloud’s test infrastructure, “on a commodity computing cluster using OpenNebula, a well-known open source virtualization tool offering basic functionalities to have IaaS cloud. We design and implement the virtual cluster service on top of OpenNebula to provide various virtual cluster instances for large data analysis applications.” According to Hahm et al, the advantage essentially comes from using computing power from a cloud to act as one large computing entity, as opposed to carefully splitting up the task over parallel threads. It is akin to taking an integral of a function over the limits of integration as opposed to individually counting up all of the slices made up of height times little change in x. “This massive data analysis application requires many computing time to process about 16 million data files. Because the application is a typical high throughput computing job, in which one program code processes all the files independently, it can gain great scalability from distributed computing environment. 
This is the great advantage when it comes with cloud computing, which can provide large number of independent computing servers to the application.” To test this, the group analyzed data from SuperWASP, an England-based astronomical project with observatories in Spain and South Africa. Specifically, they examined 16 million light curves, which are designed to locate extra-solar planetoids based on differences in light emanated from the potential planet’s host star. According to Hahm et al, “In this experiment we can learn that the larger and less input data files are more efficient than many small files when we design the analysis on large volume of data.” “With the successful result of whole SuperWASP,” Hahm et al concludes, “data analysis on cloud computing, we conclude that data-intensive sciences having trouble with large data problem can take great advantages from cloud computing.” Perhaps cloud computing has the advantage over the petabyte scale. But it seems likely that something completely different will have to be developed between now and 2020 before an Exabyte can be processed in a year.
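The "one program code processes all the files independently" pattern the authors describe is classic high-throughput, embarrassingly parallel work, which is easy to sketch. In the toy example below the light curves are synthesised, the dip-detection rule (a drop below 98% of the median brightness) is invented, and Python's multiprocessing pool stands in for the cloud's virtual cluster — none of it reflects the actual SuperWASP pipeline.

```python
# Sketch of the "one code, many independent files" pattern described in the paper:
# each light curve is analysed on its own, so the work scales out trivially.
# The dip-detection rule and the synthetic data are invented for illustration only.
from multiprocessing import Pool
import random

def load_light_curve(file_id):
    """Stand-in for reading one light-curve file: brightness samples over time."""
    random.seed(file_id)
    curve = [1.0 + random.gauss(0, 0.002) for _ in range(500)]
    if file_id % 7 == 0:                     # plant a transit-like dip in some curves
        for i in range(240, 260):
            curve[i] -= 0.03
    return curve

def has_transit_dip(file_id, threshold=0.98):
    """Flag a curve whose minimum drops below `threshold` of its median brightness."""
    curve = load_light_curve(file_id)
    median = sorted(curve)[len(curve) // 2]
    return file_id, min(curve) < threshold * median

if __name__ == "__main__":
    file_ids = range(1_000)                  # stand-in for the 16 million files
    with Pool() as pool:
        flagged = [fid for fid, hit in pool.map(has_transit_dip, file_ids) if hit]
    print(f"{len(flagged)} of {len(file_ids)} curves flagged for follow-up")
```

Because every file is independent, the pool can be replaced by any number of cloud workers without changing the analysis code, which is exactly the scalability argument the paper makes.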
<urn:uuid:16ec2d8d-31ff-4def-91b4-656d78e9db86>
CC-MAIN-2017-09
https://www.datanami.com/2012/09/12/pushing_parallel_barriers_skyward/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00388-ip-10-171-10-108.ec2.internal.warc.gz
en
0.92925
930
3.046875
3
Internet Explorer Flaw Can Cause Zero-Day Exploit A security breach of Internet Explorer could occur if a hacker hijacks session cookies from users' visits to a website, according to Rosario Valotta, an Italian security researcher. In a process Valotta has coined "cookiejacking," the stolen data can be used to carry out a zero-day attack. On successfully compromised systems, an attacker can install malware, send messages or forge clicks. The researcher warns that this flaw affects all versions of Microsoft's Internet browser. The exploit only occurs when a user drags and drops an object across the PC screen. Valotta was able to test this by creating a Facebook game where users dragged articles of clothing to reveal an undressed photo of a woman. "I published this game online on Facebook and in less than three days, more than 80 cookies were sent to my server," Valotta told Reuters. "And I've only got 150 friends." To leverage this into a zero-day attack, a hacker would need to create an IFrame element in a website and have a user select the entire cookie. Using Valotta's Facebook demonstration as an example, the cookie would be hidden in the article-of-clothing object. Once a user drags the piece of clothing, this violates the browser's cross-zone interaction policy and allows the attacker access to the victim's system. Adding another level of difficulty, the attack requires the hacker to know a potential victim's Windows username and which OS version is being used -- before getting the user to select the entire content of the harmful cookie. While Microsoft is investigating the discovered flaw, Microsoft spokesman Jerry Bryant believes there is little risk of the vulnerability being exploited. "Given the level of required user interaction, this issue is not one we consider high risk," said Bryant.
<urn:uuid:994c5463-6aa0-4537-9c16-409c6f37d720>
CC-MAIN-2017-09
https://mcpmag.com/articles/2011/05/27/internet-explorer-flaw.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00088-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915413
383
2.734375
3
Social Security Number Prediction Makes Identity Theft EasyPosting your birthday on Facebook could help identity thieves predict your Social Security number, a new study finds. Online information about your date of birth and place of birth could allow identity thieves to guess your Social Security number, according to a paper by two Carnegie Mellon researchers. The paper, published on Monday in The Proceedings of the National Academy of Sciences, details the "unexpected privacy consequences" that arise when disparate data sources can be correlated. The authors of the study, Alessandro Acquisti, an associate professor of information technology and public policy at CMU's Heinz College, and Ralph Gross, a postdoctoral researcher, demonstrate that Social Security numbers can be predicted using basic demographic data gleaned from government data sources, commercial databases, voter registration lists, or online social networks. Knowing a person's Social Security number (SSN), name, and date of birth is typically enough to allow an identity thief to impersonate that person for the purpose of various kinds of fraud. Thus, being able to easily guess a person's SSN presents a significant security risk. Acquisti and Gross estimate that 10 million American residents publish their birthdays in online profiles, or provide enough information for their birthdays to be inferred. The accuracy with which SSNs can be predicted in 100 attempts varies, based on the availability of online data and on the subject's date and place of birth, from 0.08% to over 10% for some states. Such odds may not seem particularly dangerous, but an attacker could use a computer program to guess and guess again, over and over. With 1,000 attempts, a SSN becomes as easy to crack as a 3-digit PIN. Among those born recently in small states, the researchers were able to predict SSNs with 60% accuracy after 1,000 attempts. In their paper, Acquisti and Gross pose a hypothetical scenario in which an attacker rents a 10,000 machine botnet to apply for credit cards in the names of 18-year-old residents of West Virginia using public data. Based on various assumptions, such as the number of incorrect SSN submissions allowed before a credit card issuer blacklists a submitting IP address (3), they estimate that an identity thief could obtain credit card accounts at a rate of up to 47 per minute, or 4,000 before every machine in the botnet got blocked. Based on an estimated street price that ranges from $1 to $40 per stolen identity, identity thieves in theory could make anywhere from $2,830 to $112,800 per hour. As a temporary defensive strategy, the authors recommend that the Social Security Administration fully randomize the assignment of new SSNs, instead of randomizing only the first three digits, as the agency recently proposed. But, they note, such measures would not protect existing SSNs. They also suggest that legislative defenses, such as SSN redaction requirements, won't work either. "Industry and policy makers may need, instead, to finally reassess our perilous reliance on SSNs for authentication, and on consumers' impossible duty to protect them," the paper concludes.
<urn:uuid:0ebf251f-2770-4007-8740-04cc7b217bf2>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/social-security-number-prediction-makes-identity-theft-easy/d/d-id/1081144?piddl_msgorder=asc
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00560-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932485
643
2.65625
3
NASA scientists are working to bring the Mars Reconnaissance Orbiter, which has been orbiting the Red Planet for eight years, back online after the spacecraft suffered a glitch Sunday. The orbiter put itself into safe mode and swapped from its main computer to a backup, NASA said. "The spacecraft is healthy, in communication and fully powered," said Dan Johnston, NASA's project manager for the orbiter. "We have stepped up the communication data rate, and we plan to have the spacecraft back to full operations within a few days." The orbiter is one of several NASA robotic machines that are studying the Red Planet. The spacecraft has been working in conjunction with the Mars rovers Curiosity and Opportunity, and another orbiter, the Odyssey. In addition to studying Mars, the Reconnaissance orbiter relays data and images from Curiosity and Opportunity back to Earth, and relays commands from Earth to the rovers. Sunday's glitch has kept NASA from receiving information about the movements of the two rovers. Scientists also have been unable to send new commands to the rovers. This isn't the first time the orbiter has put itself into safe mode. NASA reported that this has happened four other times in the spacecraft's eight years in Mars orbit. The last time it happened was in November 2011. NASA's tech team never discovered what problem sent the orbiter into safe mode on the other occasions. This time, though, the orbiter went into safe mode after switching from a main radio transponder to a backup. The transponder is used to gather signals from the rovers and send those signals back to Earth. According to NASA, scientists won't try to switch the orbiter back to the main transponder but will try to figure out why it made the switch. The Reconnaissance orbiter began its work in March 2006 and completed a two-year mission. It is now on its third extension. This story, "NASA tries long-distance repair of Mars orbiter" was originally published by Computerworld.
<urn:uuid:b3fb3b9a-c7d1-4f24-8298-fa4c9578e548>
CC-MAIN-2017-09
http://www.networkworld.com/article/2175108/data-center/nasa-tries-long-distance-repair-of-mars-orbiter.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00612-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956095
525
2.71875
3
Research on Android malware called KorBanker has uncovered a treasure trove of text messages that include authentication codes for Google and Facebook and VPN passwords. How the thieves made use of the data is not known. However, FireEye researcher Hitesh Dharmdasani assumes cybercriminals have figured out a way to exploit it for financial gain. "It is potentially bad that someone else apart from the intended recipient would have it," Dharmdasani said Thursday of the stolen data. Dharmdasani knew Android malware was used to intercept message communications on smartphones. However, he did not know the kind of data collected until he found the MySQL database in a command-and-control server for KorBanker. FireEye has known about the malware, found mostly in South Korea, for about a year. KorBanker's original purpose was to steal online banking credentials. The malware is hidden in another app, such as a fake Google Play app, and offered for free on an online store. While most Android smartphone owners in the U.S. download apps from the official Google Play store, people in Asia regularly use less safe third-party stores, which often contain malware. When the smartphone user installs the fake app, KorBanker overrides the online banking app. Clicking on the hijacked banking app launches a screen asking users if they want to install an update. Answering yes gets another screen asking for the username and password. Over the last two months, the creators of KorBanker have expanded its thievery to text messages. The data collected by the app included VPN passwords and temporary authentication codes for Google, Korea Mobile, Facebook, Seoul Credit Rating & Information Co. and SK Telecom. Authentication codes include temporary identification numbers used in two-factor authentication and password resets. The research proves that valuable information can be collected from text messages, which should act as a warning to businesses, Dharmdasani said. "There is sensitive information, it is being stolen and it can be used," he said. In other Android news, a researcher has compiled a list of more than 350 apps that fail to perform SSL certification validation over HTTPS, making them vulnerable to man-in-the-middle attacks. While this security flaw is known, Will Dorman with the CERT Coordination Center at Carnegie Mellon University is searching for vulnerable apps on Google Play and CERT is notifying the vendors.
<urn:uuid:db98798a-5874-48f0-afbc-18b8fd6ce29f>
CC-MAIN-2017-09
http://www.csoonline.com/article/2603022/data-protection/android-malware-stash-of-text-messages-found.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00556-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926417
501
2.53125
3
The Shylock malware, also known as Caphaw, made its first appearance in 2011. The malware was named Shylock because excerpts from Shakespeare's The Merchant of Venice appear in its code. Shylock is not vastly different from a number of other recent malware families; however, it is worth examining here as a review of current malware capabilities, especially advanced detection-avoidance behaviours. There was a flurry of Shylock activity around the beginning of 2013, and recently it has made another resurgence. Overall, there have been three waves of Shylock activity. Initially, infections appeared in Russia, Turkey, Denmark and Italy. In a second wave, UK banks such as Barclays, HSBC, Santander, RBS, and Natwest were targeted. Now, a lot of the activity is aimed at US banks such as Chase Manhattan, BoA, Citi Private, Wells Fargo, and Capital One. Shylock is a browser-based attack with connections to a Command-and-Control (C&C) server. The purpose of the malware is to steal banking credentials. Communication between the infected system and the C&C uses self-signed certificates, is encrypted with SSL, and the C&C is masked through quasi-random locations, making conventional intrusion detection systems (IDS) ineffective. Shylock is polymorphic – it randomly alters its signature, making conventional signature-based anti-virus systems ineffective. It has also been reported that Shylock is able to detect the commencement of AV scanning on the system; the malware then deletes its own files and registry entries to avoid detection, running only in memory. Shylock is then able to hook into the PC shut-down process, enabling it to restore its own files and registry entries after the AV scanning has completed. Another interesting facet of Shylock is that it shuts down if it detects it is running in a virtual environment. The purpose of this is to make analysis of the malware difficult, as malware researchers generally analyse malware in a virtual environment. Shylock can download upgrade modules from the C&C server, such as modules that permit it to spread. One plugin allows the malware to record streaming video of the user's banking session. It can also download a key-logging plug-in. The primary infection mechanism of Shylock is to attack vulnerabilities in an old version of the Java runtime on old XP systems. Primary infections to date have all been on XP systems running an old version of Java. An update module of the malware allows it to spread through a secondary mechanism by infecting files in shared folders on a LAN, files on USB flash drives, and through Skype. It is through this secondary mechanism that later operating systems could be vulnerable. A notable aspect of Shylock is the boldness of the attackers' real-time interaction with victims. An update module of Shylock changes the telephone numbers on a bank's web page to the attacker's telephone number. Presumably Shylock's operators would encourage callers to hand over their bank credentials. It would be interesting to phone one of these numbers to evaluate the attackers' abilities in passing themselves off as legitimate bank staff. In another aspect, the attackers masquerade as bank staff and encourage victims to communicate with them via a fake customer service chat window on the bank website. William Shakespeare wrote in The Merchant of Venice: "All that glitters is not gold / Often have you heard that told." On internet-connected systems, all may not be as it appears. Information security's challenge is to protect against advanced threats such as Shylock.
Due to polymorphism, conventional signature-based anti-virus technology is ineffective, and due to encryption, intrusion detection methods are ineffective. One defence mechanism that is effective against Shylock is secure browser technology. An effective solution is a secure browser which prevents man-in-the-browser attacks, man-in-the-middle attacks, DNS attacks and key logging attacks.
<urn:uuid:d7f4303e-0892-4b3f-9de9-47f90bfbe31b>
CC-MAIN-2017-09
https://dwaterson.com/2013/11/04/resurgence-of-shylock/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00556-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944626
832
2.5625
3
Will Masdar City Be a Global Model for Sustainability? / October 11, 2011 What would a sustainable city look like? The answer lies in Masdar City, a 2.3 square mile carefully planned community in the United Arab Emirates, which relies on solar energy and other renewable energy sources. Construction on the city began in 2006 and is projected to be complete by 2025 at a $20 billion price tag. Everything in Masdar City has been meticulously designed, constructed and tested to maximize the region’s resources and advocate eco-friendly practices, which is reflected in the use of battery-powered driverless vehicles, photovoltaic panels and an LED tower that changes color to alert people when too much energy is being consumed. Even businesses are carefully selected and must comply with the city’s low carbon mandate. Photos courtesy of MasdarCity.ae
<urn:uuid:e1ab6694-26d9-4a44-be3f-2875be73944b>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-Will-Masdar-City-Be-a-Global-Model-for-Sustainability-10042011.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00608-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950199
178
2.65625
3
When it comes to wildlife conservation, land managers have relied mostly on expert opinion to determine whether a particular species might be endangered. There hasn't been a systematic way to know what is happening with wildlife using real data -- until now.

In 2013, Conservation International (CI) and Hewlett-Packard launched HP Earth Insights, a partnership that uses HP big data technology to improve the speed and accuracy of analysis of data on the biodiversity of tropical regions. The findings provide early warnings about threats to species and tropical forests.

Data is collected by the Tropical Ecology Assessment and Monitoring (TEAM) Network, a coalition that includes CI, the Smithsonian Institution and the Wildlife Conservation Society. Strategically placed sensor cameras take photos when an animal walks by -- producing more than 500,000 images per year. A data management tool allows users to take those images and collect information from them. An analytics system is then used for modeling and calculating statistics, taking into account variables such as current environment, human presence and land use changes.

"The correlation data can show on a per-species basis whether or not there's a significant impact" in a particular area, says Eric Fegraus, director of information systems for the TEAM Network. For instance, "if a species' downward trend correlates with human presence, then maybe we need to rethink where our trails in our park systems go," he says.

The project has amassed more than 6 million climate measurements, 2.3 million camera trap photos and 4TB of critical biodiversity data. Manu National Park in Peru is one of 16 sites in 15 countries that use the data. "It is the only study that is monitoring animals for five years straight in a methodical way in the same place," says Patricia Alvarez-Loayza, the TEAM Network's site manager for Manu National Park. She notes that most studies end after one to three years and are not conclusive.

This story, "Conservation International," was originally published by Computerworld.
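As a rough illustration of the per-species analysis described above, the Python sketch below correlates monthly camera-trap detection counts for one species with an index of human presence at the same site. It is not the TEAM Network's actual pipeline: the numbers, the column meanings, and the choice of a simple Pearson correlation plus linear trend are assumptions made for the example; the real system models many more covariates.

    import numpy as np

    # Hypothetical monthly records for one species at one site:
    # detections = camera-trap detections, human_index = relative human presence.
    detections  = np.array([14, 12, 11, 9, 8, 8, 6, 5, 5, 4, 3, 3], dtype=float)
    human_index = np.array([ 1,  2,  2, 3, 4, 4, 5, 6, 6, 7, 8, 8], dtype=float)

    # Pearson correlation between detections and human presence.
    r = np.corrcoef(detections, human_index)[0, 1]

    # A simple least-squares trend line for the detection counts over time.
    months = np.arange(len(detections))
    slope, intercept = np.polyfit(months, detections, 1)

    print(f"correlation with human presence: r = {r:.2f}")
    print(f"detection trend: {slope:.2f} detections/month")
    # A strongly negative r alongside a negative slope is the kind of early-warning
    # signal that would prompt managers to look at trail placement or land use.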
<urn:uuid:f8f66dd9-5467-41fd-96de-0035b417f77a>
CC-MAIN-2017-09
http://www.itnews.com/article/2977444/data-analytics/conservation-international.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00484-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915429
406
3.734375
4
Newly introduced federal legislation would establish a national definition of telehealth and clarify which electronic methods can be used to safely deliver health-care services. The Telehealth Modernization Act of 2013 provides principles to guide states that are considering their own telehealth policies.

Introduced in December and sponsored by Reps. Doris Matsui, D-Calif., and Bill Johnson, R-Ohio, the legislation aims to standardize what telehealth is and promote its use by health-care professionals in the U.S. In an interview with Government Technology, Johnson explained that the bill – H.R. 3750 – isn’t a mandate directed at states. Instead, it encourages them to look further into telehealth as a viable option for physicians and patients to maximize the quality of health care.

A number of states have statutorily defined telehealth over the last several years as technology advancements have enabled doctors and patients to meet virtually with confidence. But according to a joint statement issued by Matsui and Johnson, 50 sets of rules for what types of care can be provided can often lead to uncertainty for providers. The Telehealth Modernization Act should provide principles and guidance to help standardize terminology on a national scale.

“It’s not a regulatory burden like an EPA rule or Obamacare,” Johnson said of the bill. “It is meant to get the states to start thinking about setting standards for telehealth, so as technology innovations make it more available, that they’re all stalking the same code and shooting for the same standards.”

But if the bill isn’t a mandate for change, what if a state’s definition of telehealth differs significantly from the federal one? Would states have to amend their own laws? Johnson admitted he didn’t look at everything on the books from all 50 states, but said he believes that if a state is already engaged in enabling telehealth, it would likely want to look at its laws to see how they can be more facilitative.

Johnson added that his intent is not to tie any potential future government funding for telehealth to adherence to H.R. 3750. He said he’s not in favor of forcing states to comply with a national definition of the term and is more for letting the private market drive telehealth innovation and standards.

On the subject of insurance companies and how they’d interpret the Act, Johnson added that he’s not worried that the definition of activities that constitute a telehealth “visit” could be construed to include emails and phone calls. He doesn’t envision being at that level yet, and reiterated that the bill is simply a vehicle to define what telehealth is and provide a framework for states.

“I think the states and the private market are much closer to what the needs of their citizens are,” Johnson said. “They’ll come up with the right answers. The right answers rarely come from inside the Beltway here in D.C."
<urn:uuid:50adde19-07b5-4930-837f-e36a9656afab>
CC-MAIN-2017-09
http://www.govtech.com/Feds-Draft-Legislation-to-Define-Telehealth-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00077-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955689
638
2.515625
3
WiGig, or IEEE 802.11ad, is fully a part of Wi-Fi. It is Wi-Fi in 60 GHz. This year will be pivotal for WiGig, with the first access points and devices incorporating the technology. Next year will be a breakout year for WiGig: several flagship smartphones with WiGig will ship in higher volumes, along with more PCs, access points, VR headsets, and other products with WiGig. Here’s why:
- Millimeter wave spectrum is hot. Advances in radio and antenna technology now allow this spectrum to be used for access and not just for point-to-point connections. The complex methods of strengthening the signal in this spectrum are precisely what also provide it with other advantages – narrow beams are formed between WiGig radios, which allow more WiGig connections to be used on the same frequency at the same time in the same vicinity. This is why 5G mobile technology will be designed around centimeter and millimeter wave spectrum.
- The 60 GHz band has a lot of spectrum available. The 2.4 GHz band is becoming increasingly crowded; it is used by proprietary technologies, Wi-Fi, Bluetooth, and ZigBee. This is why the Wi-Fi market has shifted to dual-band products – especially dual-band 802.11n/ac. The 5 GHz band has much more spectrum available – multiples of the spectrum available for use in the 2.4 GHz band. The 60 GHz band has even more. Respectively, the amounts of spectrum available in the 2.4 GHz, 5 GHz, and 60 GHz bands are 60 MHz (20 MHz x 3 non-overlapping channels), about 600 MHz (this varies by country), and about 8 GHz (supporting 3 or 4 ultra-wideband channels in most countries).
- Ultra-wideband channels allow for much higher data rates. 802.11b, 802.11g, and 802.11n use 20 MHz channels in the 2.4 GHz band. In the 5 GHz band, 802.11n uses up to 40 MHz channels, 802.11ac uses up to 80 MHz channels, and 802.11ac Wave 2 can use up to 160 MHz channels. Greater channel width allows for much higher data rates. In the 60 GHz band, 802.11ad uses channels that are about 2 GHz wide. Wi-Fi has gone ultra-wideband, and the same is true for 5G.
- WiGig products are ramping up. More brands and models of PCs can be ordered with WiGig as an option. Access points with WiGig are hitting the market, and smartphones with WiGig are imminent. VR headsets with cords need WiGig to ditch the cord, and several WiGig chipset vendors are in discussions with all of the VR gear vendors.
- There are many chipset vendors that will support this market. Not all of them have chipsets ready yet; Intel, Nitero, Peraso, SiBEAM (a part of Lattice Semiconductor), and Qualcomm do, and more will be ready soon, including Broadcom, Marvell, MediaTek, and others.

To sum it up, WiGig will use the 60 GHz band, where multiple ultra-wideband channels are available, and provide Wi-Fi with faster data rates and, more importantly, a massive increase in network capacity. The ecosystem for WiGig spans the mobile, PC, and consumer electronics industries across the consumer, enterprise, and service provider markets. It is supported by a number of large and small WiGig chipset vendors and is a part of the bigger Wi-Fi ecosystem. WiGig's increased availability in portable PCs has provided a base for the technology, and 2016 is a pivotal year as Wi-Fi access points and smartphones enter the picture. The foundation is being laid in 2016 for 2017 to be a breakout year for WiGig.
ABI Research details the whole WiGig ecosystem, driving factors, and some of our market forecasts in our white paper on this topic here: https://www.abiresearch.com/pages/mu-mimo-and-802-11ad/
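The spectrum figures quoted above can be turned into a rough back-of-the-envelope comparison. The Python sketch below uses only the band widths and channel widths mentioned in the article to show how many non-overlapping channels each band supports; peak data rates are deliberately left out because they depend on modulation, coding, and antenna configuration, which this simple calculation does not model.

    # Approximate usable spectrum and common maximum channel widths per band,
    # taken from the figures quoted in the article (values vary by country).
    bands = {
        "2.4 GHz (802.11n)":  {"spectrum_mhz": 60,   "channel_mhz": 20},
        "5 GHz (802.11ac)":   {"spectrum_mhz": 600,  "channel_mhz": 160},
        "60 GHz (802.11ad)":  {"spectrum_mhz": 8000, "channel_mhz": 2000},
    }

    for name, b in bands.items():
        channels = b["spectrum_mhz"] // b["channel_mhz"]
        print(f"{name:20s} ~{b['spectrum_mhz']:>5d} MHz total, "
              f"{b['channel_mhz']:>4d} MHz channels -> ~{channels} non-overlapping channel(s)")

    # The takeaway: 60 GHz offers both far wider channels (higher per-link rates)
    # and several of them (more simultaneous links), which is where the capacity
    # gain described in the article comes from.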
<urn:uuid:350eed34-07ca-4e02-b40c-7adfd937ad49>
CC-MAIN-2017-09
https://www.abiresearch.com/blogs/wigig-market-verge-breakout-year/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00605-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913445
860
2.546875
3
Three stealthy tracking mechanisms designed to avoid weaknesses in browser cookies pose potential privacy risks to Internet users, a new research paper has concluded. The methods -- known as canvas fingerprinting, evercookies and cookie syncing -- are in use across a range of popular websites. The findings, first reported by ProPublica, show how such tracking is important for targeted advertising but that the privacy risks may be unknown to all but the most sophisticated web users.

Profiling Web users, such as knowing what Web pages a person has visited before, is a central component of targeted advertising, which matches advertisements with topics a person may be interested in. It is key to charging higher rates for advertisements. Cookies, or data files stored by a browser, have long been used for tracking, but cookies can be easily blocked or deleted, which diminishes their usefulness. The methods studied by the researchers are designed to enable more persistent tracking but raise questions over whether people are aware of how much data is being collected.

The researchers, from KU Leuven in Belgium and Princeton University, wrote in their paper that they hope the findings will lead to better defenses and increased accountability "for companies deploying exotic tracking techniques." "The tracking mechanisms we study are advanced in that they are hard to control, hard to detect and resilient to blocking or removing," they wrote. Although the tracking methods have been known about for some time, the researchers showed how the methods are increasingly being used on top-tier, highly trafficked websites.

One of the techniques, called canvas fingerprinting, involves using a Web browser's canvas API to draw an invisible image and extract a "fingerprint" of a person's computer. It was thought canvas fingerprinting, first presented in a research paper in 2012, was not in use on websites. But it is now employed on more than 5,000 of the top 100,000 websites ranked by metrics company Alexa, according to the paper. More than 95 percent of those canvas fingerprinting scripts came from AddThis.com, a company that specializes in online advertising, content and web tracking tools. AddThis.com could not immediately be reached for comment.

The researchers also found some top websites using a method called "respawning," where technologies such as Adobe Systems' Flash multimedia program are manipulated to replace cookies that may have been deleted. These "evercookies" are "an extremely resilient tracking mechanism, and have been found to be used by many popular sites to circumvent deliberate user actions," the researchers wrote on a website that summarized their findings. Respawning Flash cookies were found on 107 of the top 10,000 sites.

The third method, cookie syncing, involves domains that share pseudonymous IDs associated with a user. The practice is also known as cookie matching and is a workaround for the same-origin policy, a security measure that prevents sites from directly reading each other's cookies. Such matching is helpful for targeting advertisements and for selling those ads in automated online auctions. The researchers argue that cookie syncing "can greatly amplify privacy breaches" since companies could merge their databases containing the browsing histories of users they're monitoring. Such sharing would be hidden from public view.
Those companies are then in "position to merge their database entries corresponding to a particular user, thereby reconstructing a larger fraction of the user's browsing patterns." "All of this argues that greater oversight over online tracking is becoming ever more necessary," they wrote. The paper was authored by Gunes Acar, Christian Eubank, Steven Englehardt, Marc Juarez, Arvind Narayanan and Claudia Diaz. Send news tips and comments to [email protected]. Follow me on Twitter: @jeremy_kirk
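To illustrate why cookie syncing worries the researchers, the Python sketch below shows how two trackers that have agreed on a shared pseudonymous ID could join their separately collected browsing logs into a fuller profile. The trackers, IDs, and URLs are entirely made up for the example; the point is only that once IDs are synced, reconstructing a larger browsing history is a trivial database join.

    from collections import defaultdict

    # Hypothetical logs kept independently by two tracking companies.
    # Each maps its own pseudonymous user ID to pages where its tag was present.
    tracker_a_log = {"A-1029": ["news.example/politics", "health.example/symptoms"]}
    tracker_b_log = {"B-77":   ["shop.example/baby-gear", "bank.example/loans"]}

    # The "cookie sync" step: a mapping between the two ID spaces, established
    # when both trackers observed the same browser and exchanged identifiers.
    sync_table = {("A-1029", "B-77")}

    # Merging the two databases is then a simple join on the synced IDs.
    merged = defaultdict(list)
    for a_id, b_id in sync_table:
        merged[(a_id, b_id)].extend(tracker_a_log.get(a_id, []))
        merged[(a_id, b_id)].extend(tracker_b_log.get(b_id, []))

    for ids, pages in merged.items():
        print(ids, "->", pages)
    # Neither tracker alone saw the full browsing history, but the joined record
    # reconstructs a much larger fraction of the user's activity.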
<urn:uuid:b8f3ea0f-0681-40a7-9395-83b0d86ffaf4>
CC-MAIN-2017-09
http://www.itworld.com/article/2696618/networking/stealthy-web-tracking-tools-pose-increasing-privacy-risks-to-users.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00481-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957122
768
2.734375
3
When I wrote my column about DARPA’s AlphaDog, and how the mule-like robot could one day help Marines carry and even recharge their gear in the field, a lot of people were impressed. But agencies whose focus is a little bit more aquatic in nature, like NOAA and the Navy, countered with information about a pretty advanced robot they are already using in the field. While the Marines play with their mules, the Navy and NOAA are swimming with Sharcs.

Sharcs are Sensor Hosting Autonomous Research Craft created by Liquid Robotics, Inc. Whereas the AlphaDog can travel 20 miles on its own with limited user intervention, the Sharc Wave Gliders are now routinely swimming the world’s oceans, traveling thousands of miles and going for up to a year without even seeing any humans.

The secret of the Sharc is that it has two power systems. The first is an array of solar panels that floats above the water on a surfboard-sized keel. That is used to power instruments that can measure just about anything from ocean salinity to the strength of whale songs. NOAA is using an increasing number of Sharc Wave Gliders for research because of this.

But the second part of the setup is what makes the Wave Gliders so amazing. The surface part of the robot that floats is tethered to a submersible that hangs seven meters below the water. When the top part of the robot rises up on a swell, it pulls the lower part up too. Fins on the submersible direct the water and force the craft forward, somewhat like the way an airplane’s wings generate force as they move through the air. Then when the float comes off the wave, the lower part of the robot sinks down, but its fins rotate in the opposite direction, and it gets pushed forward once again. So it can always move forward as long as there is wave action. The solar power system at the top of the robot powers navigation control, so it can direct its movements and hold a true course, or its path can be changed remotely by a human operator if needed.

Right now the Wave Gliders are mostly being used for research, but the possibility of more dangerous work like scanning for minefields, or even espionage tasks, could be in the cards for an always-on, always-moving, low-profile craft that can operate independently anywhere in the world without any risk to human life.

Posted by John Breeden II on Sep 21, 2012 at 9:03 AM
<urn:uuid:f02e0b44-d27c-49ea-be12-0aae0b9871fc>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2012/09/feds-are-swimming-with-sharcs.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00177-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946071
507
3.109375
3
by Hannes Tschofenig, Nokia Siemens Networks and Henning Schulzrinne, Columbia University Summoning the police, the fire department, or an ambulance in emergencies is one of the most important functions the telephone enables. As telephone functions move from circuit-switched to Internet telephony, telephone users rightfully expect that this core feature will continue to be available and work as well as it has in the past. Users also expect to be able to reach emergency assistance using new communication devices and applications, such as instant messaging or Short Message Service (SMS), and new media, such as video. In all cases, the basic objective is the same: The person seeking help needs to be connected with the most appropriate Public Safety Answering Point (PSAP), where call takers dispatch assistance to the caller's location. PSAPs are responsible for a particular geographic region, which can be as small as a single university campus or as large as a country. The transition to Internet-based emergency services introduces two major structural challenges. First, whereas traditional emergency calling imposed no requirements on end systems and was regulated at the national level, Internet-based emergency calling needs global standards, particularly for end systems. In the old Public Switched Telephone Network (PSTN), each caller used a single entity, the landline or mobile carrier, to obtain services. For Internet multimedia services, network-level transport and applications can be separated, with the Internet Service Provider (ISP) providing IP connectivity service, and a Voice Service Provider (VSP) adding call routing and PSTN termination services. We ignore the potential separation between the Internet access provider, that is, a carrier that provides physical and data link layer network connectivity to its customers, and the ISP that provides network layer services. We use the term VSP for simplicity, instead of the more generic term Application Server Provider (ASP). The documents that the IETF Emergency Context Resolution with Internet Technology (ECRIT) working group is developing support multimedia-based emergency services, and not just voice. As is explained in more detail later in this article, emergency calls need to be identified for special call routing and handling services, and they need to carry the location of the caller for routing and dispatch. Only the calling device can reliably recognize emergency calls, while only the ISP typically has access to the current geographical location of the calling device based on its point of attachment to the network. The reliable handling of emergency calls is further complicated by the wide variety of access technologies in use, such as Virtual Private Networks (VPNs), other forms of tunneling, firewalls, and Network Address Translators (NATs). This article describes the architecture of emergency services as defined by the IETF and some of the intermediate steps as end systems and the call-handling infrastructure transition from the current circuit-switched and emergency-calling-unaware Voice-over-IP (VoIP) systems to a true any-media, any-device emergency calling system. IETF Emergency Services Architecture The emergency services architecture developed by the IETF ECRIT working group is described in and can be summarized as follows: Emergency calls are generally handled like regular multimedia calls, except for call routing. 
The ECRIT architecture assumes that PSAPs are connected to an IP network and support the Session Initiation Protocol (SIP) for call setup and messaging. However, the calling user agent may use any call signaling or instant messaging protocol, which the VSP then translates into SIP. Nonemergency calls are routed by a VSP, either to another subscriber of the VSP, typically through some SIP session border controller or proxy, or to a PSTN gateway. For emergency calls, the VSP keeps its call routing role, routing calls to the emergency service system to reach a PSAP instead. However, we also want to allow callers that do not subscribe to a VSP to reach a PSAP, using nothing but a standard SIP user agent; the same mechanisms described here apply. Because the Internet is global, it is possible that a caller's VSP resides in a regulatory jurisdiction other than where the caller and the PSAP are located. In such circumstances it may be desirable to exclude the VSP and provide a direct signaling path between the caller and the emergency network. This setup has the advantage of ensuring that all parties included in the call delivery process reside in the same regulatory jurisdiction. As noted in the introduction, the architecture neither forces nor assumes any type of trust or business relationship between the ISP and the VSP carrying the emergency call. In particular, this design assumption affects how location is derived and transported. Providing emergency services requires three crucial steps, which we describe in the following sections: recognizing an emergency call, determining the caller's location, and routing the call and location information to the appropriate emergency service system operating a PSAP.

Recognizing an Emergency Call

In the early days of PSTN-based emergency calling, callers would dial a local number for the fire or police department. It was recognized in the 1960s that trying to find this number in an emergency caused unacceptable delays; thus, most countries have been introducing single nationwide emergency numbers, such as 911 in North America, 999 in the United Kingdom, and 112 in all European Union countries. This standardization became even more important as mobile devices started to supplant landline phones. In some countries, different types of emergency services, such as police or mountain rescue, are identified by separate numbers. Unfortunately, more than 60 different emergency numbers are used worldwide, many of which also have nonemergency uses in other countries, so simply storing the list of numbers in all devices is not feasible. In addition, hotels and university campuses often use dial prefixes, so an emergency caller in some European universities may actually have to dial 0112 to reach the fire department. Because of this diversity, the ECRIT architecture separates the concept of an emergency dial string, which remains the familiar and regionally defined emergency number, from the protocol identifier that is used for identifying emergency calls within the signaling system. The calling end system has to recognize the emergency (service) dial string and translate it into an emergency service identifier, which is drawn from an extensible set of Uniform Resource Names (URNs) defined in RFC 5031. A common example of such a URN, defined to reach the generic emergency service, is urn:service:sos.
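As a rough sketch of the dial-string-to-URN translation just described, the Python fragment below shows how an endpoint might recognize the dial strings valid for its home and currently visited regions and mark the call with the generic urn:service:sos identifier. The dial-string table here is a stand-in for what a real client would learn from LoST, and the function name is invented for the example; the digit-stripping is deliberately naive.

    from typing import Optional

    # Dial strings the endpoint currently treats as "emergency". A real client
    # would seed this from its home configuration and refresh it with the dial
    # strings returned by LoST for the visited region (e.g. campus prefixes).
    EMERGENCY_DIAL_STRINGS = {"112", "911", "0112"}

    SOS_URN = "urn:service:sos"   # generic emergency service URN from RFC 5031

    def classify_dialed_number(dialed: str) -> Optional[str]:
        """Return the emergency service URN if the dialed digits are an emergency
        dial string, otherwise None (the call is routed as an ordinary call)."""
        digits = "".join(ch for ch in dialed if ch.isdigit())
        return SOS_URN if digits in EMERGENCY_DIAL_STRINGS else None

    for number in ("112", "911", "0112", "555 0100"):
        print(f"{number!r:>10} -> {classify_dialed_number(number)}")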
The emergency service URN is included in the signaling request as the destination and is used to identify the call as an emergency call. If the end system fails to recognize the emergency dial string, the VSP may also perform this service. Because mobile devices may be sold and used worldwide, we want to avoid manually configuring emergency dial strings. In general, a device should recognize the emergency dial string familiar to the user and the dial strings customarily used in the currently visited country. The Location-to-Service Translation Protocol (LoST) , described in more detail later, also delivers this information. Some devices, such as smartphones, can define dedicated user interface elements that dial emergency services. However, such mechanisms must be carefully designed so that they are not accidentally triggered, for example, when the device is in a pocket. Emergency Call Routing When an emergency call is recognized, the call needs to be routed to the appropriate PSAP. Each PSAP is responsible for only a limited geographic region, its service region, and some set of emergency services. For example, even in countries with a single general emergency number such as the United States, poison-control services maintain their own set of call centers. Because VSPs and end devices cannot keep a complete up-to-date mapping of all the service regions, a mapping protocol, LoST , maps a location and service URN to a specific PSAP Uniform Resource Identifier (URI) and a service region. LoST, illustrated in Figure 1, is a Hypertext Transfer Protocol (HTTP)-based query/response protocol where a client sends a request containing the location information and service URN to a server and receives a response containing the service URL, typically a SIP URL, the service region where the same information would be returned, and an indication of how long the information is valid. Both request and response are formatted as Extensible Markup Language (XML). For efficiency, responses are cached, because otherwise every small movement would trigger a new LoST request. As long as the client remains in the same service region, it does not need to consult the server again until the response returned reaches its expiration date. The response may also indicate that only a more generic emergency service is offered for this region. For example, a request for urn:service:sos.marine in Austria may be replaced by urn:service:sos. Finally, the response also indicates the emergency number and dial string for the respective service. The number of PSAPs serving a country varies significantly. Sweden, for example, has 18 PSAPs, and the United States has approximately 6,200. Therefore, there is roughly one PSAP per 500,000 inhabitants in Sweden and one per 50,000 in the United States. As all-IP infrastructure is rolled out, smaller PSAPs may be consolidated into regional PSAPs. Routing may also take place in multiple stages, with the call being directed to an Emergency Services Routing Proxy (ESRP), which in turn routes the call to a PSAP, accounting for factors such as the number of available call takers or the language capabilities of the call takers. Emergency services need location information for three reasons: routing the call to the right PSAP, dispatching first responders (for example, policemen), and determining the right emergency service dial strings. 
It is clear that the location must be automatic for the first and third applications, but experience has shown that automated, highly accurate location information is vital to dispatching as well, rather than relying on callers to report their locations to the call taker. Such information increases accuracy and avoids dispatch delays when callers are unable to provide location information because of language barriers, lack of familiarity with their surroundings, stress, or physical or mental impairment. Location information for emergency purposes comes in two representations: geo(detic), that is, longitude and latitude, and civic, that is, street addresses similar to postal addresses. Particularly for indoor location, vertical information (floors) is very useful. Civic locations are most useful for fixed Internet access, including wireless hotspots, and are often preferable for specifying indoor locations, whereas geodetic location is frequently used for cell phones. However, with the advent of femto and pico cells, civic location is both possible and probably preferable because accurate geodetic information can be very hard to acquire indoors. In almost all cases, location values are represented as Presence Information Data Format Location Object (PIDF-LO), an XML-based document to encapsulate civic and geodetic location information. The format of PIDF-LO is described in , with the civic location format updated in and the geodetic location format profiled in . The latter document uses the Geography Markup Language (GML) developed by the Open Geospatial Consortium (OGC) for describing commonly used location shapes. Location can be conveyed either by value ("LbyV") or by reference ("LbyR"). For the former, the XML location object is added as a message body in the SIP message. Location by value is particularly appropriate if the end system has access to the location information; for example, if it contains a Global Positioning System (GPS) receiver or uses one of the location configuration mechanisms described later in this section. In environments where the end host location changes frequently, the LbyR mechanism might be more appropriate. In this case, the LbyR is an HTTP/Secure HTTP (HTTPS) or SIP/Secure SIP (SIPS) URI, which the recipient needs to resolve to obtain the current location. Terminology and requirements for the LbyR mechanism are available in . An LbyV and an LbyR can be obtained through location config-uration protocols, such as the HTTP Enabled Location Delivery (HELD) protocol or Dynamic Host Configuration Protocol (DHCP) [12, 13]. When obtained, location information is required for LoST queries, and that information is added to SIP messages . The requirements for location accuracy differ between routing and dispatch. For call routing, city or even county-level accuracy is often sufficient, depending on how large the PSAP service areas are, whereas first responders benefit greatly when they can pinpoint the caller to a particular building or, better yet, apartment or office for indoor locations, and an outdoor area of at most a few hundred meters. This detailed location information avoids having to search multiple buildings, for example, for medical emergencies. As mentioned previously, the ISP is the source of the most accurate and dependable location information, except for cases where the calling device has built-in location capabilities, such as GPS, when it may have more accurate location information. 
For landline Internet connections such as DSL, cable, or fiber-to-the-home, the ISP knows the provisioned location for the network termination, for example. The IETF GEOPRIV working group has developed protocol mechanisms, called Location Configuration Protocols, so that the end host can request and receive location information from the ISP. The Best Current Practice document for emergency calling enumerates three options that clients should universally support: DHCP civic and geo (with a revision of RFC 3825 in progress ), and HELD . HELD uses XML query and response objects carried in HTTP exchanges. DHCP does not use the PIDF-LO format, but rather more compact binary representations of locations that require the endpoint to construct the PIDF-LO. Particularly for cases where end systems are not location-capable, a VSP may need to obtain location information on behalf of the end host . Obtaining at least approximate location information at the time of the call is time-critical, because the LoST query can be initiated only after the calling device or VSP has obtained location information. Also, to accelerate response, it is desirable to transmit this location information with the initial call signaling message. In some cases, however, location information at call setup time is imprecise. For example, a mobile device typically needs 15 to 20 seconds to get an accurate GPS location "fix," and the initial location report is based on the cell tower and sector. For such calls, the PSAP should be able to request more accurate location information either from the mobile device directly or the Location Information Server (LIS) operated by the ISP. The SIP event notification extension, defined in RFC 3265 , is one such mechanism that allows a PSAP to obtain the location from an LIS. To ensure that the PSAP is informed only of pertinent location changes and that the number of notifications is kept to a minimum, event filters can be used. The two-stage location refinement mechanism described previously works best when location is provided by reference (LbyR) in the SIP INVITE call setup request. The PSAP subscribes to the LbyR provided in the SIP exchange and the LbyR refers to the LIS in the ISP's network. In addition to a SIP URI, the LbyR message can also contain an HTTP/HTTPS URI. When such a URI is provided, an HTTP-based protocol can be used to retrieve the current location . This section discusses the requirements the different entities need to satisfy, based on Figure 2. A more detailed description can be found in . Note that this narration focuses on the final stage of deployment and does not discuss the transition architecture, in which some implementation responsibilities can be rearranged, with an effect on the overall functions offered by the emergency services architecture. A few variations were introduced to handle the transition from the current system to a fully developed ECRIT architecture. With the work on the IETF emergency architecture, we have tried to balance the responsibilities among the participants, as described in the following sections. An end host, through its VoIP application, has three main responsibilities: it has to attempt to obtain its own location, determine the URI of the appropriate PSAP for that location, and recognize when the user places an emergency call by examining the dial string. The end host operating system may assist in determining the device location. 
The protocol interaction for location configuration is indicated as interface (a) in Figure 2; numerous location configuration protocols have been developed to provide this capability. A VoIP application needs to support the LoST protocol in order to determine the emergency service dial strings and the PSAP URI. Additionally, the device needs to understand the service identifiers, defined in . As currently defined, it is assumed that SIP can reach PSAPs, but PSAPs may support other signaling protocols, either directly or through a protocol translation gateway. The LoST retrieval results indicate whether other signaling protocols are supported. To provide support for multimedia, use of different types of codecs may be required; details are available in . The ISP has to make location information available to the endpoint through one or more of the location configuration protocols. In order to route an emergency call correctly to a PSAP, an ISP may initially disclose the approximate location for routing to the endpoint and give more precise location information later, when the PSAP operator dispatches emergency personnel. The functions required by the IETF emergency services architecture are restricted to the disclosure of a relatively small amount of location information, as discussed in and in . The ISP may also operate a (caching) LoST server to improve the robustness and reliability of the architecture. This server lowers the round-trip time for contacting a LoST server, and the caches are most likely to hold the mappings of the area where the emergency caller is currently located. When ISPs allow Internet traffic to traverse their network, the signaling and media protocols used for emergency calls function without problems. Today, there are no legal requirements to offer prioritization of emergency calls over IP-based networks. Although the standardization community has developed a range of Quality of Service (QoS) signaling protocols, they have not experienced widespread deployment. SIP does not mandate that call setup requests traverse SIP proxies; that is, SIP messages can be sent directly to the user agent. Thus, even for emergency services it is possible to use SIP without the involvement of a VSP. However, in terms of deployment, it is highly likely that a VSP will be used. If a caller uses a VSP, this VSP often forces all calls, emergency or not, to traverse an outbound proxy or Session Border Controller (SBC) operated by the VSP. If some end devices are unable to perform a LoST lookup, VSP can provide the necessary functions as a backup solution. If the VSP uses a signaling or media protocol that the PSAP does not support, it needs to translate the signaling or media flows. VSPs can assist the PSAP by providing identity assurance for emergency calls; for example, using , thus helping to prosecute prank callers. However, the link between the subscriber information and the real-world person making the call is weak. In many cases, VSPs have, at best, only the credit card data for their customers, and some of these customers may use gift cards or other anonymous means of payment. The emergency services Best Current Practice document discusses only the standardization of the interfaces from the VSP and ISP toward PSAPs and some parts of the PSAP-to-PSAP call transfer mechanisms that are necessary for emergency calls to be processed by the PSAP. 
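The end-host responsibilities described above (obtain a location, map it through LoST to a PSAP URI, and reuse that mapping while it remains valid) can be sketched in a few lines of Python. This is not a real LoST client: the query function, the cache structure, and the rectangular boundary test are simplified stand-ins invented for the example, whereas a real implementation exchanges the XML query and response defined for LoST and honors the returned service boundary and expiry.

    import time
    from dataclasses import dataclass

    @dataclass
    class Mapping:
        psap_uri: str
        # Simplified service region: (lat_min, lat_max, lon_min, lon_max).
        region: tuple
        expires_at: float

    _cache = {}   # cached mappings, keyed by service URN

    def lost_query(service_urn: str, lat: float, lon: float) -> Mapping:
        """Stand-in for a LoST findService request; a real client would POST an
        XML query over HTTP and parse the returned URI, boundary, and expiry."""
        return Mapping("sips:psap@example-county.example",
                       (50.0, 51.0, 8.0, 9.0),
                       time.time() + 3600)

    def psap_for(service_urn: str, lat: float, lon: float) -> str:
        m = _cache.get(service_urn)
        in_region = (m is not None
                     and m.region[0] <= lat <= m.region[1]
                     and m.region[2] <= lon <= m.region[3])
        if m is None or not in_region or time.time() >= m.expires_at:
            m = lost_query(service_urn, lat, lon)   # cache miss: ask the LoST server
            _cache[service_urn] = m
        return m.psap_uri

    print(psap_for("urn:service:sos", 50.5, 8.5))   # first call queries the server
    print(psap_for("urn:service:sos", 50.6, 8.6))   # still inside region: served from cache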
Many aspects related to the internal communication within a PSAP, between PSAPs, as well as between a PSAP and first responders, are beyond the scope of the IETF specification. When emergency calling has been fully converted to Internet protocols, PSAPs must accept calls from any VSP, as shown in interface (d) of Figure 2. Because calls may come from all sources, PSAPs must develop mechanisms to reduce the number of malicious calls, particularly calls containing intentionally false location information. Assuring the reliability of location information remains challenging, particularly as more and more devices are equipped with Global Navigation Satellite System (GNSS) receivers, including GPS and Galileo, allowing them to determine their own location. However, it may be possible in some cases to check the veracity of the location information an endpoint provides by comparing it against infrastructure-provided location information; for example, a LIS-determined location.

So far we have described LoST as a client-server protocol. Similar to the Domain Name System (DNS), a single LoST server does not store the mapping elements for all PSAPs worldwide, for both technical and administrative reasons. Thus, there is a need to let LoST servers interact with other LoST servers, each covering a specific geographical region. Working together, LoST servers form a distributed mapping database, with each server carrying mapping elements, as shown in Figure 3. LoST servers may be operated by different entities, including the ISP, the VSP, or another independent entity, such as a governmental agency. Typically, individual LoST servers offer the necessary mapping elements for their geographic regions to others. However, LoST servers may also cache mapping elements of other LoST servers, either through data synchronization mechanisms (for example, FTP, exports from a Geographical Information System [GIS], or a specialized synchronization protocol) or through regular usage of LoST. This caching improves performance and increases the robustness of the system. A detailed description of the mapping architecture, with examples, is given in the ECRIT mapping architecture document.

Steps Toward an IETF Emergency Services Architecture

The architecture described so far requires changes both in already-deployed VoIP end systems and in the existing PSAPs. The speed of transition and the path taken vary between different countries, depending on funding and business incentives. Therefore, it is generally difficult to argue whether upgrading endpoints or replacing the emergency service infrastructure will be easier. In any case, the transition approaches being investigated consider both directions. We can distinguish roughly four stages of transition; the following descriptions omit many of the details because of space constraints. If devices are used in environments without location services, the VSP's SIP proxy may need to insert location information based on estimates or subscriber data. These cases are described briefly in the following sections.

Figure 4 shows an emergency services architecture with traditional endpoints. When the emergency caller dials the Europe-wide emergency number 112 (step 0), the device treats it as any other call without recognizing it as an emergency call; that is, the dial string provided by the endpoint, which may conform to RFC 4967 or RFC 3966, is signaled to the VSP (step 1).
Recognition of the dial string is then left to the VSP for processing or sorting; the same is true for location retrieval (step 2) and routing to the nearest (or appropriate) PSAP (step 3). Dial-string recognition, location determination, and call routing are simpler to carry out using a fixed device and the voice and application service provided through the ISP than they are when the VSP and the ISP are two separate entities. There are two main challenges to overcome when dealing with traditional devices: First, the VSP must discover the LIS that knows the location of the IP-based end host. The VSP is likely to know only the IP address of that device, visible in the call signaling that arrives at the VSP. When a LIS is discovered and contacted and some amount of location information is available, then the second challenge arises, namely, how to route the emergency call to the appropriate PSAP. To accomplish the latter task it is necessary to have some information about the PSAP boundaries available. The referenced work does not describe a complete and detailed solution, but it uses building blocks specified in ECRIT. Still, this deployment scenario is subject to a number of constraints.

Partially Upgraded End Hosts

A giant step forward in simplifying the handling of IP-based emergency calls is to provide the end host with some information about the ISP so that LIS discovery is possible. The end host may, for example, learn the ISP's domain name by using LIS discovery, or might even obtain a Location by Reference (LbyR) through the DHCP-URI option or through HELD. The VSP can then either resolve the LbyR in order to route the call or use the domain to discover a LIS using DNS. Additional software upgrades at the end device may allow for recognition of emergency calls based on some preconfigured emergency numbers (for example, 112 and 911) and allow for the implementation of other emergency service-related features, such as disabling silence suppression during emergency calls.

In most countries, national and sometimes regional telecommunications regulators, such as the Federal Communications Commission (FCC) and individual states, or the European Union, strongly influence how emergency services are provided, who pays for them, and the obligations that the various parties have. Regulation is, however, still at an early stage: in most countries current requirements demand only manual update of location information by the VoIP user. The ability to obtain location information automatically is, however, crucial for reliable emergency service operation, and it is required for nomadic and mobile devices. (Nomadic devices remain in one place during a communication session, but are moved frequently from place to place. Laptops with Wi-Fi interfaces are currently the most common nomadic devices.) Regulators have traditionally focused on the national or, at most, the European level, and the international nature of the Internet poses new challenges. For example, mobile devices are now routinely used beyond their country of purchase and, unlike traditional cellular phones, need to support emergency calling functions. It appears likely that different countries will deploy IP-based emergency services over different time horizons, so travelers may be surprised to find that they cannot call for emergency assistance outside their home country. The separation between Internet access and application providers on the Internet is one of the most important differences from existing circuit-switched telephony networks.
A side effect of this separation is the increased speed of innovation at the application layer, and the number of new communication mechanisms is steadily increasing. Many emergency service organizations have recognized this trend and advocated for the use of new communication mechanisms, including video, real-time text, and instant messaging, to offer improved emergency calling support for citizens. Again, this situation requires regulators to rethink the distribution of responsibilities, funding, and liability. Many communication systems used today lack accountability; that is, it is difficult or impossible to trace malicious activities back to the persons who caused them. This problem is not new, because pay phones and prepaid cell phones have long offered mischief makers the opportunity to place hoax calls, but the weak user registration procedures, the lack of deployed end-to-end identity mechanisms, and the ease of providing fake location information increases the attack surface at PSAPs. Attackers also have become more sophisticated over time, and Botnets that generate a large volume of automated emergency calls to exhaust PSAP resources, including call takers and first responders, are not science fiction.
<urn:uuid:6eae9781-0d9e-464c-a736-e73a77d9b3d3>
CC-MAIN-2017-09
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-50/134-ecrit.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00473-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928629
5,633
2.546875
3
Location, location, location

The Global Positioning System (GPS) can help you find yourself, or the nearest drycleaner, or a missing Christmas manger, or a hidden geocache. But our expectations of GPS are that it works accurately, instantly, and reliably on all our devices. That's asking a bit much from what is fundamentally 1990s-era technology. Fortunately, there's help. Assisted GPS (AGPS) and a variety of complements and supplements to GPS can shrink the wait for a positive fix on a location—the Time To First Fix or TTFF—from multiple minutes down to as little as under a second without sacrificing accuracy.

Reducing that wait means that your pictures are instantly tagged with coordinates (geotagged), a map drops a pin on your current location before the map itself finishes loading, and a device you're using in a remote location knows right away that it's in the middle of nowhere. Oh, and Google and others can give you the right ads for your location without any tedious waiting on their part, either.

Typically, AGPS systems help a GPS receiver figure out precisely where the satellites whose signals it is picking up are located at that moment. Related systems that aren't technically AGPS use a variety of means to combine with, replace, or enhance GPS data to produce a geographic result. In this article, I'll explain how AGPS and alternatives work, and I'll conclude with a discussion of what's coming in the future for even more precise or esoteric location finding.

The GPS system, run by the US Department of Defense through an Air Force space division, uses a constellation of 32 satellites that each orbit the earth twice per day. A GPS receiver should be able to get signals from about 10 satellites at a time in ideal circumstances, but far fewer can be picked up reliably in most real-world conditions. All the satellites constantly transmit data—the navigation message—over the same set of frequencies, using an encoding that allows 50 bits per second (really! 50 bps!) for a total of 1,500 bits of data to be demodulated from each satellite every 30 seconds.

On every minute and half-minute, each satellite transmits its notion of the precise time and its health, followed by its location and a path in orbit that's valid for as long as four hours (the ephemeris, pluralized ephemerides). It also transmits a subset of data about the other satellites in orbit, including a rougher position (the almanac). It takes 25 navigation messages, all received perfectly over 12.5 minutes, to assemble a full almanac. A timestamp is also included as part of each 300-bit (six-second) segment or sub-frame of the message.

With the timing information and the ephemerides from four satellites, a GPS receiver can perform trilateration, which allows a point to be plotted accurately to within about 5 to 15 meters (15 to 45 feet). Although geometrically only three satellites are needed, atmospheric effects and other issues introduce small errors in timing. A fourth satellite corrects those errors and allows an accurate and corrected time and elevation as well. (Some techniques for assisting or supplementing GPS can avoid the need for a fourth satellite, or even use fragmentary data from two satellites.)

A cold start

From a cold start with typically older GPS receivers—where the receiver has never been turned on, has been off for several weeks, has lost a battery charge, or has been moved a few hundred miles since its last activation—the entire almanac has to be retrieved over 12.5 minutes.
And that's outdoors with good overhead visibility. Jean-Michel Rousseau, a staff product manager in the group at Qualcomm that handles GPS technology, explained that that's "the maximum time that the user would get a position assuming it had absolutely no knowledge of the GPS constellation." While you can still buy certain kinds of standalone GPS devices with this lag—like the ATP Photo Finder used for geotagging digital camera images—most modern gear uses techniques to have enough information to avoid this cold start problem. Most chipmakers and manufacturers of standalone GPS hardware now claim about 30 to 60 seconds for a warm start, with a few exceptions. "GPS receivers ship with some default almanac data in memory; as a result a receiver no longer has to decode this data off the satellites," Rousseau said. Qualcomm's gpsOne, for instance, can fire up in 35 seconds in a device that works in standalone mode, with none of the assistance available that we're about to learn about.
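To give a feel for the trilateration step described earlier, the following Python sketch solves for a receiver position and clock bias from four simulated pseudoranges using a few Gauss-Newton iterations. The satellite coordinates and receiver state are invented, and the model ignores atmospheric delay, Earth rotation, and real orbital geometry, so it illustrates only the shape of the computation, not a usable GPS solver.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    # Invented satellite positions (Earth-centered coordinates, metres) and a "true"
    # receiver state used only to synthesize consistent pseudoranges.
    sats = np.array([
        [15_600e3,  7_540e3, 20_140e3],
        [18_760e3,  2_750e3, 18_610e3],
        [17_610e3, 14_630e3, 13_480e3],
        [19_170e3,    610e3, 18_390e3],
    ])
    true_pos  = np.array([3.9e6, 3.9e6, 3.0e6])
    true_bias = 3.0e-3 * C                      # receiver clock error expressed in metres
    rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias   # pseudoranges

    # Gauss-Newton iteration, starting from the Earth's centre with zero clock bias.
    x = np.zeros(4)                             # unknowns: [x, y, z, clock_bias]
    for _ in range(10):
        d = np.linalg.norm(sats - x[:3], axis=1)
        residual = d + x[3] - rho               # predicted minus measured pseudorange
        J = np.hstack([(x[:3] - sats) / d[:, None], np.ones((len(sats), 1))])
        x -= np.linalg.solve(J.T @ J, J.T @ residual)

    print("position error (m):  ", np.linalg.norm(x[:3] - true_pos))
    print("clock bias error (m):", abs(x[3] - true_bias))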
<urn:uuid:53e7f895-6a36-4a4e-a9d7-750e559d85f2>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2009/01/assisted-gps/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00173-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945903
948
2.8125
3
As part of the administration’s new $3.8 trillion budget (that is with a T, not a B, guys. Really big number.), a radical change in NASA’s mission is proposed. If President Obama has his way, NASA will stop being the operator and builder of our space vehicles and program. The US space program will be placed in the hands of private industry. The government will try to steer the private space industry with grants and technology development projects. This could be a great environment and opportunity for an open source space program.

Of course this is a huge departure for the NASA we have known. For the past 50 years NASA has been the designer, builder and operator of our space program. The space race and the race to the Moon were national goals to strive for. In later years, NASA’s mission became more cooperative with other world powers in projects such as the International Space Station and missions to the planets. Then President George W. Bush announced a return to the Moon by 2020. Many experts said it was pie in the sky and would never happen in that time frame, but nevertheless we have spent almost $9 billion to date on this mission. Now if the Obama administration has its way we are scrapping it. As a child of the Apollo program, I am dismayed that we can’t even come close to accomplishing something that we were able to do 40 years ago. Have we fallen that far?

Alan Boyle over on MSNBC’s Cosmic Log has a great synopsis of how and why the space program is being turned over to the “commercial boys” and who some of those players are. But there is another player. A dark horse, if you will, that could be coming up fast on the outside. The Open Luna Foundation has a plan to use open source software and hardware and, most importantly, an open source community methodology to put a permanent outpost on the Moon. They estimate that they will have to raise $500 million to $700 million to pull this off. By selling tourism and moon souvenirs they hope to raise the bulk of this money. They are also looking for donations and volunteers. If you think you have the right stuff, head on over. I have included a slide presentation to give you some details on Open Luna. But here are some highlights of their approach:
- All aspects of the mission plan and hardware will be open source. This information will be publicly available and community support and involvement will be actively pursued and welcomed.
- Special efforts will be made to involve students, educational facilities, and amateur space enthusiasts.
- A strong media presence will be a priority. The entertainment and educational potential of the mission will be exploited to allow the mission to reach the maximum number of people possible. This furthers the educational potential of the mission, provides publicity for sponsors (which will encourage support for future missions), and demonstrates to people that this is possible in the present and inspires the next generation to continue and exceed these mission goals.
- Mission hardware will be light and geared toward continuity from one mission to future missions. This will save costs and simplify the mission and hardware development. Superfluous hardware will be removed from missions and each component will be made in the lightest fashion possible. This may create initial complications, but it will balance out over the span of the program. Risk levels will be assessed and considered to balance risk with the cost of safety to the ability of the mission to continue forward.
- Much like an Alpine expedition, moderate risks will be acceptable in favor of exploration.
- Access to all scientific data and acceptance of outside research proposals will be encouraged.

They currently have a five-mission plan. I am sure this and other aspects of the plan will change. But that is what they are proposing right now.

So why do I think open source could be a winning strategy for a successful return to the moon? Mostly for the same reasons why it took a government to get us there in the first place. I think pure science like going to the moon, without a certain profit, will not be sustainable in a traditional commercial program. Now some may say that if it is not commercially viable it shouldn’t be undertaken. But sometimes you have to do things for adventure and discovery, without knowing what the exact payoff will be. More often than not though, where there is new discovery, there is new opportunity. It spawns new technology and innovation.

I envision using software and systems based on open source software that will save significant dollars in license costs and, more importantly, allow for the rapid development of new applications and features that will be required for the mission. Of course some of the hardware (rockets) will be commercially available models. But I am counting on Open Luna to spur development of new designs for crew compartments, living quarters and an open source design for the permanent moon station.

I think a vibrant community will give Open Luna the edge for government grants and incentives. There are an inordinate number of space enthusiasts in the IT industry. I think a well-organized open source space project could attract a super community of volunteers and developers to accelerate the technology needed in the shortest, cheapest and most efficient manner.

Maybe software will not be the zenith of open source usefulness. Maybe the true future of open source lies in the stars, “going where no commercial software has gone before.” Here is Open Luna's slide show:
<urn:uuid:11efd474-8003-41cc-8c6c-3e08515e4986>
CC-MAIN-2017-09
http://www.networkworld.com/article/2229522/opensource-subnet/bang--zoom--is-open-source-the-right-way-to-the-moon-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00469-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945981
1,094
2.546875
3
"The field of theoretical cryptography has blossomed in a way that I didn't anticipate in the early days," said Ron Rivest, a professor of electrical engineering and computer science at MIT and, along with Shamir and Len Adelman, one of the inventors of the RSA public-key cryptosystem. "It's related to so many other fields, information theory and others. It's much broader and richer than I imagined it would be." By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. In April 1977, Rivest, Shamir and Adelman published a paper called "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," (.pdf) which described a practical method for encrypting a message using a publicly shared key. The paper picked up on the work done a year earlier by Diffie and Hellman, who had invented the concept of public-key cryptography. Until then, no one had been able to work out a practical way to transmit a decryption key to the recipient of a message. Diffie and Hellman's innovation was brilliant in its simplicity: encode the message with a shared public key and decrypt it with a private key. The RSA paper was the beginning of digital encryption and eventually led to its wide use on the Web and in commercial software. But Hellman, an former engineering and math professor at Stanford University, said he was surprised that cryptography hadn't advanced more in the last 30 years. "I thought there would be provably secure systems, and 30 years later, we don't have them," he said. "I thought there would be more cryptosystems as well." But even as they noted the lack of progress in some areas, the panelists emphasized that cryptanalysis has advanced greatly and Shamir said that he expects some significant progress in the coming year on a couple of fronts. He mentioned that there are a number of serious attempts to implement an attack on the SHA-1 hash algorithm. "I think we'll see success on that in the next few months," Shamir said. He also pointed out that cryptosystems' unfortunate tendency to fail badly when any small change is made to them, makes them somewhat difficult to implement and work with. "The main problem with cryptography is that it's highly discontinuous. If you have a cryptosystem and make any slight change, it can lead to devastating attacks," Shamir said. "We didn't think enough at the time about how to recover from these attacks." Diffie, CSO at Sun Microsystems and a Sun fellow, said the initial zeal that he and the other pioneers of digital cryptography had led to a mistaken belief that their discoveries would make data completely secure. "I think cryptography will always just be one of the pieces," Diffie said. "The worst you can say is that public-key cryptography has been a great success."
<urn:uuid:6f5a6bf1-5f52-4fb1-a2eb-9f3e78dcb2f8>
CC-MAIN-2017-09
http://www.computerweekly.com/news/1280096243/Cryptographers-Panel-Forefathers-still-eager-for-new-advances
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00645-ip-10-171-10-108.ec2.internal.warc.gz
en
0.979872
610
3.078125
3
Education Paradigm Shift In The Cloud Computing Era

Cloud computing has emerged as an innovative player in the industry, with many organizations and individuals reaping the numerous benefits it offers. With its e-learning programs and online education in general, the education industry seems to be leading the way in adopting and making full use of this technology. Cloud computing offers several key benefits such as low cost, flexibility, reliability, redundancy, and a higher return on investment.

There are three main types of services offered by the cloud (a short code sketch at the end of this piece illustrates how responsibility shifts between them):

- Software as a Service (SaaS): In the past, the primary way to install software was by means of a CD-ROM. With SaaS, CD-ROMs are not required, as the entire application can be accessed directly over the internet. The service provider bears the burden and cost of providing the service, freeing you to focus on your core strengths rather than worrying about managing software applications, bugs, upgrades, and so on.
- Infrastructure as a Service (IaaS): In this scenario, essentially the entire IT infrastructure — servers, storage, and computers — is provided by a service provider, and you simply pay "rent" for that infrastructure.
- Platform as a Service (PaaS): With PaaS, a software platform is available for developing your own customized applications in the cloud that do not depend on a specific local platform to run, and you can make them widely available to users through the internet. Examples are application development and deployment tools, and application hosting.

Novel as it sounds, cloud computing indeed promises numerous benefits and arguably few risks, relating to data security, vulnerability, confidentiality, and so on; the benefits far outweigh the associated risks. With the basics out of the way, cloud computing technology has been rapidly adopted by a wide spectrum of industries, and educational institutions and universities across the globe continue to reap its benefits.

The following example demonstrates the power of cloud computing. In a typical university setting, the IT infrastructure provides students with what they need to use technology effectively as a learning aid. Similarly, professors, librarians, and administrative staff are also catered to by the IT department. For instance, an Exchange Server may be set up to provide e-mail to users, while various software applications specific to certain learning methodologies, as well as data storage servers, also fall within the purview of the IT department. With the adoption of cloud computing, a paradigm shift becomes evident, since SaaS, IaaS, or PaaS can essentially replace some or even all of these requirements. With this new adoption, students, professors, and administrative staff can be provided software- and hardware-related services at a much lower cost and with fewer headaches. All software as well as hardware can now be outsourced to service providers. Researchers at the university whose projects require a great deal of processing power or additional server capacity can obtain it by opting for IaaS. Therefore, virtually the entire university can be run and managed using cloud computing, potentially saving millions in costs.

While cloud computing is growing exponentially and having a favorable impact on business, it is also influencing the way we conduct our daily lives.

By Syed Raza
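As a rough illustration of how responsibility shifts across the three service models described above, the Python sketch below contrasts them. The CloudProvider class and its method names are hypothetical stand-ins invented for this example, not any vendor's real API.

```python
# Hypothetical illustration of who manages what in SaaS, PaaS, and IaaS.
class CloudProvider:
    """Stand-in for a cloud vendor; method names are invented for illustration."""

    def saas_mailbox(self, user):
        # SaaS: the provider runs the whole application (e.g. hosted e-mail);
        # the university only manages user accounts.
        return f"https://mail.example-cloud.edu/{user}"

    def paas_deploy(self, app_source):
        # PaaS: the university writes the application; the provider supplies
        # the runtime, build tools, and hosting.
        return f"deployed {len(app_source)} bytes to a managed runtime"

    def iaas_server(self, cpus, ram_gb):
        # IaaS: the provider rents out raw capacity; the university installs
        # and operates its own software stack on top.
        return {"cpus": cpus, "ram_gb": ram_gb, "os": "installed by tenant"}

provider = CloudProvider()
print(provider.saas_mailbox("student42"))
print(provider.paas_deploy(b"print('course portal')"))
print(provider.iaas_server(cpus=16, ram_gb=64))  # e.g. a research workload
```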
<urn:uuid:e29caddd-4b36-4f7a-b676-445770fbf9d9>
CC-MAIN-2017-09
https://cloudtweaks.com/2014/06/education-paradigm-shift-cloud-computing-era/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00521-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946266
674
2.96875
3
Heat is the great enemy of electronics. As microchips run at higher clock speeds, engineers are running up against the challenge of keeping components cool in an efficient and economical manner. But a new cooling method developed by a researcher at Sandia National Laboratories opens the way for a variety of applications, from higher-speed chips to more power-efficient electronics and refrigeration systems. The Air Breathing Heat Exchanger, also known as the Sandia Cooler, uses a unique system of rotating fins to shunt heat away from microelectronics. Unlike a conventional heat exchanger, which is a fixed device with cooling vanes designed to move heat away from a processor through convection, the Sandia Cooler uses rotating cooling fins attached to a shaft that conveys heat up from the processor. Movement is important because it helps reduce the boundary layer of air that clings to an object such as a heat exchanger, said the cooler’s inventor, Sandia researcher Jeff Koplow. In a static air-cooled heat sink or cooling vane, a layer of warm air will accumulate around the device like an insulating blanket, greatly reducing its ability to conduct heat away from electronics. Although there are methods that use directed jets of air to help improve cooling by blowing away the accumulated warm air, they tend to be noisy and power-hungry, Koplow said. The Sandia system’s high-speed rotating fins reduce the boundary layer by a factor of 10, which greatly improves cooling in a much smaller device, he said. The fins are designed to be aerodynamically efficient, which increases their ability to shed heat and reduce noise. Koplow developed the cooler because he felt that progress in cooling systems for electronics was incremental at best. “Why is the performance of air-cooled heat exchangers so lousy?” he said. The Sandia coolers have potential uses in a variety of cooling applications. The small high-speed cooling fans used to cool most electronics are not very efficient, Koplow said. By making the heat exchanger rotate, the fins, which are only a few inches across, can cool a processor and provide air flow in an electronics cabinet. Another advantage to the cooler is that the tiny engine that drives the fins is very power-efficient. Moving fins are also resistant to dust, which can collect on and foul passive cooling vanes. Dust fouling is a common problem with many computers, Koplow said. As it builds up on cooling vanes, it acts as an insulator, diminishing the device’s ability to shed heat. When heat exchangers stop working efficiently, computers begin to slow down, as the processor reduces its clock speed to keep from overheating. Besides keeping computers cool and free of dust, the Sandia Cooler technology can also be applied to systems such as air conditioners, which also suffer from dust collecting on their heat exchangers. Air conditioning efficiency is important for data centers, where a considerable part of the power costs go toward cooling. Koplow estimated that more efficient heat exchangers for processors could drive down a data center’s power consumption by as much as 20 percent and greatly reduce noise. Making an entire building’s heating and cooling system more efficient would lead to even greater efficiencies, since up to half the power needs for server farms go to their HVAC systems, he said. Despite its promise, Koplow cautions that the technology is still immature. One potential hurdle that must be analyzed is manufacturing. 
Making the fins and their motor is not very complex, but the device does spin at several thousand rotations per minute and requires tight, precise spacing for the air gap. Developing the means to produce them cheaply and in quantity remains a challenge, he said.
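The article's key claim is that spinning the fins thins the insulating air film by roughly a factor of ten. A back-of-the-envelope sketch shows why that matters; the conductivity value, film thicknesses, area, and temperature difference below are illustrative assumptions of mine, not Sandia's figures, and the h ≈ k/δ relation is a deliberate simplification.

```python
# Rough illustration of why a thinner boundary layer improves cooling.
# Newton's law of cooling: Q = h * A * dT, and across a stagnant air film
# the convective coefficient scales roughly as h ~ k_air / film_thickness.
k_air = 0.026          # W/(m*K), thermal conductivity of air (approximate)
area = 0.01            # m^2, illustrative fin surface area
delta_T = 40.0         # K, assumed chip-to-air temperature difference

def heat_removed(film_thickness_m):
    h = k_air / film_thickness_m       # effective convective coefficient
    return h * area * delta_T          # watts carried away

static_film = 1e-3                     # ~1 mm film on a passive sink (assumed)
rotating_film = static_film / 10       # the claimed ~10x boundary-layer cut

print(heat_removed(static_film))       # roughly 10 W with the thick film
print(heat_removed(rotating_film))     # roughly 100 W once the film is sheared thin
```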
<urn:uuid:6014e24a-2b23-4cb4-9641-0f35b389bf8c>
CC-MAIN-2017-09
https://gcn.com/articles/2011/07/29/sandia-new-cooling-system-for-electronics.aspx?admgarea=TC_DataCenter
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00521-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956812
776
3.59375
4
Viral Art: A Gallery Of Security Threats

Visually, online threats such as viruses, worms, and Trojans can be as beautiful as they are menacing to individual PC users, enterprises, and IT security professionals.

With 94% of IT professionals expecting to suffer a security breach, and Windows 7 already showing signs of vulnerability to hackers, it's fair to say we're under siege from attackers. But what does the enemy look like? What color is spyware? What shape and form identify varying strains of malware, worms, and Trojans? Artists Alex Dragulescu and Julian Hodgson accepted a commission from MessageLabs, now part of Symantec, and set to work to find out.

It turns out the look of online threats can be as beautiful as it is menacing. Using pieces of disassembled code, API calls, memory addresses, and subroutines associated with the bane of a security team's existence, they analyzed the data by frequency, density, and groupings. Algorithms were then developed, and the artists mapped the data to the inputs of the algorithms, which then generated virtual 3-D entities. The patterns and rhythms found in the data gave shape to the configuration of the artificial organisms, and the result was a series of images called Malwarez. In addition to malware, worms, and Trojans, the artists also analyzed and created renderings of e-mail spam, phishing attacks, keyloggers, and malicious e-card attacks.

Dragulescu's projects are experiments and explorations of algorithms, computational models, simulations, and information visualizations that involve data derived from databases, spam e-mails, malware, blogs, and video-game assets. In 2005, his software Blogbot won the IBM New Media Award. Blogbot is a software agent in development that generates experimental graphic novels based on text harvested from blogs. Since 2007, Dragulescu has worked as a researcher in the Social Media Group at the MIT Media Lab.
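The pipeline described above — extract simple statistics such as frequency, density, and groupings from a specimen, then map them onto the inputs of a generative algorithm — can be loosely illustrated in Python. This is my own toy approximation, not Dragulescu and Hodgson's actual code; the file name and the particular feature-to-parameter mappings are invented.

```python
# Loose sketch: turn simple statistics of a binary into visual parameters.
# An illustration of the general idea, not the artists' real algorithm.
from collections import Counter
from math import log2

def byte_features(data: bytes):
    counts = Counter(data)
    total = len(data) or 1
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * log2(p) for p in probs)          # information "density"
    top_bytes = [b for b, _ in counts.most_common(8)]   # dominant "groupings"
    return entropy, top_bytes

def visual_parameters(data: bytes):
    entropy, top_bytes = byte_features(data)
    return {
        "spike_count": int(entropy * 10),          # higher entropy -> spikier form
        "hue": sum(top_bytes) % 360,               # dominant bytes pick a color
        "scale": min(len(data) / 4096, 10.0),      # larger samples -> larger body
    }

with open("suspicious_sample.bin", "rb") as f:     # hypothetical input file
    print(visual_parameters(f.read()))
```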
<urn:uuid:677f2a43-9836-4e08-945e-93562f1cdde6>
CC-MAIN-2017-09
http://www.darkreading.com/vulnerabilities-and-threats/viral-art--a-gallery-of-security-threats/d/d-id/1079278
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00045-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944662
453
2.609375
3
On Wednesday, May 23, Jason Healey, moderator for the Atlantic Council’s “Building a Secure Cyber Future: Attacks on Estonia, Five Years On,” reminded us that for all the talk of emergent threats and new technology, cybersecurity has a history that’s worth remembering and learning from. While the Navy still studies the Battle of Trafalgar from the Napoleonic Wars, network defenders and pundits often ignore events from even five years back, deeming them no longer relevant. The technology has changed somewhat over the years, but the underlying principles of computer network operations and security as a whole have not, and with events such as the interactive conference on the 2007 cyber attacks on Estonia, the Cyber Statecraft Initiative aims to bring that perspective to cyber conflict. Though perhaps not as significant a conflict in cyberspace as the Battle of Trafalgar was at sea, the attacks on Estonia were eye opening for the international and security communities. Amid a heated, nationalistic dispute between Russia and Estonia over a Soviet-era statue of a Red Army soldier, in late April 2007 Estonia was hit by a wave of cyber attacks temporarily disabling crucial websites such as those of Estonian parliament, newspapers, banks, and ministries. Given the current scale of cyber crime, the attack might have gone unnoticed by the global security community today, and even back then an attack of that magnitude would occur every three weeks, with the largest cyber attack at that point being ten times the size. Still, the attacks on Estonia were extremely significant in several ways. While only moderately large for the Internet as a whole, the attacks were overwhelming for the tiny country of Estonia, which had only 1.3 million people. Estonia had also been a leader in e-governance, relying on the Internet for many government services. Lastly, it was seen as an example of state-on-state cyber conflict, as the attacks were believed to be carried out or ordered by the Russian government. So what can today’s defenders learn from this little cyber Trafalgar? The biggest lesson from the attacks on Estonia is that network defense and mitigation at the national and international level requires relationships that must be forged before the attack. As panelist Brian Peretti, Financial Services Critical Infrastructure Program Manager at the Office of Critical Infrastructure Protection and Compliance Policy of the United States Department of the Treasury, noted, just as the Internet is based on trusted connections between computers, cybersecurity is based on trusted connections between people and organizations. Estonia’s Computer Emergency Response Team (CERT) couldn’t develop the relationships it needed during an attack, and lacked the necessary ties to internal organizations such as banks and Internet service providers, as well as international organizations such as other CERTs. Other CERTS need to be confident in its capabilities to properly identify a threat and reciprocate if they have trouble before they are willing to work with another country’s team. As a result, Estonia’s CERT became a bottleneck. Fortunately, that lesson was not lost on the United States. After the Estonia attacks, America established the National Cyber Response Coordination Group domestically, which was co-chaired by the Department of Homeland Security, the Department of Justice, and the Department of Defense and we’ve since run numerous exercises with international partners.
<urn:uuid:9aad7b20-1236-456f-80f8-c348bbdc0329>
CC-MAIN-2017-09
http://www.fedcyber.com/2012/06/07/estonia-as-a-cyber-trafalgar/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00221-ip-10-171-10-108.ec2.internal.warc.gz
en
0.971195
671
2.6875
3
Author: John Muster Publisher: McGraw-Hill Professional In this book John Muster will teach you how to use UNIX and Linux through clear presentation of the concepts, hands-on tutorials and exercises, illustrations, chapter reviews, and more. The following list represents but is not limited to, some of the topics that you will learn inside this book: - Log on to UNIX/Linux system - Specify instructions to the shell - Run programs to obtain system and user information - Set and use permissions for files and directories - Create and execute shell scripts - Control user processes - Use the visual editor (vi) - Manage, print, and archive large files - Use multiple utilities in scripts - Access and tour graphical desktops - Create and change directories - Administer a Linux PC system The subjects covered in each chapter are organized in a way the reader can quickly find learning objectives, skills-check sections, hands-on tutorial, fundamental skill-building exercises, illustrations and figures, chapter self-tests, end-of-chapter summaries, quizzes, and projects. About the Author John Muster is a leading UNIX and Linux Curriculum Developer and Instructor at the California Berkeley Extension, where he was given the Honored Teacher Award. For the last twenty years he has explored how to facilitate learning of UNIX and Linux through several major projects funded by the United States Treasury, Anderson Consulting (Accenture), Sun Microsystems and Apple Computing. These projects form the body of work on which this text is based. Inside the book The first chapter, although very brief (in terms of pages), covers some basic but fundamental stuff for those who are not familiar with the UNIX OS family. If for some reason all UNIX/Linux computer systems would cease to function, we would be living in a very different world. Why? Find out yourself by buying and reading this book. After pointing out the importance of these robust and superior OS-es, you will learn how to log on, log out, to/from terminal or graphical UNIX/Linux window. By reading the first chapter, you should be able to log on and/or start a terminal window in the graphical environment displayed at login. As the author moves on you get familiar with the history of UNIX and Linux. After this brief survey, you carry on by meeting the shell. In this chapter’s text you’ll learn how to run programs to obtain system and user information, communicate instructions properly to the system’s command interpreter, navigate to other directories in the file system, use standard programs to create, examine, and manage files, access the online manual pages that describe specific commands and files, search the online manual pages by keyword or regular expression, identify and access useful Internet sites. This third chapter completes the tour of the major features of the system including communication with the shell to execute processes, navigating the file system and employing permissions. Getting work done in UNIX/Linux generally entails asking the shell to execute utilities. We can also issue commands to tailor or modify many aspects of UNIX and Linux to meet our particular needs. Well, here you find out how to use those utilities, manage input and output from utilities, and employ special shell characters to give instructions, manage user processes, modify the computing environment, create and execute a basic shell script. In Linux and UNIX, we frequently modify some configuration files. These files consist of lines of characters. 
Computer text editors were developed to accomplish these tasks. The UNIX/Linux visual editor is a powerful, fast, command-driven screen editor. Although, especially in Linux we have a choice to edit/create our ASCII files with other editors (Emacs for example), VI is an essential tool, especially for system administrators when in need to recover a system from a system crash. After completing this chapter, you will be able to use vi to create and access files, move around in a file effectively, add lines, words, and characters, delete characters, lines, blocks of text, cut, paste, undo changes and much more. Some of the most prominent features on the UNIX system landscape are its powerful and versatile utility programs. Specific utilities locate system information, sort lines, select specific files, modify information, and manage files for users. Although each utility is designed to accomplish a simple task, they can be easily combined to produce results that no single utility could produce on its own. So, read on and learn to count the words, lines, and characters in a file, sort the contents of a file, identify and remove duplicate lines in a file, compare two files by identifying lines common to both, perform concatenation, math calculations, and more. Every day, users around the world employ powerful UNIX/Linux utilities to accomplish complex tasks. These commands can be put together to form a script. Scripts increase efficiency, save time, and eliminate typing errors. You’ll learn to combine basic utilities to accomplish complex tasks, create shell scripts, construct scripts incrementally, identify errors in scripts and repair them. After reading chapter seven you will be able to create directories, change current working directory, use complete pathnames for files, identify the role of inodes, data blocks, and directories when managing files, move whole directories and their contents, remove directories and much more. The various shells interpret out commands to accomplish tasks we specify. After the programs complete their work, the shell pops back up and asks what’s next. We have to talk shell language when we issue commands if we want to be understood. The author teaches you how to identify all tokens on a command line, establish where utilities read input, write output, and write error info, use the command line to pass complex arguments, modify the search path, redirect output and much more. UNIX/Linux systems manage files that can vary greatly in their importance – from top secret to casual non-important notes. To maintain a sound security policy, files on each system are given a different level of protection. For those of us who are concerned with the security aspects of our systems this chapter is a must read as it demonstrates how to determine the permissions that various kind of users are granted for files, change permissions, specify how permissions limit access to files and directories, and a bunch of other things regarding system’s security. At any moment, one to several hundred users can be logged on to the same Linux or UNIX machine, each accomplishing different tasks. When UNIX was designed, true multitasking such as this, was the main objective. This chapter investigates how the hardware and software execute utilities, and how to monitor and control separate tasks. These skills are essential, not only for system administrators, but for users and programmers as well. As we use Linux and UNIX we often have to manage large files. 
Maybe we need to create those files, and after that we need to split them into smaller pieces, copy individual files, and so on. Chapter eleven examines how to accomplish these tasks.

Unix and Linux support both character-based and graphical terminals. Although interacting with the system by typing commands is an effective means of communicating our intentions to the system, some applications are more easily managed through a graphical user interface. This chapter explores how to access, use, and customize the most popular graphical desktop environments.

The SysAdmin of a Linux system is responsible for the installation, operation, maintenance, repair, and security of the system. Problems can arise when we fail to accomplish the needed tasks or when we make mistakes while performing certain duties. Muster provides an introduction to UNIX/Linux system administration. Topics are briefly introduced, and basic commands are covered.

About the CD-ROM
This book comes bundled with a CD. The CD contains, but is not limited to:

This book is certainly a great guide that leads you through all the right steps so you can acquire and apply your knowledge to real-world UNIX/Linux systems as quickly as possible. Through carefully developed, hands-on interactions with a UNIX/Linux system, you are guided on a grand, stimulating journey from the absolutely basic system features to a rich mastery of the details employed by experts. The book is aimed at beginners but could also serve advanced users very well, because it is not a big, boring, thousand-page reference tome but an interesting learning guide.
<urn:uuid:5ac4d747-42b5-42bf-86c2-e2605066c217>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2004/02/02/introduction-to-unix-and-linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00341-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913751
1,718
3.09375
3
According to the Bible, the universe was created in about a week. Astrophysicists are currently building a virtual universe that will be completed in about four months, using 2048 processors of the MareNostrum supercomputer. Hosted by the Barcelona Supercomputing Center, this 10,240-processor IBM machine is able to perform more than 94 trillion operations per second. This unique facility, the largest in Europe and ironically located inside an old chapel, is the perfect place to compute the formation and evolution of a virtual replica of our own universe. The latest generation of astronomical instruments has allowed astronomers to have a clear view of the universe at its infancy, based on the so-called “cosmic microwave background,” as well as a very detailed knowledge of the universe at present, in its fully adult, grown-up age. In order to fill the gap in between, and to prepare for the next generation of astronomical instruments, astrophysicists from all over Europe have gathered in Barcelona to run a single application that can compute the evolution of large scale structures in the universe. The MareNostrum galaxy formation project is a multidisciplinary collaboration between astrophysicists of France, Germany, Spain, Israel and USA, together with computer experts from IDRIS (Institut du Développement et des Ressources en Informatique Scientifique) and BSC (Barcelona Supercomputing Center). The application solves a very complex set of mathematical equations by translating them into sophisticated computational algorithms. These algorithms are based on state-of-the-art adaptive mesh refinement techniques and advanced programming technologies in order to optimize the timely execution of the same application on several thousands of processors in parallel. The simulation is now computing the evolution of a patch of our universe — a cubic box of 150 millions light years on a side — with unprecedented accuracy. It requires roughly 10 billion computational elements to describe the different kinds of matter that are believed to compose each individual galaxy of our universe: dark matter, gas and stars. This requires the combined power of 2048 PowerPC 970 MP processors and up to 3.2 TB of RAM memory. Contrary to other large computational problems in which information can be split into independent tasks, and because of the non-local nature of the physical processes we are dealing with, all 2048 processors have to exchange large amounts of data very frequently. To support this type of processing, the application takes advantage of the high bandwidth and low latency Myrinet interconnect installed on the MareNostrum computer. A personal computer, provided it had enough memory to store all the data, would need around 114 years to do the same task. In about four months, using almost one million CPU hours, several billions years of the history of the universe will be simulated. The simulation makes intensive use of the I/O sub-system. To allow such a huge simulation to run smoothly during weeks of computation and to get an optimal performance of the system, application tuning was required: the simulation package provides a restart mechanism that allowed for recovery and resumption of the computation. In this way, the application is able to deal with hardware failures without having to restart from the beginning. This mechanism requires around 30 TB of data to be written to save the application state. 
In order to minimize the Global File System contention, an optimized directory structure has been proposed, supporting a sustained “parallel write” performance of 1.6 Gbps. Other design aspects were taken into account in order to improve a massive “parallel read and broadcast” over the Myrinet network, in order to read and dispatch the initial condition data over all the processors. As in computer-simulated movies, a large number of snapshots are stored in sequence in order to provide realistic animation. The total amount of scientific data generated will exceed 40 TB. This unique database will constitute a virtual universe that astrophysicists will explore in order to create mock observations and to shed light in the many different processes that gave birth to the galaxies, and in particular, our own Milky Way galaxy. The first week of computation was performed last September, during which 34 snapshots were generated, producing more than 3 TB of data. During the entire week only two hours were lost because of a hardware failure. This was due to failure of a single compute node — out of the 2100 processors reserved for the run. The MareNostrum virtual universe was evolved up to the age of 1.5 billion years. Astrophysicists believe that this is precisely the era of the formation for the first Milky Way-like galaxies. Researchers have detected roughly 50 such large objects, with more than 100,000 additional galaxies of smaller size in the simulated universe. They are currently analyzing their physical properties in the virtual catalogue, as well as preparing for the next rounds of computations that will be needed in order to complete the history of the virtual universe. One of the most important issues of numerical modelling of complex physical phenomena is the accuracy of the results. Unfortunately, the researchers cannot compare the results from the numerical simulations with laboratory experiments, like in other areas of computational fluid dynamics. Instead, the reliability of the simulations can be assessed by comparing results from different numerical codes starting from the same initial conditions. In this regard, the researchers are also simulating the MareNostrum Universe with a totally different numerical approach, using more than two billion particles to represent the different fluid components. This simulation is part of a long-term project named The MareNostrum Numerical Cosmology Project (MNCP). Its aim is to use the capabilities of MareNostrum supercomputer to perform simulations of the universe with unprecedented resolution. Source: Barcelona Supercomputing Center, http://www.bsc.es/
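The restart mechanism described above — periodically saving the full application state so a hardware failure does not force the run back to the beginning — can be illustrated with a minimal checkpoint loop. This is my own simplification in Python; the real galaxy-formation code, its data formats, and its parallel I/O are far more elaborate.

```python
# Minimal sketch of a checkpoint/restart loop for a long-running simulation.
# Illustrative only; the actual MareNostrum codes and file layouts differ.
import os
import pickle

CHECKPOINT = "snapshot.pkl"
TOTAL_STEPS = 1_000
CHECKPOINT_EVERY = 50

def advance(state):
    # Stand-in for one expensive physics step (gravity, hydrodynamics, ...).
    state["time"] += 1.0
    return state

# Resume from the last saved snapshot if one exists, otherwise start fresh.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        state = pickle.load(f)
else:
    state = {"step": 0, "time": 0.0}

for step in range(state["step"], TOTAL_STEPS):
    state = advance(state)
    state["step"] = step + 1
    if (step + 1) % CHECKPOINT_EVERY == 0:
        with open(CHECKPOINT, "wb") as f:   # on MareNostrum this save amounted
            pickle.dump(state, f)           # to ~30 TB, written in parallel
```

A failed run restarts from the most recent snapshot rather than from step zero, which is exactly the property the project needed to ride out hardware failures over weeks of computation.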
<urn:uuid:c464a141-b56d-4c65-82cd-474439e411e9>
CC-MAIN-2017-09
https://www.hpcwire.com/2006/12/15/the_marenostrum_universe-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00517-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932729
1,175
3.421875
3
Operations research (OR) is the use of quantitative techniques (statistics, etc.) to solve problems and help leaders make decisions. OR has been around for centuries, but in the decade before World War II it came to be recognized as a distinct discipline. Operations research was used extensively during World War II to solve numerous problems; everything from how best to use radar or hunt submarines to running factories and getting supplies to the troops efficiently. Following this wartime experience, there were even more numerous peacetime successes. This led to OR being called "Management Science." OR is still widely used in military applications. Why is it that OR never gets any respect? Operations Research is, arguably, the most important scientific development of the 20th century. OR, like the earlier Scientific Method, is basically a management technique. Management is also a 20th century concept that gets little respect. In some respects, OR is the combination of management principles and the Scientific Method. Without the breakthroughs in management techniques, the enormous scientific progress of the 20th century would not have been possible. Unfortunately, management is, like the air we breathe, something most of us take for granted. But take away the air, or the management, and you notice the absence real quick. Techniques have always had difficulty getting attention. You cannot see them. Objects, on the other hand, are easy to spotlight and promote in the media. Even genetic engineering can generate pictures of new proteins, or the latest crop of clones. But how do you get a picture of OR in action? Someone playing with a slide rule or workstation don't quite cut it. It's so bad that the most common representation of engineers at work is Dilbert. And no one has ever asked me to talk about OR on TV or radio. OR The Ancient Art Part of the problem is that OR has actually been around for thousands of years, it just wasn't codified until the 20th century. From Alexander the Great to Thomas Edison, we have records of great men applying OR methods to problems. They just didn't call it OR. But if you examine what they did, it was. Consider some of the examples. Alexander the Great, and his father Philip, had a firm grasp of politics, finance and mathematics. There was no magic involved in how Philip came out of nowhere and dominated the more powerful Greeks. His son was equally astute, coming up with one clever solution after another. Alexander was always in the company of mathematicians and scientists, something which Greece had an abundance of at the time. Military, scientific and political problems were all carefully thought out and solutions adroitly implemented. Take a close look at the period and you'll recognize a lot of OR at work. The Romans were equally adept, and much OR can be seen as they built the largest empire of the ancient period. Napoleon, who was educated as a mathematician, again used OR tools to innovate and accomplish his goals. And then there was Thomas Edison, the most prolific inventor ever. He gave a splendid example of military OR in action two decades before OR was recognized as a discipline. Asked by the US Navy to help in dealing with German subs and the threat to US shipping, Edison analyzed the situation and came up with the convoy system and much of what we now think of as convoy protection and anti-submarine warfare. 
OR The Packaged Art
The reason OR had to be reinvented time and again was that no one in the past established OR as a distinct discipline. This was common in the ancient world, where many modern devices, like the steam engine, were invented, but there were no trained engineers to bring such devices to production and widespread use, or to record and preserve the technology. Until the last two centuries, knowledge was acquired via apprenticeship, not formal education. So the achievements of those few who reinvented OR were always considered individual genius, not something you could package and reuse. Packaging knowledge is another 20th century movement that propelled everything else. While books have been around for thousands of years, and the modern university education was developed in the 19th century, the mass production of scientists and engineers is a 20th century innovation. Roger Bacon invented, published and propagated the Scientific Method in the 13th century. This led to steady and growing progress in science and engineering. OR was the next logical development in the systematic application of knowledge to problems. But we should take heed of the experience with the Scientific Method. This technique was used successfully by scientists, but also misused or ignored by opportunists, politicians and charlatans of all types.
OR's Few Practitioners
Because OR requires you to think clearly and methodically, there aren't many practitioners. Despite strenuous efforts, only about five percent of US Army officers are familiar with OR techniques (based on an estimate I did with Army OR instructors at Ft. Lee). And the Army has cut back on the training of officers in operations research techniques. The Scientific Method is rather simpler to explain and implement than OR; it was a relatively simple recipe for getting to the truth, and it is taught in high school science courses. OR involves a lot more imagination and heavy-duty math. Coming out of World War II, U.S. flag officers were true believers in OR. Soon there were several new agencies set up to do OR-type work — outfits like the Army's Operations Research Office (ORO), the Army Air Corps' Operations Analysis Division (OAD) and the Navy's Operations Evaluation Group (OEG, after ASWORG). In the beginning, OR was considered such a broad approach to problem solving that many different disciplines were accepted as part of the process. But eventually OR practitioners like Koopman, Leach, Morse, Kimball, Newman, Solant and others came up with a generally accepted curriculum. Initially, the non-military universities providing OR instruction taught the fundamental academic subjects and some applications. This approach was less pure OR and more systems engineering and business decision/quantitative sciences. Officers obtained OR training at the Naval Postgraduate School, the Air Force Institute of Technology and the civilian schools the Army used for such training (Georgia Tech, Colorado School of Mines, Florida Institute of Technology, and MIT). Other civilian schools producing OR practitioners were Purdue, Texas Tech, UT Austin, Ohio State, and Texas A&M. By the 1980s, the Army had created MOS 49 (Operations Research/Systems Analyst) and established a school at the Logistics Management College at Ft. Lee, VA. Graduates could go right to work, or transfer to a master's degree program with 15 credits already taken care of by the Ft. Lee course. There were between two and four classes a year, with about thirty officers per class.
During the same period other military schools were running two classes a year with varying number of students. Naval Postgraduate School had 40 per class, Colorado School of Mines had ten, Georgia Tech had five and AFIT 25 students per class. Graduates were sent off to places where analytical skills were needed, especially in staffs and research operations. Results were forthcoming, as a lot of the smart moves made during the 1980s were done with OR operators guidance. There were enough OR practitioners that OR cells could be set up in many key organizations. This all changed when the Cold War ended. Staffs were cut and the smart guys were the first ones to move out for greener pastures. The OR operators knew they were in demand outside the military, and also realized that someone in a narrow specialty like OR was not going to make a lot of rank. Another "peace dividend" cutback was in training for military OR specialists. The service schools for teaching OR were shut down and officers sent to civilian OR courses. This was not the same, as the military used OR differently than civilian organizations. However, sending the students to the civilian schools made it easier for these officers to get jobs when they decided to bail out. But the very complexity of OR makes it possible to encapsulate OR as a distinct tool. This is what is happening. A black box on steroids. Going into the 21st century, we are beginning to mass produce robotic scientists and engineers. Increasingly, control devices use computers and OR techniques to run everything from automobile engines to stock portfolios. We think nothing of using powerful microprocessors and sensors to do, automatically, what once took a team of highly trained people to do. OR appliances or (ORAs) are an outgrowth of the development of expert systems and more powerful microprocessors. We already have ORAs in the form of powerful diagnostic systems on PCs and in automobiles and other vehicles. We tend to overlook the increasing amount of problem solving AI being used in machines and large systems. Diagnostic software, in particular, is making great strides. OR and Wargames OR has not served us as well as it could in wargaming and policy studies. Many lost opportunities have resulted. Take the Rawanda situation. The myth forming is that if troops had been sent in immediately, there would have been no genocide. In part, this is another case of amateurs studying tactics while professionals pay attention to logistics. But where are the OR studies of this situation? It's not a difficult one to do. Calculating the logistics is easy, working out the impact of peacekeeping troops on the killing is a bit more of a challenge. More Process Than Problem Solving Wargames are another area that could use more OR. As far as I can tell, OR shows up most often in professional wargame development as more process than problem solver. You can make a case that is how it should be, but my personal experience was that OR was the primary tool for developing a simulation of a historical conflict. OR techniques were used to solve the problem of how to develop a system that would generate reproducible results. Those of you who have played manual wargames long enough to absorb those games design techniques, and have designed your own, know what I mean. Wargame designers have abundant personal experience in making this work. 
I first encountered this while inadvertently predicting the outcome of the 1973 Arab-Israeli war, and more deliberately predicting the process by which the 1991 Gulf War was fought. A few years later, the lead designer on the Gulf War game found himself tasked, on very short notice, to create an accurate game of a conflict between Ecuador and Peru. His CINC gave him a medal for that effort, for the CINC considered COL Bay's overnight effort superior to what was being sent down from Washington. Even with the new wargaming MOS in the army, you still have tension between OR practitioners and wargamers. As OR types are heard saying, when you've tried everything else, try simulation. This destructive attitude was picked up in the civilian schools now teaching officers OR and is another example of how inadequate such schools are for training military OR practitioners. Fear of Trying Some subjects are difficult to even touch in professional wargames. And these are often issues that any straight-ahead OR analysis would encounter and deal with. But many OR operators shy away from the soft factors (morale, interpersonal relationships, fog of war and the like.) For example, one of my games (NATO Division Commander, or NDC) was adopted by the CIA as a training device in the early 1980s because it went after items CIA analysts felt were crucial, but most wargames, especially DoD wargames, avoided. Namely personnel issues among the senior leadership. NDC was part wargame, part role playing game and double blind as well. I don't think anyone ever did a game on how division staffs operate, but it was a worthy exercise. But it wasn't just the CIA that found wargames like this useful. I have continually heard from officers, with both peacetime and combat experience, who find that wargames give them an edge. The users don't have taboos about the simulation being near perfection. Like professional gamblers, the troops know that anything that puts their odds of success over 50 percent provides a tremendous advantage. Such an approach will be essential to handle things like Information Warfare (IW) or Revolution in Military Affairs (RMA). IW, for example, deals with shaping both friendly and enemy perceptions of what is happening. This is very difficult to model because there are many subjective and soft elements to be quantified. This makes it tough for traditional OR practitioners that try to deal with combat as hard science. Warfare is anything but and things like IW even less so. But historical game designers have dealt with this sort of thing successfully for decades. While traditional OR tends to focus on attrition, which is easier to model, but run these models by a military historian and they will provide numerous examples of where battles were won or lost not because of attrition but because of troop morale or one commander simply deciding he had been beaten. One solution to the problem of making OR more useful for the troops is the concept of "Battlefield OR Appliances. (BOAs)" The business and financial community already uses such beasts (less the "Battlefield" tag) for doing complex analysis in real time. Neural nets and genetic algorithms are attractive for the business "appliances." The idea is to create apps that think quickly and accurately, far more rapidly than any human practitioner. 
Program trading for financial markets is based on such concepts and, although few will admit it, these trading droids are often turned loose with little human supervision (mainly because there are situations where the action is so fast that slowing the droid down so humans to keep up would cripple the usefulness of the operation.) The Air Force has been talking about BOAs in the cockpit (pilot's assistant.) The zoomies are thinking about a BOA that would wrestle with things like compensating for battle damage, other equipment problems or EW situations while the pilot continued the battle. Of course, air combat is so complex that pilots could use a little coaching in things like the maneuvers most likely to bring success in a particular operation. Ground units could also use BOAs, especially in conjunction with digitalized maps. Setting up optimal defensive positions, patrol patterns or how best to conduct a tactical move. Sailors have similar needs (and one of the first OR successes was the development of optimal search patterns for ASW.) Imagination versus Knowledge We live in an age of unprecedented knowledge production. Part of OR is finding the right knowledge and applying it as a solution. One of the better Japanese work habits was their diligent collection of new knowledge. This is one reason why, for several decades, they have been one jump ahead of the "more imaginative" Americans in developing popular new products for the American market. In DoD, only the Marines consistently cast a wide net for new knowledge. When commercial wargaming became popular with the military in the 1970s, it was the marines that went after it most aggressively. When the marines recently showed up on Wall Street to study the workings of financial markets, they were really on to something. The manner in which these volatile, and quite huge, markets have moved from all manual to man-machine trading (program trading, etc) has direct application for the military. And the manner in which the man-machine concepts were implemented were classic OR exercises. Items that can be expected to happen in the future, either because they are likely, or because we can only hope. Long a part, often an annoying one, of commercial software, these apps are constantly being beefed up to engage in more complex troubleshooting dialogs with users. There is potential here to obtain technology that can be used for battlefield OR appliances. The development work on Wizards draws heavily from decades of work on AI and Expert Systems. Much exciting OR work is going on in this area, and I believe there are already a number of military applications in development. Troops Rolling Their Own This has been going on since the early 1980s and the results are becoming more and more impressive. As the off-the shelf development tools become more powerful, more OR type military and wargaming apps will come from the field. These apps will co-opt the official wargames and sims in shops that want to get the job done rather than just perform the official drill. Some folks may not like this, but you won't be able to stop it. Much of the technology for these products has long been available off the shelf. Not a lot has been taken up by the DoD crowd, at least not in peacetime. Even slower to cross over have been the commercial development standards, which put wargames to realistic testing routines and quickly modify as needed. This does get done in wartime, as witness some of the rapid development that occurred during the Gulf War. 
The only military organizations making use of commercial gaming technology are those outside the DoD wargaming mainstream. This is largely a political (commercial stuff is "not invented here") and contractor (doing it from scratch makes for larger contracts) one. Process Control and Program Trading Technology Much of this is proprietary and you'd probably need an Act of Congress to extract a lot of this technology from the firms that developed it. However, much can be obtained from trade journals and a little (legal) competitive intelligence. Most of what is being done is no secret, it's the details of the execution that are closely held. And for good reason. In the financial markets, any edge is usually small and short lived. But this is what makes this technology so valuable. The manufacturing and financial markets are "at war" all the time. They thrive or go bankrupt based on how well their "weapons" perform. And much of the technology is transferable to military uses. Many of the components of commercial apps are available as off the shelf toolboxes or widely know concepts. One of the more useful of these is known as "fuzzy logic." This item addresses many of the problems DoD wargamers have with dealing with soft factors. Civilian practitioners face the same problems and they have come up with many working solutions using fuzzy logic. In my experience, nothing is fuzzier than modeling combat. What Percent Solution? There may eventually be more acceptance of OR solutions that are sufficient rather than perfect. When modeling weapons or equipment performance, you can get a 100% solution (or close to it.) Many OR practitioners are more comfortable with this than those elements that involve more people and less machinery. It is currently impossible to get a 100% (or 90%, or often even 50%) solution for things like how an infantryman performs in certain situations. In peacetime, there is tendency to gold plate things. Errors of any sort are threatening to careers. In wartime, you can make mistakes. Everyone else is and the honors go to those makes the fewest. But the peacetime zero defects attitude hampers innovation and performance. You get a lot of stuff that is perfect, but doesn't work. The latest example of this was seen in the Gulf War. When a more powerful bunker buster bomb was needed, the weapon was designed, developed and delivered in less than two months and used at the end of the war. The air force also improvised a mission planning simulation in record time (using everything from spreadsheets to existing models) and used it. The army CENTCOM wargames shop also improvised (although less successfully, but this was only discovered after the war, a common occurrence in wartime.) We should also remember that during World War II, OR practitioners recognized that their calculations could not cover all critical factors. They had to work with fuzzy situations and before "fuzzy logic" became a recognized tool, the World War II operators managed to work with the problem and not just walk away muttering that "it can't be done." The current rigidity in can be traced to the relative lack of operational experience. And when operational experience does become available, it is often the case that battlefield calculations that ignored those pesky soft factors were way off. A good example of this was the analysis TRADOC did of NTC engagements. They found that the ammunition expenditure data for NTC was much different than the OR predictions, and closer to the expenditures in earlier wars. 
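To make the fuzzy-logic suggestion above a bit more concrete, here is a minimal sketch of how a "soft" factor such as unit morale might be folded into a combat-power estimate. The membership thresholds and the effectiveness rule are invented for illustration; they are not drawn from any DoD or commercial model.

```python
# Minimal fuzzy-logic sketch for a soft factor such as unit morale.
# Thresholds and the combat-effectiveness rule are illustrative assumptions.

def membership_high_morale(morale_score):
    """Degree (0..1) to which a 0-100 morale score counts as 'high'."""
    if morale_score <= 40:
        return 0.0
    if morale_score >= 80:
        return 1.0
    return (morale_score - 40) / 40          # linear ramp between 40 and 80

def effective_strength(raw_strength, morale_score):
    # A crude fuzzy rule: low morale discounts raw combat power.
    mu = membership_high_morale(morale_score)
    return raw_strength * (0.5 + 0.5 * mu)   # between 50% and 100% effective

print(effective_strength(1000, 30))   # a shaken unit fights at half strength
print(effective_strength(1000, 90))   # a confident unit fights at full strength
```

The point of the graded membership function is that the model degrades gracefully instead of flipping between "effective" and "ineffective" at some arbitrary cutoff — which is exactly the behavior attrition-only models miss.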
You cannot ignore dealing with the soft factors, for eventually they will bite you in the ass. The end of the Cold War coincided with a growing demand for OR skills in the civilian sector. So many of the uniformed OR people left, and their numbers are still dwindling. Where there used to be OR groups in HQs and schools, you don't see much of this anymore. This is bad. Part of the problem is that military OR people can make more money, and have fewer unaccompanied overseas tours, as a civilian. But the numbers of uniformed OR operators is shrinking so much that the military is having a hard time properly supervising the civilian hired guns. It is also important that military OR operators be warriors (or military practitioners) first. Otherwise, you often encounter the syndrome of "if the only tool you have is a hammer, all problems look like nails." The same syndrome is noted in the civilian sector, and the solution is often to take a banker, plant manager, structural engineer or whatever and train them in OR techniques (or programming) and then turn them lose on the problems. Putting Operations Back into OR Originally, OR operators researched operations first hand and then devised solutions. There has been a trend away from this and towards an emphasis on technique: linear programming, dynamic programming, queuing theory, chaos theory, neural networks and so on. Knowing these techniques is a good thing, and can even be useful if you collect valid operational data. But it is not real operations research. If you can't get an experienced infantry officer as an OR analyst, then maybe you should consider sending your analysts through boot camp. Most fighter pilots have technical degrees and can pick up OR techniques quickly. Using OR trained pilots to work on combat aviation problems is an enormous advantage, for the operator with practical experience will catch things that the researcher trained only in OR will miss. Happens all the time, and the military is noticing the lack of military experience among the OR practitioners now working on military projects. Putting the R back into OR The "R" -research- has been largely replaced by "A"; analysis. The Navy now talks more of "OA" (Operational Analysis) than "OR" with "OA" often being considered an adjunct to Modeling and Simulation. The result has been an emphasis on quantification and metrics at the expense of understanding the problem. Like so many other debilitating trends, this one developed largely in response to what "decision-makers" have demanded. What we often have now is "advocacy analysis," where much time and effort is spent to provide justification of a position or decision based on having more and "better" numbers and metrics than your critics. This often occurs by focusing on a very narrow slice through a problem that is often far removed from the true context of the overarching problem. Crunching Numbers versus Getting Results There has long been a split among OR practitioners, especially in peacetime, over how best to achieve results. On one side you have the "physics" crowd, who insist on reducing every element of combat to unequivocal data and algorithms. On the other side you have the "whatever works" crew. The "physics" bunch are basically engaged in CYA (Cover Your Ass) operations, because there are many soft factors in combat that are not reducible mathematically the same way weapons effects can be. Better OR Tools You can't have too much of this. I'll never forget the first time I did a regression analysis. 
I did it manually. Try and get students to do that today and you'll get arrested for child abuse. By the 1980s we had spreadsheet plug-ins for Monte Carlo, Linear Programming and so on. I thought I'd died and gone to heaven when I first got to use that stuff. Then came MathCad, SQL on analytic steroids and more. Yes, we want more. We need more. We deserve more. If we can't get any respect, at least we can get more neat tools. Warning; too much of this stuff appears to contribute to overemphasis on analysis at the expense of getting something useful done. Put OR Back in Uniform A combination of vastly increased demand for civilian quants in the 1990s, reduced promotion opportunities after the Cold War, and the usual problems with having a non-mainstream MOS saw a steady decline in the number of uniformed OR operators. Norm Schwartzkopf was one of these, but he's retired, as are many other OR qualified officers. Either that or out making a lot more money as a civilian quant. I don't know how you're going to get quants back in uniform. It will take a decision at the top. In times of crises and resource shortages, a lot of really important things get shortchanged because they are difficult to understand. Please note: This article is revised and expanded from notes used for a talk at a November, 2000 INFORMS meeting I'll be gradually editing this into a more useful format. One of the people at the talk was Mike Garrambone, a long time wargamer and former Major of Engineers (and instructor in wargaming and the AFIT.) Mike has been working on a history of OR and we are going to work together to integrate that into this document. About the Author If you have any comments or observations, you can contact Jim Dunnigan by e-mail at [email protected]. Jim is an author (over 20 books), wargame designer (over 100 designed and publisher of over 500), defense advisor (since the 1970s), pundit (since the 1970s) and "general troublemaker". Dunnigan graduated from Columbia University in 1970. He has been involved in developing wargames since 1966. His first game with Avalon Hill (now a part of Hasbro), "Jutland" came out in 1967. He subsequently developed another classic game, "1914", which came out in 1968. A year later he began his own game publishing company (Simulations Publications, Inc, or SPI). In 1979 he wrote a book on wargames (The Complete Wargames Handbook). In 1980 he began a book on warfare (How to Make War). In 1982 he accepted an invitation from Georgia Tech "to come down and lecture at the annual course they gave on wargaming. Been doing that ever since." In 1985, he was asked to develop a tactical combat model to see how robotic mines would work. In 1989, he got involved editing a military history magazine (Strategy & Tactics) "which was the one I ran while at SPI." In 1989, he got involved in developing online games, and that continues. Jim edits strategypage.com. Dunnigan, James F., "The Operations Research Revolution Rolls On, To Where?", DSSResources.COM, 05/28/2004. James F. Dunnigan provided permission to archive this article and feature it at DSSResources.COM on Monday, December 7, 2003. This article was posted at DSSResources.COM on May 28, 2004.
<urn:uuid:49810b7e-127a-4a41-a73e-c53e3c564c61>
CC-MAIN-2017-09
http://dssresources.com/papers/features/dunnigan/dunnigan05282004.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00393-ip-10-171-10-108.ec2.internal.warc.gz
en
0.974777
5,584
3.078125
3
The mention of microlending may call up images of loans made to farmers in India or credit extended to women entrepreneurs in immigrant communities. While the village-based credit delivered by non-governmental organizations does constitute microlending, it is only part of the scope of microlending today. Now, instead of rural villages, microlending has shifted to social media-based online communities and the prevalent non-bank payment companies are facilitating the transactions. To prevent the loss of short-term loans to competitors, banks need to establish a social media presence that will help retain customers and generate income. By definition, a microloan is any extension of credit in which either the borrower, the lender or the amount lent is small, seldom above $25,000 and often as little as $500. Oftentimes, a microloan is made to a person who otherwise would not qualify for a traditional loan or credit card. This is not necessarily because they are not credit worthy, but perhaps lack a sufficient credit history. Community is the Basis of Microlending It wasn't so long ago that banks served a defined county or region and that credit unions served only strictly-defined groups. Although many financial institutions have moved away from a purely community-centered orientation, many credit unions and community banks still maintain an emphasis on community in their branches and lending practices. This concept of providing financial services within a defined geography or for a specific group is simply a formalization of the centuries old practice of lending circles in which people pooled their savings and made loans to members of the circle. Lending circles make loans with confidence because they know the applicant's circumstances and capacity to repay. This keeps default rates low. It isn't peer pressure to repay that makes microlending work, it is the community's knowledge of the applicant before the loan is made. A New Type of Community Thinking of Facebook as a community is the first step financial institutions must take in order to understand how to use it, and why loans are logical services to deliver via social media. Facebook enables people to build an online community made up of people they have met and then add new acquaintances that already share a common connection with an existing friend. LinkedIn groups work the same way. People are associated with colleagues and by extension their colleagues' colleagues. It is this connectedness that makes social media communities akin to villages and lending circles. Microlending on Facebook is already underway. Kiva and Accion are online organizations that solicit for charitable investments to fund microloans to entrepreneurs and small businesses. Among the most successful microlenders is Kabbage. Kabbage, financially backed by UPS, focuses on advancing funds to internet-based merchants. Also of significant note is Microplace, a related PayPal Company. It might be no surprise, if in a few years, Microplace or some other online entity becomes serious competition for banks, just as PayPal has on the payments side.
<urn:uuid:fadcc807-1364-472d-a039-4f8d8695b8be>
CC-MAIN-2017-09
http://www.banktech.com/channels/microlending-and-social-media-competition-between-banks-and-non-bank-lenders/a/d-id/1294879
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00393-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961566
596
2.890625
3
Court Throws Out FCC Broadcast Flag In an unexpected win for consumer advocates, the U.S. Circuit Court of Appeals for the District of Columbia struck down a rule by the Federal Communications Commission that would have required manufacturers to support a new "broadcast flag" that prevents the copying and redistribution of television programming. The broadcast flag was introduced in November 2003 and the FCC mandated that all devices capable of receiving television signal, including digital TV receivers and PC tuner cards, must abide by the regulation as of July 1, 2005. The move sparked an outcry from consumer rights groups and library associations who said the flag violated fair use laws. It is also widely believed that such a change would drive up the prices of high-definition television sets and slow adoption. The FCC contends the broadcast flag is a necessity for the emergence of digital TV and is supported in its efforts by the Motion Picture Association of America. The goal, the FCC says, is to prevent Internet distribution of copyrighted broadcast content. However, a three-judge panel concluded that the FCC had "exceed the agency's delegated authority under the statute," by imposing the broadcast flag regulation. "The FCC has no authority to regulate consumer electronic devices that can be used for receipt of wire or radio communication when those devices are not engaged in the process of radio or wire transmission," the panel wrote in its unanimous decision. In order to pursue adoption of the broadcast flag, the FCC must now return to Capitol Hill and receive express authority by the United States Congress.
<urn:uuid:84b93480-dceb-47af-8b42-2ef3e48facfd>
CC-MAIN-2017-09
https://betanews.com/2005/05/06/court-throws-out-fcc-broadcast-flag/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00569-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969211
308
2.53125
3
Wikipedia Considers Coloring Untested Text

Registered Wikipedia users may soon have access to software that colors text deemed untrustworthy. In an effort to enhance the reliability of Wikipedia content, the WikiMedia Foundation, which oversees Wikipedia, is weighing whether to offer an opt-in tool for registered users that colors untrustworthy text. The software, WikiTrust, is available as an extension to the MediaWiki platform, upon which Wikipedia runs. It's a project developed by UC Santa Cruz professor Luca de Alfaro and computer science grad students B. Thomas Adler and Ian Pye. Reached in Buenos Aires, where Wikimania 2009 has just wrapped up, de Alfaro says that the timing of an experimental trial remains under discussion. The purpose of the trial, he says, will be to gather community feedback about the utility of the tool. Despite its name, WikiTrust can't directly measure whether text is trustworthy. "It can only measure user agreement," said de Alfaro. "That's what it does." Wikipedia remains a target for vandals, pranksters, and anyone with a motive to manipulate entries. The online encyclopedia's community of editors is constantly on the lookout for accidental and deliberate changes that introduce bias to articles. In an e-mail, Jay Walsh, head of communications for the WikiMedia Foundation, says that the WikiTrust code is being reviewed by the organization's technology team and that the timing of a trial, if it happens, has not been determined. "The Foundation is looking at a number of quality/rating tools for Wikipedia content, and for our other projects," he explained in an e-mail. "Many are as simple as 'rate this article' features, and some, like WikiTrust are experimental and more unique in their ability to examine other data to render some context on the article. In all cases we will deeply consult with our community of developers and editors before implementing any technology." While users may make trust decisions based on WikiTrust, de Alfaro said that the software should also be useful as an anti-vandalism tool. Vetting Wikipedia content for reliability could enhance scholarly acceptance of the online encyclopedia. De Alfaro says that he expects WikiTrust will be used to help identify reliable content for distribution to schools. "It's always frowned upon to use Wikipedia as something you cite because the content is variable," he said.
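For readers wondering what "measuring user agreement" might look like in practice, here is a deliberately simplified sketch: text that has survived more subsequent revisions untouched is treated as better agreed upon and left plain, while newer, less-reviewed text gets colored. This is an illustration of the general idea only, not the actual WikiTrust algorithm (which also weighs author reputation), and all names and thresholds below are invented.

```python
# Toy, agreement-style scoring: a word's "trust" is the number of consecutive
# earlier revisions it also appears in. Words that have not yet survived a
# couple of revisions get colored. The containment test and the threshold are
# crude assumptions, not the real WikiTrust machinery.

def surviving_word_scores(revisions):
    """revisions: list of revisions (each a list of words), oldest first.
    Returns (word, age) pairs for the latest revision."""
    latest = revisions[-1]
    scores = []
    for word in latest:
        age = 0
        for rev in reversed(revisions[:-1]):
            if word in rev:          # crude membership test, no real diffing
                age += 1
            else:
                break
        scores.append((word, age))
    return scores

def color_untrusted(scores, min_age=2):
    """Flag words that have not yet survived `min_age` prior revisions."""
    return [(w, "orange" if age < min_age else "plain") for w, age in scores]

revisions = [
    "the cat sat on the mat".split(),
    "the cat sat on the red mat".split(),
    "the cat sat quietly on the red mat".split(),   # latest revision
]
for word, color in color_untrusted(surviving_word_scores(revisions)):
    print(f"{word:10s}{color}")
```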
<urn:uuid:3a39bdbf-34df-48e3-b584-f7e758b27eb3>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/wikipedia-considers-coloring-untested-text/d/d-id/1082714
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00037-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921248
552
2.609375
3
NASA again stepped up its plan to mitigate the asteroid threat to Earth by announcing two significant new programs that call on a multitude of scientists and organizations to help spot, track and possibly alter the direction of killer space rocks. First off, the agency announced the latest in its series of Grand Challenges, where it dares public and private partnerships to come up with a unique solution to a very tough problem, usually with prize money attached for the winner. In the past NASA has sponsored such challenges regarding green aircraft and Mars/Moon rovers. Specifics of this asteroid challenge were spotty, but NASA said it will be a large-scale project "focused on detecting and characterizing asteroids and learning how to deal with potential threats. We will also harness public engagement, open innovation and citizen science to help solve this global problem," according to NASA Deputy Administrator Lori Garver. The challenge will involve a variety of partnerships with other government agencies, international partners, industry, academia, and citizen scientists, NASA said.

In combination with the Grand Challenge, NASA put out a request for information (RFI) that invites industry and potential partners to offer ideas on accomplishing NASA's goal to locate, redirect, and explore an asteroid, as well as find and plan for asteroid threats. The RFI is open for 30 days, and responses will be used to help develop public engagement opportunities and a September industry workshop. The National Aeronautics and Space Administration (NASA) is seeking information for system concepts and innovative approaches for the agency's recently announced Asteroid Initiative. That mission involves redirecting an asteroid and parking it near the moon for study, possibly by 2021, as well as an increased study of how we can better defend against the threat of catastrophic asteroid collisions, NASA said. The RFI is looking for a variety of input, including:

- Asteroid Observation: NASA is interested in concepts for augmenting and accelerating ground and space-based capabilities for detecting all near-Earth asteroids (NEAs) - including those less than 10 meters in size that are in retrievable orbits - determining their orbits, and characterizing their shape, rotation state, mass, and composition as accurately as possible.

- Asteroid Redirection Systems: NASA is interested in concepts for robotic spacecraft systems to enable rendezvous and proximity operations with an asteroid, and redirection of an asteroid of up to 1,000 metric tons into translunar space.
a. Solar electric propulsion system concepts available for launch as early as 2017, but no later than June 2018, that have the following general characteristics: capable of launch on a single Space Launch System (SLS) or preferably a smaller launch vehicle, as part of the complete asteroid redirect vehicle, which includes power generation, propellants, spacecraft bus, and asteroid capture system. Propulsion system power output approximately 40 kW to 50 kW. Deliver thrust required to propel a robotic spacecraft to a target near-Earth asteroid and redirect the captured asteroid to a distant lunar retrograde orbit.

- Integrated sensing systems to support asteroid rendezvous, proximity operations, characterization, and capture. The sensing systems should be capable of characterizing the asteroid's size, shape, mass and inertia properties, spin state, surface properties, and composition.
Some of the same sensors will also be needed in closed-loop control during capture.

- Refinements of the Asteroid Redirect Mission concept, such as removing a piece (boulder) from the surface of a large asteroid and redirecting the piece into translunar space, and other innovative approaches. For a description of early asteroid redirect approaches, see the Keck Institute for Space Studies Asteroid Retrieval Feasibility Study on the references website listed later in this RFI.

- Applications of satellite servicing technology to asteroid rendezvous, capture, and redirection, and opportunities for dual-use technology development are also of interest.

- Asteroid Deflection Demonstration: NASA is interested in concepts for deflecting the trajectory of an asteroid using the robotic Asteroid Redirection Vehicle (ARV) that would be effective against objects large enough to do significant damage at the Earth's surface should they impact (i.e. > 100 meters in size). These demonstrations could include, but are not limited to:
a. Use of the ARV to demonstrate a slow push trajectory modification on a larger asteroid.
b. Use of the ARV to demonstrate a "gravity tractor" technique on an asteroid.
c. Use of ARV instrumentation for investigations useful to planetary defense (e.g. sub-surface penetrating imaging).
d. Use of deployables from the ARV to demonstrate techniques useful to planetary defense (e.g. deployment of a stand-alone transponder for continued tracking of the asteroid over a longer period of time).

- Asteroid Capture Systems: NASA is interested in concepts for systems to capture and de-spin an asteroid with the following characteristics:
a. Asteroid size: 5 m < mean diameter < 13 m; aspect ratio < 2/1.
b. Asteroid mass: up to 1,000 metric tons.
c. Asteroid rotation rate: up to 2 revolutions per minute about any axis or all axes.
d. Asteroid composition, internal structure, and physical integrity will likely be unknown until after rendezvous and capture.

- Crew Systems for Asteroid Exploration: NASA is interested in concepts for lightweight and low-volume robotic and extra-vehicular activity (EVA) systems, such as space suits, tools, translation aids, stowage containers, and other equipment, that will allow astronauts to explore the surface of a captured asteroid, prospect for resources, and collect samples.
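To put the RFI's solar electric propulsion figures in perspective, here is a rough back-of-envelope estimate of what a 40 kW SEP stage could do to a 1,000-metric-ton object. The power level and asteroid mass come from the RFI text above; the thruster efficiency, specific impulse and delta-v target are illustrative assumptions of mine, not NASA figures.

```python
# Sketch only: electric-thruster thrust from power, F = 2 * eta * P / (Isp * g0),
# then the acceleration it produces on a 1,000-metric-ton asteroid.

G0 = 9.81  # m/s^2

def sep_thrust(power_w, isp_s, efficiency):
    """Thrust of an idealized electric thruster."""
    return 2.0 * efficiency * power_w / (isp_s * G0)

power_w     = 40_000      # electrical power devoted to thrust (from the RFI)
isp_s       = 3_000       # assumed specific impulse, typical of Hall/ion thrusters
efficiency  = 0.6         # assumed power-to-jet efficiency
asteroid_kg = 1_000_000   # 1,000 metric tons (upper bound in the RFI)
delta_v     = 10.0        # m/s of velocity change we might want (assumption)

thrust = sep_thrust(power_w, isp_s, efficiency)   # roughly 1.6 N
accel  = thrust / asteroid_kg                     # roughly 1.6e-6 m/s^2
days   = delta_v / accel / 86_400

print(f"thrust ~ {thrust:.2f} N")
print(f"accel  ~ {accel:.2e} m/s^2")
print(f"{delta_v} m/s of delta-v takes ~ {days:.0f} days of continuous thrusting")
```

Even with these generous assumptions, the thrust is on the order of a newton, which is why the RFI talks about slow-push techniques applied over long periods rather than sudden deflections.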
<urn:uuid:a82c383f-5fdb-4ca0-8b0e-0752f78d8636>
CC-MAIN-2017-09
http://www.networkworld.com/article/2224817/security/nasa-issues-grand-challenge--calls-for-public--scientific-help-in-tracking-threatening-aste.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00037-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928028
1,159
3.328125
3
Is a gun better than a knife? I've been trying hard for an analogy, but this one kind of works. Which is better? A gun or a knife? Both will help defend you against an attacker. A gun may be better than a knife if you are under attack from a big group of attackers running at you, but without ammunition, you are left defenseless. The knife works without ammunition and always provides a consistent deterrent, so in some respects, gives better protection than a gun. That's not a bad way to introduce the concept of FIM versus Anti-Virus technology. Anti-Virus technology will automatically eliminate malware from a computer, usually before it has done any damage. Both at the point at which malware is introduced to a computer, through email, download or USB, and at the instant at which a malware file is accessed, the AV will scan for known malware. If identified as a known virus, or even if the file exhibits characteristics that are associated with malware, the infected files can be removed from the computer. However, if the AV system doesn't have a definition for the malware at hand, then like a gun with an empty magazine, it can't do anything to help. File Integrity Monitoring, by contrast, may not be quite so 'active' in wiping out known malware, but - like a knife - it never needs ammo to maintain its role as a defense against malware. A FIM system will always report potentially unsafe filesystem activity, albeit with intelligence and rules to ignore certain activities that are always defined as safe, regular or normal.

AV and FIM versus the Zero Day Threat

The key points to note from the previous description of AV operation are that the virus must either be 'known', i.e. the virus has been identified and categorized by the AV vendor, or that the malware must 'exhibit characteristics associated with malware', i.e. it looks, feels and acts like a virus. Anti-virus technology works on the principle that it has a regularly updated 'signature' or 'definition' list containing details of known malware. Any time a new file is introduced to the computer, the AV system has a look at the file and if it matches anything on its list, the file gets quarantined. In other words, if a brand new, never-been-seen-before virus or Trojan is introduced to your computer, it is far from guaranteed that your AV system will do anything to stop it. Ask yourself - if AV technology was perfect, why would anybody still be concerned about malware? The lifecycle of malware can be anything from 1 day to 2 years. The malware must first be seen - usually a victim will notice symptoms of the infection and investigate before reporting it to their AV vendor. At that point the AV vendor will work out how to counteract the malware in the future, and update their AV system definitions/signature files with details of this new malware strain. Finally, the definition update is made available to the world; individual servers and workstations around the world will update themselves and will thereafter be rendered immune to this virus. Even if this process takes only a day to conclude, that is a pretty good turnaround - after just one day the world is safe from the threat. However, up until this time the malware is a problem. Hence the term 'Zero Day Threat' - the dangerous time is between 'Day Zero' and whichever day the inoculating definition update is provided.
By contrast, a FIM system will detect the unusual filesystem activity - either at the point at which the malware is introduced or when the malware becomes active, creating files or changing server settings to allow it to report back the stolen data. Where is FIM better than AV? As outlined previously, FIM needs no signatures or definitions to try and second guess whether a file is malware or not and it is therefore less fallible than AV. Where FIM provides some distinct advantage over and above AV is in that it offers far better preventative measures than AV. Anti-Virus systems are based on a reactive model, a 'try and stop the threat once the malware has hit the server' approach to defense. An Enterprise FIM system will not only keep watch over the core system and program files of the server, watching for malware introductions, but will also audit all the server's built-in defense mechanisms. The process of hardening a server is still the number one means of providing a secure computing environment and prevention, as we all know, is better than cure. Why try and hope your AV software will identify and quarantine threats when you can render your server fundamentally secure via a hardened configuration? Add to this that Enterprise FIM can be used to harden and protect all components of your IT Estate, including Windows, Linux, Solaris, Oracle, SQL Server, Firewalls, Routers, Workstations, POS systems etc. etc. etc. and you are now looking at an absolutely essential IT Security defense system. This article was never going to be about whether you should implement FIM or AV protection for your systems. Of course, you need both, plus some good firewalling, IDS and IPS defenses, all wrapped up with solid best practices in change and configuration management, all scrutinized for compliance via comprehensive audit trails and procedural guidelines. Unfortunately there is no real 'making do' or cutting corners when it comes to IT Security. Trying to compromise on one component or another is a false economy and every single security standard and best practice guide in the world agrees on this. FIM, AV, auditing and change management should be mandatory components in your security defenses.
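To make the FIM idea concrete, here is a minimal sketch of the core mechanism: hash the files you care about, store a baseline, and report anything added, removed or modified on the next run. The watched paths are examples only, and a real FIM product layers rules, change whitelisting, reporting and protection of the baseline itself on top of this.

```python
# Minimal file integrity monitoring sketch: record SHA-256 hashes of watched
# files, then compare a fresh snapshot against that baseline.

import hashlib
import json
import os

WATCHED_DIRS = ["/usr/bin", "/etc"]      # example paths, adjust to taste
BASELINE_FILE = "fim_baseline.json"

def hash_file(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot():
    state = {}
    for top in WATCHED_DIRS:
        for root, _dirs, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    state[path] = hash_file(path)
                except OSError:
                    pass                 # unreadable or special files are skipped
    return state

def compare(baseline, current):
    added    = [p for p in current if p not in baseline]
    removed  = [p for p in baseline if p not in current]
    modified = [p for p in current if p in baseline and current[p] != baseline[p]]
    return added, removed, modified

if not os.path.exists(BASELINE_FILE):
    with open(BASELINE_FILE, "w") as f:
        json.dump(snapshot(), f)
    print("baseline recorded")
else:
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for label, paths in zip(("ADDED", "REMOVED", "MODIFIED"),
                            compare(baseline, snapshot())):
        for path in paths:
            print(label, path)
```

Run from a scheduler, this needs no signature updates at all, which is exactly the "knife" property described above: it reports any change, known malware or not.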
<urn:uuid:39b1d04d-a97c-468f-9c72-e0ea6d8fd6d9>
CC-MAIN-2017-09
https://www.newnettechnologies.com/is-fim-better-than-av.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00037-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936152
1,158
2.59375
3
As I stated in our article "Scaleable NFS Acceleration with Enterprise Solid State Cache," one of the more popular use cases, especially initially, is using SSS as a cache. This method is a particularly popular way to accelerate NAS-based systems. The challenge is that caches by definition can get refreshed constantly, and that means a lot of write traffic as the cache is updated. The smaller the cache area and the more active the data set is, the more frequent those updates are going to be. A simple solution to the frequent caching update problem is to make the cache so large that it rarely needs to be updated. In other words, a very large portion of the active data set is stored in cache most, if not all, of the time. This not only minimizes the amount of writes to the cache, it also means the caching engine can expend less processing power analyzing which data should be in cache and which should not.

Cache Writes To RAM

Another caching technique that should help is developing a reliable write cache area that is based on DRAM, which does not suffer the same write performance and wear penalties that flash memory does. DRAM is perfect for writes, since in most environments writes are small and transient. A file or block of data is usually very active right after it is initially created, but the size of live, active data is typically microscopic compared to the older, less active data on a storage system. The DRAM area will need to be smaller because of cost and will need to be protected by either batteries or capacitors in case of power loss. Those technologies have matured greatly over the past few years and can provide power to a DRAM area long enough for it to clear its contents to either flash or mechanical-based storage.

In addition to large caching and write caching, storage suppliers should be taking a very hard look at using storage optimization technologies like compression and deduplication in conjunction with flash-based storage. For the solid-state purist, this may sound like sacrilege because of the latency that will be introduced as the data is optimized. Certainly some performance loss will occur, but in almost every case that storage would still perform significantly better than a mechanical hard drive alternative. Most vendors look at storage optimization as a means to reduce SSS cost, and it certainly will, since more data can now be stored in less space. The other important impact of storage optimization is that less data is written to the same space. In the flash world, this means an important reduction in writes. For this to work, though, the storage optimization has to be inline, meaning it has to be done prior to the data being written to the flash storage area. A post-process optimization would actually increase the write problem and potentially shorten the life of the flash area, since data would be written to flash, analyzed, and then re-written in its optimized state.

The technology that suppliers use has a role to play in making sure you get maximum life from your SSS investment. The use of larger and more intelligent caches, leveraging DRAM, and optimizing the flash storage area are good examples. There are two other techniques that may seem more radical but that virtually eliminate the concern--pure solid-state storage using only DRAM or flash-based storage. These approaches require an entry of their own, and we will discuss them in our next entry.
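As a rough illustration of why a DRAM write cache cuts down on flash wear, consider this toy model: writes land in a small DRAM buffer and only reach flash when a block is evicted, so repeated writes to a hot block coalesce into at most one flash write. The sizes and the simple LRU eviction policy are illustrative assumptions, not a description of any vendor's implementation.

```python
# Toy write-back cache: DRAM absorbs and coalesces writes; flash is only
# touched when a block is evicted from the DRAM buffer.

from collections import OrderedDict

class WriteBackCache:
    def __init__(self, dram_blocks):
        self.dram = OrderedDict()          # block_id -> data, in LRU order
        self.capacity = dram_blocks
        self.flash_writes = 0

    def write(self, block_id, data):
        if block_id in self.dram:
            self.dram.move_to_end(block_id)        # overwrite in DRAM, no flash I/O
        elif len(self.dram) >= self.capacity:
            victim, victim_data = self.dram.popitem(last=False)
            self.flush_to_flash(victim, victim_data)
        self.dram[block_id] = data

    def flush_to_flash(self, block_id, data):
        self.flash_writes += 1                      # stand-in for a real flash write

cache = WriteBackCache(dram_blocks=128)
for i in range(10_000):
    cache.write(i % 64, b"hot data")    # a small, hot working set fits in DRAM
print("flash writes:", cache.flash_writes)          # prints 0: everything coalesced
```

The same intuition applies to the "make the cache huge" approach: once the active working set fits, refresh traffic (and therefore flash wear) drops off sharply.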
<urn:uuid:00d08685-fe8c-407b-a5bf-0a291a7ab9bb>
CC-MAIN-2017-09
http://www.networkcomputing.com/storage/technology-can-extend-solid-state-storages-life/157367332?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00209-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959234
686
2.640625
3
quote: The experiment is called the Neutrino Factory, scheduled for construction some time from 2015. Its primary aim is to fire beams of neutrinos (fundamental particles) through the Earth's interior to detector stations on different continents. They're doing this to measure whether they change type en route (there are 3 types of neutrino) and data from this in turn will allow them to determine the neutrino's mass more accurately.

quote: You are simulating the part of the process where the proton beam hits the target rod and causes pions to be emitted, which decay into muons. These would then proceed to a storage ring and decay into electrons and the neutrinos that are used for experiments. This is a fairly critical part of the apparatus, which catches the pions and confines some of them into a beam while they decay. The efficiency of this dictates that of the entire machine because it is built of a lot of acceleration stages 'in series' with each other. Whether the project eventually gets funded to be built depends on what levels of performance can be achieved with the designs generated during the present R&D. However, users of this program have already doubled the estimated efficiency of one stage and more are to be optimised in the future.

The TAM page of the Foodcourt can be found here
Ars Team Stats can be found at
I haven't been paying attention to this project much lately, just sorta set it and forgot it. I am posting this now to get it going and will check into everything soon and make sure the Foodcourt page is updated and has the correct information. Currently Team Atomic Milkshake is in 11th place and only has a few active users. This is a fairly small project and it won't take much for anyone to move up the ranks pretty fast.

Previous Perpetual thread can be read here
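For anyone curious what the quoted simulation is actually estimating, here is a toy Monte Carlo, nothing like the project's real code, that guesses how many pions of a given momentum decay into muons before the end of a capture channel. The momentum and channel length are made-up illustration values.

```python
# Toy estimate of the pion-decay step: the mean lab-frame decay length is
# beta*gamma*c*tau, and decay positions are drawn from an exponential.
# The real simulation models magnetic fields, apertures and much more.

import random

C_LIGHT   = 2.998e8    # m/s
PION_MASS = 139.57     # MeV/c^2
PION_TAU  = 2.6e-8     # s, charged pion lifetime at rest

def decayed_fraction(momentum_mev, channel_m, n=100_000):
    beta_gamma   = momentum_mev / PION_MASS           # since p = beta*gamma*m*c
    decay_length = beta_gamma * C_LIGHT * PION_TAU    # mean decay length in metres
    decays = sum(1 for _ in range(n)
                 if random.expovariate(1.0 / decay_length) < channel_m)
    return decays / n

print("fraction decayed to muons:",
      decayed_fraction(momentum_mev=200, channel_m=100))
```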
<urn:uuid:ce926e91-0b6c-43d8-90ff-eb62d9b7c496>
CC-MAIN-2017-09
https://arstechnica.com/civis/viewtopic.php?p=1042222
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00613-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953155
384
2.890625
3
Weak Passwords Pervasive, Despite Security Risks

Data from a breach affecting 32 million online accounts reveals the persistent popularity of weak passwords, despite obvious risks. Five years ago, Microsoft Chairman Bill Gates predicted the end of passwords because they failed to keep information secure. The real problem turns out to be people, who just can't pick passwords that offer enough protection. This point has been hammered home in a study of some 32 million passwords that were posted on the Internet after a hacker obtained them from social entertainment site RockYou last year. In a report released on Thursday, Imperva, a security firm, analyzed the strength of the passwords people used and found that the frequent choice of short, simple passwords almost guarantees the success of brute force password attacks. A brute force attack involves automated password guessing, using a dictionary or set of common passwords. According to the report, "the combination of poor passwords and automated attacks means that in just 110 attempts, a hacker will typically gain access to one new account every second or a mere 17 minutes to break into 1000 accounts." The report reveals that 50% of users rely on slang words, dictionary words, or common arrangements of numbers and letters, like "qwerty," for their passwords. Among users of RockYou, the most common password was "123456." Sadly, this isn't a new problem. Previous password studies, using far smaller data sets, have shown similar findings. Imperva's CTO Amichai Shulman observes that a 1990 Unix password study reveals the same password selection problems. A recent review of Hotmail passwords exposed in a breach also showed that "123456" is the most common password. Even though "123456" occurred only 64 times out of 10,000 passwords, that suggests that a brute force attacker could compromise one account per 157 attacked using a dictionary with only a single entry. Jon Brody, VP at TriCipher, another security vendor, confirms that this isn't a new problem. He puts part of the blame on technology innovators for not recognizing that password policies are doomed to fail if they go against human nature. That is to say, forcing people to change their passwords every month will force them to choose weak passwords every month because that's what they can remember. Brody argues that technology companies need to create security systems that take real world behavior into account.
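The arithmetic behind these claims is easy to reproduce. The sketch below uses an illustrative frequency table loosely in line with the RockYou and Hotmail figures quoted above (not the actual breach data) to estimate how many accounts fall to a tiny guessing dictionary.

```python
# Illustrative password shares; e.g. roughly 0.9% of RockYou accounts used
# "123456". These numbers are approximations for demonstration only.
top_passwords = [
    ("123456",    0.0090),
    ("12345",     0.0025),
    ("123456789", 0.0024),
    ("password",  0.0020),
    ("iloveyou",  0.0017),
    ("princess",  0.0010),
    ("qwerty",    0.0009),
    ("abc123",    0.0008),
]

def expected_hits(accounts, guesses_per_account):
    """Expected accounts cracked when an attacker tries the most common
    `guesses_per_account` passwords against every account."""
    coverage = sum(share for _, share in top_passwords[:guesses_per_account])
    return accounts * coverage

accounts = 1000
for k in (1, 3, len(top_passwords)):
    print(f"top {k} guesses per account -> ~{expected_hits(accounts, k):.0f} of {accounts} accounts")
```

With a single guess per account, roughly one account in every 110 to 160 falls, which is where figures like "one account per 157 attacked" come from; widening the dictionary to a few thousand entries is what lets attackers into whole percentages of a user base.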
<urn:uuid:660f41de-1651-486e-bd7a-9bc98e149d5a>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/weak-passwords-pervasive-despite-security-risks/d/d-id/1086360
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00081-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936276
484
2.578125
3
OpenCL (the Open Computing Language) is under development by the Khronos Group as an open, royalty-free standard for parallel programming of heterogeneous systems. It provides a common hardware abstraction layer to expose the computational capabilities of systems that include a diverse mix of multicore CPUs, GPUs and other parallel processors such as DSPs and the Cell, for use in accelerating a variety of compute-intensive applications. The intent of the OpenCL initiative is to provide a common foundational layer for other technologies to build upon. The OpenCL standard will also have the effect of coordinating the basic capabilities of target processors. In particular, in order to be conformant with OpenCL, processors will have to meet minimum capability, resource and precision requirements. This article reviews the organizations and process behind the OpenCL standard proposal, gives a brief overview of the nature of the proposal itself, and then discusses the implications of OpenCL for the high-performance software development community. The Khronos organization supports the collaborative development and maintenance of several royalty-free open standards, including OpenGL, OpenGL ES, COLLADA, and OpenMAX. OpenCL is not yet ratified, but the member companies involved have already arrived at a draft specification of version 1.0, which is currently under review. The OpenCL effort was initiated by Apple, and the development of the draft specification has included the active involvement of AMD, ARM, Barco, Codeplay, Electronic Arts, Ericsson, Freescale, Imagination Technologies, IBM, Intel, Motorola, Movidia, Nokia, NVIDIA, RapidMind, and Texas Instruments. The OpenCL specification consists of three main components: a platform API, a language for specifying computational kernels, and a runtime API. The platform API allows a developer to query a given OpenCL implementation to determine the capabilities of the devices that particular implementation supports. Once a device has been selected and a context created, the runtime API can be used to queue and manage computational and memory operations for that device. OpenCL manages and coordinates such operations using an asynchronous command queue. OpenCL command queues can include computational kernels as well as memory transfer and map/unmap operations. Asynchronous memory operations are included in order to efficiently support the separate address spaces and DMA engines used by many accelerators. The parallel execution model of OpenCL is based on the execution of an array of functions over an abstract index space. The abstract index spaces driving parallel execution consists of n-tuples of integers with each element starting at 0. For instance, 16 parallel units of work could be associated with an index space from 0 to 15. Alternatively, using 2-tuples, those 16 units of work could be associated with (0,0) to (3,3). Three-dimensional index spaces are also supported. Computational kernels invoked over these index spaces are based on functions drawn from programs specified in OpenCL C. OpenCL C is a subset of C99 with extensions for parallelism. These extensions include support for vector types, images and built-in functions to read and write images, and memory hierarchy qualifiers for local, global, constant, and private memory spaces. 
The OpenCL C language also currently includes some restrictions relative to C99, particularly with regard to dynamic memory allocation, function pointers, writes to byte addresses, irreducible control flow, and recursion. Programs written in OpenCL C can either be compiled at runtime or in advance. However, OpenCL C programs compiled in advance may only work on specific hardware devices. Each instance of a kernel is able to query its index, and then do different work and access different data based on that index. The index space defines the "parallel shape" of the work, but it is up to the kernel to decide how the abstract index will translate into data access and computation. For example, to add two arrays and place the sum in a third output array, a kernel might access its global index, from this index compute an address in each of two input arrays, read from these arrays, perform the addition, compute the address of its result in an output array, and write the result. A hierarchical memory model is also supported. In this model, the index space is divided into work groups. Each work-item in a work-group, in addition to accessing its own private memory, can share a local memory during the execution of the work-group. This can be used to support one additional level of hierarchical data parallelism, which is useful to capture data locality in applications such as video/image compression and matrix multiplication. However, different work-groups cannot communicate or synchronize with one another, although work items within a work-group can synchronize using barriers and communicate using local memory (if supported on a particular device). There is an extension for atomic memory operations but it is optional (for now). OpenCL uses a relaxed memory consistency model where the local view of memory from each kernel is only guaranteed to be consistent after specified synchronization points. Synchronization points include barriers within kernels (which can only be used to synchronize the view of local memory between elements of a work-group), and queue "events." Event dependencies can be used to synchronize commands on the work queue. Dependencies between commands come in two forms: implicit and explicit. Command queues in OpenCL can run in two modes: in-order and out-of-order. In an in-order queue, commands are implicitly ordered by their position in the queue, and the result of execution must be consistent with this order. In the out-of-order mode, OpenCL is free to run some of the commands in the queue in parallel. However, the order can be constrained explicitly by specifying event lists for each command when it is enqueued. This will cause some commands to wait until the specified events have completed. Events can be based on the completion of memory transfer operations and explicit barriers as well as kernel invocations. All commands return an event handle which can be added to a list of dependencies for commands enqueued later. In addition to encouraging standardization between the basic capabilities of different high-performance processors, OpenCL will have a few other interesting effects. One of these will be to open up the embedded and handheld spaces to accelerated computing. OpenCL supports an embedded profile that differs primarily from the full OpenCL profile in resource limits and precision requirements.
This means that it will be possible to use OpenCL to access the computational power of embedded multicore processors, including embedded GPUs, in mobile phones and set-top boxes in order to enable high-performance imaging, vision, game physics, and other applications. Applications, libraries, middleware and high-level languages based on OpenCL will be able to access the computational power of these devices. In summary, OpenCL is an open, royalty-free standard that will enable portable, parallel programming of heterogeneous CPUs, GPUs and other processors. OpenCL is designed as a foundational layer for low-level access to hardware and also establishes a level of consistency between high-performance processors. This will give high-performance application and library writers, as well as high-level language, platform, and middleware developers, the ability to focus on higher-level concerns rather than dealing with variant semantics and syntax for the same concepts from different vendors. OpenCL will allow library, application and middleware developers to focus their efforts on providing greater functionality, rather than redeveloping code or lower-level interfaces to each new processor and accelerator.
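To make the pieces above concrete, here is a minimal version of the array-addition kernel described earlier, shown through the pyopencl bindings so the host code stays in Python; the string handed to the program object is OpenCL C. This is an illustrative sketch rather than code from the specification or any particular conformant implementation, and it assumes an OpenCL driver and the pyopencl package are installed.

```python
import numpy as np
import pyopencl as cl

# The computational kernel: each work-item reads its 1-D global index and
# handles one element, exactly as described in the text above.
kernel_src = """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx   = cl.create_some_context()     # platform/device selection
queue = cl.CommandQueue(ctx)         # the (in-order) command queue
mf    = cl.mem_flags

a_buf   = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf   = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, kernel_src).build()        # runtime compilation of OpenCL C
program.add(queue, a.shape, None, a_buf, b_buf, out_buf)   # enqueue over a 1024-wide index space

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)                 # asynchronous copy back, then verify
assert np.allclose(out, a + b)
```

The same kernel source would run unchanged on a CPU, GPU or embedded device that exposes an OpenCL implementation, which is precisely the portability argument the article is making.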
<urn:uuid:a83eea1b-3dac-4181-b0eb-d1c27f60252a>
CC-MAIN-2017-09
https://www.hpcwire.com/2008/11/21/opencl_update/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00433-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922527
1,527
2.609375
3
Spring 4.2 and the Web (TT3364)

In this course, you will learn how to use Spring in conjunction with the various technologies used in and supporting rich web interfaces. This course covers a wide spectrum of topics and assumes a basic understanding of those technologies and resources prior to taking this class. The Spring framework is an application framework that provides a lightweight container that supports the creation of simple-to-complex components in a non-invasive fashion. Spring's flexibility and transparency are congruent with and supportive of incremental development and testing. The framework's structure supports the layering of functionality such as persistence, transactions, view-oriented frameworks, and enterprise systems and capabilities. Spring's Aspect-Oriented Programming (AOP) framework enables developers to declaratively apply common features and capabilities across data types in a transparent fashion. As an enabler for rich web interfaces, the Spring framework represents a significant step forward. If you want to deliver a web application within the Spring framework, you'll find this course essential.

Note: As a programming class, this course provides multiple challenging labs for you to work through during the class. This workshop is about 50 percent hands-on lab and 50 percent lecture. Throughout the course, you will be led through a series of progressively advanced topics, where each topic consists of lecture, group discussion, comprehensive hands-on lab exercises, and lab review. Multiple detailed lab exercises are laced throughout the course, designed to reinforce fundamental skills and concepts learned in the lessons. At the end of each lesson, developers will be tested with a set of review questions to ensure that they fully understand that topic.

This course is intended for developers who need to understand how and when to use Spring applications with the web.
<urn:uuid:d778793a-8efe-47d8-9d04-b568358acfd0>
CC-MAIN-2017-09
https://www.globalknowledge.com/ca-en/course/121334/spring-42-and-the-web-tt3364/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00309-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913596
357
2.765625
3
As more and more of our daily life happens online, the issue of online privacy should be of prime importance to each of us. Unfortunately, it’s not. Most users are not worried enough to scour the Internet for information about the latest privacy-killing features pushed out by social networks, online services and app makers, and even those who are often find it difficult and too time-consuming to keep abreast of the changes. What we need is a way of getting all the relevant privacy information in a timely, applicable and focused fashion. Arvind Narayanan, Assistant Professor of Computer Science at Princeton, proposes a “privacy alert” system that would know the users’ usual privacy choices and notify them of appropriate measures they should take to tackle potential privacy pitfalls. In his mind, the system should consist of two modules. “The first is a privacy ‘vulnerability tracker’ similar to well-established security vulnerability trackers. Each privacy threat is tagged with severity, products or demographics affected, and includes a list of steps users can take,” he explained in a blog post. “The second component is a user-facing privacy tool that knows the user’s product choices, overall privacy preferences, etc., and uses this to filter the vulnerability database and generate alerts tailored to the user.” This would allow users to keep on top of things, but also prevent them from being overwhelmed with unnecessary and impractical information. “The ideas in this post aren’t fundamentally new, but by describing how the tool could work I hope to encourage people to work on it,” he admitted, and offered to collaborate with someone who is interested in creating it. He also mentioned a few additional “bells and whistles” such a tool might incorporate, such as the possibility of crowdsourcing relevant information, an open API, and the option of connecting the tool to the users’ browsing history and other personal information. This last option, he says, would work only if users trust the creator of the tool.
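A hedged sketch of how the two proposed modules could fit together: a feed of tagged privacy "vulnerabilities" and a filter that matches them against a user's products and preferences. Every entry, field name and threshold below is invented purely for illustration; the proposal does not prescribe a data format.

```python
# Module 1 (stand-in): the tracker feed, each entry tagged with severity,
# affected products and remediation steps, as the post suggests.
VULN_FEED = [
    {"id": 1, "title": "Social network X made friend lists public by default",
     "severity": 4, "products": {"socialnetx"},
     "steps": ["Set friend list visibility to 'Only me'"]},
    {"id": 2, "title": "Photo app Y uploads location metadata",
     "severity": 3, "products": {"photoy"},
     "steps": ["Disable location access for the app"]},
    {"id": 3, "title": "Browser Z sync stores history unencrypted",
     "severity": 2, "products": {"browserz"},
     "steps": ["Turn on a sync passphrase"]},
]

# Module 2 (stand-in): the user-facing tool knows what the user runs and how
# sensitive they are, and filters the feed into tailored alerts.
USER_PROFILE = {
    "products": {"socialnetx", "browserz"},
    "min_severity": 3,
}

def alerts_for(profile, feed):
    for vuln in feed:
        if vuln["products"] & profile["products"] and vuln["severity"] >= profile["min_severity"]:
            yield vuln

for vuln in alerts_for(USER_PROFILE, VULN_FEED):
    print(f"[severity {vuln['severity']}] {vuln['title']}")
    for step in vuln["steps"]:
        print("  ->", step)
```

In this toy run only the first entry is surfaced: the second concerns a product the user does not have, and the third falls below the user's severity threshold, which is the "timely, applicable and focused" filtering the post argues for.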
<urn:uuid:daaeb9e8-52f7-4821-acac-173bf4b97255>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2014/04/23/researcher-proposes-alert-tool-for-managing-online-privacy-risks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00078-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943702
429
2.625
3
It should be abundantly clear to anyone who follows technology that Linux is a worldwide success story and has been for a long time. The open source operating system kernel originally developed by Linus Torvalds while he studied at the University of Helsinki, turned 25 earlier this fall. As of now, Linux is used as a basis for countless new products coming to market every day. Before we take a look at some of those, though, let’s go through some of the background to Linux’s success. 90s computers came with terrible operating systems When Torvalds started hacking away at his pet project, the world was starved for a cheap option to get modern, stable operating systems onto PC computers. Operating systems are the software that make a computer’s hardware work together and interact with the user and the software they need. The mainstream commercial operating systems for personal computers in the early nineties were straight out of clown college: MS-DOS was absurdly bereft of features and Windows, gaining popularity by then, was only a plastered-on windowing system. Apple’s Mac computers of the day ran a similarly rudimentary operating system that may have appeared more elegant on the surface. But classic MacOS too, lacked many of the features we expect computers to do every day now: like the ability to reliably pretend like they’re doing many things at once, preemptive multitasking. Alternatives to PC and Mac operating systems were commercial Unix variants for PCs that were prohibitively expensive. In the early nineties, IBM and Microsoft had just teamed up to build OS/2, in what through its own drama spun off to be Microsoft’s NT. This more modern operating system, is the foundation for all Microsoft systems today. But as Linus started playing around with a 386, all that was far off in the future. So it wasn’t surprising that the world of computer enthusiasts in the early nineties released a lot of bottled-up enthusiasm about an operating system with all the promise to make their 386-class PCs more useful. The holy grail for enthusiasts was to mimic the capabilities of powerful, networked computers seen at universities and businesses and Linux delivered. The early days of Linux’s success story were overshadowed by the public’s quite correct perception that Torvalds’ creation couldn’t take on Microsoft’s near-monopoly on the desktop OS market. But Linux’s real success story is one built up over time. Linux is now the defacto, go-to way in which new systems are being built. Wondering what we’re on about? Well, let’s take a look at where you can find Linux. 1. Almost every smartphone except iPhone Android, used on billions of smartphones, is built around the Linux kernel. Linux for server use and similar, is normally is perceived as distributions like Debian, Ubuntu or Red Hat/Fedora that bundles a similar set of tools beginning from GNU command line utilities/compilers and central pieces like glibc and the Xorg graphical system. Android uses different components to be suited for smartphone use, helping to make the touch screen the most used computer type ever. Among other things, Android apps are mostly Java code run in a special virtual machine. But this sort of modularity is indeed where Linux differs from most operating systems: it really is to be thought of as just a kernel. How you build the rest of a Linux product is just like, your opinion, man. 2. 
Wireless routers and other network equipment In the early 00s, components like router boards started getting powerful enough to run general operating systems, rather than tiny, very specialized Real Time Operating Systems (RTOS). Linux's convenience and free availability made it a good choice for inexpensive network gear. In 2005 Linksys, the manufacturer of the iconic WRT54GL routers, inadvertently made the Linux router a commodity by initially failing to comply with the Linux GNU General Public License. The GPL requires modified versions of Linux to be made available to the general public, and after being told to hand over the source in a bit of a media scandal, Linksys did. That Linksys code is now the precursor to a lot of open source router firmware projects, like OpenWRT. Despite its age and relative slowness, Linksys still sells the WRT54GL, which enjoys significant demand thanks to its tinker-friendliness. By now, a lot of the heavy-duty routers and switches powering big and small networks in businesses are starting to be based on Linux, too. 3. The first generation of successful, mass-scale internet companies Companies like Google, Amazon and Yahoo are well known for their use of Linux and other open source operating systems, like FreeBSD, to get started building server infrastructure easily. The operating system is now something you just download. But as we hinted at before, proper ones used to cost you an arm and a leg, not counting application software. It bears repeating: it wasn't clear back then, but Linux played a big role in making the operating system a commodity. Linux isn't alone in this category, even today. WhatsApp's messaging system is built on FreeBSD, and so are Netflix's ingenious appliances, shipped to ISPs all over the world to serve close to 100 gigabits worth of video per second! 4. A vast majority of websites, large and small Companies of all sizes on the internet realized the value in using free operating systems to conduct business. Many realized that individuals and companies need websites and need to run them without paying lots of money for hardware, expensive high-end internet connections, and operational costs. So, the web hosting industry and 'shared hosting' were born, around Linux and other open source software, like the web server Apache. Ever since, Linux has become sort of a default platform on top of which tools are developed for making web sites of many kinds, including this one. In other words, when you surf the "information superhighway", whether it be to buy tickets to a concert or to waste time on Facebook, you're using Linux most of the time. 5. The most expensive machines Linux's world domination didn't start out small: servers running websites are usually relatively beefy computers with a lot of resources that need to be shared efficiently. Remember reading how really small computers, like routers, are using Linux too? Well, that makes Linux scalable. From tiny router boards, Linux scales up to the world's very fastest computers, HPC (high-performance computing), or supercomputers with tons and tons of CPUs and RAM. Ever since the mid-00s, Linux has been pretty much the default on big computers used for scientific calculation and modelling in academia and other research fields. Linux is even sometimes used on mainframe computers, big commercial machines that are built to run critical applications like financial systems very reliably without hiccups. Do you use these machines?
Well, indirectly: for example, the weather is being prognosticated on supercomputers and your financial information is being run through heavy-duty machines. 6. "Cloud computing", the idea that makes mobile apps run smoothly and cheaply In the mid-00s, building complicated or big online services mostly required that everyone buy or rent their own physical servers, often with more capacity than needed, for hundreds or thousands of dollars a month. Granted, this was cheaper than computer operations ever before, but still, infrastructure could become a great capital expenditure. Amazon.com, the bookstore, which by the mid-00s was transforming into an everything store, realized that they could sell unused compute resources in their data centers in the form of virtual servers, isolated copies of several operating systems running on one machine. Furthermore, Amazon included tools for developers to buy these virtual servers on demand. Suddenly, the equipment that lets large-scale apps and services crop up became pay-as-you-go infrastructure. Further useful services like mass storage and a CDN made Amazon Web Services instrumental in making mobile apps and web services boom, creating only operational expenses in a way that was never possible during the first dotcom bubble. Unsurprisingly, by this point, Amazon used the open source Xen virtualization hypervisor, or "virtualization engine", on top of Linux, to slice out the virtual hardware they rented. 7. Integrated systems, the "invisible" computers all around us As hinted at with the earlier router example, Linux early on became a great operating system for devices you don't see or think about as computers, or to be more precise, the field of "integrated systems". You can find Linux on everything from computer kiosks, ATMs, signage, in-flight entertainment systems, smart TVs and computer monitors, you name it. Many cars run Linux on some of their many different subsystems, Tesla being a famous example. The same goes for the new generation of "internet of things", industrial and home devices that are gifted smarts by being connected and containing tiny computers. There are still use cases where Linux isn't the right choice, and smaller Real Time Operating Systems like VxWorks and QNX are used. But largely, Linux is where one would start to look and poke around for building a large number of products. 8. Most home computers that aren't on Windows run Linux, or share some Linux DNA Home PCs are the holdouts for mass adoption of Linux, and that kind of makes them an exception by now. Still, as we said, Android, "the new Windows", is Linux based. So is ChromeOS, the limited and well-secured Google operating system inside typically cheap Chromebook laptops. Likewise, fun, small hobbyist and prototyping computer boards, like the Raspberry Pi, are designed to run Linux by default. Furthermore, Unix, the family of operating systems Linux belongs to, originating from academic, big-iron computers, is well represented elsewhere, where there aren't copies of Windows. Apple's main operating systems, iOS and MacOS, are designed and built as Unix-like systems from the start. Apple uses a kernel known as XNU, surrounding it with tools largely adopted from FreeBSD. Sony's PlayStation consoles, generations three and four, are also based on FreeBSD. So, there you have it: a summary of how Linux (and open source systems very much like it) are taking the world by storm, or rather fortifying the empire that is free, open source software.
All these use cases of Linux and open source operating systems at large have the further benefit of building a foothold for all kinds of other open source applications in businesses, from web-oriented servers and programming languages to ready-made web apps, databases and much more. This means that the components for building the next great thing for taking over the world are standardized and available to everyone. When you think about it, it's a kind of magic.
<urn:uuid:f589bdf7-e6e2-456b-b8da-10bee83c6cde>
CC-MAIN-2017-09
https://www.miradore.com/you-use-linux-every-day/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00078-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94392
2,383
2.640625
3