Anatomy of a cloud storage infrastructure
Models, features, and internals

At the rate data is growing today, it's not surprising that cloud storage is growing in popularity along with it. The fastest-growing category is archive data, which is ideal for cloud storage given a number of factors, including cost, frequency of access, protection, and availability. But not all cloud storage is the same. One provider may focus primarily on cost, while another focuses on availability or performance. No architecture has a singular focus, but the degree to which an architecture implements a given characteristic defines its market and appropriate use models.

It's difficult to talk about architectures without the perspective of utility, by which I mean measuring an architecture against a variety of characteristics, including cost, performance, remote access, and so on. Therefore, I first define a set of criteria by which cloud storage models are measured and then explore some of the interesting implementations within cloud storage architectures.

First, let's discuss a general cloud storage architecture to set the context for the later exploration of unique architectural features. Cloud storage architectures are primarily about delivering storage on demand in a highly scalable and multi-tenant way. Generically (see Figure 1), a cloud storage architecture consists of a front end that exports an API to access the storage. In traditional storage systems, this API is the SCSI protocol; in the cloud, these protocols are evolving. There, you can find Web service front ends, file-based front ends, and even more traditional front ends (such as Internet SCSI, or iSCSI). Behind the front end is a layer of middleware that I call the storage logic. This layer implements a variety of features, such as replication and data reduction, over the traditional data-placement algorithms (with consideration for geographic placement). Finally, the back end implements the physical storage for data. This may be an internal protocol that implements specific features or a traditional back end to the physical disks.

Figure 1. Generic cloud storage architecture

From Figure 1, you can see some of the characteristics of current cloud storage architectures. Note that no characteristic is exclusive to a particular layer; the layers simply serve as a guide to the specific topics this article addresses. These characteristics are defined in Table 1.

Table 1. Cloud storage characteristics

| Characteristic | Description |
| --- | --- |
| Manageability | The ability to manage a system with minimal resources |
| Access method | Protocol through which cloud storage is exposed |
| Performance | Performance as measured by bandwidth and latency |
| Multi-tenancy | Support for multiple users (or tenants) |
| Scalability | Ability to scale to meet higher demands or load in a graceful manner |
| Data availability | Measure of a system's uptime |
| Control | Ability to control a system—in particular, to configure for cost, performance, or other characteristics |
| Storage efficiency | Measure of how efficiently the raw storage is used |
| Cost | Measure of the cost of the storage (commonly in dollars per gigabyte) |

One key focus of cloud storage is cost. If a client can buy and manage storage locally for less than it costs to lease it in the cloud, the cloud storage market disappears. Cost can be divided into two high-level categories: the cost of the physical storage ecosystem itself and the cost of managing it. The management cost is hidden but represents a long-term component of the overall cost.
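To make the front end / storage logic / back end layering concrete, here is a minimal Python sketch of the three layers just described. The class names, the two-site list, and the replication factor are invented for illustration; this is a conceptual model of the generic architecture, not any particular provider's implementation.

# Illustrative sketch of the generic three-layer cloud storage architecture:
# a front end that exports an API, storage logic in the middle, and a back end
# that owns physical placement. All names here are hypothetical.

class BackEnd:
    """Stands in for the physical storage (disks, object stores, and so on)."""
    def __init__(self):
        self.blocks = {}  # site -> {object key: bytes}

    def write(self, site, key, data):
        self.blocks.setdefault(site, {})[key] = data

    def read(self, site, key):
        return self.blocks[site][key]


class StorageLogic:
    """Middleware layer: replication and (geographic) placement decisions."""
    def __init__(self, backend, sites=("site-a", "site-b"), replicas=2):
        self.backend, self.sites, self.replicas = backend, sites, replicas

    def put(self, key, data):
        # Replicate to the first N sites; a real system would also apply
        # data reduction and smarter geographic placement here.
        for site in self.sites[:self.replicas]:
            self.backend.write(site, key, data)

    def get(self, key):
        return self.backend.read(self.sites[0], key)


class FrontEnd:
    """Exports the access API (REST, file, or block in a real system)."""
    def __init__(self, logic):
        self.logic = logic

    def put_object(self, key, data):
        self.logic.put(key, data)

    def get_object(self, key):
        return self.logic.get(key)


api = FrontEnd(StorageLogic(BackEnd()))
api.put_object("report.txt", b"hello cloud")
print(api.get_object("report.txt"))  # b'hello cloud'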
For this reason, cloud storage must be self-managing to a large extent. The ability to introduce new storage where the system automatically self-configures to accommodate it, and the ability to find and self-heal errors, are critical. Concepts such as autonomic computing will have a key role in cloud storage architectures in the future.

One of the most striking differences between cloud storage and traditional storage is the means by which it's accessed (see Figure 2). Most providers implement multiple access methods, but Web service APIs are common. Many of these APIs are implemented on REST principles, which imply an object-based scheme developed on top of HTTP (using HTTP as a transport). REST APIs are stateless and therefore simple and efficient to provide. Many cloud storage providers implement REST APIs, including Amazon Simple Storage Service (Amazon S3), Windows Azure™, and Mezeo Cloud Storage Platform.

One problem with Web service APIs is that they require integration with an application to take advantage of the cloud storage. Therefore, common access methods are also used with cloud storage to provide immediate integration. For example, file-based protocols such as NFS/Common Internet File System (CIFS) or FTP are used, as are block-based protocols such as iSCSI. Cloud storage providers such as Six Degrees, Zetta, and Cleversafe provide these access methods.

Although the protocols mentioned above are the most common, other protocols are suitable for cloud storage. One of the most interesting is Web-based Distributed Authoring and Versioning (WebDAV). WebDAV is also based on HTTP and enables the Web as a readable and writable resource. Providers of WebDAV include Zetta and Cleversafe, in addition to others.

Figure 2. Cloud storage access methods

You can also find solutions that support multi-protocol access. For example, IBM® Smart Business Storage Cloud enables both file-based (NFS and CIFS) and SAN-based protocols from the same storage-virtualization infrastructure.

There are many aspects to performance, but the ability to move data between a user and a remote cloud storage provider represents the largest challenge to cloud storage. The problem lies with TCP, the workhorse of the Internet. TCP controls the flow of data based on packet acknowledgements from the peer endpoint. Packet loss, or late arrival, triggers congestion control, which further limits performance to avoid more global networking issues. TCP is ideal for moving small amounts of data through the global Internet but becomes less suitable for larger data movement as round-trip time (RTT) increases.

Amazon, through Aspera Software, solves this problem by removing TCP from the equation. A new protocol called the Fast and Secure Protocol (FASP™) was developed to accelerate bulk data movement in the face of large RTT and severe packet loss. The key is the use of UDP, the partner transport protocol to TCP. UDP permits the host to manage congestion, pushing this aspect into the application-layer protocol of FASP (see Figure 3).

Figure 3. The Fast and Secure Protocol from Aspera Software

Using standard (non-accelerated) NICs, FASP efficiently uses the bandwidth available to the application and removes the fundamental bottlenecks of conventional bulk data-transfer schemes. The Related topics section provides some interesting statistics on FASP performance over traditional WANs, intercontinental transfers, and lossy satellite links.
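To see why RTT rather than raw link speed often becomes the limiting factor, consider the classic rule of thumb that TCP can deliver at most about one receive window per round trip. The short Python sketch below works through that arithmetic; the 64 KB window and the RTT values are illustrative assumptions, not measurements from Aspera.

# Rough TCP throughput ceiling: about one receive window per round trip,
# i.e. throughput <= window_size / RTT (ignoring loss, slow start, and
# window scaling).
def tcp_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

for rtt_ms in (10, 100, 300):
    mbps = tcp_throughput_mbps(64 * 1024, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {mbps:6.2f} Mbit/s")

# Output: roughly 52 Mbit/s at 10 ms, 5.2 Mbit/s at 100 ms, and 1.7 Mbit/s at
# 300 ms, regardless of how fast the underlying link is. Widening the window
# or moving congestion control into the application (as FASP does over UDP)
# are two common ways around this ceiling.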
One key characteristic of cloud storage architectures is multi-tenancy. This simply means that the storage is used by many users (or multiple "tenants"). Multi-tenancy applies to many layers of the cloud storage stack, from the application layer, where the storage namespace is segregated among users, to the storage layer, where physical storage can be segregated for particular users or classes of users. Multi-tenancy even applies to the networking infrastructure that connects users to storage, permitting quality of service and the carving out of bandwidth for a particular user.

You can look at scalability in a number of ways, but it is the on-demand view of cloud storage that makes it most appealing. The ability to scale storage needs (both up and down) means lower cost for the user and increased complexity for the cloud storage provider. Scalability must be provided not only for the storage itself (functionality scaling) but also for the bandwidth to the storage (load scaling). Another key feature of cloud storage is geographic distribution of data (geographic scalability), allowing the data to sit nearest its users across a set of cloud storage data centers (via migration). For read-only data, replication and distribution are also possible (as is done using content delivery networks). This is shown in Figure 4.

Figure 4. Scalability of cloud storage

Internally, a cloud storage infrastructure must be able to scale. Servers and storage must be capable of resizing without impact to users. As discussed earlier under manageability, autonomic computing is a requirement for cloud storage architectures.

Once a cloud storage provider has a user's data, it must be able to provide that data back to the user upon request. Given network outages, user errors, and other circumstances, this can be difficult to provide in a reliable and deterministic way. There are some interesting and novel schemes to address availability, such as information dispersal. Cleversafe, a company that provides private cloud storage (discussed later), uses the Information Dispersal Algorithm (IDA) to enable greater availability of data in the face of physical failures and network outages. IDA, first created for telecommunication systems by Michael Rabin, is an algorithm that slices data, adding Reed-Solomon codes so that the data can be reconstructed even when some slices are missing. Further, IDA allows you to configure the number of data slices, such that a given data object could be carved into 4 slices with 1 tolerated failure or 20 slices with 8 tolerated failures. Similar to RAID, IDA permits the reconstruction of data from a subset of the original data, with some amount of overhead for error codes (dependent on the number of tolerated failures). This is shown in Figure 5.

Figure 5. Cleversafe's approach to extreme data availability

With the ability to slice data and attach Cauchy Reed-Solomon correction codes, the slices can then be distributed to geographically disparate sites for storage. For a number of slices (p) and a number of tolerated failures (m), the resulting storage expansion is p/(p-m); so, in the case of Figure 5, with p = 4 and m = 1, the system stores 1.33 times the original data, for an overhead of 33%. The downside of IDA is that it is processing intensive without hardware acceleration.

Replication is another useful technique and is implemented by a variety of cloud storage providers. Although replication introduces a large amount of overhead (100% for a single extra copy), it's simple and efficient to provide.
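The storage expansion of such a dispersal scheme follows directly from the slice counts quoted above. The small sketch below simply evaluates the p/(p-m) ratio for the two configurations mentioned; it does not implement Cleversafe's actual erasure coding.

# Storage expansion for an information-dispersal configuration: p slices in
# total, any m of which may be lost, means p/(p-m) raw bytes are stored per
# byte of user data (the fraction beyond 1.0 is the overhead).
def dispersal_expansion(p, m):
    return p / (p - m)

for p, m in [(4, 1), (20, 8)]:
    x = dispersal_expansion(p, m)
    print(f"p={p:2d}, m={m}: {x:.2f}x raw storage ({(x - 1) * 100:.0f}% overhead)")

# p= 4, m=1: 1.33x raw storage (33% overhead)
# p=20, m=8: 1.67x raw storage (67% overhead)
# For comparison, keeping one full replica is 2.00x, or 100% overhead.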
A customer's ability to control and manage how his or her data is stored, and the costs associated with it, is important. Numerous cloud storage providers implement controls that give users greater influence over their costs. Amazon implements Reduced Redundancy Storage (RRS) to provide users with a means of minimizing overall storage costs. Data is replicated within the Amazon S3 infrastructure, but with RRS, the data is replicated fewer times, with some possibility of data loss. This is ideal for data that can be recreated or that has copies that exist elsewhere.

Storage efficiency is an important characteristic of cloud storage infrastructures, particularly with their focus on overall cost. The next section speaks to cost specifically, but this characteristic speaks more to the efficient use of the available resources than to their cost. To make a storage system more efficient, more data must be stored in the same physical footprint. A common solution is data reduction, whereby the source data is reduced to require less physical space. Two means to achieve this are compression—the reduction of data by encoding it using a different representation—and de-duplication—the removal of any identical copies of data that may exist. Although both methods are useful, compression involves processing (re-encoding the data into and out of the infrastructure), whereas de-duplication involves calculating signatures of data to search for duplicates.

One of the most notable characteristics of cloud storage is the ability to reduce cost through its use. This includes the cost of purchasing storage, the cost of powering it, the cost of repairing it (when drives fail), as well as the cost of managing the storage. When viewed from this perspective (including SLAs and increasing storage efficiency), cloud storage can be beneficial in certain use models.

An interesting peek inside a cloud storage solution is provided by a company called Backblaze (see Related topics for details). Backblaze set out to build inexpensive storage for a cloud storage offering. A Backblaze POD (shelf of storage) packs 67TB in a 4U enclosure for under US$8,000. This package consists of a 4U enclosure, a motherboard, 4GB of DRAM, four SATA controllers, 45 1.5TB SATA hard disks, and two power supplies. On the motherboard, Backblaze runs Linux® (with JFS as the file system), with GbE NICs as the front end using HTTPS and Apache Tomcat. Backblaze's software includes de-duplication, encryption, and RAID6 for data protection. Backblaze's description of its POD (which shows you in detail how to build your own) demonstrates the extent to which companies can cut the cost of storage, making cloud storage a viable and cost-efficient option.
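Using the figures quoted above, a quick back-of-the-envelope calculation shows how this hardware translates into the dollars-per-gigabyte metric from Table 1. Only the raw hardware price is counted here; power, management, and redundancy are deliberately ignored.

# Raw hardware cost per gigabyte for the Backblaze POD figures quoted above.
pod_cost_usd = 8000          # "under US$8,000" per 4U enclosure
pod_capacity_gb = 67 * 1000  # 67TB of raw capacity (45 x 1.5TB SATA drives)

print(f"~${pod_cost_usd / pod_capacity_gb:.3f} per GB of raw capacity")
# ~$0.119 per GB before power, management, or any redundancy overhead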
Cloud storage models

Thus far, I've talked primarily about cloud storage providers, but there are models for cloud storage that allow users to maintain control over their data. Cloud storage has evolved into three categories, one of which merges two of the others into a cost-efficient and secure option. Much of this article has discussed public cloud storage providers, which present storage infrastructure as a leasable commodity (both in terms of long-term or short-term storage and the networking bandwidth used within the infrastructure). Private clouds use the concepts of public cloud storage but in a form that can be securely embedded within a user's firewall. Finally, hybrid cloud storage permits the two models to merge, allowing policies to define which data must be maintained privately and which can be secured within public clouds (see Figure 6).

Figure 6. Cloud storage models

The three models are shown graphically in Figure 6. Examples of public cloud storage providers include Amazon (which offers storage as a service). Examples of private cloud storage providers include IBM, Parascale, and Cleversafe (which build software and/or hardware for internal clouds). Finally, hybrid cloud providers include Egnyte, among others.

Cloud storage is an interesting evolution in storage models that redefines the ways we construct, access, and manage storage within an enterprise. Although cloud storage is predominantly a consumer technology today, it is quickly evolving toward enterprise quality. Hybrid cloud models will enable enterprises to maintain their confidential data within a local data center while relegating less confidential data to the cloud for cost savings and geographic protection. Check out Related topics for links to information on cloud storage providers and unique technologies.

- Manageability is one of the most important aspects of a cloud storage infrastructure. For cost efficiency, the cloud storage infrastructure must be self-managing and implement autonomic computing principles. Read more about autonomic computing at IBM Research.
- The REST API is a popular method for accessing cloud storage infrastructures.
- Although not as common as REST, the WebDAV specification is also used as an efficient cloud storage interface. Egnyte Cloud File Server implements WebDAV as an interface to its cloud storage infrastructure.
- The IBM Smart Business Storage Cloud is an interesting perspective on cloud storage for the enterprise. IBM's storage cloud offers high-performance on-demand storage for enterprise settings.
- Access methods are one of the important aspects of cloud storage, as they determine how users will integrate their client-side systems with the cloud storage infrastructure. Examples of providers that implement file-based APIs include Six Degrees and Zetta. Examples of providers that implement iSCSI-based interfaces include Cleversafe and the Cloud Drive.
- Aspera Software created a new protocol to assist in bulk transfer over the Internet, given the shortcomings of TCP in this application. You can learn more about the Fast and Secure Protocol in A Faster Way to the Cloud.
- Backblaze decided to build its own inexpensive cloud storage and has made the design and software open to you. Learn more about Backblaze and its innovative storage solution in Petabytes on a budget: How to build cheap cloud storage.
3 basic steps to thwart most cyberattacks, courtesy of NSA
Best practices, proper configurations and network monitoring can enable systems to withstand 80 percent of attacks

Computer systems with proper security and network controls should be able to withstand about 80 percent of known cyberattacks, according to a senior National Security Agency official.

There are common steps that people could take to bolster computer security and make it more difficult for would-be hackers to gain access, Richard Schaeffer Jr., the NSA’s information assurance director, told the Senate Judiciary Committee’s Terrorism and Homeland Security Subcommittee today. He identified three measures in particular as being especially effective.

“We believe that if one institutes best practices, proper configurations [and] good network monitoring that a system ought to be able to withstand about 80 percent of the commonly known attack mechanisms against systems today,” Schaeffer said in his testimony. “You can actually harden your network environment to raise the bar such that the adversary has to resort to much, much more sophisticated means, thereby raising the risk of detection."

Schaeffer said NSA works directly and indirectly with vendors to develop and distribute configuration guidance for software and hardware. Since 2005, NSA has worked with Microsoft, the U.S. military, the National Institute of Standards and Technology, the Homeland Security Department, and the Defense Information Systems Agency to establish consensus on common security configurations for Microsoft operating systems, he said.

For example, Schaeffer said Microsoft’s announcement of the release of Windows 7 was quickly followed by the release of the security configuration guide for the operating system. He said that NSA, in partnership with Microsoft and parts of the Defense Department, was able to enhance Microsoft’s operating system security guide without hampering a user’s ability to do everyday tasks. “All this was done in coordination with the product release, not months or years later during the product lifecycle,” he said in prepared remarks.

Ben Bain is a reporter for Federal Computer Week.
OpenSSH key management, Part 1
Understanding RSA/DSA authentication

Many of us use the excellent OpenSSH (see Related topics later in this article) as a secure, encrypted replacement for venerable commands such as telnet. One of OpenSSH's more intriguing features is its ability to authenticate users using the RSA and DSA authentication protocols, which are based on a pair of complementary numerical keys. As one of its main appeals, RSA and DSA authentication promise the capability of establishing connections to remote systems without supplying a password. While this is appealing, new OpenSSH users often configure RSA/DSA the quick and dirty way, resulting in passwordless logins but opening up a big security hole in the process.

What is RSA/DSA authentication?

SSH, specifically OpenSSH (a completely free implementation of SSH), is an incredible tool. Like telnet, the ssh client can be used to log in to a remote machine. All that's required is for this remote machine to be running sshd, the ssh server process. However, unlike telnet, the ssh protocol is very secure. It uses special algorithms to encrypt the data stream, ensure data stream integrity, and even perform authentication in a safe and secure way.

However, while ssh is really great, there is a certain ssh functionality that is often ignored, dangerously misused, or simply misunderstood. This component is OpenSSH's RSA/DSA key authentication system, an alternative to the standard secure password authentication system that OpenSSH uses by default.

OpenSSH's RSA and DSA authentication protocols are based on a pair of specially generated cryptographic keys, called the private key and the public key. The advantage of using these key-based authentication systems is that in many cases, it's possible to establish secure connections without having to manually type in a password.

While the key-based authentication protocols are relatively secure, problems arise when users take certain shortcuts in the name of convenience, without fully understanding their security implications. In this article, we'll take a good look at how to correctly use RSA and DSA authentication protocols without exposing ourselves to any unnecessary security risks. In my next article, I'll show you how to use ssh-agent to cache decrypted private keys, and introduce keychain, an ssh-agent front end that offers a number of convenience advantages without sacrificing security. If you've always wanted to get the hang of the more advanced authentication features of OpenSSH, then read on.

How RSA/DSA keys work

Here's a quick general overview of how RSA/DSA keys work. Let's start with a hypothetical scenario where we'd like to use RSA authentication to allow a local Linux workstation (named localbox) to open a remote shell on remotebox, a machine at our ISP. Right now, when we try to connect to remotebox using the ssh client, we get the following prompt:

% ssh drobbins@remotebox
drobbins@remotebox's password:

Here we see an example of ssh's default way of handling authentication. Namely, it asks for the password of the drobbins account on remotebox. If we type in our password for remotebox, ssh uses its secure password authentication protocol, transmitting our password over to remotebox for verification. However, unlike what telnet does, here our password is encrypted so that it cannot be intercepted by anyone sniffing our data connection. Once remotebox authenticates our supplied password against its password database, if successful, we're allowed to log on and are greeted with a remotebox shell prompt.
While this password authentication method is quite secure, RSA and DSA authentication open up some new possibilities. However, unlike ssh's secure password authentication, RSA authentication requires some initial configuration. We need to perform these initial configuration steps only once. After that, RSA authentication between localbox and remotebox will be totally painless.

To set up RSA authentication, we first need to generate a pair of keys, one private and one public. These two keys have some very interesting properties. The public key can be used to encrypt a message, and only the holder of the private key can decrypt it. The public key can only be used for encryption, and the private key can only be used for decryption of a message encoded by the matching public key. The RSA (and DSA) authentication protocols use the special properties of key pairs to perform secure authentication, without needing to transmit any confidential information over the network.

To get RSA or DSA authentication working, we perform a single one-time configuration step. We copy our public key over to remotebox. The public key is called "public" for a reason. Since it can only be used to encrypt messages for us, we don't need to be too concerned about it falling into the wrong hands. Once our public key has been copied over to remotebox and placed in a special file (~/.ssh/authorized_keys) so that remotebox's sshd can locate it, we're ready to use RSA authentication to log onto remotebox.

To do this, we simply type ssh drobbins@remotebox at localbox's console, as we always have. However, this time, ssh lets remotebox's sshd know that it would like to use the RSA authentication protocol. What happens next is rather interesting. Remotebox's sshd generates a random number and encrypts it using our public key that we copied over earlier. Then, it sends this encrypted random number back to the ssh running on localbox. In turn, our ssh uses our private key to decrypt this random number and then sends it back to remotebox, saying in effect, "See, I really do hold the matching private key; I was able to successfully decrypt your message!" Finally, remotebox's sshd concludes that we should be allowed to log in, since we hold a matching private key. Thus, the fact that we hold a matching private key grants us access to remotebox.

There are two important observations about RSA and DSA authentication. The first is that we really only need to generate one pair of keys. We can then copy our public key to the remote machines that we'd like to access, and they will all happily authenticate against our single private key. In other words, we don't need a key pair for every system we'd like to access. Just one pair will suffice.

The other observation is that our private key should not fall into the wrong hands. The private key is the one thing that grants us access to our remote systems, and anyone who possesses our private key is granted exactly the same privileges that we are. Just as we wouldn't want strangers to have keys to our house, we should protect our private key from unauthorized use. In the world of bits and bytes, this means that no one should be able to read or copy our private key.
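The challenge-and-response exchange just described can be illustrated in a few lines of Python using the third-party cryptography package. This is only a conceptual sketch of the "encrypt a random number with the public key, prove you can decrypt it" idea; it is not the actual OpenSSH wire protocol, and modern SSH public key authentication is built on signatures rather than raw encryption of a challenge.

# Conceptual sketch of the challenge/response idea behind RSA authentication:
# the server encrypts a random number with the user's public key; only the
# holder of the matching private key can decrypt it.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# "localbox" generates a key pair; the public half is copied to the server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# "remotebox" side: pick a random challenge and encrypt it with the public key.
challenge = os.urandom(32)
encrypted_challenge = public_key.encrypt(challenge, oaep)

# "localbox" side: decrypt with the private key and send the result back.
response = private_key.decrypt(encrypted_challenge, oaep)

# "remotebox" side: a matching response proves the client holds the private key.
print("authenticated" if response == challenge else "rejected")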
Of course, the ssh developers are aware of the private key's importance and have built a few safeguards into ssh and ssh-keygen so that our private key is not abused. First, ssh is configured to print out a big warning message if our key has file permissions that would allow it to be read by anyone but us. Second, when we create our public/private key pair, ssh-keygen will ask us to enter a passphrase. If we do, our private key will be encrypted using this passphrase, so that even if it is stolen, it will be useless to anyone who doesn't happen to know the passphrase.

Armed with that knowledge, let's take a look at how to configure ssh to use the RSA and DSA authentication protocols.

ssh-keygen up close

The first step in setting up RSA authentication begins with generating a public/private key pair. RSA authentication is the original form of ssh key authentication, so RSA should work with any version of OpenSSH, although I recommend that you install the most recent version available, which was openssh-2.9_p2 at the time this article was written. Generate a pair of RSA keys as follows:

% ssh-keygen
Generating public/private rsa1 key pair.
Enter file in which to save the key (/home/drobbins/.ssh/identity): (hit enter)
Enter passphrase (empty for no passphrase): (enter a passphrase)
Enter same passphrase again: (enter it again)
Your identification has been saved in /home/drobbins/.ssh/identity.
Your public key has been saved in /home/drobbins/.ssh/identity.pub.
The key fingerprint is:
a4:e7:f2:39:a7:eb:fd:f8:39:f1:f1:7b:fe:48:a1:09 drobbins@localbox

When ssh-keygen asks for a default location for the key, we hit Enter to accept the default of /home/drobbins/.ssh/identity. ssh-keygen will store the private key at the above path, and the public key will be stored right next to it, in a file called identity.pub.

Also note that ssh-keygen prompted us to enter a passphrase. When prompted, we entered a good passphrase (seven or more hard-to-predict words). ssh-keygen then encrypted our private key (~/.ssh/identity) using this passphrase so that our private key will be useless to anyone who does not know it.

The quick compromise

When we specify a passphrase, it allows ssh-keygen to secure our private key against misuse, but it also creates a minor inconvenience. Now, every time we try to connect to our drobbins@remotebox account, ssh will prompt us to enter the passphrase so that it can decrypt our private key and use it for RSA authentication. Again, we won't be typing in our password for the drobbins account on remotebox; we'll be typing in the passphrase needed to locally decrypt our private key. Once our private key is decrypted, our ssh client will take care of the rest. While the mechanics of using our remote password and the RSA passphrase are completely different, in practice we're still prompted to type a "secret phrase" into ssh:

# ssh drobbins@remotebox
Enter passphrase for key '/home/drobbins/.ssh/identity': (enter passphrase)
Last login: Thu Jun 28 20:28:47 2001 from localbox.gentoo.org
Welcome to remotebox!
%

Here's where people are often misled into a quick compromise. A lot of the time, people will create unencrypted private keys just so that they don't need to type in a passphrase. That way, they simply type in the ssh command, and they're immediately authenticated via RSA (or DSA) and logged in.

# ssh drobbins@remotebox
Last login: Thu Jun 28 20:28:47 2001 from localbox.gentoo.org
Welcome to remotebox!
%

However, while this is convenient, you shouldn't use this approach without fully understanding its security impact. With an unencrypted private key, if anyone ever hacks into localbox, they'll also get automatic access to remotebox and any other systems that have been configured with the public key. I know what you're thinking. Passwordless authentication, despite being a bit risky, does seem really appealing. I totally agree.
But there is a better way! Stick with me, and I'll show you how to gain the benefits of passwordless authentication without compromising your private key security. I'll show you how to masterfully use ssh-agent (the thing that makes secure passwordless authentication possible in the first place) in my next article. Now, let's get ready to use ssh-agent by setting up RSA and DSA authentication.

RSA key pair generation

To set up RSA authentication, we'll need to perform the one-time step of generating a public/private key pair. We do this by typing:

% ssh-keygen

Accept the default key location when prompted (typically ~/.ssh/identity and ~/.ssh/identity.pub for the public key), and provide ssh-keygen with a secure passphrase. Once ssh-keygen completes, you'll have a public key as well as a passphrase-encrypted private key.

RSA public key install

Next, we'll need to configure remote systems running sshd to use our public RSA key for authentication. Typically, this is done by copying the public key to the remote system as follows:

% scp ~/.ssh/identity.pub drobbins@remotebox:

Since RSA authentication isn't fully set up yet, we'll be prompted to enter our password on remotebox. Do so. Then, log in to remotebox and append the public key to the ~/.ssh/authorized_keys file like so:

% ssh drobbins@remotebox
drobbins@remotebox's password: (enter password)
Last login: Thu Jun 28 20:28:47 2001 from localbox.gentoo.org
Welcome to remotebox!
% cat identity.pub >> ~/.ssh/authorized_keys
% exit

Now, with RSA authentication configured, we should be prompted to enter our RSA passphrase (rather than our password) when we try to connect to remotebox using ssh:

% ssh drobbins@remotebox
Enter passphrase for key '/home/drobbins/.ssh/identity':

Hurray, RSA authentication configuration is complete!

If you weren't prompted for a passphrase, here are a few things to try. First, try logging in by typing ssh -1 drobbins@remotebox. This will tell ssh to use only version 1 of the ssh protocol, and may be required if for some reason the remote system is defaulting to DSA authentication. If that doesn't work, make sure that you don't have a line reading RSAAuthentication no in your /etc/ssh/ssh_config. If you do, comment it out by prepending it with a "#". Otherwise, try contacting the remotebox system administrator and verifying that they have enabled RSA authentication on their end and have the appropriate settings in /etc/ssh/sshd_config.

DSA key generation

While RSA keys are used by version 1 of the ssh protocol, DSA keys are used for protocol level 2, an updated version of the ssh protocol. Any modern version of OpenSSH should be able to use both RSA and DSA keys. Generating DSA keys using OpenSSH's ssh-keygen can be done similarly to RSA, in the following way:

% ssh-keygen -t dsa

Again, we'll be prompted for a passphrase. Enter a secure one. We'll also be prompted for a location to save our DSA keys. The default, normally ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub, should be fine. After our one-time DSA key generation is complete, it's time to install our DSA public key to remote systems.

DSA public key install

Again, DSA public key installation is almost identical to RSA. For DSA, we'll want to copy our ~/.ssh/id_dsa.pub file to remotebox and then append it to the ~/.ssh/authorized_keys2 file on remotebox. Note that this file has a different name than the RSA authorized_keys file. Once configured, we should be able to log in to remotebox by typing in our DSA private key passphrase rather than typing in our actual remotebox password.
Right now, you should have RSA or DSA authentication working, but you still need to type in your passphrase for every new connection. In my next article, we'll see how to use ssh-agent, a really nice system that allows us to establish connections without supplying a password each time, while also allowing us to keep our private keys encrypted on disk. I'll also introduce keychain, a very handy ssh-agent front end that makes ssh-agent more secure, convenient, and fun to use. Until then, check out the handy resources below to keep yourself on track.

- Be sure to visit the home of OpenSSH development.
- Take a look at the latest OpenSSH source tarballs and RPMs.
- Check out the OpenSSH FAQ.
- There is also an excellent ssh client available for Windows machines.
- You may find O'Reilly's SSH, The Secure Shell: The Definitive Guide to be helpful. The authors' site contains information about the book, a FAQ, news, and updates.
- Browse more Linux resources on developerWorks.
- Browse more Open source resources on developerWorks.
With more data being moved out of on-premise environments, data needs to be classified and secured to an appropriate level to prevent it from being compromised. This embraces the practice of assessing the risk of data loss, mitigating those risks, and applying adequate resources toward securing the data, both on-premise and in the cloud.

Cyber incidents are a fact of contemporary life, and significant cyber incidents are occurring with increasing frequency, impacting infrastructure no matter its physical location. Mitigating the loss of data is therefore a priority. And not all data is equal: some data needs a much higher level of security, other data much lower, and the cost of protection may differ by a factor of 10. What are some data strategies that practitioners can adopt to achieve this?

The session will discuss:
- Data Strategy
- Big Data and Analytics
- Data Loss Prevention (DLP) strategies
- Data Classification for the Public sector
- Policy Driven Data Security
So while a new Cray supercomputer took first place on the Top500, it was another machine, Lawrence Livermore National Laboratory's Sequoia, that proved to be the most adept at processing data-intensive workloads on the Graph 500.

Such differences in ranking between the two lists highlight the changing ways in which the world's most powerful supercomputers are being used. An increasing number of high performance computing (HPC) machines are being put to work on data analysis, rather than the traditional duties of modeling and simulation.

"I look around the exhibit floor [of the Supercomputing 2012 conference], and I'm hard-pressed to find a booth that is not doing big data or analytics. Everyone has recognized that data is a new workload for HPC," said David Bader, a computational science professor at the Georgia Institute of Technology who helps oversee the Graph 500.

The Graph 500 was created to chart how well the world's largest computers handle such data-intensive workloads. The latest edition of the list was released at the SC12 supercomputing conference, being held this week in Salt Lake City. In a nutshell, the Graph 500 benchmark looks at "how fast [a system] can trace through random memory addresses," Bader said. With data-intensive workloads, "the bottleneck in the machine is often your memory bandwidth rather than your peak floating point processing rate," he added.

The approach is markedly different from the Top500's. The well-known Top500 list relies on the Linpack benchmark, which was created in 1974. Linpack measures how effectively a supercomputer executes floating point operations, which are used for mathematically intensive computations such as weather modeling or other three-dimensional simulations. The Graph 500, in contrast, places greater emphasis on how well a computer can search through a large data set. "Big data has a lot of irregular and unstructured data sets, irregular accesses to memory, and much more reliance on memory bandwidth and memory transactions than on floating point performance," Bader said.

For the Graph 500 benchmark, the supercomputer is given a large set of data, called a graph. A graph is an interconnected set of data, such as a group of connected friends on a social network like Facebook. A graph consists of a set of vertices and edges; in the social media context, a vertex would be a person and an edge that person's connection to another person. Some vertices have many connections while many others have fewer. The computer is given a single vertex and is timed on how quickly it discovers all the other vertices in the graph, namely by following the edges.

Currently, IBM's BlueGene/Q systems dominate this edition of the Graph 500. Nine out of the top 10 systems on the list are BlueGene/Q models -- compared to four BlueGene/Q systems on the November 2011 compilation. For Bader, this is proof that IBM is becoming more sensitive to current data processing needs. IBM's previous BlueGene system, BlueGene/L, was geared more toward floating point operations and does not score as highly on the list.

Like the Top500, each successive edition of the Graph 500 shows steady performance gains among its participants. The top machine on the new list, Sequoia, traversed 15,363 billion edges per second. In contrast, the top entrant of the first list, compiled in 2010, followed only 7 billion edges per second. This jump of more than three orders of magnitude is "staggering," Bader said.
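The kernel being timed is essentially a breadth-first search: start from one vertex and follow edges until every reachable vertex has been discovered. The Python sketch below runs that traversal over a toy adjacency list and counts the edges it follows; the real benchmark operates on enormous synthetic graphs and reports traversed edges per second (TEPS), so this is meant only to make the mechanics concrete.

# Breadth-first search over an adjacency list: the kind of edge-following
# traversal the Graph 500 benchmark times.
from collections import deque

def bfs(graph, source):
    visited = {source}
    parent = {source: None}
    edges_traversed = 0
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for neighbor in graph[v]:
            edges_traversed += 1  # each undirected edge is examined from both ends
            if neighbor not in visited:
                visited.add(neighbor)
                parent[neighbor] = v
                queue.append(neighbor)
    return parent, edges_traversed

# Toy "social network": vertices are people, edges are connections.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave": ["bob", "carol", "erin"],
    "erin": ["dave"],
}
parents, edges = bfs(graph, "alice")
print(f"discovered {len(parents)} vertices, traversed {edges} edges")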
The Graph 500 list is compiled twice a year and, like the Top500, the results are announced at the Supercomputing conference, usually held in November, or the International Supercomputing Conference, usually held in June. Participation is voluntary: entrants run either the reference implementations or their own implementations of the benchmark and submit the results. Despite its name, the Graph 500 has yet to attract 500 submissions, though the numbers are improving with each edition. The first contest garnered 9 participants, and this latest edition has 124 entrants.

Bader is quick to point out that the Graph 500 is not a replacement for the Top500 but rather a complementary benchmark. Still, the data-intensive benchmark could help answer some of the criticisms around the Top500's use of the Linpack benchmark. Jack Dongarra, who helped create Linpack and now maintains the Top500, admitted during a discussion about the latest results of the Top500 at SC12 that Linpack does not measure all aspects of a computer's performance. He pointed to projects like the Graph 500, the Green500, and the HPC Challenge that measure other aspects of supercomputer performance. At least one system, the National Center for Supercomputing Applications' Blue Waters, was not entered in the Top500 because its keepers did not feel Linpack would adequately convey the true power of the machine.

Supercomputers are built according to the jobs they will execute, not to an arbitrary benchmark, Bader pointed out. "At the end of the day, you are going to want the machine that does best for your workload," Bader said.
By Kevin Brown

The total data center universe that most data center professionals are familiar with principally consists of two realms. The first realm, information technology (IT), refers to all systems that address the information processing aspects of the data center (e.g., servers, storage arrays and network switches). The second realm revolves around the physical infrastructure and controls that allow the IT realm to function. This second realm includes the physical infrastructure systems that support both the IT (“white space”) realm of the data center as well as the larger data center facility itself. This would include facility power, cooling and security systems.

The management classification system described in this paper is limited in scope to the physical infrastructure of the data center facility and IT areas. Both realms are interrelated, but the subsystems within each are procured, managed, and maintained by separate users. Typically, facilities and engineering departments “own” and operate facility and IT infrastructure systems. IT department personnel “own” the IT equipment. In some larger data centers, both IT and infrastructure devices share a common communications backbone. As the total data center evolves, these departments will become more intertwined, as will the management systems that support them.

Table 1 provides definitions of terms utilized in this paper to describe and contrast the data center management software classification system.
Researchers from the University of California, San Diego have found that totally erasing data from SSDs is not as easy or reliable as previously thought.

The ability to totally erase data from storage devices is a critical component of secure data management, regardless of whether the organization is just throwing away an old system or repurposing it for someone else's use. Recent research indicates, however, that sanitizing solid-state drives is not so straightforward.

Researchers from the Department of Computer Science and Engineering and one from the Center for Magnetic Recording and Research at the University of California have "empirically" found that existing disk sanitization techniques don't work on SSDs because the internal architecture of an SSD is very different from that of a hard disk drive, according to a paper presented Feb. 16 at the USENIX File and Storage Technologies Conference in San Jose, Calif.

The researchers tested built-in ATA commands for erasing the entire SSD, existing software tools to erase the entire drive, and software tools capable of securely erasing individual files. The researchers verified the reliability and effectiveness of each technique to determine what worked best. In the verification procedure, researchers wrote a structured data pattern to the drive, sanitized the drive using a known technique, dismantled the drive, and extracted the raw data directly from the flash chips using custom hardware. "SSD sanitization requires built-in, verifiable sanitize operations," the researchers wrote.

Many modern drives have built-in commands that instruct on-board firmware to run a standard sanitization protocol on the drive to remove all data. Since the manufacturer has "full knowledge" of the drive's design, these techniques should be reliable, but researchers found that many of the implementations were flawed. There are two standard commands, ERASE UNIT and ERASE UNIT ENH, as well as a BLOCK ERASE command defined in the ACS-2 specification, which is still in draft form, according to the paper.

The researchers tested the ERASE UNIT command on seven drives that claimed to support the ATA Security feature set and found only four executed it reliably, the researchers said. Two had a bug that prevented the command from executing, and one claimed to have executed the command despite the fact that all the data was still intact and accessible. Researchers called that last drive's results "most disturbing." Such commands "should work correctly almost indefinitely," they wrote.

While existing software applications for sanitizing the entire drive work "most, but not all," of the time, the researchers found that none of the available software techniques for securely removing individual files was effective on flash-based SSDs. Sanitizing single files allows the filesystem to remain usable, but it was a "more delicate operation," the researchers said. For software that sanitized drives by overwriting the drive multiple times, doing it twice was usually sufficient, the researchers found. All single-file software sanitization protocols failed, as 4 percent to 75 percent of the file's contents remained intact.

The researchers defined "sanitized" as erasing all or part of the storage device in such a way that the contained data was difficult or impossible to recover, according to the paper.
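The verification idea described above (write a recognizable pattern, sanitize, then inspect the raw media for leftovers) can be mimicked in a few lines. The sketch below uses an ordinary file standing in for a drive image, since the researchers' actual procedure involved dismantling drives and reading the flash chips directly; the marker string and file name are made up for illustration.

# Toy version of the verification idea: write a structured, recognizable
# pattern, run a "sanitize" step, then scan the raw bytes for survivors.
import os

IMAGE = "drive.img"                    # stand-in for a raw device image
MARKER = b"SANITIZE-TEST-PATTERN-"     # recognizable, numbered pattern

def write_pattern(path, blocks=1024, block_size=512):
    with open(path, "wb") as f:
        for i in range(blocks):
            record = MARKER + str(i).encode()
            f.write(record.ljust(block_size, b"\x00"))

def overwrite_once(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:       # single overwrite pass with zeros
        f.write(b"\x00" * size)

def count_survivors(path):
    with open(path, "rb") as f:
        return f.read().count(MARKER)

write_pattern(IMAGE)
print("markers before sanitize:", count_survivors(IMAGE))
overwrite_once(IMAGE)
print("markers after sanitize: ", count_survivors(IMAGE))
# On a real SSD, overwriting through the block interface may leave stale
# copies in flash that only chip-level extraction would reveal.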
"Local" sanitization, or "clearing," as defined by the National Institute of Standards and Technology, meant data could not be recoverable via standard hardware interfaces such as SATA or SCSI sanitization meant data could not be recovered via any digital means, including "subversion of the device's controller or firmware," the researchers wrote. Analog sanitization, or "purging," as defined by NIST, degrades the analog signal that encodes the data so that it is "effectively impossible" to reconstruct. "Cryptographic" sanitization referred to encrypting and decrypting incoming and outgoing data, and then just deleting the key from the drive.
Air-cooled servers may soon go the way of the single-core CPU. In high performance computing datacenters, the hottest new trend in energy efficiency is warm water cooling. IBM, Eurotech, and a handful of other vendors have paved the way with this technology, and now Appro has announced its own solution in an attempt to set itself apart from the competition.

Warm water cooling is benefiting from a confluence of industry trends that have raised the profile of the technology. Especially for the hotter, denser HPC systems that are being shoehorned into datacenters these days, warm water technology can offer an optimal solution, balancing somewhat higher up-front cost with a much lower lifecycle cost.

Compared to traditionally air cooled systems, liquid cooling (at any temperature) offers better energy efficiency, thus lowering the power bill. That's because water has much better thermal properties than air, requiring a lot less of it to cool a given piece of hardware. Using warm water instead of cool water has the additional advantage of doing away with a water chiller unit, greatly simplifying the plumbing, not to mention reducing installation costs. While warm water doesn't have the chilling capacity of cold water, as long as you get liquid in close proximity to the hardware, it can cool even the hottest processors. Even water as warm as 45C (113F) can effectively cool a modern processor. And as with cool water setups, the warmer outlet water from the servers can be reused to heat the datacenter and surrounding facilities.

The other development that is lighting a fire under this technology is the proliferation of high-wattage chips. The continued demand for performance means server chips are continuing to push the power envelope. Fast, high performance x86 CPUs can easily reach 130 watts. In a dense two-socket (or worse, four-socket) system, heat can build up quickly – all the more so when you consider that more memory chips are needed to feed the growing number of cores on these processors.

In the HPC realm, an additional burden has been added with the advent of accelerators: GPUs and soon the Intel Xeon Phi. Although the chips themselves aren't much hotter than a top-bin CPU, the inclusion of multiple gigabytes of graphics memory on an accelerator card pushes these devices well past the 200 watt realm. Once you start gluing a couple of these together on the same motherboard, along with their CPU hosts and main memory, all of a sudden you have over a kilowatt of hardware in a very small space.

It is in this environment that Appro has decided to offer its warm water cooling option, which it has dubbed Xtreme-Cool. The company claims it will reduce energy consumption by 50 percent and provide a PUE below 1.1. According to Appro, the cooling gear is designed to fold seamlessly into the company's Xtreme-X blade system, which previously was offered only with standard air cooling or chilled water setups. But the Xtreme-Cool design is such that it will actually fit into any standard computer rack. It's especially geared for the dense blade designs available in Xtreme-X, the Appro platform that features dual-processor nodes, 80 of which can be fit into a standard 42U rack. Since the company will be offering a Xeon Phi option for these blades when the chips become available, there will be an extra incentive for customers to consider the warm water option. GPU accelerators will be supported as well.
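For context on the PUE figure quoted above: power usage effectiveness is simply total facility power divided by the power delivered to the IT equipment, so a PUE below 1.1 means less than about 10 percent extra goes to cooling, power distribution, and other overhead. The quick calculation below uses made-up facility numbers purely to show the arithmetic.

# PUE = total facility power / IT equipment power.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Hypothetical 1 MW of IT load under two cooling regimes (illustrative only).
it_load_kw = 1000.0
for label, overhead_kw in [("air-cooled facility with chillers", 800.0),
                           ("warm-water-cooled facility", 90.0)]:
    print(f"{label}: PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")
# 1.80 for the chilled-air case versus 1.09 for the warm-water case, which is
# where claims like "PUE below 1.1" come from.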
Xtreme-Cool is different from most of the warm water cooling solutions out there, inasmuch as it's built for standard rack enclosures. The heart of the system is the RackCDU (rack cooling device unit), a radiator-like component that sits on the inside of the rack enclosure. Two sets of tubes run from the unit to the server blades. One set feeds the warm water to the servers; the other transfers the server-heated water back to the RackCDU for cooling to ambient temperatures. The tubes that go into the server wrap around the cold plates on top of the processor and memory components, the primary sources of heat on the motherboard. Dripless interconnectors are used for reliability. Appro does the entire installation, so from the facility manager's point of view, it's plug and play.

As mentioned before, Xtreme-Cool can be adapted to standard, non-Appro racks. Most other solutions, like those of IBM and Eurotech, are custom designs, architected to fit their particular blade systems. For example, the IBM solution, which is being used in SuperMUC, a three-petaflop supercomputer cluster constructed from iDataPlex servers, uses the company's own hot water cooling system. That one consists of custom-fitted aluminum plates that lay over the server motherboard. This design actually does a somewhat better job at extracting the heat – Appro says their solution will only extract about 80 percent of it – but at a cost that is considerably higher.

Appro is trying to hit the sweet spot here, designing a system that does a good job at heat extraction, but at a price point that they believe will deliver a faster return on investment than more custom designs. In truth, the company has not specified the price premium on the Xtreme-Cool option yet, but according to Appro marketing director Maria McLaughlin, it will be "much cheaper than the proprietary systems."

Xtreme-Cool systems won't start shipping until the first quarter of 2013, but Appro will be demonstrating the product at the Supercomputing Conference (SC12), November 12 to 15.
Five years after the federal government began measuring the digital divide, the nation stands at a crossroads. More U.S. citizens have access to Internet and computer technology than ever before, thanks to rampant expansion of the telecommunications infrastructure and plummeting technology prices. But even as basic connectivity concerns fade, complex problems remain -- particularly for ethnic minorities, people with disabilities and rural residents.

In response, communities have begun fashioning comprehensive digital divide plans that extend beyond the laying of network plumbing. These plans include closer partnerships between business and government, and they also may combine high-speed Web access initiatives with outreach efforts designed to help local governments, small businesses and citizens put their newfound Internet capabilities to effective use. The trend signals a maturing approach to solving the digital divide.

"There really is a different orientation. I think the focus is now on solutions, so you see the conversation changing from digital divide to digital opportunities," said Anthony Wilhelm, program director of the Digital Divide Network, an organization formed by the nonprofit Benton Foundation and the National Urban League to spread digital divide information. "There are still communities that are not served -- so access isn't completely solved," Wilhelm said. "But I think we've become more sophisticated, and we've realized that access isn't enough."

Late last year, Seattle Mayor Paul Schell announced a multimillion-dollar effort to improve services at the city's community technology centers. Funded by federal grant money and multiple corporate sponsors, the project is modernizing equipment and creating technical literacy programs at nine technology centers in low-income and high-unemployment areas. Unlike traditional infrastructure projects, Seattle's literacy initiative is designed to give citizens the skills they need to thrive in an increasingly digital economy, said Daria Cal, program coordinator for the Seattle Community Technology Alliance, a coalition of public agencies and community groups that oversees the technology literacy effort. "We need to put programs in place -- I think that's what's been missing," said Cal. "The whole point is what you do with the technology."

The centers provide technology courses tailored to various skill levels. Novice computer users learn basic word-processing and spreadsheet applications, while those with more advanced knowledge can receive training in multimedia and digital editing programs. The facilities also offer after-school activities, adult and family literacy courses, career development and job preparation.

The city lined up an impressive list of contributors to help pay for the program, including AT&T, Microsoft, Cisco Systems and Gateway. That task was made easier by the fact that the literacy initiative was driven primarily by corporate workforce concerns, according to Cal. "We've been talking to technology companies that say they can't find people to fill jobs. If they can't find employees, that means we're not doing something either in our school curriculum or vocational training," she said. "I think that's why we've been able to get corporate support. [Companies] realize they need to put something back into the community. It's self-serving all the way around."

Corporate support is key to sustaining the city's literacy effort, added Cal.
And, unlike a one-time grant award, corporate donations offer the prospect of continuous funding and regular technology upgrades that are fundamental to the long-term viability of the technology training. "The community technology centers have been around for years, but theyve had no sponsors. Theyve had grant money from the federal government to create infrastructure and buy computers," she said. "But if you visit the site five years later, the computers are the same. You cant train anyone on a computer thats five years old." Another comprehensive response to the digital divide comes from Atlanta, where more than 5,000 residents have visited technology centers created by the citys Community Technology Initiative. Atlanta opened the sixth of its "Cyber Centers" -- neighborhood facilities that provide access to computers and the Web, as well as technology training -- in February. The city expects to operate a total of 15 centers when the project is complete. But the Atlanta initiative -- created in 1999 and financed by $8 million from the citys cable television franchise agreement -- goes beyond technology centers. It is a broad effort to improve citizen access to computer technology and training, and to dramatically boost the amount of locally created Web content. Earlier this year, Atlanta received a $100,000 Urban Challenge Grant from 3Com Corp., which the city will use to create a community Web portal. Residents enrolled in the citys computer training programs will help produce content for the site. Atlanta envisions the portal offering everything from government services, to neighborhood histories, to citizen-authored recipe books. "Providing access is only half the answer. Thats the easy half," said Jabari Simama, head of the Atlanta Mayors Office of Community Technology, in a recent speech. "The other half is not so easy. That is to ensure that community technology, the Internet and new media in general [are] used to build community, serve the public and help break down barriers that keep us apart." To staff the technology centers, Atlanta is forming a "Tech Corps" of computer-savvy high school and college students, according to a 150-page strategic plan for the initiative. Courses offered at the centers are designed both to provide computer training and improve low basic literacy rates that plague inner-city areas. In addition, the family-oriented facilities will offer security and childcare services. Atlantas initiative includes an extensive plan to market the existence of community technology centers and underscore the importance of computer literacy. The citys strategic plan envisions delivering a message of empowerment set to a hip-hop theme via television, radio and print media. Boston also has taken a multi-pronged approach to the digital divide, which includes opening a network of community technology centers, developing advanced technology curriculum for city high schools and creating a program to give new computers to families who complete technology courses. The latter effort, dubbed Technology Goes Home, is the most unusual. Stocked with 1,000 PCs donated by a local computer manufacturer, the program gives computers to low-income families that complete a rigorous 12-week training program. Technology Goes Home requires participating neighborhoods to form collaboratives to help staff local technology centers and provide other resources. 
Families are chosen for the program based on several factors, including income level and a written goals statement describing how they will use technology to change their lives. Once chosen, families must unfailingly attend weekly two-hour training courses and complete a series of projects, culminating with a lengthy final research report. Only those families that meet all the requirements receive a computer; about 30 had done so by the middle of last year. Good News/Bad News Just how far has the nation come? The latest in a bellwether series of digital divide reports compiled by the National Telecommunications and Information Administration (NTIA) holds reason for both optimism and concern. There are now more American households with personal computers than without, according to the report, "Falling Through the Net: Toward Digital Inclusion." The study, released in October, says PCs sit in 51 percent of homes across the country, and four out of five of those machines are linked to the Internet. The NTIA survey points to progress on a number of fronts. For instance, rural and urban areas enjoy virtually identical rates of dial-up Internet access, and the gender-based gap in technology use has evaporated. Whats more, this progress has come more quickly than many expected. "In 1990, if you would have said, By the year 2000, half of American homes will have computers in them and half of the American population will be accessing the Internet, I would have thought that was pretty ambitious," said Gregory Rohde, former administrator of the NTIA who served as the Clinton administrations principal adviser on telecommunications and information policies. "No one could have predicted the Internet growth. So I think it certainly is very impressive." The fevered pace has even prompted some observers to declare at least partial victory. "Its not an income issue, because prices are so inexpensive that its hard to argue income is a barrier for very many people," said Jeffrey Eisenach, president of the Washington D.C.-based Progress & Freedom Foundation (PFF). "Computers are practically free and Internet access is free. Which part of free is too expensive?" Yet, completely erasing technology inequities remains an elusive and exceedingly complex goal. Bound in a tangled web of social and economic factors, ethnic minorities -- particularly Hispanics and African Americans -- continued to lose ground compared with the nation as a whole. As of last August, less than 24 percent of black and Hispanic households had Internet access, compared with the national average of 41.5 percent, according to the NTIA. Indeed, the report found that the gap in Internet access rates had actually widened for each group, growing by 4 percent for Hispanics and 3 percent for black households since 1998. For the first time, NTIA also examined the ability of Americans with disabilities to access computers and the Internet. The results were startling, indicating that individuals with disabilities were just half as likely to have Web access than those without impairments. Finally, the report hints at what may be a significant challenge for rural communities banking on the Internet to energize tired industrial and agricultural economies: a lack of broadband Web connections. Rural citizens may have achieved near parity with their urban counterparts in conventional dial-up Internet connectivity, but they face a disadvantage when it comes to the high-speed Internet access needed to compete in a global economy. 
Nationwide, 12 percent of central city households enjoy high-speed Web connections versus 7 percent for those located in rural parts of the country. The gap is even more pronounced regionally. For example, the western United States boasts the highest central city broadband penetration -- 13 percent of households -- and the lowest rural penetration at slightly less than 6 percent. Lack of high-speed Web access spells trouble for small-town businesses hoping to carve themselves a slice of the Internet economy as customers abandon slow-moving Web sites in favor of faster alternatives. "Youre not just competing against the guy down the street. Now youre competing against the guy in San Francisco or New York or Los Angeles. And that competition is just a click away," said Rohde. Looming demands of the Internet economy have put high-speed connectivity at the top of the agenda for progressive small towns such as LaGrange, Ga. The 25,000-person community drew widespread attention last year when it partnered with the local cable provider in an ambitious plan to bring free Web access to all citizens. Last April, the town began equipping cable TV subscribers with set-top boxes and wireless keyboards that allow them to surf the Web and send e-mail. About 4,000 homes are connected to the system thus far, and the city is committed to hooking up any resident who wants it -- even if that means subsidizing the modest monthly cable subscription fee. "We thought if we could bring Internet access into every home, we might be able to upgrade the skills of our workforce and our students," said LaGrange Mayor Jeffrey Lukken. The town worked with Charter Communications to upgrade the existing cable television system to provide broadband Web connections. That network was then linked with 150 miles of city-owned fiber already serving schools, government offices, an industrial park and local merchants. Key to the plan was LaGranges decision to purchase Charters existing cable infrastructure and lease it back to the company, said Lukken. "That gave [Charter] enough cash-flow and enough tax benefits that they were able to upgrade their entire system. It allowed them to do a lot of improvements and modifications that they might not have been in a position to do at that time for our community." Significantly, the deal also kept LaGrange out of the cable/ISP business. Although LaGrange considered developing and running its own system to ensure widespread Web access, the city ultimately decided against such a plan, according to the mayor. "We felt we were capable of doing it, but the problem is you would have free enterprise competing with the government that regulates free enterprise," he said. "Eventually, we both would have cut our prices down to just bloody nothing, and I dont think there would have been [profit] margins enough to have justified the risk involved to our citizens." The city also worried about running a network that carries adult entertainment and other potentially troublesome content, added Lukken. The current arrangement eliminates that concern. "While we are a landlord, we are not in the operation, and we have nothing to do with [Charters] cable business," he said. Defining Governments Role Just how deeply government should dive into the telecommunications market to solve the digital divide is a question simmering in controversy. 
PFFs Eisenach said a growing number of publicly owned utilities are cranking up cable television and/or ISP operations, and its an area where he contends the public sector does not belong. In a recent report sharply critical of government-run telecom operations, PFF said more than 200 public utilities have entered the market, offering virtually every major category of telecommunications service. The group brands the trend both surprising and destined for failure. "There is strong evidence that government-operated telecommunications enterprises have performed poorly in the past; that they rely on extensive subsidies that burden taxpayers and distort the marketplace; and that they discourage private-sector provision of the very services they seek to provide," the report said. Before resorting to public ownership or subsidies, Eisenach urged governments to take a hard look at how they tax and regulate telecom carriers. He contended that telecom services are among the nations most heavily taxed products. Whats more, those taxes can be overwhelmingly complex, with large companies like AT&T filing 100,000 state and local tax returns each year, according to Eisenach. "For governments to be regulating and taxing these telecommunications providers in the ways that they are, and then turn around and say [connectivity] is not happening fast enough is just the worst kind of hypocrisy," he said. "The first thing governments have to look at is what they are doing to discourage build-out." However, Nancy Stark, director of community and economic development for the Center for Small Communities, has no quarrel with government-owned telecom if it helps expand broadband connectivity in underserved rural areas. "My personal opinion is that its great," she said. "Use whatever legal means you can to get it." Stark added that wiring small towns for high-speed Web access is of no small concern, in light of census data showing that 85 percent of U.S. communities contain fewer than 10,000 citizens. She said small communities often must give private industry a push if they want high-speed Internet service. Although some jurisdictions successfully operate telecom services, a more common strategy involves aggregating the telecom needs of government, schools, hospitals, libraries and local businesses into a package large enough to attract attention from private carriers. "Sometimes you have to really demonstrate [the demand] in a real marketing way," she said. Although high-speed access gaps clearly remain, the U.S. information infrastructure is evolving to the point where policy-makers now face a different set of questions, said Wilhelm of the Digital Divide Network. "In a sense, we now have a national information infrastructure. Its uneven and some communities cant plug in. But in general, theres an infrastructure out there. Legislators are asking, How can we leverage that existing investment?" As a result, communities are investigating ways to keep Internet-connected classrooms open after school to provide citizens with Web access, he said. Theyre also looking to provide better technology training at community centers and libraries. LaGrange is developing a wide range of online applications, according to Lukken. Teachers now use the Web to send grades and progress reports to students and parents. The city has even begun holding Internet scavenger hunts aimed at spurring student and citizen involvement. 
"Were trying to make it fun by encouraging people to go onto the Internet and find sites for rewards that the city will give them," Lukken said. A look across the nation also shows states taking a larger role in helping localities make effective use of Internet connectivity. By the middle of this year, Virginia expects to complete a series of e-government blueprints designed to help local governments put a wide range of civic and business activities online. According to the state, the blueprints will give local jurisdictions a step-by-step guide for building Web-connected communities. Furthermore, Virginias non profit Center for Innovative Technology regularly holds seminars aimed at helping local businesses harness Internet technology. In North Carolina, the states Rural Internet Access Authority not only promotes affordable, high-speed Web access to rural areas, it also spearheads the creation of telework centers and the development of online health, learning and commerce applications for small communities. The state recently created a grant process designed to put two rural telework centers into operation by January 2002. Using Your Strengths Ultimately, the growing effort to address issues underlying the digital divide puts a premium on partnership. Observers say nearly any successful digital divide initiative involves a mix of public, private and non profit participation -- and the best projects are tailored to the unique requirements of each community. "You really need governments, foundations and industry working in concert. That nexus is incredibly important," said Wilhelm. Lukken recommends partnering with businesses or organizations that may have assisted the community in the past. "Rely on those relationships you already have, and rely on those strengths that the community has," he said. "Charter was so helpful to us because they actually believed in us and bought into our vision. And although technology inequities persist, Wilhelm finds encouragement in the fact that policymakers are getting a firmer grasp on where the challenges lie and where government can make a difference. "I actually think the conversation has advanced. Now we know where the market is not going to solve the problems and where we can just leave it to the market," he said. "There are just a lot of interesting efforts going on that are finally making a dent in this thing."
<urn:uuid:39394e45-591f-4f4a-9d5d-936395aa1631>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Defining-the-Divide.html?page=3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00135-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957875
3,733
2.5625
3
UAVs to power wireless -- anywhere The next time a major hurricane or earthquake strikes, wiping out cell and wireless communications, first responders and citizens alike may be able to get gigabit-per-second wireless service from drones dispatched to the skies over the affected areas. DARPA’s Mobile Hotspots program, just entering its second phase of development, aims to deliver secure, high-speed wireless networks to troops in remote locations, thus giving them fast and reliable access to intelligence and reconnaissance information. While DARPA’s project is targeting military applications, the same technologies could be helpful during disasters and potentially for those conducting search-and-rescue operations in remote areas. The key to the program is deploying millimeter-wave wireless transceivers on small unmanned aerial vehicles (UAVs). And the challenge of doing that, of course, is engineering the components into footprints small and light enough for UAVs. Millimeter-wave communications, which offers bandwidth comparable to fiber optics, was actually first demonstrated in the 1890s. While Guglielmo Marconi was experimenting with radio waves at very low frequencies of 30-300 kHz, J.C. Bose first demonstrated millimeter-wave transmissions in the extremely high-frequency range between 30 GHz and 300 GHz in 1897. It wasn’t until the 1960s, however, that millimeter-wave technology was put to practical use by radio-astronomers. And it wasn’t until the 1980s that millimeter-wave integrated circuits were developed, enabling the use of those frequencies in commercial products. What makes millimeter-wave communications attractive is its efficiency and high throughput when used for point-to-point transmissions. What makes it challenging, particularly for outdoor transmissions, is its susceptibility to loss of signal strength when transmissions go through rainfall and other atmospheric interference. In Phase 1 of the DARPA Mobile Hotspots program, participants – and DARPA declined to name them – focused on refining and testing the underlying technologies. According to a DARPA press release, the program successfully demonstrated: - Smaller, steerable millimeter-wave antennas that can acquire and track a communications link between moving platforms. - Improved low-noise amplifiers that boost the communications signal while minimizing unwanted noise. The prototype cut in half the noise levels of typical low-noise amplifiers, says DARPA - More efficient and capable power amplifiers required to achieve the distances of more than 50 kilometers, as specified by the program requirements. - New approaches for robust airborne networking that allow maintenance of capacity between mobile air and ground units. - Initial engineering designs for low-size, weight and power (SWAP) designs for packaging the components. - Program requirements call for the final device to have a width no greater than 8 inches, a weight of less than 20 pounds and power consumption less than 150 watts. Phase 2 of the program launched in March 2014 with two participants -- L-3 Communications and FIRST RF – designated to lead teams. The goal of Phase 2 is to integrate the technologies demonstrated in Phase 1 into pods capable of being carried by a Shadow UAV. The Shadow, manufactured by AAI Corp., is already extensively used by the military and is a medium-sized UAV device that weighs in at 186 lbs. and has a wingspan of 14 feet. It can fly at up to 8,000 feet and has a range of 68 miles. 
DARPA has not made public when Phase 2 is expected to conclude. And neither L-3 Communications nor FIRST RF responded to requests for more information about the program’s Phase 2 challenges. It will, however, end with a ground demonstration of at least four pods capable of being carried by a Shadow UAV, made by AAI Corp, two ground vehicles and a ground node. Posted by Patrick Marshall on Apr 15, 2014 at 8:31 AM
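The article mentions that the millimeter-wave band is prone to atmospheric loss and that the program requires link distances of more than 50 kilometers. A quick way to build intuition for why that is hard is the standard free-space path loss formula, FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45. The sketch below is only an illustration, not part of the DARPA program: the 60 GHz figure (and the 2.4 GHz and 30 GHz comparison points) are assumptions chosen for contrast, since the program specifies the distance but the article does not state an exact operating frequency, and rain attenuation would add further loss on top of this.

import math

def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
    """Standard Friis free-space path loss in dB (distance in km, frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Illustrative comparison over the >50 km distance in the program requirements.
for f in (2.4, 30.0, 60.0):   # GHz; the millimeter-wave values are assumed examples
    print(f"{f:5.1f} GHz over 50 km: {free_space_path_loss_db(50, f):6.1f} dB")

Because loss grows with 20*log10(f), the 60 GHz case sits roughly 28 dB above the 2.4 GHz case over the same distance, which helps explain why the program needs steerable, highly directional antennas and more capable power amplifiers rather than ordinary Wi-Fi-style radios.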
<urn:uuid:7970c35f-a86e-4963-a549-6039b1eb5a80>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2014/04/darpa-mobile-hotspot-uav.aspx?admgarea=TC_Mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00135-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94439
802
3.625
4
Web Application Vulnerability Scanners are tools designed to automatically scan web applications for potential vulnerabilities. These tools differ from general vulnerability assessment tools in that they do not perform a broad range of checks on a myriad of software and hardware. Instead, they perform other checks, such as potential field manipulation and cookie poisoning, which allows a more focused assessment of web applications by exposing vulnerabilities of which standard VA tools are unaware. Web Application Security Web Application Issues - Scripting issues - Sources of input: forms, text boxes, dialog windows, etc. - Multiple charset encodings (UTF-8, ISO-8859-15, UTF-7, etc.) - Regular expression checks - Header integrity (e.g. multiple HTTP Content-Length, HTTP response splitting) - Framework vulnerabilities (Java Server Pages, .NET, Ruby on Rails, Django, etc.) - Success control: front door, back door vulnerability assessment - Penetration attempts versus failures Technical vulnerabilities - Unvalidated input: . Tainted parameters - Parameters used in URLs, HTTP headers, and forms are often used to control and validate access to sensitive information. - Cross-Site Scripting flaws: . XSS takes advantage of a vulnerable web site to attack clients who visit that web site. The most frequent goal is to steal the credentials of users who visit the site. - Content injection flaws: . SQL injection - SQL injection allows commands to be executed directly against the database, allowing disclosure and modification of data in the database . XPath injection - XPath injection allows an attacker to manipulate the data in the XML database . Command injection - OS and platform commands can often be used to give attackers access to data and escalate privileges on backend servers. - Cross-Site Request Forgeries Security vulnerabilities - Denial of Service - Broken access control - Broken session management (synchronization timing problems) - Weak cryptographic functions, unsalted hashes Architectural/Logical vulnerabilities - Information leakage - Password change form disclosing detailed errors - Session-idle destruction not consistent with policies - Spend deposit before deposit funds are validated Other vulnerabilities - Debug mode - Hidden form field manipulation - Weak session cookies: Cookies are often used to transmit sensitive credentials, and are often easily modified to escalate access or assume another user's identity. - Fail-open authentication - Dangers of HTML comments - Acunetix WVS by Acunetix - AppScan DE by IBM/Watchfire, Inc. - Hailstorm by Cenzic - N-Stealth by N-Stalker - NTOSpider by NTObjectives - WebInspect by HP/SPI-Dynamics - WebKing by Parasoft - elanize's Security Scanner by Elanize KG - MileScan Web Security Auditor by MileSCAN Tech - Grabber by Romain Gaucher - Grendel-Scan by David Byrne and Eric Duprey - Nikto by Sullo - Pantera by Simon Roses Femerling (OWASP Project) - Paros by Chinotec - Spike Proxy by Immunity (now OWASP Pantera) - WebScarab by Rogan Dawes of Aspect Security (OWASP Project) - Wapiti by Nicolas Surribas - W3AF by Andres Riancho A more complete list of tools is available in the OWASP Phoenix/Tools Extracted from http://unlugarsinfin.blogspot.es
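As a minimal illustration of the SQL injection item in the list above (not taken from the original post), the sketch below contrasts a query built by string concatenation with a parameterized query, using Python's built-in sqlite3 module; the table, column and payload are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "' OR '1'='1"   # classic injection payload supplied by an attacker

# Vulnerable: untrusted input concatenated straight into the SQL statement.
vulnerable = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # returns every row in the table

# Safer: the driver passes the value as data, never as SQL text.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing

A scanner's injection checks essentially automate the first pattern: they submit payloads like the one above into every form field and URL parameter and watch for responses that reveal the query was altered.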
<urn:uuid:dc4c7e3a-4776-4902-a1bf-7dce47f40190>
CC-MAIN-2017-09
http://www.hackplayers.com/2008_10_01_archive.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00311-ip-10-171-10-108.ec2.internal.warc.gz
en
0.784988
768
2.890625
3
Powering The Internet Of Things The Internet of Things is designed on the premise that sensors can be embedded in everyday objects to help monitor and track them. The scope of this is huge – for example, the sensors could monitor and track everything from the structural integrity of bridges and buildings to the health of your heart. Unfortunately, one of the biggest stumbling blocks to widespread adoption at the moment is finding a way to cheaply and easily power these devices and thus enable them to connect to the internet. Luckily, engineers at the University of Washington have a potential solution. They have designed a new system which uses radio frequency signals as a power source and reuses existing WiFi infrastructure to provide the connectivity. The technology is called WiFi Backscatter and is believed to be the first of its kind. Building on previous research that showed how low-powered devices could run without batteries or cords by obtaining their power from radio, TV and wireless signals, the new design goes further by connecting the individual devices to the internet. Previously this wasn't possible: the difficulty in providing WiFi connectivity was that traditional, low-power WiFi consumes significantly more power than can be gained from the wireless signals. This has been solved by developing an ultra-low-power tag prototype which has the required antenna and circuitry to talk to laptops and smartphones. “If Internet of Things devices are going to take off, we must provide connectivity to the potentially billions of battery-free devices that will be embedded in everyday objects”, said Shyam Gollakota, a professor in the University of Washington’s Computer Science and Engineering department. “We now have the ability to enable WiFi connectivity for devices while consuming orders of magnitude less power than what WiFi typically requires”. The tags on the new ultra-low-power prototype work by scanning for WiFi signals that are moving between the router and the laptop or smartphone. Data is encoded by either reflecting or not reflecting the WiFi router signal – thus slightly changing the signal itself. It means that WiFi-enabled devices would detect the minuscule changes and thus receive data from the tag. “You might think, how could this possibly work when you have a low-power device making such a tiny change in the wireless signal? But the point is, if you’re looking for specific patterns, you can find it among all the other Wi-Fi reflections in an environment”, said Joshua Smith, another University of Washington professor who works in the same department as Gollakota. The technology has currently communicated with a WiFi device at a rate of 1 kilobit per second with two metres between the devices, though the range will soon be expanded to twenty metres. Patents have been filed. By Daniel Price
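The tag described above conveys bits by choosing, for each symbol period, whether or not to reflect the router's signal, and the receiver then looks for that faint pattern on top of ordinary reflections. The sketch below is only a toy model of that idea (on-off keying of a small reflection plus an averaging threshold detector); the amplitudes, noise level and detection method are assumptions for illustration and do not describe the actual University of Washington hardware.

import random

def backscatter_encode(bits, samples_per_bit=8):
    """Tag either reflects (adds a small offset) or stays quiet for each bit."""
    signal = []
    for b in bits:
        for _ in range(samples_per_bit):
            ambient = random.gauss(0.0, 0.05)              # ordinary reflections/noise
            signal.append(ambient + (0.2 if b else 0.0))   # tiny extra reflection for a '1'
    return signal

def backscatter_decode(signal, samples_per_bit=8, threshold=0.1):
    """Receiver averages each symbol period and thresholds the result."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(backscatter_decode(backscatter_encode(message)) == message)  # normally True

Averaging many samples per bit is what lets a very weak, deliberate change stand out from random environmental reflections, which is the intuition behind Smith's comment about looking for specific patterns.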
<urn:uuid:b7b15fc5-3ed0-48e3-94d3-eb881fbaec6c>
CC-MAIN-2017-09
https://cloudtweaks.com/2014/08/powering-the-internet-of-things-wifi/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00311-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949288
555
3.515625
4
Frame Relay -- Summary Network Consultants Handbook - Frame Relay by Matthew Castelli Frame Relay is a Layer 2 (data link) wide-area network (WAN) protocol that works at both Layer 1 (physical) and Layer 2 (data link) of the OSI model. Although Frame Relay services were initially designed to operate over ISDN service, the more common deployment today involves dedicated access to WAN resources. Frame Relay networks are typically deployed as a cost-effective replacement for point-to-point private line, or leased line, services. Whereas point-to-point customers incur a monthly fee for local access and long-haul connections, Frame Relay customers incur the same monthly fee for local access, but only a fraction of the long-haul connection fee associated with point-to-point private line services. Frame Relay was standardized by two standards bodies -- internationally by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and domestically by ANSI (American National Standards Institute). Frame Relay is a packet-switched technology, meaning that each network end user, or end node, will share backbone network resources, such as bandwidth. Connectivity between these end nodes is accomplished with the use of Frame Relay virtual circuits (VCs). Frame Relay WAN service primarily comprises four functional components: - Customer premise Frame Relay access device (FRAD). - Local access loop to the service provider network. - Frame Relay switch access port. Link Management Interface parameters are defined here. - Frame Relay VC parameters to each end site. DLCIs are of local significance, unless an agreement has been made with the network service provider to deploy global DLCIs. Local significance means that DLCIs are of use only to the local Frame Relay network device. Frame Relay DLCIs are analogous to an organizations telephone network utilizing speed-dial functions. Two types of Frame Relay VCs exist: - Permanent virtual circuits (PVCs) -- These are permanently established, requiring no call setup, and utilize DLCIs for endpoint addressing. - Switched virtual circuits (SVCs) -- These are established as needed, requiring call setup procedures and utilizing X.121 or E.164 addresses for endpoint addressing. Two types of congestion-notification mechanisms are implemented with Frame Relay: - Forward explicit congestion notification (FECN) -- The FECN bit is set by a Frame Relay network to inform the Frame Relay networking device receiving the frame that congestion was experienced in the path from origination to destination. Frame relay network devices that receive frames with the FECN bit will act as directed by the upper-layer protocols in operation. The upper-layer protocols will initiate flow-control operations, depending on which upper-layer protocols are implemented. This flow-control action is typically the throttling back of data transmission, although some implementations can be designated to ignore the FECN bit and take no action. - Backward explicit congestion notification (BECN) -- Much like the FECN bit, the BECN bit is set by a Frame Relay network to inform the DTE that is receiving the frame that congestion was experienced in the path traveling in the opposite direction of frames. The upper-layer protocols will initiate flow-control operations, depending on which upper-layer protocols are implemented. 
This flow-control action is typically the throttling back of data transmission, although some implementations can be designated to ignore the BECN bit and take no action. - Committed information rate (CIR) -- This is the amount of bandwidth that will be delivered as best-effort across the Frame Relay backbone network. - Discard eligibility (DE) -- This is a bit in the frame header that indicates whether that frame can be discarded if congestion is encountered during transmission. - Virtual circuit identifier - Data-link connection identifiers (DLCIs) for PVCs -- Although DLCI values can be 10, 16, or 23 bits in length, 10-bit DLCIs have become the de facto standard for Frame Relay WAN implementations. - X.121/E.164 addressing for SVCs -- X.121 is a hierarchical addressing scheme that was originally designed to number X.25 DTEs. E.164 is a hierarchical global telecommunications numbering plan, similar to the North American Number Plan (NANP, 1-NPA-Nxx-xxxx). Table 15-17: Summary of Network Topology Formulae Note: N is the number of locations |Fully meshed||[(N (N - 1)) / 2]| |Partial-mesh||(Approximation) [N2 / (N - 1)]| (Guideline) [((N (N - 1)) / 2) X (N - 1)] |Hub-and-Spoke||[N - 1]| Local Management Interface (LMI) is a set of enhancements to the basic Frame Relay specification. LMI includes support for keepalive mechanisms, verifying the flow of data; multicast mechanisms, providing the network server with local and multicast DLCI information; global addressing, giving DLCIs global rather than local significance; and status mechanisms, providing ongoing status reports on the switch-known DLCIs. Three types of LMI are found in Frame Relay network implementations: - ANSI T1.617 (Annex D) -- The maximum number of connections (PVCs) supported is limited to 976. LMI type ANSI T1.627 (Annex D) uses DLCI 0 to carry local (link) management information. - ITU-T Q.933 (Annex A) -- Like LMI type Annex-D, the maximum number of connections (PVCs) supported is limited to 976. LMI type ITU-T Q.933 (Annex A) also uses DLCI 0 to carry local (link) management information. - LMI (Original) -- The maximum number of connections (PVCs) supported is limited to 992. LMI type LMI uses DLCI 1023 to carry local (link) management information. - TCP/IP Suite - Novell IPX Suite - IBM SNA Suite - Voice over Frame Relay (VoFr) Novell IPX implementations over Frame Relay are similar to IP network implementation. Whereas a TCP/IP implementation would require the mapping of Layer 3 IP addresses to a DLCI, Novell IPX implementations require the mapping of the Layer 3 IPX addresses to a DLCI. Special consideration needs to be made with IPX over Frame Relay implementations regarding the impact of Novell RIP and SAP message traffic to a Frame Relay internetwork. Migration of a legacy SNA network from a point-to-point infrastructure to a more economical and manageable Frame Relay infrastructure is attractive; however, some challenges exist when SNA traffic is sent across Frame Relay connections. IBM SNA was designed to operate across reliable communication links that support predictable response times. The challenge that arises with Frame Relay network implementations is that Frame Relay service tends to have unpredictable and variable response times, for which SNA was not designed to interoperate or able to manage within its traditional design. Voice over Frame Relay (VoFr) has recently enjoyed the general acceptance of any efficient and cost-effective technology. 
In the traditional plain old telephone service (POTS) network, a conventional (with no compression) voice call is encoded, as defined by the ITU pulse code modulation (PCM) standard, and utilizes 64 kbps of bandwidth. Several compression methods have been developed and deployed that reduce the bandwidth required by a voice call to as little as 4 kbps, thereby allowing more voice calls to be carried over a single Frame Relay serial interface (or subinterface PVC). That concludes our serialization of Chapter 15 from Cisco Press' Network Consultants Handbook.
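Two of the back-of-the-envelope calculations in this chapter summary translate directly into a short script: the virtual-circuit counts in Table 15-17 and the number of voice calls that fit on a given amount of bandwidth. The sketch below is illustrative only; the 10-site network and the 256 kbps figure are assumed examples, and real VoFr sizing must also account for framing, signaling and other overhead.

def full_mesh_pvcs(n: int) -> int:
    """Fully meshed topology: N(N - 1)/2 virtual circuits (Table 15-17)."""
    return n * (n - 1) // 2

def hub_and_spoke_pvcs(n: int) -> int:
    """Hub-and-spoke topology: N - 1 virtual circuits (Table 15-17)."""
    return n - 1

def voice_calls(bandwidth_kbps: float, per_call_kbps: float) -> int:
    """How many voice calls fit in the given bandwidth, ignoring overhead."""
    return int(bandwidth_kbps // per_call_kbps)

sites = 10
print(full_mesh_pvcs(sites), hub_and_spoke_pvcs(sites))   # 45 PVCs versus 9 PVCs
print(voice_calls(256, 64), voice_calls(256, 4))          # 4 PCM calls versus 64 at 4 kbps

The first comparison shows why hub-and-spoke designs are so much cheaper to provision than full meshes as the site count grows; the second shows why compression from 64 kbps PCM down toward 4 kbps makes VoFr attractive on modest-bandwidth circuits.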
<urn:uuid:b4cdf966-9939-4f7f-816c-0c78a00c50f8>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/974881/Frame-Relay--Summary.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00487-ip-10-171-10-108.ec2.internal.warc.gz
en
0.872137
1,663
2.96875
3
Internet Addiction Might Become a Diagnosis In recent months, the awareness about people unable to connect from technology for even a short period of time has gotten a lot of attention. Last November, The New York Times ran an article on a Korean boot camp to cure kids of their computer addiction. South Korea, a country where 90 percent of homes are connected to the Web, feels that it has a responsibility to deal with the effects of this, holding the first international symposium on Internet addiction in September. "Korea has been most aggressive in embracing the Internet," said Koh Young-sam, head of the government-run Internet Addiction Counseling Center told the Times. "Now we have to lead in dealing with its consequences." U.S. psychiatrists appear to be taking Internet addiction more seriously as well, proposing that this "compulsive-impulsive" disorder be added to the next release of its Diagnostic and Statistical Manual of Mental Disorders, the DSM-V in 2011. "Internet addiction appears to be a common disorder that merits inclusion in the DSM-V," wrote Dr. Jerald Block, a psychiatrist at Oregon Health and Science University in the March issue of the American Journal of Psychiatry, agrees. Block argued that internet addiction shared four components with other compulsive-impulsive disorders, including excessive use, often associated with a loss of sense of time or neglect of basic drives; withdrawal, including feelings of anger, tension or depression when away from the computer; tolerance, including the need for better computer equipment, more software and more hours of use; and negative repercussions, including arguments, lying, poor achievement and social isolation. Block notes that South Korea already considers this a serious public health issue, as does China, which as of March 13, had surpassed the U.S. in its number of Internet users. In 2007, China began restricting computer game use, discouraging more than 3 hours each day. A study suggests that the U.S. isn't very far behind. According to a study by the Solutions Research Group, 68 percent of Americans feel anxious when they're not connected in one way or another, and this "disconnect anxiety" causes feelings of disorientation and nervousness when a person is deprived of Internet or wireless for a period of time.
<urn:uuid:1b8d3c35-b3ad-4c30-a4c4-9ca5d12c0925>
CC-MAIN-2017-09
http://www.eweek.com/careers/internet-addiction-might-become-a-diagnosis-1.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00131-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955317
463
2.984375
3
There’s an old IT joke that blames many common computer glitches on PEBKAC: Problem Exists Between Keyboard And Chair In other words, users are often the source of major technology issues, not the systems themselves. This old saw is doubly true in the world of IT security. User error, mishandling of data, and failure to follow security policies often end up creating the gaps that attackers use to breach a company’s or a government’s systems. Despite a perpetual focus by the security industry to improve user awareness and security training, recent studies and reports show that humans remain the weakest link in IT security. Phishing Season Never Ends By far the most common attack vector leading to major data breaches is the theft of user credentials via phishing. Fully 91% of attacks begin with a phishing email, according to a recent report by vendor PhishMe cited in Dark Reading. An employee is duped into clicking on a link in an email that looks like a legitimate offer from a business he or she frequents. Or it seems to come from a bank or other credible institution. But in clicking on the link, the user’s device is infected with malware, or the user is prompted to enter access credentials, and the breach begins. Once in possession of the employee’s log-in credentials for the enterprise systems, the attacker masquerades as the user and passes right through typical security controls into the “trusted” interior networks and systems. Phishing on Steroids In years past, it was not difficult for the typical user to see through a phishing attack after getting a little training. Poorly written phishing emails, unprofessional formats, and odd looking links or attachments were the fingerprints of attackers in those early phishing emails. But the phishing vector has reached a new level of dangerous sophistication and effectiveness with the latest incarnations. One recently documented attack mimics a Google Gmail log-in page to near perfection. And of course spearphishing is proving able to fool many victims. By engaging in some intel gathering and reconnaissance, the attacker carefully crafts an attack email that looks like it comes from your boss or your bank. The breach of the US Democratic National Committee was perpetrated by attackers with spearphishing campaigns. The US Department of Homeland Security and the Federal Bureau of Investigation detailed these campaigns in a report outlining the attack vectors believed to be used by Russia-backed attackers targeting the DNC. Zero Choice but to use Zero Trust The security industry has long advocated improving user training to help mitigate the human attack vector. But if even a single user absentmindedly clicks on a single phishing email, all bets are off. In even a small organization, what are the odds that this will never happen? There is no other choice but to embrace the notion that any user might already be compromised. In fact, a “trusted” user accessing your systems right now may be an attacker in disguise. That is the basis of the “Zero Trust” model of security. Old security models assume your LAN or internal networks are “trusted” because they are protected by a firewalled perimeter. But Zero Trust does away with the notion of implicit trust. Instead, it is assumed that any user or device on the network could be compromised. Given that, how do you protect your valuable digital assets? 
Deploying Zero Trust means recognizing that traditional perimeter and infrastructure-based security cannot stop an attacker that has compromised a user and is disguised as that user. So tools in the Zero Trust arsenal include enforcing role-based access control across all users. This limits attackers’ ability to move laterally through the enterprise systems once they have gained access via a compromised user. Another technique is to deploy end-to-end encryption of data communications flows, even on internal networks and in environments that traditionally would be considered to be trusted. Coming Soon to an Enterprise Near You But now enterprises and government entities large and small are embracing the concept. Google urges enterprises to adopt Zero Trust security. In the wake of devastating federal department breaches, the US Federal Government is also going Zero Trust. Increased risk, financial impact, penalties and awareness will lead enterprise security architects to abandon the obsolete notion that any network or system can be implicitly trusted. Analysis of the mega-breaches of recent years indicates that Zero Trust could have mitigated and in some cases completely prevented what turned into cybersecurity catastrophes. And the stakes have never been higher, such as with the new General Data Protection Regulation privacy rules in Europe. The GDPR will result in fines in the many millions of Euros for companies that fail to protect consumer data. As more companies embrace Zero Trust, they will become hardened targets, no longer easy pickings for attackers going phishing. It’s certain that attackers will then work to uncover new exploits. But no matter what attack emerges next, Zero Trust should keep enterprises and governments safer than did the old fashioned, implicit trust alternatives.
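One of the Zero Trust tools mentioned above is role-based access control enforced on every request, with nothing granted implicitly just because a user appears to be "internal." The sketch below is a deliberately simplified, deny-by-default policy check in Python; the roles, resources and API shape are invented for illustration and are not drawn from any particular product or from the sources cited in this article.

# Deny-by-default, role-based access check: every request is evaluated
# against an explicit policy, and anything not granted is refused.
POLICY = {
    ("analyst", "reports"): {"read"},
    ("admin",   "reports"): {"read", "write"},
    ("admin",   "user-db"): {"read", "write"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True only if the policy explicitly grants this action."""
    return action in POLICY.get((role, resource), set())

# A compromised "trusted" account still cannot move laterally beyond its role.
print(is_allowed("analyst", "reports", "read"))   # True
print(is_allowed("analyst", "user-db", "read"))   # False: no explicit grant
print(is_allowed("intern",  "reports", "read"))   # False: unknown role

The point of the pattern is the default: an attacker who phishes an analyst's credentials gets only what the analyst role was explicitly granted, rather than the run of a "trusted" internal network.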
<urn:uuid:56b77ec1-ddd1-40b5-8f73-1f6b43eb0c41>
CC-MAIN-2017-09
https://certesnetworks.com/phishing-means-time-zero-trust/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00131-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940951
1,038
2.53125
3
Internet on a chip: MIT team aims for massively multicore processors - By Henry Kenyon - Apr 16, 2012 Researchers at the Massachusetts Institute of Technology are developing methods for multiple processors on a microchip to share data, working in a manner similar to the Internet. If it is successful, multi-processor chips will be able to operate more efficiently and effectively, allowing engineers to greatly increase the number of processors per chip. Current chip designs could have six to eight processors, or cores, communicating with each other via a bus. The only problem with this design is that only one pair of cores can communicate at a time, which is a major hurdle for future computing designs, which call for scaling up chips to have hundreds or thousands of cores. Single-atom transistor: The future of computing? IBM's experimental chip 'thinks' like a human brain An MIT team, led by associate professor of electrical engineering and computer science Li-Shiuan Peh, is working on ways to make cores share information by bundling them into packets. Each core would have a router and would send packets by any of several paths, depending on network conditions, MIT News reports. In a paper to be presented at the Design Automation Conference in June, Peh and her colleagues will discuss the theoretical limits of the efficiency of packet-switched on-chip communication networks based on measurements they conducted on a test chip operating close to those limits. Multicore chips run faster than single processor chips by dividing computing jobs among several cores simultaneously. These teamed cores sometimes need to share data, but the core count on existing chips has been low enough that only a single bus was needed to handle the load. However, this bus design has reached its limit, Peh said in a statement. Buses require a lot of power to push data along the wires linking eight to 10 cores, Peh said. Her proposed design has cores communicating with only the four adjacent cores, which allows for shorter wires and lower voltage needs. Another challenge facing on-chip networks is that inter-core data packets have to stop at every router on the chip. If two packets simultaneously arrive at the same router, one must be stored in memory while the other is processed, Peh said, adding that many industry engineers fear that any advantages offered by packet switching will be offset by the added complexity of the system. To address these concerns, Peh and her team have developed two techniques: virtual bypassing and low-swing signaling. In traditional Internet packet routing, routers inspect each arriving packet’s addressing information before determining which path to send it on. In virtual bypassing, routers send an advance signal to the next router on the path, presetting it and allowing the packet to speed along without any added processing. In the test chip Peh's team developed, she said, virtual bypassing allowed it to run at speeds very close to the maximum predicted rates. For low-swing signaling, the team developed a circuit that reduced the shift between high and low voltages from one volt to 300 millivolts. When data is transmitted over a network, it moves as high and low voltages. By using virtual bypassing and low-swing signaling, the MIT test chip used 38 percent less energy than other packet-switched test chips. 
More work is needed before the test chip’s power use gets as close to the theoretical limit as its data rate does, but Peh said it is still “orders of magnitude” more efficient than a bus in power consumption.
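In the mesh described above, each core talks only to its four neighbors, and a packet hops router by router toward its destination instead of contending for one shared bus. The sketch below illustrates that routing idea with simple dimension-ordered ("X then Y") routing on a grid; it is a conceptual illustration only, with an assumed 8x8 mesh, and does not model the virtual bypassing or low-swing signaling circuits in the MIT chip.

def xy_route(src, dst):
    """Dimension-ordered routing on a 2D mesh: move along X first, then Y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                  # each step goes to an adjacent core only
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# A packet crossing an assumed 8x8 mesh of cores from corner to corner:
hops = xy_route((0, 0), (7, 7))
print(hops)            # the sequence of routers visited
print(len(hops) - 1)   # 14 short hops, versus one long shared bus spanning all 64 cores

The short, neighbor-to-neighbor wires implied by this hop-by-hop scheme are exactly what let Peh's design use shorter wires and lower voltages than a chip-wide bus.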
<urn:uuid:0fba5b54-0936-473c-9c17-06be08b41b8b>
CC-MAIN-2017-09
https://gcn.com/articles/2012/04/16/mit-internet-on-a-chip-multicore-data-sharing.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00131-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949658
730
3.15625
3
Lack of Binary Protection is the last risk in the OWASP Mobile Top 10. Android applications are delivered as an .apk file, which an adversary can reverse engineer to see all the code contained in it. Below are scenarios of reverse engineering an application: - An adversary can analyze and determine which defensive measures are implemented in the app and also find a way to bypass those mechanisms. - An adversary can also insert malicious code, recompile the app and deliver it to normal users. - For example, gaming apps that have some features unlocked are widely downloaded by youngsters through insecure sources (sometimes through Google Play Store as well). Most of those modified apps contain malware and some contain advertising to gain profit from those users. An adversary is the only threat agent in this case. Below is a demonstration of reverse engineering an app. I will use the Sieve app for the demonstration. First, use dex2jar to convert the .apk file into a .jar file. Then open up the jar file in JD-GUI. If you followed the above steps, you will be able to see the code of the Sieve app. How To Fix Application code can be obfuscated with the help of ProGuard, but obfuscation is only able to slow down an adversary; it does not prevent reverse engineering of an Android application. You can learn more about ProGuard here. For security-conscious applications, DexGuard can be used. DexGuard is a commercial version of ProGuard. Besides encrypting classes, strings and native libraries, it also adds tamper detection to let your application react accordingly if a hacker has tried to modify it or is accessing it illegitimately.
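The demonstration above runs dex2jar and JD-GUI by hand. Below is a minimal Python sketch that scripts the conversion step and then does a naive check for descriptively named classes surviving in the jar (one rough sign that ProGuard-style obfuscation was not applied). The file names are placeholders, the dex2jar wrapper name and -o flag may differ between dex2jar versions, the class-name heuristic is only illustrative, and JD-GUI itself remains a GUI tool you would still open manually.

import subprocess
import zipfile

APK = "sieve.apk"          # assumed local file name
JAR = "sieve-dex2jar.jar"  # assumed output name

# Step 1: convert the APK's Dalvik bytecode to a jar.
# Requires dex2jar to be installed and on PATH; flags may vary by version.
subprocess.run(["d2j-dex2jar", APK, "-o", JAR], check=True)

# Step 2: crude, illustrative signal of missing obfuscation -- long, readable
# class names left in the jar (ProGuard typically shortens these to a, b, c...).
with zipfile.ZipFile(JAR) as jar:
    readable = [n for n in jar.namelist()
                if n.endswith(".class") and len(n.rsplit("/", 1)[-1]) > 8]
print(f"{len(readable)} descriptively named classes found, e.g. {readable[:5]}")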
<urn:uuid:9965b262-7b1d-4d76-a01d-23df0cdbfbb3>
CC-MAIN-2017-09
https://manifestsecurity.com/android-application-security-part-9/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00479-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921229
350
2.640625
3
If you're worried about an Internet "fast lane" squeezing out all the futuristic connected devices you're hoping to use around your home, fear not. The vaunted Internet of Things, which already includes a variety of industrial sensors and machines and a growing number of consumer devices, is likely to make itself more at home in the coming years. Some such devices, like the connected refrigerator, are still more curiosity than useful tool. But others are playing important roles in health care and home security, taking advantage of always-on broadband connections to keep people and machines elsewhere informed in real time. U.S. net neutrality advocates' main concern is that Internet service providers will sell priority delivery on their networks to the highest bidder, then squeeze out all the traffic from the users that don't pay. Whether that can or will happen is a matter of fierce debate. But whatever websites or services might get left behind in a paid-priority world, the Internet of Things demands so little bandwidth that it should get away without a scratch, according to people on both sides of the debate. The question of IoT and net neutrality is likely to revolve mostly around connected devices that use home broadband connections. For IoT gear on power grids or industrial sites, enterprises can already buy special plans with guaranteed quality of service. In addition, cellular networks have different capacity issues from cable and DSL (digital subscriber line) and have been treated differently in terms of net neutrality. The FCC expects them to stay separate, though it's asked whether the two should be lumped together. IoT hasn't escaped the debate entirely. On an early conference call to discuss its new proposal to ensure a so-called open Internet, the U.S. Federal Communications Commission cited connected heart monitors as one technology that anyone might agree deserves priority. Broadband for America, a group generally opposed to any new net neutrality rules, also told a press briefing on Friday that health-care needs might justify prioritization. "When one thinks of it in terms of real-time monitoring of pacemakers ... if you think about network management and prioritization, I think there would be support," said Harold Ford, Broadband for America's co-chairman. However, people involved in the IoT device and services business said they don't see a need for priority traffic handling now, and it hasn't been a hot topic in the industry. Most uses of IoT are built around small, power-efficient devices, often powered by batteries and using a relatively slow wireless connection such as Bluetooth or ZigBee. They exchange small bits of information such as the current temperature, the operating condition of a machine, or whether an elderly person has moved from room to room in the past few hours. As game-changing as those applications may be, the bits they generate won't add up to enough traffic to butt heads with the likes of Netflix streams and peer-to-peer file sharing -- at least not yet. "The amount of traffic from Internet of Things compared to the amount of traffic that's created by you and me just surfing the Web ... is like less than 1 percent," said analyst Steve Hilton of IoT consultancy Machnation. Even if consumers' broadband speeds were affected by a paid-priority scheme, it probably wouldn't get bad enough to hurt IoT, said Tom Lee, co-founder of IoT cloud provider Ayla Networks. 
"If it's good enough to satisfy most Netflix consumers, it almost automatically satisfies the needs of the IoT things," Lee said. Some home IoT applications do have some basic performance requirements. For example, when keeping track of a patient's health from home or detecting a break-in, users might want to make sure there are no delays. The predecessor of such emergency services, the 911 phone call, already gets priority on carriers' networks. IoT is opening up a new world of alerts that go beyond 911, but for now at least, the new types of services aren't demanding special treatment. IoT devices and software have built-in mechanisms for making sure they get messages out. Those include protocols for falling back to another carrier or another form of communication, such as SMS (short message service) if the usual method fails, said Daniel Collins, chief technology officer of Jasper Technologies. Jasper is a SaaS (software-as-a-service) provider for IoT infrastructure. "What I haven't seen yet is where companies are saying, 'Because my application is some kind of a life-or-death application, that therefore my traffic should get some priority treatment over other traffic," Collins said. But if providers of connected-health services are allowed to pay for priority, they probably will, Hilton said. And though there may be objections to it, prioritizing those narrow streams of traffic probably wouldn't affect anything else consumers are trying to do, he said. Paid prioritization could help IoT, like other services on the network, achieve the right performance if they had special requirements, said Doug Brake, a telecommunications policy analyst at the Information Technology and Innovation Foundation. But he doesn't expect regulators to carve out exceptions for certain services deemed more deserving. Whether a Fitbit or a life-saving alert bracelet, IoT would probably have the same right to paid priority as an online game, Brake said. AT&T, which on its wireless network lets content providers cover the cost of delivering their data to consumers' phones, indicated that it's committed to keeping the Internet open to all types of services. "AT&T has built its broadband business, both wired and wireless, on the principle of Internet openness. That is what our customers rightly expect, and it is what our company will continue to deliver," the company said in a written statement. An opponent of paid priority said it isn't needed because there's enough bandwidth for all services now. Instead, ISPs have floated the idea, and fear of congestion, for their own benefit and profit, said Derek Turner, research director at the consumer advocacy group Free Press. "The first thing you're going to see prioritized is the ISP's own affiliated content," such as video and voice calls, Turner said. Then the ISPs will sell priority to a few other companies with deep pockets, he said. "If you break down the basic economic and engineering reality of this, this isn't about grandma's heart monitor," Turner said. "This is about the big ISPs who face very little competition tapping into an additional revenue stream." Naturally, the picture could change if IoT in areas such as remote medical care grows more complex and demanding, said Ayla's Lee, a former researcher at the Defense Advanced Research Projects Agency. "The future tends to happen a lot faster than we expect," Lee said.
<urn:uuid:2c5e46d8-5ad2-4f39-a1d2-188acd3de24b>
CC-MAIN-2017-09
http://www.cio.com/article/2376191/internet/bandwidth-sipping-iot-steers-clear-of-net-neutrality-debate----for-now.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00003-ip-10-171-10-108.ec2.internal.warc.gz
en
0.966056
1,373
2.53125
3
Massachusetts Institute of Technology researchers have developed a device that can see through walls and pinpoint a person with incredible accuracy. They call it the "Kinect of the future," after Microsoft's Xbox 360 motion-sensing camera. Shown publicly this week for the first time, the project from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used three radio antennas spaced about a meter apart and pointed at a wall. A desk cluttered with wires and circuits generated and interpreted the radio waves. On the other side of the wall a single person walked around the room and the system represented that person as a red dot on a computer screen. The system tracked the movements with an accuracy of plus or minus 10 centimeters, which is about the width of an adult hand. See the system in action in a video on YouTube. Fadel Adib, a Ph.D. student on the project, said that gaming could be one use for the technology, but that localization is also very important. He said that Wi-Fi localization, or determining someone's position based on Wi-Fi, requires the user to hold a transmitter, like a smartphone for example. "What we're doing here is localization through a wall without requiring you to hold any transmitter or receiver [and] simply by using reflections off a human body," he said. "What is impressive is that our accuracy is higher than even state of the art Wi-Fi localization." He said that he hopes further iterations of the project will offer a real-time silhouette rather than just a red dot. In the room where users walked around there was white tape on the floor in a circular design. The tape on the floor was also in the virtual representation of the room on the computer screen. It wasn't being used as an aid to the technology; rather, it showed onlookers just how accurate the system was. As testers walked on the floor design their actions were mirrored on the computer screen. One of the drawbacks of the system is that it can only track one moving person at a time and the area around the project needs to be completely free of movement. That meant that when the group wanted to test the system they would need to leave the room with the transmitters as well as the surrounding area; only the person being tracked could be nearby. At the CSAIL facility the researchers had the system set up between two offices, which shared an interior wall. In order to operate it, onlookers needed to stand about a meter or two outside of both of the offices so as not to create interference for the system. The system can only track one person at a time, but that doesn't mean two people can't be in the same room at once. As long as one person is relatively still the system will only track the person that is moving. The group is working on making the system even more precise. "We now have an initial algorithm that can tell us if a person is just standing and breathing," Adib said. He was also able to show how raising an arm could also be tracked using radio signals. The red dot would move just slightly to the side where the arm was raised. Adib also said that unlike previous versions of the project that used Wi-Fi, the new system allows for 3D tracking and could be useful in telling when someone has fallen at home. The system now is quite bulky. It takes up an entire desk that is strewn with wires and then there's also the space used by the antennas. "We can put a lot of work into miniaturizing the hardware," said researcher Zach Kabelac, a master's student at MIT.
He said that the antennas don't need to be as far apart as they are now. "We can actually bring these closer together to the size of a Kinect [sensor] or possibly smaller," he said. That would mean that the system would "lose a little bit of accuracy," but that it would be minimal. The researchers filed a patent this week and while there are no immediate plans for commercialization the team members were speaking with representatives from major wireless and component companies during the CSAIL open house.
New York: A Global City

Syllabus topic
THEME: The dynamics of globalization
QUESTION: Territories in globalization

Lesson Plan
Introduction
I. An attractive city
A. Transport in NYC
B. A diverse population
C. Tourism and Education
II. A world power
A. The financial capital
B. Politics and diplomacy
C. Culture
III. Challenges for today and for tomorrow
A. Transforming urban areas
B. A divided city

Introduction

What is a global city? Saskia Sassen literally wrote the book on global cities back in 2001 (though her global cities work dates back well over a decade prior to that book). In short form, in the age of globalization, the activities of production are scattered on a global basis. These complex, globalized production networks require new forms of financial and producer services to manage them. These services are often complex and require highly specialized skills. In this world, then, a global city is a significant production point of specialized financial and producer services that make the globalized economy run. Sassen covered specifically New York, London, and Tokyo in her book, but there are many more global cities than this. A number of studies were undertaken to produce various rankings. However, when you look at them, you see that the definition of global city used is far broader than Sassen's core version - these rankings attempt to look at global cities in four basic ways:
1. Advanced producer of services
2. Economic giants
3. International gateway
4. Political and cultural hub
From www.newgeography.com

New York City facts
1624: first Dutch settlement
1674: New York City returned to the English and remained English. The city's commercial ties to London gave it an advantage over other American cities
1883: opening of the Brooklyn Bridge. Manhattan and Brooklyn became a single city of 3.4 million people over an area of 359 square miles
1895: the metropolis had 298 firms with assets of $1 million
1921: the port was merged with that of New Jersey to create a single Port Authority
1932: New York's governor, Franklin D. Roosevelt, was elected president and his administration launched a New Deal; New York City alone received $1 billion between 1933 and 1939
1934-1945: mayoralty of Fiorello La Guardia: major bridges, sixty miles of intracity expressway, a traffic tunnel for the East River, additions to subway lines
1939: opening of La Guardia airport and 14 new piers added to the port
1930, 1931, 1939: Chrysler Building, Empire State Building, Rockefeller Center
1945: United Nations established in New York City
1947-1963: massive construction boom, addition of 58 million square feet of office space
1955: 7.8 million people
1960s: race riots
1970s: the city experienced near bankruptcy
1985: 6 of the big 8 accounting firms and 7 of the top 10 management consulting agencies were in New York City
1988: the metropolitan region reached 18 million people; the central city shrunk to 7.3 million
1990s: NYSE remained the world's largest capital market
2001: terrorist attack, a large part of downtown is destroyed
Source: Christopher KENNEDY, The Evolution of Great Cities.
Urban Wealth and Economic Growth. 2011. Pages 26-29.

Transport in New York City

Port Authority Facilities: John F. Kennedy Airport is the busiest international air passenger gateway in the United States. Over seventy airlines operate out of the airport, with non-stop or direct flights to destinations in all six inhabited continents. The state-of-the-art World Trade Center Transportation Hub, when completed in 2015, will serve over 200,000 daily commuters and millions of annual visitors from around the world. At approximately 800,000 square feet, the Hub, designed by internationally acclaimed architect Santiago Calatrava, will be the third largest transportation center in New York City, rivaling Grand Central Station in size. The WTC Transportation Hub's concourse will conveniently connect visitors to 11 different subway lines; it will represent the most integrated network of underground pedestrian connections in New York City. The Hub features an "Oculus" design, which will give the facility a distinctive, wing-like appearance. When completed, the "Oculus," the upper portion of the Transportation Hub, will serve as the main concourse. Incorporating 225,000 square feet of exciting, multi-level retail and restaurant space along all concourses, the Hub promises to be a destination location, becoming the centerpiece for the entire Lower Manhattan district. When complete, this structure will reach five stories underground into a basement with connecting ramps leading to the parking and below-grade facilities of all of the adjacent projects on the 16-acre WTC site.
From http://www.panynj.gov/wtcprogress/transportation-hub.html

Who are the New Yorkers?

New York City has historically attracted job seekers from outside the United States. In 2013, foreign-born immigrants accounted for 42.7% of all New York City workers, 37.0% of all New York City residents, and 46.0% of the New York City labor force. As of 2013, over 1.9 million immigrants to the United States worked in New York City; of those, 1.6 million lived in New York City. In the last decade, the growth of the foreign-born population, at 7 percent, outpaced the growth of the city's overall population, at 3 percent. At the same time, the origins of New York City's immigrant population are changing. Immigration from Europe has fallen dramatically as a proportion of overall immigration to New York, while Latin America has surged to the top spot, followed closely by Asia.

Tourism and Education

City officials estimate the overall economic impact of tourism in 2013 to be $58.7 billion. Direct visitor spending was estimated to be $39.4 billion. With seven universities in New York featured within the QS World University Rankings 2015/16 and an additional three in close proximity to the city, there's good reason why New York City is one of the most popular study destinations in the world, ranked 15th in the QS Best Student Cities 2015.
1. Cornell University: New York City's highest ranking institution, Cornell University is currently ranked 19th in the world. A member of the prestigious Ivy League group, Cornell University's main campus is actually in Ithaca, around 200 miles to the north-west of New York City, but it also has a strong presence in NYC.
2. Columbia University: Currently stands in 22nd place in the QS World University Rankings. Another member of the prestigious Ivy League, Columbia University has a central location in the Upper West Side of Manhattan.
It boasts a highly diverse faculty and student body (just under 30,000 students overall), with more than 7,000 international students from over 150 different countries.
3. New York University (NYU): Ranked among the world's best, at 53rd this year. Notably, New York University has a strong focus on internationalization. Its main hub is its Washington Square campus, in Greenwich Village. This area is one of New York City's most creative neighborhoods, and over the years the school has attracted an eclectic mix of writers, artists, musicians and intellectuals.
From www.topuniversities.com

Economic Power

2014 saw the New York Stock Exchange lead the world's markets in global capital raising for the fourth consecutive year. Fueled in part by the largest IPO in history - that of Alibaba, which raised $25 billion - it was a landmark year for the NYSE across a range of industry categories, including the number and value of tech IPOs. For the fourth year in a row, NYSE led in capital raised at more than $70 billion, and for the third year in a row led in tech IPOs with $29 billion in proceeds. Today, NYSE-listed companies account for $27 trillion in market capitalization, representing the most valuable listed franchise in the world.
From www.nyse.com

For the past 61 years, Fortune Magazine has been ranking the top 1000 companies in the United States based on revenues for the latest respective fiscal years for each company. The 2015 list shows there are a few areas of the country where Fortune 1000 companies are clustered. The biggest cluster is a corridor along the East Coast: stretching from Boston, Ma. to Norfolk, Va., 267 Fortune 1000 companies are headquartered in this nine-state area of the East Coast. New York City is home to the most headquarters with 72, followed by Houston (49), Atlanta (22), Chicago (22), and Dallas (15).

Political Power

Shortly after the establishment of the United Nations in 1945, the U.S. government negotiated the Agreement Between the United Nations and the United States Regarding the Headquarters of the United Nations (1947), which established the specific geography of the U.N. "headquarters district" as the property on the East River where the 38-floor U.N. tower is located, along with an easement over Franklin D. Roosevelt Drive. In four subsequent supplemental agreements (completed in 1966, 1969, 1980, and 2009) - most recently under the Obama administration in 2009 - the headquarters district of the U.N. has expanded significantly and now encompasses entire buildings and warehouses in New York and Long Island beyond the original U.N. headquarters building in Turtle Bay. As part of the U.N. headquarters district, these locations, which in some cases are simply floors and offices in commercial buildings, are "inviolable" to U.S. officers and officials and "under the control and authority of the United Nations" except as specified in the agreement.
From Heritage.org

Cultural Power

Chinese Investors Star on Broadway
Who's the latest behind-the-scenes investor on Broadway? China. Three of the hottest musicals on Broadway have Chinese backers as China starts expanding live theatrical entertainment at home and looks to New York for expertise. "This is the first season that Chinese companies are investing on Broadway," said Simone Genatt, chairman of Broadway Asia, a New York-based production and licensing entertainment company primarily focused on Asia.
"They've been doing Broadway musicals in mainland China for the last decade, but this is the first time China is here in New York." The New York investments are part of a broader push to expand musical theater inside China. Big Broadway shows such as "Cats" and "The Sound of Music" have been touring China for years. In a next step, "Cats" and "Mamma Mia" have been translated into Chinese. Chinese companies say they're hoping to leverage their stakes in Broadway shows to gain expertise in U.S. productions, bring shows to China and eventually develop more original Chinese-produced musicals.
From the Wall Street Journal, June 4, 2015.

New York City, a vibrant cultural scene

INCREASED GROWTH OF NEW YORK CITY'S ENTERTAINMENT INDUSTRY (October 15, 2015)
Mayor Bill de Blasio, Deputy Mayor Alicia Glen, and Media & Entertainment Commissioner Cynthia López today announced that New York City's filmed entertainment industry now contributes $8.7 billion to the local economy, an increase of more than $1.5 billion, or 21 percent, since 2011. According to an independent study conducted by the Boston Consulting Group (BCG), New York City is one of only three cities in the world with a filming community large enough to enable a production to be made without needing any roles to be brought in from other locations, including cast, crew members, and the creative team. Additionally, a rich real-life history, iconic locations, diverse storytellers and top talent are among the reasons productions choose to film in New York City. While television has seen the greatest increase (from 29 series in the 2013-2014 season to a record 46 series in the 2014-2015 season), New York City was home to 242 film productions in 2014 and, as of this month, 256 films have been shot so far in 2015. "There's something special about New York City - and the TV and film industry has picked up on it. The filmed entertainment industry channels nearly 9 billion dollars into our local economy each year, supporting the creation of thousands of dependable good-paying jobs and showcasing the history, creativity and vivacity of our people and our city," said Mayor Bill de Blasio.
From www.nyc.gov

Transforming urban areas

Reshaping the Financial District after 9/11
"Because of those buildings being attacked, there was an outpouring of awareness and generosity and people wanting to help rebuild, coupled with an openness to other cultural influences. The architectural scene in the US, which once was probably led by architects in Los Angeles and other places in the west, has returned once again to the east coast. I'm not saying there aren't any good architects on the west coast, but there's a tremendous concentration of architects in New York City now that haven't been here since the turn of the 20th century." Interview of architect Craig Dykers of Snøhetta, who designed a recently opened pavilion on the memorial site. (www.dezeen.com)

Two examples of gentrification: The Meat Packing District between 1985 and 2015

Brooklyn's Hipster Heaven
All the people waiting on the L train platform are in their 20s and 30s and have full-body tattoos, piercings and funny hairstyles. They're going to Williamsburg, a neighborhood where they've created, next to Latino and Hassidic communities, a community of their own. Williamsburg, one subway stop into Brooklyn, has turned into a neighborhood of artists, students and people who go out at night.
They demand good food at fair prices and, above all, think they are different from the sophisticated, arrogant, money-driven Manhattanites. It's peaceful, with trees on both sides of the street and not a megastore in sight. Williamsburg looks like a village, with its own style, pace and rules (especially, "be cool"). It's all about modesty and conviviality. The avenue is the center of it all. There are Italian, Mexican and macrobiotic restaurants mingled with bagel, thrift and antiques shops. People stroll calmly down the sidewalks, often followed by a dog or bike. (from journalism.nyu.edu)

Transforming abandoned buildings into trendy bars (2007-2009)

A tale of two cities

A report by the city comptroller's office found an alarming rise in the share of overcrowded housing units from 2005 to 2013. Here's a press release on the implications of this finding: "Studies make it clear that crowding hurts the whole family, it makes it harder for kids to learn and puts the entire family at a greater risk of homelessness. This new report shows that the problem of crowding is stubbornly increasing, with nearly 1.5 million New Yorkers now living in a crowded (2 people in a studio) or severely crowded home (three or more people in a studio)."

New York City has prospered during the 12-year mayoralty of Michael Bloomberg, which comes to an end this year. But the same cannot be said of all New Yorkers. In January 2013, for the first time in recorded history, the New York homeless shelter system housed an average nightly population of more than 50,000 people. That number is up 19 percent in the past year alone, up 61 percent since Bloomberg took office, and it does not include victims of Hurricane Sandy, who are housed separately. While homelessness is increasing in other cities, the numbers from New York are astoundingly high. This January, on average, over 21,000 children slept in city shelters each night, a 22 percent increase over the same period in 2011. More than one percent of NYC children (21,034 of 1,780,000) slept in a shelter this January.
From www.citylab.com, March 8, 2013.
Computer clouds have been credited with making the workplace more efficient and giving consumers anytime-anywhere access to emails, photos, documents and music, as well as helping companies crunch through masses of data to gain business intelligence. Now it looks like the cloud might help cure cancer too.

The National Cancer Institute plans to sponsor three pilot computer clouds filled with genomic cancer information that researchers across the country will be able to access remotely and mine for information. The program is based on a simple revelation, George Komatsoulis, interim director and chief information officer of the National Cancer Institute's Center for Biomedical Informatics and Information Technology, told Nextgov. It turns out the gross physiological characteristics we typically use to describe cancer -- a tumor's size and its location in the body -- often say less about the disease's true character and the best course of treatment than genomic data buried deep in the cancer's DNA.

That's sort of like saying you're probably more similar to your cousin than to your neighbor, even though you live in New York and your cousin lives in New Delhi. It means treatments designed for one cancer site might be useful for certain tumors at a different site, but, in most cases, we don't know enough about those tumors' genetic similarities yet to make that call.

The largest barrier to gaining that information isn't medical but technical, said Komatsoulis, who's leading the cancer institute's cloud initiative. The National Cancer Institute is part of the National Institutes of Health. The largest source of data about cancer genetics, the cancer institute's Cancer Genome Atlas, contains half a petabyte of information now, he said, or the equivalent of about 5 billion pages of text. Only a handful of research institutions can afford to store that amount of information on their servers, let alone manipulate and analyze it. By 2014, officials expect the atlas to contain 2.5 petabytes of genomic data drawn from 11,000 patients.

Just storing and securing that information would cost an institution $2 million per year, presuming the researchers already had enough storage space to fit it in, Komatsoulis told a meeting of the institute's board of advisers in June. To download all that data at 10 gigabits per second would take 23 days, he said. If five or 10 institutions wanted to share the data, download speeds would be even slower. It could take longer than six months to share all the information.

That's where computer clouds -- the massive banks of computer servers that can pack information more tightly than most conventional data centers and make it available remotely over the Internet -- come in. If the genomic information contained inside the atlas could be stored inside a cloud, he said, researchers across the world would be able to access and study it from the comfort of their offices. That would provide significant cost savings for researchers. More importantly, he said, it would democratize cancer genomics.

"As one reviewer from our board of scientific advisers put it, this means a smart graduate student someplace will be able to develop some new, interesting analytic software to mine this information and they'll be able to do it in a reasonable time frame," Komatsoulis said, "and without requiring millions of dollars of investment in commodity information technology."

It's not clear where all this genomic information will ultimately end up.
If one or more of the pilots proves successful, a private sector cloud vendor may be interested in storing the information and making it available to researchers on a fee-for-service basis, Komatsoulis said. This is essentially what Amazon has done for basic genetic information captured by the international Thousand Genomes Project.

A private sector cloud provider will have to be convinced that there's a substantial enough market for genomic cancer information to make storing the data worth its while, Komatsoulis said. The vendor will also have to adhere to rigorous privacy standards, he said, because all the genomic data was donated by patients who were promised confidentiality. One or more genomic cancer clouds may also be managed by university consortiums, he said, and it's possible the government may have an ongoing role. The cancer institute is seeking public input on the cloud through the crowdsourcing website Ideascale. The University of Chicago has already launched a cancer cloud to store some of that information. It's not clear yet whether the university will apply to be one of the institute's pilot clouds.

Because the types of data and the tools used to mine it differ so greatly, it's likely there will have to be at least two cancer clouds after the pilot phase is complete, Komatsoulis said. As genomic research into other diseases progresses, it's possible that information could be integrated into the cancer clouds as well, he said. "Cancer research is on the bleeding edge of really large-scale data generation," he said. "So, as a practical matter, cancer researchers happen to be the first group to hit the point where we need to change the paradigm by which we do computational analysis on this data . . . But much of the data that I think we're going to incorporate will be the same or similar as in other diseases."

As scientists' ability to sequence and understand genes improves, genome sequencing may one day become part of standard care for patients diagnosed with cancer, heart problems and other diseases with a genetic component, Komatsoulis said. "As we learn more about the molecular basis of diseases, there's every reason to believe that in the future if you present with a cancer, the tumor will be sequenced and compared against known mutations and that will drive your physician's treatment decisions," he explained. "This is a very forward-looking model but, at some level, the purpose of things like The Cancer Genome Atlas is to develop a knowledge base so that kind of a future is possible."
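To put the storage and transfer figures above in perspective, here is a small back-of-the-envelope calculation. The 2.5-petabyte size and the 10 Gbps link speed come from the article; the other link speeds are added purely for comparison.

```python
# Rough transfer-time arithmetic for the Cancer Genome Atlas figures cited above.
ATLAS_BYTES = 2.5e15          # ~2.5 petabytes expected by 2014

def transfer_days(size_bytes, link_gbps):
    """Days needed to move size_bytes over a link running at link_gbps gigabits/s."""
    seconds = size_bytes * 8 / (link_gbps * 1e9)
    return seconds / 86_400

for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps link: {transfer_days(ATLAS_BYTES, gbps):6.1f} days")
# 1 Gbps ~ 231 days, 10 Gbps ~ 23 days (the figure quoted above), 100 Gbps ~ 2.3 days
```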
NASA, NWS blazed an early path on lightning detection systems
By William Jackson - Feb 11, 2013

Government agencies have been getting a better understanding of how severe weather develops by tapping into lightning data — particularly cloud-to-cloud lightning — provided by private companies. But the National Weather Service and NASA have experience of their own with that kind of data. NASA, in fact, has been a pioneer in lightning detection systems. Severe weather is a major concern in rocket launches, so long before Total Lightning Information was available commercially, the National Weather Service partnered with NASA to research correlations between in-cloud lightning and tornado development in tropical storms in southern Florida.

NASA deployed its Lightning Detection and Ranging (LDAR) system in 1990 at the Kennedy Space Center. It consists of six observer stations to monitor radiation in the 66 MHz frequency band, each placed about 8 kilometers from a central station with a computer and a display station. The observing stations can detect lightning discharges from tens of kilometers away and measure the time of detection to within 10 nanoseconds. By taking timing data from at least three stations, the location of the discharge can be pinpointed and plotted on a three-dimensional display on the workstation.

The advantage of LDAR was that, unlike commercially available lightning detection systems at that time, it could detect cloud discharges as well as ground strikes. "As a result, the LDAR system detects lightning at least as early as other systems do (sometimes 10 to 20 minutes earlier), thereby providing greater warning lead times," a 1998 NASA technical article explained. The extra data also allowed forecasters to terminate lightning warnings more quickly, sometimes as much as an hour earlier. The LDAR display showed constantly updated lightning flash data in three dimensions over a five-minute period.

In 1995, the Massachusetts Institute of Technology developed a workstation to display LDAR data (the Lightning Imaging Sensor Data Application Display, or LISDAD) in a simplified way that also could show trends over time. In the mid-1990s, both the NASA LDAR and a prototype LISDAD workstation were used side by side at the Weather Service's Melbourne, Fla., facility to monitor cloud lightning activity in Tropical Storms Gordon (1994) and Josephine (1996) to see if it could be used to forecast tornado activity in the storms. The correlations were not exact, but the study did find that "cells that exhibit [cloud-based lightning activity] appear to imply the presence of stronger updrafts than within adjacent cells which lack lightning."

A problem with LDAR was that the area it covered was small, providing limited research data and no way to extend the findings to the rest of the country. With the recent development of commercial networks capable of producing Total Lightning Information across North America, NWS now is working to incorporate this data into its Advanced Weather Information Processing System.

William Jackson is a Maryland-based freelance writer.
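The article's core idea (arrival times at several stations, measured to tens of nanoseconds, are enough to pinpoint a discharge) can be illustrated with a small least-squares fit. The station layout, source position and noise level below are synthetic, and the sketch solves the 2-D case only; the real LDAR system works in three dimensions with more stations.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # propagation speed of the radio pulse, m/s

# Hypothetical 2-D station layout in metres; real LDAR sites sit roughly 8 km out.
stations = np.array([[0.0, 0.0], [8000.0, 0.0], [0.0, 8000.0],
                     [8000.0, 8000.0], [4000.0, -6000.0]])

def arrival_times(xy, t0):
    """Time each station would record a discharge at position xy emitted at time t0."""
    return t0 + np.linalg.norm(stations - xy, axis=1) / C

# Synthetic measurements for a "true" discharge, with ~10 ns timing noise
# (the timing resolution quoted in the article).
true_xy, true_t0 = np.array([2500.0, 5200.0]), 1.0e-3
rng = np.random.default_rng(0)
measured = arrival_times(true_xy, true_t0) + rng.normal(0.0, 10e-9, len(stations))

def residuals(params):
    x, y, b = params                # b = C * emission time, kept in metres for scaling
    d = np.linalg.norm(stations - np.array([x, y]), axis=1)
    return C * measured - b - d     # zero when (x, y, b) explain every arrival time

fit = least_squares(residuals, x0=[4000.0, 4000.0, C * measured.min()])
print("estimated source position (m):", fit.x[:2])  # close to (2500, 5200)
```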
Photo of the Week -- Ice Sculptures that Rival Skyscrapers Found Beneath Greenland Ice Sheet / June 17, 2014

The constant melting and re-freezing occurring in the Greenland Ice Sheet has had an unexpected effect -- blocks of ice as tall as city skyscrapers and as wide as the island of Manhattan have formed at the ice sheet's base (as shown below), a discovery researchers made using ice-penetrating radar, according to the Earth Institute at Columbia University.

Image courtesy of the Earth Institute at Columbia University/Mike Wolovick.

According to the institute, these skyscraper-sized blocks are formed as water beneath the ice refreezes and warps the surrounding ice upward. The researchers estimate that they cover about one-tenth of northern Greenland, and are becoming bigger and more common as the ice sheet narrows into ice streams, or glaciers, headed for the sea.
When I was teaching at the University level I tried to convey to my students a number of real-world examples of some of the basic communication principles that guide communication (and misunderstanding) patterns – often without them knowing it. This post can help you understand how Social Justice advocates use a sleight-of-hand on unsuspecting innocent people in order to control the conversation.

What is Metacommunication?

Think of this as an out-of-context lesson in Communication Theory. There are three basic concepts that are very useful to know when analyzing communication patterns, whether it be political debate, interpersonal relationships, or even business communication. In short, if you ever find yourself confused about how and where misunderstandings happen, try to think of it in these terms and it will help isolate where a misunderstanding comes from. Here are the three basic concepts: communication, metacommunication, and meta-metacommunication.

I know, I know – when I first heard of this I thought people were playing a joke. You can think of the definitions like this:
- Communication: Talking about something
- Meta-Communication: Talking about Talking about something
- Meta-meta-communication: Talking about Talking about Talking about something

Yeah… not helpful. Let's try this:
- Communication: Content
- Meta-Communication: Talking about the Content
- Meta-Meta-Communication: Talking about Talking about the Content

Hmmm… getting closer. Let's look at an example:
- You go and see a movie. The movie is the content that is communicated to you (communication).
- Afterwards, you sit down with your friend and talk about the movie (metacommunication). It's a good conversation and your friend makes some good points that you want to share with people.
- Later, you want to talk about the good points that your friend makes with someone else. So, you talk about the conversation you had regarding the movie (you talked about talking about the movie) – Voilà! Metametacommunication.

This is one of the most innocuous examples I can think of. Many times, however, the examples are far more unpleasant. Let's take a look at a generic interpersonal encounter that isn't quite so friendly.

"My boss really pissed me off today."
"He said my report was late when I know it was on time."
"Have you sat down and talked with him about it?"
"Why do you always do that?"
"Why do you always try to fix things? All I wanted to do was tell you about my day. I didn't expect you to step in and control everything."
"I wasn't trying to control anything."

(Later, with the marriage counselor)
"Tell me what happened."
"I came home from having a rough day at work, and as soon as I tried telling him about it he started barging in, trying to fix everything. Just like he always does."
"I did not! All I did was ask a simple question."

In this hypothetical example, our unfortunate couple (thankfully seeing a counselor!) is showing us clearly what the different types of communication are. The content of the communication was the discussion about the day she had. When she got upset at his reaction, she decided to start talking about the way he communicated, rather than continuing to discuss the original content. In other words, she made the metacommunication the focus of the conversation. So, instead of talking about the original subject (content), she decided to talk about how they talk about these kinds of things ("why do you always do that?"). When they went to the counselor, they began discussing that metacommunicative process.
By talking about the metacommunicative state they "always" seem to find themselves in, they were discussing a metametacommunicative process. And here is where things get messy. When the metacommunicative state becomes the content, when the topic of discussion becomes what you're communicating, it's extraordinarily easy to get lost and confused. You can see how quickly and easily – too easily – it is to slide into different states. If we had examined the conversation a little longer, we could have seen them fall headlong into a tennis match between the different communication states – moving from the content of the communication (the bad day at work) to the metacommunicative state (how they talk about these kinds of conversations) and even into the metametacommunicative state.

Applying to Real Life

I'm not convinced that my students got it, to be honest. For me, this was as intuitive as could be. I became very adept at figuring out that when people were talking, it was rare that they were actually talking about what they thought they were talking about. Most students just wanted to pass a test and get a grade, and this was true throughout history as it is today. Knowledge of this power to change the discourse comes easily for some more than others.

Once people find a way of distracting a conversation by slipping (or "up-leveling") a conversation into a metacommunicative state, they can befuddle debate opponents, or slip into ad hominem attacks without their victims even realizing it. Public life and academe have become bastions of this. Nearly every conversation about "Social Justice" falls into this category. Identitarian politics are all metacommunicative. By being able to avoid talking about the actual content of communication, they can shift the focus elsewhere – usually using some sort of metacommunicative litmus test ("are you qualified to talk about this content?"). As it is possible to completely rewrite the subject of a conversation by shifting it into a metacommunicative state, these approaches play a leprechaun dance around the matter, all the while gleefully decrying no willingness to have a "serious debate."

In another post, I'll be talking more explicitly about how this works with direct examples. Then, I'll show you a real-life example of how dangerous this really is. For now, I'll leave you – dear reader – with an exercise for your homework. Look at the meme below and see if you can map the metacommunicative state to the conversation. Until next time…
Why Don't RIPv1 and IGRP Support Variable-Length Subnet Mask?

The ability to specify a different subnet mask for the same network number on different subnets is called Variable-Length Subnet Mask (VLSM). RIPv1 and IGRP are classful protocols and are incapable of carrying subnet mask information in their updates. Before RIPv1 or IGRP sends out an update, it performs a check against the subnet mask of the network that is about to be advertised and, in the case of VLSM, the subnet gets dropped.

Let's look at an example. Suppose Router 1 has three subnets of the major network 172.16.0.0 with two different masks (/24 and /30): 172.16.1.0/24 on Ethernet0, 172.16.2.0/30 on Serial0 (the link to Router 2), and 172.16.3.0/30 on Serial1. Router 1 goes through the following steps before sending an update to Router 2 out of Serial0. These steps are explained in more detail in Behavior of RIP and IGRP When Sending or Receiving Updates.
- First, Router 1 checks to see whether 172.16.1.0/24 is part of the same major net as 172.16.2.0/30, which is the network assigned to the interface that will be sourcing the update.
- It is, and now Router 1 checks whether 172.16.1.0/24 has the same subnet mask as 172.16.2.0/30.
- Since it doesn't, Router 1 drops the network and doesn't advertise the route.
- Router 1 now checks whether 172.16.3.0/30 is part of the same major net as 172.16.2.0/30, which is the network assigned to the interface that will be sourcing the update.
- It is, and now Router 1 checks whether 172.16.3.0/30 has the same subnet mask as 172.16.2.0/30.
- Since it does, Router 1 advertises the network.

The above checks determined that Router 1 only includes 172.16.3.0 in its update that is sent to Router 2. Using the debug ip rip command, we can actually see the update sent by Router 1. It looks like this:

RIP: sending v1 update to 255.255.255.255 via Serial0 (172.16.2.1)
     subnet 172.16.3.0, metric 1

Notice that in the output above only one subnet is included in the update. This results in the following entries in Router 2's routing table, displayed using the show ip route command (Router 2's own interfaces also use /30 masks):

     172.16.0.0/30 is subnetted, 3 subnets
R       172.16.3.0 [120/1] via 172.16.2.1, 00:00:12, Serial0
C       172.16.2.0 is directly connected, Serial0
C       172.16.4.0 is directly connected, Ethernet0

To avoid having subnets eliminated from routing updates, either use the same subnet mask over the entire RIPv1 network or use static routes for networks with different subnet masks.
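For readers who want to experiment with the classful check described above, here is a minimal Python sketch using only the standard-library ipaddress module. The function names are my own, and the cross-major-network case (where RIPv1 would send a classful summary instead) is deliberately left out of scope.

```python
import ipaddress

def classful_network(ip):
    """Return the classful (major) network an IPv4 address belongs to."""
    first_octet = int(str(ip).split(".")[0])
    if first_octet < 128:
        prefix = 8       # Class A
    elif first_octet < 192:
        prefix = 16      # Class B
    else:
        prefix = 24      # Class C
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

def ripv1_advertises(subnet, out_interface_net):
    """Mirror the article's check: same major net requires identical masks."""
    same_major = (classful_network(subnet.network_address)
                  == classful_network(out_interface_net.network_address))
    if same_major:
        return subnet.prefixlen == out_interface_net.prefixlen
    # Across a major-network boundary RIPv1 would send the classful summary
    # instead; that case is outside the example above.
    return False

serial0 = ipaddress.ip_network("172.16.2.0/30")   # interface sourcing the update
for s in ("172.16.1.0/24", "172.16.3.0/30"):
    net = ipaddress.ip_network(s)
    print(s, "advertised" if ripv1_advertises(net, serial0) else "dropped")
# 172.16.1.0/24 dropped
# 172.16.3.0/30 advertised
```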
Natural gas leaks can cause serious problems in our cities -- on Dec. 11, for instance, a 20-inch Columbia Gas transmission line exploded in Sissonville, W. Va., destroyed four homes and essentially cooked a section of the interstate. To see what Boston had going on in its natural gas pipelines, a team of scientists set out to map and measure leaks throughout the city. After all was said and done, they spent a month driving 785 miles -- and they discovered 3,356 methane leaks.

Robert Jackson of Duke University helped lead the study along with Nathan Phillips, associate professor at Boston University's Department of Earth and Environment. During the month they spent measuring, they drove about 20 miles per hour down each street, sometimes in both directions, using a tool called a Picarro G2301 Cavity Ring-down Spectrometer. It's a fast-response methane analyzer that measures methane concentrations higher than 2 parts per million (the normal amount in the air). They combined the methane measurement device with a high-resolution GPS, and loaded those in the car. The devices were connected so that as they measured methane concentrations, they also mapped where they were. This information was then loaded into a GIS file to produce a map of the measurements.

When they found large methane leaks that could pose an immediate explosion threat, they stopped and gathered an air sample to analyze the chemistry to determine if the leak was coming from a sewer or landfill, or if it was pipeline gas. A vast majority of the leaks were from pipelines. While all leaks were measured and mapped, the team called in six pipeline leaks they felt were explosion risks, which the utility companies then fixed.

"We tried to work constructively with the gas distribution companies as well as the city," Jackson said. "The main reason they have so many leaks is because they have so many old pipes. The No. 1 goal should be to replace those old pipes as quickly as possible."

The report even got the attention of a top U.S. congressman on the Energy and Commerce Committee.

Finding Gas Leaks via Plane
In Central California, UC Davis atmospheric researchers recently surveyed utility company Pacific Gas & Electric Co.'s 600 miles of natural gas pipelines in a plane -- a single-engine Mooney TLS filled with "scientific instruments designed to sniff out leaks of methane," the Los Angeles Times reported. The mission was to quickly and cheaply find gas leaks several miles downwind from the source, and then dispatch ground crews to fix the problem.

"This study shows that we need a plan to ensure leaks from aging natural gas pipelines in Boston and other cities and communities are repaired, so that we can conserve this important natural resource, protect the consumers from paying for gas that they don't even use, and prevent emissions of greenhouse gases into the environment," Massachusetts Rep. Ed Markey wrote in a letter to the Pipeline and Hazardous Materials Safety Administration.

This technique used on the ground does not provide an amount of gas, just a concentration. "This first step gave us the hot spots," Jackson said. "Just because you have a high concentration, that doesn't tell you if it's a big leak or a small leak. For example, if the wind is blowing hard, the methane will be carried away from the leak.
Now we're trying to get the total amount of methane leaking into the atmosphere." They can do this by combining the street work with sensors on top of buildings and skyscrapers that can track the movement of the methane.

This information has a number of important uses. The study itself makes publicly available this information about the health and safety of the community. The information can then be utilized for environmental purposes to help cut methane emissions, reduce greenhouse gas losses and lower ozone formation. It can also help save consumers money: according to Boston Mayor Thomas Menino, natural gas leaks cost tens of millions of dollars per year, and those costs typically are passed along to the community. And it can help keep citizens safe.

"I think in the very near future you'll see this approach used by a majority of cities," Jackson said. "I think you could now put a study like this on the ground for less than $100,000. If by doing such a survey you can stop one explosion from happening, you've saved some lives and paid for the survey. Better information will provide the impetus for fixing problems."

Image at top shows relative volume of methane leaks at various locations around Boston. Credit: Boston University
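The survey workflow described above (a fast methane analyzer logging alongside a GPS receiver, merged into a GIS layer) can be sketched in a few lines of Python. The timestamps, coordinates and readings below are invented for illustration; only the 2 ppm background level is taken from the article.

```python
from bisect import bisect_left

BACKGROUND_PPM = 2.0   # ambient methane level cited in the article

# Illustrative logs: (unix time, CH4 ppm) from the analyzer, (unix time, lat, lon) from GPS.
methane = [(1000.0, 1.9), (1001.0, 2.1), (1002.0, 6.4), (1003.0, 2.0)]
gps_fix = [(999.5, 42.3501, -71.0801), (1001.5, 42.3502, -71.0803),
           (1003.5, 42.3503, -71.0805)]

def nearest_fix(t):
    """Pair a reading taken at time t with the GPS fix closest in time."""
    times = [g[0] for g in gps_fix]
    i = bisect_left(times, t)
    candidates = gps_fix[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda g: abs(g[0] - t))

hotspots = []
for t, ppm in methane:
    if ppm > BACKGROUND_PPM:                 # elevated above ambient air
        _, lat, lon = nearest_fix(t)
        hotspots.append({"lat": lat, "lon": lon, "ppm": ppm})

print(hotspots)   # points that would be written to the GIS layer for follow-up
```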
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Over the past few years organizations have awakened to the fact that there is knowledge hidden in Big Data, and vendors are feverishly working to develop technologies such as Hadoop Map/Reduce, Dryad, Spark and HBase to efficiently turn this data into information capital. That push will benefit from the emergence of another technology: Software Defined Networking (SDN).

Much of what constitutes Big Data is actually unstructured data. While structured data fits neatly into traditional database schemas, unstructured data is much harder to wrangle. Take, for example, video storage. While the video file type, file size, and the source IP address are all structured data, the video content itself, which doesn't fit in fixed-length fields, is all unstructured. Much of the value obtained from Big Data analytics now comes from the ability to search and query unstructured data -- for example, the ability to pick out an individual from a video clip with thousands of faces using facial recognition algorithms.

The technologies aimed at the problem achieve the speed and efficiency required by parallelizing the analytic computations on the Big Data across clusters of hundreds of thousands of servers connected via high-speed Ethernet networks. Hence, the process of mining intelligence from Big Data fundamentally involves three steps: 1) split the data across multiple server nodes; 2) analyze each data block in parallel; 3) merge the results. These operations are repeated through successive stages until the entire dataset has been analyzed.

Owing to the Split-Merge nature of these parallel computations, Big Data analytics can place a significant burden on the underlying network. Even with the fastest servers in the world, data processing speeds (the biggest bottleneck for Big Data) can only be as fast as the network's capability to transfer data between servers in both the Split and Merge phases. For example, a study on Facebook traces shows this data transfer between successive stages accounted for 33% of the total running time, and for many jobs the communication phase took up over 50% of the running time. By addressing this network bottleneck we can significantly speed up Big Data analytics, which has two-fold implications: 1) better cluster utilization reduces TCO for the cloud provider that manages the infrastructure; and 2) faster job completion times deliver real-time analytics for the customer that rents the infrastructure.

What we need is an intelligent network that, through each stage of the computation, adaptively scales to suit the bandwidth requirements of the data transfer in the Split and Merge phases, thereby not only improving speed-up but also improving utilization.

The role of SDN

SDN has huge potential to build the intelligent adaptive network for Big Data analytics. Due to the separation of the control and data plane, SDN provides a well-defined programmatic interface for software intelligence to program networks that are highly customizable, scalable and agile, to meet the requirements of Big Data on demand. SDN can configure the network on demand to the right size and shape for compute VMs to optimally talk to one another. This directly addresses the biggest challenge that Big Data, a massively parallel application, faces - slower processing speeds.
Processing speeds are slow because most compute VMs in a Big Data application spend a significant amount of time waiting for massive data during scatter-gather operations to arrive so they can begin processing. With SDN, the network can create secure pathways on demand and scale capacity up during the scatter-gather operations, thereby significantly reducing the waiting time and hence overall processing time.

This software intelligence, which is fundamentally an understanding of what the application needs from the network, can be derived with much precision and efficiency for Big Data applications. The reason is two-fold: 1) the existence of well-defined computation and communication patterns, such as Hadoop's Split-Merge or Map-Reduce paradigm; and 2) the existence of a centralized management structure that makes it possible to leverage application-level information, e.g. the Hadoop Scheduler or HBase Master. With the aid of the SDN controller, which has a global view of the underlying network (its state, its utilization, and so on), the software intelligence can accurately translate the application's needs by programming the network on demand.

SDN also offers other features that assist with management, integration and analysis of Big Data. New SDN-oriented network protocols, including OpenFlow and OpenStack, promise to make network management easier, more intelligent and highly automated. OpenStack enables the set-up and configuration of network elements using a lot less manpower, and OpenFlow assists in network automation for greater flexibility to support new pressures such as data center automation, BYOD, security and application acceleration. From a size standpoint, SDN also plays a critical role in developing network infrastructure for Big Data, facilitating streamlined management of thousands of switches, as well as the interoperability between vendors that lays the groundwork for accelerated network build-out and application development. OpenFlow, a vendor-agnostic protocol that works with any vendor's OpenFlow-enabled devices, enables this interoperability, unshackling organizations from the proprietary solutions that could hinder them as they work to transform Big Data into information capital.

As the powerful implications and potential of Big Data become increasingly clear, ensuring that the network is prepared to scale to these emerging demands will be a critical step in guaranteeing long-term success. It is clear that a successful solution will leverage two key elements: the existence of patterns in Big Data applications and the programmability of the network that SDN offers. From that vantage point, SDN is indeed poised to play an important role in enabling the network to adapt further and faster, driving the pace of knowledge and innovation.

About the Author: Bithika Khargharia is a senior engineer focusing on vertical solutions and architecture at Extreme Networks. With more than a decade in the field of technology research and development with companies including Cisco, Bithika's experience in Systems Engineering spans sectors including green technology, manageability and performance; server, network, and large-scale data center architectures; distributed (grid) computing; autonomic computing; and Software-Defined Networking.

This story, "SDN Networks Transform Big Data Into Information Capital" was originally published by Network World.
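As a rough sketch of what the shuffle-aware provisioning described above might look like, a job scheduler could hand the list of mapper-to-reducer flows to an SDN controller over a northbound REST API just before the transfer phase begins. Everything here is hypothetical (the controller URL, the endpoint path and the JSON schema are invented for illustration); a real deployment would use whatever northbound API its controller actually exposes (OpenDaylight, Ryu, and so on).

```python
import requests  # talking to a hypothetical controller REST endpoint

CONTROLLER = "http://sdn-controller.example:8181/hypothetical/paths"  # made-up URL

def provision_shuffle(flows, gbps, duration_s):
    """Ask the controller for extra bandwidth while a shuffle phase runs.

    `flows` is a list of (mapper_host, reducer_host) pairs obtained from the
    job scheduler; the JSON body below is illustrative, not a real controller schema.
    """
    body = {
        "flows": [{"src": src, "dst": dst} for src, dst in flows],
        "min_rate_gbps": gbps,
        "lease_seconds": duration_s,   # capacity is released when the phase ends
    }
    resp = requests.post(CONTROLLER, json=body, timeout=5)
    resp.raise_for_status()
    return resp.json()["lease_id"]     # hypothetical field returned by the controller

# Example: scale up the network just before the reduce phase starts.
lease = provision_shuffle([("mapper-01", "reducer-07"), ("mapper-02", "reducer-07")],
                          gbps=10, duration_s=300)
```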
Is Surfing the Internet Altering your Brain?
By Reuters | Posted 2008-10-27

CANBERRA (Reuters) - The Internet is not just changing the way people live but altering the way our brains work, with a neuroscientist arguing this is an evolutionary change which will put the tech-savvy at the top of the new social order.

Gary Small, a neuroscientist at UCLA in California who specializes in brain function, has found through studies that Internet searching and text messaging has made brains more adept at filtering information and making snap decisions. But while technology can accelerate learning and boost creativity, it can have drawbacks, as it can create Internet addicts whose only friends are virtual and has sparked a dramatic rise in Attention Deficit Disorder diagnoses.

Small, however, argues that the people who will come out on top in the next generation will be those with a mixture of technological and social skills. "We're seeing an evolutionary change. The people in the next generation who are really going to have the edge are the ones who master the technological skills and also face-to-face skills," Small told Reuters in a telephone interview. "They will know when the best response to an email or Instant Message is to talk rather than sit and continue to email."

In his newly released fourth book "iBrain: Surviving the Technological Alteration of the Modern Mind," Small looks at how technology has altered the way young minds develop, function and interpret information. Small, the director of the Memory & Aging Research Center at the Semel Institute for Neuroscience & Human Behavior and the Center on Aging at UCLA, said the brain was very sensitive to the changes in the environment such as those brought by technology.

He said a study of 24 adults as they used the Web found that experienced Internet users showed double the activity in areas of the brain that control decision-making and complex reasoning as Internet beginners. "The brain is very specialized in its circuitry and if you repeat mental tasks over and over it will strengthen certain neural circuits and ignore others," said Small. "We are changing the environment. The average young person now spends nine hours a day exposing their brain to technology. Evolution is an advancement from moment to moment and what we are seeing is technology affecting our evolution."

Small said this multi-tasking could cause problems. He said the tech-savvy generation, whom he calls "digital natives," are always scanning for the next bit of new information, which can create stress and even damage neural networks. "There is also the big problem of neglecting human contact skills and losing the ability to read emotional expressions and body language," he said. "But you can take steps to address this. It means taking time to cut back on technology, like having a family dinner, to find a balance. It is important to understand how technology is affecting our lives and our brains and take control of it."

(Editing by Paul Casciato)
© Thomson Reuters 2008 All rights reserved
With the release of the new Star Wars film, moviegoers everywhere are being re-introduced to The Force and its power. More than a few people may even walk out imagining what life would be like in a world inhabited by The Force. While we may never be able to control each other's minds, the ability to manipulate the physical world through our speech, and even our thoughts, is very much a reality today. It is a reality being made more possible all the time by advances in technology like Natural Language Processing, sensors and IoT. Today, it is even possible to move a BB-8 Droid with your mind…

IoT changes how we interact with the physical world

The Internet of Things is really a two-sided coin. On the one hand, it is taking dumb things and making them smart; things like your fridge, the door to your house, elements in your car or even a kid's toy. It's about taking technologies that we're used to and familiar with, getting them onto the internet, and making that connection meaningful. The other side of this coin - which this video is really about - is taking things that are already internet enabled and combining them to create new and meaningful interactions. Companies like SilverHook Powerboats use IoT to help monitor and optimize engine performance during races.

IoT Resources for Developers

For developers who want to get started building their next IoT application on Bluemix, these resources can help get you moving in the right direction:
June 17, 2010—Over the last 20 years, scientists have collected vast amounts of data about climate change, much of it accessible on the Web. Now the challenge is figuring out how to integrate all that information into coherent datasets for further analysis—and a deeper understanding of the Earth’s changing climate. In 2000, more than 20 countries began deploying an array of drifting, robotic probes called Argo floats to measure the physical state of the upper ocean. The floats, which look a little like old-fashioned hospital oxygen tanks with antennas, are designed to sink nearly a mile below the surface. After moving with the currents for about 10 days, they gradually ascend, measuring temperature, salinity, and pressure in the top 2,000 meters of the sea as they rise. At the surface, they transmit the data to a pair of NASA Jason satellites orbiting 860 miles above the equator, then sink again to repeat the process. So far more than 3,000 floats have been deployed around the world. The data they’ve collected has provided a big leap forward in understanding the upper levels of the Earth’s oceans—and their effect on global climate change—in the same way as early weather balloons expanded understanding of the earth’s atmosphere. What’s more, the data they collect is available in near real time to anyone interested, without restrictions, in a single data format. Dr. Thomas Peterson, a scientist at the U.S. National Oceanographic and Atmospheric Administration (NOAA)’s National Climatic Data Center in Asheville, North Carolina, has been with the data center since 1991. “Back then,” says Peterson, “people came to us for integrated climate information because it was so hard to find the large amounts of data they needed to derive the information themselves. With the Internet, people can just download the data from the Web.” The Argo floats are funded by some 50 agencies around the world. The program is one example among thousands of the ways in which the Web is facilitating scientists’ understanding of global climate change. Without the Web, in fact, the float system would not exist. Studying human-caused change Humans have probably been studying weather at least since they began raising crops. But rigorous climatology—the study of weather patterns over decades, centuries, or even millennia—dates only from the late 1800s. The study of anthropogenic, or human-caused, climate change is much younger. Until the 1950s, few suspected the earth’s climate might be changing as a result of human activity. And if a scientist in, say, Germany did suspect it, it would have been difficult indeed for him to work with scientists in England or China to explore the possibility. By the 1980s, as evidence began accumulating of rising levels of atmospheric carbon dioxide, scientists who were pursuing particular aspects of climate change independently began holding international conferences to exchange information. But not until the 1990s did the Web enable them to collaborate remotely, in real time. That collaboration, along with the enormous amounts of data collected using web technologies, has revolutionized the field. Today, climate scientists conduct studies with colleagues on the other side of the world, hold marathon webinars, and co-author papers with dozens or even hundreds of collaborators, all via the Web. 
Scientists use the Web to access, monitor, and share everything from in situ data collected by such means as the Argo floats and a worldwide network of 100,000 weather stations, to remote data from radar and satellites, to paleoclimatologic indicators like tree rings and core samples from glaciers and ancient lake beds. A staggering amount of data The sheer volume of scientific data on climate is staggering, collected around the world by government agencies, the military, universities, and thousands of other institutions. NOAA stores about 3,000 terabytes of climate information, roughly equal to 43 Libraries of Congress. The agency has digitized weather records for the entire 20th century and scanned records older than that, including some kept by Thomas Jefferson and Benjamin Franklin. All of it is accessible on the Web. As part of its educational mission, NOAA has even established a presence in the Second Life virtual world, where members can watch 3D data visualizations of a glacier melting, a coral reef fading to white, and global weather patterns evolving. The U.S. National Aeronautics and Space Administration (NASA) is an equally important player in climate research. Its Earth Observing System (EOS) of satellites collects data on land use, ocean productivity, and pollution and makes its findings available on the Web. There is even a NASA-sponsored program involving a network of beekeepers to collect data on the time of spring nectar flows, which appears to be getting earlier (http://honeybeenet.gsfc.nasa.gov). The U.N.’s World Meteorological Organization’s Group on Earth Observations (GEO)—launched by the 2002 World Summit on Sustainable Development and the G8 leading industrialized countries—is developing a Global Earth Observation System of Systems, or GEOSS, both to link existing climatological observation systems and to support new ones. Its intent is to promote common technical standards so that data collected in thousands of studies by thousands of instruments can be combined into coherent data sets. Users would access data, imagery, and analytical software through a single Internet access point called GEOPortal. The timetable is to have the system in place by 2015. But—and this is a huge but—despite the wealth of information that’s been collected bearing on climate change, finding specific datasets among the thousands of formats and locations in which they’re stored can be daunting or even impossible. How MIT’s DataSpace could help Stuart Madnick, who is the John Norris Maguire Professor of Information Technology at MIT’s Sloan School of Management, believes a new MIT-developed approach called DataSpace could help. “Right now, papers on hundreds of subjects are published, but the data that backs them up often stays with the researcher,” says Madnick. “We want DataSpace to become the Google for multiple heterogeneous sets of data from a variety of distributed locations. It wouldn’t necessarily work the way Google does, but it would be as useful, scalable, and easy to use, and it would allow scientists to access, integrate, and re-use data across disciplines, including climate change.” As a simple example of how DataSpace could work with respect to climatology, Madnick posits that a scientist wants to know the temperature and salinity of the water around Martha’s Vineyard, Massachusetts, over the past 20 years. Data that could answer the question could exist in all kinds of locations, from nearby Woods Hole Oceanographic Institute, to NOAA, to international fishing fleets. 
But right now, there is little or no integration of that data. DataSpace could perform that integration, which can require adjustments ranging from such simple things as reconciling Centigrade data with Fahrenheit, to compensating for differences in the ways various instruments measure. Semantic Web technologies DataSpace would incorporate “reasoning systems” that would “understand” disparate data in a way that now requires human intervention. Often called Semantic Web technologies, such linked-data systems would collect unstructured data, interpret data that is structured but not interpreted, and interpret what the data means. How would such Semantic Web technologies be used to study climate change? Madnick provides an example. “Microbes are the most abundant and widely distributed organisms on Earth. They account for half of the world’s biomass and have been integral to life on Earth for more than 3.5 billion years. Marine microbes affect climate and climate affects them. In fact, they remove so much carbon dioxide from the atmosphere that some scientists see them as a potential solution to global warming. Yet many of the feedbacks between marine biogeochemistry and climate are only poorly understood. The next major step in the field involves incorporating the information from environmental genomics, targeted process studies, and the systems observing the oceans into numerical models. That would help to predict the ocean’s response to environmental perturbations, including climate change.” Madnick believes such integration of disparate data, including genetics, populations, and ecosystems, is the next great challenge of climatology, and that Semantic Web technologies will be needed to meet the challenge. A Wikipedia for climate change? Another approach being developed at MIT is the Climate Collaboratorium, part of MIT’s Center for Collective Intelligence. MIT Sloan School Professor Thomas Malone describes the Climate Collaboratorium, still in its formative stage, as “radically open computer modeling to bring the spirit of systems like Wikipedia and Linux to global climate change.” His hope is that thousands of people around the world—from scientists, to business people, to interested laypeople—will take part via the Web to discuss proposed solutions in an organized and moderated way and to vote on proposed solutions. Malone has written, “The spectacular emergence of the Internet and associated information technology has enabled unprecedented opportunities for such interactions. To date, however, these interactions have been incoherent and dispersed, contributions vary widely in quality, and there has been no clear way to converge on well-supported decisions concerning what actions—both grand and ground-level—humanity should take to solve its most pressing problem.” Malone says the Collaboratorium will not endorse positions, but be “an honest broker of the discussion.” Asked what the biggest challenges are facing climate change scientists, NOAA’s Tom Peterson answers, “Communication. Too much is written by scientists for scientists, so it is often too dense for laypeople to understand. It’s rare for a scientist to take time out of trying to make progress on scientific questions to rigorously disprove some of the widely propagated errors about climate change.” Still, what is truly remarkable about how the study of climate has changed over the past 20 years is the way the Web has given scientists from around the world, in disparate areas of research, a new way to collaborate. 
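The unit reconciliation Madnick mentions (Centigrade versus Fahrenheit) is simple to make concrete. The sketch below is only an illustration of that kind of normalization step; the record shape, field names, and source labels are invented, not part of DataSpace or any agency's actual schema:

```typescript
// Hypothetical shape for a temperature observation pulled from one of many sources.
interface Reading {
  source: string;        // e.g., "agency-a" (illustrative label only)
  unit: "C" | "F";       // unit reported by the source
  value: number;         // temperature in the source's unit
  timestamp: string;     // ISO 8601 date
}

// Convert every reading to Celsius so records from different agencies can be compared.
function toCelsius(r: Reading): Reading {
  if (r.unit === "C") return r;
  return { ...r, unit: "C", value: (r.value - 32) * (5 / 9) };
}

const mixed: Reading[] = [
  { source: "agency-a", unit: "F", value: 59.0, timestamp: "2004-07-01" },
  { source: "agency-b", unit: "C", value: 14.8, timestamp: "2004-07-01" },
];

const normalized = mixed.map(toCelsius);
console.log(normalized.map((r) => r.value.toFixed(1))); // [ '15.0', '14.8' ]
```

Real integration, as Madnick notes, goes well beyond unit conversion, but the principle is the same: heterogeneous records are mapped into one comparable form before analysis.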
Thanks to the Web, many millions of people not only within but also beyond the scientific community now have access to an enormous tapestry of information. And new technologies like the Semantic Web will undoubtedly enrich that tapestry.
Over the past few years, the public sector spent considerable time and money making myriad transactions available to the public via the Internet. People appreciate the convenience, and they appreciate "their" government responding to their wants. The public sector is slowly changing people's perception by creating an image of government agencies that care about their customers and want to nurture the relationship. Some governments now use the Web to provide information to help citizens become smarter consumers. Public agencies have long collected price and performance data from a wide range of industries, but rarely made it available in a user-friendly form for average citizens. Agencies at all governmental levels are beginning to offer online services that help customers make better decisions on everything from gasoline purchases and investing to choosing hospitals and schools. That's new ground. In the past, information tended to flow one way: from citizens and businesses to government. Society gained because the data was used to ensure compliance with environmental, safety and fairness laws, as well as other regulations. But many citizens felt little direct benefit from this activity. Nowadays, some governments are doing more to help people as they go about their lives, doing the seemingly million and one things they must accomplish on any given day. Florida's Legislature passed the Affordable Health Care for Floridians Act in 2004. The bill directed state government to implement a consumer-focused, transparent health-care delivery system in the state. The bill also stipulated that the state create a mechanism to publicly report health-care performance measures and distribute consumer health-care information. Florida's Agency for Health Care Administration (AHCA) is revamping its Web presence to report and distribute health-care data to consumers. One expanded site, floridahealthstat.com , delivers health-care data collected by the AHCA's State Center for Health Statistics to consumers. The site is designed to make it easy for health-care consumers, purchasers and professionals to access information on quality, pricing and performance. One such tool is Florida Compare Care , which was launched in November 2005. Through the Compare CareWeb site, Florida now reveals data on infections, deaths, complications and prices for each of its 207 hospitals. Residents can use the site to compare short-term-care hospitals and outpatient medical centers in various categories, such as length of stay, mortality, complications and infections. The Web site lists hospitals' rates of medical problems in seven categories, and provides patient death rates in 10 areas, including heart attacks, strokes and pneumonia. When the AHCA started devising Florida Compare Care, the agency turned to the Comprehensive Health Information System (CHIS) Advisory Council for help. The council and various CHIS technical workgroups, which include hospital representatives and various other stakeholders, were involved in the Web site's development from the very beginning, said Toby Philpot, the AHCA's deputy press secretary. "The workgroups have studied the technical issues of reporting performance data, as well as discussed the most appropriate options for reporting and displaying the information on the Web site," Philpot said. Creating a Web site such as Florida Compare Care is dicey because of the complex nature of the information being presented, he said. 
"Because of their expertise, some hospitals treat more high-risk patients," Philpot explained. "Some patients arrive at hospitals sicker than others, and often, sicker patients are transferred to specialty hospitals. That makes comparing hospitals for patients with the same condition but different health status difficult." To get the most accurate data on the Web site, he said, each hospital's data is risk adjusted to reflect the score the hospital would have if it provided services to the average mix of sick, complicated patients. The risk adjustment is performed by 3M Corp.'s All Patient Refined-Diagnosis Related Groups. "This adjustment should allow comparisons between facilities that reflect the differences in care delivered, rather than the differences in the patients," Philpot said. State Rep. Frank Farkas, a chiropractor, said he sponsored the Affordable Health Care for Floridians Act, in part, to help health-care consumers. "High on the list was transparency -- and transparency in a couple of forms," Farkas said. "One was being able to shop price comparisons for pharmaceutical drugs. The second part was hospital outcomes for mortality and infection rates. This is all information that our department had that was being given to them by the pharmacies and hospitals, but nothing was ever done with it." The most difficult part of creating the Florida Compare Care Web site was creating consumer-friendly information out of federal reporting data, Farkas said, explaining that such standards didn't hit the level of detail needed for Florida's new Web site. He cited infection rate data as a prime example because it was difficult for the AHCA to determine whether patients came in with infections prior to admission or developed infections while in the hospital. "The way it's measured right now, it just shows an infection rate for the hospital, but it doesn't break it out." The federal government, which was redesigning a form hospitals use to report data to states, added a new data entry point to extrapolate infection rate information, he said. Interestingly enough, collecting the information didn't create much of a hardship for the AHCA, which had been gathering medical data all along. "The information was required, yet the AHCA was never required to do anything with it," Farkas said. "It was basically information that I'd say was useless because you're requiring hospitals to provide it, but it was just reams of paper sitting in a room." The Florida Hospital Association (FHA) supports the creation of Florida Compare Care, as well as other transparency issues in health care, because of the state's approach, said Rich Rasmussen, the FHA's vice president of strategic communications. "What Florida tried to do was include everybody in this transparency effort, so that we'd have transparency on the pharmaceutical side, the hospital side, the health plan side and the physician side," Rasmussen said. Though the Florida Compare Care site helps individual consumers, Rasmussen said the benefits extend to wider audiences, such as employers, health planners, health plans, insurance companies, hospitals and the FHA itself. "We purchase that data and we customize it for our members," he said. "If you're trying to do strategic planning and want to look at, for example, the health disparities in the community, we can do that. If you wanted to find out how many heart procedures were performed in a certain ZIP code, we can give you that. All of that information is very helpful when you're doing that planning." 
Employers can make excellent use of this information too. The transparency effort enjoyed strong support from the business community, Rasmussen said, because companies view the data as a valuable tool for large purchasers of health-care services. The next phases of the project will incorporate similar information on health plans and physicians to the FloridaHealthStat site, he added. "Most employers don't shop hospitals; most consumers don't shop hospitals," Rasmussen said. "But they shop health plans. So having good information out there will help consumers and employers know what the out-of-pocket [costs] will be for their employees, what the co-payments and deductibles are going to be, what the exclusions are going to be, what performance measures are used by health plans. "As we roll more of this information out, consumers -- and I lump into that group purchasers, such as businesses -- will have a better idea of the total continuum of services they're buying." Florida's new approach to providing health-care data came about because of the trend toward consumer-directed health care, explained Farkas. "We're seeing a huge increase in the amount of health savings accounts being sold nationwide," Farkas said. "People are using their own money. We want to make sure they're getting information to make good decisions. It's interesting health care is the only industry that you've not been able to price shop or even quality shop. "You've never been able to really see how safe a hospital is compared to other hospitals; which doctors had the best outcomes; how much they charge for elective procedures -- those are things you're going to start seeing on this Web page." This is part of a larger movement in which the public sector is enabling constituents to choose service providers, said Bill Eggers, global director for Deloitte Research, Public Sector, and a senior fellow at the Manhattan Institute for Policy Research. "Yet to have choice, you need good information about price, quality and performance, and that was typically not available for many public services, whether it was education, social services or health care," Eggers said. "Choice became very, very difficult to implement well in practice." The Internet, of course, makes it much simpler for the public sector to take the complex information it stores, package that data and present it in a way that helps consumers. At the federal level, Eggers said, the U.S. Securities and Exchange Commission (SEC) embarked on an effort to give better informational tools to investors and analysts. "The SEC is putting together a lot of technology initiatives to change the way public companies, mutual funds and so forth disclose financial data," Eggers said. In early March 2006, the SEC announced it would hold a series of roundtables throughout 2006 at its headquarters in Washington, D.C., to discuss the best way to hasten implementation of these new Internet tools. The roundtables also will review first-year results of a pilot that has been using interactive data from company filings with the SEC to let Internet users search and use individual data from financial reports, such as net income, executive compensation or mutual fund expenses. Currently financial information is generally presented in the form of entire pages of data that can't easily be separated by people reviewing the data from PCs. 
The rationale for the SEC's initiative is that if investors have better information to make choices, the market will be improved because companies and mutual funds will have to improve performance to attract investor attention. "It's yet another example of something where you had a marketplace and a lot of information," Eggers said. "But now the SEC, which was holding a lot of different information, has the ability to improve the marketplace by putting the information online and nudging some of the mutual funds and companies to do so also." Local governments also collect a wealth of information that can help constituents make the most of their buying decisions. Nearly five years ago, Westchester County, N.Y., posted a gasoline price database on the Department of Consumer Protection's Web site, said John Gaccione, the department's deputy director. "The department always conducted a gasoline price survey -- it was a random sample -- and then we would release an average price," Gaccione said, noting that the county executive and the director of the Department of Consumer Protection wanted to modernize the existing process by putting the survey information online. "It [the site] also allows us to give trends in prices or show stations in a particular area or ZIP code so that a consumer is empowered with information and can make better choices, or choices that better fit their needs," Gaccione said. The department surveys 400 gas stations bi-monthly and informs consumers on which stations have the best prices. The online database also provides information about specials, such as "cents off" days, and stations that offer diesel fuel. Gaccione said people told the department they appreciate the service, recalling that some feedback indicated consumers were surprised by the database's availability. "It gave them access to information they didn't even know existed," he said, explaining the constituents' surprise. "Second, it allowed them instant access to that information. There are people, traveling salesmen, that if they know they can go to a certain place and gas up their car and save 10 cents a gallon, that's something they're going to rely on in the course of their everyday life or business." It's something people want from the public sector, he said. "You can get a sense that there's a growing expectation of, 'If government is collecting this data and it can help the average person, get it out there.'"
What is VoIP and What Can It Do for Your Business?

VoIP and IP telephony are becoming increasingly popular with large corporations and consumers alike. For many people, Internet Protocol (IP) is more than just a way to transport data; it's also a tool that simplifies and streamlines a wide range of business applications. Telephony is the most obvious example. VoIP, or voice over IP, is also the foundation for more advanced unified communications applications, including Web and video conferencing, that can transform the way you do business.

What is VoIP: Useful Terms

Understanding the terms is a first step toward learning the potential of this technology:
- VoIP refers to a way to carry phone calls over an IP data network, whether on the Internet or your own internal network. A primary attraction of VoIP is its ability to help reduce expenses, because telephone calls travel over the data network rather than the phone company's network.
- IP telephony encompasses the full suite of VoIP-enabled services, including the interconnection of phones for communications; related services such as billing and dialing plans; and basic features such as conferencing, transfer, forward, and hold. These services might previously have been provided by a PBX.
- IP communications includes business applications that enhance communications to enable features such as unified messaging, integrated contact centers, and rich-media conferencing with voice, data, and video.
- Unified communications takes IP communications a step further by using technologies such as Session Initiation Protocol (SIP) and presence, along with mobility solutions, to unify and simplify all forms of communications, independent of location, time, or device.

What is VoIP: Service Quality

Public Internet phone calling uses the Internet for connecting phone calls, especially for consumers. But most businesses use IP telephony across their own managed private networks, because it allows them to better handle security and service quality. Using their own networks, companies have more control in ensuring that voice quality is as good as, if not better than, the service they would previously have experienced with a traditional phone system.
Digital telephone systems are the most popular telephone systems in service today. While obsolete analog phone systems converted voice conversations (sound waves) into electrical waves, digital phone systems take the same voice conversations and convert them into a binary format, where the data is compressed into 1s and 0s.

More specifically, digital phone systems sample your voice using a method called Time Division Multiplexing, or TDM. When you speak into a digital telephone, your voice is digitally sampled into time slots so that a conversation doesn't have to use the entire bandwidth of a circuit. The system then uses a clock to synchronize the digital samples and turn them back into voice. Whereas analog telephone stations can only handle one conversation at a time, digital phone stations can compress more than one conversation, as well as a variety of phone system features, onto a single pair of wires.

The advantages of a digital telephone station over an analog telephone station include:

Clarity - While analog telephone stations offer richer sound quality than digital, the binary code used by digital telephone stations to transmit data keeps it intact, so the end transmission is distortion free. This results in clearer phone calls.

Increased Capacity - Digital telephone stations can fit more conversations on a single pair of wires than an analog station. This allows for less wiring and more efficient communication than an analog system.

More Features - Due to increased capacity, a digital telephone station can also fit more features on a single pair of wires. Mute, redial, speed dial, function keys, call transfers, voicemail, conferencing, and other features are all available through digital systems more resourcefully than with analog systems.

Longer Cordless Range - If you need a cordless telephone at your office, cordless phones with digital technology can apply more power to the signal and increase the range of the phone.

Each manufacturer produces digital telephone stations that use slightly differing TDM protocols, so be sure to speak with your phone installer before purchasing any equipment to make certain they are compatible with your current system.
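The time-slot idea behind TDM can be sketched in a few lines. The toy example below is not a real codec and the samples are arbitrary numbers; it only shows how samples from several conversations are interleaved into frames, one slot per channel, and then separated again at the far end:

```typescript
// Interleave one sample from each channel per frame: [ch0, ch1, ch0, ch1, ...]
function multiplex(channels: number[][]): number[] {
  const frameCount = Math.min(...channels.map((c) => c.length));
  const line: number[] = [];
  for (let t = 0; t < frameCount; t++) {
    for (const channel of channels) line.push(channel[t]);
  }
  return line;
}

// Recover each conversation by picking out its slot from every frame.
function demultiplex(line: number[], channelCount: number): number[][] {
  const channels: number[][] = Array.from({ length: channelCount }, () => []);
  line.forEach((sample, i) => channels[i % channelCount].push(sample));
  return channels;
}

const callA = [10, 11, 12];
const callB = [20, 21, 22];
const line = multiplex([callA, callB]);  // [10, 20, 11, 21, 12, 22]
console.log(demultiplex(line, 2));       // [[10, 11, 12], [20, 21, 22]]
```

Because each conversation only occupies its own recurring slot, several calls share one pair of wires, which is the capacity advantage described above.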
Black Box Explains…Sizing a UPS The power delivered by a UPS is usually expressed both in volt-amps (VA) and watts. There’s often confusion about what the difference is between these figures and how to use them to select a UPS. VA is power voltage multiplied by amps. For instance, a device that draws 5 amps of 120-volt power has a VA of 600. Watts is a measure of the actual power used by the device. VA and watts may be the same. The formula for watts is often expressed as: Watts = Volts x Amps This formula would lead you to believe that a measurement of VA is equal to watts, and it’s true for DC power. AC power, however, can get complicated. Some AC devices have a VA that’s higher than watts. VA is the power a device seems to be consuming, while watts is the power it actually uses. This requires an adjustment called a power factor, which is the ratio of watts to VA. AC Watts = Volts x Amps x Power Factor Watts/VA = Power Factor Simple AC devices, such as light bulbs, typically have a power factor of 100% (which may also be expressed as 1), meaning that watts are equal to VA like they are with DC devices. Computers have had a much lower power factor, traditionally in the 60–70% range. This meant that only part of the power going into the computer was being used to do useful work. Today, however, because of Energy Star requirements, virtually all computing devices are power factor corrected and have a power factor of more than 90%. Which brings us around to how to use this information to select a UPS. The capacity of a UPS is defined as both VA and watts. Both should be above the power requirements of the connected equipment. Because of the computers that had a low power factor, UPSs typically had a VA that was much higher than watts, for instance, 500 VA/300 watts. In this case, if you use the UPS with a power factor corrected device that requires 450 VA/400 watts, the UPS won’t provide enough wattage to support the device. Although UPSs intended for enterprise use now normally have a high power factor, consumer-grade UPSs still typically have a lower power factor—sometimes even under 60%. When using these UPSs, size them by watts, not VA, to ensure that they can support connected equipment.
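The arithmetic above is easy to put into code. The sketch below uses the article's own example figures (a 500 VA/300 watt UPS and a 450 VA/400 watt power-factor-corrected device); it checks a UPS against a load by both VA and watts, as recommended:

```typescript
interface PowerRating {
  va: number;
  watts: number;
}

// Power factor is the ratio of watts to VA.
function powerFactor(r: PowerRating): number {
  return r.watts / r.va;
}

// A UPS is adequate only if both its VA and wattage ratings cover the load.
function upsCanSupport(ups: PowerRating, load: PowerRating): boolean {
  return ups.va >= load.va && ups.watts >= load.watts;
}

const ups: PowerRating = { va: 500, watts: 300 };     // older consumer-grade unit
const device: PowerRating = { va: 450, watts: 400 };  // power-factor-corrected computer

console.log(powerFactor(device).toFixed(2));  // "0.89"
console.log(upsCanSupport(ups, device));      // false: enough VA, not enough watts
```

The failing check is exactly the sizing trap the article warns about: the UPS has spare VA but cannot deliver the wattage a modern, power-factor-corrected load requires.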
The new face of biometric access control
Passfaces' authentication tool relies on the brain's ability to know faces
By Michael Arnone - May 22, 2006

Getting computers to recognize human faces is a multibillion-dollar business. One small company, though, is approaching the authentication problem from another direction by using computers to take advantage of the human brain's facial-recognition ability.

Facial recognition is hardwired into the brain from infancy, said Paul Barrett, chief executive officer of Passfaces. The human brain can recognize familiar faces in 20 one-thousandths of a second without conscious effort, he said.

Passfaces' software works on any computer with a graphical user interface, Barrett said. It creates a "passface" using a sequence of three to seven faces. Users see the images of the faces mixed in with eight decoy faces in a three-by-three grid. They must click on the faces in the correct order to gain access. With passfaces, people don't have to worry about remembering passwords or carrying tokens, Barrett said.

People are wary of the company's approach, called cognometrics, because it's so novel, said Jonathan Penn, a principal analyst at Forrester Research. But passfaces offer a superior alternative to passwords for single-factor authentication and don't require physical tokens, which people can lose, he said. The incidence of people forgetting their passfaces is extraordinarily low, Penn said. After he signed up, he didn't return to the company's Web site for a month, yet he got all the faces right the first time, he said.

The faces are assigned at random so algorithms can't figure them out other than by brute-force attacks, Barrett said. Phishers would have to have a copy of all Passfaces' example faces to fool users, he said. For example, a user given five faces to remember has 9^5, or 59,049, possible combinations, meaning a 1 in 59,049 chance that someone else could randomly pick the same faces, Barrett said. That provides much more security than a four-digit PIN, which has only 10,000 possible combinations, he said.

The technology is immune to spyware, phishing and social engineering because remembering faces can't be shared the way words or numbers can, Penn said. "This is something I couldn't give away if I wanted to," he added.

Passfaces' software-only product suits situations in which people want to improve authentication without adding hardware, Penn said. "They don't want something as cumbersome or as expensive as a token or smart card," he said. It would also help business-to-business Web sites that deal with financial or other sensitive data.

The technology does have downsides, Penn said. It is less secure than a physical token and is susceptible to hackers who can look over the user's shoulder to see the faces the user picks. People worried about such "shoulder surfing" should use physical tokens that provide one-time passwords instead, he said.

Cognometrics could disrupt current ID technology, replacing the majority of passwords and PINs used in nearly all online transactions, Barrett said. Passfaces hopes to form partnerships with RSA Security, VeriSign, IBM and other companies to have them deploy its product as part of broader authentication solutions, Barrett said.
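Barrett's numbers are easy to verify. With nine faces shown per grid and one pick per round, the guess space is nine raised to the number of rounds; the short sketch below compares that with a four-digit PIN:

```typescript
// Number of equally likely sequences an attacker must guess.
function guessSpace(choicesPerStep: number, steps: number): number {
  return Math.pow(choicesPerStep, steps);
}

const passfaces = guessSpace(9, 5);  // five rounds, nine faces per 3x3 grid
const pin = guessSpace(10, 4);       // four-digit PIN

console.log(passfaces);  // 59049
console.log(pin);        // 10000
console.log(`A random guess succeeds 1 time in ${passfaces}.`);
```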
This story was originally published by Data-Smart City Solutions. Americans are dying at an unprecedented rate from opiate drug poisoning -- the national average stands at 78 fatal overdoses per day. Heroin and prescription painkillers are killing more people per year than car accidents, a signal that the epidemic has reached historic proportions. Forced to grapple with the ravaging effects of drug addiction and overdose in their communities, cities, counties, and states across the country are turning to data analytics and mapping to help alleviate the human toll and better direct critical resources to address the epidemic. Increasingly, the focus has shifted to tracking prescription drug providers, as it is further understood that the road to addiction and overdose can start with an innocent prescription filled for chronic pain or post-surgery -- the CDC estimates at least half of all fatal overdoses involve a prescription opiate. The National Association of Counties released an interactive map this summer that shows the staggering amount of overdoses nationwide and tracks opiate providers and filled prescriptions. The map, created by Esri, allows users to hone in on a specific county and compare numbers against national averages. The Pacific Northwest, for instance, has a higher-than-average rate of opiate prescriptions. The map also highlights individual counties that have particularly high rates of prescribing opiates. In Red Lake County, Minnesota, for example, 37 percent of the total annual prescriptions filled in the county are for opiates, considerably higher than the 5 percent state average for Minnesota. The interactive map also confirms one of the insights that has defined the present drug epidemic: overdose and addiction are no longer a problem confined to the inner city. Some of the highest rates of overdose, according to the map, are seen across Appalachia; West Virginia currently has the highest rate of overdose in the country. And places from Knoxville to Cincinnati and Salt Lake City are pinpointed with thick red circles on the map, signaling the density of fatal overdoses in those locations and the widespread reach of the epidemic. Prescription drug monitoring programs to track opiate prescriptions and attempt to prevent overprescribing have existed in states across the country for several years, but the present epidemic suggests that tactic alone is not enough. Combining the power of data mapping with predictive analytics, states and counties are now starting to identify “hotspot” neighborhoods in the most immediate need of resources and predict where the next swath of overdoses may happen. In January, the Massachusetts Department of Public Health began using predictive analytics to foresee and prevent future deaths. Using painkiller prescription and overdose data in combination with specifically designed algorithms, the state can more quickly and effectively allocate critical resources. Alleghany County in Pennsylvania has incorporated data from a variety of departments including the justice and mental health systems to develop a powerful strategy to target specific neighborhoods burdened by opiate overdose and addiction. The county can now track patterns resulting from analysis of cross-departmental data, dating from 2008 to 2014, to draw critical conclusions. This kind of analysis allows the county to recognize that at least 45 percent of fatal overdose victims during the eight-year period had a painkiller prescription filled within 90 days of their death. 
Another startling trend showed that 54 fatal overdoses over the eight years happened within a 30-day window of the victim’s release from the Alleghany County jail. The data-driven conclusions and trends shown by the Esri map and the neighborhood-level strategies in Massachusetts and Pennsylvania all drive one point home: help needs to be in the hands of those most at risk of overdosing. Treatment and prevention education are the only realistic solutions to the historic epidemic, and data mapping and predictive analytics represent powerful tools in government’s outreach toolbox. These kinds of strategies, in combination with the opening of state resources for easy public navigation in ways like the state of Virginia’s website and app, are sure to mark local, state, and federal policy for the duration of the opiate crisis.
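The Allegheny County finding, a painkiller prescription filled within 90 days of a fatal overdose, is the kind of cross-departmental join the article describes. A minimal sketch of that check follows; the record shapes, field names, and dates are hypothetical, not the county's actual data model:

```typescript
interface OverdoseDeath { personId: string; dateOfDeath: string; }
interface PrescriptionFill { personId: string; fillDate: string; drugClass: string; }

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Did this person fill an opioid prescription in the 90 days before death?
function filledWithin90Days(death: OverdoseDeath, fills: PrescriptionFill[]): boolean {
  const died = Date.parse(death.dateOfDeath);
  return fills.some((f) => {
    if (f.personId !== death.personId || f.drugClass !== "opioid") return false;
    const days = (died - Date.parse(f.fillDate)) / MS_PER_DAY;
    return days >= 0 && days <= 90;
  });
}

const deaths: OverdoseDeath[] = [{ personId: "p1", dateOfDeath: "2014-06-01" }];
const fills: PrescriptionFill[] = [
  { personId: "p1", fillDate: "2014-04-15", drugClass: "opioid" },
];

const flagged = deaths.filter((d) => filledWithin90Days(d, fills));
console.log(`${flagged.length} of ${deaths.length} deaths had a recent opioid fill.`);
```

In practice this kind of analysis runs over linked records from health, justice, and vital-statistics systems rather than two small arrays, but the join logic is the same.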
New practices have emerged for computer facility and mainframe network cabling, mainly in the following respects:

1) In the host server room, switches, routers, and their connecting links have reached gigabit standards.
2) For fiber jumper connections, the SC interface has largely been eliminated: SC-SC jumpers are no longer used, and SC-LC jumpers are used only with caution on some older switches. The LC interface is now the mainstay, and as fiber network cards have been replaced, LC-LC jumpers have become the norm.
3) UTP cable is no longer used for these links; connections run over fiber to the access-layer switches, with Category 6 jumpers used where copper patching remains.
4) ST fiber jumpers are still in use, but only over long distances and in conjunction with fiber couplers.

The simpler the cabling in the host room, the more stable and manageable it is, but it still pays to understand the common fiber interfaces and jumpers.

Common connector and polish markings:
FC: round, threaded (the most widely used on the MDF)
SC: square, snap-in (mostly used on switches and routers)
PC: micro-spherical polished end face
APC: end face polished at an 8-degree angle
MT-RJ: square, carrying a fiber pair in a single connector (used on the Huawei 8850)

Fiber optic modules (generally hot-swappable):
GBIC: Gigabit Interface Converter, using SC or ST optical interfaces
SFP: small form-factor pluggable, a compact version of the GBIC, using LC fiber connectors
Single-mode: L, 1310 nm wavelength; LH, long-distance single-mode, 1310/1550 nm
Multimode: SM, 850 nm wavelength

Fiber optic interfaces

A fiber optic interface is the physical interface used to connect fiber optic cables.

ST (originally from AT&T) is the most common connector in multimode networks. It has a bayonet coupling and a 2.5 mm cylindrical ceramic or polymer ferrule that holds the fiber. ST is sometimes read as "Stab & Twist," a vivid description of how it is inserted and then tightened.

FC is the most common connector in single-mode networks. It also uses a 2.5 mm ferrule, but early FC designs embedded the ceramic in a stainless steel holder. In most applications today, FC has been replaced by SC and LC connectors. FC stands for Ferrule Connector, reflecting its metal outer sleeve, which is fastened with a threaded coupling.

SC also has a 2.5 mm ferrule. Unlike ST and FC, it is a push-pull connector, widely used for its good performance. It is the connector standardized in TIA-568-A, though it was slow to catch on at first because it initially cost about twice as much as ST. SC is sometimes read as "Square Connector" because of its square shape.

Common fiber jumper types

Fiber jumpers matter because they are the links that connect switching devices and servers; without a suitable jumper, the service will not come up. The main jumper types are:

LC-LC: LC plugs into the SFP (mini-GBIC) modules used in routers.
FC-SC: the FC end connects to the fiber distribution rack, and the SC end to a GBIC.
ST-FC: for 10Base-F connections the connector is usually ST, with the other end connecting to an FC port on the fiber distribution rack.
SC-SC: both ends connect to GBICs.
SC-LC: one end is for a GBIC, the other for an SFP.

Pigtail and polish markings

On pigtail connector labels you will often see markings such as "FC/PC" or "SC/PC."

The part before the "/" indicates the connector type. SC is a standard square connector made of engineering plastic, which tolerates heat well and does not oxidize easily; the optical interfaces on transmission equipment generally use SC. LC looks similar to SC but is smaller. FC is a metal connector, usually used on the ODF side; metal connectors withstand more mating cycles than plastic ones. Beyond these three, there are other connector types such as MT-RJ, ST, and MU.

The part after the "/" indicates how the connector end face is ground, that is, its polish. PC, with a flat end face, is used widely on telecom operators' equipment. UPC produces less back reflection than PC and is generally used for equipment with special requirements; some manufacturers use FC/UPC jumpers inside the ODF itself, mainly to improve the ODF's own specifications. APC, with its 8-degree angled end face, saw its widest use in early CATV broadcasting, where it improved the quality of television signals. The reason is that TV signals use analog optical modulation: when the coupling face is perpendicular, reflected light travels straight back along the original path and, because of refractive-index variations at the coupling surface, part of it eventually returns to the receiver. The reflected energy is tiny, but an analog signal cannot completely reject this noise, so the picture effectively carries a weak, delayed copy of itself, which shows up on screen as ghosting. With an angled end face, the reflected light cannot return along the original path. Digital signals generally do not suffer from this problem.

To summarize the conventions: the "SC" in a label such as "SC/PC" means the pigtail uses an SC connector. Optical interfaces on transmission equipment generally use SC connectors, which are engineering plastic with good thermal stability and resistance to oxidation, while optical interfaces on the ODF side generally use FC, a metal connector. The ODF has no temperature concerns, metal connectors withstand more insertions and removals, and ODF pigtails are re-plugged far more often during maintenance than equipment pigtails.
We used to say that all data has a decaying value; the further away from its creation date it gets, the less valuable that data becomes. Compliance and regulatory requirements as well as big data analytics and archive have changed that. We now have to assume that all data will become valuable again -- we just don't know which data or when. If decades from now your grandchildren check into a hospital, the doctors might want to access your medical records. They need them quickly and they better be readable. In theory, these archiving needs strengthen the position of many disk-based object-storage vendors. Their systems can provide data durability as well as quick access and cost effectiveness when compared to primary storage. The problem is that object storage is not as inexpensive as tape storage nor is it as power efficient. [ Learn more about archiving schemes. Read Find The Right Data Archive Method. ] Because we are talking about potentially storing all data for decades, we need to do everything we can, without putting data at risk, to reduce the overall storage cost of the system. After all, those records won't do you any good if the hospital can't afford to keep the system that stores them powered on and up-to-date. However, before we turn over all archive data to the object storage vendors, there is a part of that "all data has a decaying value" theory that is still applicable. It's this: All data has a decaying speed at which it needs to be accessed. Using our medical example above, the doctors might need to access your medical records 50 years from now, but they probably don't need to have them in seconds. They can probably wait a minute or two. As I noted in my article "Comparing LTO-6 to Scale-Out Storage for Long-Term Retention," in these situations tape is an ideal storage type. Data on tape can still be automatically scanned for durability and it certainly meets the cost-effectiveness requirements. What surprises most people that are either new to tape or have forgotten about it is how quickly a modern tape library can deliver data. In most cases access takes less than a minute; in the worst case it is two to three minutes. Understanding The Data Access Decay Rate The speed at which you need to have data returned to primary storage will depend on the needs of the business. Because the predictable response to, "How long can you wait?" is, "I need it now," it is important to make sure that business line managers understand the value of waiting. If they understand that waiting two minutes could save the organization $2 million a year in storage expenses, waiting sounds much more attractive. In almost every case the durability of the data is far more important than the speed at which it can be recovered. I typically suggest a blended strategy: As little primary storage as possible, a reasonable amount of object/archive storage, and a hefty amount of tape. The amount of object/archive disk storage will be driven by your data access decay rate. For many organizations that might mean keeping all data on object storage for three to five years. For almost all organizations, longer-term retention should be on tape. This blended strategy gives the right balance between access, affordability and durability.
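The blended strategy can be reduced to a simple placement rule driven by the data-access decay rate. The sketch below is only an illustration; the thresholds are placeholders that should come from the business-line discussion described above, not fixed recommendations:

```typescript
type Tier = "primary-disk" | "object-archive" | "tape";

interface DataSet {
  name: string;
  ageInYears: number;           // time since last meaningful access
  maxRetrievalSeconds: number;  // delay the business line has agreed to tolerate
}

// Placeholder policy: hot data stays on primary storage, data needed within
// seconds sits on object storage for a few years, everything else moves to tape.
function chooseTier(d: DataSet): Tier {
  if (d.ageInYears < 1) return "primary-disk";
  if (d.ageInYears < 5 && d.maxRetrievalSeconds < 60) return "object-archive";
  return "tape";
}

const medicalRecords: DataSet = {
  name: "imaging-2004",
  ageInYears: 10,
  maxRetrievalSeconds: 180,
};
console.log(chooseTier(medicalRecords)); // "tape": a two-minute wait is acceptable here
```

The point of writing the policy down this way is that it makes the trade-off explicit: loosening the acceptable retrieval time is what moves data onto the cheapest tier.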
Tweets come with all kinds of actionable elements inside them these days: shortened URLs, #hashtags, @replies, and the actual message body. When you're consuming a list of tweets, it's important to be able to parse these elements out of the plain text so that they can be turned into hyperlinks and directed at the proper location.

Take two common examples where this technique comes in handy:

I) Displaying a tweet on a webpage.
II) Displaying a tweet in an iOS application.

In both cases you want the user to interact with the tweet if actions are available. With iOS this usually involves loading the tweet into a UIWebView so that you can work with multiple links within a single tweet. As is generally the case, there is more than one way to skin this cat.

Option 1: Parse the message body using regular expressions

Once you've retrieved the tweet message text in a variable, you can convert the plain text into hyperlinked action items by chaining parsing functions over it (a sketch of such functions appears below):

tweetText = tweetText.parseURL().parseUsername().parseHashtag();

Option 2: Reconstruct the message body using the entity array

The second option uses a separate array of elements returned by the Twitter API in conjunction with the message text. The entity array gives you the actionable items (links, hashtags, mentions, media) and their insertion points within the text body.

For a more robust library, check out the twitter-text-js project on GitHub, which also supports international character sets and handles a wide variety of common Twitter text-processing tasks.
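The original Option 1 helper functions were not preserved here, but they are easy to approximate. The sketch below uses deliberately simplified regular expressions that ignore the edge cases and international character sets twitter-text-js handles, and the hashtag search URL is illustrative; it simply turns URLs, @usernames, and #hashtags into anchor tags in the same order as the chained call above:

```typescript
// Simplified patterns; real tweet parsing has many more edge cases.
function parseURL(text: string): string {
  return text.replace(/\bhttps?:\/\/\S+/g, (url) => `<a href="${url}">${url}</a>`);
}

function parseUsername(text: string): string {
  return text.replace(
    /@(\w+)/g,
    (_m, user) => `<a href="https://twitter.com/${user}">@${user}</a>`
  );
}

function parseHashtag(text: string): string {
  return text.replace(
    /#(\w+)/g,
    (_m, tag) => `<a href="https://twitter.com/search?q=%23${tag}">#${tag}</a>`
  );
}

let tweetText = "Reading http://example.com with @someone #storage";
tweetText = parseHashtag(parseUsername(parseURL(tweetText)));
console.log(tweetText);
```

Applying the URL pass first, then usernames, then hashtags keeps the later passes from rewriting text inside anchors that earlier passes created.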
An Internet Explorer Cookie Forensic Analysis Tool.

Many important files within Microsoft Windows have structures that are undocumented. One of the principles of computer forensics is that all analysis methodologies must be well documented and repeatable, and they must have an acceptable margin of error. Currently, there is a lack of open source methods and tools that forensic analysts can rely upon to examine the data found in proprietary Microsoft files.

Many computer crime investigations require the reconstruction of a subject's Internet Explorer cookie files. Since this analysis technique is executed regularly, we researched the structure of the data found in the cookie files. Galleta, the Spanish word for "cookie", was developed to examine the contents of those files. The foundation of Galleta's examination methodology will be documented in an upcoming whitepaper.

Galleta parses the information in a cookie file and outputs the results in a field-delimited format so that they can be imported into your favorite spreadsheet program. Galleta is built to work on multiple platforms and will execute on Windows (through Cygwin), Mac OS X, Linux, and *BSD.

Usage:
  galleta [options] <filename>
Options:
  -t  Field delimiter (TAB by default)

Example:
  [kjones:galleta/galleta_20030410_1/bin] kjones% ./galleta antihackertoolkit.txt > cookies.txt

Open cookies.txt as a TAB-delimited file in MS Excel to further sort and filter your results.
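Because the output is delimited text, it is also easy to post-process outside a spreadsheet. The sketch below makes no assumptions about Galleta's actual column names or layout (they are not documented here, and the sample rows are invented); it only shows splitting tab-delimited output into rows of fields for further filtering:

```typescript
// Split tab-delimited output into rows of fields.
// Column meanings depend on Galleta's output format, which is not reproduced here.
function parseDelimited(output: string, delimiter = "\t"): string[][] {
  return output
    .split(/\r?\n/)
    .filter((line) => line.trim().length > 0)
    .map((line) => line.split(delimiter));
}

const sample = "SITE\tVARIABLE\tVALUE\nexample.com\tsession\tabc123"; // invented rows
const rows = parseDelimited(sample);
console.log(rows.length, "rows,", rows[0].length, "fields per row"); // 2 rows, 3 fields per row
```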
Primer: Storage Partitioning
By Kevin Fogarty | Posted 2004-03-01

What is it? A set of digital instructions that makes it easier to manage the increasingly large amounts of data found on farms of inexpensive disk drives. At its simplest, it's an administrative technique for dividing a disk or an array of disks into clearly defined chunks of storage space that can each be assigned to one or more servers. The phrase itself is rarely used, however, because it's so integral to other storage-management schemes; usually you'll hear about "storage consolidation" instead.

Why does it matter? Because storage capacity is increasing faster than the ability to manage it. Without partitioning, you can forget about managing your data library efficiently.

Companies consolidate storage to get rid of the little pools of disk space attached to each server. While it's good to have storage for each machine, it's not always easy to predict the amount of disk space each will need. A server in Finance, for example, might connect to a drive with 150 gigabytes of disk space but only take up 100GB with its data. Managers in Operations, however, might be asking for 50 gigabytes of additional space after maxing out the 200-gigabyte disk in their own server. It's very hard, however, to give Operations access to Finance's excess capacity, so the company ends up buying an additional 50GB for Operations while still not using the 50GB of empty disk in Finance.

In consolidating, companies throw out storage attached to individual servers in favor of big honking arrays accessible to all the servers. That big pool of storage space can then be divided, by the operating software in the arrays, into chunks that can be assigned to each server. When a server needs more space, administrators can expand its chunk by changing the numbers in a configuration screen, without physically touching either the array or the server. "The theory is that you get economies of scale by going to a larger subsystem, but you can still divide it up into smaller pieces that are usable," says Dianne McAdam, senior analyst at Data Mobility Group in Nashua, N.H.

Managing the access rights of the servers to these arrays of disks gets complicated as the number of disks and servers increases. Storage area networks (SAN) and network-attached storage (NAS) systems have emerged to make it easier to create ever-larger pools of storage that are still manageable. NAS boxes, as their name implies, attach to a network and add a lot of capacity, as much as several trillion bytes of data. Until recently, however, NAS boxes had a limited ability to expand, meaning that when one was maxed out, the owner had no choice but to buy another box and reassign servers so that some would use the first box and some the second. In general, a company that loves its first and second NAS boxes loathes its tenth because of the additional work required to manage each.

Newer NAS boxes can connect more easily to one another to share space and ease administration, but not as easily as storage area networks, which are designed specifically to link many storage devices into one enormous pool that can range into the tens of trillions of bytes. Setting up those connections on the SAN controller is complex, however, because administrators have to assign a port on the SAN controller to each server, as well as the ports the storage arrays can use to respond.
NAS and SAN systems, long incompatible, are coming together rapidly as it becomes clearer that large companies need the drop-in usability of a NAS box with the flexible capacity of the SAN. The two are merging, but the process is far from complete. The next major storage-system update, which will trickle down from the most expensive systems available today, is likely to be the ability to prioritize requests so the most time-sensitive applications get data first.
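The Finance/Operations example earlier in this primer can be sketched as a simple pool allocator: one large array carved into chunks that grow by changing numbers rather than touching hardware. The sketch below is an illustration of the concept only, not any vendor's management interface:

```typescript
class StoragePool {
  private allocations = new Map<string, number>(); // GB assigned per server

  constructor(private capacityGB: number) {}

  get freeGB(): number {
    let used = 0;
    for (const gb of this.allocations.values()) used += gb;
    return this.capacityGB - used;
  }

  // Assign or grow a server's chunk, as an administrator would from a config screen.
  allocate(server: string, gb: number): boolean {
    if (gb > this.freeGB) return false;
    this.allocations.set(server, (this.allocations.get(server) ?? 0) + gb);
    return true;
  }
}

const pool = new StoragePool(350);  // one consolidated array instead of per-server disks
pool.allocate("finance", 100);      // Finance uses 100 GB of its former 150 GB disk
pool.allocate("operations", 200);   // Operations had maxed out its 200 GB disk
console.log(pool.allocate("operations", 50), pool.freeGB); // true 0: spare capacity is reusable
```

The point of the example is the last line: the 50 GB that would have sat idle on Finance's dedicated drive can be handed to Operations without buying another disk.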
There are many industrial applications for fiber optic cable. Optical fiber is a thin strand of glass or plastic, about as thin as a human hair, through which data can propagate as light. When many fibers are bundled together, they form a cable that can carry information and signals.

Optical fiber is widely used in the telephone and telecom industries, and optical lighting is indispensable in medical, aerospace, and military applications. Other systems, such as intrusion detection alarms, sense movement through changes in the light carried over optical fiber. Thanks to their large data-carrying capacity, fiber cables are particularly important in local area networks (LANs), and applications such as machine vision lighting are also enabled by optical lighting. A major advantage of these cables is their lower cost compared with traditional copper wire. Here are some other important advantages of fiber optic cable:

Long-distance data transmission
High bandwidth can be achieved even over long distances using fiber cable. Fiber can carry critical signals without loss of data, and it does not get jammed, making it well suited to mission-critical operations such as carrying flight signals.

Immunity to electromagnetic interference
Since these cables use light, not electricity, to transmit signals, electromagnetic interference does not usually affect the data transmission process.

Secure data transmission
Electromagnetic interference (EMI) can also lead to data leaks, a potential threat for sensitive data transfers. It is not always possible to shield a copper wire, and even shielding cannot guarantee complete security. An optical cable, by contrast, has no external electromagnetic field, so tapping the signal is not easily achieved. This makes optical cable a preferred component for the secure transmission of sensitive data.

No spark hazard
Electrical wiring must constantly be safeguarded against potential spark hazards. That isn't the case with fiber optic cables, which are inherently safe: signals sent over them do not spark. This attribute is especially significant in industries such as chemical processing and oil refining, where the risk of explosion is high.

No heat issues
Fiber optics can carry light without producing significant heat, so fiber optic cables are safe to use in surgical probes inserted into a patient's body to examine internal organs. The same cables are used during surgery to relay laser pulses. With no heat or shock hazard, they are safe to use in the most critical procedures, and this attribute also makes optical cables safe for machine vision lighting applications.

These are a few fundamental advantages of optical cables. A professional optical cable manufacturer can discuss several other benefits with you, along with products such as LSZH cable and armored fiber cable for all your industrial applications.
<urn:uuid:c21312c1-1747-42b0-bb2d-38db8fe96aed>
CC-MAIN-2017-09
http://www.fs.com/blog/the-industrial-purpose-of-using-fiber-optic-cables.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00408-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937074
576
3.21875
3
From New York to California, state and local governments are taking steps to educate citizens on the plethora of dangers in the online world. This is the third annual National Cyber Security Month, which was started by the National Cyber Security Alliance (NCSA) with the mission to "stay safe online." Many dangers await online, from children being exposed to inappropriate content or being solicited over instant messaging, to identity thieves using rootkits, worms or malware. People must be extremely careful when entering personal information on even familiar Web sites because they could be phishing sites. But the question remains: is one month enough to educate the public? Some have questioned the government's actions, calling this nothing more than a reason to spend more tax money. But with any cause, it is only through consistency and persistence that anything can be accomplished. And although federal and state governments are involved, much of the effort to educate people is coming from grassroots endeavors. Take, for example, The Guardian Angels, working in the schools of New York State to educate both parents and children about Internet safety. The group is partnering with the New York State Office of Cyber Security & Critical Infrastructure to create a "Strike Force on Cyber Safety." Change needs to be a collaborative effort, not just a governmental one. But the government is doing its part. Different plans have been enacted, steps taken, departments created. News coverage of the various initiatives has been wide, bringing to light the diversity of efforts. Many attorneys general, such as Arkansas Attorney General Mike Beebe and Illinois Attorney General Lisa Madigan, have released consumer alerts about phishing and other Internet scams. New York Governor George Pataki signed a proclamation earlier this month recognizing October as Cyber Security Awareness Month. In Illinois, Governor Rod R. Blagojevich created an Internet Crimes Unit dedicated solely to combating online crime such as identity theft. Many different organizations and groups, including NASCIO and the Cyber Security Industry Alliance, have spoken up in favor of promoting cyber security. Forty-two attorneys general signed a declaration in support of the goals and ideas being promoted during Cyber Security Awareness Month. Colleen Pedroza, state information security officer in the California State Office of Technology Review, Oversight, and Security, explained that her office has released Internet safety video clips, as well as pamphlets and a newsletter on Internet safety. Much of this information will be passed on to various California counties. "We are excited about this month being National Cyber Security Month," Pedroza said. California also held a Cyber Security Summit, which focused on keeping children safe online. Governor Schwarzenegger, in his address at the summit, called for the building of "stronger partnerships between governments and between the private and public sectors, between law enforcement and everyone, to fight cyber crime." Online safety is more than just watching who children are chatting with, or keeping Social Security numbers close to the vest. It is about becoming aware of online actions, trends and habits. It is about common sense, and it is a collaborative effort. Taking a month to increase awareness will only be successful if people remember all the tips and guidelines and, more importantly, practice them.
<urn:uuid:fecb782d-acc5-4a9d-afcd-fdd4391c320e>
CC-MAIN-2017-09
http://www.govtech.com/security/A-Different-Type-of-October-Scare.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00408-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955266
636
2.796875
3
FileGen is a command-line program to create test files of different lengths. FileGen takes 2 or 3 parameters: filegen file size [byte]
- filegen test 1000 creates a file named test that is 1000 bytes long; the bytes are random
- filegen test 1000 0 creates a file named test that is 1000 bytes long, with every byte set to zero
The size and byte can be specified in hexadecimal notation, like this: filegen test 0x100 0xA0
When you create a random byte sequence, C's pseudo-random number generation function rand is seeded with the current time (srand(time(NULL))). This means that the generated byte sequence is different each time you run the command. The algorithm is not optimized for speed. FileGen doesn't test whether the generated file already exists; it will be overwritten without warning. Nor does it test whether you have the required disk space to create the file. Generating a 1,000,000,000-byte random file takes 95 seconds on my 2 GHz machine. Compiled with Borland's free C++ 5.5 compiler.
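The original source code isn't reproduced here, but the behavior described above is simple enough to sketch. The following C program is an illustration, not Didier Stevens' actual implementation; in particular, parsing the size and byte arguments with strtoul in base 0 (which accepts both plain decimal and 0x-prefixed hexadecimal) is an assumption about how the hex notation could be handled.

```c
/* filegen_sketch.c - minimal re-creation of the documented FileGen behavior.
 * Build (any C compiler should do): cc filegen_sketch.c -o filegen */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char *argv[])
{
    if (argc < 3 || argc > 4) {
        fprintf(stderr, "usage: filegen file size [byte]\n");
        return 1;
    }
    unsigned long size = strtoul(argv[2], NULL, 0);  /* base 0: decimal or 0x hex */
    int fixed = (argc == 4);                         /* was a fill byte given?    */
    int byte  = fixed ? (int)strtoul(argv[3], NULL, 0) : 0;

    FILE *f = fopen(argv[1], "wb");   /* overwrites an existing file, like the original */
    if (f == NULL) {
        perror(argv[1]);
        return 1;
    }
    srand((unsigned)time(NULL));      /* random content differs on every run */
    for (unsigned long i = 0; i < size; i++)
        fputc(fixed ? byte : (rand() & 0xFF), f);

    fclose(f);
    return 0;
}
```

Run against this sketch, filegen test 0x100 0xA0 would produce a 256-byte file in which every byte is 0xA0, matching the usage shown above; like the tool it imitates, it performs no free-disk-space check.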
<urn:uuid:3a5516a3-65fc-4656-a188-049d6e21514c>
CC-MAIN-2017-09
https://blog.didierstevens.com/programs/filegen/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00528-ip-10-171-10-108.ec2.internal.warc.gz
en
0.656326
251
3.3125
3
Stratasys today unveiled two industrial 3D printers that can build objects of virtually any size using materials such as carbon fiber for lighter weight and stronger parts. The printers were designed to address the requirements of aerospace, automotive and other industries by being able to build completed parts with repeatable mechanical properties. The 3D printers are not yet commercially available, but Stratasys is working with Ford Motor Company and Boeing to develop new applications for them. The two new machines include the Infinite-Build 3D Demonstrator and the Robotic Composite 3D Demonstrator. Because both printers use a "screw" or "worm" drive filament extruder, they're able to print with composite materials, such as carbon fiber, which don't shrink or warp as much as traditional thermoplastics. Typically, fused deposition modeling (FDM) 3D printers press a polymer filament through a pair of wheels or gears and out of a heated extruder head, layer by layer. A screw extruder winds the filament through the head, which increases the flow pressure needed for extruding composite materials. Stratasys' Infinite-Build 3D Demonstrator uses a horizontal build platform versus a traditional vertical platform to create printed objects. By turning the platform horizontally, the machine can build parts sideways, which translates into a build area that's only limited by the space a manufacturer has. "This gives us the capacity for making much larger parts and to gain much lighter assembly," said Ellen Lee, technical leader at Ford's additive manufacturing research facility. 3D printing gives Ford design freedom it cannot get from traditional manufacturing methods, such as injection molding, Lee said. With injection molding, one automobile or truck part often consists of many smaller pieces that must be joined together, creating less strength and greater weight. "With the increased print size, we could consolidate many large parts into one," Lee said. "We're producing parts that are now measured in feet rather than inches," said Richard Garrity, president of Stratasys Americas. Aerospace giant Boeing also played an influential role in defining the requirements and specifications for the demonstrator 3D printers, Stratasys said. Boeing is currently testing an Infinite-Build 3D Demonstrator to explore the production of low-volume, lightweight parts. "Additive manufacturing represents a great opportunity for Boeing and our customers, so we made a strategic decision more than a decade ago to work closely with Stratasys on this technology," Darryl Davis, president of Boeing Phantom Works, said. "We are always looking for ways to reduce the cost and weight of aircraft structures, or reduce the time it takes to prototype and test new tools and products so we can provide them to customers in a more affordable and rapid manner." By flipping an FDM 3D printer on its side, Stratasys was not only able to increase the size of parts, but also to achieve speeds 10 times or more greater than traditional FDM technology, according to Garrity. Stratasys' other new machine, the Robotic Composite 3D Demonstrator, uses a material extruder at the end of a robotic arm that can maneuver along five axes. The printer also has a robotically controlled print platform that can move along three axes. Combined, the 8-axis robotic printer enables extremely precise builds from virtually any angle, reducing the need for the typical support materials that have dramatically slowed 3D parts production.
Just as with a desktop 3D printer, the new Robotic Composite 3D Demonstrator is an FDM model and uses a heated extruder head to melt and deposit multiple layers of polymers to create an object. Once the basic part is constructed, a second, smaller extruder head on the robotic arm can be used to create fine details. A major advantage of the Robotic 3D printer is the ability to print across fused layers, adding strength to a part. For example, traditional vertical FDM 3D printers construct an object using a process that creates ribbing that is more easily broken. Multi-axis printing allows the composite materials to move right to left, up and down or diagonally, adding strength. "Unfortunately, composites production is constrained by labor-intensive processes and geometric limitations," Stratasys said in a statement. "The Robotic Composite 3D Demonstrator...redefines how future lightweight parts will be built, and provides a glimpse into how this technology could be used to accelerate the production of parts made from a wide variety of materials." This story, "Stratasys unveils mega, robotic 3D printers to build large parts," was originally published by Computerworld.
<urn:uuid:eb489bb3-9bc4-46f8-af06-d231a49bf82e>
CC-MAIN-2017-09
http://www.itnews.com/article/3112144/3d-printing/stratasys-unveils-mega-robotic-3d-printers-to-build-large-parts.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00524-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93023
967
2.546875
3
Wikimedia wants today's voices to echo through the ages. The non-profit foundation responsible for Wikipedia and other crowd-sourced online resources announced recently that it would add voice samples from notable people to their respective Wikipedia biographies. The new project is called the "Wikipedia voice intro project" (WikiVip) and seeks to catalog the voices of the world's famous (and not so famous) people immortalized with their own pages in the online encyclopedia. The idea behind WikiVip is to let "Wikipedia's readers know what they sound like and how to correctly pronounce their names," according to the Wikimedia UK blog. For several years, Wikipedia has added short sound files that offer proper pronunciation of a subject's name. But WikiVip is the first time the online encyclopedia is seeking out original voice samples for its Wikipedia biographies. When available, you will find WikiVip clips at the bottom of the Infobox (the summary section in the upper right-hand corner) on each Wikipedia page. Wikimedia hopes that anyone with a Wikipedia page--whether they are Hollywood superstars, renowned scientists, or lowly journalists--will contribute a short, 10-second sound recording of their voice. Relying on the thousands of people with Wikipedia entries to voluntarily submit voice samples is probably not the best strategy to improve Wikipedia as a resource, so the Wikimedia Foundation is also proactively seeking out voice samples through other programs such as the BBC Voice Project. Working together with the British broadcaster, Wikimedia is hoping to add over three hundred voice samples to the online encyclopedia. Wikimedia says this is the first time the BBC has openly licensed content from its broadcast archives. So far, 133 of the more than 300 voice samples have been uploaded by the BBC for insertion into Wikipedia articles. The BBC's contributions include samples from notable people such as the inventor of the World Wide Web, Tim Berners-Lee, Sherlock star Benedict Cumberbatch, author John Updike, and Myanmar politician and Nobel Peace Prize winner Aung San Suu Kyi. While Wikimedia is off to a good start with WikiVip, it has a long way to go before the majority of biography pages include a voice recording. At this writing, there were fewer than 200 voice recordings available on Wikipedia via the WikiVip project--you can find all the voice recordings on Wikimedia.org. This story, "Wikipedia Adding Voice Recordings to Famous People's Pages," was originally published by PCWorld.
<urn:uuid:485ee1bf-7ba0-4b6a-b85c-8a7df243b00b>
CC-MAIN-2017-09
http://www.cio.com/article/2379245/internet/wikipedia-adding-voice-recordings-to-famous-people-s-pages.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00224-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925798
503
2.578125
3
Autonomous Vehicles Drive Forward
By Samuel Greengard | Posted 2015-03-24
The self-driving car raises a critical question: How do we approach and manage machines that do things better than humans do? Do we value safety over choice? In case you missed it, Tesla Motors Chairman Elon Musk is back in the news. His most recent prediction: driving may eventually be banned. The reason? Computers do a much better job of driving vehicles than humans could ever do. "It's too dangerous," Musk noted. "You can't have a person driving a two-ton death machine." At this point, I'm not sure anyone is ready for cars that drive without any human intervention, but we are certainly careening toward autonomous vehicles. Already, a growing number of manufacturers are including LIDAR (Light Detection and Ranging)-based collision avoidance systems, and some cars can park themselves. Meanwhile, Google's self-driving car has logged more than 700,000 flawless miles, and Tesla (surprise!) has developed a self-driving vehicle, the Model S. With all the technology on board and the growing problem of distracted driving, automating functions is a good idea. Over the short term, let's hope that in-vehicle systems and interfaces such as Apple's CarPlay and Google's Android Auto make things safer by getting phones out of motorists' hands, making it easier to control functions without fiddling with buttons and dials, and, in the end, helping everyone keep their eyes on the road. Yet, even then, you're still stuck with horrible drivers who are perpetually in a hurry and are driving recklessly--something that seems to be in abundance these days. There's also the reality that even the best controls and interfaces can't stop some people from creeping into the distraction zone, and safer cars actually cause some motorists to take greater risks, including driving faster. To be sure, all roads lead to semi-autonomous and autonomous vehicles--perhaps in a decade or two. However, the technology must advance further, bureaucrats and politicians need to move faster, and security must get better. Then there's the human element. A lot of motorists somehow equate driving with freedom and want the control of having a steering wheel in hand. No amount of logic or research is likely to change their minds--though radically different configurations and designs might. In fact, Musk later spun into reverse and tweeted that he wasn't advocating outlawing humans from driving. Of course, the bigger question is: How do we approach and manage machines that do things better than humans do? The same questions and scenarios will play out across a wide swath of industries and situations. Do we value safety and lives over choice? Do we value cost and efficiency over the feeling--and sometimes the illusion--that we're in control?
<urn:uuid:727c29d9-479c-4d5a-892f-0c88aa34cd6e>
CC-MAIN-2017-09
http://www.baselinemag.com/blogs/autonomous-vehicles-drive-forward.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00576-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961431
579
2.84375
3
Technology firms wowed us in 2011, delivering tablets, ultrathin laptops, innovative cloud services, and voice command digital assistants. Not so long ago, the technology underlying these products was nothing more than research and development projects. So, in an effort to peek into our not-so-distant tech future, here's a glimpse at ten promising projects percolating in tech research labs. I'll take you behind the scenes at Microsoft, IBM, and university labs where researchers are experimenting with a range of fascinating technology that may appear in future consumer products. Look forward to 3D images you can "touch," touchscreens that get sticky, robotic astronauts, and -- yes -- even flying cars. Scientists at Microsoft Research, a global R&D arm of Microsoft with 300 researchers and engineers, are working on a project called HoloDesk that lets your hands interact with three-dimensional virtual objects. The device uses an overhanging screen to project a 2D image through a beam splitter into the viewing area. A user's hands and face are tracked via Microsoft's Xbox Kinect technology and a webcam, to help keep the holographic illusion in sync with the user's physical spatial relationship to the viewing area. Named after the fictional simulated reality facility on the TV show Star Trek: The Next Generation, Microsoft's version of the HoloDesk is designed to deliver holographic-based board games, remote collaboration tools, and advances in telepresence. Another Microsoft Research project called PocketTouch is being developed with the goal of enabling users to manipulate a touch device through clothing and other materials. The idea with PockTouch is that you could use finger gestures to control the cell phone in your pants pocket. With a flick of a finger, you could send a call to voice mail, skip a song playing on the phone, or send canned text replies to inbound messages. PocketTouch technology uses a capacitive sensor mounted on the back of your touch device that Microsoft says lets you navigate your pocketed phone using gestures without ever having to remove it from its case. PocketTouch is currently in an early development phase, and, as you can see from the video, it is not yet user friendly. Another Microsoft Research project, Vermeer, centers around 3D images that respond to touch. Vermeer uses two facing parabolic mirrors to create a glasses-free 3D image that you can touch. In Microsoft's demonstration video, Vermeer projects an image of a person that moves when "touched" by a finger. Microsoft says it creates Vermeer using 2880 images per second with a refresh rate of 15 frames per second. And just like the HoloDesk, Vermeer also uses a Kinect camera to track the user's fingertips as they interact with the virtual image. The most famous research project of 2011 is IBM's Watson, a super computer designed to use artificial intelligence to play the TV game show Jeopardy by processing natural language queries. After thrashing its human opponents on television in February, Watson was routed to more practical applications. Now, Watson provides its artificial intelligence to help doctors decide on optimal treatments for cancer patients. Watson parses data comparing symptoms with cures, and finds the most effective treatment for individuals. What's next for Watson? IBM is turning Watson's attention to the business world where the technology is being used to conduct real-time analytics for financial institutions and also to spot fraud within large bureaucracies. 
Researchers at the University of British Columbia in Vancouver are working on a new kind of haptic feedback for touchscreens. Programmable friction uses small mechanical discs to make the display of a tablet or smartphone vibrate so that it feels more or less "sticky" depending on how you are interacting with objects on the screen, according to a report in NewScientist. One practical use of this technology could be moving folders around a desktop using touch. Friction on the screen increases when you hover over a folder to make it an easier touch target, and the screen oscillates when you hover over a trash bin as a kind of warning that you're about to discard a file. NASA and General Motors are working on a humanoid robot designed to help astronauts carry out their tasks in space. The current iteration, Robonaut 2 (or R2), is a 300-pound bucket of bolts made to resemble a human torso. R2 can lift up to 20 pounds and its arms have similar mobility to a human being's. The NASA/GM prototype left Earth in February to take up permanent residence at the International Space Station. R2 successfully moved aboard the ISS for the first time on October 13 and has since undergone tests on November 4 and December 15. You can follow R2's progress in space via Twitter and Facebook. GM hopes to adapt research from R2 including advancements in controls, sensors, and vision technology into future vehicle safety systems. The Defense Advanced Research Projects Agency is reportedly looking at "feasible designs" from AAI Corporation and Lockheed Martin for the agency's Transformer (TX) flying Humvee project, according to Aviation Week. The project's goal is to create a dual-purpose military vehicle that can fly and also drive along a road. The flying Humvee must be able to seat four, survive small arms fire, and rapidly convert into an aircraft that does not require a qualified pilot to operate. Rumor has it, DARPA is also working on getting pigs to fly. Intel in September was showing off a low-power processor that is efficient enough to run a PC powered by a solar cell the size of a postage stamp. Codenamed Claremont, the chip is only an experimental CPU and Intel currently has no plans to release it as a commercial project. The company may adapt some of Claremont's technology for future products as part of Intel's goal to reduce by fivefold the power consumption of current processors. Sharp and Japanese broadcaster NHK are working on a high-definition resolution display called Super Hi-Vision that is 16 times the resolution of current full HD 1080p (1920 by 1080) panels. SHV resolution is 7820-by-4320 and each frame of film is equal to a 33-megapixel image. Sharp in May produced an 85-inch Super Hi-Vision LCD display to demonstrate the technology. Super Hi-Vision is set to begin limited trials in Japan in 2020. Massachusetts-based Boston Dynamics is helping DARPA and the U.S. Marine Corps develop an all-terrain robot dog that can help soldiers carry heavy loads in remote locations such as the mountains of Afghanistan. The goal is for the company's LS3-Legged Squad Support Systems project to produce a robot that can carry up to 400 pounds across 20 miles or more of rough terrain. It will also be powered by an internal combustion engine with enough power to last 24 hours. The company's current prototype, AlphaDog, can regain its balance should it fall or be knocked over. The end product won't require remote control and will be able to use computer sensors and GPS to follow a leader. 
AlphaDog can't walk without help yet, but Boston Dynamics plans to produce the first independent version of the robot in 2012. This story, "10 tech research projects to watch," was originally published by PCWorld.
<urn:uuid:a0cf5c28-ccbd-4fba-9dff-5cb9b0ad866c>
CC-MAIN-2017-09
http://www.itworld.com/article/2733625/hardware/10-tech-research-projects-to-watch.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00576-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933971
1,492
2.859375
3
Energy's Titan supercomputer goes to 20 (petaflops)
I at first wondered if the scene the other day at the Energy Department's Oak Ridge National Laboratory was something like the scene in "National Lampoon's Christmas Vacation," when Chevy Chase caused the local power plant to tap into emergency nuclear power when his Christmas lights came on. That's because the Oak Ridge crew flipped the switch on Titan, the world's newest and likely fastest supercomputer. Titan is capable of an amazing 20,000 trillion calculations each second. That's 20 petaflops if you are up with the lingo. But it turns out the strain on the local power plant wasn't anything out of the ordinary, because Oak Ridge designed Titan for relatively low power consumption. One secret to Titan's success is that it uses graphics processing units (GPUs) in conjunction with standard processors. The GPUs are familiar to gamers, because they are needed to play bleeding-edge titles like Call of Duty. Titan contains 18,688 nodes, each holding a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20 GPU accelerator. And the 700 terabytes of memory don't hurt matters any. Combining GPUs and CPUs not only gives Titan its amazing petaflop power, but is also more economical in terms of power use. "One challenge in supercomputers today is power consumption," said Jeff Nichols, associate laboratory director for computing and computational sciences. "Combining GPUs and CPUs in a single system requires less power than CPUs alone and is a responsible move toward lowering our carbon footprint. Titan will provide unprecedented computing power for research in energy, climate change, materials and other disciplines to enable scientific leadership." Titan, an upgrade of Oak Ridge's Jaguar supercomputer, marks a new architectural path for energy-efficient supercomputers. Titan is not only 10 times more powerful, but it takes up the same space as Jaguar while using only a tiny bit more power. Jaguar had been the fastest U.S. supercomputer on the Top500 ranking, but at a clocked speed of 1.75 petaflops before the upgrades it trailed supercomputers in Japan (the K Computer, which clocked 10.51 petaflops) and China, as well as the newest champ, Energy's Sequoia supercomputer, an IBM BlueGene/Q system launched earlier this year at Lawrence Livermore National Laboratory with a sustained speed of 16.32 petaflops. Sequoia also uses a more energy-efficient design. I realize that Titan will be invaluable for climate research and all the fields of physical sciences Oak Ridge will use it for. But I have to say, what I really want to do is play some cool shooters with it. After all, we don't want all those GPUs to go to waste, do we? Posted by John Breeden II on Oct 30, 2012 at 9:39 AM
<urn:uuid:5fa3da20-ff9d-4136-9e14-fcf3ea2a3f0a>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2012/10/titan-energy-supercomputer-20-petaflops.aspx?admgarea=TC_BigData
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00452-ip-10-171-10-108.ec2.internal.warc.gz
en
0.899403
629
2.953125
3
Here's a short history of computer science student enrollments. Leading up to the dot-com bust, computer science enrollments soared to new highs, and then they plunged. Like a rock. Computer science graduates at Ph.D.-granting institutions reached a low of 8,021 in 2007, down from 14,185 in 2003-2004. But the numbers have been rising since. The number of new undergraduate computing majors at Ph.D.-granting U.S. universities rose by more than 13.4% last year, according to the Computing Research Association's just-released annual report on computer science programs. This wasn't as large an increase as in the last few years, but, nonetheless, it represents the sixth straight year of enrollment gains. The dot-com crash in 2001 turned people away from computer science and sent enrollments falling until they bottomed out in 2007. Bachelor's degree production in computer science last year increased 3.7% overall to 12,503, and 9.4% among those schools that reported in both years. The number of computer science graduates will continue to increase. While last year's enrollment increase is positive, it is behind 2011-12, when computer science enrollments rose by nearly 30%, and the year before, when they increased 23%. The much larger increase in new enrollments since 2010 "bode[s] well for future increase in undergraduate computing production," according to the report. When the recession kicked in in 2008, it sent IT unemployment soaring, but it may have done more damage to the finance sector, especially in terms of reputation. That prompted some educators at the time to predict that the recession might send math-inclined students from hedge funds to computer science. It's hard to draw a direct apples-to-apples comparison of computer science enrollments versus business, in part because computer science starts from a smaller base and it may not be a fair comparison. But still, according to government data, there were 327,500 business bachelor's degrees awarded in 2006-07, rising to 366,800 in 2011-12, a 12% increase. Meanwhile, computer science bachelor's degrees have increased by 55%, but over a slightly longer period. There were 63,873 students enrolled in computer science programs last year, versus 56,307 in 2012. This includes all the majors in computer science departments, such as computer engineering. The overall number doesn't include computer science schools without Ph.D. programs. Despite this relative slowdown in enrollments last year, the data may be better than it appears. Among those schools that submitted their enrollment data to the annual Taulbee Survey in two consecutive years, enrollments were up 22%. There are 266 Ph.D.-granting institutions, and of that number 179 responded to the survey. The list of responding schools includes Harvard, Yale, Princeton, Georgia Tech, Purdue, Berkeley, Davis and other California system schools, as well as many of the major universities in all the states. Women are still underrepresented in the tech workforce, as is reflected in the graduate data. The fraction of women among bachelor's degree graduates in computer science increased to 14.2% in 2012-13, up from 11.7% in 2010-11. The share of women among new computer science enrollments specifically last year was 13.9%. There were 1,991 Ph.D. degrees granted last year in computing programs, a 3.2% increase. The fraction of Ph.D. degrees awarded to non-resident aliens was 58%.
Artificial intelligence, networking and software engineering, in that order, were the most popular areas of specialization for doctoral graduates, according to the report. Databases, and theory and algorithms, were the next most popular areas. These five areas "have been the most popular for the past few years," the report said. The job prospects for Ph.D. grads are exceptional. Their unemployment rate is 0.8%, compared to 0.4% last year, and only 8% of these grads took jobs outside of North America, according to the report. (Chart: Bachelor's degrees in computer science and computer engineering at Ph.D.-granting schools. Source: Computing Research Association) Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. His e-mail address is [email protected]. Read more about IT careers in Computerworld's IT Careers Topic Center. This story, "Wall Street's Collapse Was Computer Science's Gain," was originally published by Computerworld.
<urn:uuid:3f65faba-05e9-468f-9253-01eb1e7d0c05>
CC-MAIN-2017-09
http://www.cio.com/article/2376372/education/wall-street-s-collapse-was-computer-science-s-gain.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00396-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960348
981
2.515625
3
The U.S. National Transportation Safety Board (NTSB) is calling for motor vehicles to be equipped with "connected technology," machine-to-machine communications tools that could help drivers avoid accidents. The recommendation grew out of the board's investigation of a school bus crash in New Jersey. The accident occurred on February 16, 2012, when a Mack truck struck the left rear of a bus carrying 25 students. One student was killed in the crash and five others were seriously injured. The truck driver was not injured. Among its conclusions, the NTSB found that connected vehicle technology could have provided active warnings to the school bus driver of an approaching truck and possibly prevented the crash. "Effective countermeasures are needed to assist in preventing intersection crashes," the NTSB stated in its report on the crash. "For example, systems such as connected vehicle technology could have provided an active warning to the school bus driver of the approaching truck as he began to cross the intersection. Although the bus driver was adamant in his post-crash interview that he had pulled forward sufficiently to see clearly in both directions, he failed to see the oncoming truck and proceeded into its path," the NTSB said. Researchers are currently developing machine-to-machine (M2M) communication technology that would allow the exchange of data between vehicles, allowing each to know what's going on around it. A car, for instance, could "see" the velocity of nearby vehicles and react when they turn or brake suddenly. Using computer algorithms and predictive models, the car could measure the skills of nearby drivers -- and ensure you're safe from their bad moves -- and predict where other vehicles are going. "We're even imagining that in the future cars would be able to ask other cars, 'Hey, can I cut into your lane?' Then the other car would let you in," said Jennifer Healey, a research scientist with Intel. Intel is working with National Taiwan University on M2M connectivity between vehicles as a way to make roads more predictable and safe. "Car accidents are the leading cause of death for people [age] 16 to 19 in the United States. And 75% of these accidents have nothing to do with drugs or alcohol," said Healey, who delivered a TED Talk on the subject in March. (Video: the NTSB's animated reenactment of the school bus collision in New Jersey.) The Alliance of Automobile Manufacturers (AAM), an industry trade group working with the NTSB on connected vehicle technology research, has thrown its support behind the creation of a radio spectrum band to be used for vehicle-to-vehicle communication. Earlier this year, the AAM joined the Intelligent Transportation Society of America and major automakers in urging the Federal Communications Commission (FCC) to protect the 5.9 GHz band of spectrum set aside for connected vehicle technology. In a statement, the groups said the technology "is expected to save thousands of lives each year" and must be protected "from potentially harmful interference that could result from allowing unlicensed Wi-Fi-based devices to operate in the band." A study by the National Telecommunications & Information Administration (NTIA) showed that connected vehicle technology "could help prevent the majority of types of crashes that typically occur in the real world, such as crashes at intersections or while changing lanes." The NTIA's conclusions also revealed some reservations about implementing the technology. "Further analysis is required to determine whether and how the identified risk factors can be mitigated," the report said.
"While the state-of-the-art of existing and proposed spectrum sharing technologies is advancing at a rapid pace, NTIA recognizes ... the potential risks of introducing a substantial number of new, unlicensed devices into them without proper safeguards." Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian, or subscribe to Lucas's RSS feed . His email address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "Feds want cars to talk to each other" was originally published by Computerworld.
<urn:uuid:a48b7984-7724-40dd-b1aa-58ae741957d4>
CC-MAIN-2017-09
http://www.itworld.com/article/2707386/hardware/feds-want-cars-to-talk-to-each-other.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00396-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964826
839
3.203125
3
Smartphones, it seems, can do anything. Today's phones pack so much power and so many sensors into such a small space -- and at a relatively low cost -- that they are increasingly being used for inventive purposes. Just this week Strand 1, a nano-satellite based on a Google Nexus One cell phone, was launched into space on an Indian rocket. It's not just the hardware that brings an advantage: Android is an increasingly popular platform for development. So, when researchers at Germany's Fraunhofer Institute for Integrated Circuits were asked to come up with a camera that could be mounted on an eagle to get a literal bird's-eye view of its life, they too turned to a cell phone. But you can't exactly tape a smartphone to a bird's back. Instead, the Fraunhofer engineers broke apart the phone and repackaged some of the components on custom boards. The boards leverage the small, low-cost, standard interface on phone components into units that can be easily employed for other projects. "The idea three years ago was to use the very powerful processor used in cell phones or tablets for other applications, like professional cameras and different markets like surveillance, broadcast," said Michael Schmid, a researcher at the Fraunhofer Institute for Integrated Circuits. The eagle cam contained a camera module, processor, memory, and could communicate over Wi-Fi or LTE, so it was possible to stream real-time video from the back of the bird. Another camera the institute has developed retains the compass, gyroscope, temperature sensor, accelerometer and barometer often found in modern phones, and there's a Bluetooth interface for connection to other devices such as a GPS unit. That means the camera isn't just capable to recording video, but also of bringing in sensor data. Video can be output using standard interfaces like HDMI, Ethernet or HD-SDI, and all controlled by Android in a device smaller than a cell phone. The Fraunhofer researchers say they are looking to work with other organizations on developing custom projects. "On the one side, there are these very powerful processors, but on the other side, small and medium companies in different markets don't have access to that," said Schmid. "That's why we decided that we close the gap between that. We try to bring up reference boards, train engineers in these companies, how they can use these processors. We wrote API drivers, we wrote software, sample applications, that make life easier for these end customers." The movie, "The Way of the Eagle," will be released in 2014 by Terra Mater Factual Studios, a part of Red Bull Media House. Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn's e-mail address is [email protected]
<urn:uuid:6c014ed2-815d-4460-9d0e-41d0f715dda5>
CC-MAIN-2017-09
http://www.computerworld.com/article/2495621/mobile-wireless/german-engineers-deconstruct-smartphones-to-find-new-uses.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00272-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955715
608
3.140625
3
The majority of today's telecommunication systems are run on fiber optic networks. This is largely because such networks are ideal for transferring information. Fiber optic technology has continued to improve considerably over the last decade, providing ever more benefits to its users. It does not take an expert scientist to understand how the process works. An optical fiber is used to transmit pulses of light from one place to another. An electromagnetic carrier wave is modulated so that the light carries the information, which means a transmitter is needed to create the signal before it is sent along the cable. It is important to note that such networks also counteract distortions to the signal that would otherwise result in interference. Once the signal is received at the other end, it is converted back into an electrical signal. Is transmission of data a problem for you with your old networking technology? Your company should then consider installing a fiber optic network. Light is passed in the form of light pulses through an optical glass fiber. This beats the conventional way of transmitting information over copper wires, as the optical method is faster and therefore a better option; all of this, however, contributes to the relatively high cost of optical fiber. Fiber optic networks are best suited to situations where data is transmitted over long distances, which is why a number of telephone companies use them, although these networks can carry high volumes of data over short distances as well. The rapid development of the internet in recent years has brought about the need for new ways to transfer information. Naturally, the faster this process is, the better for everyone. The amount of virtual traffic going around the world has been steadily increasing, so these types of networks have become indispensable for transferring data efficiently and effectively. Telephone companies have played the most vital part in the increasing reliance on fiber optics. In fact, a number of telecommunication companies realised early on that the future would depend on such cables and optical solutions rather than the old copper wires of that time. The possibility of monopolizing the market drove these companies to invest heavily in fiber optics. It is not only the larger companies that use fiber optics, but also smaller business firms and individuals. Instead of using wireless networks, this fiber optic technology can easily be implemented in home-based computer networks as well; the optical fibers used there are generally made of plastic. Anyone who wants a faster connection can use Ethernet technology at home or in the working environment. Because low-power LED light sources are used, the maintenance cost of fiber optic networks is comparatively low. In the educational sphere, fiber optic networks have also been an immediate success. One has to realize that education is becoming increasingly reliant on technology, so computers are playing a chief role in schools. Universities all around the world employ such networks to transfer educational material between students and lecturers, as well as among the students themselves. There is no doubt that these types of networks will continue to shape the near future as regards the transfer of information. More and more governments, companies and educational institutions are investing in fiber optic infrastructure, since it is clear that right now there is no better alternative in the field.
However, fiber optic networks have not yet been implemented in many parts of the country. Another major factor limiting the use of fiber optics is the labour cost involved in installation. Glass fiber is more sensitive than copper wire, which means that more care needs to be taken in installing and maintaining a fiber optic network; this is why you will find several protective layers covering the glass fiber in fiber optic cables.
<urn:uuid:bec5b5b5-5263-4ce0-b76c-a147b40734d2>
CC-MAIN-2017-09
http://www.fs.com/blog/fiber-optic-network-has-become-the-protagonist.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00568-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963901
725
3.15625
3
After the LinkedIn debacle, it's depressing but not entirely surprising to see yet further reports of large-scale hacks, this time of Yahoo! losing 400,000 usernames and passwords to anonymous hackers. It seems amazing to us, working as we do in the security industry, that the passwords were stored in plaintext with absolutely no form of encryption. This is a huge mistake and should never be the case in a modern enterprise… especially one that operates entirely online. We provide considerable guidance on the internal controls an organization should follow if it wants to have the security it needs to conduct its business. If a company wants to provide even stronger security, it should offer its customers the ability to move away from static passwords, which are notoriously insecure, especially if they remain unchanged. It would be best for companies like Yahoo! to follow the model of businesses like Amazon Web Services, which gives users the ability to safeguard themselves and their data through strong authentication. It is impossible to hack a password when you are using a one-time password or other advanced authentication options, which are widely available in the market today. Companies of all sizes need to work to align with best practices for securing the network and the data stored within it. Better data encryption and verified access rights would add another layer of protection here. A best-practice layered security approach involves three key areas:
1. Strong identity control – one of the first areas of cyber defense is to ensure you know exactly who is accessing specific resources on the network. This means implementing security controls that strengthen online identity, including additional factors of identification. Commonly referred to as strong authentication, this access control requires users to present something they have (a smart identity card or USB device), something they know (a passphrase or PIN), and, if available, something they are (a biometric detail like a fingerprint).
2. Network-based access control – working from defined roles, these tools segment the network to ensure only authorized personnel are allowed access to appropriate areas of the network. All others get blocked. This is also an area where step-up authentication could be used: a person enters through a low-security area of the network, but when the user tries to access a more secure area, they are prompted for additional authentication attributes (a second identity factor could be used, for example).
3. Data encryption for stored information – all data stored on a network server should be encrypted to ensure that even if the data is accessed, it is unusable.
It is critical to have a 360-degree view of your data protection, as this case clearly shows – particularly on the latter point. Only then can we start to avoid hacks and security breaches affecting the everyday user.
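To make the plaintext-password point concrete, here is a minimal C sketch of storing a password as a salted, deliberately slow hash using OpenSSL's PBKDF2 routine. It is purely illustrative and is not a description of Yahoo!'s or any vendor's actual system; the iteration count, output length, and choice of PBKDF2-HMAC-SHA256 are assumptions made for the example (a production system might instead use a dedicated password-hashing scheme such as bcrypt or scrypt).

```c
/* pwhash_sketch.c - store a salted, slow hash instead of the password itself.
 * Build: cc pwhash_sketch.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

#define SALT_LEN   16
#define HASH_LEN   32
#define ITERATIONS 100000   /* illustrative work factor; tune for your hardware */

int main(void)
{
    const char *password = "correct horse battery staple";   /* demo input only */
    unsigned char salt[SALT_LEN], hash[HASH_LEN];

    /* A per-user random salt defeats precomputed (rainbow-table) attacks. */
    if (RAND_bytes(salt, SALT_LEN) != 1) {
        fprintf(stderr, "could not generate salt\n");
        return 1;
    }
    /* PBKDF2-HMAC-SHA256 is deliberately slow, so a stolen table of hashes
     * cannot simply be read the way a plaintext password file can. */
    if (PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                          salt, SALT_LEN, ITERATIONS,
                          EVP_sha256(), HASH_LEN, hash) != 1) {
        fprintf(stderr, "key derivation failed\n");
        return 1;
    }
    printf("store the salt and this hash, never the password itself:\n");
    for (int i = 0; i < HASH_LEN; i++)
        printf("%02x", hash[i]);
    printf("\n");
    return 0;
}
```

At login time the server repeats the same derivation with the stored salt and compares the results, so the original password never needs to be kept anywhere. The same layered thinking applies to the third point above: data at rest is encrypted so that a copied database is unreadable without a key stored separately from the data.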
<urn:uuid:4f7c3d38-d6df-4ff9-b0fe-1843eb61e521>
CC-MAIN-2017-09
https://blog.gemalto.com/blog/2012/07/16/yet-another-hack-passwords-and-storing-them-fail-again/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00444-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940089
549
2.640625
3
NASA's Cassini spacecraft has detected propylene, a key ingredient in plastics, on Saturn's moon Titan. This is the first time the chemical has been definitively found on any moon or planet, other than Earth. The discovery fills in what NASA called a "mysterious gap" in scientists' knowledge of the makeup of Titan's atmosphere and gives them confidence that there are more chemicals there still to discover. Propylene is an ingredient in many consumer plastic products like car bumpers and food storage containers. The interest lies in the small amount of propylene that was discovered in Titan's lower atmosphere by one of Cassini's scientific instruments called the the composite infrared spectrometer (CIRS), which measures the infrared light, or heat radiation, emitted from Saturn and its moons. The instrument can detect a particular gas, like propylene, by its thermal markers, which are unique like a human fingerprint. Scientists have a high level of confidence in their discovery, according to NASA. "This chemical is all around us in everyday life, strung together in long chains to form a plastic called polypropylene," said Conor Nixon, a planetary scientist at NASA's Goddard Space Flight Center. "That plastic container at the grocery store with the recycling code 5 on the bottom -- that's polypropylene." The discovery gives NASA scientists the missing piece of the puzzle for determining the chemical makeup of Titan's atmosphere. In 1980, NASA's Voyager 1 spacecraft, which has flown past Jupiter, Saturn, Uranus and Neptune, did a fly-by of Titan. According to the space agency, Voyager identified many of the gases in Titan's hazy brownish atmosphere as hydrocarbons, the chemicals that primarily make up petroleum and other fossil fuels on Earth. Titan has a thick atmosphere, clouds, a rain cycle and giant lakes. However, unlike on Earth, Titan's clouds, rain, and lakes are largely made up of liquid hydrocarbons, such as methane and ethane. When Titan's hydrocarbons evaporate and encounter ultraviolet radiation in the moon's upper atmosphere, some of the molecules are broken apart and reassembled into longer hydrocarbons like ethylene and propane, NASA said. Voyager's instruments detected carbon-based chemicals in the atmosphere but not propylene. Now scientists have that piece of the puzzle. "This measurement was very difficult to make because propylene's weak signature is crowded by related chemicals with much stronger signals," said Michael Flasar, a scientist at NASA's Goddard Space Flight Center and chief investigator for CIRS. "This success boosts our confidence that we will find still more chemicals long hidden in Titan's atmosphere." This article, NASA's Cassini finds plastic ingredient on Saturn's moon, was originally published at Computerworld.com. NASA's Cassini spacecraft detected propylene, an ingredient of household plastics here on Earth, in the atmosphere of Saturn's moon, Titan. (Video: NASA) Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about government/industries in Computerworld's Government/Industries Topic Center. This story, "NASA's Cassini finds plastic ingredient on Saturn's moon" was originally published by Computerworld.
<urn:uuid:8cfe19f9-367e-467d-9e27-156c9516dc21>
CC-MAIN-2017-09
http://www.networkworld.com/article/2170400/data-center/nasa--39-s-cassini-finds-plastic-ingredient-on-saturn--39-s-moon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00144-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918884
712
3.9375
4
The so-called armored fiber optic cable is an optical cable whose fiber is wrapped in an additional protective layer of "armor." It is mainly used to meet requirements such as resistance to rodents and moisture. More generally, an armored cable is a cable made up of two or more conductors assembled together, usually held within an overall sheath. Cables with this kind of heavy protective covering are used for transmission runs, especially where underground wiring is needed, although they may also be installed as permanent wiring within buildings, buried in the ground, run overhead, or even left exposed. They are available as single-conductor as well as multi-conductor cables. To be more precise, armored cables can be described as cables with stainless steel or galvanized wire wound over the conductors and insulation; they often have an outer plastic sheath for main distribution supply and buried feeders.
The main role of armored fiber optic cable
Armored optical fiber has important applications in telecommunications, in long-distance fiber optic lines and trunk transmission. Armored fiber is also what network administrators most often encounter in the connections between two pieces of fiber optic equipment inside a building or equipment room. In that setting the armored fiber is relatively short and is often referred to as an armored jumper. A typical armored jumper has a layer of metal armor under the outer jacket that protects the fiber core, provides crush and tensile resistance, and prevents damage from rodents and insects.
The types of armored fiber cable
By place of use:
Indoor armored cable: single-armor and double-armor indoor fiber optic cable. A typical single-mode indoor armored cable is built up as tight-buffered fiber + Kevlar (for tensile strength) + stainless steel hose (for compressive strength, bending resistance and rodent protection) + stainless steel woven wire (for torsional strength) + outer sheath (usually PVC or, depending on the fire-barrier requirement, flame-retardant PVC, LSZH, Teflon, silicone tube, etc.). A single-armor cable omits the stainless steel braided wire; a double-armor fiber optic cable has both the stainless steel hose and the stainless steel woven wire. Advantages: high tensile and compressive strength; resistance to rodent bites; resistance to damage from improper twisting or bending; simple installation, which saves maintenance costs; suitability for harsh environments and resistance to man-made damage. Disadvantages: heavier than common fiber optic cable, and the price is higher than that of common fiber optic cable.
Outdoor armored cable: divided into light armored and heavy armored (outdoor fiber optic cable). Light armor uses steel tape or aluminum tape to strengthen the cable and resist rodent bites; heavy armor is wrapped with a ring of steel wires and is generally used on riverbeds and in submarine installations. On the market, armored cable is in general cheap relative to non-armored alternatives one might expect to need, because the armor is usually steel or aluminum, which is much cheaper than aramid (Kevlar is mainly used for special occasions).
By armor material:
There are steel-armored and aluminum-armored cables. The core is usually wrapped with a layer of metal armor to protect it against excessive bending and to give it high resistance to pressure and strong pulling forces, providing excellent cable protection and safety.
Aerial optical fiber cable
For an outdoor aerial optical fiber cable, the armor protects the core against the harsh environment and against human or animal damage (a fiber being cut when someone fires a shotgun at birds, for example, is a case that occurs fairly often). Steel armor is recommended: light armor is cheap and durable. There are two general types of outdoor aerial cable: one is the central tube design; the other is the stranded design. For durability, aerial cable is given a layered sheath, and direct-burial cable with two layers of sheathing is safer still.
<urn:uuid:8edc88b2-6380-4673-8e30-186b6257bf54>
CC-MAIN-2017-09
http://www.fs.com/blog/what-is-armored-fiber-optic-cable.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00320-ip-10-171-10-108.ec2.internal.warc.gz
en
0.908757
778
3.0625
3
Azambuja R.,Fertilitat Reproductive Medicine Center | Petracco A.,Fertilitat Reproductive Medicine Center | Petracco A.,University of Porto | Okada L.,Fertilitat Reproductive Medicine Center | And 5 more authors. Reproductive BioMedicine Online | Year: 2011 Human embryo cryopreservation techniques enable the storage of surplus embryos created during assisted reproduction procedures; however, the existence of these same surplus embryos has sparked further debate. What can be their fate once they are no longer desired by their parents or if the parents are deceased? Thus, the level of interest in the cryopreservation of oocytes has increased, as has the necessity for further scientific study. This study had the objective of reporting 10 years of experience of freezing and thawing human oocytes from patients who did not wish to freeze embryos. A total of 159 cycles using frozen-thawed oocytes were performed (mean age 33.7 years). Survival and fertilization rates were 57.4% and 67.2%, respectively. Cleavage rate was 88.4% and the pregnancy rate was 37.7%. Clinical pregnancy was observed in 43 cycles (27.0%) with 14.5% of transferred embryos implanted. These pregnancies delivered 19 boys and 23 girls, two pregnancies are ongoing and nine were miscarriages. The average gestational week was 37.6 weeks and birthweight was 2829.2 g. These data suggest that the use of frozen-thawed oocytes in IVF represents a reasonable alternative for those patients not comfortable with the cryopreservation of supernumerary embryos. Embryo cryopreservation techniques have improved, allowing storage of viable surplus embryos for an indefinite period of time. However, the disposition of surplus frozen embryos has sparked significant debate regarding the disposition of these embryos when they are no longer required by their parents or the parents die. A possible solution to this problem is to cryopreserve supernumerary oocytes rather than embryos. The first birth of a baby after oocyte cryopreservation occurred in 1986. Since then, improvements in technology have made oocyte cryopreservation a feasible possibility for young women at risk for premature ovarian failure and for those women who require radiotherapy or chemotherapy for cancer, among other indications. This paper is a report of our IVF experience using cryopreserved oocytes over a 10-year period including survival, fertilization, cleavage and pregnancy rates, and the outcome of the pregnancies that resulted in the delivery of live-born children. © 2010, Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved. Source
How To Secure Online Activities
The Internet is not automatically a secure or safe place to be. It is important to be clear and distinct when discussing security. Security is not a singular concept, solution, or state. It is a combination of numerous aspects, implementations, and perspectives. In fact, security is usually a relative term with graded levels, rather than an end state that can be successfully achieved. In other words, a system is not secure; it is always in a state of being secured. There are no systems that cannot be compromised. However, if one system's security is more daunting to overcome than another's, then attackers might focus on the system that is easier to compromise.
What Level of SSL or TLS is Required by HIPAA?
SSL and TLS are not actually monolithic encryption entities that you either use or do not use to connect securely to email servers, web sites, and other systems. SSL and TLS are evolving protocols with many nuances in how they may be configured. The "version" of the protocol you are using and the nuances of the configuration directly affect the security achievable through your connections. Some people use the terms SSL and TLS interchangeably, but TLS (version 1.0 and beyond) is actually the successor of SSL (version 3.0); see SSL versus TLS – what is the difference? In 2014 we saw that SSL v3 is very weak and should not be used going forward by anyone (see the POODLE attacks, for example); TLS v1.0 or higher should be used instead.
Among the many configuration nuances of SSL and TLS, which "ciphers" are permitted has the greatest impact on security. A "cipher" defines the specific encryption algorithm to be used, the secure hashing (message fingerprinting / authentication) algorithm to be used, and other related things. Some ciphers that have long been used, such as RC4, have become weak over time and should not be used in secure environments. Given these nuances, people are often at a loss as to what is specifically needed for HIPAA compliance, or for any effective level of TLS security.
What HIPAA Says about TLS and SSL: Health and Human Services has published guidance for the use of TLS for securing health information in transit. In particular, they say: Electronic PHI has been encrypted as specified in the HIPAA Security Rule by "the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key" (45 CFR 164.304 definition of encryption) and such confidential process or key that might enable decryption has not been breached. To avoid a breach of the confidential process or key, these decryption tools should be stored on a device or at a location separate from the data they are used to encrypt or decrypt. The encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.
They go on to specifically state what valid encryption processes for HIPAA compliance are: Valid encryption processes for data in motion are those which comply, as appropriate, with NIST Special Publications 800-52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800-77, Guide to IPsec VPNs; or 800-113, Guide to SSL VPNs, or others which are Federal Information Processing Standards (FIPS) 140-2 validated.
In other words, SSL and TLS usage must comply with the details set out in NIST 800-52. This implies that other encryption processes, especially those weaker than recommended by this publication, are not valid. If you are using a level of encryption weaker than recommended, it is not valid, and thus for all intents and purposes your transmitted ePHI is unsecured and in violation (breach) of HIPAA.
So, What does NIST 800-52 Say? NIST 800-52 is a long and detailed document that covers what is needed for strong TLS for government use.
In addition to many small nuances, the biggest things to get out of this document are:
- SSL v3 must not be used
- TLS v1.0+ is OK to be used
- Only ciphers in a specific, recommended list are OK to use
NIST 800-52 also enumerates the specific ciphers that are allowed (the list can be converted into the names used by openssl). In practice, configuring a server along these lines comes down to:
- Turning OFF SSL v2 and SSL v3
- Enabling TLS 1.0 and higher
- Restricting the ciphers you will be using to ONLY those in the recommended (CBC-free) list.
One thing that is interesting to note is that many ciphers included in this list are not 256-bit. For example, 128-bit AES is allowed for HIPAA and high-security government use. We often hear people stating that 256-bit encryption is a requirement of HIPAA; it is not (that answer is "too simple" — it comes down to which specific algorithms are used, for example).
What Does LuxSci Do?
LuxSci's services use TLS for secure web site, MySQL, POP, IMAP, and SMTP connections. LuxSci enables you to use TLS in a HIPAA-compliant way by:
- Only allowing TLS v1.0+ (no SSL v3)
- Only allowing connections using a subset of ciphers in the recommended list above
Furthermore, LuxSci allows HIPAA-compliant customers to have email delivered to recipients using "TLS Only" secured connections to recipients' servers that support TLS for SMTP. For many customers, the ease of use of TLS for secure email delivery is a great solution when available. LuxSci's systems auto-check all of the recipient's inbound email servers to ensure that all of them support TLS v1.0+ and at least one of the recommended ciphers; only in this case do we permit use of "TLS Only", because only then can we deliver messages to them in a compliant manner. We do observe some (a small subset of) email servers on the Internet that only support SSL v3 or that only support old, weak ciphers. We do not allow our HIPAA customers to communicate with them using only TLS (SSL), as that would place them out of compliance. We recommend that these servers either upgrade their software configurations or that something like SecureLine Escrow is used to ensure compliant communications with them. Use our TLS Checker Tool to see if a domain supports SMTP TLS and if its support is "good enough" for HIPAA-compliant email delivery.
- How to Tell Who Supports SMTP TLS for Email Transmission
- How Can You Tell if an Email Was Transmitted Using TLS Encryption?
- 256-bit AES Encryption for SSL and TLS: Maximal Security
- Is SSL/TLS Really Broken by the BEAST attack? What is the Real Story? What Should I Do?
- Infographic – SSL vs TLS: What is the Difference?
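As a practical illustration of the kind of check discussed above, the short Python sketch below opens a TLS connection and reports the negotiated protocol version and cipher. It is only a sketch, not LuxSci's TLS Checker Tool: the host name is a placeholder, the cipher string is an illustrative strong subset rather than the NIST 800-52 list, and the minimum version is set to TLS 1.2 (the article's 2014-era floor was TLS 1.0, but current guidance is stricter).

    import socket
    import ssl

    def check_tls(host, port=443):
        # Refuse SSLv2/SSLv3 and anything below TLS 1.2; restrict ciphers to a
        # strong, widely supported subset (illustrative, not the NIST list).
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES")
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version(), tls.cipher()

    if __name__ == "__main__":
        protocol, cipher = check_tls("example.com")  # placeholder host
        print("Negotiated protocol:", protocol)      # e.g. 'TLSv1.2' or 'TLSv1.3'
        print("Negotiated cipher:", cipher)          # (name, protocol, secret bits)

A server that cannot complete this handshake at all is, by the logic of the post above, not one to which "TLS Only" delivery of ePHI should be permitted.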
As has become apparent to nearly everyone in the HPC community, life beyond petascale supercomputing will be power limited. Many efforts around the world are now underway to address this problem, both by commercial interests and researchers. One such effort that brings both into play is the Mont-Blanc research project at the Barcelona Supercomputing Center (BSC), which is looking to exploit ARM processors, GPUs, and other off-the-shelf technologies to produce radically energy-efficient supercomputers. In this case, radically means using 15 to 30 times less energy than would be the case with current HPC technologies. The idea is to be able to build petascale, and eventually exascale, supercomputers that would draw no more than twice the power of the top supercomputers today. (The world champ 10-petaflop K computer chews up 12MW running Linpack.) Specifically, the goal is to develop an architecture that can scale to 50 petaflops on just 7MW of power in the 2014 timeframe, and 200 petaflops with 10MW by 2017. The Mont-Blanc project was officially kicked off on October 14, and thanks to 14 million Euros in funding, is already in full swing. Last week, NVIDIA announced that BSC had built and deployed a prototype machine using the GPU maker's ARM-based Tegra processors that have, up until now, been used only in mobile devices. The power-sipping ARM is increasingly turning up in conversations around energy-efficient HPC. At SC11 in Seattle last week, there were a couple of sessions along these lines, including a BoF on Energy Efficient High Performance Computing that featured the advantages of the ARM architecture for this line of work, as well as a PGI exhibitor forum on some of the practical aspects of using ARM processors for high performance computing. There was also the recent news by ARM Ltd of its new 64-bit ARM design (ARMv8), which is intended to move the architecture into the server arena. NVIDIA is already sold on ARM, and not just for the Tegra line. In January, the company revealed "Project Denver," its plan to design processors that integrate NVIDIA-designed ARM CPUs and CUDA GPUs, with the idea of introducing them across their entire portfolio, including the high-end Tesla line. "We think that the momentum is clearly pointing in the direction of more and more ARM infiltration into the HPC space," said Steve Scott, CTO of NVIDIA's Tesla Unit. The Mont-Blanc project is certainly an endorsement of this approach. The initial BSC prototype system is a 256-node cluster, with each node pairing a dual-core Tegra 2 with two independent ARM Cortex-A9 processors. The whole machine delivers a meager 512 gigaflops (peak) and an efficiency of about 300 megaflops/watt, which is on par with a current-generation x86-based cluster. The numbers here are somewhat meaningless though. The initial system is a proof-of-concept platform designed for researchers to begin development of the software stack and port some initial applications. The second BSC prototype, scheduled to be built in the first half of 2012, will employ NVIDIA's next-generation quad-core Tegra 3 chips hooked up to discrete NVIDIA GPUs, in this case, the GeForce 520MX (a GPU for laptops). This system is also 256 nodes, but will deliver on the order of 38 peak teraflops. Energy efficiency is estimated to be a much more impressive 7.5 gigaflops/watt, or more than three and a half times better than the industry-leading Blue Gene/Q supercomputer.
In conjunction with this second prototype, NVIDIA will be releasing a new CUDA toolkit that will include ARM support. The first two prototypes are BSC inventions. The project will subsequently develop its own more advanced prototype. According to Scott, that cluster will be 1,000 nodes, although the internal make-up is still not decided. Given the timeframe though (2013-2014), the system is likely to include NVIDIA processors using Project Denver technology, with the chip maker's homegrown ARM implementation and much more performant GPUs. By the end of the three-year project, the researchers intend to have a complete software stack, including an operating system, runtime libraries, scientific libraries, cluster management middleware, one or more file systems, and performance tools. They also hope to have 11 full-scale scientific applications running on the architecture, which encompass fluid dynamics, protein folding, weather modeling, quantum chromodynamics, and seismic simulations, among others. Whether Mont-Blanc leads to any commercial HPC products remains to be seen. NVIDIA, for its part, is certainly happy to see this level of interest and adoption of its ARM-GPU approach. "We see this as seeding the environment, where people can do software development and experimentation," said Scott. "We think that it will grow into something larger down the road."
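The efficiency targets quoted above are easier to compare once they are reduced to a common gigaflops-per-watt figure. The short calculation below does only that; the inputs are the peak/Linpack numbers quoted in the article, so treat the results as rough back-of-the-envelope values rather than official project figures.

    # Rough energy-efficiency arithmetic for the systems mentioned above.
    def gflops_per_watt(flops, watts):
        return (flops / 1e9) / watts

    systems = {
        "K computer (10 PF @ 12 MW)": (10e15, 12e6),
        "Mont-Blanc 2014 target (50 PF @ 7 MW)": (50e15, 7e6),
        "Mont-Blanc 2017 target (200 PF @ 10 MW)": (200e15, 10e6),
    }
    for name, (flops, watts) in systems.items():
        print(f"{name}: {gflops_per_watt(flops, watts):.1f} GF/W")

    # The second prototype is quoted the other way around: 38 peak teraflops at
    # about 7.5 GF/W implies a total power draw of roughly 5 kW.
    print(f"Prototype 2 power: {38e12 / (7.5 * 1e9) / 1e3:.1f} kW")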
The U.S. government wants to force cars to talk to each other over wireless networks, saying that could save more than 1,000 lives every year. The National Highway Traffic Safety Administration (NHTSA) is seeking input about a possible federal standard for vehicle-to-vehicle (V2V) technology, which would let cars automatically exchange information, such as whether they’re close to each other. The agency will accept comments from the public and industry for 60 days from when the advance notice of proposed rulemaking (ANPRM) is published in the Federal Register. V2V would let cars do some of the work of driving or even accomplish things humans can’t, such as virtually “seeing” into blind intersections before entering them. It may be one step on the path to self-driving cars. On Monday, the NHTSA published a research report on V2V and issued an ANPRM in hopes of collecting a lot of feedback before issuing a full NPRM in 2016. In the report, it estimated the safety benefits of just two possible applications of V2V, called Left Turn Assist and Intersection Movement Assist. Together, they could prevent as many as 592,000 crashes and save 1,083 lives per year, the agency said. Neither system would necessarily take control of a car. Left Turn Assist would warn drivers not to turn left into the path of an oncoming car, and Intersection Movement Assist would warn them not to enter an intersection when there’s a high probability of crashing into other vehicles there. The two technologies could help drivers avoid more than half of those types of crashes, the agency said. Other V2V systems could include blind spot, do not pass, and forward collision warnings, as well as stop light and stop sign warnings. “V2V technology represents the next great advance in saving lives,” U.S. Transportation Secretary Anthony Foxx said in a press release. In addition to improving safety, V2V might smooth the flow of traffic and improve cars’ fuel economy, the NHTSA said. V2V would run over wireless networks using the IEEE 802.11p specification, a variant of the standard used for Wi-Fi, on a band of spectrum between 5.85GHz and 5.925GHz. That’s crucial to making the technology work between vehicles from different manufacturers, NHTSA said. V2V doesn’t identify individual vehicles, nor does it collect or share personal information about drivers, and it would have layers of security and privacy technology to protect users, the agency said.
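The article does not spell out what a V2V message looks like, but the core idea behind a warning such as Intersection Movement Assist can be illustrated with a toy calculation: each car shares its distance to the intersection and its speed, and the receiver warns when the two estimated arrival times are too close together. The sketch below is only that, a toy; real V2V exchanges standardized safety messages over 802.11p rather than anything this simple, and the threshold value is an arbitrary choice for illustration.

    # Toy illustration of an intersection-movement style warning.
    # Each vehicle is represented as (distance_to_intersection_m, speed_m_s).
    def time_to_intersection(distance_m, speed_m_s):
        return float("inf") if speed_m_s <= 0 else distance_m / speed_m_s

    def should_warn(own, other, threshold_s=2.0):
        # Warn if both vehicles would reach the intersection within threshold_s
        # of each other, i.e. there is a high probability of conflict.
        return abs(time_to_intersection(*own) - time_to_intersection(*other)) < threshold_s

    # We are 60 m out at 15 m/s; a crossing car is 55 m out at 14 m/s.
    print(should_warn((60.0, 15.0), (55.0, 14.0)))  # True -> issue the warning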
The Brookings Institution published an interesting paper yesterday on the use of big data in education. In it, Brookings vice president Darrell M. West discussed the uses of real time analytics to help shape and guide education policy in the future. For West, data analytics were essential to evaluating school performance and providing educators with feedback: “It is apparent that current school evaluations suffer from several limitations. Many of the typical pedagogies provide little immediate feedback to students, require teachers to spend hours grading routine assignments, aren’t very proactive about showing students how to improve comprehension, and fail to take advantage of digital resources that can improve the learning process. This is unfortunate because data-driven approaches make it possible to study learning in real-time and offer systematic feedback to students and teachers.” West said smart data analysis could improve transparency and accountability in schools where administrators and educators were looking to increase students’ grades on standardized tests. He suggested the use of dashboards for fast and up-to-date performance assessment. He pointed out the Education department’s dashboard as an example of how governments and schools could use real time data on their policy in action. Dashboards are definitely a great way to quickly visualize, interpret and process data, but they must be based on credible data to be effective for the end user. In the past, the federal government’s dashboard for IT-spending has been criticized for being inaccurate, and one can assume the many pitfalls that lie with educational data. As the Columbia Journalism Review argued in May, education data has been known to be flawed, inflated and misleading. Having this type of data is definitely a start for better education policy, but it shouldn’t be the lynchpin behind an effective approach.
Last year, Wired magazine featured a demonstration that showed precisely how easy it is for a hacker to take over the onboard computer of an automobile while driving down the highway in an adjacent car. This disturbing proof of concept for the high-speed hack may have made the population at large more aware of the fact that automobiles these days are dependent on multiple computer systems, and that those computers—just as much as any in a home or in an office—are potentially vulnerable to being compromised. But this one stunt, though chilling in its immediate impact, has not done a whole lot to change the way that manufacturers treat concerns about digital security. A recent article at darkreading.com highlights this problem, noting that steps that manufacturers are taking to manage digital security in the auto industry are being outpaced by the emerging threats. When it comes to cars functioning properly, people’s lives can be at stake. And while the prospect of a hack attack as seen in the Wired demo might seem unlikely at the moment, it is becoming clearer to people watching that digital security, as it pertains to automobiles, is no far-out fringe concern. For the automotive industry, effectively managing software vulnerabilities and addressing digital security are going to have to become a priority. More IPs, More Problems The susceptibility of cars’ internal computers to hack attacks is only one of the potential cybersecurity problems that the automotive industry faces. The popularity of the Internet of Things has led to the creation of more devices within any given car that are accessible by wireless connection, just as it has with homes. Everything from the locks on a car’s doors to the radio can be managed by smartphone, and every point of wireless access provides a potential way in for a hacker. Smarter Cars, Smarter Hackers When thinking about digital security, it’s not just the automobiles of today to take into consideration. In the near future, automobiles will make even more use of the types of technology that hackers can exploit. Self-driving cars using GPS to navigate are already being demoed in various cities throughout the U.S. and Europe. Prominent futurists are envisioning a day when package deliveries and some, if not all, personal transportation are undertaken by cars that drive themselves. And if we reach a point where such sophisticated self-driving smart cars truly do rule the roads and highways, it will be critical to make sure hackers aren’t commandeering them for the purposes of theft, mischief, or even destruction. What Can the Automotive Industry Do? The darkreading.com article suggests that bringing security professionals into the software development cycle is one way the industry could handle an impending wave of cyberattacks and exploits launched against vulnerable automobiles. Because we’ve already reached a point where having a car’s software upgraded sometimes requires a trip to the dealer, it seems as though the idea of remote security upgrades the way they are done for smartphones could eventually have its day—but that too poses its own, quite significant security challenges. So as with any rapidly evolving technology, there are more questions than answers. Will there someday evolve a model in which an outside security solution provider is as critical to an automobile as it is to an enterprise? Will we reach a point where a trip to a mechanic will involve a run-through of patches to onboard computers? 
It’s truly tough to say what shape the merging of automotive maintenance and digital security will eventually take. But it’s something the automotive industry will be keeping its eye on—for the sake of the reliability of its products and the safety of its customers. How do you think auto manufacturers should deal with the coming wave of digital security concerns?
Can Machines Really Think? The idea that machines will dominate human intelligence is mere science fiction today, but machines are getting better at complex decision making and active reasoning. By Chetan Dube Since the birth of computer science, we have been asking the same question: can machines really think? Some have argued that artificial intelligence is impossible (Dreyfus), immoral (Weizenbaum) and perhaps even incoherent (Searle). And yet, despite the cynicism of those before him, Alan Mathison Turing, the dubbed father of computer science, posed his famous challenge in the mid-1900s: is it possible to create a machine so intelligent that we cannot discern any difference between human and machine intelligence? On these grounds, the world has wrestled with various cognitive models to mimic human intelligence, although none has yet been able to replicate it. In 1966, Weizenbaum created “Eliza,” a robot that imitated the behavior of a Rogerian psychotherapist by using rules that transformed users’ questions. For example, in response to a statement such as “I am feeling sad,” Eliza would ask “Why are you feeling sad?” Eliza could carry an effective psychotherapeutic discussion, but the technology did not go any further than asking these simple questions. She was therefore unable to beat the Turing Challenge despite many convinced patients. Today, artificial intelligence has progressed much further than restructuring sentences and we have seen a grand spectrum of its capabilities in many different forms. IBM’s Watson can beat human champions on Jeopardy; cars manufactured by Google can drive themselves; and Apple’s Siri, now built into iPhone devices, can follow rudimentary oral commands to retrieve information and perform basic actions. Although these tools are helpful and perhaps entertaining, the question remains: can machines graduate from domain-specific game playing and office management tools to truly emulate and rival human intelligence? It is quite tempting to study a specific body of knowledge and distill copious amounts of information into supercomputers in order to replicate intelligence. By this method, a machine may house more information than is possible for a human, demonstrating intelligence that rivals that of the human brain. However, this method also ignores a pivotal suggestion from Turing: the way to make machines think is not to simulate the adult mind, but to simulate a child’s brain and let it rapidly learn about its own environment. Thus, adaptive learning is the key to making machines intelligent and fostering their ability to counter human intelligence. Leveraging this proposal from Turing, as well as theoretical computer science principles, we are precipitously close to sincerely answering the Turing Challenge. The goal is not to fake human intelligence but to make a sincere emulation of a human brain that is capable of adaptively learning just as a child learns, rapidly gaining aptitude by interacting with humans. It is clear at this point that we cannot fake it. According to Nobel Prize winner Francis Crick, the father of modern genetics and DNA helix structures, there is a fundamental framework of ideas that are necessary for machines to achieve intelligence. Thus, while Watson and Siri demonstrate knowledge and comprehension that rivals humans, we must study hierarchical temporal memory systems in order to gain insight into the neuroscience behind a human brain’s work. 
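To see how shallow the Eliza-style rules described above really are, and why they fall so far short of the adaptive, child-like learning Turing proposed, consider this minimal sketch of the kind of transformation Eliza performed. The patterns are illustrative rather than Weizenbaum's actual rule set.

    import re

    # A tiny Eliza-style responder: purely syntactic pattern -> template rules,
    # with no model of meaning, no memory, and no learning.
    RULES = [
        (re.compile(r"i am feeling (.+)", re.I), "Why are you feeling {0}?"),
        (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."

    print(respond("I am feeling sad"))    # -> Why are you feeling sad?
    print(respond("My code is broken."))  # -> Tell me more about your code is broken.

Rules like these can keep a conversation going, as Eliza's convinced patients showed, but nothing in them understands or learns; closing that gap is exactly what a sincere emulation of the brain, of the kind discussed next, would have to do.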
To clone human intelligence, we must relay this neuroscience research into a sincere emulation of the human brain and its ability to think, comprehend, and solve problems rather than simply program a machine. Some may ask, once machines have, in fact, achieved human intelligence, what impact will there be on modern times and our individual lives? At this point, however, it is hard to tell all the ramifications of machines’ potential to learn and think comprehensively. When microprocessors were invented, they were predominantly developed for calculators and traffic light controllers. Today, the world is a more efficient shrinking village through the use of Internet and mobile communications. This was an unpredictable development from the original innovation, and will certainly be reflected in the widespread growth and evolution of intelligent machines. While any speculation at this point is just that -- speculation -- it is likely that machine intelligence will liberate mankind from menial and mundane tasks, allowing us to engage in higher forms of creative expression across all fields. This type of innovation has long been squelched by a fear that machines will steal jobs, especially in an economic setting wholly focused on job creation. However, as a case study example of technology replacing humans, examine the outcome of the horse-and-carriage drivers once cars were introduced to the masses: they were promoted from driving horses to driving cars. Since then, we’ve seen a similar evolution from factories run by human hands to factories based on computer-aided design, modeling, and automatic assembly. Rather than limiting job availability to humans, I predict that machines will goad mankind to move the power of our brains higher in the value chain. A human brain is too beautiful a thing to waste, and should be focused only on the most creative, innovative, and thought-provoking matters. At this point, the idea of machines dominating human intelligence is mere science fiction, as complex decision making, active reasoning, and emotional intelligence are still only human characteristics. It’s on the horizon, though, slowly coaxed along by our desire to solve the six decade-old Turing Challenge through the implementation of technologies such as expert systems and autonomics. One thing we know for sure: these are the most exciting times of artificial intelligence and transformation. Chetan Dube is the president and CEO of Ipsoft, a global provider of autonomic-based IT services. Dube founded the company in 1998 with the mission of powering the world with expert systems. During his tenure at IPsoft, Dube has led the company to create a radical shift in the way infrastructure is managed. Prior to joining IPsoft, Dube served as an Assistant Professor at New York University. In conjunction with Distinguished Members of the Technical Staff at AT&T Bell Labs, he researched cognitive intelligence models that could facilitate cloning human intelligence. Dube has been working at IPsoft to create an IT world where machine intelligence will take care of most mundane chores, allowing mankind to concentrate on higher forms of creative thinking. Dube speaks frequently about autonomics and utility computing and has presented seminars about the environmental benefits of automation. You can contact the author at [email protected].
Justin Clarke is a co-founder and Director at Gotham Digital Science. He has over twelve years of experience in assessing the security of networks, web applications, and wireless networks. He is the author of the open source SQLBrute blind SQL injection testing tool and several books, and is the Chapter Leader for the London chapter of OWASP. In this interview he discusses SQL injection and his latest book.
Exactly how big of a problem is SQL injection? Can you provide a rough picture to our readers not familiar with the issue?
SQL injection is a huge problem. It is fairly common, and it has the potential to be a huge risk to a business, either directly or indirectly. A good example of the risk of SQL injection is the recent revelation that SQL injection played a key part in a number of the large credit card data breaches – Heartland, Hannaford, TJX and others. In these cases, SQL injection was used in order to get further into an organization and do something else; in a lot of cases, however, SQL injection can be serious enough all on its own. I've seen examples such as a small US bank that had over 5,000 mortgage applications stolen from an application via SQL injection, and hence all of the information needed to steal all of those people's identities.
For those not familiar with SQL injection, it occurs when unvalidated user input is included within Structured Query Language (SQL) statements that are dynamically assembled within the application. For example, say the application is assembling a query for product information with a "productid" variable supplied by the user, with a query something like this:
string query = "SELECT * from products WHERE productid =" + productid
Where the "productid" variable is as the developer expects, this works fine. However, if the "productid" variable contains SQL language statements instead of the expected values, an attacker can modify what the query does, and even execute totally different database functionality in some cases. For example, on Microsoft SQL Server the attacker may be able to execute operating system commands like the following:
Attacker supplies the following value for "productid" -> 1; exec xp_cmdshell('cmd.exe /c ping 10.11.12.13')--
What are the main motivations behind SQL injection attacks? Precisely what are the attackers after?
We are also seeing an increasing number of reports where SQL injection is used as a method of getting into an organization's network. In this case, it's likely the attackers are using common database functionality to run underlying operating system commands in order to gain a foothold on the database server. As database servers have a tendency to be in a network segment closer to the organization's internal network (or may be on the internal network in some cases), this can allow an attacker to gain access further into the organization and then attack other systems they are interested in.
What are the basic steps developers can take in order to prevent SQL injection? Is there a solution to the problem or is it just an ongoing battle?
Preventing the majority of cases of SQL injection is actually quite straightforward – avoid the use of dynamic SQL with user input included in it. The use of parameterized SQL statements (also known as bind variables) is usually the best way of doing this, and in many cases also has the benefit of better database performance.
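To make the contrast concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are illustrative rather than taken from the interview, but the parameter-binding pattern is the one Clarke recommends.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (productid INTEGER, name TEXT)")
    conn.execute("INSERT INTO products VALUES (1, 'widget')")

    productid = "1; DROP TABLE products"  # hostile "user input"

    # Vulnerable pattern: concatenating the input into the SQL text itself.
    # query = "SELECT * FROM products WHERE productid = " + productid

    # Parameterized pattern: the value is bound separately and is never parsed
    # as SQL, so the injected text cannot change what the query does.
    rows = conn.execute(
        "SELECT * FROM products WHERE productid = ?", (productid,)
    ).fetchall()
    print(rows)  # [] -- the hostile string simply fails to match any productid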
Using parameterized SQL means that the values passed in as parameters are not treated as part of the executable SQL statement, and therefore can't cause SQL injection to happen. If that isn't possible, the other alternative is to ensure that dangerous characters in user input included in dynamic SQL are encoded to prevent SQL injection. This approach is far more prone to errors in the implementation of the encoding though, and therefore you might want to use something like OWASP ESAPI as a reference implementation of how to do this safely.
The difficulty with taking either of these approaches is in identifying all of the places where your applications are vulnerable to SQL injection, or alternatively where all of the database access occurs, and then changing how this is done. This is potentially a massive effort for even a small organization, so I think we are likely to have SQL injection remain a problem for a long time to come.
What software solutions (both free and commercial) would you recommend for security professionals working to uncover SQL injection issues?
For commercial solutions, you will probably want to look at dynamic scanners (such as HP WebInspect or IBM AppScan) and/or static code analysis tools such as Fortify SCA or Ounce (now owned by IBM), depending on your preferred approach for finding SQL injection issues.
On the free side, the list is far more limited. On the source code analysis side of things, if you're on .NET, Microsoft has released its CAT.NET static analyzer. Also available are tools such as YASCA (Yet Another Source Code Analyzer) and OWASP Orizon that look promising. On the dynamic scanning side, the best free tools for finding SQL injection are penetration testing tools such as OWASP SQLiX, although most of the tools available assume that you've managed to find the issue yourself manually, and only help you exploit them.
What challenges did you face when writing "SQL Injection Attacks and Defense"? What did you learn?
The main challenge with "SQL Injection Attacks and Defense" was to make sure that we covered as much as possible. There is a huge amount of SQL injection knowledge out there, and we made the effort to capture as much as possible in the book, but that is always a moving target based on the timescales you work with in a book project. The team of contributing authors on the book were a huge help here – there was a lot of cross review back and forth during the process of writing the book to ensure we'd covered what we wanted to in each chapter.
Are you satisfied with the response from the technical reviewers and the audience? What would you do differently next time around?
We've had a very positive reception to the book so far. I've had a few folks drop me a note via email thanking us for the book, and I'm always careful to note that this was a team effort with a lot of smart folks pulling together to get it written. One thing that I would do differently next time would be to ship example applications with the book (or have them available on a supporting site) so that readers would be able to walk through examples themselves. This is something that we did discuss during writing, but with the deadlines we were working to this just wasn't possible this time around.
There are many highly knowledgeable security professionals out there that still haven't tackled the challenge of writing a book. What advice would you give them based on your writing experience?
The one thing I would tell people is that it takes a lot of time to write a book, so the idea of writing a book on your own in your spare time is usually not realistic. My advice would be to assemble a small group of people you trust, preferably with at least one of you having been involved in a book project before, and write the book as a collaboration. That is another way to get started – contribute a chapter to a book someone else is writing. So if someone you know is writing a book on a topic you know a lot about, ask if you can get involved.
Within an hour of posting my last blog on the Variety of Location data, I remembered three other sets of Location data that I did not include but should note: Electronic Addresses, Voice/Phone Addresses and Virtual World Addresses. Adding in the Electronic World There's a whole virtual world out there that's grown around us, but you can't attach geospatial coordinates to: the realm on email addresses, IP addresses, and URLs. Should these be considered Locations as well? I think a reasonable case can be made to do so. 1) Consider a customer: they have a physical address you can ship goods to; and they have an electronic address that you can send a shipment notice to. - Both represent routes to the customer. - Both are distinct from the customer and the actual data about the customer. 2) Consider their uniqueness: a given physical address (once you've accounted for subdivisions such as apartments and floors) is a unique place; a given electronic address is also unique - while it might be accessed from many virtual points, my email address or the IP address I'm currently connected to are distinct from any others. 3) Consider that they are routing mechanisms: they serve as endpoints for their relevant protocols to deliver content to. Generally, the electronic addresses are less complex in content than physical addresses, but I think it worthwhile to consider this set as more of the larger variety of Location data. And what of the World of Voice? Phones are an interesting crossing point. Through most of their history they are physical devices, whether in the form of landline phones in your home, fax machines in the office, or mobile phones in your pocket. At any a given point in time, they occupy a physical space which can be described as a Physical Address or as a Geospatial Coordinate. However, they also have a phone number which serves as a Voice Address -- I can call your phone number to reach you just as I can send a letter to your physical address or an email to your electronic one. Those phone numbers are unique at a given point in time and are clearly distinct from either the user of the phone or the premise at which the phone currently resides. These Voice Addresses are also distinct from the serial numbers that uniquely indicates that your iPhone is distinct from your friend's iPhone. Now the distinctions start to blur when you consider Conference Numbers and VoIP - these are virtual addresses, numbers you can dial from some other phone or machine to reach someone else but are not necessarily at any specific physical location or linked to any specific device. Emergence of Virtual Locations Given the blurring of boundaries between Electronic Address and Voice Address, it may be useful to consider these as Virtual Locations. And that in turn allows us to bring Virtual World Locations in to a common picture as well. I see this aspect in the online games my kids play where they decide which server or world they want to participate in at any given time. Like conference call numbers, these virtual locations have specific identities to select and often restrictions on the number of connections supported at a given point in time. Why do we want to include these electronic, voice, and virtual addresses in our consideration of Location data? In many instances, these are the only Location data we have for a given customer, client, vendor, etc. Many of our interactions are solely electronic. 
Where a product is virtual, such as an eBook or a pdf document or even a tax form, delivery of goods and acknowledgment of receipt is based on these locations. Our understanding of customers and suppliers is increasingly shaped by our awareness of the Virtual World as well as the physical one. What do you think? Do you agree with this broad perspective on Location? In my next post, I'll look take up the increasing sources of Location data and their possible uses and interactions as I think this is a key to determining whether it is worthwhile to turn Location into Master Data, and if so, which pieces. As always, the postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.
The 8-bit era (1980-1984) Time Magazine: January 1983 The star of the personal computer was rising. For 1982, Time Magazine featured a generic version as "Machine of the Year," the one and only occasion a nonhuman has won that award. By this time, many new firms had joined the fight, releasing the third generation of machines. Most of these used the 6502 CPU, because of its excellent bang-for-the-buck ratio. Commodore, realizing that the PET was running out of gas, started a number of new 8-bit computer projects based around their 6502 and custom chips developed in-house by MOS Technologies. One of them, the VIC-20, named after the Video Interface Chip that ran its graphics, was enormously successful. Despite limitations in the VIC chip that could only display 22 columns of text, the colorful and inexpensive computer sold 600,000 units in 1982. But the big winner for Commodore was yet to come. Rushed into production by shoehorning components into the VIC-20 case, the Commodore 64 made its debut in late 1982 for US$595 with a full 64KB of RAM, the maximum directly addressable by 8-bit CPUs. At the time, computers with that much RAM cost at least three times as much. By integrating the full 64KB of RAM as part of the standard model, the C-64 enabled software developers to write and port stunning high-resolution games to the platform. An improved VIC chip allowed not only 40 column text but supported sprites, making it easier to create fast-moving, flicker-free game graphics. In addition, a multifrequency synthesized sound chip (the legendary SID) gave the small machine a sweeter voice than any other machine of its time. Sales of this new computer took off, reaching a staggering 2 million units in 1983. Commodore VIC-20, top. Commodore 64, bottom This incredible volume, unheard of in the personal computer industry before (and it would still be a respectable figure for a new computer model today!) allowed Commodore to get its final revenge on Texas Instruments. A price war which left the C-64 selling for as little as US$199 caused TI to panic and remove its own computer, the TI-99/4A, from the market. The Atari 400 and 800 Atari, flushed with the success of PONG and its 2600 games console, released the 400 and 800 series of computers in 1979. The 400 was essentially a cheaper version of the 800 with less memory and an awkward "membrane" keyboard. Designer Jay Miner had fitted these machines with impressive technology, including a custom blitter chip that could blast large sections of graphics on the screen without involving the CPU. The 400/800 could play games, like Frogger, that were indistinguishable from the arcade versions. However, Atari kept most of the details about its hardware secret in order to try and give an advantage to its in-house software developers. This limited the long-term success of the platform, which peaked at 600,000 units in 1982 and went steadily downhill. The Atari 800 could have been much more, were it not for an accident of fate. Mighty IBM, the 800-pound gorilla of the computer industry, was starting to worry about being left behind in this new market, which had emerged at dizzying speeds. "The worry was that we were losing the hearts and minds," IBM executive Jack Sams reminisced. "So the order came down from on high: 'Give me a machine to win back the hearts and minds.'" At first, IBM thought about rebranding an existing computer, and had selected the Atari 800. 
However, after a visit to Atari headquarters, where IBM businessmen were literally put in a box and run through the assembly line by unorthodox and sometimes stoned Atari employees, the computing giant decided they would rather build their own computer. To gain an advantage over existing personal computer models, IBM decided to use the new Intel 8088 CPU, which had a 16-bit memory model making it capable of directly addressing 1MB of memory (although unlike the fully 16-bit 8086, the 8088 chip saved money by being 8-bit externally). Bill Gates, in negotiations at the time for delivering BASIC for this new machine, was tremendously excited about the potential for a 16-bit CPU, and to this day claims that his input "tipped the balance" in favor of this chip. Because the personal computer market was growing so rapidly, a rogue IBM design group in Boca Raton, Florida was given the go-ahead to design and build the new computer in less than a year. This necessitated a few shortcuts in the design, which unlike most IBM computers used primarily off-the-shelf chips. Even the operating system was contracted out to another company. Originally it was meant to be Digital Research, maker of the popular CP/M operating system for 8-bit computers. However, when DR took its time signing IBM's non-disclosure agreements, Microsoft seized its chance and won a deal to provide the operating system instead. A combination of off-the-shelf hardware and an operating system available from a third party made the rise of 100% compatible IBM PC clones possible, once the ROM BIOS of the PC had been successfully reverse-engineered. Compaq was the first company to do so, in 1982, but many others followed. The original IBM PC, Model 5150. 5150 is also a police code for a severely disturbed person. The IBM PC was released in late 1981 and retailed for US$2880, with 64k of RAM and a monitor. Despite the popularity of the IBM brand name, sales were initially sluggish, but picked up dramatically in the following year. The PC's own version of the killer app, the multi-function spreadsheet Lotus 1-2-3, drove many sales. By 1984 the PC and its innumerable clones were selling 2 million units a year, nearly as many as the Commodore 64 and eclipsing older machines like the Apple ][. Many personal computer companies saw the PC as a threat. Their answer was to try and beat the giant with superior technology. It almost worked. Personal computer marketshare during the 8-bit era
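A brief aside on the memory figures quoted above: the 64KB ceiling of the 8-bit machines and the 1MB reach of the 8088 fall straight out of the width of each design's address space. The segment/offset detail in the sketch below is how the 8086/8088 family actually formed its 20-bit addresses; it is offered as background, not something stated in the article.

    # Why 64KB and 1MB were the magic numbers of the era.
    ADDRESS_BITS_8BIT_ERA = 16   # 6502, Z80, etc. drive a 16-bit address bus
    ADDRESS_BITS_8088 = 20       # 8088/8086: 16-bit segment plus 16-bit offset

    print(2 ** ADDRESS_BITS_8BIT_ERA)  # 65536 bytes  = 64 KB
    print(2 ** ADDRESS_BITS_8088)      # 1048576 bytes = 1 MB

    # The 20-bit physical address is formed from two 16-bit values:
    def physical_address(segment, offset):
        return (segment << 4) + offset  # segment * 16 + offset

    print(hex(physical_address(0xF000, 0xFFF0)))  # 0xffff0, the 8086 reset vector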
Relying solely on the models of the past as predictors of the future can leave even the best-reasoned business plans all wet. If we had to do it all over again, says the inventor Dean Kamen, we wouldn't distribute water the way we always have. We wouldn't rely so much on centralized collection and processing points, but instead would move water collection and treatment closer to the end user. Kamen's thesis is that safe drinking water can be created on the spot with the help of a machine (one he happened to invent) that effectively eliminates, or at least reduces, the need to rely on massive distribution networks of aqueducts and pipes. Kamen's purification machine, the Slingshot, is an affront to centuries of water distribution practice tracing back to ancient Persia, where sloping tunnels called "qanats" were carved into hillsides to coax water down into channels built around cities, and later to Rome, where among the Roman Republic's most impressive feats was the construction of the Aqua Marcia aqueduct, a slithering channel that laced its way across and under hills and earth for 23 miles before delivering water to public baths and fountains. Similar approaches to moving large amounts of water would come to prevail in colonial America and beyond. Boston is credited with incorporating the first public waterworks system, in 1652. The Croton Aqueduct, which carried water from Westchester County to New York City, was completed in 1842. Chicago devised an underground tunnel system for distributing water in 1869. With these delivery networks established, the next big leap was purification. In the early 1900s, American and European scientists and engineers began to devise large-scale purification systems that protected waterways from pollutants and purged deadly waterborne contaminants from the public water supply, making it safe to drink distributed water without fear of contracting dysentery or typhoid fever. The endurance of centralized, networked water delivery testifies to its utility and practicality. But occasionally, technology and invention rise up to challenge common thinking. That may be happening in a micro sense with Kamen's Slingshot, which Kamen insists can create potable water out of just about any liquid source – no matter how tainted. There are interesting parallels here with modern communications networks, which seem to be trying to make up their minds as to which model to pursue. One school of thought anticipates the Moore's Law engines of volume and scale will drive more power into the hands of end users, placing control at the network's edge. The products of this dynamic include the DVR, Blu-ray players and Google TV, devices that attach to networks and allow for brew-your-own entertainment experiences that are nothing like the point-to-multipoint symmetry of old-school broadcast television. But as compelling as that vision may be, it turns out that things don't exactly move in orderly alignment. Even as multichannel video providers feed the nation's DVR appetite – more than 40 percent of U.S. homes now have one, says researcher Bruce Leichtman – an opposing "network DVR" movement wants to stuff the DVR's capabilities back up through the network pipe and into a centralized server that does the same thing, but without the need for local storage. Also, telecommunications providers lately are warming to the notion of so-called "cloud" networks that take the jobs of data and application storage away from local machines and give it to a big server somewhere in the beyond.
Both Verizon Communications and Time Warner Cable recently made significant acquisitions in the cloud category for purposes of satisfying rising demand among enterprise customers. So which direction is the train headed? Probably both. For investors, service providers and customers, the trick is going to be placing the right bets at the right time. Certain applications, content and capabilities will favor edge intelligence. Others will profit from centralized storage and distribution. But as Kamen’s inventive idea for quenching the undeveloped world’s thirst suggests, relying solely on the models of the past as predictors of the future can leave even the best-reasoned business plans all wet.
Hidalgo C.,Colegio de Mexico | Etchevers J.D.,Colegio de Mexico | Martinez-Richa A.,University of Guanajuato | Yee-Madeira H.,IPN ESFM | And 3 more authors. Applied Clay Science | Year: 2010 In Mexico, 70% of the land surface shows some degree of degradation. A substantial portion of these degraded soils are located in the central part of the country, where a high population density exerts unusual pressure on the land. The study of these degraded soils is important because of the ecological, social and economic consequences of this ecosystem component. Two types of degraded volcanic soils were studied in the present research: one coming from tepetates (a volcanic tuff, partially altered and ameliorated for production purposes) and the other, a highly eroded Acrisol developed from old volcanic materials. These soils have not been much studied and are explored here due to their potential to sequester carbon. In studies focusing on the relationship between mineralogy and carbon sequestration, it will be necessary to clarify the characteristics of the fine fraction of the soil (<2μm). Clays have been reported to show different mechanisms of association with soil organic matter, in accordance with their nature. In this paper, a mineralogical characterization was made of the fine (2-1μm) and very fine (<1 μm) fractions of these soils, considered to be the most active in the sequestration process. The characterization was initially developed by using X-ray diffraction (XRD). However, the results obtained with this technique were not conclusive. In addition, because XRD sometimes requires tedious chemical treatments and takes time, it is proposed here to use spectroscopic techniques other than the traditional ones to more accurately define the mineralogical characteristics of the studied fractions. Diffuse reflectance infrared Fourier transform (DRIFT), 27Al magic angle spinning nuclear magnetic resonance (27Al MAS NMR), 29Si magic angle spinning nuclear magnetic resonance (29Si MAS NMR), transmission electron microscopy (TEM), high resolution transmission electron microscopy (HRTEM) and Mössbauer spectroscopy were employed. The fine fractions (<2 μm) of the degraded soils are made up of low-activity clays: tubular halloysites in the Te-Tl tepetates and kaolinites in the Acrisol. The coarse (2-1μm) fractions in Te-Tl also contain cristobalite and albite. In Ac-At, akaganeite, goethite and hematite are the principal Fe-mineral components. For this reason, the restoration techniques proposed for these degraded soils must be complemented with appropriate fertilization practices that provide basic elements (Ca, Mg, K and Na) to the soil; these elements can be rapidly lost, a loss associated with the low activity of the fine fraction. © 2009 Elsevier B.V.
Way back at the dawn of the automobile, people were skeptical of the new invention. They were accustomed to horses, and the UK passed a law that required motorists to be preceded by a man on foot waving a red flag so as to warn pedestrians. Well, a 21st-century equivalent of that practice is on the way. A new rule announced by the US National Highway Traffic Safety Administration on Monday will require electric vehicles (plug-in hybrid EVs as well as battery EVs) to make some noise at low speed to warn oncoming pedestrians. At speeds of 19mph (30km/h) or below—either moving forward or in reverse—EVs will need an audible warning. Above that speed, a warning is not considered necessary because tire and wind noise should provide sufficient notice to the visually impaired that a vehicle is headed their way. "We all depend on our senses to alert us to possible danger," said US Transportation Secretary Anthony Foxx. "With more, quieter hybrid and electrical cars on the road, the ability for all pedestrians to hear as well as see the cars becomes an important factor of reducing the risk of possible crashes and improving safety." While this might at first glance seem like a silly idea, the organizers of the annual Pikes Peak International Hill Climb in Colorado have insisted on something similar for years. EVs have become more popular at Pikes Peak of late because, unlike their internal combustion rivals, they are unaffected by the climb in altitude. Organizers have required EVs competing in the race to use sirens to alert the corner workers that a vehicle is approaching. Automakers have until September 1, 2019 to comply with the new safety standard.
Given the fact that most of the Internet got started in America -- and Americans dominate its use -- it's easy to forget the rest of the world is not only using the Web, but doing so in unique and creative ways. Amazon.com and AOL tend to get the lion's share of attention when it comes to how things should be done online, but names such as Bhoomi, Golaganang and e-Boks, although lesser known, are still valuable online creations that have sprung up around the globe. Some are surprising and unusual solutions to familiar problems -- a "last mile" powered by bicycle pedals, a cell phone-based parking solution and many flavors of online education. Here is a brief look at some innovative IT solutions happening around the world. Access Across Archipelagos, Slums and Deserts The Solomon Islands People First Network is a rural e-mail network that connects this remote island nation using solar-powered computers networked over short-wave radio. Archipelago Net uses a fiber-optic backbone with wireless LAN links to connect the thinly populated 30,000-island archipelago between Sweden and Finland. Connectivity has made the islands more attractive for year-round residents, and as one official joked, "Every fish will have an Internet address." While Internet penetration remains low in Egypt, the government is making it easy for people to get started -- with free Internet access and nearly 400 government-subsidized IT clubs. In a rather strange experiment called the Hole in the Wall project, computer scientist Sugata Mitra installed touchscreen computers without instructions in walls throughout the Indian slums. Given only access and opportunity, poor children quickly taught themselves to use the computers, access information and play games. In Laos, bringing the Information Age to rural farmers in Ban Phon Kam and nearby villages is a difficult task. There are no telephone lines or electricity in the area. The Jhai Foundation's answer is a rugged solid-state computer, which draws only 20 watts (70 watts when printing) and is powered by a car battery and a bicycle-type foot crank. The computer runs on a Lao-language version of Linux. A wireless LAN, based on the 802.11b Wi-Fi protocol, transmits signals between the villages to a server at the Phon Hong Hospital for switching to the Internet or the Lao telephone system. Villagers can now make telephone calls with voice over IP, send e-mail and print materials, which will help villagers profit from crop surpluses and export textiles by giving them the ability to communicate with Laos' capital. Young entrepreneurs are helping to launch business development activities. Dialing Up Train Tickets and Parking Places In Canada's two largest cities, Montreal and Toronto, Bell Canada converted pay phone boxes into Wi-Fi hot spots for its AccessZone pilot. The phone boxes were replaced by wireless transmitters, and DSL carries both pay phone and wireless service from the same location. In Japan, commuter rail tickets are a dial-tone away. NTT DoCoMo and the East Japan Railway Co. are developing Mobile Suica, which incorporates a rail-pass transponder in a DoCoMo cell phone. The Suica technology, scheduled to be operational by year-end, includes an integrated chip and antenna, and acts as a prepaid fare card as the commuter passes through the gate. Parking in Ireland's capital city has gone mobile. Dublin motorists can use mPark to pay parking fees by mobile phone. The motorist stops at a parking facility and dials a number posted on the payment machine.
The machine gives instructions verbally, telling the motorist to enter a four-digit number displayed on the machine. The motorist's name appears on the parking machine's screen; he or she selects the amount of parking time desired and the machine prints a ticket to display on the vehicle dashboard. Another mobile phone payment system for parking, the my-T-phone pilot project, allows customers to register their mobile phone number and vehicle details online. They prepay their parking fees by credit card and go online to check their account balance, parking history or change their vehicle details. To pay for parking, customers can call the number displayed at the car lot to start the virtual parking meter when they arrive, and call again to stop it when they leave. SMS is used to confirm fee payments and warn if an account has insufficient funds. Customers also have the option to prepay for parking time. The service sends a reminder 5 minutes before the prepaid time expires. Parking inspectors view a list of vehicles authorized to park in the area using a WAP phone or handheld computer. Tooling Education with the Internet The United Kingdom's government-backed media giant, the BBC, got permission to launch a tax-supported online digital curriculum service for schools in the country. Following complaints by educational publishers, money was also made available as "e-learning credits" to allow schools to acquire software from commercial educational publishers. UaeMath, based in the United Arab Emirates, aims to provide free math help and integrate technology, curriculum and user needs to make students appreciate math and thus improve their socioeconomic status. Educ.ar is Argentina's national Internet education portal aimed at democratizing education in the country by providing high-quality, interactive educational content and services. Educ.ar integrates all official academic subjects in all levels of the Argentine educational system. Also in the United Kingdom, the BBC reported that the Venerable Bede Church of England Aided School, scheduled to open in September, will use retinal scanners to identify children in the school lunchroom and library. Administrators of the 900-pupil "school of the future" maintain that the scanning technology will be safe and less costly than swipe cards or other identification systems. However, the technology is questioned by some who don't see the need for it. Four Out of Five Continents Use E-Government To ramp up government use of IT, South Africa's Golaganang project will provide computers, software and Internet connectivity to 50,000 government employees and their families. The government hopes the project will stimulate a culture of digital learning and propagate an information-driven economy -- something the South African government places high on its policy agenda. The package will cost employees from $10 to $40 per month, and subsidies are available based on a sliding scale. Bhoomi, a major document computerization project, delivers 20 million land records to 6.7 million landowners through 177 government-owned computer kiosks in the Indian state of Karnataka. The project has reduced red tape and corruption in access to land title records. On Oct. 6, 2002, Brazil conducted one of the first totally "informatized" elections in the world. Brazil's 115 million voters -- who are required by law to vote -- all used electronic voting machines witnessed by representatives from 37 countries and three international agencies.
Electronic voting machines powered by car batteries were carried in canoes up the Amazon to remote villages. The operating panels, with numerical keys from zero to nine, display a photograph of the candidate once their number is keyed in. Once voting is completed, the machine plays a tune to let the voter know the job is done. The system covers state and federal representatives, senators, governors, and presidential candidates. In a recent election, citizens of Anieres -- a suburb of Geneva, Switzerland -- cast their ballots in person, by mail or on the Internet. The online voting used several layers of security and marked the first binding online vote in Switzerland. Online voting is scheduled in Zurich and Neuchatel, and if successful, could spread to national polls. E-Boks is a secure and free electronic archive for citizens of Denmark. Documents from both public authorities and private enterprises, as well as the citizens' private documents, can be transmitted and stored electronically in a secure, remote location, which is accessed via the Internet. Denmark's 2.4 million households each receive an average of 230 administrative letters annually by mail. Replacing this physical mail with e-mail is expected to save the senders approximately 1.6 billion Danish Kroner annually (approximately $220 million). The costs associated with establishing and running the e-Boks service are covered by charges to senders, who pay a fee to join and a transmission fee equivalent to about 25 percent of the current cost of mailing a physical letter. is the editor of Government Technology International
<urn:uuid:d6971b38-2f31-4ff6-aefb-7628a2c779de>
CC-MAIN-2017-09
http://www.govtech.com/magazines/pcio/Not-Invented-Here.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00005-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940559
1,727
2.578125
3
Multimedia Reading with the eyeBook A plane crashes in the desert, the reader hears an explosion and then the little prince from Antoine de St. Exupéry's book of the same name suddenly appears between the lines. Thanks to the eyeBook, a new reading system developed by the German Research Center for Artificial Intelligence (DFKI), a new multimedia reading experience has been made possible. The prototype is being exhibited at CeBIT 2009. Digital media such as sounds or images augment the printed word and literally bring books to life. A device called an Eyetracker follows the reader's eyes and provides content-related feedback for the relevant part of the story.
<urn:uuid:9ffee3af-c0dd-4c98-8466-81b32fed2243>
CC-MAIN-2017-09
http://e-channelnews.com/ec_storydetail.php?ref=418115
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00357-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941899
136
2.53125
3
The 5-3-2 Principle of Cloud Computing: An Easier Approach Even though cloud computing has become exceedingly common, the fact remains that people seem to have differing opinions when it comes to its definition. The issue is not the lack of a definition or the lack of an agreed, coherent one, but rather the lack of apparent direction. Cloud computing is an extremely large topic, but it is being used in very specific ways. In the same vein that makes it hard for people to stay on the same page when talking about general computing without agreeing to something specific, people need to be specific when they talk about cloud computing. This is because it encompasses infrastructure, architecture, deployment, applications, development, automation, operations, management, optimization and a dozen other topics that are equally relevant, and all would be valid topics when referring to cloud computing. Because of this inherent confusion with regard to structuring the conversation, Yung Chou, a Technology Evangelist on the Microsoft US Developer and Platform Evangelism Team, devised an easy-to-remember principle that we can all agree on and use as a proper base when discussing cloud computing. The principle is called the "5-3-2 Principle" and refers to the five essential characteristics of cloud computing, the three cloud service delivery methods, and the two deployment models, which, when put together, properly describe what cloud computing is. First up are the "5 Essential Characteristics of Cloud Computing": on-demand self-service, ubiquitous network access, location-transparent resource pooling, rapid elasticity, and measured service with pay per use. The characteristics were defined by the National Institute of Standards and Technology (NIST) as part of its "Definition of Cloud Computing" publication, and they mostly speak for themselves as far as what each means at first glance. This publication is also where the principle is derived from. These five characteristics are all required for something to qualify as cloud computing, according to NIST. The 3 stands for the three service delivery methods, namely: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). All services offered using cloud computing fall under one or more of the above delivery methods, whether it be office applications and games (SaaS) or cloud backup storage and computing resources (IaaS). It becomes easier to decide whether the service you are using really is cloud computing just by judging whether it is being delivered through one of the above methods. And lastly, we have the two main deployment models of cloud computing: the Private and the Public Cloud models. NIST lists four deployment models, but the other two are really just a combination or a derivative of these two main ones. The public cloud is meant for public consumption, while a private cloud's infrastructure is dedicated to private use, as in large corporations or government agencies. The 5-3-2 Principle is a new, simple, and structured way to approach cloud computing, whether for simple conversations between colleagues or for educating people on what cloud computing truly is.
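As a rough illustration only (not from the original article), the 5-3-2 breakdown can also be written down as plain data and used as a quick checklist; the names below simply restate the NIST terms above in shortened form.

```python
# Hypothetical sketch: encode the 5-3-2 breakdown as data and use it as a checklist.

ESSENTIAL_CHARACTERISTICS = {            # the "5"
    "on-demand self-service",
    "ubiquitous network access",
    "location-transparent resource pooling",
    "rapid elasticity",
    "measured service (pay per use)",
}
DELIVERY_METHODS = {"SaaS", "PaaS", "IaaS"}       # the "3"
DEPLOYMENT_MODELS = {"public", "private"}         # the "2"

def fits_5_3_2(characteristics, delivery, deployment):
    """True if an offering exhibits all five characteristics and uses a known
    delivery method and deployment model."""
    return (ESSENTIAL_CHARACTERISTICS <= set(characteristics)
            and delivery in DELIVERY_METHODS
            and deployment in DEPLOYMENT_MODELS)

# Example: a hypothetical cloud backup service
print(fits_5_3_2(ESSENTIAL_CHARACTERISTICS, "IaaS", "public"))   # True
```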
The structure for using the principle in discussions is straightforward: validate business needs starting with the 5 characteristics, then go into the feasibility of delivering the intended service or function with the 3 delivery methods, and finally end with which deployment model would be preferred in the situation. By Abdul Salam
<urn:uuid:318923ba-6d98-4a61-945d-d7296746763f>
CC-MAIN-2017-09
https://cloudtweaks.com/2013/03/the-5-3-2-principle-of-cloud-computing-an-easier-approach/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00533-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942861
689
2.71875
3
BASIC-256: computer programming for (complete) beginners Computers may be everywhere these days, but computer programming is still often seen in a very stereotypical way: it's complicated, strictly for geeks only, not something of much use to anyone else. The reality is very different. Anyone can learn the fundamentals of programming. It's great for developing problem-solving skills, or just helping you understand how other applications work. And it's really not that difficult, especially if you start with a simple language like the open source BASIC-256. The program installs easily, and launches into a basic-looking text editor. There's no initial guidance on what to do next, unfortunately, but click File > Open and you'll find various programs in the Examples folder. These are very small and simple -- "hello world", basic graphics, animations and so on -- but you can view the code, run the program with a click, and see the results right away. Or that's the theory, at least. Text and graphics output is directed to separate windows, and these aren't necessarily displayed by default, so you might find that running a program doesn't appear to do anything at all. To fix this, click View, and make sure Edit Window, Text Window and Graphics Window are all selected. BASIC-256 is a BASIC language interpreter, which means it's all very interactive. The program mousedoodle.kbs, say, allows users to draw on the graphics window with the mouse, and the key code looks like this: circle mousex, mousey, 2 Even a total novice might realize that changing "black" to "red" will change the color of your drawing. This is extremely easy to do -- just edit the text accordingly -- and there's no compilation required afterwards. Just click "Run" and try drawing again. There's more to learn. BASIC-256 offers For/Next, Do/Until looping, If/Then/Else and Case statements for conditional testing, with GoSub, Functions and GoTo for flow control. And there are some surprising extras, including commands to work with SQLite databases, or handle network communications. This still isn't a tool for anyone with previous development experience, as it's just so limited. There's no ability to call external code, no form designer, no objects, no output options beyond scrolling text and extremely basic graphics, and of course no ability to generate stand-alone programs. Sample code like this won't boost the program's reputation amongst purists, either: # do something BASIC-256 does provide a simple, unintimidating environment for total programming novices, though, perhaps middle or high-school students. Bundled examples and online tutorials make it easy to learn, and there's a reasonable amount of functionality to explore. Just keep in mind that you'll have to move elsewhere before you can produce anything useful.
<urn:uuid:d70f471d-3759-4d10-a2c6-81a9a9acbb07>
CC-MAIN-2017-09
https://betanews.com/2014/02/07/basic-256-computer-programming-for-complete-beginners/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00353-ip-10-171-10-108.ec2.internal.warc.gz
en
0.903932
643
3.21875
3
Medical identity theft is a national healthcare issue with life-threatening and hefty financial consequences. According to the 2013 Survey on Medical Identity Theft conducted by Ponemon Institute, medical identity theft and “family fraud” are on the rise; with the number of victims affected by medical identity theft up nearly 20 percent within the last year. For the purposes of this study, medical identity theft occurs when someone uses an individual’s name and personal identity to fraudulently receive medical services, goods, and/or prescription drugs, including attempts to commit fraudulent billing. Half of the consumers surveyed are not aware that medical identity theft can create life-threatening inaccuracies in their medical records, resulting in a misdiagnosis, mistreatment, or the wrong prescriptions. Yet, 50 percent of consumers surveyed do not take steps to protect themselves, mostly because they don’t know how. The survey also finds that consumers often put themselves at risk by sharing their medical identification with family members or friends—unintentionally committing “family fraud”—to obtain medical services or treatment, healthcare products, or pharmaceuticals. “Medical identity theft is tainting the healthcare ecosystem, much like poisoning the town’s water supply. Everyone will be affected,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute. “The survey finds that consumers are completely unaware of the seriousness and dangers of medical identity theft.” Key findings of the 2013 report: Medical identity theft is growing in volume, impact, and cost. Medical identity theft and fraud are major societal problems, placing enormous pressure on the country’s healthcare and financial ecosystems. In 2013, the economic consequences of medical identity theft to victims are estimated at more than $12.3 billion in out-of-pocket expenses. Fifty-six percent of victims lost trust and confidence in their healthcare provider. Fifty-seven percent of consumers would find another provider if they knew their healthcare provider could not safeguard their medical records. Medical identity theft can cause serious medical and financial consequences, yet most consumers are unaware of the dangers. Half of the consumers surveyed are not aware that medical identity theft can create permanent, life-threatening inaccuracies and permanent damage to their medical records. Medical identity theft victims surveyed experienced a misdiagnosis (15 percent of respondents), mistreatment (13 percent of respondents), delay in treatment (14 percent of respondents), or were prescribed the wrong pharmaceuticals (11 percent of respondents). Half of respondents have done nothing to resolve the incident. Most consumers don’t take action to protect their health information. Fifty percent of respondents do not take any steps to protect themselves from future medical identity theft. Fifty-four percent of consumers do not check their health records because they don’t know how and they trust their healthcare provider to be accurate. Likewise, 54 percent of respondents do not check their Explanation of Benefits (EOBs). Of those who found unfamiliar claims, 52 percent did not report them. Consumers often share their medical identification with family members or friends, putting themselves at risk. Thirty percent of respondents knowingly permitted a family member to use their personal identification to obtain medical services including treatment, healthcare products or pharmaceuticals. 
By sharing medical identification with family members or friends, consumers unintentionally leave themselves and their health records vulnerable. People do not know that they are committing fraud. More than 20 percent of people surveyed can’t remember how many times they shared their healthcare credentials. Forty-eight percent said they knew the thief and didn’t want to report him or her.
<urn:uuid:71229234-b131-4277-a6d0-bb80eba6511f>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/09/13/medical-identity-theft-affects-184-million-us-victims/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00529-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951796
735
2.59375
3
NASA Servers At High Risk Of Cyber Attack: Auditors were able to pull encryption keys, passwords, and user account information over the Internet from systems that help control spacecraft and process critical data. The network NASA uses to control the International Space Station and Hubble Telescope has unpatched vulnerabilities that could be exploited over the Internet, NASA's inspector general warned in a new report. The risk of an attack is real, according to the report. In 2009 alone, hackers stole 22 GB of export-restricted data from NASA Jet Propulsion Laboratory systems and were able to make thousands of unauthorized connections to the network from as far afield as China, Saudi Arabia, and Estonia. "Until NASA addresses these critical deficiencies and improves its IT security practices, the agency is vulnerable to computer incidents that could have a severe to catastrophic effect on agency assets, operations, and personnel," according to the report, titled "Inadequate Security Practices Expose Key NASA Network To Cyber Attack." The inspector general pinned the problems on the lack of oversight. Although NASA agreed to establish an IT security oversight effort for the network after a critical audit last May, that effort hadn't yet been launched as of February. As part of its investigation, NASA's inspector general used the open source network mapping and security auditing tool nmap to uncover the fact that 54 separate NASA servers -- all associated with efforts used to "control spacecraft or process critical data" -- were able to be accessed over the Internet. The network vulnerability scanner NESSUS uncovered several servers at high risk of attack. For example, one server was susceptible to an FTP bounce attack, which can be used to, among other things, scan servers through a firewall for other vulnerabilities. Several other servers, which were configured improperly, served up encryption keys, user account information, and passwords to investigating auditors, which could have opened the door to more NASA systems and personally identifiable data. In response to the report, NASA CIO Linda Cureton agreed to add continuous monitoring to the network, mitigate risks to currently Internet-accessible servers, and put in place more comprehensive agency-wide cyber risk management strategies. However, neither the report nor Cureton's response indicates whether the vulnerabilities in question have yet been patched.
<urn:uuid:5db986c5-5f76-4830-9f89-147181a4de9b>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/nasa-servers-at-high-risk-of-cyber-attack/d/d-id/1096929
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00105-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933845
470
2.5625
3
NASA contest yields Space Apps for Earth, too - By Kevin McCaney - May 09, 2012 NASA has opened the voting for its International Space Apps Challenge, an effort to provide a platform to let developers find innovative solutions to problems both in space and on Earth. The contest drew more than two dozen apps from teams on six continents around the world, some of them working from multiple locations. On the contest site, each team submitted a video explaining its application. The submissions range from health care-related apps and an app that uses an iPhone to locate stars on a cloudy night, to several farming apps and a telerobotic submarine — built mostly with off-the-shelf parts — that would let anyone explore underwater. There also are entries such as Vicar2png, which lets anyone view and work with images from NASA's Planetary Image Atlas, whose VICAR format is otherwise unreadable by open-source tools. And an Australian team submitted an app that uses "people as sensors" in an early-warning system by monitoring social media for word of disasters and quickly putting emergency warnings on a map. NASA, along with Innovation Endeavors and Talenthouse, is asking people to check out the apps and vote for their favorites. Those votes and a jury from Innovation Endeavors will determine the winners. Voting closes May 15. Apps contests are becoming a common way for agencies to engage the public while finding useful public-service applications that could have cost a significant amount to develop via a contract. Cities such as New York and Washington have staged app contests in which developers made use of those cities' data. NASA, along with the Harvard Business School, has held a number of developer competitions through its NASA Tournament Lab since October 2010. Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:8a0092f8-d8d2-4c2e-8a9d-a8b071f898f8>
CC-MAIN-2017-09
https://gcn.com/articles/2012/05/09/nasa-space-apps-challenge.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00525-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960946
381
2.625
3
‘Identity theft’ has recently hit the headlines as a major security issue but the importance of digital identity and how to protect it has been a consideration for much longer. Even before the dramatic events of September 11, 2001, corporations worldwide were already well aware of the need to ensure a positive verification of the identity of people conducting business online. The concepts that there is no security without identity and that identity provides accountability are increasingly understood by an ever-wider audience. Fortunately the technology for strong user authentication, whether based on two or three factors, is already available to establish trusted digital ID credentials for secure access to multiple applications. And the issuance, usage and management of those credentials can now be achieved in a very rapid, convenient and cost-efficient way while at the same time meeting the network security needs of governments, corporations and financial institutions worldwide. Digital IDs Gaining Recognition In Government Circles The recognition that digital IDs can ensure the level of confidence needed to do business online has gained significant ground, not least in the government sector as strategies are put in place to deal with the threat of international cyber terrorism. In addition, national, regional and local government organisations are increasingly looking to deploy digital identity solutions for a host of applications such as national ID card schemes, student cards, online voting, online tax return submissions, online passport applications, health benefits cards and drivers’ licences. For example, protecting and managing those digital IDs on a large scale is at the heart of the deployment by the US Department of Defense of its Common Access Card (CAC), a smart card-based ID badge. The US Defense Manpower Data Center has recognised the dual benefits of increased security and a strong return on investment that result from being able to consolidate and manage multiple user credentials on a single chip-based ID card. The Common Access Card has already been issued to over 1.3 million military and civilian personnel out of an initial target population of 4.3 million people. The cards enable staff to access physical areas and logical systems such as computer networks. To access these systems, staff strongly authenticate themselves by inserting their Common Access Card into the smart card reader of the terminal and keying in a PIN code on its keyboard. In doing so, staff are not changing anything in their ATM user experience. Then, in a totally transparent and automatic manner, staff will use all of the ID credentials that are loaded on the chip of the ID card. Depending on the nature of the applications, these ID credentials can be static passwords, PKI keys and certificates for digital signing and encryption, fingerprint biometrics as well as demographic credentials required, for example, to manage medical benefits and other entitlements. This infrastructure has been rolled out to around 60% of over 900 DMDC locations worldwide and issuance is continuing at an average rate of around 10,000 cards per day. Today, the IDentity Management (IDM) system that has been developed by ActivCard and its partners for the U.S. DoD Common Access Card project has become a ‘commercial off-the-shelf’ solution for enterprise applications. 
Enterprises Seek to Maximise Return on Investment The decision-making process as to which digital identity solution to deploy extends well beyond issues of security. In today’s corporate environment security for security’s sake is no longer sufficient to justify the significant IT investment involved. Corporations are increasingly focused on cost reduction and invest only where there are clearly identified operational efficiencies and a measurable return on the initial investment. IT managers are required to maximise their return on investment in all of the credential systems that they have already deployed. The key here is to deploy a solution that enables multiple digital IDs to be consolidated on a single card, thus saving the enterprise money and enhancing employee productivity. ActivCard’s Corporate Access Card solution suites, the civilian equivalent of the Common Access Card, have proved themselves to be deployable, manageable, robust and very flexible in establishing and managing identity in the corporate networked environment. These smart corporate ID cards enable employees to access their corporate resources as confidently and easily as they access cash at ATM terminals. Global corporations such as Microsoft, Sun Microsystems and Hewlett Packard have already turned to smart card-based corporate ID badges that leverage the military-strength architecture developed for the DoD to manage employees’ digital IDs for access to buildings as well as for access to corporate network resources and applications, both remotely and locally. All three enterprises are deploying digital identity management solutions which enable them to consolidate on a single card the wide diversity of credentials a company has to manage for each individual user. These include a picture ID card, a remote authentication token to access network resources while on the road, PKI certificates to digitally sign emails, potentially also biometric information and any number of static passwords. The cards can also be configured to contain employee health and benefit information, payroll information and even e-cash for purchases. And they offer the flexibility of embedding the ID credentials in a variety of different form factors, including smart cards, tokens and USB keys. EMV and beyond – winning and retaining customers There are already a significant number of promising initiatives underway in the financial sector to offer multiple applications on a bank card, thus extending the comfort of the familiar ATM experience for simple financial transactions. Croatia’s Zagrebacka Banka is among leading European financial institutions which are already offering smart card-based digital identity solutions for secure online consumer and corporate banking. The bank has successfully adopted a smart card-based PKI solution which is already used by over 30,000 corporate banking customers, the largest such deployment in Europe. This has created a standard for other banks to follow. The migration towards the EMV (Europay/MasterCard/Visa) electronic payment standard will offer significant opportunities to financial service providers to deliver additional applications. E-purse, secure EMV-compliant credit/debit transactions, electronic payments and loyalty programmes are already being tested or rolled out to cardholders as part of the financial and retail organisations’ strategy to win and retain customers in the highly competitive financial services marketplace. 
Digital ID applets embedded in the chip on the EMV card will enable these ‘smart financial cards’ to perform strong authentication in addition to the traditional debit/credit services, thus expanding their appeal. The Corporate Access Card solution suites enable governments, enterprises and financial institutions to combine and manage multiple credentials on a single multi-application smart card-based device. The CAC does for the networked environment what the ATM or bank card has been doing for personal finance for at least twenty years – changes the way we do business. Leveraging the military-grade IT architecture developed for the US Department of Defense, the civilian CAC will enable enterprises to enhance overall business confidence. Corporate networks will be secure from the threat of unprotected digital IDs and the flexible nature of the infrastructure will mean enterprises can select just how many applications they want to download onto the card. At the same time they will be able to maximise their return on investment for a true Return on Identity. Corporate Access Cards in action – experiencing identity management Microsoft Corporation: Microsoft has issued smart card-based ID badges to more than 25,000 employees at its Redmond campus. The authorised users will carry the ID smart card for physical access to on-site campus facilities as well as remote access to Microsoft’s corporate network. Microsoft has adopted the ActivCard Identity Management System™ for secure issuance and distribution of smart cards and user credentials, leveraging the built-in authentication and digital certificate management capabilities of Windows .NET Server and Windows XP. Hewlett Packard: Hewlett Packard is deploying ActivCard smart corporate ID solutions to employees worldwide, migrating from its existing single-function token system to a multi-application smart card solution that offers new levels of mobility, security, productivity, and user convenience. HP is using ActivCard for global remote access with dynamic one-time passwords, secure mobile usage of PKI user certificates, secure email, digital signatures, secure Web access, and a single sign-on experience with legacy applications. Sun Microsystems: Through a service provided by ActivCard licensee SchlumbergerSema, Sun Microsystems is issuing new corporate ID badges to all of its employees worldwide. Sun is using ActivCard Digital IDentity software as the underlying platform to consolidate multiple credentials, applications, and budgets into a single cost-efficient system. The new Java Card-based ID badges consolidate a number of current employee credentials and IDs – including picture IDs, building access cards, network login, digital signature, remote access tokens, and static passwords. ActivCard are exhibiting at Infosecurity Europe, Europe’s largest and most important information security event. Now in its 8th year, the show features Europe’s most comprehensive FREE education programme, and over 200 exhibitors at the Grand Hall at Olympia from 29th April – 1st May 2003. www.infosec.co.uk
<urn:uuid:88ad7d7b-3ac0-40bb-a6e7-daa7aa78e7c4>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2003/04/24/corporate-access-cards-securing-corporate-networks-with-military-strength-digital-identity-solutions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00225-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933496
1,914
2.84375
3
A local government uses a centralized customer service system - sometimes called 311 - so residents can call a centralized government phone number, place requests for service and be assigned tracking numbers to monitor their requests. Though a centralized customer service system is valuable for residents, local governments benefit too. Some big cities - Baltimore, Las Vegas, Chicago, New York, Houston and Dallas - have implemented these systems to ease the burden on 911 emergency systems, and they seem to be doing the trick. The International City/County Management Association recently conducted a Local Government Customer Service Systems (311) national survey. Funded by the Alfred P. Sloan Foundation, the survey explored successful 311 implementations and how they're used to respond to citizen needs and strengthen local government-constituent relationships. Of 710 survey respondents, only 104 reported they use a centralized system. But the results show that not only large cities and counties are using them: Thirty-two local governments that use a centralized system have a population under 30,000. Although that number of adopters seems low, twice as many local governments are considering installing a system. For local governments that lack systems, the major concerns were cost and the process of obtaining a 311 designation. But implementation leads to demonstrable savings, such as reduced calls to 911, and improved customer service, information, reporting and management. There are also alternatives to a 311 designation, such as an easy-to-remember, seven-digit number.
<urn:uuid:dc930620-e654-48a0-8d37-c65bc5b02a69>
CC-MAIN-2017-09
http://www.govtech.com/e-government/311-Survey-Customer-Service-Systems-Spread.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00221-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932427
298
2.75
3
As long as criminals have been plotting to circumvent order and established protocols, there have been crime-fighters pushing the envelope to foil their efforts. As criminals hatch sophisticated schemes to bypass detection, law enforcement agencies and other dedicated groups devise original and inventive ways to thwart their advances. Technology and innovation play key roles in crime detection, responding to ever-changing criminal methods. Throughout history, technological advances have changed the way law enforcement personnel and concerned citizens police their neighborhoods. Automobiles, radios, and mobile telephones, for example, each represent game-changing innovations that enhance crime prevention, detection and control. Just as these revolutionary advances shed new light on crime-fighting practices, today's technological innovations open new avenues of detection for modern crime fighters. Using data to identify patterns and potentially predict crimes is a growing law enforcement trend that relies on machine learning to identify predictive patterns. Old-fashioned police work requires manual data analysis, especially in crime series, when individuals or groups commit multiple crimes. But a recent machine learning innovation called Series Finder helps police narrow their searches by growing a crime series projection from a couple of unsolved crimes. The algorithm uses historical crime data to "learn" and construct patterns based on analysis of particular data points like geographical location and modus operandi. Burglaries, for example, are charted according to their occurrence, and Series Finder attempts to return relevant information about future vulnerabilities. While predictive policing is in its early stages, high-profile endorsements from the Justice Department and other agencies attest to its important place in the future of crime detection. Analyzing and interpreting digital information is an increasingly important part of modern criminal proceedings, as data mining and other sophisticated electronic crimes are seen with increasing frequency and severity. Exposing the digital criminal footprint left by fraudsters and other digital criminals requires cutting-edge tactics, which continually strive to keep pace with criminal innovations. Communications and mobile technology, including smartphones and tablets, play a role in most cases, so emails, hard drives, mobile phones and other devices each furnish digital points of reference for prosecutors, law enforcement officials and criminal defense attorneys. Whether it is a result of the stakes being higher today than at other points in history, or simply that modern technology accommodates it, surveillance is a part of daily life for most citizens. Cameras and other monitoring equipment have grown smaller and less obtrusive, and new technologies furnish greater visual clarity than older models. As a result, video surveillance has become so pervasive as to become a social issue, pitting personal liberties against safe and secure societies. Gunshot Detection Systems Technology innovations look for solutions beyond conventional wisdom, so not every tech-inspired crime detection effort is going to pay big dividends. Gunshot alert systems are deployed in many major cities, with mixed results. The sound-sensing devices are placed in high crime areas, where they relay information to police officers in real time – including notifications of gunshots detected within the sensors' range.
While the systems have helped solve crimes, detractors say the devices are not worth their expense, because residents are quick to report gunshots on their own – making the sensors obsolete. To a certain extent, crime detection technology mirrors trends among citizens. Tablets and other mobile devices, for example, dominate information technology on the streets, so it is only natural that law enforcement personnel use mobile technology to detect and solve crimes. Tablets furnish efficient tools, enabling officers to perform tasks in the field that were once performed at the station or in squad cars. As a result, officers spend more time policing and less time hidden away completing paperwork. Mobile devices also furnish access to materials like state crime databases and other investigative resources, streamlining crime detection and enforcement. Technology and innovation are at the heart of effective crime detection, especially in the rapidly changing electronic age. Information technology plays a particularly important role in policing, so law enforcement agencies use state-of-the-art surveillance, digital forensics, and predictive policing to stay one step ahead of criminals. Author: Daphne Holmes contributed this guest post. She is a writer from ArrestRecords.com and you can reach her at [email protected].
<urn:uuid:bd2fca88-b4ed-4ec4-916e-5e96a23e28e8>
CC-MAIN-2017-09
http://www.2-spyware.com/news/post4097.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00449-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927972
846
2.640625
3
Computer fires severe enough to prompt a 911 call are so unusual that when one does happen, local media sometimes makes note of it. That was the case in Arlington, Va., recently, when firefighters found a computer burning on the balcony of an apartment complex. According to the Arlington County Fire Dept., the resident of the apartment had built his own desktop computer. The computer wasn't in use, but was plugged in -- and the resident was in another room when it caught fire. "He was alerted to the fire by the sound of the smoke alarm and then found smoke coming from his hard drive," said department Lt. Sarah-Maria Marchegiani. The resident carried the computer out to the balcony after it caught on fire, according to a local media report on Arlington Now. Thanks to working smoke detectors, the fire was found and extinguished quickly. There were no injuries and property damage was limited, said Marchegiani. "Computer fires are fairly rare for us," said Marchegiani. That may be the case nationally, as well. From 2007 through 2011, local fire departments responded to an average of 350 computer fires per year, according to a study (PDF) by the National Fire Protection Association (NFPA). "It's a fairly small number, but they do happen," said Marty Ahrens, manager of fire analysis services at NFPA. Other components associated with fire department calls included a yearly average of 61 computer printer fires, as well as 46 computer monitor fires per year. Detachable power cords, which can be used by computers as well as other devices, are responsible for an average of just over 300 fires per year, according to NFPA data. This number isn't included in the computer-related total. This story, "Computer fires requiring a 911 call rare," was originally published by Computerworld.
<urn:uuid:9f38a27f-da17-47c0-b618-cf451a41142c>
CC-MAIN-2017-09
http://www.itnews.com/article/2954364/computer-hardware/computer-fires-requiring-a-911-call-rare.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00449-ip-10-171-10-108.ec2.internal.warc.gz
en
0.979207
381
2.796875
3
Codecs like H.264 reduce bandwidth by only sending full frames every so often, mixing them with partial frames that capture only the changes in between the full ones. The full frames are called 'I' (intra-coded) frames, and the partial frames that follow are 'P', or predictive, frames.* Note: if you are not familiar with codecs, please read our Surveillance CODEC Guide before continuing. I Frame Questions Since I frames require much more bandwidth than P frames (frequently 10 or 20x more), some will argue that reducing the rate of I frames will reduce overall bandwidth significantly. For instance, instead of having an I frame each second, reduce it to 1 every 5 seconds. On the other hand, some will argue that reducing I frames can result in quality problems because it can be harder for the processor to continue to faithfully update and represent the image if it has changed significantly since the last I frame. We seek to answer these two questions: - How much bandwidth savings can you achieve by reducing the I frame rate? - How much quality degradation can occur by reducing the I frame rate? The Tests Conducted In order to answer these questions, we used five 720p cameras at various price points and performance levels: - Avigilon H3 1MP - Axis M1114 - Axis Q1604 - Bosch NBN-733V - Dahua HF3101 We aimed these cameras at a toy train set to create consistent motion, and varied I-frame levels from a default of one per second to as high as five per second and as low as one every four seconds. *Some versions of H.264 also support 'B' or bidirectionally predictive frames, but these are less common in surveillance cameras and therefore excluded from this study.
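To make the bandwidth argument concrete, here is a rough back-of-the-envelope sketch (illustrative only, not derived from the test data): it assumes a fixed frame rate and a fixed I-to-P size ratio, both of which vary in practice.

```python
# Rough estimate of how average bitrate changes when the I-frame interval is
# stretched, assuming 30 fps and an I frame ~10x the size of a P frame.

def avg_frame_size(i_frame_interval_s, fps=30, p_size=1.0, i_to_p_ratio=10.0):
    """Average frame size (in units of one P frame) for a given I-frame interval."""
    gop = int(round(i_frame_interval_s * fps))      # frames per group of pictures
    i_size = p_size * i_to_p_ratio
    return (i_size + (gop - 1) * p_size) / gop

base = avg_frame_size(1)      # one I frame per second
longer = avg_frame_size(4)    # one I frame every four seconds
print(f"relative bandwidth: {longer / base:.2f}")  # ~0.83, i.e. roughly 17% savings
```

With these assumed numbers, lengthening the interval from one second to four cuts average bandwidth by well under 20 percent, which is why the quality-versus-savings tradeoff is worth testing rather than assuming.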
<urn:uuid:97004af2-2c05-47fe-9cfe-7d82943e8895>
CC-MAIN-2017-09
https://ipvm.com/reports/test-i-frame-rate
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00621-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936109
366
3.421875
3
Recently, cloud computing has attracted considerable attention. Cloud computing is becoming one of the most important computing and service paradigms. Cloud computing employs a group of interconnected computers which are dynamically provisioned and serve as one or more unified computing resources. Customers are able to access applications and data from a cloud at any place and at any time. Cloud computing appears to be a single point of access for all the computing needs of users. Cloud computing technology has already matured considerably and continues to develop. Cloud service providers are actively seeking to develop more robust cloud computing platforms for consumers and enterprises to facilitate on-demand access regardless of time and location. Some of the available cloud-based technologies provide virtualized computing environments which host different kinds of Linux-based services. Others provide centralized storage for applications and data, so users can access all their information through a web-based live desktop. The Internet Data Center (IDC) is a common way to host cloud computing. An IDC usually deploys hundreds or thousands of blade servers, densely packed to maximize space utilization. Running services on consolidated servers in IDCs provides customers with an alternative to running their software or operating their computer services in house. The major benefits of IDCs include the use of economies of scale to amortize the cost of ownership and the cost of system maintenance over a large number of machines. With the rapid growth of IDCs in both quantity and scale, the energy consumed by IDCs, directly related to the number of hosted servers and their workload, has enormously increased over the past ten years. The annual worldwide capital expenditure on enterprise power consumption has exceeded billions of dollars, and sometimes has even surpassed spending on new server hardware. The rated power consumption of servers has increased by ten times over the past ten years. The power consumption of data centers has a huge impact on the environment. This surging demand has created an urgent need for the design and deployment of energy-efficient Internet data centers. Information scientists are constantly trying to find better solutions to reduce the power consumption of data centers. Many efforts have been made to improve the energy efficiency of IDCs, including network power management, Chip Multiprocessing (CMP) energy efficiency, IDC power capping, storage power management solutions, etc. Among all these approaches, Virtual Machine (VM) technology has emerged as a focus of research and deployment. Virtual Machine (VM) technology (such as Xen, VMware, Microsoft Virtual Server, and the new Microsoft Hyper-V technology) enables multiple OS environments to coexist on the same physical computer, in strong isolation from each other. VMs share the underlying hardware in a secure manner with excellent resource management capacity, while each VM hosts its own operating system and applications. Hence, VM platforms facilitate server consolidation and co-located hosting facilities. Virtual machine migration, which is used to transfer a VM across physical computers, has served as a main approach to achieving better energy efficiency in IDCs, because server consolidation via VM migration allows more computers to be turned off. Generally, there are two varieties: regular migration and live migration.
The former moves a VM from one host to another by pausing the VM on the source host, copying its memory contents, and then resuming it on the destination. The latter performs the same logical functionality but without the need to pause the server domain for the transition. In general, when performing live migrations the domain continues its usual activities, and from the user's perspective the migration is imperceptible. Using VM and VM migration technology helps to efficiently manage workload consolidation, and therefore improves total IDC power efficiency. For cloud computing platforms, both power consumption and application performance are important concerns. The Green Cloud architecture is used by the cloud computing industry as an effective method to reduce server power consumption while achieving the required performance using VM technologies. Reliability, flexibility, and ease of management are the essential features of Virtual Machine (VM) technology. Due to these features, VM technology has been widely applied in data center environments. Green Cloud is an IDC architecture which aims to reduce data center power consumption while at the same time guaranteeing performance from the users' perspective, leveraging live virtual machine migration technology. A Green Cloud automatically makes scheduling decisions about dynamically migrating and consolidating VMs among physical servers to meet workload requirements while saving energy, especially for performance-sensitive (such as response time-sensitive) applications. The Green Cloud architecture guarantees real-time performance requirements while reducing the total energy consumption of the IDC. In the design of the Green Cloud architecture, several key issues are taken into consideration, including when to trigger VM migration and how to select alternative physical machines to achieve optimal VM placement. Green Cloud intelligently schedules workload migration to reduce unnecessary power consumption in the IDC. Green Cloud balances performance and power in such a way that users hardly notice that their server workloads are being or have been migrated. The technology discussed above reduces the harmful impact on our planet, keeps our environment green, and at the same time saves a lot of capital expenditure for a company that provides cloud computing services to various businesses. These benefits are ultimately passed on to customers using cloud computing services.
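As a rough illustration of the consolidation idea (a toy sketch with assumed numbers, not the actual Green Cloud scheduler), packing VM loads onto as few hosts as possible is what lets the remaining hosts be powered down:

```python
# Toy first-fit-decreasing packing of VM CPU loads onto hosts, leaving headroom
# for load spikes; any host that receives no VMs can be switched off.

def consolidate(vm_loads, host_capacity=1.0, cap=0.8):
    """Return a list of hosts, each a list of VM loads, packed greedily."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity * cap:   # keep utilization under the cap
                host.append(load)
                break
        else:
            hosts.append([load])    # no existing host fits; power on another
    return hosts

vms = [0.35, 0.10, 0.25, 0.40, 0.15, 0.20]
placement = consolidate(vms)
print(len(placement), "hosts needed:", placement)   # 2 hosts for these example loads
```

A real scheduler must also decide when migration is worthwhile (the cost of copying memory and the risk of performance loss) rather than packing purely by CPU load, which is exactly the tradeoff the Green Cloud work addresses.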
<urn:uuid:fa500a75-3991-4540-b32b-6dfde0ee241a>
CC-MAIN-2017-09
http://www.myrealdata.com/blog/142_cloud-computing-is-green
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00090-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937594
1,061
3.1875
3
At a recent dinner, I sat next to a retired physicist from Raytheon whose work focused on the development and application of microwave technology. Microwave use goes far beyond household ovens, extending to radar and the creation of industrial diamonds. The scientist had taken it upon himself to preserve an entire room's worth of books and written documents that describe the history of microwave technology. He was concerned about what would happen to this material after his death. While his daughter kindly offered to preserve the material, it's not a long-term solution. That shocked me into thinking about all the technology developed in the 20th century. What's happening to the documentation that describes the evolution of scientific and technological developments across innumerable fields? Pockets of materials are probably being preserved in various industries and university research facilities, but as far as I know, there is no concerted effort to store a broad base of technology documentation. I see two compelling reasons for a technology-focused preservation. First, we already take steps to protect other kinds of history, including cultural artifacts such as music, art, literature, film and photographs. Political history (wars, elections, laws and so on) is similarly well covered. Yet culture and politics are directly influenced by technology, so a lack of technological history would give us a poor and incomplete understanding of the 20th century. Second, preserving information in a non-digital format is a hedge against digital disaster. Besides the historical research, there is another reason for preserving physical material (in addition to searchable digital documentation): the ability to recreate a technology if our digital information goes away. That may seem apocalyptic and hopefully the probability is minute, but we really don't know, as our digital, electrical and electronic age is very young in historic terms. What would happen if the solar geomagnetic storm of 1859 (called the Carrington Event), which was much stronger than anything seen since, were to occur again? The predictions for the impact of such an event are dire. You probably shouldn't lose any sleep over it, but, still, an ounce of preservation is worth a pound of cure. How to address the situation? Although this may be simplistic, there seem to be only three steps that need to be taken: 1. Sponsorship: If organizations can adopt miles of highway, why can't they adopt technologies? The sponsor could be a company with a close connection to a technology, an association that deals with the technology or some other organization. The cost could be relatively low, such as a room in a business or a university or sponsorship in an underground former limestone mine in Pennsylvania with some kind of archivist support (which could be shared among multiple technologies). 2. Abstract: Although a process of digitization might take place, all that really needs to be done is an abstract and indexing scheme. The abstract says what is available and elicits interest whereas the index tells where to find it. 3. Clearinghouse: Some organization could maintain the abstract, indices and location of all the saved information. Since this is an electronic reference set, the individual technology-preserving organizations could actually contribute the information themselves. Some Web company might offer to provide this capability as a public service, since the cost could be very low. The real challenge is how to get started.
Those of you with a passion for such technological preservation could create a social media group, such as on LinkedIn, and go from there. Tempus fugit. We only have one chance to do the right thing. Hopefully I have given one or more readers the germ of an idea that will grow into a sustained, collaborative effort to preserve our 20th century technology patrimony. Do you think such an effort is necessary? If so, is it even feasible? I welcome your comments.
<urn:uuid:8bab88b5-e8c5-4d00-961d-1287dd865678>
CC-MAIN-2017-09
http://www.networkcomputing.com/storage/preserve-our-technological-history-proposal/1028523289?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00318-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964951
768
2.71875
3
The EU cyber security Agency ENISA is launching an in-depth study on 30 different digital traps or honeypots that can be used by CERTs to proactively detect cyber attacks. The study reveals barriers to understanding basic honeypot concepts and presents recommendations on which honeypot to use. An increasing number of complex cyber attacks demand better early warning detection capabilities for CERTs. Honeypots are, simplified, traps with the sole task of luring in attackers by mimicking a real computing resource (e.g. a service, application, system, or data). Any entity connecting to a honeypot is deemed suspicious, and all activity is monitored to detect malicious activity. This new study presents practical deployment strategies and critical issues for CERTs. In total, 30 honeypots of different categories were tested and evaluated. Goal: to offer insight into which open source solutions and honeypot technology are best for deployment and usage. Since there is no silver bullet solution, this new study has identified some shortcomings and deployment barriers for honeypots: the difficulty of usage, poor documentation, lack of software stability and developer support, little standardisation, and a requirement for highly skilled people, as well as problems in understanding basic honeypot concepts. The study also presents a classification and explores the future of honeypots. The Executive Director of ENISA Professor Udo Helmbrecht commented: “Honeypots offer a powerful tool for CERTs to gather threat intelligence without any impact on the production infrastructure. Correctly deployed, honeypots offer considerable benefits for CERTs; malicious activity in a CERT’s constituency can be tracked to provide early warning of malware infections, new exploits, vulnerabilities and malware behaviour, as well as give an opportunity to learn about attacker tactics. Therefore, if the CERTs in Europe recognise honeypots better as a tasty option, they could better defend their constituencies’ assets.” For an interview with Professor Udo Helmbrecht check out the September issue of (IN)SECURE Magazine.
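As a concrete illustration of the basic concept (this sketch is not drawn from the ENISA study; the port number, banner string and log file name are arbitrary choices), a low-interaction honeypot can be little more than a listener on an otherwise unused port that records every connection attempt:

import socket
import datetime

PORT = 2222  # arbitrary, otherwise-unused port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn, open("honeypot.log", "a") as log:
            # Any entity connecting here is deemed suspicious; record it.
            log.write("%s connection from %s:%d\n"
                      % (datetime.datetime.utcnow().isoformat(), addr[0], addr[1]))
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # mimic a real service banner

The honeypots evaluated in the study go much further, emulating protocol behaviour and capturing everything an attacker sends, but even a sketch like this shows why deployment is attractive: legitimate production traffic never touches it, so it generates essentially no false positives.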
<urn:uuid:d650d95a-49ab-47c1-880c-d44776121c0d>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2012/11/22/understanding-basic-honeypot-concepts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00018-ip-10-171-10-108.ec2.internal.warc.gz
en
0.914595
414
2.5625
3
Textbooks on software engineering prescribe checking preconditions at the beginning of a function. This is a really good idea: the sooner we detect that the input data or environment does not match our expectations, the easier it is to trace and debug the application. A nice function with precondition checking refuses to "work" if the preconditions are not satisfied. The next question is: how exactly should our function refuse to work when it detects an unsatisfied precondition? I see the following possible answers to this question (sorted from the least invasive to the most destructive):
- silently repair and continue
- return an error code
- throw an exception
- stop or abort the application

The least invasive approach, to repair and silently continue, is a bad idea. An application consisting of many 'intelligent' functions doing something despite erroneous input would be extremely difficult to debug and use. Such an application would always return an answer, but we would never know if this answer is correct at all.

The second approach, to return an error code, requires a lot of manual work. Not only do we have to establish different error codes for different situations, we also have to generate them, and the caller must not forget to check them. As common experience shows, we do forget to check error conditions…

Exceptions are much better since once an exception is thrown it will be propagated to the callers until someone catches it. So the programmer's burden of checking the error codes disappears. Or does it?… In fact it gets replaced by the burden of specifying exception handlers at the right places and by the burden of remembering that almost any line of the program can be interrupted by an exception. If we want to make our program not only 'exception generating' but also 'exception safe', then we have to consider many possible execution paths – with and without exceptions. This turns out to be quite a feat in itself. If you want more gory details, consider Exceptional C++ and its follow-ups. This book contains vital information about programming with exceptions.

The last choice is the easiest one. If the preconditions are not satisfied, simply abort the application. This is a no-brainer – no error codes, no exceptions, just pay the price of killing the application (if the application is a quick & dirty Perl script, then the tradition is to tell it literally to die…) Alas, this is acceptable only in a limited number of situations. If we encounter a fatal condition and the application cannot meaningfully continue, then ok, there is nothing to lose, dump it. For example: a compiler which cannot find the input file, or a mail client which cannot find the account settings. The best thing they can do is to stop immediately. But in all other cases, you should not use this approach and kill the application.

For example, you cannot use this approach in IDA plugins. Imagine a plugin which works with MS Windows PE files. It is natural for such a plugin to check the input file type at initialization time. This is the wrong way of doing it:

if ( inf.filetype != f_PE )
    error("Sorry, only MS Windows PE are supported");

This is bad because as soon as we try to disassemble a file different from PE, our plugin will interfere and abort the whole application, i.e. IDA. This is quite embarrassing, especially for unsuspecting users of the plugin who never saw the source code of the plugin.
The right way of refusing to work is:

if ( inf.filetype != f_PE )
    return PLUGIN_SKIP;

If the input file is not what we expect, we return an error code. IDA will stop and unload the current plugin. The rest of the application will survive. Do not let your software be capricious without a reason 🙂
<urn:uuid:11197c7a-5b60-4b18-afc9-50822310d7d2>
CC-MAIN-2017-09
http://www.hexblog.com/?p=30
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00314-ip-10-171-10-108.ec2.internal.warc.gz
en
0.911703
800
2.640625
3
Last week Japanese IT company Fujitsu used the K supercomputer to conduct the world’s first simulation of the magnetization-reversal process in a permanent magnet. According to an announcement from the firm, “this opens up new possibilities in the manufacture of electric motors, generators and other devices without relying on heavy rare earth elements.” The process of magnetization reversal is a worthy avenue of scientific study, but to accurately model magnetic materials requires an enormous amount of computing power. The technique that Fujitsu developed combines a finite-element method with micromagnetics, the process of dividing magnets into regions the size of a few atoms. This technology makes it possible to compute magnetization processes with complex microstructures on a nanometer scale, many times smaller than conventional methods. The research is viewed as a stepping stone toward the development of new magnetic materials, including strong magnets free from heavy rare earth elements. This is important because the supply for these elements is limited. State-of-the-art motors like the ones used in hybrid and electric vehicles rely on these heavy rare earth elements, so the advent of new super magnets would be a boon to this growing sector. The simulations of magnetization reversal in rare-earth magnets were performed on the K supercomputer in cooperation with Japan’s National Institute for Materials Science (NIMS). On September 5, the results of this simulation were presented jointly by Fujitsu and NIMS at the 37th Annual Conference on Magnetics in Japan being held at Hokkaido University. Developed by Fujitsu, the K supercomputer is installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan. It is the number four system on the most recent TOP500 list with a performance of 10.51 petaflops (Linpack). The next step is for researchers to perform ultra-large-scale computations on the K system and develop a “multi-scale magnetic simulator.”
<urn:uuid:4725256d-1e86-4563-9616-00b62219b76c>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/09/09/supercomputing_for_super_magnets/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00314-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927167
409
3.328125
3
The US Government's decision to adopt the Advanced Encryption Standard (AES) for securing sensitive information will trigger a move from the current, ageing Data Encryption Standard (DES) in the private sector, according to users and analysts. But it will not happen overnight. Technology standards bodies representing industries such as financial services and banking need to approve AES as well, and that will take time. Products such as wireless devices and virtual private networks that incorporate AES have also yet to be developed. Companies using Triple DES technologies, which offer much stronger forms of encryption than DES, will have to wait until low-cost AES implementations become available before a migration to the new standard makes sense from a price perspective. "AES will likely not replace more than 30% of DES operations before 2004," said John Pescatore, an analyst at Gartner.

US Secretary of Commerce Don Evans announced the approval of AES as the new Federal Information Processing Standard on 4 December. The formal approval makes it compulsory for all US Government agencies to use AES for encrypting information from 26 May. AES is a 128-bit encryption algorithm based on a mathematical formula called Rijndael (pronounced "rhine doll") that was developed by cryptographers Joan Daemen at Proton World International and Vincent Rijmen at Katholieke Universiteit Leuven, both in Belgium. Experts claim that the algorithm is small and fast, and that it would take 149 trillion years to crack a single 128-bit AES key using today's computers.

AES offers a more secure standard than the 56-bit DES algorithm, which was developed in the 1970s and has already been cracked. AES is considered even better than Triple DES, which is compatible with DES but uses a 112-bit encryption algorithm that is considered unbreakable using today's techniques. In software, AES runs about six times as fast as Triple DES and is less chip-intensive.

The advantages of AES make it inevitable that private companies will start using it for encryption, said Paul Lamb, chief technology officer at Oil-Law Records, which provides regulatory and legal information to oil and gas companies. "[Companies will adopt AES] because of the perceived problems with DES and the greater sense of security with AES," he added. "I would expect the adoption curve to be pretty steep," said Steve Lindstrom, an analyst at Hurwitz Group. Any concerns companies had about AES not being widely adopted have been put to rest with the Government's decision, he added.
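To make the key-size comparison concrete, here is a minimal sketch of encrypting and decrypting data with a 128-bit AES key. It assumes the third-party Python cryptography package is installed (the article does not reference any particular library, so treat the choice purely as an illustration):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)    # 16 random bytes = the 128-bit key size discussed above
nonce = os.urandom(16)  # per-message value required by CTR mode

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"sensitive record") + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"sensitive record"

Brute-forcing that 16-byte key means searching a space of 2 to the 128th power, which is what lies behind the "149 trillion years" estimate quoted above; DES, by contrast, offers only a 56-bit key space.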
<urn:uuid:1b79e8cd-a505-4840-9181-2c7d61c4627b>
CC-MAIN-2017-09
http://www.computerweekly.com/news/2240043602/US-companies-to-embrace-encryption-standard
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00486-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960614
541
2.546875
3
STMicroelectronics hopes to make blurry low-light images from smartphone cameras a thing of the past with a new chip designed to boost light output from LED-based flashes. The ideal camera flash delivers a lot of light in a short time, freezing action and illuminating more distant objects. Professional cameras use a xenon strobe light to produce a brief burst -- or flash -- of light, but the lighting on smartphone cameras is typically provided by an LED. The light output of a battery-powered LED is continuous, and much lower in intensity than a flash, leading to longer exposure times and darker, blurrier pictures.

But ST hopes to change that with its new STCF04 multifunction chip, which it says can control flash power up to 40W, compared to perhaps 2W for typical LED flashes today. The key is the chip's ability to control the charging and discharging of a supercapacitor, which it uses to gradually store energy from the phone's battery and then deliver it to the LED in a short burst. That's similar to the way that xenon strobes work, but the LED-supercap combination does the job with just a few volts, making it much safer -- and the components more compact -- than the hundreds or thousands of volts needed to drive a xenon strobe.

ST's chip, just 3 millimeters square, will add $2 or less to the cost of a cell phone, it said. It contains a charger to store energy in a supercap, and a driver for an external transistor used to deliver 40 watts or more of peak power from the supercapacitor to a bank of LEDs. It also contains a temperature sensor to detect when the LEDs are in danger of overheating -- useful if they are being used as a torch rather than a flash; a light sensor for setting the exposure and flash intensity; and a driver for an auxiliary LED used either as a privacy indicator (you're on camera) or perhaps to help autofocus systems in low light. The chip's built-in timer can be used to set the flash duration in steps of around one-100,000th of a second, although it takes around one-3,000th of a second for the LED to reach full power, according to ST. The controller can also discharge the supercap in stages to produce several flash pulses in a row, useful for red-eye reduction.

Sample quantities of the chip are available now, and ST said it will begin full production by the end of March. The company has published a datasheet for the STCF04 giving full details of the chip and sample circuits.

Peter Sayer covers open source software, European intellectual property legislation and general technology breaking news for IDG News Service. Send comments and news tips to Peter at [email protected].
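Returning to the flash chip itself, a rough back-of-the-envelope calculation shows why a supercapacitor makes the burst possible. The 40 W peak figure comes from the article; the pulse length, capacitance and voltage below are assumed values chosen only to illustrate the orders of magnitude involved:

peak_power_w = 40.0   # peak LED flash power quoted in the article
pulse_s = 0.010       # assumed 10 ms flash pulse
energy_needed_j = peak_power_w * pulse_s            # E = P * t -> 0.4 J per flash

capacitance_f = 0.5   # assumed supercapacitor capacitance, in farads
voltage_v = 5.0       # assumed charge voltage, "just a few volts"
energy_stored_j = 0.5 * capacitance_f * voltage_v ** 2   # E = C * V^2 / 2 -> 6.25 J

print(energy_needed_j, energy_stored_j)

Under these assumptions the battery only has to trickle a fraction of a joule into the supercapacitor between shots, and the chip then releases it in one short, high-power burst, which is the xenon-strobe-like behaviour the article describes, achieved at only a few volts.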
<urn:uuid:4743a370-4b67-45ca-941a-f26e0dd67167>
CC-MAIN-2017-09
http://www.computerworld.com/article/2500794/computer-hardware/stmicro-sees-40w-led-flash-in-future-smartphones.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00482-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936321
584
2.796875
3
Containerizing Flowroute Apps in Node.js The evolution of virtualization and cloud platforms combined with mature adoption of agile methodologies and DevOps has impacted the way organizations build and ship software. The traditional waterfall-style approach that ended with developers throwing code “over the wall” to the operations team has been replaced with automated testing, continuous integration, and continual deployment. What was once considered only possible for “unicorns” (see http://bit.ly/unicornwat) is now within reach of any development shop willing to leverage open source tools. One powerful concept that transformed development and is now changing the way large companies approach production software is called Docker, or generically, “containers.” A Question of Layers To better understand containers and where they fit, it is important to have a basic knowledge of the various “as-a-Service” or -aaS layers that exist today. Although the list continues to grow, three service layers have fundamentally transformed cloud technology. IaaS is probably the most well-known service layer because it is the easiest to implement (but also tends to be the most expensive). It refers to managing infrastructure as a service rather than relying on physical hardware within the organization. For example, virtual networks and virtual machines are both examples of IaaS components. Through on-premises hypervisors like VMWare and in-cloud providers like Amazon AWS and Azure, technology professionals are able to create resources like web and application servers “on the fly.” IaaS removes the headache of capital expenditure and having to acquire, configure, and deploy physical resources by turning the provisioning of assets into an operational expenditure instead. The negative aspect of IaaS is that the owner of a virtual machine is still responsible for configuring the operating system, installing software dependencies, and patching security flaws. The relative effort of managing the environment and aspects of the virtual machine is very high compared to the business value the machine provides through the functionality it hosts. In terms of deploying software, there is risk that the application may get installed to a machine that is missing a key dependency and can cause “stable production software” to fail. PaaS is an approach that Microsoft bet on early with Azure and unfortunately experienced slow adoption. In PaaS, a platform itself (such as a web server or application service) is provided as a service and the provider deals with the nuances of the underlying operation system host. In other words, the consumer of PaaS can focus on the application itself rather than worry about maintaining the environment. This eliminates the need to worry about dependencies because the platform is already managed and any additional dependencies the software may have are shipped as part of the deployment process. PaaS is typically much lower cost to operate compared to IaaS because fees are based on usage of the application itself and not on the underlying operating system or storage. In other words, you don’t pay for an empty virtual machine that is not doing anything and only for an active application that is being used. Perhaps one reason PaaS did not experience rapid adoption despite the lower cost is the fact that the majority of PaaS offerings are tightly coupled to the host platform. In other words, to leverage PaaS on Azure means building .NET websites leveraging technology like ASP.NET and MVC. 
Traditionally, alternative solutions like Node.js were not compatible with the service. Customers wary of “platform lock-in” or with incompatible codebases were therefore unlikely to participate. Software as a service is a model that is more focused on the consumer than the producer. The key to IaaS and PaaS is how an organization is able to deploy and publish assets to their environment. SaaS is more involved with how services can be consumed. Flowroute® is the perfect example of the SaaS model. As a consumer, you do not have to worry about how the Flowroute APIs are hosted, what language they were written in, or how they were made available over the Internet. Instead, you are provided with a well-defined set of services exposed as API endpoints that you can leverage directly to manage your telephony. Figure 1 illustrates several of these services in terms of what you “own” and maintain. You may notice two unique stacks: the “microservice” (a further refined version of SaaS), and the container. What is a container? Figure 1: Service Stacks A container can be considered a “standard unit for deployment.” It represents a function that includes the code you write, the runtime for the code, related tools and libraries, and the filesystem. In other words, a container encapsulates everything needed to run a piece of software. What makes containers so powerful is that a container will always run exactly the same regardless of its environment. This gives it the flexibility of PaaS without an affinity to a particular provider. The benefits of containers include: Consistency – everything is encapsulated in a single unit, so you don’t run into issues with missing dependencies when you deploy code Size – containers are typically only megabytes in size (compared to gigabytes of storage needed for typical virtual machine images) so they can be stored and managed in a repository and shipped across the network Cross-platform – containers can run on all of the major platforms and across different cloud providers (the same container will run equally well on Azure and Amazon AWS Container Services) Resiliency – because containers share a kernel with the host that is already running, they can spin up quickly and are therefore ideal for recovery (i.e. a new container can be quickly spun up when an existing container crashes) Elasticity – containers can run as clusters and quickly “fan out” to accommodate increased requests and then “scale down” during idle periods – this can result in significant cost savings The leading container host is Docker and current container standards are based on Docker images. You can find an installation of Docker for your platform online at https://www.docker.com/. Docker deals with two primary concepts: an image that defines the components needed for a container, and a container which is a running instance. When you run multiple containers they are always based on a single source image. For developers, think of images relating to class definitions and containers as runtime instances of the class. Consuming containers is very straightforward. Flowroute provides several containers, including: Phone Number Masking SMS Proxy: https://developer.flowroute.com/docs/sms-proxy Two-factor Authentication: https://developer.flowroute.com/docs/sms-identify-authorization Appointment Reminder: https://developer.flowroute.com/docs/appointment-reminder To get started, however, you can begin with a simple “Hello, World” example. 
Docker hosts a registry of trusted images online at https://hub.docker.com/. There you can search for images that do everything from providing output to running complex web servers and hosting language services. Type "hello world" in the search box on the Docker hub and you should get several results as shown in Figure 2.

Figure 2: Docker Hub

The first image has over ten million pulls, which means the image was loaded to a local Docker instance over ten million times. After you click on the name of the image, it provides instructions on how to use it. The easiest way is simply to type in a command line:

docker run hello-world

This will search for a local copy of the image. If it doesn't find it, the image is pulled from the repository and then a running container is created from the image. You could have pre-loaded the image as well with the command:

docker pull hello-world

If everything goes well, you should see a message that begins with, "Hello from Docker!" In this case, the container ran and immediately exited. If you list running containers with docker ps, you shouldn't see any. However, if you add the -a switch for "all, including stopped containers," you'll see a container id for the hello-world image. Figure 3 shows what it looks like for me:

Figure 3: Docker PS

There are two ways to "clean up" these containers. The first and most specific way is to use the remove command with the container id, like this:

docker rm e433f0898211

A more comprehensive way to clean up is to use the new "prune" command:

docker system prune

To remove an image, you simply use the "remove image" command with the image name, like this:

docker rmi hello-world

Now that you are familiar with pulling and running Docker images, you are ready to learn how to build one of your own.

Build Your First Container

Let's take the command line tool described in the blog post "Command Line Telephony with Flowroute" (https://blog.flowroute.com/2017/01/04/command-line-telephony-with-flowroute/) and containerize it. If you haven't already built the example, get it up and running with these steps:

1. From your root folder, grab the Flowroute numbers SDK:

git clone https://github.com/flowroute/flowroute-numbers-node.js.git

2. Make a directory for your new project (the later steps assume it is named flowroute-cli):

mkdir flowroute-cli

3. Copy the library for the SDK into your project:

cp -r flowroute-numbers-nodejs/flowroutenumberslib/lib flowroute-cli/lib

4. Inside the directory you created, create two files, fr.js and package.json (make sure flowroute-cli is your working directory).

5. Populate the json file with the contents of this gist: https://gist.github.com/JeremyLikness/4167d97dda7bceb20c7cb017027075f3

6. Populate the js file with the contents of this gist: https://gist.github.com/JeremyLikness/0b384d63dcb5d404dc3866e30ccdd2b7

7. Install the dependencies:

npm install

Finally, test that the program works by running this command (retrieve your access key and secret key from the Flowroute developer site here: https://manage.flowroute.com/accounts/preferences/api/):

node fr.js -u <access_key> -p <secret_key> listNPAs

If you did everything correctly, you should see a list of area codes.

To create the container, Docker uses a special file named Dockerfile. You can read more about this file online here: https://docs.docker.com/engine/reference/builder/. The file contains a list of commands that explain how to build the image. Most images are layered, or built on top of other images. Each time you execute a command in the file, a new image is built based on the previous, until all commands have been run. The final image is the one you work with.
For this image, a special Node.js image can be used as the starting point to dynamically build a target image based on the project code. Create the file (no extension) and populate it with these commands:

FROM node:6-onbuild
ENTRYPOINT ["node", "fr.js"]

That's it! The FROM line leverages a Node.js 6.x image with a special "on build" tag to build the project, install dependencies in the container, then package it up. The entry point tells it that when the container is run, it should call node to launch the command line interface program and then pass in any arguments. Save the file, then build the image like this:

docker build -t flowroute-cli .

The build command instructs Docker to read the configuration file and create an image. The -t switch "tags" the image with the name flowroute-cli. The last argument is the context, or the directory for Docker to work from, which is passed in as the current working directory.

After the image builds you can run a Flowroute command in the container. To convince yourself it's not simply running the local program, change to a different directory. Now run the following:

docker run -i flowroute-cli -u <access_key> -p <secret_key> listNPAs

You should see a list of area codes. You now have a fully encapsulated version of the command line interface that you can run on any environment with a Docker host! No Node.js installation is required because all of the dependencies are contained in the image. You can see the image by running docker images. You should see an image around 660 megabytes in size. You will also see some other images, such as a "node" image, that were used as base layers to build your custom image. If you run the Docker "ps" command with the "-a" switch you'll see some discarded containers used to execute your commands. You can either remove them individually or use the "system prune" command to clean up.

There is much more you can do with containers. For example, you can publish images to a repository and pull them down to share with other developers. Docker compose allows you to orchestrate multiple containers and Docker swarm enables you to manage clusters of containers. In recent years, containers have increased exponentially in adoption for development workflows. More recently, as part of DevOps, organizations are leveraging containers in production using services like Kubernetes, Azure Container Services, and Amazon EC2 Container Service. Mature organizations no longer ship "production source code" but instead move production images through staging, QA and into production. You now have a practical, hands-on jumpstart to leveraging containers yourself!
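As a next step beyond what the walkthrough shows, publishing the image so others can pull it typically looks like the following with the standard Docker CLI; <registry-account> is a placeholder for your own Docker Hub (or private registry) account name, and the 1.0 tag is an arbitrary version label:

docker tag flowroute-cli <registry-account>/flowroute-cli:1.0
docker login
docker push <registry-account>/flowroute-cli:1.0

Anyone with access to that repository can then docker pull the tagged image and run it exactly as above, which is what makes moving the same image through staging, QA and production practical.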
<urn:uuid:f04d612e-a6de-4abc-9f9b-b3a8536acc50>
CC-MAIN-2017-09
https://blog.flowroute.com/2017/02/10/containerizing-flowroute-apps-in-node-js/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00358-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917118
2,936
2.65625
3
Research paper on the topic of Food Handler knowledge and Foodborne Illness

English 1A: 1:00 TR
18 October 2016

"Hi, my name is Noro, I'll be your server tonight."

Food – it's essential to life and a centerpiece of every culture. Whether it's a birthday, a religious celebration, a wedding, or a myriad of other reasons to celebrate, across the globe, life is celebrated with food. These common celebrations are mostly enjoyed en masse at restaurants where people expect professional food service and a clean environment. According to the 2014 published report of the Centers for Disease Control and Prevention, "864 foodborne disease outbreaks were reported, resulting in 13,246 illnesses, 712 hospitalizations, 21 deaths, and 21 food recalls" (CDC). What many people don't know is that they can become sick from eating contaminated food in restaurants, and it could be because food servers don't know how to handle food properly. While Kern County has a fairly robust and attentive Department of Public Health, the community can do more to support the constant vigil against food contamination in restaurants, such as signing a petition to enact legislation that will require food service workers to be licensed prior to beginning work in a food service establishment and to display that license on their person while working, utilizing social media in an effectual manner to inform people of the actual conditions of food service establishments, and utilizing mobile apps such as "Safe Diner" to inform public health officials, and the public, of discrepancies that occur between inspections.

Foodborne illness is a nationwide epidemic that is tracked by the Centers for Disease Control and Prevention, more commonly known as the CDC, in Atlanta, Georgia. The CDC publishes its findings in an annual report entitled "Surveillance for Foodborne Disease Outbreaks United States." The title is somewhat ironic because the organization is about 18 months behind in publishing the report. This irony tends to underscore another reason why the prevalence of foodborne illness is not widely known. The symptoms of foodborne illness may not be recognized by healthcare practitioners because they are common to many other ailments, and often, people afflicted with foodborne illnesses do not seek medical treatment because they do not realize that what they ate has made them sick. Foodborne illness can come from quite a number of sources; "etiologies were grouped as bacterial, chemical and toxin, parasitic, or viral" (CDC). The findings listed in the report for 2015 contain the following:

A single etiologic agent was confirmed in 462 (53%) outbreaks (Table 1) which resulted in 8,810 (67%) illnesses. Bacteria caused the most outbreaks (247 outbreaks, 53%), followed by viruses (161, 35%), chemicals (46, 10%), and parasites (7, 2%). Norovirus was the most common cause of confirmed, single-etiology outbreaks, accounting for 157 (34%) outbreaks and 3,835 (43%) illnesses. Salmonella was next, accounting for 140 (30%) outbreaks and 2,395 (27%) illnesses.
Among the 131 confirmed Salmonella outbreaks with a serotype reported, Enteritidis was the most common (40 outbreaks, 31%), followed by Typhimurium (15, 11%), I 4,,12:i:- (6, 5%), Javiana (6, 5%), and Newport (6, 5%). Shiga toxin-producing Escherichia coli (STEC) caused 23 confirmed, single-etiology outbreaks, of which 12 (52%) were caused by serogroup O157, 3 (13%) by O111, 3 (13%) by O26, 2 (9%) by O121, 1 (4%) by O103, 1 (4%) by O145, and 1 (4%) by O186. (CDC) According to the organization Stop Foodborne Illness, “Salmonella bacteria are the most frequently reported cause of foodborne illness. Salmonella is a gram-negative, rod-shaped bacillus that can cause diarrheal illness in humans. They are passed from the feces of people or animals to other people or other animals” (STOP Foodborne Illness). The symptoms of Salmonellosis resemble those of the flu and many people do not seek treatment for it as they generally recover on their own if they have a healthy immune system. “Norovirus is a very contagious virus that can infect anyone. A person can contract norovirus from contaminated food or water, an infected person, or by touching contaminated surfaces” (STOP Foodborne Illness). Norovirus typically causes vomiting and diarrhea due to its attack on the stomach and intestines. Although food workers are prohibited from working when they have vomiting or diarrhea, Lynne Shallcross, a reporter for The Salt, a publication of Valley Public Radio, writes, “Fifty-one percent of food workers – who do everything from grow and process food to cook and serve it – said they ‘always’ or ‘frequently’ go to work when they’re sick, according to results of a survey…” (Shallcross). Most people who are employed as food handlers cannot afford to miss hours due to sickness, so they continue to work even though they may be contagious. This is a particular problem because “fruits and vegetables, and in particular, leafy greens that are consumed raw, are increasingly being recognized as important vehicles for transmission of human pathogens that were traditionally associated with foods of animal origin” (Berger). In the past, food safety training programs were concerned mostly with the spread of microbes connected with raw meats, dairy products, and eggs, but new information revealed in current studies indicate that crop foods such as lettuce, peppers, and cilantro, to name just a few, are being contaminated with disease causing bacteria and viruses. Safe food is a concern of everyone alive because we cannot survive without sustenance. Every government in developed countries throughout the world concerns itself with safeguarding its national food supply and foodservice establishments. The Obama administration signed the “Foodservice Modernization Act” (FSMA) into law on January 4th, 2011. Margaret Hamburg, M.D. is the Commissioner of Food and Drugs and she writes, The idea of prevention is not new. FDA has established prevention oriented standards and rules for seafood, juice, and eggs, as has the U.S. Department of Agriculture for meat and poultry, and many in the food industry have pioneered ‘best practices’ for prevention. What’s new is the recognition that, for all the strengths of the American food system, a breakdown at any point on the farm-to-table spectrum can cause catastrophic harm to the health of consumers and great disruption and economic loss to the food industry. (Hamburg) One of the most critical points of food preparation and consumption are the personnel who prepare and serve the food. 
In California, “Pursuant to SB 602 enacted into law in 2010, Health and Safety Code 113790 et seq., (“California Food Handler Card Law”), food handlers, as defined, will be required to obtain a food handler card after taking a food safety training course and passing an exam” (CCDEH). In light of the FMSA, all states have developed their own laws to mandate that foodservice workers receive proper training; however, the training or the trainee may still be lacking in their performance of food safety practices. The ServSafe program is utilized by most California counties to administer training and certification of food handlers; however, the training and certification test can be taken online and does not require identification of the person receiving the certification. This makes it possible for friends or relatives to take the test using someone else’s name to obtain a food handler card for them. Moreover, food handlers are not required to keep their cards on their person, which makes it impossible for a patron or a public health inspector to ask to see it. The restaurant managers are only required to keep a copy on file and that could very easily be suborned by digital forgery. In a study conducted by the University of Illinois at Chicago, 72 food service workers were polled “to test their familiarity with meat and poultry handling protocol. Of those surveyed, only half received a score of 70 percent or above” (Goetz). In another study conducted by Valeria K. Pilling, et al. and published in “Food Protection Trends,” results indicated that only 72 percent of observed food service employees “washed their hands when starting their shift” and more alarmingly, only 52 percent were observed to “wash their hands when returning to work after smoking, eating, chewing gum or tobacco, bussing tables, or using the bathroom.” While observing the handwashing procedures, none of the objective behaviors was performed 100 percent (Pilling). Even though training and certification programs have been mandated across the country, the consumer may still be at risk because the food handler has insufficient training and understanding of what it takes to prevent contamination of food with harmful pathogens. In a peer reviewed article published in the International Journal of Environmental Health Research, Mitchell et al. report on a 2005 study conducted by Lillquist et al. The study involved three groups of 22 participants each. One group was a control group, another group watched a “standard video presentation on hand washing,” and the last group watched the standard video presentation and participated in a “hand washing demonstration exercise.” When given a test following the training, the participants from the interactive group scored higher than the participants from the other groups. “Despite the small sample size, group differences were statistically significant” (Mitchell). Mere certification is not enough, an active program of supervision is required and consumers also need to be proactive to protect their health and the health of their loved ones. Handwashing sinks should be in the public view and should remain adequately stocked so as to encourage proper handwashing procedures and patrons should not fear to ask their server to wash their hands. “Dr. 
Ben Chapman is an assistant professor and food safety extension specialist at North Carolina State University in Raleigh, North Carolina." In an article for "Food Safety Magazine" he writes, "Ideally, food safety in foodservice establishments begins with managers who are knowledgeable about the following: where contaminants exist, how they transfer to food, the steps to control or eliminate hazards" (Chapman). Managers need to analyze their establishment and their staff to determine where any risks of contamination enter the establishment and service of food.

Manufacturing and processing of foods involves what is known as process control, which Jim Mann writes about in "A Recipe for Hand Hygiene Process Control." The training, understanding, and implementation of a successful hygiene program for food service workers can be developed in the same manner as a process control. The purpose of process control is to remove obstacles to the efficiency and efficacy of the process. At times in monitoring a process, changes to the infrastructure may be indicated. Although such changes are costly, the restaurateur needs to weigh the cost against the potential losses caused by a lawsuit stemming from a foodborne illness being traced back to the establishment. A successful process control for handwashing will change the antecedent behaviors of staff. "Food workers must understand the risks associated with contaminated hands, hands that more than likely look clean. It is critical for staff to connect their good behaviors with personal, family, and customer wellness" (Mann). It is not sufficient for restaurateurs and managers to discover the pattern of their public health inspections and make sure that the facility is in tip-top shape to receive an A grade. When customers and staff realize the personal stake they have in promoting public health, the establishment will consistently avoid the risks of food contamination and bolster the reputation of the restaurant.

The expression, "It takes a village," is usually connected with raising a child, but it is apropos to public health as well. The health and well-being of all of the members of a society depends upon the interaction of each member of that society. Eating is such a basic necessity that we sometimes forget about the importance of making sure our food is safe. Dining out is typically a social occasion and people frequently talk about their experiences, whether positive or negative, in person with friends, and on social media such as Facebook, Twitter, Yelp, and Pinterest. This is a behavior that restaurateurs should encourage by offering coupons to restaurant patrons for writing objective reviews about their experience. Even a negative review has value toward improving the service when it might be found lacking, or worse, risky. The Kern County Department of Public Health has offered a mobile application called "Safe Diner" that works with GPS location to load a restaurant's most recent inspection grade and report when a patron walks into the restaurant with their enabled device. The app includes a place for the public to report a restaurant that may not be living up to the A grade on the door. Honest and objective reporting by the public helps the community to rely on the grades that are posted and dine in comfort and safety.

Lastly, a novel thought. When we drive our cars, we are mandated by law to have a valid license to drive and carry that license on our person whenever we operate a motor vehicle.
The law requires a physician to post their qualifications, certifications, and affiliations in a conspicuous place in their office. Unfortunately, when we go to a restaurant, we have no means of knowing that the person who is serving our food, or the people who have prepared it, are qualified to do so. We cannot confirm that our server holds a valid certification to serve food in a safe manner. The only way to change this shortcoming in the Food Safety Modernization Act is to petition the government to enact legislation that will amend the requirement of SB 602 in California to require that ServSafe implements a means of unequivocal identification of the person being trained and certified and that there is a photo identification badge issued to food service workers that they must wear while working with food.

Berger, Cedric N., et al. "Fresh Fruit and Vegetable as Vehicles for the Transmissions of Human Pathogens." Environmental Microbiology, vol. 2, no. 9, 2010, pp. 2385-397. Academic Search Premier, EBSCO.
Chapman, Ben. "Food Safety for Food Handlers." Food Safety Magazine, Magazine Archive, Dec. 2010/Jan. 2011. Web.
Goetz, Gretchen. "Kitchen Confusion: Food Workers Score Poorly on Safe Handling Test." Food Safety News, 29 Aug. 2011. Web.
Hamburg, Margaret A. "Food Safety Modernization Act: Putting the Focus on Prevention." FoodSafety.gov. Web.
Kern County Public Health. Safe Diner. 18 May 2017. Web.
Mann, Jim. "A Recipe for Hand Hygiene Process Control." Food Quality & Safety, 21 Sep. 2011. Web.
Mitchell, Roger E., et al. "Preventing Foodborne Illness in Food Service Establishments: Broadening the Framework for Intervention and Research on Safe Food Handling Behaviors." International Journal of Environmental Health Research, vol. 17, no. 1, Feb. 2007, pp. 9-24. Academic Search Premier, EBSCO.
Pathogens 101. STOP Foodborne Illness. Web.
Pilling, Valerie K., et al. "Food Safety Training Requirements and Food Handlers' Knowledge and Behaviors." Food Protection Trends, vol. 28, no. 3, Mar. 2008, pp. 192-200. K-State Research Exchange, Kansas State University.
SB 602 Food Handler Card Requirements. California Conference of Directors of Environmental Health (CCDEH). 28 May 2015. Web.
Shallcross, Lynne. "Survey: Half of Food Workers Go to Work Sick Because They Have To." The Salt, NPR Valley Public Radio, 19 Oct. 2015. Web.
Surveillance for Foodborne Disease Outbreaks, United States, 2014, Annual Report. Centers for Disease Control and Prevention (CDC). 18 Mar. 2016. Web.
<urn:uuid:509ae43b-7b03-4a28-9cc6-8d18d549fea0>
CC-MAIN-2017-09
https://docs.com/chad-hidalgo/4827/hidalgo-c-researchpaper1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00358-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952836
3,573
2.65625
3
So! Everything is turned in, edited, spreadsheets are checked, R code is checked. I've even assembled a playlist to go along with the middle school dance floor clustering tutorial. 2. Some algorithms just feel natural using the "drag the formula down to fill the cell" approach that you have in Excel. It's like an artisanal apply() function. ;-) For example, when looking at the error correction formulas for Holt-Winters, you can do a single time period, and then a second one, and then drag everything down. It feels a bit like induction. 3. Spreadsheets are great for teaching predictive modeling/forecasting, data mining/graphing, and optimization modeling. While many of the techniques are opaque in R when you use packages, if you do them by hand in R, they're actually pretty clear. Except for optimization. If you want to teach other modeling techniques plus optimization in R then you're kinda screwed, because all the optimization hooks in R just take a full-on constraint matrix and a right hand side vector. Contrast this with Excel Solver where you get to build constraints individually. It's totally better for teaching. Now, that said, Python has some nice hooks into optimization modeling that would be similar to Excel. Since spreadsheets are so nice for viewing data, then prepping data, objective functions, and constraints, and then optimizing, it means that algorithms such as modularity maximization using branch and bound plus divisive clustering can be taught there, and it's actually easier to see than it would be in nearly any other environment. Plus, if you're careful you can actually cluster data better than even Gephi's native Louvain method implementation can. Bam! 4. Quite simply, I didn't need to teach any code in the book. Yes, in two places I have the reader record a macro of some clicks and then press the macro shortcut key a couple times, but that's it. And actually watching this loop run using keypresses is in itself a valuable lesson for those who don't intuitively get how something like a monte carlo simulation works. So there are a few things I really enjoyed about using spreadsheets to teach data science. Where did the spreadsheets fail? 1. Visualization. Visualization in Excel is nice when there's native support for the particular type of graph you want. But if you want a fan chart or a correlogram with critical values marked, then things get slightly annoying. You can often graph what you need by doing formatting cart wheels. Grrrrr. 3. Spreadsheets are occasionally slow. While Solver is awesome for teaching, its simplex and evolutionary algo implementations aren't going to blind anyone with speed. That's why in the book I recommend using OpenSolver plugged into Excel any time the reader can. Anyway, I think that on balance the book is extremely powerful as a teaching tool, especially for a particular type of student...a student like me. Someone who has a deep seated fear of script-kiddie-ness. Someone who needs to teach and see the data in order to believe. I am the Doubting Thomas of data scientists, but once I do work through a problem piece by piece, then I'm able to internalize a confidence in the technique. I know when and how to use it. Then and only then am I happy to stand on the shoulders of R packages and get work done.
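To make the "drag the formula down" idea in point 2 concrete outside of a spreadsheet, here is a rough single-exponential-smoothing sketch (written in Python purely for illustration; the book itself works in Excel and R, and full Holt-Winters adds trend and seasonality terms on top of this update):

def simple_exponential_smoothing(series, alpha=0.3):
    # Each loop iteration plays the role of one dragged-down spreadsheet row:
    # new level = alpha * observed + (1 - alpha) * previous level.
    level = series[0]            # first row seeds the level
    smoothed = [level]
    for observed in series[1:]:  # every later row reuses the row above it
        level = alpha * observed + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

print(simple_exponential_smoothing([10, 12, 11, 15, 14]))

The error-correction form mentioned above is the same update rewritten as level = previous level + alpha * (observed - previous level), which is exactly what you see when you write the formula in one cell and drag it down the column.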
<urn:uuid:43622fa6-ff93-4c9f-867f-d853401eb7b3>
CC-MAIN-2017-09
http://www.john-foreman.com/blog/beef-to-slurry-to-wiener-reflections-on-teaching-algorithms-in-spreadsheets
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00534-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94603
710
2.5625
3
Iron Mountain makes Public Greenhouse Gas Emissions Disclosure to CDP How Carbon measurement and reporting are driving business decisions With the US administration’s announcement of new EPA carbon pollution rules and the build-up to the United Nations Conference on Climate Change (COP21) in Paris this December, there is a lot of news these days about Greenhouse Gas (GHG) emissions and climate disruption. It can be hard to separate the important information from the politics in North America, but with real changes in climate becoming increasingly apparent, the key question for a global company like Iron Mountain is; how does this affect our business? From more severe storms or droughts disrupting business operations or supply chains to the value of real estate as flood zones move, the effects of a changing climate are real, far reaching and very frequently difficult to predict. And the impacts on people are even more difficult to guess. What are the consequences if 100 million Bangladeshis are forced to leave their homes? The first businesses to realize that something was going on were the reinsurance companies like Swiss-RE who saw the risks for increased claims. Then institutional investors started to worry that companies in their portfolio had risks that they didn’t know about because business leaders didn’t have a way to measure, analyze or share their contributions to the problem or their exposure to the issues. In response to this blind spot, the London based Carbon Disclosure Project (now called CDP) was formed in 2000 with the goal of convincing large companies to measure and report their GHG emissions, risks and actions. Today 1000s of companies report and CDP counts 822 institutional investors with US$95 trillion under management who use this information. But how can counting carbon deliver a value to our business? Just like in driving, blind spots in business are dangerous. By learning and measuring all the ways our operations create direct and indirect GHG emissions we see our business differently and more information often leads to better decisions. For example in general people think that the new carbon rules will impact energy costs, some say up and some say down which is not very helpful. Because of our new understanding of GHG impacts, we know our carbon emissions from energy by location and can use that information to predict where costs are likely to be most volatile in the future. And we’re already taking action such as signing deals for on-site solar power and considering long term fixed price renewable energy contracts which will help us avoid risks. Blind spots can also hid opportunities. For example addressing the carbon intensity of the energy we use for data centers helps us see how to solve a problem for our customers. What if they could reduce their GHG footprint by using our services? These are just a couple of examples of how understanding our environmental impact can quickly translate to financial results for the company. By measuring and publically sharing our environmental and social impacts through our annual CR Report and with our disclosure to CDP we not only satisfy the demands of customers, investors and stakeholders for transparency and accountability, we can add new information to business decisions of all kinds. This is new for us and we’re still learning, but so far it’s been amazing to see employees from different parts of the business discover how new information can help them make more informed decisions. 
We’re looking forward to expanding our reach to more territories of the business and sharing this new approach with more employees.
<urn:uuid:817b009d-cc37-4cc4-b3e7-c0e40c761575>
CC-MAIN-2017-09
http://www.ironmountain.com/About-Us/Corporate-Social-Responsibility/News-and-Noteworthy/Our-Planet/I/Iron-Mountain-makes-Public-Greenhouse-Gas-Emissions-Disclosure-to-CDP.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00058-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957459
691
2.546875
3
It’s 2011 and there are now more phones in the world than computers. Every day, more of these phones become smartphones AKA portable computers. Unfortunately, if your phone can browse the web and check email, you will be targeted by some of the same malicious attacks and scams that go after your PC. Here are a few basic tips from the F-Secure Labs on how to secure your mobile phone. - Keep your system updated An updated mobile operating system allows you to enjoy the latest and greatest features and while protecting your information. Get rid of security holes or vulnerabilities by maintaining updated software on both your PC and your smartphone. - Install a security application As your mobile device functions more like a mini computer, it becomes a more attractive target for hackers or thieves. A reliable security app safeguards your data, protect against threats and locate your lost or stolen phone. Here’s a quick video about our F-Secure Mobile Security, in case you’re interested. - Watch where you click and land The mobile threats you’re most likely to face are scams and phishing attacks that will attempt to steal credit card information. Social engineering methods would be used to lure you into clicking on malicious links. Always check to see if a website starts with “https” before you enter sensitive information. - Avoid shopping or banking on a public network Keep in mind that the public Wi-Fi that your phone is connected to might not be secure. Limit your activity to browsing and avoid committing any transaction that involves your account information. - Get applications from trusted source Part of the fun in having a smartphone is having an app for everything. There are plenty of applications out there, and some are offered through independent, unmonitored channels. Stick to app stores when you can. If you’re downloading an app from a third party, do a little research to make sure the app is reputable. - Make it a habit to check each app’s data access on your phone Some applications may have access to your data or personal information. Be wary of the access that is outside of the scope or purpose of the applications. A game application doesn’t need access to SMS (read, write and send), calling, phonebook entries and system files. If game wants all the access, get a little suspicious. If you have any doubt about an application, do not install it. Mobile security is a new concept for many people. So let us know what you want to know about the topic in the comments of this post. CC image by Jacob Bøtter
<urn:uuid:f7fc9906-2d24-4505-8059-08923c991f13>
CC-MAIN-2017-09
https://safeandsavvy.f-secure.com/2011/01/31/secure-your-mobile/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00054-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918565
534
2.5625
3
A team of scientists from ETH Zurich and the University of Leeds have solved a 300-year-old riddle about the nature of the Earth's rotation. Using the Cray XE6 supercomputer "Monte Rosa" installed at CSCS, the researchers uncovered the reason for the gradual westward movement of the Earth's magnetic field. For three centuries, scientists have known about this westward drift, but they did not fully understand the dynamics involved. Now, thanks to Monte Rosa's advanced computing power and a sophisticated computer model, they have their answer.

First it's necessary to explain that the Earth's magnetic field is no mere artifact. The field acts as a shield, protecting the planet from harmful radiation. It's also a navigation system for flying animals like birds and bats. The field is the result of geodynamo processes in both the liquid outer core and in the solid inner core. Using advanced computer simulations, head researcher Philip Livermore from the University of Leeds, along with Rainer Hollerbach and Andrew Jackson from ETH Zurich, discovered that the magnetic field also influences the dynamic processes that are taking place in the Earth's core.

The Earth's inner core, which is made up of solid iron and roughly the same size as the moon, 'superrotates' in an eastward direction. The inner core spins faster than the rest of the planet, while the outer core and the magnetic field are essentially pushed westward. This phenomenon was first revealed in 1692 by the natural scientist Edmund Halley of Halley's comet fame, but could not be explained until now.

From the supercomputer simulations, it is apparent that the force of the Earth's magnetic field in the outermost edges of the liquid core is driving the magnetic field westwards. At the same time, these forces push the solid inner core in the opposite direction (eastward). As a result, the Earth's inner core rotates faster than the Earth. This is the first time that scientists have been able to draw a direct connection between the spins of the outer and inner cores. "The link is simply explained in terms of equal and opposite action," remarks Dr. Livermore of the School of Earth and Environment at the University of Leeds. "The magnetic field pushes eastwards on the inner core, causing it to spin faster than the Earth, but it also pushes in the opposite direction in the liquid outer core, which creates a westward motion."

The project relied on the supercomputing power of Monte Rosa, a resource of the Swiss National Supercomputing Center in Lugano, Switzerland. With this advanced computational ability and improved software, the science team was able to simulate the Earth's core with an accuracy about 100 times better than other models. A writeup of the research at the Swiss HPC Service Provider Community website hpc-ch.org asserts that numerical models of the Earth's magnetic field are among the most compute-intensive simulations in high performance computing. Solving equations governing fluid dynamics involves both classical mechanics and thermodynamics. The article notes that "together with seismic measurements, such simulations are the only tool for researching the Earth's interior from depths of 2,900 kilometres all the way to the Earth's centre at a depth of 6,378 kilometres." It is only in recent years that modeling and simulation have advanced to the point where these kinds of deep discoveries about the Earth's interior have become possible.
The full study, “Electromagnetically driven westward drift and inner-core superrotation in Earth’s core,” is available in Proceedings of the National Academy of Sciences.
DARPA aims to replace regular satellites with floating clusters

By Henry Kenyon - Apr 25, 2012

In the near future, satellites may come in pieces. That’s the goal of a new effort launched by the Defense Department’s research and development agency: to fly clusters of small spacecraft that communicate with each other and work together to perform the job of a traditional single-piece satellite.

The Defense Advanced Research Projects Agency’s System F6 program seeks to develop technologies to build and deploy “disaggregated” satellites. These groups of small satellites would share information and a variety of capabilities over their own wireless networks, such as communications links, sensors and data storage. By spreading these capabilities across a group of replaceable spacecraft, DARPA hopes to create platforms that are much more survivable, adaptable and repairable than traditional satellites.

Related: DARPA seeks ways to rebuild space junk | DARPA challenge: Program satellites to salvage space tech

A recent proposal outlined the System F6 program and described its three parts:
- The F6 Developer’s Kit (FDK): A set of open-source, exportable, non-proprietary interface standards, protocols, software and reference information that will allow any participating company to develop a spacecraft design that can participate in a satellite cluster.
- The F6 Tech Package (F6TP): A network computing device that physically connects to, and provides data switching and routing between, the spacecraft bus, wireless inter-module transceivers, shared resource payloads such as high-performance computing and data storage, and mission payloads such as sensors and hosted payloads.
- The F6 On-Orbit Demonstration Testbed: This will provide affordable satellite buses for the demonstration cluster, host the F6TP and inter-satellite communications crosslinks on each spacecraft, provide or host additional payloads, and provide support for integration and orbital demonstration operations.

An in-person proposers’ day will take place May 3 in Arlington, Va. At the event, DARPA will provide information on the progress of the System F6 program’s multiple efforts and details of the System F6 On-Orbit Demonstration Testbed broad agency announcement.

DARPA is also interested in maximizing the number of non-traditional performers in order to attract more innovative concepts, agency officials said. The proposal process is open to small businesses, academic and research institutions, and first-time government contractors. There are also no restrictions on the citizenship or nationality of proposers’ day attendees, DARPA officials said.
NTP, the Network Time Protocol, is a time synchronization protocol that runs over a network protocol called UDP. UDP is designed for speed and simplicity at the cost of reliability, which suits the inherent time sensitivity (or more specifically, jitter sensitivity) of NTP.

Time is an interesting case in computer security. Time isn’t exactly secret; it has relatively minor confidentiality considerations, but in certain uses it’s exceedingly important that multiple parties agree on the time: engineering, space technology, financial transactions and the like.

At the bottom is a simple equation:

denial of service amplification = bytes out / bytes in

When you get to a ratio > 1, a protocol like NTP becomes attractive as a magnifier for denial of service traffic. UDP’s simplicity also makes it susceptible to spoofing. An NTP server can’t always decide whether a request is spoofed or not; in many cases it’s up to the network to decide.

For a long time, operating system designers, system implementers, and ISPs did not pay much attention to managing or preventing spoofed traffic. It was, and is, up to millions of internet participants to harden their networking configurations to limit the potential for denial of service amplification. Economically there’s frequently little incentive to do so: most denial of service attacks target someone else, and the impact of being involved as a drone is relatively minor. The result is systemic susceptibility.

My advice is for enterprises and individuals to research and implement network hardening techniques on the systems and networks they own. This often means tweaking system settings, or in certain cases tinkering with routers and switches. Product-specific hardening guides can be found online at reputable sites. As with all technology, the devil is in the details, and effective management is important in getting it right.
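To make the ratio concrete, here is a minimal sketch in Python. The packet sizes used below are illustrative ballpark figures for a legacy NTP "monlist" query, not measurements from any particular server, so treat the resulting factor as an order-of-magnitude estimate only.

```python
def amplification_factor(bytes_in: int, bytes_out: int) -> float:
    """Denial-of-service amplification = bytes out / bytes in."""
    if bytes_in <= 0:
        raise ValueError("request size must be positive")
    return bytes_out / bytes_in


# Assumed, illustrative figures: one small spoofed monlist request can
# trigger up to ~100 response packets of roughly 482 bytes each.
request_bytes = 234
response_bytes = 100 * 482

factor = amplification_factor(request_bytes, response_bytes)
print(f"Estimated amplification: {factor:.0f}x")  # roughly 200x with these numbers
```

Any ratio comfortably above 1 is enough to make a protocol attractive to attackers; the point of the sketch is simply that a few hundred bytes in can buy tens of kilobytes out when spoofing goes unchecked.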
“Rather than some automated tool or complex virus, Google and Wikipedia searches appear to have been the weapons used to knock down the walls guarding [Sarah Palin’s] e-mail,” according to this eWeek item. Most people are vulnerable to the type of attack that compromised Palin’s email account. As Markus Jakobsson wrote recently in IT World, “Almost all of us reuse what we may think of as ‘meta passwords’ – the information used to reset passwords.”

Every three months, about 1.5% of Yahoo’s 250 million email account holders forget or lose their email login or password. That creates tens of millions of password reset and password recovery requests per year, according to this research report, which translates into a lot of wasted time in password recovery purgatory (at best) or opportunities for privacy problems and online fraud (at worst).

The password security and password recovery process is vulnerable to several different types of attacks:

1) Phishing attacks: Someone mimics a trusted website, usually by sending an email directing you to a “fake site.” There they get you to enter personal information like passwords, credit card information, social security numbers, or “meta password data” such as birthdays, mother’s maiden name, or the name of your first pet. The phisher captures this information and uses it to assume your identity and either access your sensitive accounts or create new accounts in your name.

LastPass protection: LastPass protects against phishing attacks by verifying that every site you log into is the actual website you’re trying to enter. When you attempt to log in to a website using LastPass, the password manager will highlight login and form-fill fields, and offer autologin only on a confirmed, legitimate website where you have an account. You’ll see the LastPass icon and highlighted fields and know it is safe to proceed.

2) Brute force attacks: Someone methodically tries password combinations in an attempt to guess your password. One popular variation on this theme is a dictionary attack, where weak passwords are uncovered by simply testing them against the words in a dictionary.

LastPass protection: LastPass makes creating, using and remembering strong passwords simple. Most people, myself included, make it too easy for brute force attacks to succeed because we use weak passwords (which are easier to remember than strong, complex ones) and reuse those weak passwords across different sites (meaning that if one password is stolen or compromised, many of my sites are vulnerable). LastPass makes it easy to use strong and unique passwords for every website. I use LastPass to auto-generate strong passwords for me and to remember them so I don’t have to.

3) “Meta password” attacks (a.k.a. mother’s maiden name and other common password retrieval challenges): Under this increasingly common scenario, someone collects your personal information via Facebook, public record searches, etc. They then use that information to figure out what they need to reset your account password and access your information.

LastPass protection: The password manager lets me change the way I answer these “meta password” questions; basically, I can offer less personal information. Gone are the days when I entered simple answers: now I auto-generate strong passwords as answers to questions like mother’s maiden name and my elementary school.

I use the password generator to make up “junk” answers and save them in the “edit site information” notes section with each new account. Because LastPass auto-logs me in to websites, I no longer have to use the meta password data to reset passwords. If I ever need the meta question answers, that info is securely saved and can be accessed from my LastPass portal page.

Because LastPass does password management differently, syncing all my information across platforms and machines, I can still access all my account information and log into my websites without uploading any sensitive information to their servers. So, unlike many password managers, LastPass doesn’t require too much “trust” from me. It saves all my sensitive information and encrypts it locally on my machine. They don’t have access to any of my information, and it doesn’t get saved onto their servers. It remains secure, encrypted and on my computer.

It’s probably time for all of us – including Sarah Palin – to rethink our online information management and make life easier and safer with a password manager like LastPass.
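For readers curious what “auto-generate strong passwords” can look like in practice, here is a minimal, hypothetical sketch using Python’s standard secrets module. It illustrates the general idea only; it is not LastPass’s actual generator, and the length and character set below are arbitrary choices.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# One unique password per site, plus "junk" answers for security questions,
# so that nothing guessable from public records unlocks an account.
site_password = generate_password()
security_answer = generate_password(16)  # e.g., stored as the "mother's maiden name" answer
print(site_password, security_answer)
```

The design point is simply that randomness from a cryptographic source, plus enough length, defeats both dictionary attacks and the public-records guessing that undid Palin’s account.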
When New Scientist published a leader reporting that Britain’s National Health Service (NHS) had quietly shared the healthcare data of 1.6 million patients with DeepMind, Google’s London-based artificial intelligence (AI) division, there were complaints across the nation’s newspapers. While the information was ostensibly shared to build an app that would help hospital staff monitor kidney disease, the journal reported that the agreement went much further. The notion of a tech behemoth poking around the most private of personal data, without direct consent, concerned patient and privacy groups. The unwillingness of DeepMind or the NHS Trust involved to discuss their plans for using the data -- anonymized yet intimate details including patient location, visitor information, pathology reports, HIV status, past drug overdoses and abortions -- didn’t help matters.

Among clinicians who analyze large datasets in the hope of curing disease and improving care, this was a non-story. Seeking insights and trends in anonymized data is common practice and leads to vital medical discoveries. Applying AI to this task promises to extend and accelerate those benefits. Recognizing this, New Scientist commented: “This tension between privacy and progress is a critical issue for modern society. Powerful technology companies can tell us valuable things, but only if we give them control of our data.”

I think this is a false and risky assumption. As a computer scientist who focuses on data mining and works as the CTO of a cognitive computing company, I have no lack of enthusiasm for the possibilities of AI and data analytics. However, as a former military officer, I also understand why it’s important, often essential, to keep certain data private.

To read the full article, click here.
Retrofitting a data center is about managing limitations and trade-offs. Decision-makers have to consider physical limits, such as the weight a floor will support and how much cooling equipment can fit into an existing space. Then there’s infrastructure to think about: it would be difficult to swap out an old uninterruptible power supply (UPS) for a brand-new one. Such restrictions have an impact on energy efficiency too: existing UPS systems generally operate at 85 percent efficiency, whereas the newest ones are in the range of 97.5 percent. To reach the highest efficiency numbers, you’d need to change your entire data center architecture, which is impractical for most companies.

Retrofitting a data center to make it more energy efficient has its restrictions, but doing so can be less costly than rebuilding an entire facility. To weigh the variables, and achieve energy cost savings, you need to know what’s broken. Here are five tips for determining the efficiency of your data center and making it as green as can be.

1. Get to know your data center.

An energy efficiency assessment from someone who specializes in data centers should be a priority, says Neil Rasmussen, CTO of American Power Conversion (APC), a provider of data center power and cooling equipment. IBM, EYP Mission Critical, Syska Hennessy, APC and Hewlett-Packard offer such services. HP recently added Thermal Zone Mapping to its assessment offering. This service uses heat sensors and mapping analysis software to pinpoint problem areas in the data center and helps you adjust things as needed, says Brian Brouillette, vice president of HP Mission Critical Network and Education Services. For example, the analysis looks at the organization of equipment racks, how densely the equipment is populated, and the flow of hot and cold air through different areas of the space. It’s important to place air-conditioning vents properly so that cool airflow keeps equipment running properly without wasting energy, says Brouillette.

2. Manage the AC: not too cold, not too hot, but just right.

Energy efficiency often starts with the cooling system. “Air conditioners are the most power hungry things in the data center, apart from the IT equipment itself,” says Rasmussen. If your data center is running at 30 percent efficiency, that means for every watt going into the servers, two are being wasted on the power and cooling systems, he says.

To reduce wasted energy, one of the simplest and most important things you can do is turn on the AC economizers, which act as temperature sensors in the data center. According to Rasmussen, 80 percent of economizers are not used, just as IT administrators often turn off the power management features in PCs. It’s also important to monitor the effects of multiple air-conditioning systems attached to a data center; sometimes, Rasmussen says, two AC systems can be “out of calibration,” one sensing that humidity is too high and the other sensing that it’s too low, and their competition, like a game of cooling tennis, can waste energy.

Richard Siedzick, director of computer and telecommunications services at Bryant University, uses such features in his data center. “If the temperature rises to a certain level, the AC in that rack will ramp up, and when it decreases, it will ramp down.” The result is a data center climate that few are used to. Instead of being met with an arctic blast at the door, Siedzick says, people have told him his data center is too warm.

That’s not actually the case: AC economizers help cooling stay where it is needed, rather than where it is not. And that means increased efficiency and monetary savings. “We estimate we’ve seen a 30 percent reduction in energy [in part, due to more efficient cooling] and that translates into $20,000.” Siedzick says other precision controls, such as humidity sensors, are used in the data center as well.
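As a rough illustration of the “30 percent efficiency” arithmetic cited above, here is a short sketch with assumed numbers (the 500 kW and 1,500 kW loads are invented for the example, not figures from the article): if power and cooling waste two watts for every watt the servers draw, only about a third of the facility’s power does useful work.

```python
def infrastructure_efficiency(it_power_kw: float, total_facility_power_kw: float) -> float:
    """Fraction of total facility power that actually reaches the IT equipment."""
    if total_facility_power_kw <= 0:
        raise ValueError("total facility power must be positive")
    return it_power_kw / total_facility_power_kw


# Assumed example: 500 kW of IT load inside a facility drawing 1,500 kW overall.
it_load = 500.0
facility_load = 1500.0

eff = infrastructure_efficiency(it_load, facility_load)
print(f"Efficiency: {eff:.0%}")               # ~33%, close to the 30% figure cited above
print(f"PUE: {facility_load / it_load:.2f}")  # Power Usage Effectiveness, ~3.0
```

Tracking this ratio before and after changes such as enabling economizers or re-placing vents is one simple way to confirm that a retrofit is actually paying off.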