As a patent examiner, I can say some of the problems with the patent system are much more hidden than people are led to believe.

First - We have a very archaic system for searching and tracking patents. Sure we have patent classes, but a couple of bad placements by examiners and the entire system becomes nearly useless. Furthermore, with new forms of advancement happening every day, opening up what should be new categories and bridging between others, a classification system quickly becomes useless without constant updates. The tools they give us to search work, but not as well as they should to keep up. I can tell you that Google Patents is where I start searching 99% of the time. I do rely on other tools, but most of the time, Google finds what I need. They have some amazing programmers out there who keep on top of the search game.

Second - The PTO has the manpower to examine, but its IT and web design are far behind. Up until 5 years ago, work was done in Word, printed out, handed in for review, and then scanned back into the computer. Fortunately that has changed, but the website and web portal systems need a healthy redesign, and I already told you about the search tools - we're behind the times. I give the PTO major props here because they have been moving forward by leaps and bounds lately, but I see some major programming that really needs to be done.

Third - Transparency is lacking. The worst part about the entire system I read about deals with third-party extortion. Patent holders are allowed to discuss and set terms of licensing behind closed doors under non-disclosure agreements. That is terrible. It prevents the true trolls from being readily identified.

Fourth - There needs to be a cap on royalties. If the cap is, say, 10% and you infringe 1 patent, you need to find a different way to work around the patent or be 10% more efficient than the other company and pay your royalties. If you infringe on 30 patents, well, I guess each one didn't contribute nearly as much to your invention. Pay your 10% and let the trolls fight over the pennies (toy numbers below). The only exception I would be apt to make is the medical industry, where literally billions of dollars and decades can go into research.

Fifth [see update below] - We need to allow the public side to appeal at the same legal standard patents are examined at. Did you know a patent being examined can be rejected for obviousness, but a patent being appealed (after allowance... generally) cannot?! Examiners are human, and with about a million applications every year, even if we can get the error rate down to only 0.1%, it means 1,000 patents will still slip through the cracks. Some of the trolling comes specifically from this double standard of examining.

The problems in the system truly are getting smaller, and I would like to say that these are the only real things I can detect that the government side could do better on. They are getting far more right than wrong, honestly. I also applaud companies like Newegg that are standing up to trolls, but with caps and transparency, the trolls couldn't exist in the first place. Patents are important - if you think a corporation could easily eat a little guy with a patent now, you're actually probably right... but at least if it's a good enough idea, that guy will get a nice bonus for the rights. Sometimes enough for a used car, sometimes enough to retire comfortably. Without them, I guarantee that if the same company got a hold of the idea, he wouldn't even be acknowledged.
[edit] As many people mentioned in the comments below, there was actually an update that addressed this double standard in post-AIA law. I was unaware of this until Reddit made me look it up. Number 5 should be considered a problem that has been addressed and is going away soon.
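To illustrate the royalty cap idea from point Fourth with toy numbers (the revenue figure is invented; only the 10% cap comes from the comment):

```python
# How a 10% royalty cap might split across infringed patents (toy numbers).
revenue = 1_000_000        # hypothetical infringer's product revenue
royalty_cap = 0.10         # the proposed 10% cap

for patents in (1, 5, 30):
    per_patent = revenue * royalty_cap / patents
    print(f"{patents:>2} patents: ${per_patent:,.0f} each, "
          f"${revenue * royalty_cap:,.0f} total")
```

With 30 patents in play, each holder's share drops to about $3,333: the "pennies" the trolls would be left fighting over.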
Fallacious argument. Hardware is not software, for two reasons. Software is computation (i.e. information). Hardware is a machine designed to store information or replay calculations. Calculating machines, such as adding machines and calculators, are patentable because hardware patents specify how to construct a device that can perform mathematical computations. The patents are not on the computations. There are programmable CPU and ROM chips that can be written (or rewritten) with new software. That doesn't mean the software becomes hardware when it is copied onto them. The hardware is just a storage medium for the software, in the same way a cassette tape is a storage medium for songs and speech. You patent the cassette tape and the cassette-playing machine, not the bits of information on the tape. Another example is the old-fashioned player piano. The patent is on the piano device that plays songs - not the songs on the paper roll. The information stored on the recording medium is already protected by copyright law. Other people can play the same song in their own way, and they are not violating any patents when they do so. That is covered by copyright law.
I've used both for work and personal, and I've supported users of both at work. Outlook.com functions, usually. I've found users are confused by what it is, especially if they were Hotmail or Live users. Then outlook.com shows up and they don't know what they're logging into. There's a large amount of inconsistency. Some things will reference "outlook.com" and others will reflect the user's email address. Oh, you want to see the settings for [email protected]? You need to click on [email protected]. WTF? OK, so that means they have a working @outlook.com email address? Nope, bounce. Oddly, it seems to have fewer problems in Firefox than IE. For example, I've seen this on several PCs spanning a few different companies: while using IE, click on an email in the list and the reading pane appears to start loading... and... nothing. Try again, same. Close IE, open Firefox, works fine. Back to IE, works OK now. Next email, same problem. Outlook.com looks like they put a lot of time into designing it to look pretty and be simple, yet it still has that underlying Microsoft feel to it. A function is there, but good luck finding it. There's a function you don't ever need? It's front and center. Sometimes things just appear and disappear. It feels like talking to someone who's phasing in and out of your own timeline: one minute they know exactly what you're talking about, the next they have no clue. Gmail is damn near perfect. Any issues I've had really come down to users expecting more from email than they should.
Depends what stops functioning first. If the battery cannot produce the current needed, the phone will not function. If it can, the voltage regulators and other power circuitry will heat up until they break. If both keep up, the phone will get very hot very quickly, and the processor will then break from the internal temperature.
That doesn't mean you will have 500GHz CPUs thanks to this stuff, at least not initially. This is the switching speed of a single transistor; modern transistors can switch at many tens of GHz, and probably even faster when made small in an IC. This just shows that these graphene transistors are very high performance, much more so than conventional transistors.
The longest chain of gates in a combinational logic circuit is called the critical path. Each gate you add to the critical path increases the amount of time from when an input is provided to when a deterministic output is guaranteed. Each component in a CPU pipeline has this property. The CPU can only be as fast as the longest critical path in the pipeline allows.
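A back-of-the-envelope sketch of how that limits clock speed (the gate delay and path depth below are made-up example numbers, not real process figures):

```python
# The clock period must be at least the critical-path delay:
# one gate delay per gate in the longest chain.
gate_delay_ps = 20        # assumed propagation delay per gate (picoseconds)
critical_path_gates = 25  # assumed longest gate chain in one pipeline stage

period_ps = gate_delay_ps * critical_path_gates  # 500 ps
max_clock_ghz = 1000 / period_ps                 # 1000 ps per ns -> 2.0 GHz
print(f"Max clock: {max_clock_ghz:.1f} GHz")
```

This is also why a single 500GHz transistor doesn't give you a 500GHz CPU: the clock has to wait for the whole chain, not one device.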
Apple's software pricing has supposedly been starting a war with Microsoft ever since they cut the price of the system to $30 a couple of years ago. People keep saying this and people keep being wrong. Here is the logic why: Since OS X came out in 2001, there have been 10 different versions (10.0 - 10.9). Since Windows XP came out in 2001, there have been 5 different versions (XP, Vista, 7, 8, 8.1). Exclude 10.9 and 8.1 (since they are both less than a month old and essentially free upgrades from previous versions of the OS) and you get the totals: OS X = 9, Windows = 4. It should be very obvious that if a person bought every new OS in each line, the Windows versions can be twice as expensive as the Apple ones and the consumer will still break even between the two. Additionally, if you buy the Windows upgrade soon after release you can typically get it for ~$50, which is less than twice the $30 Apple was charging for upgrades. And then you can look at the support lifecycle of the two OSes. Microsoft specifically spells out the support lifecycle of Windows: Windows XP was supported from 2001 - 2014; Windows 7 is supported from 2009 - 2020. That means you can go over a decade without paying for upgrades with Windows and still be supported the entire time. What about OS X? Apple refuses to officially announce its support and end-of-life policies for OS X versions. A Computerworld article says that the unofficial OS X policy is that you can expect only the latest 2 versions of OS X to reliably receive patches and updates. This is much more of an upgrade "treadmill" where you must buy newer OS versions to ensure you are supported.
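A quick sanity check on that arithmetic (the Windows price below is an assumed average discounted upgrade price, for illustration only):

```python
# Rough upgrade-cost totals since 2001, excluding 10.9 and 8.1 as above.
osx_upgrades, osx_price = 9, 30      # Apple's recent upgrade price
win_upgrades, win_price = 4, 60      # assumed average Windows upgrade price

print(f"OS X total:    ${osx_upgrades * osx_price}")   # $270
print(f"Windows total: ${win_upgrades * win_price}")   # $240
```

Fewer, pricier releases end up costing about the same as many cheap ones.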
Actually my name is a reference to Always Sunny in Philadelphia, and it was also my nickname when I played on a tournament paintball team. I have a wife, a 2-year-old son, and a dog. I have a 3000GT which I take to car shows. I play video games and obviously play on reddit now and then. I listen to all kinds of music, hunt and fish, and believe in and support the U.S. Constitution. I have tattoos and don't mind a drink of alcohol from time to time (I don't smoke because my grandma died of cancer). I am an everyday person just like you. I've never felt the urge to kill someone, and I wouldn't get away with it if I did. (I would actually be scrutinized more closely than you would be if you did.) I grew up in a family where most of my family members have been in and out of jail and prison. I worked for over 2 years around murderers and drug dealers and only got physical with inmates twice. If you spoke with the inmates I worked around, they'd tell you that I am a nice guy. I run into them in public on occasion and usually have a short conversation with them.
There should be due process, review & oversight, even if it is after the fact. If the police wanted to tap my phone, they'd need a warrant first, though they can tap my phone & then get a warrant if there is an immediate threat, like if they think I kidnapped a 5-year-old. A lot of the violence & crime in Chicago is immediate. If one person gets shot, there is a high likelihood someone else will get shot in retaliation. In a case like this, I think the cops should be able to act at 2am & then get a warrant later if they have reasonable suspicion that a crime would happen & that blocking my social media could assist in stopping that crime. But at 9am someone had better be going before a judge to review their actions.
This sentiment and other "aha" drawbacks are expressed throughout the comments, I presume by angry economists, so I feel it's worth taking the time to debunk them.

1) Someone has to pay for the printer. You may not know this, but there are companies out there that let you simply order 3D print jobs from them. Typical markup is shipping plus about double the cost of material, so the actual price is more like $30. I feel fairly sure the guy bought the 3D printer because he wanted to have a 3D printer...

2) But what about his time assembling the item? Well, one, it isn't that hard, and two, no one trots out this argument about Ikea flat-pack furniture. This is the myth that anyone who spends time doing work could instead spend that time doing paid work. That's not how it works, so to speak.

3) Someone has to give up their time to make the design. Yes, but again, some of us are happy to do this for free or to release stuff under a Creative Commons licence. In addition, there are an increasing number of companies out there that make money off open hardware platforms, e.g. they make their design freely available but also offer to sell the device pre-built, or take money to help support the hardware design if you build it yourself.

4) What about the material being used? Is it a good one to have in contact with someone's skin all the time? This is a FAR more interesting question and actually gets at one of the major barriers to all prosthetics and wearable/embedded sensors (the area I do some work in). It turns out that placing any material against skin is challenging because it can reduce the skin's ability to sweat and can cause chafing. Certain materials are more comfortable than others (someone posted an old prosthetic with leather straps; this was one of the original attempts to alleviate the problem), but this is a legitimate issue. However, it's still one we haven't perfectly addressed with professional builds. And, from that link, something which fits better will cause less irritation. Indeed, the low cost could allow the production of multiple attachments as the shape of the stump the prosthesis attaches to changes over the course of the day. You can imagine someone carrying a couple of different attachment sockets and switching them out as the day progresses, and indeed having an assortment of them to fit as their body changes size, like some people have "fat day jeans".
It's worth evaluating why Yale's administration would bother spending their time and absorbing negative PR over this.

Tenured Faculty's Influence

Efficiently displaying courses by rating and workload makes it easier for students to steer their time away from specific courses. Imagine charting ratings on the Y-axis and workload on the X-axis: people will avoid courses low and to the right. In my undergrad experience, courses that fit that description were often taught by the old, tenured department stalwarts who have no reason to ever modify their courses. They're influential, and they don't like suddenly speaking to empty rooms.

Old Fears New

Private colleges are terrified of the new forms of learning that the internet enables. Companies like Coursera are beginning to fundamentally challenge the $55,000/year colleges' reason for existence. As irrational as it might sound, I suspect Yale views things like YBB+ as further eroding their model, which depends on a predictable flow of students doing predictable things. For example, what if YBB+'s data showed students almost universally dislike Yale's philosophy major? Philosophy is a feeder for law schools, and top law schools generate relatively wealthy alums. Well, maybe aspiring lawyers would start choosing Harvard or Stanford over Yale. This, and a thousand other potential scenarios, scares the poor administration, so they flail about and shut YBB+ down.

The Business Model

Let's get something straight: private colleges are businesses. Students play a key role in a reinforcing cycle: the better Yale's students do after graduating, the more donations Yale will receive and the more tuition Yale can charge. But students are not the only stakeholders. Three other key stakeholders are: 1) parents, 2) professors, and 3) businesses.

1) Parents expect a certain experience when they send their kids to Yale. This is in part due to brand and in part because Yale has a high percentage of parent alums. YBB+ freaks them out: kids will start making choices based on crowdsourced data, not on tradition or institutional knowledge. Old institutions, especially those steeped in tradition, hate this kind of change.

2) World-class professors are key marketing points for elite colleges, who love to talk about how many professors are Nobel laureates or frequently present to the UN. YBB+ undermines these people's star power by putting actual student opinions up against their celebrity.

3) Businesses pay colleges in a variety of ways to get access to top talent. That model is predicated on a consistent flow of top talent; if that dries up, the businesses will stop paying. YBB+ empowers more informed choices from students, which might lead students away from previously popular channels to certain companies. Yale can't have that - they depend on that revenue stream.

Fear, Uncertainty, and Doubt

Finally, the most primal motivation: fear. Yale is a traditional company run by traditional people, and they're likely worried about what students will do next. They have the authority and resources to shut down this type of behavior (or so they think), so that seems like the more prudent choice compared to letting this go, only to watch other students do something else that's even more objectionable.
When looking for a Chrome replacement I did in fact try Aurora, hoping it would fit the bill, and while the touch aspects were greatly improved, it's actually a resource hog in comparison to IE. Granted, it's in beta, so I'll likely give it a second go once it hits primetime, but truthfully I'm actually coming to like IE. I think a lot of people are still in the IE6 mindset when it comes to judging it, and it's really unfortunate. Speed-wise (outside of Google products) I see no discernible difference between Chrome and IE. I also suspect that IE is cheating a bit by taking advantage of being deeply integrated into Windows, much like Safari does on Mac. Either way though, the result is a more seamless transition between desktop and Metro mode while hanging on to the benefits of both. The only reason I would prefer to use Chrome is that my phone, tablet, and desktop all use Chrome, and I like being able to leave a page open on one and continue on another easily; unfortunately that seems unlikely with IE. Should IE release an Android version though, I would seriously consider using IE across everything. I've used WP8 for a bit and was actually really impressed.
The issue is that this isn't trivial, and your being in IT doesn't matter; this is an issue for the registrar. Think about it: the initial website was created for a purpose, to provide some information to help the students choose their courses, but particular information that it does not include could sway the students to choose one class over another, leading to classes that fill up too fast or classes that don't get any attention. It is the registrar's job to distribute classes (English, Calc, CS, etc.) and classrooms (lecture halls, etc.). Using trends and information from the previous years, they have to predict (at least one or two years in advance) what the makeup of interest and availability will be throughout the campus. If there is already a tightly wound system based on the current information given to students when they are looking to register, and a new variable that hasn't been considered gets thrown into the mix, shit is gonna be fucked up for the students: discontent with class availability, begrudgingly taking classes because there is no space otherwise, and a worsening relationship between students and admin.
The majority ("even at Yale") will likely praise the easiest courses, the most lenient professors, and thereby generally screw up 'ratings'. Do you honestly think the students are too stupid to know the biases inherent in scoring, that they'll see the numbers and confidently know, "Hmm yes, this is all clearly an objective measure of quality," while blindly picking the highest ranks? It doesn't take a psychologist to figure that one out. My university had a pretty great course instructor survey system, where you could search the professors and see their scores for each course, every semester they've taught it over the last 10 years. I knew the workload questions would always be skewed higher than reality, that difficulty would be overrated, but there were also questions about "Did the professor take interest in the progress of the students" or about availability, or if you felt like you learned something of value. Believe it or not, the majority of the time I didn't choose the highest ranked professor with the easiest workload, even when that fact was plainly available in front of me. I purposefully chose ones that would be challenging; such as ones that had high workload and difficulty, yet students still conceded that their grades were fair, or that the professor was extremely knowledgeable in the subject area, or took value in the students. I'm not a genius, but it was obvious that I'd have to look at the averages, trends, outliers, and biases to get a good guess at the professor, or just do the old fashioned asking around to get personal insight to how the class works. Yale students will know that's how scoring is slanted. The ones that blindly choose the highest numbers are the ones who wanted the easiest classes anyways, why not let them know which ones are the easiest? Students who want to challenge themselves are obviously the ones who would also take the initiative to research more into how their classes are run, and won't sign up purely on one number. >So, in short, why on earth would a prominent university want to risk a bunch of undergraduates (mis)present the quality of its main asset: scholarship? Misrepresenting to whom? The general public doesn't have access to this, where random internet-goers will gawk at scores saying, "Wow, that professor only got a 2/5! Jeeze, I thought Yale was supposed to have smart professors". If you mean misrepresenting to other students , then you're mistaken for the same reasons I've stated above. These numbers aren't fooling anyone. A high score means that it appealed to a high number of students in the class for some reason or another, so basing selection on that score means that statistically you will also come out of that class with that same sentiment for the same reasons those before you did. If satisfaction is all you're looking for, and you're an average student, then the score really is all that matters to you. >Yale is not a teaching college; it's a research university. The best researchers aren't always the best teachers. Undergrads don't necessarily understand this. If I'm using these scores to guide which classes I'm going to take, why would I give a damn about how good a researcher the professor is? That rightly matters to the university, grad students, and undergraduate researchers, but if he really is poor at teaching, what reason do I have to take his class? 
I've never in my life heard someone say, "Man, those were the shittiest lectures with the most unfair grading system ever, but his research, which I have no part in, is so cutting edge, 10/10 would definitely take that class again." The research talent is a great reason why the university should keep them around, and why you should seek to be in their lab, but rankings that place brilliant researchers on a low scale due to poor teaching are exactly what students taking classes from them need to know. On pretty much every other point, I do agree with you. It's not a democracy, you need appropriately licensed material, and you need easy access to the more holistic and thorough responses instead of a couple of numbers. But these numbers should be a measure of their teaching, and failing to take into account the fact that they're really busy with their field of research is completely justified.
How old are you? Let me give you a hint: the FAFSA considers your parents' income until you are 23. If they make too much for you to qualify for student aid, and you don't have enough scholarships to make up the difference, wait till you're 23. They only consider the previous 12 months, so the thing to do is to minimize your income the year you are 23. Once you are 23, if you make under 10k/year, you will qualify for the most student aid possible, including (especially) subsidized direct loans, etc. You'll be able to afford college. Just make sure you pick a degree that will actually pay off after you graduate (i.e. STEM), because you WILL accrue debt. But before anyone reaches for their pitchforks at that thought, consider this: prior to school my best year without a degree amounted to about $27k, as the manager of a retail store. After school, I started at $52k, and now make $100k after 6 years in the job market. I did accrue about $50k in debt, but my loans are mostly in the 3.375% range, and I have about 30 years to pay them back, so the monthly payment is less than 5% of my monthly take-home salary, after taxes, benefits, everything.
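For anyone who wants to check that last claim, here's the standard loan amortization formula with the numbers from the comment (the take-home figure at the end is inferred, not stated above):

```python
# Monthly payment on an amortized loan: M = P*r / (1 - (1 + r)**-n)
principal = 50_000        # total debt
annual_rate = 0.03375     # 3.375% APR
r = annual_rate / 12      # monthly interest rate
n = 30 * 12               # 30 years of monthly payments

payment = principal * r / (1 - (1 + r) ** -n)
print(f"Monthly payment: ${payment:.2f}")   # about $221/month
```

At roughly $221/month, any take-home above ~$4,400/month keeps the payment under 5%.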
Nope. How about you actually keep up with what's happening? They posted a couple days ago that it was a problem on both sides: they fucked up and so did the administration. In their new post, which (since it's a webpage and not a blog) overrides the old one, they have since whitewashed all mention of their error, lol. The webpage is just dedicated to "sign our petition" and attacking Yale for a stupid thing that got out of hand.

Edit: they just moved it off the front page:

> In 2001, course evaluations were shown only to professors to improve their teaching. When OCI came out, the faculty agreed to put evaluations online for students, but also agreed on a specific format for them. We still don't know the details, but it seems to forbid displaying averages right with the course title and description. We've asked them for the specific text.

> Yale asked us last Friday (January 10) to shut down immediately because we violated this agreement with faculty (of which we were unaware until recently), and they thought there would be minimal impact because only a part of campus was using it.

And so on, and so forth.
Current Yale student here. I don't really understand how this comment, based on vast generalizations about the "private college business model," got upvoted so much and in fact made it to the front page.

First of all, we should make sure we're on the same page as to what YBB+ does that's different from the other course selection software that Yale students have, most prominently Yale Bluebook, also a student-developed website that Yale ended up purchasing. The difference is that YBB+ allows students to sort classes based on average ratings and workload. This data is represented as a number, e.g. 3.73, instead of a bar-graph distribution ranging from "poor" to "excellent" (poor corresponding to a 1, excellent a 5).

Where did the original ratings and workload data come from? This data, along with students' course evaluations, is made publicly available by Yale through its official Online Course Information/Selection system (OCI/OCS). It's what YBB+ and Yale Bluebook both use; it's just that Yale Bluebook is basically a more streamlined version of OCI, whereas YBB+ is a spreadsheet. Therefore, your thesis that YBB+ "places information into the control of the student instead of the institution controlling access to information as they desire in their current business model" is incorrect, insofar as if Yale truly wanted to control access to information, it never would have made all this data available to students back in 2003 in the first place. Before YBB+, how did you think students chose classes? By randomly flipping through the course catalogue? No - they compared reviews and ratings and workload using OCI/OCS and then Yale Bluebook. So students were using peer review data to make course selection decisions way before YBB+.

With OCS and Yale Bluebook, people usually just searched for classes in their own majors. The real benefit of YBB+ is that it allows students to "discover" highly rated classes that they otherwise would not have thought of taking, usually classes outside of their majors. So the practical effect is that already popular and oversubscribed classes become even more popular, and less popular classes suffer. However, I would argue that this effect is not as great as all the recent reporting on YBB+ would have you think, for the following reasons:

- People still have to take classes required for their majors. Intro econ lectures, for example, are almost always terrible classes that are not easy to get good grades in (because of the curve). However, they are still huge because there are so many econ majors.

- Super highly rated classes are often seminars, and it's already pretty much close to impossible to get into a popular seminar that's outside of your major. The popular classes get filled up fast and people eventually have to resort to other classes.

- Regardless of the ratings, reputation still goes a long way. I'm not sure that it is truly the "old, tenured" professors who suffer the most from the advent of YBB+. From what I have seen, people don't make decisions based on ratings information alone - they are still more likely to take classes with famous, accomplished professors, and they still value word of mouth.

The big benefit of YBB+ was that it made sure that the best-rated classes got filled first, and it may have caused already unpopular classes to become even more unpopular, although the extent of this effect is unknown. Yale chose to shut down YBB+ because it was concerned that students would make decisions based on the numbers alone.
According to Dean Mary Miller, when the College first decided to release course ratings data back in 2003, faculty stressed the importance of giving students a "holistic" view of course evaluations, of which written evaluations constituted a key part (see Miller's newest open letter to students). Now it has realized that, through the Chrome extension, students have figured out a way to get past its controls, a workaround that it can't possibly block. So the administration must decide whether it wants to continue releasing course evaluation data, knowing that it can't control how students use it, or stop releasing it completely.

The above is a more objective story of what YBB+ is, what it does, and why it got blocked. Most of your points, especially the ones about parents, professors, and businesses, seem to me to be exaggerated generalizations with little basis. How does YBB+ undermine "previously popular channels to certain companies"? What are those so-called "channels" anyway? If I saw anything during my four years at Yale, it's that if such channels do exist, and if they present themselves in the form of professors and classes, then regardless of the ratings, students WILL fight to the death to get to them. You think people care more about classes than jobs? Yeah right.
As an information security specialist, I can tell you definitively that your intranet is still very much accessible from the Internet. In theory, locking corporate resources down to an intranet will protect them from being accessed; in practice, however, there are about 1,000 things that can and do go wrong. We use the term "island hopping" to describe how an intranet breach occurs. Generally it begins with compromising a legitimate host on the network. This could be something as simple as a drive-by Java plugin exploit that allows dumping of network credentials. From there, the host is spoofed and the stolen credentials are used to gain access. Is it possible to ensure that each authorized host on the intranet is configured in such a way as to minimize the potential for such breaches? Sure it's possible, but in practice the inconvenience of such controls negates their usefulness. Some super-high-level important assets are indeed secured this way. I seriously doubt that your company has either the intent or the ability to implement such rigorous security standards.
That would make me feel like my teacher is a liar. (I also don't think this is a realistic thing that would happen at a university, where someone gets a better grade for what ends up being an arbitrary reason, but we're going to pretend I never made this objection.)

> set wages for the individual needs of each person

This happens? I was under the impression that employers paid people based on how much or how well they worked, not because they have high blood pressure or some other thing. Again, as you can tell, I don't study the economy. (This is when, in my hypothetical perfect model, the government steps in and uses tax revenue to pay for individuals' unique needs like high blood pressure medication or moving expenses, instead of footing the bill to the employer, who shouldn't pay some people more than others for those reasons. Again, in my opinion.)

> takes away the ability to bargain

Is this really hurtful? I look at that as a "perceived" injustice.
Other sources have found otherwise. /u/linkprovidor, would you mind supplying the abstract for your source? It is behind a paywall.

edit: It's OK. I found your abstract on Google Scholar:

> The prevalence of gender wage gaps in academic work is well documented, but patterns of advantage or disadvantage linked to marital, motherhood, and fatherhood statuses have been less explored among college and university faculty. Drawing from a nationally representative sample of faculty in the US, we explore how the combined effects of marriage, children, and gender affect faculty salaries in science, engineering and mathematics (SEM) and non-SEM fields. We examine whether faculty members' productivity moderates these relationships and whether these effects vary between SEM and non-SEM faculty. Among SEM faculty, we also consider whether placement in specific disciplinary groups affects relationships between gender, marital and parental status, and salary. Our results show stronger support for fatherhood premiums than for consistent motherhood penalties. Although earnings are reduced for women in all fields relative to married fathers, disadvantages for married mothers in SEM disappear when controls for productivity are introduced. In contrast to patterns of motherhood penalties in the labor market overall, single childless women suffer the greatest penalties in pay in both SEM and non-SEM fields. Our results point to complex effects of family statuses on the maintenance of gender wage disparities in SEM and non-SEM disciplines, but married mothers do not emerge as the most disadvantaged group.
And? I don't understand why the disparity between males and females in a certain occupation matters at all. I am busting my ass in school to become a programmer because I LOVE IT, not because I have some sense of righteous indignation. Do what you enjoy, regardless of who else is doing it. Is it a predominantly male environment? I can't say. My place of employment has a female Director of IT. My boss is a woman. Several of our Java developers are women. Never once have I felt criticized or shunned for my gender - or for anything else. The 'guys' have been more than happy to involve me in projects, teach me things, and provide constructive criticism alongside praise where it's due. I don't think the burden falls on members of an industry to change the ill-conceived assumptions that people outside that industry have made about it.
It's not so much devaluing the work done as it is an evaluation of the value the work adds to the business. A programmer potentially makes a massive value contribution to a company in comparison to their wage - for example, automating internal book-keeping processes, which scales and saves the need for hiring extra personnel. Or consider the whole value of the product that the company sells (the same goes for engineers: when you're a critical part of the product development, your value is linked to the product). Whereas the work that is 'devalued' often doesn't measurably increase profits. Think retail; most of us have done it, I've done it. Is the process of persuading a customer to buy a product (which sometimes sells itself) anywhere near as valuable as developing the product? Human interaction jobs aren't as inherently valuable or scalable as product development jobs, nor are they value-maximising jobs. For example, teaching and child care. Yes, it's an important part of society, but how much value does the teacher add to the business? Not society, the business. Profit-driven businesses want as high a ratio of kids to teachers as possible, while still being able to charge the same or a higher price for each child going there. An ideal model from a profit perspective is automating the teaching away from humans: having 1 programmer paid ~100K (as low as possible, preferably) for a potentially infinite number of children, compared to 1 teacher (at ~20-60K depending on where in the world you are) for 20-30 students. The problem here is convincing the parents that it's worth paying X amount for their child to attend a school without teachers.
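A toy illustration of that scaling argument, using the comment's own hypothetical figures (the enrollment counts are invented for illustration):

```python
# Cost per student: a teacher's cost is fixed per ~25 kids,
# while software amortizes one salary over any enrollment.
teacher_salary = 40_000       # mid-range of the 20-60K figure above
students_per_teacher = 25
programmer_salary = 100_000

print(f"Teacher: ${teacher_salary / students_per_teacher:,.0f} per student")
for n in (100, 1_000, 10_000):
    print(f"Software, {n:>6} students: ${programmer_salary / n:,.0f} per student")
```

The per-student cost of the software route keeps falling with enrollment, which is exactly why the business finds it attractive and why persuading the parents is the real barrier.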
Which I'm willing to bet is most of us. ;) I have a CS degree. I've been programming for 16 years, worked at a Fortune 50 company, and never once needed to explain a heapsort to anyone but maybe a college professor while earning the degree. Things like that are considered "solved problems", otherwise known as things you should be able to google in 10 seconds flat. What's way more important, a few examples:

- How to google things
- Written communication skills
- Deep knowledge of the languages used
- Oral communication skills
- Knowledge of design patterns
- Knowledge of anti-patterns
- Knowledge of Test Driven Development
- Knowledge of field-relevant technologies
- Knowledge of industry standards
- Knowledge of industry conventions
- UNIX knowledge
- SQL knowledge
- Interpersonal skills
- How to manage your manager
Women now are very blessed to be able to have the opportunities that were not available to them in the past. I'm really grateful that you
Women are culturally raised not to be assertive, are they? Which cultures/ethnicities do this? What dialects do they speak, and what recreational activities do they engage in? It sounds to me like part of the Christian ideal of being demure, but I'm not aware of any sizeable group actually practising that for over a century. Moreover, in secular school there is deliberate conditioning against that behavior; it is commonly called 'active voice'.
You and the article appear to have made the mistake of assuming the ratio of women to men in CS and the number of women in CS are the same thing. There's a good reason the article in question never mentions specific numbers of female coders, only ratios and percentages compared to males: it lets them be intellectually dishonest to push an agenda. It's hard to insist that women are on the decline when the hard numbers probably oppose that statement. My god, when you look at the basis for their claim that women are on the decline (4.2% of female freshmen interested in CS in '82 vs 0.5% today), men are facing just as great a hurdle! They've fallen from nearly 7.5% to 2.15%! Where have all the men in CS gone!? Oh right, that title doesn't make for very good clickbait.
Well now, hang on, it's not as simple as that. As annoying as the "70 cents on the dollar" misconception is, so is the "pure merit" conclusion. Of course reward exactly proportional to merit makes perfect sense on its own. But everybody making the claims stops there, as if that principle is everything and it isn't possible that there are other things to consider.

The "pure merit" argument is essentially that of a level playing field. Great. And then we find that one team on the field consistently beats the other team. OK, that's fair, they won on merit. So be it. OK, but what if the slope of the field is linked to the score? What if having more money means you can afford more education, which earns you even more money? Or you can afford more services (or servants) to free up your time to work more, which earns you even more money. If winning more is what allows you to win even more, is that fair?

Forget even "fair"; what about democracy? In a society where the likes (choices) of half of the population are rewarded more than the likes (choices) of the other half, and everybody voted in their best interests, shouldn't the second half vote for policy that attempts to equalize the rewards for doing what you like in life? Ah, but that isn't how pure markets work, right? OK, but now we're placing an ideological belief in letting markets rule the roost over democracy, the interests of individuals, or happiness, as if "what the markets do" is necessarily and automatically the correct thing to do.

When it comes down to it, a society, an economy, and life in general are not a series of games on a field. Consistently losing in life isn't just a momentary disappointment. When you lose a game, or consistently lose a game, you might just say, "OK, this isn't for me, I'll do something else." But you can't do that when you replace the game metaphor with the reality of life it is supposed to represent. You can't choose to drop out of life, or society, or the economy, and do something you are better at. We actually do need to decide what to do with the "losers", and by "we" I mean the "losers" too.

The problem with the "pure merit" argument is, ultimately, that it says that the system and rules must be this certain way, and the merit is what people put into the system, and that's all that should matter. I have never seen anyone justify why that should be the case. As a systems dynamics and control person, my first thought is to feed back the output into the system rules. As a simple example, you would never design a thermostat as a simple open-loop controller with a rule like "turn on the heat for X seconds to raise it 1 degree" and then take as input "I'd like it to be 3 degrees warmer". You'd have no idea if the desired outcome was achieved. Controllers like this are feedback loops for a reason: you tell them the outcome you would like to see, not the rule you'd like to see.

So what outcome of society would we like to see? I see an excellent argument for suggesting it should be one that maximizes the most happiness, but even that is ill-defined. Is one extremely happy person and millions of sad people better than millions of mildly happy people? Really, the goal would have to be some balance of maximizing total happiness with the distribution of happiness, and two degrees of freedom means there will be tradeoffs, so there is no single "correct" optimization. This also implies a problem with just looking at income measures; income isn't the same as happiness.
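To make the thermostat analogy concrete, here's a minimal closed-loop sketch (a bang-bang controller with an invented toy room model; none of the constants mean anything real):

```python
# Feedback control: specify the outcome (setpoint), measure, and correct.
# An open-loop design would instead hard-code "run the heater X seconds."

def step_room(temp: float, heater_on: bool) -> float:
    """Toy room model: fixed heat gain when on, constant loss always."""
    return temp + (0.5 if heater_on else 0.0) - 0.1

setpoint = 21.0   # the outcome we want, not a rule about heater timing
temp = 18.0

for _ in range(30):
    heater_on = temp < setpoint    # the feedback comparison
    temp = step_room(temp, heater_on)

print(f"Settled near {temp:.1f} degrees")
```

The point of the analogy: you declare the outcome and let the loop adjust the behavior, rather than fixing the rules and hoping the outcome follows.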
Perhaps there is a happiness gap and women tend to be happier with their options in life than men. I make more than my wife, but it comes with great cost; she relaxes when she gets home at night because she can't take her job home with her; I don't relax at night because I'm constantly worried about finishing my workload, emails, clients to deal with, and so forth. She also took years off to give birth to two kids, costing her lifetime income and advancement, but it's been the most amazing experience of her life and she still beams about it. I've never had that same feeling from a single dollar I've made.
I think her point was that sexism from a manager is different from sexism from the family, in the sense that governmental policy focuses on managerial sexism. If the assumptions are incorrect, the solutions will be ineffective and the policy will be useless at best or harmful at worst. This is the difference between recognizing a cause in general and prescribing a specific solution, and I think your outlook illustrates why complicated issues are so hard to solve. You aren't seeing things other people don't see - you are misinterpreting, and using your misinterpretation as a source of superiority, which insulates you from counterarguments. I don't mean this as an insult, and I wish I could confer tone over the inter webs.
Not quite. It's the job of HR to retain and hire good employees, terminate or help improve bad employees, and mitigate the company's risk with respect to potential violations of contract, health & safety, employment, and labor law, etc. You pay people fairly to keep them happy and productive; it's not a goal in its own right. Sometimes you need to pay them more than what you think is "fair"; other times an employee isn't very good and you'd fire them if you had to pay them a "fair" wage, but because they're earning less and they're somewhat useful, you keep them around.
You are correct in that statement, except for one thing which is crucial: women are more vulnerable to all STIs because they are being penetrated (the same goes for gay men). When women end up getting STIs such as certain strains of HPV, they have a chance of developing cervical cancer. Men, however, are more or less carriers (although research is coming out that esophageal cancer and rectal cancer could be caused by HPV, and this has had a correlation in gay communities). Also, the Gardasil shot to protect against HPV has been found ineffective against some strains in black women specifically. Additionally, any STI increases the likelihood of HIV, which means heterosexual/bi sexually active women are more at risk for almost everything in the realm of STIs and related infections (like BV = bacterial vaginosis). Women also get pregnant from sex and men don't. Women can pass on STIs to babies as well. Women are personally more affected by reproductive health than men. So many clinics have popped up because it is a huge deal and there is a great amount of money spent on it. Now, if you look at California (sorry, I can't speak to the rest of the country atm), men and women do get free condoms, dental dams, and lube if you go to Planned Parenthood, which is often low to no cost.
I'm the guy you hire. No degree, with about a year and a half in the IT industry. Got started installing POS systems and did help desk for an Internet company. I have some college education; I lie and say I have an associate's degree. Just got hired recently as a systems engineer. During the interview they were testing me on my knowledge. They stopped me and said, "You're bullshitting us, but you know enough to bullshit customers. We like your personality and can teach the rest to you." I asked for 40k; they gave me 28k to start and 30k on hire after 3 months. I live in Houston, TX. I sit in a server room in a lawyer's office and tell people to turn it off and back on again. I almost had a 52k job, but the guy that interviewed me later shot his hand off with a handgun. Guess his boss didn't think he had good judgement anymore. What really sucks though is I have to sign a contract after my 3-month probation: 3-year can't-quit/be-fired, and a 4-year non-compete. Also, I have to have my A+ cert once the 3 months are up. I figure I should get my A+ cert and look for another job. 30k is just so low; I used to make $600 a week waiting tables in fine dining. Oh yeah, as an installer of POS systems I made 35k, with no previous work experience with computers or a degree.
We have hired a few degree-holding individuals for our small cable modem ISP (sub-20k customers). Most people fresh off the college boat are useless. They may know concepts, but they have zero real-world experience in doing things. We had one guy who demanded he be paid more because, quote, 'he knew more'. Problem was he didn't; he also couldn't configure simple NAT on a Cisco router in less than a week. We will help educate on our exact cases, but if they cannot google to save their life, they need to get out of IT.
We're entering an era now where you really have to think about what you want to do post high school. That can be tough for a 17-18 year old. Way back in the day, if you got decent grades (and/or your family had $) you went to college, and everyone else went into the military or blue-collar work at factories. All of these roads had the earning potential that'd allow a person to own a home, buy a car, raise a family, etc. Then we had the shutdown of the factories, and college became looked at as the "only way out". This narrative was pushed pretty heavily from the late 80s - 90s up until the current time. However, things have quietly changed. College costs have risen at incredibly high rates. College doesn't provide the same value that it did in the past, so now you HAVE to know what you plan on getting out of the time/$$$ that you spend at a university. "A job" is no longer an acceptable answer. If you want to go on to postgraduate work or want to enter certain specialized fields, a degree is necessary. Go to the most competitive program you can get into/afford and kick ass. If you are one of those average C+ students who has no idea what he/she wants to do after high school, a college degree could be more of a burden than anything else at this point. What I mean is, a 4-12 month training program (like the program mentioned in this article) and some experience is probably going to get you further than the 4-year degree with the 2.65 GPA from the no-name school. College & the associated loans have become a big hustle, and the poor and middle class are getting fucked right now. You have to FIGHT to make sure you're getting the value out of the money spent. Just like the housing crisis and the credit cards, we'll read more about it in 5 years after the powers that be have moved on to another hustle.
I'm a CTO. I've had almost every job in IT, and I consider myself a jack of all trades. I've developed high-end websites, worked tech support, built servers by hand, administered the network (all layers), and I've managed engineers, help desk, developers, DBAs, QA/test, project managers, and knowledge management teams. I've worked in big corporate, mom & pop, Fortune 100, and in the consulting arena. And I'm here to tell you I do not have a degree, and you do not need one to get ahead; you need one to get in the door. I was able to get in the door because I have an interesting background that got me into IT, plus military experience. That's my door opener; from there it's my experience, attitude, and professional work ethic that land the jobs and keep me there. However, a lot of doors stay closed since I do not have a degree; the higher up the food chain I go, the more doors are closed due to my lack of a degree. As an employer, I look for experience, self-motivation, the ability to learn & adapt, and whether the person can fit in with the current team. If my team is made up mostly of rough-and-tumble ex-sailors, marines, and oil workers, I will not hire an extremely qualified, straight-laced Sunday school teacher. My point is this: get your degree if you can, as it WILL open doors that you need to enter, but more importantly, cultivate a professional work ethic and learn to be self-motivated, focused, and adaptable. If you just can't get the degree due to money or time, join the military, let them train you and pay for your college. Get out with pride, education, and the knowledge that you are part of a brotherhood that will help you. Plus you may get some good stories and experiences that will help open those doors even further.
It's because we have a matured workforce. People will try to steal your job, constantly. You can't waltz in here with your bullshit paper degree. Get that weak shit out of here; we can sense you have no experience or skill and WILL test you with some sort of real-world problem you probably have no fucking idea how to fix, as you repeatedly rub your pocket wanting to use your smartphone to google the correct answers, or beg on sysadmin forums for help, etc. You take that fake-ass paper degree and your dreams of living in midtown and sail your ass back to California or Ohio or wherever they tell you $120K in student loans and a laser-printed diploma are all you need. It's bad enough with all the Russians (with balls bigger than you will ever have) waiting until I am on lunch to come in and try to take my job FOR FREE, just to get the visa and work experience.
I have been talking about this for years. I dropped out of school and started as a bench tech for Best Buy (pre Geek Squad). Had my MCSE, A+, and CCNA by the age of 20. I live and breathe this business. I truly love it. At 35, I make 200k. I manage, and implemented, nearly every facet of our systems: Cisco IOS, all brands of firewall, server and desktop OS, a VMware cluster consisting of nearly 150 hosts and 250 virtual machines, shared storage, VPN, Citrix XenApp, and I am directly responsible for the daily workload of 2500+ end users that make up $500M in revenues. Funny thing is, my other teammate makes the same, and he has a GED with a pretty similar career path to mine. In fact, the only guy we have that has a degree is the weakest, technically speaking. He makes up for that with an over-the-top work ethic. Certainly these are arbitrary observations, but there is something to this that speaks to the psyche of the typical IT guy. Devs seem to be more likely to come from a 4-year school, in my experience.
Depends on the technology you build your mesh upon (you're thinking of ZigBee-type meshes). A target of 5 megabits uncapped would be sufficient to carry a 1080p stream and is a modest goal using modern standards, especially when bringing peak use into play. Wi-Fi signals overlap all the time; it's just a matter of organizing the channels into a nice loopback system instead of the chaos of multiple unorganized smaller networks.
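A rough capacity sketch of why channel organization matters (every figure here is an assumption for illustration; real throughput varies wildly with hardware and interference):

```python
# Back-of-the-envelope mesh capacity per path (all numbers assumed).
stream_mbps = 5      # per-user target from the comment (one 1080p stream)
link_mbps = 100      # assumed effective throughput of a single Wi-Fi link
hops = 4             # assumed hops to the uplink; on a shared channel,
                     # every relay hop re-transmits and eats capacity

usable_mbps = link_mbps / hops
print(f"Concurrent 1080p streams on this path: {int(usable_mbps // stream_mbps)}")
```

Putting alternating hops on non-overlapping channels relaxes that division, which is the point of organizing the channels instead of leaving them chaotic.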
The $2k reference point is highly misleading. A single shipping container is only 8'x8' across and would feel extremely small to the vast majority of people. Even in the article, they show a layout that uses 4 shipping containers merged together into one structure. Even a small home would need 2 or 3. The container(s) would form just the basic outside frame of the house, and when building a house, the basic frame is pretty cheap compared to the rest. Every other part would still apply, from groundwork (excavation) and rough-in (gas, electric, plumbing/water) to insulation and finishing. While you'd save on frame and insulation costs, you'd have other costs to take their place. For instance, a vapor barrier: you'd probably want a rubber or plastic coating on the inside surface of the steel, or the finished surfaces would rot. A vapor barrier would be cheap, though. The vast majority of fixtures and rough-in items are designed around normal housing standards. Things like whether your studs are 16 or 24 inches OC (on center) make a difference to how certain things hang. It is assumed that there will be spacing between studs and sill plates for ducts, vents, and flues. Without that, you'd have to add your own framing or rough-in space, which would take away from the container's original value.
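For scale, a quick floor-area comparison using standard container dimensions (8 feet wide; 20-foot and 40-foot lengths are the common sizes):

```python
# Floor area of shipping-container layouts (standard external sizes).
width_ft, length_ft = 8, 40

area = width_ft * length_ft                  # 320 sq ft per container
print(f"One container:   {area} sq ft")      # roughly a small studio
print(f"Four merged:     {4 * area} sq ft")  # ~1,280 sq ft, a modest house
```

And that's before interior walls, insulation, and framing eat into the already narrow 8-foot width.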
So we need to amend anti-trust laws for the case of regional monopolies:

- If you exit a market in which you are the sole provider of a service deemed necessary (telecommunications basically is), all hardware ownership defaults to the local government, to lease or sell as it sees fit.

- Regional monopolies shall be regulated as a utility until such time as a competing provider of an equivalent service exists. It is determined that land-line cables are the only reasonable competition for land-line-provided services. Air and satellite are considered acceptable competition, so long as the cost is not prohibitively different within a region.

In essence: retroactively outlaw any anti-competition agreement within a region, or make such agreements cost-prohibitive to maintain. Then hard-line the providers into competing with each other. Eventually, failure to compete will effectively turn the lines over as public property, to be maintained and owned by local governments and towns, which can then lease the lines out to providers. Local contractors can be hired to maintain the regional lines, which creates local economic stimulus. And as far as small/medium business goes? It doesn't negatively impact (most of) them. Of course the big telecoms will bitch and complain. But then, they will bitch and complain at the idea that they would actually have to compete in a free market driven by supply and demand.
Just to add ammo to this point: I got curious about the laws around this, and found this paper (PDF).

> If a credit card holder orders merchandise and the merchandise is not delivered, the credit card-issuing bank is required to treat the matter as a billing error and resolve it (i.e. get the card holder reimbursed or the merchandise/services delivered). However, if a debit card or ACH is used, no comparable federal law requires the card issuer to become involved. For example, if a consumer uses a credit card to purchase a computer from an Internet merchant and the merchant declares bankruptcy after processing the transaction but prior to shipping the computer, the credit card holder has a right to reimbursement from the card issuer under the TILA and Regulation Z billing error provisions. The card issuer, under card association rules, would then charge back the transaction to the merchant bank. However, if a debit card or ACH is used, no comparable right exists and the consumer would have to file a claim against the seller in bankruptcy court (as a general creditor) and hope for reimbursement. This reimbursement would typically not occur or, if it did, it would generally involve mere cents on the dollar.

Also on pages 6-7 (PDF 10-11) we have this gem:

> Under TILA the credit card holder can be held liable for the lesser of $50 or the amount obtained by the unauthorized use before notification to the card issuer about the loss, theft or possible unauthorized use. This is generally the maximum consumer liability irrespective of when the card issuer is notified. Under EFTA the rules are more complex -- three possible tiers of liability are specified. ... (3) an unlimited amount depending on when the unauthorized electronic fund transfer occurs ... If a stolen debit card is used to initiate the transaction, all three tiers of consumer responsibility are potentially applicable. However, if the transaction is an ACH transaction against a deposit account and no card or personal identification number is used, then only the third tier of consumer responsibility is applicable.
Uh, credit card companies are lending money too, dude. Like, that's their core business. Allow me to explain the credit card model, as it has existed in the past and how it is being disrupted. This is high level, so naturally I will leave out some nuance and detail. I apologize in advance, but you can spend your entire career trying to understand the nuances of the credit card business. In the old model, there are 4 major types of players, not including the consumer: Merchants, Acquirers, Networks, and Issuers. In the old model, when you go to the store (merchant) and buy something with your credit card, it's usually swiped at something called a terminal. The terminal is given to the merchant by an acquirer (alternatively known as an acquiring bank). They charge a fee for this, roughly ~200bps per transaction. That fee is then split amongst the networks (Mastercard, Visa) and the issuers (Chase, Citi, etc.). The networks basically act as an interface between the issuers and the acquirers; there's a huge amount of data processing and risk involved in playing that role. By the way, there are some exceptions to the model. Amex and Discover are both issuers and networks, and Chase has launched something called Paymentech which allows it to act as an acquirer as well. Finally, you have the issuers. These are the big companies you know (and probably don't love): Chase, Wells, Citi, BoA, etc. They issue the credit cards to the consumer and assume the risk associated with that. That's what makes it so hard to dispense with issuers: they're the ones literally lending the money to people. The only way to get rid of them is if you have huge companies like Apple, with huge capital reserves, willing to join the fray. The issue there is that you expose yourself to HUGE regulatory risk by essentially becoming a bank. There is a lot of oversight and there are a lot of requirements associated with lending, which I'm sure major tech companies want no part of. So, about the disruptions. Disruptions like Apple Pay, Google Wallet, Softcard, Square, etc. are more worrisome for the acquirers. As you move towards NFC and in-app payments, there is very little need for traditional terminals. That's not to say that there isn't some disruption for issuers: these new wallets cut into the share of interchange that the issuers get, because they have more leverage than acquirers. That said, services like Apple Pay theoretically also drive spend volume UP, increasing the size of the interchange pie. If I'm being honest, I know the least about how this affects networks. All in all, it's a pretty phenomenally exciting time to work in payments. Lots of cool new tech, some of which will actually have a lasting impact on the business model.
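To get a concrete feel for where that ~200bps goes, here's a rough Python sketch. The split percentages below are purely illustrative placeholders - real interchange and assessment rates vary by network, card type, and merchant category - so treat every number as an assumption, not a published rate.

```python
# Illustrative split of a card transaction fee among the players
# described above. All rates here are made-up placeholders; real
# interchange schedules vary by network, card type, and merchant.

def split_transaction_fee(amount, total_fee_bps=200):
    """Split a merchant discount fee (in basis points) among players."""
    fee = amount * total_fee_bps / 10_000  # 200bps = 2.00%
    # Assumed shares of the total fee (hypothetical):
    issuer_share   = 0.75  # interchange, paid to the card issuer
    network_share  = 0.10  # assessment, paid to Visa/Mastercard
    acquirer_share = 0.15  # markup kept by the acquiring bank
    return {
        "merchant_receives": amount - fee,
        "issuer":   fee * issuer_share,
        "network":  fee * network_share,
        "acquirer": fee * acquirer_share,
    }

print(split_transaction_fee(100.00))
# roughly: {'merchant_receives': 98.0, 'issuer': 1.5,
#           'network': 0.2, 'acquirer': 0.3}
```

Notice how the issuer takes the lion's share under these assumptions - which is exactly why the wallets fight them over interchange rather than over the acquirer's sliver.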
While that may be true, you have to consider one thing: 95% of the population are not important enough for someone to find and infiltrate their private computer network. It just makes no sense to look some stoner dude up. If you have your data saved in a cloud, that's a whole different story. There are thousands or millions of accounts saved there, and THAT makes sense to get your fingers on: you only have to get access to one target. The return on investment is way bigger in that scenario; instead of one account, you probably get hundreds or more, even after a lot of people change their credentials and so on.
Googling" might be too literal, but I hire with the expectation that my engineers and IT guys can learn anything and everything that needs to be learned to do the job. If a support team can't support a product, then it better be a problem with the product (and it better be someone else's shitty product. :) ) If a new product is demonstrably supportable but, after a reasonable amount of time, an employee still cannot support it, then he needs to be replaced regardless of how good he was at something else. As i've said, there are special cases where specialties are necessary, but usually, specialties by themselves are insufficient. If I hire a Java programmer, they better be able to learn C# if I need them to do C#. If I hire an IT guy that can code in PowerShell, he better be able to learn Bash when it comes time to automate some stuff in linux. I don't expect them to know everything immediately just because they can Google so of course there needs to be generous runways, but if they can't or refuse to be constantly learning and growing, they won't succeed here. I can't think of a single specialist here where they do the same thing 40 hours a week, year over year. It just doesn't happen.
It's not unusual for non-IT folks (who generally lead these companies) to completely ignore and budget-stomp the IT folks into submission ... "why does IT need any good tools? We could be using that money for our sales reps to sell stuff instead.... suck it IT, and stop telling us we have to be more secure!" "oh, and btw, we're outsourcing most of you with cheaper, lower skilled workers ... at the end of the day, you're just not part of our core business" (attitude of most large companies towards IT).
Well, how do you think these evil ISPs got to where they are now? Our local Gov't and municipalities chose the winners and losers on who could provide Internet and cable. Gov't stifled competition in local areas, making the current ISP a Gov't-backed monopoly. The solution isn't MORE Gov't; it's abolishing the current regulations and opening up the local market to more competition. Giving power to the FCC will do nothing more than hand them censoring power, via control over what is "acceptable" content, and open the door to additional Internet taxes.
Because almost all of Reddit chose to ignore the fact that he hasn't been a "cable lobby insider" for over 30 years. He's a "lobbyist" in the same way a 50-year-old with a PhD is a "high school graduate". There was never any certainty that he was going to bend-over-and-fuck anyone other than Mrs. Wheeler. People just love to brandish their pitchforks.
I never said I thought they were the same. I just don't find much meaning in the way one party distinguishes itself from the other; it all just seems shallow. "We believe in X, they believe in Y." "We want X, they want Y." But both parties are concerned with one thing: advancing their party's agenda. Sometimes it'd be stupid not to be bipartisan; other times one party opposes the other just because. Democrats in the past have done on other issues exactly what Republicans are doing with the Net Neutrality issue. That's what politics is.
> then there's duty of care

What? Do you think they are nurses or something? Duty of care is about not allowing harm to come to others. It has no relationship to competitive or financial interests (unless you are bound to a particular financial role where the duty is to act in the interest of the patron, such as an investor or a power of attorney).
> scripting is the simplest form of programming, and for that reason it's my personal threshold for measuring computer literacy.

Then your threshold is way too high. I don't want to sound too judgmental, but I'm getting the feeling from your comments that you intentionally set the bar too high so you can perch on it and look down at others. Scripting is not relevant or important to the vast majority of computer users. I'm not saying it's not a good thing to learn, or that it isn't a basic skill if you work primarily in a computer-related field. But there is next to zero reason for Grandma, a receptionist, a filmmaker working in Premiere or Final Cut Pro, or a designer working in InDesign to be intimate with scripting and the command line. Yet all of them can be computer-literate. I'd go one further and say that for a lot of tasks the command line is the wrong tool. I'm as big a fan of vim as the next guy (or at least one of the next five people, probably), and I use it for basically all my simple scripts and quick edits. But it's really not the best tool for working on bigger projects.
A few things: High blood cholesterol is linked to increased risk of a heart attack. She had a stress test 2 years ago; a lot can happen in two years, and a negative stress test is not 100% predictive of an absence of heart disease. Cardiac catheterization (the "surgery" they refer to in the article) is not actually a surgery. It's a minimally invasive procedure where they go in through an artery in your wrist or groin and test the blood flow through the arteries in your heart. Since they put a stent in her heart, they found a blockage in one of her coronary arteries. These blockages are caused by plaque build-ups, which are usually caused by poor diet/lifestyle over a long period of time.
Fair point about the school acting irrationally when combating a bomb threat - seriously, rounding up everyone in the same small area is retarded. But you aren't making any strong point at all about how a phone actually helps your kid be safer. You weren't safer because you texted your mom during a bomb threat, and your kid will hardly be more likely to be saved in the extremely rare chance they are kidnapped. As if someone going through the effort of kidnapping your kid is going to just leave them the phone - seems like it'd instantly get thrown out the window. If teachers want to ban smartphones in class, that makes a lot of sense. It made sense to me why I wasn't allowed to play Game Boy in class back in the day, and a smartphone is orders of magnitude more distracting than that.
The author doesn't seem to understand the word spectrum. The spectrum of HTML colors, for example, ranges from 0x000000 (complete absence of light in every channel, i.e. black) to 0xFFFFFF (full intensity in every channel, i.e. white). Would his argument, then, be that because black results from the absence of light, it is not a color? How would he mark the two ends of a spectrum?
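For what it's worth, the two endpoints are easy to see in code. A tiny Python sketch (the `hex_to_rgb` helper is just for illustration):

```python
# The two ends of the 24-bit RGB "spectrum" mentioned above.
# Black is zero intensity in all three channels; white is full
# intensity in all three. Both are perfectly valid colors.

def hex_to_rgb(hex_color):
    """Convert a 0xRRGGBB integer to an (R, G, B) tuple."""
    return ((hex_color >> 16) & 0xFF,
            (hex_color >> 8) & 0xFF,
            hex_color & 0xFF)

print(hex_to_rgb(0x000000))  # (0, 0, 0)       -> black
print(hex_to_rgb(0xFFFFFF))  # (255, 255, 255) -> white
```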
The thing about Albrecht is that he is just a charismatic geek. He loves doing geeky stuff, not because it's hip or trendy at the time, but because he enjoys it. This becomes very clear if you watch "The Totally Rad Show". Kevin, on the other hand, likes technology, but it's not the same relationship. He doesn't play games, read comics, or keep up with movies. He likes technology when it is shiny and new, but as soon as the shine starts to fade he wants something shinier and newer. Of course we all want the shiny and new things, but Kevin seems to only want them because they are shiny, not because they are a cool technological advancement to be played with.
True story: When I walked by my co-worker's desk, I used to always see him on Digg. I'd say, "WTF? Why are you on Digg? Get to Reddit!" He'd say, "Meh, it works fine." I'd shake my head and walk away in disgust. We've always had this back and forth of sending each other links, but we'd always send the direct link - to YouTube, for instance. Instead, I began sending the link to the Reddit thread. This went on for a while, and when I'd walk by I started noticing Reddit open in his browser - not just the homepage, he was deep into the comments. Still, on his second screen or in another tab I could see he was continuing to use Digg as well. Then I sent him a link to the Reddit Enhancement Suite, walked over to his desk, and forced him to install it in Chrome. Everything changed. He's sending me links from Reddit and I haven't seen him on Digg in weeks. I still can't get him to create an account and post comments, and that's my current goal. I figure it's only a matter of time...
How about you actually read the bill before you mislead people?

> Two public hearings, at least 30 days apart, have to be held before setting up municipal broadband. Any feasibility studies, plans, and such associated with the proposal have to be made available to the public before the meeting.

This right here is a huge barrier. Why? The municipality is effectively unable to advertise its PoV on this; the telcos and cable companies have no such restrictions, though, and cable can run free ads whenever it wants, making the idea into a bogeyman. (160A‑340.1.A6) The monopolists/duopolists actually get a month and a half before the FIRST public hearing, so really they get two and a half months to lie to people. (160A‑340.3) If the municipality does somehow manage to surpass these hurdles, it then has to spend a minimum of five months trying to negotiate with a private contractor to build out service before it can even start doing so itself. (160A‑340.6.A, 160A‑340.6.C, 160A‑340.6.F) The bill also forbids the city from expanding service beyond its city limits, a disadvantage the incumbents certainly do not suffer under. (160A‑340.1.A2) Of course, it makes it very easy to discontinue service, since unlike starting service that doesn't require any kind of vote. Funny that, hey? (160A‑340.1.B1)

> Cannot provide the service at below cost.

> Must pay to the city general fund the equivalent of the fees and taxes that a private provider would have to pay in that city. There are similar provisions covering county and state taxes.

The cost calculation rules are not permitted to take into account any savings the city obtains by sharing infrastructure for multiple purposes, and the fee also includes property taxes that the city is normally not required to pay. In short, it artificially increases the price of service for subscribers to the municipality's service. (160A‑340.1.A8, 160A‑340.1.A9)

> These requirements do NOT apply to municipal broadband in "unserved areas". An unserved area is an area where only 10% or less have broadband (and satellite does not count as broadband).

Actually, it's an area where 50% or less have access, and qualifying requires filing a petition with the NCUC (North Carolina Utilities Commission) for the area to be designated as unserved - and guess who is likely to have sympathetic voices on that? Even if they don't, the incumbents have at least a month to start gumming up the works with an "objection". Oh, and it has to use U.S. Census data, which, need I remind you, is only collected every 10 years - an eon in IT. (160A‑340.2.B)
As an NC resident, it is important not to look at this decision in a vacuum. One of the more frustrating aspects of NC government is that municipalities are not necessarily independent political entities, but exist and function by the will of the General Assembly. In other words, municipalities may not do anything without signoff from the GA. This (supposedly) prevents urban areas from getting an unfair advantage over rural areas in the attraction of people and jobs, but more importantly it keeps power concentrated in the hands of the State. In this environment, it is not surprising that the GA would be against cities independently getting into the ISP business; it flies in the face of the way the state government runs. Additionally, municipal autonomy was a minor issue in the 2008 gubernatorial campaign. Perdue's side supported the status quo. McCrory supported greater autonomy in city decision making. Of course, this stemmed from the issues he had as mayor of Charlotte and, as is apparent from Republican support of this bill, is only supported by a minority in the state GOP.
ARGH. After this comment I'm just fucking done with this whole thread. The whole lot of you are a bunch of back-ass weirdos who make fun of the world for not embracing technology, and yet you can't see the huge benefit of QR codes. You will soon have a QR code on your ID badge instead of a barcode, one that stores some metadata and is more secure. You will have a QR code on every business card you see. Every ad will have one in the corner so smartphones can learn more about the product. People forget their roots, I think. We live in a world where "rly, l8r, ttyl, rtfm" passes for everyday language, and yet a little square of machine-readable shorthand is somehow a step too far.
What bothered many developers wasn't that they were crushing competitors but that they had worse technology and, because they won, the world was forced to deal with that technology. Windows was a big step back from the state of the art at the time (it's debatable whether it ever really caught up; there's a reason most websites run on Linux). Internet Explorer is another example of something the world would've been better off without, but we have to deal with it.
Well, I can understand that. He does seem to be softening up and becoming less evil. The corporate Microsoft is a different story. Anyone into computing knows about the SCO warfare and the constant litigation and bullying that Microsoft directs at hardware manufacturers. Just very shady and uncool business practices - like when IBM and MS worked together on APIs, MS abandoned the project and released a direct competitor based on IBM's know-how, and even made the API so close it was compatible. IBM turned around and tried to pass themselves off as the businessman's Microsoft, until NT came out trying to win that business back. MS also bought QDOS, which was a blatant copy of CP/M, and renamed it MS-DOS to sell to IBM. Just a shady company with a history of forcing out competition and threatening unending litigation against companies that challenge their dominance. That is why Linux has always had such a hard time breaking into the desktop, though it runs most servers in the world and even runs on your Android phone. Yeah, admittedly I am a Unix/Linux enthusiast who also uses Windows; Unix and Linux pay my bills. At home I use Linux on my personal machine and Windows on my home theater, and I use a Linux-based Android phone. My point is that Linux is completely free and very usable in most situations, especially as developers increasingly drop support for IE due to development costs. I also have Windows on my personal machine for gaming - World of Tanks is just so much fun. I can understand why, man.
So now the soccer moms with internet connections will go crazy wanting to buy this prize-winning bulb at this ridiculous price, but then wait! 1-2 years down the line, the technology has doubled its efficiency at a fraction of the price. So now I go and buy the new cheap technology and make the same savings in a fraction of the time. Moral of this lesson? Don't give in to this expensive energy-saving crap, because technology has this thing where it improves ALL THE TIME. Better shit is just around the corner, so only buy the energy-saving appliances when they become dirt cheap, or at least close to affordable, because there WILL be a better, more efficient model sooner than you think.
Apple is in a unique position. It can sell on its brand alone, which allows it to sometimes sell products that aren't right up there in terms of specs. No other manufacturer has that luxury, so their specs must be at least as good as Apple's. If Apple has BOTH the brand image AND the specs, then why the fuck would someone buy something that lags in both? The remaining two things are ecosystem and experience. No one can touch Apple on ecosystem, so that leaves us with experience. Windows 8 had better be one mother of an OS for it to make up for the three other deficiencies I mentioned.
I don't think anyone should be suing anyone. All this litigious nonsense is very anti-competitive and not in the best interests of the consumer at all. But you're missing my point. There is a big difference between making similar products and making identical products. Looking specifically at the USB wall adaptor and the USB-to-30-pin connector... what Samsung has come out with looks ridiculously identical to Apple's. In the end, it's just a USB power adaptor and cord... certainly not worth banning a phone/tablet over, or being able to sue over. But still... let's be real, you shouldn't be able to make identical copies of your competitor's products.
IBM has an entire division for IP protection. It is absolutely mind-blowing how they operate. Usually, all they do is threaten to sue for patent infringement, and the other party agrees to pay. Most of the time, they don't even make a specific claim about a specific infringement. Rather, they use a formula, something like: (the chance you are using one or more of our patents, say 60%) × your revenue × a royalty % × other factors = $YY, which you now owe us each year. Paying this is usually far cheaper than getting into a patent war with IBM, which probably has some validity to it. They just throw their mighty weight and huge patent catalog around, and companies have to accept it as a cost of doing business.
While providing power to the device (no external power supply - you CAN use one, but I don't) and transferring digital data, yes. The device works by using 2 of the stereo RCA inputs (i.e. 4 mono channels) to read a frequency-encoded (analog) signal from either CD players or turntables being played in real time. It has built-in A/D converters to turn that signal into a control signal for the computer. The Scratch Live software running on the computer (Mac or PC) uses that frequency (control) signal to determine the position of the song (MP3, WAV, M4A) and the speed at which it's playing. If you've ever seen a DJ perform, you know that they manipulate their control surface (CD players, turntables, or controllers) a lot. It then sends that processed data (i.e. the music) back down the same USB cable to the device, digitally, and the device decodes it using built-in D/A converters and sends it out another pair of RCA jacks. It does all of this at the same time, and there is no noticeable lag/delay/latency. The reason I know it's under 10ms is that you can set the latency in the software - you can, in theory, drop it to 1ms - but that increases the load on your CPU dramatically, and the difference between 1ms and 10ms isn't worth the trade-off. As a DJ with 13 years of experience playing on everything from REAL vinyl, to home CD players, to $2000/ea CD decks, and now this software with vinyl controllers every weekend: there is no real difference between playing with REAL vinyl and playing on this. That's how little latency there is. USB has 4 wires - 2 are dedicated to power (charging or running, it doesn't matter), the other 2 are dedicated to data. And data can be ANYTHING that can be described with 1's and 0's. What USB CAN'T do natively is transfer analog AND digital data at the same time. You'd need extra wires for that; i.e. a dock connector. In fact, though I haven't read the USB standards, I don't think there are ANY USB devices out there that use the 2 data wires for strictly analog communication. And, although I wouldn't recommend it, you could in theory (with some very out-of-standard mods to the USB hardware at each end) use those same 2 data wires (or even the power wires, really) to transfer analog information as well. No multiplexing required.
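To put rough numbers on that buffer/latency trade-off, here's a small Python sketch. The 44.1 kHz sample rate is an assumption (it's a common audio rate), not necessarily what Scratch Live actually runs at:

```python
# Rough sketch of the buffer-size/latency trade-off described above.
# Smaller buffers mean the audio engine must wake up far more often,
# which is why dropping from 10ms to 1ms hammers the CPU.

SAMPLE_RATE = 44_100  # samples per second (assumed)

def buffer_stats(latency_ms):
    """Buffer size and callback rate implied by a given latency."""
    samples = int(SAMPLE_RATE * latency_ms / 1000)
    callbacks_per_sec = SAMPLE_RATE / samples
    return samples, callbacks_per_sec

for ms in (1, 10):
    samples, cps = buffer_stats(ms)
    print(f"{ms:2d} ms latency -> {samples:4d}-sample buffer, "
          f"~{cps:.0f} wakeups/sec")
# 1 ms  ->  44-sample buffer, ~1002 wakeups/sec  (hammers the CPU)
# 10 ms -> 441-sample buffer,  ~100 wakeups/sec  (comfortable)
```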
Wrong. (ish). If you're talking about analog AV and some out-of-band control, then you are correct. What Apple's dock connector allows (through the use of extra wires that are out of spec for standard USB) is for non-USB systems and devices to interact with the device. This means it's cheaper for, say, a snowboarding-jacket maker to implement an arm-based push-button control system through the connector. However, it is completely possible for USB to do power and data at the same time over the 4 wires it has - "data" being a loose term here, which can include control (by which I assume you mean volume up/down, track skip, that sort of "control") and A/V (digital audio and video flow down USB just the same as any other kind of 1's and 0's). Simultaneously? Yes, absolutely, though not in the perfect sense of the word; it's accomplished using multiplexing and demultiplexing, analogous to time slicing (scheduling) in a modern operating system. It's also important to note that the Apple dock connector is going away, if only because of the European trade agreements under which Apple adopted the standard Micro USB charger.
I'm not sure why using JavaScript is laughable. There's nothing stopping you from implementing end-to-end encryption outside of the standard SSL tunnel. You use JS to encrypt all the data client-side and then send the ciphertext, even in the clear. It's the same idea as SSL - it's not like SSL data travels over a special internet. Edit: Nevermind; after a few minutes of research from other links in the thread, I have my answer.
If modification of the JS were a common problem, e-commerce would not work, since JS already has access to all your credit card info. Any page with secure info that also used JS would be at risk. If the JS is delivered securely over HTTPS, I don't see the problem with that attack. I do accept the argument that the JS implementation could be untested and insecure - e.g. it doesn't have a good way of picking suitable primes for RSA, isn't random enough, etc.
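To illustrate the idea being debated - encrypt in the client, then let the ciphertext travel over whatever channel you like - here's a minimal Python sketch standing in for what the JS would do in the browser. It uses the third-party `cryptography` package (`pip install cryptography`); the payload and key handling are purely illustrative.

```python
# Encrypt the payload *before* it ever touches the transport layer,
# so even a plaintext channel only carries ciphertext.

from cryptography.fernet import Fernet

# In a real end-to-end scheme the key would be agreed between the two
# endpoints (e.g. via a key exchange), never shipped with the data.
key = Fernet.generate_key()
cipher = Fernet(key)

payload = b'{"card_number": "4111 1111 1111 1111"}'
token = cipher.encrypt(payload)   # this is what goes over the wire
print(token[:40], b"...")

assert cipher.decrypt(token) == payload  # only the key holder can read it
```

The catch the thread converges on, of course, is that in a browser the encryption code itself arrives over the very channel it's meant to protect, so whoever controls the JS delivery controls the scheme.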
Ah, well, if it makes you feel better, I've thought about this stuff so much that posts like the one I wrote previously don't take nearly as long to write as they appear to, but it did touch a nerve. One way to think of it is that the more demanding people are of their employers, the more their employers are forced to capitulate by providing benefits and higher salaries. Far from being a zero-sum game, when one employer is forced to pay their employees more, other firms have to pay more in order to compete for the best workers. The people you describe as "making life difficult for others" are only making life difficult for a certain kind of person: the legal person defined as a corporation. Any other headaches they generate for you, while significant, pale in comparison to the headache you will have when they are fired. When you automate people out of a job, your salary won't go up; it will go down. That's because the pool of unemployed people will increase, and no matter how smart you are, it WILL affect you. Case in point: my wife is an MD, and salaries for skilled surgeons have been cut in half over the past 20 years. That's a job (MD) that actually produces positive value. Where is the money going? It's going to insurance companies, wealthy investors, and other people who do nothing more than push paper around. So, you are increasing the efficiency of people who are already working to the bone, so that billionaires can get a better return for doing nothing other than sitting on their asses and collecting interest. That's "efficiency". What's sad is that we could immediately have a dramatic, across-the-board increase in actual efficiency. How? Create an economic system that spreads the rewards for increased efficiency across the board (rather than treating them as profit and leaving workers out of the equation). There are a ton of ways to increase efficiency that are obvious to the guy who has done the job all day long, for years, and would never be obvious to you. If gains were shared, then all of those millions of workers, rather than fighting you tooth and nail, would probably end up doing your job for you, even better than you. Increases in efficiency would be celebrated, because they would mean our work days were shortened, or we got paid more for the same work day. Instead, in our current system those gains go to the top, and labor is punished through job loss. But really, what else would we expect from a system based on profit/rents/interest, which for millennia were properly defined as ill-gotten and unethical? What can we expect from a system designed by merchants and nobility to serve their needs? Certainly not an efficient economy - anything but an efficient economy.
> chrome says my java version is 6.0.290.11. Does this mean i'm in the clear as not having java 1.7 i.e. the exploit version?

I love some things about Java, but the release schedule (version management) is terrible. Deprecation support is great, but client-side updates are embarrassingly fragmented. Your comment here is one example of why Chrome's designers chose not to rely on this spotty release schedule. (Other factors, such as Oracle, licensing, etc., drive decisions like this too.) Chrome therefore keeps its own internal version (by default). Java versions and "libraries" can be made to support or deny features, and other factors, such as Chrome's sandbox design, support this practice. There are perhaps a few disadvantages to maintaining an internal library, including but not limited to: Security patches / testing for custom libraries require constant attention. (Example: software A has a security patch, but company Z is not as fast and does not patch their Az software quickly. As a result, this publicly exposed patch immediately increases, to some extent, the vulnerability of their product until they catch up.) Large deviations over time from the core/central release may cause new patches, otherwise considered easy, to break a ton of existing features; Java, with its great deprecation support, does not suffer as heavily from this. Other people build applets for a core version and may not cater to Chrome's quirks / library choice. Etc. Since I have wandered considerably from your original question, I want to just add some relevance from the article:

> In advance of any official patch, and because of the seriousness of the vulnerability, malware researchers at DeepEnd Research have developed an interim fix that they say seems to prevent the rogue Java code from executing its payload, although it has received little testing.

Here you can see how Chrome would have to deal with both a slow release schedule and terrible (client-side) version management. Depending on the central release agency (Oracle) can be argued to be intolerable from Google's point of view: they likely see this release schedule, etc., as outright negligence. So while there may be disadvantages for external developers, support for new features, and higher maintenance, a slow release schedule and inconsistent versions installed on clients provide enough motivation for decentralizing/internalizing Java. This was too long an answer to a simple question, but I am in a waiting room, bored, on my cellphone. Disclaimer: I made a lot of generalizing statements above that are meant to show common arguments for Chrome's philosophy and may not even address the primary motivating factors. Chrome also does not really develop Java, but rather chooses versions and specific libraries. Furthermore, my opinions on "terrible" and "best practices" are just that: my opinions. There have been, and are, examples of worse release schedules and worse version fragmentation... however, I believe standards should never be judged merely by comparison to those that are already failing or considered poor.
Exactly, that's why I don't like it when people get the wrong impression, especially when the Java reflection API is so prone to these types of attacks. Hell, the project I have on GitHub could be used for malicious intent; it allows you to invoke private methods extremely easily. Now, we also know the JVM is big enough that loopholes can creep in (with update 7 an .execute() reflection method was added); it's only a matter of time before Oracle fixes the issue.
Java is not 'shit'. The syntax is remarkably similar to C++; if you read the white books, the annoyances of that language were removed (header files, structs), and the pointer operator was removed but is still reserved - you can still get at pointers through Java reflection. If you look carefully at Java's history, you'll see that security was a top priority from Java 1.0 onwards, and numerous exploits were fixed. I know this will sound anecdotal to all of you, yet I still think it's worth raising: Java was one of the more secure programming languages; there have been many other VM languages that are less secure. I believe the influx of malware in the past four years is down to the popularity of Java, especially with the Android API. So, please, get your facts in a coherent order. I recommend you read the Java white book, or even go further than programming 'Hello World' in Java - hopefully then you will be qualified to persuade people on what to use on their damn computers.
The government has nothing to do with this. YouTube implemented its own copyright removal tool, which allows media companies to auto-remove videos, subject to YouTube's internal rules. As a reward for this, YouTube doesn't ever get sued, even though it is easy to find copyrighted material on YouTube. YouTube still follows DMCA requests, but what these companies do isn't a DMCA takedown notice. A wrongful DMCA takedown notice can be contested in court, and it enables the actual video owner to collect a lot of money from the people who filed it. A wrongful YouTube removal enables the actual video owner to complain to Google. The DMCA is actually a pretty good law once you read it yourself. If I host a video and someone else tries to censor me by claiming it is copyrighted, I can get money from them by suing. If it turns out I am right, then the person I am suing has to pay me damages and lawyer's fees (basically an easy open-and-shut case if you know someone is trying to censor you).
I agree that none of those four reasons are sufficient explanations. In my opinion, to redistribute another's work, even though it requires just a marginal amount of energy or time, is a violation of the mutual trust between the producer and me. The producer uses their creative ability and technical skill to create a work. This work is an original expression of the author, and because of that it is theirs. They have created it, or at the very least caused its manifestation as we see/hear/sense it. Because this talent/skill of bringing ideas from thought into existence is not easy, we as a society agree to allow the producer the exclusive right to control that work (who gets to experience it, use it, etc.). In a world of only ideas, this is one of the few things we can do to reward someone (without giving them material things). I believe that as it becomes easier to make the journey from thought to expression (3D printers, improved tools for creation, etc.), this system will begin to dwindle. To some extent it already has. However, we cannot force the adoption of one system while wholesale abandoning the other. The old system will die out because it is outmoded, but it will have to die on its own.
Gold has far more intrinsic value because it's ingrained in culture and meshes well with the human psyche. Gold has the dual attributes of being shiny and a status symbol, thanks to being relatively uncommon and wearable. Women are attracted to men with high status, shiny things, and wearable objects that show off to other women.
What about the fact that the people who download movies are also the biggest paying consumers of them in legitimate form? Your whole argument is based on the belief that downloading a movie equals lost revenue. I'll concede this is true to a very small degree (1 downloaded movie != $40 in lost revenue), but as a general claim it's a fallacy that's been disproven over and over. The real issue is the movie and music industry's refusal to change their business models. They could offer a better product and a better way to deliver it to encourage more revenue. Instead, they've made a conscious decision to attack the problem through lobbying and MPAA/RIAA legal actions.
I've seen too many responses of "downloaders want free stuff" and "justifying theft" etc... There is more to the cost of content than the price placed on it by a retailer; there is also the opportunity cost of acquiring the content. For example, Baen publishes via electronic means as well as print. The electronic version is available at the same time as, and in part before, the print release. The price is reasonable for the content you receive, and they place no restrictions on your use. As a consequence, I've never pirated their content - there's been no benefit to doing so. I can go right to their site, pay a nominal fee, and have it without any difficulty. Similarly, TV shows, movies, and music that are available for a reasonable price and in a reasonable way online are less likely targets of piracy as well. If you want to see someone pirate your content: restrict it to certain platforms, delay a release on regional or "avoiding cannibalizing broadcast viewers" grounds, price it as high as or higher than the physical media, etc.
He doesn't seem quite right... kind of like an eccentric millionaire. Also, seeing as

> "A chemistry lab, $20,000 in cash and a stock of firearms were found in the house..."

I wouldn't be surprised.
It's all about timeframes. Let's say you patent a new steering column for a car and release it. GM sees it, decides to use it, and begins integrating it into its new car designs. In the best case, it would take 6 months before the cars hit the road. So here's the first timeframe: the time between deciding you are interested in a patented design and it reaching the public. Google could do it in an afternoon; with physical goods you're looking at months. Then you fail to reach an agreement before launch day, you go to court, and they get an injunction to stop GM from selling the product. The war lasts a year; meanwhile the cars sit on a lot. The second timeframe: the time the product is valuable. A phone is really only profitable for a year, so every week an injunction is active, you're losing real value. GM could sell the same cars the next year at pretty much the same price, or even retrofit them to add additional value - there are more options. And finally there is the timeframe of the patent. How often might a steering column see a real improvement that is patentable? Maybe every couple of years, just to throw out a crazy number. But the software of 6 months from now will need the innovations of today. Slide-to-unlock and universal search are good examples of this: those features are basic ideas which people will build on. Patents last over a decade; imagine if somehow Netscape had patented the browser - we would only now be seeing competition in that area. Google wouldn't even exist. Your software would look and feel like cable-box software, since no one would be able to build on what came before.
I. Software patents are ridiculously easy to find prior art for, and all of them are obvious. II. There are "5 million restrictions" based on 250k patents; checking all of them would take 2 million patent attorneys working full time. It would be impossible for an inventor to check every patent.
I'm not the guy you asked, but my understanding of the situation is that the number of lawyers far exceeds the need at the moment, so new lawyers are pretty much stuck getting the law equivalent of a post-doc. Except law school is expensive, and positions like that usually drive people away from academia into 'industry' - but in this case they're already in the industry....
See, you might have fooled these other guys, but I think you're actually an atheist. The way you talk just smacks of atheist confirmation-smearing propaganda. Which I'm not entirely against; I just think that it's bound to occur in some places, and this fits the bill.
The two convicted of rape were juveniles and not subject to the same punishments as adults who commit the same crime. The juvenile justice system is very different from the adult criminal justice system because it assumes, in part, that children do not have the capacity to make their own decisions and cannot be held to the same level of culpability as adults.
This article is ridiculously biased. I too am biased, and in the same direction as the article, but this is too much, even for an opinion piece:

> Rape can be constituted as a war crime, according to the United Nations. It is among the worst offenses one human can commit upon another. Hacking, on the other hand, while maybe not a totally victimless crime, rarely has implications of physical harm.

Let's dissect this piecewise:

> Rape can be constituted as a war crime

In war, not in peacetime.

> It is among the worst offenses one human can commit upon another.

Together with everything else that can ruin a person's life. Which could happen electronically (cyber rape, blackmail, fraud, identity theft, etc.).

> Hacking, on the other hand, while maybe not a totally victimless crime

Hacking (unless that means security penetration testing, which it does not here) is not at all a victimless crime. It's not "maybe not totally victimless"; it's "definitely not even a little victimless", or more concisely, "also has victims". I seriously object to the use of two diminishing words and a double negation (which is generally weaker than a simple positive assertion). It's like saying "I shot her, but she's maybe not totally braindead".

> rarely has implications of physical harm.

This is the worst part. It's wrong in three different ways: firstly, the point it wants to make is irrelevant; secondly, the comparison is not applicable; and finally, it's false. The first point is that you can ruin somebody's life without physical harm. For example, I could take all your possessions, break all your contact with friends and family (by ruining your reputation), even take your identity and use it for crime, break your career, etc. And I can do this to hundreds, thousands, or even millions of people at the same time. The second point is that physical harm (at least in the strict sense) was not involved in that particular rape case. The harm done is strictly psychological; she was, after all, unconscious when it happened. And you and I both agree that it's still terrible regardless - hence the broken comparison. Thirdly, e-crime can lead to physical harm. Someone can blackmail you into harming yourself (or others). Or you could lose your job due to hacking (identity theft, slander, your company being hacked and ruined, etc.), leaving you with insufficient funds for medicine or other basic needs. Or take more straightforward examples, such as military hacking leading to soldiers dying, or a traffic-controlling device (for cars, trains, boats, or airplanes) being broken, leading to an accident. Etc.
The people who can afford to run for election are, for the most part, backed by third-party donors with separate interests. Regardless of who you vote for, you're still going to get someone who is beholden to the highest bidder. Unless there's a way to match the lobbying power of the copyright industry with our own, it's a futile struggle. I know, I know, typical apathetic voter here, but just look around you at what continues to happen. Got rid of SOPA? Here's Six Strikes and another form of SOPA. Oh, there are nationwide protests lasting for months on end? Let's change nothing. I'll agree that OWS lacked direction and a clear message, but there were protests across the nation by the droves that resulted in... nothing.
If you already have a DOCSIS 3.0 infrastructure in your area, then the cost to give you more speed is not reliant on the last-mile infrastructure, but rather on the ISP's capacity (but only to a certain point). Increasing speeds always requires more capacity on the backend, but the actual cost can vary greatly. Comcast could probably afford to give you 50Mbps without raising prices and while still maintaining a decent profit margin. They could go further too. Giving you a gigabit for the same price while maintaining a decent profit margin would be impossible for them to do (right now at least), since that would require massive investments in infrastructure and their backend.
According to some rough math I just did, 4k video (which will be pointless to stream for a very long time to come due to LCD limitations, not to mention client-side video processing) needs 4-6 times the bandwidth of a 1920×1080 video. Depending on the compression used, an HD video needs 2-6 Mb/s, so worst case you need 36 Mb/s, or ~3.6% of a 1Gb connection...
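Spelling that math out in Python (the bitrates are the rough figures above, not measured values):

```python
# The back-of-the-envelope math from the comment, spelled out.

HD_BITRATE_MBPS = 6      # worst-case 1080p figure used above
UHD_MULTIPLIER  = 6      # 4k needs ~4-6x the bandwidth of 1080p
GIGABIT_MBPS    = 1000

uhd_bitrate = HD_BITRATE_MBPS * UHD_MULTIPLIER   # 36 Mb/s
share = uhd_bitrate / GIGABIT_MBPS
print(f"4k stream: {uhd_bitrate} Mb/s = {share:.1%} of a gigabit line")
# 4k stream: 36 Mb/s = 3.6% of a gigabit line
```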
Gbps, not GBps. Gb is gigabit and GB is gigabyte; there are 8 bits to a byte. So since a gigabyte = 2^10 megabytes, a gigabit = 2^10 / 8 = 2^7 megabytes (128 MB).
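Or, as a quick sanity check in Python (using the same binary 2^10 prefixes the comment does):

```python
# Bits vs. bytes: Gbps measures bits per second; GB measures bytes.

BITS_PER_BYTE = 8

gigabit_in_megabytes = 2**10 / BITS_PER_BYTE   # 1024 Mb / 8
print(gigabit_in_megabytes)                    # 128.0, i.e. 2^7 megabytes

# So a "1 Gbps" line moves at most ~128 megabytes per second.
```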
This is what's gonna happen: TW's AOL offered dial-up, and they still offer it for those who want dial-up. That business model is still profitable for them; they rake in approximately $150 million per quarter from it. Likewise, they will continue to offer their shitty cable broadband in areas free of any considerable competition. If they face stiff competition from Google (or other) fiber providers, they will play the price game, in just those markets. Other than that, they won't innovate. They will stay with DOCSIS 3.0 for the next 15 years. They don't need to innovate, simply because they have a cow they can continue to milk. As long as the FCC is infested with lobbyists, they will still have the cow, with extremely sore tits.
I know this thread is getting old, but Comcast story here. In August of '11, my roommate and I signed up for a packaged deal, TV and internet. In December '11, we had an issue with our bill, called in, and the guy was real nice and helpful and actually put us on a better deal (we were shocked by this). Then we get a letter around April '12 from Comcast, saying 'oops, we screwed up, and the internet speeds you are getting are not supposed to be offered in the deal you have. We're sorry for this. Please pay us the amount you should have been charged for the internet speeds you have been getting.' Needless to say, there was no way in hell we were going to pay for their mistake. We call again, win the call-center lottery again, and get another nice, helpful person. This person gets rid of the charges by putting a credit on our account. We paid the rest of the bill on the last day of April and were at $0 owed. This is where shit goes downhill. We check our bill halfway through May and find that not only are the charges back, but we were charged a $50 late fee. Beyond furious at this point (3 bill issues within 6-7 months), we call again and immediately ask for a manager. The manager proceeds to call us liars, claiming the credit doesn't exist and was never posted to our account. Literally, they had zero record of someone from their offices putting the credit on our account. Not only did the manager call us liars, but he was snotty with us the entire conversation. They made us pay not only the late fee but also the charges for the 'mistaken' internet speeds. If it weren't for the fact that they have a monopoly in our area, we would switch.
You only need about 9Mbps to stream 1080p content. If you have 30Mbps (like I do), you can stream 3 full 1080p movies at the same time. If you play games online, upstream speed matters more, but it's still not significant. If I had 1Gbps of bandwidth, I could stream over 100 full HD movies at once, or download a full HD movie in well under a minute (if the server on the other end can keep up).
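Roughly, in Python (the 9 Mb/s per-stream figure is the one above; the movie size is an assumed number for illustration):

```python
# Stream-count math from the comment, plus a download-time estimate.

STREAM_MBPS = 9   # per 1080p stream (comment's figure)
MOVIE_GB    = 5   # assumed size of a full HD movie

for line_mbps in (30, 1000):
    streams = line_mbps // STREAM_MBPS
    seconds = MOVIE_GB * 8 * 1000 / line_mbps   # GB -> gigabits -> Mb
    print(f"{line_mbps:4d} Mb/s: {streams:3d} simultaneous streams, "
          f"{seconds:.0f}s to pull a {MOVIE_GB} GB movie")
#   30 Mb/s:   3 simultaneous streams, 1333s (~22 min)
# 1000 Mb/s: 111 simultaneous streams, 40s
```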
It will take 5 years to a decade for some carrier to license, possibly add rights-of-way, run the CEQA gauntlet (California), provision, and deliver 100Mb. By the time it is available, we will be streaming at the very minimum 4K HD or some holographic Google Glass bullshit, at which point 100Mb will be as useful as the 64kb of PC memory mentioned above. Just pull your heads out of your asses, take my money, and bring me my fucking gigabit internet already, ffs. I've said it before and I'll say it again: you know how every year our laws become progressively more restrictive regarding low-flow plumbing fixtures, lower emissions and higher MPG for autos, and greener power consumption for household appliances? We NEED THAT SAME FUCKING LEGISLATION PASSED FOR MINIMUM ACCEPTABLE INTERNET SPEEDS!!! The USA ranks someplace between 27th and 35th in the world for internet speed, depending on which studies you read. Basically a third world country. When I needed to use the internet in Tijuana, the internet cafe charged me by the kilobyte. That's the same business model all our carriers/ISPs are switching over to here.
So there's no good damn reason why the faster the speed, the more expensive it gets, right? Stop scamming us with your useless excuses. Sure, you don't need 1Gbps, but some people would like to use it: LAN centers, big LAN parties, or a group of streamers in one home. This also echoes the pattern of "Cars? Who needs them, we have feet." "Phones? Who needs them, we have mail." "Email? Who needs it, we have letters." And in the near future: "1Gbps for the high-tech applications that were shaped by it? Who needs it, we have DSL."
I'll assume you need Linux support. Intel chipset drivers, if I recall, are in the stock kernel, so there should be no issue there. The same goes for the Realtek Ethernet driver. Intel graphics? If someone can confirm that, I'd be grateful, but last I checked there should be drivers available.
IPTV in 4k/8k. Watching one show, recording another, possibly recording yet another thing too, all while a few other users at home also use the connection for regular internet applications such as gaming and video or audio chat (which require low delay), downloading stuff (which takes up a lot of bandwidth), or streaming something (YT/Netflix/Hulu/...). The crucial point is that those few low-delay applications (gaming and video/audio chat) keep working fine while all those high-bandwidth applications (downloading and streaming) are at work. The second important point is that the high-bandwidth, low-jitter applications (streaming) also keep working without rebuffering. Those two points can be accomplished either by QoS bullshit (which boils down to throttling) or by simply providing more than enough bandwidth.
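As a rough sizing sketch of that household in Python - every per-application bitrate below is an assumption for illustration, not a measured figure:

```python
# Add up the concurrent demands and see what headroom is left.
# With enough headroom, the latency-sensitive apps never get squeezed
# and no QoS throttling is needed.

household = {
    "4k IPTV (watching)":     36,
    "4k IPTV (recording)":    36,
    "4k IPTV (recording 2)":  36,
    "game session":            3,   # low bandwidth, latency-sensitive
    "video chat":              4,   # low bandwidth, latency-sensitive
    "large download":         50,   # grabs whatever is left
    "Netflix stream":         16,
}

total = sum(household.values())
for line in (100, 1000):
    print(f"{line:4d} Mb/s line: demand {total} Mb/s, "
          f"headroom {line - total} Mb/s")
#  100 Mb/s line: demand 181 Mb/s, headroom -81 Mb/s  -> needs QoS
# 1000 Mb/s line: demand 181 Mb/s, headroom 819 Mb/s  -> just works
```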
This is a trick to get people to like Comcast; I am sure of it. It would be one thing if he had said "the industry needs some work", but he specifically said Google Fiber. Any of the current ISPs are capable of lowering their prices and increasing their speeds, but they aren't going to be the ones to do it until they absolutely have to.
It has been reported that the NSA has specifically targeted encrypted messages for long-term storage. Encryption might be the worst thing to do if you are looking to stay under the radar. The best bet is to not use electronic communication for anything you wouldn't want to see at trial. Nobody wants to consider this as an option, of course, but since there will never be any way to confirm that the NSA or other entities have stopped collecting data, even if they agreed to, I have concluded that for the rest of the history of the net we are compromised. I am sure there are those who will claim I don't understand how public key encryption works. Trust me, I do. I also know that even if a message were uncrackable, its identification as coming from a particular source is tantamount to admitting guilt in the eyes of the police state lackeys.
It's not an argument of "either have complex features or not". It's a matter of deciding the best way to bring those features to the user. If I can install a completely new application with a single command (or a nice GUI manager), then there is no problem.
I personally dislike ads, but am aware that people like you are giving me fucking awesome things for nothing. I run without Adblock, and even sometimes click ads if they look interesting (gasp). I'm pretty happy with your site, ads and all, and happy to be throwing money at you via Kickstarter as well. I don't feel entitled to an ad-less site for that, but I am grateful to those (which here means quite a lot of the comic authors I read) who at least try to make sure that fucking awful ads (particularly noisy ones that start on page load) don't come up.
I've only had one issue with Amazon customer service, and it was with a foreign rep who couldn't understand me (I was born and raised in the US with no difficult regional accent, such as a New England or "deep southern" drawl). Other than that it has been a breeze. My favorite part is that I've never had to "wait for the next available customer support technician". They have a system where you enter your phone number and they call you; in my experience this has never taken more than 30 seconds. (That one issue was a clusterfuck because the address I was at was not numbered sequentially, so UPS didn't believe it existed; Amazon ended up sending TWO replacements because of UPS's dumbassery.) Another weird transaction occurred when one of my credit cards had "reward/bonus cash" tied to the account that I could use directly on Amazon. However, my card got cancelled due to my wallet being stolen, so it would no longer work until I got a new number. I wanted to buy something with my free cash ASAP and couldn't, because the card was no longer valid and they wouldn't let me pay with more than one card. I called them up and explained the situation, and the customer service rep said, "Hmmm, never had this happen before, let me think... how about you buy a digital Amazon gift card for the amount of your bonus cash and have it emailed to yourself? Then use that plus your other card on file." It was an easy solution and something I'd never have thought to do.