_softwareengineering.285568
Good day. I need help with a specific case. A little background: we have an existing app, something like a PDF viewer, in which you can draw freehand, highlight, add highlights with notes, add action items, etc. In this discussion I'll cover two database tables: 1) the ACTION_ITEM table, containing the list of action items for a document, and 2) the HIGHLIGHT table, containing the highlights and their notes.

The difference between a highlight note and an action item is that an action item can be assigned to a person, given a due date, and marked as completed.

We recently added an enhancement that lets the end user convert a highlight note into an action item, so we added an [ ] Action Item checkbox. When you convert a note to an action item, the UX stays the same: it still looks like a note, it is still associated with a highlight, and it is still located on a page, except that it has Action Item checked. (Our usual action items are not associated with a page.)

The programmers (including me) have already coded the enhancement (on multiple platforms) in the way we knew best: we added the needed new columns to the HIGHLIGHT table: due date, assigned person, status, and an action-item indicator.

Now here's the tricky part: there's an ongoing design discussion about changing the physical design to move highlight notes marked as action items into the ACTION_ITEM table. This means copying all the needed columns from the HIGHLIGHT table (highlight rectangle, note location, page number, document id, etc.) into the ACTION_ITEM table. This applies only to highlight notes marked as action items; a highlight note not marked as an action item would still be stored in the HIGHLIGHT table.

Currently, marking a note as an action item just changes a column; under the new design we would have to transfer the entire record to another table. Not only would the physical design change, but so would all the logic related to its storage and retrieval (which is a lot). The reason given is that the Systems Analyst said that instead of programming for convenience, we need to align the physical design with the business concept. In this case an Action Item is a different business concept, so it needs to be in the ACTION_ITEM table.

My question is: is the Systems Analyst correct? Should we merge physically different entities into the same table just because they represent the same business concept? That seems illogical to me as a programmer. It's not merely for convenience; it's for efficiency.
Business concept design vs logical database design
design;database;business logic
I think the SA is right (in that you should have actions in the actions table), but for more concrete reasons than just 'align with business definitions':

1: By adding the extra columns to HIGHLIGHT you have broken the normalisation of that table.

2: You presumably do stuff with actions; now you have to check two places to get them all, breaking the single-source-of-truth principle.

Obviously the same reasons would apply to 'copying the highlight fields to action', so your SA is just as wrong if they are suggesting that.

Instead I would have added an association between actions and highlights, i.e.:

ACTION_ITEM: Id, FieldsForAction
HIGHLIGHT: Id, FieldsForHighlight
HighlightsWithActions: HighlightId, ActionItemId

So when you tick the highlight to make it an action, you populate a new ActionItem and add a row to HighlightsWithActions specifying the selected highlight and the Id of the new ActionItem you created. You can add unique indexes to limit the relationship to 1-0/1, 1-many, or many-many as required.

In the long run this will make the programming easier, as the business will ask for new stuff like 'make all the actions flash red on Tuesdays' etc.
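To make the association-table idea concrete, here is a small runnable sketch using SQLite (via Python's sqlite3). The table names follow the answer's outline; the specific columns are illustrative guesses, not the asker's real schema:

```python
# Sketch of the answer's association-table design, using sqlite3 from the
# standard library. Column choices are illustrative, not the real schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ACTION_ITEM (
        Id INTEGER PRIMARY KEY,
        AssignedTo TEXT, DueDate TEXT, Completed INTEGER DEFAULT 0
    );
    CREATE TABLE HIGHLIGHT (
        Id INTEGER PRIMARY KEY,
        PageNumber INTEGER, Note TEXT
    );
    CREATE TABLE HighlightsWithActions (
        HighlightId INTEGER REFERENCES HIGHLIGHT(Id),
        ActionItemId INTEGER REFERENCES ACTION_ITEM(Id)
    );
    -- Unique index limiting each highlight to at most one action (1-0/1).
    CREATE UNIQUE INDEX OneActionPerHighlight
        ON HighlightsWithActions (HighlightId);
""")

# "Ticking the checkbox": create the action item, then link it.
hid = con.execute(
    "INSERT INTO HIGHLIGHT (PageNumber, Note) VALUES (3, 'fix wording')"
).lastrowid
aid = con.execute(
    "INSERT INTO ACTION_ITEM (AssignedTo, DueDate) VALUES ('bob', '2015-06-01')"
).lastrowid
con.execute(
    "INSERT INTO HighlightsWithActions (HighlightId, ActionItemId) VALUES (?, ?)",
    (hid, aid),
)

# ACTION_ITEM remains the single place to query all actions; page info is
# joined back in only for the actions that came from a highlight.
rows = con.execute("""
    SELECT a.AssignedTo, h.PageNumber
    FROM ACTION_ITEM a
    JOIN HighlightsWithActions x ON x.ActionItemId = a.Id
    JOIN HIGHLIGHT h ON h.Id = x.HighlightId
""").fetchall()
```

Note how the unique index enforces the 1-0/1 relationship the answer mentions: a second link row for the same highlight would raise an integrity error.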
_webmaster.105100
I've just seen that Google suggests sites in a sort of category of results, as shown in the image below. Any idea how to get listed in those results?
How to get listed in Google's carousel results
seo;google;google carousel results
_unix.362288
I have a file ~/.local/share/gsettings-data-convert containing:

[State]
timestamp=1442453369
converted=org.gnome.crypto.cache.convert;gsettings-desktop-schemas.convert;gvfs-dns-sd.convert;org.gnome.crypto.pgp.convert;wm-schemas.convert;org.gnome.crypto.pgp_keyservers.convert;

Googling shows that it is for GNOME, which I've never (knowingly) used. Is this normal? I am running this on cygwin 2.2.1-1, x11 7.5-2.
Why do I have ~/.local/share/gsettings-data-convert?
gnome
_codereview.138646
I need to handle configuration errors and ask the user for the right credentials if something is wrong. I cannot decide between three implementations. Which style is better in Python, and why?

A:

while not backend.check_config():
    click.echo('Invalid Configuration parameters!')
    for param_name, question in backend.config.check_config_requires:
        value = prompt(question, default=getattr(backend.config, param_name))
        setattr(backend.config, param_name, value)

B:

while True:
    try:
        backend.check_config()
        break
    except ConfigurationError:
        click.echo('Invalid Configuration parameters!')
        for param_name, question in backend.config.check_config_requires:
            value =prompt(question, default=getattr(backend.config, param_name))
            setattr(backend.config, param_name, value)

C:

while True:
    try:
        backend.check_config()
    except ConfigurationError:
        click.echo('Invalid Configuration parameters!')
        for param_name, question in backend.config.check_config_requires:
            value =prompt(question, default=getattr(backend.config, param_name))
            setattr(backend.config, param_name, value)
    else:
        break
Requesting credentials from users
python;comparative review
I am assuming that backend.check_config() is different between your implementations: in one case it seems to return a boolean, but the other two make it look like it might raise an exception. My answer is based on that assumption.

The name of the function is check_config. If it were parse_config, for example, I would say that it should throw an error, because it is expected to do something but can't because of a problem. Since it is merely checking, I think a boolean fits better, so my preference goes to A.

Between B and C, it looks like the only difference is whether break is in the try block or the else block. I am happy to see that you are aware of the else block, but my opinion is that it isn't necessary here. The else block is useful when you are trying to exclude a certain piece of code from the try block. That makes sense when the try is inside another try, or when you don't expect an error and want to know if one happened. In this case it is impossible for break to throw an error, so I believe the else block is an unnecessary complication.

I did notice that you took out the space between = and prompt(...) in B and C. PEP 8, the Python style guide, recommends a space on each side of = in assignments. (It recommends no spaces in a function call such as dict(x=4, y=6).) That could be just a typo, but this is a review.

Based on how you use it, I would think that backend.config would be better as a dictionary. That way, you could do backend.config[param_name] instead of the more complicated getattr(backend.config, param_name) (or setattr(...)). If config is created externally, you might consider using vars(config) to create a dictionary of its attributes. Without further information I couldn't guarantee that it would be easier, but that's how it looks.
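To make the comparison concrete, here is a runnable sketch of variant A with the dictionary suggestion applied. The backend here is a stand-in invented for illustration (a check_config that returns a boolean, a plain-dict config); it is not the asker's real API, and click.echo is omitted to keep the sketch dependency-free:

```python
# Variant A (loop on a boolean check) with config as a plain dict.
# make_backend and its fields are hypothetical stand-ins for the real backend.

def make_backend():
    config = {"user": "", "token": ""}
    requires = [("user", "Username?"), ("token", "API token?")]

    def check_config():
        # Merely checks: returns a boolean instead of raising.
        return all(config.values())

    return config, requires, check_config

def configure(config, requires, check_config, prompt):
    # Keep asking until the configuration checks out.
    while not check_config():
        for param_name, question in requires:
            config[param_name] = prompt(question, default=config[param_name])
    return config

# Simulated user input instead of an interactive prompt:
answers = iter(["alice", "s3cret"])
result = configure(*make_backend(), prompt=lambda q, default: next(answers))
```

With the dict in place, the getattr/setattr pair collapses into a single subscript assignment, which is the readability win the review points at.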
_unix.47918
Assuming a simple grep such as:

$ ps aux | grep someApp
1000 11634 51.2 0.1 32824 9112 pts/1 SN+ 13:24 7:49 someApp

This provides much information, but as the first line of the ps output is missing, there is no context for the info. I would prefer that the first line of ps be shown as well:

$ ps aux | someMagic someApp
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 11634 51.2 0.1 32824 9112 pts/1 SN+ 13:24 7:49 someApp

Of course, I could add a regex to grep specifically for ps:

$ ps aux | grep -E 'COMMAND|someApp'

However, I would prefer a more general solution, as there are other cases in which I would like to have the first line as well.
How to grep a specific line _and_ the first line of a file?
bash;command line
Good way

Normally you can't do this with grep, but you can use other tools. AWK was already mentioned, but you can also use sed, like this:

sed -e '1p' -e '/yourpattern/!d'

How it works:

The sed utility works on each line individually, running the specified commands on each of them. You can have multiple commands by specifying several -e options. We can prepend each command with a range parameter that specifies whether the command should be applied to a specific line or not.

1p is the first command. It uses the p command, which normally prints all the lines. But we prepend it with a numerical value that specifies the range it should be applied to. Here we use 1, which means the first line. If you want to print more lines, you can use x,yp where x is the first line to print and y is the last. For example, to print the first 3 lines, you would use 1,3p.

The next command is d, which normally deletes all the lines from the buffer. Before this command we put yourpattern between two / characters. This is the other way (the first was to specify a line number, as we did with the p command) of addressing the lines that a command should run on. It means the command will only apply to lines that match yourpattern. Except, we use the ! character before the d command, which inverts its logic: now it removes all the lines that do not match the specified pattern.

At the end, sed prints all the lines that are left in the buffer. But we removed the non-matching lines from the buffer, so only matching lines are printed.

To sum up: we print the 1st line, then we delete all the lines that do not match our pattern from the input. The rest of the lines are printed (so only lines that do match the pattern).

First line problem

As mentioned in the comments, there is a problem with this approach: if the specified pattern also matches the first line, that line is printed twice (once by the p command and once because of the match). We can avoid this in two ways:

Adding a 1d command after 1p. As I already mentioned, the d command deletes lines from the buffer, and we specify its range with the number 1, so it only deletes the 1st line. The command would be sed -e '1p' -e '1d' -e '/yourpattern/!d'.

Using 1b instead of 1p. It's a trick: the b command lets us jump to another command specified by a label (that way some commands can be skipped). But if no label is given (as in our example), it just jumps to the end of the script, ignoring the remaining commands for that line. So in our case, the last d command won't remove the first line from the buffer.

Full example:

ps aux | sed -e '1b' -e '/syslog/!d'

Using semicolon

Some sed implementations can save you some typing by using a semicolon to separate commands instead of multiple -e options. So if you don't care about portability, the command would be ps aux | sed '1b;/syslog/!d'. It works at least in the GNU sed and busybox implementations.

Crazy way

Here's, however, a rather crazy way to do this with grep. It's definitely not optimal; I'm posting it just for learning purposes, but you might use it, for example, if you don't have any other tool on your system:

ps aux | grep -n '.*' | grep -e '\(^1:\)\|syslog'

How it works

First, we use the -n option to add a line number before each line. We want to number all the lines, so we match .* - anything, even an empty line. As suggested in the comments, we can also match '^'; the result is the same.

Then we use alternation: the \| special character works as OR (this escaped form is GNU grep's basic-regex alternation; with grep -E you would write an unescaped |). So we match if the line starts with 1: (the first line) or contains our pattern (in this case syslog).

Line numbers problem

Now the problem is that we get these ugly line numbers in our output. If that bothers you, we can remove them with cut, like this:

ps aux | grep -n '.*' | grep -e '\(^1:\)\|syslog' | cut -d ':' -f2-

The -d option specifies the delimiter and -f specifies the fields (or columns) we want to print. So we cut each line at every : character and print only the 2nd and all subsequent columns. This effectively removes the first column along with its delimiter, which is exactly what we need.
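Since the commands above filter live `ps` output, their results vary from run to run. The same sed one-liner on canned input (the sample lines are made up) shows the behavior deterministically:

```shell
# Deterministic demonstration of the sed one-liner: the first line is kept
# via '1b', every other line survives only if it matches /syslog/.
printf 'USER PID COMMAND\nroot 1 init\nsyslog 42 rsyslogd\n' \
  | sed -e '1b' -e '/syslog/!d'
# prints the header line followed by the matching "syslog" line
```

The "root 1 init" line is deleted because it neither is line 1 nor matches the pattern.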
_softwareengineering.280528
I have a lot of database experience, but virtually no application programming experience. At work, we have an EDMX? model generated from entities in the database, and we transform T4 templates to create what I assume are classes. I think this is the Entity Framework? From there, the application (in C#) takes the data and (uses an MVC structure?) to bind the data to XAML (it is a Silverlight application). I assume the XAML is embedded into the webpage using Javascript, which is contained in HTML.I struggle to find a generic top-down roadmap online that can explain how data gets passed around in such a structure, but I was wondering if anyone had a good solid explanation of how this generally works? If I can get a clearer picture of how data is passed, I can figure out what areas I need to improve on, knowledge-wise.
How can I approach application programming from a database background?
database;mvc;entity framework;silverlight;xaml
Entity Framework is an Object-Relational Mapper; it translates the results of SQL queries into objects and collections. For example, this query:

SELECT name, address, city, state, zip FROM customers;

might produce a collection

IEnumerable<Customer> result

of objects that look like this:

public class Customer
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

What happens from there depends on your application's structure. The ASP.NET server may expose a series of REST services that produce JSON or XML that is consumed by the Silverlight app (the usual arrangement):

{
  "firstName": "John",
  "lastName": "Smith",
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021"
  },
  "phoneNumber": [
    { "type": "home", "number": "212 555-1234" },
    { "type": "fax", "number": "646 555-4567" }
  ]
}
_codereview.159557
I'm learning Sass and responsive websites, I've made this simple portfolio website with the help of a bootstrap template (freelancer) and I'd like to know if I'm doing SASS correctly and ways to improve my website code. Also is my project structure correct as a standard? Thanks in advance!index.html<body id=page--top> <!-- Navigation --> <nav id=mainNav class=navbar navbar-default navbar-fixed-top> <div class=container> <div class=navbar-header page-scroll> <button type=button class=navbar-toggle navbar__button data-toggle=collapse data-target=#mainNavCollapse>Men</button> <a class=navbar-brand navbar__brand href=#page--top>Evan</a> </div> <div class=collapse navbar-collapse navbar__collapsingNav id=mainNavCollapse> <ul class=nav navbar-nav navbar-right> <li class=hidden> <a href=#page--top></a> </li> <li class=page-scroll> <a href=#portfolio class=navbar__item>Portfolio</a> </li> <li class=page-scroll> <a href=#about class=navbar__item>About</a> </li> <li class=page-scroll> <a href=#contact class=navbar__item>Contact</a> </li> </ul> </div> </div> </nav> <!-- Header --> <header> <div class=container header tabindex=-1> <div class=row> <div class=col-lg-12> <img class=img-responsive header__profileimage src=assets/img/profile.jpg alt=myPhoto> <div class=header__textbox> <h1 class=header__textbox--title>EVAN SURNAME</h1> <h4 class=header__textbox--subtitle>Web Developer</h4> </div> </div> </div> </div> </header> <!-- Portfolio Section1 --> <section id=portfolio class=section1> <div class=container> <div class=row> <div class=col-lg-12> <h2 class=section1__title>Portfolio</h2> </div> </div> <div class=row> <div class=col-sm-4 section1__item> <a href=# class=section1__item--link> <img src=assets/img/placeholder.png class=img-responsive section1__image alt=Project1> </a> </div> <div class=col-sm-4 section1__item> <a href=# class=section1__item--link> <img src=assets/img/placeholder.png class=img-responsive section1__image alt=Project2> </a> </div> <div class=col-sm-4 
section1__item> <a href=# class=section1__item--link> <img src=assets/img/placeholder.png class=img-responsive section1__image alt=Project3> </a> </div> <div class=col-sm-4 section1__item> <a href=# class=section1__item--link> <img src=assets/img/placeholder.png class=img-responsive section1__image alt=Project4> </a> </div> <div class=col-sm-4 section1__item> <a href=# class=section1__item--link> <img src=assets/img/placeholder.png class=img-responsive section1__image alt=Project5> </a> </div> <div class=col-sm-4 section1__item> <a href=# class=section1__item--link> <img src=assets/img/placeholder.png class=img-responsive section1__image--last alt=Project6> </a> </div> </div> </div> </section> <!-- About Section2 --> <section class=section2 id=about> <div class=container> <div class=row> <div class=col-lg-12> <h2 class=section2__title>About</h2> </div> </div> <div class=row> <div class=col-lg-4 col-lg-offset-2> <p class=section2__text>Freelancer is a free bootstrap theme created by Start Bootstrap. 
The download includes the complete source files including HTML, CSS, and JavaScript as well as optional LESS stylesheets for easy customization.</p> </div> <div class=col-lg-4> <p class=section2__text>Whether you're a student looking to showcase your work, a professional looking to attract clients, or a graphic artist looking to share your projects, this template is the perfect starting point!</p> </div> </div> </div> </section> <!-- Contact Section3 --> <section id=contact class=section3> <div class=container> <div class=row> <div class=col-lg-12> <h2 class=section3__title>Contact Me</h2> </div> </div> <div class=row> <div class=col-lg-8 col-lg-offset-2> <form name=sentMessage id=contactForm novalidate> <div class=row control-group> <div class=form-group col-xs-12 floating-label-form-group controls> <input type=text class=form-control section3__form--input placeholder=Name id=name required data-validation-required-message=Please enter your name.> <p class=help-block text-danger></p> </div> </div> <div class=row control-group> <div class=form-group col-xs-12 floating-label-form-group controls> <input type=email class=form-control section3__form--input placeholder=Email Address id=email required data-validation-required-message=Please enter your email address.> <p class=help-block text-danger></p> </div> </div> <div class=row control-group> <div class=form-group col-xs-12 floating-label-form-group controls> <input type=tel class=form-control section3__form--input placeholder=Phone Number id=phone required data-validation-required-message=Please enter your phone number.> <p class=help-block text-danger></p> </div> </div> <div class=row control-group> <div class=form-group col-xs-12 floating-label-form-group controls> <textarea rows=5 class=form-control section3__form--input placeholder=Message id=message required data-validation-required-message=Please enter a message.></textarea> <p class=help-block text-danger></p> </div> </div> <br> <div id=success></div> <div 
class=row> <div class=form-group col-xs-12> <button type=submit class=btn btn-success btn-lg section3__button>Send</button> </div> </div> </form> </div> </div> </div> </section> <!-- Footer --> <footer class=footer> <div class=footer--above> <div class=container> <div class=row> <div class=footer-col col-md-4> <h3 class=footer--above__title>Location</h3> <p class=footer--above__text>3481 Melrose Place <br>Beverly Hills, CA 90210</p> </div> <div class=footer-col col-md-4> <iframe class=footer--above__map src=https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d183615.334415116!2d1.2244688902059417!3d44.02160942347321!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x12ac0de6de9463e9%3A0xc7fb153793253908!2sMontauban%2C+France!5e0!3m2!1sen!2sie!4v1490815128696 frameborder=0 allowfullscreen></iframe> </div> <div class=footer-col col-md-4> <h3 class=footer--above__title>Social Media</h3> <ul class=footer--above__list> <li> <a href=# class=footer--above__btnicon><span class=footer--above__btnicon--fb></span></a> <a href=# class=footer--above__btnicon><span class=footer--above__btnicon--tw></span></a> <a href=# class=footer--above__btnicon><span class=footer--above__btnicon--lkin></span></a> <a href=# class=footer--above__btnicon><span class=footer--above__btnicon--ghub></span></a> </li> </ul> </div> </div> </div> </div> <div class=footer--below> <div class=container> <div class=row> <div class=col-lg-12> <p class=footer--below__text>Copyright &copy; Evan 2017</p> </div> </div> </div> </div> </footer>sass/* FONT FAMILY AND SIZE */$text__family--sans: Lucida Sans Unicode,Lucida Grande,sans-serif !default;$text__size--base: 15px !default;$text__size--xbig: 70px !default;$text__size--big: 50px !default;$text__size--md: 35px !default;$text__size--lowmd: 25px !default;$text__size--sm: 17px !default;$text__size--xsm: 13px !default;/* TEXT COLORS */$text__color--white: #FFFFFF !default;$text__color--black: #000000 !default;$text__color--base: #333333 
!default;$text__color__secondary--light: #ffc03d !default;$text__color__secondary: #e09f16 !default;/* BACKGROUND COLORS */$background__color--navbar: #990000 !default;$background__color--footer--above: #e81e1e !default;$background__color--secondary: #e09f16 !default;$background__color--white: #FFFFFF !default;/* LAYOUT */$padding--xbig: 100px !default;$padding--big: 60px !default;$padding--sm: 30px !default;$padding--xsm: 15px !default;/* ICONS */$bars-icon: \f0c9;$facebook-icon: \f09a;$twitter-icon: \f099;$linkedin-icon: \f0e1;$github-icon: \f09b;/* ------ MIXINS ------ */@mixin text($size, $family: $text__family--sans, $color: $text__color--base, $align: center, $line: normal, $weight: normal) { font-size: $size; font-family: $family; color: $color; text-align: $align; line-height: $line; font-weight: $weight;}@mixin img($height: auto, $width: auto, $border: auto, $radius: 0, $shadow: 0, $display: block) { height: $height; width: $width; border: $border; border-radius: $radius; box-shadow: $shadow; display: $display;}@mixin layout($padding: auto, $margin: 0 auto, $minheight: auto) { padding: $padding; min-height: $minheight; margin: $margin;}@mixin icon($icon, $color: $background__color--white) { content: $icon; color: $color; font-family: FontAwesome; text-rendering: auto; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale;}/* ------------------- STYLES ------------------- */body { margin-bottom: 25px;}p { @include text($size: $text__size--base, $color: $text__color--base);}/* NAVBAR */.navbar { background: $background__color--navbar; text-transform: uppercase; border: none; @include layout($padding: 10px); .navbar__button { @include text($size: $text__size--md, $color: $text__color--white); background: $background__color--secondary; border: 1px solid $background__color--secondary; &:hover, &:focus { background: $background__color--secondary; } &:after { @include icon($bars-icon); @include layout($margin: 0 0 0 6px) } } .navbar__brand { 
@include text($size: $text__size--md, $color: $text__color--white, $line: 28px, $weight: bold); &:hover, &:active, &:focus { color: $text__color--white; } } .navbar__collapsingNav li.active a { color: $background__color--white; background: $background__color--secondary; &:hover, &:active, &:focus { color: $background__color--white; background: $background__color--secondary; } } .navbar__collapsingNav .navbar__item { @include text($size: $text__size--sm, $color: $text__color--white, $weight: bold); &:hover, &:focus { color: $text__color__secondary--light; } }}/* HEADER */.header { background-image: url('../../../assets/img/header-background.jpg'); background-repeat: no-repeat; background-size: contain; width: 100%; @include layout($padding: $padding--big 0 90px 0); .header__profileimage { @include img(300px, 300px, $radius: 50%, $shadow: 2px 2px 8px 2px gray); @include layout($margin: 10px auto 0 auto); } .header__textbox--title { @include text($size: $text__size--xbig, $color: $text__color--white, $weight: bold); } .header__textbox--subtitle { @include text($size: $text__size--lowmd, $color: $text__color--white, $line: 50%, $weight: bold); @include layout($padding: 0 0 $padding--big 0); }}/* SECTIONS */.section1 { background-color: $background__color--white; @include layout($padding: $padding--sm auto $padding--big auto); .section1__title { @include text($size: $text__size--big, $color: $text__color--base, $weight: bold); @include layout($padding: $padding--big); text-transform: uppercase; } .section1__image, .section1__image--last { text-align: center; @include img(250px, 300px, 5px); @include layout($margin: 20px 20px 20px 20px); } .section1__image--last { @include layout($margin: 20px 20px 100px 20px); }}.section2 { background-color: $background__color--secondary; @include layout($padding: $padding--sm 0 $padding--sm 0, $minheight: 500px); .section2__title { @include text($size: $text__size--big, $color: $text__color--white, $weight: bold); @include 
layout($padding: $padding--big); text-transform: uppercase; } .section2__text { @include text($size: $text__size--base, $color: $text__color--white, $align: justify); }}.section3 { background-color: $background__color--white; @include layout($padding: $padding--sm 0 $padding--big 0); .section3__title { @include text($size: $text__size--big, $color: $text__color--base, $weight: bold); @include layout($padding: $padding--big 0 $padding--sm 0); text-transform: uppercase; } .section3__form--input { @include text($size: $text__size--base, $color: $text__color--base, $align: left); } .section3__button { @include text($size: $text__size--base, $color: $text__color--white, $weight: bold); @include layout($margin: 0 0 80px 0); }}/* FOOTER */.footer .footer--above { background-color: $background__color--footer--above; .footer--above__title { @include text($size: $text__size--md, $color: $text__color--white, $weight: bold); @include layout($padding: $padding--sm 0 20px 0); transform: uppercase; } .footer--above__text { @include text($size: $text__size--base, $color: $text__color--white); @include layout($padding: $padding--xsm 0 $padding--big 0); } .footer--above__map { @include img(200px, 350px); @include layout($padding: $padding--xsm 0 $padding--xsm 0); } .footer--above__list { list-style-type: none; text-align: center; .footer--above__btnicon { @include layout(); @include img(50px, 50px, $border: 2px solid $background__color--white, $radius: 100%, $shadow: 2px 2px 2px 2px $background__color--navbar, $display: inline-block); text-align: center; font-size: 25px; &:focus, &:hover, &:active { background: red; } .footer--above__btnicon--fb:after { @include icon($facebook-icon); } .footer--above__btnicon--tw:after { @include icon($twitter-icon); } .footer--above__btnicon--lkin:after { @include icon($linkedin-icon); } .footer--above__btnicon--ghub:after { @include icon($github-icon); } } }}.footer--below { background: $background__color--navbar; .footer--below__text { @include 
text($size: $text__size--xsm, $color: $text__color--white); @include layout($padding: 14px); }}/* MEDIA */@media (max-width: 767px) { .header { margin-top: 90px; } section { padding: 75px 0; } section.first { padding-top: 75px; }}@media (min-width: 768px) { .navbar { padding: 25px 0; -webkit-transition: padding 0.3s; -moz-transition: padding 0.3s; transition: padding 0.3s; .navbar__brand { font-size: 2em; -webkit-transition: all 0.3s; -moz-transition: all 0.3s; transition: all 0.3s; } } .navbar.affix { padding: 10px 0; } .navbar.affix .navbar__brand { font-size: 1.5em; } .header { padding-top: 200px; padding-bottom: $padding--xbig; .header__textbox--title { font-size: 4.75em; } .header__textbox--subtitle { font-size: 1.75em; } } .section1 .section1__item { margin: 0 0 30px; }}Full project live: https://evandcp.github.io/MyWebsite/
Website using HTML/CSS/SASS/Bootstrap/JQuery
beginner;jquery;html;css;sass
I'm only a little familiar with SASS because I work with LESS most of the time, so I'll focus on your markup to get the ball rolling.

Markup

Outline

Keep in mind that in HTML5, sectioning elements can start with an h1 element as well. However, try to create a hierarchical structure. This means that an h1 element directly followed by an h4 element doesn't make much sense, especially when you start to use h2 afterwards for other content:

<h1 class="header__textbox--title">EVAN SURNAME</h1>
<h4 class="header__textbox--subtitle">Web Developer</h4>
<!-- [...] -->
<h2 class="section1__title">Portfolio</h2>

Also, this looks like a subheading, where w3.org says: h1-h6 elements must not be used to markup subheadings, subtitles, alternative titles and taglines unless intended to be the heading for a new section or subsection. (From w3.org 4.12.1 Subheadings, subtitles, alternative titles and taglines.)

The linked specification suggests different approaches to handle the problem. One possibility is this:

<h1>
    EVAN SURNAME
    <span>Web Developer</span>
</h1>

Links without any text

You have some links like this:

<a href="#page--top"></a>
<a href="#" class="footer--above__btnicon"><span class="footer--above__btnicon--fb"></span></a>

Of course, most people will see the icon and understand the link's meaning. But visitors using a screen reader, and search bots, don't have a clue what the link is about. Simply give them a title attribute:

<a href="#page--top" title="Scroll back to top"></a>
<a href="#" class="footer--above__btnicon" title="Follow me on Facebook"><span class="footer--above__btnicon--fb"></span></a>

The same goes for the project listing.

Empty p elements

In your form you have empty elements:

<p class="help-block text-danger"></p>

I think you'll fill these if an error occurs for the input. I would say: insert these elements when they are really needed, and remove the empty paragraphs in the beginning.

Live Page

A few things I've noticed in the live demo:

There's an error in the console, because the file favicon.ico is missing.

It's recommended that <meta charset="UTF-8"> is the first child of the head element. You can see it in all examples on w3.org. This answer to "In <head>, which comes first: <meta> or <title>?" has some more insights: "[...] if your title came before that, it has already been interpreted as ASCII, which could be wrong, depending on what was in the title."

You include font-awesome.css and font-awesome.min.css, which seems redundant.

Appearance of the Live Page

A few more things on the live demo regarding the design*:

The header's blue background image doesn't fill the container for screen sizes over ~1200px width.

The header seems broken on mobile. Only at a few screen widths are the name and subtitle visible. Sometimes the image is larger than the blue background.

In the mobile view the projects are aligned left. It might look better if they were centered like the rest of the page.

There's a white area below the red footer.

It seems that one or more containers/elements are wider than the screen: you can always scroll horizontally a few pixels.

* Seen in Safari 9.1 on macOS.
_unix.234840
I would like to partition a disk, but some partitions should not be mounted. So far I have the following workaround:

part /srv/tmp1 --fstype=ext4 --size=1000 --ondisk=sda

Then, in a post-install script, the partition is removed from fstab, and /srv/tmp1 is unmounted and deleted. I would like to know if there is a 100% kickstart solution.
Kickstart: is it possible to partition without a mount point?
rhel;kickstart
_unix.123731
I have an ext4 partition which holds the disk image files VirtualBox works with. They are all fixed-size images (i.e. the files never change their size). They are defragmented as much as possible (with e4defrag).

I assume that a lot of filesystem features are redundant in this case. That is, since files are never created, never deleted, and never change their size, only reading and in-place writing happens, and the file contents are laid out contiguously on the hard drive, a much simpler filesystem could be used (no need for file attributes, directories, journal, etc.). Theoretically, I could even use logical volumes instead of files in this case (I am just not sure that I want to).

So, my questions:

How do I tune an ext4 filesystem to get the best performance in this case?

Maybe another filesystem is more suitable? (Some filesystem that doesn't support directories, only contiguous fixed-size files?)

Or maybe Linux has the ability to mount part of an existing partition as a file? I.e. I create an unformatted partition /dev/sda2 and then mount the K-th to L-th bytes of it as /somepath1/somefile1.vdi, mount the M-th to N-th bytes of it as /somepath2/file2.vdi, and so on.
Best parameters for ext4 filesystem to handle virtualbox disk images
linux;filesystems;partition;ext4;large files
null
_softwareengineering.226107
So I'm working on a software product where we have a number of fields that the customer can leave blank, some of which are numeric. To persist these in the database we use nullable columns. Easy peasy.

I'm considering the utility of an object-oriented domain model, and one thing that bugs me is the issue of nullable fields. The reason is, in the world of Java and C# one often finds advice against having null values, and indeed writing code with null checks all over the place sucks. And then you get null reference exceptions when you forget to check for null. And it's a mess. So a good approach is to initialize everything when you declare it.

Now, the idea of null actually makes sense for these fields: the customer did not enter a value, thus it has no value (not 0, not -1, etc.). But application programmers and programming languages seem to be configured for binary rather than ternary logic, and also for variables just darn well having a value rather than maybe having a value but also maybe not.

With object-oriented design I presume one could come up with some clever system of representing cases where there is no value, but I personally haven't done the analysis yet. The reason is that I find incremental change sells to people more successfully than radical change does, so proposing an object-oriented domain model that's too object-oriented might kill the idea, and with it my hopes of improving the structure of our software. I'd rather propose a version-0 domain model that works and is easy to sell internally... and so this whole thing about null values weighs heavy on my mind.

Since we are using C#, I could just define nullable numbers; it's very easy, one just adds a question mark. But I'm not convinced that just because it's possible it's also the best approach.

So, with all this background junk out of the way, my question is: how have you/your company handled nonexistent values in an object-oriented domain model, in what ways did you find it effective, and in what ways did you find it ineffective?

My selection criterion for the answer will be the one that seems most sellable, as defined in the background junk. But really I'd like to see what people come up with and learn some new things, so any answer with some thought behind it will receive an upvote from me.

Note: I've seen other discussions about null values in other threads, but nothing that quite lines up with what I want to talk about, hence a new question.

EDIT: My question's scope includes value-typed properties like numbers. So in C#, one way to allow representation of a null purchase price would be to declare PurchasePrice as decimal?, which is the nullable decimal type. I just see a number of disadvantages to numbers (for example) that can also be null, so I'm looking for an alternative.

EDIT 2: The Null Object pattern makes sense for references, and using a nullable type for values appears as if it may be tractable with coalescing. What bothers me about coalescing nullable types is: what if someone forgets to coalesce and we're exposed to a null reference exception? I suppose I could use a static analysis tool which ensures that coalescing is used where expected, to allay that worry.

EDIT 2': In a sense, if someone has made a conscious decision to make a field nullable, they are adding the ability to represent an extra piece of information that would otherwise need another field to represent it, such as a boolean HasValue. Does that mean the anti-null zealots I've met in the past were perhaps incorrect, and measured use of nullity can in fact improve a design?
Handling unspecified values in software
object oriented design;domain model;null
I think you are mistaking something here. The problem of null in languages like Java and C# is that reference types are null by default; that is, null is implicit. That means there is no way to express the idea of a reference type always having a valid value. You always have to check for null. Even if you are sure the value cannot possibly be null, Murphy's law says it will happen sometime, so checking is necessary. There is no way to have the compiler do the checking for you. Of course there are mechanisms like code contracts, but those are relatively new in these languages and require additional work from the developer to get right.

The way nullable value types work is exactly how it should work. The compiler knows there is one additional state and knows how you work with the variable, so it can raise an error when it sees a possibility of dereferencing that null state. For example, adding a Nullable<int> and an int yields a Nullable<int>, because the compiler knows the first value may be null; assigning that result back to a plain int is a compile-time error, which forces you to do the checking beforehand. Even a simple assignment from a nullable to a non-nullable type is checked by the compiler, simply thanks to its type. No such thing is possible for reference types. Of course there is the problem of the programmer just adding .Value to every access and being done with it, but that is a matter of programmer discipline that no tool can fix.

Summed up: your worries about nullable value types are unjustified, because null is only a problem when it is implicit and the compiler cannot check for possible errors of accessing a null value; that is not the case for nullable value types.
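As a footnote to this answer: Java expresses the same "explicit absence" idea with java.util.Optional. A minimal sketch of a domain-model field the customer may leave blank (LineItem and quantityOrDefault are illustrative names, not from the question):

```java
import java.util.Optional;

public class LineItem {
    // Absence is part of the type, so callers cannot forget about it.
    private final Optional<Integer> quantity;

    public LineItem(Integer quantityOrNull) {
        // ofNullable maps a raw null into the explicit empty state.
        this.quantity = Optional.ofNullable(quantityOrNull);
    }

    // Callers must choose a fallback instead of risking a NullPointerException.
    public int quantityOrDefault(int fallback) {
        return quantity.orElse(fallback);
    }

    public static void main(String[] args) {
        System.out.println(new LineItem(null).quantityOrDefault(0)); // 0
        System.out.println(new LineItem(7).quantityOrDefault(0));    // 7
    }
}
```

As with C#'s Nullable<T>, the win is that the "no value" case is visible in the signature; the analogous failure mode to calling .Value blindly is calling Optional.get() without checking, which discipline (or static analysis) still has to catch.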
_codereview.48869
I decided to roll out my own EventBus system which is intended to be thread-safe.Hence a review should focus extra on thread safety apart from all regular concerns.The EventBus can work in two ways:You can register events and listeners directly on the EventBus.You can the methods, of a specific object, that are single argument void-methods annotated with @Event.First the code, then the unit tests below:@Retention(RetentionPolicy.RUNTIME)@Target(ElementType.METHOD)public @interface Event { }public interface EventBus { void registerListenersOfObject(final Object callbackObject); <T> void registerListener(final Class<T> eventClass, final Consumer<? extends T> eventListener); void executeEvent(final Object event); void removeListenersOfObject(final Object callbackObject); <T> void removeListener(final Class<T> eventClass, final Consumer<? extends T> eventListener); void removeAllListenersOfEvent(final Class<?> eventClass); void removeAllListeners();}public class SimpleEventBus implements EventBus { private final static Set<EventHandler> EMPTY_SET = new HashSet<>(); private final ConcurrentMap<Class<?>, Set<EventHandler>> eventMapping = new ConcurrentHashMap<>(); private final Class<?> classConstraint; public SimpleEventBus() { this(Object.class); } public SimpleEventBus(final Class<?> eventClassConstraint) { this.classConstraint = Objects.requireNonNull(eventClassConstraint); } @Override public void registerListenersOfObject(final Object callbackObject) { Arrays.stream(callbackObject.getClass().getMethods()) .filter(method -> (method.getAnnotation(Event.class) != null)) .filter(method -> method.getReturnType().equals(void.class)) .filter(method -> method.getParameterCount() == 1) .forEach(method -> { Class<?> clazz = method.getParameterTypes()[0]; if (!classConstraint.isAssignableFrom(clazz)) { return; } synchronized (eventMapping) { eventMapping.putIfAbsent(clazz, new HashSet<>()); eventMapping.get(clazz).add(new MethodEventHandler(method, callbackObject, clazz)); } 
}); } @Override @SuppressWarnings(unchecked) public <T> void registerListener(final Class<T> eventClass, final Consumer<? extends T> eventListener) { Objects.requireNonNull(eventClass); Objects.requireNonNull(eventListener); if (!classConstraint.isAssignableFrom(eventClass)) { return; } synchronized(eventMapping) { eventMapping.putIfAbsent(eventClass, new HashSet<>()); eventMapping.get(eventClass).add(new ConsumerEventHandler((Consumer<Object>)eventListener)); } } @Override public void executeEvent(final Object event) { if (classConstraint.isAssignableFrom(event.getClass())) { eventMapping.getOrDefault(event.getClass(), EMPTY_SET).forEach(eventHandler -> eventHandler.invoke(event)); } } @Override public void removeListenersOfObject(final Object callbackObject) { Arrays.stream(callbackObject.getClass().getMethods()) .filter(method -> (method.getAnnotation(Event.class) != null)) .filter(method -> method.getReturnType().equals(void.class)) .filter(method -> method.getParameterCount() == 1) .forEach(method -> { Class<?> clazz = method.getParameterTypes()[0]; if (classConstraint.isAssignableFrom(clazz)) { eventMapping.getOrDefault(clazz, EMPTY_SET).remove(new MethodEventHandler(method, callbackObject, clazz)); } }); } @Override @SuppressWarnings(unchecked) public <T> void removeListener(final Class<T> eventClass, final Consumer<? 
extends T> eventListener) { Objects.requireNonNull(eventClass); Objects.requireNonNull(eventListener); if (classConstraint.isAssignableFrom(eventClass)) { eventMapping.getOrDefault(eventClass, EMPTY_SET).remove(new ConsumerEventHandler((Consumer<Object>)eventListener)); } } @Override public void removeAllListenersOfEvent(final Class<?> eventClass) { Objects.requireNonNull(eventClass); eventMapping.remove(eventClass); } @Override public void removeAllListeners() { eventMapping.clear(); } private static interface EventHandler { void invoke(final Object event); } private static class MethodEventHandler implements EventHandler { private final Method method; private final Object callbackObject; private final Class<?> eventClass; public MethodEventHandler(final Method method, final Object object, final Class<?> eventClass) { this.method = Objects.requireNonNull(method); this.callbackObject = Objects.requireNonNull(object); this.eventClass = Objects.requireNonNull(eventClass); } @Override public void invoke(final Object event) { try { method.setAccessible(true); method.invoke(callbackObject, Objects.requireNonNull(event)); } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException ex) { throw new RuntimeException(ex); } } @Override public int hashCode() { int hash = 7; hash = 71 * hash + Objects.hashCode(this.method); hash = 71 * hash + Objects.hashCode(this.callbackObject); hash = 71 * hash + Objects.hashCode(this.eventClass); return hash; } @Override public boolean equals(final Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final MethodEventHandler other = (MethodEventHandler)obj; if (!Objects.equals(this.method, other.method)) { return false; } if (!Objects.equals(this.callbackObject, other.callbackObject)) { return false; } if (!Objects.equals(this.eventClass, other.eventClass)) { return false; } return true; } } private static class ConsumerEventHandler implements EventHandler { private 
final Consumer<Object> eventListener; public ConsumerEventHandler(final Consumer<Object> consumer) { this.eventListener = Objects.requireNonNull(consumer); } @Override public void invoke(final Object event) { eventListener.accept(Objects.requireNonNull(event)); } @Override public int hashCode() { int hash = 5; hash = 19 * hash + Objects.hashCode(this.eventListener); return hash; } @Override public boolean equals(final Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final ConsumerEventHandler other = (ConsumerEventHandler)obj; if (!Objects.equals(this.eventListener, other.eventListener)) { return false; } return true; } }}public class SimpleEventBusTest { static { assertTrue(true); } private AtomicInteger alphaCounter; private AtomicInteger betaCounter; private AtomicInteger gammaCounter; @Before public void before() { alphaCounter = new AtomicInteger(0); betaCounter = new AtomicInteger(0); gammaCounter = new AtomicInteger(0); } private Stream<AtomicInteger> counters() { return Stream.of(alphaCounter, betaCounter, gammaCounter); } @Test public void testConstructor() { EventBus eventBus = new SimpleEventBus(); eventBus.registerListenersOfObject(new Object() { @Event public void onAlphaEvent(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } }); eventBus.executeEvent(new AlphaEvent()); assertEquals(1, alphaCounter.get()); } @Test public void testConstructorWithEventClassConstraint() { EventBus eventBus = new SimpleEventBus(BetaEvent.class); eventBus.registerListenersOfObject(new Object() { @Event public void onAlphaEvent(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } }); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.executeEvent(new AlphaEvent()); assertEquals(0, alphaCounter.get()); } @Test public void testRegisterListenersOfObject() { EventBus eventBus = new SimpleEventBus(); eventBus.registerListenersOfObject(new Object() { 
@Event public void onAlphaEvent1(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } @Event public void onAlphaEvent2(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } @Event public void onAlphaEvent3(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } @Event public void onBetaEvent1(final BetaEvent betaEvent) { betaCounter.incrementAndGet(); } @Event public void onBetaEvent2(final BetaEvent betaEvent) { betaCounter.incrementAndGet(); } @Event public void onGammaEvent(final GammaEvent gammaEvent) { gammaCounter.incrementAndGet(); } }); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(6, alphaCounter.get()); assertEquals(4, betaCounter.get()); assertEquals(2, gammaCounter.get()); } @Test public void testRegisterListener() { EventBus eventBus = new SimpleEventBus(); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet()); eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet()); eventBus.registerListener(GammaEvent.class, gammaEvent -> gammaCounter.incrementAndGet()); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); 
eventBus.executeEvent(new GammaEvent()); assertEquals(6, alphaCounter.get()); assertEquals(4, betaCounter.get()); assertEquals(2, gammaCounter.get()); } @Test public void testExecuteEvent() { EventBus eventBus = new SimpleEventBus(); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.executeEvent(new AlphaEvent()); assertEquals(1, alphaCounter.get()); } @Test public void testExecuteEventSameInstance() { AlphaEvent specificAlphaEvent = new AlphaEvent(); EventBus eventBus = new SimpleEventBus(); eventBus.registerListener(AlphaEvent.class, alphaEvent -> assertTrue(alphaEvent == specificAlphaEvent)); } @Test public void testRemoveListenersOfObject() { EventBus eventBus = new SimpleEventBus(); Object object1 = new Object() { @Event public void onAlphaEvent(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } @Event public void onBetaEvent(final BetaEvent betaEvent) { betaCounter.incrementAndGet(); } @Event public void onGammaEvent(final GammaEvent gammaEvent) { gammaCounter.incrementAndGet(); } }; Object object2 = new Object() { @Event public void onAlphaEvent(final AlphaEvent alphaEvent) { alphaCounter.incrementAndGet(); } @Event public void onBetaEvent(final BetaEvent betaEvent) { betaCounter.incrementAndGet(); } @Event public void onGammaEvent(final GammaEvent gammaEvent) { gammaCounter.incrementAndGet(); } }; eventBus.registerListenersOfObject(object1); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); counters().allMatch(counter -> counter.get() == 1); eventBus.registerListenersOfObject(object2); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); counters().allMatch(counter -> counter.get() == 3); eventBus.removeListenersOfObject(object2); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); 
counters().allMatch(counter -> counter.get() == 4); eventBus.removeListenersOfObject(object1); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); counters().allMatch(counter -> counter.get() == 4); } @Test public void testRemoveListener() { EventBus eventBus = new SimpleEventBus(); Consumer<AlphaEvent> alphaEventListener = alphaEvent -> alphaCounter.incrementAndGet(); Consumer<BetaEvent> betaEventListener = betaEvent -> betaCounter.incrementAndGet(); Consumer<GammaEvent> gammaEventListener = gammaEvent -> gammaCounter.incrementAndGet(); eventBus.registerListener(AlphaEvent.class, alphaEventListener); eventBus.registerListener(BetaEvent.class, betaEventListener); eventBus.registerListener(GammaEvent.class, gammaEventListener); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(1, alphaCounter.get()); assertEquals(1, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.removeListener(GammaEvent.class, gammaEventListener); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(2, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.removeListener(BetaEvent.class, betaEventListener); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.removeListener(AlphaEvent.class, alphaEventListener); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); } @Test public void testRemoveAllListenersOfEvent() { EventBus eventBus = new 
SimpleEventBus(); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet()); eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet()); eventBus.registerListener(GammaEvent.class, gammaEvent -> gammaCounter.incrementAndGet()); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.removeAllListenersOfEvent(AlphaEvent.class); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(4, betaCounter.get()); assertEquals(2, gammaCounter.get()); eventBus.removeAllListenersOfEvent(BetaEvent.class); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(4, betaCounter.get()); assertEquals(3, gammaCounter.get()); eventBus.removeAllListenersOfEvent(GammaEvent.class); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(4, betaCounter.get()); assertEquals(3, gammaCounter.get()); } @Test public void testRemoveAllListeners() { EventBus eventBus = new SimpleEventBus(); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet()); eventBus.registerListener(AlphaEvent.class, alphaEvent -> 
alphaCounter.incrementAndGet()); eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet()); eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet()); eventBus.registerListener(GammaEvent.class, gammaEvent -> gammaCounter.incrementAndGet()); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); eventBus.removeAllListeners(); eventBus.executeEvent(new AlphaEvent()); eventBus.executeEvent(new BetaEvent()); eventBus.executeEvent(new GammaEvent()); assertEquals(3, alphaCounter.get()); assertEquals(2, betaCounter.get()); assertEquals(1, gammaCounter.get()); } private static class AlphaEvent { } private static class BetaEvent { } private static class GammaEvent { }}
My EventBus system
java;thread safety;reflection;event handling
The synchronization in the code is in some places overly broad, and in others, it is absent where it is needed.synchronizing on eventMapping in your registerListenersOfObject method means that only one thread can be accessing the eventMapping instance at any one time. This defeats using the ConcurrentHashMap concept entirely (where only a small portion of the map is locked and other portions are available for other threads). The granularity of this lock is overly broad.Inside that lock, you add data to (and potentially create) a HashSet<EventHandler> instance. This HashSet is then used in other methods, but without any synchronization. Those other methods may have issues with concurrency because they are not included in any synchronization at all.@Overridepublic void executeEvent(final Object event) { if (classConstraint.isAssignableFrom(event.getClass())) { eventMapping.getOrDefault(event.getClass(), EMPTY_SET).forEach(eventHandler -> eventHandler.invoke(event)); }}in the above code, while performing the forEach, any of the following things are possible (and other things as well, I am sure):data could be added to the Set you are streaming, and that data may, or may not be included in the stream.the stream could throw a ConcurrentModificationExceptionthe steam could end early (and some data may not be processed at all.......Consider the following code in the SimpleEventBus. This code handles adding and using event handlers (though removing handlers needs to be fixed as well)....private final void includeEventHandler(final Class<?> clazz, final EventHandler handler) { Set<EventHandler> existing = eventMapping.get(clazz); if (existing == null) { final Set<EventHandler> created = new HashSet<>(); // optimistically assume that we are the first thread for this particular class. 
existing = eventMapping.putIfAbsent(clazz, created); if (existing == null) { // we are the first thread to add one for this clazz existing = created; } } synchronized (existing) { existing.add(handler); }}private final EventHandler[] getEventHandlers(final Class<?> clazz) { Set<EventHandler> handlers = eventMapping.get(clazz); if (handlers == null) { return new EventHandler[0]; } synchronized(handlers) { return handlers.toArray(new EventHandler[handlers.size()]); }}@Overridepublic void registerListenersOfObject(final Object callbackObject) { Arrays.stream(callbackObject.getClass().getMethods()) .filter(method -> (method.getAnnotation(Event.class) != null)) .filter(method -> method.getReturnType().equals(void.class)) .filter(method -> method.getParameterCount() == 1) .forEach(method -> { Class<?> clazz = method.getParameterTypes()[0]; if (!classConstraint.isAssignableFrom(clazz)) { return; } includeEventHandler(clazz, new MethodEventHandler(method, callbackObject, clazz)); });}@Override@SuppressWarnings(unchecked)public <T> void registerListener(final Class<T> eventClass, final Consumer<? extends T> eventListener) { Objects.requireNonNull(eventClass); Objects.requireNonNull(eventListener); if (!classConstraint.isAssignableFrom(eventClass)) { return; } includeEventHandler(eventClass, new ConsumerEventHandler((Consumer<Object>)eventListener));}@Overridepublic void executeEvent(final Object event) { if (classConstraint.isAssignableFrom(event.getClass())) { Arrays.stream(getEventHandlers(event.getClass())).forEach(eventHandler -> eventHandler.invoke(event)); }}The above code uses the ConcurrentHashMap in a way that is minimally locked. It uses an optimistic process for creating a new HashSet only when it is likely going to be used (instead of creating, and throwing it away almost all the time). 
It also makes sure that, if one is created in a different thread, and our optimism was proven wrong, that we use the one that other threads are using.Then, for the actual HashSet, it synchronizes on the whole set, and all operations are completely isolated from other threads.This is OK, because, the only time there will be thread blocking, is when two threads are accessing the event handlers for a single Class.... which is likely to be uncommon.Note, that the getHandlers creates a defensive copy of the Set, so that iteration has a consistent copy of the data, and that there does not need to be any locking during the iteration.Edit: To remove unnecessary work in the code, I would actually recommend the following:private final EventHandler[] getEventHandlers(final Class<?> clazz) { Set<EventHandler> handlers = eventMapping.get(clazz); if (handlers == null) { return null; } synchronized(handlers) { return handlers.toArray(new EventHandler[handlers.size()]); }}@Overridepublic void executeEvent(final Object event) { if (classConstraint.isAssignableFrom(event.getClass())) { EventHandler[] handlers = getEventHandlers(event.getClass()); if (handlers != null) { Arrays.stream(handlers).forEach(eventHandler -> eventHandler.invoke(event)); } }}
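On Java 8+, the optimistic putIfAbsent-then-get dance in includeEventHandler can be collapsed into computeIfAbsent, which runs the set factory at most once per key, atomically. A sketch (handler values are plain strings here to keep it self-contained; the review's EventHandler type would slot in the same way):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BusMapDemo {
    // Stand-in for the review's eventMapping field.
    static final ConcurrentMap<Class<?>, Set<String>> eventMapping =
            new ConcurrentHashMap<>();

    // computeIfAbsent creates the per-class set exactly once, atomically,
    // so no thrown-away HashSet and no explicit synchronized block is needed here.
    static void include(Class<?> clazz, String handler) {
        eventMapping.computeIfAbsent(clazz,
                k -> Collections.synchronizedSet(new HashSet<>()))
            .add(handler);
    }

    public static void main(String[] args) {
        include(String.class, "h1");
        include(String.class, "h2");
        include(Integer.class, "h3");
        System.out.println(eventMapping.get(String.class).size()); // 2
    }
}
```

The synchronizedSet makes individual add/remove calls safe, but iteration still needs either an explicit synchronized block or the defensive toArray copy this answer already recommends.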
_unix.77967
I'm trying to run the virtual mouse driver from the book Essential Linux Device Drivers, but when I load this module into the kernel using insmod, in /var/log/Xorg.0.log I see:

[ 757.212] (II) config/udev: Adding input device (/dev/input/event10)
[ 757.212] (II) No identifier specified, ignoring this device.

How can I force Xorg not to ignore this device? Or what must I add to the kernel module code?
Virtual mouse is ignored by xorg when loaded
xorg;udev;input
null
_unix.276665
I have a tsv file that contains some values. I want the sum of each column, the total number of numeric values, and percentage values.

E.g., file.tsv contains:

x 1 1 0 1 x x 1 x
1 1 x 0 0 x 1 x 0
0 0 x 1 1 x 1 1 x
0 x x x 1 x x x 1

(the tsv file contains more than 4 rows)

result:

x 1 1 0 1 x x 1 x
1 1 x 0 0 x 1 x 0
0 0 x 1 1 x 1 1 x
0 x x x 1 x x x 1
sum 1 2 1 1 3 0 2 2 1
total 3 3 1 3 4 0 2 2 2
percent 33 66 100 33 75 0 100 100 50

I have used a sed script to calculate the number of ones and zeros, but that did not append to the end of the file. In the result, sum is the sum of the 1s present in the column, and total is the number of zeros and ones in the column, ignoring the value x (a non-numeric character).
How do I calculate the column percentage of a file?
shell script;text processing;python;perl
You can do this with awk, keeping track of numeric versus non-numeric columns and summarizing at the end:

#!/usr/bin/awk -f
BEGIN {
    width = 0;
}
{
    if (width < NF)
        width = NF;
    for (n = 1; n <= NF; ++n) {
        if ( $n ~ /^[0-9]+$/ ) {
            number[n] += $n;
            total[n] += 1;
        } else {
            others[n] += $n;
        }
    }
    print;
    next;
}
END {
    printf "sum";
    for (n = 1; n <= width; ++n) {
        printf "%5d", number[n];
    }
    printf "\n";
    printf "total";
    for (n = 1; n <= width; ++n) {
        printf "%5d", total[n];
    }
    printf "\n";
    printf "percent";
    for (n = 1; n <= width; ++n) {
        if ( total[n] != 0) {
            printf "%5d", 100 * number[n] / total[n];
        } else {
            printf "%5d", 0;
        }
    }
    printf "\n";
}
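The same per-column bookkeeping translates to other languages directly. A sketch in Java (rows come from an in-memory list rather than a file, and ColumnSummary is an illustrative name; percent uses integer division like the awk version):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ColumnSummary {
    // Returns {sum, total, percent} for whitespace-separated rows,
    // counting only numeric cells and ignoring cells like "x".
    static int[][] summarize(List<String> rows) {
        int width = 0;
        List<String[]> split = new ArrayList<>();
        for (String row : rows) {
            String[] cells = row.trim().split("\\s+");
            split.add(cells);
            width = Math.max(width, cells.length);
        }
        int[] sum = new int[width], total = new int[width], percent = new int[width];
        for (String[] cells : split) {
            for (int i = 0; i < cells.length; i++) {
                if (cells[i].matches("\\d+")) {   // numeric cell
                    sum[i] += Integer.parseInt(cells[i]);
                    total[i]++;
                }
            }
        }
        for (int i = 0; i < width; i++) {
            percent[i] = total[i] == 0 ? 0 : 100 * sum[i] / total[i];
        }
        return new int[][] { sum, total, percent };
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
                "x 1 1 0 1 x x 1 x",
                "1 1 x 0 0 x 1 x 0",
                "0 0 x 1 1 x 1 1 x",
                "0 x x x 1 x x x 1");
        int[][] s = summarize(rows);
        System.out.println(Arrays.toString(s[0])); // [1, 2, 1, 1, 3, 0, 2, 2, 1]
        System.out.println(Arrays.toString(s[1])); // [3, 3, 1, 3, 4, 0, 2, 2, 2]
        System.out.println(Arrays.toString(s[2])); // [33, 66, 100, 33, 75, 0, 100, 100, 50]
    }
}
```

Run against the sample from the question, this reproduces the sum/total/percent lines the poster asked for.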
_codereview.126044
Scan the array of strings. For each string, compute a key (its characters, sorted). If the key already exists in the hashmap and the string is an anagram of the first string in the corresponding TreeSet, add the string to that TreeSet; otherwise create a new TreeSet<String>, add the string to it, and put it in the HashMap<String, TreeSet<String>>. Finally, convert all the sets in the HashMap to a list of lists and return it.

import java.util.Arrays;
import java.util.HashMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class Anagrams {

    public static boolean anagramsHelper(String[] words) {
        for (int i = 1; i < words.length; i++) {
            if (!areAnagrams(words[0], words[i])) {
                return false;
            }
        }
        return true;
    }

    public static boolean areAnagrams(String word1, String word2) {
        if (word1.length() != word2.length()) {
            return false; // anagrams are strings with same length
        }
        int[] charCount = new int[128]; // 128 unique chars in ASCII
        // count the chars in word1
        for (int i = 0; i < word1.length(); i++) {
            charCount[(int) word1.charAt(i)]++;
        }
        // decrement the char count for chars in word2
        for (int i = 0; i < word2.length(); i++) {
            if (charCount[(int) word2.charAt(i)] == 0) {
                return false;
            }
            charCount[(int) word2.charAt(i)]--;
        }
        // verify if any char count is non zero
        // for anagrams it should be zero else they are not anagrams
        for (int i = 0; i < 128; i++) {
            if (charCount[i] != 0) {
                return false;
            }
        }
        return true;
    }

    public static List<List<String>> groupAnagrams(String[] words) {
        Map<String, TreeSet<String>> anagramsGroup = new HashMap<String, TreeSet<String>>();
        TreeSet<String> listOfAnagrams;
        String keyValue;
        for (String word : words) {
            char[] charsInWord = word.toCharArray();
            Arrays.sort(charsInWord);
            keyValue = String.valueOf(charsInWord);
            if (anagramsGroup.containsKey(keyValue)) {
                listOfAnagrams = anagramsGroup.get(keyValue);
                if (areAnagrams(word, listOfAnagrams.first())) {
                    listOfAnagrams.add(word);
                    anagramsGroup.put(keyValue, listOfAnagrams);
                }
            } else {
                listOfAnagrams = new TreeSet<>();
                listOfAnagrams.add(word);
                anagramsGroup.put(keyValue, listOfAnagrams);
            }
        }
        List<List<String>> groupsOfAnagrams = new ArrayList<List<String>>();
        for (Set<String> setOfAnagrams : anagramsGroup.values()) {
            List<String> anagramsList = new ArrayList<>(setOfAnagrams);
            groupsOfAnagrams.add(anagramsList);
        }
        return groupsOfAnagrams;
    }

    public static void main(String[] args) {
        String[] words = { "man", "elephand", "nam", "viswa", "v", "i",
                "handelep", "aba", "baa", "aab", "xyz", "yyy" };
        System.out.println(groupAnagrams(words));
    }
}
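For comparison, on Java 8+ the same grouping can be expressed with streams (a sketch; because words sharing a sorted-character key are necessarily anagrams, the pairwise areAnagrams re-check is unnecessary):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class AnagramGroups {
    // Groups words by their sorted-character key; every bucket of the
    // resulting map is one group of mutual anagrams.
    static List<List<String>> groupAnagrams(String[] words) {
        return new ArrayList<>(Arrays.stream(words)
                .collect(Collectors.groupingBy(w -> {
                    char[] c = w.toCharArray();
                    Arrays.sort(c);
                    return String.valueOf(c);
                }, Collectors.toList()))
                .values());
    }

    public static void main(String[] args) {
        // Group order is unspecified (HashMap buckets), so no fixed output is claimed.
        System.out.println(groupAnagrams(
                new String[] { "man", "nam", "aba", "baa", "aab" }));
    }
}
```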
Given an array of strings, group the anagrams together
java;strings
null
_unix.139488
I am using lsof to view the list of open files. One such file that it displays as open is:

Google 3864 malaykeshav 46u REG 1,1 470455334 32578671 /Users/malaykeshav/Library/Application Support/Google/Chrome/Default/Pepper Data/Shockwave Flash/.com.google.Chrome.CiGbDZ

But when I go to that location and run sudo ls -al, no such file is displayed. How do I access this file? My current directory where I am executing ls is:

$ pwd
/Users/malaykeshav/Library/Application Support/Google/Chrome/Default/Pepper Data/Shockwave Flash

lsof still displays this file as open after executing ls.
ls does not show a hidden file (OS X)
files;ls;open files;lsof
null
_codereview.112612
I have created a Backbone Movie app as part of a learning exercise, the app is hosted on codepen although I actually have it built on my local but thought this approach would be handier for code review. I have some dummy JSON containing some movies that are stored in a Movie Collection and then another collection to store movies added to the watchlist, I created a watchlist collection instead of filtering the Movies collection so that I could save these models to local storage and then show the watchlist on deeplink. I'm pretty happy with what I have but unsure about areas such as the WatchlistItemView and how I dispose this view when I click to delete a movie from the watchlist. I'm also still quite unsure about what level of management each view should have, if I have a collection view with sub views does that mean the collection view is primarily in charge of the subviews or can the subviews manage themselves? With Backbone being so unopinionated I often find it so hard to know what views should be taking care of what management.Here is the link to the codepen http://codepen.io/styler/pen/wKVZxrWill be great to get your feedback!JSconsole.clear();// Namespace our app.var App = { Models: {}, Views: {}, Collections: {}, utils: {} }// Model ClassesApp.Models.MovieModel = Backbone.Model.extend({ localStorage: new Store('watchlist'), defaults: { watchlist: false }, toggleWatchlist: function() { this.set( 'watchlist', !this.get('watchlist') ) }, parse: function(response) { if(response.backdrop_path) { response.backdrop_small = 'http://image.tmdb.org/t/p/w500/'+response.backdrop_path; response.backdrop_large = 'http://image.tmdb.org/t/p/w1280/'+response.backdrop_path; } else { response.backdrop_small = 'https://placeimg.com/500/281/any/grayscale/1'; response.backdrop_large = 'https://placeimg.com/1280/720/grayscale/1'; } return response; }});// Collection ClassesApp.Collections.MovieCollection = Backbone.Collection.extend({ model: App.Models.MovieModel, url: 
'http://codepen.io/styler/pen/yYmWWY.js', // Original http://codepen.io/styler/pen/yYmWWY.js // Missing attributes http://codepen.io/styler/pen/PZYjOr.js // Updated attribute names http://codepen.io/styler/pen/Rrbgxv.js initialize: function(options) { console.log('MovieCollection', options); }, parse: function(response) { return response[0].results; }});App.Collections.WatchlistCollection = Backbone.Collection.extend({ model: App.Models.MovieModel, localStorage: new Store('watchlist')});// View ClassesApp.Views.AppView = Backbone.View.extend({ currentView: null, _firstLoad: true, initialize: function() { // Set up route listeners this.listenTo(Backbone.Events, 'show:movies', this.onShowMovies); this.listenTo(Backbone.Events, 'show:movie', this.onShowMovie, this); this.listenTo(Backbone.Events, 'show:watchlist', this.onShowWatchlist); this.listenTo(Backbone.Events, 'add:watchlist', this.onWatchlistAdd, this); this.listenTo(Backbone.Events, 'remove:watchlist', this.onWatchlistRemove, this); }, onShowMovies: function() { var movieCollectionView = new App.Views.MovieCollectionView({ collection: dataStore.movies }); this._changeView(movieCollectionView); if(dataStore.movies.length <= 0) { dataStore.movies.fetch(); } }, onShowMovie: function(id) { var deferred = $.Deferred(), self = this; if(dataStore.movies.length > 0) { deferred.resolve(); } else { dataStore.movies.fetch().done(function() { deferred.resolve(); }); } deferred.done(function() { var movieModel = dataStore.movies.get(id); var detailedMovieView = new App.Views.DetailedMovieView({ model: movieModel }); this._changeView(detailedMovieView); }.bind(this)); }, onShowWatchlist: function() { console.log('onShowWatchlist'); if(this._firstLoad && dataStore.watchlist.length <= 0) { dataStore.watchlist.fetch().done(function(response) { console.log(response); var watchlistCollectionView = new App.Views.WatchlistCollectionView({ collection: dataStore.watchlist }); this._changeView(watchlistCollectionView); 
this._firstLoad = false; }.bind(this)); } else { var watchlistCollectionView = new App.Views.WatchlistCollectionView({ collection: dataStore.watchlist }); this._changeView(watchlistCollectionView); } }, _changeView: function(view) { if (this.currentView) { this.currentView.dispose(); this.currentView = null; } this.currentView = view; this.$el.html(this.currentView.render().el); }, onWatchlistAdd: function(model) { console.log('AppView::onWatchlistAdd', model); model.set('watchlist', true); var clonedFilm = model.clone(); dataStore.watchlist.add(clonedFilm); clonedFilm.save(null, { success: function(model, response, options) { console.log('Success::', model, response, options); }, error: function(model, response, options) { console.log('Error::', model, response, options); } }); console.info(dataStore.watchlist); }, onWatchlistRemove: function(model) { console.log('AppView::onWatchlistRemove', model); model.set('watchlist', false); //@todo need to add a url to model var clonedFilm = model.clone(); dataStore.watchlist.remove(model); clonedFilm.destroy(null, { success: function(model, response, options) { console.log('Success::', model, response, options); }, error: function(model, response, options) { console.log('Error::', model, response, options); } }); console.info(dataStore.watchlist); }});App.Views.MainHeaderView = Backbone.View.extend({ events: { 'click .js-nav-link': 'onNavItemClicked' }, initialize: function() { console.log('MainHeaderView::initialize', this.$el); Backbone.Events.on('route:change', this.onDeeplink, this); }, onNavItemClicked: function(event) { event.preventDefault(); var $clickedEl = $(event.currentTarget), clickedHref = $clickedEl.attr('href'); this.toggleActiveNavClass($clickedEl); App.utils.router.navigate(clickedHref, { trigger: true }); }, onDeeplink: function(route) { console.log('onDeeplink', route); var $element = $('.js-nav-link[href=/'+route+']'); this.toggleActiveNavClass($element); }, /** * [toggleActiveNavClass description] * 
@param {obj} element jQuery selector */ toggleActiveNavClass: function(element) { $('.js-nav-link').removeClass('is-active'); element.addClass('is-active'); }});App.Views.MovieCollectionView = Backbone.View.extend({ className: 'movie-grid', events: { 'click .js-movie--show': 'onMovieClick' }, template: _.template( $('.tmpl-movies').html() ), initialize: function() { console.log('MovieCollectionView::initialize'); this.collection.on('sync', this.render, this); }, render: function() { console.log('MovieCollectionView::render'); this.$el.html(this.template({ movies: this.collection.toJSON() })); setTimeout(function() { this.afterRender(); }.bind(this), 0); return this; }, afterRender: function() { console.log('MovieCollectionView::afterRender'); var $images = imagesLoaded( this.$el.find('.js-trans-img') ); $images.on('progress', function(instance, image) { $(image.img).removeClass('is-hidden'); }); }, onMovieClick: function(event) { event.preventDefault(); var $clickedEl = $(event.currentTarget), clickedHref = $clickedEl.attr('href'), movieId = $clickedEl.data('id'); Backbone.Events.trigger('show:movie', movieId); App.utils.router.navigate(clickedHref, {trigger: false}); }, dispose: function() { this.remove(); console.info('MovieCollectionView::dispose', this); }});App.Views.DetailedMovieView = Backbone.View.extend({ className: 'movie-detailed', events: { 'click .js-action--back': 'onBackBtnClick', 'click .js-watchlist--add': 'onAddClick', 'click .js-watchlist--remove': 'onRemoveClick' }, template: _.template( $('.tmpl-movie-detailed').html() ), initialize: function() { console.log('DetailedMovieView::initialize'); }, render: function() { console.log('DetailedMovieView::render', this.model.toJSON()); this.$el.html( this.template( this.model.toJSON() ) ); setTimeout(function() { this.afterRender(); }.bind(this), 0); return this; }, afterRender: function() { console.log('DetailedMovieView::afterRender'); var loader = new App.Views.LoaderView(); 
this.$('.js-img-placeholder').append(loader.render().el); loader.show(); imagesLoaded(this.$('.js-trans-img'), function() { this.images.forEach(function(i) { var $image = $(i.img); $image.removeClass('is-hidden'); loader.hide(); }); }); this.createWatchlistBtn(); }, createWatchlistBtn: function() { var watchlistBtn = new App.Views.WatchlistBtnSubview({ model: this.model }); this.$('.js-watchlist--add').replaceWith( watchlistBtn.render().el ); }, onBackBtnClick: function(event) { event.preventDefault(); window.history.back(); }, onAddClick: function(event) { event.preventDefault(); Backbone.Events.trigger('add:watchlist', this.model); }, onRemoveClick: function(event) { event.preventDefault(); Backbone.Events.trigger('remove:watchlist', this.model); }, dispose: function() { this.remove(); console.info('DetailedMovieView::dispose', this); }});App.Views.WatchlistCollectionView = Backbone.View.extend({ className: 'movie-grid', events: { 'click .js-movie--show': 'onMovieClick' }, template: _.template( $('.tmpl-watchlist').html() ), initialize: function() { console.log('MovieListItemView::initialize', this.collection.toJSON()); this.listenTo(this.collection, 'add', this.onAddMovie, this); this.listenTo(this.collection, 'remove', this.onMovieRemoved, this); }, render: function() { console.log('MovieListItemView::render', this.collection.toJSON()); this.$el.html(this.template()); if(this.collection.length > 0) { this.collection.forEach(this.onAddMovie, this); } setTimeout(function() { this.afterRender(); }.bind(this), 0); return this; }, afterRender: function() { console.log('WatchlistCollectionView::afterRender'); var $images = imagesLoaded( this.$el.find('.js-trans-img') ); $images.on('progress', function(instance, image) { $(image.img).removeClass('is-hidden'); }); }, onAddMovie: function(movie) { console.log('ADD MOVIE'); var watchlistMovie = new App.Views.WatchlistItemView({ model: movie }); this.$el.prepend(watchlistMovie.render().el); }, onMovieClick: 
function(event) { event.preventDefault(); var $clickedEl = $(event.currentTarget), clickedHref = $clickedEl.attr('href'), movieId = $clickedEl.data('id'); Backbone.Events.trigger('show:movie', movieId); App.utils.router.navigate(clickedHref, {trigger: false}); }, onMovieRemoved: function(model) { console.log('REMOVE MOVIE', model.toJSON()); // this.collection.remove(model); console.log(this.collection.toJSON()); if(this.collection.length === 0) { this.$el.html(this.template()); } }, dispose: function() { this.remove(); console.info('WatchlistCollectionView::dispose', this); }});App.Views.WatchlistItemView = Backbone.View.extend({ className: 'movie', template: _.template( $('.tmpl-watchlist-movie').html() ), events: { 'click .js-watchlist--remove': 'onWatchlistRemove' }, render: function() { this.$el.html( this.template( this.model.toJSON() ) ); return this; }, onWatchlistRemove: function(event) { event.preventDefault(); event.stopPropagation(); this.dispose(); var model = dataStore.movies.get(this.model.get('id')); Backbone.Events.trigger('remove:watchlist', model); }, dispose: function() { this.remove(); }});App.Views.WatchlistBtnSubview = Backbone.View.extend({ template: _.template( $('.tmpl-watchlist-movie-btn').html() ), initialize: function() { console.log('WatchlistBtnSubview::initialize', this.model.toJSON()); this.listenTo(this.model, 'change:watchlist', this.render, this); }, render: function() { console.log('WatchlistBtnSubview::render', this.model.toJSON()); this.$el.html( this.template( this.model.toJSON() ) ); return this; }});App.Views.LoaderView = Backbone.View.extend({ className: 'loading tt is-hidden', stateClass: 'is-hidden', template: _.template( $('.tmpl-loader').html() ), initialize: function() { this.listenTo(Backbone.Events, 'loader:show', this.show, this); this.listenTo(Backbone.Events, 'loader:hide', this.hide, this); }, render: function() { this.$el.html( this.template() ); return this; }, show: function() { 
this.$el.removeClass(this.stateClass); }, hide: function() { this.$el.addClass(this.stateClass); this.$el.one('transitionend', function() { this.dispose(); }.bind(this)); }, dispose: function() { this.remove(); }});// UtilitiesApp.utils.router = new (Backbone.Router.extend({ routes: { '': 'onShowMovies', 'movie/:id': 'onShowMovie', // 'movie': 'onShowMovie', 'watchlist': 'onShowWatchlist' }, initialize: function() { console.log('Router Started'); this.on('all', this._signalRouteChange); }, onShowMovies: function() { console.log('router::onShowMovies', this); Backbone.Events.trigger('show:movies'); }, /** * [onShowMovie description] * @param {[number]} id movie id */ onShowMovie: function(id) { console.log('router::onShowMovie', this); Backbone.Events.trigger('show:movie', id); }, onShowWatchlist: function() { console.log('router::onShowWatchlist'); Backbone.Events.trigger('show:watchlist'); }, _signalRouteChange: function(route) { if(route === 'route') return; console.log('Firing', route); var fragment = Backbone.history.getFragment(); if(fragment.split('/')[0] === 'movie') { fragment = ''; } console.log('Fragment::', fragment); Backbone.Events.trigger('route:change', fragment); }}));// Create a data storevar dataStore = {};dataStore.movies = new App.Collections.MovieCollection();dataStore.watchlist = new App.Collections.WatchlistCollection();(function() { new App.Views.MainHeaderView({ el: '.js-header' }); // Start App. new App.Views.AppView({ el: '.js-app' }); // Start History API. 
Backbone.history.start();})();Templates<header class=global-header js-header> <nav class=main-nav> <a href=/ class=main-nav__link is-active js-nav-link>Popular</a> <a href=/watchlist class=main-nav__link js-nav-link>Watchlist</a> </nav></header><div class=main-content js-app></div><!-- Templates --><script type=text/template class=tmpl-movies> <% _.each(movies, function(movie) { %> <a href=/movie/<%= movie.id %> class=movie js-movie--show data-id=<%= movie.id %>> <img src=<%= movie.backdrop_small %> alt= class=trans-img is-hidden js-trans-img> <span class=movie__info> <h2><%= movie.title %></h2> </span> </a> <% }); %></script><script type=text/template class=tmpl-movie-detailed> <div class=poster-holder js-img-placeholder> <img src=<%= backdrop_large %> alt= class=movie-detailed__poster trans-img is-hidden js-trans-img> </div> <div class=movie-details> <a href=/ class=action action--back js-action--back> <svg height=80px width=80px class=action__icon action__icon--back viewBox=0 0 80 80><rect height=38 width=5 x=19 y=20.547/><polygon points=46.371,59 49.906,55.465 36.488,42.047 61,42.047 61,37.047 36.652,37.047 49.906,23.793 46.371,20.258 27,39.629/></svg> </a> <h1 class=movie-details__title><%= title %></h1> <div class=movie-details__content> <p><%= overview %></p> </div> <button class=action action-watchlist--added js-watchlist--add> <svg height=80px width=80px class=action__icon action__icon--add viewBox=0 0 80 80><polygon points=61,37 43,37 43,19 37,19 37,37 19,37 19,43 37,43 37,61 43,61 43,43 61,43 /></svg> Add to watchlist </button> </div></script><script type=text/template class=tmpl-watchlist> <% if(this.collection.length <= 0) { %> <div class=notification> <img src=http://cdn.meme.am/instances2/500x/3145144.jpg alt= class=sadface /> </div> <% } %></script><script type=text/template class=tmpl-watchlist-movie> <a href=/movie/<%= id %> class=js-movie--show data-id=<%= id %>> <img src=<%= backdrop_small %> alt= class=trans-img is-hidden js-trans-img> <span 
class=movie__info> <h2><%= title %></h2> <button class=action action-watchlist--remove js-watchlist--remove> <svg height=80px width=80px class=action__icon action__icon--remove viewBox=0 0 80 80><polygon points=56.971,52.728 44.243,40 56.971,27.272 52.728,23.029 40,35.757 27.272,23.029 23.029,27.272 35.757,40 23.029,52.728 27.272,56.971 40,44.243 52.728,56.971/></svg> </button> </span> </a></script><script type=text/template class=tmpl-watchlist-movie-btn> <% if(!watchlist) { %> <button class=action action-watchlist--added js-watchlist--add> <svg height=80px width=80px class=action__icon action__icon--add viewBox=0 0 80 80><polygon points=61,37 43,37 43,19 37,19 37,37 19,37 19,43 37,43 37,61 43,61 43,43 61,43 /></svg> Add to watchlist </button> <% } else { %> <button class=action action-watchlist--removed js-watchlist--remove> <svg height=80px width=80px class=action__icon action__icon--remove viewBox=0 0 80 80><polygon points=56.971,52.728 44.243,40 56.971,27.272 52.728,23.029 40,35.757 27.272,23.029 23.029,27.272 35.757,40 23.029,52.728 27.272,56.971 40,44.243 52.728,56.971/></svg> Remove from watchlist </button> <% } %></script><script type=text/template class=tmpl-loader> <div class=loader>Loading...</div></script>
Backbone learning piece, Movie Application using Views, Collection, Models, Routes and Localstorage
javascript;jquery;backbone.js;underscore.js
null
_softwareengineering.141245
The recent explosion of phone platforms has depressed me (slightly), and made me wonder if we will ever reach any kind of standard for presentation? I don't mean language or IDE. Different languages have different strengths and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas.

What I'm really talking about is a common presentation mechanism. Before smart phones and tablets came along, the web seemed to be finally becoming a reasonable platform for presenting an application that was globally accessible, not just geographically, but by platform too. Sure there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. Write Once Run Anywhere seemed to finally be becoming a reality.

However, in the last few years we've seen an explosion of mobile operating systems, and the ubiquitous app. A good site is no longer enough, you need a native app, and of course we have a sudden massive disparity in OS, language, and APIs needed to write them as each battles for supremacy.

It's kind of weird how the cycle of popularity goes:

Mainframes with terminals - thin client.
PC - thick client.
Web browser - thin client.
Phone app - thick(ish) client.

I just wonder if you think there will ever be a global standard for clients, or whether the shiny and different cycle will always continue along with the battle of the tech du jour.
Do you think we will ever settle on a standard platform?
languages;presentation;standardization
null
_unix.257698
I have a feeling the setuid bit, the nosuid mount option, sudo, and su are all related given their names. But how do they relate to one another? Are some of them used in conjunction? If they are not, then why are their names so similar?
How are setuid, suid, sudo, and su all related?
sudo;su;setuid
null
_webapps.45523
There is a blockquote box, which has a well-established meaning for putting quotes. I would like to add another box in my posts, a sidenote, something like you see in books when some part of the text needs emphasis.

I don't care about visual style; all that is required is that it be different from regular text and different from blockquote.

So, is such a tag present in regular WordPress (free, wordpress.com)? If yes, what is the tag?

For the record, an example of the sidenote (not a WP site): http://www.gigamonkeys.com/book/

As a workaround I simply put in a raw div with some style defined (see: skila.pl). The downside is I will have to copy&paste it in every post.
How to create a sidenote box at WordPress.com?
wordpress.com;wordpress.com tags
null
_softwareengineering.190344
I have an MVC3 project that uses SQL Server. I use data from the SQL database all the time and I often find that I'm reusing/duplicating some SQL queries. I thought I'd solve this problem by creating a few static helper classes that just contain a bunch of static methods for retrieving common things:

GetAllUsers(database As MyEntity)
GetAllNonActivatedUsers(database As MyEntity)
GetUser(database As MyEntity, userId As Integer)

The problem is that this made things slightly worse. Now instead of having SQL queries all over my controller actions, there are large numbers of them in these helper classes. The names of these methods are becoming silly:

GetPendingUserApplicationByApplicationId(database As MyEntity, applicationId As Integer, userId As Integer)

At this stage I'm thinking of scrapping the helper classes and going back to just random SQL queries throughout my controller actions.

Where have I gone wrong and how do people manage their SQL queries? Is it ok to duplicate SQL queries?
How do I handle having so many SQL queries?
code quality;refactoring;asp.net mvc;vb.net;code smell
Well, first of course you are going to have lots of queries, because you expect the application to do lots of things. Databases have a couple of things that can help, but you can make things worse by using them badly. ORMs are one tool that will write the queries for you. But you will still have a lot of queries if you have a lot of database work that needs to be done.

Next, you can use views to create some of the main things you will want to use over and over. Views are good for complex things that you will need to make sure have the same calculations in multiple situations. For instance, we have views that collect all the data from various tables for financials, which ensures that all financial data is based on the same set of business logic. Do not, however, use views to call views.

You can use stored procedures and then just call them in multiple places. (Make sure you put them in source control though; stored procedures are code!)

However, you may have to learn to live with the fact that there is lots of code. Our database has well over a thousand stored procs. Organizing them by schema has helped. If your database doesn't have schemas, organizing by using a systematic naming convention helps: that way you know all the things that relate to finance have the word finance in them and those relating to users have user in them. That helps you to find the one you are looking for when you need to reuse it.

Code reuse can be a wonderful thing, but there is a very strong caveat when querying databases. If the queries are slightly different, then they must be two separate queries, rather than one that sends more data than the application needs at that point. Adding fields you don't need to queries so you can use them elsewhere can cause horrible performance issues that are very hard to correct. (I just spent several days trying to performance-tune such a mess and ended up eliminating over 400 lines of unnecessary SQL! I also improved performance by cutting the execution time by more than 60%.)

So don't go down that path just to have fewer queries. More specific queries are generally better than fewer general ones for database performance.
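To make the views point concrete, here is a minimal sketch (using SQLite from Python purely for illustration; the table, column, and view names are invented, and the same idea applies to SQL Server views):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, activated INTEGER);
-- One shared definition of "non-activated user"; every caller reuses it
-- instead of duplicating the WHERE clause:
CREATE VIEW non_activated_users AS
    SELECT id, name FROM users WHERE activated = 0;
""")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", 1), (2, "bob", 0), (3, "carol", 0)])

# Callers select only the columns they need from the view:
rows = conn.execute("SELECT name FROM non_activated_users ORDER BY name").fetchall()
print(rows)  # [('bob',), ('carol',)]
```

The view centralizes the business rule, while each caller still keeps its query specific (only the name column here), in line with the caveat about not selecting fields you don't need.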
_unix.31376
Is it possible to implement the wake-on-LAN magic packet in bash? I'm using an old, customized BusyBox and don't have ether-wake. Is it possible to replace it with some other shell command, like:

wakeonlan 11:22:33:44:55:66
Wake-on-LAN with BusyBox?
busybox;wake on lan
You need something that's capable of sending an Ethernet packet that will be seen by the device you want to wake up.

The ether-wake command in BusyBox is exactly what you're after. If your BusyBox doesn't have it, consider recompiling BusyBox to include it.

If you have a sufficiently bloaty netcat (BusyBox can have one of two nc implementations, one of which handles TCP only), you can send a manually crafted UDP packet to the broadcast address of the network segment that the device is connected to.

mac=$(printf '\xed\xcb\xa9\x87\x65\x43')   # MAC = ed:cb:a9:87:65:43
wol_packet=$(printf "\xff\xff\xff\xff\xff\xff$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac$mac")
echo "$wol_packet" | nc -u 192.0.2.255 7

Another BusyBox utility that you could abuse into sending that packet is syslogd.

syslogd -n -O /dev/null -l 0 -R 192.0.2.255/7 &
syslogd_pid=$!
logger "$wol_packet"
kill "$syslogd_pid"

If the MAC contains a null byte, you won't be able to craft the packet so easily. Pick a byte that's not \xff and that's not in the MAC, say \x42 (B), and pipe through tr.

echo "$wol_packet" | tr B '\000' | nc -u 192.0.2.255 7

If you really have bash (which is extremely unusual on devices with BusyBox; are you sure you really have bash, and not another shell provided by BusyBox?), it can send UDP packets by redirecting to /dev/udp/$hostname/$port.

echo "$wol_packet" >/dev/udp/192.0.2.255/7
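If the box also happens to have Python, the same magic packet (six 0xff bytes followed by the target MAC repeated 16 times) can be built and broadcast without nc. This is just a sketch; the MAC, broadcast address, and port are the same example values as above:

```python
import socket

def wol_packet(mac: str) -> bytes:
    # Magic packet layout: 6 bytes of 0xff, then the MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "192.0.2.255", port: int = 7) -> None:
    # UDP ports 7 (echo) and 9 (discard) are both commonly used for WOL.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(wol_packet(mac), (broadcast, port))

packet = wol_packet("ed:cb:a9:87:65:43")
print(len(packet))  # 102 bytes: 6 + 16 * 6
```

A null byte in the MAC is no problem here, unlike in the shell version.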
_cstheory.17637
I am trying to understand the relation between the algorithmic complexity and the circuit complexity of determinants and matrix multiplication.

It is known that the determinant of an $n\times n$ matrix can be computed in $\tilde{O}(M(n))$ time, where $M(n)$ is the minimum time required to multiply any two $n\times n$ matrices. It is also known that the best circuit complexity of determinants is polynomial at depth $O(\log^{2}(n))$ and exponential at depth 3. But the circuit complexity of matrix multiplication, for any constant depth, is only polynomial.

Why is there a difference in circuit complexity for determinants and matrix multiplication, while it is known that, from an algorithmic perspective, determinant calculation is similar to matrix multiplication? Specifically, why do the circuit complexities have an exponential gap at depth 3? Probably the explanation is simple but I do not see it. Is there an explanation with 'rigor'?

Also look at: Smallest known formula for the determinant
Determinants and Matrix Multiplication - Similarity and differences in algorithmic complexity and arithmetic circuit size
algebraic complexity;arithmetic circuits;matrix product;determinant
null
_unix.20813
Under Linux I can use netstat -tulpnw and ps, like so:

# netstat -tulpnw | grep :53
tcp        0      0 127.0.0.1:53    0.0.0.0:*   LISTEN   1482/named
udp        0      0 127.0.0.1:53    0.0.0.0:*            1482/named
# ps aux | fgrep 1482
named     1482  0.0  1.0  93656 44900 ?      Ssl  Sep06   3:17 /usr/sbin/named -u named
root     20221  0.0  0.0   4144   552 pts/0  R+   21:09   0:00 fgrep --color=auto 1482
#

How can I get the full path of a program bound to a port when using ksh in AIX 6?
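For reference, on the Linux side the same lookup can also be scripted without netstat by walking /proc (a Linux-only sketch, so it does not answer the AIX part; the full path of the program is then /proc/&lt;pid&gt;/exe):

```python
import os
import re
import socket

def pids_listening_on(port):
    """Return PIDs owning a TCP socket on `port`, by walking /proc (Linux)."""
    inodes = set()
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)  # skip the header line
                for line in f:
                    fields = line.split()
                    local_port = int(fields[1].split(":")[1], 16)
                    if local_port == port:
                        inodes.add(fields[9])  # column 9 is the socket inode
        except FileNotFoundError:
            pass
    if not inodes:
        return []
    pattern = re.compile(r"socket:\[(?:%s)\]" % "|".join(map(re.escape, inodes)))
    pids = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            fd_dir = "/proc/%s/fd" % pid
            if any(pattern.fullmatch(os.readlink(os.path.join(fd_dir, fd)))
                   for fd in os.listdir(fd_dir)):
                # Full program path: os.readlink("/proc/%s/exe" % pid)
                pids.append(int(pid))
        except OSError:
            continue  # process exited, or we may not inspect it
    return pids

# Demo: listen on an ephemeral port and find our own process.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(os.getpid() in pids_listening_on(srv.getsockname()[1]))  # True on Linux
```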
Under AIX, how can I get the full path of a program bound to a port?
process;aix;socket;lsof;netstat
null
_softwareengineering.245400
I am reviewing an approach I see commonly used in storing objects (e.g. a socket client object), namely usage of a static container to hold the objects. Said objects are created by some helper function as follows:

create_client(params) {
    ...
    client* cl = new client(...);
    return cl;
}

The thing that confused me at first was the function being called like this:

if (!create_client(...))
    // generate error message

i.e. a copy of the pointer just seems to be thrown away. But on investigation I see this in the client constructor:

client::client(...) {
    ...
    coll[id] = this;
}

Where coll is a map of id to a pointer to the object. But anyway, just a collection. coll is static (not sure if that is relevant).

Is there a name for this idiom? Is it good practice?
Is storing pointer (of new'd object) in static collection from object constructor a common idiom in C++
c++
Not so familiar with C++, but I do not like the assignment into the static structure from the constructor, because:

1) It introduces a dependency: now the contained class needs to know about the containing instance. You cannot reuse the class without either rewriting it or using the same structure.

2) It causes a reference leak from the constructor, since the reference to the object is available to other threads before the constructor has finished executing.

At the very least, I would have moved the instance assignment from the constructor to the create_client function.
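The suggested change can be sketched in a few lines (Python rather than C++, purely to illustrate the shape; clients stands in for the static coll map from the question):

```python
clients = {}  # stands in for the static coll map

class Client:
    def __init__(self, client_id):
        self.client_id = client_id
        # No self-registration here: the constructor no longer publishes
        # a not-yet-fully-constructed object to the rest of the program.

def create_client(client_id):
    # Registration moved out of the constructor into the factory,
    # so Client no longer depends on the containing collection.
    cl = Client(client_id)
    clients[cl.client_id] = cl
    return cl

c = create_client(42)
print(clients[42] is c)  # True
```

The class becomes reusable without the collection, and nothing can observe the object before its constructor has finished.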
_unix.190431
I've tried to figure this out myself, but the myriad of options just baffles me.

I want to use ideally either ffmpeg or mencoder (or something else, but those two I know I have working) to convert any incoming video to a fixed screen size.

If the video is wider or too short for it, then centre-crop the video. If it's then not the right size, resize up or down to make it exactly the fixed screen size.

The exact final thing I need is 720x480 in an XviD AVI with an MP3 audio track.

I've found lots of pages showing how to resize to a maximum resolution, but I need the video to be exactly that resolution (with extra parts cropped off, no black bars).

Can anyone tell me the command line to run - or at least get me some/most of the way there? If it needs to be multiple command lines (run X to get the resolution, do this calculation and then run Y with the output of that calculation) I can script that.
Convert a video to a fixed screen size by cropping and resizing
video;ffmpeg;video encoding;mencoder
I'm no ffmpeg guru, but this should do the trick.

First of all, you can get the size of the input video like this:

ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width in.mp4

With a reasonably recent ffmpeg, you can resize your video with these options:

ffmpeg -i in.mp4 -vf scale=720:480 out.mp4

You can set the width or height to -1 in order to let ffmpeg resize the video keeping the aspect ratio. Actually, -2 is a better choice since the computed value should be even. So you could type:

ffmpeg -i in.mp4 -vf scale=720:-2 out.mp4

Once you get the video, it may be bigger than the expected 720x480 since you let ffmpeg compute the height, so you'll have to crop it. This can be done like this:

ffmpeg -i in.mp4 -filter:v crop=in_w:480 out.mp4

Finally, you could write a script like this (can easily be optimized, but I kept it simple for legibility):

#!/bin/bash

FILE="/tmp/test.mp4"
TMP="/tmp/tmp.mp4"
OUT="/tmp/out.mp4"
OUT_WIDTH=720
OUT_HEIGHT=480

# Get the size of input video:
eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width "${FILE}")
IN_WIDTH=${streams_stream_0_width}
IN_HEIGHT=${streams_stream_0_height}

# Get the difference between actual and desired size
W_DIFF=$[ ${OUT_WIDTH} - ${IN_WIDTH} ]
H_DIFF=$[ ${OUT_HEIGHT} - ${IN_HEIGHT} ]

# Let's take the shorter side, so the video will be at least as big
# as the desired size:
CROP_SIDE="n"
if [ ${W_DIFF} -lt ${H_DIFF} ] ; then
  SCALE="-2:${OUT_HEIGHT}"
  CROP_SIDE="w"
else
  SCALE="${OUT_WIDTH}:-2"
  CROP_SIDE="h"
fi

# Then perform a first resizing
ffmpeg -i "${FILE}" -vf scale=${SCALE} "${TMP}"

# Now get the temporary video size
eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width "${TMP}")
IN_WIDTH=${streams_stream_0_width}
IN_HEIGHT=${streams_stream_0_height}

# Calculate how much we should crop
if [ "z${CROP_SIDE}" = "zh" ] ; then
  DIFF=$[ ${IN_HEIGHT} - ${OUT_HEIGHT} ]
  CROP="in_w:in_h-${DIFF}"
elif [ "z${CROP_SIDE}" = "zw" ] ; then
  DIFF=$[ ${IN_WIDTH} - ${OUT_WIDTH} ]
  CROP="in_w-${DIFF}:in_h"
fi

# Then crop...
ffmpeg -i "${TMP}" -filter:v "crop=${CROP}" "${OUT}"
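To sanity-check the numbers the script will produce, the scale-then-crop arithmetic can also be written as a pure function (a Python sketch; the even() helper mimics ffmpeg's scale=...:-2, which picks an even size):

```python
def scale_and_crop(in_w, in_h, out_w, out_h):
    """Return ((scaled_w, scaled_h), (crop_w, crop_h)) for cover-scale then crop."""
    def even(x):  # scale=...:-2 makes ffmpeg choose an even dimension
        return round(x / 2) * 2
    if out_w - in_w < out_h - in_h:
        # Width has the most slack: scale height to out_h, crop excess width.
        scaled = (even(in_w * out_h / in_h), out_h)
        crop = (scaled[0] - out_w, 0)
    else:
        # Height has the most slack: scale width to out_w, crop excess height.
        scaled = (out_w, even(in_h * out_w / in_w))
        crop = (0, scaled[1] - out_h)
    return scaled, crop

# A 1280x720 source to 720x480: scale to 854x480, then crop 134 px of width.
print(scale_and_crop(1280, 720, 720, 480))  # ((854, 480), (134, 0))
```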
_unix.229177
I am looking into setting up my web server to have its own IMAP/POP/SMTP server. For IMAP and POP3 I am using Dovecot, and for the webmail I am using Horde. It's all mostly working: I can connect to the IMAP and POP server, I can send emails, and Horde shows the emails in the inbox.

The problem I am having is that at the moment there seems to be a bit of a disconnect between Horde and Dovecot. At the moment, if I create a new user inside Horde, before they can use the inbox, I first of all need to log in to my web server and create a user using useradd, specifying the directory, creating the Maildir directory within the home user directory and setting the password for the account.

What I am wanting to do, instead of having multiple different steps (e.g. create user in Horde, create user in Linux), is for it all to be done together, i.e. I create the user in Horde, and the inbox directory is all created without needing to do anything else.
Dovecot Authentication with MySQL and Horde Webmail
dovecot;imap;webmail
null
_unix.3765
I'm trying to write a script to upgrade a bunch of remote machines and would like to verify that one package in particular is upgraded.

With yum, I could say yum upgrade specific-package, and it would complain if it fails to upgrade the package. With apt, as far as I know, I can only say apt-get upgrade, and if apt fails to find the new version or fails to resolve the dependencies for the new version, it will silently decide not to install it.

Is there any way to get apt-get to complain if it decides not to upgrade a package (short of scripting a call to dpkg --compare-versions)?
apt-get - How to complain on failed upgrade?
scripting;apt
You can use apt-get install to do what you want. The apt-get manpage says the following:This is also the target to use if you want to upgrade one or more already-installed packages without upgrading every package you have on your system. Unlike the upgrade target, which installs the newest version of all currently installed packages, install will install the newest version of only the package(s) specified. Simply provide the name of the package(s) you wish to upgrade, and if a newer version is available, it (and its dependencies, as described above) will be downloaded and installed.If you are wanting to install a known version of a package you can specify that on the commandline too.apt-get install apache2=2.2.14-5ubuntu8.3
_cogsci.1836
I have limited experience feeding wildlife, and while it's fun to watch and I'm sure fun for the wildlife too, I'm interested in just how much fun. Is there's any noticeable difference in the following scenarios?Scenario A: a bird is given a pile of food, and can eat it at any pace.Scenario B: an element of play or competition is introduced. The birdhas to seek out seeds that are scattered on the ground, or has to compete with other birds. Alternatively, a challenging bird feeder is used.The end result is that there's enough food to satiate the bird in either case. It seems that in scenario B, it would take longer for the bird to get full. In scenario B, the bird has to hop from one cluster of seeds to the next, picking out the ones that the bird wants. There may be additional birds, also competing for the tastiest seeds. Alternatively, the bird has to balance on the birdfeeder, and coordinate precise movements to retrieve a seed from the feeder.If the bird is observed over the next X minutes or hours following the feeding, is there any noticeable difference in the overall demeanor, activity pattern, etc., between the two scenarios?UPDATE: I originally thought that this question does not apply to humans - we all sit at a table and eat meals from dishes, etc. I couldn't ask a question about kids throwing pie, which is what I thought of when I think play with food. But this evening, as I was deseeding a pomegranate, I realized that humans may like Playing with difficult fruit too PomegranateCitrus (oranges/grapefruit)PineappleGetting to the best parts of these fruit can be turned into a game too. 
For example, deseeding a pomegranate can be done with the minimum amount of juice splashing, while an orange peel can be removed in many ways as well.

The scenarios above can be modified to be easily applicable to humans too:

Scenario A: Given an unlimited supply of whole pomegranates to deseed.

Scenario B: Given an unlimited supply of pomegranate seeds on a plate.

Which person will leave the table feeling better? To get a sense of how people may play with the fruits above, search Google for "peel an orange", and you will find widely varying images.
Does introducing an element of play, hunt, or competition make feeding more rewarding?
motivation;animal cognition;gamification;feeding
Yes. See contrafreeloading or (for humans) the IKEA effect.

    Contrafreeloading: The behavior in which animals offered the choice between eating food provided to them for free or working to get that food would eat the most food from the source that required effort. This term was created in 1963 by animal psychologist Glen Jensen. Jensen ran a study on 200 male albino rats where the end result was the rats ate more from the food source where the rats had to press on a bar to get the pellet rather than the dish of pellets where they didn't have to do anything at all. Jensen then studied the behaviors of gerbils, mice, birds, fish, monkeys and chimpanzees. In fact many have studied contrafreeloading since then with similar results, except for the domestic cat, which likes to be served. This 1963 study's results were surprising because it would be more logical, from an evolutionary point of view, not to expend energy to get food when food is freely available.

    Why do pet bird people care about this? Birds seem to want to work for food, which is a wild instinctual behavior. Avian behaviorists recommend that pet bird owners encourage contrafreeloading behavior with foraging setups and bird toys within the pet birds' cages, and that pet bird owners engage their parrots by training commands like "Step up" or tricks such as "the eagle", and then use a treat reward system. This keeps pet birds busy, active and healthy.

From http://www.birdchannel.com/bird-words/contrafreeloading.aspx
_unix.228366
Should manpages avoid using Unicode characters like the em-dash (—)?

I've noticed most manpages use hyphens/minuses in their taglines where, I believe, an actual dash would have been more correct typographically. The man page of the dash shell lives up to its name and uses a proper em-dash.

I've filed a tentative pull request at https://github.com/rtomayko/ronn/pull/94 that fixes (?) this in ronn. Opinions?
Manpage typography and proper dashes
man;unicode
Unix man pages (using the man macros) used a single hyphen in the NAME section to separate the name from the description:

    .SH NAME
    sh \- command language

Unix documents and papers often used an em dash (\(em in troff) where appropriate.

More recent man pages are based on the mdoc macros, and use an .Nd macro to output an em dash. For instance, dash's man page includes

    .Sh NAME
    .Nm dash
    .Nd command interpreter (shell)

and the definition of .Nd is

    .de Nd
    .nop \[em] \$*
    ..

It's probably best to use either \(em or the .Nd macro, but you'd probably be OK using a UTF-8-encoded em dash instead.
_hardwarecs.2173
I have a Bluetooth Bluetake BT500 mouse and one of the buttons is going (the left one, of course), the case has some wear, and I'd like a comparable replacement. It seems that the mouse is no longer in production, and I can't find a comparable replacement. This mouse was a good match for a UMPC, so we're talking technology from almost 10 years ago.

My current mouse still mostly works, but if I can't repair the left mouse button switch, I'll need a replacement. Is there a comparably small and well-built Bluetooth mouse suitable to be a UMPC/Netbook/Tablet companion?

Things I like(d) about the mouse:

- Small
- Easy to pair
- Fit in a tacklebox (this happened to be dumb luck; I often store gadgets in tackleboxes to secure them in transit)
- Used AAA batteries (standard batteries)

Things I dislike(d) about the mouse:

- Used AAA batteries (I had to carry extra batteries)
- The battery cover latch got bent once and I could never get it quite right. It was a hard plastic and I was worried about breaking it
- The left mouse button started to go flaky after a few months to a year of use

New features that I might like to see:

- Built-in rechargeable battery
- Built-in battery can be swapped out for standard and/or replacement batteries
- The above battery would use a USB cable and also double as a USB HID mouse
- Comes with a case
- Especially durable
- Presentation features (laser, forward, back, etc.)
- Not expensive (< $50)
Replacement mouse for Bluetake BT500
mice
null
_codereview.93263
I'm writing unit tests for my Node.js/Express application with REST endpoints which retrieve stuff via Mongoose from the db. Since I'm testing only route functions, I want to mock Mongoose by providing custom request and response functions etc. I also had to mock my Person Mongoose schema module with mockery.

Here are my API endpoint functions:

    /* GET one person */
    function getPerson(req, res, next) {
      Person.findOne({_id: req.params.id}, function(err, person) {
        if (!person) {
          res.status(404);
          res.json({
            message: "ERROR: Person with id: " + req.params.id + " was not found from database."
          })
        } else {
          res.json(person)
        }
      });
    }

    /* POST one person */
    function postPerson(req, res, next) {
      console.log(Person);
      var p = new Person(req.body);
      p.save(function(err) {
        console.log(p);
        console.log(Person);
        if (err) {
          res.status(400);
          res.json(err);
        } else {
          var r = {
            message: "New person created",
            person: p
          };
          res.json(r);
          res.status(201);
        }
      });
    }

The important part is that the Person schema can be called in two ways:

    Person.save(function(){}) // static function

    var p = new Person(req.body);
    p.save(function(err){}) // instance function

Here is my mockery mock for the Mongoose schema of Person:

    function MockPerson(person) {
      this.content = {};
      for (var k in person) {
        if (person.hasOwnProperty(k)) {
          this.content[k] = person[k]; // add person's properties here to simulate Mongoose's Object
        }
      }
    }

    MockPerson.prototype.find = function(params, callback) {
      console.log("mockfind");
      return null
    };

    // Ugly. Note that in JS prototype functions are accessible only from instances and statics only from class.
    // Mongoose supports both Person.findOne and var p = new Person(); p.findOne() so we need both prototype and static functions
    MockPerson.prototype.findOne = MockPerson.findOne = function(params, callback) {
      console.log("mockfindOne");
      callback(null, null);
    };

    MockPerson.prototype.save = MockPerson.save = function(callback) {
      console.log("mocksave");
      callback();
    };

I'm still quite new with JS, so I got a bit confused until I realized that prototype functions are only accessible in instances of Person and statics only from the class Person. So this led me to add lines like:

    MockPerson.prototype.findOne = MockPerson.findOne = function(params, callback)

Maybe it's just me, but that looks a bit dubious. Any ideas how I could refactor either my endpoint code or mocks, or is this okay in your opinion?

Finally, here is my work-in-progress Mocha test for this thing:

    describe('personApiEndpoints', function() {
      before(function(){
        // The before() callback gets run before all tests in the suite. Do one-time setup here.
        mockery.enable({ useCleanCache: true });
        console.log("MOCKKAA");
        mockery.registerMock("../../models/person", MockPerson);
        endpoints = require("../routes/api/persons");
      });

      beforeEach(function(){
        // The beforeEach() callback gets run before each test in the suite.
      });

      it('return appropriate error response when getPerson() fails', function(done) {
        var mockRequest = createMockRequest();
        var mockResponse = createMockResponse();
        endpoints.getPerson(mockRequest, mockResponse);
        expect(mockResponse.resStatus).to.equal(404);
        expect(mockResponse.resJson).to.deep.equal({
          message: 'ERROR: Person with id: 12345678910 was not found from database.'
        });
        done();
      });

      it('return response when postPerson succeeds', function(done) {
        var mockRequest = createMockRequest();
        var mockResponse = createMockResponse();
        endpoints.postPerson(mockRequest, mockResponse);
        expect(mockResponse.resStatus).to.equal(201);
        expect(mockResponse.resJson.message).to.equal("New person created");
        expect(mockResponse.resJson.person.content).to.deep.equal(mockRequest.body);
        done();
      });

      after(function() {
        // after() is run after all your tests have completed. Do teardown here.
        mockery.deregisterAll();
        mockery.disable();
      })
    });
JavaScript static and prototype mocking
javascript;unit testing;static
You have a couple of HTTP return codes here and there in your code, but don't explain what each return code means. I recommend creating an object of the error codes with descriptive names so you can use those in place of just the plain code; it'll aid your readability greatly. Here is what I came up with:

    var HTTP_CODES = {
      CLIENT: {
        NOT_FOUND: 404,
        BAD_REQUEST: 400
      },
      SUCCESS: {
        CREATE: 201
      }
    };

Then you can easily access these codes like this, for example:

    res.status(HTTP_CODES.CLIENT.NOT_FOUND);

Notice how I used all capitals for the naming? Generally, across other languages, all capital letters are used for constant values.

From postPerson:

    if (err) {
      res.status(400);
      res.json(err);
    }
    else { // <----------
      var r = {
        message: "New person created",
        person: p
      };
      res.json(r);
      res.status(201);
    }

Typically, the else { will come on the same line as the closing } of the preceding if statement. You wrote it the correct way in getPerson, so I don't know why you changed it.

    function MockPerson(person) {
      this.content = {};
      for (var k in person) {

The signature of MockPerson should not be indented.

    I realized that prototype functions are only accessible in instances of Person and statics only from class Person

The line you entered after that:

    MockPerson.prototype.findOne = MockPerson.findOne = function(params, callback)

looks perfectly fine to me. However, do you really need to have a method that can be accessed as static and from an instance? You say in a comment:

    // Mongoose supports both Person.findOne and var p = new Person(); p.findOne() so we need both prototype and static functions

I don't know much about Mongoose, but if it supports both, can't you choose one?
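If both call styles really are needed, the dual assignment the question worries about can at least be centralized in one helper, so each mocked method is declared exactly once. This is only a sketch: the defineStaticAndInstance name is made up for illustration, and the method bodies mirror the question's mocks.

```javascript
// Attach one implementation to both the constructor (static access)
// and its prototype (instance access), so MockPerson.findOne(...) and
// new MockPerson().findOne(...) resolve to the same function.
function defineStaticAndInstance(ctor, name, fn) {
  ctor[name] = ctor.prototype[name] = fn;
}

function MockPerson(person) {
  this.content = person || {};
}

defineStaticAndInstance(MockPerson, 'findOne', function (params, callback) {
  callback(null, null); // pretend nothing was found
});

defineStaticAndInstance(MockPerson, 'save', function (callback) {
  callback(); // pretend the save succeeded
});

// Both call styles now work:
MockPerson.findOne({}, function (err, doc) { console.log(doc); }); // prints null
new MockPerson({}).save(function () { console.log('saved'); });    // prints saved
```

That keeps the static/prototype duplication in a single place, so adding or changing a mocked method is a one-line edit instead of a repeated double assignment.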
_unix.293613
I am trying to recover data from a MyBookLiveDuo 3TB*2 RAID1 setup, which is not responding to http/ssh access. I pulled one drive out of the enclosure and attached it via SATA to a Linux laptop. I have run the following.Dmesg says it is /dev/sdc:[125141.929807] ata4: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen[125141.929811] ata4: irq_stat 0x00000040, connection status changed[125141.929813] ata4: SError: { CommWake DevExch }[125141.929827] ata4: hard resetting link[125147.695101] ata4: link is slow to respond, please be patient (ready=0)[125151.729658] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)[125151.909228] ata4.00: ATA-9: WDC WD30EZRX-00D8PB0, 80.00A80, max UDMA/133[125151.909231] ata4.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA[125151.909888] ata4.00: configured for UDMA/133[125151.925597] ata4: EH complete[125151.925652] scsi 3:0:0:0: Direct-Access ATA WDC WD30EZRX-00D 0A80 PQ: 0 ANSI: 5[125151.925828] sd 3:0:0:0: [sdc] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)[125151.925830] sd 3:0:0:0: [sdc] 4096-byte physical blocks[125151.925833] sd 3:0:0:0: Attached scsi generic sg2 type 0[125151.925971] sd 3:0:0:0: [sdc] Write Protect is off[125151.925973] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00[125151.925995] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA[125152.002603] sdc: sdc1 sdc2 sdc3 sdc4[125152.003134] sd 3:0:0:0: [sdc] Attached SCSI disk[125152.220477] md: bind<sdc4>[125152.232731] md: bind<sdc3>[125152.233892] md/raid1:md126: active with 1 out of 2 mirrors[125152.233912] md126: detected capacity change from 0 to 512741376[125152.234269] md126: unknown partition tableTrying to mount sdc4 directly does not work:$ sudo mount /dev/sdc4 /home/xxx/wdc -t automount: unknown filesystem type 'linux_raid_member'Mdstat shows only sdc3 as active, but sdc4 as inactive:$ sudo cat /proc/mdstatPersonalities : [linear] [raid1] md126 : active (auto-read-only) raid1 sdc3[2] 
500724 blocks super 1.0 [2/1] [_U]md127 : inactive sdc4[0](S) 2925750264 blocks super 1.0unused devices: <none>Parted shows missing filesystem for sdc4 :$ sudo parted -lModel: ATA WDC WD30EZRX-00D (scsi)Disk /dev/sdc: 3001GBSector size (logical/physical): 512B/4096BPartition Table: gptNumber Start End Size File system Name Flags 3 15.7MB 528MB 513MB primary raid 1 528MB 2576MB 2048MB ext3 primary raid 2 2576MB 4624MB 2048MB ext3 primary raid 4 4624MB 3001GB 2996GB primary raidGdisk v (verify) does not find any problems though:$ sudo gdisk /dev/sdcGPT fdisk (gdisk) version 0.8.8Partition table scan: MBR: protective BSD: not present APM: not present GPT: presentFound valid GPT with protective MBR; using GPT.Command (? for help): pDisk /dev/sdc: 5860533168 sectors, 2.7 TiBLogical sector size: 512 bytesDisk identifier (GUID): FA502922-25C1-4759-AAF8-3D1DDA73F5C4Partition table holds up to 128 entriesFirst usable sector is 34, last usable sector is 5860533134Partitions will be aligned on 2048-sector boundariesTotal free space is 31597 sectors (15.4 MiB)Number Start (sector) End (sector) Size Code Name 1 1032192 5031935 1.9 GiB FD00 primary 2 5031936 9031679 1.9 GiB FD00 primary 3 30720 1032191 489.0 MiB FD00 primary 4 9031680 5860532223 2.7 TiB FD00 primaryCommand (? for help): vNo problems found. 31597 free sectors (15.4 MiB) available in 2segments, the largest of which is 30686 (15.0 MiB) in size.Command (? 
for help): qTrying to assemble and scan only shows /dev/sdc3 as active:$ sudo mdadm --stop /dev/md12[567]mdadm: stopped /dev/md126mdadm: stopped /dev/md127$ sudo cat /proc/mdstatPersonalities : [linear] [raid1] unused devices: <none>$ sudo mdadm --assemble --scanmdadm: /dev/md/MyBookLiveDuo:3 assembled from 1 drive - not enough to start the array.mdadm: /dev/md/MyBookLiveDuo:2 has been started with 1 drive (out of 2).mdadm: /dev/md/MyBookLiveDuo:3 assembled from 1 drive - not enough to start the array.$ sudo cat /proc/mdstatPersonalities : [linear] [raid1] md127 : active raid1 sdc3[2] 500724 blocks super 1.0 [2/1] [_U]unused devices: <none>E2fsck says bad magic number in super-block:$ sudo e2fsck /dev/sdc3e2fsck 1.42.9 (4-Feb-2014)/dev/sdc3 is in use.e2fsck: Cannot continue, aborting.$ sudo e2fsck /dev/sdc4e2fsck 1.42.9 (4-Feb-2014)ext2fs_open2: Bad magic number in super-blocke2fsck: Superblock invalid, trying backup blocks...e2fsck: Bad magic number in super-block while trying to open /dev/sdc4The superblock could not be read or does not describe a valid ext2/ext3/ext4filesystem. If the device is valid and it really contains an ext2/ext3/ext4filesystem (and not swap or ufs or something else), then the superblockis corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> or e2fsck -b 32768 <device>$ Trying to mount md127 thinks it is NTFS ???$ sudo mount /dev/md127 /home/xxx/wdc -t autoNTFS signature is missing.Failed to mount '/dev/md127': Invalid argumentThe device '/dev/md127' doesn't seem to have a valid NTFS.Maybe the wrong device is used? Or the whole disk instead of apartition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?Any suggestion on how to proceed further to be able to mount /dev/sdc4 and recover data ? EDIT 1: Output from examine sdc4 is below. 
It says RAID level = linear.$ sudo mdadm --examine /dev/sdc4/dev/sdc4: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 374e689e:3bfd050c:ab0b0dce:2d50f5fd Name : MyBookLiveDuo:3 Creation Time : Mon Sep 16 14:53:47 2013 Raid Level : linear Raid Devices : 2 Avail Dev Size : 5851500528 (2790.21 GiB 2995.97 GB) Used Dev Size : 0 Super Offset : 5851500528 sectors State : clean Device UUID : 9096f74b:0a8f2b61:93347be3:6d3b6c1b Update Time : Mon Sep 16 14:53:47 2013 Checksum : 77aa5963 - correct Events : 0 Rounding : 0K Device Role : Active device 0 Array State : AA ('A' == active, '.' == missing)Output from examine sdc3 is below, which mdstat says is active. It says RAID level = RAID1.$ sudo mdadm --examine /dev/sdc3/dev/sdc3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 7c040c5e:9c30ac6d:e534a129:20457e22 Name : MyBookLiveDuo:2 Creation Time : Wed Dec 31 19:01:40 1969 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 1001448 (489.07 MiB 512.74 MB) Array Size : 500724 (489.07 MiB 512.74 MB) Super Offset : 1001456 sectors State : clean Device UUID : 1d9fe3e3:d5ac7387:d9ededba:88ca24a5 Update Time : Sun Jul 3 11:53:31 2016 Checksum : 31589560 - correct Events : 101 Device Role : Active device 1 Array State : .A ('A' == active, '.' 
== missing)$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md126 : active (auto-read-only) raid1 sdc3[2] 500724 blocks super 1.0 [2/1] [_U]md127 : inactive sdc4[0](S) 2925750264 blocks super 1.0unused devices: <none>And partitions is:$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 1953411072 sda2 8 16 1953514584 sdb 8 17 248832 sdb1 8 18 1 sdb2 8 21 1953263616 sdb5 252 0 1953261568 dm-0 252 1 1919635456 dm-1 252 2 33488896 dm-2 8 32 2930266584 sdc 8 33 1999872 sdc1 8 34 1999872 sdc2 8 35 500736 sdc3 8 36 2925750272 sdc4 9 126 500724 md126EDIT 2: (Different RAID disk from another enclosure)Using a good disk from a different WDLiveDuo enclosure, which is configured in same RAID1 way (2*3TB disks), the output of examine for sdc3 and sdc4 shows that both have RAID level = RAID1. So, for the bad disk listed above, sdc3 shows correct RAID level = RAID1, but somehow, sdc4 does not have the correct RAID level configured. 
Is there a way to change/fix the RAID level to be RAID1 and then will it allow me to load it and get data out of the bad disk ?$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md126 : inactive sdc4[2](S) 2925750136 blocks super 1.0md127 : inactive sdc3[1](S) 500724 blocks super 1.0unused devices: <none>$$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 1953411072 sda2 8 16 1953514584 sdb 8 17 248832 sdb1 8 18 1 sdb2 8 21 1953263616 sdb5 252 0 1953261568 dm-0 252 1 1919635456 dm-1 252 2 33488896 dm-2 8 32 2930266584 sdc 8 33 1999872 sdc1 8 34 1999872 sdc2 8 35 500736 sdc3 8 36 2925750272 sdc4$$ sudo mdadm --examine /dev/sdc3/dev/sdc3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 7c040c5e:9c30ac6d:e534a129:20457e22 Name : MyBookLiveDuo:2 Creation Time : Wed Dec 31 19:01:40 1969 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 1001448 (489.07 MiB 512.74 MB) Array Size : 500724 (489.07 MiB 512.74 MB) Super Offset : 1001456 sectors State : clean Device UUID : e0963cfc:7ba16214:94e24c90:32988d39 Update Time : Mon Jul 4 13:20:48 2016 Checksum : 412bab99 - correct Events : 288 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing)$$ sudo mdadm --examine /dev/sdc4/dev/sdc4: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : ac48cc98:1d450838:dd0b0364:61b3168e Name : MyBookLiveDuo:3 Creation Time : Sat Jul 2 19:57:43 2016 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 5851500272 (2790.21 GiB 2995.97 GB) Array Size : 2925750136 (2790.21 GiB 2995.97 GB) Super Offset : 5851500528 sectors State : clean Device UUID : 1233b655:2f5fd745:c1c71658:f05af045 Update Time : Mon Jul 4 13:21:15 2016 Checksum : b83b08dc - correct Events : 54432 Device Role : Active device 1 Array State : AA ('A' == active, '.' 
== missing)$$ sudo mdadm --stop /dev/md12[567]mdadm: stopped /dev/md126mdadm: stopped /dev/md127$$ sudo mdadm --assemble --scanmdadm: /dev/md/MyBookLiveDuo:3 has been started with 1 drive (out of 2).mdadm: /dev/md/MyBookLiveDuo:2 has been started with 1 drive (out of 2).$$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md126 : active raid1 sdc3[1] 500724 blocks super 1.0 [2/1] [_U]md127 : active raid1 sdc4[2] 2925750136 blocks super 1.0 [2/1] [_U]unused devices: <none>$$ sudo parted -l /dev/sdcModel: ATA WDC WD30EFRX-68E (scsi)Disk /dev/sdc: 3001GBSector size (logical/physical): 512B/4096BPartition Table: gptNumber Start End Size File system Name Flags 3 15.7MB 528MB 513MB primary raid 1 528MB 2576MB 2048MB ext3 primary raid 2 2576MB 4624MB 2048MB ext3 primary raid 4 4624MB 3001GB 2996GB ext4 primary raidError: /dev/md126: unrecognised disk label Model: Linux Software RAID Array (md)Disk /dev/md127: 2996GBSector size (logical/physical): 512B/4096BPartition Table: loopNumber Start End Size File system Flags 1 0.00B 2996GB 2996GB ext4$$ sudo gdisk /dev/sdcGPT fdisk (gdisk) version 0.8.8Partition table scan: MBR: protective BSD: not present APM: not present GPT: presentFound valid GPT with protective MBR; using GPT.Command (? for help): pDisk /dev/sdc: 5860533168 sectors, 2.7 TiBLogical sector size: 512 bytesDisk identifier (GUID): BDE03D26-EB85-4348-A4E0-A229FC01EE93Partition table holds up to 128 entriesFirst usable sector is 34, last usable sector is 5860533134Partitions will be aligned on 2048-sector boundariesTotal free space is 31597 sectors (15.4 MiB)Number Start (sector) End (sector) Size Code Name 1 1032192 5031935 1.9 GiB FD00 primary 2 5031936 9031679 1.9 GiB FD00 primary 3 30720 1032191 489.0 MiB FD00 primary 4 9031680 5860532223 2.7 TiB FD00 primaryCommand (? for help): vNo problems found. 
31597 free sectors (15.4 MiB) available in 2segments, the largest of which is 30686 (15.0 MiB) in size.Command (? for help): q$$ sudo mount /dev/md127 /home/xxx/wdc -t ext4mount: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so
mdadm - not enough to start the array
mdadm;raid1
null
_unix.363063
I'm testing the Crystal web server, which seems to be the fastest platform we have today. Here are the related URLs:

https://github.com/crystal-lang/crystal
https://crystal-lang.org/docs/overview/http_server.html?q=
http://blog.seraum.com/crystal-lang-vs-nodejs-vs-golang-vs-http-benchmark
https://github.com/kemalcr/kemal

What I want is to serve pages over HTTPS too. This is a simple example of creating an HTTP server in Crystal (I assume you already have Crystal installed and know how to run this example):

    require "http/server"

    server = HTTP::Server.new(PORT) do |context|
      context.response.print "Welcome on To my World"
    end

    puts "Listening on http://HOST:PORT"
    server.listen

Is it possible to create an HTTPS server instead? Are there any other projects that use the Crystal language?

Regards, Ricardo / Brqx.
Crystal - HTTP/2 - HTTPS server - Is it possible?
openssl;webserver;ssl;https
null
_webmaster.13398
Has Google ever published, or has anybody ever calculated, how much PageRank juice flows from one page linking to another?

For example, if I have a page about cats with a PR of 3 and I link to someone else's page about cats (i.e. a relevant category), then the target page will get some PR benefit from that link. What percentage of the linking page's PR do they get?
PageRank flow percentage
google;pagerank
The PR that is passed through a link is:

    (PR of the page / # of links on the page) * damping factor

The damping factor was originally 0.85, although it is likely that has changed since then. Also keep in mind that although PR is cumulative, the calculation of PR iterates until PR flattens (I believe to 1; you'll need to read the actual formula, or a summary of what it means, to understand that). Basically, it's impossible to determine what the PR of a page will be, as the number of pages indexed by Google influences every page's final PR.

FYI, relevance has nothing to do with PR. PR is a numerical representation of link popularity only.
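The formula above is easy to sanity-check in a few lines of JavaScript; the 0.85 damping factor and the PR-3 page with 10 outbound links are just the illustrative numbers from this answer and the question.

```javascript
// PR passed to each linked page, per the formula above:
// (PR of the page / # of links on the page) * damping factor
function prPassedPerLink(pageRank, outboundLinks, damping) {
  if (damping === undefined) damping = 0.85; // the original damping factor
  if (outboundLinks <= 0) return 0;          // no outbound links, nothing to pass
  return (pageRank / outboundLinks) * damping;
}

// A PR-3 page with 10 outbound links passes roughly 0.255 through each link.
console.log(prPassedPerLink(3, 10)); // ≈ 0.255
```

Remember this is only the per-link contribution in a single step; the real computation iterates this over the whole link graph until the values settle.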
_webapps.46499
I have used Gmail for many years, and during this time I have deleted a lot of email messages and contacts.I discovered that when I use some services (for example, DropBox) to invite my friends by my Gmail account, Google also suggests to me contacts I deleted a long time ago!Why? Where is Dropbox finding these old contacts?
Google keeps old and deleted contacts list?
gmail;google contacts
null
_codereview.75603
This is actually a second follow-up to Correct MVVM format, but I have made different changes.This is my ViewModel.cs class for my MainPage.xaml:public partial class ViewModel : INotifyPropertyChanged{ private ObservableCollection<MenuItem> _back = new ObservableCollection<MenuItem>(); public ObservableCollection<MenuItem> Back { get { return _back; } set { _back = value; } } private ObservableCollection<string> _back1 = new ObservableCollection<string>(); public ObservableCollection<string> Back1 { get { return _back1; } set { _back1 = value; } } private ObservableCollection<MenuItem> _itemList = new ObservableCollection<MenuItem>(); public ObservableCollection<MenuItem> ItemList { get { return _itemList; } set { _itemList = value; } } private ObservableCollection<string> _itemTitles = new ObservableCollection<string>(); public ObservableCollection<string> ItemTitles { get { return _itemTitles; } set { if (_itemTitles == value) return; _itemTitles = value; OnPropertyChanged(); } } private string _currentTitle = Menu 1; public string CurrentTitle { get { return _currentTitle; } set { _currentTitle = value; OnPropertyChanged(); } } private MenuItem _currentItem = new MenuItem(Menu 1, typeof(Menus.Menu1), AvailWSMenu.Menu1); public MenuItem CurrentItem { get { return _currentItem; } set { _currentItem = value; } } public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged([CallerMemberName]string propertyName = null) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } public Windows.ApplicationModel.Resources.ResourceLoader resourceFile = new Windows.ApplicationModel.Resources.ResourceLoader(); public ViewModel() { ItemList.Add(new MenuItem(resourceFile.GetString(Menu 1), typeof(Menus.Menu1), AvailWSMenu.Menu1)); ItemList.Add(new MenuItem(resourceFile.GetString(Menu 2), typeof(Menus.Menu2), AvailWSMenu.Menu2)); ItemList.Add(new 
MenuItem(resourceFile.GetString(Menu 3), typeof(Menus.Menu3), AvailWSMenu.Menu3)); ItemList.Add(new MenuItem(resourceFile.GetString(Menu 4), typeof(Menus.Menu4), AvailWSMenu.Menu4)); ItemTitles = new ObservableCollection<string>(ItemList.Select(x => x.Title)); } public void SelectionChanged(string newSelection) { Back.Insert(0, new MenuItem(newSelection, ItemList[ItemTitles.IndexOf(newSelection)].Page, ItemList[ItemTitles.IndexOf(newSelection)].Menu)); if (newSelection.StartsWith( )) return; ItemList.RemoveAll(_ => _.Title.StartsWith( )); ItemTitles.RemoveAll(_ => _.StartsWith( )); switch (newSelection) { case Menu 1: ItemList.Insert(1, (new MenuItem(resourceFile.GetString(Submenu 2), typeof(Menus.MenuItems.Submenu2), AvailWSMenu.Menu1))); ItemList.Insert(1, (new MenuItem(resourceFile.GetString(Submenu 1), typeof(Menus.MenuItems.Submenu1), AvailWSMenu.Menu1))); ItemTitles.Insert(1, resourceFile.GetString(Submenu 2)); ItemTitles.Insert(1, resourceFile.GetString(Submenu 1)); break; case Menu 2: ItemList.Insert(2, (new MenuItem(resourceFile.GetString(Submenu 4), typeof(Menus.MenuItems.Submenu4), AvailWSMenu.Menu2))); ItemList.Insert(2, (new MenuItem(resourceFile.GetString(Submenu 3), typeof(Menus.MenuItems.Submenu3), AvailWSMenu.Menu2))); ItemTitles.Insert(2, resourceFile.GetString(Submenu 4)); ItemTitles.Insert(2, resourceFile.GetString(Submenu 3)); break; case Menu 3: ItemList.Insert(3, (new MenuItem(resourceFile.GetString(Submenu 5), typeof(Menus.MenuItems.Submenu5), AvailWSMenu.Menu3))); ItemTitles.Insert(3, resourceFile.GetString(Submenu 5)); break; case Menu 4: ItemList.Insert(4, (new MenuItem(resourceFile.GetString(Submenu 10), typeof(Menus.MenuItems.Submenu10), AvailWSMenu.Menu4))); ItemList.Insert(4, (new MenuItem(resourceFile.GetString(Submenu 9), typeof(Menus.MenuItems.Submenu9), AvailWSMenu.Menu4))); ItemList.Insert(4, (new MenuItem(resourceFile.GetString(Submenu 8), typeof(Menus.MenuItems.Submenu8), AvailWSMenu.Menu4))); ItemList.Insert(4, (new 
MenuItem(resourceFile.GetString(Submenu 7), typeof(Menus.MenuItems.Submenu7), AvailWSMenu.Menu4))); ItemList.Insert(4, (new MenuItem(resourceFile.GetString(Submenu 6), typeof(Menus.MenuItems.Submenu6), AvailWSMenu.Menu4))); ItemTitles.Insert(4, resourceFile.GetString(Submenu 10)); ItemTitles.Insert(4, resourceFile.GetString(Submenu 9)); ItemTitles.Insert(4, resourceFile.GetString(Submenu 8)); ItemTitles.Insert(4, resourceFile.GetString(Submenu 7)); ItemTitles.Insert(4, resourceFile.GetString(Submenu 6)); break; } } public void GoBack() { if (Back.Count == 1) return; if (Back[1].Title.StartsWith( ) && Back[0].Menu != Back[1].Menu) { Back.RemoveAt(0); switch (Back[1].Menu) { case AvailWSMenu.Menu1: CurrentTitle = resourceFile.GetString(Menu 1); break; case AvailWSMenu.Menu2: CurrentTitle = resourceFile.GetString(Menu 2); break; case AvailWSMenu.Menu3: CurrentTitle = resourceFile.GetString(Menu 3); break; case AvailWSMenu.Menu4: CurrentTitle = resourceFile.GetString(Menu 4); break; } } Back.RemoveAt(0); CurrentTitle = Back[0].Title; Back.RemoveAt(0); }}My MenuItem.cs contains two things, an enum AvailMenus and a class MenuItem:public enum AvailMenus{ Menu1, Menu2, Menu3, Menu4, MenuItem}public class MenuItem{ public MenuItem(string title, Type page, AvailMenus menu) { Title = title; Page = page; Menu = menu; } private string _title = ; public string Title { get { return _title; } set { _title = value; } } private Type _page = null; public Type Page { get { return _page; } set { _page = value; } } private AvailMenus _menu = new AvailMenus(); public AvailMenus Menu { get { return _menu; } set { _menu = value; } }}This is my MainPage.xaml:<Grid Background=White> <Grid.RowDefinitions> <RowDefinition Height=100 x:Name=TitleRow/> <RowDefinition Height=* x:Name=DataRow/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width=300 x:Name=ItemsColumn/> <ColumnDefinition Width=* x:Name=DataColumn/> </Grid.ColumnDefinitions> <ListBox Background=WhiteSmoke 
Name=Items Grid.Column=0 Grid.RowSpan=2 ItemsSource={Binding ItemTitles} SelectionChanged=OnSelectionChanged Margin=-2,-2,0,-2 Padding=0,10 SelectedValue={Binding CurrentTitle, Mode=TwoWay} /> <Viewbox Name=TitleView Margin=10 VerticalAlignment=Center HorizontalAlignment=Left Grid.Column=1> <TextBlock Name=TitleText Foreground=Black Margin=5,10,5,5 Text={Binding SelectedValue, ElementName=Items} /> </Viewbox> <Border Grid.Column=1 BorderBrush=Black BorderThickness=0,1,0,0 VerticalAlignment=Bottom Height=1/> <Frame Grid.Column=1 Grid.Row=1 Foreground=Black FontSize=20 Margin=20,20,0,20 Name=DataFrame VerticalAlignment=Top /></Grid><Page.BottomAppBar> <CommandBar> <AppBarButton Label=Back Icon=Back Click=AppBarButton_Click/> </CommandBar></Page.BottomAppBar>This is the code-behind in MainPage.xaml.cs:public sealed partial class MainPage : Page{ private ViewModel Data = new ViewModel(); public MainPage() { this.InitializeComponent(); this.DataContext = Data; } private void OnSelectionChanged(object sender, SelectionChangedEventArgs e) { Data.SelectionChanged(Items.SelectedValue.ToString()); DataFrame.Navigate(Data.ItemList[Items.SelectedIndex].Page); } private void AppBarButton_Click(object sender, RoutedEventArgs e) { Data.GoBack(); }}This is the RemoveAll extension for ObservableCollection:public static class ExtensionMethods{ public static int RemoveAll<T>(this ObservableCollection<T> coll, Func<T, bool> condition) { var itemsToRemove = coll.Where(condition).ToList(); foreach (var itemToRemove in itemsToRemove) { coll.Remove(itemToRemove); } return itemsToRemove.Count; }}The one thing I really dislike is maintaining the two ObservableCollections ItemList and ItemTitles, but I haven't been able to get it to work any other way. I would like my use of MVVM reviewed in particular, as well as my method of handling navigation and everything else.
MVVM, Navigation, and More - Part 3
c#;wpf;mvvm;xaml
You are using this a lot:

ItemList.Add(new MenuItem(resourceFile.GetString("Menu 1"), typeof(Menus.Menu1), AvailWSMenu.Menu1));

and also:

ItemList.Insert(2, (new MenuItem(resourceFile.GetString("Submenu 4"), typeof(Menus.MenuItems.Submenu4), AvailWSMenu.Menu2)));

It might make more sense to create a:

MenuItem CreateMenuItem(string resourceName, Type type, AvailMenus item)
{
}

then your Add and Insert calls can both use it. Why bother? Well, say you want to add a setting to each and every one of them later, or you change the way you construct one, e.g. swap out the resource file call. It should be abstracted.

In the interest of saving you a lot of messy constructors, using the property initializer syntax might also be cleaner:

new MenuItem
{
    Title = resourceFile.GetString(Submenu4String),
    Page = typeof(Menus.MenuItems.Submenu4),
    Menu = AvailWSMenu.Menu2
};

Of course if any of those properties are required then the constructor is the way to go, but the lack of null validation makes me think they are not.

Also, there are a lot of magic strings going on. At minimum, you should make those constants. I would personally just use an enum, but that is a design choice:

const string Menu1String = "Menu1";
const string Menu2String = "Menu2";

It may seem superfluous, but when it comes to debugging it makes things a lot easier than finding that one place you wrote "Memu1".

Also, have a think about your viewmodel's goals.
It is effectively providing data to the view and allowing a select, right? In the interest of only exposing the abilities I want, I prefer to make an interface for my ViewModel:

interface IMainViewModel
{
    ObservableCollection<MenuItem> Back { get; }
    ObservableCollection<MenuItem> Back1 { get; }
    ObservableCollection<MenuItem> ItemList { get; }
    string CurrentTitle { get; set; }
    MenuItem CurrentItem { get; set; }
    string SelectedValue { set; }
    ICommand UpdateSelection { get; }
    ICommand GoBack { get; }
}

Having a public setter exposes the possibility of the view completely replacing the ObservableCollection or setting it null. Should that ever happen?

Also, using ICommand instead of calling methods directly is the MVVM way; you can then bind it directly to things like buttons:

Command="{Binding GoBack}"

which has some nice features like auto-deactivating the button when CanExecute fails, and having the ability to have a CanExecute is nice. Good separation of concerns.

Also, if you decide down the line to maybe go full ViewModelLocator, using the interface as a binding in your view allows some hot swapping of views/models. What I mean by that is:

IMainViewModel Vm { get { return DataContext as IMainViewModel; } }

in your view means that if you want to swap out logic/models you don't have to change references tied directly to the model instance.

private void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (Vm != null)
    {
        Vm.UpdateSelection.Execute(null);
    }
    DataFrame.Navigate(Data.ItemList[Items.SelectedIndex].Page);
}

In fact, that SelectionChanged logic I would recommend moving to your model too. Your ViewModel should, if possible, handle your navigation, not the view.

As for your notification changes, have a look at Fody weavers. When doing a WPF UI, I personally prefer the cleanliness of adding a single

[ImplementPropertyChanged]
public class ViewModel
{
}

and having compile-time addition of the notify properties.
Although if you do want more manual control, that is fine too.

Also, XAML grows fast. Try to remove any extraneous properties. Do you NEED names for each row/column?

P.S. Also a pet peeve of mine: HUGE METHODS, especially switch/case. I try to avoid them as much as possible. If I have to use them, as a golden rule I only ever have one line after the actual case, so:

public void SelectionChanged(string newSelection)
{
    const string BLANK = " ";
    Back.Insert(0, CreateMenuItem(newSelection));
    if (newSelection.StartsWith(BLANK))
        return;

    ItemList.RemoveAll(_ => _.Title.StartsWith(BLANK));
    ItemTitles.RemoveAll(_ => _.StartsWith(BLANK));

    switch (newSelection)
    {
        case Menu1String: UpdateMenu1(); break;
        case Menu2String: UpdateMenu2(); break;
        case Menu3String: UpdateMenu3(); break;
        case Menu4String: UpdateMenu4(); break;
    }
}

I find that much cleaner IMO.
_unix.111311
I'm tasked with writing a shell script which checks that a filename conforms to a specific pattern, and I'm not sure how to go about it. The filename should follow a pattern which looks like:

(Project-ID)_(Env)_(Source-System-ID)_(DataDescriptor)_(CCYYMMDD)_(Seq)_(Freeformat)_(codepage)

Project-ID should be alphanumeric and between 3-8 characters.
Env should consist of a 3 character code (DEV, SYT, SIT, UAT or PRD).
Source-System-ID should be a variable-length number.
DataDescriptor should be alphanumeric, such as CUST.
CCYYMMDD should be a date in the format CCYYMMDD.
Seq should be a number such as 01, 02, 03 etc.
Freeformat should be alphanumeric - used to give the filename additional description.
codepage should represent the file extension, such as .ascii or .EBCDIC.

An example file might look like:

ABC_PRD_00227_ACC_20130128_01_LTSB.CP1252

If the file doesn't conform to the pattern, it would be good if some sort of warning could be displayed.
How do I check that a filename conforms to a pattern?
shell script;grep;filenames;patterns
Assuming you use a recent version of zsh, ksh93 or bash and the filename doesn't contain newline characters:

# split up the filename into its parts
IFS=_ read -r pjid env srcid desc date seq free <<< "$filename"

# extract the codepage from the free text
code=${free##*.}
free=${free%.*}

# validate
if [[ $pjid =~ ^[[:alnum:]]{3,8}$ ]] &&
   [[ $env == DEV || $env == SYT || ... ]] &&
   [[ $srcid =~ ^[[:digit:]]+$ ]] &&
   [[ $desc =~ ^[[:alnum:]]+$ ]] &&
   [[ $date =~ ^[[:digit:]]{8}$ ]] &&
   date -d "$date" >/dev/null 2>&1 &&
   [[ $seq =~ ^[[:digit:]]+$ ]] &&
   [[ $free =~ ^[[:alnum:]]+$ ]] &&
   [[ $code =~ ^[[:alnum:]]+$ ]]   # need specific codepage validation?
then
    echo "file name format is OK"
fi
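For illustration, the checks above can be wrapped into a reusable function and run against the example name from the question. `check_name` is a hypothetical helper, and the `date -d` sanity test is omitted here to keep the sketch portable - add it back on GNU systems:

```shell
#!/usr/bin/env bash

# Hypothetical wrapper around the validation logic shown above.
check_name() {
    local filename=$1
    local pjid env srcid desc date seq free code
    IFS=_ read -r pjid env srcid desc date seq free <<< "$filename"
    code=${free##*.}
    free=${free%.*}
    if [[ $pjid =~ ^[[:alnum:]]{3,8}$ ]] &&
       [[ $env =~ ^(DEV|SYT|SIT|UAT|PRD)$ ]] &&
       [[ $srcid =~ ^[[:digit:]]+$ ]] &&
       [[ $desc =~ ^[[:alnum:]]+$ ]] &&
       [[ $date =~ ^[[:digit:]]{8}$ ]] &&
       [[ $seq =~ ^[[:digit:]]+$ ]] &&
       [[ $free =~ ^[[:alnum:]]+$ ]] &&
       [[ $code =~ ^[[:alnum:]]+$ ]]; then
        echo OK
    else
        echo "WARNING: '$filename' does not match the expected pattern" >&2
        return 1
    fi
}

check_name "ABC_PRD_00227_ACC_20130128_01_LTSB.CP1252"         # prints OK
check_name "ABC_PRD_BAD_ACC_20130128_01_LTSB.CP1252" || true   # warns on stderr
```

The second call fails because the Source-System-ID field ("BAD") is not numeric, which produces the warning the question asks for.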
_webmaster.65336
On my website www.trickwarehouse.com I created a post about keyloggers. When I searched Google for "what are keyloggers?", my post was at position 16, but when I checked again an hour later - BOOM! - nothing from my website appeared within the top 100 results. Other keywords that were in the top 100 results have also been pushed back by 2-3 positions. Why is this happening? Any answers?
Links removed from google search results
seo;google;google search;googlebot
null
_reverseengineering.8837
I would like to make IDA disassemble the .plt section of ELF files correctly, e.g. as objdump does:

objdump -D -M intel asdf | grep "Disassembly of section .plt" -A80

I don't know why, but IDA gives me this (note the dw ? and dq ?):

Even the IDA hex editor does not show me the correct values at the corresponding addresses, but gives me ??s.

I tried selecting and deselecting the settings described in the IDA online help (search for "PLT"), but this didn't help:

0: Replace PIC form of 'Procedure Linkage Table' to non PIC form
1: Direct jumping from PLT (without GOT) regardless of its form
2: Convert PIC form of loading _GLOBAL_OFFSET_TABLE_[] of address
3: Obliterate auxiliary bytes in PLT & GOT for 'final autoanalysis'
4: Natural form of PIC GOT address loading in relocatable file
5: Unpatched form of PIC GOT references in relocatable file

How can I configure IDA so that I can access the instructions in the .plt section of an ELF file with IDAPython?
ELF: How to make IDA show me the correct PLT (Procedure Linkage Table) content?
ida;disassembly;idapython;elf;plt
For a 32-bit (but not 64-bit) x86 ELF binary, selecting the following options works:

UPDATE: There is a bug in IDA 6.8 (and probably earlier versions): for 64-bit x86 ELF binaries, I get the desired disassembly result only when additionally deselecting "Replace PIC form of ...". This was the reason for my confusion and made me post my question. Hex-Rays sent me a patch which fixed it (and which will probably be part of future versions).
_unix.150978
On GNU/Linux systems I have only seen positive PIDs, but when a kernel panic occurred I saw information about a process with PID 0. What is that?

On Minix 3 I have seen processes with negative PIDs. Minix is a POSIX-compatible system, but POSIX allows only positive PIDs. What are those?

Which variable type should I use in C for storing a process ID?
Process IDs range
process;posix
null
_unix.374906
I have a text file which contains the lines below:

From: arkit Corp. <[email protected]>
Sent: Friday, June 16, 2017 6:35 PM
To: User Name
Subject: arkit Corp.: activity alert. <http:// arkit.co.in/>

ACTIVITY ALERT FOR:
Ravi

https:// arkit.co.in/ Path Read (03/07/2017)
Path: /website/upload/file.txt
https:// arkit.co.in/ Path Read (04/07/2017)
Path: /website/upload/file1.txt

Copyright 2017 arkit Corp.. All Rights Reserved.

I would like to print them as below:

https:// arkit.co.in/ Path Read (03/07/2017) Path: /website/upload/file.txt
https:// arkit.co.in/ Path Read (04/07/2017) Path: /website/upload/file1.txt

Can anyone suggest how I can print them side by side?
Print matched pattern lines side by side
shell script
If you want to parse the whole message, use addresses like this:

sed -n '/https:/h;/Path:/{H;g;s/\n/ /p;}' yourfile

Don't output by default (-n). Put a line starting with https: in the hold buffer, then for the Path: line, append it to the hold buffer, move it to the pattern space and replace the newline with a whitespace.

Or a different approach:

sed -e '/^https:/!d;:a' -e '$!N;/Path:/!ba' -e 's/\n\n*/ /' yourfile

This means: if the line doesn't start with https:, delete it (/^https:/!d). Else start a loop (:a) to add new lines if there are any ($!N), until we have added the Path: line (/Path:/!ba). Finally replace newlines with a whitespace to put everything on one line (s/\n\n*/ /).
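As a quick sanity check, the first command can be run against a stripped-down copy of the sample message (sample.txt is just a demo file name):

```shell
# Build a miniature version of the input from the question.
cat > sample.txt <<'EOF'
ACTIVITY ALERT FOR:
Ravi
https:// arkit.co.in/ Path Read (03/07/2017)
Path: /website/upload/file.txt
https:// arkit.co.in/ Path Read (04/07/2017)
Path: /website/upload/file1.txt
EOF

# Join each https: line with the Path: line that follows it.
sed -n '/https:/h;/Path:/{H;g;s/\n/ /p;}' sample.txt
```

This prints the two joined lines shown in the question; the header lines are filtered out because they match neither /https:/ nor /Path:/.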
_softwareengineering.176085
My question, bottom line, is: what is the appropriate (best) way to manage our connection to a MySQL database with C#? Currently I'm working on a C# (WinForms) <-> MySQL application, and I've been looking at Server Connections in MySQL Administrator, watching the execution of my queries, connections opening and closing, and so on. In my C# code I'm working like this (this is an example):

public void InsertInto(string qs_insert)
{
    try
    {
        conn = new MySqlConnection(cs);
        conn.Open();
        cmd = new MySqlCommand();
        cmd.Connection = conn;
        cmd.CommandText = qs_insert;
        cmd.ExecuteNonQuery();
    }
    catch (MySqlException ex)
    {
        MessageBox.Show(ex.ToString());
    }
    finally
    {
        if (conn != null)
        {
            conn.Close();
        }
    }
}

Meaning, every time I want to insert something into a db table I call this method and pass it an insert query string. A connection is established and opened, the query is executed, and the connection is closed. So, we could conclude that this is the way I manage the MySQL connection. For me, currently, this works and it's enough for my requirements. But there's Java & Hibernate, C# & Entity Framework, and I'm doing this :-/ and it's confusing me.

Should I use MySQL with Entity Framework? What is the best way for collaboration between C# and MySQL? I don't want to worry about whether a connection I've opened is closed, or whether that same connection could be faster, ...
What is appropriate way for managing MySQL connection through C#
c#;mysql
null
_softwareengineering.163105
This question may be considered subjective (I got a warning) and be closed, but I will risk it, as I need some good advice/experience on this.I read the following at the 'About' page of Fog Creek Software, the company that Joel Spolsky founded and is CEO of:Back in the year 2000, the founders of Fog Creek, Joel Spolsky and Michael Pryor, were having trouble finding a place to work where programmers had decent working conditions and got an opportunity to do great work, without bumbling, non-technical managers getting in the way. Every high tech company claimed they wanted great programmers, but they wouldnt put their money where their mouth was.It started with the physical environment (with dozens of cubicles jammed into a noisy, dark room, where the salespeople shouting on the phone make it impossible for developers to concentrate). But it went much deeper than that. Managers, terrified of change, treated any new idea as a bizarre virus to be quarantined. Napoleon-complex junior managers insisted that things be done exactly their way or youre fired. Corporate Furniture Police writhed in agony when anyone taped up a movie poster in their cubicle. Disorganization was so rampant that even if the ideas were good, it would have been impossible to make a product out of them. Inexperienced managers practiced hit-and-run management, issuing stern orders on exactly how to do things without sticking around to see the farcical results of their fiats.And worst of all, the MBA-types in charge thought that coding was a support function, basically a fancy form of typing.A blunt truth about most of today's big software companies! Unfortunately not every developer is as gutsy (or lucky, may I say?) as Joel Spolsky! So my question is:How best to work with such managers, keep them at bay and still deliver great work?
How best to keep bumbling, non-technical managers at bay and still deliver good work?
project management;software;engineering
null
_unix.145867
We are developing an embedded Linux system using BusyBox v1.19.4. The system is working fine except for a strange mount issue with time and date.

When we create a directory it has the correct time and date:

$ ls -ld
drwxrwxrwx 2 root root 40 Jul 21 16:16 media_2e040
$

However, once we mount a device, the time and date change:

$ mount /dev/sdb1 media_2e040
$ ls -ld
drwxr-xr-x 9 root root 16384 Jan 1 1970 media_2e040
$

Not knowing much about mounting, I can run touch on the directory, and the time/date updates to the correct time.

Is there a reason for this behaviour of mount? Should we be running touch to keep the time and date? Thanks.
Mount changes directory time to 1970
mount;date;timestamps;busybox
Your drive /dev/sdb1 is now mounted on the media_2e040 directory, so all the properties you see on media_2e040 are the properties of sdb1's root directory. If you change them with touch, you have changed sdb1's properties.
_unix.85782
I have a script, simply to run my graphical (GUI) application, as below:

# cat gui.sh
#!/bin/bash
./gui -display 127.0.0.1:0.0

When I run it from the local machine (./gui.sh) it runs perfectly fine. But when I try to run it from a remote machine via ssh, I get the following error:

[root@localhost]# ssh -f 192.168.3.77 "cd /root/Desktop/GUI/ && ./gui.sh"
No protocol specified
gdm: cannot connect to X server 192.168.3.77:0.0
[root@localhost]#

I don't know which protocol it is asking for - am I missing anything? I tried starting the application directly, without the script (ssh -f 192.168.3.77 "cd /root/Desktop/GUI/ && ./gui"), but the result is the same. I have tried various combinations like ssh -Y, ssh -fY and more, but the result is the same!

Secondly, for my application there is a hard requirement that we must first go into the directory where the program is located.

Any solutions?
Error `No protocol specified` when running from remote machine via ssh
ssh;x11
null
_unix.33567
I'm new to Debian. I've been using Ubuntu for almost 5 years now and I want to switch to a different distro. I chose Debian (I'll use Debian 6.0.3). I would like to know whether it is possible to store the home directory on a different partition in Debian.

Specifically, I would like to use a FAT partition for the /home directory - is that possible? I want to use FAT because sharing files between Windows and Debian would be easier this way, and storing /home on a FAT partition would allow me to browse my documents whenever I need to.

Also, having /home on its own partition would help me keep my current home directory when I change OS, if Debian is not for me.
Can I use FAT partition for /home?
debian;home;partition
null
_unix.66802
I'd like to try some shellcode, and I want to disable the Linux protections against it. I know I could compile using flags, but I know another way exists to disable these protections in general - I just can't remember it. Can you help me?
Disable stack protection on Ubuntu for buffer overflow without C compiler flags
linux;security;compiling
Stack protection is done by the compiler (it adds some extra data to the stack and stashes some away on a call, then checks sanity on return). You can't disable that without recompiling. It's part of the point, really...
_codereview.144957
I have been trying to write a sample code block to remove multiple backward or forward slashes from a path (e.g. //a//b should become /a/b; on Windows, c:\\a\\b\\\c should become c:\a\b\c).

I am assuming that:

Code must be platform independent.
It should work in all cases (or report a proper error).
It should be readable and maintainable.
It shouldn't be over-complex and shouldn't have errors.

Please provide point-wise comments for the following code sample (you may ignore main and assume that path is not NULL - it is just for demonstration purposes). Or else you may provide a better mechanism for the same.

void die( int error_code, char* message )
{
    fprintf( stderr, message );
    exit( error_code );
}

/* Following function removes duplicate slashes from a given path */
char* validate_path ( char* path )
{
    char* path_copy = path;
    int dup_flag = 0;
    char* path_without_dup_slash = (char*)malloc(strlen(path)+1);
    if ( path_without_dup_slash == NULL )
    {
        die( EXIT_FAILURE, "Failed to allocate memory for processing path." );
    }
    char* copy_pwds = path_without_dup_slash;

    /* Travel through given path */
    while( *path_copy != '\0' )
    {
        /* If there is '\' or '/' then copy a single '/',
         * ignore others until a real char comes */
        if ( (( *path_copy == '\\') || ( *path_copy == '/' )) && (dup_flag == 0) )
        {
            *copy_pwds = *path_copy;
            dup_flag = 1;
            copy_pwds++;
        }
        else if (( *path_copy != '\\') && ( *path_copy != '/' ))
        {
            *copy_pwds = *path_copy;
            dup_flag = 0;
            copy_pwds++;
        }
        path_copy++;
    }
    *copy_pwds = '\0';

    path_without_dup_slash = (char *)realloc( path_without_dup_slash, strlen( path_without_dup_slash ) + 1 );
    if ( path_without_dup_slash == NULL )
    {
        die( EXIT_FAILURE, "Failed to allocate memory for processing path." );
    }
    return path_without_dup_slash;
}

/* You may ignore the parts written in main - it is just for demonstration purposes. */
int main( void )
{
    char* bad_path_1 = malloc(100);
    char* bad_path_2 = malloc(100);
    char* p1 = bad_path_1;
    char* p2 = bad_path_2;
    char* path_1;
    char* path_2;

    bad_path_1 = "//a//b//c";
    bad_path_2 = "c:\\\\a\\\\b\\\\c";

    path_1 = validate_path(bad_path_1);
    printf( "Good path:%s Bad Path:%s\n", path_1, bad_path_1 );
    path_2 = validate_path(bad_path_2);
    printf( "Good path:%s Bad Path:%s\n", path_2, bad_path_2 );

    free(path_1);
    free(path_2);
    free(p1);
    free(p2);
    return EXIT_SUCCESS;
}
Removing multiple slashes from path
c;strings;file system;linux;windows
null
_unix.91644
Is it possible to set up an environment variable that is accessible from any shell (not a specific one) and that doesn't go away as soon as your session ends? I'd like to set up a NODE_ENV variable system-wide - how could I achieve that?
Setting user environment variable permanently, outside shell?
environment variables
If all the shells you're interested in are Bourne-compatible, you can use /etc/profile for this purpose.

The header of /etc/profile:

# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...)

To ensure you've got csh and tcsh covered, you can also add your variables to /etc/csh.login.

The header of /etc/csh.login:

# /etc/csh.login: system-wide .login file for csh(1) and tcsh(1)

For zsh, you want /etc/zshenv.

For ease of maintenance

I would write all the variables you want in a single file and write a simple Perl (or other) script that would read these variables and update the relevant files for all the shells. Something like the following in Perl should work:

#!/usr/bin/perl
use strict;
use warnings;

my $bourne_file = '/etc/profile';
my $csh_file    = '/etc/csh.login';
my $zsh_file    = '/etc/zshenv';

open my $BOURNE, '>>', $bourne_file;
open my $CSH,    '>>', $csh_file;
open my $ZSH,    '>>', $zsh_file;

while (<DATA>) {
    chomp;
    my $delimiter = ','; # Change ',' to whatever delimiter you use in the file
    my ($var, @value) = split /$delimiter/;
    my $value = join $delimiter, @value;
    print $BOURNE qq{export $var=$value\n};
    print $CSH    qq{setenv $var $value\n};
    print $ZSH    qq{export $var=$value\n};
}

close $BOURNE;
close $CSH;
close $ZSH;

__DATA__
var1,val1
var2,val2

You can use a delimiter other than ',' to delimit variables and values, as long as this delimiter isn't allowed in variable names. This can be further tidied up by inserting unique delimiters around the portion you want your script to write in each file, so that each time you use the script to update the files it doesn't duplicate previous entries but substitutes them in situ.
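To see the effect without touching the real /etc files, you can point a Bourne-style snippet at a temporary file and source it in a fresh shell (/tmp/profile_demo is just a demo path standing in for /etc/profile):

```shell
#!/usr/bin/env bash

# Write the kind of line the script above appends to /etc/profile.
cat > /tmp/profile_demo <<'EOF'
export NODE_ENV=production
EOF

# A brand-new shell that sources the file sees the variable,
# which is what login shells do with /etc/profile.
sh -c '. /tmp/profile_demo; echo "$NODE_ENV"'    # prints: production
```

With the real files, the variable becomes available to every new login session rather than decaying when the current one ends.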
_cs.17963
1-in-3 SAT is the set of 3CNF formulas with no negated variables such thatthere is a satisfying assignment that makes exactly one variable in each clause true. Show 1-in-3 SAT is NP-completePlan on doing a reduction from 3SAT.My confusion arises from the no negated variables. You need some way of representing negated variables. How do you do that? I can do the reduction from 3SAT to 1-in-3 SAT without the restraint that there are no negated variables. I'm just not sure how to do it with this constraint.
Show 1-in-3 SAT
np complete
null
_softwareengineering.267117
I'm wondering about the performance of reflection in this situation. I'm iterating over a (probably) large Excel file (let's say 3000 rows max), which is going to be done from time to time. My teammate's implementation is something like this, iterating over each row:

while (cells.hasNext()) {
    XSSFCell cell = (XSSFCell) cells.next();
    String cellData = this.getCellData(cell);
    if (cellData.trim().isEmpty()) {
        break;
    }
    Field field = RowPrestacionVO.class.getField(fields.get(idx));
    field.set(vo, cellData);
    idx++;
}

Is there any chance that I'm going to have performance issues iterating like that?
Reflection performance in this iteration of a (probably) large excel file
java;performance;excel;reflection
While reflection is slow, doing it a few thousand times while processing a file will probably not be an issue: the file access is likely to be slower still. Still, you should be able to improve the performance of the code you posted by moving the lookup of field outside of your loop.
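As a sketch of that suggestion - resolving each Field once up front instead of once per row. RowVO, fillRow and the field names here are hypothetical stand-ins for RowPrestacionVO and the question's fields list:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FieldCacheDemo {
    // Hypothetical stand-in for RowPrestacionVO with public String fields.
    public static class RowVO {
        public String colA;
        public String colB;
    }

    // Look the fields up once, then reuse the cached Field objects per row.
    public static String fillRow(String[] cells) throws Exception {
        List<String> fieldNames = Arrays.asList("colA", "colB");

        List<Field> cached = new ArrayList<>();
        for (String name : fieldNames) {
            cached.add(RowVO.class.getField(name));  // reflective lookup, done once
        }

        RowVO vo = new RowVO();
        for (int idx = 0; idx < cells.length; idx++) {
            cached.get(idx).set(vo, cells[idx]);     // cheap reuse inside the row loop
        }
        return vo.colA + " " + vo.colB;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fillRow(new String[] { "hello", "world" }));
    }
}
```

In the real code the cache would be built before the `while (cells.hasNext())` loop, so `getField` runs once per column rather than once per cell.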
_unix.137986
I had a couple of consoles open for other work and when I came back to them they both contained the following:

Message from syslogd@Stacey-Windows at Jun 19 09:43:28 ...
kernel:[ 4375.127165] general protection fault: 0000 [#1] SMP

Message from syslogd@Stacey-Windows at Jun 19 09:43:28 ...
kernel:[ 4375.127311] Stack:

Message from syslogd@Stacey-Windows at Jun 19 09:43:28 ...
kernel:[ 4375.127330] Call Trace:

Message from syslogd@Stacey-Windows at Jun 19 09:43:28 ...
kernel:[ 4375.127423] Code: 00 00 89 44 24 24 c7 44 24 30 04 00 00 00 c7 44 24 34 6c 00 00 00 48 89 4c 24 38 e8 e1 fa ff ff 85 c0 78 03 8b 04 24 48 83 c4 48 <c3> 48 83 ec 38 8b 07 48 8b 7f 08 48 89 54 24 24 ba 30 00 00 00

This looks like an exception has been thrown somewhere in my OS; I gather it's a stack trace of some kind. The Code part looks like a hex dump of something. What exactly is this, and is there anything I can do with this information to know exactly what went wrong and where? What is it used for?

EDIT: I just took a look at /var/log/messages and got this:

Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127227]
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127241] Pid: 4462, comm: vivado Tainted: G O 3.2.0-4-amd64 #1 Debian 3.2.51-1 innotek GmbH VirtualBox/VirtualBox
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127247] RIP: 0010:[<ffffffffa02a0876>] [<ffffffffa02a0876>] vboxCallCreate+0x7e/0x7f [vboxsf]
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127259] RSP: 0018:ffff88019ab47be0 EFLAGS: 00010292
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127263] RAX: 0000000000000000 RBX: ffff88019ab47c7c RCX: 00000000000018fb
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127267] RDX: 0000000000000041 RSI: ffff88007131f540 RDI: 0000000000000000
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127271] RBP: ffff88019a974000 R08: 00000000ffffffff R09: 00000000ffffffff
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127275] R10: ffffffff81600000 R11: ffffffff81600000 R12: ffff88019ab47e78
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127279] R13: ffff88012c06c500 R14: ffff88019b84a180 R15: ffff88012c06f300
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127283] FS: 00007f76b998d700(0000) GS:ffff8801a2c00000(0000) knlGS:0000000000000000
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127288] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127291] CR2: 0000000731c95000 CR3: 000000019860a000 CR4: 00000000000006f0
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127298] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127303] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127307] Process vivado (pid: 4462, threadinfo ffff88019ab46000, task ffff880198dd0930)
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127314] 33d59c00137905a4 33d59c00137905a5 33d59c00137905a5 001041ed137905a5
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127320] 000003e800000002 00000002000003e8 00e6017800000801 0000000000000000
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127325] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127338] [<ffffffffa029f009>] ? sf_inode_revalidate+0x78/0xae [vboxsf]
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127346] [<ffffffff8110a5c7>] ? dput+0x27/0xee
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127352] [<ffffffff8110b5ac>] ? __d_lookup+0x3e/0xce
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127358] [<ffffffff8110252c>] ? path_to_nameidata+0x19/0x3a
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127364] [<ffffffffa029f055>] ? sf_dentry_revalidate+0x16/0x20 [vboxsf]
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127370] [<ffffffff81103223>] ? walk_component+0x28f/0x406
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127376] [<ffffffff81104fe2>] ? do_last+0x108/0x58d
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127381] [<ffffffff81105a5f>] ? path_openat+0xce/0x33a
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127386] [<ffffffff81105d8d>] ? do_filp_open+0x2a/0x6e
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127393] [<ffffffff8134deec>] ? _cond_resched+0x7/0x1c
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127400] [<ffffffff811b41f9>] ? __strncpy_from_user+0x18/0x48
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127407] [<ffffffff8110eb13>] ? alloc_fd+0x64/0x109
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127413] [<ffffffff810f9d59>] ? do_sys_open+0x5e/0xe5
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127419] [<ffffffff81354212>] ? system_call_fastpath+0x16/0x1b
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127455] RSP <ffff88019ab47be0>
Jun 19 09:43:28 Stacey-Windows kernel: [ 4375.127551] ---[ end trace 7aaadecc7e737e8f ]---

So there's vivado in there - it's one of the applications I use - and there's vbox in there. I'm running Debian (and Vivado) in VirtualBox. Is this something I should be worried about? Is this something I can fix?

Linux details: Debian Wheezy 7.0 64-bit
What does this general protection fault in my console mean, and how do I interpret it?
debian;kernel
null
_unix.224546
When I open many tabs in a browser, the UI completely freezes. In some cases I press Ctrl+Alt+F1 twenty or thirty times, and after 15 minutes the UI unfreezes; I switch to a text terminal, kill the browser, and after that the operating system works fine. In other cases the UI freezes completely, I'm not able to switch to a text terminal, and I have to reset my computer.

I'm experiencing difficulties with either of these browsers:

Firefox with the Flash plugin enabled (Firefox without Flash works fine)
Google Chrome (it doesn't matter whether Flash is enabled or disabled in Chrome)

When the computer freezes I don't see any strange errors in syslog. Chrome, Firefox and Flash work perfectly on my computer in Windows 7; I have no freezing issues in Windows.

My configuration is:

Linux Mint 17.1 64-bit
CPU Intel Core i5-4440 (with 4 cores)
8 GB RAM
Samsung 840 EVO 250 GB SSD
Two video adapters: Intel HD Graphics 4600 and GeForce GT610
Swap is disabled
CPU cooling works perfectly
Memtest shows no errors

In the past I had the same issue with Google Chrome on a different computer. Back then I used:

Scientific Linux 6.3 64-bit
Intel Pentium E5500
4 GB RAM

On that configuration Google Chrome also froze the X server. If I remember correctly, Flash in Firefox didn't freeze the UI on my old PC.

How can I protect myself from UI freezing? Should I change some Linux config file to avoid the freezing? What is the reason for the freezing?
UI freezes when I open many tabs in browser
linux mint;firefox;chrome;freeze;adobe flash
null
_cs.64946
I searched the exchange and couldn't seem to find an answer to this. I am trying to find an algorithm that, given a directed acyclic graph (DAG) $G = (N,E)$ with a single root node $r\in N$, finds all connected subgraphs $G_i\subseteq G$ that contain $r$.

What I've attempted so far

Consider the following DAG,

To start, we have the graph:

$(\{1\},\varnothing)$

From the root, we should do a one-step search, including $m$ nodes, where $m$ ranges from $1$ to the number of children (of the root), in each subgraph. That is, the next set of subgraphs are:

$(\{1,2\},\{(1,2)\})$
$(\{1,3\},\{(1,3)\})$
$(\{1,4\},\{(1,4)\})$
$(\{1,2,3\},\{(1,2),(1,3)\})$
$(\{1,2,4\},\{(1,2),(1,4)\})$
$(\{1,3,4\},\{(1,3),(1,4)\})$
$(\{1,2,3,4\},\{(1,2),(1,3),(1,4)\})$

For each one of the above subgraphs, should we just do another one-step search?

Questions

Does anyone know of a better algorithm for recursively enumerating all of the subgraphs? I'm now aware that there is likely no polynomial-time algorithm (thanks to Raphael's comment) and such an algorithm may have exponential complexity in the worst case.
Find all rooted subgraphs of a DAG
algorithms;graphs;enumeration;dag
null
_softwareengineering.47090
Every time I want to read up on something - for example, a book on Java - I find so much material (many tutorials, many ebooks) that I am not able to decide which one to choose. I spend some time reading one, then two, and so on, and in the end I give up and gain nothing.

I liked the old days when we had only a few resources, like one good book; at least I would finish it from start to finish and gain a lot. But nowadays there is so much information that my mind jumps from one source to another and gains nothing.

What should I do?
How to deal with or survive with the information overload
management;information
there is so much information that mind jumps from one source to other and gain nothing
what should i do

Take it easy.

You are in fact gaining knowledge. You read a tutorial, you learn something. You read another, you learn something new, or maybe reinforce the info you've read in the first tutorial. You then read a book, and another... one at a time.

We work (sometimes live) in Information Technology. Information being the key word here. Lots and lots of information! Even if there was only one book and you read that, in a few years it would be obsolete anyway. You always have to keep yourself up to date.

Take it easy.

If you try reading all of them at once, or want to read them all, you will get overwhelmed with info and then feel discouraged... which I think has already happened, since you asked this question.

Q: How do you eat an elephant?
A: One bite at a time.

We all go through this. When I was young I didn't have lots of info, books or tutorials, so it was very hard to learn something or do something; it was always a struggle. Then I started gathering lots of info, tutorials, books etc. to reduce the effort, and now I have about one terabyte of PDFs, DOCs, HTML files, CHMs etc. It would take me 50 lifetimes to read them all, so I don't even try.

1. Start with something, learn the basics.
2. Then go apply them. You will see that you are missing some info to get it done, so go to step 1 with this new need for information.

At this point all documentation becomes a place to search inside for that missing info. Only that missing info! You will see it is much more manageable than trying to know everything.

Good luck!
_webmaster.61556
I'm using cPanel and Mailman (mailing lists) on a shared server with a webhost. After changing the 'A' record for one of my add-on domains (let's say abc.org) so that its website traffic goes to another sort-of webhost (123.com) but email stays where it is (a few other changes were done to keep mail where it was), I now find that my Perl scripts which try to access the Mailman mailing lists for that domain don't work.

Note: I sometimes prefer to use Perl scripts to work with the mailing lists instead of doing it from cPanel, because:

1. It allows people to make mailing-list updates via my Perl web application.
2. It allows me to make bulk queries and updates.

The first reason the scripts are failing is that the URL they usually use to access all my mailing lists for several domains from Perl is: domain-name/mailman/listinfo

So, because abc.org's 'A' record now points to 123.com, the above URL tries to find abc.org/mailman/listinfo at 123.com, but of course fails with a 404 Not Found error, because the mailing lists are not there. My webhost has suggested I create a subdomain (e.g. mailman.abc.org) and then recreate all the mailing lists for abc.org in that, but I want to avoid that if possible because of the hassle and side effects.

Q1. Any suggestions on what URL I can now use to access the mailman/listinfo page of my abc.org mailing lists?

Q2. Or, can you suggest a way to prevent my requests for abc.org/mailman/listinfo from being redirected to 123.com with the rest of the traffic for abc.org?

Months later... Now that I happen to have another domain name for the same site (let's call it abc.net), I'm planning to work around this issue as follows. In cPanel, redirect abc.net web traffic to abc.org; I can even use the Wildcard Redirect option. I'll then move the mailing lists and accounts to the abc.net domain, and add a domain forwarder to forward all abc.org emails to abc.net. If I then visit pages like abc.net/mailman it will fail because of the wildcard redirect (and that's fine because I don't need to go there). However, my tests show that the wildcard redirect only works to a depth of one directory, so when I visit pages like abc.net/mailman/admin, it should work. (I've tested that last part and it does work.)
Redirect 'A' record of domain, except for some paths
domains;dns;email;cpanel
Unfortunately, with your current hosting setup there is nothing you can do except make a separate subdomain for your mailing lists. What you would really need is a reverse proxy in front of your abc.org site that forwards traffic, based on the URL path, either to the 123.com hosting service or to the server that has the Mailman software. This cannot be accomplished via DNS.

Also, your terminology is not quite correct: no redirection is performed when DNS A records are used. A DNS A record simply tells which IP address a particular domain name points to. So abc.org/mailman/listinfo is not redirected anywhere.
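As a sketch of the reverse-proxy idea (this assumes Apache with mod_proxy on the 123.com side, and the 203.0.113.10 address is a placeholder for the old mail/cPanel server, so adjust both to the real setup), the abc.org virtual host could pass only the Mailman paths back to the old server while serving everything else locally:

```apache
# Hypothetical fragment for the abc.org virtual host on the web host.
# Requests under /mailman/ are forwarded to the old server; all other
# requests are handled locally as usual.
ProxyPass        /mailman/ http://203.0.113.10/mailman/
ProxyPassReverse /mailman/ http://203.0.113.10/mailman/
```

Whether this is feasible depends on the host giving you that level of Apache control, which shared hosts often don't.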
_unix.161915
I have a Debian Wheezy server that's been running for a while with an encrypted drive. The password for the encrypted drive (/dev/sda5) was lost when my encrypted password file was corrupted.I'd like to be able to reboot this server, but that will of course require that password. Since the drive is clearly in a decrypted state, is there a way to change the password without knowing the old one?cryptsetup luksChangeKey /dev/sda5 requires the password of the volume.I could of course rsync everything off and rebuild, but I'd like to avoid that. I looked through memory (#cat /dev/mem | less), but was unable to find it (which is a very good thing!).
Change password on a LUKS filesystem without knowing the password
linux;luks
Yes, you can do this by accessing the master key while the volume is decrypted.

The quick and dirty way to add a new passphrase:

    device=/dev/sda5
    volume_name=foo
    cryptsetup luksAddKey $device --master-key-file <(dmsetup table --showkeys $volume_name | awk '{ print $5 }' | xxd -r -p)

device and volume_name should be set appropriately. volume_name is the name of the decrypted volume, the one you see in /dev/mapper.

Explanation:

LUKS volumes encrypt their data with a master key. Each passphrase you add simply stores a copy of this master key, encrypted with that passphrase. So if you have the master key, you simply need to use it in a new key slot.

Let's tear apart the command above.

    $ dmsetup table --showkeys $volume_name

This dumps a bunch of information about the actively decrypted volume. The output looks like this:

    0 200704 crypt aes-xts-plain64 53bb7da1f26e2a032cc9e70d6162980440bd69bb31cb64d2a4012362eeaad0ac 0 7:2 4096

Field #5 is the master key.

    $ dmsetup table --showkeys $volume_name | awk '{ print $5 }' | xxd -r -p

Not going to show the output of this as it's binary data, but what it does is grab the master key for the volume and convert it into the raw binary data which is needed later.

    $ cryptsetup luksAddKey $device --master-key-file <(...)

This tells cryptsetup to add a new key to the volume. Normally this action requires an existing key; however, we use --master-key-file to tell it we want to use the master key instead. The <(...) is shell process substitution: it executes everything inside, sends the output to a pipe, and then substitutes <(...) with a path to that pipe. So the whole command is just a one-liner condensing several operations.
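The awk/xxd part of the pipeline can be tried safely without touching a real device, using a sample dmsetup table line with a shortened key (the result is round-tripped back to hex here only so that it is printable):

```shell
# Field 5 of a `dmsetup table --showkeys` line is the hex master key;
# xxd -r -p turns that hex into the raw bytes cryptsetup expects.
sample='0 200704 crypt aes-xts-plain64 53bb7da1 0 7:2 4096'
printf '%s\n' "$sample" | awk '{ print $5 }' | xxd -r -p | xxd -p
# prints: 53bb7da1
```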
_webmaster.102523
I've been running my company's website, which uses the same header, footer and left column on every page, since late 2014. Because of this I set up the Apache server to interpret HTML documents as if they were PHP. I am using include() to bring header.html, footer.html, and leftcolumn.html into each page, with some other minor variable exchanges, etc. Now I'm working on my own website and was going to make the same move, but saw other questions on here advising against forcing HTML to be read as PHP (in PHP form-creation answers, which I successfully implemented without forcing).

Question: Is it bad practice to configure a server to parse HTML files as PHP? Are there possible SEO repercussions for doing so? Should I forgo file extensions altogether and move to a folder structure? (.com/players/ instead of .com/players.html)

As there doesn't seem to be a clear answer in similar questions, this may be considered a discussion-type question; if so, I will turn on forcing HTML to PHP, as it seems the easiest option for me, and consider rebuilding into a folder structure.
Is it bad practice to put PHP directives in .html files and have the server interpret them as PHP?
seo;html;php;apache
> Is it bad practice to force HTML to be read as PHP? Are there possible SEO repercussions for doing so?

Search engines neither know nor care how your pages are generated. They only see the output the requested URL provides (in other words, the output of your PHP file).

> Should I forgo file extensions altogether and move into a folder structure? (.com/players/ instead of .com/players.html)

Cool URIs don't change. It doesn't matter whether you have a file extension or not, and if you do use one, it doesn't have to change when you change technologies. So choose whatever you think will be easiest for you to manage and stick with it.
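For reference, the setup the question describes is usually a one-line Apache change. The exact directive depends on how PHP is wired into Apache; this classic mod_php form is one possibility (with PHP-FPM the handler directive differs), so treat it as a sketch, not a recipe:

```apache
# Sketch: ask Apache (mod_php) to run .html/.htm files through the PHP
# handler as well as .php. Goes in the vhost config or an .htaccess file.
AddType application/x-httpd-php .php .html .htm
```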
_cs.76436
I have a dataset of about 26,000 words along with their gender tags [m, f or any]. Which ML algorithm should I use for gender tagging/classification? How should I go about it? What should the input features be? Given a word, its part-of-speech tag, and in some cases its number (singular/plural), I am trying to predict the gender of the word.
Which Machine Learning algorithm should I use for gender tagging of words in NLP?
algorithms;machine learning;natural language processing;classification
null
_webmaster.4110
I saw this on a website and looked a bit through the CSS, but couldn't quite find what I am looking for. How do I make an HTML element, let's say a <div>, with a width of e.g. 3000px, but so that the page doesn't automatically put up scrollbars that let you scroll all the way right and see the end of that content? Yet if the page is resized to, let's say, a resolution smaller than 800x600, scrollbars should appear (because the MAIN content is bigger than that), but they should only let you scroll and see the MAIN content, not the rightmost part of the said <div>. So basically, how do I make a browser ignore an element's width or height, no matter how big or small it is?
Web page element width
css;html
null
_unix.21169
I need to search for something in the entire contents of a directory. I am trying:

    find . | xargs grep word

I get this error:

    xargs: unterminated quote

How can I achieve this?
How to search for a word in entire content of a directory in linux
linux;shell;grep;find;xargs
xargs expects input in a format that no other command produces, so it's hard to use effectively. What's going wrong here is that you have a file whose name must be quoted on input to xargs (probably containing a ').

If your grep supports the -r or -R option for recursive search, use it:

    grep -r word .

Otherwise, use the -exec primary of find. This is the usual way of achieving the same effect as xargs, except without constraints on file names. Reasonably recent versions of find allow you to group several files in a single call to the auxiliary command. Passing /dev/null to grep ensures that it will show the file name in front of each match, even if it happens to be called on a single file.

    find . -type f -exec grep word /dev/null {} +

Older versions of find (on older systems or OpenBSD, or reduced utilities such as BusyBox) can only call the auxiliary command on one file at a time.

    find . -type f -exec grep word /dev/null {} \;

Some versions of find and xargs have extensions that let them communicate correctly, using null characters to separate file names so that no quoting is required. These days, only OpenBSD has this feature without having -exec {} +.

    find . -type f -print0 | xargs -0 grep word /dev/null
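To see the failure mode and the fix side by side, here is a small self-contained demonstration (it assumes GNU find and xargs, and only touches a throwaway temp directory):

```shell
# A file name containing a quote is enough to break plain find | xargs.
tmp=$(mktemp -d)
printf 'word here\n' > "$tmp/it's a file.txt"

# Plain xargs mis-parses the apostrophe and reports a quoting error:
find "$tmp" -type f | xargs grep -l word

# NUL-separated names survive any special characters:
find "$tmp" -type f -print0 | xargs -0 grep -l word

rm -rf "$tmp"
```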
_cs.68814
This question might be redundant but I want to verify my understanding further. Suppose I have this linear directed graph:$$S \overset{2}{\to} A \overset{1}{\to} B \overset{3}{\to} E \overset{1}\to D.$$Here, $S$ is the source, $D$ is the target, and the numbers indicate edge weights. When talking about the shortest path, do we only count the number of edges? So in this example, the shortest path from $S$ to $D$ has length 4. And when talking about the shortest distance, do we find the sum of the edge weights? So in this example, the shortest distance from $S$ to $D$ is 7. The reason I elaborate this question with an example is that I've seen plenty of answers when I google this question, but it seems they treat the words path and distance the same way, i.e. counting the number of edges (hops).
shortest distance vs shortest path
graphs;graph theory
If it is ambiguous (as in this case), a well-written text should indicate what it means. If it doesn't, you might have to guess. Sometimes people are sloppy or imprecise, or rely on you to infer what they mean from context. There's certainly no guarantee that people will use language the way you describe -- it's not a standard assumption you can make.In a graph without weights on the edges, the situation is clear, and we will use shortest path to mean the path with the fewest edges, and use distance to mean the number of edges. In such a graph, the notion of number of edges and sum of edge weights coincides (since we assume each edge has weight 1), so there's no distinction to draw. Thus, in that situation, we'd use the two words similarly.In a graph with weights (lengths) on the edges, the most common situation is probably to use shortest path to mean the path with the smallest sum of edge weights, and use distance to refer to the length (sum of edge weights) of the shortest path. In other words, when there are weights (lengths) on the edges, the most common situation is that we never refer to anything involving the number of edges on the path.But sometimes we have a graph with weights (lengths) on the edges, and we want to talk about two different distance metrics: the number of edges on the path, and the sum of the weights on the edges on the path. Then it's anyone's guess what terminology people will use; I don't think there's any accepted standard. In this situation, I think you have to hope that the writer defines their terms, or try to infer their meaning from context.
_unix.372260
I had the Java plugin working on Firefox 53, but I accidentally upgraded Firefox to 54. So I've gone back to version 53 from apt-cache and recreated the symlink ~/.mozilla/plugins/libnpjp2.so, but now Java applets don't work. It just says "The Java plugin has crashed." The only message I get is this in ~/.xsession-errors:

    ###!!! [Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
    ###!!! [Parent][MessageChannel::Call] Error: (msgtype=0x9A000C,name=PPluginInstance::Msg_NPP_GetValue_NPPVpluginScriptableNPObject) Channel error: cannot send/recv

I have the plugin listed in about:plugins, so this part should be fine:

    Java(TM) Plug-in 10.79.2
    File: libnpjp2.so
    Path: /home/user/SW/jdk1.7.0_79/jre/lib/amd64/libnpjp2.so
    Version: 10.79.2
    State: Enabled
    Next Generation Java Plug-in 10.79.2 for Mozilla browsers
Java plugin crashes on Firefox 53
debian;java;firefox
null
_webmaster.90069
I am confused about implementing canonical URLs. A client has shared a set of URLs like:

    http://example.com/eg/
    http://example.com/eg/index?32312323
    http://example.com/eg/index?54545545
    http://example.com/eg/index?45554455

To make them canonical, I have added the tag below to each page's header:

    <link href="http://example.com/eg/" rel="canonical">

I have the following questions:

1. Is this implementation right?
2. If yes, will Google take care of the rest, like the content duplication issue etc.?
3. Is there any further improvement, or a better alternative?

Please do let me know if I am not clear. Thanks
Implementing Canonical URLs
seo;url;canonical url
If the following URLs output exactly the same code:

    http://example.com/eg/
    http://example.com/eg/index?32312323
    http://example.com/eg/index?54545545
    http://example.com/eg/index?45554455

and, of all the URLs, you want search engines to show this one:

    http://example.com/eg/

then edit your scripts so that this line:

    <link href="http://example.com/eg/" rel="canonical">

is between <head> and </head> in the code produced by each of these URLs:

    http://example.com/eg/index?32312323
    http://example.com/eg/index?54545545
    http://example.com/eg/index?45554455

By adding that tag, you declare the code at those three URLs an exact copy of the code at:

    http://example.com/eg/

You don't need to declare a canonical inside http://example.com/eg/ itself, because you designated that URL as the original content URL.
_webapps.19474
I have a free Yahoo! Mail account nearing 300 emails and I want to download the mail to my computer. I also have a backup of the emails in another Gmail account. Is there something in Yahoo! Mail that will help me download all my email?
Downloading all my Yahoo! Mail to my desktop computer
email;yahoo mail;download
null
_unix.72377
Recently I installed openSUSE 12.3 on my Lenovo U410. I am using Windows on this machine too. But when I use openSUSE, I notice that my laptop gets much hotter than it does in Windows. I also used Ubuntu before openSUSE; Ubuntu worked fine, with the fan only running a little. Do you know how to solve this?
My Laptop gets hot on OpenSUSE
opensuse;fan
I have finally found the problem. It was my discrete NVIDIA GPU. openSUSE 12.3 found it and all the drivers were good, but I do not know why it gets hot. I think the main problem is the Optimus technology (I had some display problems with it in Ubuntu 12.04 too!). Since I am not using my discrete GPU in Linux, I have disabled it. There are two ways to do this:

1. If there is no BIOS option to use the UMA GPU only, install bumblebee and the required packages and then turn off the discrete GPU.
2. If there is a BIOS option, disable Optimus. (There will be no problem with Windows, since it uses the UMA GPU for all programs except those you define to run with the high-performance NVIDIA GPU. So you can re-enable it from the BIOS and go back when you need to run those.)

First solution: follow the instructions from here.

Second solution: go to your BIOS and disable Optimus. Then come back to Linux and delete all NVIDIA packages. Check your hardware and make sure there is no NVIDIA GPU listed.

Another point: you can install the YaST power management package for better control over power saving, and therefore over the fan and temperature of your laptop. Instructions are here.

I hope someday Linux will support Optimus technology like Windows does!
_unix.31751
Possible Duplicate: What is $debian_chroot in .bashrc? In Bash, why is PROMPT_COMMAND set to something invisible?

I logged into my computer as a normal user and executed the following command:

    pratheep@pratheep-laptop:~$ echo $PS1
    \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$

I know that if I set PS1='\u@\h:\w\$ ' it will display the same prompt. I want to know what the extra part in front, i.e. \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}, is for.
Doubt on the value of PS1 environment variable
bash;prompt
null
_webapps.20871
Possible Duplicate: How to get Facebook tagged photos in Groups to appear in profile?

When tagging a photo in an open group, why does that photo not appear on the tagged person's Facebook page?
Facebook: tagging in an open group
facebook groups
null
_bioinformatics.771
Could someone provide an overview of 10x data analysis packages? 10x provides Cell Ranger, which prepares a count matrix from the BCL sequencer output files and other files (see the bottom of https://support.10xgenomics.com/docs/license for the programs it uses). What can we do with the output files?
10x Genomics Chromium single-cell RNA-seq data analysis options?
rna seq;r;bioconductor;scrnaseq;10x genomics
Data preparation

Cell Ranger uses the Illumina sequencing output (.bcl) files.

1. Make fastq files: cellranger mkfastq ==> .fastq
2. Prepare the count matrix: cellranger count ==> matrix.mtx, web_summary.html, cloupe.cloupe
3. Optional: combine multiple matrix.mtx files (libraries): cellranger aggr

Data analysis

- Loupe Cell Browser: visualization of cloupe.cloupe files
- Count table (matrix.mtx) analysis options:
  - Python
  - R Cell Ranger R Kit: cellrangerRkit::load_cellranger_matrix() ==> ExpressionSet
  - R Scater: scater::read10XResults() ==> SCESet object
  - R Seurat: Seurat::Read10X() ==> Seurat object
_unix.353162
I have installed Octave using the following commands:

    sudo apt-add-repository ppa:octave/stable
    sudo apt-get update
    sudo apt-get install octave

Typing octave --no-gui in the shell launches the Octave command line. However, if I try opening the Octave GUI by typing octave or octave --force-gui, the Octave icon appears in the sidebar and nothing else happens (no error message, either); Octave doesn't launch. Clicking on the icon doesn't do anything either. I have tried following the accepted answer to this similar question, but when I run

    cd .config/octave

I get the following message:

    bash: cd: .config/octave: No such file or directory

Thanks.
Octave GUI doesn't launch on Ubuntu 16.04
gui;octave
null
_unix.284680
I'm trying to compare two floats in bash and something is going wrong. Here is a code sample based on the solution here:

    num1=0.502E-01
    num2=0.01
    echo $num1'>'$num2 | bc -l
    echo $num2'>'$num1 | bc -l

I expect the output of 1 for the first echo and 0 for the second, but instead I get 0 for the first and 1 for the second. What is wrong with this input? How do I get consistent comparison of these floats?
Wrong output in comparing floats
awk;bc;floating point
awk can certainly do float comparisons if called from your shell script. (The underlying problem is that bc does not understand scientific E notation, so 0.502E-01 is not parsed as 0.0502, which is why your comparisons come out reversed. awk does understand it.)

    num1=0.502E-01
    num2=0.01
    awk -v a=$num1 -v b=$num2 'BEGIN{print(a>b)}'
    1
    awk -v a=$num1 -v b=$num2 'BEGIN{print(b>a)}'
    0
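Since the result of such a comparison usually feeds an if, it can help to wrap the awk call in a small helper that converts it to an exit status (the float_gt name is just for illustration; awk exits 0 for success, so the !(a > b) inversion makes "greater" succeed):

```shell
# Hypothetical helper: succeeds (exit 0) when $1 > $2, handling
# E-notation floats that bc cannot parse.
float_gt() {
    awk -v a="$1" -v b="$2" 'BEGIN { exit !(a > b) }'
}

if float_gt 0.502E-01 0.01; then
    echo "0.502E-01 is greater"
fi
# prints: 0.502E-01 is greater
```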
_unix.279614
I'm writing a tar implementation for a package manager, and I wonder if I can skip setting the file owner recorded in the tar headers and keep the default, i.e. the files end up owned by the user who wrote them, in my case root. Would this present a problem? The package manager is designed not to write anything inside /home.
Are all files outside of /home/userabc owned by root?
files;arch linux;directory structure;chown
Most but not all files that are part of the system are owned by the root user. It's rare for system files not to be owned by root, because a user that owns system files can modify them and this is usually not desirable. It's a lot more common to have files that are owned by a group other than root, and that have mode 660 or 664 or 640.It's possible to design a Unix system where all system files (outside of /dev, /home, and the parts of /var containing user data such as mailboxes and crontabs) are owned by root. I don't know whether this is the case for Arch Linux. But not allowing files to be owned by a different group would significantly restrict the security protections of the system, it wouldn't be viable. So you'll need to remember group ownership anyway. Why not remember user ownership as well?
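If you want to check this on a given system, find can report on ownership directly (a sketch; -printf is a GNU find extension, and scanning system directories may require root to avoid permission errors):

```shell
# List anything under /usr that is NOT owned by root, showing owner,
# group and path. An empty result means root owns everything there.
find /usr -xdev ! -user root -printf '%u:%g %p\n' 2>/dev/null | head

# The same predicate works for positive matches too, e.g. files owned
# by the current user in their home directory:
find "$HOME" -maxdepth 1 -user "$(id -un)" | head -n 3
```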
_codereview.158359
The returned class as result: public class cs_HttpFetchResults { public bool blResultSuccess = false; public string srFetchBody = ; public string srFetchingFinalURL = ; public bool bl404 = false; }The HttpWebRequest version: public static cs_HttpFetchResults func_fetch_Page(string srUrl, int irTimeOut = 60, string srRequestUserAgent = Mozilla/5.0 (Windows NT 6.3; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0, string srProxy = null, int irCustomEncoding = 0, bool blAutoDecode = true, bool blKeepAlive = true, string srIPandHost = null) { cs_HttpFetchResults mycs_HttpFetchResults = new cs_HttpFetchResults(); mycs_HttpFetchResults.srFetchingFinalURL = srUrl; HttpWebRequest request = null; WebResponse response = null; try { request = (HttpWebRequest)WebRequest.Create(srUrl); request.CookieContainer = new System.Net.CookieContainer(); if (srProxy != null) { string srProxyHost = srProxy.Split(':')[0]; int irProxyPort = Int32.Parse(srProxy.Split(':')[1]); System.Net.WebProxy my_awesomeproxy = new WebProxy(srProxyHost, irProxyPort); my_awesomeproxy.Credentials = new NetworkCredential(); request.Proxy = my_awesomeproxy; } else { request.Proxy = null; } request.ContinueTimeout = irTimeOut * 1000; request.ReadWriteTimeout = irTimeOut * 1000; request.Timeout = irTimeOut * 1000; request.UserAgent = srRequestUserAgent; request.KeepAlive = blKeepAlive; request.Accept = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8; WebHeaderCollection myWebHeaderCollection = request.Headers; myWebHeaderCollection.Add(Accept-Language, en-gb,en;q=0.5); myWebHeaderCollection.Add(Accept-Encoding, gzip, deflate); request.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip; using (response = request.GetResponse()) { using (Stream strumien = response.GetResponseStream()) { Encoding myEncoding = Encoding.UTF8; string srContentType = ; if (response.ContentType != null) { srContentType = response.ContentType; if (srContentType.Contains(;)) { srContentType = 
srContentType.Split(';')[1]; } srContentType = srContentType.Replace(charset=, ); srContentType = func_Process_Html_Input(srContentType); } try { myEncoding = Encoding.GetEncoding(srContentType); } catch { myEncoding = irCustomEncoding == 0 ? Encoding.UTF8 : Encoding.GetEncoding(irCustomEncoding); } using (StreamReader sr = new StreamReader(strumien, myEncoding)) { mycs_HttpFetchResults.srFetchBody = sr.ReadToEnd(); if (blAutoDecode == true) { mycs_HttpFetchResults.srFetchBody = HttpUtility.HtmlDecode(mycs_HttpFetchResults.srFetchBody); } mycs_HttpFetchResults.srFetchingFinalURL = Return_Absolute_Url(response.ResponseUri.AbsoluteUri.ToString(), response.ResponseUri.AbsoluteUri.ToString()); mycs_HttpFetchResults.blResultSuccess = true; } } } if (request != null) request.Abort(); request = null; } catch (Exception E) { if (E.Message.ToString().Contains((404))) mycs_HttpFetchResults.bl404 = true; csLogger.logCrawlingErrors(crawling failed url: + srUrl, E); } finally { if (request != null) request.Abort(); request = null; if (response != null) response.Close(); response = null; } return mycs_HttpFetchResults; }The HttpClientHandler: async public static Task<cs_HttpFetchResults> func_fetch_Page_New(string srUrl, int irTimeOut = 60, string srRequestUserAgent = Mozilla/5.0 (Windows NT 6.3; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0, string srProxy = null, int irCustomEncoding = 0, bool blAutoDecode = true, bool blKeepAlive = true, string srIPandHost = null) { cs_HttpFetchResults mycs_HttpFetchResults = new cs_HttpFetchResults(); mycs_HttpFetchResults.srFetchingFinalURL = srUrl; try { using (HttpClientHandler myClientHandler = new HttpClientHandler()) { myClientHandler.AllowAutoRedirect = true; myClientHandler.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip; myClientHandler.UseDefaultCredentials = true; if (srProxy != null) { string srProxyHost = srProxy.Split(':')[0]; int irProxyPort = Int32.Parse(srProxy.Split(':')[1]); 
System.Net.WebProxy my_awesomeproxy = new WebProxy(srProxyHost, irProxyPort); my_awesomeproxy.Credentials = new NetworkCredential(); myClientHandler.Proxy = my_awesomeproxy; myClientHandler.UseProxy = true; } else { myClientHandler.Proxy = null; myClientHandler.UseProxy = false; } using (var httpClient = new HttpClient(myClientHandler)) { httpClient.DefaultRequestHeaders.Add(Accept-Language, en-gb,en;q=0.5); httpClient.DefaultRequestHeaders.Add(Accept-Encoding, gzip, deflate); httpClient.Timeout = new TimeSpan(0, 0, irTimeOut); httpClient.DefaultRequestHeaders.Add(User-Agent, srRequestUserAgent); httpClient.DefaultRequestHeaders.Add(User-Agent, srRequestUserAgent); if (blKeepAlive == true) { httpClient.DefaultRequestHeaders.Connection.Clear(); httpClient.DefaultRequestHeaders.ConnectionClose = false; httpClient.DefaultRequestHeaders.Connection.Add(Keep-Alive); } else { httpClient.DefaultRequestHeaders.Connection.Clear(); httpClient.DefaultRequestHeaders.ConnectionClose = true; } using (var vrResponse = await httpClient.GetAsync(srUrl)) { if (vrResponse.IsSuccessStatusCode == true) { var contenttype = vrResponse?.Content?.Headers?.First(h => h.Key.Equals(Content-Type)); string srContentType = contenttype?.Value?.First(); Encoding myEncoding = Encoding.UTF8; if (srContentType != null) { if (srContentType.Contains(;)) { srContentType = srContentType.Split(';')[1]; } srContentType = srContentType.Replace(charset=, ); srContentType = func_Process_Html_Input(srContentType); } try { myEncoding = Encoding.GetEncoding(srContentType); } catch { myEncoding = irCustomEncoding == 0 ? 
Encoding.UTF8 : Encoding.GetEncoding(irCustomEncoding); } var bytes = await vrResponse.Content.ReadAsByteArrayAsync(); mycs_HttpFetchResults.srFetchBody = myEncoding.GetString(bytes); if (blAutoDecode == true) { mycs_HttpFetchResults.srFetchBody = HttpUtility.HtmlDecode(mycs_HttpFetchResults.srFetchBody); } string responseUri = vrResponse.RequestMessage.RequestUri.AbsoluteUri.ToString(); mycs_HttpFetchResults.srFetchingFinalURL = Return_Absolute_Url(responseUri, responseUri); mycs_HttpFetchResults.blResultSuccess = true; } else { if (vrResponse.StatusCode == HttpStatusCode.NotFound) mycs_HttpFetchResults.bl404 = true; } } } } } catch (Exception E) { if (E.Message.ToString().Contains((404))) mycs_HttpFetchResults.bl404 = true; csLogger.logCrawlingErrors(crawling failed url: + srUrl, E); } return mycs_HttpFetchResults; }Other helper methods:public static string func_Process_Html_Input(string srHtmlInput) { srHtmlInput = HttpUtility.HtmlDecode(srHtmlInput); srHtmlInput = Regex.Replace(srHtmlInput, @(\s)\s+, $1).Trim(); srHtmlInput = srHtmlInput.Replace(&#39, '); return srHtmlInput; } public static string Return_Absolute_Url(string srRelativeUrl, string srCrawledUrl, List<string> lstBannedExtensions = null, bool blIgnoreBaseUriHost = false, bool blDoNotRemoveDash = false) { srRelativeUrl = HttpUtility.UrlDecode(srRelativeUrl); srRelativeUrl = HttpUtility.HtmlDecode(srRelativeUrl); lstBannedExtensions = (lstBannedExtensions == null) ? 
new List<String>() : lstBannedExtensions; string srReturnUrl = null; if (srRelativeUrl.Length > 0) if (srRelativeUrl[0] == '.') srRelativeUrl = srRelativeUrl.Substring(1); Uri baseUri = new Uri(srCrawledUrl); Uri NewUrl; bool blUriResult = Uri.TryCreate(baseUri, srRelativeUrl, out NewUrl); if (blUriResult == true) { if (NewUrl.AbsoluteUri.ToString().StartsWith(http) && (NewUrl.Host == baseUri.Host || blIgnoreBaseUriHost == true)) { string srLastSegment = NewUrl.Segments[NewUrl.Segments.Length - 1].ToString(); if (lstBannedExtensions.Where(pr => srLastSegment.ToLowerInvariant().IndexOf(pr) != -1).Count<string>() == 0) { srRelativeUrl = NewUrl.AbsoluteUri.ToString(); if (srRelativeUrl.IndexOf(#) != -1 && blDoNotRemoveDash == false) { srRelativeUrl = srRelativeUrl.Substring(0, srRelativeUrl.IndexOf(#)); } srRelativeUrl = HttpUtility.UrlDecode(srRelativeUrl); srRelativeUrl = HttpUtility.HtmlDecode(srRelativeUrl); srReturnUrl = srRelativeUrl; } } } return srReturnUrl; }
Composing best web page fetcher function by HttpClientHandler for C#
c#;http;web scraping
null
_cstheory.38809
Hi, I am studying computer hardware and I have one question. I studied processor structure and learned that the processor sends read and write requests to the memory unit when it does not find the data or instruction in its registers. The processor sends these requests in bulk, and they are buffered somewhere in the memory controller hub. But when the memory returns the data, it simply sends the bits back, so how does the processor know which request, and therefore which process or page, the returned data or instruction belongs to? What mechanism does it use to classify the information coming back from memory or I/O? For example, the processor sends an address to fetch data from I/O, and then sends another request, this time to fetch from memory. When the data returns, it is simply bits, so how does the processor tell which response belongs to which request?
Memory data retrieval?
machine learning;memory
null
_unix.207058
I have the following bash prompt string:

    root@LAB-VM-host:~# echo $PS1
    ${debian_chroot:+($debian_chroot)}\u@\h:\w\$
    root@LAB-VM-host:~# hostname
    LAB-VM-host
    root@LAB-VM-host:~#

Now if I change the hostname from LAB-VM-host to VM-host with the hostname command, the prompt string for this bash session does not change:

    root@LAB-VM-host:~# hostname VM-host
    root@LAB-VM-host:~#

Is there a way to update the hostname part of the bash prompt string for the current bash session, or does the change apply only to new bash sessions?
How to change bash prompt string in current bash session?
bash;prompt;hostname
Does Debian really pick up a changed hostname if PS1 is re-exported, as the other answers suggest? If so, you can just refresh it like this:

    export PS1="$PS1"

Don't know about Debian, but on OS X Mountain Lion this will not have any effect. Neither will the explicit version suggested in other answers (which is exactly equivalent to the above). Even if this works, the prompt must be reset separately in every running shell. In which case, why not just manually set it to the new hostname? Or just launch a new shell (as a subshell with bash, or replace the running process with exec bash); the hostname will be updated.

To automatically track hostname changes in all running shells, set your prompt like this in your .bashrc:

    export PS1='\u@$(hostname):\w\$ '

or in your case:

    export PS1='${debian_chroot:+($debian_chroot)}\u@$(hostname):\w\$ '

I.e., replace \h in your prompt with $(hostname), and make sure it's enclosed in single quotes. This will execute hostname before every prompt it prints, but so what. It's not going to bring the computer to its knees.
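The single quotes are the load-bearing part: they stop the shell from expanding $(hostname) at assignment time, leaving the literal text in PS1 for bash to expand each time it prints the prompt (when the promptvars option is on, which is the default). A quick sketch of the difference:

```shell
# Double quotes: $(hostname) is expanded once, right now.
frozen="host is $(hostname)"

# Single quotes: the literal text is stored; expansion happens later,
# whenever the string is re-evaluated (as bash does for PS1).
live='host is $(hostname)'

echo "$frozen"   # already contains the actual host name
echo "$live"     # still contains the literal $(hostname)
```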
_codereview.67293
I have the following code to get a NATO date-time group for a DateTime. I usually need Zulu time or the time from a specific time zone. Is it clear what those methods are providing? If I want to parse a NATO DTG string, should those be extension methods for string or DateTime?

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Collections.ObjectModel;
    using System.Globalization;

    namespace scratchpad
    {
        /// <summary>
        /// Create NATO date-time group from DateTime
        /// </summary>
        static class NatoDtg
        {
            /// <summary>
            /// Get NATO date-time group for UTC/Zulu timezone
            /// </summary>
            /// <param name="dt">this</param>
            /// <returns>NATO date-time group</returns>
            public static string ToZuluDtg(this DateTime dt)
            {
                DateTime utc = dt.ToUniversalTime();
                CultureInfo ci = new CultureInfo("en-GB");
                StringBuilder builder = new StringBuilder();
                builder.Append(utc.ToString("ddHHmm", ci)); // day, hours, minutes
                builder.Append("Z");                        // time zone
                builder.Append(utc.ToString("MMMyy", ci).ToLower()); // month, year
                return builder.ToString();
            }

            /// <summary>
            ///
            /// </summary>
            /// <param name="dt">this</param>
            /// <param name="tz">Time zone for the </param>
            /// <returns></returns>
            public static string ToDtg(this DateTime dt, TimeZoneInfo tz)
            {
                if (tz == null)
                    throw new ArgumentNullException("TimeZoneInfo is null");
                CultureInfo ci = new CultureInfo("en-GB");
                StringBuilder builder = new StringBuilder();
                builder.Append(dt.ToString("ddHHmm", ci));
                TimeSpan ts = tz.GetUtcOffset(dt);
                char letter = '#';
                if (ts.Hours == 0)
                {
                    letter = 'Z';
                }
                else if (ts.Hours >= 1 && ts.Hours <= 9) // A-I
                {
                    letter = 'A';
                    letter += (char)(ts.Hours - 1);
                }
                else if (ts.Hours >= 10 && ts.Hours <= 12) // K-M TRICKY: skip J (Juliet)
                {
                    letter = 'A';
                    letter += (char)ts.Hours;
                }
                else if (ts.Hours >= -12 && ts.Hours <= -1) // N-Y
                {
                    letter = 'Z';
                    letter += (char)ts.Hours;
                }
                else
                {
                    throw new InvalidOperationException("Unknown UTC offset for timezone");
                }
                builder.Append(letter);
                builder.Append(dt.ToString("MMMyy", ci).ToLower());
                return builder.ToString();
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                DateTime now = DateTime.Now;
                Console.WriteLine("Zulu DTG {0}", now.ToZuluDtg());
                Console.WriteLine("Berlin DTG {0}", now.ToDtg(TimeZoneInfo.GetSystemTimeZones().Where(i => i.DisplayName.Contains("Berlin")).FirstOrDefault()));
                Console.WriteLine("Local DTG {0}", now.ToDtg(TimeZoneInfo.Local));
                Console.ReadLine();
            }
        }
    }
Least surprising extension methods for DateTime to NATO date-time group
c#;datetime
Firstly, I don't like the acronyms. With intellisense you can hit tab pretty early on to finish your names for you, so there's no sense adding an extra cognitive leap to work out what these functions do (even with your fairly comprehensive commenting). Call them ToDateTimeGroup and ToZuluDateTimeGroup or, even better, ToNatoDateTimeGroup and ToNatoZuluDateTimeGroup.Speaking of commenting, your meta comments for the second method are unfinished. Additionally your commenting in-method is only commenting obvious code, which is unhelpful. Comments should explain why, not what or how, unless it's really hard to work it out (in which case, it probably needs refactoring). Additionally, comments on a new line above the code in question are generally more readable.Your parameter names would also be better written out fully, e.g. dateTime instead of dt.Your Z code is a magic string and should be a constant somewhere. It is arguable that your format strings are too.I seem to say this a lot, but prefer to use var when the right hand side of a declaration makes the type obvious. 
Doing so means that should you need to change a variable's type, you only have to edit the declaration once. I like to extend this rule to also include when the variable name makes the type obvious too, but many disagree, so your mileage may vary.

Resulting code looks like this:

/// <summary>
/// Extension methods to create NATO date-time groups from DateTime instances.
/// </summary>
static class NatoDtg
{
    private const char ZuluTimeZoneCode = 'Z';

    /// <summary>
    /// Get NATO date-time group for UTC/Zulu timezone
    /// </summary>
    /// <param name="dateTime">this</param>
    /// <returns>NATO date-time group</returns>
    public static string ToNatoZuluDateTimeGroup(this DateTime dateTime)
    {
        var utcTime = dateTime.ToUniversalTime();
        var culture = new CultureInfo("en-GB");
        var builder = new StringBuilder();
        builder.Append(utcTime.ToString("ddHHmm", culture));
        builder.Append(ZuluTimeZoneCode);
        builder.Append(utcTime.ToString("MMMyy", culture).ToLower());
        return builder.ToString();
    }

    /// <summary>
    /// Get NATO date-time group for a time zone
    /// </summary>
    /// <param name="dateTime">this</param>
    /// <param name="timeZone">Time zone for the date-time group</param>
    /// <returns>NATO date-time group</returns>
    public static string ToNatoDateTimeGroup(this DateTime dateTime, TimeZoneInfo timeZone)
    {
        if (timeZone == null)
            throw new ArgumentNullException("TimeZoneInfo is null");

        var culture = new CultureInfo("en-GB");
        var builder = new StringBuilder();
        builder.Append(dateTime.ToString("ddHHmm", culture));

        TimeSpan time = timeZone.GetUtcOffset(dateTime);
        char letter;
        if (time.Hours == 0)
        {
            letter = ZuluTimeZoneCode;
        }
        else if (time.Hours >= 1 && time.Hours <= 9)
        {
            // Letters A-I are based on hours
            letter = 'A';
            letter += (char)(time.Hours - 1);
        }
        else if (time.Hours >= 10 && time.Hours <= 12)
        {
            // Letters K-M are based on hours, offset because we skip J
            letter = 'A';
            letter += (char)time.Hours;
        }
        else if (time.Hours >= -12 && time.Hours <= -1)
        {
            // Letters N-Y are based on hours, counted backwards from Z
            letter = 'Z';
            letter += (char)time.Hours;
        }
        else
        {
            throw new InvalidOperationException("Unknown UTC offset for timezone");
        }
        builder.Append(letter);
        builder.Append(dateTime.ToString("MMMyy", culture).ToLower());
        return builder.ToString();
    }
}

class Program
{
    static void Main(string[] args)
    {
        var now = DateTime.Now;
        Console.WriteLine("Zulu DTG {0}", now.ToNatoZuluDateTimeGroup());
        Console.WriteLine("Berlin DTG {0}", now.ToNatoDateTimeGroup(TimeZoneInfo.GetSystemTimeZones().Where(i => i.DisplayName.Contains("Berlin")).FirstOrDefault()));
        Console.WriteLine("Local DTG {0}", now.ToNatoDateTimeGroup(TimeZoneInfo.Local));
        Console.ReadLine();
    }
}
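For reference, the A-M / N-Y / Z letter arithmetic used above is easy to sanity-check in isolation. This is an illustrative standalone sketch (Python used here only for brevity; the mapping, not the API, is the point):

```python
def nato_zone_letter(offset_hours):
    """Map a whole-hour UTC offset to its NATO/military time-zone letter."""
    if offset_hours == 0:
        return 'Z'                            # Zulu = UTC
    if 1 <= offset_hours <= 9:                # A-I for UTC+1 .. UTC+9
        return chr(ord('A') + offset_hours - 1)
    if 10 <= offset_hours <= 12:              # K-M for UTC+10 .. UTC+12 (J is skipped)
        return chr(ord('A') + offset_hours)
    if -12 <= offset_hours <= -1:             # N-Y for UTC-1 .. UTC-12, counted back from Z
        return chr(ord('Z') + offset_hours)
    raise ValueError("no NATO letter for offset %+d" % offset_hours)

print(nato_zone_letter(1), nato_zone_letter(12), nato_zone_letter(-1))  # A M Y
```

The same three branches (plus the Zulu case) mirror the if/else chain in the C# code, which makes the boundary cases (+9/+10 around the skipped J, and -1/-12) quick to verify.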
_cs.55165
Could anyone explain the reason for popping the top of the stack (the dollar symbol), as done in this lecture (p. 54), when there's already a dollar symbol on the stack? I would like to know if we could replace the transition given there (p. 54), of the form A,$ → 0$, with A,ε → 0.
Why do we pop the dollar symbol when it's already present in the stack in PDA?
automata;pushdown automata
The end-of-stack symbol (often $\$$) is necessary for the automaton to know when there are no more stack symbols. Therefore, when we reach $\$$ we have two options:

Remove the $\$$ and terminate.
Put it back there for later, maybe adding additional symbols.

Ergo, as long as you do not want to terminate, you have to keep putting $\$$ back when you find it.

Why does it have to be like that, you ask? In the definition I know, PDAs always take out the top-most symbol. So the creator of the PDA has no choice but to deal with the $\$$ once it is on the top of the stack.

But why define it like that, you ask? If PDAs were allowed to not consume the topmost stack symbol they'd be inherently nondeterministic. And since you gain nothing by this choice -- doing nothing with the stack is trivial to simulate by just putting the symbol you read back -- there is no reason to introduce this source of nondeterminism.

Depending on the definition, termination is defined by an empty stack, or you have to switch into a final state. The definitions are equivalent. Remember that it's a declarative, not an algorithmic definition!
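The put-it-back convention is easy to see in a toy simulation (an illustrative sketch, not taken from the lecture notes): the transition A,$ → 0$ pops the marker and immediately re-pushes it beneath the new symbol, so the bottom-of-stack marker is preserved.

```python
def apply_transition(stack, pop_symbol, push_string):
    """Apply one PDA stack operation: pop `pop_symbol` from the top, then
    push the characters of `push_string` so its first character ends on top."""
    if not stack or stack[-1] != pop_symbol:
        raise ValueError("transition not applicable")
    stack.pop()
    for ch in reversed(push_string):
        stack.append(ch)

stack = ['$']                        # only the end-of-stack marker
apply_transition(stack, '$', '0$')   # the transition A, $ -> 0$
print(stack)                         # ['$', '0'] -- marker restored under the new symbol
```

A later transition that pops the '0' and pushes nothing brings the marker back to the top, at which point the automaton can choose to terminate or re-push it again.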
_unix.129530
I am using pam_mount to mount cifs and sshfs shares for users when they log in to an Arch Linux workstation. I have a volume line in /etc/security/pam_mount.conf.xml:

<volume user="strongbad" fstype="fuse" path="sshfs#%(USER)@myserver.com:" mountpoint="/mnt/%(USER)" options="idmap=user,password_stdin" />

Everything works great on login and I get an entry in /proc/mounts:

[email protected]: /mnt/strongbad fuse.sshfs rw,nosuid,nodev,relatime,user_id=1002,group_id=1002 0 0

The problem is the shares do not get unmounted when the user logs out. I have tried logging in with both KDE/KDM and directly to a TTY. I get the same error in the system logs no matter how I log in:

systemd[474]: (pam_mount.c:706): received order to close things
systemd[474]: (pam_mount.c:538): * PAM_MOUNT WAS INVOKED WITH INSUFFICIENT PRIVILEGES (euid=1002)
systemd[474]: (pam_mount.c:539): * THIS IS A BUG OF THE CALLER. CONSULT YOUR DISTRO.
systemd[474]: (pam_mount.c:540): * Also see bugs.txt in the pam_mount source tarball/website documen

Taking a look at bugs.txt uninformatively, at least to me, says:

== su, probably others privilege drop ==

I sometimes get reports about unmount failing because of insufficient privileges. Some programs and/or distributions and/or pam configurations seem to drop the root privileges after successful authentification. This goes counter to pam_mount which needs these privileges for umount. (May not apply for FUSE mounts.)

Known constellations include:
su from coreutils, on some distros
GDM on Ubuntu

This seems to describe my problem. Am I doing something wrong? Is Arch broken? Are there any distributions where pam_mount is actually able to unmount shares on logout?
Unmounting shares on logout with pam_mount
arch linux;pam;unmounting
null
_unix.320278
I use tmux, and every time I open a terminal it creates a new session and doesn't destroy the old one. I have set destroy-unattached to on, but in tmux ls the old sessions are still listed. Is it possible to end the session when I close the terminal window, so that I only ever have session '0'? Thanks
Automatically kill tmux session
tmux;gnome terminal
null
_codereview.40285
How should I optimize my code for better performance? When I execute the code outside of MySQL stored proc, it is 500% faster. MySQL stored procedureSELECT bs.business_id, adr.street, bs.`name`, bs.description, adr.latitude, adr.longitude FROM businesses bs INNER JOIN address adr ON bs.address_id = adr.address_id WHERE bs.business_id = inBusinessid; //code that fetches the data from the database public static final String SP_GET_BUSINESS_BY_ID = call small.tbl_business_get_by_id(?); public static final String BUSINESS_ID = inBusinessid; Business bs = null; try { SqlStoredProc sproc = new SqlStoredProc(StoredProcs.SP_GET_BUSINESS_BY_ID, getConnection()); sproc.addParameter(businessId, ProcParam.BUSINESS_ID); ResultSet reader = sproc.executeReader(); if (reader.next()) { bs = setBusinessData(reader); } reader.close(); sproc.dispose(); }Here is SQL wrapper I created. public class SqlStoredProc{ private CallableStatement mCallableStatement; private PreparedStatement mPreparedStatement; private Connection mConnection; private boolean mConnectionOpen = false; private boolean mInitConnectionClosed = true; public enum SqlType { Integer, BigInt, TinyInt, Varchar, Char, Date, TimeStamp, Array, Blob, Boolean, Float, Decimal, Double } public SqlStoredProc(String storedProcName, Connection connection) throws SQLException { mConnection = connection; mCallableStatement = mConnection.prepareCall(storedProcName); mConnectionOpen = true; } public SqlStoredProc(Connection connection) throws SQLException { mConnection = connection; mConnectionOpen = true; } /* START OF PREPARED STATEMENT CODE */ public void setPreparedStatement(String preparedQuery) throws SQLException { mPreparedStatement = mConnection.prepareStatement(preparedQuery); } public void addPreparedParamether(int parameterIndex, String value) throws SQLException { mPreparedStatement.setString(parameterIndex, value); } public void addPreparedParamether(int parameterIndex, int value) throws SQLException { 
mPreparedStatement.setInt(parameterIndex, value); } public void addPreparedParamether(int parameterIndex, float value) throws SQLException { mPreparedStatement.setFloat(parameterIndex, value); } public void addPreparedParamether(int parameterIndex, double value) throws SQLException { mPreparedStatement.setDouble(parameterIndex, value); } public ResultSet executePreparedQuery() throws SQLException { return mPreparedStatement.executeQuery(); } /* END OF PREPARED STATEMENT */ /* START OF STORED PROC */ public void setStoredProcName(String storedProcName) throws SQLException { mCallableStatement = mConnection.prepareCall(storedProcName); } public void addParameter(int value, String parameterName) throws SQLException { mCallableStatement.setInt(parameterName, value); } public void addParameter(int value, int parameterIndex) throws SQLException { mCallableStatement.setInt(parameterIndex, value); } public void addParameter(String value, String parameterName) throws SQLException { if (value != null) mCallableStatement.setString(parameterName, value); else mCallableStatement.setNull(parameterName, java.sql.Types.VARCHAR); } public void addParameter(String value, int parameterIndex) throws SQLException { if (value != null) mCallableStatement.setString(parameterIndex, value); else mCallableStatement.setNull(parameterIndex, java.sql.Types.VARCHAR); } public void addParameter(Date date, String parameterName) throws SQLException { if (date != null) { mCallableStatement.setTimestamp(parameterName, new java.sql.Timestamp(date.getTime())); } else { mCallableStatement.setNull(parameterName, java.sql.Types.TIMESTAMP); } } public void addParameter(double value, String parameterName) throws SQLException { mCallableStatement.setDouble(parameterName, value); } public void addParameter(float value, String parameterName) throws SQLException { mCallableStatement.setFloat(parameterName, value); } public void addParameter(float value, int parameterIndex) throws SQLException { 
mCallableStatement.setFloat(parameterIndex, value); } public int getOutParameterTypeInt(String parameterName) throws SQLException { return mCallableStatement.getInt(parameterName); } public float getOutParameterTypeFloat(String parameterName) throws SQLException { return mCallableStatement.getFloat(parameterName); } public double getOutParameterTypeDouble(String parameterName) throws SQLException { return mCallableStatement.getDouble(parameterName); } public void registerOutParameter(String parameterName, SqlType sqlType) throws SQLException { switch (sqlType) { case Date: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.DATE); break; case TimeStamp: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.TIMESTAMP); break; case Integer: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.INTEGER); break; case TinyInt: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.TINYINT); break; case Varchar: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.VARCHAR); break; case Array: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.ARRAY); break; case BigInt: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.BIGINT); break; case Blob: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.BLOB); break; case Char: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.CHAR); break; case Boolean: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.BOOLEAN); break; case Float: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.FLOAT); break; case Decimal: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.DECIMAL); break; case Double: mCallableStatement.registerOutParameter(parameterName, java.sql.Types.DOUBLE); break; default: break; } } public int executeNonQuery() throws SQLException { int rowsAffected = mCallableStatement.executeUpdate(); return 
rowsAffected; } public void addBatch() throws SQLException { mCallableStatement.addBatch(); } public boolean execute() throws SQLException { return mCallableStatement.execute(); } public int[] executeBatch() throws SQLException { return mCallableStatement.executeBatch(); } public ResultSet getResultSet() throws SQLException { return mCallableStatement.getResultSet(); } public boolean getMoreResults() throws SQLException { return mCallableStatement.getMoreResults(); } public ResultSet executeReader() throws SQLException { return mCallableStatement.executeQuery(); } public CallableStatement getCurrentStatement() { return mCallableStatement; } public void dispose() throws SQLException { closeOpenConnections(); } private void closeOpenConnections() throws SQLException { if (mConnectionOpen && mInitConnectionClosed) { if (mCallableStatement != null) mCallableStatement.close(); if (mPreparedStatement != null) mPreparedStatement.close(); mInitConnectionClosed = false; mConnection.close(); } }}
Optimize MySQL in a stored procedure
java;performance;mysql;sql
After three days of research, I discovered that CallableStatements are much slower than prepared statements because there is overhead when setting up the stored procedure call. That's why my stored proc takes 300ms+ vs the prepared statement. This explains the issue:

As you may recall, CallableStatement objects are used to execute database stored procedures. I've saved CallableStatement objects until last, because they are the slowest performers of all the JDBC SQL execution interfaces. This may sound counterintuitive, because it's commonly believed that calling stored procedures is faster than using SQL, but that's simply not true. Given a simple SQL statement, and a stored procedure call that accomplishes the same task, the simple SQL statement will always execute faster. Why? Because with the stored procedure, you not only have the time needed to execute the SQL statement but also the time needed to deal with the overhead of the procedure call itself.
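The same effect is easy to reproduce outside Java: a plain parameterized statement is prepared once and reused, with none of the procedure-call ceremony. This is a sketch in Python/sqlite3 purely for illustration (SQLite has no stored procedures, so only the prepared-statement side is shown; the table and data are made up to echo the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE businesses (business_id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO businesses VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

# One parameterized query reused for every lookup; the driver can cache the
# compiled statement instead of setting up a procedure call on each request.
query = "SELECT name FROM businesses WHERE business_id = ?"
for business_id in (1, 2):
    (name,) = conn.execute(query, (business_id,)).fetchone()
    print(business_id, name)
```

The analogous move in the original Java code would be routing the lookup through a PreparedStatement with the plain SELECT/JOIN instead of the CallableStatement wrapper, which is exactly the conclusion the measurements above point to.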
_softwareengineering.113720
We are a team of about 15 employees in a non-IT enterprise. Today, we mainly develop websites, using PHP, MySQL, etc. We run a bit less than 100 Linux servers ourselves. But now we are confronted with a project too big for us to code. We selected a great provider (yes, outsourcing), and now they ask us to choose between C++/Qt and .NET/WPF. The app is highly graphical, and meant to be distributed on Windows systems. .NET is recommended to us as easier to design for and more lightweight. But what about our team? Can we handle, with our small number, both universes at the same time?

responsibilities:
during the dev: design the app, review the code
after the dev: host servers, do maintenance and answer user calls
in the case we need a v2, we intend to call back the same provider

What do you think?

EDIT: We finally chose C++/Qt. The reasons are:
the dev time has been tested to be somewhat equal
it suits us better from a management point of view
and it increases cross-platform portability
Can a small team enter .NET world while most of them are working on OpenSource languages?
team;organization;wpf;qt
I'll go the other direction on this one: if your team has linux experience and familiarity, and you run your own servers, outsourcing to a .NET shop will be a disaster. You won't have the experience to rein in the outsourcers when they get crazy, your linux and PHP intuitions will fail you in the Windows environment, you won't easily spot goofy .NET constructs, and you'll curse the fates that bind you to supporting two sets of servers, one Linux, one Windows. If you hire some Windows folks to support that set of servers, the culture clashes and arguments will astonish you.
_unix.317353
I am reading a large file sequentially from the disk and trying to understand the iostat output while the reading is taking place.Size of the file : 10 GBRead Buffer : 4 KBRead ahead (/sys/block/sda/queue/read_ahead_kb) : 128 KBThe iostat output is as followsDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %utilsda 0.00 0.00 833.00 14.00 103.88 0.05 251.30 6.07 5.69 2.33 205.71 1.18 100.00Computing the average size of an I/O request = (rMB/s divided by r/s) gives ~ 128 KB which is the read ahead value. This seems to indicate that while the read system call has specified a 4KB buffer, the actual disk I/O is happening according to the read ahead value.When I increased the read ahead value to 256KB, the iostat output was as followsDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %utilsda 0.00 28.00 412.00 12.00 102.50 0.05 495.32 10.78 12.15 4.76 265.83 2.36 100.00Again the average I/O request size was 256 KB matching the read ahead.This kept up until I set 512 KB as the read ahead value and did not hold up when I moved up to a read ahead value of 1024 KB - the average size of the I/O request was still 512 KB. Increasing max_sectors_kb (maximum amount of data per I/O request) from the default of 512 KB to 1024 KB also did not help here. Why is this happening - ideally I would like to minimize my read IOPS as much as possible and read larger amount of data per I/O request (larger than 512 KB per request). Additionally, I am hitting 100% disk utilization in all cases - I would want to throttle myself to read at 50-60% disk utilization with good sequential throughput. In short, what are the optimized application/kernel settings for sequential read I/O.
Tuning sequential disk reads for performance
linux;performance;disk
null
_softwareengineering.165195
I have code to switch between databases when using EF, but now also need to allow the user to choose a particular table. (The tables all use the same schema but may have different names because of the way the client updates datasets). Is there a way to do this? And if it's done can it be done without being broken by updates being made to the Entity data model?
Possible to switch table at runtime using Entity Framework?
entity framework
null
_unix.340964
On booting the Rescue System from an openSUSE DVD, I find myself at a rescue login prompt:What are the default login details?
What is the default openSUSE Rescue login?
login;opensuse;rescue
The rescue login: text is a login prompt expecting you to type in a username. Enter root and press Enter, that should give you a root shell. If it asks you for a password, you press Enter again.Further reading: https://doc.opensuse.org/documentation/leap/startup/single-html/book.opensuse.startup/index.html#sec.trouble.data.recover.rescue
_softwareengineering.164050
We have a Staging branch. Then we came out with a Beta branch, so users could move, whenever they wanted, from the old Production branch to the new features. Our plan seemed simple: we test on Staging, and when items get QA'd, they get cherry-picked and deployed to Beta.

Here's the problem! A bug will discreetly make its way onto Beta, and since Beta is a production environment, it needs fixes fast and accurate. But not all the QA has been done. Enter Git hell...

So I find a problem on Beta. No sweat, it's already been fixed on Staging, but when I go to cherry-pick the item over, Beta barely has any of the other prerequisite code needed to implement this small change. Now Beta has a little here and a little there, and I can't imagine it as a code base being as stable as Staging. What's more, I'm dealing with some insane Git conflicts, and having to monkey-patch a bunch of things to make up for what Beta hasn't caught up with from Staging.

Can someone, in polite or non-polite terms, tell me what we're doing wrong here as far as assembling this project? Any awesome recommendations or workarounds or alternatives to the system we came up with?
How would you manage development between many Staging branches?
project management;branching
You fix outright bugs in the Beta branch in Beta, not in Staging. Then you back port the fix from Beta to Staging, if necessary, before it gets overwritten.
_unix.39982
Is there any way to set +x bit on script while creating?For example I run:vim -some_option_to_make_file_executable script.shand after saving I can run file without any additional movings.ps. I can run chmod from vim or even from console itself, but this is a little annoying, cause vim suggests to reload file. Also it's annoying to type chmod command every time.pps. It would be great to make it depending on file extension (I don't need executable .txt :-) )
vim: create file with +x bit
shell script;vim;executable;chmod
I don't recall where I found this, but I use the following in my ~/.vimrc:

" Set scripts to be executable from the shell
au BufWritePost * if getline(1) =~ "^#!" | if getline(1) =~ "/bin/" | silent !chmod +x <afile> | endif | endif

The command automatically sets the executable bit if the first line starts with "#!" and contains "/bin/".
_webmaster.52630
I have a query string attached to a request URI. While I can see this data within the pages report, and it works, I was thinking about setting up an advanced filter to convert the request URI to an event, in the hope this would clean up my pages report and place this query with related events in my data. I can see in advanced filters that this is possible, but it seems limited to specifying a single event field (Category, Action or Label), not all 3. Does anyone know how I could set up an advanced filter to find any URIs that contain a specific query string, say the example below:

www.example.com?querystring=123

and convert this into an event, where I can set the Category, Action and Label?
Google Analytics Request URI to Event advanced filter
google analytics
null
_cs.65353
The report referred to in the title is the following:

A Block-sorting Lossless Data Compression Algorithm, by M. Burrows and D.J. Wheeler, SRC Research Report, 1994

The step I do not understand is step D3, on page 4:

for each i = 0, ..., N - 1: S[N - 1 - i] = L[T^i[I]]

The question is: why does this work, i.e. why does this give us the desired result? (To be clear, I understand what T means, and how T^i is constructed. I even coded the formula above in Python, and it did give me the desired result -- I know, because I coded the other steps as well. I just don't see why it works.)

The algorithm described in the report consists of a Compression and a Decompression part, my question being about the last step of the Decompression. In order to give more context, as requested in the comment, I'll try to summarise the two mentioned parts here.

Compression

Input text: S

C1. Take all the cyclic rotations of S, and sort them lexicographically (result: matrix M). Return M and I, where I is the index of the first row of M which is equal to S.
C2. Let L be the last column of M.

Output of compression: (L, I)

Decompression

Input: (L, I)

D1. Sort L. Result: F.
D2. Calculate vector T such that if L[j] is the kth instance of ch in L, then T[j] = i, where F[i] is the kth instance of ch in F. In other words: F[T[j]] = L[j].
D3. Calculate S: for each i = 0, ...
, N - 1: S[N - 1 - i] = L[T^i[I]]

where T^0[x] = x and T^(i+1)[x] = T[T^i[x]].

EDIT: Example for further clarification, based on the answer of @KWillets.

Let's take the example abraca, used in the paper as well. As also shown in the paper, the suffixes are the following:

F|       |L    T    T^0  T^1  T^2  T^3  T^4  T^5
a|a b r a|c    4     0    4    2    5    3    1
a|b r a c|a    0     1    0    4    2    5    3    <-- I = 1
a|c a a b|r    5     2    5    3    1    0    4
b|r a c a|a    1     3    1    0    4    2    5
c|a a b r|a    2     4    2    5    3    1    0
r|a c a a|b    3     5    3    1    0    4    2

From the answer of @KWillets:

Think of the source string S as a (array-based) linked list, with one character per node, so that we can output S from left-to-right by traversing pointers.

I suppose that I understand this part. In the concrete example this would mean the following:

position:           0    1    2    3    4    5
label:              A    B    R    A    C    A
index of next node: 1    2    3    4    5    ? (0? EOF?)

What I'm not sure about is whether the last position (5) in this case would also have an index to item 0 (thus making a cyclic list), or whether it would just be the end of the list.

But let's also arrange it so that the nodes are suffix sorted, ie each node is assigned a position i such that the suffix that begins at that node is greater than the one at i-1, and so on. The links in this array are T [...]

As far as I understand, this would mean the following (i.e., making the links equal to T):

position:           0    1    2    3    4    5
label:              A    A    A    B    C    R
index of next node: 4    0    5    1    2    3

(and traversing them is D3 above, and the label for each node is in F above)

Makes sense: if I traverse this list in the sequence of T (i.e. 0 -> 4 -> 2 -> 5 -> 3 -> 1), then the result will be ACARBA, i.e. the reverse of the original input string.

Main question, that I still do not understand: why does T have this property? I.e.
why will a list defined as described in D2 happen to be the correct indexes for generating the reverse of the original string?

Specific questions:

"L[i] is the character just to the left of suffix i." What does "the character to the left of suffix i" mean? Considering the above example, would this mean that e.g. for i=2, L[2] = 'r' is to the left of acaabr? Does this mean that 'r' cyclically precedes the first 'a' in 'acaabr', i.e. if from the first 'a' we went one step to the left, we would get the 'r' at the end? Also, how does this help us?

"[...] for instance if L[i]='b', that means there is some suffix starting with 'b' that T[i] should point to, but we don't know which one."

Let's consider 'a' instead of 'b', since there are more 'a's. So, for i=3, L[3]='a', which means there is a suffix starting with 'a' to which T[3] should point. Indeed, T[3]=1, which is the second suffix starting with 'a', so it corresponds to the description of the algorithm.

What I don't understand: how does L[i] come into the picture? Wasn't T supposed to be the ordering among the F's? (I guess this must have something to do with the above-mentioned "the character to the left" property, but I do not see the connection yet.)

Also, how do we know for sure that there must be such a suffix?
Why does the last step of Decompression Transformation of the cited report work?
algorithms;data compression
First, a bit of terminology:The BWT represents a list of suffixes in lexicographical order (I'll use the term suffix instead of block, since the string is usually terminated with a metacharacter that makes suffix and block sorting equivalent). Each index i in the L and F arrays represents a unique suffix, and it can be illustrative to make a table with both L and F side-by-side.L[i] is the character just to the left of suffix i, or the last column in a cyclic shift of the string; it basically tells us for any given suffix S[j..n] what S[j-1] is. T allows a scan of the text without certain overheads which I'll describe. The implementations I've seen use $T^{-1}$, which allows a forward scan of the text, but in D3 it's doing a backward scan. Structure:Think of the source string S as a (array-based) linked list, with one character per node, so that we can output S from right-to-left (using T) or left-to-right (using $T^{-1}$) by traversing pointers. But let's also arrange it so that the nodes are suffix sorted, ie each node is assigned a position i such the suffix that begins at that node is greater than the one at i-1, and so on.The links in this array are $T$ (and traversing them is D3 above, and the label for each node is in F above). But it's a bit bulky, and there are a number of artistic ways to compress it.The BWT is probably the most difficult way. It's a representation of the links of T, but very indirectly, since for instance if L[i]='b', that means there is some suffix starting with 'b' that T[i] should point to, but we don't know which one -- there's a run of 'b's in F that could be the right index.To disambiguate 'b' we have to use a property of $T^{-1}$: for a range of suffixes that begin with the same character, their $T^{-1}$ pointers are in ascending order. That is, if 'bat..' and 'bug...' are suffixes, the $T^{-1}$ pointer from 'b' to 'at...' comes before the one from 'b' to 'ug...'. 
In other words, their order is the same as the order of their suffixes.That means that for any given 'b' at index i in L, we can figure out which entry in the 'b' range T[i] points to by counting only the b's that precede i in L, ie the rank of the 'b' at i among all the b's in L. That's D2. Basically T[i] = the rank of L[i] amongst other incidences of the same character in L + the start of the range that begins with L[i]. Calculating ranks one i at a time is slow (although there are other data structures that make it fast), so the translation to T is done all at once, by counting and summing in a process similar to counting sort (D1 and D2). But regardless of how, the output T is an array-based linked-list structure that can be decoded much more quickly than the ambiguous back-links in L. (Also, while traversing T to output S, we can omit the character labels at each node if we use L and output the character just preceding, and adjust everything by 1. That's the L[...] in D3. )
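The whole round trip — C1/C2 on the compression side and then D1-D3 exactly as written in the report — fits in a few lines, which makes the 'abraca' example above easy to check. This is a Python sketch; the naive sort of all rotations is fine for illustration even though real implementations use suffix sorting:

```python
def bwt(s):
    """C1/C2: sort all cyclic rotations of s; return (last column L, index I)."""
    n = len(s)
    rotations = sorted(s[i:] + s[:i] for i in range(n))
    last_column = "".join(rot[-1] for rot in rotations)
    return last_column, rotations.index(s)

def inverse_bwt(last_column, idx):
    """D1-D3 from the report: build T from L and F, then walk it backwards."""
    n = len(last_column)
    first_column = sorted(last_column)                 # D1: F = sorted L
    # D2: T[j] = position in F of the k-th occurrence of L[j],
    # i.e. start of L[j]'s run in F plus the rank of this occurrence in L.
    run_start = {}
    for i, ch in enumerate(first_column):
        run_start.setdefault(ch, i)
    seen = {}
    transform = [0] * n
    for j, ch in enumerate(last_column):
        transform[j] = run_start[ch] + seen.get(ch, 0)
        seen[ch] = seen.get(ch, 0) + 1
    # D3: S[N - 1 - i] = L[T^i[I]]
    out = [""] * n
    pos = idx
    for i in range(n):
        out[n - 1 - i] = last_column[pos]
        pos = transform[pos]
    return "".join(out)

L, I = bwt("abraca")
print(L, I)               # caraab 1
print(inverse_bwt(L, I))  # abraca
```

Running it on "abraca" reproduces the L = "caraab", I = 1 pair and the T = [4, 0, 5, 1, 2, 3] walk discussed above; note the sketch assumes the rotations of S are all distinct (the paper's EOF convention guarantees this in practice).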
_unix.298135
I need to build a command by concatenating other, smaller string values:

function processES {
    local status=$KO_UNKNOWN
    local method="GET"
    if [ ! -z "$1" ]; then method="$1"; fi
    local curl="curl -s -X${method} $2"
    if [ ! -z "$3" ]; then curl+=" -d '$3'"; fi
    local jq="jq 'if has(\"error\") then .error.type elif has(\"acknowledged\") then \"ok\" else \"unknown\" end'"
    local sed="sed -e 's/^\"//' -e 's/\"\$//'"
    local command="$curl | $jq | $sed"    <<<<<<<<<<<< CONCATENATION
    echo $command
    local r=$(command)
    e_warning $r
    if [ $? == $RESPONSE_OK ]; then
        status=$OK
    elif [ $? == $RESPONSE_ERROR_IDXALDEXISTS ]; then
        status=$KO_IDXALDEXISTS
    else
        status=$KO_UNKNOWN
    fi
    return $status
}

The concatenated command is well built:

curl -s -XPUT http://localhost:9201/living_v1 | jq 'if has("error") then .error.type elif has("acknowledged") then "ok" else "unknown" end' | sed -e 's/^"//' -e 's/"$//'

The problem is I'm not quite sure how to execute it, get the result, and set it on a variable. Any ideas?
How to perform a concatenated command and set the result in a variable
shell script
null
_unix.378101
I want to create a script (script.sh) to change the contents of /etc/proxychains.conf. I would like to do it in the form of a menu, something like this after running ./script.sh:

[1] Add Proxychain
[2] Start Proxychains
[3] Exit
Please select an option:

If the user selects [1], I would like it to ask for these inputs:

Type of Proxy:
Proxy IP Address:
Proxy Port:

and use these inputs in this form (socks5 127.0.0.1 9050) at the bottom of the file /etc/proxychains.conf, where you would normally add your proxy info. I was thinking of maybe assigning each input to a variable like $type $ip $port (so you can display each field next to each other). But how can I make it so it automatically finds that spot in the file proxychains.conf and adds those fields under any other ones previously added? Afterwards I would like to return to the option menu, where if the user presses [2] it would run:

proxychains firefox

And of course after firefox closes I would like to bring back the menu, where if the user selects [3] it would echo Goodbye, sleep 1s and exit.
(Shell Scripting) - Modify files with user input
bash;shell script;terminal
This script should solve your problem. And next time, try to write a script yourself and ask when you run into a problem, instead of asking for a whole solution.

#!/bin/bash

function menu {
    echo
    echo "++++++++++++++++++++++++++++++++++++++++++++"
    echo "+++++proxychain applications version1.0+++++"
    echo "++++++++++++++++++++++++++++++++++++++++++++"
    echo "[1] Add Proxychain"
    echo "[2] Start Proxychains"
    echo "[3] Exit"
    echo "[4] Cat specified file"
    read -p "Please select an option : " option
}

function input {
    read -p "Input Type of Proxy : " type
    read -p "Input Proxy IP Address : " ip_addr
    read -p "Input Proxy Port : " port
    if [ ! -z "${type}" ] && [ ! -z "${ip_addr}" ] && [ ! -z "${port}" ]
    then
        echo -e "${type}\t${ip_addr}\t${port}" >> proxychains.conf
    else
        input
    fi
}

function start {
    proxychains firefox
}

function terminate {
    if [ -z "$(pgrep firefox)" ]
    then
        echo -e "Goodbye\n"
        sleep 1
        end_script=1
        exit 0
    else
        echo -e "Terminating firefox!\n"
        pkill firefox
        echo "Goodbye"
        end_script=1
        exit 0
    fi
}

function catme {
    read -p "Input the file path : " file_path
    cat "${file_path}"
    exit 0
}

end_script=0
while [ ${end_script} != 1 ]; do
    menu
    current_stage=${option}
    case ${option} in
        1) input ;;
        2) start ;;
        3) terminate
           exit 0 ;;
        4) catme ;;
        *) echo "unknown usage!" ;;
    esac
done
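A small, testable sketch of just the append step from the script above: printf formats the entry, and `>>` adds it below whatever was added before. A temp file stands in for /etc/proxychains.conf here; in the real file, entries belong under its [ProxyList] section, which sits at the bottom, so appending to the end of the file lands them in the right place:

```shell
#!/bin/sh
# add_proxy: append one proxy entry in proxychains.conf format
# (type, ip, port separated by tabs) to the given config file.
add_proxy() {
    printf '%s\t%s\t%s\n' "$2" "$3" "$4" >> "$1"
}

conf=$(mktemp)
printf '[ProxyList]\n' > "$conf"   # stand-in for the real section header
add_proxy "$conf" socks5 127.0.0.1 9050
cat "$conf"
```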
_opensource.5680
A fellow developer friend and I have been working on an iOS app for the last months. We have put a considerable amount of work into it, using GitHub for source control. We share our code through GitHub, but kept the project public since we are not willing to pay the fee to host private projects on GitHub. We don't have any problem with people looking through our code; in fact, that is one of the other reasons why we kept our project public. We want to be able to share our code with the world once we launch the project to the public.

However, we are concerned that someone could steal our code before we publish it to the Apple App Store. We are worried that someone publishes or submits our app as their own to the App Store before we do. How can we make sure that this is not allowed? What kind of license would allow an open-sourced project to be protected in a way that the owners maintain those publishing permissions? What kind of suggestions do you have for open-sourced projects that still want to maintain ownership of their projects?
Open-sourced project license
licensing
null
_webmaster.48850
I have a project which requires a web shop. It should be a Java-based web shop, since I know Java well. The functional requirements are:

- a modifiable layout for the main/goods pages
- modifiable processes, like checking out, etc.

Is there a solution that meets these requirements which I should prefer (e.g. something as popular as Magento)?
A Java-based modifiable web shop platform?
cms;ecommerce;java
null
_unix.343715
Due to a fault with my screen (white vertical bar, the rest is fine), I would like to be able to tell Linux (X and console) to only use part of the screen.Does anyone know if I can achieve this with kernel boot params, under-scanning, xrandr, or clever X configuration (or a combination thereof).I want to configure it to use something like 800px x 1080px (it's a 1920x800 display, the white bar appears to the right hand side), but without trying to centre the image (as adjusting the screen resolution does).All ideas welcomed.TIA.
Use partial screen display
linux;kernel;x11;xrandr;display
null
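One experiment worth trying is defining a smaller mode with cvt and xrandr and switching the output to it; whether the panel then anchors the picture to one side (hiding the faulty strip) or centers/scales it is driver-dependent, so treat this as an experiment rather than a guaranteed fix. The sketch below is a dry run that only prints the commands for review; the output name eDP-1 and the modeline numbers are assumptions, to be replaced with the real values from `xrandr -q` and `cvt 800 1080`:

```shell
#!/bin/sh
# Dry run: build the xrandr invocations as text instead of executing them,
# so they can be inspected (and the assumed values swapped in) first.
OUT=eDP-1                  # assumption: check `xrandr -q` for the real name
MODE=800x1080_60.00
MODELINE='69.25 800 848 928 1056 1080 1083 1093 1097 -hsync +vsync'  # illustrative; use `cvt 800 1080`
cmds="xrandr --newmode $MODE $MODELINE
xrandr --addmode $OUT $MODE
xrandr --output $OUT --mode $MODE"
printf '%s\n' "$cmds"
```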
_unix.172607
I apologize if this is a duplicate, but I could not find any related question. How can I verify that my Linux 3.16.1 kernel supports FAT32?

Things to keep in mind: I do not have /proc/config.gz available to me. When I built the kernel I enabled the following config values:

# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
# CONFIG_NTFS_RW is not set

However, when I plug in a USB drive formatted with FAT32, it is not automounted. This is fine; I'm not a complete Linux n00b. I ran lsusb and saw my device listed. Fantastic! Let's see what it's listed under in /dev. I ran sudo blkid, but my HDD is the only disk listed. Running modprobe vfat and restarting did not change the results above. Is there something else that I'm missing?

Edit: kernel messages when a USB device is connected and disconnected:

Dec 10 11:46:45 narrator kernel: [   20.164811] usb 2-1.8: new full-speed USB device number 5 using ehci-pci
Dec 10 11:46:45 narrator kernel: [   20.280044] usb 2-1.8: New USB device found, idVendor=0a5c, idProduct=5801
Dec 10 11:46:45 narrator kernel: [   20.280055] usb 2-1.8: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Dec 10 11:46:45 narrator kernel: [   20.280061] usb 2-1.8: Product: 5880
Dec 10 11:46:45 narrator kernel: [   20.280067] usb 2-1.8: Manufacturer: Broadcom Corp
Dec 10 11:46:45 narrator kernel: [   20.280072] usb 2-1.8: SerialNumber: 0123456789ABCD
Dec 10 11:46:45 narrator kernel: [   20.280200] usb 2-1.8: config 0 descriptor??
Dec 10 11:46:45 narrator kernel: [   24.792805] usbcore: registered new interface driver usbhid
Dec 10 11:46:45 narrator kernel: [   24.792812] usbhid: USB HID core driver
Dec 10 11:46:45 narrator kernel: [   24.924891] input: Logitech USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.1/2-1.1:1.0/0003:046D:C03E.0001/input/input14
Dec 10 11:46:45 narrator kernel: [   24.925127] hid-generic 0003:046D:C03E.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB-PS/2 Optical Mouse] on usb-0000:00:1d.0-1.1/input0
Dec 10 11:49:43 narrator kernel: [  209.720215] usb 2-1.3: USB disconnect, device number 4

/proc/filesystems:

$ cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   ramfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   tmpfs
nodev   devtmpfs
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   pipefs
nodev   devpts
nodev   hugetlbfs
        vfat
        msdos
        ntfs
nodev   pstore
nodev   mqueue
        ext4
nodev   autofs
        ext2
nodev   binfmt_misc
        fuseblk
nodev   fuse
nodev   fusectl
Verify Linux FAT32 Support
debian;filesystems;vfat;fat32
null
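Regardless of /proc/config.gz, the kernel reports every filesystem it can currently mount in /proc/filesystems, so FAT32 support boils down to whether vfat appears there. A runnable check (note: the blkid result in the question suggests the device may not be exposing a block device at all, which would be a separate problem from FAT32 support):

```shell
#!/bin/sh
# Report whether the running kernel can mount FAT32 (the vfat driver),
# by looking for it in the kernel's list of mountable filesystems.
check_vfat() {
    if [ -r /proc/filesystems ] && grep -qw vfat /proc/filesystems; then
        echo "vfat supported"
    else
        echo "vfat not listed; try 'modprobe vfat' and re-check"
    fi
}
check_vfat
```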
_unix.338164
I work for a company that provides internet to very remote locations. The bandwidth we provide to these locations is very limited: around 1 Mbps down. The people at these locations would love nothing more than to watch Netflix, but unfortunately the 1 Mbps link doesn't allow for HD streaming, especially for multiple people at once.

Would it be possible to set up a Linux server on site and preload several terabytes' worth of encrypted Netflix movies and TV shows, so that when people go to Netflix and try to watch something, they would be streaming from our local server instead of over the internet? People would still need a Netflix account; we're not trying to rip off Netflix.

I imagine that SSL would give certificate warnings if we tried to transparently intercept Netflix traffic, so my guess is that we would do something like this:

1. Block netflix.com
2. Get an SSL certificate for something like netflix.ourcompanydomain.com
3. Have the DNS server on site point netflix.ourcompanydomain.com to the LAN IP of the Linux server
4. Install the certificate on the Linux server
5. Tell people on site to go to netflix.ourcompanydomain.com to reach Netflix
6. Analyze requests to netflix.ourcompanydomain.com:
   - if a request is for a web page, forward it to the real Netflix, and forward Netflix's reply to the client
   - if a request is for a video from Netflix, either return it from local cache, or if it's not in the local cache, deny the request

The solution isn't transparent to the user, since they won't be going to netflix.com but rather netflix.ourcompanydomain.com, but that's not a big deal.

Does this sound right? Would we need to set up Squid for something like this, or would a web server like nginx be able to handle it? Any other advice?
How can we cache netflix content locally?
proxy;nginx;webserver;cache;squid
null
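For the generic caching part of steps 5 and 6, nginx's proxy_cache is the usual building block. The sketch below just generates a minimal, illustrative config to show the shape of such a setup; the domain, certificate paths, and cache sizes are made up, and this says nothing about whether Netflix's players would accept such a proxy: their DRM and certificate handling may well defeat the whole idea in practice.

```shell
#!/bin/sh
# Generate an illustrative nginx reverse-proxy config with a disk cache.
# All paths, names, and sizes below are assumptions for the sketch.
conf=$(mktemp)
cat > "$conf" <<'EOF'
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=media:100m max_size=4000g inactive=30d;

server {
    listen 443 ssl;
    server_name netflix.ourcompanydomain.com;
    ssl_certificate     /etc/ssl/ourcompany.crt;
    ssl_certificate_key /etc/ssl/ourcompany.key;

    location / {
        proxy_pass https://www.netflix.com;
        proxy_cache media;
        proxy_cache_valid 200 30d;
    }
}
EOF
grep -c proxy_cache "$conf"   # how many cache directives the config contains
```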
_unix.366523
I'm new to Bind 9. I have followed a tutorial on the web, and I don't know why it's not working. I have configured these files:

named.conf.local:

//
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//
include "/etc/bind/zones.rfc1918";

zone "ejemplo.com" {
    type master;
    file "/var/lib/bind/db.ejemplo.com.hosts";
};

db.ejemplo.com.hosts:

;
; BIND Database file for ejemplo.com zone
;
@ IN SOA ejemplo.com. hostmaster.ejemplo.com. (
        2011091601 ; serial number
        3600       ; refresh
        600        ; retry
        1209600    ; expire
        3600 )     ; default TTL
;
        IN NS  ns.ejemplo.com.
        IN MX  10 mail.ejemplo.com.
        IN TXT ( "v=spf1 mx ~all" )
;
localhost A 127.0.0.1
ns        A 192.168.200.250
mail      A 192.168.200.251
www       A 192.168.200.252

This is my resolv.conf:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.0.1

When I try to run host ejemplo.com, the result is:

Host ejemplo.com not found: 2(SERVFAIL)
Problem with Bind9 initial configuration
bind9
null
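Before chasing the SERVFAIL further, it is worth validating the zone file itself; named-checkzone (from the bind9 utilities) reports syntax and record problems directly. A guarded sketch, with the zone name and file path taken from the configuration above:

```shell
#!/bin/sh
# Validate the zone before reloading BIND. named-checkzone ships with the
# bind9 utilities; if it is not installed, say so instead of failing hard.
zone=ejemplo.com
file=/var/lib/bind/db.ejemplo.com.hosts
check_zone() {
    if command -v named-checkzone >/dev/null 2>&1; then
        named-checkzone "$zone" "$file" || echo "zone did not validate"
    else
        echo "named-checkzone not found (install the bind9utils package)"
    fi
}
check_zone
```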
_webapps.48207
I want my Gmail password to be sent to my recovery email address without changing the password. Is this possible? If yes, how can it be done?
Is it possible to get Gmail Password sent to my recovery email address?
google account;account recovery
null
_webmaster.95549
Optimizely's landing page banner has two h1 elements:

<div class="header-container tooltip-container">
  <h1 class="mkt-hero__header optly-header-one">Hello!</h1>
  <h1 class="mkt-hero__header">Let&apos;s optimize digital experiences for your customers.</h1>
  <span class="tooltip"><!-- [] --></span>
</div>

If I saw this on any other site I would dismiss it as bad practice. But Optimizely is a YC alumnus that wrote the book on optimizing site performance. Does anyone know why they chose to use two h1 elements?
If multiple h1 elements are bad practice, how come Optimizely has more than one h1?
seo;html;h1
You're right: two h1 elements in the same section are not a good idea, semantically or structurally, but it's not an HTML error. It's most likely a copy/paste slip on their part, but who knows why some people do what they do. Don't assume that any one company, no matter who they are, writes code perfectly in every way, all the time.
_unix.354982
I have a requirement where I need to add a new entry, for a custom top to be created, in the context_file of an EBS application installed on Oracle Linux 6. The context_file is an XML file. We need to search for a string and then add a new entry just after the searched string:

1. Search for the string AU_TOP> in the file.
2. Insert a string on a new line just after AU_TOP>:
   <TEST_TOP oa_var="s_testtop" oa_type="PROD_TOP" oa_enabled="FALSE">/u01/oracle/oracle/apps/apps_st/appl/test/12.0.0</TEST_TOP>
3. Save the file.

How can this be accomplished using a shell script?
Edit an EBS XML context_file to append a line after a particular one
text processing;scripting;xml
null
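A sketch of one way to do this with GNU sed's append command (`a`), which inserts text on a new line after each line matching a pattern. The sample context_file content below is made up for illustration, and both `-i` and the one-line `a text` form are GNU extensions (fine on Oracle Linux 6):

```shell
#!/bin/sh
# Append the TEST_TOP entry after the line matching AU_TOP>.
# A temp file with made-up content stands in for the real context_file.
ctx=$(mktemp)
cat > "$ctx" <<'EOF'
<AU_TOP oa_var="s_autop">/u01/oracle/apps/apps_st/appl/au/12.0.0</AU_TOP>
<other_entry/>
EOF

new='<TEST_TOP oa_var="s_testtop" oa_type="PROD_TOP" oa_enabled="FALSE">/u01/oracle/oracle/apps/apps_st/appl/test/12.0.0</TEST_TOP>'
sed -i "/AU_TOP>/a $new" "$ctx"   # GNU sed: append $new after matching lines
grep -c TEST_TOP "$ctx"           # count lines containing the new entry
```

On the real file you would point `ctx` at the actual context_file path and keep a backup first (for example with `sed -i.bak`).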
_webapps.107195
I want to turn off Gmail's security mechanism for my account so that it never ever asks me "you're logging in from a strange place; we'll send an SMS to your phone or call it, and you're to enter the code you receive". I mean never ever, no matter what country or continent I'm in. How can I do this? Actually, I want more:

- no out-of-the-blue requirement to provide a phone number once I've registered and logged in or have used the account for a while;
- no 2FA;
- no other kind of security hoops: an additional email, the name of my teacher's dog, the size of my neck. None of that;
- no physical keys or Android applications.
How to turn off the security of my gmail account completely? I mean, completely
gmail;security
null