id | question | title | tags | accepted_answer
---|---|---|---|---|
_unix.335822 | Recently I noticed that my Linux installation was loading at least one module that seems unnecessary for my system. The module in question is fjes, the FUJITSU Extended Socket Network Device Driver. This module matches the following devices in /sys: /sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/PNP0C02:03 /sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C02:01 /sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/INT3F0D:00 /sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C02:02 /sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C02:00 I want to know how I can get the following information about those devices: What kind of devices are they? Who are their manufacturers? In fairness, I have another open question related to this, but in that question I wanted to know why I had so many different devices with a single modalias. Now I want to make sure that none of them are really related to the fjes module. My ultimate intention is to open a bug report, so I want to be sure I'm not overlooking something silly. | How can I get more information about ACPI devices? | linux;kernel modules;acpi | null |
_unix.82397 | I was wondering: if I type history I can see the previous commands I typed, see what happened, and perhaps keep commands as a reference. But I don't know which directory I was in when I typed the commands. Is there a version of history that also displays the directory you were in when you typed the command? | History of commands and directories we were in | bash;command history | null |
_codereview.95191 | Question 7 from Lab 3 of Berkeley's CS61A course says: The following examples of recursive functions show some examples of common recursion mistakes. Fix them so that they work as intended. [...] def even_digits(n): """Return the percentage of digits in n that are even. >>> even_digits(23479837) # 3 / 8 0.375 """ if n == 0: return num_digits / num_evens num_digits, num_evens = 0, 0 if n % 2 == 0: counter += 1 num_evens += 1 return (even_digits(n // 10) + num_evens) / counter As per the assignment, the code below is supposed to solve this problem recursively (it MUST not be non-recursive). def even_digits(n, count_even = 0, count = 0): """Return the percentage of digits in n that are even. >>> even_digits(23479837) # 3 / 8 0.375 """ if n == 0: return count_even / count if (n % 2) == 0: return even_digits(n // 10, count_even + 1, count + 1) else: return even_digits(n // 10, count_even, count + 1) Is this code readable? Can we improve this solution? | Percentage of digits in n that are even | python;python 3.x;recursion | There is a serious error in your code. If the function is called as even_digits(0), then it causes the following error: ZeroDivisionError: integer division or modulo by zero, since in the first if you divide by count, which is 0. So, before improving the code, you should check if it is correct. ADDED: THE CASE OF n = 0. It is interesting to understand what the result should be in case the input (that is, n) is 0. The problem is labeled as "Percentage of digits in n that are even", and the natural interpretation is "Percentage of digits in the decimal representation of n that are even". So, what result should the function give when provided with the value 0? I argue it should be 1, since every decimal representation of 0 has only zeroes as digits, so its digits are 100% even. |
_cs.25810 | I have a matrix of integers of size $3\times n$. From each one of the three rows, for each column, I have to choose one number, with the restriction that, for each $i$, the numbers chosen in the $i$th and $(i+1)$st rows cannot come from the same column. The problem asks for an algorithm such that the sum of all the numbers I've chosen is the minimum possible, with time complexity $O(n)$ and space complexity $O(1)$. I know how to do this with dynamic programming in the classical way: I calculate the sub-problems and store the results in a matrix of size $3\times n$ until the matrix is complete; then the result is the minimum value of the last row. The problem is that this will have space complexity $O(n)$. What am I not seeing? | Finding dynamic programming algorithm | dynamic programming;space complexity | Run along the columns of the matrix keeping three numbers. For each row determine the minimum score assuming the last number chosen came from that row. |
_unix.165165 | Why won't this statement work? total=`expr $mPercent / 100 * .482 + $fPercent / 100 * .518` mPercent is a number resulting from an awk statement, as is fPercent. I also get expr: non-numeric argument as an error message. NOTE: For the purposes of this question, let's say that the 2 variables have the value 3.27. | Arithmetic operations with expr and variables | bash;shell;arithmetic;floating point | null |
_webapps.51558 | I'm not a programmer or coder, so please be gentle with me :-) I have a Google Spreadsheet that is used by different people to add tasks for my team. Here's an example of what it looks like: https://docs.google.com/spreadsheet/ccc?key=0AqSW43t8FdSfdDdaNk1QV2g1N25GTzg3YnRlem5renc&usp=sharing People can add tasks to column B and I will assign them to the right people in the team, filling in the other columns. The first field I'm capturing is the date that a new task is logged. I'd like to automate this so that when people add a new task starting in column B, column A gets updated with the date that the new line was created. I've found scripts that will update the timestamp in column A based on changes in column B, but I only want to log the creation date. In other words, when there is already an entry in column B and it gets updated, the date in column A should not change. Only when column B changes from an empty cell to a new value should column A on that row be updated with the date. In the above example doc, if I change the task on line 2, the date in column A should not change. If I add a new task in cell B3, it should add the current date in cell A3. Any help will be greatly appreciated. | Add a 'creation date' value when a new line is edited | google spreadsheets | null |
_softwareengineering.175034 | I'm working on a git workflow to implement in a small team. The core ideas in the workflow: there is a shared project master that all team members can write to; all development is done exclusively on feature branches; feature branches are code reviewed by a team member other than the branch author; the feature branch is eventually merged into the shared master and the cycle starts again. The article explains the steps in this cycle in detail: https://github.com/janosgyerik/git-workflows-book/blob/small-team-workflow/chapter05.md Does this make sense or am I missing something? | Git workflow for small teams | version control;git;teamwork;workflows | I like the git flow branching model. The master branch is left alone most of the time, it only contains releases. The develop branch should be stable at all times, and the feature branches can be broken. You can also combine this with continuous integration by merging develop into your feature branch and your feature branch into develop. Of course you should only merge something into develop when you're confident that things are working and do not break. |
_webmaster.28160 | When a visitor clicks on a link or is redirected to my site from another site, Google typically includes information about the domain and the page the visitor was linked from. How does Google track this information? | How does Google Analytics track referring URLs? | google analytics;referrer | null |
_unix.333164 | Of course the system would prevent mounting two things on one directory. But what if you mount a FUSE program in the top directory and another FUSE program in a subdirectory? What exactly happens when you call operations on the subdirectory? My assumption is that it initially goes through the mount-related operations of the top FUSE, and then goes to the actual operations you called in the sub FUSE. Is that correct? EDIT: Turns out I was wrong about not being able to mount two things on one directory. Then which FUSE program gets priority when two different FUSE programs are mounted on the same directory? | What happens if there is more than one FUSE program acting on a directory? | mount;fuse | null |
_softwareengineering.273061 | I would like to understand a bit more of the theory and the approaches available for modelling a population roaming across a landscape. Assume discrete time and space, as simple as a discrete grid, and a population of different creatures and plants able to perform very simple actions such as move to a neighboring grid square or eat a neighboring plant (for animals), or expand to a neighboring grid square (for plants). This is inspired by the following JavaScript project, which seems easy when you follow the narrative, but when I tried to model it on my own from scratch using my own abstractions, I was frustrated. For instance, a decision point I encountered was: should both the grid and the creatures themselves know where each thing (plant or animal) is located (in terms of x-y coordinates), or only one of the two? Holding this information in both places may simplify algorithms, but then you have the possibility of corruption. I came up with the following very basic engine (pseudocode): while (world.containsAtLeastOneLivingThing()) { listOfAllPlantsAndAnimals = world.allSurvivingThings(); var actions = []; for (thing: listOfAllPlantsAndAnimals) actions.push(thing.liveAnotherMoment()); // things like: move, eat, expand, die var consolidatedActions = resolveConflicts(actions); world.absorbChange(consolidatedActions);} I gave up on this attempt when I realized that the actions need to hold so much information (such as where on the grid the action occurred, which creatures/plants were involved, and how to identify those creatures, whether by some ID or by their location on the grid) that in the end the above engine doesn't really succeed in breaking up the problem into manageable chunks, as all the complexity ends up in the resolveConflicts function. I am not really interested in modelling efficiency but rather in clarity of algorithms and abstractions. Is there some broader body of theory that deals with the modelling of this subject matter? Game programming perhaps? I am confident I could tackle this more easily in a strongly-typed language like Java; trying to code this in a language (JavaScript) that is both loosely typed and in which I have limited exposure (and so lack the ability to express concepts idiomatically) surely doesn't help. | modelling an ecosystem evolving on a landscape | javascript;simulation;games | null |
_unix.267352 | We have two web server nodes, primary and secondary; if the primary is down for any reason, the secondary will act as primary. Now, regarding the code, which is on both hosts: we should be synchronizing it with the actual primary's data. How do we sync that code? I understand that rsync can sync everything from the live server to the secondary, but what about changes where some file or folder was deleted from the live server? rsync should remove those from the secondary as well. For my requirement, can I use the rsync command below on my server? Will this work? rsync -avzhe ssh [email protected]:/var/www/ /var/www I tested this on my local system, with no luck: [ar@test ~]$ rsync -avzhe /home/ar/avi/ /home/ar/red/ sending incremental file list drwxrwxr-x 4096 2016/03/03 07:28:13 . sent 51 bytes received 12 bytes 126.00 bytes/sec total size is 0 speedup is 0.00 Solution: rsync -av --delete /home/ar/avi/ /home/ar/red/ | rsync complication on sync | linux;rsync;webserver | This command works for me; it syncs with the live server and deletes the files which were deleted from the live server: rsync -av --delete /home/ar/avi/ /home/ar/red/ |
_unix.31525 | I need to extend the root partition of a virtual machine (VM) using LVM (Logical Volume Manager). I can afford a few minutes of downtime, so a VM shutdown/reboot is fine. The virtual hard disk is in qcow2 format, but I can translate it to a raw format easily if it helps. Search engines did not help that much because answers usually refer to using an LVM partition to host the virtual hard disk; here the LVM partition is inside the virtual hard disk, which is a simple file. The virtual machine is running with linux-kvm and must stay bootable after the operation. | How to extend an ext3 partition over LVM inside a file (virtual machine)? | partition;virtual machine;lvm;kvm | Your safest pick, without the need to make any changes to your current qcow disk, is adding another disk to the VM. Once you have rebooted, you can run these commands: pvcreate /dev/${newdisk} vgextend ${vgname} /dev/${newdisk} lvextend -L +${n}G /dev/${vgname}/${root_lv} (+ means add ${n} GB to the LV) resize2fs /dev/${vgname}/${root_lv} In the end you get extra room on / with just a reboot. |
_cs.26438 | How can I design an XML Schema for logical and digital circuits? I can't find any help or manual for this. For example, I have a digital circuit with AND, OR, NOR, ... gates, and now I want to design the XML and schema for it. Thanks for the help; sorry for the bad English. | How to design xml schema for digital circuits? | data structures;logic;circuits | A simple example. Assume the following circuit in symbolic notation: Cir = ((A OR B) AND (C OR D)) OR (NOT(D) AND F), where OR, AND, NOT etc. can be realized with serial/parallel connections etc. XML scheme proposition: <circuit> <gate-or> <element> <gate-and> <element> <gate-or> <element>A</element> <element>B</element> </gate-or> </element> <element> <gate-or> <element>C</element> <element>D</element> </gate-or> </element> </gate-and> </element> <element> <gate-and> <element> <gate-not>D</gate-not> </element> <element>F</element> </gate-and> </element> </gate-or> </circuit> This is an example xml-scheme where the root element is circuit and can have gates or elements as children, and each gate (e.g. gate-or or gate-and) can have elements as children etc. Of course this is very verbose; a yaml or json representation scheme would be lighter |
_unix.172223 | In my current job I often have to work with files from Windows machines, which most of the time isn't a big deal, but when piping a side-by-side diff to less, not only are the ^M being displayed, but it also messes up the indentation, like in the following: <U+FEFF>using System;^M <U+FEFF>using System;^M using System.Reflection;^M using System.Reflection;^M using System.Runtime.Serializa^M using System.Runtime.Serializa^M using System.Transactions;^M using System.Transactions;^M (I don't particularly mind the UTF-8 BOM in the first line, as it affects only that one line.) I know I can do a diff -y <(tr -d '\015' < file-a) <(tr -d '\015' < file-b) | less But that's a heck of a lot to type, and when file-a and file-b share a long path, you can't use bash's curly braces nicety. And diff -y file-{a,b} | tr -d '\015' | less does not do the trick, as the formatting is already messed up. Interestingly though, the following displays fine both in terms of ^M and indentation: diff -y file-{a,b} | head So my question is, how do I get side-by-side diffs piped into less without the aforementioned issues? (Like adding some parameter to diff or less that I'm not aware of.) | `diff -y file-{a,b} | less` and DOS line endings display issues | diff;less;newlines | null |
_unix.194428 | I have never used bash completion with todo.txt CLI; however, I have decided to give it a try. On the author's github, it says: (Optional, since v 2.9:) Install the Bash completion, either system-wide, for all users: $ sudo cp todo_completion /etc/bash_completion.d/todo or put it somewhere in your home directory and source it from your .bashrc. Now in the install documentation for bash completion, it says: The easiest way to install this software is to use a package; it is available in many operating system distributions. The package's name is usually bash-completion. Depending on the package, you may still need to source it from either /etc/bashrc or ~/.bashrc (or any other file sourcing those). You can do this by simply using: # Use bash-completion, if available [[ $PS1 && -f /usr/share/bash-completion/bash_completion ]] && \ . /usr/share/bash-completion/bash_completion So from my understanding, which may be wrong, I should put # Use bash-completion, if available [[ $PS1 && -f /usr/share/bash-completion/bash_completion ]] && \ . /usr/share/bash-completion/bash_completion in my ~/.bashrc. Now should I put todo_completion in /usr/share/bash-completion/bash-completion? If so, I only have /usr/share/bash-completion, but there is a completions directory one level below bash-completion. To be honest, I could be all wrong with this thought process. I have been doing numerous searches on todo and bash-completion but nothing has been too promising. So if this is all incorrect, how do I set both up together? | Install an additional completion function for bash | bash;software installation;autocomplete | null |
_codereview.124525 | I wrote some JavaScript which regroups a JSON object by a new key, i.e., it transforms the following JSON: { "items": [ { "company": "A", "name": "Prod_A01" }, { "company": "A", "name": "Prod_A02" }, { "company": "B", "name": "Prod_B01" } ]} into this: { "A": [ { "company": "A", "name": "Prod_A01" }, { "company": "A", "name": "Prod_A02" } ], "B": [ { "company": "B", "name": "Prod_B01" } ]} And here is the logic: function regroup(collection, key){ var novo_obj = {}; var type = typeof collection; var keyGroup = []; getKeyGroup(collection, key, keyGroup); for(var i = 0; i < keyGroup.length; i++) { novo_obj[keyGroup[i]] = []; gatherElements(collection, key, keyGroup[i], novo_obj[keyGroup[i]]); //console.log(novo_obj); } console.log(JSON.stringify(novo_obj)); return novo_obj;} function gatherElements(collection, key, value, arr){ //console.log(collection); if(typeof collection == 'object'){ var targetValue = collection[key]; if(typeof targetValue != 'undefined' && targetValue == value){ arr.push(collection); }else{ for(var elem in collection){ gatherElements(collection[elem], key, value, arr); } } }else if(typeof collection == 'array'){ for(var i = 0; i < collection.length; i++) { getKeyGroup(collection[i], key, value, arr); } }} function getKeyGroup(collection, key, keygroup){ var targetValue = ''; if(typeof collection == 'object'){ targetValue = collection[key]; if(typeof targetValue != 'undefined'){ keygroup.push(targetValue); }else{ for(var elem in collection){ getKeyGroup(collection[elem], key, keygroup); } } }else if(typeof collection == 'array'){ console.log('isArr'); for(var i = 0; i < collection.length; i++) { getKeyGroup(collection[i], key, keygroup); } } return keygroup;} function main(){ var test = { "items": [ { "company": "A", "name": "Prod_A01" }, { "company": "A", "name": "Prod_A02" }, { "company": "B", "name": "Prod_B01" } ] }; regroup(test, "company");} However, I think looping every time does not look so good. I need to improve this source code. | Regrouping JSON object | javascript;object oriented;json | I think you may be complicating things a lot. If I understand correctly you are basically after groupBy, which you can find in lodash, or implement in a few lines of code: function groupBy(coll, f) { return coll.reduce(function(acc, x) { var k = f(x); acc[k] = (acc[k] || []).concat(x); return acc; }, {});} var test = { items: [{ company: "A", name: "Prod_A01" }, { company: "A", name: "Prod_A02" }, { company: "B", name: "Prod_B01" }]}; var result = groupBy(test.items, function(x){return x.company;}); JSON.stringify(result); /* ^ { "A": [ { "company": "A", "name": "Prod_A01" }, { "company": "A", "name": "Prod_A02" } ], "B": [ { "company": "B", "name": "Prod_B01" } ] } */ You can use this to group a collection (array of objects) by some property, test.items in this case. Also, I would leave the JSON part out of it, because it is irrelevant to the grouping algorithm. |
_unix.263813 | It should not depend on PulseAudio. Like veth for network, or v4l2loopback for video, it should create a virtual audio card from which I can record everything that is played. | How do I create virtual ALSA device from which I can record everything that is played? | linux;audio;alsa | Load the kernel module: modprobe snd-aloop. Use the plughw:CARD=Loopback,DEV=0 device for recording. Use the plughw:CARD=Loopback,DEV=1 device for playing (or vice versa). |
_unix.311873 | I have a program long_interactive_script.py which has thousands of print statements. I want to pipe the program through tee (or an alternative) so that I can save the output. If I do long_interactive_script.py | tee logfile.txt, Python puts its print statements in a 4K buffer, causing me to get: nothing, nothing, nothing, nothing, a whole lot of text!, nothing, nothing, a sudo prompt in the middle of a word, nothing, nothing, a whole lot of text! In an attempt to avoid the buffer I tried: unbuffer long_interactive_script.py | tee logfile.txt But this causes my script to stop being interactive. So when the script breaks into a sudo prompt, it halts. Note: I cannot simply sudo BEFORE running the script. The interactive script only requires sudo on some runs, and I don't want to ask for sudo when it isn't necessary. More... stdbuf -oL long_interactive_script.py | tee -a logfile.txt works to some extent. I get all the desired data, but I also get this error: ERROR: ld.so: object '/usr/lib64/coreutils/libstdbuf.so' from LD_PRELOAD cannot be preloaded: ignored. | Realtime print statements with tee in interactive script | rhel;sudo;python;pipe;tee | null |
_cstheory.19992 | The paper below and the news story based on it describe a new form of computation based on what they call environment-assisted quantum transport (ENAQT). ENAQT involves a combination of quantum and classical effects at room temperature. The paper says: "Current computers operate with about 4 GHz processors, where the cycle time of logical operations is 250 picoseconds. Computers based on artificial light harvesting complexes could have units with 100-1000 times larger efficiency at room temperature. But, it is also possible to realize such systems on excitons of organic molecules or on Hamiltonians arising in nuclear matter, which would provide a virtually endless source of improvement both in time and miniaturization below the atomic scale." http://www.technologyreview.com/view/522016/quantum-light-harvesting-hints-at-entirely-new-form-of-computing/ Evolutionary Design in Biological Quantum Computing, Gabor Vattay, Stuart A. Kauffman, http://arxiv.org/abs/1311.4688 I know next to nothing about quantum computers or complexity theory. Am I right in thinking that ENAQT computation is theoretically slower than quantum computers as heretofore conceived but faster than classical computers? Will terms such as qubit, quantum information and quantum complexity classes need new ENAQT counterparts (ENAQTbit, B-ENAQT-P instead of BQP, etc.) or will the existing quantum versions be sufficient for describing ENAQT computation? | Environment-assisted quantum transport computation | quantum computing | null |
_webapps.25016 | I would like to bring up something I or somebody else shared on Google+ months ago. It is time-consuming to just scroll down and down and down, clicking more and more. Can I just jump to a particular date instantly? | How to jump to a particular date when viewing a person's posts stream on Google+? | google plus | null |
_codereview.110783 | This is my first time writing C++, so I would appreciate advice in the areas of: code style (naming conventions, indentation, etc.); memory usage (am I performing unnecessary object copies?); class design (move constructors, destructors, etc.; are they necessary?); correct usage of standard library functions (especially the string parsing part). complex.h: #ifndef COMPLEX_H #define COMPLEX_H #include <string> class Complex{ private: double real_; double imag_; public: Complex(); Complex(const Complex& obj); Complex(const std::string& str); Complex(double real); Complex(double real, double imag); double real() const; double imaginary() const; double argument() const; double modulus() const; Complex conjugate() const; Complex pow(double power) const; std::string toString() const; Complex operator+(const Complex& rhs) const; Complex operator-(const Complex& rhs) const; Complex operator*(const Complex& rhs) const; Complex operator/(const Complex& rhs) const; bool operator==(const Complex& rhs) const; bool operator!=(const Complex& rhs) const;}; #endif complex.cpp: #include <cmath> #include <sstream> #include <regex> #include "complex.h" Complex::Complex(const Complex& obj) : Complex(obj.real_, obj.imag_) { } Complex::Complex(const std::string& str) { double real = 0.0, imag = 0.0; std::regex realRegex("^(-)?\\s*(\\d+(\\.\\d+)?)$"); std::regex imagRegex("^(-)?\\s*(\\d+(\\.\\d+)?)i$"); std::regex bothRegex("^(-)?\\s*(\\d+(\\.\\d+)?)\\s*([-+])\\s*(\\d+(\\.\\d+)?)i$"); std::smatch match; if (std::regex_match(str.begin(), str.end(), match, realRegex)) { real = std::atof(match[2].str().c_str()); if (match[1].matched) { real = -real; } } else if (std::regex_match(str.begin(), str.end(), match, imagRegex)) { imag = std::atof(match[2].str().c_str()); if (match[1].matched) { imag = -imag; } } else if (std::regex_match(str.begin(), str.end(), match, bothRegex)) { real = std::atof(match[2].str().c_str()); imag = std::atof(match[5].str().c_str()); if (match[1].matched) { real = -real; } if (match[4].str() == "-") { imag = -imag; } } else { throw std::runtime_error("Invalid number format"); } real_ = real; imag_ = imag;} Complex::Complex() : Complex(0.0) { } Complex::Complex(double real) : Complex(real, 0.0) { } Complex::Complex(double real, double imag) : real_(real), imag_(imag) { } double Complex::real() const { return real_;} double Complex::imaginary() const { return imag_;} double Complex::argument() const { return std::atan2(imag_, real_);} double Complex::modulus() const { return std::sqrt(real_ * real_ + imag_ * imag_);} Complex Complex::conjugate() const { Complex result(real_, -imag_); return result;} Complex Complex::pow(double power) const { double mod = modulus(); double arg = argument(); mod = std::pow(mod, power); arg *= power; double real = mod * std::cos(arg); double imag = mod * std::sin(arg); Complex result(real, imag); return result;} std::string Complex::toString() const { std::stringstream fmt; if (imag_ == 0) { fmt << real_; } else if (real_ == 0) { fmt << imag_ << "i"; } else { fmt << real_; if (imag_ < 0) { fmt << "-" << -imag_; } else { fmt << "+" << imag_; } fmt << "i"; } return fmt.str();} Complex Complex::operator+(const Complex& rhs) const { Complex result(real_ + rhs.real_, imag_ + rhs.imag_); return result;} Complex Complex::operator-(const Complex& rhs) const { Complex result(real_ - rhs.real_, imag_ - rhs.imag_); return result;} Complex Complex::operator*(const Complex& rhs) const { double newReal = real_ * rhs.real_ - imag_ * rhs.imag_; double newImag = real_ * rhs.imag_ + imag_ * rhs.real_; Complex result(newReal, newImag); return result;} Complex Complex::operator/(const Complex& rhs) const { double denom = rhs.real_ * rhs.real_ + rhs.imag_ * rhs.imag_; double newReal = (real_ * rhs.real_ + imag_ * rhs.imag_) / denom; double newImag = (imag_ * rhs.real_ - real_ * rhs.imag_) / denom; Complex result(newReal, newImag); return result;} bool Complex::operator==(const Complex& rhs) const { return real_ == rhs.real_ && imag_ == rhs.imag_;} bool Complex::operator!=(const Complex& rhs) const { return !(*this == rhs);} | Simple complex number class | c++;reinventing the wheel;mathematics | Well, your code-style is quite common, and consistently applied, so that's a plus. Your names though can be improved: I wouldn't use modulus for the absolute value, even though it seems to be perfectly correct, because there's a far more common and shorter way: just call it abs. .argument() is normally shortened to .arg(), .imaginary() to .imag(). Those can be debated though. You should provide compound-assignment operators +=, -=, *= and /=, and implement +, -, * and / in terms of them. Is there a reason you are explicitly defining your copy-constructor? The default one you get by omitting the declaration is fine. You are far too fond of member functions, and the increased coupling they bring. Read GotW 84: Monoliths Unstrung. Of your members, only real(), imaginary(), and the ones the language forces you to make members should be. (You should add free functions for the first two, or make them friend functions instead though.) .toString() should be the free function to_string(), like the standard-library one. Consider also adding a stream-inserter. Due to the format you chose, it's not possible to write a good stream-extractor. Construction from a std::string should be marked explicit, as it might fail or lose information. All other constructors (and all functions but to_string) should be marked constexpr. And most should be marked noexcept. Construction from std::string is complex enough that you should add a doc-comment giving all accepted formats. Consider merging your default-constructor, constructor from double, and constructor from real and imaginary components into one using default arguments. Also, implementing it in-class is potentially superior. Actually, consider in-class implementations for all small functions. Consider providing the square of the absolute value (as norm), to avoid the costly square root unless needed. As the class only contains two doubles, pass-by-value might actually be more efficient than pass-by-reference. That depends on the specific architecture and ABI though. (You might benefit from comparing your code with std::complex<double>.) |
_unix.364259 | I'm building a .deb package using: dpkg-deb --build package The directory package contains another directory called DEBIAN that has the changelog, but the resulting package doesn't have the changelog.Debian.gz in it, and if I check the package using lintian I get the following errors: E: msodbcsql: debian-changelog-file-missing W: msodbcsql: unknown-control-file changelog I don't know if it's relevant, but the permissions on the changelog are as follows: -rwxr-xr-x 1 maximk maximk 159 May 10 11:23 changelog Why is the changelog considered to be an unknown control file instead of, you know, a changelog? | dpkg-deb build ignores/misinterprets a changelog | dpkg;deb | In a binary package, the changelog isn't a control file, it's just part of the package's payload. With dpkg-deb -b, that means you need to place the changelog in usr/share/doc/${package}/changelog.Debian.gz directly (or .../changelog.gz for a native package). More explicitly, since you're building your package in the package directory, instead of putting your changelog in package/DEBIAN/changelog, you put it in package/usr/share/doc/package/changelog.Debian.gz, and build your package as before with dpkg-deb -b package. |
_vi.2566 | I installed ttf-font-awesome on Arch Linux; if I open a file which contains icons from Font Awesome, it recognizes the file as UTF-8 but the icons are displayed as squares. Does anyone know how to fix this? | How to display Font Awesome in Vim? | font | null |
_softwareengineering.318652 | I'm starting to develop an Android application where I have to persist user data. For server data I want to use Google Cloud with NoSQL, but I don't know what to use to save data in local storage when the user doesn't have a connection. My main difficulty is keeping both sets of data (local and cloud) synchronized. Is there a NoSQL database to use with Android in local mode? Thanks. | Synchronize local data with server data in an android application | android;nosql;android development;google cloud datastore | You might want to look into Firebase's Realtime Database (also Google) instead of Cloud Datastore. It has Android/iOS/JavaScript SDKs and is designed to work offline with a client-side cache, handling synchronization between client and server for you. If you want to do this in Cloud Datastore, you'll need to wrap the client libraries with your own methods for local caching and synchronization (not easy, but possible). If you use Cloud Datastore's auto-generated IDs for entities it will be much harder, since there isn't a great client-side way to do it. The way to do it would be to pre-allocate keys in advance per client and store them in a cache to use while offline. Alternatively, if you use names in the keys instead of IDs it will be somewhat easier, although you'll need to make sure there aren't conflicts between clients. |
_codereview.32567 | I'm working on a program that handles UTF-8 characters. I've made the following macros to detect UTF-8. I've tested them with a few thousand words and they seem to work. I'll add another one to do error-checking later, but for now I would like to know what mistakes I've made and how these macros can be improved. //check if value is in range of leading byte #define IS_UTF8_LEADING_BYTE(b) (((unsigned char)(b) >= 192) && (unsigned char)(b) < 248) //check if value is in range of sequence byte #define IS_UTF8_SEQUENCE_BYTE(b) ((unsigned char)(b) >= 128 && (unsigned char)(b) < 192) //can be any utf8 byte, first, last... #define IS_UTF8_BYTE(b) (IS_UTF8_LEADING_BYTE(b) || IS_UTF8_SEQUENCE_BYTE(b)) //no error checking, it must be used only on leading byte #define HOW_MANY_UTF8_SEQUENCE_BYTES(b) ((((unsigned char)(b) & 64) == 64) + (((unsigned char)(b) & 32) == 32) + (((unsigned char)(b) & 16) == 16)) | Macros to detect UTF-8 | c;macros;utf 8 | If you are sure that you have only validly encoded UTF-8, you can simplify #define IS_UTF8_BYTE(b) (IS_UTF8_LEADING_BYTE(b) || IS_UTF8_SEQUENCE_BYTE(b)) to #define IS_UTF8_BYTE(b) ((unsigned char)(b) >= 128) because every byte that is not ASCII must be a UTF-8 byte. |
_codereview.135481 | I want to fill a collection of ToolbarItems from XAML, but change the visibility of some ToolbarItem from the view model. I implemented BasePage with a wrapper around the ToolbarItems collection: public class BasePage : ContentPage{ public IList<CustomToolbarItem> CustomToolbar { get; private set; } public BasePage() { var items = new ObservableCollection<CustomToolbarItem>(); items.CollectionChanged += ToolbarItemsChanged; CustomToolbar = items; } private void ToolbarItemsChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e) { ToolbarItems.Clear(); foreach (var item in CustomToolbar) { item.PropertyChanged += OnToolbarItemPropertyChanged; if (item.IsVisible) { ToolbarItems.Add(item); } } } private void OnToolbarItemPropertyChanged(object sender, PropertyChangedEventArgs e) { if (e.PropertyName == CustomToolbarItem.IsVisibleProperty.PropertyName) { UpdateToolbar(); } } private void UpdateToolbar() { foreach (var item in CustomToolbar) { if (item.IsVisible) { ToolbarItems.Add(item); } else { ToolbarItems.Remove(item); } } } protected override void OnDisappearing() { base.OnDisappearing(); ToolbarItems.Clear(); CustomToolbar.Clear(); foreach (var item in CustomToolbar) { item.PropertyChanged -= OnToolbarItemPropertyChanged; } }} CustomToolbarItem.cs: public class CustomToolbarItem : ToolbarItem{ public static readonly BindableProperty IsVisibleProperty = BindableProperty.Create(nameof(IsVisible), typeof(bool), typeof(CustomToolbarItem), true); public bool IsVisible { get { return (bool)GetValue(IsVisibleProperty); } set { SetValue(IsVisibleProperty, value); } }} How I use it in XAML: <local:BasePage x:Name="Page" xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:local="clr-namespace:ToolbarExtensionSample;assembly=ToolbarExtensionSample" x:Class="ToolbarExtensionSample.SubPage"> <local:BasePage.CustomToolbar> <local:CustomToolbarItem Name="Alloha" IsVisible="{Binding IsItemVisible, Source={x:Reference Page}}"/> </local:BasePage.CustomToolbar> <Button Text="Click!" Clicked="OnButtonClicked"/> </local:BasePage> My code is working, but I'm wondering whether it is okay for performance. Maybe a better solution exists. Thanks in advance for suggestions for improvements or better approaches. | Changing visibility of ToolbarItem - Xamarin.Forms | c#;xaml;xamarin | null |
_scicomp.19356 | The Fast Marching Method, Fast Iterative Method, and Fast Sweeping Method are three ways of solving the Eikonal Equation on a discrete grid, essentially just a wavefront spreading out from initial points. The idea is that we want to compute the time $T(x, y)$ at which a wavefront, moving across a field with a velocity function $F(x, y)$, reaches each point in a grid, where walls and such would be represented with $F(x, y)=\epsilon$ for some very small $\epsilon$. This is good and these methods work very well in practice; however, they all apply to fixed-size grids. In many applications, it is better to represent the speed field with some sort of tree, say a quadtree or octree. Areas with a larger tree cell representing them would simply then all have the same speed, though some sort of interpolation may even be possible. This can be interpreted by saying that we desire higher resolution results in some areas with a higher tree density, and lower resolution results in some areas with a lower tree density. Either way, is it possible to apply these methods (or methods like them) for solving $T(x, y)$ using a speed field $F(x, y)$ represented with some kind of a tree? Trivially, I know you could simply find the lowest resolution of your tree (i.e. the smallest cell), then transform each larger cell into these smaller cells as the algorithm runs over them, but this seems far less than optimal, and kinda defeats the point of a tree in the first place. Thus, is there a better way to do this? It seems pretty clear that the goal would be to have a $T(x, y)$ field that is also represented in the same kind of tree that $F(x, y)$ is represented in, to give similar density results; however, I don't know if that's the best way to think about it either. More specifically, $T(x, y)$ would probably end up being some kind of form where inner points (if we were trying to recreate the fixed-size discrete grid) are interpolations between the corners of the tree grid, but this is also essentially what it's doing anyway in order to map a discrete grid answer to a continuous one, so I don't feel like that's too big of a concern. Really, this question was just motivated because current implementations of an Eikonal Equation solver are slow, for two reasons: (1) The dependencies each grid cell has on its neighbors' results. Something like the Fast Iterative Method makes them essentially less tied together so some level of parallelization is possible, but I'm wondering if there would be some way to further split up the work into independent grid cells that possibly derive results dependent on their neighbor values which will eventually be computed. This is sort of a side note, but I feel like the tree aspect may be tied into it due to how low density results would have to be smoothly approximated by higher density neighbors. This gets into point two: (2) The current methods compute the same accuracy for all points in the grid. In many situations good accuracy is only needed for specific portions of the map, so it would be nice if it were possible to split up the work into sections of desired high resolution results and sections of desired low resolution results. Doing that using some kind of a tree doesn't seem that straightforward with the current methods of solving this equation, though. | Eikonal Equation solver with different grid densities | nonlinear equations;wave propagation | null |
_cogsci.16529 | In the visual sciences it is known that the oblique effect can be reduced by means of training. The oblique effect is observed when testing subjects psychophysically with a grating acuity task (e.g., the BaGa test). Visual acuity is better when horizontal or vertical gratings are tested than when diagonals are used. The performance in discerning diagonal gratings has been shown to improve after training subjects, although performance in the cardinal directions stays better than for oblique stimuli. The oblique effect has been observed in the tactile sense (the sense of touch) too. However, the number of papers in the scientific literature is quite limited in the tactile modality. I wasn't able to find evidence in the literature as to whether training can improve people's performance in tactile diagonal grating tasks. Can training reduce the oblique effect in the tactile modality, comparable to that observed in visual grating tasks? | Does training affect the tactile oblique effect? | neurobiology;learning;psychophysics;training;touch | null |
_codereview.27702 | I am trying to solve this problem on Sphere Online Judge. I keep getting a timeout error. Any comments or suggestions are welcome and appreciated. package info.danforbes.sphere; import java.io.BufferedReader; import java.io.BufferedWriter; import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.io.InputStreamReader; import java.io.OutputStreamWriter; import java.nio.charset.MalformedInputException; import java.util.AbstractMap.SimpleEntry; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Map.Entry; public class PrimeNumberGenerator { private static BufferedReader inStream; private static BufferedWriter outStream; private static List<Entry<Integer, Integer>> pairs = new ArrayList<Entry<Integer, Integer>>(); private static HashSet<Integer> primes = new HashSet<Integer>(); private static void getPairs(int numCases) throws MalformedInputException { String line; int spaceNdx; int num1, num2; try { for (int pairNdx = 0; pairNdx < numCases; ++pairNdx) { line = inStream.readLine(); spaceNdx = line.indexOf(' '); num1 = Integer.parseInt(line.substring(0, spaceNdx)); num2 = Integer.parseInt(line.substring(spaceNdx + 1)); if (num1 < 1) throw new MalformedInputException(num1); if (num1 > num2) throw new MalformedInputException(num2 - num1); if (num2 > 1000000000) throw new MalformedInputException(num2); if (num2 - num1 > 100000) throw new MalformedInputException(num2 - num1); pairs.add(new SimpleEntry<Integer, Integer>(num1, num2)); } inStream.close(); } catch (IOException e) { System.out.println("IOException encountered!"); e.printStackTrace(); System.exit(-1); } } private static boolean isPrime(int num) { boolean isPrime = true; if (num < 2) isPrime = false; else if (num > 3) { if (num % 2 == 0 || num % 3 == 0) isPrime = false; else if (!primes.contains(num)){ int sqrRoot = (int)Math.sqrt(num); for (int factorNdx = 1; 6 * factorNdx - 1 <= sqrRoot; ++factorNdx) { if (num % (6 * factorNdx + 1) == 0 || num % (6 * factorNdx - 1) == 0) { isPrime = false; break; } } } } if (isPrime) primes.add(num); return isPrime; } public static void main(String[] args) { if (args.length == 0) { inStream = new BufferedReader(new InputStreamReader(System.in)); outStream = new BufferedWriter(new OutputStreamWriter(System.out)); } StringBuilder resultBuilder = new StringBuilder(); try { if (inStream == null && outStream == null) { inStream = new BufferedReader(new FileReader(args[0])); outStream = new BufferedWriter(new FileWriter(args[1])); } int numCases = Integer.parseInt(inStream.readLine()); if (numCases > 10) throw new MalformedInputException(numCases); getPairs(numCases); int beg, end; for (Entry<Integer, Integer> anEntry : pairs) { beg = anEntry.getKey(); end = anEntry.getValue(); for (int rangeNdx = beg; rangeNdx <= end; ++rangeNdx) { if (isPrime(rangeNdx)) resultBuilder.append(Integer.toString(rangeNdx) + "\n"); } resultBuilder.append("\n"); } resultBuilder.delete(resultBuilder.length() - 2, resultBuilder.length()); outStream.write(resultBuilder.toString()); outStream.close(); } catch (IOException e) { System.out.println("IOException encountered!"); e.printStackTrace(); System.exit(-1); } }} | Sphere Online Judge, Problem 2: Prime Number Generator | java;optimization;primes | null |
_vi.11296 | I have created some indentation commands like set shiftwidth=4, set autoindent and so on... in my .vimrc file in my home folder, and I'm able to get new files auto-indented happily. What I want to know is if there is some script or way to re-indent an existing file according to a particular indentation style (say, the indentation settings written by me in .vimrc, or a default indentation standard for a particular file extension that Vim is intelligent enough to apply). The existing file has no consistent indentation. Hope the question is clear. | indent an existing unindented file as per some indentation commands supposedly written for autoindenting new files | vimscript;indentation | null |
_webapps.48577 | According to Google's help pages, one can embed pictures from Google Photos onto an external web site... but, although it says there's a Link to this album on the right-hand side, with an Embed Slideshow, I can't find it anywhere. There's nothing about embedding under Share, or anywhere else. How do I embed an album of photos uploaded to Google on my web site? | How do I embed a Google Photos album into a web site? | google plus;google plus photos | Proposed by an anonymous user, as a suggested edit to the accepted answer: Setting up Albums and generating the code to embed them in a web site is made unnecessarily complicated by Google. They really should make this more intuitive. Below are detailed instructions for how I learned (the hard way) to do it. CREATE A NEW ALBUM IN GOOGLE+ PHOTOS: In Google, select the icon at the upper right that looks like a tic-tac-toe board (or waffle iron), and select the red Google+ icon (you may have to sign in). Open the drop list on the upper left that says Home, and select Photos. Select All Photos from the top. To see any existing Albums, select More at the top, and then Albums. If no photos are loaded, you will need to load some before you can create Albums. Go back to All Photos. Upload some photos by selecting Upload Photos, and following the directions. When some photos are uploaded, go back to All Photos to see them. Click on an individual photo, and a bigger view of that photo will pop up. Move the mouse to hover over the big photo, and a circled check mark will appear in the upper left corner. Click on the circled check mark, and options will appear in a blue strip across the top. Select Copy. A pop up will appear with the existing Albums, or you can enter the name of a new Album to create one. When done, hit Copy at the bottom. EMBED AN ALBUM IN THE WEB SITE (Weebly as an example): To create the links in the site requires the set up of Albums first in Google+ Photos (see instructions above for that), and then you have to go to the site called https://picasaweb.google.com to get the embedding code. Go to the picasaweb site. Any Albums that have been set up should show up under the Home tab. Under the thumbnail for each Album are three lines: the Album name, a date, and the number of photos. Just to the left of the number of photos is a little icon that is either a padlock or a roundish gray-and-white icon that is hard to describe. If it is a padlock, it means that the Album can not be seen with a link, and you can't generate embedding code for it, so it must be changed. To change it for linking, click on the Album, select Actions, and select Album Properties. In the Edit Album Information pop up, open the Visibility drop list at the bottom, and pick Limited, Anyone With The Link, then Save Changes at the bottom. Then select the Home tab, and make sure the icon under the Album has changed. If it is the right icon, then click on the Album, and on the right side, click on the blue text that says Link To This Album. Then click on the blue text on the lower right that says Embed Slideshow. A pop up will open; select the slideshow size that is desired (large is the one I pick). Then copy the code in the yellow box, and paste that into a Weebly Custom HTML code box. Hit Done on the pop up. It is also a good idea to then copy the link address that is on the right under Paste Link In Email or IM, and then paste that in a regular Weebly text box. After putting it in the text box, you will have to highlight that text, and select the option Weebly gives to make it a hot link. The reason why it is a good idea to paste this link in along with the embedding is that the embedded slideshow may not show up on some devices (mostly Apple devices), and the link gives those users an alternative way to get to the photos. To size the embedded slideshow, edit the code to include height and width (e.g. 500 and 700). Then publish the Weebly site, and check that it all works. |
_unix.358857 | We are in the process of moving from other source/version control methods to git and, because I have no actual experience with git (short of setting some user.* variables), I'd like to ask whether this is a viable direction to take before committing myself down this road. The solution in "Is it possible to set the user's .gitconfig (for git config --global) dynamically?" came close for me, but it did not address a situation I discovered using shared service accounts (and which may exist for root, too). I found that User1 would connect and /home/serviceaccount/.gitconfig would get set, then User2 would connect and overwrite that: an execution of git config --global user.name in either session would return User2's details, suggesting the file is referenced at each call. Because I don't do root, I don't know if this problem exists for two users who sudo to root following @oXiVanisher's solution. To make this dynamic for shared service accounts, a wrapper script rolls in the appropriate .gitconfig based on the user executing it. The core of it is: #!/bin/sh myuser=`who -m | awk '{ print $1 }'` HOST=`hostname` # atomic locking over NFS via https://unix.stackexchange.com/a/22062 LOCKFILE=/local/share/bin/.git.lock TMPFILE=${LOCKFILE}.$$ echo "$myuser @ $HOST" > $TMPFILE if ln $TMPFILE $LOCKFILE 2>&-; then : else echo "$LOCKFILE detected" echo "Script in use by $(<$LOCKFILE)" /bin/rm -f $TMPFILE exit fi trap "/bin/rm -f ${TMPFILE} ${LOCKFILE}" 0 1 2 3 15 # find my gitconfig CFGFILE=/local/share/DOTfiles/DOTgitconfig.$myuser if [ ! -s $CFGFILE ]; then echo "No personal /local/share/DOTfiles/DOTgitconfig found." exit fi # roll it in cp $CFGFILE $HOME/.gitconfig # execute git /usr/bin/git $@ # roll it back in case of changes cp $HOME/.gitconfig $CFGFILE # zero it out cat > $HOME/.gitconfig << ! # This file intentionally blank for dynamic use # The wrapper script is /local/share/bin/git ! When two users are connected to the shared service account, git config --global user.name reports the proper name for each user. At first blush, this looks like it could make git dynamic for all users sharing one account where environment variables can't be found. But how am I breaking things? What am I not seeing yet? Thank you. | Dynamic user config for git with wrapper script? | shell script;sudo;git | It seems like your solution would have race conditions (what happens during multiple simultaneous invocations of git?) as well as other problems (such as incorrect use of $* instead of "$@"). Also, why don't you just set $GIT_CONFIG in each user's environment to a different file? |
_softwareengineering.216574 | I don't quite follow how it works. According to the MSDN article there is a big hierarchy of keys protecting other keys and passwords. At some point the database is encrypted. You query the database, which is encrypted, and it works seamlessly. If you're able to simply connect to the database as normal and not have to worry about any of the encryption from a developer point of view, how exactly is it secure? Surely anyone can simply connect and do select * from x and the data is revealed. Sorry my question is a bit scattered, I am just very confused by the article. | Could someone help me understand SQL TDE Database encryption? | sql server;encryption | You are right, the point of database encryption is not to protect data from the users of the database - that is the task of role-based access and privilege levels. Encryption protects you against someone physically stealing the server from its rack, ripping out the hard disk and then reading the confidential data from the file system. It's a bit more complicated than that - obviously you can't just keep the decryption key lying around, in particular not on the same disk where you store the encrypted DB, otherwise the thief could just decrypt the data - but done right, it can add a functional layer of data security, and security is all about defense in depth. |
_unix.45732 | I'm a careless terminal driver, scared of accidentally deleting files, hence using some aliases like alias rm='rm -i' for rm, mv, cp. How can I get a similar confirmation behavior for file redirections (e.g. echo I'm silly > very_important_file.txt)? The common case is that I usually use replace (>) instead of append (>>), and so I ended up accidentally deleting some mid-important files. What are your suggestions? | ask for confirmation when file is replaced using a redirection | bash;io redirection;alias | I don't think there's a way to get the exact behavior of -i, but I have noclobber set, which prevents overwriting already existing files. See this page for a usage example. You can try out the command like this (and if you like it, include it in your startup file): $ set -o noclobber Example: $ ls > ls.out $ set -o noclobber $ ls > ls.out bash: ls.out: cannot overwrite existing file $ Update: As @jsbillings mentions in a helpful comment below, to override noclobber in bash one can use >|. Since I primarily use tcsh (a csh variant), the override operator is >!
_softwareengineering.198404 | I've been catching up with the modern client-side JS ecosystem and reading up on CommonJS and AMD (incl. associated tools - browserify, requirejs, onejs, jam, dozens of others). If I'm writing a JavaScript library, how do I modularize/package it such that it can be most broadly accessible (ideally by users who swear by CommonJS, AMD, and especially neither)? Popular libraries like jQuery seem to just use old-school file concatenation for building themselves and dynamically detect whether they should write to an exports object or the global context. I'm currently doing the same thing, but the main downside is that if I (unlike jQuery) depend on a few libraries, it's nice to not have to ask users to manually pre-include the transitive set. (Though I currently just have two dependencies.) And of course there's global namespace pollution. Or perhaps it's cleanest to generate multiple versions of my library, one for each context? I'm also wondering about packaging and publishing. There are several systems, but I believe the major one is bower, which is easy to deal with since all it does is fetch. However, I'm wondering if I should also be targeting other package systems like component (which requires CommonJS). Are there other relevant aspects I should be aware of? Are there any good example projects to follow for all of this? | How to modularize and package a client-side Javascript library today? | javascript;modules;packages | null |
_webmaster.25641 | It is probably a stupid question, but I have real trouble figuring out how to redirect /products to /products/item. It is a simple redirect, no regex needed. The toplevel site /products should just always redirect to /products/item. I tried: <IfModule mod_rewrite.c> RedirectMatch 301 /products(.*) /products/item/$1 </IfModule> I receive an error that the webserver is redirecting in an infinite loop. That might be because of the rest of the htaccess file, which looks like this: # Custom Rules <IfModule mod_rewrite.c> RedirectMatch 301 /products(.*) /products/item/$1 </IfModule> # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress But the strangest thing: when I load the page, I get the error. When I then hit reload, it works??? Thanks for your help! ole | htaccess 301 redirect subdirectory one level deeper - why is this not working? | htaccess;wordpress;301 redirect | If you just want to redirect /products to /products/item, this should work: RedirectMatch 301 ^/products/?$ /products/item The ^ and $ characters anchor the regular expression to the beginning and end of the path, so that it won't match /foo/products or /products/bar. The /? allows it to match both /products and /products/; you can remove it if you don't want that. |
_unix.242298 | Respectable projects release tar archives that contain a single directory; for instance zyrgus-3.18.tar.gz contains a zyrgus-3.18 folder which in turn contains src, build, dist, etc. But some punk projects put everything at the root :'-( This results in a total mess when unarchiving. Creating a folder manually every time is a pain, and unnecessary most of the time. Is there a super-fast way to tell whether a .tar or .tar.gz file contains more than a single directory at its root? Even for a big archive. Or even better, is there a tool that in such cases would create a directory (name of the archive without the extension) and put everything inside? | How to untar safely, without polluting the current directory in case of a tarbomb? | tar | patool handles different kinds of archives and creates a subdirectory in case the archive contains multiple files, to prevent cluttering the working directory with the extracted files. Extract an archive: patool extract archive.tar To obtain a list of the supported formats, use patool formats.
_vi.3534 | I'm working on large files for Wikisource, along with another guy. I generate a big text file (a few MB) which I format a bit using scripts, then I upload the text to a temporary page, where the other guy does some more formatting.

My friend's computer and internet connection are not the best on the market, and he asked me not to upload more than 400Kb of data, so his computer can handle all the text.

The file is formatted like this:

==header 1==
...
==header 2==
...

and has a few hundred headers.

What I need to do: I need to yank at most 400Kb (as close as I can to this number), but I also need to yank all the text between headers - I can't have the text between headers split over a few files.

I can find the next header using \^={2}[^=]+={2}$, but I don't know how to carry on yanking until I have ~400Kb of data.

Any ideas? | yanking a certain amount of bytes | cut copy paste | Ok, this is somewhat of a quick hack, let's hope it works:

function! <SID>Chunk()
  " find first header
  let l_start = search('\m^==[^=]\+==$', 'cW')
  if !l_start
    beep
    execute "normal \<Esc>"
    return
  endif
  " translate to byte position
  let b_start = line2byte(l_start)
  if b_start < 0
    beep
    execute "normal \<Esc>"
    return
  endif
  " start marking
  execute 'normal V'
  " move 400 KB down
  execute 'goto ' . (b_start + 400 * 1024)
  if line('.') == line('$')
    return
  endif
  " find previous header
  let l_end = search('\m^==[^=]\+==$', 'bcsW', l_start)
  if l_end <= l_start || l_end <= 0
    " not found; go to beginning of line
    execute 'normal 0'
    " search next header
    let l_end = search('\m^==[^=]\+==$', 'sW')
    if l_end <= 0
      " not found; go to end of file
      execute 'normal G$'
      return
    endif
  endif
  " move up
  exec 'normal k$'
  return
endfunction

nnoremap <silent> <leader>H :call <SID>Chunk()<CR>

It marks the next ~400 KB region of full headers. After yanking it you can go to the end of the marked region with '>.
_unix.91937 | I just switched to a Macbook Air. I installed zsh using homebrew, but when I use some of the code that I (originally) had in my .zshrc, I get an error saying that .dircolors was not found.

Below is the code in question:

zstyle ':completion:*' auto-description 'specify: %d'
zstyle ':completion:*' completer _expand _complete _correct _approximate
zstyle ':completion:*' format 'Completing %d'
zstyle ':completion:*' group-name ''
zstyle ':completion:*' menu select=2
eval $(dircolors -b)
zstyle ':completion:*:default' list-colors ${(s.:.)LS_COLORS}
zstyle ':completion:*' list-colors ''
zstyle ':completion:*' list-prompt "%SAt %p: Hit TAB for more, or the character to insert%s"
zstyle ':completion:*' matcher-list '' 'm:{a-z}={A-Z}' 'm:{a-zA-Z}={A-Za-z}' 'r:|[._-]=* r:|=* l:|=*'
zstyle ':completion:*' menu select=long
zstyle ':completion:*' select-prompt "%SScrolling active: current selection at %p%s"
zstyle ':completion:*' use-compctl false
zstyle ':completion:*' verbose true
zstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#)*=0=01;31'
zstyle ':completion:*:kill:*' command 'ps -u $USER -o pid,%cpu,tty,cputime,cmd'

Is dircolors not shipped with Mac OS X? How should I install it?

Update: If I run dircolors directly on the shell I get:

bash: dircolors: command not found | Mac OS X: dircolors not found? | shell;osx;coreutils | The command dircolors is specific to GNU coreutils, so you'll find it on non-embedded Linux and on Cygwin but not on other unix systems such as OSX. The generated settings in your .zshrc aren't portable to OSX.

Since you're using the default colors, you can pass an empty string to the list-colors style to get colors in file completions. For colors with the actual ls command, set the CLICOLOR environment variable on OSX, and also set LSCOLORS (see the manual for the format) if you want to change the colors.

if whence dircolors >/dev/null; then
  eval $(dircolors -b)
  zstyle ':completion:*:default' list-colors ${(s.:.)LS_COLORS}
  alias ls='ls --color'
else
  export CLICOLOR=1
  zstyle ':completion:*:default' list-colors ''
fi

If you wanted to set non-default colors (dircolors with a file argument), my recommendation would be to hard-code the output of dircolors -b ~/.dircolors in your .zshrc and use these settings for both zsh and GNU ls.

LS_COLORS=
zstyle ':completion:*:default' list-colors ${(s.:.)LS_COLORS}
if whence dircolors >/dev/null; then
  export LS_COLORS
  alias ls='ls --color'
else
  export CLICOLOR=1
  LSCOLORS=
fi
_webapps.90390 | We use Google Forms to send out tests to our students. We have a spreadsheet with rosters where we keep the info of the students (email, birthday, gender, guardian etc.) for every class in our school. One roster per sheet. After we finalize and approve a form, we immediately link the form to a Responses spreadsheet.

We wonder whether we could (from the existing roster spreadsheet) automate the following workflow:

Create and fill in, with one of the email addresses, a single-option checkbox question. This could be the first question of the form.
Generate a pre-filled link of the form with the above-mentioned email address from the roster.
Append the generated pre-filled link to the Responses spreadsheet.
Rotate the process through all emails in the roster so we end up with a column of the links.

Related:
How can we NOT record, but on the fly disregard responses not matching an already existing roster
Show URL used to edit responses from a Google Form in a Google Spreadsheet by using a script
Can I auto-fill an answer on a form edit URL? | How can one auto create pre-filled URLs with registered addresses from a roster | google spreadsheets;google forms;automation | Short answer

Yes, it's possible to automate the workflow, but with slight changes. This answer assumes that the OP is using a consumer account instead of a Google Apps for Work or Google Apps for Education account.

Explanation

First step

"Create and fill in, with one of the email addresses, a single-option checkbox question. This could be the first question of the form."

REMARK: If you want to use a unique option to prevent the student from changing it, then you will need to create a new form for each student. To automate this you will need to use Google Apps Script or an add-on. This was left out of this answer as it would make it too long for the Q&A model of this site.

Checkboxes are used to allow multiple-option selection; radio buttons are used to allow selecting only one option, but both will show all the email addresses and this could take a lot of space. Instead of using a checkbox/radio button, consider using a dropdown list, as the list of all email addresses will be displayed only when the user clicks the dropdown button.

If the list has hundreds of email addresses, instead of a dropdown use a text question to make the form load faster.

Second step

"Generate a pre-filled link of the form with the above-mentioned email address from the roster."

Pre-filled links use URL parameters with a special ID for each question. To keep things simple, get the link from the form, at least to obtain one URL to be used as a template.

To get the link to be used as a template, follow the instructions of Prepopulate form answers - Google Docs editors Help.

Third step

"Append the generated pre-filled link to the Responses spreadsheet."

Assuming that the OP is referring to the roster spreadsheet, add a column to hold the pre-filled URL for each student.

For a form with two questions, one for the email address and pre-filled, the pre-filled URL will look like the following:

https://docs.google.com/forms/d/form_id/viewform?entry.29450426=[email protected]

The form id will look like the following:

1awKpg_diniayS6360kNXrcgihk36azQ3DJEaZqXDY7A

The pre-filled email field will look like the following:

entry.29450426=[email protected]

The part between "entry." and "=" could be different for each form, and obviously the email could be anyone's, as in this step the purpose is to find the URL to be used as a template.

Fourth step

"Rotate the process through all emails in the roster so we end up with a column of the links."

REMARK: In automation terminology, instead of "rotation" the term used is "iteration"; this kind of task is also referred to as doing a "loop" or "looping through"...

There are several ways to automate this. You could use formulas, Google Apps Script, and add-ons.

Using a formula

In this step, the pre-filled URL corresponding to each student will be generated.

In the roster spreadsheet, assuming that the first row holds the column headers and column B holds the student email addresses, in the new column add the following formula in the cell in the second row:

="https://docs.google.com/forms/d/form_id/viewform?entry.29450426="&B2

Then fill down as necessary.

REMARKS: The following formula will fill the required cells, but it requires that there isn't any data below the students' data range.

=FILTER( "https://docs.google.com/forms/d/form_id/viewform?entry.29450426="&B2:B, LEN(B2:B) )

Using add-ons

I don't think that there is an add-on that implements the whole workflow, so it's very likely that it could be necessary to use several, or to mix formulas, scripts and add-ons.

One of the add-ons that could be helpful is formMule. It's a very popular add-on among teachers that could be used to send the pre-filled form URL that corresponds to each student.

formMule - Google+ Community
formMule - Add-on

Using Google Apps Script (GAS)

As this could require a very long explanation for someone that doesn't know the basics, it is excluded from this answer.
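Since the answer leaves the Apps Script route unexplained, a minimal sketch of what it could look like; the sheet name 'Roster', the column positions, and the entry ID are assumptions carried over from the examples above.

// Google Apps Script: write a pre-filled URL next to every roster email.
function buildPrefilledLinks() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Roster'); // assumed sheet name
  var base = 'https://docs.google.com/forms/d/form_id/viewform?entry.29450426=';
  var lastRow = sheet.getLastRow();
  var emails = sheet.getRange(2, 2, lastRow - 1, 1).getValues();  // column B, header row skipped
  var links = emails.map(function (row) {
    return [row[0] ? base + encodeURIComponent(row[0]) : ''];
  });
  sheet.getRange(2, 3, links.length, 1).setValues(links);         // written into column C
}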
_codereview.154555 | I have a problem with my interface design. I especially ask for help with the overloaded compute method.

For a calculator I work with these interfaces:

public interface ICalculator{
    public double compute(String expression) throws IllegalArgumentException;
}

public interface IParameterizedCalculator extends ICalculator {
    public double setParameter(String parameter, double value) throws IllegalArgumentException;
    public double getParameter(String parameter) throws IllegalArgumentException;

    // IllegalArgumentException when expression is null or malformed
    // IllegalStateException when a parameter in the expression was not set prior to this method
    double compute(String expression) throws IllegalArgumentException, IllegalStateException;
}

public interface IPersistentCalculator extends ICalculator {
    // IllegalStateException when no expression is stored
    double compute() throws IllegalStateException;
    void store(String expression) throws IllegalArgumentException;
}

IParameterizedCalculator adds an exception to compute. As the previously stateless ICalculator suddenly became stateful, I see no way other than to add this exception.

IPersistentCalculator doesn't modify the original compute(String), but overloads it with compute(), which also has a state and therefore throws an IllegalStateException (but not necessarily an IllegalArgumentException, as the only way to store an expression is through the store(String) method, and I don't care if someone hacks my code via reflection or a debugger). However, this IllegalStateException has a different reason (another state, if you want).

I have three questions now:

How do I properly extend the throws clause to indicate that my derived methods will throw more than their parent?
How do I communicate this in a class MyCalculator implements IPersistentCalculator, IParameterizedCalculator (which only exposes a single compute(String) method)?
How do I communicate that the MyCalculator.compute() method may throw IllegalStateException when a parameterized expression has been stored? | Multiple Interfaces modifying contract | java;inheritance;interface | I think if you continue down this path you violate the interface segregation principle. Try to separate the interfaces. Going with your semantics, I expect interfaces like ParameterAware or Storable. Then you won't get into such semantic trouble.

As you have fewer semantic problems with your interfaces, you will face other challenges that have to do with the implementing classes and your algorithms using and working with these segregated interfaces. But you cannot escape the effort, and it is justified.
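A sketch of the segregation this answer hints at; the interface names ParameterAware and Storable come from the answer, while the exact method split is an assumption:

// Each capability in its own interface; compute(String) keeps its original contract.
public interface Calculator {
    double compute(String expression) throws IllegalArgumentException;
}

public interface ParameterAware {
    void setParameter(String parameter, double value);
    double getParameter(String parameter) throws IllegalArgumentException;
}

public interface Storable {
    void store(String expression) throws IllegalArgumentException;
    double computeStored() throws IllegalStateException; // recompute the stored expression
}

// An implementation opts into exactly the capabilities it supports:
// public class MyCalculator implements Calculator, ParameterAware, Storable { ... }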
_unix.276370 | Could someone explain to me why this script does not delete the /var/log/messages and /var/log/wtmp files? I found it in a tldp.org tutorial.

#!/bin/bash
LOG_DIR=/var/log
cd $LOG_DIR
cat /dev/null > messages
cat /dev/null > wtmp
echo "Logs cleaned up"
exit

After executing it, I checked the /var/log directory, and messages and wtmp are still there with the old logs.

Why is that happening? | Script to clean log files does not delete them | linux;scripting | null
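Although this question has no accepted answer, one detail is worth illustrating: cat /dev/null > file truncates the file in place rather than deleting it, and the redirection needs write permission on the file (usually root for /var/log); if the script is run without that permission, the old contents remain. Whether that is the cause in this particular case is an assumption. A quick sketch of the distinction:

# Truncate: the file stays in place, its contents are emptied.
cat /dev/null > /var/log/messages
: > /var/log/messages      # shorter idiom with the same effect

# Delete: the directory entry itself is removed.
rm /var/log/messages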
_cstheory.29088 | I am pretty sure this problem has a name, or it can be reworded so it does. We are given a set $X$ and a family $\mathcal F$ of subsets of $X$, the problem is to find a subset $B$ such that $B\cap A\neq \emptyset$ for all $A\in \mathcal F$. | Does this problem have a name? Finding a subset intersecting all subsets of a family | co.combinatorics | null |
_cstheory.22118 | To show the NP-hardness of a problem, one needs to choose a known NP-hard problem and find a polynomial reduction from the known problem to the problem at hand. Theoretically, any NP-hard problem can be used for the reduction, but in practice some problems are more easily reduced from than others.

For instance, 3-SAT is usually a better choice for constructing a reduction than SAT, because the former is more restricted than the latter; 3-partition is usually an easier choice than bin packing; ...

One way to find such good problems for reductions is to do a statistical analysis over the existing reductions. E.g., one can collect all the pairs of "from -> to" reductions in the book Computers and Intractability: A Guide to the Theory of NP-Completeness (or other resources) and draw a histogram of the problems in the "from" set. Then we can find out which problems are most commonly used for reductions.

I wonder if such a statistical analysis makes sense at all. Has such research already been conducted? If not, what is your guess about the most commonly used problems for reductions?

The reason I am asking this question is that I have already done a few NP-hardness proofs, but almost all of them rely on a reduction from the same problem (3-partition). I am looking for other options to use in my proofs. | NP-hardness proof: looking for some good restricted NP-hard problems | np hardness;reductions | null
_unix.35733 | I need to be able to take a number of files on one machine and generate a .iso image to burn to a multisession CD-R that has already had a session written to it. I can easily create an image for the first session, but I cannot find a way to do so for the second or later sessions. Currently, the primary means I have of access to the remote system is via ssh. I have root on both systems in question. | Create multisession ISO image to burn on a remote CD | data cd;burning | null |
_unix.118302 | Am I correct that chunk size in the context of RAID is essentially the same thing as a cluster in the file-system context? In other words, is the chunk size the smallest unit of data which can be written to a member of a RAID array?

For example, if I have a chunk size of 64KiB and I need to write a 4KiB file, and the cluster size of the file system is also 4KiB, is it then true that I will use one 64KiB chunk and basically waste 60KiB? | understanding the chunk size in context of RAID | filesystems;partition;raid;mdadm;software raid | null
_cogsci.48 | Possible Duplicate: Any work being done on Perception, Action, and/or Cognition in Video games?

Are there any research studies that show this? See a thread on Quora for some initial discussion, but few research articles so far.

For decades, a different game, chess, has held the exalted position of the drosophila of cognitive science - the model organism that scientists could poke and prod to learn what makes experts better than the rest of us. StarCraft 2, however, might be emerging as the rhesus macaque: its added complexity may confound researchers initially, but the answers could ultimately be more telling. "I can't think of a cognitive process that's not involved in StarCraft," says Mark Blair, a cognitive scientist at Simon Fraser University. "It's working memory. It's decision making. It involves very precise motor skills. Everything is important and everything needs to work together."

Blair, the Simon Fraser University scientist running the SkillCraft project, asked gamers at all ability levels to submit their replay files. He and his colleagues collected more than 4500 files, of which at least 3500 turned out to be usable. "What we've got is a satellite view of expertise that no one was able to get before," he says. "We have hundreds of players at the basic levels, then hundreds more at a level slightly better, and so on, in 8 different categories of players." By comparing the techniques and attributes of low-level players with other gamers up the chain of ability, they can start to discern how skills develop - and perhaps, over the long run, identify the most efficient training regimen.

Both Blair and Lewis see parallels between the game and emergency management systems. In a high-stress crisis situation, the people in charge of coordinating a response may find themselves facing competing demands. Alarms might be alerting them to a fire burning in one part of town, a riot breaking out a few streets over, and the contamination of drinking water elsewhere. The mental task of keeping cool and distributing attention among equally urgent activities might closely resemble the core challenge of StarCraft 2. "For emergencies, you don't get to train eight hours a day. You get two emergencies in your life, but you better be good, because lives are at stake," Blair says. "Training in something like StarCraft could be really useful." | Do expert computer gamers have unusual physiological or mental characteristics? | neurobiology;performance;video games;expertise | null
_softwareengineering.42941 | I love that writing Python, Ruby or JavaScript requires so little boilerplate. I love simple functional constructs. I love the clean and simple syntax.

However, there are three things I'm really bad at when developing large software in a dynamic language:

Navigating the code
Identifying the interfaces of the objects I'm using
Refactoring efficiently

I have been trying simple editors (e.g. Vim) as well as an IDE (Eclipse + PyDev), but in both cases I feel like I have to commit a lot more to memory and/or constantly grep and read through the code to identify the interfaces. This is especially true when working with a large codebase with multiple dependencies.

As for refactoring, for example changing method names, it is hugely dependent on the quality of my unit tests. And if I try to isolate my unit tests by cutting them off from the rest of the application, then there is no guarantee that my stub's interface stays up to date with the object I'm stubbing.

I'm sure there are workarounds for these problems. How do you work efficiently in Python, Ruby or JavaScript? | How do you navigate and refactor code written in a dynamic language? | ide;refactoring;dynamic typing | null
_webapps.6986 | If you click on the "Save as my template" link at http://jsbin.com/, it will save the current page as my template and display it when I come back to the page later.

How do I undo that so that I get a clean slate? | Clear the template in jsBin | js bin | It may sound stupid, but removing the content in both windows, left and right, and saving that as a template creates a blank slate.

If you want to have the original template back, here's the data:

JS

if (document.getElementById('hello')) {
  document.getElementById('hello').innerHTML = 'Hello World - this was inserted using JavaScript';
}

HTML

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>JS Bin</title>
<!--[if IE]>
  <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<style>
  article, aside, figure, footer, header, hgroup,
  menu, nav, section { display: block; }
</style>
</head>
<body>
  <p id="hello">Hello World</p>
</body>
</html>

Clearing your cookies for this site will also bring back the original template.
_unix.134341 | I have a PHP script running which makes symlinks. To confirm which user it runs as:

file_put_contents("testFile", "test");
$user = fileowner("testFile");
unlink("testFile");
echo "running as user '" . $user . "'";
var_dump( exec('whoami'));

Running it like this...

$ php script.php

...runs correctly, all symlinks are made, and the output is:

running as user '999'
string(5) "admin"

Running it through a shell script:

#!/bin/sh
php /path/to/script.php

gives the following output and doesn't work:

PHP Warning:  symlink(): Permission denied in /path/to/script.php on line 8

running as user '999'
string(5) "admin"

I'm not sure what the difference between the two is, as the users they are running as are identical.

Any suggestions on how to make them both have the correct permissions for symlinking?

cat /proc/version gives:

Linux version 2.6.39 (root@cross-builder) (gcc version 4.6.3 (x86 32-bit toolchain - ASUSTOR Inc.) ) #1 SMP PREEMPT Thu Oct 31 21:27:37 CST 2013

That's the only output I can generate for any sort of release information.

All of the code:

$files = scandir('/volume1/dir1');
$base = "/volume1/";

foreach($files as $file) {
    $f_letter = $file{0};
    $new_path = $base . "ByLetter/" . strtoupper($f_letter) . "/" . $file;
    if(ctype_alpha($f_letter) && !is_link($new_path)) {
        var_dump($base . "TV/" . $file);
        var_dump($new_path);
        symlink($base . "TV/" . $file, $new_path);
    }
}

This gives the same output for the var_dumps with both methods. | File permissions | permissions;php;symlink | null
_unix.147549 | I downloaded the .deb file from the Steam website, but couldn't install it as it complained about having and outdated version of libc6. I look around and apparently I have to add jessie sources to my sources.list to make it work, and so I do. The package installs but itself is a front to install steam's dependencies. I run it, and instead of simply installing the three dependencies described (libgl1-mesa-dri:i386, libgl1-mesa-glx:i386 and libc6:i386), it instead wants to do all this:The following packages were automatically installed and are no longer required: blt cups-daemon cups-server-common dconf-cli empathy-common geoclue-2.0 gir1.2-gck-1 gir1.2-gst-plugins-base-1.0 gir1.2-gstreamer-1.0 gir1.2-ibus-1.0 gir1.2-javascriptcoregtk-3.0 gir1.2-notify-0.7 gnome-panel-data gnome-session-common gnome-themes-standard-data gstreamer1.0-nice gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-pulseaudio hp-ppd libarchive13 libasound2-dev libassuan0 libavahi-client-dev libavahi-common-dev libavcodec55 libavformat55 libbz2-1.0:i386 libcaca-dev libcamel-1.2-49 libcaribou-common libchromaprint0 libdb5.3:i386 libdbus-1-dev libdirectfb-dev libdirectfb-extra libdrm-dev libdrm-nouveau1a libebackend-1.2-7 libebook-1.2-14 libebook-contacts-1.2-0 libecal-1.2-16 libedata-book-1.2-20 libedata-cal-1.2-23 libedataserver-1.2-18 libfluidsynth1 libfontembed1 libfreetype6-dev libgadu3 libgcr-base-3-1 libgd3 libgexiv2-2 libgl1-mesa-dev libglu1-mesa-dev libgphoto2-6 libgphoto2-port10 libgrilo-0.2-1 libgstreamer-plugins-bad1.0-0 libgstreamer-plugins-base1.0-0 libgstreamer1.0-0 libibus-1.0-5 libical1 libinput0 libjpeg8-dev libjson0 liblzma5:i386 libmjpegutils-2.1-0 libmozjs-24-0 libmpdec2 libmpdec2:i386 libmpeg2encpp-2.1-0 libmpg123-0 libmplex2-2.1-0 libmx-common libncursesw5:i386 libnm-gtk-common libopencv-core2.4 libopencv-flann2.4 libopencv-imgproc2.4 libopencv-ml2.4 libopencv-video2.4 libpackagekit-glib2-16 libpcre3-dev libpcrecpp0 libpng12-dev libpthread-stubs0 libpthread-stubs0-dev libpython3-stdlib libpython3-stdlib:i386 libpython3.4-minimal libpython3.4-minimal:i386 libpython3.4-stdlib libpython3.4-stdlib:i386 libqpdf13 libreadline6:i386 librtmp1 libsbc1 libslang2-dev libsqlite3-0:i386 libsrtp0 libssl1.0.0:i386 libtbb2 libtotem-plparser18 libtracker-sparql-1.0-0 libts-dev libwayland-cursor0 libwebp5 libx11-dev libx11-doc libx11-xcb-dev libx264-142 libxau-dev libxcb-dri2-0-dev libxcb-dri3-dev libxcb-glx0-dev libxcb-present-dev libxcb-randr0 libxcb-randr0-dev libxcb-render0-dev libxcb-shape0-dev libxcb-sync-dev libxcb-xfixes0-dev libxcb1-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxkbcommon0 libxshmfence-dev libxxf86vm-dev mesa-common-dev pkg-config python-aptdaemon python-defer python-pkg-resources python3.4:i386 python3.4-minimal:i386 x11proto-core-dev x11proto-damage-dev x11proto-dri2-dev x11proto-fixes-dev x11proto-gl-dev x11proto-input-dev x11proto-kb-dev x11proto-xext-dev x11proto-xf86vidmode-dev xorg-sgml-doctools xtrans-dev zlib1g-devUse 'apt-get autoremove' to remove them.The following extra packages will be installed: bzip2 colord colord-data cups-bsd cups-client cups-common cups-daemon cups-ppdc cups-server-common cupsddk dbus-x11 dconf-cli dconf-gsettings-backend dconf-service empathy-common evince-common evolution-data-server-common folks-common fontconfig-config gcc-4.9-base gcc-4.9-base:i386 geoclue-2.0 gir1.2-atspi-2.0 gir1.2-freedesktop gir1.2-gck-1 gir1.2-gdesktopenums-3.0 gir1.2-glib-2.0 gir1.2-gst-plugins-base-1.0 gir1.2-gstreamer-1.0 
gir1.2-ibus-1.0 gir1.2-javascriptcoregtk-3.0 gir1.2-notify-0.7 gir1.2-soup-2.4 gir1.2-telepathylogger-0.2 gir1.2-upowerglib-1.0 glib-networking glib-networking-common glib-networking-services gnome-desktop3-data gnome-packagekit-data gnome-panel-data gnome-session-common gnome-shell-common gnome-themes-standard-data gsettings-desktop-schemas gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-pulseaudio gvfs gvfs-bin gvfs-common gvfs-daemons gvfs-libs init-system-helpers libarchive13 libassuan0 libatk1.0-0 libatk1.0-data libatspi2.0-0 libaudit-common libaudit1 libavcodec55 libavformat55 libavutil53 libburn4 libbz2-1.0 libc6:i386 libc6-i686:i386 libcairo-perl libcairo2 libcamel-1.2-49 libcaribou-common libcolord2 libcolorhug2 libcups2 libcupscgi1 libcupsfilters1 libcupsimage2 libcupsmime1 libcupsppdc1 libdb5.3 libdbus-glib-1-2 libdconf1 libdjvulibre-text libdjvulibre21 libdrm-dev libdrm-intel1 libdrm-intel1:i386 libdrm-nouveau2 libdrm-nouveau2:i386 libdrm-radeon1 libdrm-radeon1:i386 libdrm2 libdrm2:i386 libebackend-1.2-7 libebook-1.2-14 libebook-contacts-1.2-0 libecal-1.2-16 libedata-book-1.2-20 libedata-cal-1.2-23 libedataserver-1.2-18 libegl1-mesa libegl1-mesa-drivers libelf1 libelf1:i386 libelfg0 libepoxy0 libevdev2 libexpat1 libexpat1:i386 libffi6 libffi6:i386 libfftw3-3 libfftw3-double3 libfftw3-long3 libfftw3-single3 libflac8 libfolks-telepathy25 libfolks25 libfontconfig1 libfontembed1 libgbm1 libgcc1 libgcc1:i386 libgck-1-0 libgcr-base-3-1 libgcrypt11 libgcrypt20 libgd3 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgee-0.8-2 libgeocode-glib0 libgexiv2-2 libgirepository-1.0-1 libgl1-mesa-dev libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libglib-perl libglib2.0-0 libglib2.0-bin libglibmm-2.4-1c2a libgmp10 libgnomekbd-common libgnutls-deb0-28 libgomp1 libgphoto2-6 libgphoto2-port10 libgrilo-0.2-1 libgstreamer-plugins-base1.0-0 libgstreamer1.0-0 libgtk-3-common libgutenprint2 libgweather-common libhogweed2 libhtml-parser-perl libibus-1.0-5 libical1 libicu52 libimobiledevice4 libinput0 libjavascriptcoregtk-3.0-0 libjson-c2 liblcms2-2 libllvm3.4 libllvm3.4:i386 liblocale-gettext-perl libmagickcore5 libmm-glib0 libmozjs-24-0 libmpdec2 libmx-common libncurses5 libncursesw5 libndp0 libnet-dbus-perl libnet-ssleay-perl libnettle4 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnm-glib4 libnm-gtk-common libnm-util2 libopenjpeg5 libopenvg1-mesa libopus0 liborc-0.4-0 libp11-kit0 libpackagekit-glib2-16 libpam-systemd libparted2 libpciaccess0 libpciaccess0:i386 libpcre3 libpcre3-dev libpcrecpp0 libperl4-corelibs-perl libperl5.18 libpixman-1-0 libplist2 libpoppler-glib8 libpoppler46 libproxy1 libpulse-mainloop-glib0 libpulse0 libpulsedsp libpurple0 libpython3-stdlib libpython3.4-minimal libpython3.4-stdlib libqpdf13 libreadline6 librtmp1 libsdl1.2debian libsecret-1-0 libsecret-common libsocket-perl libsoundtouch0 libsoup2.4-1 libsqlite3-0 libssl1.0.0 libstdc++6 libstdc++6:i386 libswscale2 libsystemd-daemon0 libsystemd-id128-0 libsystemd-journal0 libsystemd-login0 libtasn1-6 libtelepathy-glib0 libtelepathy-logger3 libtext-charwidth-perl libtext-iconv-perl libtiff5 libtinfo5 libtinfo5:i386 libtotem-plparser18 libtracker-sparql-1.0-0 libtxc-dxtn-s2tc0 libtxc-dxtn-s2tc0:i386 libudev1 libudisks2-0 libupower-glib2 libusbmuxd2 libuuid-perl libva1 libvpx1 libwacom-common libwacom2 libwayland-client0 libwayland-cursor0 libwayland-egl1-mesa libwayland-server0 libwebp5 libx11-6 libx11-dev libx11-xcb-dev libx11-xcb1 libx264-142 libxatracker2 libxcb-dri2-0 libxcb-dri2-0-dev libxcb-dri3-0 
libxcb-dri3-dev libxcb-glx0 libxcb-glx0-dev libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-present-dev libxcb-present0 libxcb-randr0 libxcb-randr0-dev libxcb-render0 libxcb-render0-dev libxcb-shape0 libxcb-shape0-dev libxcb-sync-dev libxcb-sync1 libxcb-xf86dri0 libxcb-xfixes0 libxcb-xfixes0-dev libxcb-xv0 libxcb1 libxcb1-dev libxdamage-dev libxdamage1 libxfixes-dev libxfixes3 libxi6 libxkbcommon0 libxml-parser-perl libxml2 libxshmfence-dev libxshmfence1 libxxf86vm-dev libxxf86vm1 libzeitgeist-2.0-0 mesa-common-dev mime-support nautilus-data network-manager parted perl perl-base perl-modules perlmagick policykit-1 ppp printer-driver-c2esp pulseaudio pulseaudio-module-x11 pulseaudio-utils python-aptdaemon python-gi python-gi-cairo systemd systemd-sysv udisks2 upower usbmuxd x11proto-damage-dev x11proto-dri2-dev x11proto-fixes-dev x11proto-gl-dev x11proto-xf86vidmode-dev xserver-common xserver-xephyr xserver-xorg-core xserver-xorg-input-evdev xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-vmmouse xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-ati xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-modesetting xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-trident xserver-xorg-video-vesa xserver-xorg-video-vmware zlib1g zlib1g:i386 zlib1g-devSuggested packages: bzip2-doc xpp cups-pdf lrzip glibc-doc:i386 locales:i386 libfont-freetype-perl libfftw3-bin libfftw3-dev rng-tools libgd-tools libglide3 libglide3:i386 gnutls-bin gphoto2 gtkam grilo-plugins-0.2 gstreamer-codec-install gnome-codec-install gstreamer1.0-tools gutenprint-locales libdata-dump-perl libusbmuxd-tools liblcms2-utils opus-tools libparted-dev libparted-i18n libxcb-doc parted-doc perl-doc libterm-readline-gnu-perl libterm-readline-perl-perl make libb-lint-perl libcpanplus-dist-build-perl libcpanplus-perl libfile-checktree-perl liblog-message-simple-perl liblog-message-perl libobject-accessor-perl imagemagick-doc pavumeter pavucontrol paman paprefs systemd-ui xfsprogs reiserfsprogs exfat-utils btrfs-tools mdadm gpointing-device-settings touchfreeze xinput firmware-linuxRecommended packages: cups-browsed gstreamer1.0-x qpdf va-driver-all va-driver rename libarchive-extract-perl libmodule-pluggable-perl libpod-latex-perl libterm-ui-perl libtext-soundex-perl gdisk xserver-xorg-video-qxlThe following packages will be REMOVED: aisleriot alacarte aptdaemon baobab bluez-cups brasero caribou caribou-antler cheese cups dconf-tools empathy eog evince evolution evolution-data-server evolution-plugins evolution-webcal file-roller gcalctool gcr gdebi gdm3 gedit gedit-plugins gir1.2-caribou-1.0 gir1.2-clutter-1.0 gir1.2-clutter-gst-1.0 gir1.2-evince-3.0 gir1.2-gcr-3 gir1.2-gkbd-3.0 gir1.2-gnomebluetooth-1.0 gir1.2-goa-1.0 gir1.2-gtk-3.0 gir1.2-gtkclutter-1.0 gir1.2-gtksource-3.0 gir1.2-gucharmap-2.90 gir1.2-mutter-3.0 gir1.2-panelapplet-4.0 gir1.2-peas-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-vte-2.90 gir1.2-webkit-3.0 gir1.2-wnck-3.0 gkbd-capplet glchess glines gnect gnibbles gnobots2 gnome gnome-applets gnome-bluetooth gnome-color-manager gnome-contacts gnome-control-center gnome-core gnome-dictionary gnome-disk-utility gnome-documents gnome-font-viewer gnome-games gnome-icon-theme 
gnome-icon-theme-extras gnome-icon-theme-symbolic gnome-keyring gnome-media gnome-nettool gnome-online-accounts gnome-orca gnome-packagekit gnome-panel gnome-power-manager gnome-screensaver gnome-screenshot gnome-session gnome-session-bin gnome-session-fallback gnome-settings-daemon gnome-shell gnome-shell-extensions gnome-sudoku gnome-sushi gnome-system-log gnome-system-monitor gnome-terminal gnome-themes-standard gnome-tweak-tool gnome-user-guide gnome-user-share gnomine gnotravex gnotski gtali gucharmap gvfs-backends hpijs hplip iagno idle-python3.2 idle3 libaudit0 libavahi-ui-gtk3-0 libbrasero-media3-1 libcanberra-gtk3-0 libcanberra-gtk3-module libcaribou-gtk3-module libcaribou0 libchamplain-0.12-0 libchamplain-gtk-0.12-0 libcheese-gtk21 libcheese3 libclutter-1.0-0 libclutter-gst-1.0-0 libclutter-gtk-1.0-0 libclutter-imcontext-0.1-0 libclutter-imcontext-0.1-bin libcluttergesture-0.0.2-0 libcupsdriver1 libedata-book-1.2-13 libedataserverui-3.0-1 libepc-ui-1.0-3 libevdocument3-4 libevolution libevview3-3 libfolks-eds25 libgail-3-0 libgcr-3-1 libgdict-1.0-6 libgdu-gtk0 libglib2.0-dev libgnome-bluetooth10 libgnome-desktop-3-2 libgnome-media-profiles-3.0-0 libgnomekbd7 libgoa-1.0-0 libgtk-3-0 libgtk-3-bin libgtk-vnc-2.0-0 libgtk2-perl libgtkhtml-4.0-0 libgtkhtml-4.0-common libgtkhtml-editor-4.0-0 libgtkmm-3.0-1 libgtksourceview-3.0-0 libgucharmap-2-90-7 libgweather-3-0 libhpmud0 libmutter0 libmx-1.0-2 libnautilus-extension1a libnm-gtk0 libpanel-applet-4-0 libpango-perl libpeas-1.0-0 libperl5.14 libpulse-dev librhythmbox-core6 libsane-hpaio libsdl1.2-dev libseed-gtk3-0 libsnmp15 libtotem0 libunique-3.0-0 libvte-2.90-9 libwebkitgtk-3.0-0 libwnck-3-0 libyelp0 lightsoff mahjongg metacity mousetweaks nautilus nautilus-sendto nautilus-sendto-empathy network-manager-gnome notification-daemon policykit-1-gnome printer-driver-gutenprint printer-driver-hpcups printer-driver-hpijs printer-driver-postscript-hp printer-driver-splix python-aptdaemon.gtk3widgets python3 python3-tk quadrapassel rhythmbox rhythmbox-plugin-cdrecorder rhythmbox-plugins rygel-preferences seahorse shotwell simple-scan software-properties-gtk sound-juicer steam-launcher swell-foop task-gnome-desktop task-print-server totem totem-plugins tracker-gui transmission-gtk vinagre vino xdg-user-dirs-gtk xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-chips xserver-xorg-video-i128 xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-sis xserver-xorg-video-tseng xserver-xorg-video-voodoo yelp zenityThe following NEW packages will be installed: colord-data cups-daemon cups-server-common cupsddk dconf-cli gcc-4.9-base gcc-4.9-base:i386 geoclue-2.0 gir1.2-gst-plugins-base-1.0 gir1.2-gstreamer-1.0 gir1.2-ibus-1.0 gir1.2-notify-0.7 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-pulseaudio init-system-helpers libarchive13 libassuan0 libaudit-common libaudit1 libavcodec55 libavformat55 libavutil53 libc6:i386 libc6-i686:i386 libcamel-1.2-49 libcolord2 libcolorhug2 libdb5.3 libdconf1 libdrm-dev libdrm-intel1:i386 libdrm-nouveau2 libdrm-nouveau2:i386 libdrm-radeon1:i386 libdrm2:i386 libebackend-1.2-7 libebook-1.2-14 libebook-contacts-1.2-0 libecal-1.2-16 libedata-book-1.2-20 libedata-cal-1.2-23 libedataserver-1.2-18 libegl1-mesa libegl1-mesa-drivers libelf1:i386 libelfg0 libepoxy0 libevdev2 libexpat1:i386 libffi6 libffi6:i386 libfftw3-double3 libfftw3-long3 libfftw3-single3 libfontembed1 libgbm1 libgcc1:i386 libgcr-base-3-1 libgcrypt20 libgd3 libgee-0.8-2 libgexiv2-2 
libgl1-mesa-dri:i386 libgnutls-deb0-28 libgphoto2-6 libgphoto2-port10 libgrilo-0.2-1 libgstreamer-plugins-base1.0-0 libgstreamer1.0-0 libhogweed2 libibus-1.0-5 libical1 libicu52 libimobiledevice4 libinput0 libjson-c2 libllvm3.4 libllvm3.4:i386 libmm-glib0 libmozjs-24-0 libmpdec2 libndp0 libopenjpeg5 libopenvg1-mesa libpackagekit-glib2-16 libpam-systemd libparted2 libpciaccess0:i386 libperl4-corelibs-perl libperl5.18 libplist2 libpoppler46 libproxy1 libpulsedsp libpython3-stdlib libpython3.4-minimal libpython3.4-stdlib libqpdf13 librtmp1 libsecret-1-0 libsecret-common libstdc++6:i386 libsystemd-id128-0 libsystemd-journal0 libtasn1-6 libtelepathy-logger3 libtiff5 libtinfo5:i386 libtotem-plparser18 libtracker-sparql-1.0-0 libtxc-dxtn-s2tc0 libtxc-dxtn-s2tc0:i386 libudev1 libudisks2-0 libupower-glib2 libusbmuxd2 libwayland-client0 libwayland-cursor0 libwayland-egl1-mesa libwayland-server0 libwebp5 libx11-xcb-dev libx264-142 libxatracker2 libxcb-dri2-0-dev libxcb-dri3-0 libxcb-dri3-dev libxcb-glx0-dev libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-present-dev libxcb-present0 libxcb-randr0 libxcb-randr0-dev libxcb-render0-dev libxcb-shape0-dev libxcb-sync-dev libxcb-sync1 libxcb-xf86dri0 libxcb-xfixes0 libxcb-xfixes0-dev libxcb-xv0 libxdamage-dev libxfixes-dev libxkbcommon0 libxshmfence-dev libxshmfence1 libxxf86vm-dev libzeitgeist-2.0-0 parted systemd systemd-sysv udisks2 x11proto-damage-dev x11proto-dri2-dev x11proto-fixes-dev x11proto-gl-dev x11proto-xf86vidmode-dev xserver-xorg-video-modesetting zlib1g:i386The following packages will be upgraded: bzip2 colord cups-bsd cups-client cups-common cups-ppdc dbus-x11 dconf-gsettings-backend dconf-service empathy-common evince-common evolution-data-server-common folks-common fontconfig-config gir1.2-atspi-2.0 gir1.2-freedesktop gir1.2-gck-1 gir1.2-gdesktopenums-3.0 gir1.2-glib-2.0 gir1.2-javascriptcoregtk-3.0 gir1.2-soup-2.4 gir1.2-telepathylogger-0.2 gir1.2-upowerglib-1.0 glib-networking glib-networking-common glib-networking-services gnome-desktop3-data gnome-packagekit-data gnome-panel-data gnome-session-common gnome-shell-common gnome-themes-standard-data gsettings-desktop-schemas gvfs gvfs-bin gvfs-common gvfs-daemons gvfs-libs libatk1.0-0 libatk1.0-data libatspi2.0-0 libburn4 libbz2-1.0 libcairo-perl libcairo2 libcaribou-common libcups2 libcupscgi1 libcupsfilters1 libcupsimage2 libcupsmime1 libcupsppdc1 libdbus-glib-1-2 libdjvulibre-text libdjvulibre21 libdrm-intel1 libdrm-radeon1 libdrm2 libelf1 libexpat1 libfftw3-3 libflac8 libfolks-telepathy25 libfolks25 libfontconfig1 libgcc1 libgck-1-0 libgcrypt11 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgeocode-glib0 libgirepository-1.0-1 libgl1-mesa-dev libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libglib-perl libglib2.0-0 libglib2.0-bin libglibmm-2.4-1c2a libgmp10 libgnomekbd-common libgomp1 libgtk-3-common libgutenprint2 libgweather-common libhtml-parser-perl libjavascriptcoregtk-3.0-0 liblcms2-2 liblocale-gettext-perl libmagickcore5 libmx-common libncurses5 libncursesw5 libnet-dbus-perl libnet-ssleay-perl libnettle4 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnm-glib4 libnm-gtk-common libnm-util2 libopus0 liborc-0.4-0 libp11-kit0 libpciaccess0 libpcre3 libpcre3-dev libpcrecpp0 libpixman-1-0 libpoppler-glib8 libpulse-mainloop-glib0 libpulse0 libpurple0 libreadline6 libsdl1.2debian libsocket-perl libsoundtouch0 libsoup2.4-1 libsqlite3-0 libssl1.0.0 libstdc++6 libswscale2 libsystemd-daemon0 libsystemd-login0 libtelepathy-glib0 libtext-charwidth-perl libtext-iconv-perl libtinfo5 
libuuid-perl libva1 libvpx1 libwacom-common libwacom2 libx11-6 libx11-dev libx11-xcb1 libxcb-dri2-0 libxcb-glx0 libxcb-render0 libxcb-shape0 libxcb1 libxcb1-dev libxdamage1 libxfixes3 libxi6 libxml-parser-perl libxml2 libxxf86vm1 mesa-common-dev mime-support nautilus-data network-manager perl perl-base perl-modules perlmagick policykit-1 ppp printer-driver-c2esp pulseaudio pulseaudio-module-x11 pulseaudio-utils python-aptdaemon python-gi python-gi-cairo upower usbmuxd xserver-common xserver-xephyr xserver-xorg-core xserver-xorg-input-evdev xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-vmmouse xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-ati xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-trident xserver-xorg-video-vesa xserver-xorg-video-vmware zlib1g zlib1g-dev198 upgraded, 162 newly installed, 220 to remove and 916 not upgraded.Need to get 163 MB of archives.After this operation, 167 MB disk space will be freed.I assume this is it wanting to upgrade to all the new Jessie stuff. I take it just letting this run will break my system? Is there any way to just install what Steam needs, or to go about installing steam some other way? (I got it working before somehow, and this is basically a fresh debian install). | Installing Steam on Debian Wheezy | debian;terminal;apt;steam | You can use gdebi to install the Debian Steam installer made by Ghost Squad 57, available from Github. It gets around the issues raised by Valve's installer and has not encountered any issues on Wheezy. |
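For reference, a sketch of the commands involved in that route; the installer filename is a placeholder, and whether multiarch still needs enabling on a given Wheezy system is an assumption:

# Allow 32-bit packages on a 64-bit Wheezy install (needed for Steam's i386 deps)
sudo dpkg --add-architecture i386
sudo apt-get update

# gdebi installs a local .deb and pulls its dependencies in from apt
sudo apt-get install gdebi
sudo gdebi steam-installer.deb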
_codereview.113605 | Yesterday I came up with the idea of using a Python context manager to ensure cleanup, available here. This time I am using that context manager to make a decorator.

The context manager

# SafeGPIO.py
# updated, warning silenced
from RPi import GPIO
from exceptions import RuntimeWarning
import warnings

class SafeGPIO(object):
    def __enter__(self):
        return GPIO

    def __exit__(self, *args, **kwargs):
        with warnings.catch_warnings():
            warnings.simplefilter("error")  # turn warnings into exceptions
            try:
                GPIO.cleanup()
            except RuntimeWarning:
                pass  # silence it

The decorator

# decorators.py
from .SafeGPIO import SafeGPIO
from RPi import GPIO
from functools import wraps

def safe_gpio(func):
    """This decorator ensures GPIO.cleanup() is called when the function call
    ends; it also injects GPIO as first argument into your function"""
    @wraps(func)  # using wraps preserves the doc string
    def wrapper(*args, **kwargs):
        with SafeGPIO() as GPIO:
            return func(GPIO, *args, **kwargs)
    return wrapper

def gpio(func):
    """This decorator injects GPIO as first argument into your function"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(GPIO, *args, **kwargs)
    return wrapper

Use like this:

# decorator_test.py
from SafeGPIO.decorators import safe_gpio, gpio
from time import sleep
from random import choice, randint

GPIO_PINS = (3,5,7,8,10,11,12,13,15,16,18,19,21,22,23,24,26)
VALUES = (True, False)

@safe_gpio
def do_random_things_with_pins_for_ten_times(GPIO):
    GPIO.setmode(GPIO.BOARD)
    for pin in GPIO_PINS:
        GPIO.setup(pin, GPIO.OUT)
    for _ in xrange(10):
        pin = choice(GPIO_PINS)       # choose one of the GPIO pins
        value = choice(VALUES)        # output either true or false
        sleep_seconds = randint(1,3)  # sleep from 1 to 3 seconds
        print "selected pin %d, output %r, sleep for %d seconds" % (pin, value, sleep_seconds)
        GPIO.output(pin, value)
        sleep(sleep_seconds)

@safe_gpio
def do_real_work(GPIO):
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(7, GPIO.OUT)
    print "doing real work for 5 seconds"
    GPIO.output(7, True)
    sleep(5)

@safe_gpio  # guarantee to clean up on exit
def main(GPIO):
    do_random_things_with_pins_for_ten_times()
    do_real_work()

if __name__ == '__main__':
    main() | Raspberry Pi Safe Clean Up - the decorator way | python;python 2.7;raspberry pi | My other answer says most of the stuff related to your underlying question of how to handle cleanup. In this answer I would like to focus on reviewing your code.

Join both versions in one file
I think I would like to have both of them in the same file, i.e. SafeGPIO, as that would allow for a slightly simpler structure, and the possibility to add a good description at the top of the file to properly highlight the options you present on how to do cleanup related to the GPIO.

Missing documentation on use cases within module file
I would very much appreciate having the documentation on how to use it within the module itself, possibly with some links to explain why this is a focus area.

Possible issues related to type of context manager
Using context managers can introduce issues related to single-use, reusable and/or reentrant context managers, which needs some thinking. In your case, you might end up calling GPIO.cleanup() multiple times, but there shouldn't be any harm in that. In the Technical details of Context Managers, some links are provided related to such issues.

Simplify the decorator
The decorator version can be simplified somewhat, using something like the following untested code:

def cleanup_GPIO_at_end(func):
    """Decorator function executing func() followed by GPIO.cleanup()."""
    @wraps(func)
    def inner_wrapper(*args, **kwargs):
        return_value = func(*args, **kwargs)
        GPIO.cleanup()
        return return_value
    return inner_wrapper

Why the injection of GPIO as first argument?
I don't see the reasoning for that in your current code, so I'd leave it out, currently.

Consider the atexit() approach in addition to other means
I commented on this before, and also in my other answer.

Rename SafeGPIO to GPIO_cleanup
SafeGPIO doesn't convey what the class does, so I would rename it to GPIO_cleanup or CleanupGPIO to better indicate what happens in the background.

I've not commented upon the usage tests, as I consider those test cases to illustrate usage of the cleanup variants. And they look nice enough, although I would probably use print as a function, and introduce a little vertical spacing (with comments above instead of after code).

Refactored code

Here is my suggestion for SafeGPIO.py:

"""Documentation on how to use this module with use cases..."""

from RPi import GPIO
from exceptions import RuntimeWarning
from functools import wraps
import warnings
import atexit

class GPIO_cleanup(object):
    ## ... copy the code from SafeGPIO in your post here ...

def GPIO_cleanup_at_end(func):
    """Decorator function executing func() followed by GPIO.cleanup()."""
    @wraps(func)
    def inner_wrapper(*args, **kwargs):
        return_value = func(*args, **kwargs)
        GPIO.cleanup()
        return return_value
    return inner_wrapper

# Flag to indicate whether to clean up or not at module exit
_cleanup_disabled = False

def disable_cleanup_at_exit(disable=True):
    """Disable the default behaviour of cleaning up at exit of module."""
    global _cleanup_disabled
    _cleanup_disabled = disable

@atexit.register
def cleanup():
    if not _cleanup_disabled:
        GPIO.cleanup()

This would enforce a default cleanup at the end of the script, unless abnormally terminated, and still allows for the use of with GPIO_cleanup() as GPIO: or the decorator @GPIO_cleanup_at_end. And a final recommendation: use class encapsulation for a larger project to hide the actual pinout, allowing for a better focus on the higher-level functions of your script.
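A short usage sketch of the refactored module suggested above; it is Python 2 style to match the post, and the function body is a placeholder:

# decorator_demo.py (hypothetical)
from SafeGPIO import GPIO_cleanup, GPIO_cleanup_at_end, disable_cleanup_at_exit

@GPIO_cleanup_at_end            # explicit cleanup as soon as this call returns
def blink_once():
    pass                        # placeholder for real GPIO work

blink_once()

# Alternatively rely on the registered atexit hook alone, or opt out
# when some other layer owns the cleanup:
# disable_cleanup_at_exit()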
_webmaster.105851 | I have a problem with my website titles in the SERPs (over 1000 pages). The title of the pages is in this format: "page title - home_icon Home".

I checked my website and I found, in the menu, a link to the home page with a home icon using Material Icons:

<a href="/">
  <i class="material-icons right" data-material-icon="home_icon">home_icon</i>
  Home
</a>

What is the solution? | When using Google's material icons, the page title in the search results has the icon alt text | seo;serps;title | null
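There is no accepted answer here, but for illustration, one commonly suggested direction: replace the icon's ligature text with its numeric codepoint so the literal word never appears in the DOM text that crawlers read, and hide the icon from assistive technology. Whether this alone fixes the SERP titles is an assumption, and the codepoint shown is the one published for the Material Icons home glyph (verify it for your icon set).

<a href="/">
  <i class="material-icons right" aria-hidden="true">&#xE88A;</i>
  Home
</a>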
_webapps.97407 | I have a simple line graph with two lines (line A and line B). They intersect at a given point (the break-even point). How do I indicate on the graph that this is the break-even point? I know it's very easy to do with an arrow and a text box. However, I would like the arrow and text box to move to a new intersection as I change the numbers around.

The first tab, "Break Even Analysis", is the sheet where I'm trying to highlight the intersection point of the yellow and blue lines of the break-even: https://docs.google.com/spreadsheets/d/1bBkLnHCQnje6jXT_gUQPFO5AfBfs9WxdFwJAFsQX_cY/edit?usp=sharing

Essentially I am trying to dynamically plot the point circled in red, as shown at about minute 15 of this YouTube video for Excel: https://www.youtube.com/watch?v=7MxlVMzRxa8 | How to plot single intersecting point of 2 lines in Line Chart in Google Sheets | google spreadsheets | null
_codereview.91790 | I've developed my shopping cart further. This is related to my earlier post: Securing PHP shopping cartIn addition to those functionalities, I've proceeded to make integration to Paytrail payment gateway. This post will add option to handle customer's personal data and to send that data to payment gateway. All credit card info/payments are handled by Paytrail on their servers.Merchant id/secret are Paytrail's test credentials, if anyone thinks, I'm revealing those accidentally.My questions are the following:Is my output data being visible in pure HTML considered insecure? This includes an MD5 hash, which is calculated based on shopping cart items/user input data (output.html).If this is insecure, then how should I proceed with changing my code, such that it'll be less likely to be abused?Are there any vulnerabilities on handling customer input data (addresses, names etc.)?Also, please state if your improvements/changes are purely an alternate way of doing things or are they crucial changes.This script is free to use for anyone who wishes to make their own integration with Paytrail gateway.test_inputfunction test_input($data){ $data = trim($data); $data = htmlspecialchars($data); return $data;}order.php<?php if ($_SERVER[REQUEST_METHOD] == POST) {$error = false;// CUSTOMER ADDRESS VARIABLES$firstname = $lastname = $address = $postnumber = $city = $country = $company = $homenumber = $worknumber = $email = ;// PAYMENT DATA VARIABLES$merchant_secret = '6pKF4jkv97zmqBJ3ZL8gUw5DfT2NMQ';$merchant_id = '13466';$order_number = '123456';$reference_number = '';$order_description = 'Testitilaus';$currency = 'EUR';$return_address = 'http://www.esimerkki.fi/success';$cancel_address = 'http://www.esimerkki.fi/cancel';$pending_address = '';$notify_address = 'http://www.esimerkki.fi/notify';$type = 'E1';$culture = 'fi_FI';$preselected_method = '';$mode = '1';$visible_methods = '';$group = '';// CART ITEMS DATA $vat = 0;$cart_tax = 0;$cart_discount = 0;$cart_type = 1;$items_data = '';$count = 0;foreach($_SESSION['cart']['id'] as $key => $value) { $count = count($_SESSION['cart']['id']);}## FORM VALIDATION BEGING ##// VALIDATE FIRSTNAME if (empty($_POST['firstname'])) {$error = true;} $firstname = test_input($_POST['firstname']); if(!preg_match(/[a-zA-Z]/u, $firstname)) {$error = true;}// VALIDATE LASTNAME if (empty($_POST['lastname'])) {$error = true;} $lastname = test_input($_POST['lastname']); if(!preg_match(/[a-zA-Z]/u, $lastname)) {$error = true;}// VALIDATE ADDRESS if (empty($_POST['address'])) {$error = true;} $address = test_input($_POST['address']); if(!preg_match(/[a-zA-Z]/u, $address)) {$error = true;}// VALIDATE POSTNUMBER $postnumber = filter_input(INPUT_POST, 'postnumber', FILTER_SANITIZE_NUMBER_INT); if (empty($_POST['postnumber'])) { $error = true; }// VALIDATE CITY if (empty($_POST['city'])) {$error = true;} $city = test_input($_POST['city']); if(!preg_match(/[a-zA-Z]/u, $city)) {$error = true;}// VALIDATE COUNTRY if (empty($_POST['country'])) {$error = true;} $country = test_input($_POST['country']); if(!preg_match(/[a-zA-Z]/u, $country)) {$error = true;}// VALIDATE COMPANY (OPTIONAL) if(isset($_POST['company']) && !empty($_POST['company'])) { if(empty($_POST['company'])) {$error = true;} $company = test_input($_POST['company']); if(!preg_match(/[a-zA-Z]/u, $company)) {$error = true;} }// VALIDATE HOMENUMBER (OPTIONAL) if(isset($_POST['homenumber']) && !empty($_POST['homenumber'])) { if (empty($_POST['homenumber'])) {$error = true;} $homenumber = 
test_input($_POST['homenumber']); if(!preg_match(/^[\+0-9\-\(\)\s]*$/, $homenumber)) {$error = true;} }// VALIDATE WORKNUMBER (OPTIONAL) if(isset($_POST['worknumber']) && !empty($_POST['worknumber'])) { if (empty($_POST['worknumber'])) {$error = true;} $worknumber = test_input($_POST['worknumber']); if(!preg_match(/^[\+0-9\-\(\)\s]*$/, $worknumber)) {$error = true;} }// VALIDATE EMAIL if (empty($_POST[email])) {$error = true;} $email = test_input($_POST[email]); if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {$error = true;} }if(!empty($error)) {echo Something fishy is going on..; exit;}## FORM VALIDATION ENDS ##//CREATE STRINGS FROM CUSTOMER AND PAYMENT DATA$customer = $homenumber. '|' .$worknumber. '|' .$email. '|' .$firstname. '|' .$lastname. '|' .$company. '|' .$address. '|' .$postnumber. '|' .$city. '|' .$country. '|' .$vat. '|' .$count. '|';$payment_data = $merchant_secret. '|' .$merchant_id. '|' .$order_number. '|' .$reference_number. '|' .$order_description. '|' .$currency. '|' .$return_address. '|' .$cancel_address. '|' .$pending_address. '|' .$notify_address. '|' .$type. '|' .$culture. '|' .$preselected_method. '|' .$mode. '|' .$visible_methods. '|' .$group. '|' ; // PAYMENT DATAecho <form action=\https://payment.paytrail.com/\ method=\post\ id=\payment\>;echo <input name=\MERCHANT_ID\ type=\hidden\ value=\{$merchant_id}\>;echo <input name=\ORDER_NUMBER\ type=\hidden\ value=\{$order_number}\>;echo <input name=\REFERENCE_NUMBER\ type=\hidden\ value=\{$reference_number}\>;echo <input name=\ORDER_DESCRIPTION\ type=\hidden\ value=\{$order_description}\>;echo <input name=\CURRENCY\ type=\hidden\ value=\{$currency}\>;echo <input name=\RETURN_ADDRESS\ type=\hidden\ value=\{$return_address}\>;echo <input name=\CANCEL_ADDRESS\ type=\hidden\ value=\{$cancel_address}\>;echo <input name=\PENDING_ADDRESS\ type=\hidden\ value=\{$pending_address}\>;echo <input name=\NOTIFY_ADDRESS\ type=\hidden\ value=\{$notify_address}\>;echo <input name=\TYPE\ type=\hidden\ value=\{$type}\>;echo <input name=\CULTURE\ type=\hidden\ value=\{$culture}\>;echo <input name=\PRESELECTED_METHOD\ type=\hidden\ value=\{$preselected_method}\>;echo <input name=\MODE\ type=\hidden\ value=\{$mode}\>;echo <input name=\VISIBLE_METHODS\ type=\hidden\ value=\{$visible_methods}\>;echo <input name=\GROUP\ type=\hidden\ value=\{$group}\>;// CUSTOMER ADDRESSecho <input name=\CONTACT_TELNO\ type=\hidden\ value=\{$homenumber}\>;echo <input name=\CONTACT_CELLNO\ type=\hidden\ value=\{$worknumber}\>;echo <input name=\CONTACT_EMAIL\ type=\hidden\ value=\{$email}\>;echo <input name=\CONTACT_FIRSTNAME\ type=\hidden\ value=\{$firstname}\>;echo <input name=\CONTACT_LASTNAME\ type=\hidden\ value=\{$lastname}\>;echo <input name=\CONTACT_COMPANY\ type=\hidden\ value=\{$company}\>;echo <input name=\CONTACT_ADDR_STREET\ type=\hidden\ value=\{$address}\>;echo <input name=\CONTACT_ADDR_ZIP\ type=\hidden\ value=\{$postnumber}\>;echo <input name=\CONTACT_ADDR_CITY\ type=\hidden\ value=\{$city}\>;echo <input name=\CONTACT_ADDR_COUNTRY\ type=\hidden\ value=\{$country}\>;// CART ITEMS DATAecho <input name=\INCLUDE_VAT\ type=\hidden\ value=\{$vat}\ />;echo <input name=\ITEMS\ type=\hidden\ value=\{$count}\>;$i = -1;foreach ($_SESSION['cart']['id'] as $key => $value) { $i++; $cart_title = $_SESSION['cart']['name'][$key]; $cart_id = $i; $cart_quantity = $_SESSION['cart']['quantity'][$key]; $cart_price = $_SESSION['cart']['price'][$key]; $cart_pricedot = str_replace(',' , '.' 
, $cart_price); $cart_pricedot_trim = trim($cart_pricedot); echo <input name=\ITEM_TITLE[$i]\ type=\hidden\ value=\{$cart_title}\>; echo <input name=\ITEM_NO[$i]\ type=\hidden\ value=\{$i}\>; echo <input name=\ITEM_AMOUNT[$i]\ type=\hidden\ value=\{$cart_quantity}\>; echo <input name=\ITEM_PRICE[$i]\ type=\hidden\ value=\{$cart_pricedot_trim}\>; echo <input name=\ITEM_TAX[$i]\ type=\hidden\ value=\{$cart_tax}\>; echo <input name=\ITEM_DISCOUNT[$i]\ type=\hidden\ value=\{$cart_discount}\>; echo <input name=\ITEM_TYPE[$i]\ type=\hidden\ value=\{$cart_type}\>; $items_data.=$cart_title|$i|$cart_quantity|$cart_pricedot_trim|$cart_tax|$cart_discount|$cart_type|;}// START CALCULATING PAYMENT MD5 HASH$combined = $payment_data. '' .$customer. '' .$items_data;$combinedsub = substr($combined, 0, -1);$code = strtoupper(md5($combinedsub));$trimmeddata = trim($code);echo <input name=\AUTHCODE\ type=\hidden\ value=\{$trimmeddata}\>;echo <input type=\submit\ value=\Siirry maksamaan\>;echo </form>;?>output.html <form action=https://payment.paytrail.com/ method=post id=payment> <input name=MERCHANT_ID type=hidden value=13466> <input name=ORDER_NUMBER type=hidden value=123456> <input name=REFERENCE_NUMBER type=hidden value=> <input name=ORDER_DESCRIPTION type=hidden value=Testitilaus> <input name=CURRENCY type=hidden value=EUR> <input name=RETURN_ADDRESS type=hidden value=http://www.esimerkki.fi/success> <input name=CANCEL_ADDRESS type=hidden value=http://www.esimerkki.fi/cancel> <input name=PENDING_ADDRESS type=hidden value=> <input name=NOTIFY_ADDRESS type=hidden value=http://www.esimerkki.fi/notify> <input name=TYPE type=hidden value=E1> <input name=CULTURE type=hidden value=fi_FI> <input name=PRESELECTED_METHOD type=hidden value=> <input name=MODE type=hidden value=1> <input name=VISIBLE_METHODS type=hidden value=> <input name=GROUP type=hidden value=> <input name=CONTACT_TELNO type=hidden value=+5747 5884 7574543> <input name=CONTACT_CELLNO type=hidden value=0060 55574645> <input name=CONTACT_EMAIL type=hidden [email protected]> <input name=CONTACT_FIRSTNAME type=hidden value=zil> <input name=CONTACT_LASTNAME type=hidden value=lgebr> <input name=CONTACT_COMPANY type=hidden value=Company Ot> <input name=CONTACT_ADDR_STREET type=hidden value=Krkel 34> <input name=CONTACT_ADDR_ZIP type=hidden value=00000> <input name=CONTACT_ADDR_CITY type=hidden value=lbm> <input name=CONTACT_ADDR_COUNTRY type=hidden value=FI> <input name=INCLUDE_VAT type=hidden value=0 /> <input name=ITEMS type=hidden value=2> <input name=ITEM_TITLE[0] type=hidden value=Lasikengt> <input name=ITEM_NO[0] type=hidden value=0> <input name=ITEM_AMOUNT[0] type=hidden value=45> <input name=ITEM_PRICE[0] type=hidden value=23.43> <input name=ITEM_TAX[0] type=hidden value=0> <input name=ITEM_DISCOUNT[0] type=hidden value=0> <input name=ITEM_TYPE[0] type=hidden value=1> <input name=ITEM_TITLE[1] type=hidden value=Nahkakengt> <input name=ITEM_NO[1] type=hidden value=1> <input name=ITEM_AMOUNT[1] type=hidden value=23> <input name=ITEM_PRICE[1] type=hidden value=564.44> <input name=ITEM_TAX[1] type=hidden value=0> <input name=ITEM_DISCOUNT[1] type=hidden value=0> <input name=ITEM_TYPE[1] type=hidden value=1> <input name=AUTHCODE type=hidden value=958C104FA7522E0319214C3AE1147351> <input type=submit value=Siirry maksamaan></form> | Securing PHP shopping cart (Paytrail integration) - follow-up | php;html;security;e commerce | As one of your previous answer-ers, I can say this has definitely improved!I'll work through by the type this 
time.Variables:Your beginning (block) variables are defined like $firstname, but the secondary block variables are defined like $merchant_secret, with an underline instead of a no-space. I, personally, would suggest using underlines for readability, but, if you do choose the first, keep the coding style the same.It seems like the rest of your script fluctuates too between underline(s) and no space(s), so please do keep the coding style the same.Below, you initialise $count as zero, and proceed to go through every item in the cart, overriding $count with the same value everytime.$count = 0;foreach($_SESSION['cart']['id'] as $key => $value) { $count = count($_SESSION['cart']['id']);}Just use: $count = count($_SESSION['cart']['id']).This line here:if(!empty($error)) {echo Something fishy is going on..; exit;}While empty() works, it's pointless, because they're booleans. So, you can remove empty() and it'll work the exact same.You then proceed to 'CREATE STRINGS [sic] FROM CUSTOMER AND PAYMENT DATA', and just like the fact I won't comment on why sentences should not become SQL or your usage of plurality, I won't comment on why you concatenate a bunch of strings to make a secure MD5 hash. (It'd certainly be a question I'd vote up on Security.SE, though!)My beef is (again) with your coding style:$customer = $homenumber. '|' .$worknumber. '|' .$email. '|' .$firstname. '|' .$lastname. '|' .$company. '|' .$address. '|' .$postnumber. '|' .$city. '|' .$country. '|' .$vat. '|' .$count. '|';$payment_data = $merchant_secret. '|' .$merchant_id. '|' .$order_number. '|' .$reference_number. '|' .$order_description. '|' .$currency. '|' .$return_address. '|' .$cancel_address. '|' .$pending_address. '|' .$notify_address. '|' .$type. '|' .$culture. '|' .$preselected_method. '|' .$mode. '|' .$visible_methods. '|' .$group. '|' ; compared to here:$items_data.=$cart_title|$i|$cart_quantity|$cart_pricedot_trim|$cart_tax|$cart_discount|$cart_type|;If you choose to join each string with a . instead of just placing them in a string like $items_data, then do so, but please (again) keep your coding style the same.Your foreach loop, has similar to a for loop style coding attached.However, normally in a for loop, you increment the value once you've executed the code inside the loop, not before.Instead of:$i = -1;foreach ($array as $key => value){ $i++; some_code_here();}Change it to:$i = 0;foreach ($array as $key => $value){ some_code_here() $i++;}Moving on, you initialise $cart_id as $i, but then never use it, and if you were, it'd be best to replace it with $i seeing as they're the same. So, that line can be removed.These lines are clutter:$cart_price = $_SESSION['cart']['price'][$key];$cart_pricedot = str_replace(',' , '.' , $cart_price);$cart_pricedot_trim = trim($cart_pricedot);You can just use:$cart_price = trim(str_replace(',' , '.' ,$_SESSION['cart']['price'][$key]));or$cart_price = $_SESSION['cart']['price'][$key];$cart_price_formatted = trim(str_replace(',', '.', $cart_price));Only the last is used later anyway, so, if you like, you can replace them.Finally (on variables):$combined = $payment_data. '' .$customer. '' .$items_data;$combinedsub = substr($combined, 0, -1);$code = strtoupper(md5($combinedsub));$trimmeddata = trim($code);In $combined, you add strings together with empty strings in between. That can get removed.$combinedsub could be better named to $combined_substring, Nbdy lks abbrevs, yh?In $code, whilst being misleading, you perform strtoupper() and md5 transforms on it. 
You then proceed to trim() it.First, trim()ing after you've MD5 hashed it, literally reduces the chances of having characters trimmed to zero. So, you can put the trim() in front of the md5() instead.Secondly, if you're performing two transforms in one line, adding a third and condensing is nothing. While $combinedsub has a misleading name, $trimmeddata has a worse name. While having incorrect grammar and spelling, you manage to double both m & d, making the variable name look even less readable.String Building:Here's the most helpful part.During the course of execution, you echo out quite a few hidden inputs.Replacing that with a function would greatly be of assistance.function build_hidden_inputs($type, $value){ return <input name=\$type\ type=\hidden\ value=\$value\>;}Then, build arrays with the content as keys and values, like so:$first_content = array( MERCHANT_ID => $merchant_id, ORDER_NUMBER => $order_number, // So on );Then, foreach them into the function.foreach ($first_content as $key => $value){ echo build_hidden_inputs($key, $value);}To answer your questions:Is it somehow unsecure, that my output data will be visible in pure html? including md5 hash, which is calculated based on shopping cart items/user input data. (output.html)The visible data will be the data visible in your output.html file, if you don't want to output it, and it's not necessary, then just remove it from the echo statements.As for md5, you are outputting it, but, cracking hashes works by matching hashes of strings.Say, for example, if we both md5 hashed 'mypasswordissecure', we'd recieve 'efaafd259c656221c88b22471ac0d61e'.However, even if I had your hash, I would need to match it, and with a data input as large as yours, would probably1 be pretty hard.If 1. is unsecure, how should i proceed changing my code, that it'll be less likely to be abused?Use a more secure encryption than MD5, add a salt coming in from the Database.1Are there any vulnerabilities on handling customer input data? (addresses, names etc.)I wouldn't go outputting the tax, price, and the more sensitive information into a HTML field, not necessarily for hackers, but for cheeky users trying to buy things for free.The AUTHCODE though, probably shouldn't be outputted because of both.You can always use session variables instead.Also, please state, if your improvements/changes are purely an alternate way of doing things or are they crucial changes.Well, if you remove the foreach loop at the beginning that keeps doing the same thing, you'll get faster execution times.However, these suggestions are closer to improvements than alternate ways of proceeding.None of these are crucial as such.1: I'm not an expert on security, unfortunately, so if you're worried, take it to Security.SE, and see what they think. |
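To make the MD5 point concrete: a keyed HMAC is the standard upgrade over hashing a secret-prefixed string. The sketch below is in Python purely for brevity and is conceptual only; the payment gateway's specification dictates the real AUTHCODE algorithm and field order, and every value here is made up.

import hashlib
import hmac

merchant_secret = b"demo-secret"              # made-up value, not a real secret
fields = ["13466", "123456", "EUR", "23.43"]  # made-up subset of the form fields
payload = "|".join(fields).encode()

# The original code signs with a bare MD5 over secret + data:
weak_authcode = hashlib.md5(merchant_secret + b"|" + payload).hexdigest().upper()

# A keyed HMAC over the same payload resists length extension and makes
# the secret's role explicit:
strong_authcode = hmac.new(merchant_secret, payload, hashlib.sha256).hexdigest().upper()

# When verifying a code received back from the client, compare in constant time:
received = strong_authcode  # stand-in for the POSTed value
print(hmac.compare_digest(received, strong_authcode))  # True

hmac.compare_digest avoids the timing side channel that an ordinary == comparison of hex strings can leak.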
_unix.362205 | The 'test' command and '[' seem to be doing the exact same things and even use the exact same flags. Yet these are two different executables on my Ubuntu 14. Can anyone explain why? I would have expected one to be a symlink to another but that is not the case here. | How are '/usr/bin/[' and '/usr/bin/test' related? | command;test | null |
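One quick way to see how the two files relate on a given system (hard links to one inode, byte-identical copies, or genuinely different binaries) is a short Python check; the paths assume the usual locations:

import filecmp
import os

a, b = "/usr/bin/[", "/usr/bin/test"
sa, sb = os.stat(a), os.stat(b)
print("same inode:", (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino))
print("identical bytes:", filecmp.cmp(a, b, shallow=False))
print("sizes:", sa.st_size, sb.st_size)

On GNU coreutils the two are built as separate programs from the same test.c source, with [ additionally insisting on a closing ] argument, which is why neither check reports them as the same file.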
_codereview.28607 | Is this acceptable for readability or would it be better to refactor it into smaller methods?Tips? Comments? Examples?def save_with_payment if valid? charge = Stripe::Charge.create(:amount => (self.plan.price * 100).to_i, :currency => 'usd', :card => stripe_card_token, :description => Charge for #{user.email}) self.stripe_charge_id = charge.id self.starts_at = Date.today self.amount = plan.price self.expires_at = 1.years.from_now save! endrescue Stripe::InvalidRequestError => e logger.error Stripe error while creating charge: #{e.message} errors.add :base, There was a problem with your credit card. falseend | Refactor or keep this high-complexity method? | ruby | Some notes about your code:Indentation of a call with multi-line hash: That's subjective, but this style you use wastes a lot of space, and it's pretty hard to read (because of the longer lines). If the hash is long I prefer to use the one-line-per-key/value JSON style. More on this here.self.stripe_charge_id = charge.id: When you have a lot of sets + save! it is better just to use update_attributes!.save!: The method is named save_with_payment, in a ActiveRecord context that means this method won't raise an exception, so you should call save. if valid?: No else branch? The method seems to return a boolean so returning false in that case would be more consistent.rescue Stripe::InvalidRequestError => e: That's a long and subjective topic. My opinion: don't wrap a whole method that is doing lots of things with a rescue, the logic forms now some kind of spaghetti. Wrap the specific parts of the code that may raise that particular exception.Stripe::Charge.create: You asked if the code is too complex. I don't think so, at least not compared with typical Ruby practices, but it's probably more orthodox to create a separate method for this call.errors.add :base,: I don't like this mixing of calls with parens and calls without, it looks messy. DSL-style code in Rails without parens -> ok, normal code in methods -> not so sure, I'd write them. Or at least be consistent.self.plan: It's not idiomatic to write explicit self. to call instance methods.I'd write:def save_with_payment if !valid? false elsif !(charge = create_stripe_charge) errors.add(:base, There was a problem with your credit card.) false else update_attributes({ :stripe_charge_id => charge.id, :starts_at => Date.today, :amount => plan.price, :expires_at => 1.years.from_now, }) endenddef create_stripe_charge Stripe::Charge.create({ :amount => (plan.price * 100).to_i, :currency => 'usd', :card => stripe_card_token, :description => Charge for #{user.email}, })rescue Stripe::InvalidRequestError => e logger.error(Stripe error while creating charge: #{e.message}) nilend |
_codereview.165192 | This is a java solution to one of hackerrank problem given in below link. https://www.hackerrank.com/challenges/2d-arrayI know it's not optimized, can anyone help me to refactor and optimize?Task:Calculate the hourglass sum for every hourglass in , then print the maximum hourglass sum.Note: If you have already solved the Java domain's Java 2D Array challenge, you may wish to skip this challenge.Input Format:There are lines of input, where each line contains space-separated integers describing 2D Array ; every value in will be in the inclusive range of to .Output FormatPrint the largest (maximum) hourglass sum found in Array.Sample Input1 1 1 0 0 00 1 0 0 0 01 1 1 0 0 00 0 2 4 4 00 0 0 2 0 00 0 1 2 4 0Sample Output19solution:public class TestHourGlass { public static void main(String[] args) { Scanner in = new Scanner(System.in); int arr[][] = new int[6][6]; int hourGlassSum[] = new int[16]; int pos = 0; //Reads data from user input and store in 6*6 Array for (int arr_i = 0; arr_i < 6; arr_i++) { for (int arr_j = 0; arr_j < 6; arr_j++) { arr[arr_i][arr_j] = in.nextInt(); } } //Find each possible hourGlass and calculate sum of each hourGlass for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { hourGlassSum[pos] = calculateHourGlassSum(arr, i, i + 2, j, j + 2); pos++; } } System.out.println(findmax(hourGlassSum)); } /** * @param arr * @param pos1 - Row startPoint * @param pos2 - Row endPoint * @param pos3 - column startPoint * @param pos4 - column endPoint * @return */ public static int calculateHourGlassSum(int arr[][], int pos1, int pos2, int pos3, int pos4) { int sum = 0; int exclRowNum = pos1 + 1; int exclColNum1 = pos3; int exclColNum2 = pos4; for (int arr_i = pos1; arr_i <= pos2; arr_i++) { for (int arr_j = pos3; arr_j <= pos4; arr_j++) { sum = sum + arr[arr_i][arr_j]; } } sum = sum - (arr[exclRowNum][exclColNum1] + arr[exclRowNum][exclColNum2]); return sum; } /** * @param arr * @return max elem of Array */ public static int findmax(int arr[]) { int max = arr[0]; for (int i = 0; i < arr.length; i++) { if (arr[i] >= max) max = arr[i]; } return max; }} | java 2d Array HourGlass Problem from hackerrank | java;algorithm;programming challenge;array | null |
_unix.219152 | I'm following some outdated instructions in an attempt to compile some proprietary, closed-source, unsupported, legacy code. Here is what a typical shell session looks like:$ autoreconf -fisrc/Makefile.am:7: error: 'pkglibexecdir' is not a legitimate directory for 'PYTHON'autoreconf: automake failed with exit status: 1$ cat src/Makefile.ampkglibexecdir = $(libexecdir)/packagenamenobase_pkglibexec_PYTHON = \ python_module_1.py python_module_2.py [...]MAINTAINERCLEANFILES = Makefile.inBUILT_SOURCES = some_sourcesome_source: ln -s ../lib/python/some_source some_sourceI'm using GNU Autoconf version 2.69. I'm also having similar issues with other packages in the same project. I'm assuming that there's a quick fix for this problem but I'm not very comfortable with autotools and most of what I've found via google hasn't made a lot of sense to me. | 'pkglibexecdir' is not a legitimate directory | compiling;gnu;gnu make;automake;autotools | null |
_cs.65395 | I'm trying to understand Sipser's example showing that $ALL_{nfa} \in Co-NSPACE(n)$, where $$ALL_{nfa} = \{ <A> | A \text{ is an NFA such that } L(A) = \Sigma^*\}.$$The algorithm can be seen here.I'm confused by a few things, but mainly what is meant by nondeterministically select an input symbol? If I nondeterministically choose symbol 'a' and then mark all the vertices I can reach from the current ones by 'a', then don't I possibly miss out on the vertices reachable by 'b'? And even though we repeat this $2^q$ times, what's to stop me from choosing that 'a' transition each time?If my instincts are correct, then what we are trying to do is convert the nfa to a dfa (which will have at most $2^q$ states). If the dfa has a single reject state, then we know the nfa has a string that will be rejected. However, we would like to do this without actually converting to a dfa, which might require too much space. So the provided algorithm tries to traverse the corresponding dfa graph, without actually building it? | Understanding why ALL_nfa is in co-nspace | complexity theory;space complexity;nondeterminism | To show that $ALL_{\mathsf{NFA}}$ is in $\mathrm{co-NSPACE}(n)$, we must show that the complement $\overline{ALL_{\mathsf{NFA}}}$ is in $\mathrm{NSPACE}(n)$. The complement is $$\overline{ALL_{\mathsf{NFA}}} = \{ \langle N \rangle \mid N \text{ is an NFA where } L(N) \neq \Sigma^* \}$$To show that $\overline{ALL_{\mathsf{NFA}}}$ is in $\mathrm{NSPACE}(n)$ we therefore need to devise a Turing machine that, given an NFA description $\langle N \rangle$ can tell us if there exists an input that $N$ will not accept.Essentially, the machine will nondeterministically guess a string $w$ and verify that this string is not accepted by $N$. Our machine may guess the wrong string, but remember that the machine is nondeterministic -- as long as it is possible to guess a string with this property, the machine will accept $\langle N \rangle$.To guess such a string $w$ and check that the NFA does not accept it, we need to explore all the possible computations of $N$ on $w$ and show that none of them will lead to an accepting state. This does not require us to construct an equivalent DFA using the subset construction; doing this would require more than linear space. However, even just storing all of the computations of $N$ on $w$ on the Turing machine tape will require more than $n$ tape cells.The trick is to guess the string symbol by symbol and keep track of the sets of states that $N$ could possibly be in after it has read a symbol. If $N$ has $q$ states, there are $2^q$ such sets of possible states. We start with the set $\{ q_0 \}$ and guess a sequence of $2^q$ symbols. For each symbol that we guess, we record the subset of states that can be visited from any of the states that $N$ could be in now. We can keep track of this subset of possible current states using $q$ bits (represented as $q$ cells on the tape). This is linear in $n$ where $n$ is the length of $\langle N \rangle$. This description must at the very least list the states of $N$, so $q = O(n)$.If the set of possible current states that we are now in does not contain an accept state, then we have found a string that cannot be accepted by $N$. We may have to guess $2^q$ symbols, since each symbol could in principle lead us to a new set of possible current states that had not been seen before. But once all sets of possible current states have been examined, we can stop. We can count to $2^q$ in binary using no more than $q$ bits. 
Again this requires no more than $O(n)$ bits.
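A small deterministic Python sketch makes the subset-tracking idea concrete. It explores, breadth-first, exactly the sets of possible current states that the nondeterministic machine would guess its way through; being deterministic, it pays exponential time and space, so it illustrates the idea rather than achieving the NSPACE(n) bound.

from collections import deque

def has_rejected_string(nfa, alphabet, start, accepting):
    # nfa[state][symbol] is the set of successor states; every state is a key.
    start_set = frozenset([start])
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        if not (current & accepting):
            # No computation path is in an accepting state: some string is rejected.
            return True
        for symbol in alphabet:
            nxt = frozenset(s for q in current for s in nfa[q].get(symbol, ()))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy NFA over {a} that accepts every string: a single accepting self-loop.
nfa = {0: {"a": {0}}}
print(has_rejected_string(nfa, "a", start=0, accepting={0}))  # False -> L(N) = Sigma*

The machine accepts ⟨N⟩ exactly when some reachable subset contains no accepting state, i.e. when this function returns True.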
_unix.312888 | I use LXDE, with pcmanfm as my file manager. When I work on folders on a remote machine via SFTP, I've noticed long delays between each operation, such as entering a folder - which suggests a new session may be opened every time I do this.Is that really the case, or could it just be some issue with my NIC or my LAN?Assuming that is the case, can I somehow get pcmanfm to maintain an open SFTP session and just issue commands within it? | pcmanfm sftp single session | sftp;session;file manager;delay;pcmanfm | null
_cs.55026 | I implemented a standard Bloom Filter in C++ and tested it on different sizes, with varying values of the ratio ${c = n/m}$, where ${n}$ is the size of the filter and ${m}$ is the number of elements inserted. For the Bloom Filter, I created ${k}$ hash functions and set the bit in the filter for each hash index returned. Here's a graph of the results I obtained from my tests:The problem from the graph is clear: I am getting False Positive rates that are better than the theoretical value. The theoretical value is based on the assumption that each hash index is equally likely, which gives a false-positive probability of $$\Pr(H(a) = 1)^k = \left(1 - \Pr(H(a) = 0)\right)^k = (1 - (1 - 1/n)^{km})^{k} \approx (1 - e^{-k/c})^{k}.$$Using $k = c \ln 2$, $$\Pr(H(a) = 1)^{k} = (1 - e^{-\ln 2})^{k} = (1/2)^{c \ln 2} \approx 0.6185^{c}.$$All my tests are based on the optimal value $k = c \ln 2$, and the theoretical line is a plot of $0.6185^{c}$. My concern is that my False Positive rates are consistently better than the theoretical ones. Does this mean that my implementation is incorrect, or is it a likely possibility given the large sizes of the Bloom Filters and the efficiency of the hashing algorithm? I appreciate any guidance on this. | Achieving better than the theoretical False Positive Rate for Bloom Filters | algorithms;data structures;statistics;bloom filters | The probability approximation you are calculating is used to derive the estimated optimal number of hash functions (which is then rounded, since $c \ln 2$ is generally not an integer). There are many factors to consider: the choice of hash functions, the data set used for the tests, the skew of the approximation, the rounding of $k$, and the fact that the theoretical bound is asymptotic, so it works better for larger $n$ and $m$. I would suggest the paper On the false-positive rate of Bloom filters, which shows how the given approximation is incorrect. Another good resource, Thomas Gerbet, Amrit Kumar, Cedric Lauradoux. The Power of Evil Choices in Bloom Filters. (Research Report) RR-8627, INRIA Grenoble. 2014, shows how the saturation of the filter changes with the given data.
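The answer's first point, that $k = c\ln 2$ must be rounded to an integer, already shifts the curve away from $0.6185^c$. A quick Python check of the asymptotic estimate at the ideal versus the rounded $k$:

from math import ceil, exp, floor, log

def fp_rate(k, c):
    # Asymptotic estimate (1 - e^(-k/c))^k with c = n/m, as in the question.
    return (1.0 - exp(-k / c)) ** k

for c in (4, 8, 12, 16):
    k_ideal = c * log(2)
    k_lo, k_hi = max(1, floor(k_ideal)), ceil(k_ideal)
    k_round = min((k_lo, k_hi), key=lambda k: fp_rate(k, c))
    print(f"c={c:2d}  ideal k={k_ideal:5.2f}  0.6185^c={0.6185 ** c:.2e}  "
          f"k={k_round}  rate={fp_rate(k_round, c):.2e}")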
_webapps.62406 | In Gmail can we make is:unread the default view? That is, when we load Gmail, can we configure it so that is:unread is already selected, thereby showing only unread messages by default?Or can we at least configure Gmail so that it has a 1-click link that only shows unread mail?The reason for this of course is that it is slow and inefficient to have to type is:unread into the Gmail search every time I open it. It would be much more convenient for Gmail to load this view by itself when I open Gmail. | In Gmail can we make 'is:unread' the default view? | gmail | Have you considered just setting your Priority Inbox settings so that Unread is always displayed first? |
_unix.38267 | I am running CentOS 6.2 and I need to create a subdirectory named crypto inside /proc/sys. Inside /proc/sys/crypto, I need to create a file named test which contains the value 1.I have tried many things and nothing works. Can someone help me? | Is it possible to create a directory and file inside /proc/sys? | centos;proc | null |
_codereview.153131 | I'm creating a web app to create pixel maps for large LED displays. The maps are basically large checkerboard patterns of various sizes with different constraints. I'm running into snags trying to draw arrows pointing in certain directions dependent on their position within a) the entire grid, and b) within smaller sub-sections of the map. Upon loading the jsfiddle, you can scroll down and see a correct example. The arrows snake through the red/blue area of the map from left to right, going down one row at a time. Now if you select the second data flow option (the 2nd radio button from the left on the top row of radio buttons) you'll see the direction of the arrows change but the map doesn't draw correctly. I need help getting this running efficiently. Nested loops within loops within loops seems slow. Plus I'm just in over my head and a bit confused. There's a working earlier version of the app at (http://www.blinkingthings.com) if you need a more complete picture of what I'm asking for.I'd also appreciate any criticism of this javascript as it's not my expertise.Javascript: $(function(){ var canvas=document.getElementById(canvas); var ctx=canvas.getContext(2d); var width=128; var height=128; var columns=16; var rows=9; var color1=#d9534f;//redish var color2=#428bca;//bluish var color4=#00FF00;//greenish var color3=#FFFF00;//yellowish var textcolor=#FFFFFF; var datacolor=#FFFFFF; var bordercolor =#5cb85c; var dataStartColor =#5cb85c; var infoBackgroundColor = rgba(255,255,255,.01); var infoForegroundColor = rgba(0,0,255,.1); var upArr = '\u2191'; var downArr = '\u2193'; var leftArr = '\u2190'; var rightArr = '\u2192'; var stopSign = '\uD83D\uDEAB'; var omega = '\u03A9'; var oddOrEven = odd; var colOddEven = odd; var dataFlow = 1; var drawCoords = true; var drawData = true; var drawInfo = true; var drawUser = false; var counter = 1; var xStart=0; var yStart=0; var resWidthLimit=1920; var resHeightLimit=1080; var colsLimit = Math.floor(resWidthLimit/width); var rowsLimit = Math.floor(resHeightLimit/height); var outputsHigh = Math.ceil(rows/rowsLimit); var outputsWide = Math.ceil(columns/colsLimit); var alphabet = a b c d e f g h i j k l m n o p q r s t u v w x y z aa bb cc dd ee ff gg hh ii jj kk ll mm nn oo pp qq rr ss tt uu vv ww xx yy zz aaa bbb ccc ddd eee fff ggg hhh iii jjj kkk lll mmm nnn ooo ppp qqq rrr sss ttt uuu vvv www xxx yyy zzz.split( ); var topEdge = false; var bottomEdge = false; var leftEdge = false; var rightEdge = false; var rowsOdd = false; var columnsOdd = false; var outTopEdge = false; var outBottomEdge = false; var outLeftEdge = false; var outRightEdge = false; var outRowsOdd = false; var outColumnsOdd = false; // references to the input-text elements // used to let user change the rect width & height var $width=document.getElementById('width'); var $height=document.getElementById('height'); var $rows=document.getElementById('rows'); var $columns=document.getElementById('columns'); var $resWidthLimit=document.getElementById('resWidthLimit'); var $resHeightLimit=document.getElementById('resHeightLimit'); var $tileSwap=document.getElementById('tileSwap'); var $wallSwap=document.getElementById('wallSwap'); var $outputSwap=document.getElementById('outputSwap'); var $tilePresets=document.getElementById('tilePresets'); var $outputPresets=document.getElementById('outputPresets'); var $colSlide=document.getElementById('colSlide'); var $rowSlide=document.getElementById('rowSlide'); var $radio1=document.getElementById('radio1'); var 
$radio2=document.getElementById('radio2'); var $radio3=document.getElementById('radio3'); var $radio4=document.getElementById('radio4'); var $radio5=document.getElementById('radio5'); var $radio6=document.getElementById('radio6'); var $radio7=document.getElementById('radio7'); var $radio8=document.getElementById('radio8'); var $drawcoords=document.getElementById('draw_coords_check'); var $drawdata=document.getElementById('draw_data_check'); var $drawinfo=document.getElementById('draw_info_check'); var $drawuser=document.getElementById('draw_user_check'); // set the initial input-text values to the width/height vars $width.value=width; $height.value=height; $rows.value=rows; $columns.value=columns; $resWidthLimit.value=resWidthLimit; $resHeightLimit.value=resHeightLimit; $width.addEventListener(change, function(){ width=this.value; //.value converts input field int to string*** outputsWide = Math.ceil(columns/colsLimit); colsLimit = Math.floor(resWidthLimit/width); draw(); }, false); $height.addEventListener(change, function(){ height=this.value; //.value converts input field int to string*** outputsHigh = Math.ceil(rows/rowsLimit); rowsLimit = Math.floor(resHeightLimit/height); draw(); }, false); $rows.addEventListener(keyup, function(){ rows=this.value; //.value converts input field int to string*** outputsHigh = Math.ceil(rows/rowsLimit); draw(); }, false); $columns.addEventListener(keyup, function(){ columns=this.value; //.value converts input field int to string*** outputsWide = Math.ceil(columns/colsLimit); draw(); }, false); $resWidthLimit.addEventListener(keyup, function(){ resWidthLimit=this.value; //.value converts input field int to string*** colsLimit = Math.floor(resWidthLimit/width); outputsWide = Math.ceil(columns/colsLimit); setTimeout(function() { draw(); }, 500); //had to add small delay to prevent crashing. look into }, false); $resHeightLimit.addEventListener(keyup, function(){ resHeightLimit=this.value; //.value converts input field int to string*** rowsLimit = Math.floor(resHeightLimit/height); outputsHigh = Math.ceil(rows/rowsLimit); setTimeout(function() { draw(); }, 500); //had to add small delay to prevent crashing. 
look into }, false); $wallSwap.addEventListener(click, function(){ var temp = $('#columns').val(); $('#columns').val($('#rows').val()); $('#rows').val(temp); columns=$('#columns').val(); rows=$('#rows').val(); outputsWide = Math.ceil(columns/colsLimit); outputsHigh = Math.ceil(rows/rowsLimit); draw(); }, false); $tileSwap.addEventListener(click, function(){ var temp = $('#width').val(); $('#width').val($('#height').val()); $('#height').val(temp); width=$('#width').val(); height=$('#height').val(); outputsWide = Math.ceil(columns/colsLimit); colsLimit = Math.floor(resWidthLimit/width); outputsHigh = Math.ceil(rows/rowsLimit); rowsLimit = Math.floor(resHeightLimit/height); draw(); }, false); $outputSwap.addEventListener(click, function(){ var temp = $('#resWidthLimit').val(); $('#resWidthLimit').val($('#resHeightLimit').val()); $('#resHeightLimit').val(temp); resWidthLimit=$('#resWidthLimit').val(); resHeightLimit=$('#resHeightLimit').val(); colsLimit = Math.floor(resWidthLimit/width); rowsLimit = Math.floor(resHeightLimit/height); draw(); }, false); $tilePresets.addEventListener(click, function(){ width=$('#width').val(); height=$('#height').val(); outputsHigh = Math.ceil(rows/rowsLimit); rowsLimit = Math.floor(resHeightLimit/height); outputsWide = Math.ceil(columns/colsLimit); colsLimit = Math.floor(resWidthLimit/width); draw(); }, false); $outputPresets.addEventListener(click, function(){ resWidthLimit=$('#resWidthLimit').val(); resHeightLimit=$('#resHeightLimit').val(); rowsLimit = Math.floor(resHeightLimit/height); colsLimit = Math.floor(resWidthLimit/width); draw(); }, false); $radio1.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio2.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio3.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio4.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio5.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio6.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio7.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $radio8.addEventListener(click, function(){ dataFlow=this.value; //.value converts input field int to string*** draw(); }, false); $drawcoords.addEventListener(change, function(){ drawCoords = !drawCoords; //flips boolean of drawCoords t/f draw(); }, false); $drawdata.addEventListener(change, function(){ drawData = !drawData; //flips boolean of drawData t/f draw(); }, false); $drawinfo.addEventListener(change, function(){ drawInfo = !drawInfo; //flips boolean of drawInfo t/f draw(); }, false); $drawuser.addEventListener(change, function(){ drawUser = !drawUser; //flips boolean of drawUser t/f draw(); }, false); draw();//inital draw function draw(){ clearAll(); tilesByOutput(); }//end draw() function clearAll(){ ctx.canvas.width = +width * +columns; //set entire canvas width ctx.canvas.height = +height * +rows; //set entire canvas height ctx.clearRect(0,0,canvas.width,canvas.height); //clear out entirety of canvas xStart=0; //reset x coord to 0 (default) yStart=0; //reset y coord to 0 (default) } function 
allTiles(){ for (var j = 0; j < rows; j++){ //for every row under limit for (var i = 0; i < columns; i++) { //for every column under limit (i % 2) !=1 ? colOddEven = odd : colOddEven = even; //every other column if ((j % 2) !=1){ //for every other row oddOrEven = odd; (i % 2) != 1 ? ctx.fillStyle=red : ctx.fillStyle=blue;//start alternating patter with color option 1 } else { oddOrEven = even; //start alternating patter with color option 2 (i % 2) != 1 ? ctx.fillStyle=blue : ctx.fillStyle=red; } //end if/else ctx.fillRect(xStart,yStart,width,height); //draw single tile console.log(Single Tile Drawn at : ( + xStart + , + yStart + )); xStart = +xStart + +width; //shift starting coords for next column }//end columns for xStart = 0; //reset x coord to 0 (default) for begining of next row yStart = +yStart + +height; //shift starting coords for next row }//end rows for }//end allTiles function tilesByOutput(){ for (var l=0; l<=outputsWide; l++){//for each necessary output (width) //console.log(Output Width = + l); for(var k=0; k<=outputsHigh; k++){//for each necessary output (height) xStart = +colsLimit*l * +width;//0 on first loop, moves to right edge of output after yStart = +rowsLimit*k * +height;//0 on first loop, moves to bottom edge of output after for (var j = rowsLimit*k; (j < rowsLimit*(k+1)); j++){ //for every row wihtin current output's limit for (var i = colsLimit*l; (i < colsLimit*(l+1) ); i++) { //for every column within current output's limit if (i>columns-1) //previous loops are running too many times, this safeguards them. { continue; } else if (j>rows-1){ continue; }//end of for loop safeguard var xLimit = i-(colsLimit*l); var yLimit = j-(rowsLimit*k); i == columns-1 ? rightEdge = true : rightEdge = false; i == 0 ? leftEdge = true : leftEdge = false; j == rows-1 ? bottomEdge = true : bottomEdge = false; j == 0 ? topEdge = true : topEdge = false; (i % 2) != 1 ? columnsOdd = true : columnsOdd = false; (j % 2) != 1 ? rowsOdd = true : rowsOdd = false; xLimit == colsLimit-1 ? outRightEdge = true : outRightEdge = false; xLimit == 0 ? outLeftEdge = true : outLeftEdge = false; yLimit == rowsLimit-1 ? outBottomEdge = true : outBottomEdge = false; yLimit == 0 ? outTopEdge = true : outTopEdge = false; (xLimit % 2) != 1 ? outColumnsOdd = true : outColumnsOdd = false; (yLimit % 2) != 1 ? outRowsOdd = true : outRowsOdd = false; (i % 2) !=1 ? colOddEven = odd : colOddEven = even; //every other column odd or even (for data arrows) //Step 1 : Figure out background color for current tile. (l % 2) !=1 ? ( //if output x coord (l) is odd combo 1 : even combo 2 (k % 2) !=1 ? ( //if output y coord is odd (k) combo 1 : even combo 2 (outRowsOdd) ? ( //for every other row //rows: every other row in current output alternate between c1/c2 oddOrEven = odd, (outColumnsOdd) ? ctx.fillStyle=color1 : ctx.fillStyle=color2//columns: odds c1 : evens c2 (first row, first column is color1) ) : ( //middle j next row oddOrEven = even, (outColumnsOdd) ? ctx.fillStyle=color2 : ctx.fillStyle=color1//columns: odds c2 : evens c1 ) //end j end of rows for output v1 ) : ( //middle k end of combo 1, begin combo 2 //output columns: (outRowsOdd) ? ( //for every other row //rows: every other row in current output alternate between c3/c4 oddOrEven = odd, (outColumnsOdd) ? ctx.fillStyle= color3: ctx.fillStyle=color4//columns: odds c3 : evens c4 ) : (//middle j //rows: next row oddOrEven = even, (outColumnsOdd) ? 
ctx.fillStyle=color4 : ctx.fillStyle=color3//columns: odds c4 : evens c3 ) //end j //rows: )//end k //output columns: ) : ( //middle l //output rows: //end output color alternation 1 (k % 2) !=1 ? ( //if output y coord is odd (k) alternate output colorcombos (red/blue or green/yellow) (outRowsOdd) ? ( //for every other row oddOrEven = odd, (outColumnsOdd) ? ctx.fillStyle=color3 : ctx.fillStyle=color4//start alternating patter with color option 1 ) : ( //middle j oddOrEven = even, //start alternating patter with color option 2 (outColumnsOdd) ? ctx.fillStyle=color4 : ctx.fillStyle=color3 ) //end j ) : ( //middle k (outRowsOdd) !=1 ? ( //for every other row oddOrEven = odd, (outColumnsOdd) ? ctx.fillStyle= color1: ctx.fillStyle=color2//start alternating patter with color option 1 ) : ( //middle j oddOrEven = even, //start alternating patter with color option 2 (outColumnsOdd) ? ctx.fillStyle=color2 : ctx.fillStyle=color1 ) //end j )//end k )//end l ctx.fillRect(xStart,yStart,width,height); //draw single tile counter++; //alert(yLimit + , + xLimit); //draw alphanumeric coordinates if (drawCoords){ //check drawCoords checkbox. no check = no coords var alphaX = xStart; var alphaY = yStart; ctx.fillStyle=textcolor; if (+width < 35 || +height < 35){ //tile size check ctx.font = 8px Helvetica; alphaX = xStart+4; alphaY = yStart+15; } else if (+width < 54 || +height < 54){ //tile size check ctx.font = 10px Helvetica; alphaX = xStart+6; alphaY = yStart+15; } else if (+width < 64 || +height < 64){ //tile size check ctx.font = 14px Helvetica; alphaX = xStart+6; alphaY = yStart+20; } else if (+width < 110 || +height < 110){ //tile size check ctx.font = 24px Helvetica; alphaX = xStart+6; alphaY = yStart+30; } else { ctx.font = 30px Helvetica; alphaX = xStart+10; alphaY = yStart+40; }//end size check ctx.fillText(alphabet[xLimit].toUpperCase() + (yLimit+1),alphaX,alphaY); //draw alpha then number } //ARROW DIRECTION DETERMINATION //draw data flow (default for starting top left with horizontal rows. var arrow = downArr; //default direction switch (dataFlow){ case 1://dataFlow=1 //(j % 2) != 1 ? arrow = rightArr : arrow = leftArr; //even/odd row check i == columns-1 ? ( //last column of entire map (outRowsOdd) ? arrow = downArr : //output row odd (columns == (colsLimit*l)+1 && i == columns-1) ? //only 1 column of rightmost output? arrow = downArr : arrow = leftArr//far right column + single : far right column part of bigger output ) : ( //everything but last column of entire map (outRowsOdd) ? (arrow = rightArr, xLimit == colsLimit-1 ? arrow = downArr : arrow = rightArr )//odd rows default to right, last column of output down, otherwise right. : xLimit == 0 ? arrow = downArr /*even+left edge*/ : arrow = leftArr/*even*/ );//far right edge check break; case 2://dataFlow=2 i == columns-1 ? ( //last column of entire map (outRowsOdd) ? arrow = downArr : //output row odd (columns == (colsLimit*l)+1 && i == columns-1) ? //only 1 column of rightmost output? arrow = downArr : arrow = downArr//far right column + single : far right column part of bigger output ) : ( //everything but last column of entire map (outRowsOdd) ? (arrow = leftArr, xLimit == colsLimit-1 ? arrow = leftArr : arrow = leftArr )//odd rows default to right, last column of output down, otherwise right. : xLimit == 0 ? arrow = downArr /*even+left edge*/ : arrow = rightArr/*even*/ );//far right edge check break; } //ARROW COLOR DETERMINATION if (drawData){ //check drawData checkbox. 
no check = no data path ctx.fillStyle=datacolor; if (dataFlow == 1){ //check which data case and tile being drawn to change first tile in data chain's arrow to green if (yLimit== 0 && xLimit== 0){ //if top left ctx.fillStyle = dataStartColor; } if (rowsOdd) {//if odd row if (yLimit==rowsLimit-1 || j==rows-1){//if odd bottom of section or entire map if(xLimit==colsLimit-1||xLimit==columns-1||i==columns-1){//if last column of section or entire map arrow = stopSign; }//end right edge check else if (columns == colsLimit+1 && i == columns-1 ){//if theres only one column on next section arrow = stopSign; }//end check for single extra column }//end bottom edge check } else {//if even row if (yLimit==rowsLimit-1 || j==rows-1){//if even bottom of section or entire map if (xLimit == 0 || i == 0){ //if first column of section or entire map arrow = stopSign; }//end left edge check }//end bottom edge check }//end current row odd even check } else { } if (+width < 35 || +height < 35){ //tile size check ctx.font = 8px Helvetica; ctx.fillText(arrow,xStart+20,yStart+10); //small } else if (+width < 54 || +height < 54){ //tile size check ctx.font = 10px Helvetica; ctx.fillText(arrow,xStart+25,yStart+15); //med } else if (+width < 64 || +height < 64){ //tile size check ctx.font = 14px Helvetica; ctx.fillText(arrow,xStart+30,yStart+20); //med } else if (+width < 110 || +height < 110){ //tile size check ctx.font = 24px Helvetica; ctx.fillText(arrow,xStart+40,yStart+30); //med } else { ctx.font = 30px Helvetica; ctx.fillText(arrow,xStart+80,yStart+40); //large }//end size check }//end of data draw check// #### Draw Info if (drawInfo) { //draw info tile if checkbox checkes function numberWithCommas(x) { return x.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ,); } tileCount = (+columns * +rows); var hRes = (+width * +columns); var vRes = (+height * +rows); var tPix = (hRes*vRes); var info1 = Total Resolution : +hRes+ x +vRes+. Total tiles : +tileCount+ . (+numberWithCommas(tPix)+) pixels; var info2 = Single Tile Resolution : +width+ x +height; var info3 = Maximum Output Resolution : +resWidthLimit+ x +resHeightLimit+. 
Max tiles per output : +(colsLimit*rowsLimit)+ tiles.; var info4 = Required Outputs : +outputsWide*outputsHigh; var info5 =; if (drawUser){ var info5 = $('#userText').val(); } var rectHeight = 200; var rectWidth = 600; var rectX = leftMarg; var rectY = topMarg; var topMarg = (vRes/2)-(rectHeight/2); var leftMarg = (hRes/2)-(rectWidth/2); ctx.fillStyle =infoBackgroundColor; ctx.fillRect(leftMarg,topMarg,rectWidth,rectHeight); ctx.fillStyle=infoForegroundColor; ctx.font = 12px Lucida Console; ctx.textAlign=center; ctx.font = 14px Lucida Console; ctx.fillText(info5,rectX+(rectWidth/2),rectY+(rectHeight/2)-14); ctx.font = 12px Lucida Console; rectY = rectY+14;//new line ctx.fillText(info1,rectX+(rectWidth/2),rectY+(rectHeight/2)-12); rectY = rectY+14;//new line ctx.fillText(info3,rectX+(rectWidth/2),rectY+(rectHeight/2)-12); rectY = rectY+14;//new line ctx.fillText(info2,rectX+(rectWidth/2),rectY+(rectHeight/2)-12); rectY = rectY+14;//new line ctx.fillText(info4,rectX+(rectWidth/2),rectY+(rectHeight/2)-12); ctx.textAlign=left;//reset text alignment for tile coords } console.log(################################); console.log(# Begin Drawing Tile + alphabet[xLimit] + (+yLimit+1)); console.log(# Tile Unique # + counter + , First Pixel (top-left) : ( + xStart + , + yStart + )); console.log(# Odd or Even : + oddOrEven); console.log(# xLimit is : + xLimit); console.log(# yLimit is : + yLimit); console.log(# Tile Coords (x, y) (i, j) : ( + i + , + j +)); console.log(# Columns : + columns); console.log(# Rows : + rows); console.log(# rowsLimit is : + rowsLimit); console.log(# colsLimit is : + colsLimit); console.log(# Output Coords (x, y) : ( + (l+1) + , + (k+1) + )); console.log(# Outputs high : + outputsHigh); console.log(# Outputs wide : + outputsWide); console.log(# Total Outputs Needed : + outputsHigh*outputsWide); console.log(# Top Edge : + topEdge); console.log(# Bottome Edge : + bottomEdge); console.log(# Left Edge : + leftEdge); console.log(# Right Edge : + rightEdge); console.log(# yLimit is : + yLimit); console.log(# Output Top Edge : + outTopEdge); console.log(# Output Bottom Edge : + outBottomEdge); console.log(# Output Left Edge : + outLeftEdge); console.log(# Output Right Edge : + outRightEdge); console.log(# Map Column is Odd? : + columnsOdd); console.log(# Map Row is Odd? : + rowsOdd); console.log(# Output Column is Odd? : + outColumnsOdd); console.log(# Output Row is Odd? 
: + outRowsOdd); xStart = +xStart + +width; //shift starting coords for next column }//end columns for xStart = +colsLimit*l * +width; //reset x coord to left most side of current output for next row yStart = +yStart + +height; //shift starting coords for next row }//end rows for }//end outputs high check(k) }//end outputs wide check(l) }//end tilesByOutput //jQuery $('.pixelPerfButton').click(function(){ var $this = $(this); $this.toggleClass('btn-danger').toggleClass('btn-primary'); $('#canvas').toggleClass('pixelPerf'); if($this.hasClass('btn-danger')){ $this.text('Pixel Perfect Preview : On'); } else { $this.text('Pixel Perfect Preview : Off'); } }); $('ul#tilePresets li').click(function(){ var $this = $(this); var preWidth= $this.text().substr(0, $this.text().indexOf('x')); var preHeight= $this.text().substr($this.text().indexOf(x) + 1); $('#width').val(preWidth); $('#height').val(preHeight); }); $('ul#outputPresets li').click(function(){ var $this = $(this); var preResWidth= $this.text().substr(0, $this.text().indexOf('x')); var preResHeight= $this.text().substr($this.text().indexOf(x) + 1); $('#resWidthLimit').val(preResWidth); $('#resHeightLimit').val(preResHeight); }); //col slider $('#colSlide').slider({ tooltip: 'show', min: 1, max: 120, value: $('#columns').val() }); var originalVal; $('#colSlide').slider().on('slideStart', function(ev){ originalVal = $('#colSlide').data('slider').getValue(); }); $('#colSlide').slider().on('slideStop', function(ev){ var newVal = $('#colSlide').data('slider').getValue(); if(originalVal != newVal) { $('#columns').val($(this).val()); } columns=$('#columns').val(); //.value converts input field int to string*** outputsWide = Math.ceil(columns/colsLimit); draw(); }); //row slider $('#rowSlide').slider({ tooltip: 'show', min: 1, max: 60, value: $('#rows').val() }); var originalVal2; $('#rowSlide').slider().on('slideStart', function(ev){ originalVal2 = $('#rowSlide').data('slider').getValue(); }); $('#rowSlide').slider().on('slideStop', function(ev){ var newVal = $('#rowSlide').data('slider').getValue(); if(originalVal2 != newVal) { $('#rows').val($(this).val()); } rows=$('#rows').val(); //.value converts input field int to string*** outputsHigh = Math.ceil(rows/rowsLimit); draw(); });});//main functionJS Fiddle link : http://jsfiddle.net/ganLf56k/ | Navigating HTML5 canvas checkerboard with nested loops / multi-dimensional arrays | javascript;jquery;canvas | null |
_unix.226524 | In strace outputs, the paths to the libraries that executables call are in calls to open(). Is this the system call used by executables that are dynamically linked? What about dlopen()? open() isn't a call I'd have guessed would play a role in the execution of programs. | What system call is used to load libraries in Linux? | dynamic linking;shared library;strace;dynamic loading | null |
_unix.347808 | I am using latest ubuntu 16.04 LTS. For some days after installing system worked fine but after some days it start showing this error.Welcome to emergency mode! After logging in, type journalctl -xb to view system logs, systemctl reboot to reboot, systemctl default or ^D to try again to boot into dafault mode.Press Enter for maintenance(or press control-D to continue):But there is one way using which I can start system. On boot up I go to recovery menu then clean package and then resume the system. It works but , it is time consuming to do after every boot up. Suggest something simple and cleaner to resolve this problem. Before going into emergency mode it gave message as : [12.320307] intel_soc_dts_thermal:request_threaded_irq ret -22In case anyone interested in seeing log files : http://www.filehosting.org/file/details/644902/error_log.txt | Ubuntu gives message Welcome to emergency mode ! | linux;ubuntu | The Emergency Mode sometime means that your file system may be corrupted. In such cases, you will be left out with a prompt to go nowhere.All you have to do is perform a file system check using,fsck.ext4 /dev/sda3where sda3 can be your partition and if you are using ext3 file system, change the command as follows:fsck.ext3 /dev/sda3About the partition number, Linux shows you the partition before arriving at the prompt.This should solve the problem.Hope this helps. |
_unix.61807 | I have setup OpenVPN. The server is a router running DD-WRT with OpenVPN setup. The client is Ubuntu 12.04. The connection is established without issues. I can ping clients at the VPN side. The problem is every 5-10min (seems slightly variant) the following occurs on the Ubuntu Client:Fri Jan 18 17:01:45 2013 OpenVPN 2.2.1 x86_64-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Mar 30 2012Fri Jan 18 17:01:45 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executablesFri Jan 18 17:01:45 2013 WARNING: file 'client1.key' is group or others accessibleFri Jan 18 17:01:45 2013 UDPv4 link local: [undef]Fri Jan 18 17:01:45 2013 UDPv4 link remote: [AF_INET]VPN_IP_ADDY:1194Fri Jan 18 17:01:47 2013 [VPN_HOST_NAME] Peer Connection Initiated with [AF_INET]VPN_IP_ADDY:1194Fri Jan 18 17:01:49 2013 TUN/TAP device tun0 openedFri Jan 18 17:01:49 2013 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0Fri Jan 18 17:01:49 2013 /sbin/ifconfig tun0 192.168.66.6 pointopoint 192.168.66.5 mtu 1500Fri Jan 18 17:01:49 2013 WARNING: potential route subnet conflict between local LAN [192.168.1.0/255.255.255.0] and remote VPN [192.168.1.0/255.255.255.0]Fri Jan 18 17:01:49 2013 Initialization Sequence CompletedFri Jan 18 17:04:10 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:04:10 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:04:10 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:04:10 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:04:10 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:04:10 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:04:12 2013 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)Fri Jan 18 17:07:11 2013 [VPN_HOST_NAME] Inactivity timeout (--ping-restart), restartingFri Jan 18 17:07:11 2013 SIGUSR1[soft,ping-restart] received, process restartingFri Jan 18 17:07:13 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executablesFri Jan 18 17:07:13 2013 Re-using SSL/TLS contextFri Jan 18 17:07:13 2013 UDPv4 link local: [undef]Fri Jan 18 17:07:13 2013 UDPv4 link remote: [AF_INET]VPN_IP_ADDY:1194Fri Jan 18 17:07:14 2013 [VPN_IP_ADDY] Peer Connection Initiated with [AF_INET]VPN_IP_ADDY:1194Fri Jan 18 17:07:17 2013 Preserving previous TUN/TAP instance: tun0Fri Jan 18 17:07:17 2013 Initialization Sequence CompletedIt then auto reconnects and 5-10min later it repeats this cycle forever.It times out even though I'm running a rsync transfer over the VPN. The timeout and reconnect does not effect the rsync transfer except stalling it for 30s. Clients do not notice except that file transfers will stop for 30s while it attempts to reconnect. I'm using in my server config 'keepalive 10 120'. I also tried 'keepalive 1 180' which made no difference. I think the connection refused errors are the keepalive ping attempts? | OpenVPN dropping connection every time interval | ubuntu;vpn;openvpn;dd wrt | null |
_unix.291113 | On my Lenovo T460p, I have a delay way before touch pad movement or scrolling is registered. Note: This delay way is not to be confused with inactive areas of the touchpad as configurable through synclient and used e.g. for clickpad features. This can more be compared to dead zones of joysticks, which only react after a certain amount of movement.When I touch the pad and start moving my finger, at first, nothing happens. I have to move the finger for a few millimeters before the mouse pointer would respond. It then registers the movement completely, which means that whenever I start using the touch pad, I have a skip by tens of pixels in the pointer movement. This makes the touch pad unusable for any precision work, such as hitting the close button on a tab.This also happens after I let the finger rest within a movement for a second or so. The same happens for two-finger scrolling. These are the xinput settings:Device 'SynPS/2 Synaptics TouchPad': Device Enabled (139): 1 Coordinate Transformation Matrix (141): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000 Device Accel Profile (275): 1 Device Accel Constant Deceleration (276): 2.500000 Device Accel Adaptive Deceleration (277): 1.000000 Device Accel Velocity Scaling (278): 12.500000 Synaptics Edges (297): 1574, 5369, 1354, 4571 Synaptics Finger (298): 25, 30, 0 Synaptics Tap Time (299): 180 Synaptics Tap Move (300): 254 Synaptics Tap Durations (301): 180, 100, 100 Synaptics ClickPad (302): 0 Synaptics Middle Button Timeout (303): 75 Synaptics Two-Finger Pressure (304): 282 Synaptics Two-Finger Width (305): 7 Synaptics Scrolling Distance (306): 115, 115 Synaptics Edge Scrolling (307): 0, 0, 0 Synaptics Two-Finger Scrolling (308): 1, 1 Synaptics Move Speed (309): 1.000000, 1.750000, 0.034590, 0.000000 Synaptics Off (310): 0 Synaptics Locked Drags (311): 0 Synaptics Locked Drags Timeout (312): 5000 Synaptics Tap Action (313): 0, 0, 0, 0, 1, 3, 2 Synaptics Click Action (314): 1, 3, 2 Synaptics Circular Scrolling (315): 0 Synaptics Circular Scrolling Distance (316): 0.100000 Synaptics Circular Scrolling Trigger (317): 0 Synaptics Circular Pad (318): 0 Synaptics Palm Detection (319): 0 Synaptics Palm Dimensions (320): 10, 200 Synaptics Coasting Speed (321): 20.000000, 50.000000 Synaptics Pressure Motion (322): 30, 160 Synaptics Pressure Motion Factor (323): 1.000000, 1.000000 Synaptics Grab Event Device (324): 0 Synaptics Gestures (325): 1 Synaptics Capabilities (326): 1, 0, 0, 1, 1, 1, 1 Synaptics Pad Resolution (327): 65, 44 Synaptics Area (328): 0, 0, 0, 0 Synaptics Noise Cancellation (329): 28, 28 Device Product ID (267): 2, 7 Device Node (266): /dev/input/event1Has anyone a solution for this?I have tried to set Noise Cancellation to 0, 0, but that did not help. This is on Debian testing (stretch). Fedora 24 Workstation Live Image also shows the same issue. | How to get rid of the delay way before Lenovo touch pad reacts? | touchpad;xinput | null |
_cstheory.37491 | Assume we know a parameter $n\in\mathbb N$, and then get to observe a sequence of elements $x_1,\ldots, x_n$, one at a time.Our goal is to count the number of distinct elements in $x_1,\ldots, x_n$, and succeed with probability $1-\epsilon$.A simple approach would be to compute a $\log\left({n\choose 2}\epsilon^{-1}\right)$ bits fingerprint of each element, and then count the number of distinct fingerprints.Since the number of distinct elements is a most $n$, with probability of at least $1-\epsilon$, all fingerprints of distinct elements will be different.This gives us a total of $\approx n\log (n^2\epsilon^{-1})-n$ bits of space.But is this anywhere close to optimal? Can we perhaps use only $O(n\log\epsilon^{-1})$ bits for the problem? What would be a lower bound for this problem?EDIT: I'm specifically interested in computing the number of distinct elements exactly, with high probability, and not in approximation algorithms.In the paper An Optimal Algorithm for the Distinct Elements Problem, the authors give a $O(\gamma^{-2}+\log n)$ bits algorithm for computing a $(1+\gamma)$ approximation with high probability, and claim that this is optimal.However, setting $\gamma<n^{-1}$ for getting exact count with high probability gives a $\Omega(n^2)$ bits algorithm, which seems worse than the $O(n\log n)$ proposed above.They do not assume that $n$ is known in advance, which may explain this difference. | How much memory is needed for counting distinct elements in a stream exactly with high probability | ds.algorithms;lower bounds;data streams | null |
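The baseline scheme sketched in the question is easy to prototype. Here is a minimal Python version, with a truncated cryptographic hash standing in for the pairwise-independent fingerprint family; the Python set merely stands in for storing the $b$-bit fingerprints, so this illustrates the scheme and its space accounting, not a space-optimal implementation.

import hashlib
from math import ceil, log2

def distinct_count(stream, n, eps):
    # b = ceil(log2(C(n,2)/eps)) bits per fingerprint; by a union bound over
    # the C(n,2) pairs, distinct items collide with probability at most eps.
    b = ceil(log2(n * (n - 1) / 2 / eps))
    mask = (1 << b) - 1
    seen = set()
    for x in stream:
        h = hashlib.blake2b(repr(x).encode(), digest_size=8).digest()
        seen.add(int.from_bytes(h, "big") & mask)
    return len(seen)

print(distinct_count(["a", "b", "a", "c"], n=4, eps=0.01))  # 3 (with high probability)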
_codereview.146700 | The delay sequence has been fixed so I can move to the next step which are the Retry and Breaker.(Just ignore the console output here and there. There is no logging yet so it helps me to see what's going on in LINQPad).Both are derived from a common type Try:public abstract class Try{ public abstract bool Execute(Action action, Action<AttemptInfo> onException) public virtual bool Execute(Action action) { return Execute(action, ex => { }); } public abstract Task<bool> ExecuteAsync(Action action, CancellationToken cancellationToken, Action<AttemptInfo> onException); public Task<bool> ExecuteAsync(Action action) { return ExecuteAsync(action, CancellationToken.None, ex => { }); }}The Retry provides some additional fields about the state and provides feedback to the calling method via the AttemptInfo type.public class AttemptInfo{ internal AttemptInfo(Exception exception, int count) { Exception = exception; Count = count; } public Exception Exception { get; } public int Count { get; } public bool Handled { get; set; }}public class Retry : Try{ private readonly DelaySequence _delaySequence; private readonly List<Exception> _exceptions = new List<Exception>(); public IEnumerable<Exception> Exceptions => _exceptions.AsReadOnly(); public int Count { get; private set; } public Retry(DelaySequence delaySequence) { _delaySequence = delaySequence; } public override bool Execute(Action action, Action<AttemptInfo> onException) { foreach (var delay in _delaySequence) { try { Count++; action(); return true; } catch (Exception ex) { _exceptions.Add(ex); var attempt = new AttemptInfo(ex, Count); onException(attempt); if (!attempt.Handled) { throw; } } Thread.Sleep(delay); } return false; } public override async Task<bool> ExecuteAsync(Action action, CancellationToken cancellationToken, Action<AttemptInfo> onException) { foreach (var delay in _delaySequence) { cancellationToken.ThrowIfCancellationRequested(); try { Count++; action(); return true; } catch (Exception ex) { _exceptions.Add(ex); var attempt = new AttemptInfo(ex, Count); onException(attempt); if (!attempt.Handled) { throw; } } await Task.Delay(delay, cancellationToken); } return false; }}The Breaker acts as a decorator for the Retry.public class Breaker : Try{ private readonly Retry _retry; public Breaker(Retry retry, BreakerThreshold threshold) { _retry = retry; Threshold = threshold; } public BreakerThreshold Threshold { get; } public DateTime? LastExceptionOn { get; private set; } public int ExceptionCount { get; private set; } public BreakerState State { get; private set; } private void Reset() { LastExceptionOn = null; ExceptionCount = 0; State = BreakerState.Closed; } public override bool Execute(Action action, Action<AttemptInfo> onException) { return _retry.Execute(() => { var hasOpenTimedout = (DateTime.UtcNow - LastExceptionOn) > Threshold.Timeout; if (Threshold.HasTimedout(LastExceptionOn)) { Reset(); Console.WriteLine(You may try again.); } if (State == BreakerState.Closed) { action(); } }, attempt => { if (Threshold.Exceeded(LastExceptionOn, ExceptionCount)) { State = BreakerState.Open; attempt.Handled = true; Console.WriteLine(It's enough for now! 
+ Thread.CurrentThread.ManagedThreadId); return; } LastExceptionOn = DateTime.UtcNow; ExceptionCount++; onException(attempt); }); } public override async Task<bool> ExecuteAsync(Action action, CancellationToken cancellationToken, Action<AttemptInfo> onException) { return await _retry.ExecuteAsync(() => { var hasOpenTimedout = (DateTime.UtcNow - LastExceptionOn) > Threshold.Timeout; if (hasOpenTimedout) { Reset(); Console.WriteLine(You may try to dig again.); } if (State == BreakerState.Closed) { action(); } }, cancellationToken, attempt => { if (Threshold.Exceeded(LastExceptionOn, ExceptionCount)) { State = BreakerState.Open; attempt.Handled = true; Console.WriteLine(It's enough! + Thread.CurrentThread.ManagedThreadId); return; } LastExceptionOn = DateTime.UtcNow; ExceptionCount++; onException(attempt); }); }}It's supported by two helper types:public enum BreakerState{ Closed, Open, //HalfOpen, // I didn't find any use for this yet.}public class BreakerThreshold{ public BreakerThreshold(int maxExceptionCount, TimeSpan timespan, TimeSpan timeout) { MaxExceptionCount = maxExceptionCount; TimeSpan = TimeSpan; Timeout = timeout; } public int MaxExceptionCount { get; } public TimeSpan TimeSpan { get; } public TimeSpan Timeout { get; } public bool Exceeded(DateTime? exceptionOn, int count) { return exceptionOn.HasValue && DateTime.UtcNow - exceptionOn > TimeSpan && count > MaxExceptionCount; } public bool HasTimedout(DateTime? exceptionOn) { return (DateTime.UtcNow - exceptionOn) > Timeout; } }Usage example:var counter = 0;var failingAction = new Action(() =>{ if (counter++ < 6) { Console.WriteLine($Plant potato! Attempt {counter}); throw new Exception(Soil too hard to dig!); } Console.WriteLine($Finally planted! Attempt: {counter});});var retry = new Retry(new RegularDelaySequence(TimeSpan.FromSeconds(0.5), 30));var breaker = new Breaker(retry, new BreakerThreshold(2, TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(3)));breaker.Execute(failingAction, attempt =>{ attempt.Handled = true;});// Wait for the breaker to timeout.Thread.Sleep(3200);breaker.Execute(failingAction, attempt =>{ attempt.Handled = true;});Output:Plant potato! Attempt 1Soil too hard to dig!Plant potato! Attempt 2Soil too hard to dig!Plant potato! Attempt 3Soil too hard to dig!Plant potato! Attempt 4It's enough for now!You may try again.Plant potato! Attempt 5Soil too hard to dig!Plant potato! Attempt 6Soil too hard to dig!Finally planted! Attempt: 7What I particularly don't like are the 99% identical loops in the sync and async APIs. They differ only by the wait and by the async keyword.Oh, and one more thing. I know there is something like Polly but I like to have things my way besides it helped me to understand a few things already so it's a good exercise at the same time. | Try again, and again, and again... 
but not too often because the potatoes won't grow | c#;design patterns;error handling;inheritance;async await | You can get rid of some code duplication by extracting the try/catch block in the Execute methods to a new method:private bool ExecuteAction(Action action, Action<AttemptInfo> onException){ try { Count++; action(); return true; } catch (Exception ex) { _exceptions.Add(ex); var attempt = new AttemptInfo(ex, Count); onException(attempt); if (!attempt.Handled) { throw; } return false; }}Then both Execute methods can call this method instead:public override bool Execute(Action action, Action<AttemptInfo> onException){ foreach (var delay in _delaySequence) { if (ExecuteAction(action, onException)) return true; Thread.Sleep(delay); } return false;}public override async Task<bool> ExecuteAsync(Action action, CancellationToken cancellationToken, Action<AttemptInfo> onException){ foreach (var delay in _delaySequence) { cancellationToken.ThrowIfCancellationRequested(); if (ExecuteAction(action, onException)) return true; await Task.Delay(delay, cancellationToken); } return false;}(I wish I could just call it in a single line and it either returns a value or throws an exception, but you only return the result if it's true.)Now the rest of the duplication becomes more obvious - I'd like to extract the loop and final return false to a new method that takes two more delegates - one as a pre-execution action (cancellationToken.ThrowIfCancellationRequested() for the async method, nothing for the sync method) and one for the post-execution action (Thread.Sleep(delay), await Task.Delay(delay, cancellationToken)). Each public method would then call that private method with four delegate parameters, and the entirety of the loop/execution could be in one method. But as my C# knowledge is really based on 4.0, I don't know enough about async/await to know whether that's feasible. |
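A hedged sketch of that extraction (building on the ExecuteAction helper above; Task.CompletedTask needs .NET 4.6+, and the names are illustrative). Both public methods delegate to one core loop; the synchronous variant passes a no-op pre-step and a blocking delay, so its .Result call cannot deadlock because every task on that path completes synchronously:

private async Task<bool> ExecuteCoreAsync(Action action, Action<AttemptInfo> onException, Action beforeAttempt, Func<TimeSpan, Task> afterAttempt)
{
    foreach (var delay in _delaySequence)
    {
        beforeAttempt();
        if (ExecuteAction(action, onException)) return true;
        await afterAttempt(delay);
    }
    return false;
}

public override bool Execute(Action action, Action<AttemptInfo> onException)
{
    return ExecuteCoreAsync(action, onException,
        beforeAttempt: () => { },
        afterAttempt: d => { Thread.Sleep(d); return Task.CompletedTask; }).Result;
}

public override Task<bool> ExecuteAsync(Action action, CancellationToken cancellationToken, Action<AttemptInfo> onException)
{
    return ExecuteCoreAsync(action, onException,
        beforeAttempt: cancellationToken.ThrowIfCancellationRequested,
        afterAttempt: d => Task.Delay(d, cancellationToken));
}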
_unix.166408 | I need help installing my TP-Link USB dongle on Debian Wheezy. I have already tried a number of things and nothing seemed to work. Here is the output of some commands:
$ lsusb
Bus 002 Device 005: ID 0bda:8178 Realtek Semiconductor Corp. RTL8192CU 802.11n WLAN Adapter
$ lsmod | grep rtl
rtl8192cu 74897 0
rtlwifi 81393 1 rtl8192cu
rtl8192c_common 52602 1 rtl8192cu
mac80211 192806 3 rtl8192c_common,rtlwifi,rtl8192cu
cfg80211 137243 2 mac80211,rtlwifi
usbcore 128741 5 ehci_hcd,usbhid,rtlwifi,rtl8192cu
I tried installing the driver from the official Realtek website and the install succeeded, but the dongle still does not work: no result from either ip link or ifconfig. Any ideas? | Install usb wifi tl-wn823n on Debian | debian;wifi;drivers;wlan | null
_unix.163737 | I'm not sure what I should be googling, or whether FUSE does this (I suspect not). I'd like to create a virtual block device for which all forms of access, for example reads and writes, go directly to my app.
I know I can have a file used as a block device by doing dd if=/dev/zero of=~/test count=100k and then creating a loopback for it with losetup /dev/loop0 ~/test. But I would like the accesses to go directly to my app instead of to a file. I hope this question is fairly clear. | virtual block device | block device | null
_unix.343227 | I have a list from an inventory and another list from management. I'm trying to find the IPs that appear in both files and write the common entries to another file.
I tried using diff, but the output did not make sense:
diff -buy list1 list2
Then I tried egrep with IPs from list1, but I think I used the wrong syntax:
egrep -o '192.168.*|192.1.69' list2
I'm not sure what the correct approach is. For example, list1 may contain:
192.168.1.1
192.168.1.2
192.168.1.3
192.168.2.1
and I want to find these IPs in list2. | Compare different IPs in two files? | text processing;grep;scripting;file comparison | Solution in bash or a similar shell with process substitution using the <(...) form:
comm -1 -2 <(sort list1) <(sort list2)
Should you have duplicate entries in list2, add the -u option to the sort call.
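A grep-based alternative, in case process substitution is not available in your shell (it reads list1 as fixed, full-line patterns, so no sorting is needed, at the cost of holding list1 in memory):

grep -Fx -f list1 list2

This prints the lines of list2 that also appear, exactly, as lines of list1.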
_codereview.91037 | for (int i = 0; i < keyList.Count; i++)
{
    if (oldDic.ContainsKey(keyList[i].ToString()))
    {
        if (newDic[keyList[i].ToString()].ToString() == oldDic[keyList[i].ToString()].ToString())
        {
            //ReminderBackupLog("Same");
        }
        else
        {
            isChnagedSectionFields = "Yes";
            string oldValue = oldDic[keyList[i].ToString()].ToString();
            string newValue = newDic[keyList[i].ToString()].ToString();
            table = table + "<tr style='border: 1px solid black;'><td style='border: 1px solid black;'>" + colNames[keyList[i].ToString()].ToString() + "</td><td style='border: 1px solid black;'>" + oldValue + "</td><td style='border: 1px solid black;'>" + newValue + "</td></tr>";
        }
    }
    else
    {
        isChnagedSectionFields = "Yes";
        string newValues = newDic[keyList[i].ToString()].ToString();
        table = table + "<tr style='border: 1px solid black;'><td style='border: 1px solid black;'>" + colNames[keyList[i].ToString()].ToString() + "</td><td style='border: 1px solid black;'>" + "" + "</td><td style='border: 1px solid black;'>" + newValues + "</td></tr>";
    }
} | Generating a table of differences between two dictionaries | c#;hash table | null
_unix.53206 | The other day, I deleted a user from one of our servers. That user had the ID 1002. Today, I added a new user to the system. To my surprise, he got the user ID 1002. Because I neglected to delete the home directory of the deleted user, the new user now owns the homedir of the old user (as well as all other resources that were previously owned by 1002). I would have assumed that user IDs are never reused, to avoid conflicts like this. Why are they recycled, and should I care/take precautions? | Why are user IDs recycled? | users | When you delete a user, the user information is completely removed, so there is no direct record that that ID was ever used. (The authoritative user information is stored in /etc/passwd, which is a simple list.) To prevent this, either:
force another ID when creating new users, or
keep the user entry around (just disable logins) as long as you haven't cleaned up the corresponding files.
(find's -user and -nouser options help with this.)
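For example (a sketch; the UID values are illustrative):

# force a specific, never-used ID when creating the account
useradd -u 1100 newuser

# find files still owned by the deleted user's numeric ID
find / -xdev -uid 1002

# list files whose owning user no longer exists at all
find / -xdev -nouser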
_unix.308129 | Updating the kernel is not an option. I'm getting the following message when trying to run iptables:
root@mail:/etc/postfix# iptables -L
libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/2.6.32-5-xen-amd64/modules.dep.bin'
iptables v1.4.14: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
UPDATE: after running
root@mail:/home/admin# sudo depmod
I get
ERROR: could not open directory /lib/modules/2.6.32-5-xen-amd64: No such file or directory
FATAL: could not search modules: No such file or directory
and after
root@mail:/home/admin# sudo modinfo ip_tables
I get
libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/2.6.32-5-xen-amd64/modules.dep.bin'
ERROR: Module alias ip_tables not found.
It looks like a kernel update is a must. | iptables - could not open moddep file' | debian;iptables | Did you recently upgrade the kernel on this installation? Typically iptables is built into the kernel and by default is not an external module (typically; I'm sure there are plenty of situations where that is not the case). Try the depmod command to have the module load order recalculated; it fixes this issue on occasion:
sudo depmod
You will need to reboot after running depmod for it to have an impact. If you get an error, please update your question with it. Then see if ip_tables.ko is present at all:
sudo modinfo ip_tables
If it's not loaded, try loading it:
sudo modprobe ip_tables
Lastly, as a potentially valuable data point, see what iptables-related kernel modules are in use with this command, and update your question with the details:
cat /proc/net/ip_tables_matches
If it is just not found, try to see if the file is on the system:
find / -name ip_tables.ko
If it's not, I believe you will have to at a minimum rebuild (or reinstall from packages) the kernel modules for your kernel release.
_scicomp.14724 | I have a nonlinear differential algebraic system in time, arising from the weak formulation of a coupled transient nonlinear 1D problem. The system roughly looks like:
M*x[n] = (A + c1*I) * x[n-1] + dt*N(x[n-1], w[n-1])
B*w[n] = B * w[n-1] + ddt*N2(x[n-1], w[n-1])
Here all capital letters represent matrices, x(y,t) and w(y,t) are the unknowns, and N and N2 are the nonlinear parts of the system.
I was thinking of using an initial guess for both w[n-1] and x[n-1], solving the resulting nonlinear algebraic system, updating the values of x[n-1] and w[n-1], and repeating the process. Would that work?
Second, should I take the time steps for w and x to be the same? I guess this decision has to come from the physics, but from the perspective of numerical stability in time I need some guidance. Regards! | suggestion needed for solution algorithm for a nonlinear differential algebraic system | nonlinear equations | null
_unix.129547 | I have a recurring problem where I need to mount a drive of a Win32 machine on my LAN so I can read and write files on it. But I have no control over that machine, and it frequently gets turned off or rebooted without warning.
When that happens, directories I'm in and files that are open turn into kernel-level blocks: I can't even kill -9 the affected processes, and I have to wait well over 15 minutes for those mount points to start expiring. And that's even when using umount -a -t cifs -f -l.
Nothing I do seems to alleviate the problem, and I frequently get non-advice such as:
Don't mount non-servers and expect them to work.
Get the person whose files I'm trying to access to not use Samba/CIFS.
Get them not to turn their computer off.
The last one is especially absurd considering Win32's propensity to need regular reboots. A way to tell the kernel "look, that mount is not going to come back, please stop blocking everything waiting for it" would be nice. | How does one clean up a stale CIFS mount | kernel;cifs | null
_codereview.158823 | I have implemented in C# a stack data structure, without using any existing C#/.NET containers (data structures). I have used TDD technique to implement this requirements, so you can find tests, too. All my tests are passing, so I guess stack is behaving correctly. I would be very pleased to have a code review for this implementation.Please do not recommend a generic implementation :-)using System;using Microsoft.VisualStudio.TestTools.UnitTesting;using StackDataStructure;namespace StackTesting{ [TestClass] public class Tests { [TestMethod] public void TestEmptyStackSize() { var stack = new Stack(); Assert.IsTrue(stack.IsEmpty()); } [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void EmptyStackPopException() { var stack = new Stack(); stack.Pop(); } [TestMethod] public void PushingAndPoppintItemsToStack() { var stack = new Stack(); stack.Push(1); Assert.AreEqual(1, stack.Size); stack.Push(2); Assert.AreEqual(2, stack.Size); Assert.AreEqual(2, stack.Pop()); Assert.AreEqual(1, stack.Size); Assert.AreEqual(1, stack.Pop()); Assert.IsTrue(stack.IsEmpty()); Assert.IsTrue(stack.Size == 0); stack.Push(10); Assert.IsTrue(stack.Size == 1); Assert.IsTrue(10 == stack.Pop()); Assert.IsTrue(stack.IsEmpty()); } }}using System;namespace StackDataStructure{ internal class Node { internal int value; internal Node underTop; } public class Stack { private int size; private Node top; public int Size { get { return size; } } public bool IsEmpty() { return top == null; } public int Pop() { if (IsEmpty()) throw new InvalidOperationException(Stack is empty); int value = top.value; top = top.underTop; size--; return value; } public void Push(int v) { top = new Node{ value = v, underTop = top }; size++; } }} | Stack data structure implementation | c#;unit testing;stack | The code and the test look very good. I only found these nit-picks:There's a typo in the method name Poppint.The test sometimes uses Assert.IsTrue(a == b), which should be Assert.IsEqual(a, b).The name underTop does not sound perfect to me, since it refers to a top, which is meaningless for a single node. A single node doesn't have a top, therefore I would call it next or below.The property Size should be renamed to Count to align with the other collection types.I prefer to have the same namespace for the main code and the testing code. But I don't know the official convention for organizing tests. |
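A sketch of the class with those suggestions applied (the names Count and next are the proposals above, not the original code; the behaviour is unchanged):

public class Stack
{
    private sealed class Node
    {
        internal int value;
        internal Node next; // renamed from underTop: a single node has no notion of a top
    }

    private Node top;

    // renamed from Size to align with the other .NET collection types
    public int Count { get; private set; }

    public bool IsEmpty() { return top == null; }

    public void Push(int v)
    {
        top = new Node { value = v, next = top };
        Count++;
    }

    public int Pop()
    {
        if (IsEmpty())
            throw new InvalidOperationException("Stack is empty");
        int value = top.value;
        top = top.next;
        Count--;
        return value;
    }
}

On the test side, the renamed method would read PushingAndPoppingItemsToStack, and the Assert.IsTrue(a == b) calls would become Assert.AreEqual(expected, actual).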
_unix.295125 | I have bunch of log files which I am trying to encrypt with public/private key using openssl and save to my NAS but it is failing. My log files are in the following path :/var/SYSLOGS/hosts/archiveMy public key and private key are in /etc/log-enc/[root@NAG01 log-enc]# ls -ltotal 8-rw-r--r-- 1 root root 891 Jul 11 15:58 syslog_privalye_key.pem-rw-r--r-- 1 root root 272 Jul 11 15:59 syslog_public_key.pemNow I am trying to execute following commandIf I am executing the same command one by one, then there is no issue. for file in `find /var/SYSLOGS/hosts/archive/`do FILE_BASE=$(basename $file)echo $file=>/NFS/Nag01/syslogs/hosts/$FILE_BASE.encopenssl rsautl -encrypt -inkey /etc/log-enc/syslog_public_key.pem -pubin -in $file -out /NFS/Nag01/syslogs/hosts/$FILE_BASE.encdoneHere are the error logsRSA operation error140628568049480:error:0406D06E:rsa routines:RSA_padding_add_PKCS1_type_2:data too large for key size:rsa_pk1.c:151:/var/SYSLOGS/hosts/archive/192.168.33.5.log-20160131.gz=>/NFS/Nag01/syslogs/hosts/192.168.33.5.log-20160131.gz.encRSA operation error140123978278728:error:0406D06E:rsa routines:RSA_padding_add_PKCS1_type_2:data too large for key size:rsa_pk1.c:151:/var/SYSLOGS/hosts/archive/app02.log-20160306.gz=>/NFS/Nag01/syslogs/hosts/app02.log-20160306.gz.enc/var/SYSLOGS/hosts/archive/192.168.34.8.log-20160227.gz=>/NFS/Nag01/syslogs/hosts/192.168.34.8.log-20160227.gz.encRSA operation error139777258493768:error:0406D06E:rsa routines:RSA_padding_add_PKCS1_type_2:data too large for key size:rsa_pk1.c:151:/var/SYSLOGS/hosts/archive/192.168.31.3.log-20160511.gz=>/NFS/Nag01/syslogs/hosts/192.168.31.3.log-20160511.gz.encHere are the raw files.[root@NAG01 log-enc]# ls -l /var/SYSLOGS/hosts/archive/192.168.33.5.log-20160131.gz-rw-------. 1 root root 3569 Jan 31 04:16 /var/SYSLOGS/hosts/archive/192.168.33.5.log-20160131.gz[root@NAG01 log-enc]# ls -l /var/SYSLOGS/hosts/archive/192.168.34.8.log-20160227.gz-rw-------. 1 root root 2142 Feb 27 03:11 /var/SYSLOGS/hosts/archive/192.168.34.8.log-20160227.gz | mass file encryption not working with openssl | logs;encryption;openssl | null |
_cogsci.10079 | This question is inspired by a question I answered on Health. Can we erase problematic memories to aid recovery from depression?A depressed person asked how to erase specific unpleasant memories in order to manage his depression. My answer was basically that's impossible, here is how you deal with these memories instead. And then somebody asked in the comments why it is impossible. I thought of tackling this question and realized I can't do it well. My best attempt was something like, First, you have to find out which millions of neurons (out of the billions the person has) light up together to represent 'a memory', and we don't have a method for brain imaging with that resolution. Second, you have to stop them from activating in exactly that combination, while preserving their ability to activate in any other combination, and we have no idea how to achieve that. But this level of explanation does not satisfy me. It is very vague, and I'm not even certain it's correct. To our best knowledge, what would we have to do if we wanted to erase a specific memory of an event happening to a person? And how far is our technology from achieving each of the steps needed? | Is it possible to erase problematic memories? | memory;cognitive neuroscience | null |
_datascience.11292 | I have about 3,000,000 samples and each sample is described by a list of size about 20. Some elements in this list are categorical, for example name of cities, day of week, etc. (some categories have a large number of options, for example one category is url with more than 700,000 unique elements in my dataset!). Also some elements have real values, for example for time of day.My data is labelled (2 categories,) and I need to train a classifier for test data. I am inclined towards decision tree or random forest since they seem to be a good choice for this type of problem.Now my questions are:1) How do I pre-process categorical data? one-hot-encoding seem to be the right choice but given that some of my categories have huge number of possible values, one hot encoder will produce very long words! am I correct?2) How do I combine data from different categories? For example data from category 'cities' with data from 'urls', since they have different lengths. Do I simply concatenate them?3) How can I combine categorical data with real valued data, for example 'name of cities' with 'time of day' to produce one matrix that can then be passed to decision tree classifier?4) Are there any special normalisation, etc. that I have to do before passing data to classifier?I plan to use python and Scikit for this task.Many thanks for your help. | decision trees on mix of categorical and real value parameters | bigdata;scikit learn;decision trees;categorical data | null |
_unix.256877 | On Kali Linux 2, before you are greeted with the GUI login screen, there is some pre-GUI text that scrolls on the screen. It shows you what modules and programs are working correctly, which ones failed, etc.Well, OpenVas always failed, so I did some commands to make it not fail, and now there is no text at all. That may or may not be the reason that the text is gone. I just know that it was there before, and is gone now.So if you have any suggestions on how to get it back, please share.Thanks | Kali Linux 2.0 - Startup Text has disappeared | kali linux;init | You have to edit the /etc/default/grub file: remove quiet from GRUB_CMDLINE_LINUX_DEFAULT. After that, run update-grub. |
_unix.116243 | I have come across bash sequences such as \033[999D and \033[2K\r which are used to do some manipulation on a printout on a terminal. But what do these sequences mean? Where can I find a list/summary on the web to help me find out the meaning of these sequences? | What does a bash sequence '\033[999D' mean and where is it explained? | bash;terminal | See this link http://www.termsys.demon.co.uk/vtansi.htm. As Anthon says, \033 is the C-style octal code for an escape character. The [999D moves the cursor back 999 columns, presumably a brute force way of getting to the start of the line. [2K erases the current line. \r is a carriage return which will move the cursor back to the start of the current line and is a C-style escape sequence rather than a terminal control sequence.UpdateAs other people have pointed out, these control sequences are nothing to do bash itself but rather the terminal device/emulator the text appears on. Once upon a time it was common for these sequences to be interpreted by a completely different piece of hardware. Originally, each one would respond to completely different sets of codes. To deal with this the termcap and terminfo libraries where used to write code compatible with multiple terminals. The tput command is an interface to the terminfo library (termcap support can also be compiled in) and is a more robust way to create compatible sequences.That said, there is also the ANSI X3.64 or ECMA-48 standard. Any modern terminal implementation will use this. terminfo and termcap are still relevant as the implementation may be incomplete or include non standard extensions, however for most purposes it is safe to assume that common ANSI sequences will work.The xterm FAQ provides some interesting information on differences between modern terminal emulators (many just try to emulate xterm itself) and how xterm sequences relate to the VT100 terminals mentioned in the above link. It also provides a definitive list of xterm control sequences.Also commonly used of course is the Linux console, a definitive list of control sequences for it can be found in man console_codes, along with a comparison to xterm. |
_codereview.164320 | Please review the following piece of code. The class implements a light weight wrapper on top of boost::container::vector. I am not getting the expected performance nowhere comparable to std::vector. The vector is functionally correct but i feel there is some room for improvement, especially in the constructor methods. The wrapper is implemented for our abstraction to switch between different implementations. In our project we use lot of floating point numbers and bools as vector elements. #include <stdint.h> #include <errno.h> #include <malloc.h> #include <math.h> #include <pthread.h> #include <stdarg.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/mman.h> #include <sys/syscall.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <thread> #include <mutex> #include <unistd.h> #include <boost/interprocess/managed_mapped_file.hpp> #include <boost/interprocess/managed_shared_memory.hpp> #include <boost/align/aligned_allocator.hpp> #include <boost/align/aligned_allocator_adaptor.hpp> #include <boost/pool/pool_alloc.hpp> #include <iostream> #include <atomic> #include <inttypes.h> #include <dlfcn.h> #include <vector> #include <boost/preprocessor/stringize.hpp> #include <boost/container/scoped_allocator.hpp> #include <scoped_allocator> #include <memory> #include <iterator> #define UNW_LOCAL_ONLY #include <libunwind.h> #ifndef BOOST_DISABLE_ASSERTS #define BOOST_DISABLE_ASSERTS #include <boost/container/small_vector.hpp> #include <boost/container/vector.hpp> #include <boost/container/map.hpp> #endif #ifdef USE_CUSTOM_VECTOR #ifndef DEFAULT_SMALL_VECTOR_LENGTH #define DEFAULT_SMALL_VECTOR_LENGTH 8 #endif #include <type_traits> #include <memory> #include <algorithm> #include <stdexcept> #include <iterator> #include <iostream> #include <map> #include <mutex> #include <typeinfo> #include <typeindex> template<typename T, class Allocator = std::allocator<T>> class customVector { public: boost::container::vector<T, Allocator> *internal = nullptr;boost::container::vector<T, Allocator> &_internal = *internal; using value_type = typename boost::container::vector<T>::value_type; using reference = typename boost::container::vector<T>::reference; using const_reference = typename boost::container::vector<T>::const_reference; using pointer = typename boost::container::vector<T>::pointer; using const_pointer = typename boost::container::vector<T>::const_pointer; using iterator = typename boost::container::vector<T>::iterator; using const_iterator = typename boost::container::vector<T>::const_iterator; using difference_type = typename boost::container::vector<T>::difference_type; using size_type = typename boost::container::vector<T>::size_type; using reverse_iterator = boost::container::reverse_iterator<iterator>; using const_reverse_iterator = boost::container::reverse_iterator<const_iterator>; customVector(void) { assert(!_internal.size()); return; } template <typename Iterator> customVector(Iterator first, Iterator last) { assert(!_internal.size()); _internal.assign(first, last); return; } customVector(size_type capacity) { assert(!_internal.size()); _internal.resize(capacity); return; } customVector(size_type n, const value_type &val) { assert(!_internal.size()); _internal.reserve(n); while(n--) _internal.push_back(val); return; } customVector(const std::initializer_list<T> list) { assert(!_internal.size()); std::copy(list.begin(), list.end(), std::back_inserter(_internal)); return; } customVector(customVector const& 
copy) { assert(!_internal.size()); std::copy(_internal.begin(), _internal.end(), std::back_inserter(copy._internal)); return; } customVector(customVector&& move) noexcept { assert(!_internal.size()); _internal = std::move(move._internal); return; } ~customVector() { _internal.clear(); return; } inline customVector& operator=(customVector const& copy) { _internal = copy._internal; return *this; } inline customVector& operator=(customVector&& move) noexcept { _internal = std::move(move._internal); return *this; } inline customVector& operator=(std::initializer_list<T> list) { std::copy(list.begin(), list.end(), std::back_inserter(_internal)); return *this; } inline void swap(customVector& other) noexcept { _internal.swap(other._internal); } inline size_type size() const { return _internal.size(); } inline bool empty() const { return _internal.empty(); } inline reference at(size_type index) { return _internal.at(index); } inline const_reference at(size_type index) const { return _internal.at(index); } inline reference operator[](size_type index) { return (_internal[index]); } inline const_reference operator[](size_type index) const { return (_internal[index]); } inline reference front() { return _internal.front(); } inline const_reference front() const { return _internal.front(); } inline reference back() { return _internal.back(); } inline const_reference back() const { return _internal.back(); } inline iterator begin() { return _internal.begin(); } inline const_iterator begin() const { return _internal.begin(); } inline iterator end() { return _internal.end(); } inline const_iterator end() const { return _internal.end(); } inline const_iterator cbegin() const { return _internal.cbegin(); } inline const_iterator cend() const { return _internal.cend(); } inline bool operator!=(customVector const& rhs) const { return _internal != rhs._internal; } inline bool operator==(customVector const& rhs) const { return _internal == rhs._internal; } inline bool operator>(customVector const& rhs) const { return (_internal > rhs._internal); } inline bool operator<(customVector const& rhs) const { return (_internal < rhs._internal); } inline void push_back(value_type const& value) { _internal.push_back(value); } inline void push_back(value_type&& value) { _internal.push_back(std::move(value)); } template<typename... Args> inline void emplace_back(Args&&... args) { _internal.emplace_back(std::forward<Args>(args)...); } inline void pop_back() { _internal.pop_back(); } inline void reserve(size_type capacityUpperBound) { _internal.reserve(capacityUpperBound); } inline void resize (size_type n) { _internal.resize(n); } inline void resize (size_type n, const value_type& val) { _internal.resize(n, val); } inline T* data() { return _internal.data(); } inline T* data() const { return _internal.data(); } inline void clear() { _internal.clear(); } inline iterator erase(const_iterator iter) { return _internal.erase(iter); } inline iterator erase(const_iterator first, const_iterator last) { return _internal.erase(first, last); } iterator insert(const_iterator position, const T &x) { return _internal.insert(position, x); } iterator insert(const_iterator position, T &&x) { return _internal.insert(position, x); } template <typename Iterator> iterator insert(const_iterator p, Iterator first, Iterator last) { return _internal.insert(p, first, last); } }; #endif | A wrapper on top of boost vector | c++;performance;c++11;vectors;boost | null |
_unix.166294 | I normally use Vim on Windows and have had no problems with key-mapping, but now that I'm using a virtual machine running Ubuntu, accessed through MobaXterm, things don't quite work as they should. That includes basic things like <c-s> for save and <F5> for run, which I can't live without. I've done my research and read materials that seem to address this problem (here and here), but they went well over my head. Frankly, having to press <c-v> before a function key is downright weird. And it didn't even work when I tried. Can somebody explain, please? As a test case, if I want to map <F2> to :pwd, what do I have to do? To my delight, I've found that my key-mappings do work as intended on Ubuntu's gVim, so I'll be using that. I still would like the answer, though, because I actually prefer the quick and easy feel of plain old vim on the terminal. | Vim: x-terminal key mapping | vim | You'll have to live without ctrlS because that is the terminal command to stop output (ctrlQ undoes that).Other function keys should be mappable without any problem, just enter :map , then hit the function key you want to map which should show e.g.<F5> for F5, then space, then what you want the key to map to.You can put that line (without the leading colon) in your ~/.vimrc file to enable the mapping for every vim session.If you've tried that then please edit your question to show what exactly you tried and what the result was (and how that differed from your expectations).EDIT:If you want to map sequences that aren't defined in the terminal definition, then you can manually map the sequence.First you need to find out what bytes/characters it sent when you press e.g. ctrlshiftF2. I always use od -c for this; start the command, press the key sequence, hit ctrld to send an end-of-file to the command, which then prints the decoded version:$ od -c^[[24^0000000 033 [ 2 4 ^ \n0000006So this is escape, [, 2, 4, ^ (the newline is what I entered after the sequence and should be ignored; you can also hit ctrld twice but then the output is started after the input and looks messed up).Now we know the sequence, and can add the mapping to .vimrc. Add a line such as the following:map <C-[>[24^ :whateveryouwantThe <C-[> sequence is the vim representation of escape, which is the same as ctrl[. After that the characters aren't special so they can be entered as-is.Now ctrlshiftF2 will be mapped to the right hand side you entered when you start vim. |
_unix.238934 | I have to migrate some data to a new vdisk. But i have no idea how i can do this.The old Vdisk is under /dev/mapper/12345 which is a link to /dev/dm-1The new Vdisk is under /dev/mapper/67890 which is a link to /dev/dm-2There is also a Volume Group with the name sysvg. When i type into the console dmsetup ls i get the following output:12345 (253:1)sysvg-var_tmp_vol (253:13)sysvg-var_vol (253:12)67890 (253:2)Can someone give me a hint or the solution how i can migrate to thins new vdisk(67890)? | Migrate to a new vdisk | filesystems;file copy;storage;san | null |
_reverseengineering.3747 | I recently acquired root access to an Apple Airport Express router (2nd gen), and I would like to dump the current bootloader of the device to a .bin file I suppose. What tools would be necessary for doing such a thing? I am open to all suggestions short of desoldering the flash chip from the device. | How to dump the bootloader of NetBSD router? | firmware | null |
_unix.224980 | I've been reading many articles about Grub with LOTS of examples for configuration. Exactly 0 contain configuration for a separate root and boot partition on LVM.This is my configuration:menuentry 'Kali' {insmod lvminsmod gzioinsmod part_msdosinsmod ext2set root=lvm/triagia-kalibootsearch --no-floppy --fs-uuid --set=root f1eb6904-c17e-40b7-8740-60e67b8d04delinux /vmlinuz-4.0.0-kali1-amd64 root=/dev/mapper/triagia-kaliboot setkmap=usinitrd /initrd.img-4.0.0-kali1-amd64}And this is my LVM setup: sda3 8:3 0 396.9G 0 part triagia-kaliboot 254:0 0 500M 0 lvm triagia-kaliroot 254:1 0 50G 0 lvm triagia-kaliswap 254:2 0 4G 0 lvm This boots up but does not initiate, I think I'm using the wrong config regarding where the / is and where the /boot is. | Grub config separate root and boot partitions | grub2 | null |
_softwareengineering.333469 | I have the following scenario for which I've been experiencing performance problems:The context is a level editor of a new engine for an old video game (whose source is unavailable) in Unity. Basically I'm writing a level editor for Unity to shape old data structures into contemporary, more usable ones, i.e. parts/entities instead of a bunch of unrelated models/polygons.Here's a tree of how things are organized in the old game:Container (dozens)Models (hundreds)Polygons (thousands)Here's what I'd like to achieve:Group(s): for grouping related objectsObject(s): a tree, a ship etc ... Part(s): an object's parts with specific material etc ...Depending what the final object looks like on the screen, the initial data (models/polygons) is a mess, where there are merged or unrelated parts in a model and its polygons.Other problems emerge due to this, for instance, what used to be a simple pointer update to update a texture or color is simply not possible from within Unity.Here things I need and what I've done:I need to be able to track implicit and explicit references to models and polygons. This is to ensure that valid objects are built and therefore prevent incorrect content.Basically the user can create a part from either the model or or one of its polygons, the opposite being disallowed depending what's chosen first.Here's a picture with explanations:Scene9 is of type ContainerM123_456_ABC... are of type ModelP123_456_ABC... are of type PolygonA green icon is an explicit referenceA yellow icon is an implicit referenceA red icon is a non-referenced itemAlso, the user interface is augmented by elements getting disabled whenever appropriate to help user picking the right actions when building the level, etc ...Here, the issues I am experiencing:Querying for explicit/implicit references between around 300 models and 7000 polygons becomes slow over time as relationships gets created.I've been using a couple of List<T> and do LINQ queries between them:everythingall modelsall polygonsimplicit modelsimplicit polygonsexplicit modelsexplicit polygonsMinus the spaghetti logic, it does works but it's slow. Then I've upgraded to HashSet<T> with custom Comparer<T> which greatly improved speed but it's still not fast enough. 
All this results in freezes in the user interface.Main consideration is that within Unity, loops are rather tight, calls at high rate, immediate mode UI and no asynchronous calls features (Mono is roughly .NET 3.5).I am starting to consider that my approach using lists is flawed and was thinking that a better data structure could be used for performant queries.Can you suggest a more appropriate data structure to use for such scenario ?EDITHere is a diagram showing the new approach I'm trying after your suggestions:Notes:IScenePrimitive consists of 3 int and an EnumISceneReference.IsRef is simply IsRefExplicit || IsRefImplicitMissing things here are a dictionary in Scene for quick access to a model/polygon, when user selects something I retrieve these using hashes without having to enumerate, etc ...Interesting parts in code:SceneModel: public override bool IsRefExplicit { get { return _isRefExplicit; } set { _isRefExplicit = value; if (value) { Debug.Assert(_polygons.None(s => s.IsRefExplicit)); foreach (var polygon in _polygons) polygon.IsRefImplicit = true; } else { Debug.Assert(_polygons.All(s => s.IsRefImplicit)); foreach (var polygon in _polygons) polygon.IsRefImplicit = false; } } } public override bool IsRefImplicit { get { return _isRefImplicit; } set { _isRefImplicit = value; if (value) Debug.Assert(_polygons.Any(s => s.IsRefExplicit)); } }ScenePolygon: public override bool IsRefExplicit { get { return _isRefExplicit; } set { _isRefExplicit = value; if (value) { Debug.Assert(!Model.IsRefExplicit); Model.IsRefImplicit = true; } else { Debug.Assert(Model.IsRefImplicit); Model.IsRefImplicit = false; } } } public override bool IsRefImplicit { get { return _isRefImplicit; } set { _isRefImplicit = value; Debug.Assert(Model.IsRefExplicit); } }That's all that really happens, LINQ on a dozen items instead of many more (300 models querying each 7000 polygons X). I still have to test it to see how well it performs but it should perform much faster, to be continued. | What's an efficient data structure to make a lot of queries between parents/childs? | c#;data structures;performance;query;unity3d | Let's take a look at SceneModel's IsRefExplicit. I've added comments:SceneModelpublic override bool IsRefExplicit{ get { return _isRefExplicit; } set { _isRefExplicit = value; if (value) { // If the model has an explicit reference // then all its polygon have an implicit reference Debug.Assert(_polygons.None(s => s.IsRefExplicit)); foreach (var polygon in _polygons) polygon.IsRefImplicit = true; } else { // If the model doesn't have an explicit reference // then all its polygon don't have an implicit reference Debug.Assert(_polygons.All(s => s.IsRefImplicit)); foreach (var polygon in _polygons) polygon.IsRefImplicit = false; } }}This can be done by simply changing the implementation of ScenePolygon to query the Model:SceneModelpublic override bool IsRefExplicit{ get { return _isRefExplicit; } set { _isRefExplicit = value; }}ScenePolygonpublic override bool IsRefImplicit{ get { return Model.IsRefExplicit; } set { throw new NotSupportedException(); }}No more loop.Of course the above is under tha assumption that IsRefImplicit on ScenePolygon will only be set by SceneModel. 
If this assumption is not true, then perhaps you shouldn't do polygon.IsRefImplicit = false; on SceneModel because the IsRefImplicit may have been set to true by another code.Now, let's take a look at ScenePolygon's IsRefExplicit:ScenePolygonpublic override bool IsRefExplicit{ get { return _isRefExplicit; } set { _isRefExplicit = value; if (value) { // If the polygon has an explicit reference // then the model has an implicit reference Debug.Assert(!Model.IsRefExplicit); Model.IsRefImplicit = true; } else { // If the polygon doens't have an explicit reference // then the model doens't have an implicit reference? // ... // Could there be another polygon of the model with an explicit reference? // If there is, then the model should remain with an implicit reference Debug.Assert(Model.IsRefImplicit); Model.IsRefImplicit = false; } }}In this case what you need is reference counting. Keep in the SceneModel how many ScenePolygon does it have with IsRefExplicit and let ScenePolygon increment it or decrement it as needed.public override bool IsRefExplicit{ get { return _isRefExplicit; } set { if (_isRefExplicit == value) { return; } _isRefExplicit = value; if (value) { Model.IncrementImplicitRefCount(); } else { Model.DecrementImplicitRefCount(); } }}Then the model can implement IsRefImplicit by checking if the current reference count is greater than 0. For abstract, we could write the rules like this:Model.IsRefImplicit = contains at least one polygon with IsRefExplicit. Have a counter, and check if it is greater than 0. There is no setter.Model.IsRefExplicit = all the polygons have IsRefImplicit. Store a bool backing field.Polygon.IsRefImplicit = belongs to an model with IsRefExplicit. Read Model.IsRefExplicit There is no setter.Polygon.IsRefExplicit = should mark the model IsRefImplicit. Store a bool backing field. The setter will increment the counter of the Model when set to true, and decrement it when set to false, do nothing if the value didn't change.Then the concern is to annotate IsRefExplicit and let IsRefImplicit be populated by the code you have. So, you would be reading the source data and looking in some dictionary for the object it references, and annotating it. If the object is not in the dictionary you create it and add it.If you change your code to use Interlocked (use Increment and Decrement for the reference count, and use CompareExchange and Exchange on an int set to 0 or 1 instead of the bool backing fields) then the resulting code will be thread-safe, and you will be able to have multiple threads writing IsRefExplicit.If you change your lists/dictionaries to thread-safe solutions such as ConcurrentDictionary, then the structure can also be populated by multiple threads. |
_codereview.26381 | I've wrote this script to fetch and format content from my DB. It also counts how many result there are and separates them into pages. I'm barely learning PHP and MySQL so I don't know much about performance.function fetch_content($section, $subsection, &$count, $page = 0){ $section = substr($section, 0,8); $subsection = substr($subsection, 0,8); require_once(system/config.php); //Initiate connection $mysqli = new mysqli($db_host, $db_user, $db_password, $db_database); if ($mysqli->connect_error) { die('Connect Error (' . $mysqli->connect_errno . ') '. $mysqli->connect_error); } //Select page $limit = 2; $start = $page * $limit ; //select query if($section == 'home' || ($section != 'home' && $subsection == NULL)){ $selection = WHERE section = ?; } else $selection = WHERE section = ? AND subsection = ?; //Fetch data $stmt = $mysqli->stmt_init(); $qry= SELECT * FROM public $selection ORDER BY id DESC LIMIT ?,?; $stmt->prepare($qry); if($section == 'home' || ($section != 'home' && $subsection == NULL)) $stmt->bind_param(sss, $section, $start , $limit); else $stmt->bind_param(ssss, $section, $subsection, $start , $limit); $stmt->execute(); $result = $stmt->get_result(); //Format the data while( $row = $result->fetch_assoc()){ format_home($row, $mysqli); } $stmt->close(); //Count result $stmt = $mysqli->stmt_init(); $qry= SELECT COUNT(*) AS count FROM public $selection; $stmt->prepare($qry); if($section == 'home' || ($section != 'home' && $subsection == NULL)) $stmt->bind_param(s, $section); else $stmt->bind_param(ss, $section, $subsection); $stmt->execute(); $result = $stmt->get_result(); $count = $result->fetch_assoc(); $count = $count['count']; $stmt->close(); //close connection $mysqli->close();} | Fetching and formatting content from a database | php;sql;pdo | null |
_scicomp.19857 | Normally when searching in sorted sets, binary searches are a fast, nice and easy way to locate data. They do, however, break down in the hypothetical scenario of a set of arbitrarily large but unknown size, since the upper limit cannot be set to infinity. To solve this, one could imagine doing a, for lack of a better term, reverse binary search to find rough lower/upper limits, then perform a regular binary search:
min = 0
max = 1
while( set[max] exists && set[max] < target ) {
    min = max + 1
    max *= 2
}
binarySearch(min, max)
Since this seems to be a somewhat trivial algorithm, I cannot possibly be the first one to think of it, so my question is simply whether this algorithm has a name (and if so, what it is). | Does this reverse binary search have a name? | algorithms;terminology | This seems to be called an exponential search, doubling search, or galloping search. I've also heard it called a geometric expansion search or something similar. In principle, it is similar to the geometric expansion strategy that is often used for resizing dynamic arrays in computer programming. Resizing dynamic arrays in this way ensures that adding $n$ elements takes $\mathcal{O}(n)$ time. This does not necessarily rely on doubling the storage each time; you could use a factor of 3 or 3/2 or any other number greater than 1. If you use this method to find an upper bound for an interval and then use a binary search to find the exact element, you would need $\mathcal{O}(\log_2(N))$ for each part of the algorithm, where $N$ is the index of the entry that is eventually found. Averaged over many searches, I think this would be an optimal search strategy if the probability that you will need to find entry $N$ is proportional to $1/N$. In other words, this would be a good strategy to use if you expect to preferentially choose points that are near the beginning of your array. In a continuous setting this would be useful if you were trying to find where a function reaches a certain value and all you know is that the function grows approximately exponentially. You could use the binary expansion search to find a range that encompasses the value you want and then use a binary search to narrow down on the precise value. Of course there are faster search methods in this case (taking better advantage of your knowledge of the function or using higher-order search methods), especially for well-behaved functions, but this would allow you reasonable efficiency with a very simple method.
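A short runnable sketch of the same idea over an ordinary Python list (the length is of course known to Python here; treat the hi < n checks as the "does this index exist" probe from the pseudocode):

from bisect import bisect_left

def exponential_search(seq, target):
    """Return the index of target in the sorted sequence seq, or -1."""
    n = len(seq)
    if n == 0:
        return -1
    hi = 1
    while hi < n and seq[hi] < target:
        hi *= 2                      # geometric expansion of the upper bound
    lo = hi // 2                     # target, if present, lies in [lo, hi]
    i = bisect_left(seq, target, lo, min(hi + 1, n))
    return i if i < n and seq[i] == target else -1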
_softwareengineering.291873 | Let's say I have an API with regular updates and many developers who are using it. Whenever the API updates, I don't want to bother the developers with every change; I want to send them updates only about the functions or classes that they actually use in their code. I have a faint idea, but I'm not sure if it would work:
A service which keeps an index of all the changes made to the files/functions, based on the git commits of the last day.
A module or plugin on the client side (where the API resides) which checks with this service on a cron run for changes made to the API, compares them with its internal index of the files and functions used, and sends updates to the developer if there is a match.
Is there something along these lines implemented already? If not, what would be the right way to go forward? | Context specific API updates to developer | api;software updates | null
_softwareengineering.326275 | I have a web application whose frontend contains pages for a Customer, Product and Order part, and other parts may be added. The current implementation is that every part (module) has its own Controller, and every part has about 8 pages:
CustomerController
ProductController
OrderController
Other parts
Each page has its methods in a related partial class. That means the Controller class is split into 8 files, for example. From the maintenance perspective it eliminates fat controllers, but I am still not sure about this practice. Example code:
// Customer Index page
public partial class CustomerController : Controller
{
    Database database = new Database();
    [HttpGet] public ActionResult Index() {...}
    [HttpPost] public ActionResult Index(CustomerModel model) {...}
    [HttpPost] public ActionResult Search(CustomerModel model) {...}
}
// Customer AddOrUpdate page
public partial class CustomerController
{
    [HttpPost] public ActionResult AddOrUpdateCustomer(CustomerModel model) {...}
}
// Customer Hierarchy page
public partial class CustomerController
{
    [HttpGet] public ActionResult Hierarchy() {...}
    [HttpPost] public ActionResult Hierarchy(HierarchyModel model) {...}
    public ActionResult ApiGetHierarchyItem(int Id) { ... }
}
The question is whether preventing a Controller from becoming fat by using partial classes is good practice. | MVC and use of partial class for Controller | asp.net mvc | If you are starting from scratch you should have a controller for each page and not bloat a single controller. While partials may seem like a good idea, it's still a single giant class with multiple responsibilities. Adding additional controllers is very simple and will reduce the overall complexity of your project. If you are inheriting a solution where one controller was used for many pages and want to start correcting the issues then, as an initial first step of refactoring, breaking the monster into several files with partial classes is a sound plan. After you have broken the giant class into files that each have a single responsibility, you can turn each file into a class of its own with a single responsibility as you pay down the technical debt.
_webmaster.106135 | I'd like to know the pros and cons of grouping external CSS and JavaScript files. The grouping method I'm referring to is often seen from external library providers such as Google Fonts and jsDelivr. For example:
Google Fonts
<link href="//fonts.googleapis.com/css?family=Inknut+Antiqua|Ravi+Prakash|Roboto">
jsDelivr
<script src="//cdn.jsdelivr.net/g/[email protected](slideshow/js/supersized.3.2.7.min.js),[email protected],[email protected],[email protected](jquery.fancybox.min.js)"></script>
<link rel="stylesheet" href="//cdn.jsdelivr.net/g/[email protected](flickr/css/supersized.css),[email protected](flexslider.css),[email protected](jquery.fancybox.min.css)">
Typical answers I'm looking for are a list of the pros and cons, i.e. cacheable or not cacheable, reduces server-side requests, actually increases load time, and so on. | Remote CDN: Pro's and Cons of grouping CSS and JS | html;css;javascript;cdn;best practices | null
_unix.14998 | In my system I have an NVIDIA video card (GeForce 450 GTS) as an extra card (video output via motherboard ATI) for bitcoin mining.If I start mining, the fan speeds up a little, but the weird thing is, if I stop mining the fan starts to spin much much faster (generating lots of noise). And it doesn't stop doing that.What could be the cause? And how can I fix that? | NVIDIA card fan spins faster when not in use | nvidia | null |
_softwareengineering.289348 | I would like some advises on the organization of a set of related but independent C++ projects stored in a single (git) repository. The projects use CMake.For a simplified example we imagine 2 projects A and B, A depending on B. Most people developing A will get B via the packaging system. Thus they will compile only A. However, we should allow developers to compile both A and B themselves (and install it), separately or together.Here is a proposal : Repo1 CMakeLists.txt (1) A CMakeLists.txt (2) include aaa.h aaaa.h CMakeLists.txt (3) src aaa.cpp aaaa.cpp CMakeLists.txt (4) B CMakeLists.txt (2) include bbb.h bbbb.h CMakeLists.txt (3) src bbb.cpp bbbb.cpp CMakeLists.txt (4) test CMakeLists.txt (5) testaaaa.cpp(1) Define the common cmake variables for all projects (if any) and includes the subdirectories. (2) Defines the project itself and the project's required cmake variables.(3) Defines the headers to install and the ones required for compilation.(4) Configures the library and binaries.(5) Configures the test executables and test-cases.As I understand it, each project should produce a XXXConfig.cmake file and install it in /usr/local/share/cmake. Writing these files seem quite complicated when reading the documentation of CMake. What do you think ? Does the structure make sense ? Do you happen to have a working example of such a set of projects ? | Directory organization of a CMake (C++) repository containing several projects | c++;organization;code organization;dependency management;cmake | After quite some reading and tests, I have made a basic demo C++ project demonstrating the use of CMake, CTest + boost.test, CPack and Doxygen and using more or less the organization I mentioned in my question.The project shows how to make subproject dependencies, how to compile the whole repo or only a subproject, how to package, how to test and how to produce documentation.See here : https://github.com/Barthelemy/CppProjectTemplate |
_codereview.59565 | This is my homework. Kindly help me check it?The instructions are:Implement an infix expression to postfix expression converter. You are to implement the infix to postfix algorithm presented in the lecture. You are to use only the stack that was provided in the lab lecture. The use of the stack of JDK or any other stack is not allowed. The supported operators are +, -, *, / and ^. Grouping operator are ( and ).Input: Infix expresseion where each token (operand and operator) are space-separated (console-based). Explore the StringTokenizer class for tokenizing (separating the infix expression into tokesn) the input. The book of Deitel and Deitel has examples on how to use StringTokenizer.Output: Postfix expression (console-based). The tokens (operand and operator) are also space-separated in the output. The code I wrote is:import java.util.*;public class PostfixConverter {private String infixExpression;public PostfixConverter(String infixExpression) {this.infixExpression = infixExpression;}private String convertInfixExpression() {StringTokenizer tokens = new StringTokenizer(infixExpression);Stack operatorStack = new ArrayStack();String converted = ;while(tokens.hasMoreTokens()) { String token = tokens.nextToken(); if(token.equals(()) { operatorStack.push(new String(token)); } else if(token.equals())) { while(operatorStack.top().equals(() != true) { converted = converted + + operatorStack.pop(); } if(operatorStack.top().equals(()) { operatorStack.pop(); } } else if (isOperator(token)){ if(operatorStack.isEmpty() == true) { operatorStack.push(new String(token)); } else { if(ICP(token) < ISP((String) operatorStack.top())) { converted = converted + + operatorStack.pop(); operatorStack.push(token); } else { operatorStack.push(new String(token)); } } } else { converted = converted + + new String(token); }}while(operatorStack.isEmpty() != true) { converted = converted + + operatorStack.pop(); }return converted;}public int ISP(String token) {int precedence = 0;if(token.equals(+)|| token.equals(-)) { precedence = 2;}else if(token.equals(*) || token.equals(/)) { precedence = 4;}else if(token.equals(^)) { precedence = 5;}else if(token.equals(()) { precedence = 0;}return precedence;}public int ICP(String token) {int precedence = 0;if(token.equals(+)|| token.equals(-)) { precedence = 1;}else if(token.equals(*) || token.equals(/)) { precedence = 3;}else if(token.equals(^)) { precedence = 6;}return precedence;}private boolean isOperator(String token) {return (token.equals(+) || token.equals(-) || token.equals(*) || token.equals(/) || token.equals(^) ); }public static void main(String[] args) {System.out.println(Input the infix expression + (operands, operators are separated by spaces):);Scanner input = new Scanner(System.in);String infixExpression = input.nextLine();PostfixConverter converter = new PostfixConverter(infixExpression);System.out.println(The converted expression is + converter.convertInfixExpression());}} | Infix to Postfix Converter Program | java;parsing;homework;math expression eval | null |
_softwareengineering.171134 | For a previous project, I was using Backbone.js alongside Django, but I found that I didn't use many of Django's features. So I am looking for a lighter framework to use underneath a Backbone.js web app.
I hardly used Django's built-in templates; when I did, it was to set up the initial index page, but that's all.
I did use the user management system that Django provided.
I used models.py, but never views.py.
I used urls.py to set up which template the user would hit upon visiting the site.
I noticed that the two features I used most from Django were South and Tastypie, and they aren't even included with Django. In particular, django-tastypie made it easy to link my frontend models to my backend models: it made it easy to JSONify my frontend models and send them to Tastypie. However, I found myself overriding a lot of Tastypie's methods for GET, PUT and POST requests, so it became useless. South made it easy to migrate new changes to the database, but I had a lot of trouble with it. Is there a framework with an easier way of handling database modifications than South? When using South with multiple people, we had the worst time keeping our databases synced: when someone added a new table and pushed their migration to git, the other two people would spend days trying to use South's automatic migration, but it never worked. I liked how Rails had a manual way of migrating databases.
Even though I used Tastypie and South a lot, I found myself not actually liking them, because I ended up overriding most Tastypie methods for each Resource, and I had the worst trouble migrating new tables and columns with South. So I would like a framework that makes that process easier. Part of my problem was that they are too magical. Which framework should I use: Node.js or a lighter Python framework? Which works best with my criteria above? | Which web framework to use under Backbonejs? | web development;javascript;python;web applications;django | null