Dataset columns:
- id: string (length 5 to 27)
- question: string (length 19 to 69.9k)
- title: string (length 1 to 150)
- tags: string (length 1 to 118)
- accepted_answer: string (length 4 to 29.9k)
_softwareengineering.52873
A long time ago I programmed a lot in Ada, and it was normal to name arguments when invoking a function: SomeObject.DoSomething(SomeParameterName => someValue);

Now that C# supports named arguments, I'm thinking about reverting to this habit in situations where it might not be obvious what an argument means. You might argue that it should always be obvious what an argument means, but if you have a boolean argument and callers are passing in true or false, then qualifying the value with the name makes the call site more readable:

contentFetcher.DownloadNote(note, manual: true);

I guess I could create enums instead of using true or false (Manual, Automatic in this case). What do you think about occasionally using named arguments to make code easier to read?
Named arguments (parameters) as a readability aid
coding style
This was suggested in the development of C++, and Stroustrup discusses it in his Design and Evolution of C++, pages 153 and following. The proposal was well-formed, and drew on prior experience with Ada. It wasn't adopted.

The biggest reason was that nobody wanted to encourage functions with large numbers of parameters. Each additional feature in a language costs something, and there was no desire to add a feature to make it easier to write bad programs.

It also raised questions of what the canonical parameter names were, particularly in the usual header and code file convention. Some organizations had longer and more descriptive parameter names in the .h file, and shorter and easier to type names in the .cpp file (substitute file suffixes as desired). Requiring that these be the same would be an additional cost on compilation, and getting names mixed up between source files could cause subtle bugs.

It can also be handled by using objects rather than function calls. Instead of a GetWindow call with a dozen parameters, create a Window class with a dozen private variables, and add setters as necessary. By chaining the setters, it's possible to write something like my_window.SetColor(green).SetBorder(true).SetBorderSize(3);. It's also possible to have different functions with different defaults that call the function that actually does the work.

If you're just worried about the documentation effect of contentFetcher.DownloadNote(note, manual: true);, you can always write something like contentFetcher.DownloadNote(note, /* manual */ true);, so it's not even very helpful in documentation.
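To make the comparison concrete, here is a minimal, self-contained C# sketch (the Note, ContentFetcher and Window types are invented for this illustration and are not from the question) contrasting the named-argument call with the chained-setter style described above:

    using System;

    // Hypothetical types, invented for this illustration.
    class Note { public string Title = "example"; }

    class ContentFetcher
    {
        // A boolean parameter whose meaning is unclear at the call site without a name.
        public void DownloadNote(Note note, bool manual)
        {
            Console.WriteLine($"Downloading '{note.Title}' (manual: {manual})");
        }
    }

    class Window
    {
        private string color = "white";
        private bool border;
        private int borderSize;

        // Each setter returns 'this', so configuration reads as a chain.
        public Window SetColor(string c) { color = c; return this; }
        public Window SetBorder(bool b) { border = b; return this; }
        public Window SetBorderSize(int s) { borderSize = s; return this; }
    }

    class Demo
    {
        static void Main()
        {
            var fetcher = new ContentFetcher();

            // Named argument: the call site documents what 'true' means.
            fetcher.DownloadNote(new Note(), manual: true);

            // Chained setters avoid a long positional parameter list entirely.
            var w = new Window().SetColor("green").SetBorder(true).SetBorderSize(3);
        }
    }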
_webmaster.96804
What I am looking to do is to redirect all traffic from www.domain.com to www1.domain.com (a subdomain of domain.com) via DNS.

I am attempting to move from a local server (domain.com) to a GoDaddy server (www1.domain.com), which is a VPS, with the intention of completely shutting down the local server after the move. I have successfully moved the site's contents to the subdomain (www1.domain.com) and would like to direct all traffic to domain.com now to www1.domain.com. The DNS is controlled here locally, not by GoDaddy.

I previously tried setting a CNAME record like so:

www IN CNAME www1.domain.com.

But that failed terribly. Can anyone please provide some insight as to what other solutions there may be? Would something like this work instead? The IP address 1.2.3.4 would be the IP address of the GoDaddy server.

@ IN A 1.2.3.4
www IN CNAME www1.domain.com
DNS redirect for domain to subdomain
domains;dns;subdomain;godaddy;cname
null
_unix.207805
I have a Btrfs partition which has a single subvolume at the top level (/root). It has the subvol=root option in /etc/fstab.

Every week, I take a read-only snapshot into /root/snapshots/... using:

btrfs subvolume snapshot -r / /snapshots/$(date --rfc-3339=date)

(paths don't have /root because it's mounted as subvol=root).

Now let's say something went wrong and I wanted to restore my root subvolume from a snapshot. I boot from a USB disk and mount the partition as /mnt/disk without subvol=root. If I try to run:

btrfs subvolume snapshot /mnt/disk/root/snapshots/2015-05-01 /mnt/disk/root

it creates the new subvolume as /mnt/disk/root/2015-05-01 instead of replacing /mnt/disk/root/. If I try to delete it first by running

btrfs subvolume delete /mnt/disk/root

it gives the error message:

ERROR: cannot delete '/mnt/disk/root' - Directory not empty

Is there a way to do this? Or should I get into the habit of creating snapshots outside the subvolume being snapshotted?
Restore Btrfs snapshot from subdirectory to parent
linux;btrfs;snapshot
null
_unix.332311
Specifically, I'd like to arrange for all completion lists (under the :completion:* context) to be preceded and followed by a fixed string (say ABC for the moment) on its own line.

I have tried a variety of approaches, and I'm able to get the preceding line but not the following one. The best approaches are:

1. Use the array comppostfunc in _main_complete, including a function that uses _message -r to print out the line. But comppostfunc is executed at the end of _main_complete but apparently before the list is created, giving the string at the beginning, not the end.

2. Create new completers -- a clone of _complete that returns 1 and a completer that just does _message, the idea being that the _message completer comes last. This works to a point, but again the string comes at the beginning of the completion list.

3. Adapt 1 or 2 by playing with the tag-order style to put messages at the end. This does not work.

4. Create a variant of the _complete completer that does its normal action (_normal in one case) and ends with a _message. Again this does both but always puts the message first. Playing with tag-order does not change this.

I feel I'm missing something. For instance, is there a way to specify a completer style in which the output of several completers is concatenated? Why doesn't doing several actions within a completer produce the completion list in order? How do I get the tag-order style to put the _message output at the end?
How do I wrap the zsh completion list with a fixed line of text before and after?
zsh;autocomplete
null
_unix.337955
I found a blog post about torghost. I am wondering whether it contains potentially harmful software, or whether there is any risk in using it.
How to know if torghost is safe to use?
ubuntu;ip;proxy
I'm the developer of TorGhost. It's 100% safe, and it doesn't contain any malicious code. Don't worry. Here is the link to the tutorial: TorGhost channel all traffic through tor network in kali linux.
_cstheory.30703
Is there a published algorithm for translating a context-free parsing problem into SAT? That is, an algorithm that translates a context-free grammar and an input string into a set of clauses that is satisfiable iff the input string is well-formed according to the grammar.
Translation of context-free parsing into SAT
fl.formal languages;sat
(I guess the important word in the original question is "published".) There is such an encoding of context-free parsing (more exactly, of CYK-style parsing) in Roland Axelsson, Keijo Heljanko, and Martin Lange, Analyzing Context-Free Grammars Using an Incremental SAT Solver, ICALP 2008, Lecture Notes in Computer Science vol. 5126, pp. 410-422, doi:10.1007/978-3-540-70583-3_34. They use it in particular to detect ambiguity of words $w$ in context-free grammars for growing word lengths.
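For readers who want the flavour of such an encoding, here is a hedged sketch of a standard CYK-style propositional encoding (not necessarily the exact clause set of Axelsson et al.). For a grammar in Chomsky normal form with start symbol $S$, productions $P$, and input $w = w_1 \cdots w_n$, introduce a variable $x_{i,j,A}$ meaning "nonterminal $A$ derives $w_i \cdots w_j$", and assert
$$x_{1,n,S}, \qquad \neg x_{i,i,A} \ \text{whenever } A \to w_i \notin P, \qquad x_{i,j,A} \rightarrow \bigvee_{A \to BC \in P} \ \bigvee_{k=i}^{j-1} \big( x_{i,k,B} \wedge x_{k+1,j,C} \big) \ \text{ for } i < j.$$
After a Tseitin-style conversion to CNF, the resulting clause set is satisfiable exactly when $w \in L(G)$: a satisfying assignment forces a CYK-style derivation of $w$, and conversely the true cells of the CYK table give a satisfying assignment.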
_reverseengineering.8476
I have found a problem with finding the file offset which actually is the program entry point.

In the case I'm looking at, the value of AddressOfEntryPoint is 0x1018. Here is a section which maps this address. I assume the entry point should be 0x28 = 0x10 + 0x1018 - 0x1000 (PointerToRawData + AddressOfEntryPoint - VirtualAddress).

However, tools say it is 0x18 instead. I'm not sure why; I made some experiments and came up with another formula: 0x18 = (0x10 / 0x200) * 0x200 + 0x1018 - 0x1000 ((PointerToRawData / FileAlignment) * FileAlignment + AddressOfEntryPoint - VirtualAddress).

I use FileAlignment from the OptionalHeader and it works great, but I don't know if it is a coincidence or documented somewhere, so I'm asking here for confirmation. Also, probably not important, but the file is packed with UPack (0.399), packer signature BE****AD50FF7634EB7C4801.
Problem with entry point detection as a file offset
pe;entry point
From https://code.google.com/p/corkami/wiki/PE#PointerToRawData:

if a section's physical start is lower than 200h (the lower limit for standard alignment), it is rounded down to 0.

Thus, the entry point's physical offset would be:

0x00000018 = 0x00000000 + 0x00001018 - 0x00001000 (PointerToRawData_rounded_down + AddressOfEntryPoint - VirtualAddress)
_unix.310125
I am trying to create a line to send mail using postfix, but the terminal just hangs:

mail $MAILRECEIPIENT -s ${MAILSUBJECT}. - ${CURRENTDATETIME} < $TMPFILE;

What's wrong with this syntax?

UPDATE: This actually sends an email, but without the attachment, and I also get an error: mail: Invalid header:

echo ${CURRENTDATETIME} | mail -s ${MAILSUBJECT} -a $TMPFILE $MAILRECEIPIENT;
Mail Command Hangs Up
postfix
null
_webapps.75528
I'm using two sheets, one with the main values and reference data. The second sheet is for price calculation. I use =Filter to get the values from sheet one where Sheet1!E2:E <> 1. How can I get the row from Sheet2 deleted when the value is no longer inside the filter condition?
Conditional deleting in Google Spreadsheets
google spreadsheets
null
_webapps.59570
I'm looking for a tool that allows me to keep track of people I unfollow on Twitter. I regularly look for Twitter users that might be interested in my Twitter accounts (I use specific keywords to find users and so on), but I have no interest in following them (well, at least most of them). It's just a way to say "hey! Look at this!".

I would like to keep track of the accounts I unfollow so I don't follow them again (I don't want to spam them).
Keep track of people I unfollow on Twitter?
twitter
null
_cs.48186
This particular language:
$$L = \{ u u^R v \,:\, u, v \in \{0, 1\}^+\}$$
is giving me a lot of trouble. I highly suspect that it's non-regular, considering that $\{ u u^R : u \in \{0, 1\}^+\}$ is non-regular, and I can't seem to reason out a DFA for it. However, trying to achieve a contradiction with the pumping lemma is giving me a lot of trouble because of the arbitrary $v$ in the construction.

In particular, if the string starts with $00$ or $11$ it is automatically in the language, because we can pick $v$ as the remainder and the first two characters are trivially the reverse of each other. Issues like this seem to thwart my every attempt at applying the pumping lemma. With $p$ as the pumping length, I tried something like $(01)^p (10)^p 1$, but you can simply pump up the starting character to obtain a string that starts with $00$ (and pumping down also works), so this string doesn't work for contradicting the pumping lemma.

I'm pretty stuck on ideas for strings that will contradict the pumping lemma, so I would appreciate a hint on the problem. (Perhaps it is regular? That's still in the back of my mind too.)
Proving non-regularity of $u u^R v$?
formal languages;pumping lemma
You can also use another approach: suppose that your language $L$ is regular; then you can intersect it with another regular language, $L_R = (01)^* (10)^* 1$, and - by the closure properties of regular languages - you get another regular language:
$$L' = L \cap L_R = \{ (01)^n\,(10)^m1 \mid m \geq n > 0 \}.$$
Now it is easier to prove that $L'$ is not regular using the pumping lemma (just pick $n = p$, the pumping length), leading to a contradiction.
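To spell out how the contradiction can be finished (a sketch using the standard pumping lemma): with pumping length $p$, take $w = (01)^p(10)^p1 \in L'$ and any split $w = xyz$ with $|xy| \le p$ and $|y| \ge 1$, so $y$ lies inside the alternating prefix $(01)^p$. If $|y|$ is odd, then $|xy^2z|$ is even, while every string of $L'$ has odd length $2(n+m)+1$. If $|y| = 2k$ is even, pumping merely lengthens the alternating prefix, so
$$xy^2z = (01)^{p+k}(10)^p1 \quad\text{with } p+k > p,$$
contradicting $m \ge n$. Either way $xy^2z \notin L'$, so $L'$ (and hence $L$) cannot be regular.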
_codereview.125189
Just an idea I had for a Java exercise. Cracking a password like testing takes at least 15 minutes, so obviously it's not as efficient as it can be. What are your thoughts? I also never worked with multithreading, so I can imagine I could be doing things better there.public class AlgoTester implements Runnable { private static final char[] possibleChars = { 'a', 'b', 'c', 'd', 'e', 'f', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' }; private static int[] arrayIndices; private static String testPassword = testing; private static final int passwordLength = testPassword.length(); private static boolean foundPassword = false; private final static int threadCount = 5; private static String passwordFound = ; public static void main(String[] args) { arrayIndices = new int[passwordLength]; for (int i = 0; i < arrayIndices.length; i++) { arrayIndices[i] = 0; } for (int i = 0; i < threadCount; i++) { new Thread(new AlgoTester()).start(); } } private synchronized static boolean advanceIndicies() { synchronized (arrayIndices) { if (foundPassword) return false; int maxValue = possibleChars.length - 1; for (int i = 0; i < passwordLength; i++) { int inverse = arrayIndices.length - i - 1; if (arrayIndices[inverse] == maxValue) { if (inverse - 1 == -1) return false; arrayIndices[inverse] = 0; continue; } else { arrayIndices[inverse] = arrayIndices[inverse] + 1; return true; } } return false; } } private synchronized static String genPassword() { synchronized (arrayIndices) { StringBuilder builder = new StringBuilder(); for (int i : arrayIndices) { builder.append(possibleChars[i]); } String returning = builder.toString(); System.out.println(Thread.currentThread().getName() + : Checking: + returning); return returning; } } private static boolean checkPassword() { // TODO Auto-generated method stub return (foundPassword = genPassword().equals(testPassword)); } @Override public void run() { while (!foundPassword) { if (checkPassword()) { passwordFound = genPassword(); System.out.println(----------------------); System.out.println(Found Password: + passwordFound); System.out.println(----------------------); foundPassword = true; } if (!advanceIndicies()) { return; } } }}
Password Bruteforcer
java;array;multithreading
null
_cs.21840
I am considering strategies to compare the performance of routers with buffers vs. their lower-latency counterparts in a simple network setting (N clients communicating with each other randomly through a router). I know that buffered routers should ideally perform better when there are traffic bursts.

My question is regarding my concern that I do not know a good strategy for picking scenarios, nor how to convey that these models are relevant to common real-world situations. So the questions would be:

How do I pick scenarios that are similar to real-world situations?

How do I convince others that these scenarios are indeed relevant to real-world situations?

My main concern is measuring the benefit of buffers vs. an improvement in RTT (lower latency) with varying levels of congestion, and also showing possible disadvantages where the amount of congestion is overwhelming (buffer bloat) and possibly other patterns.
Buffering packets vs. low latency routing
computer networks;empirical research;routing
null
_softwareengineering.328079
There is a company that I am pulling data from which has a unique ID for each record. However, the ID is alphanumeric (even contains a special character too). When I asked about this I was told that this is because the leading alphanumeric portion indicates the type of record that it is. Is this a proper format to use? I would have thought it would be best to just have a type column.
Is it good form to use an alphanumeric unique ID?
coding style;database design
There is nothing wrong with alphanumeric IDs as such - they can be shorter, which is a big advantage when they're handled by humans.

Encoding the type in the ID is certainly not good from a DB design point of view, but it can make sense as a business requirement, again for human handling.

I think the crucial point is that externally visible IDs are often subject to other requirements than only those of proper database design.
_unix.246702
I am trying to run su or sudo on my CentOS 6.6 machine. Every time I try to execute either command, I get prompted for a password. After entering the password, the command simply hangs. I can ctrl+c it. I have already checked my /etc/hosts and added the $HOSTNAME to the 127.0.0.1 and ::1 lines. I can log in as root with a hassle. I tried running strace on sudo (strace -o ~/sudotrace.txt sudo -i) and got the following error:

sudo: effective uid is not 0, is sudo installed setuid root?

Doing an ls -l:

$ ls -l /usr/bin/sudo
---s--x--x. 1 root root 123832 Oct 15 2014 /usr/bin/sudo

These permissions seem to be wrong according to what I read online. That still doesn't explain why su is behaving strangely, though.

Any input would be greatly appreciated. Thanks
sudo and su hang indefinitely... stumped
centos;sudo;su
null
_bioinformatics.2049
My goal is to make a conservation plot between bacterial sequences and a human protein. So far, I have a FASTA file of the protein, and a FASTA file with the sequences of the proteins from the BLAST results.

In my initial attempts, I have tried to do this using MSA software: I have downloaded clustalx and run a profile alignment using the single protein sequence as Profile 1 and the sequences from the BLAST results as Profile 2, and selected the option Align Sequences to Profile 1. I'm having difficulty interpreting and dealing with the results. Since the majority of the sequences aren't aligned, the results are a total mess, with a huge number of dashes added.

My goal is to visualize the number of BLAST hits per amino acid in my protein of interest, i.e. I would like to have a graph with my protein on the x-axis and a plot like the regions of high conservation for the multiple alignments, except with the spikes corresponding to a high number of BLAST hits. This would allow me to identify regions of my protein that have higher sequence similarity to bacteria than others.

Is there a better way to achieve this, or to salvage the results from the profile alignment? Thanks.
Profile of conservation between bacterial sequences and human protein
sequence alignment
I'm not sure what you mean by it being a mess? That looks like a pretty good alignment to me, and it most certainly has aligned all the sequences. You are nearly always going to have gaps (see my MSA below, which are all paralogs from the same genome so they couldn't really be more closely related, and yet, there are gaps). That comes with its own set of problems mind you, as you need to decide how you're going to deal with them.You also don't necessarily have to do a profile alignment. Those sequences don't look massively divergent to me, so straight forward sequence alignment would probably give you more or less the same result.My goal is to visualize the number of BLAST hits per amino acid in my protein of interest.This doesn't really make sense. You should really be searching for the number of BLAST hits with a particular domain, if you want to infer homology.If you want a per-position visualisation of the conservation, you could look at the shannon entropy for each column and plot that. I wrote a script to do just that a little while ago: https://github.com/jrjhealey/bioinfo-tools/blob/master/Shannon.pyJust beware it's not super well tested yet. Feed an MSA in with as many sequences as you want to analyse, but you'll have to have identified the sequences and done the alignment first.For example, given this MSA: 16 149PAU_02775 MSTTPEQIAV EYPIPTYRFV VSLGDEQIPF NSVSGLDISH DVIEYKDGTGPLT_01696 MSTTPEQIAV EYPIPTYRFV VSIGDEQIPF NSVSGLDISH DVIEYKDGTGPAK_02606 MSTTPEQIAV EYPIPTYRFV VSIGDEQVPF NSVSGLDISH DVIEYKDGTGPLT_01736 MSTTPEQIAV EYPIPTYRFV VSIGDEKVPF NSVSGLDISH DVIEYKDGTGPAK_01896 MTTTT----V DYPIPAYRFV VSVGDEQIPF NNVSGLDITY DVIEYKDGTGPAU_02074 MATTT----V DYPIPAYRFV VSVGDEQIPF NSVSGLDITY DVIEYKDGTGPLT_02424 MSVTTEQIAV DYPIPTYRFV VSVGDEQIPF NNVSGLDITY DVIEYKDGTGPLT_01716 MTITPEQIAV DYPIPAYRFV VSVGDEKIPF NNVSGLDVHY DVIEYKDGTGPLT_01758 MAITPEQIAV EYPIPTYRFV VSVGDEQIPF NNVSGLDVHY DVIEYKDGIGPAK_03203 MSTSTSQIAV EYPIPVYRFI VSIGDDQIPF NSVSGLDINY DTIEYRDGVGPAU_03392 MSTSTSQIAV EYPIPVYRFI VSVGDEKIPF NSVSGLDISY DTIEYRDGVGPAK_02014 MSITQEQIAA EYPIPSYRFM VSIGDVQVPF NSVSGLDRKY EVIEYKDGIGPAU_02206 MSITQEQIAA EYPIPSYRFM VSIGDVQVPF NSVSGLDRKY EVIEYKDGIGPAK_01787 MSTTADQIAV QYPIPTYRFV VTIGDEQMCF QSVSGLDISY DTIEYRDGVGPAU_01961 MSTTADQIAV QYPIPTYRFV VTIGDEQMCF QSVSGLDISY DTIEYRDGVGPLT_02568 MSTTVDQIAV QYPIPTYRFV VTVGDEQMSF QSVSGLDISY DTIEYRDGIG NYYKMPGQRQ AINISLRKGV FSGDTKLFDW INSIQLNQVE KKDISISLTN NYYKMPGQRQ AINISLRKGV FSGDTKLFDW INSIQLNQVE KKDISISLTN NYYKMPGQRQ AINISLRKGV FSGDTKLFDW INSIQLNQVE KKDISISLTN NYYKMPGQRQ AINITLRKGV FSGDTKLFDW LNSIQLNQVE KKDISISLTN NYYKMPGQRQ LINITLRKGV FPGDTKLFDW LNSIQLNQVE KKDVSISLTN NYYKMPGQRQ LINITLRKGV FPGDTKLFDW LNSIQLNQVE KKDVSISLTN NHYKMPGQRQ LINITLRKGV FPGDTKLFDW LNSIQLNQVE KKDVSISLTN NYYKMPGQRQ SINITLRKGV FPGDTKLFDW INSIQLNQVE KKDIAISLTN NYYKMPGQRQ SINITLRKGV FPGDTKLFDW INSIQLNQVE KKDIAISLTN NWFKMPGQSQ LVNITLRKGV FPGKTELFDW INSIQLNQVE KKDITISLTN NWFKMPGQSQ STNITLRKGV FPGKTELFDW INSIQLNQVE KKDITISLTN NYYKMPGQIQ RVDITLRKGI FSGKNDLFNW INSIELNRVE KKDITISLTN NYYKMPGQIQ RVDITLRKGI FSGKNDLFNW INSIELNRVE KKDITISLTN NWLQMPGQRQ RPTITLKRGI FKGQSKLYDW INSISLNQIE KKDISISLTD NWLQMPGQRQ RPTITLKRGI FKGQSKLYDW INSISLNQIE KKDISISLTD NWLQMPGQRQ RPSITLKRGI FKGQSKLYDW INSISLNQIE KKDISISLTD EAGTEILMTW SVANAFPTSL TSPSFDATSN EVAVQEITLT ADRVTIQAA EAGTEILMTW SVANAFPTSL ISPSFDATSN EVAVQEITLT ADRVTIQAA EAGTEILMTW SVANAFPTSL TSPSFDATSN EVAVQEITLT ADRVTIQAA EAGTEILMTW SVANAFPTSL TAPAFDATSN EVAVQEISLT ADRVTIQAA ETGTEILMSW SVANAFPTSL TSPSFDATSN DIAVQEIKLT ADRVTIQAA EVGTEILMTW SVANAFPTSL TSPSFDATSN 
DIAVQEIKLT ADRVTIQAA EAGTEILMSW SVANAFPTSL TSPSFDATSN DIAVQEIKLT ADRVMIQAA ETGSQILMTW NVANAFPTSF TSPSFDAASN DIAIQEIALV ADRVTIQAP EAGTEILMTW NVANAFPTSF TSPSFDATSN EIAVQEIALT ADRVTIQAA DAGTELLMTW NVSNAFPTSL TSPSFDATSN DIAVQEITLT ADRVIMQAV DAGTELLMTW NVSNAFPTSL TSPSFDATSN DIAVQEITLM ADRVIMQAV DTGSEVLMSW VVSNAFPSSL TAPSFDASSN EIAVQEISLV ADRVTIQVP DTGSKVLMSW VVSNAFPSSL TAPSFDASSN EIAVQEISLV ADRVTIQVP ETGSNLLITW NIANAFPEKL TAPSFDATSN EVAVQEMSLK ADRVTVEFH ETGSNLLITW NIANAFPEKL TAPSFDATSN EVAVQEISLK ADRVTVEFH ETGSNLLITW NIANAFPEKL TAPSFDATSN EVAVQEISLK ADRVTVEFHYou'd get this plot:I would like to have a graph with my protein on the x-axis and a plot like the regions of high conservation for the multiple alignments, except with the spikes corresponding to a high number of BLAST hits. This would allow me to identify regions of my protein that have higher sequence similarity to bacteria than others.I'm not sure your logic is quite right here though. Blast won't give you hits depending on a particular position. It's a local aligner, so it'll just return you hits where at least some part of your query matches at least some part of another.What you could do is take the logic in the script above, and just use a different metric. For example, perhaps you could count the proportion of sequences which have the most common amino acid at a given position within your MSA. That would be fairly crude though.As you say in the comments,My final goal is to produce a plot visualizing regions of high bacterial sequence similarity to my human protein of interest.your original alignment will show you this intrinsically, if only you include the sequences of all the BLAST hits in the first place. Thus your work flow will be:Blast sequence of interest.Download all/as many hits as you want (bear in mind the E-value/bitscore and the number of hits you get. It might only be a few dozen, in which case you can use the lot, but if not, just take all the hits below a certain cut-off.)Align all the sequences.Look at the column scores for the whole MSA. You can use whatever metric of conservation you like really. Might be as simple as proportion of sequences with the most common residue, or something more complex like Shannon entropy (though as you can see in the graph above Shannon entropy can be kinda noisy) etc.
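For reference, the per-column Shannon entropy mentioned above is typically computed, for a column $c$ of the MSA with residue frequencies $p_a$ (over the residues $a$ observed in that column), as
$$H(c) = -\sum_{a} p_a \log_2 p_a ,$$
so a perfectly conserved column has $H(c) = 0$ and a maximally variable protein column approaches $\log_2 20 \approx 4.32$ bits; low entropy therefore corresponds to high conservation.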
_webmaster.3784
I have seen many sites which refresh their pages after a fixed interval; sometimes it is only 1-2 minutes. If I refresh a page every 5 minutes, it will be helpful for notifying the user of updated contents. Please tell me:

Should I use Ajax for this instead of reloading the complete page?

Would it affect the Google Analytics JS script? That is, will the counter for visits keep increasing?

Would it affect AdSense?
Is it good to refresh a page after a fixed interval?
google adsense;performance
Automatic refresh is fine if the content is something that is constantly changing and the users would expect and want the most recent content. ESPN.com uses this to update the pages where they report scores; in that case they reload the entire page. NFL.com only updates scores that change instead of reloading the whole page.

How you do it depends on your goals and your users' expectations. Using Ajax is a good idea as it is fast and doesn't affect page statistics like Google Analytics. But refreshing the page may be required if you want to count every page load or do other more complex things that may be too much for Ajax (or just simpler without it).
_webapps.69387
I wish to see the comments I've made on videos. When I go to the comments section on YouTube it only shows me comments made on my videos (and I believe it only shows the ones made after the change to Google+). It does not actually show comments that I've posted. How can I see a full history of all the comments I've made on YouTube?

I found this post but sadly it is outdated and does not work with the Google+ system.
How do I see my comment history on YouTube?
youtube;comments
Unless you publicly share every single comment you leave on a video through Google+, you won't be able to find them in a list, so to speak.
_cs.19844
I'm studying Local Binary Patterns and I'm having trouble understanding the following part about the number of output labels for binary patterns, from Computer Vision Using Local Binary Patterns by Pietikäinen et al. (2011):

Another extension to the original operator uses so called uniform patterns [53]. For this, a uniformity measure of a pattern is used: $U$ (pattern) is the number of bitwise transitions from 0 to 1 or vice versa when the bit pattern is considered circular. A local binary pattern is called uniform if its uniformity measure is at most 2. For example, the patterns 00000000 (0 transitions), 01110000 (2 transitions) and 11001111 (2 transitions) are uniform whereas the patterns 11001001 (4 transitions) and 01010011 (6 transitions) are not. In uniform LBP mapping there is a separate output label for each uniform pattern and all the non-uniform patterns are assigned to a single label. Thus, the number of different output labels for mapping for patterns of $P$ bits is $P(P-1)+3$. For instance, the uniform mapping produces 59 output labels for neighborhoods of 8 sampling points and 243 labels for neighborhoods of 16 sampling points.

I don't understand why the number of different LBP output labels is $P(P-1) + 3$. Could someone explain why? Thanks for any help =)

Update: I think I have some idea already :) I included an example from my book:
Number of different output labels in Local Binary Pattern
image processing;binary arithmetic
A pattern with uniformity measure exactly 2 is determined by: the value $b$ of bit 0, the position $i>0$ of the first bit that is different from bit 0 and the position $j$ of the first bit after the $i$th that is different from bit $i$. (We may have $j=0$, for example in the pattern 00001111, which has $b=0$, $i=4$, $j=0$.) How many choices are there for the triple $(b,i,j)$? Clearly, there are two choices for $b$. For $i$ and $j$, there are $\tfrac12(P-1)(P-2)$ choices with $i<j$ and $P-1$ choices where $j=0$. Thus, the total number of choices for $i$ and $j$ is$$ \tfrac12(P-1)(P-2) + P-1 = \tfrac12P(P-1)\,.$$To that, you need to add three more labels: all zeroes, all ones and the label for all patterns of uniformity measure strictly more than 2.
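As a worked check of the formula: the $2 \cdot \tfrac12 P(P-1) = P(P-1)$ patterns with exactly two transitions, plus the two constant patterns (all zeroes, all ones), plus the single catch-all label for non-uniform patterns give
$$P(P-1) + 3 = 8 \cdot 7 + 3 = 59 \ \text{ for } P = 8, \qquad 16 \cdot 15 + 3 = 243 \ \text{ for } P = 16,$$
matching the label counts quoted in the book excerpt.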
_softwareengineering.342598
I understand what to include in ordinary release notes - there are many questions and answers relating to that.What should I include in the first set of release notes, i.e. for the very first released version? Are release notes needed for the first version?
Release notes for first version
documentation;release management;release
Release notes (changelogs) are primarily used to notify users of changes between versions. Perhaps a new useful feature is available, or an old feature was deprecated.

The first released version has no changes. The release notes are then quite simple:

v1.0.0 2017-02-20
- first release!

Anything else would be confusing, in my experience.

Sometimes, the v1.0 isn't the first release. This is especially the case for open-source projects where v0.x releases are common. In that case, the v1.0 release would have some changes to note. At a minimum, a v1.0 release signifies that the API is now stable, which is a noteworthy change.

For release notes on a website or a mailing list, these should be written less like a technical changelog and more like a press release. The first release is an opportunity to:

- showcase why your software is awesome,
- list differences to competing software, and
- give an overview of your full documentation.

If your changelog is embedded into an application, the first release should probably display a get started guide in this space.
_codereview.153225
Please comment on my DFS implementation and test, I would like to get comments about algorithm correctness.Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking. using System;using System.Collections.Generic;using System.Linq;using Microsoft.VisualStudio.TestTools.UnitTesting;namespace JobInterviewTests{ [TestClass] public class DFSUnitTest { [TestMethod] public void CheckCorrectOrder() { GraphNode root = new GraphNode(A); root.Neighbors.Add(new GraphNode(F)); root.Neighbors.Add(new GraphNode(E)); var temp = new GraphNode(B); var temp2 = new GraphNode(C); var temp3 = new GraphNode(D); temp2.Neighbors.Add(temp3); temp.Neighbors.Add(temp2); root.Neighbors.Add(temp); List<string> result =TraverseDFS(root); Assert.AreEqual(A,result[0]); Assert.AreEqual(B,result[1]); Assert.AreEqual(C,result[2]); Assert.AreEqual(D,result[3]); Assert.AreEqual(E,result[4]); Assert.AreEqual(F,result[5]); } private List<string> TraverseDFS(GraphNode root) { if (root == null) { return null; } List<string> result = new List<string>(); Stack<GraphNode> s = new Stack<GraphNode>(); s.Push(root); while (s.Any()) { GraphNode curr = s.Pop(); result.Add(curr.Value); foreach (var node in curr.Neighbors) { s.Push(node); } } return result; } } public class GraphNode { public string Value { get; set; } public List<GraphNode> Neighbors { get; set; } public GraphNode(string s) { Value = s; Neighbors = new List<GraphNode>(); } }}
DFS algorithm + unit test
c#;interview questions
The implementation of depth-first traversal is correct. But there are several elements of this program and tests that could be improved.

Prefer empty collections over null values

When the input graph is empty, it would be better to return an empty list instead of null. Collections are used for iterating over their elements. When a collection variable can be null, that's troublesome, because before you can iterate over it, you have to check that it's not null.

Constraints and limitations

The implementation will only work with directed graphs without cycles, otherwise you will have an infinite loop. This is important to include in the description. A common follow-up question at an interview is how to make it work in graphs with cycles.

Weak tests

The test knows too much about the implementation. There are 6 possible valid depth-first traversals of the example graph, and the test knows exactly which one to expect. It all comes down to the evaluation order of neighbors; in your implementation it's the reverse of the order in which they were added. Either this implementation detail should be documented, or the test should not depend on it.

The example graph is not a good choice to demonstrate depth-first traversal, because A F E B C D would be valid both as a depth-first and as a breadth-first traversal. A more expressive example would be:

    A
   / \
  B   C
 / \
D   E

With this example all depth-first and breadth-first traversals would be distinct.

Lastly, a technical detail: instead of comparing the elements of a list one by one to values, you could compare entire lists, which would be a lot easier to type.

List<string> expected = new List<string>(new string[] { "A", "B", "C", "D", "E", "F" });
List<string> result = TraverseDFS(root);
Assert.AreEqual(expected, result);

Don't provide setters if you don't really need them

The fields of GraphNode are never modified, therefore the setters are unnecessary.
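To illustrate the cycles follow-up mentioned above, here is a minimal, self-contained C# sketch (not part of the original review) of the same iterative traversal made cycle-safe by tracking visited nodes:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class GraphNode
    {
        public string Value { get; }
        public List<GraphNode> Neighbors { get; } = new List<GraphNode>();
        public GraphNode(string value) { Value = value; }
    }

    public static class Traversal
    {
        // Iterative DFS that tolerates cycles by remembering visited nodes.
        public static List<string> TraverseDfs(GraphNode root)
        {
            var result = new List<string>();
            if (root == null) return result;          // empty list instead of null

            var visited = new HashSet<GraphNode>();   // reference identity is sufficient here
            var stack = new Stack<GraphNode>();
            stack.Push(root);

            while (stack.Any())
            {
                var current = stack.Pop();
                if (!visited.Add(current)) continue;  // already seen: skip, avoiding infinite loops
                result.Add(current.Value);
                foreach (var neighbor in current.Neighbors)
                    stack.Push(neighbor);
            }
            return result;
        }
    }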
_codereview.4776
I used to mostly do this:

class Person
  constructor : (@parent, @data)->
    @index = @parent.length

class People
  constructor : ->
    people = []
    for data in database.people
      people.push new Person(people, data)

Lately I have been trying the following:

class Person
  constructor : (@data)->

  pushTo : (list)->
    this.index = list.length
    list.push this

class People
  constructor : ->
    people = []
    for data in database.people
      person = new Person(data)
      person.pushTo people

So I was wondering if there are any cons to doing it the second way. The reason I am asking is that it might not be the responsibility of Person to add himself to the list.
Adding push to array inside the class that is being pushed
design patterns;coffeescript
The reason I am asking is that it might not be the responsibility of Person to add himself to the list.I think you should ask youself a question Why am I doing this? Why do you need to keep the @index inside a Person? Is it just an id? If so why don't you use _.uniqueId('person') from Underscore JS? That would be much cleaner.Do you use it for ordering so you could for example sort an array of people by it? If so uniqueId would still suit your needs well.Do you use it to search for person by it's index in the list later on? Basically I can envision two kinds of data processing.You iterate over a list of people. In this case you'll be fine with Underscore's each method:_.each people, (person, index) -> // your code here ^----- see that? ;)You got a person data from some external source and you want to reuse the associated Person object. In this case you should still use an ID and search for your object with find or select methods.Some notes:Both your solutions are totally fine if you certain that they don't complicate the code of your application. They also apply in cases when your collection is very large and find takes too much time. BUT you can speed up selection on later invocations:indexFor = (targetPerson) -> unless person.index // we search for index once and then store it for later use _.find people, (person, index) -> person.index = index if found = (person.id == targetPerson.id) found targetPerson.indexuniqueId uses internal counter so the ID is always increasing. It cannot be reset which may not be always convenient. For example if you need to save your collection and reload it later you won't be able to tell the generator to increase the counter accordingly. In that case I would still go with uniqueId for presentation code but use some other ID generation scheme for persistence. I would go with UUIDs: but you can also try timestamps though I wouldn't recommend it since generally time is way too difficult to get right.You should definitely check out other methods in Underscore library. Whenever you encounter a kind of low-level task you should ask if Underscore does it for you already. If it doesn't check jQuery or whatever library you use for UI. If you're still out of luck ask Google or StackOverflow. I'm sure you're great coder and you can solve many of those tasks yourself but why should you spend your time and effort reinventing stuff instead of creating something new? Know your libraries and use them. It's like a new language or a new text editor - at first it's painful but it pays off later on.
_codereview.47631
TL;DR: The Bash script converts a published, somewhat-structured text file into a format usable by my testing infrastructure. It's slow, I think it's ugly -- although it is fully functional.The NIST provides test vectors for verifying the correct operation of a Galois Counter Mode (GCM) when used with the AES block cipher (I only care about the 128-bit key files, and have not looked into the format of the other files).In order to actually use these test vectors for automated testing of my GCM-AES implementation, I have to convert them from the RSP file that they come in, into a debug script that my chip simulator (mspdebug) can use. There are also other scripts, as well as driver code, that the test vectors ultimately interact with -- but it is sufficient for this problem to have each variable in the RSP file set in memory with an mw command (e.g., PT = 010203 should become mw PT 0x01 0x02 0x03), so long as each group of tests that share common values is broken into a separate file.As an operational example, from this section of the gcmEncryptExtIV128.rsp input file, this output is generated. Note that the input file generates 525 such files, from 525 corresponding sections within itself. The script, as written, will not work on just the subsection linked above, as the complete RSP has some extra junk at the start that gets trimmed out (though you can probably get it to work with some fiddling). Note also that none of the encryption tests are marked with a FAIL -- this token occurs only within the decryption tests, but the contents of the two RSP files are otherwise identical. For the sake of uniformity, each encryption test (as well as the non-failing decryption tests) have an output line of mw FAIL 0.The script is incredibly slow (it takes ~15 minutes to run against a single RSP file on a modest modern machine), primarily because of the sed expression for ensuring that every test block has a FAIL setting. It does, however, correctly spit out each test group into a well-named file, and convert all of the input data into a format that my simulator can handle.Any thoughts on how to speed this up, improve the readability, or conform to best practices are welcome. More drastic actions (e.g., re-casting this as an AWK script) are also interesting to me, if folks think this approach is just all-out incorrect.#!/bin/bashif ! [ -f $1 ]; then echo You must specify a valid input file. exitfi# Strip off the file extension of the input file nameBASEFILE=${1%.*}# Strip off any trailing digits of the input file nameuntil [ $TEMP == $BASEFILE ]; do TEMP=$BASEFILE BASEFILE=${BASEFILE%[0-9]}doneunset TEMP# - Convert the file's line endings# - Strip out the RSP file header and leading blank linesdos2unix < $1 | tail -n +7 > temp.txt# Convert the len values from decimal to hex# Process the temp file line by line to do thiscat temp.txt | \while read VARNAME EQUALS VALUE; do # If this line's variable name ends in len if [ ${VARNAME%%*len} != $VARNAME ]; then # Output the line (removing the starting [, up to the = sign echo -n ${VARNAME#[[]} $EQUALS # Then convert the value from decimal to hex, printing it at the very end of the line. # s/^(.?.)$/0x\1 0x0/ - If we have a 1 or 2 digit number, put it in the LSB position. # s/^(.?.)(..)$/0x\2 0x\1/ - If we have a 3 or 4 digit number, put it in little-endian order. echo obase=16; ${VALUE%%[]]*} | bc | sed -re 's/^(.?.)$/0x\1 0x0/' -e 's/^(.?.)(..)$/0x\2 0x\1/' else # This isn't a length variable? It's already hex, then; just print it straight out. 
echo $VARNAME $EQUALS $VALUE; fidone > temp2.txtmv temp2.txt temp.txt# - Strip out the block-level variable's enclosing square brackets# - Strip out the Count lines (we don't need them for anything).# - Strip lines with no values# - Strip trailing spacessed -ri -e 's/\[|\]//g' -e '/^Count .*/d' -e '/^[^ ]+ = $/d' -e 's/ +$//' temp.txt# Convert the var = value format to mw var value used by MSPDsed -rie 's/^([^ ]*) =/mw \1/' temp.txt# Convert hex values to 0x## byte notation MSPD will understand.# :loop - A label we'll need later# ^(mw (Key|IV|PT|AAD|CT|Tag) ) - Match only these keys (and eat them and their trailing space)# ((0x[0-9a-f]{2} )*) - Eat up any parts of the hex string that have already been split into 0x-prefixed bytes# ([0-9a-f]{2}) - Capture the next un-processed byte's worth of digits, if they exist# (.*)$ - Capture the rest of the line.# \1\30x\5 \6 - Paste the key (and its space), the processed hex, the new 0x, hex byte, and space, and then any remainder.# t loop - If we actually replaced something, run the replace again (go back to :loop)sed -ri -e ':loop' -e 's/^(mw (Key|IV|PT|AAD|CT|Tag) )((0x[0-9a-f]{2} )*)([0-9a-f]{2})(.*)$/\1\30x\5 \6/' -e 't loop' temp.txt# Split each test block into its own file.# /^mw Keylen/ - If this is a Keylen line, it's the start of a new test block# {x=test-++i;} - Increment our counter (start printing to a new file)# {print > x;} - Append the current line in the buffer to the current file.awk '/^mw Keylen/{x=test-++i;}{print > x;}' temp.txt# - Normalize each test so it has an appropriate FAIL line# - Normalize each test so it reads the test round (after setting variables)# - Rename the files to reflect the tests they contain.for FILE in test-*; do # Make sure that there always exists a pass OR fail indicator for each test round. # --- This is incredibly slow :( --- # $!P - If this is not the last buffer, print it. # 7~1N - Process 2 lines, starting from the 7th line on # /FAIL\n$/! - If this is NOT a FAIL line followed by a blank line... # s/(.*)\n$/ - If this IS (then) a line followed by a blank line... # /\1\nmw FAIL 0\n/ - insert md FAIL 0 as a line. # D - shift out the oldest (first) line, and jump back to the N sed -ri -e '$!P' -e '7~1N' -e '/FAIL\n$/!{s/(.*)\n$/\1\nmw FAIL 0\n/}' -e 'D' $FILE # Replace all the FAIL lines with appropriate memory sets. sed -rie 's/^FAIL$/mw FAIL 1/' $FILE # For each discrete set of test variables, read in the file responsible for actually running a single test round. # 7~1 - Skip the first 7 lines of the file # s/^$/ - If this is a blank line... # /read gcm_test_round.mspd\n/ - Insert the appropriate read line. sed -rie '7~1s/^$/read gcm_test_round.mspd\n/' $FILE # Rename the files to reflect the class of tests they contain. # head -n5 $FILE - Grab the first five lines of the file, which hold (in order) the values for key length, IV length, text length, AAD length, and tag length for all the test entries contained in that file. # ^.* 0x(.?.) 0x(.?.) - Match the two 1-2 digit hex numbers at the end of the lines # ibase=16; \2\1 - Put the bytes back into big-endian, and strip the 0x (prep for BC) # { while read; do echo $REPLY | bc; done; } - Pipe each line to BC one by one, converting the hex values back to decimal # :a - Label a # N - Append another line to the buffer # $!ba - If this is NOT the last line, branch to A # s/\n/-/g - Replace all the newlines in the processing space with dashes mv $FILE $BASEFILE`head -n5 $FILE | sed -re 's/^.* 0x(.?.) 
0x(.?.)/ibase=16; \2\1/g' | { while read; do echo $REPLY | bc; done; } | sed -re ':a' -e 'N' -e '$!ba' -e 's/\n/-/g'`.mspd # The resulting renamed files are of the format: # [BASEFILE][Keylen]-[IVlen]-[PTlen]-[AADlen]-[Taglen].mspd # Get rid of the temporary file rm ${FILE}edone# Get rid of temporary filesrm temp.txt{,e}
Bash script to convert NIST vectors to debug scripts
performance;bash;converting;sed
Rewriting it in AWK would definitely result in a huge improvement, enough to say that writing it in Bash was a poor choice. Many of the considerations for this problem favour AWK:The input is line-oriented.Nearly every line has the same key = value format, except for the headers with [key = value] instead. Most importantly, they all share the same = delimiter.All of the processing can be done using simple text transformations and arithmetic.Processing can be done in one pass, with very little state to maintain.I think that Bash is underpowered for this problem, and is therefore a poor fit. The repeated use of sed is not only a performance barrier; the constant intermingling of Bash and sed hurts readability.Of course, any other general-purpose programming language would also work. However, considering that any system that has Bash will also have AWK, and AWK is just powerful enough to handle this problem comfortably, that's what I would choose. Besides, you already used a tiny bit of AWK within your Bash script why not go all the way? The AWK program below is much faster than your Bash script, and in my opinion, more readable. That said, there are some minor improvements that could be made to the Bash-based solution. I may eventually return to review it.#!/usr/bin/awk -fBEGIN { FS = = ; NUM_HEADERS = 0;}####################################################################### Skip first 6 lines######################################################################FNR < 7 { next }####################################################################### dos2unix######################################################################{ sub(\r$, ); }####################################################################### Read headers, of the form# [Keylen = 96]######################################################################/\[.*\]/ { gsub(\\[|\\], ); HEADER_NAME[NUM_HEADERS++] = $1; HEADER_VALUE[$1] = $2; next;}####################################################################### End of headers. 
Determine output file, and write out the headers.# Output filename is of the form# [BASEFILE][Keylen]-[IVlen]-[PTlen]-[AADlen]-[Taglen].mspd######################################################################NUM_HEADERS > 0 { if (OUT) { end_of_stanza(); close(OUT); } basename = FILENAME; sub(\\..*, , basename); sub([0-9]*$, , basename); OUT = sprintf(%s%d-%d-%d-%d-%d.mspd, basename, HEADER_VALUE[Keylen], HEADER_VALUE[IVlen], HEADER_VALUE[PTlen], HEADER_VALUE[AADlen], HEADER_VALUE[Taglen]); for (h = 0; h < NUM_HEADERS; h++) { header_name = HEADER_NAME[h]; hex_value = sprintf(%04x, HEADER_VALUE[header_name]); printf mw %s 0x%s 0x%s\n, header_name, substr(hex_value, 3, 2), substr(hex_value, 1, 2) > OUT; } NUM_HEADERS = 0; FAIL = ; next;}####################################################################### Split values of Key, IV, PT, AAD, CT, and Tag into hex bytes######################################################################$1 ~ /^(Key|IV|PT|AAD|CT|Tag)$/ && $2 ~ /^([0-9a-f][0-9a-f])+$/{ split($2, a, ); $2 = ; for (i = 1; i < length(a); i += 2) { $2 = sprintf(%s 0x%s%s, $2, a[i], a[i + 1]); } $2 = substr($2, 2);}####################################################################### Stanza processing: mark failure or non-failure######################################################################function end_of_stanza() { if (FAIL != ) { print mw FAIL, FAIL > OUT; print read gcm_test_round.mspd\n > OUT; } FAIL = 0; print > OUT;}$1 == FAIL { FAIL = 1; next;}$1 == Count { end_of_stanza(); next;}END { end_of_stanza(); close(OUT);}####################################################################### Normal body line######################################################################!/^$/ { if ($2 == ) { print mw, $1 > OUT; } else { print mw, $1, $2 > OUT; }}
_unix.367996
It seems that there are unsolvable issues when printing from Plasma apps with the native print dialogue. However, printing from GTK applications appears to be fine. Is there a way to force native KDE Plasma applications to print using the GTK dialogue box?
Can I use the gtk print dialogue for KDE Plasma applications?
kde;gtk;plasma;kde5;plasma5
null
_webmaster.15190
I wanted to know if you are allowed to save a static image of an area of a map, so that the usage limit does not run out and the map does not get replaced with a quota exceeded image. If it's allowed, then I can save the image onto my server and link it around my website instead of making requests to the Google Static Maps API.

Google Static Maps API

I have not seen a policy where you are not allowed, but all I need is confirmation of whether you are allowed or not, so I don't get in any kind of trouble.
Are you allowed to save Google static map images?
google;google maps
The limit is 1000 different images, per person [not site], per day. Are you really sure this is even a concern for you?Anyway, from the full ToS: You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the performance of your Maps API Implementation if you do so temporarily, securely, and in a manner that does not permit use of the Content outside of the Service [...]
_codereview.8817
I need this kind of ORDER BY to list my curriculum vitae data: when ep.data_termino is null it means that the job is your current job (you can have multiple current jobs), so I must list those first. Then I need to list all the others ordered by ep.data_termino ASC.

The IF was my only choice. I know that it is never a good option to use IFs in queries (at least that's how I learned it), so I am asking here if there is any better solution.

SELECT ep.nome_empresa, ep.data_inicio, ep.descricao,
       ct.nome as contratacao_tipo_nome,
       p.nome as profissao_nome,
       IF(ep.data_termino IS NULL, NOW(), ep.data_termino) as data_term
FROM experiencia_profissional as ep
JOIN contratacao_tipo as ct ON ct.codigo = ep.codigo_contratacao_tipo
JOIN profissao as p ON p.codigo = ep.codigo_profissao
JOIN experiencia_profissional_membro as epm ON epm.codigo_experiencia_profissional = ep.codigo
JOIN membro as m ON m.codigo = epm.codigo_membro
JOIN usuario as u ON u.codigo_membro = m.codigo
WHERE u.codigo = 124
ORDER BY data_term DESC, ep.data_inicio ASC
Listing curriculum vitae data
sql;mysql
Try:

ORDER BY ep.data_termino IS NOT NULL, ep.data_termino DESC, ep.data_inicio ASC

BTW, IFNULL(ep.data_termino, NOW()) is a cleaner way to express:

IF(ep.data_termino IS NULL, NOW(), ep.data_termino)
_unix.333510
I am trying to replace a string in a text file that is a CSV file. The row delimiter string is {id: and want to insert a new line before each occurrence of it as the CSV file appears only as one row and all columns.Ideally I need the file to be separated by commas translating to columns, and everywhere a {id: occurs to translate to a new line i.e. new roweach column should be delimited by *:, where * indicates any text e.g.: TLP: or id:A sample of the file is below, and the sample text should result in 3 rows and a column for each labelSorry for a painful question, however I'ave tried every combination of sed and awk I can think of and nothing works {id:5863ddde2577f521dccd9a3a,name:Switcher: Android joins the attack-the-router club,description:Recently, in our never-ending quest to protect the world from malware, we found a misbehaving Android trojan. Although malware targeting the Android OS stopped being a novelty quite some time ago, this trojan is quite unique. Instead of attacking a user, it attacks the Wi-Fi network the user is connected to, or, to be precise, the wireless router that serves the network. The trojan, dubbed Trojan.AndroidOS.Switcher, performs a brute-force password guessing attack on the routers admin web interface. If the attack succeeds, the malware changes the addresses of the DNS servers in the routers settings, thereby rerouting all DNS queries from devices in the attacked Wi-Fi network to the servers of the cybercriminals (such an attack is also known as DNS-hijacking). So, let us explain in detail how Switcher performs its brute-force attacks, gets into the routers and undertakes its DNS-hijack.,author_name:AlienVault,modified:2016-12-28T15:44:30.187000,created:2016-12-28T15:44:30.187000,tags:[android,baidu,android,mobile,dns hijack,Trojan.AndroidOS.Switcher,Kaspersky],references:[hxxps://securelist.com/blog/mobile/76969/switcher-android-joins-the-attack-the-router-club/],revision:1.0,indicators:[{content:,indicator:acdb7bfebf04affd227c93c97df536cf,description:,created:2016-12-28T15:44:31,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:FileHash-MD5,id:1744766,observations:1},{content:,indicator:64490fbecefa3fcdacd41995887fe510,description:,created:2016-12-28T15:44:31,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:FileHash-MD5,id:1744767,observations:1},{content:,indicator:101.200.147.153,description:,created:2016-12-28T15:44:31,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:IPv4,id:1744768,observations:1},{content:,indicator:112.33.13.11,description:,created:2016-12-28T15:44:31,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:IPv4,id:1744769,observations:1},{content:,indicator:120.76.249.59,description:,created:2016-12-28T15:44:31,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:IPv4,id:1744770,observations:1}],TLP:green,public:true,adversary:,targeted_countries:[China],industries:[]},{id:585bdcd497316a2db901eaa5,name:Fancy Bear Tracking of Ukrainian Field Artillery Units,description:Late in the summer of 2016, CrowdStrike Intelligence analysts began investigating a curious Android Package (APK) named -30.apk which contained a number of Russian language artifacts that were military in nature. 
Initial research identified that the filename suggested a relationship to the D-30 122mm towed howitzer, an artillery weapon first manufactured in the Soviet Union in the 1960s but still in use today. In-depth reverse engineering revealed the APK contained an Android variant of X-Agent, the command and control protocol was closely linked to observed Windows variants of X-Agent, and utilized a cryptographic algorithm called RC4 with a very similar 50 byte base key.,author_name:AlienVault,modified:2016-12-22T14:03:53.674000,created:2016-12-22T14:01:56.495000,tags:[apt28,fancy bear,ukraine,military,X-Agent,D-30,crowdstrike],references:[hxxps://www.crowdstrike.com/blog/danger-close-fancy-bear-tracking-ukrainian-field-artillery-units/,hxxps://www.crowdstrike.com/wp-content/brochures/FancyBearTracksUkrainianArtillery.pdf],revision:2.0,indicators:[{content:,indicator:69.90.132.215,description:,created:2016-12-22T14:01:57,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:command_and_control,expiration:null,type:IPv4,id:1683228,observations:1},{content:,indicator:6f7523d3019fa190499f327211e01fcb,description:,created:2016-12-22T14:01:57,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:FileHash-MD5,id:1683229,observations:2}],TLP:green,public:true,adversary:Fancy Bear,targeted_countries:[Ukraine],industries:[defence,military]},{id:585ae32297316a22f301eaa5,name:Fake Apps Take Advantage of Super Mario Run Release,description:Earlier this year, we talked about how cybercriminals took advantage of the popularity of Pokemon Go to launch their own malicious apps. As 2016 comes to a close, we observe the same thing happening to another of Nintendos game properties: Super Mario.\n\nIn advance of any official release, cybercriminals have already released their own Mario-related apps. Since 2012, we have found more than 9,000 apps using the Mario name on various sources online. About two-thirds of these apps show some kind of malicious behavior, including displaying ads and downloading apps without the users consent.,author_name:AlienVault,modified:2016-12-21T20:16:34.201000,created:2016-12-21T20:16:34.201000,tags:[super mario,android,mario,nintendo,google play,malware,trendmicro],references:[hxxp://blog.trendmicro.com/trendlabs-security-intelligence/fake-apps-take-advantage-mario-run-release/],revision:1.0,indicators:[{content:,indicator:8373aedc9819ff5dacb0fc1864eeb96adc5210b2,description:,created:2016-12-21T20:16:35,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:FileHash-SHA1,id:1674453,observations:1},{content:,indicator:4ba312a6eaf79da9036d4228a43f19c611345a5a,description:,created:2016-12-21T20:16:35,is_active:1,title:,access_reason:,access_type:public,access_groups:[],role:null,expiration:null,type:FileHash-SHA1,id:1674454,observations:1}],TLP:green,public:true,adversary:,targeted_countries:[],industries:[]}]
insert new lines into a csv file obtained via curl on an api
text processing;awk;sed;json
null
_codereview.146288
I've written a program that calculates all of the palindrome numbers in the given range, but the code is slower than needed. I've tried to improve the algorithmic complexity to the best of my abilities, but it is still over the passing time limit (1 second). Is it possible to improve the runtime performance of the code?

\$n\$ - number of tests
\$a, b\$ - range in which I need to calculate the number of palindromes

Here is the code:

#include <stdio.h>

int ifpalin(int g)
{
    int rev = 0;
    int tmp = g;
    while (tmp > 0)
    {
        rev = rev * 10 + (tmp % 10);
        tmp = tmp / 10;
    }
    if (rev == g)
        return 1;
    else
        return 0;
}

int findpalin(int a1, int b1)
{
    int sm = 0;
    for (int i = a1; i <= b1; i++)
    {
        if (ifpalin(i) == 1)
            sm++;
    }
    printf("%d", sm);
    printf("\n");
    return 0;
}

int main()
{
    int a, b, n;
    scanf("%d", &n);
    for (int i = 0; i < n; i++)
    {
        scanf("%d", &a);
        scanf("%d", &b);
        findpalin(a, b);
    }
    return 0;
}
Calculate the number of palindrome numbers in the given ranges
performance;algorithm;c;time limit exceeded;palindrome
null
_unix.253464
The time -v command outputs % CPU utilization for a given command on Linux. How do I do this on OS X? The Linux/OS X difference is illustrated here. I would like to measure multi-core utilization across the total execution period of a short-running program, so top probably wouldn't work, as it measures/averages at particular points in time.
Resource usage (% CPU) for given command on OS X
osx;performance;command;time;measure
Seems like there is no real alternative to the gnu time command. So, in the end I installed just that. On OS X gnu-time can be installed with homebrew: brew install gnu-time. Thereafter CPU utilization for a specific command can be measured using gtime <command>. A test shows that my program is indeed running concurrently: 1.73user 0.13system 0:01.61elapsed 115%CPU.
_codereview.85481
I asked this question on StackOverflow, got some answers, most notably a link to this one, and basing on that I've implemented this:{-# LANGUAGE RankNTypes #-}{-# LANGUAGE FlexibleContexts #-}module Main whereimport Control.Monad.Stateimport Control.Monad.IO.Class-- Module----------------------------------------------------------------------------------------newtype Module m a b = Module (a -> m (b, Module m a b)){-instance (Monad m) => Applicative (Module m a)instance (Monad m) => Arrow (Module m)instance (Monad m) => Category (Module m)instance (Monad m) => Functor (Module m a)-}-- GraphicsModule----------------------------------------------------------------------------------------data GraphicsState = GraphicsState Intrender :: (MonadState GraphicsState m, MonadIO m) => Int -> m ()render x = do (GraphicsState s) <- get liftIO $ print $ x + s put . GraphicsState $ s + 1type GraphicsModule = Module IO Int ()initialGraphicsState = GraphicsState 0createGraphicsModule :: GraphicsState -> GraphicsModule createGraphicsModule initialState = Module $ \x -> do (r, s') <- runStateT (render x) initialState return (r, createGraphicsModule s') initialGraphicsModule = createGraphicsModule initialGraphicsStaterunModule (Module m) x = m x-- Program----------------------------------------------------------------------------------------data ProgramState = ProgramState { graphicsModule :: GraphicsModule}renderInProgram :: (MonadState ProgramState m, MonadIO m) => Int -> m ()renderInProgram x = do gm <- gets graphicsModule (r, gm') <- liftIO $ runModule gm x modify $ \g -> g { graphicsModule = gm' }initialProgramState = ProgramState initialGraphicsModulemain = runStateT prog initialProgramStateprog = do renderInProgram 1 renderInProgram 1 renderInProgram 1I can see how this could be quite easily extended to allow more functions in a module (instead of just render). I am not sure if I'm keeping the state correctly, though. That was the only way I saw to not expose the inner, stateful context (note that the outer monad to the module is just IO).Also I am aware of the fact that Lens could make it less verbose. I deliberately chose to not depend on Lens, and I think it's really functionally equivalent.
Polymorphic components for graphics and program state
haskell;polymorphism;monads;state
null
_softwareengineering.101246
We are working on an LoB app with lots of content. The XAP download will, I think, be rather large for a one-time download, so I'm planning to break the solution into separate projects. Not sure why - but I DO NOT like having many, many projects in a solution. IMO it is not a very good idea. Even when we were working as a large team, it was a pain with many projects. Even now that I'm using MEF/PRISM, there are still going to be some core dependencies like:
PRISM libraries
Interfaces
Navigation
Shell/Bootstrapper
App styles
Converters/Commands/Validators
etc.
And then I'm going to have modules that will use all that CORE stuff. Modules will have the following inside them:
RIA Services client side
ViewModels
Views
Those modules will be loaded on demand using MEF. I think size-wise all those modules will be larger than the core module because of the amount of logic in them. I expect to have about 5-6 modules plus CORE. I think that will give me a reasonable number of client-side projects and libraries/XAPs, and it will be a manageable solution to work with. Do you see any issue with a breakdown like this? Some online videos make 7+ projects out of the CORE module. What's the point? I think it adds to complexity. Then they show 3-4 DLLs out of a module: one for views, one for viewmodels and so on. They still need to be loaded together, so why? I'm looking for do's and don'ts from you guys who went through this. Thank you!
Silverlight - modularity. Best way to physically separate binaries?
architecture;silverlight;enterprise architecture
Why do you feel that a solution with multiple projects is a BAD IDEA? If your only doubt about this approach is that the solution will become cluttered, then you can simply create multiple solutions, each with a different development perspective.
Solution == Perspective Approach
A solution is merely a collection of projects and defines how a series of projects is built. In my project setups I will typically have a Server Side solution, a Business Entity solution, a number of different Presentation Layer solutions and a Master solution that builds all projects. Projects in the Server Side solution may be in the Business Entity solution as well. This way developers only need to develop in the solution/perspective which is applicable to them. This avoids the clutter of large numbers of projects.
_cogsci.12455
There are some long-term diseases where the severity of your symptoms tends towards a 'normal'. So imagine plotting out the severity of the symptoms, say, every day or every week, then drawing a line of best fit through them. Over the long term, severity may change slowly. However, in the short term, statistically speaking, if symptoms have been super bad for a short while, they'll tend to get better, and if symptoms have been very good for a short while, they'll tend to get worse. There's a name for this trend. I'm not sure if it's actually a cognitive bias; it could be more of a statistical fallacy type thing. I can't remember the name, and I've searched for it in every way I can think of on Google but no luck. I originally read about it in the context of pseudoscience treatments: someone has a long-term condition, they have a very 'bad year', they go see, say, a homeopath, and then they get a bit better and ascribe their improvement to the homeopathy, whereas in fact, statistically speaking, they were likely to have gotten a bit better on their own.
What is the name for the cognitive bias that ignores that extreme symptoms always tend to get less extreme?
terminology;clinical psychology;statistics;bias;regression
The statistical phenomenon you are referring to is called regression to the mean. It describes the fact that extreme measurements which are affected by random fluctuation will tend to be closer to the average in subsequent measurements. This phenomenon is not limited to clinical symptoms but occurs for any measurement that is selected for its extremity.As an example, you let students take two tests that are designed to assess the same underlying aptitude. Then you look at the 5% best and the 5% worst performers in the first test. In the second test, the score of the high performing students will likely drop to some extent and the score of the low performing students will likely rise to some extent, even though the aptitude has not changed. This is because the test results are affected by random variation (they are not perfectly correlated) that is independent from the aptitude.The failure to take statistical regression into account gives rise to the so-called regression fallacy. The regression fallacy refers to the tendency to attribute the change of extreme scores to spurious causal reasons. As a frequent example, if a football team loses some games and fires the coach, subsequent increases in performance may be explained with this fact even though it was just due to chance.
_codereview.173150
I have a Table that I store All setting in that , it is like below :public class Setting:IEntity{ public string Key { get; set; } public string Value { get; set; }}and I have a service like this to update and read Setting Table : public class SettingService : ISettingService{ #region Fields private readonly IUnitOfWork _uow; private readonly IDbSet<Setting> _settings; private static readonly ConcurrentDictionary<string, object> _cash = new ConcurrentDictionary<string, object>(); #endregion #region Methods #region ctor public SettingService(IUnitOfWork uow) { _uow = uow; _settings = _uow.Set<Setting>(); if (_cash.IsEmpty) lock (_cash) { if (_cash.IsEmpty) _settings.ToList().ForEach(item => _cash.TryAdd(item.Key, item.Value)); } } #endregion public T Get<T>() where T : ISetting { object value; var setting = Activator.CreateInstance<T>(); var prefix = typeof(T).Name; foreach (PropertyInfo item in typeof(T).GetProperties()) { string key = ${prefix}.{item.Name}; _cash.TryGetValue(key, out value); if (item.PropertyType == typeof(Boolean)) { bool result; Boolean.TryParse(value?.ToString(), out result); item.SetValue(setting, result); } else item.SetValue(setting, value); } return setting; } public void Set<T>(T model) where T : ISetting { var prefix = typeof(T).Name; Type type = typeof(T); foreach (PropertyInfo prop in typeof(T).GetProperties()) { var key = ${prefix}.{prop.Name}; var setting = _settings.FirstOrDefault(row => row.Key == key); var isAddedd = true; if (setting == null) { setting = new Setting { Key = key }; _settings.Add(setting); _uow.MarkAsAdded(setting); isAddedd = false; } setting.Value = prop.GetValue(model, null)?.ToString() ?? string.Empty; if (isAddedd) _uow.MarkAsChanged(setting); _cash.AddOrUpdate(key, setting.Value, (oldkey, oldValue) => setting.Value); } } #endregion}I use this Service Like below :var data = _settingService.Get<AboutSetting>(); // when I want to featch from db _settingService.Set<AboutSetting>(aboutUsViewModel);// for updatenow I need to Read All Project Setting from project , in some Views I need just some of them like Address , Tel ,...I have created some Classes Like below :public static class CompanyConfig{ private static CompanyInformationSetting _companySettings; private static ISettingService _settingService; static CompanyConfig() { _settingService = ApplicationObjectFactory.Container.GetInstance<ISettingService>(); _companySettings = _settingService.Get<CompanyInformationSetting>(); } public static string CompanyAddress { get { return _companySettings.Address; } }}and use them in View Like : <h2> Address : @(CompanyConfig.CompanyAddress) </h2>Is there a way better than this , does this way bad for Performance ?Another Way is to change BaseViewPage Like below and set some properties like below :public class BaseViewPage<TModel> : WebViewPage<TModel>{ private readonly ISettingService _settingService; public BaseViewPage() { _settingService = ApplicationObjectFactory.Container.GetInstance<ISettingService>(); } public override void Execute() { } public string CompanyName { get { return _settingService.Get<CompanyInformationSetting>(CompanyName); } } public string CompanyPhoneNumber { get { return _settingService.Get<CompanyInformationSetting>(PhoneNumber); } } public string CompanyEmail { get { return _settingService.Get<CompanyInformationSetting>(Email); } } public string CompanyAddress { get { return _settingService.Get<CompanyInformationSetting>(Address); } }}in this way we dont need to static class and it solves singleton of UnitOfwork
Read project setting in asp.net mvc
c#;mvc;asp.net mvc
Quick note first: _cash should be _cache. Dictionary definition: "a temporary storage space or memory that allows fast access to data".
Is this bad for performance?
You'd have to measure it, but you've got a non-trivial amount of reflection on every call to Get and Set. Why not serialize your values to JSON or XML to store in the database? Create an interface and you can decide on your implementation later. You could test a bunch of serializers and see which is fastest.
public interface ISettingSerializer{ string Serialize<T>(T value); T Deserialize<T>(string value);}
What I'm really trying to say is: don't create your own serialization. There are already multiple good options that support a lot of configuration. E.g. think about attributes to control how or even whether a member is included.
You have another problem - a captive dependency. You have a singleton CompanyConfig (as it's static) that holds a reference to a unit of work (through the settings service), which shouldn't be kept alive for the lifetime of the application IMO.
_vi.13023
The problem: I've used ctags with C++ code for a while, but that has no knowledge of the code. If there are many subclasses that overwrite a certain virtual function, then on a Ctrl-] I may end up in the definition of an unexpected subclass.Are there better ways to do this?I've seen recommendation for rtags and uctags, are these viable and better alternatives as of now? Why?
Alternatives to ctags: are rtags, uctags or other alternatives better?
tags;ctags;filetype c++
null
_unix.246311
I've been struggling to get synaptic to work properly since installing arch on my laptop about a year ago, and its finally broken completely. I need help figuring out why, as nothing I've done in the past few hours has fixed it.output of xinput immediately after realizing the cursor didn't move: Virtual core pointer id=2 [master pointer (3)] Virtual core XTEST pointer id=4 [slave pointer (2)] Virtual core keyboard id=3 [master keyboard (2)] Virtual core XTEST keyboard id=5 [slave keyboard (3)] Power Button id=6 [slave keyboard (3)] Video Bus id=7 [slave keyboard (3)] Video Bus id=8 [slave keyboard (3)] Power Button id=9 [slave keyboard (3)] Sleep Button id=10 [slave keyboard (3)] HD WebCam id=11 [slave keyboard (3)] AT Translated Set 2 keyboard id=12 [slave keyboard (3)] Acer WMI hotkeys id=13 [slave keyboard (3)]When my cursor was working before this, there was an entry that listed a couple numbers and said UNKNOWN. I'm pretty sure that was my touchpad, and it's now missing.contents of Xorg.0.log (The parts that I think relate to my touchpad)[ 198.965] (II) config/udev: Adding input device SYN1B7F:01 06CB:2970 UNKNOWN (/dev/input/event8)[ 198.965] (**) SYN1B7F:01 06CB:2970 UNKNOWN: Applying InputClass evdev touchpad catchall[ 198.965] (**) SYN1B7F:01 06CB:2970 UNKNOWN: Applying InputClass touchpad catchall[ 198.965] (**) SYN1B7F:01 06CB:2970 UNKNOWN: Applying InputClass Default clickpad buttons[ 198.965] (II) LoadModule: synaptics[ 198.965] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so[ 198.977] (II) Module synaptics: vendor=X.Org Foundation[ 198.977] compiled for 1.16.4, module version = 1.8.1[ 198.977] Module class: X.Org XInput Driver[ 198.977] ABI class: X.Org XInput driver, version 21.0[ 198.980] (II) systemd-logind: got fd for /dev/input/event8 13:72 fd 21 paused 0[ 198.980] (II) Using input driver 'synaptics' for 'SYN1B7F:01 06CB:2970 UNKNOWN'[ 198.980] (**) SYN1B7F:01 06CB:2970 UNKNOWN: always reports core events[ 198.980] (**) Option Protocol event[ 198.980] (**) Option Device /dev/input/event8[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: x-axis range 0 - 1236 (res 12)[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: y-axis range 0 - 898 (res 12)[ 198.980] (II) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: device does not report pressure, will use touch data.[ 198.980] (II) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: device does not report finger width.[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: buttons: left double triple[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: Vendor 0x6cb Product 0x2970[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: invalid pressure range. defaulting to 0 - 255[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: invalid finger width range. 
defaulting to 0 - 15[ 198.980] (**) Option SHMConfig on[ 198.980] (**) Option ClickPad 0[ 198.980] (**) Option VertTwoFingerScroll on[ 198.980] (**) Option TouchpadOff 0[ 198.980] (**) Option PalmDetect on[ 198.980] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: touchpad found[ 198.980] (**) SYN1B7F:01 06CB:2970 UNKNOWN: always reports core events[ 198.980] (**) Option config_info udev:/sys/devices/pci0000:00/INT33C3:00/i2c-0/i2c-SYN1B7F:01/0018:06CB:2970.0001/input/input8/event8[ 198.980] (II) XINPUT: Adding extended input device SYN1B7F:01 06CB:2970 UNKNOWN (type: TOUCHPAD, id 12)[ 198.980] (**) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: (accel) MinSpeed is now constant deceleration 2.5[ 198.980] (**) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: (accel) MaxSpeed is now 1.75[ 198.980] (**) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: (accel) AccelFactor is now 0.131[ 198.981] (**) SYN1B7F:01 06CB:2970 UNKNOWN: (accel) keeping acceleration scheme 1[ 198.981] (**) SYN1B7F:01 06CB:2970 UNKNOWN: (accel) acceleration profile 1[ 198.981] (**) SYN1B7F:01 06CB:2970 UNKNOWN: (accel) acceleration factor: 2.000[ 198.981] (**) SYN1B7F:01 06CB:2970 UNKNOWN: (accel) acceleration threshold: 4[ 198.981] (--) synaptics: SYN1B7F:01 06CB:2970 UNKNOWN: touchpad found[ 198.981] (II) config/udev: Adding input device SYN1B7F:01 06CB:2970 UNKNOWN (/dev/input/mouse0)[ 198.981] (**) SYN1B7F:01 06CB:2970 UNKNOWN: Ignoring device from InputClass touchpad ignore duplicatesThe first thing I did was disable /etc/X11/xorg.conf.d/50-synaptics.conf and reboot.now xinput outputs: Virtual core pointer id=2 [master pointer (3)] Virtual core XTEST pointer id=4 [slave pointer (2)] SYN1B7F:01 06CB:2970 UNKNOWN id=12 [slave pointer (2)] Virtual core keyboard id=3 [master keyboard (2)](I've left out the parts under Virtual core keyboard because they don't change throughout this process).So now the numbers and UNKNOWN that displayed before are here again, but the cursor still won't move.After looking around for a bit, I found a thread that suggested I addi8042.nopnp i8042.nomux=1 i8042.resetTo my kernal via boot loader settings. I use systemd-boot, and added these settings to /boot/loader/entries/arch.conf. After restarting, xinput was the same and the cursor still wouldn't move.I kept searching, and found another thread that mentioned blacklisting the i2c_hid driver. I figured I would try it, so I created a conf file in /etc/modprobe.d with the contentsblacklist i2c_hidAfter rebooting, xinput again had a different output: Virtual core pointer id=2 [master pointer (3)] Virtual core XTEST pointer id=4 [slave pointer (2)] SynPS/2 Synaptics TouchPad id=13 [slave pointer (2)] Virtual core keyboard id=3 [master keyboard (2)]xinput outputting something reasonable instead of UNKNOWN seemed promising, but the cursor still wouldn't move.At this point, I reactivated /etc/X11/xorg.conf.d/50-synaptics.conf and restarted.Upon restart, xinput goes back to not displaying any touchpad, and the cursor is still not moving. 
Virtual core pointer id=2 [master pointer (3)] Virtual core XTEST pointer id=4 [slave pointer (2)] Virtual core keyboard id=3 [master keyboard (2)]I've looked at my 50-synaptics.conf thinking that it might have some sort of error in it, but after double checking I couldn't find one.Section InputClass Identifier touchpad catchall Driver synaptics MatchIsTouchpad on MatchDevicePath /dev/input/event* Option TouchpadOff 0 Option MaxTapTime 0 #disables tapping Option PalmDetect on Option EmulateTwoFingerMinZ 40# Option EmulateTwoFingerMinW 10 Option ClickPad 0 Option VertTwoFingerScroll on Option TapButton1 1 Option TapButton2 2 Option TapButton3 3EndSectionI tried a couple different combinations of the changes I made, such as removing the i8042 options, but leaving i2c_hid blacklisted, but nothing I tried made any significant difference, and the cursor remained unmovable.At this point, I started writing up this question. If I left out any important log files or anything, let me know and I'll post them. Please help me out. This is starting to drive me crazy.
Laptop Touchpad suddenly not working on Arch Linux
arch linux;xorg;touchpad;xinput;synaptic
null
_unix.301563
I have a number of vanity catch-all email domains. Back in the day, that was a good idea, and now it's too late to change for my friends and family. I do not relay out, only serving incoming domains. Those incoming messages then get forwarded using mail alias rules that each user configures locally. I am receiving mail and forwarding to users' Gmail inboxes using postfix. To filter out 90%+ of the spam, I run spamassassin with auto-update, as well as two RBL block lists and SPF records. Good mail does get through to Google, which is great! Bad mail that still slips through the net ends up with a 421 temporary denial from Google. Typically Google will say "this is spam" or "this contains bad links" in the reject message, which is good as far as it goes, but I don't read the logs every hour and check every message. Currently, I run a command that flushes the deferred queue once a day, so that I don't re-try the same spam too often. This is somewhat fragile, because a single message that arrives right before the flush and then gets deferred once for some technical reason (TCP timeout etc.) would also get deleted without delivery. Not great! So, how can I go about training my spamassassin based on the messages received back from Google? For now, I'm thinking of something that wakes up every 10 minutes, tails the mail.log file, looks for 421 messages, extracts the message ID using a regex, then runs postcat on that message and feeds it to sa-learn for training. First: is something like this already available? I can't find anything obvious googling "spamassassin learn from gmail" or similar. Second: can you find anything wrong, a missed assumption, etc., in my reasoning above that I should correct?
How can my postfix/spamcop learn from Gmail 421 rejections?
postfix
null
_unix.341854
I am running CentOS 7 and need to mount an NFS share which is protected by credentials. I have read the nfs, mount, mount.nfs manuals and can't find the right options that work! I think the right options are 'user' and 'pass', but I've tried 'username' and 'password' and everything inbetween, but I get:mount -t nfs -o user=root,pass=mypass lserver:/root /mnt/d0mount.nfs: an incorrect mount option was specifiedCan someone tell me the right syntax/options to make this work? (It really shouldn't be this hard)
Failed to pass credentials to nfs mount
mount;nfs;options
Specifying a username and password is an option for cifs (samba), but not for nfs. According to the CentOS documentation:
NFS controls who can mount an exported file system based on the host making the mount request, not the user that actually uses the file system. Hosts must be given explicit rights to mount the exported file system. Access control is not possible for users, other than through file and directory permissions.
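In practice that means access is granted per host in /etc/exports on the server, and the client mounts without any credential options. A minimal sketch, where the client host name and export options are only placeholders for illustration:
    # /etc/exports on lserver
    /root    myclient.example.com(rw,sync,no_root_squash)

    # after editing, re-export:
    exportfs -ra

    # on the client, no user/pass options are needed:
    mount -t nfs lserver:/root /mnt/d0
If you really do need per-user authentication over the network, a cifs mount (with username=/password= options) or NFSv4 with Kerberos would be the usual routes.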
_webmaster.8110
I found that Google decreased the PR of my site and I wonder why. Apart from recently submitting my programs to download sites and switching Google Analytics to the asynchronous code, there were no other changes to the site that I can think of.
Why Google may decrease PR of a site?
google analytics;pagerank
1) You lose incoming links.
2) The incoming links you have lost PR themselves and thus pass less PR to you.
3) Google's index increases. Since PR is relative, as the index increases every page's PR goes down.
FYI, websites don't have PR. Pages do. So your home page's PR went down, not your site's.
_codereview.149622
I'm currently enrolled in a node.js course. The first app we have done in the course was a terminal notebook. It appends objects to a json-file.Example call on the terminal:node notebook.js add Second Aenean commodo ligula eget dolor.Example structure json-file:[{ title: First, timestamp: 1481534161237, body: Lorem ipsum dolor sit amet, consectetuer adipiscing elit.}, { title: Second, timestamp: 1481534192437, body: Aenean commodo ligula eget dolor.}]Then the objects can be displayed, listed or removed.The basic structure was done together with the trainer. Afterward I added the validation part and an additional property timestamp. timestamp is mainly for listing the notes sorted in ascending order. Furthermore I wrote a function for creating a formatted date (based on the timestamp). notebook.js (the main file): // ---- Assignments ----------- const fs = require('fs'); const notes = require('./notes.js'); var args = process.argv; var errorReport = \nSomething has gone wrong.; var maxLengthTitle = 150; var maxLengthBody = 1000; var command = args[2]; var title = args[3]; var body = args[4]; // ---- Validation ----------- if (['add', 'list', 'read', 'remove'].indexOf(command) === -1) { errorReport += `\nParam 1: Expected add or list or read or remove. ${command} found.`; } if (command) { if (!title && command !== 'list') { command = ; errorReport += `\nParam 2: Expected string. undefined found.`; } else if (title && title.length > maxLengthTitle) { command = ; errorReport += `\nParam 2: Maximal length of title is ${maxLengthTitle} chars.`; } } if (!body || typeof body !== 'string') { body = '-'; } else if (body.length > maxLengthBody) { command = ; errorReport += `\nParam 3: Length of given second parameter is ${body.length} Maximal valid length is ${maxLengthBody} chars.`; } // ------------------------------------------- console.log('\n ----- NOTEBOOK ----- '); // ---- Reacting to the user input ----------- if (command === 'add') { var note = notes.addNote(title, body); if (note) { console.log(`Note '${title}' has been added.`) } else { console.log(`Adding note has failed.'`) } } else if (command === 'list') { var allNotes = notes.getAll(); allNotes.sort((a, b) => { return a - b; }); for (let i = 0; i < allNotes.length; i++) { console.log('\n' + notes .createFormattedDate(allNotes[i]['timestamp']) + '\n' + allNotes[i]['title'] + '\n' + allNotes[i]['body']); } } else if (command === 'read') { var title = notes.readNote(title); console.log(`Title is : ${title} !`); } else if (command === 'remove') { notes.removeNote(title); console.log(`Note '${title}' has been removed.`) } else { console.log(errorReport); } // ---------------------------------------- console.log('\n -------------------- \n');notes.js (containing the module with the actual functions):const fs = require('fs');var fetchNotes = () => { try { var notesString = fs.readFileSync('notes-data.json'); return JSON.parse(notesString); } catch (e) { return []; }};var saveNotes = (notes) => { fs.writeFileSync('notes-data.json', JSON.stringify(notes));}var addNote = (title, body) => { var notes = fetchNotes(); var note = { title: title.trim(), timestamp: Date.now(), body: body.trim() }; notes.push(note); saveNotes(notes); return note;}var readNote = (title) => { var notes = fetchNotes(); var ret = notes.filter((note) => { return note.title === title; }); return ret[0] ? 
ret[0].title + '\n' + ret[0].body : '';}var getAll = () => { return fetchNotes();}var getNote = (title) => { console.log('Get single node: ', title);}var removeNote = (title) => { var notes = fetchNotes(); var newNotes = notes.filter((note) => note.title !== title); fs.writeFileSync('notes-data.json', JSON.stringify(notes));}var createFormattedDate = (timestamp) => { var date = new Date(timestamp); var ret = ('0' + date.getDate()).slice(-2) + '.' + ('0' + (date.getMonth() + 1)).slice(-2) + '.' + date.getFullYear() + ', ' + ('0' + date.getHours()).slice(-2) + ':' + ('0' + date.getMinutes()).slice(-2) + ':' + ('0' + date.getSeconds()).slice(-2); return ret;}module.exports = { addNote, getAll, getNote, removeNote, readNote, createFormattedDate}It all works but notebook.js seems rather messy to me. How could it be better structured? What other improvements could be done?I guess there a tasks which could be accomplished less awkward then I have done, especially concerning the formatted-date function.Was it a good choice to use a date value for sorting the records? Or would another data type be more appropriate??
Notebook app for terminal
javascript;beginner;node.js
One of the things a course should teach you is to take care when you decide what should be const, and what should not be const. Examples of what ought to be const are maxLengthTitle and maxLengthBody.
Moving process.argv to args is sugar; it makes lines 10 to 12 easier to read. However, in this case, I would have foregone the move and just used process.argv in lines 10 to 12. The code would be 1 line shorter, and not harder to read.
You would get extra points if you could derive 'Expected add or list or read or remove.' from the array you declared just prior to that with ['add', 'list', 'read', 'remove'].
I would not clear command if there is an error; I would just make the first if after the "reacting to the user input" comment read if( errorReport ){ and go from there. I would also future-proof that variable and call it output or feedback.
I am wondering about typeof body !== 'string' - what case are you covering there?
Run your code thru jshint.com; you have a few missing semicolons, some accessors that could use dot notation, and an unused library in notebook.js.
Read up on model view controller (MVC). Your controller code (especially for command 'list') is doing far too much; in essence your code should read if(command == 'list'){ var allNotes = notes.getAll(); console.log( formatNotes( allNotes ) ); } and then drop the mic.
Finally, this drives me nuts: var getAll = () => { return fetchNotes();}. Fat arrow syntax is meant for inline functions, please just use function getAll(){ ..}
_cs.69368
I am studying fuzzy logic and have run into an ambiguity. Assume we have 3 variables: X, Y, Z, where Z = X / Y. To represent the variables we can use the fuzzy sets very small, small, medium, big, very big. It's quite a chore to write rules for all the combinations (5*5 = 25), so I'd like to write only direct dependencies, like "if X is very big, then Z is very big", "if Y is very small, then Z is big".
Q: is my understanding correct, and can I use the aggregation process to mix those results (using the centroid method)?
Aggregate not overlapping fuzzy sets
fuzzy logic
null
_unix.243835
I am using X11 over the secure shell with the trusted forwarding flag, i.e. using trusted forwarding with ssh -Y. Today I have been receiving the following error:
/usr/bin/xauth: error in locking authority file /home/users/USERNAME/.Xauthority
When I actually try to use X11 with an application, I get the error
X11 connection rejected because of wrong authentication
It's difficult to find exact documentation regarding what is going on here. Any advice?
EDIT: For future users: when this happens, it appears either (A) you are out of disk space or (B) the file permissions are wrong. It looks like I had a combination of the two. Issue resolved.
Error with `ssh -Y`, error in locking authority file
ssh;x11
null
_webapps.12448
We have created a map with a bunch of locations added for meetings we sponsor. Each location has a link in the description to launch out to that meeting's agenda/location info/etc. page (usually either a meetup.com meeting or a Facebook group page). This all works wonderfully when you are looking at the map directly on maps.google.com, but when we embed the map into another webpage, the links in the description get opened in the embedded frame and are pretty much unusable. You used to see this on our page by clicking on any of the green markers, then clicking the link in its description. But here's the kicker: some browsers (Chrome, Safari) do the right thing and launch that link in the window/tab not in the iframe. Other browsers (IE, Firefox) launch the new link in the iframe.The logical/obvious solution to this is to put a target on the anchor in the description, but Google strips them out, no matter what I put there. So, anyone have any ideas on how to make this work the same in all browsers (preferably like Chrome and Safari)?(I made it past tense as we've put the frame breaker into place so you can't easily see the failure on our map anymore, but you could still reproduce this with your own map if you want to play with it.)
How do I get links to reliably pop out of embedded Google Maps frame?
google maps;embed
null
_unix.3682
What does the + in find /path/ -exec command '{}' + do, as opposed to find /path/ -exec command '{}' \; ?
What's the + in find /path/ -exec command '{}' + do?
find
The '+' makes one big command line out of all found files to minimize the number of commands to be run. Given the case that a find command finds four files,
find . -type f -exec command '{}' \;
would produce
command file1
command file2
command file3
command file4
On the other hand,
find . -type f -exec command '{}' \+
produces
command file1 file2 file3 file4
_codereview.20526
I am building a web service that gets data via Stored Procedures from a db and provides the result as JSON. The solution is built as a MVC 4 Web API project. I have to retrieve the data via Stored Procedures for several reasons (security, SQLAnywhere db, etc).It is a relatively small scale project so I have opted for not defining a Model in the Web Service layer to save work (and possibly performance). There will likely be models in the front end application (which will be a MVC 4 Web Application).The reasons why I am including an extra layer in the form of a Web Service is to a) add extra security, and b) that there will likely be a iPhone app for part of the application which then can utilize the same web service as the site.It is working as I want it to, so what I want feedback on isWhat do you think of the general approach to build a MVC 4 Web API solution without explicit models?Do you have any comments on how to improve the details of the implementation?One potential thought I have had is to skip the dataset and make a direct call to the Stored Procedure and loop over a datareader, which could potentially be faster. The other potential change is to loop over the datatable and build up the JSON manually instead of using the serializer.What I like about the current solution is that it can be done on very few lines of code and that I do not have to manually name each JSON object since it is inherited from the datatable within the dataset. I define a dataset called dsProducts that is loaded with a TableAdapter. // GET api/products public string Get() { string result = ; dsProductTableAdapters.GetProductsTableAdapter taTemp = new dsProductsTableAdapters.GetProductsTableAdapter(); dsProducts dsTemp = new dsProducts(); try { taTemp.Fill(dsTemp.GetProducts); result = jsonHelper.convertJSONString(dsTemp.GetProducts); } catch (Exception ex) { result = {\error\:\1\}; } return result; }The convertJSONString and its help functions are defined as: public static string convertJSONString(DataTable table) { JavaScriptSerializer serializer = new JavaScriptSerializer(); return serializer.Serialize(table.ToDictionary()); } static Dictionary<string, object> ToDictionary(this DataTable dt) { return new Dictionary<string, object> { { dt.TableName, dt.convertTableToDictionary() }}; } static object convertTableToDictionary(this DataTable dt) { var columns = dt.Columns.Cast<DataColumn>().ToArray(); return dt.Rows.Cast<DataRow>().Select(r => columns.ToDictionary(c => c.ColumnName, c => r[c])); }
Web API and Stored Procedures
c#;asp.net;web services
null
_softwareengineering.8660
We're integrating Mercurial slowly in our office, and since we do web development we started using named branches. We haven't quite found a good convention for naming our branches though. We tried:
FeatureName (can see this causing problems down the line)
DEVInitial_FeatureName (could get confusing when developers come and go down the line)
{uniqueID (int)}_Feature
So far the uniqueID_featureName scheme is winning; we are thinking of maintaining it in a small DB just for reference. It would have: branchID(int), featureName(varchar), featureDescription(varchar), date, who, etc. This would give us branches like 1_NewWhizBangFeature, 2_NowWithMoreFoo, ... and we would have an easy reference as to what each branch does without having to check the log. Any better solution out there?
Good naming convention for named branches in {DVCS} of your choice
version control;mercurial
If you don't have an issue tracker, I recommend setting one up and then using {issue tracker name}_{ticket number}. When someone years from now files a bug and you don't know exactly how the feature was supposed to work, it'll be easy to annotate the file and get back to where the user may have requested that exact functionality.
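For example (the tracker prefix and ticket number here are made up purely to illustrate the pattern), creating and using such a branch in Mercurial could look like:
    hg branch JIRA-1234
    hg commit -m "JIRA-1234: add the new export feature"
Later, anyone running hg annotate or hg log on the file can follow the branch name straight back to the ticket and read the original request.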
_codereview.24526
I was using foo(bar) as it's adopted for functional programming.console.log( join( map(function(row){ return row.join( ); }, tablify( map(function(text){return align(text}, view), 20))), \n);Now, with dot operator:view.map(function(text){return align(text)}) .tablify(20) .map(function(row){return row.join( );}) .join(\n) .log();I guess everyone will agree this reads too much better, and the only cost is that you have to modify the native types prototype. So, which?
The eternal dilemma: bar.foo() or foo(bar)?
javascript;object oriented;functional programming
null
_codereview.90126
I am new to JavaScript, and trying to ensure the i write the best possible code instead of copy & paste. I hope i am asking correctly, want to know if my code I have written is done correctly or could it be done better, the code works, so nothing wrong with the end result, just not sure if I'm on the right track.$(document).ready(function() {use strict;var menu_btn = $('a.menu-trigger');var menu = $('#main_nav > ul');var w = $(window).width();// Adding the ScrollTo Function on the menu items$('#main_nav > ul a[href^=#],.logo a[href^=#]').on('click', function(e) { var target = $( $(this).attr('href') ); if(target.length ) { e.preventDefault(); $('html, body').animate({ scrollTop: target.offset().top }, 1000); } if (w < 768) { $('#main_nav > ul').css('display', 'none'); }});// Create the Sticky Header$(window).scroll(function() { if ($(this).scrollTop() > 1){ $('.header-inner').addClass(sticky); }else{ $('.header-inner').removeClass(sticky); }});// Mobile Menu Functionalitymenu_btn.on('click', function(e) { e.preventDefault(); menu.slideToggle ();});$(window).resize(function () { if (w > 768 && menu.is(':hidden')) { menu.removeAttr('style'); }});});
Overriding scrolling behavior for navigation elements
javascript;beginner;jquery;event handling;animation
null
_unix.331517
I have set up a second instance of the sshd service that I want to use to allow remote tunnelling. I followed How to restrict an SSH user to only allow SSH-tunneling? - that showed me how to lock things down to only allow remote or local tunnelling, but I'm concerned that someone could open a connection and use local forwarding to reach a port that I don't want publicly accessible. They could also use it to hit services that would otherwise be restricted to local connections only, because the forwarding would make it seem that the connections are local (I believe).
How to restrict SSH to allow remote tunnelling only (no local)?
ssh tunneling;sshd
For my version of OpenSSH (OpenSSH_7.3p1), the AllowTcpForwarding option accepts local and remote settings in addition to yes and no.
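As a minimal sketch of how that could be applied to the tunnelling-only sshd instance (the user name is just a placeholder; everything else in your second sshd_config is assumed to stay as it is):
    # in the sshd_config used by the tunnelling instance
    Match User tunneluser
        AllowTcpForwarding remote
        X11Forwarding no
        PermitTTY no
With AllowTcpForwarding remote, requests for local (-L) forwardings are refused by the server while remote (-R) forwardings are still allowed, which addresses the concern about clients reaching ports that should stay local-only.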
_softwareengineering.196878
I have a requirement to read a text file with lines in tag=value format and then output the file with specific tags listed first and the rest sorted alphabetically. The incoming file is randomly sorted with the exception of the first line. The output needs the first two lines to always be the same. For example, given the following tags:
NAME
AGE
SSN
MARITAL_STATUS
NO_OF_DEPENDENTS
The input will always have NAME first and the remaining tags (there are literally hundreds) sorted randomly. The output needs to have SSN first and NAME second and the rest sorted alphabetically, so I would end up with:
SSN
NAME
AGE
MARITAL_STATUS
NO_OF_DEPENDENTS
Note: these are just sample tags. The actual file has 13 fields which need to be listed first and the remaining few hundred listed alphabetically. I'm trying to figure out the best way to do the sorting. Right now my plan is to read the lines in the incoming file and marshal them into two List objects. The first will contain the specific tags which need to be placed first and the second will have everything else. I will then sort the second list and merge it into the first list. This seems complicated and I feel like I'm missing an easier or more elegant approach.
Custom Alphabetic Sorting of Array in Java
java;sorting;text processing;list
The data that is present most closely resembles a map of header to rest of value. This should point one in the direction of a Map rather than a List.Of the data that is presented, there are two groupings of the data - the first 13 fields, and all the rest. The presentation of the all the rest is to be used in a sorted order. For this, one looks at the SortedMap interface and sees the TreeMap as one if its implementations.The 13 field data can be used as a little known map type - the EnumMap.With the EnumMap, one would first define the enumerations of the fields in the order desired.public enum Headers { SSN, NAME;}One then gets the associated code that looks something like (lacking the looping over the data):SortedMap<String, String> other = new TreeMap<String, String>();EnumMap<Headers, String> headers = new EnumMap<Headers, String>(Headers.class);try { headers.put(Headers.valueOf(key), value);} catch (IllegalArgumentException e) { other.put(key, value);}If the header is present in the enum, put it in the headers map, otherwise put it in the other map. At this point, one can then iterate over the headers EnumMap and then the other TreeMap printing out the key and value pairs. Or present a new object that itself extends Iterable and makes an iterator that first walks the EnumMap and then the SortedMap.The enum provides easy extensibility of the code (if you want to do validation checking, one can associate it with the enum. See the Java tutorial on Enum Types to see some more things one can do with it (associate a method with the enum itself, associate values (Pattern? an instance of a class that has an interface providing a boolean validate(String arg)? display formatting code?) to further extend it.
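To fill in the part described above as "lacking the looping over the data", a minimal sketch of the read loop might look like this; the file name input.txt and the split on the first '=' are assumptions, headers and other are the two maps declared earlier, and the imports java.nio.file.Files and java.nio.file.Paths are needed, with IOException handled by the enclosing method:
    for (String line : Files.readAllLines(Paths.get("input.txt"))) {
        String[] parts = line.split("=", 2);          // split only on the first '='
        String key = parts[0].trim();
        String value = parts.length > 1 ? parts[1] : "";
        try {
            headers.put(Headers.valueOf(key), value); // one of the 13 known fields
        } catch (IllegalArgumentException e) {
            other.put(key, value);                    // everything else, kept sorted by the TreeMap
        }
    }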
_webapps.105400
I have 3 separate Gmail accounts (professional, personal, shopping/spam). I like to keep all 3 open in 3 tabs in Firefox or Chrome, which Gmail allows you to do while logged in to all 3 in different tabs. I'd like to be able to restore these 3 tabs from bookmarks (as opposed to saved sessions). Is there a way to create such bookmarks in either Firefox or Chrome?
Can I bookmark Gmail logged in to a specific account?
gmail;google account;bookmarks;browser;chrome bookmarks
null
_webapps.18394
I have one google voice number (A) that forwards to two mobile phone lines (B and C). Voicemails left for all three numbers (A, B, or C) are collected in my single google voice inbox. When viewing a message, how can I determine which number was dialed?
Google Voice: when viewing a voicemail, how can I tell which number was dialed?
google voice
null
_computerscience.4597
I am currently a beginner when it comes to anti-aliasing. I have read some notes online saying that anti-aliasing works by first drawing the line using an algorithm such as Bresenham's, and then modifying it to include anti-aliasing. I understand that one method of anti-aliasing is to imagine a rectangle that encompasses the line, and any pixel that touches it will be turned on or filled with an RGB value. Now, if I know the area of the segment of the pixel that is cut off by the rectangle (let's assume the pixel is a circle), how would I use that area to compute the density of the colour, which ranges from 0-1 (where 0 is very light and 1 is the darkest)?
Anti-aliasing - Controlling colour density of pixel that comes within the rectangle surrounding my line
line drawing;antialiasing;pixel graphics
null
_softwareengineering.266361
The Steelman language requirements have this:
The language shall require some redundant, but not duplicative, specifications in programs.
I think I can see the underlying idea (that re-stating things may lead to fewer errors given the limitations of human cognition), but I would like a more detailed explanation. What do they mean by redundant and by duplicative?
What is meant by redundant, but not duplicative in the Steelman language requirements?
programming languages;code quality;language design;specifications
They mean that you shall be required to write things that could reasonably be inferred from other things in the code. But you will not be implementing the same thing twice.An example of what they mean is type systems. You might be able to infer the existence and types of variables from their usage, but the redundancy of explicit declaration can catch errors.If in doubt think about http://en.wikipedia.org/wiki/Ada_%28programming_language%29 because it is what came out of this effort.
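A tiny illustration of that type-system point, sketched here in Java rather than Ada only to keep it short: the declaration repeats information that could be inferred from the assignments, yet that redundancy is exactly what lets the compiler reject a misspelling instead of silently creating a new variable.
    int total = 0;      // redundant: the type could be inferred from the literal
    total = total + 1;  // fine
    totl  = total + 2;  // compile error: 'totl' was never declared, so the typo is caught
In a language that created variables on first assignment, the last line would simply introduce a second variable and the bug would survive until run time.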
_cstheory.12490
Planar graphs have genus zero. Graphs embeddable on a torus have genus at most 1. My question is simple: are there any problems that are polynomially solvable on planar graphs but NP-hard on graphs of genus one? More generally, are there any problems that are polynomially solvable on graphs of genus g but NP-hard on graphs of genus > g?
Hard Problems for higher genus graphs
cc.complexity theory;ds.algorithms;graph theory;planar graphs
null
_cs.55691
How can we show that every infinite recursive language has a subset that is recursively enumerable but not recursive? I think we need to show there's a list of natural numbers that can't be ordered, but I don't know how to build such a subset. Is there another way of arguing, or another type of argument, in order to prove such a subset exists?
Recursively enumerable but non recursive subset of an infinte recursive language
formal languages;computability;undecidability
null
_unix.59482
On my office computer (running Scientific Linux 6.3) I have several windows running some processes in separate terminal emulators (/dev/pts/). I often connect to my office computer with iSSH from my iPad, but I can only see the results of the programs that have been written to a file, and can't see what each terminal is showing or control the terminals. I want to be able to temporarily switch control of a terminal to my iPad iSSH terminal, look at the results, run new commands (on my office terminal from my iPad), then let the program keep running on my office computer and return control back to its original terminal, so I can check other terminals or simply quit. Since I use 3G most of the time to connect with my iPad, I don't want to use any graphically dependent method, which would be very slow. As far as I have understood, something like reptyr seems to permanently move control of a process from one terminal to another; I haven't seen anyone talk (or ask) about giving control back to the original terminal. I want to return it to its original terminal after I finish. I would really appreciate any suggestions or help. Thanks in advance.
Temporarily controlling a shell
ssh
I believe you just need to run the original commands inside a screen session. Then you can disconnect from it (screen keeps on running and keeps your virtual terminal displayed correctly), and re-attach to it from another session (i.e. from your iPad, from another computer, or from the same computer when you get back to it). There are many more things screen can do too (such as, for example, allowing a co-worker to peek at your running screen session while you use it or while you are away from it, so several people can watch the same terminal).
In a nutshell, on your primary terminal, on host A, as user ORIGINALUSER:
screen
command (ex: vi /tmp/file)
CTRL+a d  # 'CTRL' and 'a' at the same time, then 'd'. This will 'd'etach from the screen session, while screen itself still runs (and inside it the shell and any still-running command keep running).
On another terminal (or the same one), log in to the original machine (host A) as the same user ORIGINALUSER, and then:
screen -r  # reattaches to the latest running screen from that user.
If there is more than one screen to reattach to, see the screen man page or look it up on the net (useful too if you can't reattach: there are ways to force it to reattach). Once really finished, you just exit the shell running inside the screen; this will terminate the screen command too. While in screen, CTRL+a is special and lets you send commands to screen. Try: CTRL+a ?
_codereview.33713
In an old project I tinker with from time to time, I have a DOM-like structure in an MMF database. I'd like the nodes to act like they have some C++ typing based on the content, and accessor methods that go along with that. Yet in actuality, the data is independent of the type system in C++.The problems with trying to do this are pretty analogous to situations like Object Relational Impedance Mismatch. So it's not going to be perfect; it's more of a helpful bookkeeping tool than anything. Some background reading of prior feedback: first, secondAnother thing I wanted was to get solid control of the nodes. I don't want client code to leak them, or store handles to them past certain clearly delineated phases. This means I need something akin to smart pointers...and all factory creation with private destructors that are friends of std::default_delete. I also have to bundle some context information with the handle instances that are passed into user code.I've simplified the pattern down a single-file example that with a really basic datatype that you can compile, if you like:facade.cpp (gist.github.com)But I've included the relevant classes in a reduced form here. There's an internal class called FooPrivate, which is very simple; representing our basic behavior of the data item:class FooPrivate final { /* ... */private: FooPrivate (int value) : _value (value) { } ~FooPrivate () { }private: static unique_ptr<FooPrivate> create(int value) { return unique_ptr<FooPrivate> (new FooPrivate (value)); }public: FooPrivate * consume(unique_ptr<FooPrivate> && other) { _value += other->_value; FooPrivate * result (other.release()); consumed.push_back(result); return result; } void setValue(int value) { _value = value; } int getValue() const { return _value; }private: int _value; vector<FooPrivate *> consumed;};Simple case here for the data is you can access an integer with getValue() and setValue(). A consume() function is an example of something that takes transfer of ownership to the data structure.The next step is a handle and accessor for client use called Foo. For simplicity, the context is just a boolean. Each Foo contains a raw FooPrivate pointer and a shared pointer to a Context.class Context final { /* ... */private: Context (bool valid) : _valid (valid) { }private: bool _valid;};Foo has a method matching every method in FooPrivate. But there is added checking of the Context, as well as preventing raw FooPrivate pointers from leaking into user code:class Foo { /* ... */ private: void setInternals(FooPrivate * fooPrivate, shared_ptr<Context> context) { _fooPrivate = fooPrivate; _context = context; }protected: Foo () { setInternals(nullptr, nullptr); } private: Foo (FooPrivate * fooPrivate, shared_ptr<Context> context) { setInternals(fooPrivate, context); } FooPrivate const & getFooPrivate() const { if (not _context->_valid) { throw Attempt to dereference invalid Foo handle.; } return *_fooPrivate; } FooPrivate & getFooPrivate() { if (not _context->_valid) { throw Attempt to dereference invalid Foo handle.; } return *_fooPrivate; }public: template<class FooType> Reference<FooType> consume(Owned<FooType> && foo) { return Reference<FooType> ( getFooPrivate().consume(move(foo.extractFooPrivate())), shared_ptr<Context> (new Context (false)) ); } void setValue(int value) { getFooPrivate().setValue(value); } int getValue() const { return getFooPrivate().getValue(); }private: FooPrivate * _fooPrivate; shared_ptr<Context> _context;};Further up from that is a template for Reference. 
It's basically equivalent to a Foo, but is parameterized by a subclass of Foo. It uses the pointer dereference operator to invoke methods coming from that subclass.template<class FooType>class Reference { /* ... */ private: Reference (FooPrivate * fooPrivate, shared_ptr<Context> context) { /* ... */ _foo.setInternals(fooPrivate, context); }public: Reference (Foo & foo) : _foo (foo) { } template<class OtherFooType> Reference (Reference<OtherFooType> & other) { /* ... */ _foo.setInternals(other._fooPrivate, other._context); } FooType * operator-> () { return &_foo; } FooType const * operator-> () const { return &_foo; } FooType & getFoo() { return _foo; } FooType const & getFoo() const { return _foo; }private: FooType _foo;};Finally there is Owned; an analogue to unique_ptr that has similar behavior to FooReference. The Foo version of consume--for instance--takes an Owned<FooType> instead of a unique_ptr<FooPrivate>.template<class FooType>class Owned { /* ... */private: Owned ( unique_ptr<FooPrivate> && fooPrivate, shared_ptr<Context> context ) { /* ... */ _foo.setInternals(fooPrivate.release(), context); } unique_ptr<FooPrivate> extractFooPrivate() { unique_ptr<FooPrivate> result (_foo._fooPrivate); _foo.setInternals(nullptr, nullptr); return result; } /* ... */public: static Owned<FooType> create(int value) { return Owned<FooType> ( FooPrivate::create(value), shared_ptr<Context> (new Context (true)) ); } template<class OtherFooType> Owned (Owned<OtherFooType> && other) { /* ... */ _foo.setInternals( other.extractFooPrivate().release(), other._foo._context ); } template<class OtherFooType> Owned<FooType> operator= (Owned<OtherFooType> && other) { /* ... */ return Owned<FooType> (other.extractFooPrivate(), other._context); } FooType * operator-> () { return &_foo; } FooType const * operator-> () const { return &_foo; } FooType & getFoo() { return _foo; } FooType const & getFoo() const { return _foo; }private: FooType _foo;};To make things extra boring for this reduced example, you can't use that Reference for anything after the transfer (to demonstrate an instance of expiring a Context without adding more methods or other classes).Here is a useless demonstration:class DoublingFoo : public Foo {public: int getDoubleValue() const { return Foo::getValue() * 2; } Reference<Foo> consumeHelper(int value) { return consume(Owned<Foo>::create(value * 2)); }};int main(){ auto parent (Owned<DoublingFoo>::create(10)); auto child (Owned<Foo>::create(20)); parent->consumeHelper(30); Reference<Foo> childReference (parent->consume(move(child))); cout << The value is << parent->getDoubleValue() << \n; return 0;}I guess some of my big questions would be:Have any existing library authors faced a similar desire and done it better?Any undefined behavior or other Really Bad Things I'm setting myself up for?The technique seems to require those making Foo-derived classes to inherit publicly from Foo. But narrowing the methods vs. just adding to them could be useful too. How could one do something like this with protected inheritance?Is putting the create() method as a static member of Owned the right place for it? Is there a good way to make it so that the authors of a Foo subclass could more elegantly make their own construction methods, while keeping the tight control of making sure everything is wrapped in an Owned?But all feedback is welcome. So if anyone wants to get out a red pen and suggest better style on something not directly related to the pattern, that's great too...
Proxy/Facade Implementation Concept in C++11, impedance matching DB with classes
c++;design patterns;c++11
null
_unix.374490
I want to make a desktop environment, and I want to make a button that logs you out back to the default GUI login. Any ideas? Thanks.
How does Gnome (or LXDE or UNITY) logout?
linux;ubuntu;gui;desktop environment;openbox
null
_unix.165294
I have a file (Data.txt) with something like this:A=1234597890 B=192.168.1.1sometimes B goes first and sometimes only A or B.So, If I find: A, Print A content and execute A script B, B B A&B, A&B A Neither, Print a error messageNow the problem is the content in A or B is not the same always!I don't know which would be the best command to use.Thank you for your help and for trying to understand my basic English :)!
Find string in text file and execute a script depending of the result
text processing;scripting;string
You can do this in Perl:
$ perl -lne 'if(/A=([^\s]+)/){print "A : $1"; `scriptA.sh`} if(/B=([^\s]+)/){print "B : $1"; `scriptB.sh`}' Data.txt
Explanation
-lne : remove trailing newlines from each input line and add a newline to each print call (-l); read the file line by line (-n) and run the script given by -e on each line.
if(/A=([^\s]+)/){print "A : $1"; `scriptA.sh`} : if this line matches A= something, print A : something (the parentheses mean that the pattern is captured and is now saved as $1), then run scriptA.
if(/B=([^\s]+)/){print "B : $1"; `scriptB.sh`} : as above but for B.
Or, you can just extract the values and parse them:
$ grep -Po '[AB]=[^\s]+' Data.txt | while IFS== read name val; do echo $name=$val; [ $name = A ] && scriptA.sh || scriptB.sh; done
Explanation
grep -Po '[AB]=[^\s]+' Data.txt : The -P enables PCREs, and the -o will cause grep to print each match on a different line. The [AB]=[^\s]+ means match A or B, then =, then as many non-whitespace characters as possible. The output of this is the full collection of A=foo and B=bar from your input file, each on its own line.
while IFS== read name val : set the input field separator to = and read two variables. This means that $name will be A or B and $val will be the value associated with them.
echo $name=$val; : print them, change the format to whatever you like.
[ $name = A ] && scriptA.sh || scriptB.sh; : if $name is A, run scriptA.sh, else run scriptB.sh.
Both of the solutions above assume that the scripts you want to call are scriptA.sh and scriptB.sh and that they are in your $PATH. Edit accordingly.
_webapps.42037
I've embedded several Google forums on our website, found at: http://appinventor.mit.edu/explore/forums.html, following the general guidelines found at: http://support.google.com/groups/bin/answer.py?hl=en&answer=1191206

While you can search over an individual forum when you click on the name of that forum, I'd like there to be a way to search over multiple related forums from one search box. Is this possible? Would a Google Custom Search Engine work for this purpose? (Off to try that now.)
Is it possible to search over multiple Google Forums from one search box?
search;google groups;embed;forums
null
_webapps.101055
I currently have a website hosted on myUserName.github.io. I found out that I can add a new page to it by pushing another repository to its gh-pages branch, but I am not sure how to do this exactly since I am dealing with two different repositories. If I have a repository named test, how do I add it to my website so that I have myUserName.github.io/test?
How to add a new repo/page to my github hosted website?
github;github pages
null
_unix.360560
I bought a Raspberry Pi 3 Model B to set up a VPN server, following this guide: https://sys.jonaharagon.com/2016/05/12/setting-up-an-openvpn-server-on-a-raspberry-pi-2-part-12/

When I finished, I was able to get my .ovpn file and imported it into the OpenVPN client, but I got this error:

Sat Apr 22 00:09:40 2017 OpenVPN 2.4.1 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [LZ4] [PKCS11] [AEAD] built on Mar 22 2017
Sat Apr 22 00:09:40 2017 Windows version 6.2 (Windows 8 or greater) 64bit
Sat Apr 22 00:09:40 2017 library versions: OpenSSL 1.0.2k 26 Jan 2017, LZO 2.09
Enter Management Password:
Sat Apr 22 00:09:41 2017 WARNING: --ns-cert-type is DEPRECATED. Use --remote-cert-tls instead.
Sat Apr 22 00:09:41 2017 OpenSSL: error:0906D06C:PEM routines:PEM_read_bio:no start line
Sat Apr 22 00:09:41 2017 OpenSSL: error:140AD009:SSL routines:SSL_CTX_use_certificate_file:PEM lib
Sat Apr 22 00:09:41 2017 Cannot load inline certificate file
Sat Apr 22 00:09:41 2017 Exiting due to fatal error

Being new to programming/UNIX/Raspberry, can anyone help me figure out what this is saying?
Error connecting to my Raspberry Pi OpenVPN server
raspberry pi;openvpn
null
_softwareengineering.193188
I have built a JavaScript library that I'd like to release for free within my user community. However I want to keep it under control, and not allow my users to pass it on to others without my permission.As far as I can tell, a MIT or GPL license won't work for that scenario. What licensing options do I have?Background: I've had some bad experience in the past where my solutions were used in situations other than the intended purpose, and this triggered maintenance and security issues. I am more careful this time.
License for free but restricted software sharing
javascript;licensing;mit license
null
_codereview.62593
I wrote the following script and php code for long-polling architecture by googling references, I would like to know if this is long-polling or short-polling, ...because I am not sure:bc_test.jsjQuery(function($) { var data = []; function pullNotification() { var params = {}; new RPC.Call({ 'method': 'users.getJsonUsers', 'params': params, 'timeout': 30000, 'onSuccess': success, 'onFailure' : error, }); }; function error() { alert('Error occured'); } function success(result) { if (data.length !== 0) { for (var i = 0, item; item = result[i]; i++) { for (var k = 0; k < data.length; k++) { if (item.userid === data[k]) { this.found = true; } } if (!this.found) { data.push(item.userid); $('<tr origclass=even_row class=even_row>\n\\n\ <td>' + item.userid + '</td>\n\ <td>' + item.alias + '</td>\n\ <td>' + item.surname + '</td>\n\ </tr>').insertAfter('.nilar'); } this.found = false; } } else { for (var j = 0; j < result.length; j++) { data.push(result[j].userid); } } }window.setTimeout(function() { window.setInterval(function() { pullNotification(); }, 3000); pullNotification();}, 1000); });PHPfunction getJsonUsers() { $timeStart = time(); $newData = false; $users_data = array(); $sql = 'SELECT * FROM users'; while (!$newData && (time() - $timeStart) < 10) { usleep ( 10000000 ); $db_res = DBselect($sql); while ($user = DBfetch($db_res)) { $users_data[] = $user; $newData = true; } } return $users_data; }
Is it long polling or short polling?
javascript;php;beginner;jquery;sql
This code is performing a long-polling technique. The server gets a client request, the server method is checking for any changes in data, and when found, returns it to the client. In short-polling, the client is constantly sending requests for data on a timer basis; in long-polling, the client makes one request and waits for the server to return any new data it has.For example, in your method getJsonUsers(), it is running a query that looks for new data, and upon finding it, returns it to the user. If it doesn't find data, it waits a finite amount of time, and tries again. The client won't get notified of anything until data is actually returned, or it will timeout. This is a normal pattern for long-polling. If your program was using a short-polling technique, the server method would just return (either data or no data, depending on the results of the query you're running), and not wait to actually get something to give back to the client. Your client code on the other hand, in the case of short-polling, would keep making requests to the server, and after a certain amount of time, would make another request. Long-polling uses an event-based mechanism (that is, it waits for data to be present prior to returning) for alerting the client of data, whereas the short-polling method just keeps hitting the server regardless of what it gets back or not.This website has good information on polling techniques, including an example of long-polling: CoderTalks: Long Polling vs Short Polling
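To make the contrast concrete, here is a minimal client-side sketch (not part of the original review; the /poll URL and handler names are hypothetical) of how a long-polling loop is usually written with jQuery: the next request is only issued after the previous one returns or times out, instead of on a fixed timer.

function handleNewData(data) {
    // update the table, as the posted success() callback does
}

function poll() {
    $.ajax({
        url: '/poll',        // hypothetical endpoint that blocks until new data exists
        timeout: 35000,      // a little longer than the server-side wait
        success: function (data) {
            handleNewData(data);
        },
        complete: function () {
            poll();          // re-issue immediately; the server does the waiting
        }
    });
}
poll();

A short-polling client would instead call something like setInterval(fetchData, 3000) and let the server answer immediately, which is close to what the posted pullNotification timer does on the client side, while the posted PHP getJsonUsers() loop is what adds the long-poll wait on the server.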
_webmaster.105947
Sorry for the title, this is a confusing problem and I wasn't sure how to make the title succinct.The background: I have a client's website which sell products. The purchasing is all done through e-junkie.com cart, payments through paypal.The course is delivered by giving the user access to a password protected page on the site with embedded vimeo links.(it's not the best process I know but the site has been like this for YEARS).....The Problem:My client has received an email from somebody who purchased on her site. Here is that email:I am emailing you because once I paid for your course, my account has been hacked and money used for various purchases in the USA as well as creating Match.com fake profile under my name. From what I gathered talking to my IT, there may be a virus/hacker on your paypal link or video links have something that allows that to happen? I really don't know, but I just wanted to let you know. We contacted our bank UBS in Switzerland (where I live) and I'm chasing match.com admin to erase my profile (already started getting messages on my mobile from match.com people??!!).........The question: What is my course of action? There are so many pieces of the puzzle on my end (website, e-junkie, vimeo) I'm not sure how/where/what all to check.My first thoughts is that her personal computer has a virus on it and she was compromised there. Could this have come from my site?and a bonus question: why would a hacker sign her up for Match.com??... Edit ...My question was flagged for being opinion-based, so I've edited the question hopefully to be more specific to a course of action. If you still believe this question should be removed can you please let me know why and what I should be asking instead so I can keep it and get some assistance? Thanks
Customer's financials compromised - they think it was my site
security;paypal;shopping cart;virus
null
_webapps.4476
Some emails circumvent the spam protection mechanism and drop into inbox. I occasionally see this happen both on web based mail services and on our Exchange mail server.Some of those spams have a bottom line saying something like Click here to unsubscribe from this mailing list. The hyperlink has a long address with the unsubscribe term in it. I want to ask experienced Web Apps users whether it is safe to click on this link or is it another bait?
Unsubscription links in spams: Safe?
spam prevention
Generally, I wouldn't trust it unless it comes from a service you use. For example, I might signup for service X and not notice the checkbox to opt-out of email marketing. If they start sending me email I would expect there to be an unsubscribe link, since it's required by law. I would expect them to follow the law since I wouldn't create an account on a non-trustworthy website. Furthermore, I gave them my email during the registration process, so they already know it's a valid address and don't need to wait for a click on a link to prove that.The law also says that it does not require e-mailers to get permission before they send marketing messages. Which means that even services you didn't sign up for are allowed to send you email. This makes it tricky since that service may or may not be legit. If it's a well known company I would say it's safe to click the unsubscribe link (make sure it's going to their domain though, since the email might be spoofed). If it's from some other company it's a tough call. Personally, I wouldn't click on it, but someone else might have better advice for you.Another thing you need to consider is that spammers might add the unsubscribe link not only to prove that it's a real address, but also to confuse the Bayesian filter into thinking it's a non spam message.
_softwareengineering.224126
Say you want to render a table with five columns, but you want the order of the columns to be different depending on some specific parameter. This would be very easy to accomplish if the model sets the order. The view can then simply use a loop and create the table accordingly. However, unless I have misunderstood things, we want to let the view handle how things are rendered (although I guess there may a gray area involved here in terms of what the view should decide how to render)? It also feels ugly to let the model set formatting / order, but maybe this is another thing I might have misunderstood?If the view is supposed to deal with the order of the columns, what is a good way to accomplish it (read: having to use a lot of if-statements and other ugly code in the view)?
Which layer should order the columns shown to the user when using MVC?
mvc;order;rendering
Split (conceptually) your View into ViewView and ViewModel. In the view model you will have things like the order of columns, the number of rows per page, the current page, and whether you are in edit mode: simply things that you need to store, but that do not make sense to store in the model (a different view will not use them).

The form in which the view model is stored depends on the view. It can be a session-scoped variable on the server (in a web application), a hidden field (in a web application), data kept on the mobile device (in a mobile application communicating with a server, so the model itself is on the server), or just a different class (in a standalone desktop/mobile app).
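As a rough illustration (not from the original answer; the names are made up), the view model can simply carry the column order and the view can loop over it:

// view model: the order is decided by whatever parameter drives it
var viewModel = {
    columnOrder: ['name', 'date', 'status', 'owner', 'size']
};

// view: renders cells in whatever order the view model dictates
function renderRow(row, viewModel) {
    return '<tr>' + viewModel.columnOrder.map(function (key) {
        return '<td>' + row[key] + '</td>';
    }).join('') + '</tr>';
}

This keeps the if-statements out of the template: the model stays ignorant of presentation, and the view only iterates.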
_vi.704
I have set expandtab in my .vimrc file to convert tabs to multiple space characters. However, some files (like Makefiles) need the actual tab character inserted. Is there an easy way to force the insertion of a real tab while I am typing?
Insert tabs in INSERT mode when expandtab is set
insert mode;tab characters
Instead of just pressing Tab, first press Ctrl-V and then press Tab.

This can be used to insert a variety of special chars. See :help i_CTRL-V for details.

Ctrl-V also works in command-line mode (:help c_CTRL-V), and even in some other programs entirely (e.g. bash, mutt).

If you have Ctrl-V mapped to something else, try Ctrl-Q. This has the same effect in Vim as Ctrl-V, but some terminals use it for control flow, in which case Vim won't ever see it.
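Since the question specifically mentions Makefiles, it may be worth adding (this is not from the original answer, just a common complementary setting) that you can keep expandtab globally and disable it only for that filetype in your .vimrc:

autocmd FileType make setlocal noexpandtab

With that in place a plain Tab inserts a real tab in Makefiles, and Ctrl-V Tab remains available for one-off cases elsewhere.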
_cogsci.15231
Adobe Illustrator has taken over five minutes (and counting) to render a vector 2D image rotated 18 in 3D on my computer. And yet, I and nearly anyone else can easily visualize the subject rotated almost instantaneously, and with little effort rotate the object continuously in real time in the mind's eye.I'm not asking how the brain stores representations of objects, as that's clearly up for debate. But how does the brain structure its internal representation of 3D visual data?It's almost definitely not in some pixel-based format, as can be shown by simply visualizing an object of some sort, and then zooming in mentally on some detail and noticing the image retains its sharpness. It's probably not breaking down objects into geometric shapes either, because at least I personally don't visualize my friends as stick figures. It could be a vector format, but then it should be easier to visualize complex shapes that are mathematically simple, like this one:So it would seem that the brain uses some other format. To the best knowledge of modern cognitive, how does this work?
How does the brain structure 3D visual data?
cognitive neuroscience;vision;cognitive modeling;visualization
null
_unix.217369
I am building an image for an embedded Linux based on Debian. I did use apt-get update before on the device that I want to use as a base for that image, so the lists under /var/lib/apt/lists are quite large (almost 100 MB in size).I want to keep apt-get functionality (so I don't want to remove apt repositories) but I want to free the space used up in these lists (the lists almost double the size of the image).Does anyone know how to do that? Can I just delete the files under /var/lib/apt/lists?
clear apt-get list
debian;apt;package management
You can just use:

rm /var/lib/apt/lists/*

This will remove the package lists. No repositories will be deleted, they are configured in the config file in /etc/apt/sources.list. All that can happen is that tools like apt-cache cannot get package information unless you updated the package lists. Also apt-get install will fail with E: Unable to locate package <package>, because no information is available about the package.

Then just run:

apt-get update

to rewrite those lists and the command will work again.

Anyway, it's recommended to run apt-get update before installing anything.
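As a quick sanity check (not part of the original answer), you can measure how much space the lists actually take before and after removing them; for an image build you would do the rm as the last step and leave apt-get update to be run on the device when apt is next needed:

du -sh /var/lib/apt/lists     # before: roughly the ~100 MB mentioned in the question
rm /var/lib/apt/lists/*       # reclaim the space for the image
du -sh /var/lib/apt/lists     # after: essentially empty (the partial/ directory remains)
apt-get update                # later, on the device, when you need apt again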
_codereview.29081
I just wrote some PHP to let a client lock off a their one-page site from the public using a password. No users and no database. Just checking the password the user enters and testing it against the chosen phrase. This should be simple (right?) but I havn't built this sort of thing before. Let me know what you think, and whether it could be done better. Thanks!includes/session.phpclass Session {private $logged_in;function __construct() { session_start(); $this->check_login();}// Return whether they are logged in.public function is_logged_in() { return $this->logged_in;}// When initialized, check to see if the user is logged in.private function check_login() { if (isset($_SESSION['logged_in'])) { // If logged in, take this action $this->logged_in = true; } else { // If not logged in, take this action $this->logged_in = false; }}// Set the session. User will remain logged in.public function login() { $_SESSION['logged_in'] = 1; $this->logged_in = true;}// Log out. Unset session and destroy it. public function logout() { unset($_SESSION['logged_in']); $this->logged_in = false; session_destroy();}public function check_pass($pass) { $pass = @strip_tags($pass); $pass = @stripslashes($pass); if ($pass !== 'TheChosenPassword') return false; return true;}}login.php:require_once('includes/session.php');// If already logged in just send them to index.phpif ($session->is_logged_in()) { header('Location: index.php'); exit;}// Create the var in the global scope.$msg = 'This page is private.<br> Please enter the password.';// If the form was submitted ... if (isset($_POST['submit'])) { // Make sure they entered a pass if (!$_POST['pass']) { $msg = Please enter a password.; } else { // If they entered a pass, grab it and trim it. $pass = trim($_POST['pass']); // Now check the pass if ($session->check_pass($pass)) { $session->login(); header('Location: index.php'); exit; } else { $msg = Sorry, that password is incorrect.; } // Endif checkpass } // endif isset(pass)} // Endif isset(post)?> <!-- The form is bellow -->index.phprequire_once('includes/session.php');if (!$session->is_logged_in()) { header('Location: login.php'); exit;}You can probably guess from the code that the form just has an <input> with a name of 'pass' and a submit button with a name of 'submit'. Also a <small> tag in which i echo $msg. So, any thoughts on how this could be improved? Did I make any glaring mistakes? Thanks for the help!
Password-Lock a Single Page with PHP - How did I do?
php;session
A few things, mostly around logged_in:

private function check_login() {
    if (isset($_SESSION['logged_in'])) {
        // If logged in, take this action
        $this->logged_in = true;
    } else {
        // If not logged in, take this action
        $this->logged_in = false;
    }
}

Simply:

$this->logged_in = isset( $_SESSION['logged_in'] );

Also, the accessor method is_logged_in is being used quite well, yet there is a lack of a corresponding set_logged_in. Add a set accessor, even if private, and use it exclusively.

Directly assigning the value of an instance variable in more than one place violates the DRY principle. That is, the assignment of $this->logged_in = <<value>> should exist only once: within the accessor. For example:

private function set_logged_in( $logged_in ) {
    $this->logged_in = $logged_in;
}

Once the accessor is in place, use it:

private function check_login() {
    $this->set_logged_in( isset($_SESSION['logged_in']) );
}

After the logged in status is using accessors exclusively, you can remove the logged_in variable completely and rely only on the $_SESSION['logged_in'] value. Think about what this implies for the check_login() method.

Next:

if ($pass !== 'TheChosenPassword') return false;
return true;

Simply:

return $pass === 'TheChosenPassword';
_unix.136165
I'm looking for java code to copy files to a remote linux system. I have tried Runtime.getRuntime().exec() function by passing an scp command, but each time I run the program it is asking for the remote system password. I'd like to avoid that.I looked at the Jsch library -- using this I can login to a remote system -- but I can't copy the files to the remote system. Once I login I can do scp to my host but again it requires the host system username and password. However, I only have the remote system's information.
Java code to copy files from one linux machine to another linux machine
java;scp;file copy
Copying a file from one host to another requires a daemon on the remote host, implementing some application-level file transmission protocol. This is a requirement no matter from which language you are going to talk to that remote daemon.

Your options for Linux systems are:

SSH. This requires an SSH daemon (say openssh-server) on the remote side. Because ssh is designed for security, you will have to configure the remote host to authenticate you with either a password or a private key. Actually copying the file can be done via the scp utility or an ssh client library (jsch would be an example of such).

NFS. The remote host installs a daemon (for example samba) and shares some files. Your local computer (the cifs-utils package is capable of that) can then mount a remote location on the local file system. This way you can copy a file to the remote host by just copying the file locally. Authentication is optional; files are sent in plain over the network.

FTP. An ftp server is installed on the remote side and configured to permit access to certain locations for certain users. You can then use any ftp client or some ftp client library (the commons-net library from the Apache project, for instance) to connect to the remote ftp server and copy the files. Authentication is optional; files are sent in plain over the network.

All of this seems like a lot of work, and indeed it is, because there is not a single widely-adopted and standardized protocol that would be implemented and configured out-of-the-box on most systems.
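If you go the SSH route with JSch, a minimal sketch looks roughly like this (host, user and paths are placeholders; error handling omitted). Using a private key via addIdentity avoids the interactive password prompt the asker ran into:

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class CopyToRemote {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/home/alice/.ssh/id_rsa");           // key-based auth, no password prompt

        Session session = jsch.getSession("alice", "remote.example.com", 22);
        session.setConfig("StrictHostKeyChecking", "no");      // or pre-populate known_hosts instead
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        sftp.put("/tmp/local.txt", "/home/alice/remote.txt");  // copy the file over SFTP
        sftp.exit();

        session.disconnect();
    }
}

This uses SFTP rather than the scp protocol, but the effect (copying a file onto the remote host over SSH) is the same, and it only needs the remote machine's credentials.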
_softwareengineering.274901
function getOrInsertEmptyElemById(id){
    var elem = $(id) || document.body.insert('<div id='+id+'></div>');
    return elem;
}

I find myself using functions like the above quite often, but I struggle with naming them. How is a "make sure it exists and return it" function usually named?
How to name a function that makes sure an element exists and returns it?
design patterns;naming;naming standards
null
_codereview.70676
This code draws two sets of 9 boxes on the screen to display the contents of two arrays.How can I speed up, shorten and make the code more efficient? for (int x=0;x<3;x++){ for (int y=0;y<3;y++){ if (pattern[x][y]==1){ g.setColor(colorrange.colb()); g.fillRect(actlo+(x*size),actlo+(y*size),size,size); } g.setColor(Color.yellow); g.drawRect(actlo+(x*size),actlo+(y*size),size,size); if (patternb[x][y]==1){ g.setColor(colorrange.colb()); g.fillRect(actlo+(x*size),inlo+(y*size),size,size); } g.setColor(Color.yellow); g.drawRect(actlo+(x*size),inlo+(y*size),size,size); } }
Drawing sets of boxes to display contents of arrays
java;optimization
Multiplies can be pretty expensive within loops, so I minimized the use of them by precalculating where possible. It is also better to have the Y loop before the X loop to prevent cache misses (which wouldn't matter on a really small array like this but is a good practice). Consider this alternative:for(int y = 0; y < 3; y++){ final int yy = y * size; for(int x = 0; x < 3; x++) { final int xx = x * size; if(pattern[x][y] == 1) { g.setColor(colorrange.colb()); g.fillRect(actlo + xx, actlo + yy, size, size); } g.setColor(Color.yellow); g.drawRect(actlo + xx, actlo + yy, size, size); if(patternb[x][y] == 1) { g.setColor(colorrange.colb()); g.fillRect(actlo + xx, inlo + yy, size, size); } g.setColor(Color.yellow); g.drawRect(actlo + xx, inlo + yy, size, size); }}
_unix.348139
When I try to install the package pandoc-convert 1.1.0 on Atom 1.14.3 using the Manjaro LXqt OS 16.11 I get the following error:Installing [email protected] failed.Hide output(node:9072) DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() [email protected] postinstall /tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/pandoc-binnode index.js Downloading Pandoc (~20-50MB depending on OS). This may take a minute or so./tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r (empty)npm WARN deprecated [email protected]: use cross-spawn or cross-spawn-async instead.npm WARN deprecated [email protected]: Use the globby package insteadpath.js:7throw new TypeError('Path must be a string. Received ' + inspect(path));^TypeError: Path must be a string. Received { url: 'https://raw.github.com/toshgoodson/pandoc-bin/0.1.0/vendor/linux/x64/pandoc',name: 'pandoc',os: 'linux',arch: 'x64' }at assertPath (path.js:7:11)at Object.basename (path.js:1355:5)at /tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/download/index.js:35:43at each (/tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/each-async/each-async.js:63:4)at module.exports (/tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/download/index.js:33:5)at /tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/bin-wrapper/index.js:108:20at /tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/bin-wrapper/index.js:141:24at /tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/bin-check/index.js:30:20at /tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/node_modules/pandoc-convert/node_modules/executable/index.js:39:20at FSReqWrap.oncomplete (fs.js:114:15)npm WARN enoent ENOENT: no such file or directory, open '/tmp/apm-install-dir-117127-9072-634y30.v7yhbk2o6r/package.json'npm WARN apm-install-dir-117127-9072-634y30.v7yhbk2o6r No descriptionnpm WARN apm-install-dir-117127-9072-634y30.v7yhbk2o6r No repository field.npm WARN apm-install-dir-117127-9072-634y30.v7yhbk2o6r No README datanpm WARN apm-install-dir-117127-9072-634y30.v7yhbk2o6r No license field.npm ERR! Linux 4.8.5-2-MANJAROnpm ERR! argv /usr/bin/node /usr/lib/node_modules/npm/bin/npm-cli.js --globalconfig /home/martinezce/.atom/.apm/.apmrc --userconfig /home/martinezce/.atom/.apmrc install /tmp/d-117127-9072-m6rmws.6kl2mg3nmi/package.tgz --runtime=electron --target=1.4.15 --arch=x64 --global-stylenpm ERR! node v7.6.0npm ERR! npm v4.3.0npm ERR! code ELIFECYCLEnpm ERR! errno 1npm ERR! [email protected] postinstall: node index.jsnpm ERR! Exit status 1npm ERR!npm ERR! Failed at the [email protected] postinstall script 'node index.js'.npm ERR! Make sure you have the latest version of node.js and npm installed.npm ERR! If you do, this is most likely a problem with the pandoc-bin package,npm ERR! not with npm itself.npm ERR! Tell the author that this fails on your system:npm ERR! node index.jsnpm ERR! You can get information on how to open an issue for this project with:npm ERR! npm bugs pandoc-binnpm ERR! Or if that isn't available, you can get their info via:npm ERR! npm owner ls pandoc-binnpm ERR! There is likely additional logging output above.npm ERR! Please include the following file with any support request:npm ERR! /home/martinezce/.atom/.apm/_logs/2017-02-27T13_55_01_959Z-debug.log
Cannot install pandoc-convert 1.1.0 via npm in atom 1.14.3 on Manjaro LXqt
npm;atom editor;pandoc
null
_unix.44776
I've got some issues when copying larger files to my USB drive. First of all, my system stopped recognizing my USB drives automatically. I now have to run modprobe usb-storage in order for my USB drive to be recognized.Once this is run, I can properly access my USB drives (USB pen drives). However, when I try to copy a larger file, the copying process will hang. My USB drive indicates to be busy, and my program also indicates it is writing information, but it simply hangs and does not move. Details:I tried using a GUI tool and the command line. Results are the same. I tried using different USB drives. Results are the same.I tried decreasing the max_sectors setting, as indicated here. This has one main result: instead of hanging at 200 MB, the file now copies for 60 MB instead before hanging. Given the last observation, I feel it might have to do something with the max_sectors setting, but even lowering this does not help, so I'm not sure exactly where the issue can be. Please note that the system will copy a certain amount of information (60 or 200 MB, or whatever I set max_sectors to), and then act as if more info is being written, while in fact nothing happens. I have to manually remove the USB drive in order to be able to shut down my computer. EDIT: Please note that at this moment very few times I'm actually able to have my USB recognized. I am running modprobe usb-storage a dozen times, taking the USB out and connecting it again, and opening my file browser frequently, to try to 'force' recognition/mounting of the drive. It only happens once in many times. I feel this whole thing is related to one main issue with the USB drivers or something else in common, but I'm just not sure what it is. Any ideas?
Problems copying large files to USB drive
arch linux;usb drive;udev
null
_webmaster.89500
** I can't share the client's specific site name, but I will try my best to explain the problem.**We have a major brand client whose just launched a new campaign for a new product they offer. This campaign has a lot of cash injected to it including a national TV advert and is huge for the client.The main issue is that the page has stopped appearing in the SERPs when people type the campaign term. Another subdomain (owned by client) ranks instead, even though it's on-page SEO is vaguely related.It used to rank number 1. None of the other domain pages have an issue.There has been no linkbuilding, just naturally earned links due to the client being a household name brand.When entering the page as a site search on Google it says the following:subdomain.clientwesbite.com/category/product.html?intcmpid=A description for this result is not available because of this site's robots.txt learn more. We've done everything:- Checked on Robots.txt to see if the page is being blocked and this isn't the issue.- The on-page is optimized for the page and the campaign keyword.- The canonical appears correctly.- The internal anchor text with the actual isn't pointing to other pages.- The external anchor text with the actual isn't pointing to other pages.- Updated the XML sitemap which included the target URL.- Updated the HTML sitemap to include the specific page.Plus, when I test the cache of the page, it shows the homepage instead of the actual page.We're all banging our heads against the wall but can't find the issue.Anyone had similar problems? And how did you resolve it?
An important page has disappeared from SERPs
seo;google search console;serps;page
null
_softwareengineering.79936
Is it considered impolite to file a bug report against an abandoned open-source project, or an abandoned branch of a still-continuing project?Is it perceived as making a request of the former author(s) of the project, which they're no longer willing to fulfil, or are bug reports merely seen as a description of the current version of the software, which may be fixed by a different person in the future?
Is it impolite to file bug reports against abandoned open-source projects?
open source;issue tracking;etiquette
Impolite? No. Why should it be? If it is abandoned, it means nobody will look at those bugs.

Useless? No.

If that project comes back to life, or if it gets used in some other project, people will see the bug you reported and may take it into consideration. So you would be helping them, or, in an even better scenario, they will fix it. Remember - it's open source!

My point here is simple: a bug is a bug. Notify about its existence and stop doubting. It may do some good.
_cs.10932
I'm trying to understand DPLL algorithm for solving SAT problem. And here it is:Algorithm DPLL Input: A set of clauses . Output: A Truth Value.function DPLL() if is a consistent set of literals then return true; if contains an empty clause then return false; for every unit clause l in unit-propagate(l, ); for every literal l that occurs pure in pure-literal-assign(l, ); l choose-literal(); return DPLL( l) or DPLL( not(l));At first, I don't clearly understand how unit-propagate(l, ), pure-literal-assign(l, ) and choose-literal() work. I'll try to guess on particular examples. Correct me please if I do something wrong. For the first one unit-propagate(a, (0 v -a) (a v b) (b v d) (f v g) v ...) we will have ((0 v -0) (0 or 1) (1 v d) (f v g) ... = (f v g) v ...,having a = 0, b = 1.For second procedurepure-literal-assign(a, (a v b v c) (d v -b v a) (-d v b))result is (b v c) (d v -b) (-d v b),assigning a = 1.And finally choose-literal() just returns some random (in common case) unassigned literal for further computations.Now, I don't understand why algorithm has such strange conditions for finishing? Why does it work?Thanks!
Understanding DPLL algorithm
algorithms;logic;satisfiability;sat solvers
It works because the three cases you mention will remove every non-pure variable after some number of steps, but it is crucial that you also consider the recursive step, namely $\mbox{dpll}(\phi \land l) \lor \mbox{dpll}(\phi \land \neg l)$.After a recursive step, you are guaranteed to have at least one unit clause, so the unit clause will be set to its value and hence all the variable will in the end be assigned a value.I think your misunderstanding must arise from the recursive calls. Observe that if you have any formula $\phi = ((a \lor b \lor c) \land \cdots)$, then $\mbox{choose-literal}(\phi)$ will return some unassigned variable, say $a$. The recursive call will then be with $\phi \land a$ which will be $(a \lor b \lor c) \land \cdots) \land (a)$. And then $\mbox{unit-literal}(\phi)$ will return $a$ and set it to true. The other recursive call is equivalent, but with $\phi \land (\neg a)$ so $\mbox{unit-literal}(\phi)$ will still return $a$ and set $a$ to false.
_webapps.39408
I've only just started using Trello but I've been told I can add an email link into the card. How do I do this?
Adding an email into the card
trello
null
_cogsci.8730
Having disorganized thinking is different for everyone. But, it is sometimes described as not being able to connect thoughts together. I am asking about disorganized thinking other than communication problems. To be more specific, disorganized thinking, relating to the thoughts one forms, that causes disorganized behavior.Is there a biological reason behind this, something relating to dopamine activity in the brain?
What is the biological reason behind disorganized thinking and disorganized behavior in thought disorders?
cognitive psychology;neurobiology;terminology;schizophrenia;unconscious
There is some evidence that thought disorder (also called loose association) arises, at least partially, from increased spreading activation; schizophrenics, for example, often show a greater increase in activation to indirectly related words compared to unrelated words, than do non-thought disordered controls. This is primarily a cognitive mechanism, not a biological one, of course, and I have found very little biologically-based explanation of spreading activation itself; it comes from the computational cognitive science end of things, and is often discussed as part of a neural network model of cognition.To address Alex Stone's comment above, I believe what he is talking about is either Broca's area (the language-dominant hemisphere's inferior frontal gyrus), generally understood to be important for producing language as well as comprehending complex sentences, or Wernicke's area (the language-dominant hemisphere's posterior superior temporal gyrus), which is generally thought to be important for processing the dominant meaning of words as well as understanding nonliteral language such as jokes, sarcasm, etc. There is some evidence for a correlation between volume reduction in both Broca's and Wernicke's areas, and thought disorder symptoms. Again, however, no mechanism has been suggested; and as with much imaging research, it is at this stage impossible to tell whether the neural abnormalities are the cause of the symptom or themselves a symptom of another cause.Schizophr Res. 2013 May;146(1-3):308-13. doi: 10.1016/j.schres.2013.02.032.Schizophr Bull (2008) 34 (3): 473-482. doi: 10.1093/schbul/sbm108Neural Basis of Semantic Memory, ed. Kraut & Hart (2007). Cambridge University Press.
_softwareengineering.50120
I have read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and putting them into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about the Java packages, but it is not really the same.)Does it make sense to apply the idea of creating many tiny entities, each with a clearly define responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?Edit: My question is answered in this Wikipedia entry about Package Principles.
Best practices for using namespaces in C++
design;c++;namespace
null
_unix.286336
I have written a script which should execute certain commands as another user and after execution is finished (success or failure) should logout immediately.I have read that I can use -c of su to execute commands. So, I wrote a script like:#!/usr/bin/env bashsu - user2 -c echo 'hurray' && exitIt executes the echo command but stays logged in as user2 doesn't logout. I even tried using logout instead of exit but it stays logged in. I need to execute script as user1 which invokes the su command and executes as user2 and then the control should return to user1 after completion.UPDATEAlright, I was to able logout automatically but this time some commands are not being executed. For eg:Command1:su user2 -c 'echo 1'Output1:1And, then it logs out on its own.Command2:su user2 -c 'bash some-script.sh'Output2:bash: cannot set terminal process group (-1): Inappropriate ioctl for devicebash: no job control in this shellSo, whenever I try to run some script via -c of su, it displays the above mentioned error. I've tried many commands with -c option such as ls, which, bash --version, mkdir, rm, whoami etc. All these commands produced the correct output. But any command which tries to execute a script, it fails with the error.su user2 -c 'bash --version' # Workssu user2 -c 'bash some-script.sh' # Doesn't workI cannot figure out why this is happening. And, so is the reason I'm unable to fix this error.
error thrown when using su -c: bash: no job control in this shell
bash;su
null
_scicomp.10542
When I learned about SOR, it was mostly given as one of the first examples of iterative methods, and then later the iterative methods that I would end up using would be Krylov subspace methods.Are any of the iterative methods like Gauss-Seidel and SOR ever used in practice? Do you know of any real packages that use them seriously, for something other than demonstration purposes?
Gauss-Seidel, SOR in practice?
linear algebra
Yes, but not as stand-alone solvers for linear systems of equations. These days, they are used as smoothers in multigrid or as preconditioners in krylov methods.
_ai.3499
I know nowadays agencies are using GPUs in order to accelerate AI, but how fast should be it to be efficient, I mean I know that depends of how large and complex the assignment is but what would be a way to measure its efficiency and what kind of technology(amount of GPUS,RAM,STORAGE) and techniques need to be used in order to get enough efficency?Any thoughts from experts would be appreciated
how fast does need to be an AI agent to be efficient?
efficiency
My understanding of efficiency in this context is in regards to optimization of algorithms as opposed to hardware speed, which is more of a brute force component. GPUs may be more energy efficient, but this is distinct from linear optimization of algorithms.In terms of how much processor speed you need to tackle a given problem, that's in the realm of computational complexity theory and analysis of algorithms. Amount of GPUs, RAM and storage needed to tackle a given problem are purely a function of the complexity of the problem and the efficiency of the algorithms.
_unix.163632
To pool specific content from a batch of files, I do

for ID in {92..128}; do
    sed '3q;d' directory_$ID/stats
done

Now what if I want to put the $ID in front of each line read (preferably shifting the columns in a fixed-width manner) and then append the line to a report.txt file (creating it if it doesn't exist)? I did some research on this but there seem to be many potential ways of doing it, none of which I'm familiar with as a new Linux user (perhaps I should just use Python next time).
Read line from file, manipulate, and then append to another file
shell script;sed
To append $ID (with a space) at the beginning of each line, something like

sed "s/^/$ID /"

should work (notice double, not single quotes). If you want to do this within the given loop and redirect output to report.txt, try

for ID in {92..128}; do
    sed "s/^/$ID /;3q;d" directory_$ID/stats
done > report.txt
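Since the question also asks for fixed-width columns and for appending to report.txt, here is one way to do that (a sketch, not part of the original answer), piping the extracted line through printf-style formatting in awk:

for ID in {92..128}; do
    sed '3q;d' "directory_$ID/stats" |
        awk -v id="$ID" '{ printf "%-6s%s\n", id, $0 }'   # id in a fixed 6-character column
done >> report.txt

%-6s left-justifies the ID in a fixed 6-character column, and >> creates report.txt if it does not exist and appends otherwise.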
_softwareengineering.216735
I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a 3rd, separate project).I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first).I present this mockup of the engine and a report page's code behind:Engine (in separate DLL):public Engine{ DatabaseEntities _engineContext; public Engine() { // Connection string and procedure managed in DB layer _engineContext = DatabaseEntities.Connect(); } public ChangeSomeEntity(SomeEntity someEntity, int newValue) { //Suppose there's some validation too, non trivial stuff SomeEntity.Value = newValue; _engineContext.SaveChanges(); }}And report:public partial class MyReport : Page{ Engine _engine; DatabaseEntities _webpageContext; public MyReport() { _engine = new Engine(); _databaseContext = DatabaseEntities.Connect(); } public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e) { SomeEntity someEntity; //Wrong way: //Get the entity from the webpage context someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId); //Send the entity from _webpageContext to the engine _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of context //Right(?) way: //Get the entity from the engine context someEntity = _engine.GetSomeEntity(SomeEntityId); //undefined above //Send the entity from the engine's context to the engine _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of context }}Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that, to only give the Engine entities from its own context. But this is a very error-prone design. I see the error of my ways now. I just don't know the right path.I'm considering:Creating the connection in the Engine and passing it off to the webpage. Always instantiate an Engine, make its context accessible from a property, sharing it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?Creating the connection from the webpage and passing it off to the Engine (I believe that's dependency injection?)Only talking through ID's. Creates redundancy, not always practical, sounds archaic. But at the same time, I already recuperate stuff from the page as ID's that I need to fetch anyways.What would be best compromise here for safety, ease-of-use and understanding, stability, and speed?
Design pattern for an ASP.NET project using Entity Framework
c#;design patterns;asp.net;entity framework;webforms
I don't think you are a million miles away from where you need to be.What you need to do really depends on the level of abstraction that you need and what you foresee the future of this application to be.You definitely do not want to be using multiple contexts in a single http request. That will cause you a lot of pain.As a bare minimum I would alter your Engine to have the context dependency injected in (through the constructor or setter). That will allow you to isolate the engine for testing purposes by injecting in a mocked context.I would then consider creating a BasePage class that new ups an Engine Instance in the constructor (passing in a new Context). Each subsequent page would then inherit from the BasePage and would therefore have an Engine instance by default.I would then work on the Engine class to provide all the functionality that your pages require (possibly sub-classing if it starts to get a bit monolithic). You then make sure that all interaction between the UI and the underlying database is channelled through the Engine. This way you could extend the Engine as required say for instance Engine.Ajax.Get... or something along those lines.Like I say the level of abstraction you require may reveal other design considerations that you will need to make. You may wish to implement a repository and unit of work pattern on top of the context (technically an EF context already gives you this but if you are not sure about sticking with EF or know that you will need to introduce alternative data access options then you will need to consider it).If you are injecting dependencies and those dependencies are 'volatile' then you may look to use an Inversion of Control Container such as AutoFac or Windsor.
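A rough sketch of what that might look like (illustrative only; the class names come from the question, and you may prefer an interface plus an IoC container such as the ones mentioned above):

public class Engine
{
    private readonly DatabaseEntities _context;

    // The context is injected, so tests can pass a mocked/faked context.
    public Engine(DatabaseEntities context)
    {
        _context = context;
    }

    public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
    {
        someEntity.Value = newValue;
        _context.SaveChanges();
    }
}

public class BasePage : System.Web.UI.Page
{
    protected Engine Engine { get; private set; }

    protected BasePage()
    {
        // One context per request, shared by every engine call made from this page.
        Engine = new Engine(DatabaseEntities.Connect());
    }
}

Each page then inherits from BasePage and only ever hands the engine entities that came from that single context, which removes the two-context conflict described in the question.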
_unix.265676
I want to check the version of my Oracle JDBC driver. In this case I know the driver is 12.1.0.2 because of the path:

/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

But without that hint, I am looking for a valid command that will print the version on the console. I am using CentOS 7. Should I search for the Manifest.MF file, or do you have other ideas?
Finding the Oracle JDBC version using the command line
centos;oracle database
Oracle documents this in Verifying a JDBC Client Installation with a sample function written in Java:

If at any time you must determine the version of the JDBC driver that you installed, you can invoke the getDriverVersion() method of the OracleDatabaseMetaData class.

Further reading:

- Oracle JDBC FAQ
- Determine JDBC Driver Version (giving the same advice)
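A minimal command-line check along those lines (a sketch; adjust the connection string, credentials and the driver jar on the classpath) uses the standard JDBC metadata call, which the Oracle driver implements:

import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcVersion {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
        // getDriverVersion() comes from java.sql.DatabaseMetaData
        System.out.println(conn.getMetaData().getDriverVersion());
        conn.close();
    }
}

Compile and run it with the driver jar on the classpath, e.g. javac JdbcVersion.java && java -cp .:ojdbc7.jar JdbcVersion. If you only need a rough answer without connecting to a database, the jar's manifest usually carries the version as well: unzip -p ojdbc7.jar META-INF/MANIFEST.MF.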
_unix.318133
I am running a script which has to be suspended before killing. The first time I run it there is only one pid for the process. I kill it and run again and the number of PID goes increasing. First of all why this behaviour?And how can I kill all the suspended processes without explicitly mentioning each PID?./toplog.shSuspending:Ctrl-ZListing suspended processes:jobs -lOutput:[1] 12055 Stopped ./toplog.sh[2] 12752 Stopped ./toplog.sh[3]- 13276 Stopped ./toplog.sh[4]+ 13579 Stopped ./toplog.shKilling:kill 12055 12752 13276 13579
Pipe all the suspended processes to kill
kill;signals;jobs;processes
null
_webmaster.106217
Using Google Analytics, I'm trying to create a supported browsers segment using the Conditions section, but the boolean order of operations isn't occurring as expected. A simple example is below.AND usually takes precendence over OR, so I'd expect my segment above to be interpreted as (Safari AND v10) OR (Chrome AND v57), which would allow me to include a number of Browser/Version combinations.However, it seems like that's not the case, here. I believe this because:My results zero out with the segment configured as pictured, but if I remove either the first or fourth clause, I'll get results. This is consistent with my filters being interpreted as Safari AND (v10 OR Chrome) AND v57, such that Safari AND (v10 OR Chrome) and (v10 OR Chrome) AND v57 both return results.The horizontal line by each AND operator seems to divide the clauses as described above.OR-precedence doesn't make any sense to me, since the Add Filter button already allows the user to add additional clauses which would be equivalent to an AND, allowing the user to choose OR-precedence if they needed to. As it is, there seems to be no way to construct an AND-precedent segment.
Google Analytics Segments w/ Conditions: Boolean order of operations?
google analytics;advanced segments
The easiest way to accomplish this is to create multiple filters so that each filter can compare the same dimension. So, for example, this type of segment would have a filter for the browser (Safari or Chrome) AND a filter for the browser version (10 or 57). This segment would show you traffic for those browsers only. Here are screen shots of the segment and the resulting browser table:
_unix.28983
I somehow managed to create a file that doesn't seem to have a filename. I found some information regarding how to get more details of the file in the following thread. However, I tried some of the suggestions listed and can't seem to delete the file. I'm not sure what I did to create it, but it happened while trying to copy an xml file. Some info on the file is as follows:

> ls -lb
total 296
-rw-r--r-- 1 voyager endeavor 137627 Jan 12 12:49 \177

> file *
: XML document

> ls -i
417777

I tried to find it using the inum switch and then pipe that to rm, as that seemed like the most foolproof way of getting rid of it. However, the example given at the bottom of the thread linked below failed for me. The example was:

> find -inum 41777 -exec ls -al {} \;
find: illegal option -- i
find: [-H | -L] path-list predicate-list

so I tried using the path list first like the following, but that didn't work either:

> find . -inum 41777 -exec ls -al {} \;

I'm not sure what the non-printable character \177 is or how I can pass that to an rm command, but I really want to make sure I don't mess up any other files/directories in my attempt to delete this file.
How can I delete a file with no name
filenames;rm;special characters
The file has a name, but it's made of non-printable characters. If you use ksh93, bash, zsh, mksh or FreeBSD sh, you can try to remove it by specifying its non-printable name. First ensure that the name is right with:

ls -ld $'\177'

If it shows the right file, then use rm:

rm $'\177'

Another (a bit more risky) approach is to use rm -i -- *. With the -i option, rm requires confirmation before removing a file, so you can skip all the files you want to keep and confirm only the one you don't.

Good luck!
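The find-by-inode route from the question also works once a path is supplied; note that the find commands in the question used 41777, one digit short of the 417777 that ls -i printed. Something like this (not from the original answer) confirms the file first and then removes it with a prompt:

find . -inum 417777 -exec ls -ld {} \;   # confirm it is the right file
find . -inum 417777 -exec rm -i {} \;    # then remove it, answering the prompt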
_unix.363332
I am using Raspberry Pi using Raspbian which is just Debian.I would like to bridge from the primary WiFi network router that connects to Cox Cable to my cabled router here for my subnet to have reliable internet access. It needs to be a WiFi-to-Ethernet bridge. I have set /etc/networks for a static address for the USB wlan1 with the external adapter and hi-gain antenna. wpa_supplicant is configured to log in to the master router properly. So right now it is set up so I can login to the proper network with the password, on external wlan1. Static address is set in /etc/networks. Gateway and nameserver are OK. I can browse web pages, etc. The missing link is to bridge this to the eth0 port so my router can connect also, to provide service to my subnet.No need for any extra network services like routing or nat or dhcp, etc. Just a simple bridge. Can anyone please point me in the right direction to make this happen?Thank you in advance for any help.
How do I configure a network interface bridge from WiFi to Ethernet with Debian?
debian;wifi;ethernet;bridge
For configuring a bridge from ethernet to wifi, it is as simple as putting this in your /etc/network/interfaces:

auto eth0
allow-hotplug eth0
iface eth0 inet manual

auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual

auto br0
iface br0 inet static
    bridge_ports eth0 wlan0
    address 192.168.1.100
    netmask 255.255.255.0

Replace the IP address with something more appropriate to your network. If you prefer the IP attribution done via DHCP, change it to:

auto br0
iface br0 inet dhcp
    bridge_ports eth0 wlan0

After changing /etc/network/interfaces, either restarting Debian or doing

service networking restart

will activate this configuration.

You will have to make sure bridge-utils is installed for this configuration to work. You can install it with:

sudo apt install bridge-utils

For more information, see: BRIDGE-UTILS-INTERFACES

The wlan0 interface also has to be configured to connect to your remote AP, so this configuration is not to be used verbatim.
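After restarting networking you can check (with commands from bridge-utils and iproute2; this verification step is not part of the original answer) that the bridge actually came up and enslaved both interfaces:

brctl show           # should list eth0 and wlan0 under br0
ip addr show br0     # should show the address configured above

If wlan0 does not appear in the bridge, be aware that many Wi-Fi drivers refuse to bridge a client-mode (managed) wlan interface; that limitation is worth checking first when troubleshooting.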
_computergraphics.91
Is it realistic to render a super-high resolution image over an array of 3 by 3 or 5 by 5 (for example) stacked screens? Could I do this by combining several different GPUs, or would the only way be to use a GPU per screen and simply divide the image to be rendered?Ideally I'd like to be able to use fewer GPUs than screens. Even though one GPU is insufficient I'm hoping I won't need n GPUs for n screens.If this is possible I'd also need to know whether this requires using the same make and model of GPU or whether unrelated ones could still share processing.
Can I use several GPUs for a grid multi screen image?
gpu
Yes, it's certainly possible to do this kind of thing. First of all, you can typically connect 3 or 4 displays to each GPU, and (with an appropriate motherboard) you can have up to 4 GPUs per machine. That would give you 16 screens. With multiple machines, you could have even more.Rendering to multiple displays from a single GPU is no big deal. You can create a single window that stretches across all the displays, render to it, and it should just work.Rendering the same scene on multiple GPUs is a bit trickier. One way is to create a separate rendering context on each GPU, and essentially repeat all the rendering—re-upload all buffers and textures to each GPU, and re-submit the draw calls to each one every frame. Another possibility is to use the built-in multi-GPU support in DX12, which allows you to manage several GPUs in one rendering context.
_datascience.1183
I'm interested in discovering some kind of dis-associations between the periods of a time series based on its data, e.g., find some (unknown number of) periods where the data is not similar with the data from another period.Also I would like to compare the same data but over 2 years (something like DTW?).I get my data Excel as a two-column list:c1=date (one per each day of the year), c2=Data To AnalyzeSo, what algorithms could I use and in what software?Update/Later edit:I'm looking for dates as cut-off points from which the DataToAnalyze could be part of another cluster of consecutive dates. For example:2014-1-1 --> 2014-3-10are part of Cluster_1 based on DataToAnalyze. And:2014-3-11 --> 2014-5-2are part of Cluster_2 based on DataToAnalyze, and so on. So, clusters of consecutive dates should be automatically determined based on some algorithms, which is what I'm looking for. Which ones (or which software) would be applicable to this problem?
Discovering dis-associations between periods of time-series
clustering;time series
null
_webmaster.71456
We are running a print magazine and all issue articles are posted online as well. We optimize our Meta Titles, Meta Descriptions, and URLs for SEO but for archival purposes the actual titles of the articles are identical to what is in the print version. Many article titles are not SEO friendly. A few examples would be Fly Away or Building a Bigger Boat.How much of an impact does this have on overall SEO? Are we fine with keeping the original titles or should we change our strategy for print version titles?
SEO titles for online version of a printed magazine
seo;title
null
_unix.226668
I was playing around on Scientific Linux 7 and deleted all the files from my home directory. Now the bash prompt is bash-4.2$. How can I change the prompt and save it? I edited /etc/bashrc, but changing $PS1 there does not change the prompt.
Deleted home directory files, how to change bash prompt
scientific linux
Just copy everything from /etc/skel/ to your home and change the owner to yourself. There should be a .bashrc in it.
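Concretely (not part of the original answer), something like this restores the default dotfiles and reloads them, assuming your login shell is bash:

cp -a /etc/skel/. ~/     # -a preserves modes; the trailing /. also picks up the dotfiles
source ~/.bashrc         # reload the prompt settings without logging out

Since you copy the files as your own user there is normally nothing to chown; run chown -R youruser:yourgroup ~/ only if you did the copy as root.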
_unix.226734
I have 2 folders:

- /Desktop/old
- /Desktop/new

In the old folder, I have the following files:

- olda.txt
- oldb.txt

In the new folder, I have the following files:

- newa.txt
- newb.txt
- newc.txt

And I want to overwrite the files from old to new. I tried it like this:

mv /home/user/Desktop/old /home/user/Desktop/new

I get the below output in the new folder:

- old
- newa.txt
- newb.txt
- newc.txt

What I want is like this:

- olda.txt (from old/olda.txt)
- oldb.txt (from old/oldb.txt)
- newc.txt

How should I write it in order to get that?
Overwrite file from old folder to new folder
mv
You just moved the folder itself inside the new folder. Just use:

mv /home/user/Desktop/old/* /home/user/Desktop/new
_unix.116528
Before I go ahead and wipe my whole system and start from scratch, is there a way I can avoid what just happened on trying to install ffmpeg? I still don't actually know the correct way to install it so it includes most functions which it DOES NOT on Centos 6.4 with STATIC version.My /etc/lb folder is a mess after the command ldconfig -v which created symlinks and they conflict all over the place. The /etc/yum.repos.d folder is also a mess and the script I followed in a linux blog makes no sense to me and started errors in first place. I'd like to avoid that altogether next time around...is there a way I can do that? In other words not play around with writing repo scripts or touching that folder..that is when all the trouble began!Better yet is there a safe method to employ that can be reversed? Starting to think Centos is a headache in as far as ffmpeg goes. Everyone else doing basic commands with it just downloads static version.
Linux re install order of events to include ffmpeg
centos;ffmpeg
There are some licensing issues with some components of ffmpeg making it unpalatable to include in certain configurations for most distros, I think -- probably you are aware of this. Also, it includes a set of libraries that have been replaced in some places with the libav fork.

If you are looking for static binaries, they're here. If that has not worked for you,1 it is not hard to build and install from source, in which case you can configure it any way you want. The source tarballs are further down that page. The wiki has a page about compiling and installing on CentOS, although it installs into $HOME, which is probably not what you want unless you have to (because, e.g., it's not your computer). Presuming it is your computer, I'll give you an alternate, parallel guide to install it system-wide, which is preferable. You must do ALL of this as root (su) or via sudo. It's simpler than it looks, BTW; 5 simple steps:

1. Make sure you have the pre-reqs: yum install autoconf automake gcc gcc-c++ git libtool make nasm pkgconfig zlib-devel.

2. mkdir -p /usr/local/src. Then move the tarball into there and unpack it: tar -xjf ffmpeg-2.1.3.tar.bz2 if you got the bzip2 ball, or tar -xzf ffmpeg-2.1.3.tar.gz if you got the gzip ball. You may need to yum install tar (and/or bzip2 or gzip) if you don't have those already. Once all that's done, move into the created directory: cd ffmpeg-2.1.3.

3. The potentially complicated part is configuration. You haven't said exactly what you want this for, but if it is just recoding from one common format to another, the default should be fine; I did this recently and I think it includes everything it can. To be sure, you may want to have a look at the list of available encoders and decoders:

   ./configure --list-encoders
   ./configure --list-decoders

   If there's anything there you know you want, you can make sure it gets built by including --enable-encoder=[whatever] (or --enable-decoder) when you configure. Presuming you are not re-distributing this, you might as well also use --enable-nonfree,1 which is the stuff the distros definitely leave out of their builds (and, I'd guess, the static binaries). So for example:

   ./configure --enable-encoder=mpeg4 --enable-encoder=pcm_u8 --enable-decoder=wmv3 --enable-nonfree --enable-gpl

   The last one (--enable-gpl) isn't enabled by default, but pretty much everyone will want it; for more information, look at ./configure --help | less. Don't go crazy with this by listing every single encoder; I'm pretty sure they almost all get built in anyway. Choose one or two if you want, and then if something doesn't work out you can rebuild and re-install; it is easier the second time.

   Before you do the configure, have a look at the various libraries listed at the top of the wiki page under Compilation & Installation (x264, libfdk_aac, etc.) to see if there's anything there you want. You probably don't; if you don't recognize anything, don't worry about it. If you do: follow the directions there, but leave out the --prefix=$HOME/ffmpeg_build and --bindir=$HOME/bin switches to the individual ./configure commands. Run ldconfig and make sure your $PATH is correct after that; see step #5.2 To be clear: you only need to do this now if you built third-party libs.

4. Build, check, install. If configure does not exit cleanly, leave a comment here with the output. Otherwise go ahead and make. If you have a multi-core system, speed that up with make -j N, where N is a number of cores (your total - 1 is good if you want to use the system during the compile; total + 1 is ideal if you don't). If the build finished without error, make test (if that doesn't do anything, make check -- I can't remember which is used). That should take a minute or two and finish without any failures. At that point you can do a make install.

5. Set paths. The tools were installed into /usr/local/bin, so make sure that is at the beginning of your path, e.g.:

   > echo $PATH
   /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:[...]

   Notice /usr/local/bin precedes /usr/bin, and likewise with sbin. This means that if some distro version is installed, your custom one will take precedence. If your $PATH is not like that, create an /etc/profile.d/local.sh with one line in it:

   export PATH=/usr/local/bin:$PATH

   I'm not worried about sbin, since nothing from ffmpeg (or generally, anything else) ends up there. Also execute that line now in your current terminal. Other users will have to log in again for it to take effect.

   Finally, you need to make sure the libraries are available to the system linker (see man ldconfig). Check:

   grep /usr/local/lib /etc/ld.so.conf.d/*

   If you do not get any output, create a file /etc/ld.so.conf.d/local.conf with one line:

   /usr/local/lib

   You can add /usr/local/lib64 on a 64-bit system. Ffmpeg doesn't install to there, but some things do, and you may end up doing this again with those somethings one day. Now run ldconfig. This is crucial. Without that, the system won't be able to find the libraries the ffmpeg binary is linked to.

You can undo all this with make uninstall. As mentioned in step #3, you can also re-build/install again later if you want to change the configuration. First make clean, then start from #3 (you don't have to make uninstall first). At #5, all you'll have to do this time is run ldconfig.

1. See my comment about --enable-nonfree in step #3. I haven't used the static binaries, but my guess is they do not include everything.

2. So the procedure if you are also building third-party libs is to make and install all of them, then proceed with step #5, then #4, then all you have to do from #5 again afterward is run ldconfig.
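For reference, here is the whole sequence collapsed into a single sketch. Treat the -j 4 job count and the example --enable flags as placeholders to adjust for your own machine and needs (the 2.1.3 tarball name comes from the steps above), and read the notes in each step before running it blindly:

```sh
# 1. prerequisites (run everything here as root or via sudo)
yum install autoconf automake gcc gcc-c++ git libtool make nasm pkgconfig zlib-devel

# 2. unpack the source under /usr/local/src
mkdir -p /usr/local/src
cd /usr/local/src
tar -xjf /path/to/ffmpeg-2.1.3.tar.bz2   # adjust the path to wherever you downloaded it
cd ffmpeg-2.1.3

# 3. configure -- the flags here are only examples; see ./configure --help
./configure --enable-gpl --enable-nonfree

# 4. build, test, install (use -j <cores> to taste)
make -j 4
make test      # or "make check", whichever target the tree provides
make install

# 5. make sure the binaries and libraries are found
echo 'export PATH=/usr/local/bin:$PATH' > /etc/profile.d/local.sh
# only add this if /usr/local/lib is not already listed in /etc/ld.so.conf.d/*
echo /usr/local/lib > /etc/ld.so.conf.d/local.conf
ldconfig
```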
_computerscience.2412
In perspective projection, a group of parallel lines has the same vanishing point. I am interested in the reverse calculation: finding the equations of the group of parallel lines whose vanishing point is a specific point.

Say I know that the camera is a perspective camera at $(0,0,0)$, its direction is $(0,0,1)$, and the view plane is $z = 1$, and I am interested in the lines in the plane $y = y_0$ whose vanishing point is $P = (p_x,p_y,p_z)$. I have tried to calculate the projection of some point $(x,y_0,z)$ and get the equations:

(i) $p_x = x\left(\frac1z\right)$
(ii) $p_y = y_0\left(\frac1z\right)$
(iii) $p_z = 1$

But this seems wrong, because if the vanishing point is something like $(x_0,0,*)$, then from (ii) we get $z\rightarrow \infty$, but then (i) fails because $x\left(\frac1z\right)\rightarrow 0$ while it needs to equal $x_0$.

So how can I get the group of parallel lines that share a given vanishing point under these conditions?
Calculate vanishing point
3d;perspective
null
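Since this question has no accepted answer in the record above, here is a short derivation sketch of where equations (i)-(iii) go astray; this is my own reasoning from the pinhole setup described in the question, not part of the original thread. The key point is that the vanishing point depends only on the direction of the line, not on any fixed point on it:

```latex
% A line through Q with direction d (d_z \neq 0), projected onto the plane z = 1
% through the origin:
\[
L(t) = Q + t\,d, \qquad Q = (q_x, q_y, q_z),\quad d = (d_x, d_y, d_z)
\]
\[
\pi\bigl(L(t)\bigr)
  = \left(\frac{q_x + t d_x}{q_z + t d_z},\ \frac{q_y + t d_y}{q_z + t d_z},\ 1\right)
  \;\longrightarrow\;
  \left(\frac{d_x}{d_z},\ \frac{d_y}{d_z},\ 1\right) = P
  \quad \text{as } t \to \infty
\]
% So every line with direction proportional to (p_x, p_y, 1) has vanishing point P,
% regardless of which point Q it passes through.  The apparent contradiction in the
% question comes from holding x fixed while z grows: along the line, x grows with z,
% so x/z tends to d_x/d_z rather than 0.  A line lying in the plane y = y_0 must have
% d_y = 0, so such lines can only have vanishing points of the form (p_x, 0, 1), and
% the family is L(t) = (x_0, y_0, z_0) + t\,(p_x, 0, 1) for arbitrary x_0, z_0.
```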
_unix.367244
This question came up while I was reading the book 'Linux Device Drivers'. Every device driver is mapped to a physical device, and since filesystems in Linux can be associated with memory, I got a bit confused. I think this deserves a bit more explanation in order to understand drivers better. I'm looking for more arguments beyond what the book gives.
Why is a filesystem in linux not classified as a device driver?
linux kernel;drivers
The filesystem is actually device agnostic, as most filesystems can be implemented on most block devices. Device drivers tell the kernel how to use the hardware device to address (read/write/seek) its data, whilst the filesystem modules tell it how to represent files and directories on top of a block device. By analogy, you could think of the block device as the structure of a house and the filesystem as what is inside the house, such as the furniture and decoration. The structure of the house doesn't determine what you put in it or how it's decorated.
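One way to see this device-agnosticism in practice is to put a filesystem on a plain file instead of real hardware via the loop driver. This is an illustration I'm adding, not part of the original answer; it assumes e2fsprogs and loop-device support are available and must be run as root:

```sh
# Create a 64 MiB ordinary file to stand in for a disk
dd if=/dev/zero of=/tmp/disk.img bs=1M count=64

# The filesystem code doesn't care what's underneath: format the file with ext4
# (mkfs may warn that this is not a block special device and ask to proceed)
mkfs.ext4 /tmp/disk.img

# Mount it like any other block device; the loop driver presents the file as one
mkdir -p /mnt/demo
mount -o loop /tmp/disk.img /mnt/demo

# Files and directories now live "inside" an ordinary file; the ext4 module only
# ever talks to the generic block layer, never to a specific piece of hardware
touch /mnt/demo/hello
umount /mnt/demo
```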