id | question | title | tags | accepted_answer
---|---|---|---|---|
_webmaster.18730 | Possible Duplicate: What are the best ways to increase your site's position in Google? What are some ways to optimize a Blogger blogspot for SEO? I have looked at several other articles, but they were old, so the tactics used won't work with the newer Blogger. | SEO optimizing Blogger blogspots | seo;blog;blogger | null |
_softwareengineering.199708 | Is there, somewhere, a freely usable/accessible script, source file, or whatever, that is able to measure the compliance of a given C++ compiler? For example, the Acid3 test for browsers: http://acid3.acidtests.org/ The results I dream of would be a global percentage score (or multiple scores, one for each standard, e.g., c++98, c++11, c++14, etc.), and then detailed tests with success or failure for each of them. Background: I had a discussion at work about boost and some challenged compilers. My interlocutor spoke about boost being an academic project, because it won't work in major C++ compilers, and me answering that mentally challenged compilers should not count. Being able to measure with code the actual conformance of a compiler would help both in evaluating the compiler, and discovering the corner cases that should be avoided in cross-platform code compiled with them. Edit (2013-06-22): Not an answer, but apparently, the C++ committee is working on the subject: SG10, Feature Test: Clark Nelson (Intel). Investigation into whether and how to standardize a way for portable code to check whether a particular C++ product implements a feature yet, as we continue to extend the standard. Source: http://isocpp.org/std/the-committee | Is there a compliance test for C++ compilers? | c++ | null |
_codereview.129241 | How can the time complexity of the following code be improved (assuming it's \$O(N^2)\$)? What about style and patterns? This code is trying to find the minimum subarray size that adds up to a given sum k. Elements should be adjacent. /** * Created by mona on 5/24/16. */public class MinSubArray { public static void minSubArray(int[] a, int k){ int start=-1; int end=-1; int min=Integer.MAX_VALUE; for (int i=0; i<a.length; i++){ int tmpSum=0; for (int j=i; j<a.length && (j-i+1)<min ; j++){ tmpSum+=a[j]; if (tmpSum==k){ start=i; end=j; min=end-start+1; break; } if (tmpSum>k){ break; } } } if (start==-1 || end==-1){ System.out.println("No such array exists with sum of "+k); } while (start<=end){ System.out.print(a[start]+" "); start++; } } public static void main(String[] args){ int a[] ={1,2,3,-1,2,4,8,9,5,6,-2,-3,10}; minSubArray(a,5); }} | Min sub array size with a given sum | java;array;complexity | null |
_softwareengineering.230878 | Consider what it takes to completely finish a story in our organization: Demo requirements were met; UI design finalized (layout, styles, fonts, controls, colors, etc...); User text finalized and mistake-proofed; All text translated to Spanish, German, Italian and French; User manual updated; All bugs deemed by the PO as needing fixing for release were fixed; Documents for FDA updated (SRS, STD, STP, STM, UFMEA, DFMEA, SDD, installation); Requirements were written and approved by PO; UI spec was written and approved by PO; All tests (final acceptance and integration) written/updated and executed - synced and agreed with test manager; All FAT tests were approved by QPE; Regression testing was done; Code was reviewed and documented; Unit tests were written and executed - synced and agreed with software manager; Refactoring, if needed, was done - synced and agreed with software architect; Design was documented; Installer was updated (if needed). With so much work needed to fully complete a story, the team stops being nimble and it becomes more difficult to try out new features quickly. As far as I see there are 3 ways to solve this problem: (1) Do more Spike stories which don't meet the Done definition but serve as quick prototypes to gather feedback from users. (2) Get feedback from users all the time during the Sprint rather than at the end of it. So half-way through the story you could already gauge what users think of the story and quickly adapt before doing all the heavy stuff (translations, user manual and such). (3) Relax the Done definition to require just basic testing and bug fixing. This way the team won't have a potentially shippable product at the end of the Sprint, but it will be very quick to try new features and technical innovations without suffering from the burden of documenting everything, writing the user manual, doing design polishes, etc... Which option would you pick and why? Thank you. | Scrum: relaxing done or using spikes? | scrum | null |
_unix.273617 | I can create a cron job for every 5 minutes with the following line: */5 * * * * root bash /etc/cron.d/mongo/5min.sh. In the /etc/cron.d/mongo/5min.sh file I send a request with cURL: #!/bin/sh export PATH=/usr/local/bin:/usr/bin:/bin curl http://mysite.com/crons/5minute. The route crons/5minute on the site is PHP code and does not do anything with the file system, but every time it is executed it creates a file in the home directory with names in this format: 5minute, 5minute.1, ... 5minute.14999. Inside these files is the response body of the cURL request, like this: empty<br>yes<br>. These strings are written in crons/5minute.php with the echo command. What is the cause of the problem? | Cronjob Create File Every Execute | linux;ubuntu;cron;curl | null |
_unix.13896 | My home computer is behind an ISP-level NAT (and firewall). The target computer is a work computer behind a gateway. You have to log in to the gateway computer first via SSH (as it is the only one visible and with access from the Internet). The SSH daemon on this gateway is configured to allow only 'keyboard-interactive' logins (i.e. no password-less public-key login). Then you log in to the target computer using public-key based login (only). How do I set up SSH tunnels (I would probably need two of them: forward and reverse), so that after setting those up I can log in from my home computer directly to the target computer, and vice-versa, both without providing a password? I'd like to be able to, for example, synchronize my private git repositories (pushing from home to target, and fetching from target to home). Note that this is a more involved setup than the one described in the question How can I forward traffic from my publicly available server to a computer that is not publicly available? | Set up password-less SSH tunneling from home computer behind NAT to inside computer behind gateway | ssh;ssh tunneling;port forwarding | You're looking for something like this, I believe (let's call the first server 'gateway1', and the second server 'gitrepo1'): ssh -L 8022:gitrepo1:22 gateway1. Then, with your private key locally on your home computer, you should be able to do the following to get to your git repo server: ssh -i /path/to/your/key localhost -p 8022. I'm a little concerned that I'm missing something as I don't see a need for more than one tunnel in this situation. |
_webmaster.74592 | I'm getting spam messages sent through my contact form on my website but the visitor is not logged at all in Piwik. Does this mean that they disabled Java? Would they purposely go through that effort to avoid you logging their visit? | Getting spam sent through website, but visitor not logged | analytics;spam;piwik | null |
_unix.158053 | How can I tell checkinstall only create deb package file, but not install?with checkinstall --install=no, it fails at the end, for not having permission to do something. Does it really need root to create a deb file without installation?$ checkinstall --install=nocheckinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran This software is released under the GNU GPL.********************************************* Debian package creation selected ********************************************This package will be built according to these values: 0 - Maintainer: [ tim@admin ]1 - Summary: [ wine 1.6.2 built from source Oct 3, 2014 ]2 - Name: [ wine ]3 - Version: [ 1.6.2 ]4 - Release: [ 1 ]5 - License: [ GPL ]6 - Group: [ checkinstall ]7 - Architecture: [ i386 ]8 - Source location: [ wine-1.6.2 ]9 - Alternate source location: [ ]10 - Requires: [ ]11 - Provides: [ wine ]12 - Conflicts: [ ]13 - Replaces: [ ]Enter a number to change any of them or press ENTER to continue: Installing with make install...========================= Installation results ===========================make[1]: Entering directory `/tmp/wine-1.6.2/tools'make[1]: `makedep' is up to date.make[1]: Leaving directory `/tmp/wine-1.6.2/tools'make[1]: Entering directory `/tmp/wine-1.6.2/libs/port'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/libs/port'make[1]: Entering directory `/tmp/wine-1.6.2/libs/wine'version=`(GIT_DIR=../../.git git describe HEAD 2>/dev/null || echo wine-1.6.2) | sed -n -e '$s/\(.*\)/const char wine_build[] = \1;/p'` && (echo $version | cmp -s - version.c) || echo $version >version.c || (rm -f version.c && exit 1)make[1]: Leaving directory `/tmp/wine-1.6.2/libs/wine'make[1]: Entering directory `/tmp/wine-1.6.2/libs/wpp'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/libs/wpp'make[1]: Entering directory `/tmp/wine-1.6.2/tools'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools'make[1]: Entering directory `/tmp/wine-1.6.2/tools/widl'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/widl'make[1]: Entering directory `/tmp/wine-1.6.2/tools/winebuild'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/winebuild'make[1]: Entering directory `/tmp/wine-1.6.2/tools/winedump'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/winedump'make[1]: Entering directory `/tmp/wine-1.6.2/tools/winegcc'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/winegcc'make[1]: Entering directory `/tmp/wine-1.6.2/tools/wmc'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/wmc'make[1]: Entering directory `/tmp/wine-1.6.2/tools/wrc'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/tools/wrc'make[1]: Entering directory `/tmp/wine-1.6.2/include'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/include'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/adsiid'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/adsiid'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: `libdinput.def' is up to date.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: `libdinput.def.a' is up to date.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dinput'make[1]: Entering directory 
`/tmp/wine-1.6.2/dlls/dxerr8'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dxerr8'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dxerr9'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dxerr9'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/dxguid'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/dxguid'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/strmbase'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/strmbase'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/strmiids'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/strmiids'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/uuid'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/uuid'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/winecrt0'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/winecrt0'make[1]: Entering directory `/tmp/wine-1.6.2/dlls/acledit'make[1]: Nothing to be done for `all'.make[1]: Leaving directory `/tmp/wine-1.6.2/dlls/acledit'./tools/mkinstalldirs -m 755 /usr/local/lib/winemkdir /usr/local/lib/winemkdir: cannot create directory `/usr/local/lib/wine': Permission deniedmake: *** [/usr/local/lib/wine] Error 1**** Installation failed. Aborting package creation.Cleaning up...OKBye.with fakeroot checkinstall, also fail due to permission problem. | how to tell checkinstall only create package file, but not install? | software installation;checkinstall | null |
_softwareengineering.223058 | Consider a database table of Items that have a status flag represented by an integer. A few of the status might be:0 - Past Storage;1 - Current Inventory;5 - Scrap;6 - Rework;15 - Processing;Now, I would like to avoid passing and querying for 'magic numbers' in my code, and in the past I have used a Dictionary to accomplish this, but this approach seems less elegant than what I hope to accomplish.How are these types of status flags retrieved from a database handled in object oriented code? Is it with enums, and if so, how? Or is it better to create a separate table with the status flag as the primary key? | How to handle status integers from database in object oriented code? | c#;object oriented design | Where I work this is a common situation. What we do here is use enums AND a separate table with the status flag as the primary key. In our experience, things have been a lot easier when the primary key was not an identity field. The good thing about doing it this way is that the c# compiler has a list of valid values (the enum) and the DBAs and whoever else has to work with the data (report writers) also has a list of valid values (the table). The downside, of course, is that any additions or modifications have to be done in both places. |
_unix.353558 | As a child I played a DOS game called Electro Body. The game did something amazing - it played back PCM samples through the PC speaker. Not the crappy square beeps - it played real sound effects! It was super quiet in comparison to the usual beeps that the PC speaker makes, but it was a completely new quality of sound. I never heard anything like that before or after that game. I wonder if there is a way in GNU/Linux to play arbitrary PCM sound streams through the PC speaker, apart from just the beeps that the beep command makes? Can I play WAV or Ogg files through that? Apart from the fact that it'd be cool to make some sophisticated noises, one could probably use this as an analogue voltage control output - for whatever crazy DIY project. | Playing arbitrary PCM sound through the PC speaker? | audio | I don't have a system to test it on, but it appears that ALSA can provide mapping of output to the PC speaker. FYI, there are many pages out there that say this is a bad idea because the driver is intended as a toy and not for general use (it will burn a lot of CPU cycles), but that said, this should work:
# Load the PC speaker driver
sudo modprobe snd-pcsp
# Reload ALSA to find the new driver
sudo alsa force-reload
# You should now see pcsp (pcspeaker) as an ALSA output option
sudo aplay -l
Select the sound card as your output and have fun! Sources: http://wiki.archlinux.org/index.php/PC_speaker#ALSA http://wiki.archlinux.org/index.php/Advanced_Linux_Sound_Architecture#Set_the_default_sound_card http://www.linuxquestions.org/questions/slackware-14/how-do-you-use-snd-pcsp-in-slackware-14-1-a-4175534306/ |
_cs.45640 | I've read in an article that $coRP = RP$ is an open question, but that it is obvious that $coRP \subseteq RP^{RP}$.If $L \in coRP$, I don't understand how access to the oracle helps to build a probabilistic machine that proves $L \in RP^{RP}$.Any explanation would be appreciated. | Prove that $coRP \subseteq RP^{RP}$ | complexity theory;complexity classes | Suppose $L \in \mathsf{coRP}$, so that $\overline{L} \in \mathsf{RP}$. Using an oracle to $\mathsf{RP}$ we can determine whether a given string $x$ is in $\overline{L}$, and so whether $x \in L$. This gives a $\mathsf{P}^{\mathsf{RP}}$ algorithm for $L$. |
_unix.98138 | It happens with a Vostro 3550, Debian, Gnome and a Philips 1080p television.As soon as I connect the HDMI it starts toggling output and goes from mirror to just the laptop display to just the TV to conjugated at intervals of 15 seconds to 5 minutes.The Fn + F1 command and the display tab change it, only for it to continue happening.I should also mention that the TV shifts from 1080p and a more contrasted image with less quality throughout this toggling.xrandr>Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192LVDS1 connected (normal left inverted right x axis y axis) 1366x768 60.0 + 40.1 1360x768 59.8 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis)HDMI1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 640mm x 360mm 1920x1080 60.0*+ 50.0 30.0 25.0 30.0 25.0 24.0 1280x1024 60.0 1360x768 59.8 1280x720 60.0 50.0 1440x576 25.0 1024x768 60.0 1440x480 30.0 800x600 60.3 720x576 50.0 720x480 59.9 640x480 60.0 59.9 DP1 disconnected (normal left inverted right x axis y axis)after xrandr --output LVDS1 --mode 1360x768 --output HDMI1 --same-as LVDS1Screen 0: minimum 320 x 200, current 1360 x 768, maximum 8192 x 8192LVDS1 connected 1360x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1366x768 60.0 + 40.1 1360x768 59.8* 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis)HDMI1 connected 1360x768+0+0 (normal left inverted right x axis y axis) 640mm x 360mm 1920x1080 60.0 + 50.0 30.0 25.0 30.0 25.0 24.0 1280x1024 60.0 1360x768 59.8* 1280x720 60.0 50.0 1440x576 25.0 1024x768 60.0 1440x480 30.0 800x600 60.3 720x576 50.0 720x480 59.9 640x480 60.0 59.9 DP1 disconnected (normal left inverted right x axis y axis)I tried stopping udev in case that was causing the toggling:root@mach:/home/rt# sudo service udev stop[ ok ] Stopping the hotplug events dispatcher: udevd.root@mach:/home/rt# xrandrScreen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1366x768 60.0*+ 40.1 1360x768 59.8 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis)HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 640mm x 360mm 1920x1080 60.0*+ 50.0 30.0 25.0 30.0 25.0 24.0 1280x1024 60.0 1360x768 59.8 1280x720 60.0 50.0 1440x576 25.0 1024x768 60.0 1440x480 30.0 800x600 60.3 720x576 50.0 720x480 59.9 640x480 60.0 59.9 DP1 disconnected (normal left inverted right x axis y axis) | How do I keep my display output from toggling by itself? | debian;display settings | null |
_softwareengineering.218879 | What are the differences between string.c_str() and &string[0]? Regarding performance, my guess is that &string[0] is a little faster than string.c_str() as it doesn't require a function call. Regarding safety and stability, common sense tells me that string.c_str() should have some checks implemented, but I don't know, that's why I'm asking. | What is the difference between string.c_str() and &string[0]? | c++;c | In C++98 there is no guarantee that the internal array is null terminated; in other words string.data()[string.size()] results in undefined behavior. The implementation will then reallocate the array with the null termination when c_str() is called but can leave the null terminator off when it isn't. This also means that &string[0] is not guaranteed to be null terminated (it is essentially a detour to data()). In C++11 the null termination guarantee is specified so string.data()==string.c_str() is always valid. |
_codereview.92601 | A classic problem to split a string into its root and suffix: (Stemming) The word "walk" is the base form for the word "walking" (suffix is "ing"); The file path /var/www is the base path of /var/www/myApp; The string doi:10.1038/ncomms is the URN-prefix for doi:10.1038/ncomms7368 and doi:10.1038/ncomms7666; The word "hello" is not a root for "walking" (suffix is "walking"); There are many ways to implement the same algorithm, so, what is the best? Examples in PHP, but valid for any language. /** * Splits a string by its root and suffix. * @param $str string input * @param $root string empty when no root, or start string * @return array (rootFlag,suffix) */function str_splitByRoot($str, $root){ ...}Algorithms str_splitByRoot1(), str_splitByRoot2(), ..., str_splitByRoot5() or other (show more if you know). All do the same thing, are valid solutions.function str_splitByRoot1($str, $root){ if (strpos($str,$root)===0) return array($root, substr($str, strlen($root)) ); else return array( '', $str );}function str_splitByRoot2($str, $root){ $rootLen = strspn($str ^ $root, "\0"); return array( substr($root,0,$rootLen), substr($str,$rootLen) );}function str_splitByRoot3($str, $root){ $s = explode($root,$str); return ( count($s)>1 && !array_shift($s) )? array($root,join($root,$s)): array('',$str);}function str_splitByRoot4($str, $root){ // to generalize need a secure regex, something like // $regex = str_replace(array('/','.','-'),array('\\/','\\.','\\-'),$root); $suffix = preg_replace("/^$root/","",$str,1,$n); return $n? array($root,$suffix): array('',$str);}function str_splitByRoot5($str, $root){ // need also $root translating as algorithm 4. if (preg_match("/^$root(.+)$/",$str,$m)) return array($root,$m[1]); else return array('',$str);}The first is the traditional way. The last (algorithms 4 and 5) use regular expressions, the second trims excess garbage from strings that are null terminated, and the third remembers that the algorithm is a kind of split (explode function). All can be used with: function str_sepByRoot($str,$root){ return join(' * ',str_splitByRoot($str,$root)); }print "\n".str_sepByRoot("walking","walk");print "\n".str_sepByRoot("hello","walk");print "\n".str_sepByRoot("walking-walk-walk","walk");print "\n".str_sepByRoot("/var/www/myApp","/var/www/");print "\n".str_sepByRoot("10.1038/ncomms7368","10.1038/ncomms");returning walk * ing * hello walk * ing-walk-walk /var/www/ * myApp 10.1038/ncomms * 7368 | Simple string-split by root and suffix algorithm | php;algorithm;strings | null |
_unix.212610 | I have an old modem I was considering repurposing. The first step I thought to do was connect my computer with the modem and try to map the open ports. I found that, upon connecting the device, that my eth0 interface was assigned an IP address. My computer also has a wireless NIC (wlan0), so I now have two IP addresses. I scanned eth0 and got the results. Then, for good measure, I port mapped my wireless interface only to find that the results were identical. As such, the obvious conclusion is that by scanning eth0 I was in fact scanning myself. So, how can I go about scanning the modem? My knowledge in this space is limited, but in order to do this I imagine the modem itself requires the assignment of an IP. Or am I thinking about this incorrectly? EDIT: The modem is an older Motorola 3360 used ages ago with a DSL connection. | How to communicate with modem over Ethernet? | ip;modem | null |
_cstheory.26033 | Why does simhash work? I understand how to implement the hash algorithm, mechanically, from the many articles such as http://matpalm.com/resemblance/simhash/. But is there a simple intuitive explanation for why this particular procedure is so effective at capturing similarity? | What is the intuition behind simhash? | hash function | null |
_unix.39864 | How can I fix this problem? What is libzypp.so.1106 and libaugeas.so.0? Why is this error repeated so many times for libzypp.so.1106? zypper: /usr/local/lib64/libxml2.so.2: no version information available (required by /usr/lib64/libzypp.so.1106) zypper: /usr/local/lib64/libxml2.so.2: no version information available (required by /usr/lib64/libzypp.so.1106) zypper: /usr/local/lib64/libxml2.so.2: no version information available (required by /usr/lib64/libzypp.so.1106) zypper: /usr/local/lib64/libxml2.so.2: no version information available (required by /usr/lib64/libzypp.so.1106) zypper: /usr/local/lib64/libxml2.so.2: no version information available (required by /usr/lib64/libaugeas.so.0) | No version information available? | package management;libraries;suse;zypper | I found two copies of libxml2.so.2, in /usr/lib64 and /usr/local/lib64. I deleted one and symlinked the binary, and the error message is gone. |
_scicomp.2313 | I am doing a text classification task with R, and I obtain a document-term matrix with size 22490 by 120,000 (only 4 million non-zero entries, less than 1% entries). Now I want to reduce the dimensionality by utilizing PCA (Principal Component Analysis). Unfortunately, R cannot handle this huge matrix, so I store this sparse matrix in a file in the Matrix Market Format, hoping to use some other techniques to do PCA.So could anyone give me some hints for useful libraries (whatever the programming language), which could do PCA with this large-scale matrix with ease, or do a longhand PCA by myself, in other words, calculate the covariance matrix at first, and then calculate the eigenvalues and eigenvectors for the covariance matrix. What I want is to calculate all PCs (120,000), and choose only the top N PCs, who accounts for 90% variance. Obviously, in this case, I have to give a threshold a priori to set some very tiny variance values to 0 (in the covariance matrix), otherwise, the covariance matrix will not be sparse and its size would be 120,000 by 120,000, which is impossible to handle with one single machine. Also, the loadings (eigenvectors) will be extremely large, and should be stored in sparse format. Thanks very much for any help !Note: I am using a machine with 24GB RAM and 8 cpu cores. | Apply PCA on very large sparse matrix | machine learning | I suggest the irlba package - it produces virtually the same results as svd, yet you can define a smaller number of singular values to solve for. An example, using sparse matrices to solve the Netflix prize, can be found here: http://bigcomputing.blogspot.de/2011/05/bryan-lewiss-vignette-on-irlba-for-svd.html |
_softwareengineering.23064 | As most people agree, encouraging developers to make fast code by giving them slow machines is not a good idea. But there's a point in that question. My dev machine is fast, and so I occasionally write code that's disturbingly inefficient, but that only becomes apparent when running it on other people's machines.What are some good ways to temporarily slow down a turbocharged dev machine? The notion of speed includes several factors, for example:CPU clock frequency.Amount of CPU cores.Amount of memory and processor cache.Speed of various buses.Disk I/O.GPU.etc. | How to slow down your computer (for testing purposes)? | efficiency | Run your tests in a virtual machine with limited memory and only one core.The old machines people still may have now are mostly Pentium 4 era things. That's not that unrealistic - I'm using one myself right now. Single core performance on many current PCs normally isn't that much better, and can be worse. RAM performance is more important than CPU performance for many things anyway, and by limiting a little more harshly than for an old 1GB P4, you compensate for that a bit.Failing that, if you're willing to spend a bit, buy a netbook. Run the tests on that. |
_softwareengineering.102294 | As a part of learning system programming, I am looking to implement a file shredder. The simplest way (and probably seen as naive) would be to replace the data bytes with zeroes (I know the OS splits the files and I'll replace bytes in all those chunks). But when I google this topic, I am surprised to find multiple-pass algorithms, some going as high as 35 passes! Could someone elucidate the benefit of multiple passes, please? I couldn't find any explanation. Thanks | File shredder algorithm | algorithms;systems programming;file systems | null |
_datascience.10119 | I have a 2M instances dataset with millions of very very sparse dummy variables created using the hashing trick = hash(orig_feature_name + orig_feature_value)=1. Note that the data is sparse both on rows (every instance has only a limited <100 features=1) and on columns (most features are relevant only to very few instances < 1%) I discovered that in such sparse scenarios the follow-the-regularized-leader FTRL proximal gradient descent is very popular:paper, reference implementation.But I'm not sure why shouldn't I prefer a batch gradient descent algorithm? FTRL for all its merits is still an online-learning algorithm that sees one instance at a time. So what are the advantages and disadvantages of using FTRL vs. a well known sparse least squares algorithm such as LSQR (paper, reference implementation)?My intuition is that if possible to use all the data for each iteration, we should do it, but I'm not sure... | differences between LSQR and FTRL when working with very sparse data | linear regression;gradient descent;online learning | null |
_webmaster.53479 | I want my website to work either way for both www.mysite.com and mysite.com, so I setup an A record for the root domain mysite.com and then created a CNAME record which points www.mysite.com to the host record mysite.com.Now if I type www.mysite.com, it's actually resulting in a 301 redirect to mysite.com. But every time the redirection is causing about 3 seconds latency (see screenshot).Have I made any mistake in configuring my domain? | 301 redirect latency | dns;301 redirect;latency | null |
_unix.333393 | Is there an application or feature that allows a copy and paste across log outs or reboots?Linux rome 4.8.0-32-generic #34-Ubuntu SMP Tue Dec 13 14:30:16 UTC 2016 i686 i686 i686 GNU/Linux | Copy and paste across logout/reboot? | clipboard;reboot | null |
_softwareengineering.257662 | I'm searching for the correct type of diagram in which I can see all dependencies between the functions, classes and files of my Python program (multiple files). It's for cleaning purposes. So my question is: Which diagram should I use? I thought about a Class Diagram, but it does not show dependencies between functions (which function uses which function, which class or file uses functions from which class or file, etc.). | Functional dependencies diagram | python;diagrams | null |
_unix.387943 | I'm quite new to Linux Kernel Development, and I have an issue trying to build my device drivers so that I can test them and run the strace command on them. However, for some reason, in any directory (within the staging directory, such as greybus or netlogic), when I run the command make, I always get the same error. I'm using this tutorial (header: Compiling only part of the kernel) which details the compiling process.make: *** No targets. Stop.I have no idea why this is showing up. Just, as an example, there is a Makefile in the greybus directory and it does have targets. This is the Makefile:# Greybus coregreybus-y := core.o \ debugfs.o \ hd.o \ manifest.o \ module.o \ interface.o \ bundle.o \ connection.o \ control.o \ svc.o \ svc_watchdog.o \ operation.oobj-$(CONFIG_GREYBUS) += greybus.o# needed for trace eventsccflags-y += -I$(src)# Greybus Host controller driversgb-es2-y := es2.o | Building Device Drivers make Error | linux;linux kernel;compiling;make | null |
_cstheory.30726 | I have read in several papers it is well known that deterministically extracting even one bit from a weak source is impossible. Could someone explain why? | Deterministic Randomness Extractors | randomness | Intuitively, the situation is you'd like some deterministic extractor $E: \{0,1\}^n \rightarrow \{0,1\}$ that can take in $n$ bits sampled from a weak source and output one bit with probability close to $1/2$, say it outputs 0 with probability $1/2 \pm \epsilon$ and 1 with $1/2\pm\epsilon$.Here's a weak argument that at the very least, such extractors $E$ can't exist if we don't put any restrictions on the input distribution other than it has 'enough' min-entropy. Suppose $E$ is such a potential extractor. By flipping the output if necessary, we may assume without loss of generality that $|E^{-1}(0)|\ge|E^{-1}(1)|$; that is, $E^{-1}(0)$ is a set of $n$-bit strings of size at least $2^{n}/2$. Thus a random variable that samples uniformly from $E^{-1}(0)$ will have min-entropy at least $n - 1$, but the extractor will never give you any 'random' output other than 0.Of course, if we tighten the restrictions on the input distribution (say, we assume all $n$ bits are IID) then we do have deterministic extractors that work. But as problem 6.6 in Salil Vadhan's survey of pseudorandomness shows, even weakening the IID assumption a little bit will cause deterministic extractors to fail, by a slight generalization of the same argument as I made above. |
_cs.3138 | I'm trying to implement the Pastry Distributed Hash Table, but some things are escaping my understanding. I was hoping someone could clarify.Disclaimer: I'm not a computer science student. I've taken precisely two computer science courses in my life, and neither dealt with anything remotely complex. I've worked with software for years, so I feel I'm up to the implementation task, if I could just wrap my head around the ideas. So I may just be missing something obvious.I've read the paper that the authors published [1], and I've made some good progress, but I keep getting hung up on this one particular point in how the routing table works:The paper claims thatA nodes routing table, $R$, is organized into $\lceil \log_{2^b} N\rceil$ rows with $2^b - 1$ entries each. The $2^b - 1$ entries at row $n$ of the routing table each refer to a node whose nodeId shares the present nodes nodeId in the rst n digits, but whose $n + 1$th digit has one of the $2^b - 1$ possible values other than the $n + 1$th digit in the present nodes id.The $b$ stands for an application-specific variable, usually $4$. Let's use $b=4$, for simplicity's sake. So the above isA nodes routing table, $R$, is organized into $\lceil \log_{16} N\rceil$ rows with $15$ entries each. The $15$ entries at row $n$ of the routing table each refer to a node whose nodeId shares the present nodes nodeId in the rst n digits, but whose $n + 1$th digit has one of the $2^b - 1$ possible values other than the $n + 1$th digit in the present nodes id.I understand that much. Further, $N$ is the number of servers in the cluster. I get that, too.My question is, if the row an entry is placed into depends on the shared length of the key, why the seemingly random limit on the number of rows? Each nodeId has 32 digits, when $b=4$ (128 bit nodeIds divided into digits of b bits). So what happens when $N$ gets high enough that $\lceil\log_{16} N\rceil > 32$? I realise it would take 340,282,366,920,938,463,463,374,607,431,768,211,457 (if my math is right) servers to hit this scenario, but it just seems like an odd inclusion, and the correlation is never explained.Furthermore, what happens if you have a small number of servers? If I have fewer than 16 servers, I only have one row in the table. Further, under no circumstances would every entry in the row have a corresponding server. Should entries be left empty? I realise that I'd be able to find the server in the leaf set no matter what, given that few servers, but the same quandary is raised for the second row--what if I don't have a server that has a nodeId such that I can fill every possible permutation of the nth digit? Finally, if I have, say, four servers, and I have two nodes that share, say, 20 of their 32 digits, by some random fluke... 
should I populate 20 rows of the table for that node, even though that is far more rows than I could even come close to filling?Here's what I've come up with, trying to reason my way through this:Entries are to be set to a null value if there is not a node that matches that prefix precisely.Empty rows are to be added until enough rows exist to match the shared length of the nodeIds.If, and only if, there is no matching entry for a desired message ID, fall back on a search of the routing table for a nodeId whose shared length is greater than or equal to the current nodeId's and whose entry is mathematically closer than the current nodeId's to the desired ID.If no suitable node can be found in #3, assume this is the destination and deliver the message.Do all four of these assumptions hold up? Is there somewhere else I should be looking for information on this?Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems by A. Rowstrong and P. Druschel (2001) -- download here | How Does Populating Pastry's Routing Table Work? | algorithms;data structures;distributed systems;hash tables | The idea of a routing table in Pastry (and all structured P2P networks) is to minimize its size, while guaranteeing a quicker routing. The routing algorithm of Pastry goes as follows: Step A. A node u searches for an object A by firstly looking it up in its leaf set. Step B. If it was not available, then the query is forwarded to a known node that shares a number of prefixes with $A$ that is at least larger than what node u shares with A.Step C. If such a record is not found, then the query is forwarded to a node in the leaf set that is numerically closest to $A$. This is why a node $u$ stores addresses of nodes organizes its table as follows: 1.Each record in row $i$ of the routing table of node $u$ is a node identifier that shares $i$ prefix bits with the identifier of $u$. 2.The $(i + 1)^{th}$ bit of the records of a row $i$ is unique and is taken from the set $\{0, , 2^{b} 1\}$. Example in a typical scenario: if u address is 1111 and object $A$ has identifier 4324: then here is what will happen: (we assume that it is of the base of 4. (i.e. addresses are from [1-4][1-4][1-4][1-4]). Node $u$ shares 0 prefix with object $A$. Therefore, it looks in row 0. According to rule 2 above, node $u$ stores addresses of nodes 1XXX, 2XXX, 3XXX, 4XXX, where X is a dont-care value. The closest among these nodes to $A$ is 4XXX. - Let's say this 4XXX is actually 4013. Then $u$ forwards to $u _1$ with address 4013. Now you are going to repeat the same thing again at node $u _1$ with address 4013. To make it simpler, here is again an example of how it will go in 4013. $u _1$ will first look for the size common prefix between 4013 and 4324 which is 1. So it goes to row 1, which contain values such that 41XX, 42XX, 43XX, 44XX. The closes among them to $A$ is 43XX. - if this was 4331 then it will be forwards toward it. The maximum number of hops here in is 4 hops (XXXX) ! in Pastry terms, it is $log_{2^b}$. So it is reduced as $b$ increases. But the size of the rows which are $2^b$ will increases ! -- so the authors said that $b$ = 4 is a good balance ! Practical scenarios are usually not typical as that. There may be situations in which there are not many nodes in the network. this is why we follow step C above. - However, what you need to guarantee to make this algorithm correct is that each node be connected to the closest two nodes to it (in term of identifiers). 
This will form a ring of ordered nodes [e.g. 1->3->4->9->10->11->1] |
_softwareengineering.251248 | I am creating a solution where I essentially put all rules regarding communication with customers (including automatic invoicing, reminder emails, welcome emails, etc.) into a Google Sheet and use Ultradox to create emails and PDFs based upon Google Docs templates. For the three automatic emails I have currently implemented, this is working out really well; the whole thing is very transparent to our organization since even non-technical people can inspect and correct the Excel-formulas. My concern is that in 2-3 years we will probably have 200 unique emails and actions that we need to send out for the various occasions and given the various states that customers can be in. Of course I could aim at limiting the number of emails and states that our customers can be in, but this should be a choice based upon business realities and not be limited by the choice of technology. My question is therefore: what are the limits of complexity (when will it become unmaintainable) that can be reasonably implemented in a solution based upon Google Apps Scripts and Google Sheets, given that I will attempt to expose as many of the rules as possible to Google Sheets? And what pitfalls should I be aware of when basing myself on spreadsheet formulas, and what strategies should I follow to avoid the pitfalls? Some of my own strategies: So far I have come up with the following strategies to increase maintainability: (1) Using several Google Sheets, each with its own purpose, each with its own dedicated export and import sheets so it is clear which columns are dependent on the Google Sheet. Such sheets also help maintain referential integrity when inserting columns and rows. (2) Using multi-line formulas with indentation for formula readability. (3) Experimenting with the validation function to reduce the variability of data. (4) Experimenting with Arrayformulas to ensure that formulas will work even if additional rows are added. (5) Potentially offloading very complex formulas to Google Scripts and calling them from spreadsheet formulas. (6) Using Named Ranges to ensure referential integrity. Please notice that I am not asking about performance in this question, only maintainability. Also, I am unsure of how software complexity can be measured, so I am unsure of how to ask this question in a more specific way. | Complexity limits of solutions created in Google Spreadsheets | complexity;google app engine;business rules;spreadsheet | null |
_codereview.17643 | I'm trying to write a Python module to handle matrices. (I know about numpy, this is just for fun) So far I have written a few classes, Matrix, Dim, and Vec. Matrix and Vec are both subclasses of Dim. When creating a matrix, one would first start out with a list of lists and they would create a matrix like:startingList = [[1,2,3],[4,5,6],[7,8,9]]myMatrix = matrix.Matrix(startingList)This should create a Matrix. The created Matrix should contain multiple Dims all of the same length. Each of these Dims should contain multiple Dims all of the same length, etc. The last Dim, the one that contains numbers, should contain only numbers and should be a Vec instead of a Dim. So far this works as I want it to. Here is what I have:from numbers import Numbertest2DMat = [[1,2,3],[4,5,6],[7,8,9]]test3DMat = [[[1,2,3],[4,5,6],[7,8,9]],[[2,3,4],[5,6,7],[8,9,0]],[[9,8,7],[6,5,4],[3,2,1]]]class Dim(list): def __new__(cls,inDim): # Make sure inDim is iterable iter(inDim) # If every item in inDim is a number create a Vec if all(isinstance(item,Number) for item in inDim): #return Vec(inDim) return Vec.__new__(cls,inDim) # Make sure every item in inDim is iterable try: for item in inDim: iter(item) except TypeError: raise TypeError('All items in a Dim must be iterable') # Make sure every item in inDim has the same length # or that there are zero items in the list if len(set(len(item) for item in inDim)) > 1: raise ValueError('All lists in a Dim must be the same length') # Actually create the Dim because it passed all the tests return list.__new__(cls,inDim) def __init__(self,inDim): inDim = map(Dim,inDim) list.__init__(self,inDim)class Vec(Dim): def __new__(cls,inDim): if cls.__name__ not in [Vec.__name__,Dim.__name__]: newMat = list.__new__(Vec,inDim) newMat.__init__(inDim) return newMat return list.__new__(Vec,inDim) def __init__(self,inDim): list.__init__(self,inDim)class Matrix(Dim): def __new__(cls,inMat): return Dim.__new__(cls,inMat) def __init__(self,inMat): super(Matrix,self).__init__(inMat)This works, but as I have little experience overriding __new__() I'm fairly certain that this could be better. How can I improve this? Answers need not be specific to use of __new__ and __init__, but that is what I care most about. | Python __new__ and __init__ usage | python | First, the thing that was most obvious in your code was that you never used a space after a comma.Second, you have the Matrix class where you override __new__ and __init__, however it would be work just fine if you'd use class Matrix(Dim): pass.That's where I started to wonder why you define a Dim class and than inherit from it for Matrix and Vec, instead of code Matrix and subclass Vec from it. Any reason?I would move the tests from Dim.__new__, except for the numbers test, to __init__. The iter test is not needed, you'll get an error from the list constructor or the for loop anyway.One issue I see is that Matrix('abc') will result in endless recursion, because strings are iterable. Those are the places where I regret the absence of chr type in python :)In Vec.__new__, I would let others customize the way they inherit from, why do you check if the class is a Vec or a Dim?Last tip, when inheriting types like list or str, you have to override all the operator overloading methods if you plan on doing things like Vec((0, 1, 2)) * Vec((2, 3, 4))Also:[Vec.__name__,Dim.__name__] use tuples here, memory managementtest2DMat = [[1,2,3],[4,5,6],[7,8,9]] I would break lines here |
_softwareengineering.136197 | I am using ColdFusion 8 and jQuery 1.7.** This is a programming question, because the solution I am questioning requires programming. It may not be the right solution to the problem, but if it is, then I need to figure out how to best program the concept. **When a user comes to our site, we track their session by writing various CGI variables to a database using a CFC and stored procures. First we filter out non human traffic by keywords in the user agent such as bot. Unfortunately a lot of bots and spammers mask their user agents. Later, we try to exclude from our visitor reports the bad bots and a few other known entities that are scraping pages and such. But this is a manual process.We are considering using an additional/alternate method of tracking usage. Once the user's page loads, we will use JavaScript to send the CGI variables from the client back to our server and store them. Specifically, we'll write the server variables to JavaScript on each page and then have JavaScript send them right back to us. If a bot or user doesn't fully view the page or have JavaScript enabled, the usage won't be counted is a real user.Correct me if I am wrong, but this is the same method that Google Analytics uses to track user behavior.Our goal is to eliminate good and bad bots from being counted as visitors in our reports. Does using JavaScript on a page like this minimize bots being counted? Is there a gaping hole in this plan? | Should we use JavaScript and CGI variables to weed out bots from our visitor reports? | web development;javascript;seo | null |
_codereview.169281 | There are three types of edits that can be performed on strings: insert a character, remove a character, or replace a character.EXAMPLES:pale, ple returns truepales, pale returns truepale, bale returns truepale, bake returns falseMy solutions seems to work for any case I could think of (like (, s) but I feel like I'm checking conditions too much in it (mainly my final if/else statement had to be tacked on when I saw my solution wouldn't work for (pales, pale))bool oneAway(string s1, string s2){ //loop through smaller string size decltype(s1.size()) size = (s1.size() < s2.size()) ? s1.size() : s2.size(); for (decltype(s1.size()) i = 0; i < size; ++i) { if (s1[i] != s2[i]) { string temp1 = s1.substr(i + 1); string temp2 = s2.substr(i + 1); //if rest of the string is equal we do 1 replacement. if (temp1 == temp2) { s1[i] = s2[i]; break; } //otherwise we will try to insert or remove a character else { if (s1.size() < s2.size()) { s1.insert(i, 1, s2[i]); break; } else { s1.erase(i, 1); break; } } } } //check equality if (s1 == s2) return true; //otherwise try to erase last character in s1 and check again (since for loop may not check this character if s2 was smaller string size) else { s1.erase(s1.size() - 1, 1); if (s1 == s2) return true; } return false;}This question is from the book Cracking the Code Interview by Gayle McDowell. | Test whether the edit-distance of two strings is at most 1 | c++;strings;c++11;interview questions;edit distance | It looks like you either imported std::string or the whole standard namespace into the global namespace.It's ok to do the former in implementation-files, though I would desist as it doesn't gain you all that much in brevity.If it's the latter, read Why is using namespace std; considered bad practice? and change it.Avoid allocating memory. Doing so is slow and can fail. That means accepting the arguments by constant reference, not using temporary copies, and not modifying the arguments.If you had C++17, it would mean changing to std::string_view for the added flexibility.Prefer using auto to a more complicated expression using decltype. It's less error-prone, more readable, and also shorter.<algorithm> contains std::min(). Using that is more readable, shorter and no less efficient than writing it out using the conditional-operator.Did you test (abc, b)? That's two deletions edit-distance, but will be accepted anyway.Keep your line-length reasonable. Horizontal scrolling kills readability.Using all that, but staying true to C++11:#include <string>#include <algorithm>bool oneAway(const std::string& a, const std::string& b) noexcept { if (a.size() > b.size()) return oneAway(b, a); if (b.size() - a.size() > 1) // No need to look further, pure optimization return false; auto begin = std::mismatch(a.begin(), a.end(), b.begin()); using reverse_it = decltype(a.rbegin()); // std::make_reverse_iterator is C++14 auto end = std::mismatch(a.rbegin(), reverse_it(begin.first), b.rbegin()); return end.second.base() - begin.second < 2;}And for testing:#include one_away.h#include <iostream>#include <cstdlib>static bool do_test(const std::string& a, const std::string& b, bool expected) { bool r = oneAway(a, b); std::cout << (r == expected ? 
"[OK] " : "[FAIL] ") << std::boolalpha << r << " \"" << a << "\" \"" << b << "\"\n"; return r == expected;}static bool test(const std::string& a, const std::string& b = a, bool expected = true) { bool r = do_test(a, b, expected); if (a != b) r &= do_test(b, a, expected); return r;}int main() { bool r = test(); r &= test("abc"); r &= test("pale", "ple"); r &= test("pales", "pale"); r &= test("pale", "bale"); r &= test("pale", "bake", false); r &= test("abc", "b", false); std::cout << (r ? "[OK]\n" : "[FAIL]\n"); return r ? 0 : EXIT_FAILURE;} |
_webmaster.105073 | I own a Minecraft server with a non-standard port and I want users to connect without specifying the port. So I tried setting up an A and an SRV record for my domain. A record: join.domainName.xyz MineCraftServerIP. SRV record: _minecraft._tcp.join.domainName.xyz Priority: 0 Weight: 5 minecraftServerPort join.domainName.xyz. If I run nslookup -q=SRV _minecraft._tcp.join.domainName.xyz I get: priority = 0, weight = 5, port = minecraftServerPort, svr hostname = join. But in Minecraft I get "Can't resolve Hostname"; if I add the minecraftServerPort behind join.domainName.xyz, it is working. I thought an SRV record can be used to hide the port? | Can SRV DNS record be used to allow Minecraft to connect to a non-default port without specifying the port number? | dns;port number;srv records | After switching to Cloudflare and making the same settings there, everything works fine now. |
_webmaster.99799 | Google Analytics seems to miss most of the clicks from Yandex. Is there anything to make them show up in Google Analytics? | Why does Google Analytics miss Yandex clicks? | google analytics;yandex | null |
_softwareengineering.194216 | I've seen many developers asking "How to intercept in/out HTTP packets" and "How to modify them on the fly". The cleanest answer I've seen is to make a kernel-mode driver filter from scratch (TDI for XP and earlier Win9x, or NDIS for NT systems). Another way is to use a user-mode driver like WinDivert; Komodia also has a great solution (without writing a single line of code). The idea behind this introduction is just that I want to know: can API hooking be considered an alternative to writing a whole driver filter? Writing a driver from scratch is not an easy task, so why not just hook HttpSendRequest or any other API used by the browser? There are many free/commercial libraries to do this in a safe manner (e.g. EasyHook, Mhook, Nektra...). I'm not the first to ask; there is already SocksCap, which uses hooking (DLL injection) to change the behavior of other applications and force them to use a SOCKS proxy, and also the 'form grabbing' attack used by keyloggers. | NDIS Driver Filter VS API Hooking | http request | null |
_cs.23290 | I am interested to know whether that language $$L = \{ a^pb^q \mid p, q \text{ are prime} \}$$ is regular. How do you prove that it is not regular? | Is the language $\{ a^pb^q \mid p, q \text{ are prime} \}$ regular? | formal languages;regular languages | This language is not regular, the easiest way to see this is to use the Pumping Lemma, see http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages Alternatively, you could also use the Myhill-Nerode theorem, see http://en.wikipedia.org/wiki/Myhill%E2%80%93Nerode_theoremTo give you some more details, assume (towards a contradiction) that $$L = \{ a^pb^q \mid p,q \text{ are prime} \}$$was regular. By the Pumping Lemma, there is an integer $l \geq 1$ such that every word $w \in L$ of length at least $l$ can be written as $w=xyz$ with$y$ is not the empty string,$xy$ has at most length $l$,$xy^iz \in L$ for every $i \geq 0$.Now we can pick $w$ as $a^2b^q \in L$ for some prime $q \geq l$. This word meets the conditions of the Pumping Lemma. Without loss of generality we can assume that $xy=a^2b^k$ for some $k \geq 1$ (the case for $xy=a$ or $xy=aa$ is even simpler). Now, either $y=a^jb^k$ for some $j \geq 1$ or $y=b^k$. But in both cases, we immediately see that we can choose $i \geq 0$ such that $xy^iz \not\in L$, which contradicts our initial assumption that $L$ is regular, hence $L$ is not regular. |
_softwareengineering.206877 | How should I handle deploying web applications to multiple servers?Constraints I have a dev, test and prod environment. No build server is available. Developers can't deploy to prod. The people that do deploy to prod copy files from test to prod. They don't have VS installed.Currently The way it's handled is using web.config transform. However, to deploy to prod involves putting prod code on the test server where it's copied over.ProblemSometimes simple mistakes are made, such as forgetting to change test back to the right environment after deployment. Or the test config gets moved to prod instead of the prod config.SolutionSo the question is, what is the best way to prevent mistakes from happening? My first thought is let the app determine which server it's on at runtime and use the appropriate settings/connection strings/etc... However, the server names could change in the not too distant future. So if multiple apps are hard coded, that would mean updating all of them. The easiest way to handle that situation would be to place a DLL in the GAC that determines the environment.Are there any drawbacks or possible complications that this would cause? Or is there a better solution to the problem than this? | Handling Deployment to Multiple Environments | asp.net;deployment | Normally the higher the deployment process is automated, the lower risk there will be. To achieve better automation, i think command line procedures (with different modes, for example, interactive v.s. quiet mode) could be a good option:I believe you won't have many environments (DEV/QA/UAT/PROD/etc.) and they are already fixed. So a fixed xslt transformation could be defined and included in your project, for example, call it web.config.xslt, in which, a list of configuration sections per environment could be defined (or a switch-case conditional configuration)use MSBuild to build the solution, and output the built files to a newwork location (preferably accessible to all developers), and at the end, transform web.config based on web.config.xslt and produce a couple of config files: web.Dev.config, web.QA.config, web.UAT.config and web.Prod.configgive an option at the command line, say which environment the user wants to deploy, then batch copy all the files to the desired server (of course, you have to allow these folders shared and accessible to the deployer) and at the end copy the corresponding web.[ENV].config file as web.config to destination folderother bit and pieces (such as versioning, DLL signing and etc.) could also be included and automated through out the MSBuild process. all these steps could be stored in a predefined solution level file, let's say it mySolution.BuildRelease.proj (all up to you) and set MSbuild (or your self-defined bat file, which calls MSBuild and your own console app) as the default app to open it.of course, if some of the processes could not be done using command line, you might want to try write a console application and call it during MSBuild. if you are fancy with interactive approaches, you may also define a html app (hta) to wrap up the process |
_unix.52423 | I installed Linux Mint first on my Acer Aspire 4930 and then dual booted with Windows 7. I see the correct time on Linux Mint but on booting into Windows the time is shifted back by a few hours, even after re-setting the time on rebooting it shows the wrong time again. Why is this happening? | Different time in Windows and Linux Mint | linux mint;windows;clock | null |
_codereview.148645 | Could someone tell me their opinion on my leaderboard system?library #!/usr/bin/env python3def write_score(score, name, scores, filename, splitter=','): writes a score with a name to a file, in a specified format score_tuple = (score,name) scores.append(score_tuple) with open(filename,'w') as f: for s in scores: f.write(str(s[0]) + splitter + s[1] + '\n') f.close()def read_scores(filename, splitter=','): reads scores and names from a file, and returns a list of each with open(filename) as f: raw_scores = f.read().strip().split('\n') f.close() scores = [] names = [] for score in raw_scores: score_split = score.split(splitter) scores.append(int(score_split[0])) names.append(score_split[1]) return scores, namesdef sort_scores(scores, names,reverse_bool=True): sorts the scores from greatest to least and returns in a list of tuples format zipped = sorted(list(zip(scores,names)), reverse=reverse_bool) return zippeddef print_scores(score_list, seperator=' ', top_amount=5): prints the number of leaderboard scores stated for score_tuple in score_list[:top_amount]: print(str(score_tuple[0]) + seperator + score_tuple[1])def has_better_score(score, scores, leaderboard_len=5): returns if the score should be written to a file if (len(scores) > leaderboard_len and score >= scores[leaderboard_len - 1][0]) or len(scores) <= leaderboard_len: return True return Falseleaderboard.txt123 | jimmy16 | bill12 | Pete10 | Jim210 | Jim6 | henry is cool5 | Bob3 | Jane223 | billySmall Programimport leaderlib as llif __name__ == '__main__': try: while True: scores, names = ll.read_scores('leaderboard.txt',' | ') sorted_scores = ll.sort_scores(scores, names) ll.print_scores(sorted_scores, ' = ', 5) new_name = input('Name > ') new_score = int(input('Score > ')) if ll.has_better_score(new_score, sorted_scores, 5): ll.write_score(new_score, new_name, sorted_scores, 'leaderboard.txt', ' | ') else: print('not on the leaderboard...') print('\n\n\n') except KeyboardInterrupt: exit() | Leaderboard in Python3 | python;python 3.x | The customary name for what you called splitter would be delimiter or sep (the letter is used e.g. in split, the former for example in numpy). Regardless of what you choose, you should be consistent, right now you use both splitter and then later separator.When using the with..as construct (as you should), you don't need to manually close the file. This is one of the reasons of why you should use it in the first place.In your function write_score, you should use str.join.In read_scores you can directly iterate over the lines of the file, which is a lot more memory-efficient. You can also use tuple assignment to make it clearer what is what.In sort_scores, you can use sorted directly on zip, there is no need to cast it to a list first. 
You can also return the result right away.In has_better_score you can just return the result of the comparisons.In print_scores you can use str.join again.#!/usr/bin/env python3def write_score(score, name, scores, filename, splitter=','): writes a score with a name to a file, in a specified format scores.append((score, name)) with open(filename,'w') as f: for s in scores: f.write(splitter.join(map(str, s)) + '\n')def read_scores(filename, splitter=','): reads scores and names from a file, and returns a list of each scores = [] names = [] with open(filename) as f: for score in f: score, name = score.strip().split(splitter) scores.append(int(score)) names.append(name) return scores, namesdef sort_scores(scores, names, reverse_bool=True): sorts the scores from greatest to least and returns in a list of tuples format return sorted(zip(scores,names), reverse=reverse_bool)def print_scores(score_list, splitter=' ', top_amount=5): prints the number of leaderboard scores stated for score_tuple in score_list[:top_amount]: print(splitter.join(map(str, score_tuple)))def has_better_score(score, scores, leaderboard_len=5): returns if the score should be written to a file return (len(scores) > leaderboard_len and score >= scores[leaderboard_len - 1][0]) or len(scores) <= leaderboard_len:In your small program, catching KeyboardException and then just exiting is not really different from letting the exception rise all the way to the top. Also, exit() should only be used in the interactive session, use sys.exit() in a script instead (because it allows passing of a return value). |
_webapps.75096 | I understand that the default board on Trello has a format using To Do, Doing, Done, but only one of my boards has that template. All the other boards are in three stacks but without the to do, doing, done titles. Why is that? | In Trello, why do some of my boards have a _To Do_, _Doing_, Done_ format and others do not? | trello;trello boards | null |
_unix.68974 | For the past few weeks, when opening the lid my laptop often fails to recover from suspension leaving me with a blank screen. Further, when this bug occurs, the laptop fans continue to run while the lid is closed, causing my laptop to quickly overheat in my bag. I have read various different questions on here that had similar symptoms but none of the solutions worked in my case. Disabling suspension on gnome seemed to reduce the frequency of this issue, but it still occurs with some frequency. It only seems to occur if I do not lock the computer before closing the lid or I am in a different desktop manager (such as dwm). I currently have the ATI graphics drivers enabled (not sure on the specific driver, but it is whatever driver that Jockey installed). | Opening lid results in black screen (Dell Laptop with ATI Graphics, Ubuntu 12.04, 3.2 kernel, GNOME 3) | linux;ubuntu;power management | null |
_unix.19807 | I'm currently trying to rename a large set of files and have been using quite kludgy methods to do so, such as:rename 's:(.*)\.MOV:$1.mov:g' *.MOVrename 's:(.*)\.JPG:$1.jpg:g' *.JPGWhat I'd really like to do is to be able to combine all of these commands using the 'y' sed operator. Evidently, using this operator, you can transform items to lower case. The problem is that I need to convert only the extensions. Is there a way to do this using this command? I'm kind of new to these kinds of transformations. I need to essentially transform the capture group in the following expression to lowercase: ^.+\.(.+)$. Is there a way to do this? | Renaming files to have lower case extensions with 'rename' | rename;regular expression;perl | That's the Perl-based rename found on Debian, Ubuntu and derivatives, judging by the syntax. You can't use the tr operator because it acts on the whole string. But you can match the extension, and lowercase it with \L.rename 's/\.[^.]*$/\L$&/' *.JPG *.MOVHere it's unnecessary, but if the regexp matched more than the part that you want to lowercase, you could put the part to be matched in a group:rename 's/\.([^.]*)$/.\L$1/' *.JPG *.MOVReplace *.JPG *.MOV by *.* to act on all files regardless of extension. In bash 4.3 (and also in bash 4.0–4.2, with the caveat that this also traverses symbolic links to directories), you can easily act on files in subdirectories and so on recursively (with the globstar option enabled via shopt -s globstar):rename 's/\.[^.]*$/\L$&/' **/*.*For the zsh fans (the :r and :e modifiers isolate the extension from the rest of the file):autoload zmvzmv '*.(MOV|JPG)' '${f:r}.${(L)f:e}' # these extensions, current directoryzmv '*.*' '${f:r}.${(L)f:e}' # all extensions, current directoryzmv '**/*.*' '${f:r}.${(L)f:e}' # all extensions, recursive directory traversal
_webapps.19839 | In Google Calendar, is there any way to add a task, rather than an event, via the quick add feature?Quick add is accessed by pressing q or clicking the down arrow on the Create button in the left column. | Is it possible to add a task using the Quick Add feature in Google Calendar? | google calendar;tasks;shortcut | You can't quick add a task on the calendar. Only an event or appointment. |
_webmaster.103345 | I just published a completely revised version of a website (static site to a Wordpress powered one) without changing the domain or the web server. In the old site the URL structure was example.com/product_category1.phpexample.com/product1.phpand in the current one it is:example.com/product_category1/example.com/product_category1/product1/so now it's more logical and without .php extensions. Now, if I do a Google search e.g. for product category 1, example.com/product_category1.php ranks higher than example.com/product_category1. I solved this by applying a 301 redirect from the GUI offered by my web hosting company, so now example.com/product_category1.php redirects to example.com/product_category1/. I noticed that I have to keep product_category1.php file in the public_html folder in order for the redirection to happen, otherwise the only redirection that happens is to the 404 page.I'd like the example.com/product_category1.php and other .php extended URLs to vanish from the search results. Will they do so if they always get redirected, even though they still exist in the public_html folder? Is there a way to do the redirection without keeping the files? Or should this be dealt in a completely different manner in order to have maximum SEO and visibility for correct URLs in the new website? | Implementing 301 redirects won't work without an existing file. Will that matter to Google or is there a better way to implement redirects? | seo;google search;url;301 redirect;indexing | null |
_cstheory.3309 | A DFA has a synchronizing word if there is a string that sends any state of the DFA to a single state. In The Cerny Conjecture for Aperiodic Automata by A. N. Trahtman (Discrete Mathematics and Theoretical Computer Science vol. 9:2, 2007, pp.3-10), he wrote,Cerny conjectured in 1964 that every n-state synchronizable DFA possesses a synchronizing word of length at most $(n-1)^2$.He also wrote, in the case when the underlying graph of the aperiodic DFA is strongly connected, this upper bound has been recently improved by Volkov who has reduced the estimation to $n(n + 1)/6$.Does anybody know the current status of Cerny conjecture?And in which paper Volkov obtained the result n(n+1)/6 ?Thanks for any pointer or link. | Status of Cerny Conjecture? | fl.formal languages;automata theory;open problem;synchronization | Trakhtman has a bibliography on the problem, which is apparently kept up to date; so I suppose Černý's question remains unresolved until today. The same is stated in Volkov's recent survey (LATA 2008) linked from the wikipedia article cited in the question. There you find pointers to some partial results, for example, for which subclasses of regular languages the conjecture is known to be true. Even more recent is a research paper by Ananichev, Gusev & Volkov (MFCS 2010) on a related topic, where they confirm that Černý's conjecture is still open now (at least as of May 2010).
_softwareengineering.336119 | I've been programming for a few years, and have become very familiar with C# and JavaScript over time. I have some larger C# and JavaScript projects that I have no trouble navigating around. I recently started a PHP & AngularJS project for work with no prior experience with PHP.The flow of the PHP side of things is becoming hard to keep track of (The JavaScript side is larger, but easy to work through); when I try and think through it I imagine a tangled ball of thread. Major design mistakes that I made when I started are beginning to pile up and affect my design going forward. It takes longer and longer to implement anything new.I'm on a tight deadline and finding it harder and harder to write good, DRY, SOLID, code. It's becoming more enticing to copy/paste chunks of code to make slight variations to its behavior as design time goes up. It's also taking a long time to get back into the code base whenever I have to do a context switch (from one project then back to this one); I have a feeling of dread whenever I go back to work on this project.What steps can I take to remedy this? The extra time it might take needs to be justifiable as well; my boss is not a developer and is not familiar with development or software life cycles, so explaining might be more difficult than normal. | I'm losing track of the flow of my PHP web app, it's becoming hard to work with | php;javascript;web applications;code smell;technical debt | You are taking on technical debt. The more you justify sloppy code with deadlines, the more deadlines will see you achieving less and less.Understand that you can completely get away with this. No one's going to catch you making a mess and bawl you out. You're just going to wake up one day surrounded with clutter. At that point you'll either update your resume and make it my problem, or you'll decide to pay down the debt and spend some time cleaning the code.If you go the cleaning route, understand this isn't about spending more time on design. This is about breaking some lazy habits and taking out the trash. Throwing out dirty code wholesale is a bad idea. Not because of the work that went into it, but because working code captures an idea. Move the idea into clean code before you trash the dirty code.Having unit tests helps with this, but if you created your tests with the same care you put into the mess, they likely need fixing as well.Don't give in to rigidity. If you can't change it then it's not software.
_unix.236542 | '4800483343' is a directory, and 'file1' & 'file2' are two files in it.Why is the following happening?$ ls 4800483343file1 file2$ md5sum 4800483343/*36468e77d55ee160477dc9772a99be4b 4800483343/file129b098f7d374d080eb006140fb01bbfe 4800483343/file2$ mv 4800483343 4800[48]3343$ md5sum 4800[48]3343/*md5sum: 4800[48]3343/*: No such file or directory$ md5sum '4800[48]3343'/*36468e77d55ee160477dc9772a99be4b 4800[48]3343/file129b098f7d374d080eb006140fb01bbfe 4800[48]3343/file2What other characters cause this? | Why are square brackets preventing shell expansion? | shell;quoting;filenames;wildcards | Answer for original questionWhy are square brackets preventing shell expansionSquare brackets do not prevent shell expansion but quotes do.I suspect that the commands that you actually ran were as followsThis runs md5sum on the files in dir/:$ md5sum d[i]r/*02fdd7309cef4d392383569bffabf24c dir/file1db69ce7c59b11f752c33d70813ab5df6 dir/file2This moves dir to d[i]r with the quotes preventing the expansion of the square brackets:$ mv dir 'd[i]r'This looks for directory dir which no longer exists:$ md5sum d[i]r/*d[i]r/*: No such file or directoryBecause of the quotes, the following looks in the new directory named d[i]r:$ md5sum 'd[i]r'/*02fdd7309cef4d392383569bffabf24c d[i]r/file1db69ce7c59b11f752c33d70813ab5df6 d[i]r/file2Answer for revised questionIn the revised question, the directory 4800483343 exists and the following command run:mv 4800483343 4800[48]3343What happens when this command is run depends on whether the glob 4800[48]3343 matches any existing directory. If no directory matches that, then 4800[48]3343 expands to itself 4800[48]3343 and the directory 4800483343 is moved to the directory 4800[48]3343.Consequently:The command md5sum 4800[48]3343/* will return the error No such file or directory because no directory exists which matches the glob 4800[48]3343.The command md5sum '4800[48]3343'/* will correctly find the files because the quotes prevent expansion of the glob.Examples of globsLet's create two files:$ touch a1b a2bNow, observe these globs:$ echo a[123]ba1b a2b$ echo a?ba1b a2b$ echo *ba1b a2b |
_cs.24256 | I was wondering how can someone prove that one class of languages is of a certain complexity? For example, how could I show the Turing-recognizable languages are in P?Would I have to come up with an algorithm that runs in deterministic polynomial time? | How does one figure out where a class of languages falls under some complexity class? | complexity theory;formal languages;proof techniques | Turing recognizable languages are not (generally) in P, as the time hierarchy theorem implies. To show that a class $L_1$ is contained in a class $L_2$, you need to show that every language in $L_1$ belongs to $L_2$. For example, to show that $L$ (logspace) is contained in $P$ (polytime), you argue as follows: every language in $L$ is decided by some machine running in logarithmic space; this machine must run in polynomial time; so the language is in $P$. In this example, the same machine was used, but this is not necessary. For example, Savitch's theorem shows that $\mathrm{NSPACE}(s(n)) \subseteq \mathrm{DSPACE}(s(n)^2)$, and here the same machine cannot be used (since a non-deterministic Turing machine is not necessarily deterministic). |
_unix.17450 | I am learning awk from this tutorial. It's quite basic. I have a list of processes in a file which I got by doing ps aux > processes Now according to the tutorial, doing awk '$2 ~ 14022, $2 ~ 14040' should give a range of processes with PID ranging from 14022 to 14040. I tried the same with PID range 1746 - 1760. But it outputs the processes which have PID above 1760. Output$ awk '$2 ~ 1746, $2 ~ 1760 {print $1, $2, $11}' processes root 1746 sudoroot 1750 wvdialroot 1751 /usr/sbin/pppddharmit 1772 /opt/google/chrome/chromedharmit 1788 /opt/google/chrome/chromedharmit 1790 /opt/google/chrome/chromeroot 1791 /sbin/udevddharmit 1827 /opt/google/chrome/chromedharmit 1830 /opt/google/chrome/chromedharmit 1846 /opt/google/chrome/chromedharmit 1850 gnome-terminaldharmit 1856 gnome-pty-helperdharmit 1857 bashroot 1902 [kworker/0:4]dharmit 1940 /opt/google/chrome/chromeroot 1952 [kworker/1:0]root 2104 /usr/sbin/anacronroot 2181 /usr/libexec/packagekitddharmit 2183 ps Why does it happen so? What am I missing here? | Awk : Range of PIDs | awk | You are specifying a range match where the end of the range does not match any of the input lines - i.e. there is no process with pid 1760.awk is not being smart here: it does not know that the field is numeric, and it is not comparing the PIDs against a numeric range, as you seem to be expecting. Instead it is simply matching a string for the start and end of the range, and with no match on the end of the range, the range effectively extends to the end of the file.In your example, if you end the range at 1751 you will find you get what you want.Alternatively, compare the field numerically:awk '$2 >= 14022 && $2 <= 14040 { print }'That will work even if your input is not sorted.
_webmaster.83696 | I need to hire a dev to do some work on my site, however as it is a very large one and valueable at the same time, i don't want to risk a dev stealing the work and mechanisms behind it.I will need to grant him cPanel access however would prefer if they're not able to download any files. What is the best way to prevent someone stealing my site, whilst giving cPanel access, or possibly the best way to limit the access for them? | Granting a developer cPanel access but disabling downloads | apache;web development;cpanel;ftp | null |
_unix.199453 | I'm having an issue with a script that seems really weird; it appears to stop executing prematurely when run from udev, but not when I run it manually from the command line. I've tried troubleshooting it with the set -x and when I run it from the command line everything gets executed as expected. However, when it gets run from udev, it stops prematurely after a certain point.Part of the issue, I think, is that it's hard to debug the script when it's run by udev. I've tried putting in logger statements, but they basically just tell me the same thing (it stops prematurely).Do you see anything that pops out that would be causing this issue?The script(s) can be found here. One note about them, they are intended for an embedded system. When run manually from the command line, the command I run is:./product.sh -b update /dev/sda1The udev rule that runs the script is:ACTION==add, KERNEL==sd?1, RUN+=/usr/sbin/product.sh -b update /dev/%kThe script appears to stop at lines 195 or 197 in product.sh. I've noticed that if I comment out lines 22 and 28 in product-manifest.sh everything runs as expected when run by udev and manually on command line. | Script executes differently if run from udev | bash;shell;shell script;udev | null |
_unix.305491 | I was reading the ELF format specification, which describes all this stuff about ELF headers, program headers, sections, segments, etc. All of it is presented as structs with all kinds of fields and values. So the question is: where does all this go? I mean, can I view them as structs, not just as the output of the readelf utility? Is there any intermediate file where all this ELF magic exists, merged into the source code? Or is it just internal to the compiler, with the structs in the specification mentioned only for humans? It looks like a chicken-and-egg question to me (speaking about compiled code in terms of code). | .elf format internal inspection | compiling;dynamic linking;elf | null
_codereview.105826 | I have written some code to encrypt XML and then store it on the disk. I want to be sure that the encryption code is secure, so here is the code:package com.application;import java.io.UnsupportedEncodingException;import java.lang.reflect.Field;import java.security.Key;import java.security.MessageDigest;import java.security.NoSuchAlgorithmException;import java.security.spec.AlgorithmParameterSpec;import javax.crypto.BadPaddingException;import javax.crypto.Cipher;import javax.crypto.spec.IvParameterSpec;import javax.crypto.spec.SecretKeySpec;public class Aes { public Aes() { } public String encrypt(String data, String key) { try { Cipher cipher = Cipher.getInstance(AES/CBC/PKCS5Padding); String iv = generateRandomIv(); cipher.init(Cipher.ENCRYPT_MODE, makeKey(key), makeIv(iv)); return iv + System.getProperty(line.separator) + new String(cipher.doFinal(data.getBytes(ISO-8859-1)), ISO-8859-1); } catch (Exception e) { throw new RuntimeException(e); } } public String decrypt(String data, String key) throws WrongPasswordException { String decrypted = ; try { Cipher cipher = Cipher.getInstance(AES/CBC/PKCS5Padding); String iv = getIv(data); cipher.init(Cipher.DECRYPT_MODE, makeKey(key), makeIv(iv)); decrypted = new String(cipher.doFinal(removeIvFromString(data).getBytes(ISO-8859-1)), ISO-8859-1); } catch (BadPaddingException e) { throw new WrongPasswordException(); } catch (Exception e) { throw new RuntimeException(e); } return decrypted; } private AlgorithmParameterSpec makeIv(String iv) { try { return new IvParameterSpec(iv.getBytes(UTF-8)); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } return null; } private String generateRandomIv() { return new RandomStringGenerator().randomString(16); } private String getIv(String data) { return data.substring(0, data.indexOf(System.getProperty(line.separator))); } private String removeIvFromString(String data) { return data.substring(data.indexOf(System.getProperty(line.separator)) + 1, data.length()); } private Key makeKey(String encryptionKey) { try { MessageDigest md = MessageDigest.getInstance(SHA-256); byte[] key = md.digest(encryptionKey.getBytes(UTF-8)); return new SecretKeySpec(key, AES); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } return null; }}package com.application;import org.apache.commons.lang3.ArrayUtils;import java.util.ArrayList;import java.util.Random;public class RandomStringGenerator { private char[] vowelLowerCaseLetter = {'a', 'e', 'i', 'o', 'u', 'y'}; private char[] consonantsLowerCaseLetter = {'b','c','d','f','g','h','j','k','l','m','n','p','q','r','s','t','v','w','x','z'}; private char[] numbers = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}; private char[] specialCharacters = {'!', '', '@', '#', '', '', '$', '%', '&', '/', '{', '(', '[', ')', ']', '=', '}', '?', '+', '\\', '', '', '~', '^', '*', '\'', '-', '_', '.', ':', ',', ';', ' ', '', '', '<', '>'}; public String randomString(int length) { char[] upperCaseLetter = convertCharsToUpperCase(ArrayUtils.addAll(vowelLowerCaseLetter, consonantsLowerCaseLetter)); char[] lowerCaseLetter = ArrayUtils.addAll(vowelLowerCaseLetter, consonantsLowerCaseLetter); char[] allowedCharacters = ArrayUtils.addAll(ArrayUtils.addAll(lowerCaseLetter, upperCaseLetter), ArrayUtils.addAll(numbers, specialCharacters)); String randomString = ; for (int i = 0; i < length; i++) { randomString += getRandomCharacter(allowedCharacters); } return randomString; } private char 
getRandomCharacter(char[] allowedCharacters) { Random r = new Random(); return allowedCharacters[r.nextInt(allowedCharacters.length)]; } private char[] convertCharsToUpperCase(char[] lowerCaseLetter) { char[] upperCaseLetters = new char[lowerCaseLetter.length]; for (int i = 0; i < lowerCaseLetter.length; i++) { upperCaseLetters[i] = Character.toUpperCase(lowerCaseLetter[i]); } return upperCaseLetters; }} | Encypt XML file with AES and storing on disk | java;security;xml;cryptography | Binary data != stringpublic String encrypt(String data, String key) { ... new String(cipher.doFinal(data.getBytes(ISO-8859-1)), ISO-8859-1); ...}Here you get a byte[] with the input data encrypted. This is arbitrary binary data. Do not treat binary data as strings.It only works because you are using an encoding with a single byte per character.When you want to store binary data as a string you should use Base64 encoding instead:import java.util.Base64;public String encrypt(String data, String key) { try { Cipher cipher = Cipher.getInstance(AES/CBC/PKCS5Padding); String iv = generateRandomIv(); cipher.init(Cipher.ENCRYPT_MODE, makeKey(key), makeIv(iv)); byte[] cipherBytes = cipher.doFinal(data.getBytes(StandardCharsets.UTF_8)); String base64CipherText = Base64.getEncoder().encodeToString(cipherBytes); return iv + System.getProperty(line.separator) + base64CipherText; } catch (Exception e) { throw new RuntimeException(e); }}public String decrypt(String data, String key) throws WrongPasswordException { String decrypted = ; try { Cipher cipher = Cipher.getInstance(AES/CBC/PKCS5Padding); String iv = getIv(data); cipher.init(Cipher.DECRYPT_MODE, makeKey(key), makeIv(iv)); byte[] cipherBytes = Base64.getDecoder().decode(removeIvFromString(data)); decrypted = new String(cipher.doFinal(cipherBytes), StandardCharsets.UTF_8); } catch (BadPaddingException e) { throw new WrongPasswordException(); } catch (Exception e) { throw new RuntimeException(e); } return decrypted;}Now that the encoding issues are fixed you should consider using UTF-8 (or another portable encoding) for the String.getBytes. |
_cs.11301 | In database query processing, the approximate time for a selection operation using a primary index when the equality is on a key is $2(b_s + b_t)$, where $b_s$ is disk seek time and $b_t$ is disk transfer time (assuming one level of indexing), because one seek and transfer will be needed for finding the index and another one will be for the actual data. But what will happen if the equality is on a non-key value? Since now we cannot search in the index, don't we have to do a linear search? | Approximate time for selection operation using index when equality is on nonkey | runtime analysis;search algorithms;databases | In this case a full table scan will be executed. The cost is $N$ I/O operations (where $N$ is the number of pages/blocks in your table).
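Addendum (a rough worked figure in the question's notation, using the standard textbook cost model rather than anything stated in the answer above): a linear scan performs one initial seek and then transfers every block in sequence, so the estimated time is $b_s + N \cdot b_t$, compared with $2(b_s + b_t)$ for the equality-on-key lookup through the primary index. If a secondary index existed on the non-key attribute, the cost would instead be roughly one index traversal plus one seek and transfer per matching record, since the matching records need not be stored contiguously.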
_webapps.49956 | Which time zone is GitHub working from on their servers?For example, a commit made on Sun Dec 2 05:01:00 2012 +0200 is interpreted by GitHub as a commit made on 1st December 2012 in the contributions calendar/graph.Which is the first hour when a new day starts? | What time zone are the main GitHub servers located in? | github;time zone | GitHub uses a strategy that involves the date-time-offset pattern. When you make a commit, the timestamp includes your offset from UTC.You can see this in the API docs for the Commits. The sample they show there uses a commit timestamp of 2010-04-10T14:10:01-07:00. This is a valid ISO8601 representation of a date-time-offset. For the person performing the commit, it was April 10th 2010 at 14:10:01. The item would show up on his commit calendar for Saturday, April 10th.Git and GitHub do not attempt to normalize this data to the offset of the viewer, but they do take it into account when calculating relative time strings. For example, there's a commit on a project I work on that says it was made 1 hour ago. It's 1:30 my time, but when I hover over that text it looks like it was made at 2:30. How can that be? Because my offset is currently -07:00 and the person who made the commit has an offset of -05:00.So there is no system-wide first hour of the start of a day. Two commits made at the exact same moment in time might appear on two different days even on the same calendar, if they were made by people in different time zones. In other words, a GitHub day is a virtual floating calendar date that aligns to the committer - not necessarily the viewer. |
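Addendum (an illustrative Python sketch, not from the accepted answer; the instant below matches the commit timestamp in the question, everything else is made up):
from datetime import datetime, timezone, timedelta

# One single instant in time, expressed in UTC
instant = datetime(2012, 12, 2, 3, 1, 0, tzinfo=timezone.utc)

# The same instant as recorded by two authors with different UTC offsets
athens = instant.astimezone(timezone(timedelta(hours=2)))    # UTC+02:00
seattle = instant.astimezone(timezone(timedelta(hours=-8)))  # UTC-08:00

print(athens.isoformat())   # 2012-12-02T05:01:00+02:00 -> falls on December 2
print(seattle.isoformat())  # 2012-12-01T19:01:00-08:00 -> falls on December 1

# Same moment, two different calendar dates: this is why a commit can appear on
# a different day of the contributions graph than the viewer expects.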
_scicomp.7064 | Can you help me? I have a fluid simulation in 2D, where I compute the velocity field and pressure by finite differences on the grid. So this is my output from the program. Now I have to show what the vertical component of the traction on the boundary looks like. But I am not sure what the word traction means, i.e. what I have to compute. I can compute the stress from the velocity field as$$\sigma_{xy}=\eta (\frac{dv_x}{dy}+\frac{dv_y}{dx})$$$$\sigma_{xx}=2\eta \frac{dv_x}{dx}$$$$\sigma_{yy}=2\eta \frac{dv_y}{dy}$$So this I would know how to compute from my results - by finite differences. What does the vertical traction mean mathematically, i.e. what do I have to compute? What I do know is that the traction in the vertical (y-axis) direction could be:$$\vec{t_y} = \sigma_{xy}\vec{e_x} + \sigma_{yy}\vec{e_y}$$Am I right? Many thanks. This is about traction:Note a convention that we have implicitly established: the sense of the force per unit area $t$ across the oblique surface is that the fluid penetrated by the unit normal $n$ acts on the fluid on the other side of the surface. This force per unit area is called the traction across the surface. Note further that $t_x$ indicates the traction across the y z plane, not the x component of a traction. Representation of components requires a second subscript: $T_{xy}$ is the y component of the traction $t_x$ across the y z face of the tetrahedron, whereas $T_{xy}$ is the x component of the traction $t_y$ across the x z face. Thus, the first subscript represents the vector component of the force and the second subscript indicates the face on which the force is acting.But how to compute it? | Traction on the boundary - how to compute from velocity field | fluid dynamics | First, the stress in fluid dynamics must also include the pressure. In fact, it is$$ \sigma = 2 \eta \varepsilon(\mathbf v) + p I$$where $\varepsilon(\mathbf v)$ is the symmetric gradient of the velocity $\mathbf v$. Then, the traction is the normal component of the stress at the boundary:$$ \mathbf t = \sigma \mathbf n$$If you want, you can further decompose the traction into the normal force $f = \mathbf n \cdot \mathbf t = \mathbf n \cdot \sigma \mathbf n=2\eta \mathbf n \cdot \varepsilon(\mathbf v) \mathbf n + p$ and the tangential friction force $\mathbf t - f \mathbf n$, for which it is easy to show that it doesn't depend on the pressure any more.
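Addendum (a minimal NumPy sketch of the recipe above, not from the accepted answer; the grid, the data and the variable names are made up, and it follows the answer's sign convention for the pressure term):
import numpy as np

# Hypothetical uniform grid: vx, vy, p are 2-D arrays indexed as [y, x]; eta is the viscosity
ny, nx, h, eta = 64, 64, 0.01, 1.0
vx, vy, p = np.random.rand(ny, nx), np.random.rand(ny, nx), np.random.rand(ny, nx)

# Velocity gradients by central finite differences; np.gradient returns (d/dy, d/dx) for [y, x] arrays
dvx_dy, dvx_dx = np.gradient(vx, h)
dvy_dy, dvy_dx = np.gradient(vy, h)

# Stress components, sigma = 2*eta*eps(v) + p*I as written in the answer
sigma_xx = 2.0 * eta * dvx_dx + p
sigma_yy = 2.0 * eta * dvy_dy + p
sigma_xy = eta * (dvx_dy + dvy_dx)

# Traction t = sigma . n on the top boundary (last grid row), outward normal n = (0, 1)
t_x = sigma_xy[-1, :]   # horizontal component of the traction
t_y = sigma_yy[-1, :]   # vertical component, the quantity asked about in the question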
_unix.348834 | I used the following command to send an emailecho Body of the mail | mail -s subject [email protected] first time I ran it, it returned an error saying the program mail is not installed. After searching a bit, I fired the following command and it seems to have installed the program mail.sudo apt-get install mailutilsI again tried to send the mail. This time the command did not return any error (I used $? to check the return value of the command.) So I thought the mail was sent successfully. However, I have not received it in my mailbox. I checked the junk/spam folders too, before anyone points that out.What could be the reason?I ask this question because there seems to be some caveat which no one talks about while using the mail command. All the answers I've seen so far just give the command to be used. But is there any setting that must be done before one can send an email from Bash? | Trouble sending email from the mail utility in Bash | email;mailx;mail command | null |
_unix.315255 | I was trying to list some hidden files in my home directory and I encountered a very odd behavior of grep command when combining with ls command.I executed ls -a on my home directory and got all the filesincluding hidden files as expected.I wanted to list all the hidden files starting with 'xau' so I executed ls -a |grep -i .xau* and it also worked as expected.Then I executed ls -a |grep -i .x* in the same directory but itdidn't list anything at all.I then mistakenly typed ls -a |grep -i .*x (note that this time wildcard character * and character 'x' have switched places) and the interesting thing is that it behaved like what I intended in step3. I tried the same thing with this command ls -a .*x and ls -a .*X but I get no such file or directory error. I have added the actual text output here. Some of you may ask why not just use ls -a .x* but the thing with grep is that it prints with the appropriate colors. So could anyone please explain this to me?One more thing: This is my first post so please be gentle if I have made any newbie mistakes. | grep command with ls -a not working properly? | grep;ls | You are suffering from premature glob expansion..xa* doesn't expand because it doesn't match anything in the current directory. (Globs are case sensitive.) However, .x* does match some files, so this gets expanded by the shell before grep ever sees it.When grep receives multiple arguments, it assumes the first is the pattern and the remainder are files to search for that pattern.So, in the command ls -a | grep -i .x*, the output of ls is ignored, and the file .xsession-errors.old is searched for the pattern .xsession-errors. Not surprisingly, nothing is found.To prevent this, put your special characters within single or double quotes. For example:ls -a | grep -i '.x*'You are also suffering from regex vs. glob confusion.You seem to be looking for files that start with the literal string .x and are followed by anythingbut regular expressions don't work the same as file globs. The * in regex means the preceding character zero or more times, not any sequence of characters as it does in file globs. So what you probably want is:ls -a | grep -i '^\.x'This searches for files whose names start with the literal characters .x, or .X. Actually since there's only one letter you are specifying, you could just as easily use a character class rather than -i:ls -a | grep '^\.[xX]'The point is that regular expressions are very different from file globs. If you just try ls -a | grep -i '.x*', as has been suggested, you will be very surprised to see that EVERY file will be shown! (The same output as ls -a directly, except placed on separate lines as in ls -a -1.)How come?Well, in regex (but not in shell globs), a period (.) means any single character. And an asterisk (*) means zero or more of the preceding character. So that the regex .x* means any character, followed by zero or more instances of the character 'x'.Of course, you are not allowed to have null file names, so every file name contains a character followed by at least zero 'x's. :)Summary:To get the results you want, you need to understand two things:Unquoted special glob characters (including *, ?, [] and some others) will get expanded by the shell before the command you are running ever sees them, andRegular expressions are different from (and more powerful than) file globs. |
_webmaster.13955 | How to set up Kohana's index.php with the Cherokee web server?Should I add a rule to redirect everything to index.php?These settings don't workRule RegExp ^.*index.php.*$Redirect ^(.*)$ => index.php$1 | Kohana with cherokee | mod rewrite;kohana | Can you try changing Redirect ^(.*)$ => index.php$1to be:Redirect ^(.*)$ => /index.php?$1You may need to play around; the addition of / and ? is intended to stop the looping. / alone may be sufficient.You might find this answer on Server Fault relevant to your issue - even if Cherokee and Kohana have their own syntaxes.
_unix.180252 | I uninstalled clamav withapt-get remove --purge clamavbut still I have this 100MB folder:du -shc /var/lib/clamavWhy is this not deleted on purge? And how can I find out if some other installed program still uses this folder? | removing clamav with purge leaves database | apt;disk usage;clamav | clamav on at least Debian (you don't mention what distro you're using) doesn't contain the database. For that clamav has a dependency on clamav-freshclam | clamav-data so make sure that both of those are also purged. |
_cseducators.3101 | I'm exploring iOS App Development as an elective for students, and in perusing Apple's iBooks on the subject, I see two similar books: App Development with Swift and Intro to App Development with Swift. On the surface they look quite similar.Are there significant differences between the two that I should take into account when selecting one for a course? | Books to use for iOS App Development | curriculum design;swift;app development;resource information | These are both part of Apple's Everyone Can Code initiative, and both are appropriate for a high-school and college audience. The Intro book is intended for non-programmers, and teaches programming fundamentals and Swift syntax, with 90 hours of lessons included. The non-intro book gets into more complex UI development such as working with table views and navigation, and topics like consuming web APIs, and includes 180 hours of lessons:App Development with Swift Curriculum Guide (where I took that screenshot) also shares helpful outlines of the two curricula that can give you a sense of the differences at a glance.Here is the Overview from that guide explaining the difference:The Intro to App Development with Swift and App Development with Swift curricula were designed to teach high school and college students with little or no programming experience how to be app developers, capable of bringing their own ideas to life.The Intro to App Development with Swift course introduces students to the world of app development and the basics of Swift and Xcode. The course culminates in a final project where they can choose one of two basic iOS apps to build.App Development with Swift takes students further, whether they're new to coding or want to expand their skills. If they're already familiar with Swift, Xcode, and iOS development, they can move through lessons quickly or go straight to the labs, where they'll build miniprojects and test their code in playgrounds. By the end of the course, they'll be able to build a fully functioning app of their own design.The document goes on to further explain the two courses as follows:Intro to App Development with SwiftThis introductory one-semester course is designed to help students build a solid foundation in programming fundamentals using Swift as the language. Students get practical experience with the tools, techniques, and concepts needed to build a basic iOS app.App journal activities take students through the app design process, from thinking about the purpose of an app to market research and early user testing. By the end of the course, students will have created a plan for an app they'd like to develop. Even though they might not yet have the skills to build the app, the work they put into the framework will set them up for future development. App Development with SwiftThis two-semester course features 45 lessons, each designed to teach a specific skill related to either Swift or app development. Each type of lesson takes a different approach:Swift lessons. These lessons focus on specific concepts. The labs for each are presented in playgrounds so that students can experiment with code and see the results immediately. Playground files are provided.App development lessons. Focusing on building specific features for iOS apps, these lessons typically take students step by step through a miniproject. The labs help students apply what they learned to a new scenario.
_webapps.107876 | I recently started using Slack and I'm trying to customize it to have some sort of visual clue when something is written in a particular channel/channels. For Slack, this will be something like the blue dot on the system tray.The problem with that blue dot is that it shows up when messages are posted in any of the channels I have joined, and I'm trying to make some channels higher priority. This is, I don't want to be notified of every single message in every single channel I have joined but rather on 1 or 2 Channels I'm following closer.Here are the Notification settings for my accountIs there some way to customize the tool to achieve that? | How to customize the appearance (or not) of Slack blue dot on the system tray | notifications;slack;configuration | null |
_softwareengineering.298305 | Interesting question came up while designing interfaces at work, now resolved, but I want to ask about the theory behind it.Is it incorrect to say that properly typed data members of a class provide encapsulation? (e.g. a Boost Units type that has conversions well defined between other like units, not a typedef'd/boxed uint64_t)struct Ruler{ Length length; Length tick_size; Ruler(Length length, Length tick_size); // Why not have helpers immediately related to the class, let's // stick in an alternative constructor Ruler(Length length, int number_ticks); // Accessors for # tick marks, because it needs transformed // to/from length and tick_size int GetNumberTicks(void); // This specific example breaks down with this function, // but I don't think it's an inherent issue with the design. // I need an overload so it will know which of the two member // variables to calculate... problem is both are Length void SetNumberTicks(int nticks); }vsstruct Ruler{ Ruler(Length length, Length tick_size); Ruler(Length length, int number_ticks); // General accessors Length GetLength(void); void SetLength(Length length); Length GetTickSize(void); void SetTickSize(Length length); // Calculated accessors, see above int GetNumberTicks(void); void SetNumberTicks(int nticks); private: Length length; Length tick_size; }IMO the first encourages cleaner design by consumers and doesn't encourage pulling in slightly related, but probably attached to the wrong object member functions (eg CanMachineCreateRuler() ). I don't really have the words to describe this properly and I may be misunderstanding the additional utility that the accessors/private data combination provide. | Can encapsulation be implemented by proper types rather than accessors? | c++;object oriented design;solid | Accessors for basic data types generally come from the additional operators available to those types. While getting and setting variables may be allowed, keeping extra references to those variables is often discouraged.Some languages also need accessors when the encapsulated program wants to make changes at the time an accessor is used, e.g., computing a lazy value or invalidating a cache.Much of this can be solved by creating general accessor types, such as an int, that has been initialized, for which no address can be obtained, which is not a NaN, is signed, and, upon overflow takes the largest absolute value allowed. or an int, that has been initialized, which is arbitrary meaning no arithmetic operations are allowed, for which references can be taken. The choice is often to trade off the pain of specialized types versus the pain of dependency injection. |
_unix.290046 | An OpenVPN client seems to initialize on a CentOS 7 client virtual machine. However, the response from the server is not clear when the client sends a ping.Specifically, ping 10.8.0.0 from the client does NOT get any response from server.ping 10.8.0.1 from the client does get a response, but is it from the server?ping 10.0.2.2 from the client does get a response, but is it from the server? How do I interpret these ping responses? Is the server respondng to the ping requests? And if not, what specific changes need to be made to the below in order to get the server to reply to a ping from the client?THE CURRENT SETUP:On the server, server.conf is: port 1194proto udpdev tunca ca.crtcert server.crtkey server.key dh dh2048.pemserver 10.8.0.0 255.255.255.0route 10.8.1.0 255.255.255.0 route 10.8.2.0 255.255.255.0 client-config-dir ccd client-to-client ifconfig-pool-persist ipp.txtkeepalive 10 120user nobodygroup nobodypersist-keypersist-tunstatus openvpn-status.logverb 3Also, in the server, the two files in the /etc/openvpn/ccd directory referred to in the server.conf above are: /etc/openvpn/ccd/administrators, which contains only the following one line: ifconfig-push 10.8.1.1 10.8.1.2And /etc/openvpn/ccd/otherorgs, which contains only the following one line: ifconfig-push 10.8.2.1 10.8.2.2The firewalld config for the server is: [root@hostname easy-rsa]# firewall-cmd --get-default-zonepublic[root@hostname easy-rsa]# firewall-cmd --get-active-zonesinternal interfaces: tun0public interfaces: enp3s0[root@hostname easy-rsa]# firewall-cmd --list-allpublic (default, active) interfaces: enp3s0 sources: services: dhcpv6-client http imaps openvpn smtp ssh ports: masquerade: yes forward-ports: icmp-blocks: rich rules: [root@hostname easy-rsa]# firewall-cmd --zone=internal --list-allinternal (active) interfaces: tun0 sources: services: dhcpv6-client ipp-client mdns samba-client ssh ports: masquerade: no forward-ports: icmp-blocks: rich rules: rule family=ipv4 source address=10.8.1.0/24 service name=https_others accept rule family=ipv4 source address=10.8.1.0/24 service name=https accept rule family=ipv4 source address=10.8.0.0/24 service name=https accept rule family=ipv4 source NOT address=10.8.1.1 service name=ssh reject rule family=ipv4 source address=10.8.2.0/24 service name=https_others accept[root@hostname easy-rsa]# On the client, client.ovpn is:clientdev tunproto udpremote ip.addr.of.server 1194resolv-retry infinitenobindpersist-keypersist-tunverb 3ca /etc/openvpn/ca.crtcert /etc/openvpn/centos_vm1_client.crtkey /etc/openvpn/centos_vm1_client.keyThe client seems to start, because the client terminal gives the following logs: [user@localhost openvpn]$ sudo openvpn --config ~/openvpn_config/client.ovpn[sudo] password for user: Wed Jun 15 16:52:23 2016 OpenVPN 2.3.11 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on May 10 2016Wed Jun 15 16:52:23 2016 library versions: OpenSSL 1.0.1e-fips 11 Feb 2013, LZO 2.06Wed Jun 15 16:52:23 2016 WARNING: No server certificate verification method has been enabled. 
See http://openvpn.net/howto.html#mitm for more info.Wed Jun 15 16:52:23 2016 Socket Buffers: R=[212992->212992] S=[212992->212992]Wed Jun 15 16:52:23 2016 UDPv4 link local: [undef]Wed Jun 15 16:52:23 2016 UDPv4 link remote: [AF_INET]ip.addr.of.server:1194Wed Jun 15 16:52:23 2016 TLS: Initial packet from [AF_INET]ip.addr.of.server:1194, sid=40ea5916 7f5543b1Wed Jun 15 16:52:23 2016 VERIFY OK: depth=1, C=UK, ST=RW, L=SomeCity, O=OrganizationName, OU=MyOrganizationalUnit, CN=somedomain.com, name=server, [email protected] Jun 15 16:52:23 2016 VERIFY OK: depth=0, C=UK, ST=RW, L=SomeCity, O=OrganizationName, OU=MyOrganizationalUnit, CN=server, name=server, [email protected] Jun 15 16:52:24 2016 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit keyWed Jun 15 16:52:24 2016 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authenticationWed Jun 15 16:52:24 2016 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit keyWed Jun 15 16:52:24 2016 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authenticationWed Jun 15 16:52:24 2016 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSAWed Jun 15 16:52:24 2016 [server] Peer Connection Initiated with [AF_INET]ip.addr.of.server:1194Wed Jun 15 16:52:26 2016 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)Wed Jun 15 16:52:27 2016 PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.1,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.18 10.8.0.17'Wed Jun 15 16:52:27 2016 OPTIONS IMPORT: timers and/or timeouts modifiedWed Jun 15 16:52:27 2016 OPTIONS IMPORT: --ifconfig/up options modifiedWed Jun 15 16:52:27 2016 OPTIONS IMPORT: route options modifiedWed Jun 15 16:52:27 2016 ROUTE_GATEWAY 10.0.2.2/255.255.255.0 IFACE=enp0s3 HWADDR=08:00:27:d5:85:a9Wed Jun 15 16:52:27 2016 TUN/TAP device tun0 openedWed Jun 15 16:52:27 2016 TUN/TAP TX queue length set to 100Wed Jun 15 16:52:27 2016 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0Wed Jun 15 16:52:27 2016 /usr/sbin/ip link set dev tun0 up mtu 1500Wed Jun 15 16:52:27 2016 /usr/sbin/ip addr add dev tun0 local 10.8.0.18 peer 10.8.0.17Wed Jun 15 16:52:27 2016 /usr/sbin/ip route add 10.8.0.1/32 via 10.8.0.17Wed Jun 15 16:52:27 2016 Initialization Sequence CompletedPING RESULTS:Opening a new terminal on the client and ping to the server address given in server.conf above gives no response[user@localhost ~]$ ping 10.8.0.0PING 10.8.0.0 (10.8.0.0) 56(84) bytes of data. However, ping to two two ip addresses given in the OpenVPN startup logs above did produce responses:[user@localhost ~]$ ping 10.8.0.1PING 10.8.0.1 (10.8.0.1) 56(84) bytes of data.64 bytes from 10.8.0.1: icmp_seq=1 ttl=64 time=91.1 ms64 bytes from 10.8.0.1: icmp_seq=2 ttl=64 time=93.1 ms...^C--- 10.8.0.1 ping statistics ---14 packets transmitted, 14 received, 0% packet loss, time 13013msrtt min/avg/max/mdev = 89.449/93.387/101.522/2.731 ms[user@localhost ~]$ ping 10.0.2.2PING 10.0.2.2 (10.0.2.2) 56(84) bytes of data.64 bytes from 10.0.2.2: icmp_seq=1 ttl=63 time=0.245 ms64 bytes from 10.0.2.2: icmp_seq=2 ttl=63 time=0.429 ms...^C--- 10.0.2.2 ping statistics ---9 packets transmitted, 9 received, 0% packet loss, time 8009msrtt min/avg/max/mdev = 0.170/0.410/0.558/0.117 ms[user@localhost ~]$ | OpenVPN Server does not reply to Client ping | centos;rhel;openvpn;vpn | From OpenVPN's man page: --server network netmask ['nopool'] A helper directive designed to simplify the configuration of OpenVPN's server mode. 
This directive will set up an OpenVPN server which will allocate addresses to clients out of the given network/netmask. The server itself will take the .1 address of the given network for use as the server-side endpoint of the local TUN/TAP interface.And from openvpn.conf (my CentOS7 one at least):# Configure server mode and supply a VPN subnet# for OpenVPN to draw client addresses from.# The server will take 10.8.0.1 for itself,# the rest will be made available to clients.# Each client will be able to reach the server# on 10.8.0.1. Comment this line out if you are# ethernet bridging. See the man page for more info.server 10.8.0.0 255.255.255.0As you can see, you shouldn't be able to ping 10.8.0.0 as it's a network address - the server is allocated the first address. In your case this is 10.8.0.1.As you found out, you can ping 10.8.0.1 which takes 90ms. The delay is because it is the remote end of the VPN (from your client).You can also ping 10.0.2.2 which takes a mere 0.2ms, so this is the local end.So bottom line is - everything's fine. |
_softwareengineering.118044 | Some of my friends say that UNIX is beautiful and simple. But I really don't know what they mean. Why are they saying so? For me, UNIX is just a boring command prompt with different shells. How can I experience the beauty of UNIX? Can you share any experience? | How to understand the beauty of UNIX? | unix | null |
_scicomp.24236 | I am trying to run a pbs file which runs a script on a cluster. The script prompts for a user input. I wish to write this user input in my pbs file, what is a good way to do this? | Running a script that asks for user input from a pbs file | cluster computing;pbs | You might check out the concept of a here document assuming your shell is bash. That way you can write, in your batch (pbs, lsf, sge, slurm, or whatever) script, the text of what you'd like to respond to the prompt with, capture it in your batch script for reproducibility purposes, and format it more or less how you like (based on the constraints of the prompting program). |
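Addendum (a minimal sketch of what that can look like inside a PBS script; the directives, the script name and the two answer lines are placeholders, not from the original post):
#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:10:00
cd "$PBS_O_WORKDIR"
# Feed the program's interactive prompts from a here document; every line
# between the two EOF markers is delivered as if it had been typed at the prompt.
./my_interactive_program <<'EOF'
answer-to-first-prompt
answer-to-second-prompt
EOF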
_cogsci.9456 | Most uses of SPA I've seen seem to be representing static systems, such as recognizing digits, categorizing images, rapid variable creation (also called completing a pattern) and planning a path for writing those digits back out. Can it be used to represent dynamics as well, such as the general movement from a series of images? | How can Semantic Pointer Architecture be used to capture dynamical systems? | theoretical neuroscience;cognitive modeling;spa | null |
_softwareengineering.205389 | Often in coding, I find it very slow and difficult to detect the root cause of a bug, and sometimes I end up going to the wrong point in my code. It's painful. I know that detecting the root cause of a bug is a very important skill for programmers. Does anybody have a trick or technique to suggest a good way of finding the root cause? | How to detect root cause of problem or bug | code quality;problem solving | Ask why a lot, and keep asking it until the problem is clear. Why did the code crash? Because we got a divide by zero. Why did we get a divide by zero? Because X was zero. Why was X zero? Because it was passed in by function foo. Why did foo pass in a zero? Because it was set to the total number of relationships that the object has. Why did this object have zero relationships? Because that's the default value before the object is fully initialized. Why was the value not set to be greater than one during initialization? Because there is a set of cascading if statements without a final catch-all else statement. Why did this object not satisfy any of the if statements? Because the code assumes that every object has at least one parent, or at least one child. Is that a valid assumption? This technique is often referred to as 5 Whys. You can read more about 5 Whys on Wikipedia. You might also be interested in reading answers to the question What is your most useful technique for finding (or preventing) bugs?
_scicomp.23522 | I have a function evaluated on a regular 5D grid with 21 points per dimension (so $21^{5}$ points total).I need to evaluate the integral of the function over all 5 dimensions, so I was planning on using one of the composite Newton-Cotes formulae (i.e. trapezium rule, Simpson's rule, Boole's rule etc.)I'm wondering:Assuming we can't re-evaluate the function on a different grid, is there a way of approaching this that is significantly better than Newton-Cotes?If we are using Newton-Cotes, is higher-order always better? More specifically is it always better to divide the data into a smaller number of intervals evaluated with a higher-order scheme rather than a larger number of intervals evaluated with a lower-order scheme?Thanks! | Choice of Newton-Cotes formulae for regularly gridded multi-dimensional data | numerical analysis;quadrature;integration | null |
_datascience.12306 | in Spark, there is a RowMatrix.columnSimilarities() method (see http://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/distributed/RowMatrix.html#columnSimilarities()) that returns An n x n sparse upper-triangular matrix of cosine similarities between columns of this matrix.How should I read it? If I try to implement an example from https://stackoverflow.com/a/1750187 as following: JavaRDD<Vector> rows = sc.parallelize(Arrays.asList( new DenseVector(new double[]{2, 1, 0, 2, 0, 1, 1, 1}), new DenseVector(new double[]{2, 1, 1, 1, 1, 0, 1, 1})));RowMatrix mat = new RowMatrix(rows.rdd()); List<Vector> sims = mat.columnSimilarities().toRowMatrix().rows().toJavaRDD().collect();for(Vector v: sims) { System.out.println(v);}I get this(8,[6,7],[0.7071067811865475,0.7071067811865475])(8,[1,2,3,4,5,6,7],[0.9999999999999998,0.7071067811865475,0.9486832980505137,0.7071067811865475,0.7071067811865475,0.9999999999999998,0.9999999999999998])(8,[2,3,4,5,6,7],[0.7071067811865475,0.9486832980505137,0.7071067811865475,0.7071067811865475,0.9999999999999998,0.9999999999999998])(8,[7],[0.9999999999999998])(8,[4,5,6,7],[0.4472135954999579,0.8944271909999159,0.9486832980505137,0.9486832980505137])(8,[6,7],[0.7071067811865475,0.7071067811865475])(8,[3,4,6,7],[0.4472135954999579,1.0,0.7071067811865475,0.7071067811865475])How should I interpret it? How do I get the cosine angle 0.822 from this, as mentioned in the referenced stackoverflow post?Thanks! | How to interpret upper-triangular matrix of cosine similarities | apache spark;similarity | null |
_cs.3119 | I have these questions from an old exam I'm trying to solve. For each problem, the input is an encoding of some Turing machine $M$.For an integer $c>1$, and the following three problems:Is it true that for every input $x$, M does not pass the $|x|+c$ position when running on $x$?Is it true that for every input $x$, M does not pass the $\max \{|x|-c,1 \}$ position when running on $x$?Is it true that for every input $x$, M does not pass the $(|x|+1)/c$ position when running on $x$?How many problems are decidable? Problem number (1), in my opinion, is in $\text {coRE} \smallsetminus \text R$ if I understand correct since, I can run all inputs in parallel, and stop if some input reached this position and for showing that it's not in $\text R$ I can reduce the complement of Atm to it. I construct a Turing machine $M'$ as follows: for an input $y$ I check if $y$ is a history of computation, if it is, then $M'$ running right and doesn't stop, if it's not, then it stops.For (3), I believe that it is decidable since for $c \geqslant 2$ it is all the Turing machines that always stay on the first cell of the stripe, since for a string of one char it can pass the first cell, so I need to simulate all the strings of length 1 for $|Q|+1$ steps (Is this correct?), and see if I'm using only the first cell in all of them.I don't really know what to do with (2). | Is it decidable whether a TM reaches some position on the tape? | computability;turing machines;undecidability | Any situation which asks whether a Turing Machine is confined to a finite section of the tape (say of length $n$) on a given input is decidable.The argument works as follows. Consider the Turing Machine, the tape, and the position of the Turing Machine on the tape. All together these have a finite number of configurations. To be specific, there are only $t = n|\Gamma|^n|Q|$ possible configurations. $\Gamma$ is the set of tape symbols and $Q$ is the set of states. I'll continue to use the word configuration to describe the state of the Turing Machine combined with the state of the tape and its position on the tape for the remainder of this answer, but that is not standard vocabulary.Run the machine, keeping track of all its past configurations. If it ever goes beyond point $n$, return yes, $M$ passes position $n$. Otherwise, the machine is somewhere between 0 and $n$. If the machine ever repeats a configuration--its state, the symbols on the tape, and its position on the tape are identical to what they were before--return no, $M$ never passes position $n$.By the pidgeonhole principle, this has to happen in no more than $t+1$ steps. So all of the above is decidable; after at most $t+1$ simulated steps you get an answer.A quick note on why this works: when the machine, the tape, and its position on the tape repeat themselves, there must have been a sequence of configurations between these repetitions. This sequence will happen again, leading to the same configuration one more time--the machine is in an infinite loop. This is because we're keeping track of every aspect of the Turing Machine; nothing outside of the configuration can have an impact on what happened. So when a configuration repeats, it will repeat again, with an identical series of configurations in between.So confining the tape to a finite part of the string is decidable. Therefore, by iterating over all possible input strings, the problem is in $\text{coRE}$ for all three questions. 
You may have already realized this (between your ideas for 1 and 3 and Ran G's answer for 2 it seems solved completely anyway) but I figured it may be worth posting nonetheless. |
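A toy Python sketch of the simulation described above, only to make the configuration-counting argument concrete; the transition-table encoding (a dict mapping (state, symbol) to (state, symbol, L/R)) is invented for illustration, and a machine that halts without leaving the region counts as "confined".

def confined_to(n, delta, x, start="q0", blank="_"):
    tape = (list(x) + [blank] * n)[:n]
    state, head, seen = start, 0, set()
    while True:
        if head >= n:
            return False                      # the head passed position n
        config = (state, head, tuple(tape))
        if config in seen:
            return True                       # repeated configuration: it loops inside the first n cells
        seen.add(config)
        move = delta.get((state, tape[head]))
        if move is None:
            return True                       # no transition defined: the machine halted without leaving
        state, tape[head], direction = move
        head = max(0, head + (1 if direction == "R" else -1))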
_unix.24377 | Is it possible to configure bash vi mode so that initially it is in command mode instead of insert mode? I find that I have to press Esc far too much. It seems that there is a possibility to specify this in zsh, but I have not found a way to do this in bash/readline. | Bash vi mode configuration to default to command mode | bash;vi | null
_codereview.69855 | I have Ruby models which are populated from the responses of API calls in the following way:JSON.parse converts the response to a Hashthe Hash is passed into the initialize method of a classthe initialize method converts camelCase hash keys and assigns underscore_case instance variablescontroller code works with these instances and converts back to json to send to the browserThis works fine, but some of these response objects are large. Others are arrays of large objects.Profiling shows that this process consumes a lot of CPU (and memory, but that is less of a concern) -- which makes sense given that I create hashes in order to create objects, and the back and forth between camelCase and underscore_case happens A LOT -- so what libraries or techniques have you come across which solve this problem?Here is an oversimplified example:JSON response from a third party API (unlikely to change):{\abcDef\: 123, \ghiJkl\: 456, \mnoPqr\: 789}Class definition (attributes unlikely to change):class Data attr_accessor :abc_def, :ghi_jkl, :mno_pqr def initialize(attributes = {}) attributes.each do |key, val| send #{key.underscore}=.to_sym, val end end def as_json instance_variables.reduce({}) do |hash, iv| iv_name = iv.to_s[1..-1] v = send(iv_name) if self.respond_to?(iv_name) hash[iv_name.camelize(:lower)] = (v.as_json(options) if v.respond_to?(:as_json)) || v hash end endendController:get '/' do d = Data.new JSON.parse(api.get) # ... do some work ... content_type 'application/json' d.to_jsonend | Deserialize JSON Strings Directly into Models? | ruby;memory management;json;active record | null |
_unix.204662 | I am using ethtool to change the bandwidth to 10MB/s. Since this is my first time using this program, I am struggling with the correct syntax to change it. I have tried something like:ethtool -s --change speed 10 eth0I know this is incorrect since the command line shot back an error. Can anyone suggest what the correct syntax is? | Change Network Bandwidth | networking | As per man page command should be:ethtool -s devname speed X duplex half|fullI think --change is long option and -s is a short option.ethtool -s devname [speed N] [duplex half|full] [port tp|aui|bnc|mii] [mdix auto|on|off] [autoneg on|off] [advertise N] [phyad N] [xcvr internal|external] [wol p|u|m|b|a|g|s|d...] [sopass xx:yy:zz:aa:bb:cc] [msglvl N | msglvl type on|off ...]So you can try for example:ethtool -s eth0 speed 10 duplex halfor:ethtool --change eth0 speed 10 duplex half |
_codereview.92973 | I have different roles in my system.HODStaffNon staffI am having a form at the front end and fields shall be visible to only users with specific roles.The ShowInGui levels are like this:Show to AllShow to staff/HODShow to HODAny other do not showI am planning to write a method which will take the user role and decides the access.Could this method be better written?public boolean canBeShownInGUI(Role role, int showInGui) { if (showInGui == 1) { return true; } else if (showInGui == 2 && (role == Role.HOD || role == Role.STAFF)) { return true; } else if (showInGui == 3 && role == Role.HOD) { return true; } else return false; } | Rewrite method to decide the ShowLevel based on roles | java | I presume Role is an enum (if it is not, it should be). An enum is a great candidate for a switch statement, and it also makes the conditions very clear:switch (role) { case HOD: return showInGui >= 1; case STAFF: return showInGui == 1 || showInGui == 2;}return showInGui == 1;Note that, if you want you can embed that logic in to the enum itself, and have a simple call to an enum method:return role.isVisible(showInGui);Additionally, if you reverse the values of the showInGui to be:0 -> nobody1 -> HOD only2 -> HOD & STAFF3 -> everyonethen your logic is simplified further to:switch (role) { case HOD: return showInGui >= 1; case STAFF: return showInGui >= 2;}return showInGui >= 3; |
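To illustrate the last suggestion (moving the rule onto the enum itself) without repeating the Java, here is the same shape sketched in Python, using the reversed 0-3 levels from the end of the answer; it is only an illustration of the structure, not part of the original review.

from enum import Enum

class Role(Enum):
    HOD = "HOD"
    STAFF = "STAFF"
    NON_STAFF = "NON_STAFF"

    def is_visible(self, show_in_gui):
        # HOD sees level 1 and above, staff level 2 and above, everyone else level 3
        required = {Role.HOD: 1, Role.STAFF: 2}.get(self, 3)
        return show_in_gui >= required

print(Role.STAFF.is_visible(2))   # True: staff can see level-2 fields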
_codereview.36754 | I've been reading about WeakPointer and WeakPointer<T> today, and it looks very useful. Rather than just using it as-is though, I decided to write a wrapper around it that covers a common usage case for me.Code as follows:public class Temporary<T> where T : class{ Func<T> generator = null; WeakReference<T> reference = null; public T Value { get { return GetValue(); } } public Temporary(T value) : this(value, null) { } public Temporary(Func<T> generator) : this(null, generator) { } public Temporary(T value, Func<T> generator) { reference = new WeakReference<T>(value); this.generator = generator; } public T GetValue() { if (reference == null) return null; T res; if (!reference.TryGetTarget(out res) && generator != null) { res = generator(); reference.SetTarget(res); } return res; }}I've tested this against my use-case and it works as I expect - the garbage collector cleans out items with no active references, and they are re-instantiated at need.I'm looking for critiques of the implementation and suggestions for improvements/additions. | Generic 'temporary instance' class | c#;generics;weak references | It's pretty good, clean and simple code. Not a lot to critique really. A few minor things:Some people (myself included) prefer to prefix private fields with _. This way it's easy to see that it's a field rather than a local variable (and gets rid of the this. in most cases).You shouldn't have a public property and a public method which do the same thing. How is a programer supposed to know whether to use Value or GetValue? It's not obvious what the difference is or if there is one at all.Not 100% sure about the null check in GetValue(). There is no code path where reference should ever be null. So it would actually indicate a bug if it were the case which you are hiding with this check. Consider removing the check or actually throwing an InvalidOperationException or similar instead. Detecting bugs early is important. Another option would be to look at Code Contracts to state the invariants of that method.I have started to try and avoid null checks where possible by making sure that there is always an object. In your case this could be something like this:public class Temporary<T> where T : class{ private static Func<T> NullGenerator = () => null; Func<T> generator = NullGenerator; WeakReference reference = null; public T Value { get { return GetValue(); } } public Temporary(T value) : this(value, NullGenerator) { } public Temporary(Func<T> generator) : this(null, generator) { } public Temporary(T value, Func<T> generator) { if (generator == null) throw new ArgumentNullException(generator); reference = new WeakReference(value); this.generator = generator; } private T GetValue() { T res; if (!reference.TryGetTarget(out res)) { res = generator(); reference.SetTarget(res); } return res; }} |
_unix.126019 | I'd like to use btrfs' send/receive feature for transmitting backup snapshots over a rather slow (initial seed of about 50-100GB, upstream bandwith ~1-2MBit/s) and unreliable (daily forced interruption on both ends) connection.I see following requirements:encrypted transfer (usually achieved by using an SSH tunnel)robustness to interrupted connectionsIt seems ZFS is able to resume interrupted transfers automatically, similar to how rsync does. Does this also apply to BTRFS? The send/receive wiki page is not useful with respect to interrupted transfers. If btrfs would resume interrupted transfers, all I would have to do is using an SSH tunnel and resume if interrupted.If not, I'd have to use some buffer in-between that make sure the btrfs-connection survives interrupts, or get both servers close to each other for seeding (which will be a problem with respect to added files that excel the daily transmission capacities and sending snapshots).What will I have to consider for transmitting the seed and snapshots? | How to use btrfs send/receive for transmitting backup snapshots over a slow and unreliable network connection? | linux;backup;btrfs | null |
_unix.98681 | Does somebody know, why I get -ne (probably param from echo from section of setting PROMPT_COMMAND line 23) after I switch to root? Here is my /etc/bashrc.Bash 3.2.51, OS X 10.9 | Unexpected output after switch to root | terminal;osx;prompt;bashrc | null |
_webapps.28914 | I'm missing something somehow. I can't seem to find how to get to the next 50 messages in Gmail. I know it sounds like a dumb question, but I don't see it. (I'm using Firefox.) | Older mail page change tab? | gmail | null
_unix.185419 | I'm using iptables on my router to redirect all web traffic to my page, but I don't know how to exclude the hosts on my MAC address list. I ran commands like this:iptables -t nat -A PREROUTING -m mac ! --mac-source xx-xx-xx-xx-xx-xx -p tcp --dport 80 -j DNAT --to 127.0.0.1:8080 (Host A)iptables -t nat -A PREROUTING -m mac ! --mac-source xx-xx-xx-xx-xx-xx -p tcp --dport 80 -j DNAT --to 127.0.0.1:8080 (Host B)But it only takes effect for Host A: Host A can access the web normally, while Host B still gets redirected.How can I get normal, unredirected access for both MAC addresses? | iptables - Redirect except list MAC Address | networking;iptables;router | Use this:iptables -t nat -A PREROUTING -m mac --mac-source xx-xx-xx-xx-xx-xx -j ACCEPT (Host A)iptables -t nat -A PREROUTING -m mac --mac-source xx-xx-xx-xx-xx-xx -j ACCEPT (Host B)iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to 127.0.0.1:8080The first two commands ACCEPT packets from A and B, so the redirection only happens if the packet is from another host.
_webapps.17907 | What is this red notification bar in Google+?It is always at 1, and when I click it a blank popup opens.What is it supposed to do and why is it always showing me a 1? | Google+ notification button showing count of 1 but nothing else | google plus | null |
_codereview.3833 | I've been seeking to optimize this algorithm, perhaps by eliminating one of the loops, or with a better test to check for prime numbers.I'm trying to calculate and display 100000 prime numbers has the script pausing for about 6 seconds as it populates the list with primes before the primes list is returned to the console as output.I've been experimenting with usingprint odd,to simply print every found prime number, which is faster for smaller inputs like n = 1000, but for n = 1000000 the list itself prints much faster (both in the Python shell and in the console).Perhaps the entire code/algorithm should be revamped, but the script should remain essentially the same: The user types in the number of prime numbers to be printed (n) and the script returns all prime numbers up to the nth prime number.from time import timeodd = 1primes = [2]n = input(Number of prime numbers to print: )clock = time()def isPrime(number): global primes for i in primes: if i*i > number: return True if number%i is 0: return Falsewhile len(primes) < n: odd += 2 if isPrime(odd): primes += [odd]print primesclock -= time()print \n, -clockraw_input() | Optimizing this 'print up to the nth prime number' script | python;optimization;algorithm;beginner;primes | null |
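For comparison, one commonly suggested alternative is to bound the nth prime from above (the bound used below is valid for n >= 6) and run a single sieve of Eratosthenes instead of trial-dividing every odd number; this is a hedged sketch rather than a drop-in replacement for the script above.

from math import log

def first_n_primes(n):
    if n < 6:
        return [2, 3, 5, 7, 11][:n]
    limit = int(n * (log(n) + log(log(n)))) + 1   # upper bound on the nth prime for n >= 6
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(range(i * i, limit, i))
    return [i for i, is_prime in enumerate(sieve) if is_prime][:n]

print(first_n_primes(10))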
_datascience.16087 | Rather than creating 15 additional columns full of sparse binary data, could I: 1) use the first 15 prime numbers as indexes for the 15 categories2) store data by multiplying the prime numbers of the categories that otherwise would have a value of 1 in one-hot encoding3) retrieve data by factorizing the value generated by multiplying unique prime numbersEx: 1914 would yield the list [2, 3, 11, 29] which would let you know that the user with the 1914 value has property 2, 3, 11, and 29 but nothing else.I understand this is limited because BIGINTs can only hold the product of the first 15 prime numbers, but would it not still be useful in some situations and save time when searching the database? The entire table would be 14 columns smaller. I guess this is less about machine learning algorithms and more about storing and retrieving data. | Instead of one-hot encoding, can I store the same information in one column using a single value? | bigdata;data;databases;sql | I suppose you could do this, but if your goal is simply to store 15 boolean values in a single column you are complicating things unnecessarily. Instead of going to all the trouble to compute the prime factors of the stored value, why don't you just store the flags as a bit string? Your example of 15 different possible values could be stored in a single SMALLINT (2-byte) SQL column. After retrieving the value, you would just need to extract the bits of interest for your record with some basic bitwise arithmetic. |
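A small sketch of the bit-flag alternative mentioned in the answer, with the packing and unpacking done in application code; the 15-flag width and the example values are assumptions.

def pack_flags(flags):
    # Pack a list of up to 15 booleans into one small integer (fits a 2-byte SMALLINT).
    value = 0
    for position, flag in enumerate(flags):
        if flag:
            value |= 1 << position
    return value

def unpack_flags(value, width=15):
    return [bool(value & (1 << position)) for position in range(width)]

stored = pack_flags([True, False, True] + [False] * 12)
print(stored, unpack_flags(stored)[:3])   # 5 [True, False, True]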
_softwareengineering.105911 | We have a centralised business application framework, and it contains all our business logic and provides access to all our back-end systems. It is accessed by a number of different programs and clients through remoting. When we need to do a change in a certain part of the program, we currently have to retest all our different programs and clients. Even assuming we extend our automated testing, we feel that there is no way we can be certain that upon releasing a new version of our application framework all the different programs will work correctly after that change without also manually retesting all these programs.Every time we release a new version of project C, we have to test program A and B as well for unintended changes.Preferably, we would move to a release system where upon releasing project C, we only have to retest program C, and not A and B for possible breaking changes that we overlooked.The idea lives to duplicate our centralised business application framework, one for each sub-project. It would mean for that when we release a new version of program C, we will provide a private version of the application framework. Upon release of program C, nothing will have changed for application A and B, they still communicate with their private application framework - so they do not need to be retested.I came up with the following pro's and con's:Con's:We'll have our framework hosted n times for each project We'll get confused in what version / project what bug is solved.With a major fix / change, we will have to re-release all n versions of the application framework.Pro's: Significantly cuts down on testingCertainty that no unintended changes are released for different projects.I must admit I am not a fan of the private copies-approach, but I fail to provide a better alternative, or better arguments against this.Has anyone been in a similar situation, and how was it handled? Is the above approach of private frameworks copies a solid / accepted approach? Any advice is welcomed. Thanks in advance. | centralized hosted application framework or private copies for each program? | release;versioning;release management | Your framework, like any library code, should be a separate, versioned, project. And each app should be dependent upon a specific version of the framework.Whenever you make changes to the framework, its version should change, and the new version should be made available in addition to the old version(s) still being available for the other apps.You can then switch over each app as desired or needed to a new(er) version of your framework. And you can decommission a version when not a single app still has it as a dependency.Yes, there will be duplication, but with a correct folder structure and some configuration in the apps the duplication can be limited to the deployed/deployable files. All other dupplication should be in your version control system (which I do hope you use...).Just look at your framework as another framework like .Net. My hosting provider happily makes version 2.0, 3.5, 4.0 ... available and which one is used for my site is defined by a configuration setting which can be different from the configuration settings of any other sites hosted by the same server.With regard to your con's:I don't see multiple hosted versions as a problem. 
If anything it makes dependencies a lot more explicit and allows you to develop the framework with confidence. Whether or not you get confused about which version/project a bug is fixed in depends entirely on your bug tracking system. Plus the framework should be its own project. I didn't mention this above because to me it is so obvious that I didn't anticipate anybody having/changing it as part of another project. Yep, if a fix is important enough to be applied to older versions, you will indeed have to fix all previous (and following) releases. But that is a good thing. It will make you think twice about whether something is worthy of being hot-fixed (requiring changing all currently maintained releases) or just something that the other apps can get when they switch to a newer version of your framework.
_softwareengineering.164703 | I am thinking to take the next semester a course called Digital systems architecture, and I know that we need to program micro-controllers with several programming languages such as C, C++, verilog, and VHDL. I want to be prepared to take that course, but I need to know if I need to study deeper these languages. At this moment, I have taken one course in basic Java dealing with basic methods, data types, loop structures, vectors, matrices, and GUI programing. Must I study deeper Java and then go with C, and C++? Besides, I know basic verilog and VHDL. | Useful programming languages for hardware programming | learning;education | During my studies, we used C and VHDL, but C was by far the more prominent language used. And the advantage of learning C is of course obvious even beyond hardware programming. So that's what I recommend. When you learn C very well, it should give you no trouble to learn Java and C++ later on - it's just about learning the OOP aspects of both, mostly. After learning C, you will already have a good grasp of the main principles of programming.And if our professors are to be trusted, the majority of embedded hardware is still programmed with plain old C, so it should be more than enough to land a job. Then you can branch out to either more sophisticated systems still in C, or more modern (and more rare, would you believe?) programming with C++ and others. The majority of embedded hardware is so simple, that using 32-bit MCUs would be incredibly overkill and a waste of revenue and resources.But of course, it depends on your interest. If you want to program modern hardware, like GPUs, SSD controllers and such, then C++ will possibly kickstart you closer - although Assembly is most likely what you will be using, depending on how close to the core you will get with the hardware.As for Java, C++ and similar, perhaps multimedia hardware (DVD/BluRay players, modern TVs, etc) is the field that uses those languages. |
_unix.124923 | I have a file on disk which contains some text I want to display on the screen. You can do it like this:dialog --yesno `cat FILE` 10 100However, I'm concerned that if FILE becomes large, I'm likely to exceed the command length limits of the shell. Is there some other way to accomplish this task?I want to display the contents of the file so the user can scroll through it (if it's really long), and then select what to do next. (I've relabelled the yes and no buttons to something more meaningful.) Presumably trying to pipe six pages of text through the command line like this is going to break. | Display file using Dialog | files;dialog | Looking at the man page for this there does seem to be any way to avoid passing the text as an argument - there appears to be no way to pipe the data in or have dialog read directly from the file. However, you could limit the size of the argument using head. On most Linux systems the maximum size for a single argument is 32KiB, so you could do:dialog --max-input 32768 --yesno $(head -c 32K FILE) 10 100The maximum size for a single argument is defined by MAX_ARG_STRLEN which you will find in /usr/include/linux/binfmts.h if you have kernel headers installed. Usually the value is PAGE_SIZE * 32 where PAGE_SIZE is usually 1KiB (see /usr/include/linux/a.out.h).Of course these values can be completely reconfigured. Moreover MAX_ARG_STRLEN is Linux specific and was introduced in Linux 2.6.23. For more information about what the limits actually are, please see What defines the maximum size for a command single argument?.UpdateOops, you can actually use the --file argument for this. It looks like you can do something like (without having tested):{ echo -n \ sed 's//\/' FILE echo -n \} | dialog --yesno --file '&1' 10 100No need for the sed if you know there are no quotes in the file. Or alternatively just put everything you need in the file (quotes, --yesno and all) and simply do:dialog --file FILE |
_webmaster.31407 | I need to exclude protection on one of the folder inside a protected directory with .htaccess I put .htaccess in here:/home/mysite/public_html/new/administrator/.htaccessThe directory need to be exclude from protection:/home/mysite/public_html/new/administrator/components/com_phocagallery/My .htaccess file :AuthUserFile /home/mysite/.htpasswds/public_html/new/administrator/passwdAuthType BasicAuthName adminrequire valid-userSetEnvIf Request_URI (/components/com_phocagallery/)$ allowOrder allow,denyAllow from env=allowSatisfy anyI tried but not working on my purpose. I suspect my path to the excluded directory may have some mistakes. | Exclude a sub directory in a protected directory | htaccess | null |
_codereview.78155 | The goal of my code is to sort data into two categories. It must use a local copy of the initial data from Collar (Top View).csv. My code creates a Collection of items called Collars using the initial data file, then moves each Collar into its respective category based upon its E dimension. I would like feedback on if I could do this more efficiently and readable, but other feedback is welcomed.Option ExplicitOption Base 1Dim CollarCol As New CollectionDim BatchNum As String' Calls for creation of a collection of collars and then calls that to be sorted.Sub SortButton_Click() ' Clear current values Range(D3:L30).Clear ' Create local copy. Cannot open live copies of files. FileCopy O:\IQC_Inspection\EngineeringData\Collar (Top View).csv, _ ActiveWorkbook.Path + \ + Collar (Top View).csv ' Get user input for desired batch number On Error GoTo ErrorHandler BatchNum = InputBox(Prompt:=Enter batch number: ) If (BatchNum = 0) Then Exit Sub ' exit for cancel button Set CollarCol = New Collection Call PopulateCollarCol Call SortCollarCol Exit SubErrorHandler: MsgBox Err & : & Error(Err)End Sub' Populates the Collection named CollarColPrivate Sub PopulateCollarCol() Workbooks.Open ActiveWorkbook.Path + \ + Collar (Top View).csv Dim Index As Integer, EndIndex As Integer Dim NewCollar As Collar EndIndex = FindEnd(BatchNum) For Index = FindStart(BatchNum) To EndIndex Set NewCollar = New Collar ' If first measure, add to collection If (Cells(Index, 11) = 0) Then ' NewCollar.SetBatchNum (Cells(Index, 9)) NewCollar.SetSerialNum (Cells(Index, 10)) NewCollar.SetDimE (Cells(Index, 13)) CollarCol.Add Item:=NewCollar, key:=CStr(NewCollar.GetSerialNum) Else ' see if remeasure is done for DimE If (Cells(Index, 15) <> ) Then Dim EditCollar As New Collar Set EditCollar = CollarCol.Item(CStr(Cells(Index, 10))) ' make sure remeasure is done for DimE EditCollar.SetDimE (Cells(Index, 13)) End If End If Next Index Workbooks(Collar (Top View).csv).CloseEnd Sub ' PopulateCollarCol' Returns the first row of the given stringFunction FindStart(ToFind As String) As Integer ' find bottom of batch Dim Rng As Range If Trim(ToFind) <> Then With Sheets(Collar (Top View)).Range(I2:I30000) Set Rng = .Find(What:=ToFind, _ after:=.Cells(.Cells.Count), _ LookIn:=xlValues, _ LookAt:=xlWhole, _ SearchOrder:=xlByRows, _ searchdirection:=xlPrevious, _ MatchCase:=False) If Not Rng Is Nothing Then FindStart = Rng.Row ' found bottom Else MsgBox Nothing found Exit Function End If End With End If ' Loop past remeasures Do While (Cells(FindStart, 11) = 1) FindStart = FindStart - 1 Loop ' Loop while batch number is the same Do While (Cells(FindStart - 1, 9) = ToFind) If Cells(FindStart - 1, 10) < Cells(FindStart, 10) Or _ Cells(FindStart, 11) = 1 Then FindStart = FindStart - 1 Else Exit Do End If LoopEnd Function ' FindStartFunction FindEnd(ToFind As String) As Integer ' find bottom of batch Dim Rng As Range If Trim(ToFind) <> Then With Sheets(Collar (Top View)).Range(I2:I30000) Set Rng = .Find(What:=ToFind, _ after:=.Cells(.Cells.Count), _ LookIn:=xlValues, _ LookAt:=xlWhole, _ SearchOrder:=xlByRows, _ searchdirection:=xlPrevious, _ MatchCase:=False) If Not Rng Is Nothing Then FindEnd = Rng.Row ' found bottom Else MsgBox Error finding end of batch. 
Exit Function End If End With End IfEnd Function ' FindEnd' Takes CollarCol and places each collar into its respective listPrivate Sub SortCollarCol() Dim BlueIndex As Integer, YellowIndex As Integer Dim Index As Integer Dim CurCollar As New Collar BlueIndex = 3 YellowIndex = 3 For Index = 1 To CollarCol.Count Set CurCollar = CollarCol.Item(Index) If (CurCollar.GetDimE < 0.062055555) Then Cells(BlueIndex, 4) = CurCollar.GetBatchNum Cells(BlueIndex, 5) = CurCollar.GetSerialNum Cells(BlueIndex, 6) = CurCollar.GetDimE BlueIndex = BlueIndex + 1 Else ' Bucket 2 Cells(YellowIndex, 9) = CurCollar.GetBatchNum Cells(YellowIndex, 10) = CurCollar.GetSerialNum Cells(YellowIndex, 11) = CurCollar.GetDimE YellowIndex = YellowIndex + 1 End If Next IndexEnd Sub ' SortCollarCol'Returns boolean true if an object is within a collectionPublic Function InCollection(col As Collection, key As String) As Boolean Dim var As Variant Dim errNumber As Long InCollection = False Set var = Nothing Err.Clear On Error Resume Next var = col.Item(key) errNumber = CLng(Err.Number) On Error GoTo 0 '5 is not in, 0 and 438 represent incollection If errNumber = 5 Then ' it is 5 if not in collection InCollection = False Else InCollection = True End IfEnd FunctionHere is data that I would be an example. Each column is in a spreadsheet and the first column starts as 'B'1-Jan-14 8:43:48 worker1 QQ SAQ20 Z R 143 3 0 1 2.72E-02 2.71E-02 1-Jan-14 8:43:48 worker1 QQ SAQ20 Z R 143 4 0 1 2.75E-02 2.73E-02 2-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 1 0 6.20E-02 6.19E-02 2.77E-02 2.76E-02 1.19E-02 1.35E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 3 0 0.062127182 6.18E-02 2.77E-02 2.78E-02 0.010853701 1.47E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 4 0 6.20E-02 6.20E-02 2.76E-02 2.75E-02 0.011244671 1.45E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 5 0 6.19E-02 6.20E-02 2.78E-02 2.75E-02 1.29E-02 1.29E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 6 0 6.20E-02 6.20E-02 2.79E-02 2.76E-02 1.20E-02 1.36E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 7 0 6.21E-02 6.20E-02 2.75E-02 2.74E-02 1.19E-02 1.38E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 8 0 6.17E-02 6.17E-02 2.75E-02 2.75E-02 1.34E-02 1.20E-022-Jan-14 7:08:39 worker1 QQ SA3054 Z R 150 9 0 6.16E-02 6.16E-02 2.73E-02 2.77E-02 1.30E-02 1.23E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 10 0 0.061871287 6.19E-02 2.75E-02 2.74E-02 1.19E-02 1.36E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 11 0 6.17E-02 6.19E-02 2.77E-02 2.76E-02 0.012293416 1.33E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 12 0 0.062024465 0.062002266 2.76E-02 2.75E-02 1.16E-02 1.41E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 13 0 6.19E-02 6.17E-02 2.74E-02 2.76E-02 1.29E-02 1.26E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 14 0 6.19E-02 6.16E-02 2.74E-02 2.78E-02 1.30E-02 1.23E-022-Jan-14 7:08:39 worker1 QQ SAQ20 Z R 150 15 0 6.18E-02 6.19E-02 2.75E-02 2.74E-02 1.25E-02 1.31E-023-Jan-14 7:34:05 worker1 QQ SAQ20 Z R 181 1 0 6.21E-02 6.19E-02 2.73E-02 2.71E-02 1.34E-02 0.0122620733-Jan-14 7:34:05 worker1 QQ SAQ20 Z R 181 2 0 6.20E-02 6.22E-02 2.71E-02 2.70E-02 1.32E-02 1.28E-02 | Efficiently create and sort a Collection | sorting;vba;excel;collections | General ImpressionIt's better than most VBA I see. I think you generally did a pretty good job.Event Handlers shouldn't have very much code in them. How would you run this code headless (without a person interacting with the UI) if you needed to? I would consider breaking the SortButton_Click() event procedure into at least one or two more subroutines. 
I would actually recommend moving almost all of this logic into a class module. Keep your code behinds clean of all business logic. Code behinds should be mainly responsible for dealing with UI events and calling on classes that hold the business logic.The string literal Collar (Top View) shows up a lot. Extract a constant to store it in. Be careful however, you use it in two different contexts. In some places it refers to a file name and in others it is a sheet name. So, you actually need two different constants. It's perfectly okay to let one constant reference the other though. This is completely legit and compilable code.Private Const sheetName As String = Collar (Top View)Private Const fileName As String = sheetName & .csvSortButton_ClickYou're turning the error handling on pretty late. What happens if there's an issue with the FileCopy command? The code will break on that line. Probably not what you want to happen. Generally speaking, if you're using On Error GoTo it should be the first line after the sub declaration.Be explicit about scope. Sub SortButton_Click()Scope is public by default in VBA, unlike .Net where it's private by default. That alone is a good reason to be explicit about how things are scope. It will reduce confusion for anyone (including yourself) who may move between the two languages. It's one less thing to remember. Also, did you actually mean to make this Public? I can't think of a good reason for an event handler to be public. If you need to call the code inside of it, it would be much better to extract the logic into a public subroutine of it's own.Using Range all on its own implicitly calls ActiveSheet.Range. It's always better to be explicit and in turn, it's rarely recommended to work on the active worksheet. There might not be another option here though. This could be one of those rare times.Give this a newline for readability.BatchNum = InputBox(Prompt:=Enter batch number: ) If (BatchNum = 0) Then Exit Sub ' exit for cancel buttonBatchNum = InputBox(Prompt:=Enter batch number: ) If (BatchNum = 0) Then Exit Sub ' exit for cancel buttonSpeaking of readability, you might want to ditch the one-line If in favor of the more verbose If block syntax.PopulateCollarColYou've repeated this code from the click event. ActiveWorkbook.Path + \ + Collar (Top View).csvYou should be passing that filepath into the subroutine as an argument simply to keep the code DRY, but there's another issue here. What if the user clicks on a different workbook after the click event starts, but before the code execution gets here? You could execute in a different path. (Unlikely, but possible.)When I first saw this, I expected to give my spiel about implicitly declared variants, but you declared these variables correctly. Well done. I see this get screwed up a lot, but you didn't.Dim Index As Integer, EndIndex As IntegerNothing guarantees that someone won't come behind you and call this function before there is a valid (Not Nothing) Collar collection to work with before you add it. It's a potential bug. A simple fix would beIf CollarCol Is Nothing Then Set CollarCol = New CollectionIf you want to make this more efficient, don't open and close the csv file as a workbook. Opening a workbook is an expensive operation. Instead, use an adodb recordset to read in the closed file and loop through the recordset instead. Here is one example of how to get the data into a recordset. FindStart & FindEndI'm a bit torn on these. On one hand, they do one thing and do it well. 
They're also nicely decoupled. On the other, they share some copy/pasted code. DRYing these out to share the common code would couple them together in a way I'm not sure I care for. You could have FindStart() call FindEnd() if you chose to do so.Now, assuming that by efficient you meant you want to squeeze every last bit of performance out of the code you could do something along the following lines, but I'm not sure I'd really recommend it. Take it for what it's worth to you.To find the starting index, you first have to find the ending index. You also call both of these functions in rapid succession. This means that you're .Finding the same value twice in a row. What you could do (and again, I'm not sure I recommend actually doing this) is take advantage of passing arguments ByRef and turn your two functions into a single Sub that overwrites the values of some out parameters. This is more efficient because the find only happens once, but readability suffers. It's not often you'll see people use this type of method to return values.Private Sub FindEndPoints(ByVal ToFind As String, ByRef outStartIndex As Integer, ByRef outEndIndex As Integer) ' find bottom of batch Dim Rng As Range If Trim(ToFind) <> Then With Sheets(Collar (Top View)).Range(I2:I30000) Set Rng = .Find(What:=ToFind, _ after:=.Cells(.Cells.Count), _ LookIn:=xlValues, _ LookAt:=xlWhole, _ SearchOrder:=xlByRows, _ searchdirection:=xlPrevious, _ MatchCase:=False) If Not Rng Is Nothing Then outEndIndex = Rng.Row ' found bottom Else MsgBox Nothing found Exit Function End If End With End If ' Loop past remeasures outStartIndex = outEndIndex Do While (Cells(outStartIndex, 11) = 1) outStartIndex = outStartIndex - 1 Loop ' Loop while batch number is the same Do While (Cells(outStartIndex - 1, 9) = ToFind) If Cells(outStartIndex - 1, 10) < Cells(outStartIndex, 10) Or _ Cells(outStartIndex, 11) = 1 Then outStartIndex = outStartIndex - 1 Else Exit Do End If LoopEnd SubWhich you could call like so:Dim Index As Integer, EndIndex As IntegerFindEndPoints(BatchNum, Index, EndIndex) 'The index variables will be set after this line executesLike I said, readability/understandability suffers. That's why you don't often see people do this, but if you're after pure speed, this is the way to go.InCollectionYou received a (very good) review that focuses on just this function already, but there are a few things to note still.You've not used this function anywhere in the code you've shown us. If you're not using it, remove it.The function is useful beyond this code behind. It should probably live somewhere you could re-use it throughout your project(s). Perhaps as part of a Custom collection class or a *.bas module.You're re-inventing the wheel. The built in collection object doesn't handle keys very well (as I'm sure you're aware). There is an alternative in the Scripting Runtime Library. If this functionality is indeed important and needed, I recommend using a Scripting.Dictionary instead of a Collection. It has a built in Exists function that does exactly what your InCollection function does. |
_webmaster.26324 | Suppose I create a blog with SomeName in blogger.com or tumblr.com. So the address is SomeName.blogger.com or SomeName.tumblr.com. I need some clarification as to the situation when I want to use MyOwnDomain.com for the blog:If I simply go to the control panel of my domain registrar and setup MyOwnDomain.com to redirect to SomeName.blogger.com, then I shall expect that the blog will show up in search results as SomeName.blogger.com?If I add MyOwnDomain.com in blogger control panel and edit the A and CNAME records of MyOwnDomain.com with the information provided in blogger, then the blog will show up in search results as MyOwnDomain.com? Is this guaranteed?Are there any other SEO dangers to using Blogger or Tumblr when you want to promote your own domain? | Can the blog subdomain take precedence over my own domain in search engines? | seo;domains;blogger;tumblr | Yes.Yes. Yes. You ask about tumblr.com and the answer is yes for them too. They give detailed instructions, but say:Our staff isn't able to support many of the issues that may crop up when setting up a domain name.(It should be our staff aren't, but I can't fix that!)The SEO dangers associated with using Blogger and Tumblr with your own domain aren't about the fact that the site is hosted by them, but more around the difficulty of customising the site for good SEO practise, as you don't have full control over the HTML as you would with your own site. |
_webapps.28632 | Is one permitted to sell digital goods using Facebook Payments?Looking at https://developers.facebook.com/policy/credits/ I understand:I am not permitted to sell tangible goods (any good that is physically delivered) using Facebook Payments.I am permitted to sell in-game digital goods such as weapons.For example, if I were a photographer, could I create a Facebook app that showed low-resolution versions of photos but then use Facebook Payments to sell downloads of the high-resolution versions of those photos from within the app?Thanks!NOTE: This is duplicate of my question on Stack Overflow; I was advised this was a more appropriate site to post the question to. | May I sell digital content (e.g. photos, PDFs and videos) using Facebook Payments? | facebook | null |
_webmaster.54769 | Say that I have a domain name called example.com and two server located at 2 different IP addresses: 1.2.3.4 and 6.7.8.9.How could I assign example.com to 1.2.3.4 and the subdomain foo.example.com to 6.7.8.9?[EDIT] I did try to put a A record linking from @ to the first IP address and from foo.example.com to the second IP address, as illustrated below:And I did configure a vhosts called foo.example.com on my server at IP address 2.The @ record works. But after 3 hours waiting for the result (DNS delay), nothing happened with foo.example.com, which link to nothing. Why? | How to assign foo.example.com to one IP address and example.com to a different one? | domains;dns;subdomain;ip address | null |
_webapps.15985 | When I go to Google Maps, it shows my default location. Often I then search/find a location/address and then want to get Directions from my default location, but there doesn't seem to be an obvious way to do that. When I click on directions, it wants me to type an address. If I start typing the addr of my default location, it doesn't autocomplete or even list it among the choices until I've typed most of it.I thought it (Google Maps or more likely my browser) used to remember the places I'd typed in for the originating address, which enabled me to choose from previous entries in a drop down. But that seemed to disappear once Maps started doing instant search on what I'm typing in the address field.Ideally I'd like to have a few locations (e.g. home, work) and choose them as the starting point for directions unless I wait to specifically type in an address. But I don't see anything like this. | How to get directions to my default location in Google Maps | google maps | In google maps click My places then assign home address. Next time you're searching for directions you can simply begin typing home into one of the direction fields and it will load your home address. Presumably this works with your work location as well. |
_unix.274594 | I'm setting my Archlinux to run programs on Ubuntu Natty via systemd-nspawn.Currently I start the session with --setenv=DISPLAY=:0 --bind=/dev -a su -l (user) -c xfce4-panel.I'm trying to disable the internet for the guest because of security concerns, but when I give --private-network it fails to connect to xorg on the host.Is there any way to solve it? | How do I cut internet but connect X server for systemd-nspawned program | xorg;systemd nspawn | null |
_codereview.112009 | The point of this code is to build an ip from a list of type ushort.var ip = new StringBuilder();List<ushort> ipList = new List<ushort>(4) {192, 168, 1, 1};ipList.ToList().ForEach(x => ip.Append(x + .));return ip.Remove(ip.Length - 1, 1).ToString();The code works and outputs an ip as expected, but the formatting I give it leaves to be desired, having to delete the last element of the string does not look like a reliable solution or at least, I don't feel like it is.The code above would output, before returning, the following string:192.168.1.1.And after removing the last character it will look like this: 192.168.1.1 | Processing a list to build an ip in a string format | c#;strings | There is a string.Join() method which would exactly do what you want like so string ip = string.Join(., ipList);btw, you don't need to call ToList() on a List<T>. |
_webmaster.104223 | I am working on a site and it has a sub domain which has almost 25K pages indexed in Google. This subdomain is of no use anymore and its main site (root domain) has the same content. My question is how to effectively handle this. Redirect all the pages to the root domain.Take down these pages and send URL removal request in search console.no-index them only.PS - this subdomain has Page authority of 49/100 and total links are - 39 only and most of these links are coming from the site owners other sites. | Subdomain Redirection Question | redirects;subdomain;seo audit | Basically, the steps are:Ensure that both the subdomain and main domain are verified properties in Google Search Console.Redirect (301 permanent) all pages from the subdomain to the main domain. (This is the most important step.)Use Google's change of address tool to specifically tell Google that the site has moved from the subdomain to the main domain. (You can only do this if you have completed steps #1 and #2 above.)Take down these pages and send URL removal request in search console.Do not send URL removal request. This will simply remove the URLs from the SERPs. You want Google to recrawl these pages and see the redirect.no-index them only.No. The pages have simply moved and you want them indexed at the new location (at the main domain). |
_webmaster.58544 | I checked the speed of my website with http://tools.pingdom.com/ .Results:Per. Grade = 85/100Req.s = 18Load Time = 1.94sPage Size= 96kbIs this result is good for my website(also consider SEO)? | Is my website is too slow? | seo;performance;page speed | null |
_webapps.35723 | I can't seem to figure out how to find the files that I've shared with people. The Google Help page on the topic provides a few clues but the following query is not allowed:owner:me to:*but a specific target is valid:owner:me to:[email protected] clues? | In Google Drive, how do I search for files that I own and that I've shared? | google drive | null |
_codereview.112798 | Background: I'm trying to learn basic OOP with Python. I have this program I wrote to help my son practice math problems. It does addition, subtraction, multiplication, and division. He can choose easy, medium, or hard for each operator.Here's an excerpt of the code. It's the easy addition function. (The medium and hard functions simply choose bigger random numbers.)def addition_problems_1(): global name score = 0 while score < 30: a = randint(1,10) b = randint(1,10) sum = a + b answer = int(raw_input(%i + %i = % (a, b))) if answer == sum: score = score + 1 print Good job. Current score is %d % score elif answer != sum: print Oops, the correct answer is %i. Try another one. % sum print Good job, %s. You passed this course! % name enter_lobby()And here's the easy Subtraction function. (The multiplication and division functions are very similar.)def subtraction_problems_1(): global name score = 0 while score < 30: a = randint(1,10) b = randint(1,10) if a > b: sum = a - b answer = int(raw_input(%i - %i = % (a, b))) else: sum = b - a answer = int(raw_input(%i - %i = % (b, a))) if answer == sum: score = score + 1 print Good job. Current score is %d % score elif answer != sum: print Oops, the correct answer is %i. Try another one. % sum print Good job, %s. You passed this course! % name enter_lobby()My Question: Could this script be shortened and/or simplified by re-writing it in OOP style? If so, can you show me an example of what that might look like? | Transform Python math game to OOP | python;beginner;object oriented;python 2.7;quiz | It is rather difficult to provide a complete review of the code without having access to it as a whole. For instance, the global name line in both functions is rather bad. Firstly because you're not modifying but just accessing name's value so you don't need it. Secondly because using this kind of global variables is bad anyway, and this might have different ways of solving depending on what you're doing with name in other parts of your code (to which OOP might be a way, but not necessarily the way).Abstract the problemDo not repeat yourself, it is bad practice and impair maintainability. Instead try to extract a pattern and parametrize it with the different values you need. Not accounting for the if a > b test in the substraction for now, the only thing that differ between the two functions is:The operation to perform (both taking two arguments);The representation of this operation.You can thus definedef problems_1(operation, symbol): score = 0 while score < 30: a = randint(1,10) b = randint(1,10) result = operation(a, b) answer = int(raw_input(%i %s %i = % (a, symbol, b))) if answer == result: score = score + 1 print Good job. Current score is %d % score elif answer != result: print Oops, the correct answer is %i. Try another one. % result print Good job, %s. You passed this course! % name enter_lobby()And call it using one of three ways:def add(a,b): return a + bdef sub(a,b): return a - b...problems_1(add, '+')problems_1(sub, '-')problems_1(lambda a,b: a+b, '+')problems_1(lambda a,b: a-b, '-')import operatorproblems_1(operator.add, '+')problems_2(operator.sub, '-')Using the operator module is neater since it allows you to reuse existing code.Account for sorting operandsWe left aside the problem of sorting the operands for the substraction problem. 
This can be incorporated back using a third parameter to tell the function whether or not we should try to sort the operands:def problems_1(operation, symbol, sort=False): score = 0 while score < 30: a = randint(1,10) b = randint(1,10) if sort and a < b: a, b = b, a # Swap variables using tuple unpacking result = operation(a, b) answer = int(raw_input(%i %s %i = % (a, symbol, b))) if answer == result: score = score + 1 print Good job. Current score is %d % score elif answer != result: print Oops, the correct answer is %i. Try another one. % result print Good job, %s. You passed this course! % name enter_lobby()You can then call your function usingimport operatorproblems_1(operator.add, '+')problems_1(operator.sub, '-', True)You can also choose to sort unconditionally, it won't change the expected output for + or *; and it will force results to be greater than 1 for / (asuming your divisions functions ask for integral division). But having this optional parameter can come in handy when dealing with more difficult problems that can potentially benefit from having substraction produce negative results.Expand on greater difficultyI guess that the _1 at the end of the function name is for easy and that you have _2 and _3 types of functions which all look the same.Again, don't repeat yourself. Parametrize your function instead. If all you need to change is the range of the randint calls, then you should use:def problems(operation, symbol, min_op, max_op, sort=False): score = 0 while score < 30: a = randint(min_op, max_op) b = randint(min_op, max_op) if sort and a < b: a, b = b, a # Swap variables using tuple unpacking result = operation(a, b) answer = int(raw_input(%i %s %i = % (a, symbol, b))) if answer == result: score = score + 1 print Good job. Current score is %d % score elif answer != result: print Oops, the correct answer is %i. Try another one. % result print Good job, %s. You passed this course! % name enter_lobby()And that's it. This single tiny change allows you to remove at least 8, all look-alike, functions.The call then becomes:import operatorproblems(operator.sub, '-', 1, 10, True) # easyproblems(operator.sub, '-', 10, 100, True) # mediumproblems(operator.sub, '-', 100, 1000) # hardCoding standardsNow that we removed a whole bunch of your code at once, let's write it properly. For starter, you should have noticed that I changed your sum variable into result. This is both because it better express the intent and because sum is a builtin function in Python that you are shadowing. Try to avoid using builtin functions names for your variables, it makes the code less understandable.Second, the use of % format specifiers is to be replaced with the format string syntax. It is pretty simple for basic use (you don't even need to handle types yourself) and can be much more expressive it need be. For instance %i %s %i = % (a, symbol, b) becomes {} {} {}.format(a, symbol, b).Next, answer == sum and answer != sum are already complementary clauses. There is no third (or fourth, or whatever) choice. Thus you can use an else instead of your elif.Last, your code will fail if the user enters something that fails to be an integer. int(raw_input(...)) will raise ValueError in such cases. You could account for that using a try .. 
except clause and providing a default value to answer (such as None) if the user inputs something unparsable.The code thus become:def problems(operation, symbol, min_op, max_op, sort=False): score = 0 while score < 30: a = randint(min_op, max_op) b = randint(min_op, max_op) if sort and a < b: a, b = b, a # Swap variables using tuple unpacking result = operation(a, b) try: answer = int(raw_input({} {} {} = .format(a, symbol, b))) except ValueError: answer = None if answer == result: score += 1 print Good job. Current score is {}.format(score) else: print Oops, the correct answer is {}. Try another one..format(result) print Good job, {}. You passed this course!.format(name) enter_lobby()Rest of the codeIn addition to the speech about using global at the beginning of this post, the call enter_lobby() at the end of the function seems suspicious. What I imagine is that this enter_lobby() function calls one of the various problems once, and that's it. Making each problem responsible of returning to it.This is wrong because it is an implicit recursion. And potentially an infinite one. The lobby calls a problem, which calls a lobby, which calls an other problem, which calls a third lobby See what I mean? The control flow never return to the first lobby, thus consumming memory to manage the call stack and potentially turning into a RuntimeError: maximum recursion depth exceeded.This also impairs reusability of your code. Each problem should only be responsible of it's own duty and shouldn't have to know about the rest of the code. If you want to make sure that you stay in the lobby at the end of each problem, you should probably put your enter_lobby code into some kind of infinite loop.This loop being either within enter_lobby or in the caller body does depend on your intent and the rest of your code, though. Oh, snap! |
_unix.128093 | I have dumped the database table data into a flat file and below is how the data looks like : (Kindly copy from below ;metier_code ;;-------------------------;(0 rows affected);CRDS_Ptf_No; ; ; ; ; ; ; ; ; ; ; ; ; ; ;Status;;-----------;----------;--------------------------------;-------------------------;----------;--------------------------------;-;-------------------------;-------------------------;---------------;---------------;---------------;-------------------------;-------------------------;-----;------;; NULL;ABCD ;ABHJARS ; ;ABCD ;ABCD ;Y; ; ; ; ; ; ; ; ;A ;; 1234;XEU-ANKD ;XEU-AJKD ; ;ABCD ;ABCD ;Y; ; ; ; ; ; ; ; ;A ;..; 11745;ANJLDMAOKD;AMKDJ AN DJ JAHF AS CPFVH ACCR ;NONE ;AN DJ JAHA;AN DJ JAHA ;Y;NO ANKIO GAP ;YES AMK SCF ; ; ; ; ; ; ;I ;; 11744;AMKDIONSKH;AMKDJ AN DJ JAHF AS CPFVH MTM ;NONE ;AN DJ JAHA;AN DJ JAHA ;Y;NO ANKIO GAP ;YES AMK SCF ; ; ; ; ; ; ;I ;(5436 rows affected)(return status = 0)Return parameters:; ;;-----------;; 5436;(1 row affected); ; ;;-------;-----------;;grepkey; 5436;(1 row affected)want to convert the above as below format:Row should contain the seq no (Prefixed)Need to remove columns names and the blank spaces present in the original file at begining and ending.BELOW IS THE FORMAT OF DATA THAT I AM GETTING BY USING THE SUGGESTED CODE:awk -F ';' '/^;-----------;/ {start=1;next;}; start==0 {next;}; {gsub( +,); print NR $0;}' temp_file > testFORMAT after executing above script :7;NULL;ABCD;ABHJARS;;ABCD;ABCD;Y;;;;;;;;;A;8;NULL;XEU-ANKD;XEU-AJKD;;ABCD;ABCD;Y;;;;;;;;;A;..5443;11744;AMKDIONSKH;AMKDJ AN DJ JAHF AS CPFVH MTM;;QWERDF;QWERDF;Y;;;;;;;;;A;54445445(5436rowsaffected)5446(returnstatus=0)54475448Returnparameters:54495450;;5452;5436;545354545455(1rowaffected)5456;;;5457;-------;-----------;5458;grepkey;5436;54595460(1rowaffected)Above : the prefix row number is not coming in sequence(Incrementing by using the preceeding lines that is not the actual data). Initial file was containing additional info in flat file like column name @ begining , at the end of file few additional details that i wanted like count of records etcI want the data in below format (Which shall have prefix row number and shall include only rows of table , not the additional preceeding and exceeding data) 1;NULL;ABCD;ABHJARS;;ABCD;ABCD;Y;;;;;;;;;A;2;NULL;XEU-ANKD;XEU-AJKD;;ABCD;ABCD;Y;;;;;;;;;A;3;NULL;SWAPOLEIL;SWAPOLEIL;;QWERDF;QWERDF;Y;;;;;;;;;A;..5436;11744;AMKDIONSKH;AMKDJ AN DJ JAHF AS CPFVH MTM;;QWERDF;QWERDF;Y;;;;;;;;;A;5436 - is the number of rows present in the table from where i am fetching the data.Thanks in advance!(Tried the other suggested solutions as well . However, dint got the desired result) | Remove spaces and headers from a dumped database table | text processing;scripting | null |
_unix.213716 | I have a file with tab-separated fields in the following format:2-micron 251 1523 R0010W . + SGD gene . ID=R0010W;Name=R0010W;gene=FLP1;Alias=FLP1;Ontology_term=GO:0003690,GO:0003697,GO:0005575,GO:0008301,GO:0009009,GO:0042150;Note=Site-specific%20recombinase%20encoded%20on%20the%202-micron%20plasmid%2C%20required%20for%202-micron%20plasmid%20propagation%20as%20part%20of%20a%20plasmid%20amplification%20system%20that%20compensates%20for%20any%20copy%20number%20decreases%20caused%20by%20missegregation%20events;dbxref=SGD:S000029654;orf_classification=Verified 0I need to extract 2 columns (4th and last), which I have successfully done. But I also need to extract specific information from a column with more details. For example, I need to extract gene=foo from the 10th column.So, in results I want the 4th column, gene information from 10th column and the last column, a total of 3 columns. How do I do that ? | Extract single information from a column in a file | text processing | null |
_cstheory.18378 | I recently read Landin's paper The Next 700 Programming Languages. But I was a bit confused by ISWIM. In particular, are functions first-class objects in ISWIM? It seems not because every function must occur under some name and there is no $\lambda$-like construct in the language to construct an anonymous function. Landin even explicitly claimed in the first footnote that a not inappropriate title would have been Church without lambda. Anybody knows the reason behind this choice? Is ISWIM less expressive than a language with $\lambda$? | A few questions about ISWIM | pl.programming languages;lambda calculus;functional programming | The wikipedia page for ISWIM says that it is a higher-order language, and that ISWIM is syntactic sugar for the $\lambda$-calculus. Although it seems to have no explicit $\lambda$ construct, thereby making it impossible to have anonymous functions, one achieves the same expressive power by combining lexical scoping and first-class functions: Define a new name for a function locally; pass function as value to higher-order function. |
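A two-line illustration of that last point in Python: naming the function locally and passing it by value gives exactly what an anonymous lambda would, which is why dropping the explicit lambda construct costs no expressive power.

def twice(f, x):
    return f(f(x))

print(twice(lambda n: n + 1, 3))   # 5, with an anonymous function

def add_one(n):                    # ISWIM-style: give it a local name instead
    return n + 1
print(twice(add_one, 3))           # 5 again - nothing is lost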