_unix.247306
What is the difference between:

if echo T | grep -q F ; then echo FOUND; else echo NOT; fi

and:

if "" ; then echo FOUND; else echo NOT; fi

The output of the first command is:

NOT

The output of the second command is:

: command not found
NOT

Both should behave in the same way, I think.
Behaviour of if with an empty condition
bash;shell;grep
: command not found

This means that the command provided as the condition of the if statement was not found. That command is just an empty string, and there is no such command.

According to the manual page of bash, the syntax of an if statement looks as follows:

if list; then list; [ elif list; then list; ] ... [ else list; ] fi

A list is a sequence of one or more pipelines separated by one of the operators ;, &, &&, or ||, and optionally terminated by one of ;, &, or <newline>.

So, in the if statement above the list is executed, an exit code of 127 comes back, and an error is thrown that the command is not found. This is then evaluated as false, and therefore the else block is executed.

What you may want is something as follows:

if false; then echo FOUND; else echo NOT; fi
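The behaviour described above can be reproduced directly in a shell; a minimal sketch:

```shell
# Running the empty string as a command fails with "command not found",
# which the shell reports as exit status 127.
st=0
"" 2>/dev/null || st=$?
echo "exit status: $st"    # prints: exit status: 127

# 'false' exits with status 1; any nonzero status sends if to the else branch.
if false; then echo FOUND; else echo NOT; fi    # prints: NOT
```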
_datascience.13710
I have a large number of variables (2000) and want to visualize them using Python. Also, I would like to see the summary statistics of these variables. It seems that drawing box plots for all of these variables would not be feasible. What are some methods that could be used to visualize this data?
Visualizing a large number of variables in python
python;visualization
null
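A sketch of the kind of approach the question invites (assuming NumPy is available; pandas' describe() and a seaborn correlation heatmap would be natural next steps). Summary statistics can be computed for all 2000 variables at once, and visualization can then focus on aggregates rather than 2000 individual box plots:

```python
import numpy as np

# Hypothetical data standing in for the real set: 500 observations of 2000 variables.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2000))

# Per-variable summary statistics, computed for all columns at once.
means = data.mean(axis=0)
stds = data.std(axis=0)
q1, med, q3 = np.percentile(data, [25, 50, 75], axis=0)

# For visualization, histograms of these aggregates (e.g. the distribution of
# per-variable means/stds), or a correlation heatmap of a subset, scale far
# better than one box plot per variable:
# import seaborn as sns
# sns.heatmap(np.corrcoef(data[:, :50].T))
```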
_unix.100800
I have Arch Linux installed on a headless Raspberry Pi and use SSH to connect to it. I've added some colors to bash by editing sudo nano /home/pi/.bashrc and adding:

PS1='[\[\e[1;34m\]\u\[\e[m\]@\[\e[1;32m\]\h\[\e[m\] \[\e[1m\]\W\[\e[m\]]\$'

The only problem is that colors like 1;34 are not displayed as Light Blue, but as Bold Blue, which is hard to read on a black terminal. Is there any way to set the colors to their light variants?
How to set light colors instead of bold in PS1 (shell prompt)
arch linux;colors
Try using this site/service to generate the colors that you want: Bash $PS1 Generator.

NOTE: I think you're looking for the color Cyan to get light blue (Cyan - 0;36).

References: How to Customize Your Command Prompt
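For illustration, two hedged variants of the prompt from the question (placeholder choices, adapt as needed):

```shell
# 1) Cyan (0;36) instead of bold blue (1;34) for the username:
PS1='[\[\e[0;36m\]\u\[\e[m\]@\[\e[1;32m\]\h\[\e[m\] \[\e[1m\]\W\[\e[m\]]\$'

# 2) On terminals supporting the aixterm "bright" range (codes 90-97),
#    94 gives a bright/light blue without using bold:
PS1='[\[\e[94m\]\u\[\e[m\]@\[\e[92m\]\h\[\e[m\] \[\e[1m\]\W\[\e[m\]]\$'
```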
_cs.70043
I got the following as an interview question: count the number of tours from the upper left corner to the lower left corner in a grid world where you can move in any manhattan direction. This is the number of Hamiltonian paths from upper left to lower left: a path such that every vertex is visited only once, and (this follows from the first statement) such that each edge is used at most once. The grid world is a 4x10 matrix (4 rows and 10 columns).

Is it really this hard? Matrix method for counting Hamiltonian cycles, and: Number of Hamiltonian Paths from Lower Left to Upper Right. These papers seem dated (1994, 1997), but are they really asking a question that qualifies for publication in a combinatorics journal?

Then I ran into this: SO question: number of Hamiltonian paths. I am thinking dynamic programming, or divide and conquer, but it is really not clear how one would go about doing this. Is there a way to solve this problem in reasonable time?
Number of hamiltonian tours from upper left to lower left corner of a grid graph?
dynamic programming;discrete mathematics;divide and conquer;hamiltonian path
null
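As a sanity check on the counting problem, a brute-force sketch (my own, not taken from the linked papers). It is exponential time, so only feasible for small grids; the 4x10 instance would need something like a broken-profile dynamic program over the columns:

```python
# Brute-force count of Hamiltonian paths in a rows x cols grid graph,
# from the upper-left corner (0, 0) to the lower-left corner (rows-1, 0).
def count_hamiltonian_paths(rows, cols):
    total = rows * cols
    start, goal = (0, 0), (rows - 1, 0)

    def dfs(cell, visited):
        # A path counts only if it ends at the goal having visited every cell.
        if len(visited) == total:
            return 1 if cell == goal else 0
        r, c = cell
        count = 0
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                visited.add((nr, nc))
                count += dfs((nr, nc), visited)
                visited.remove((nr, nc))
        return count

    return dfs(start, {start})
```

For example, in a 2x2 grid there is exactly one such path: (0,0) to (0,1) to (1,1) to (1,0).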
_unix.218300
I've recently installed lxc on ubuntu 14.04.Networking used to work (lxcbr0 being a NAT interface, DHCP configured using dnsmasq) but after a container restart, the networking does not work any more. No further IP's are assigned, neither are the containers linked to lxcbr0.I've tried to purge and reinstall lxc, neither did that help.Following are container and network configurationsNAME STATE IPV4 IPV6 AUTOSTART ------------------------------------znc RUNNING - - NOip link and ifconfigeth0 Link encap:Ethernet Hardware Adresse 52:54:81:57:7b:97 inet Adresse:x.x.x.x Bcast:46.38.245.255 Maske:255.255.254.0 inet6-Adresse: xy Gltigkeitsbereich:Verbindung UP BROADCAST RUNNING MULTICAST MTU:1500 Metrik:1 RX-Pakete:478039 Fehler:0 Verloren:0 berlufe:0 Fenster:0 TX-Pakete:12195 Fehler:0 Verloren:0 berlufe:0 Trger:0 Kollisionen:0 Sendewarteschlangenlnge:1000 RX-Bytes:142734376 (142.7 MB) TX-Bytes:1930229 (1.9 MB)lo Link encap:Lokale Schleife inet Adresse:127.0.0.1 Maske:255.0.0.0 inet6-Adresse: ::1/128 Gltigkeitsbereich:Maschine UP LOOPBACK RUNNING MTU:65536 Metrik:1 RX-Pakete:44 Fehler:0 Verloren:0 berlufe:0 Fenster:0 TX-Pakete:44 Fehler:0 Verloren:0 berlufe:0 Trger:0 Kollisionen:0 Sendewarteschlangenlnge:0 RX-Bytes:2864 (2.8 KB) TX-Bytes:2864 (2.8 KB)lxcbr0 Link encap:Ethernet Hardware Adresse fe:f7:54:76:df:66 inet Adresse:10.0.3.1 Bcast:10.0.3.255 Maske:255.255.255.0 inet6-Adresse: fe80::ac8b:6fff:fe2b:5e92/64 Gltigkeitsbereich:Verbindung UP BROADCAST RUNNING MULTICAST MTU:1500 Metrik:1 RX-Pakete:59 Fehler:0 Verloren:0 berlufe:0 Fenster:0 TX-Pakete:50 Fehler:0 Verloren:0 berlufe:0 Trger:0 Kollisionen:0 Sendewarteschlangenlnge:0 RX-Bytes:11272 (11.2 KB) TX-Bytes:9651 (9.6 KB)vethJDA1JG Link encap:Ethernet Hardware Adresse fe:f7:54:76:df:66 inet6-Adresse: fe80::fcf7:54ff:fe76:df66/64 Gltigkeitsbereich:Verbindung UP BROADCAST RUNNING MULTICAST MTU:1500 Metrik:1 RX-Pakete:29 Fehler:0 Verloren:0 berlufe:0 Fenster:0 TX-Pakete:19 Fehler:0 Verloren:0 berlufe:0 Trger:0 
Kollisionen:0 Sendewarteschlangenlnge:1000 RX-Bytes:6014 (6.0 KB) TX-Bytes:4090 (4.0 KB)1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group defaultlink/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:002: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000link/ether xy3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group defaultlink/ether fe:f7:54:76:df:66 brd ff:ff:ff:ff:ff:ff7: vethJDA1JG: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP mode DEFAULT group default qlen 1000link/ether fe:f7:54:76:df:66 brd ff:ff:ff:ff:ff:ffcontainer configuration and log# Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu# Parameters passed to the template:# For additional config options, please look at lxc.container.conf(5)# Common configurationlxc.include = /usr/share/lxc/config/ubuntu.common.conf# Container specific configurationlxc.rootfs = /var/lib/lxc/znc/rootfslxc.mount = /var/lib/lxc/znc/fstablxc.utsname = znclxc.arch = amd64# Network configurationlxc.network.type = vethlxc.network.flags = uplxc.network.link = lxcbr0lxc.network.hwaddr = 00:16:3e:d2:db:d6lxc-start 1437823814.247 DEBUG lxc_cgmanager - cgmanager.c:cgm_setup_limits:1245 - cgroup 'devices.allow' set to 'c 10:228 rwm' lxc-start 1437823814.247 DEBUG lxc_cgmanager - cgmanager.c:cgm_setup_limits:1245 - cgroup 'devices.allow' set to 'c 10:232 rwm' lxc-start 1437823814.247 INFO lxc_cgmanager - cgmanager.c:cgm_setup_limits:1249 - cgroup limits have been setup lxc-start 1437823814.247 INFO lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:157 - changed apparmor profile to lxc-container-default lxc-start 1437823814.248 NOTICE lxc_start - start.c:start:1152 - exec'ing '/sbin/init' lxc-start 1437823814.252 NOTICE lxc_start - start.c:post_start:1163 - '/sbin/init' started with pid '22703' lxc-start 1437823814.252 WARN 
lxc_start - start.c:signal_handler:295 - invalid pid for SIGCHLD lxc-start 1437823814.252 DEBUG lxc_commands - commands.c:lxc_cmd_handler:888 - peer has disconnected lxc-start 1437823814.256 DEBUG lxc_commands - commands.c:lxc_cmd_get_state:574 - 'znc' is in 'RUNNING' state lxc-start 1437823814.257 DEBUG lxc_commands - commands.c:lxc_cmd_handler:888 - peer has disconnected
Default lxc networking suddenly stops working
linux;networking;lxc
null
_cs.2120
If I were to let the variables be the propositions and the constraint be that all clauses are satisfied, which technique would be more effective in solving 3-SAT: forward checking or arc consistency? From what I gathered, forward checking is $O(n)$, while arc consistency is about $O(8c)$, where $c$ is the number of constraints (according to this page). So perhaps forward checking is faster somehow? How should I determine which to use?
Forward checking vs arc consistency on 3-SAT
algorithms;satisfiability;heuristics;3 sat;sat solvers
null
_unix.175777
While trying to install CentOS 6.4 in dual boot with Windows 7, I got the error "sda must have gpt label". I don't know how to fix it, and I also don't want to repartition my hard drive.
While installing CentOS, the error is 'sda must have gpt label'
centos;partition;system installation;gpt
null
_vi.5232
Also asking: how to apply an ex range to a normal command.

In a file of some 125k lines with a preamble followed by 8 columns of 10-character-wide text, the contents of each column are 8 characters of numbers followed by spaces, except that the last column has no trailing space. If I wanted to wrap it such that each column becomes a row in order, I would set tw=9 (or 8 or 10?) and then use gq to wrap it. I hoped the easiest thing to do would be to go to the first line after the preamble and type something like:

:.,$gq

As you know, gq is not an ex mode command. I couldn't find in the help where the ex mode version of a formatting command is. I'd love an answer either way: if it does exist, what is it; and if it doesn't, what would you do to apply the gq normal mode command to everything to the end of the file? Is there something better than typing :200000gq ?

EG file:

These are very nice data points from our experiment.
This was carried out at our big data point lab by [email protected]
If you positioned your cursor on the fifth line, what would you
type in vim to turn all of these points into their own line?

 1 10 100 1000 10000 100000 1000000 10000000
10000001 10000010 10000100 10001000 10010000 10100000 11000000 20000000
... 125k lines later
88888881 18888818 18888188 18881888 18818888 18188888 11888888 28888888

Where the desired format would be:

These are very nice data points from our experiment.
This was carried out at our big data point lab by [email protected]
If you positioned your cursor on the fifth line, what would you
type in vim to turn all of these points back into columns? [BONUS]

1
10
100
1000
10000
100000
1000000
10000000
... 1.25M lines of data points

Padding or alignment aren't important now.
Hard wrapping a range to textwidth with an ex command
cursor motions;ex mode;wrapping
I set tw=9 as you mentioned. Then I removed the spaces before the 1 in the first row and also left a blank line between the paragraph and the numbers.

These are very nice data points from our experiment.
This was carried out at our big data point lab by [email protected]
If you positioned your cursor on the fifth line, what would you
type in vim to turn all of these points into their own line?

1 10 100 1000 10000 100000 1000000 10000000
10000001 10000010 10000100 10001000 10010000 10100000 11000000 20000000
88888881 18888818 18888188 18881888 18818888 18188888 11888888 28888888

Then I put the cursor anywhere on the numbers and typed gqip. This is the result I obtained:

These are very nice data points from our experiment.
This was carried out at our big data point lab by [email protected]
If you positioned your cursor on the fifth line, what would you
type in vim to turn all of these points into their own line?

1
10
100
1000
10000
100000
1000000
10000000
10000001
10000010
10000100
10001000
10010000
10100000
11000000
20000000
88888881
18888818
18888188
18881888
18818888
18188888
11888888
28888888
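For the ex-mode half of the question: any normal-mode command can be applied over a range with :normal (see :help :normal), which runs the given keystrokes once per line in the range. A hedged sketch, assuming the cursor is on the first data line and tw is already set:

```vim
:.,$normal gqq
```

Note that gqq may split one long line into several, so on a file this size it is worth trying it on a small range first; filtering the range through an external formatter (e.g. :.,$!fmt -w 9) is another common route.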
_unix.293609
I use -s regularly to find the size of a file. It works cross-platform. Is there a similar well-tested way of finding the size of a block device? We are not talking about the size of a filesystem or the free space on a filesystem, but the size of the actual block device.
Perl: General way of finding size of block device
perl;block device
null
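One widely used trick for the question above is to seek to the end of the device; the resulting offset is the size in bytes. A Python sketch of the idea (in Perl the same thing can be done with sysseek($fh, 0, 2), i.e. SEEK_END on a filehandle opened on the device):

```python
import os

def device_size(path):
    # Seek to the end of the opened file descriptor; lseek returns the
    # resulting offset, i.e. the size in bytes. On Linux this also works
    # for block devices such as /dev/sda, where a plain stat size is 0.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)
```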
_webmaster.68805
Using Apache 2.0 (2.0.53) on a particular legacy system, enabling mod_deflate causes directly-served content to be gzip-encoded just fine. However, and this is my problem, content proxied from another server using ProxyPass is not getting gzip-encoded. Wireshark shows me there is no encoding header in the response and the body is in plain text.

I have another similar system, also running Apache 2.0 (2.0.52), where the proxied content IS being gzipped, and I can't find any significant difference between the configuration of the two systems.

Details: mod_deflate is being enabled using a file /etc/httpd/conf.d/deflate.conf containing:

<ifModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/xml text/css text/plain
AddOutputFilterByType DEFLATE text/javascript application/javascript application/x-javascript application/json
</ifModule>

The proxying is being done within the <VirtualHost> section using these lines:

ProxyPass /mtserver http://localhost:8000
ProxyPassReverse /mtserver http://localhost:8000

Can anyone point me at a possible reason for one system not compressing the proxied content?
Problem using mod_deflate with ProxyPass
apache;proxy;gzip
null
_codereview.82454
I have started a project in PHP with PDO and I'm almost done, but I've read somewhere that PDO escaping alone is not secure and we have to consider some settings of PDO. I am a little confused about my PDO class and usage.

public function __construct(){
    // set DSN
    $dsn = 'mysql:host=' . $this->host . ';dbname=' . $this->dbname . ';charset=utf8';
    // set options
    $options = array(
        PDO::ATTR_PERSISTENT => true,
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8",
        PDO::ATTR_EMULATE_PREPARES => false
    );
    // create a new PDO instance
    try {
        $this->dbh = new PDO($dsn, $this->user, $this->pass, $options);
    }
    // catch any errors
    catch(PDOException $e){
        $this->error = $e->getMessage();
    }
}

public function query($query){
    $this->stmt = $this->dbh->prepare($query);
}

public function bind($param, $value, $type = null){
    if(is_null($type)){
        switch(true){
            case is_int($value):
                $type = PDO::PARAM_INT;
                break;
            case is_bool($value):
                $type = PDO::PARAM_BOOL;
                break;
            case is_null($value):
                $type = PDO::PARAM_NULL;
                break;
            default:
                $type = PDO::PARAM_STR;
        }
    }
    $this->stmt->bindValue($param, $value, $type);
}

public function execute(){
    return $this->stmt->execute();
}

public function resultset(){
    $this->execute();
    return $this->stmt->fetchAll(PDO::FETCH_OBJ);
}

public function single(){
    $this->execute();
    return $this->stmt->fetch(PDO::FETCH_OBJ);
}

public function rowCount(){
    return $this->stmt->rowCount();
}

Queries:

$db->query("SELECT * FROM XYZ WHERE some = :some");
$db->bind(':some', $somevar);
$result = $db->resultset(); // for select
$result = $db->single();    // if we want a single row

For inserting and updating:

$db->query("INSERT INTO XYZ (x,y,z) VALUES (:x,:y,:z)");
$db->bind(':x', $x);
$db->bind(':y', $y);
$db->bind(':z', $z);
$db->execute();

Is there anything wrong which will affect me in future? I have searched many things about PDO and have updated my class accordingly, but I want to be sure about my project.
PDO class and security
php;security;pdo;wrapper
null
_codereview.138736
I have been trying to develop an online food order application. I have taken the concept of zomato.com, where a user (say, an owner) registers his/her restaurant. After adding the restaurant, an executive from the company will fix a meeting, take pictures and the menu of the restaurant himself/herself, or ask the owner to mail them. Long story short, an admin will list all the menus of the restaurant.

Note: A user can have multiple restaurants. A restaurant has multiple menu items.

For such a process flow and a structure alike Zomato, will this database design fit well? Is my database designed following best practices? If I have missed any part, please pitch in your thoughts and ideas.

restaurant/models.py

from django.contrib.auth.models import User
from django.core.validators import URLValidator
from django.db import models


class Restaurant(models.Model):
    OPEN = 1
    CLOSED = 2
    OPENING_STATUS = (
        (OPEN, 'open'),
        (CLOSED, 'closed'),
    )

    BREAKFAST = 1
    LAUNCH = 2
    DINNER = 3
    DELIVERY = 4
    CAFE = 5
    LUXURY = 6
    NIGHT = 7
    FEATURE_CHOICES = (
        (BREAKFAST, 'breakfast'),
        (LAUNCH, 'launch'),
        (DINNER, 'dinner'),
        (DELIVERY, 'delivery'),
        (CAFE, 'cafe'),
        (LUXURY, 'luxury dining'),
        (NIGHT, 'night life'),
    )

    MONDAY = 1
    TUESDAY = 2
    WEDNESDAY = 3
    THURSDAY = 4
    FRIDAY = 5
    SATURDAY = 6
    SUNDAY = 7
    TIMING_CHOICES = (
        (MONDAY, 'monday'),
        (TUESDAY, 'tuesday'),
        (WEDNESDAY, 'wednesday'),
        (THURSDAY, 'thursday'),
        (FRIDAY, 'friday'),
        (SATURDAY, 'saturday'),
        (SUNDAY, 'sunday'),
    )

    user = models.ForeignKey(User)
    restaurant_name = models.CharField(max_length=150, db_index=True)
    slug = models.SlugField(max_length=150, db_index=True)
    address = models.CharField(max_length=100)
    city = models.CharField(max_length=100)
    restaurant_phone_number = models.PositiveIntegerField()
    restaurant_email = models.EmailField(blank=True, null=True)
    owner_email = models.EmailField(blank=True, null=True)
    opening_status = models.IntegerField(choices=OPENING_STATUS, default=OPEN)
    email = models.EmailField()
    restaurant_website = models.TextField(validators=[URLValidator()])
    features = models.IntegerField(choices=FEATURE_CHOICES, default=DINNER)
    timings = models.IntegerField(choices=TIMING_CHOICES, default=MONDAY)
    opening_from = models.TimeField()
    opening_to = models.TimeField()
    facebook_page = models.TextField(validators=[URLValidator()])
    twitter_handle = models.CharField(max_length=80, blank=True, null=True)
    other_details = models.TextField()
    available = models.BooleanField(default=True)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    class Meta:
        verbose_name = 'restaurant'
        verbose_name_plural = 'restaurants'
        ordering = ('restaurant_name',)
        index_together = (('id', 'slug'),)

    def __str__(self):
        return self.restaurant_name

    # def get_absolute_url(self):
    #     return reverse('restaurant:restaurant_detail', args=[self.id, self.slug])


class Category(models.Model):
    name = models.CharField(max_length=120, db_index=True)  # veg, non-veg
    slug = models.SlugField(max_length=120, db_index=True)

    class Meta:
        ordering = ('name',)
        verbose_name = 'category'
        verbose_name_plural = 'categories'

    def __str__(self):
        return self.name


class Menu(models.Model):
    category = models.ForeignKey(Category, related_name='menu')
    restaurant = models.ForeignKey(Restaurant, related_name='restaurant_menu')
    name = models.CharField(max_length=120, db_index=True)
    slug = models.SlugField(max_length=120, db_index=True)
    image = models.ImageField(upload_to='products/%Y/%m/%d', blank=True)
    description = models.TextField(blank=True)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    stock = models.PositiveIntegerField()
    available = models.BooleanField(default=True)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    class Meta:
        ordering = ('name',)
        index_together = (('id', 'slug'),)
        verbose_name = 'menu'

    def __str__(self):
        return self.name

    # def get_absolute_url(self):
    #     return reverse('restaurant:menu_detail', args=[self.id, self.slug])

orders/models.py

from django.db import models

from restaurant.models import Menu


class Order(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    email = models.EmailField()
    address = models.CharField(max_length=250)
    postal_code = models.CharField(max_length=20)
    city = models.CharField(max_length=50)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)
    paid = models.BooleanField(default=False)

    class Meta:
        ordering = ('-created',)

    def __str__(self):
        return 'Order {}'.format(self.id)

    def get_total_cost(self):
        return sum(item.get_cost() for item in self.items.all())


class OrderMenu(models.Model):
    order = models.ForeignKey(Order, related_name='menu')
    menu = models.ForeignKey(Menu, related_name='order_menu')
    price = models.DecimalField(max_digits=10, decimal_places=2)
    quantity = models.PositiveIntegerField(default=1)

    def __str__(self):
        return '{}'.format(self.id)

    def get_cost(self):
        return self.price * self.quantity
Model design for Online Food order app
python;database;django
null
_unix.99032
I installed mailutils on my Linux machine with Linux Mint 15 (basically Ubuntu 13.04). When I log in, I get the following:

Welcome to Linux Mint 15 Olivia (GNU/Linux 3.8.0-32-generic i686)
Welcome to Linux Mint
 * Documentation: http://www.linuxmint.com
No mail.
Last login: Wed Nov 6 01:33:10 2013 from xxxxxxxxxxxxxxxxxxxxxxx
BFL SingleSC: 3s ago - [2013-11-06 01:37:33] 5s:57.83 avg:57.96 u:56.96 Gh/s

I have added the last line, colorized as I prefer, as a custom script in ~/.bashrc that updates me on the status of my BFL bitcoin hashing rig.

I now want to color the rest of it, especially the "No mail." line from mailutils, and remove the duplicated 'Welcome to Linux Mint' messages and newline. I've been searching for the mailutils section specifically, and can't find any reference to it in:

~/.bashrc
~/.profile
/etc/profile
/etc/profile.d/*
/etc/bashrc
/etc/init.d/*
/etc/rc.local
/etc/rc*.d (1, 2, 3, 4, 5, 6, S)

So, how do I go about finding where these messages are generated so I can modify and color them as I like?
How do I track down the source of a ssh login message?
scripting;login;bashrc;profile;mail command
Updated answer based on some researching

Remove duplicated welcome messages

Since you log in with ssh, the first welcome message should be coming from /etc/issue.net. To remove the message, just remove the contents of that file. To remove the second welcome message, remove the contents of /etc/motd.

Colorize the line about mail

To colorize that line, the easiest option I can think of requires quite a bit of low-level work: modifying and building pam_mail.so yourself. These are the steps for modifying it and installing the modified version:

1. Download the Linux-PAM source from linux-pam.org (the official project site).

2. Extract the source (this will create a new directory named Linux-PAM-1.1.8) and cd to it:

# tar xzvf Linux-PAM-1.1.8.tar.gz
# cd Linux-PAM-1.1.8

3. Change the following lines (the lines which begin with +, 4 lines in all) in the file modules/pam_mail/pam_mail.c, as the following diff shows (produced with diff -u; the filename pam_mail.c.new is just my temporary file so that I could produce the diff):

--- pam_mail.c	2013-06-18 17:11:21.000000000 +0300
+++ pam_mail.c.new	2013-12-29 16:57:49.759298926 +0200
@@ -294,17 +294,17 @@
       switch (type) {
       case HAVE_NO_MAIL:
-	retval = pam_info (pamh, "%s", _("No mail."));
+	retval = pam_info (pamh, "%s", _("\033[0;1;31mNo mail.\033[0m"));
 	break;
       case HAVE_NEW_MAIL:
-	retval = pam_info (pamh, "%s", _("You have new mail."));
+	retval = pam_info (pamh, "%s", _("\033[0;1;31mYou have new mail.\033[0m"));
 	break;
       case HAVE_OLD_MAIL:
-	retval = pam_info (pamh, "%s", _("You have old mail."));
+	retval = pam_info (pamh, "%s", _("\033[0;1;31mYou have old mail.\033[0m"));
 	break;
       case HAVE_MAIL:
       default:
-	retval = pam_info (pamh, "%s", _("You have mail."));
+	retval = pam_info (pamh, "%s", _("\033[0;1;31mYou have mail.\033[0m"));
 	break;
       }
     else

I have simply added \033[0;1;31m to the beginning of those messages and \033[0m to the end of those messages.

Note: now it displays those messages as red; on ascii-table.com's page about ANSI escape sequences, under the title "Set Graphics Mode", you can find a more complete list of colors and other tricks for customising terminal output.

4. Compile it (note: from here to the end I assume that your working directory is Linux-PAM-1.1.8, the very same directory to which we cd'd at the beginning, i.e. the root directory of the Linux-PAM package):

# ./configure
# make

5. Back up your existing pam_mail.so in case the new one breaks your system (I doubt it will, but it's always good to keep the original file safe):

# cp /lib/i386-linux-gnu/security/pam_mail.so ~/

6. Copy the file modules/pam_mail/.libs/pam_mail.so to /lib/i386-linux-gnu/security/:

# cp modules/pam_mail/.libs/pam_mail.so /lib/i386-linux-gnu/security/

7. Log out and in again (or start a new ssh session, whatever), and you should see a red "No mail." message (assuming you have no new mail).

The old, obsolete answer

The mail message can be disabled by changing the following line in the file /etc/pam.d/system-login from

session    optional   pam_mail.so dir=/var/spool/mail standard

to

session    optional   pam_mail.so dir=/var/spool/mail nopen

Reference from archlinux's forums.

The text before the mail information is in /etc/motd, and you can stop it being printed at ssh login by putting the following line in /etc/ssh/sshd_config (PrintMotd is a server-side sshd option):

PrintMotd no
_computerscience.1669
Why must a surface have G2 continuity (class A surface) for perfect reflections? I would like a mathematical answer.
Why must a surface have G2 continuity for perfect reflections?
rendering
null
_softwareengineering.49503
I've always wondered why some companies want you to register when you install their programs. Personally, I just find it annoying and decline, but what do companies have to gain by having their users register? They claim that you get the latest updates and stuff, but often you get that anyway without having to register. So what's the deal with this registering? I just don't see the point.
Why do companies want you to register with their program?
industry;users
Marketing Database & Upsell. I can think of no other good reason. They know how many users they sell the product to, or how many times it's been downloaded.If there's an update - well, either the program will have an update program built in, or you'll find it on their website when you get a problem.Critical Bug notification? Maybe...but to be honest, see above.
_codereview.104879
I'm writing a Telephone class and it has a method called getDigits which takes in a String parameter and returns the number that would appear on your phone if you typed in those letters the old-fashioned way.

Example: typing in "CAT" would return 228.

I wrote the following code and it works, but I wanted to know if anyone else out there had better or more sophisticated ways to do it. I'm learning about data abstraction/enums and maybe that's a clue to how my professor wanted me to approach this, but I'm stumped.

public int getDigits(String test){
    String result = "";
    String param = test.toUpperCase();
    for(int i = 0; i < param.length(); i++){
        String s = Character.toString(param.charAt(i));
        if(s.equals("A") || s.equals("B") || s.equals("C")){
            result += 2;
        }
        else if(s.equals("D") || s.equals("E") || s.equals("F")){
            result += 3;
        }
        else if(s.equals("G") || s.equals("H") || s.equals("I")){
            result += 4;
        }
        else if(s.equals("J") || s.equals("K") || s.equals("L")){
            result += 5;
        }
        else if(s.equals("M") || s.equals("N") || s.equals("O")){
            result += 6;
        }
        else if(s.equals("P") || s.equals("Q") || s.equals("R") || s.equals("S")){
            result += 7;
        }
        else if(s.equals("T") || s.equals("U") || s.equals("V")){
            result += 8;
        }
        else if(s.equals("W") || s.equals("X") || s.equals("Y") || s.equals("Z")){
            result += 9;
        }
    }
    return Integer.parseInt(result);
}
Creating a numeric phone keypad
java;beginner;strings;converting
null
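One common alternative to the if/else chain in the question (my own sketch, not necessarily the enum-based design the professor had in mind) is a lookup string indexed by letter:

```java
public class Telephone {
    // One digit per letter A..Z: ABC->2, DEF->3, GHI->4, JKL->5,
    // MNO->6, PQRS->7, TUV->8, WXYZ->9.
    private static final String DIGITS = "22233344455566677778889999";

    public static int getDigits(String test) {
        StringBuilder result = new StringBuilder();
        for (char ch : test.toUpperCase().toCharArray()) {
            if (ch >= 'A' && ch <= 'Z') {
                // Index into the lookup string by the letter's offset from 'A'.
                result.append(DIGITS.charAt(ch - 'A'));
            }
        }
        return Integer.parseInt(result.toString());
    }
}
```

For example, getDigits("CAT") returns 228. This also quietly skips non-letter characters, which the original code does as well.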
_cs.80433
Why, given two equivalent Mealy and Moore machines with the same initial state, is the output of the Moore machine delayed by an interval $\Delta$ of the time sequence for the same input sequence?
Why is a Moore Machine delayed compared to the equivalent Mealy Machine?
automata;finite automata
null
_unix.61881
Some software that I install on my Ubuntu doesn't appear in the Dash Home menu. For example, I installed Komodo Edit (through a .sh file). If I want to run this program, I have to go to the directory where it's located and then click on it. If I type Komodo in the Dash Home menu, it just won't appear. Could it be because I didn't install it through 'apt-get' (Komodo is not available there)? How can I enable a program in the Dash Home menu?
Installed programs not appearing at the Dash Home?
ubuntu
Try creating a komodoedit.desktop file, save it in ~/.local/share/applications/, and make it point to your executable file.

[Desktop Entry]
Name=Komodo Edit
Comment=Komodo and stuff
Exec=/path/to/the/executable/file
Icon=/path/to/an/appropriate/icon
Type=Application
Terminal=false
Categories=Application;Utility;TextEditor;
MimeType=text/plain

Adjust the values accordingly. You should now be able to at least find it in the menu or search options.
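The file can also be created from a terminal in one go; a sketch, with the Exec/Icon paths as placeholders to adapt:

```shell
# Create the per-user applications directory and write the .desktop entry.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/komodoedit.desktop <<'EOF'
[Desktop Entry]
Name=Komodo Edit
Comment=Komodo and stuff
Exec=/path/to/the/executable/file
Icon=/path/to/an/appropriate/icon
Type=Application
Terminal=false
Categories=Application;Utility;TextEditor;
EOF
```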
_cs.53019
I want to map the various combinations to a unique index. For a given $n$ and $r$, we have $\binom{n}{r}$ arrangements of values from $[0,\dots,n)$.

Ex: For n = 6, r = 3: [012, 013, 014, 015, ..., 345]

So, for any given arrangement, [012]'s index would be 0, as it is the very first element. Similarly, the index for arrangement [015] should be 3. I am trying to come up with a formula to determine the correct index for a given arrangement.
Computing the index in a structured way
combinatorics;discrete mathematics;combinatory logic
null
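The standard construction for the question above is the combinatorial number system: the lexicographic index of a sorted arrangement is the count of combinations that precede it. A sketch:

```python
from math import comb

def combination_index(arrangement, n):
    # Lexicographic rank of a sorted r-combination of {0, ..., n-1}.
    # For each position i, count the combinations that agree on the first i
    # elements but have a smaller element at position i.
    r = len(arrangement)
    index = 0
    prev = -1
    for i, c in enumerate(arrangement):
        for j in range(prev + 1, c):
            # Fixing j at position i leaves r-1-i elements to choose
            # from the n-1-j values above j.
            index += comb(n - 1 - j, r - 1 - i)
        prev = c
    return index
```

With n = 6, r = 3 this gives combination_index([0, 1, 2], 6) == 0 and combination_index([0, 1, 5], 6) == 3, matching the examples in the question.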
_unix.252475
I'm installing Debian testing on a SSD, there is a pre-existing EFI ESP partition from windows installation, the disk uses GPT partitioning. Problem is after it finishes installing it won't boot linux or grub. In my motherboard UEFI boot order I see a japenese sign that appeared after the installation, when I try to boot on it it won't do anything. Any idea ?efibootmgr -vroot@ubuntu:~# efibootmgr -vBootCurrent: 000BTimeout: 1 secondsBootOrder: 0002,0006,0007,0008,000A,000B,000C,0000Boot0000* debian VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)Boot0002* Windows Boot Manager HD(2,GPT,b790d826-8e17-4ec7-b89b-12d783ec520e,0xe1800,0x32000)/File(\EFI\MICROSOFT\BOOT\BOOTMGFW.EFI)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...9................Boot0006* Hard Drive BBS(HD,,0x0)..GO..NO..........N.1.-.S.a.m.s.u.n.g. .S.S.D. .9.5.0. .P.R.O. .5.1.2.G.B....................A........................1.N........>.;......N..Gd-.;.A..MQ..L.N.1.-.S.a.m.s.u.n.g. .S.S.D. .9.5.0. .P.R.O. .5.1.2.G.B........BO..NO........o.W.D.C. .W.D.6.0.0.1.F.F.W.X.-.6.8.Z.3.9.N.0....................A...........................>..Gd-.;.A..MQ..L. . . . .W. .-.D.X.W.1.4.7.D.L.5.8.N.5.H........BOBoot0007* CD/DVD Drive BBS(CDROM,,0x0)..GO..NO........o.A.T.A.P.I. . . .i.H.A.S.1.2.4. . . .E....................A...........................>..Gd-.;.A..MQ..L.5.3.4.2.0.7. .3.L.2.4.8.3.4.0.5.4.9.9.8........BOBoot0008* USB BBS(USB,,0x0)..GO..NO........i.V.e.r.b.a.t.i.m.S.T.O.R.E. .N. .G.O. .1.1.0.0....................A.............................6..Gd-.;.A..MQ..L.1.3.1.1.0.4.0.0.0.0.0.0.4.3.2.4........BO..NO........}. .M.E.M.U.P. 
.1...0.0....................A.............................J..Gd-.;.A..MQ..L.0.9.0.2.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.5.4.8........BOBoot000A* UEFI: ATAPI iHAS124 E PciRoot(0x0)/Pci(0x11,0x4)/Sata(2,65535,0)/CDROM(0,0x33f,0xdffb0)/HD(1,MBR,0x0,0x20,0x7fe0)..BOBoot000B* UEFI: VerbatimSTORE N GO 1100 PciRoot(0x0)/Pci(0x1a,0x0)/USB(1,0)/USB(2,0)/HD(1,MBR,0x0,0x800,0x1dd9000)..BOBoot000C* UEFI: MEMUP 1.00 PciRoot(0x0)/Pci(0x1d,0x0)/USB(1,0)/USB(5,0)/HD(1,MBR,0x0,0x800,0xeeb800)..BOtree EFI :root@ubuntu:/# mount /dev/nvme0n1p2 /mntroot@ubuntu:/# ls mntEFIroot@ubuntu:/# tree /mnt/mnt EFI Boot bootx64.efi debian grubx64.efi Microsoft Boot BCD BCD.LOG BCD.LOG1 BCD.LOG2 bg-BG bootmgfw.efi.mui bootmgr.efi.mui bootmgfw.efi bootmgr.efi BOOTSTAT.DAT boot.stl cs-CZ bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui da-DK bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui de-DE bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui el-GR bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui en-GB bootmgfw.efi.mui bootmgr.efi.mui en-US bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui es-ES bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui es-MX bootmgfw.efi.mui bootmgr.efi.mui et-EE bootmgfw.efi.mui bootmgr.efi.mui fi-FI bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui Fonts chs_boot.ttf cht_boot.ttf jpn_boot.ttf kor_boot.ttf malgun_boot.ttf malgunn_boot.ttf meiryo_boot.ttf meiryon_boot.ttf msjh_boot.ttf msjhn_boot.ttf msyh_boot.ttf msyhn_boot.ttf segmono_boot.ttf segoen_slboot.ttf segoe_slboot.ttf wgl4_boot.ttf fr-CA bootmgfw.efi.mui bootmgr.efi.mui fr-FR bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui hr-HR bootmgfw.efi.mui bootmgr.efi.mui hu-HU bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui it-IT bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui ja-JP bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui kd_02_10df.dll kd_02_10ec.dll kd_02_1137.dll kd_02_14e4.dll kd_02_15b3.dll kd_02_1969.dll kd_02_19a2.dll kd_02_8086.dll kd_07_1415.dll kd_0C_8086.dll kdstub.dll ko-KR bootmgfw.efi.mui 
bootmgr.efi.mui memtest.efi.mui lt-LT bootmgfw.efi.mui bootmgr.efi.mui lv-LV bootmgfw.efi.mui bootmgr.efi.mui memtest.efi nb-NO bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui nl-NL bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui pl-PL bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui pt-BR bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui pt-PT bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui qps-ploc bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui Resources bootres.dll fr-FR bootres.dll.mui ro-RO bootmgfw.efi.mui bootmgr.efi.mui ru-RU bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui sk-SK bootmgfw.efi.mui bootmgr.efi.mui sl-SI bootmgfw.efi.mui bootmgr.efi.mui sr-Latn-CS bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui sr-Latn-RS bootmgfw.efi.mui bootmgr.efi.mui sv-SE bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui tr-TR bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui uk-UA bootmgfw.efi.mui bootmgr.efi.mui zh-CN bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui zh-HK bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui zh-TW bootmgfw.efi.mui bootmgr.efi.mui memtest.efi.mui Recovery BCD BCD.LOG BCD.LOG1 BCD.LOG2
Installing Debian but can't boot
debian;boot;grub;uefi
To me, it seems that something went wrong with the installation of GRUB. I would try to create a new firmware boot entry first:

efibootmgr -c -d /dev/disk/by-uuid/b790d826-8e17-4ec7-b89b-12d783ec520e -p 2 -l /EFI/debian/grubx64.efi -L Debian

(for more information see e.g. https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface#efibootmgr)

This will not touch other boot options and can be removed if it didn't work.

If this doesn't fix your installation, you can try to re-install GRUB. You can do this by booting a live CD (preferably one which matches your installation) and chrooting into your installation. Afterwards you should run grub-install and update-grub (for more information, again, see e.g. https://wiki.archlinux.org/index.php/GRUB#Installation_2; though you probably can't use arch-chroot with a Debian installation).
_webmaster.104659
I have searched the Google Accounts and Google Sites services. Nowhere was I able to find anything relating to any domain settings. The only thing I found about domain transfer was in Google Domains, but it does not work in Finland.
How do I transfer a Finland based domain from Google Sites to another hosting service?
google;domains;web hosting;transfer
Domain transfers are done by the receiving registrar. You get in touch with them, provide them with the domain information, and state that you want to transfer it in to them from another registrar; they will then contact the current registrar to transfer the domain. There is no fixed form to do this from Google's end. What you will probably need to do (going generic here, as I am not familiar with the specifics of the Finnish domain system) is lodge a support request with Google Domains requesting the transfer code and asking that client transfer be enabled; with that information the receiving registrar will be able to transfer the domain to their servers.

I should note that this does not transfer the hosted website itself, only the domain name. If you wish to transfer the site itself you will need to do that yourself, and it is beyond the scope of this question to go into that.
_softwareengineering.126370
Currently I'm starting a new system at my company, and we are using a good separation between models, views and controllers: basically using ASP.NET MVC 3 for the user UI, and a C# class library for the model.

The question is about modelling a model.

We are using LINQ-to-SQL as a Data Access Layer, and modelling entities over this DAL. Example:

// DAL, an autogenerated .dbml file
...
public System.Data.Linq.Table<TB_USER> TB_USERs
{
    get { return this.GetTable<TB_USER>(); }
}
...

And we are mapping this table to an entity, like below:

public class User
{
    // Entity, mirroring a .dbml table
    public static IEnumerable<User> GetAll()
    {
        var db = new MyDataContext();
        var userList = (from u in db.TB_USERs select u).ToList();
        IEnumerable<User> retorno = userList.ConvertAll(u => (User)u);
        return retorno;
    }

    // Active Record ?
    public static User Save(User user) { ... }
}

Is this kind of modelling correct? It feels like I'm repeating myself by having two entities meaning the same thing (User and TB_USER), but TB_USER is the raw representation of the database table that persists the User entity.

And what about the GetAll method, a static method created on the entity with the sole purpose of retrieving all of them? That means that if I want to retrieve data using a filter, for example, I have to create another GetDataBy... method.

And what about the Save method? I know it's supposed to save the state of one User, but what if I have to save some random User object along with other objects to make a transaction? Shouldn't this kind of transaction control be in the database?
Best practice modelling an active record entity using Linq-To-SQL as DAL
c#;architecture;asp.net mvc 3;linq
If you are going to stick with LINQ-to-SQL, you probably want to use the LINQ-to-SQL classes as your entities. That is, rename TB_USER to User, and then wrap the interaction with LINQ-to-SQL in repositories, i.e. a UserRepository with GetById, GetByUserName, Save, and similar methods, depending on your specific needs. This keeps the data access in one place.

With LINQ-to-SQL, beware of the temptation to have LINQ expressions that go off and query the DB scattered across your code base. LINQ-to-SQL seems to encourage that in my experience, but it leads to very tight coupling.

If you want a cleaner cut between your domain entities and the DB, I'd really recommend moving to another ORM. My personal preference with SQL Server on the backend would be NHibernate.
_cstheory.19779
I'm looking for a graph $G$ on $n$ vertices with the following properties:

G doesn't have too many edges; $O(n\log(n))$ or $O(n)$ would be perfect.

For $k$ random disjoint pairs of vertices $(s_i,t_i)$ there is a path that links $s_i$ to $t_i$ for each $i$, such that all of these paths are vertex-disjoint. This statement should be true with high probability $p$.

The goal is to find $G$ such that $k$ is maximal. I'm not very precise on $k$ and the number of edges, as well as on $p$ (but let's say a large constant $p$ is fine, or even better something that goes to 0 with $n$), because I don't know what is possible.

EDIT: I rephrased the question in terms of average case instead of worst case, since the worst case won't give many vertex-independent paths, as pointed out by Chandra Chekuri.

Thanks,
Andr.
Vertex-disjoint paths in sparse graphs
graph theory
null
_unix.186759
These commands are running in the background:

foo@contoso ~ $ sleep 30 &
foo@contoso ~ $ sleep 60 &
foo@contoso ~ $ sleep 90 &

What do the minus and plus signs mean in the output of the jobs command?

foo@contoso ~ $ jobs
[1]  Running    sleep 30 &
[2]- Running    sleep 60 &
[3]+ Running    sleep 90 &
Minus and Plus Sign in jobs Process
process;job control
null
_cs.9181
Given a regular language $L$, it is easy to prove that there is a constant $N$ such that if $\sigma \in L$, with $\lvert \sigma \rvert \ge N$, there exist strings $\alpha$, $\beta$ and $\gamma$ such that $\lvert \alpha \beta \rvert \le N$ and $\beta \ne \epsilon$, and for all $k$ it is $\alpha \beta^k \gamma \in L$. It is widely stated that the converse isn't true, but I haven't seen any clear example. Any suggestions? Clearly the proof that the offending language isn't regular has to use stronger methods than the typical "doesn't satisfy the pumping lemma" argument. I'd be interested in simple examples, to present in introductory formal languages classes.
Languages that satisfy the pumping lemma but aren't regular?
formal languages;proof techniques
The language $\{ \$ a^nb^n \mid n \ge 1 \} \cup \{ \$^kw \mid k\neq 1, w\in \{a,b\}^* \}$ seems to be simple. The second part is regular (and can be pumped). The first part is nonregular, but can be pumped into the second part by choosing $\$$ to pump.

(added) Of course, this can be generalized to $\$L \cup \{ \$^k \mid k\neq 1 \} \cdot \{a,b\}^*$ for any $L\subseteq \{a,b\}^*$. Sometimes the formulation is in the "if ... then ..." style: if $w$ starts with a single $\$$, then it is of the form $\$ a^n b^n$. That I personally find less intuitive.

As noted by @vonbrand, the (possibly) non-regular part of the language is isolated by intersecting with $\$\{a,b\}^*$. This can be separately tested using the pumping lemma if needed.
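To make the example concrete, here is a small illustrative sketch (the name `in_L` is my own) that implements membership in this language and checks that pumping the leading `$` of a word from the nonregular part always lands back inside the language:

```python
import re

def in_L(w):
    """Membership in {$ a^n b^n : n >= 1} union {$^k w : k != 1, w in {a,b}*}."""
    dollars = len(w) - len(w.lstrip("$"))
    rest = w[dollars:]
    if not re.fullmatch(r"[ab]*", rest):
        return False
    if dollars != 1:
        return True                       # the regular part of the language
    m = re.fullmatch(r"(a+)(b+)", rest)   # the nonregular part: $ a^n b^n, n >= 1
    return bool(m) and len(m.group(1)) == len(m.group(2))

# pumping the leading "$" of a word from the nonregular part stays inside L:
print(all(in_L("$" * k + "aaabbb") for k in range(5)))  # True
```

Note that pumping down to zero `$`s or up to two or more `$`s simply moves the word into the regular part, which is exactly the trick the answer describes.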
_cs.11393
I read a quotation attributed to Sheila Greibach that says that the intersection of two context free grammars is recursively enumerable.I could not, however, find a citation for this quotation (and searching has failed to turn up a restatement of this result somewhere else).Can anyone provide a proof or a citation to the original proof for this result? Can anyone state that it is false?
Is the intersection of two context free languages recursively enumerable?
formal languages;computability;context free
This result more easily follows from the fact that every context-free language is recursively enumerable, by enumerating all parse trees. The intersection of any two r.e. languages is r.e.: just enumerate them both and output every word that appears on both lists. The other direction (given by the quotation you found) is more interesting.

Edit: As Huck correctly comments, there are actually efficient algorithms for deciding membership in context-free languages. But the argument above holds for the most general grammars possible, in which case we prove that the language corresponding to any grammar is recursively enumerable. (Replace parse trees with a sequence of rule applications.)
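The "enumerate both lists" step needs dovetailing so neither enumeration starves the other. A minimal sketch of this (my own illustrative code; the toy enumerators stand in for enumerations derived from actual grammars):

```python
from itertools import count, islice

def intersect_re(enum1, enum2):
    """Dovetail two (infinite) word enumerations; yield words on both lists."""
    seen1, seen2 = set(), set()
    g1, g2 = iter(enum1), iter(enum2)
    while True:
        for g, seen, other in ((g1, seen1, seen2), (g2, seen2, seen1)):
            w = next(g)                      # assumes infinite enumerators, for simplicity
            if w in other and w not in seen:
                yield w                      # first time w is known to be on both lists
            seen.add(w)

# toy stand-ins for enumerations of two context-free languages:
def L1():                                    # all words a^i b^j
    for n in count():
        for i in range(n + 1):
            yield "a" * i + "b" * (n - i)

def L2():                                    # all words a^n b^n
    for n in count():
        yield "a" * n + "b" * n

print(list(islice(intersect_re(L1(), L2()), 4)))  # ['', 'ab', 'aabb', 'aaabbb']
```

Each word is emitted at most once, and the intersection here comes out as the words $a^nb^n$, as expected.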
_webmaster.44452
I use Google Chrome. When I want to search Wikipedia, I type www.wikipedia.org into the search bar and then press tab. The screen looks like this:If I type some search words, it uses the actual search functionality of Wikipedia, instead of just returning a Google search of site:www.wikipedia.org x y z.I have a site with search functionality using a regular html form, but I can't do the tab trick to search the site. Is there any way I can change my site's search page to be recognized by Chrome (and possibly other applications, if there's a standard format)? Google searching this only gives me results about registering my site with the Google search engine, frustratingly.
making a site searchable via Chrome search bar
search;google chrome
Ironically the answer is on this page and every other Stack Exchange site :)

You have to define an OpenSearchDescription for your site. If you look at the source code of this page you will see in the header:

<link rel="search" type="application/opensearchdescription+xml" title="Pro Webmasters - Stack Exchange" href="/opensearch.xml">

And if you open the opensearch.xml referenced here you see:

<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:moz="http://www.mozilla.org/2006/browser/search/">
  <ShortName>Webmasters</ShortName>
  <Description>Search Webmasters: Q&amp;A for pro webmasters</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <Image width="16" height="16" type="image/x-icon">http://sstatic.net/webmasters/img/favicon.ico</Image>
  <Url type="text/html" method="get" template="http://webmasters.stackexchange.com/search?q={searchTerms}"/>
</OpenSearchDescription>

You have to implement the same for your site. The key is that you do need some kind of search implemented on your site, which is used by the browser to perform the actual search. This is specified in the template part of the XML:

http://webmasters.stackexchange.com/search?q={searchTerms}

Google Custom Search can be used for this purpose if you have no current search on your site.
_webapps.35227
I have looked through the keyboard shortcuts and searched as best I can and I cannot seem to find a keyboard shortcut to add a new contact. It must be staring me right in the face, but I'm not seeing it.To be clear on what I'm talking about: I'm referring to the scenario where I am in the main screen of the Gmail contact manager, looking at my list of contacts. There is a big red button labeled New Contact that I can click on to add a new contact. That's great. I just want to do that exact same action with a keyboard shortcut. The same as what I can do with 'c' and 'shift-c' in Gmail to compose a new message.So far the best I have been able to do is '/' (to get to the search box), then 'tab' 9 times, then 'space'. But that is ridiculous.Thanks!
Keyboard shortcut to add new Gmail contact?
gmail;google contacts
Unfortunately there is no built-in way of doing this. Hitting / and tabbing over is the easiest way without generating a custom keyboard shortcut outside of the Gmail Contacts framework.
_unix.40924
I'd like to be able to export the output (and error messages) of my cygwin terminal in a file, especially since I have to click a lot of buttons in order to mark the stuff in the cygwin terminal (and it's desirable to minimize the amount of clicking I do).
For cygwin, how do I export the output in a terminal into a file?
shell;cygwin;output
The stderr output of an executable can be redirected to a file with the following syntax:

mycommand 2> error.txt

If you want to redirect stdout (i.e. the regular program output) to the file, the command should be:

mycommand > output.txt

To redirect both the stderr and stdout output to the same file (similar to what you see on the terminal), use:

mycommand > output_and_error.txt 2>&1
_webmaster.90054
We have created a new site and we are doing an Ad Campaign in Facebook Ads. My problem is that when I go to Google Analytics to analyze my traffic with the Model Comparison Tool and I click to First Interaction Model all my traffic comes from Direct traffic. I know that this is not real because all traffic comes from Facebook and Instagram in the first interaction.Here is a screen shot. What can be happening?
Google Analytics model comparison tool shows all traffic as direct when it actually came from Facebook and Instagram
google analytics;advertising;facebook
null
_unix.375754
I've got a problem starting a live system as a virtual machine with qemu. I've installed qemu and virt-manager. I wanted to start Clonezilla (an .iso file) by using this command:

virt-install --hvm --name clonezilla_zesty --ram 384 --nodisks --livecd --vnc --cdrom /media/rosika/f14a27c2-0b49-4607-94ea-2e56bbf76fe1/ISOs/clonezilla/clonezilla-live-20170626-zesty-amd64.iso --network network:default

Alas, that doesn't work. I get the message:

Error setting up logfile: No write access to logfile /home/rosika/.cache/virt-manager/virt-install.log
Starting install...
ERROR    Internal error: process exited while connecting to monitor: 2017-07-03T15:43:01.421612Z qemu-system-x86_64: -drive file=/media/rosika/f14a27c2-0b49-4607-94ea-2e56bbf76fe1/ISOs/clonezilla/clonezilla-live-20170626-zesty-amd64.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on: Could not open '/media/rosika/f14a27c2-0b49-4607-94ea-2e56bbf76fe1/ISOs/clonezilla/clonezilla-live-20170626-zesty-amd64.iso': Permission denied
The domain installation does not appear to have been successful. If it was, you can restart your domain by running:
  virsh --connect qemu:///system start clonezilla_zesty
otherwise, please restart your installation.

I read somewhere that this could be due to the fact that the respective .iso file resides on another partition, and that is indeed the case here. Obviously qemu/virt-manager has problems crossing partition boundaries.

Has anyone got any idea what can be done besides moving the .iso file?

Thanks a lot in advance.

Greetings,
Rosika

P.S.: system: Linux/Lubuntu 16.04.2 LTS (64-bit)
qemu/virt-manager: starting an iso file as a live system
qemu;virt manager
null
_unix.90554
I am using 32-bit Red Hat Linux in my VM. I want to boot it to command-line mode, not to GUI mode. I know that from there I can switch to GUI mode using startx command. How do I switch back to command-line mode?
How to boot Linux to command-line mode instead of GUI?
command line;rhel;boot
You want to make runlevel 3 your default runlevel. From a terminal, switch to root and do the following:

[user@host]$ su
Password:
[root@host]# cp /etc/inittab /etc/inittab.bak  # Make a backup copy of /etc/inittab
[root@host]# sed -i 's/id:5:initdefault:/id:3:initdefault:/' /etc/inittab  # Make runlevel 3 your default runlevel

Anything after (and including) the second # on each line is a comment for you; you don't need to type it into the terminal.

See the Wikipedia page on runlevels for more information.

Explanation of sed command

The sed command is a stream editor (hence the name); you use it to manipulate streams of data, usually through regular expressions. Here, we're telling sed to replace the pattern id:5:initdefault: with the pattern id:3:initdefault: in the file /etc/inittab, which is the file that controls your runlevels. The general syntax for a sed search and replace is s/pattern/replacement_pattern/.

The -i option tells sed to apply the modifications in place. If this were not present, sed would have output the resulting file (after substitution) to the terminal (more generally, to standard output).

Update

To switch back to text mode, simply press CTRL+ALT+F1. This will not stop your graphical session; it will simply switch you back to the terminal you logged in at. You can switch back to the graphical session with CTRL+ALT+F7.
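If you want to preview what that substitution will do before touching /etc/inittab, the same search-and-replace can be sketched with Python's re.sub (illustrative only; the sample text is a made-up inittab fragment):

```python
import re

sample = (
    "# Default runlevel. The runlevels used here are:\n"
    "#   3 - Full multiuser mode\n"
    "#   5 - X11\n"
    "id:5:initdefault:\n"
)

# the same substitution the sed command performs: s/id:5:initdefault:/id:3:initdefault:/
patched = re.sub(r"id:5:initdefault:", "id:3:initdefault:", sample)
print(patched)
```

Only the one matching line is rewritten; every other line passes through untouched, which is exactly how the sed command behaves on the real file.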
_unix.218701
tar can be used to gather a whole directory into a single file. I tried with the sample directory sampledir, containing just some text files, with no subdirectories.

Originally the directory occupies 52K:

$ du -h sampledir/
52K     sampledir/

I ran

$ tar -cf tararchive.tar sampledir/

and the generated file is

$ du -h tararchive.tar
40K     tararchive.tar

It is smaller than sampledir, but in the command I did not request any compression. I am referring to the BSD version of tar (used also in Ubuntu).

So, what exactly does tar do? Does it simply gather the directory with all its files, inserting some headers in order to mark their beginning and end? If so, how can tararchive.tar be smaller than the original directory, even without compression?
Output file generated by tar
directory;tar;disk usage
This is because files use up space in whole-block increments. So if your block size is 512 bytes and you have a small 100 byte file, the size it actually uses up will be rounded up to the nearest block - in this case 512. When tarring, because the result is a single file, that inefficiency is reduced since there is only one resultant file - the .tar file.

You can really see this in action if you create 100 small files and see their size as individual files vs. combined together. Running the following commands will create a directory with 100 single-byte files and then compare the size of them individually vs. all combined into one vs. a tarball created from them.

mkdir tmp_small_file_test
for ((i=0; i<100; i++)); do head -c 1 /dev/zero > tmp_small_file_test/file$i; done
du -sh tmp_small_file_test
# on a 4096 byte block size filesystem this output 404K
cat tmp_small_file_test/file* >> tmp_small_file_test/all_files_combined
du -sh tmp_small_file_test/all_files_combined
# this output 4.0K
rm -f tmp_small_file_test/all_files_combined
tar -cf tmp_small_file_test.tar tmp_small_file_test
du -sh tmp_small_file_test.tar
# this output 116K

NOTE: since tar has some overhead to store each file in a tarball, if you tar up the above directory the tar file isn't as small as all the files combined together, but it's still a lot smaller than the files by themselves (at least on a filesystem with block size 4096).

If you're using an ext3/ext4 filesystem, you can see the block size using something like tune2fs -l /dev/sda1 | grep -i 'block size' (replace /dev/sda1 with the filesystem you're using). This should work out to roughly the first du above divided by 100.
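The round-up-to-a-whole-block arithmetic behind those numbers can be sketched directly (my own illustrative snippet; a 4096-byte block size is assumed, as in the answer, and the extra 4K that du reports for the directory entry itself is ignored):

```python
import math

BLOCK = 4096  # assumed ext4 block size, as in the answer

def space_used(file_size, block=BLOCK):
    """Disk space actually consumed: the size rounded up to whole blocks."""
    return math.ceil(file_size / block) * block

# 100 one-byte files vs. the same 100 bytes concatenated into one file
separate = sum(space_used(1) for _ in range(100))
combined = space_used(100)
print(separate // 1024, "K vs", combined // 1024, "K")  # 400 K vs 4 K
```

That 400K vs 4K gap is the same ~100x difference the du measurements above show; the tarball sits in between because of tar's per-file header overhead.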
_unix.345619
What is the equivalent of this command in Solaris?ulimit -e 19What this does in other systems is set all programs in the current shell to run with nice -n 19. When I try to run the above command in Solaris it does not recognise the -e option.
ulimit -e in Solaris?
solaris;ulimit
null
_softwareengineering.14610
Have you ever encountered a case of code duplication where, upon looking at the lines of code, you couldn't fit a thematic abstraction to it that faithfully describes its role in the logic? And what did you do to address it?

It is code duplication, so ideally we need to do some refactoring, like for example making it its own function. But since the code doesn't have a good abstraction to describe it, the result would be a strange function that we can't even figure out a good name for, and whose role in the logic is not obvious just from looking at it. That, to me, hurts the clarity of the code. We can preserve clarity and leave it as it is, but then we hurt maintainability. What do you think is the best way to address something like this?
Code duplication with no obvious abstraction
design;maintenance;refactoring
Sometimes code duplication is the result of a pun: two things look the same, but aren't.

It is possible that over-abstracting can break the true modularity of your system. Under the regime of modularity, you have to decide "what is likely to change?" and "what is stable?". Whatever is stable gets put in the interface, while whatever is unstable gets encapsulated in the module's implementation. Then, when things do change, the change you need to make is isolated to that module.

Refactoring is necessary when what you thought was stable (e.g. "this API call will always take two arguments") needs to change.

So, for these two duplicated code fragments, I would ask: does a change required to one necessarily mean the other must be changed as well? How you answer that question might give you better insight into what a good abstraction might be.

Design patterns are also useful tools. Perhaps your duplicated code is doing a traversal of some form, and the iterator pattern should be applied.

If your duplicated code has multiple return values (and that's why you can't do a simple extract method), then perhaps you should make a class that holds the values returned. The class could call an abstract method for each point that varies between the two code fragments. You would then make two concrete implementations of the class: one for each fragment. [This is effectively the Template Method design pattern, not to be confused with the concept of templates in C++. Alternatively, what you are looking at might be better solved with the Strategy pattern.]

Another natural and useful way to think about it is with higher-order functions, for example making lambdas or using anonymous inner classes for the code to pass to the abstraction. Generally, you can remove duplication, but unless there really is a relation between them [if one changes, so must the other] then you might be hurting modularity, not helping it.
_cs.2257
Context: I'm working on this problem:

There are two stacks here:

A: 1,2,3,4  <- Stack Top
B: 5,6,7,8

A and B will pop out to two other stacks: C and D. For example: pop(A), push(C), pop(B), push(D). If an item has been popped out, it must be pushed to C or D immediately. The goal is to enumerate all possible stack contents of C and D after moving all elements.

More elaborately, the problem is this: If you have two source stacks with $n$ unique elements (all are unique, not just per stack) and two destination stacks, and you pop everything off each source stack to each destination stack, generate all unique destination stacks - call this $S$.

The stack part is irrelevant, mostly, other than that it enforces a partial order on the result. If we have two source stacks and one destination stack, this is the same as generating all permutations without repetitions for a set of $2n$ elements with $n$ 'A' elements and $n$ 'B' elements. Call this $O$. Thus

$\qquad \displaystyle |O| = (2n)!/(n!)^2$

Now observe all possible bit sequences of length $2n$ (bit 0 representing popping source stack A/B and bit 1 pushing to destination stack C/D), call this $B$; $|B| = 2^{2n}$. We can surely generate $B$ and check if it has the correct number of pops from each destination stack to generate $|S|$. It's a little faster to recursively generate these to ensure their validity.
It's even faster still to generate B and O and then simulate, but it still has the issue of needing to check for duplicates.

My question

Is there a more efficient way to generate these?

Through simulation I found the result follows this sequence, which is related to Delannoy numbers, about which I know very little, if this suggests anything.

Here is my Python code:

def all_subsets(list):
    if len(list) == 0:
        return [set()]
    subsets = all_subsets(list[1:])
    return [subset.union(set([list[0]])) for subset in subsets] + subsets

def result_sequences(perms):
    for perm in perms:
        whole_s = range(len(perm))
        whole_set = set(whole_s)
        for send_to_c in all_subsets(whole_s):
            send_to_d = whole_set - set(send_to_c)
            yield [perm, send_to_c, send_to_d]

n = 4
perms_ = list(unique_permutations([n, n], ['a', 'b']))  # number of unique sequences
result = list(result_sequences(perms_))
Generating number of possibilites of popping two stacks to two other stacks
algorithms;combinatorics;efficiency
(answer for another problem, see the edit below)

First, generate all subsets $A_1$ of $A$. $A_1$ will go in $C$ and $A_2=A\backslash A_1$ in $D$. Likewise, generate all subsets $B_1$ of $B$. You then have to generate all possible ordered combinations of $A_1$ and $B_1$ for $C$, and the same for $A_2$ and $B_2$ in $D$.

This amounts to enumerating, given $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_m)$, all interleavings of length $n+m$, which can be viewed as an exploration of the intersections of a grid of size $n\times m$. (There are $\frac{(n+m)!}{n!m!}$ paths.)

This is a way of generating all possible $C$s and $D$s uniquely.

But maybe I misunderstood the question. Is this what you want? To help compare with what you have already tried, here is the number of pairs of stacks it generates (where $A$ stands for the size of $A$):

$$N(A,B)=\sum_{n=0}^{A}\sum_{m=0}^{B}\binom{A}{n}\binom{B}{m}\frac{(n+m)!}{n!m!}\frac{(A+B-n-m)!}{(A-n)!(B-m)!}$$

Formula when $A=B$ on the OEIS.

Implementation in Haskell

Execution trace for A=[a, b], B=[1, 2], with 54 pairs since $N(2,2)=54$

EDIT: I am wrong: my algorithm behaves like separating $A$ and $B$ into $(A_1,A_2)$ and $(B_1,B_2)$ before interleaving $(A_1,B_1)$ and $(A_2,B_2)$, which is presumably what you don't want (it generates too many stacks, e.g. from $A=[2,1], B=[4,3]$ it includes $C=[2,3],D=[4,1]$). (Mitchus's answer does not have this problem.)
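The closed-form count can be checked numerically; this small sketch (my own transcription of the formula above) reproduces the 54 pairs mentioned for the A=[a, b], B=[1, 2] trace:

```python
from math import comb, factorial

def N(A, B):
    """Number of (C, D) pairs, per the sum over subset sizes n, m above."""
    total = 0
    for n in range(A + 1):
        for m in range(B + 1):
            interleavings_C = factorial(n + m) // (factorial(n) * factorial(m))
            interleavings_D = (factorial(A + B - n - m)
                               // (factorial(A - n) * factorial(B - m)))
            total += comb(A, n) * comb(B, m) * interleavings_C * interleavings_D
    return total

print(N(1, 1), N(2, 2))  # 6 54
```

N(1, 1) = 6 can also be checked by hand (two singleton source stacks give six distinct (C, D) pairs), which matches the formula.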
_webapps.97587
I have a master spreadsheet that contains columns like so:

FirstName  LastName  Gender  etc, etc, etc

I need to import from that master sheet into another sheet that filters by one of the other columns. In the new sheet, I need name to be a single column, joining FirstName and LastName together. How can I do that? Here is the current query I am using:

=query(importrange($SPREADSHEET_KEY, "Full Overview!A2:AH"), "select Col1, Col2, Col5 where Col8 contains 'Migrators'", 0)

I need to merge Col1 and Col2 from the master sheet into Col1 of the new sheet and put Col5 in Col2 of the new sheet.
Merge two columns by using QUERY in Google Sheets
google spreadsheets
Short answer

The QUERY select argument can't merge columns.

Explanation

The QUERY built-in function uses the Google Visualization API Query Language. It doesn't include a concatenation operator. One alternative is to concatenate the data.

Examples

Assume that the First Name and Last Name columns are columns A and B respectively.

Example 1

Add an auxiliary column to concatenate the desired columns in the source sheet and include this column in the IMPORTRANGE. Add one of the following formulas to an empty cell in row 2:

=A2&" "&B2 (fill down as necessary)

=ARRAYFORMULA(A2:A&" "&B2:B) (Tip: delete empty rows or use FILTER to only concatenate non-empty rows)

Example 2

Use several IMPORTRANGE, the concatenation operator & and arrays:

=ARRAYFORMULA(
  QUERY(
    {
      importrange($SPREADSHEET_KEY, "Full Overview!A2:A")&" "&
      importrange($SPREADSHEET_KEY, "Full Overview!B2:B"),
      importrange($SPREADSHEET_KEY, "Full Overview!C2:AH")
    },
    "select Col1, Col2, Col5 where Col8 contains 'Migrators'",
    0
  )
)
_softwareengineering.185300
Having simple code like this:int A=5;object X=Console.ReadLine()if(Condition) DoSomething();else DoStuff();DoSomethingElse();Some sources say there are actually 4 branches: First unconditional, two for the IF and another unconditional after the IF statement.Some say there are only two branches.What would be correct?E.g. here:http://www.ruleworks.co.uk/testguide/BS7925-2-Annex-B7.asp
Is unconditional code considered a branch?
testing;unit testing;complexity;test coverage
The origins of the word 'branch' in code come from assembly. An example of this can be seen in MIPS assembly. A branch is a conditional statement.

Before anyone jumps on me for pointing out that MIPS has a b instruction which has the description of branch unconditionally, the key to the difference between b (branch unconditionally) and j (jump) is that the branch statements work off of relative addresses and the jump statements work off of absolute addresses.

There is also some terminology of unconditional branch in the realm of branch prediction. A CPU with a pipeline will try to guess which way a branch statement goes and where it will end up after the branch. It is possible to analyze the assembly to see that a given branch will always execute given a certain set of conditions at the start of the pipeline. For example, say the branch is based on whether register 1 is greater than 0 or not. If register 1 is 0, and no instructions in the pipeline change that, it is in essence an unconditional branch which can be executed safely (without having to worry about flushing the pipeline after speculative execution, or stalling until the branch can be decided).

All of the above was for very low level code. In higher level code (as demonstrated in the question) the terminology of a branch is where code may follow two (or more, in the case of a switch (pun not intended)) different paths. From Wikipedia:

A branch is a sequence of code in a computer program which is conditionally executed depending on how the flow of control is altered at the branching point.

Unconditional code is not conditionally executed and thus is not a branch.
_unix.365238
I have a server that's to be used for imaging here, and the client machines boot off NFS roots. Or, rather, a single NFS root.Here's the problem: when only one client is connected, the system runs as expected, but if there are multiple clients, there's a reasonably good chance that I'll get a stale file handle error.I'm at my wit's end, here. What can I do to prevent errors on a heavily shared NFS mount?
NFS Stale File Handle Errors on Root
nfs;pxe
null
_unix.150174
With sar and iowait, we can get CPU time utilization. But, when I executed both commands, I could see significant differences in their outputs. > iostat && sar 1 1Linux 2.6.32-042stab090.4 (LinuxBox) 08/14/2014 _x86_64_ (16 CPU)avg-cpu: %user %nice %system %iowait %steal %idle 0.46 0.00 0.52 0.07 0.00 98.95Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtnsda 10.53 760.66 44.67 3464410204 203460004sdb 2.49 368.15 779.18 1676748162 3548769968sdc 4.09 192.81 10.71 878170395 48792907Linux 2.6.32-042stab090.4 (LinuxBox) 08/14/2014 _x86_64_ (16 CPU)10:35:21 AM CPU %user %nice %system %iowait %steal %idle10:35:22 AM all 0.00 0.00 0.06 0.00 0.00 99.94Average: all 0.00 0.00 0.06 0.00 0.00 99.94It is very difficult for me to decide which output is more reliable. Which command should be considered as more accurate one?
Difference between 'sar' and 'iostat' commands
linux;cpu;system information;iostat
null
_cstheory.27606
This question is two-fold, and is mainly reference-oriented:Is there somewhere where the main intuitions for proving graph minor theorem are given, without going too much into the details? I know the proof is long and difficult, but surely there must be key ideas that can be communicated in an easier way.Are there other relations on graphs that can be shown to be well quasi-orders, maybe in a simpler way than for the minor relation? (obviously I am not interested in trivial results here, like comparing sizes). Directed graphs are also in the scope of the question.
Understanding graph minor theorem
reference request;graph theory;graph minor
The following book covers some material related to the proof of the graph minor theorem (Chapter 12):

Reinhard Diestel: Graph Theory, 4th edition, Graduate Texts in Mathematics 173.

The author states: "[...] we have to be modest: of the actual proof of the minor theorem, this chapter will convey only a very rough impression. However, as with most truly fundamental results, the proof has sparked off the development of methods of quite independent interest and potential."

An electronic version of the book can be viewed online: http://diestel-graph-theory.com/
_unix.156532
Fedora 20 (kernel 3.15.10-201.fc20.x86_64). This worked in F19.

I'm trying to use cgroups to limit memory usage for some apps that are prone to misbehaviour, and I'm encountering problems. I'm testing with a small single-purpose program.

I have this in my /etc/cgconfig.conf file:

group memtest {
    memory {
        memory.limit_in_bytes = 209715200;
        memory.soft_limit_in_bytes = 104857600;
    }
}

and this in /etc/cgrules.conf:

*:memtest memory memtest/

The memtest.c file simply malloc's 1 GiB, sleeps for 30 seconds, and then frees the buffer and exits.

When the memtest program is running, its PID is properly listed in /sys/fs/cgroup/memory/memtest/tasks, showing that it's being classified correctly. However, its memory use is not being limited. Using ulimit the behaviour is as expected:

$ (ulimit -S -v 200000 ; ./memtest )
malloc failed: Cannot allocate memory

Here's the source of memtest.c:

#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <malloc.h>
#include <unistd.h>

int main(void)
{
    char *buf;
    size_t bytes = (1 * 1<<30);

    errno = 0;
    buf = malloc(bytes);
    if (errno != 0) {
        int errno_copy = errno;
        perror("malloc failed");
        return errno_copy;
    }
    printf("%zu bytes allocated (requested %zu)\n", malloc_usable_size(buf), bytes);
    sleep(30);
    printf("Freeing..\n");
    free(buf);
    return 0;
}

Why is the task getting properly classified, but not limited in its memory use? What changed between F19 and F20? (I only upgraded to F20 last week.) Thanks!
Fedora 20 memory.limit_in_bytes not working
memory;fedora;cgroups
null
_unix.183427
I'm a Unix newbie; I just know enough to be dangerous.

I have a SUSE server and recently added a second IP address (ending in .159) by editing the config file below. It works; however, now all the services on the machine are using the new address when making connections. Example: the nagios service now makes requests from the new IP address, and I want to use the old address (ending in .160) for all outbound connections. Is there a way to set a default IP address (ending in .160) for outbound connections? I'm just using the command line, no KDE available.

Here's my eth config file:

admin1@server1:/etc/sysconfig/network# more ifcfg-eth2
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.100.160/24'
MTU=''
NAME='79c970 [PCnet32 LANCE]'
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no'
IPADDR_external='207.47.100.160/24'
LABEL_external='external'
IPADDR1='192.168.100.159/24'
IPADDR1_external='207.47.100.159/24'

Here is the output of ip route show:

admin1@server1:/etc# ip route show
207.47.100.0/24 dev eth2 proto kernel scope link src 207.47.100.159
192.168.100.0/24 dev eth2 proto kernel scope link src 192.168.100.160
169.254.0.0/16 dev eth2 scope link
127.0.0.0/8 dev lo scope link
default via 192.168.100.1 dev eth2
setting or changing the default ip address
suse;network interface
null
_webapps.6463
Is there a way to search for more than one topic at a time in Google? For example, I might wish to look for speeches by President Obama, President Bush, and President Clinton, but I don't want to do one search for presidential speeches; I want three separate sets of search results. Another example is if you want a photo of a chair, Tiger Woods, and a walker. Is there any way to get a response for each of these separately without having to search three different times?
Return multiple Google search results in same query
google search
null
_unix.159757
I'm facing an ugly problem with my system. My login manager (LightDM) is starting gnome-keyring-daemon at login successfully and unlocking my keyring as it should (EDIT: everything via PAM).

The thing is, I get gnome-keyring-daemon started with just one component, secrets, but I need all of these: pkcs11, secrets, ssh, and gpg. I don't know why the latter is not the default, and I don't know whether I should report this to the package maintainer.

The file /usr/share/dbus-1/services/org.freedesktop.secrets.service defines how gnome-keyring-daemon should run:

[D-BUS Service]
Name=org.freedesktop.secrets
Exec=/usr/bin/gnome-keyring-daemon --start --foreground --components=secrets

I could just edit it in Emacs and problem solved, but that's dirty and my changes will be gone after the next upgrade of the gnome-keyring package.

So, the question is: how do I change the Exec line of that service without the change being lost on the next system upgrade? Is there a way to enable custom services and disable those services that come by default?

The relevant packages and their versions installed on my system:

$ LC_ALL=C pacman -Qi dbus gnome-keyring lightdm | egrep "(Name|Version)"
Name            : dbus
Version         : 1.8.8-1
Name            : gnome-keyring
Version         : 3.12.2-1
Name            : lightdm
Version         : 1:1.12.0-1
How to modify a dbus service's Exec line without losing the changes in case of upgrade
d bus;gnome keyring
Ok, I found a way to solve this issue. It doesn't address my question directly, but it solves the issue that pushed me to ask here.

The problem

As it was, gnome-keyring wasn't unlocking my GPG keys, so I was asked for the password of my GPG key every time I logged in (because Emacs reads a .gpg file for configuration). All my passwords were available after login, though, so offlineimap didn't complain at all about not being able to get the passwords of my e-mail accounts.

I tried then to start gnome-keyring-daemon from .xprofile (which is read by LightDM; other DMs may read different files) in this way:

#!/bin/bash
eval $(gnome-keyring-daemon --start --components=gpg,pkcs11,secrets,ssh)
export GPG_AGENT_INFO SSH_AUTH_SOCK

After rebooting (I like this better than logging out and in again) and logging in, I wasn't asked for my GPG key password; however, offlineimap was complaining about not being able to get the passwords of my e-mail accounts. Running seahorse, I noticed that there was no Passwords section.

The solution

After fighting for a few hours and trying many different combinations (one of them showing the Passwords section, but with the Login folder locked!), I found the correct solution:

#!/bin/bash
source /etc/X11/xinit/xinitrc.d/30-dbus  # You need a dbus session, duh
eval $(gnome-keyring-daemon --start --components=gpg,pkcs11,secrets,ssh)
export GPG_AGENT_INFO SSH_AUTH_SOCK

Done, problem solved. That's the end, folks.

EDIT: Beware, your gnome-keyring-daemon may issue more environment variables for you to export. To be sure you don't need more than GPG_AGENT_INFO and SSH_AUTH_SOCK, run gnome-keyring-daemon --start --components=gpg,pkcs11,secrets,ssh from your shell and add more variables to the export statement accordingly.

Please note that LightDM is still starting gnome-keyring-daemon thanks to its PAM configuration, and I wouldn't recommend you change that configuration. However, if you find yourself entering your password after login to unlock something in gnome-keyring, it might be because LightDM is not providing your password to it. I made this addition to the LightDM PAM module /etc/pam.d/lightdm:

auth optional pam_gnome_keyring.so try_first_pass

The addition was the try_first_pass part (reading The Linux-PAM System Administrators' Guide is not a bad idea); on my system LightDM didn't include that parameter. This is how I solved my problem with Gnome Keyring!
_codereview.115704
Background and Purpose

For those unaccustomed to Ruby, Ruby supports range literals, i.e. 1..4 meaning 1, 2, 3, 4 and 1...4 meaning 1, 2, 3. This makes a Ruby for loop pretty sweet:

for i in 1..3
  doSomething(i)
end

It works for non-numeric types and that's pretty cool, but out of our scope.

A similarly convenient thing exists in Python, range, with some additional features and drawbacks. range(1, 4) evaluates to [1, 2, 3]. You can also provide a step parameter via range(0, 8, 2), which evaluates to [0, 2, 4, 6]. There is no option to make the last element inclusive (as far as I know). range calculates its elements the moment it is invoked, but there is also a generator version to avoid unnecessary object creation. In combination with Python's list construction style, you can do all sorts of cool stuff like [x*x for x in range(0, 8, 2)], which is right on the border of this question's scope.

Now that ES6 has generators and the for-of statement (and most platforms support them), iteration in JavaScript is quite elegant; there is generally only one truly right (or at least best) way to write the loop for your task. However, if you are iterating over a series of numbers, ES6 offers nothing new. That's not just in comparison to ES5, but to pretty much every curly-bracket-based language before it. Python and Ruby (probably other languages I don't know, too) have proven that we can do it better and eliminate stroke-inducing code like:

while (i--) {
  sillyMistake(array[i--]);
}

for (
  var sameThing = youHaveWrittenOut;
  verbatimFor >= decades;
  thinkingAboutDetails = !anymore
) {
  neverInspectAboveForErrors();
  assumeLoopVariantIsNotModifiedInHere();
  if (modifyLoopVariant()) {
    quietlyScrewUp() || ensureLoopCondition() && causeInfiniteLoop();
  }
}

ES6 for-of, the spread operator, and this generator can eliminate all manual loop counter fiddling, unreadable three-part for, and smelly loops that initialize arrays.
Please explain how I have succeeded or failed to do so, including any use cases this doesn't cover.

Code for Review

const sign = Math.sign;

/**
 * Returns true if arguments are in ascending or descending order, or if all
 * three arguments are equal. Importantly returns false if only two arguments
 * are equal.
 */
function ord(a, b, c) {
  return sign(a - b) == sign(b - c);
}

/**
 * Three overloads:
 *
 * range(start, end, step)
 *   yields start, start + step, start + step*2, ..., start + step*n
 *   where start + step*(n + 1) would equal or pass end.
 *   if the sign of step would cause the output to be infinite, e.g.
 *   range(0, 2, -1), range(1, 2, 0), nothing is produced.
 *
 * range(start, end)
 *   as above, with step implicitly +1 if end > start or -1 if start > end
 *
 * range(end)
 *   as above, with start implicitly 0
 *
 * In all cases, end is excluded from the output.
 * In all other cases, start is included as the first element
 * For details of how generators and iterators work, see the ES6 standard
 */
function* range(start, end, step) {
  if (end === undefined) {
    [start, end, step] = [0, start, sign(start)];
  } else if (step === undefined) {
    step = sign(end - start) || 1;
  }
  if (!ord(start, start + step, end)) return;
  var i = start;
  do {
    yield i;
    i += step;
  } while (ord(start, i, end));
}

/* Use Cases. Feedback on output method is not necessary.
   Insights on use cases themselves are welcome. */

(function shortFor(ol) {
  for (var i of range(0, 16, 2)) ol.append($("<li>").text(i.toString()));
})($("#short-for .vals"));

(function spreadMap(ol) {
  for (var el of [...range(4)].map(x => x * x)) {
    ol.append($("<li>").text(el.toString()));
  }
})($("#spread-map .vals"));

(function correspondingIndex(ol) {
  var a = [1, 2, 3], b = [4, 5, 6];
  for (var i of range(a.length)) a[i] += b[i];
  for (var el of a) ol.append($("<li>").text(el.toString()));
})($("#corresponding-index .vals"));

<!-- jQuery is included here to conveniently emit results of test cases;
     the code to be reviewed has no dependencies.
     Feedback on HTML is not necessary. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<body>
  <div class="use-case" id="short-for">
    <h1>Use Case: Less verbose simple for loop</h1>
    <code>for (var i of range(0, 16, 2))</code>
    <p>Values of i: <ol class="vals"></ol>
  </div>
  <div class="use-case" id="spread-map">
    <h1>Use Case: Initialize an array without push()</h1>
    <code>[...range(4)].map(x => x*x)</code>
    <p>Array contents: <ol class="vals"></ol>
  </div>
  <div class="use-case" id="corresponding-index">
    <h1>Use Case: Add corresponding elements of two arrays</h1>
    <code>
      var a = [1, 2, 3], b = [4, 5, 6];
      for (var i of range(a.length)) a[i] += b[i];
    </code>
    <p>Contents of array a: <ol class="vals"></ol>
  </div>
</body>
Range iterator in ES6 similar to Python and Ruby for
javascript;iterator;ecmascript 6;generator
Answers propose replacing

yield i;
i += step;

with

yield start + i*step;
++i;

The differences are related to floating point accuracy and performance. In my original design, the iterator would endlessly spit out the initial value if the step value was too small in comparison: literally bigNumber + smallNumber === bigNumber. In addition, if the increment was not perfectly representable as a float (e.g. 1/3), error would pile up in the output after many iterations. By incrementing an integer i and multiplying by step instead, the iterator will eventually halt and always yield the most appropriate float.

The reason I did not do this initially is that I had integers in mind; while all numbers in JS are supposed to be of an interchangeable real number type, it is theoretically possible for a JIT or something to optimize into integer addition if it was always used that way. Integer math is way faster than floating point math, and addition is much faster than multiplying for either type. Math between a float and an integer is actually the worst, since conversion eats time too.

After some testing, I have concluded that only very few cases (e.g. asm.js style for caller & callee) will allow that optimization to occur and translate into a tangible performance difference, and on many platforms it can just as easily be prevented by using yield and for (... of ...), or overshadowed by call and return time. Given time, ES6 implementations may evolve to do better at this, and a special fast integer range iterator will be justified, but for the moment, the suggestion of multiplying the step instead of repeatedly adding it is sound.

Finally, the original code fails to produce number sequences of a single element because it returns early if the second element would be beyond end. The intent of the early return was to make the iterator empty if step was the wrong sign. I tried to combine that idea with invalidating the iterator if you gave it start equal to end (e.g. not iterating the indexes of an empty array or string) by using ord, but that doesn't work. It's better to spell out both conditions, and the final correct code is:

const sign = Math.sign;

function ord(a, b, c) {
  return sign(a - b) == sign(b - c);
}

function* range(start, end, step) {
  if (end === undefined) {
    [start, end, step] = [0, start, sign(start)];
  } else if (step === undefined) {
    step = sign(end - start) || 1;
  } else if (sign(end - start) != sign(step)) {
    return;
  }
  if (start === end) return;
  var i = 0, result;
  do {
    result = start + i*step;
    yield result;
    ++i;
  } while (ord(start, start + i*step, end));
}

See above for comments.
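To make the corrected version easy to try on its own, here it is repeated as a self-contained snippet together with a few edge-case checks; the expected arrays in the comments follow from tracing the generator by hand.

```javascript
const sign = Math.sign;

function ord(a, b, c) {
  return sign(a - b) == sign(b - c);
}

function* range(start, end, step) {
  if (end === undefined) {
    [start, end, step] = [0, start, sign(start)];
  } else if (step === undefined) {
    step = sign(end - start) || 1;
  } else if (sign(end - start) != sign(step)) {
    return; // wrong-sign step: produce nothing
  }
  if (start === end) return; // empty interval
  var i = 0;
  do {
    yield start + i * step;
    ++i;
  } while (ord(start, start + i * step, end));
}

console.log([...range(5)]);        // [ 0, 1, 2, 3, 4 ]
console.log([...range(2, 10, 3)]); // [ 2, 5, 8 ]
console.log([...range(3, 0)]);     // [ 3, 2, 1 ]  (implicit step of -1)
console.log([...range(0, 5, -1)]); // []  (wrong-sign step yields nothing)
console.log([...range(2, 2)]);     // []  (empty interval)
```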
_unix.304613
I'm working on a local git project, and I'm running a separate SSH service on port 2222 as a non-root user. Along with that I'm using the ufw firewall and have opened port 2222; I opened it just for testing whether connections to it work as expected, and they do.

Because the default port for SSH is 22, and I don't want users to have to write an additional port in the address to access repositories, I want to preroute traffic from 22 to 2222. I also want port 2222 not to be accessible from outside while the prerouting is in effect.

I've already done the first part: I prerouted traffic from 22 to 2222, and it works without a problem if port 2222 is also open. But when I try to close port 2222, the connection to 22 also stops working (the rule for the open port is still there). This is somewhat logical, since iptables seems to just convert port 22 to 2222 and forward it to ufw, which then recognizes this and denies the connection because port 2222 is not open.

Currently this is what I have in ufw's before.rules, and it works if port 2222 is also open:

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222

Is there a way to do this prerouting without needing to have port 2222 open?
Prerouting SSH to a different internal port
ssh;networking;iptables;routing;ufw
In principle, no. Using ufw here isn't important, since it only acts as a frontend, creating iptables rules. As you noticed, the NAT rules are handled before any filtering, so the filter sees the resulting packet after any NAT rules. (Wikipedia has a rather scary looking chart of the packet flow inside netfilter, which represents them as independent.)

I'm not sure if it's much of a problem that the server can be reached on the other port too. However, if you actually want to drop packets going directly to port 2222, you can do it with some trickery, using connection marks, i.e. the CONNMARK target and connmark module:

In the nat table, set a mark on the connection, and redirect it to the target port.

iptables -t nat -A PREROUTING -p tcp --dport 22 -j CONNMARK --set-mark 1234
iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222

In the filter (default) table, reject the connection if it doesn't have the mark.

iptables -A INPUT -p tcp --dport 2222 -m connmark ! --mark 1234 -j REJECT

It doesn't matter what the mark number is, as long as it doesn't collide with your other rules.

(I'm not sure if it would be possible to create DROP rules in the NAT table, too. iptables v1.4.21 doesn't seem to allow it, but the complaint comes from iptables itself, not from the kernel, like it does for REJECT.)
_hardwarecs.1155
I have a couple of laptops that I'd like to remote into, but they have restrictions in place that prevent me from installing remote-control software. Is there a hardware solution that I can use to access them over the Internet?

I was thinking maybe something that emulates a keyboard, mouse, and screen and redirects those over the Internet.
What hardware can turn a keyboard, mouse, and monitor into a remote desktop solution?
monitors;mice;keyboards;docking stations
null
_webmaster.6172
I run a fairly large-scale Web crawler. We try very hard to operate the crawler within accepted community standards, and that includes respecting robots.txt. We get very few complaints about the crawler, but when we do, the majority are about our handling of robots.txt. Most often the Webmaster made a mistake in his robots.txt and we kindly point out the error. But periodically we run into grey areas that involve the handling of Allow and Disallow.

The robots.txt page doesn't cover Allow. I've seen other pages, some of which say that crawlers use a first matching rule, and others that don't specify. That leads to some confusion. For example, Google's page about robots.txt used to have this example:

User-agent: Googlebot
Disallow: /folder1/
Allow: /folder1/myfile.html

Obviously, a first matching rule here wouldn't work because the crawler would see the Disallow and go away, never crawling the file that was specifically allowed.

We're in the clear if we ignore all Allow lines, but then we might not crawl something that we're allowed to crawl. We'll miss things.

We've had great success by checking Allow first, and then checking Disallow, the idea being that Allow was intended to be more specific than Disallow. That's because, by default (i.e. in the absence of instructions to the contrary), all access is allowed. But then we run across something like this:

User-agent: *
Disallow: /norobots/
Allow: /

The intent here is obvious, but that Allow: / will cause a bot that checks Allow first to think it can crawl anything on the site.

Even that can be worked around in this case. We can compare the matching Allow with the matching Disallow and determine that we're not allowed to crawl anything in /norobots/. But that breaks down in the face of wildcards:

User-agent: *
Disallow: /norobots/
Allow: /*.html$

The question, then, is: is the bot allowed to crawl /norobots/index.html?

The first matching rule eliminates all ambiguity, but I often see sites that show something like the old Google example, putting the more specific Allow after the Disallow. That syntax requires more processing by the bot and leads to ambiguities that can't be resolved.

My question, then, is: what's the right way to do things? What do Webmasters expect from a well-behaved bot when it comes to robots.txt handling?
What's the proper way to handle Allow and Disallow in robots.txt?
robots.txt
One very important note: the Allow statement should come before the Disallow statement, no matter how specific your statements are.So in your third example - no, the bots won't crawl /norobots/index.html.Generally, as a personal rule, I put allow statements first and then I list the disallowed pages and folders.
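To make the ordering point concrete, here is a small sketch of the first-matching-rule interpretation with plain prefix matching and no wildcards (my own illustration, not any crawler's actual code):

```python
def allowed(path, rules):
    """First matching rule wins. rules is an ordered list of
    ("allow" | "disallow", prefix) pairs; no match means allowed."""
    for kind, prefix in rules:
        if path.startswith(prefix):
            return kind == "allow"
    return True  # default: everything is allowed

# Allow listed before Disallow: the specific file stays crawlable.
good = [("allow", "/norobots/index.html"), ("disallow", "/norobots/")]
# Disallow listed first: the Allow line is never reached.
bad = [("disallow", "/norobots/"), ("allow", "/norobots/index.html")]

print(allowed("/norobots/index.html", good))  # True
print(allowed("/norobots/index.html", bad))   # False
print(allowed("/other/page.html", good))      # True (no rule matches)
```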
_webapps.20894
I can see that 103 people like a page, but how can I find out who they are? With the old group system this was so easy!
On Facebook, can I find out who else likes a page?
facebook;facebook pages
The list is only available to page admins. You can only see which of your friends liked it.
_codereview.169111
I noticed that the C++ version of <complex> has norm. However, I am programming in C, and there is no cnorm. So I made one to define before my main function:

#include <complex.h>

double cnorm(double _Complex z)
{
    return cpow(creal(z), 2) + cpow(cimag(z), 2);
}

How would you improve this definition? Is the name OK, or will it break when it is eventually added to <complex.h>? Is the return type appropriate? Would it be more consistent to use double _Complex? And is the implementation efficient (for rapid numerical evaluation)?
The C++ analog of norm in C
c
The code presented suffers from both performance and precision problems. pow() and cpow() work by converting to logarithms and multiplying. To square a floating-point value, it's quicker and more accurate to multiply it by itself:

#include <complex.h>

double cnorm(double _Complex z)
{
    const double r = creal(z);
    const double i = cimag(z);
    return r*r + i*i;
}

You could multiply z by conj(z) (the imaginary terms ought to cancel), but the function above is cheaper and works in the double domain.

cnorm() is not reserved by the Standard; the names reserved for future <complex.h> are cerf, cerfc, cexp2, cexpm1, clog10, clog1p, clog2, clgamma, ctgamma and their -f and -l suffixed variants.
_vi.13252
I've had a friend install neovim on my laptop, and I'm not sure what happened; I can't reach him because for some reason he has gone MIA (like he went back to his hometown or something).

The problem is that every time I use vim I get a .nvimlog file in the current folder. This did not use to happen. I checked the issues list on GitHub, but most of the replies there just add to my confusion.

I want to know if there's a way to get rid of the .nvimlog file, or at least make it so it only appears in one place, and not all over my laptop. It has been alluded to that we can make the latter happen, but like I said, a lot of the talk there just goes over my head.

Can anyone help me with this, or at least narrow down that conversation so I get a better understanding of the whole issue?
nvimlog file appearing in all of my directories
neovim
null
_cs.59377
I have been reading the randomized algorithms book by Rajeev Motwani and Prabhakar Raghavan. In section 3.5 they introduce the principle of deferred decisions, which is a different way of working with the probability space. The example they provide is the clock solitaire game. The game is as follows. Initially the 52 cards are randomly grouped into 13 piles of 4 cards each. The piles are labeled $1,2,3,...,10,Q,J,K,A$. The game starts by drawing a card from the pile labeled $K$. The next draw takes place from the pile matching the face value of the drawn card. For example, suppose the drawn card is 7; then we go to the pile labeled $7$ and pick a card from this pile. We continue in this fashion. The game ends when we reach an empty pile, and one wins if all the piles are empty when the game ends. It is easy to show that the last card drawn must have face value $K$.

Now, to analyze the probability of winning, the authors use a different probability space, following the principle of deferred decisions. The idea is to let the random choices unfold with the progress of the game, rather than fix the entire set of choices in advance. With this, they conclude that the probability of winning, i.e., the probability of the 52nd card being $K$, is 1/13. Can anyone explain why this is 1/13?
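The game as described is easy to simulate, which at least makes the claimed answer plausible before looking for a proof. This is my own Monte Carlo sketch (ranks encoded 0-12, with 12 standing for the King); the empirical win rate comes out close to 1/13 ≈ 0.0769:

```python
import random

def play_clock_solitaire():
    # Deal 4 cards of each of 13 ranks into 13 piles of 4; rank 12 is the King.
    deck = [rank for rank in range(13) for _ in range(4)]
    random.shuffle(deck)
    piles = [deck[4 * i:4 * i + 4] for i in range(13)]
    current, drawn = 12, 0           # start by drawing from the King pile
    while piles[current]:
        card = piles[current].pop()  # draw a card...
        drawn += 1
        current = card               # ...and move to the pile it names
    return drawn == 52               # win iff every card was drawn

random.seed(1)
trials = 50_000
wins = sum(play_clock_solitaire() for _ in range(trials))
print(wins / trials)  # close to 1/13 ~ 0.0769
```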
Clock solitaire game and principle of deferred decision
probability theory;randomized algorithms;probabilistic algorithms
null
_softwareengineering.240767
I understand and enjoy the benefits of garbage collection in Java. However, I don't understand why there is no way in Java to explicitly (and quickly) destroy an object. Surely this could be useful in some cases; I assume performance-critical software.

It's true that in Java the GC will delete an object with no existing reference to it, so if I want an object deleted I can set the reference to it to null. But if I understand correctly, it isn't ensured that the GC will indeed delete the object, at least not immediately. And that's out of the programmer's control. Why is there no way in Java to explicitly destroy objects?

While I understand that Java was designed to be used as a high-level language that abstracts away some of the technical details from the programmer to make things easier, Java has become one of the most widely used languages and is used in huge projects. I assume that in huge projects, performance is often an issue. Since Java has grown to become what it is, why wasn't explicit object destruction added to the language?
Why is there no deterministic object destruction in Java?
java;garbage collection;object
null
_unix.64461
I wonder, is it possible to compile Unix along with my custom program so that it runs only my custom program on startup of the computer, as if my program started with boot-up?
Compile Unix with a custom application
compiling;application
If you mean you only want users to be able to run one program, you can replace the user's shell with the absolute path to the program in the passwd file (assuming a local passwd file).
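For example, a hypothetical /etc/passwd entry for a user who should only ever run one program might look like this (the username, IDs, and program path are made up for illustration):

```
kiosk:x:1001:1001:Kiosk user:/home/kiosk:/usr/local/bin/myapp
```

When that user logs in, /usr/local/bin/myapp runs in place of a shell, and the session ends as soon as the program exits.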
_codereview.104381
I've become interested in prime factorization since solving Project Euler problem 3 (finding the largest prime factor of 600851475143). Learning here that initializing lists with many elements to later prune them is the computational equivalent of swimming with bricks, I challenged myself to code a function that returns all prime factors as efficiently as possible, without the need for initializing a big list beforehand.

The routine takes the input n and divides it by 2 as many times as possible before leaving a remainder, redefining n to be the result of the division each time. When 2 no longer cleanly divides n, it moves to 3, but it is after 3 that the crux of the inefficiency arises.

After 3, if n does not equal 1 and there are still factors to be found, the next factor tried is the next consecutive odd integer after 3. For example, take the number \$129373200 = (2^4) (3^5) (5^2) (11^3)\$. After the algorithm divides all 2's, 3's and 5's away, the next test-number is 7, which I view as efficient because 7 is prime. However, as the routine iterates, 9 is tested before 11, and I see this as inefficient because 9 is composite.

If I edit the code such that it stores all tested numbers in a list, and checks whether the next test-number is a multiple of a previously tested one before iterating, the runtime slows down considerably. Is there an efficient way to do this sort of check via elementary functions, without storing tested numbers in a list (or set, dictionary, etc.)?

TLDR: I want to understand why factoring numbers with 8-digit-long prime factors takes the code 7 seconds, versus numbers with 9-digit-long prime factors that take nearly 2.5 minutes to solve. I want to reduce this large jump in runtime.

from datetime import datetime

def pf(n):
    startTime = datetime.now()
    factors = []              # Initialize a list to store prime factors.
    while n % 2 == 0:         # While n/2 continues to yield no remainder:
        factors += [2]        # append 2 to the factor list
        n /= 2                # and redefine n as n/2.
    if n == 1:                # If n is 1, all of its prime factors have been found,
        print datetime.now() - startTime
        return factors        # so return the factor list to the user.
    p = 3                     # Initialize a count at 3, the next prime after 2.
    while p*p <= n:           # While n is greater than or equal to p*p:
        if n % p == 0:        # If p divides n:
            factors += [p]    # append p to the factor list
            n /= p            # and redefine n as n/p.
        else:                 # If p doesn't divide n:
            p += 2            # see if the next consecutive odd number divides n.
    # Once all smaller factors are found, and n is smaller than p*p,
    factors += [n]            # append n to the factor list,
    print datetime.now() - startTime
    return factors            # and then return it to the user.

print pf(121)
print pf(42768)
print pf(19440)
print pf(97200)
print pf(129373200)  # inefficiency example
print pf(600851475143)
print pf(31610054640417607788145206291543662493274686990)    # consecutive primes
print pf(4383898882371133212190175441147530134182228613257)  # 5-6 digit primes
print pf(815145012617325671714771027149)                     # 8-digit primes
print pf(9657874875862260078751562987967607300225789)        # 9-digit primes

Output:

0:00:00 [11, 11]
0:00:00 [2, 2, 2, 2, 3, 3, 3, 3, 3, 11]
0:00:00 [2, 2, 2, 2, 3, 3, 3, 3, 3, 5]
0:00:00 [2, 2, 2, 2, 3, 3, 3, 3, 3, 5, 5]
0:00:00 [2, 2, 2, 2, 3, 3, 3, 3, 3, 5, 5, 11, 11, 11]
0:00:00 [71, 839, 1471, 6857L]
0:00:00 [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113L]
0:00:00.023000 [37489, 59617, 63577, 63841, 74771, 75521, 82217, 97283, 102181, 104717L]
0:00:06.935000 [15485867, 32452843, 32452867, 49979687L]
0:02:24.362000 [122949829, 314606891, 393342743, 674506111, 941083987L]
Prime factorization for integers with up to 9-digit long prime factors
python;performance;programming challenge;primes
Code Comments

First, that is a thoroughly excessive amount of commenting. I appreciate your zeal in documentation, but you do not need to comment every line. Just a couple of comments here and there are thoroughly sufficient.

Second, your function should do its job. If you want to time it, that is an external job. Use the timeit library:

def pf(n):
    # stuff that doesn't involve datetime
    return factors

print timeit.timeit('pf(val)', setup='from __main__ import pf; val = 145721*(145721+2)', number=10)

That will print how long it took your function to run 10 times. Great library, highly recommended.

Thirdly, factors += [p] should be spelled factors.append(p).

Algorithm Comments

The reason your code performs so poorly is because of its algorithmic complexity. You are trying every odd number from 2 to sqrt(N), so your algorithm is O(sqrt(N)). But here N is the actual number... if we express N in terms of digits, it's O(sqrt(10^N)) = O(10^(N/2)). So when we go from a 30-digit number to a 43-digit number, we would expect to see an increase in time on the order of a million in the worst case (if we were looking at twin primes). The fact that we see an increase of only 20x is an artifact of case selection more than anything else. I tried picking multiples of (arbitrary) twin primes, and the runtimes I got after 10 runs each were:

145721 * 145723         0.536s
1117811 * 1117813       2.716s
18363797 * 18363799    45.436s

Roughly 10x each time. Yay predictions.

So what can we do to speed this up? When our times are this bad, it's either because we have a really stupid bug or we're using a bad algorithm. There isn't anything I see that will save this particular algorithm. At best, we can improve by changing the iteration to avoid multiples of 3 and 5:

jump = 4
while p*p <= n:
    ...
    else:
        p += jump
        jump ^= 6  # so it alternates between 2 and 4

That gets us to:

145721 * 145723         0.263s
1117811 * 1117813       2.172s
18363797 * 18363799    34.740s

Still exponential, but better constants at least.

Not sure how much better we're going to do than that. So let's just google some prime factorization algorithms. First one I came across was Pollard's rho. Simply copying the algorithm produces:

def pollard(n):
    from fractions import gcd

    def get_factor(n):
        x_fixed = 2
        x = 2
        cycle_size = 2
        factor = 1

        while factor == 1:
            for count in xrange(cycle_size):
                if factor > 1:
                    break
                x = (x * x + 1) % n
                factor = gcd(x - x_fixed, n)

            cycle_size *= 2
            x_fixed = x

        return factor

    factors = []
    while n > 1:
        next = get_factor(n)
        factors.append(next)
        n //= next

    return factors

That gives me the correct response on all three twin primes, with this performance:

145721 * 145723            0.021s  (12.5x improvement)
1117811 * 1117813          0.295s  ( 7.4x)
18363797 * 18363799        1.131s  (30.7x)
1500450269 * 1500450271    5.733s  (est. ~1000x)

Even jumping up to 10 digits, we still see good performance (we'd expect over an hour with the previous algorithm, so we're talking a 1000x improvement), and we see better growth too.

Sometimes, you just need a better algorithm.
_unix.320514
The audio on my PC does not work, even with headphones. I have tried running alsactl init as root and rebooting, but nothing happened.

uname -r:

    4.7.0-0.bpo.1-amd64

lspci | grep -i audio:

    00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)

cat /proc/asound/cards:

     0 [PCH            ]: HDA-Intel - HDA Intel PCH
                          HDA Intel PCH at 0xf7120000 irq 139
Audio doesn't work Debian Jessie
debian;audio;alsa
null
_cs.60251
I've been trying to find out how to determine hit or miss given this information: a 2-way set-associative cache containing 32 blocks, with 4 bytes in each block. Accesses were made to addresses 7, 13, 37, 27, 33, 11, 7, 18, 34, 23.

I know how to determine hit or miss in a direct-mapped cache, but that approach doesn't seem to carry over to a k-way set-associative cache. I would be thankful for any help, as I didn't find a proper explanation on the web.
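For concreteness, the bookkeeping can be sketched as a small simulation. The figures are worked out from the question's numbers, plus some explicit assumptions: 32 blocks in 2-way sets gives 16 sets, addresses are byte addresses so the block number is address // 4, the set index is block % 16, and LRU replacement is assumed (the question does not specify a policy).

```python
from collections import OrderedDict

def simulate(addresses, num_blocks=32, block_size=4, ways=2):
    # 32 blocks / 2 ways = 16 sets; each set holds up to 2 block tags in LRU order.
    num_sets = num_blocks // ways
    sets = [OrderedDict() for _ in range(num_sets)]
    results = []
    for addr in addresses:
        block = addr // block_size   # which memory block the byte is in
        index = block % num_sets     # which cache set that block maps to
        s = sets[index]
        if block in s:
            results.append("hit")
            s.move_to_end(block)     # refresh LRU position
        else:
            results.append("miss")
            if len(s) == ways:       # set full: evict least recently used
                s.popitem(last=False)
            s[block] = True
    return results

print(simulate([7, 13, 37, 27, 33, 11, 7, 18, 34, 23]))
# With these assumptions: two hits (the second 7, and 34, which shares block 8 with 33)
```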
2-way set associative mapping Hit or Miss
cpu cache
null
_cs.7846
Consider the following problem: Let a $k$-wheel be defined as an indexed circularly linked list of $k$ integers. For example, {3, 4, 9, -1, 6} is a 5-wheel with 3 at position 0, 4 at position 1, and so on. A wheel supports the operation of rotation, so that a one-step rotation would turn the above wheel into {6, 3, 4, 9, -1}, now with 6 at position 0, 3 at position 1, and so on.

Let $W_{N_k}$ be an ordered set of $N$ distinct $k$-wheels. Given some $W_{N_k}$ and some integer $t$, find a series of rotations such that
$$\forall\ 0 \leq i < k, \sum_{N \in W} N_i = t$$
In other words, if you laid out the wheels as a matrix, the sum of every column would be $t$. Assume that $W_{N_k}$ is constructed so that the solution is unique up to rotations of every element (i.e., there are exactly $k$ unique solutions that consist of taking one solution, then rotating every wheel in $W$ by the same number of steps).

The trivial solution to this problem involves simply checking every possible rotation. Here is some pseudocode for that:

    function solve(wheels, index)
        if wheels are solved: return true
        if index >= wheels.num_wheels: return false
        for each position 1..k:
            if solve(index + 1) is true: return true
            else: rotate wheels[index] by 1

    solve(wheels, 0)

This is a pretty slow solution (something like $O(k^n)$). I'm wondering if it is possible to do this problem faster, and also if there is a name for it.
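A direct, runnable transcription of that brute-force pseudocode, for reference (Python used for illustration; the names are mine, not from the original):

```python
from itertools import product

def rotate(wheel, steps):
    # A one-step rotation of [3, 4, 9, -1, 6] gives [6, 3, 4, 9, -1]:
    # the last element moves to position 0, matching the question's example.
    steps %= len(wheel)
    return wheel[-steps:] + wheel[:-steps] if steps else wheel[:]

def solve(wheels, t):
    # Try every combination of rotations: O(k^n), just like the pseudocode.
    k = len(wheels[0])
    for steps in product(range(k), repeat=len(wheels)):
        rotated = [rotate(w, s) for w, s in zip(wheels, steps)]
        if all(sum(col) == t for col in zip(*rotated)):
            return rotated
    return None

print(solve([[1, 2], [3, 4]], 5))  # [[1, 2], [4, 3]]
```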
sum of like indices in circular lists
algorithms
null
_codereview.75237
I've finally got a working Proxy pattern. The original task is presented here. My test case is on coliru. It appears to flow through the correct pathways; however, I am grateful for feedback and possible improvements.

EDIT: I've reworked the code to provide sensible trace output, so it is possible to examine the flow under various situations. I'm not sure I've caught every possible situation though.

EDIT: I think I've now trapped all possible scenarios. Complex cases like (a[u] = b[v]) = foo degenerate into a sequence of steps, and I've isolated all the possible steps. The only thing I still need to work out is whether there is any possibility of invalid pointers causing trouble. In the original code every PyObject* is wrapped in an Object, and the Object's destructor takes care of decrementing the reference count. In order to simplify the proxy logic, I'm using bare pointers -- otherwise the complexity is just horrific, with const considerations creeping in. So I need to make sure this doesn't introduce any danger.

Finally, I'm moving away from the idea of using ->, mainly because I'm realising that if ob[idx] is to behave like an object, the proxy class must contain all the initialisers and converters and methods that object contains. -> provides a way of avoiding duplicating the methods, but there is no way to avoid duplicating the initialisers and converters. Hence introducing a new syntax to solve one third of the problems is a bad plan. Especially when that syntax is a compromise in the first place.
It should be ob[idx].foo, not ->foo.

    #include <iostream>
    #include <string>
    #include <vector>
    #include <memory>

    using namespace std;

    using PyObject = string;

    // Dummies for Python C-API functions
    void PyObject_SetItem( PyObject* pyob, PyObject* key, PyObject* value ) {
        cout << "PyObject_SetItem " << *pyob << "[" << *key << "]=" << *value << endl;
    }

    PyObject* PyObject_GetItem( PyObject* pyob, PyObject* key ) {
        cout << "PyObject_GetItem " << *pyob << "[" << *key << "]" << endl;
        return new string( "item@" + *pyob + "[" + *key + "]" );
    }

    class Object {
    public:
        PyObject* p;

        Object() {};
        Object(PyObject* _p) : p{_p} { cout << "Object{" << (p==nullptr?"0":*p) << "}" << endl; }

        void someMethod() { cout << "Object::someMethod()" << endl; }

        PyObject* ptr() { return p; }

        Object& operator= (const Object& rhs) {
            cout << "const Object& operator=" << endl;
            p = rhs.p;
            return *this;
        }

        class Proxy {
        private:
            PyObject* container;
            PyObject* key;

        public:
            // at this moment we don't know whether it is 'c[k] = x' or 'x = c[k]'
            Proxy( PyObject* c, PyObject* k ) : container{c}, key{k} {
                cout << "Proxy(" << *c << "," << *k << ")" << endl;
            }
            ~Proxy(){ cout << "~Proxy()" << endl; }

            // Rvalue
            // e.g. fooObject = myList[5]
            operator Object() const {
                cout << "Proxy::operator Object() const   R-VALUE ACCESS" << endl;
                return { PyObject_GetItem(container,key) };
            }

            // Lvalue
            // e.g. (something = ) myList[5] = foo
            Proxy& operator= (const Object& rhs_ob) {
                cout << "Proxy& Proxy::operator= (const Object&)   L-VALUE ACCESS" << endl;
                PyObject_SetItem( container, key, rhs_ob.p );
                return *this; // allow daisy-chaining a = b = c etc
            }

            /*
               This is to handle ... = ob[a] = ob[b] = ...
               The compiler provides an automatic 'Proxy operator=', unless I provide one.
               My first idea is to eliminate this 'Proxy operator=', and hope that when
               the compiler encounters 'fooProxy=barProxy' its overload resolution will
               notice Proxy provides conversion to Object, and produce
               'fooProxy=Object(barProxy)'. Unfortunately, the deleted function still
               participates in overload resolution. (Why??) And consequently the
               compiler produces a 'Proxy operator=' error instead.
            */
            Proxy& operator= (const Proxy& rhs) {
                cout << "Proxy& Proxy::operator= (const Proxy&)   PROXY-PROXY" << endl;
                cout << "getting rhs..." << endl;
                PyObject* val = rhs->ptr();
                cout << "... = " << *val << endl;
                PyObject_SetItem( container, key, val );
                return *this; // allow daisy-chaining a = b = c etc
            }

            Object operator->() const {
                cout << "Object Proxy::operator->()" << endl;
                return { PyObject_GetItem(container,key) };
            }
        };

        /*
           This overload is unnecessary, and is just for efficiency.
           If ob is const, we know that we won't be doing ob[idx]=...
           And so we can bypass having to engage the proxy mechanism.
        */
        const Object operator[] (const Object& key) const {
            cout << "const Object Object::operator[] setting " << *p << "[" << *(key.p) << "]   CONST-SHORTCUT" << endl;
            return { PyObject_GetItem( p, key.p ) };
        }

        Proxy operator[] (const Object& key) {
            cout << "Proxy Object::operator[] creating proxy obj for " << *p << "[" << *(key.p) << "]" << endl;
            return { p, key.p };
        }

        const Object* operator -> () const { cout << "const Object* Object::operator ->" << endl; return this; }
        Object*       operator -> ()       { cout << "Object* Object::operator ->" << endl; return this; }
    };

    int main()
    {
        PyObject a="ob", b="idx", c="targ";
        Object ob=Object(&a), idx=Object(&b), targ=Object(&c);
        Object result;

        // std::endl flushes
        #define DO(x)  cout << "- - - - - - - - - - - - - - - - \n EXECUTING: " #x << endl;  x;

        DO( PyObject pyob_in_const_ob = "pyob_in_const_ob";
            const Object const_ob(&pyob_in_const_ob);
            //const_ob[idx] = targ; <-- can't assign to const obviously!
            result = const_ob[idx];
          )

        DO( Object out = ob[idx]; )
        DO( ob[idx] = targ )
        DO( auto x = ob[idx] )

        DO( PyObject d="idy";
            Object idy=Object(&d);
            ob[idx] = ob[idy];
          )

        DO( ob[idx] = ob[idy] = targ )

        ob[idx]->someMethod();

        DO( PyObject* w = ob[idx]->p; );

        DO( /* avoid "variable not used" warnings */ )
        cout << w;
        cout << &out;
    }
Proxy pattern for supporting '... = ob[idx]->someObjMember = ...' type access
c++;c++11;proxy
null
_unix.386061
I just got the Dell Precision 7520 which comes with Ubuntu 16.04 and the NVIDIA Quadro M1200 GPU. I installed Mint 18.2 and then the nvidia-375 apt package. After reboot, I hear the Mint jingle and the screen is blank. From my research this seems to have something to do with 'nomodeset'. However, during boot holding down Shift does not bring up any menus. Any ideas?
Mint 18.2, Cinnamon, NVIDIA Quadro M1200, nvidia-375 driver, nomodeset
linux mint;nvidia
null
_cstheory.9500
The definition of Ramsey numbers is the following: let $R(a,b)$ be the smallest positive integer such that every graph of order at least $R(a,b)$ contains either a clique on $a$ vertices or a stable set on $b$ vertices.

I am working on an extension of Ramsey numbers. While the study has some theoretical interest, it would be important to know the motivation behind these numbers. More specifically, I am wondering about the (theoretical or practical) applications of Ramsey numbers. For instance, is there any solution methodology for a real-life problem that uses Ramsey numbers? Or similarly, are there any proofs of theorems based on Ramsey numbers?
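As a concrete illustration of the definition (an addendum, not part of the original question): a clique or stable set on 3 vertices is equivalent to a monochromatic triangle in a red/blue edge colouring, so the classic value R(3,3) = 6 can be verified by brute force. Some 2-colouring of the edges of K5 avoids a monochromatic triangle, while every colouring of K6 contains one.

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    # colouring maps each edge (i, j) with i < j to colour 0 or 1
    return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_colouring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colours)))
               for colours in product((0, 1), repeat=len(edges)))

print(every_colouring_has_mono_triangle(5))  # False: K5 has a triangle-free 2-colouring
print(every_colouring_has_mono_triangle(6))  # True:  hence R(3,3) = 6
```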
Application of Ramsey Numbers
reference request;co.combinatorics;ramsey theory
See "Applications of Ramsey Theory to CS" by William Gasarch.
_codereview.54673
I was just wanting a code review on this image slider I made. I don't have anyone else to give me their opinion on the code I write outside of work.Is this sloppy? Is it wrong? Is it ineffective? Does it have bad performance?You can see my repo on the slider here.deitySlider.jsvar deitySlider = { init: function(userDefinedOptions) { options = userDefinedOptions; settings.images = document.querySelectorAll(options.slider + > img); settings.stages = settings.images.length; slider = document.getElementById(options.slider.slice(1)); initCaptionElement(); initNavigation(); initHoverEvent(); deitySlider.cycle(); //Load first slide. setTimerInterval(); function initHoverEvent() { if(!options.pauseOnHover) return; slider.addEventListener(mouseover, function() { clearInterval(settings.interval); }, false) slider.addEventListener(mouseout, function() { if (settings.interval != null) setTimerInterval(); }, false) } function initCaptionElement() { if (!options.captions) return; slider.appendChild(document.createElement(div)).className = deity-captions; slider.getElementsByClassName('deity-captions')[0].innerHTML += '<div class=deity-title></div>'; } function initNavigation() { if(!options.directionNav) return; slider.appendChild(document.createElement(div)).className = deity-directionNav; slider.getElementsByClassName('deity-directionNav')[0].innerHTML += '<a class=deity-prevNav>'+ options.prevText +'</a><a class=deity-nextNav>'+ options.nextText +'</a>'; slider.getElementsByClassName('deity-prevNav')[0].addEventListener('click', function() { if(settings.stage > 0) settings.stage -= 2; else settings.stage = (settings.stages - 2); deitySlider.cycle(play); setTimerInterval(); }, false); slider.getElementsByClassName('deity-nextNav')[0].addEventListener('click', function() { deitySlider.cycle(); setTimerInterval(); }, false); } function setTimerInterval() { if (settings.interval) { clearInterval(settings.interval) }; settings.interval = setInterval(deitySlider.cycle, 
options.pauseTime); }; console.log(\nslider: + options.slider + \npauseTime: + options.pauseTime + \nTransitionTime: + options.transitionTime); console.log(settings.stages); }, cycle: function() { console.log(cycle); var cSettings = settings; var cOptions = options; cSettings.stage = ++cSettings.stage % cSettings.stages; for (var i = 0, length = cSettings.stages; i < length; i++) cSettings.images[i].classList.add(hidden); cSettings.images[cSettings.stage].classList.remove(hidden); if(cOptions.captions) insertImageCaption(); function insertImageCaption() { var title_text = cSettings.images[cSettings.stage].getAttribute('title'); var description_text = '<div class=deity-description>' + cSettings.images[cSettings.stage].getAttribute('description') + '</div>'; var title_div = slider.getElementsByClassName('deity-title')[0]; if(title_text != null && title_text != '') title_div.innerHTML = title_text + description_text; else title_div.innerHTML = ; } }};var options = ({ slider: #deitySlider, pauseTime: 3000, transitionTime: 1000, directionNav: false, pauseOnHover: false, captions: false, prevText: 'Prev', nextText: 'Next',});var settings = ({ stage: -1, //Starting Image stages: null, interval: null, images: null,});var slider;index.html<html><head><link rel=stylesheet type=text/css href=deity-style.css/><title>Deity Slider Demo</title></head><body><div id=container> <div id=slider_container> <div id=slider_one class=deity-Slider> <img src=slide_1.jpg title=Experienced description=Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum libero lacus, feugiat sit amet auctor ut, porttitor quis justo. /> <img src=slide_2.jpg title=An Idea description=Cras id metus luctus, tristique nibh eu, cursus nulla. Etiam a neque nec erat fringilla sagittis a vulputate diam. Fusce vulputate mauris condimentum tortor scelerisque, non congue nisl luctus. Sed ac nibh non sapien tristique consectetur sit amet vitae libero. Sed porttitor lobortis pretium. 
Sed viverra lorem dolor, sit/> <img src=slide_3.jpg title=Deity Design description= Donec lorem mi, vehicula ut pharetra in, tempus venenatis ante. Donec in dui auctor justo vehicula congue. /> <img src=slide_4.jpg title=Knowledgeable description=Aliquam erat volutpat. Nullam blandit, mi ac viverra mollis, nibh purus auctor mauris, sed dapibus ante lectus quis nibh. Interdum et malesuada fames ac ante ipsum primis in faucibus. Nulla facilisi. /> </div> </div> </div><script type=text/javascript src=deity.slider.js></script><script> window.addEventListener(load, function load(){ window.removeEventListener(load, load, false); deitySlider.init({ slider: '#slider_one', transitionTime: 1000, pauseTime: 3000, directionNav: true, pauseOnHover: true, captions: true, }); },false);</script></body></html>
Pure JS Image Slider
javascript;image
Your code needed quite a bit of refactoring. You had unnecessary nested functions, globals all over the place, inconsistent selectors, etc.I took your code and refactored it so that you now only have 1 global, 'DietySlider'. I also completely refactored the structure so that it separates out every function instead of nesting them. The new structure is much easier to maintain and further extend features.DietySlider.js'use strict'; var DietySlider = (function() { var _doc = document; var slider; var options = { slider: #DietySlider, pauseTime: 3000, transitionTime: 1000, directionNav: false, pauseOnHover: false, captions: false, prevText: 'Prev', nextText: 'Next', }; var settings = { stage: -1, //Starting Image stages: null, interval: null, images: null, }; return { init: function(userDefinedOptions) { options = userDefinedOptions; slider = _doc.querySelector(options.slider); settings.images = _doc.querySelectorAll(options.slider + > img); settings.stages = settings.images.length; //run init functions DietySlider.initCaptionElement(); DietySlider.initNavigation(); DietySlider.initHoverEvent(); DietySlider.cycle(); //Load first slide DietySlider.setTimerInterval(); }, initCaptionElement: function() { if ( 0 == options.captions) return; var element = _doc.createElement(div); element.className = deity-captions; element.innerHTML = '<div class=deity-title></div>'; slider.appendChild(element); }, initNavigation: function() { if( 0 == options.directionNav ) return; var element = _doc.createElement(div); element.className = deity-directionNav; element.innerHTML = '<a class=deity-prevNav>'+ options.prevText +'</a><a class=deity-nextNav>'+ options.nextText +'</a>'; slider.appendChild(element); var prevNav = slider.querySelector('.deity-prevNav'); prevNav.addEventListener('click', function() { settings.stage > 0 ? 
settings.stage -= 2 : settings.stage = (settings.stages - 2); DietySlider.cycle(play); DietySlider.setTimerInterval(); }, false); var nextNav = slider.querySelector('.deity-nextNav'); nextNav.addEventListener('click', function() { DietySlider.cycle(); DietySlider.setTimerInterval(); }, false); }, initHoverEvent: function() { if( 0 == options.pauseOnHover ) return; slider.addEventListener(mouseover, function() { clearInterval(settings.interval); }, false); slider.addEventListener(mouseout, function() { if (settings.interval > null) { DietySlider.setTimerInterval(); } }, false); }, cycle: function() { console.log(cycling stage: + (settings.stage + 1)); var cSettings = settings; var cOptions = options; cSettings.stage = ( cSettings.stage + 1 ) % cSettings.stages; for (var i = 0, length = cSettings.stages; i < length; i++) { if ( i === cSettings.stage ) { cSettings.images[i].classList.remove(hidden); } else { cSettings.images[i].classList.add(hidden); } } if( cOptions.captions ) { DietySlider.insertImageCaption(cSettings); } }, insertImageCaption: function(cSettings) { var title_div = slider.querySelector('.deity-title'); var title_text = cSettings.images[cSettings.stage].getAttribute('title'); var description = cSettings.images[cSettings.stage].getAttribute('description') var description_text = '<div class=deity-description>' + description + '</div>'; if( title_text != null && title_text != '' && title_text != undefined ) { title_div.innerHTML = title_text + description_text; } else { title_div.innerHTML = ; } }, setTimerInterval: function() { if ( settings.interval > null ) { clearInterval(settings.interval); }; settings.interval = setInterval(DietySlider.cycle, options.pauseTime); }, }; })();index.html<!DOCTYPE HTML><html> <head> <title>Deity Slider Demo</title> <link rel=stylesheet type=text/css href=deity-style.css/> </head> <body> <div id=container> <div id=slider_container> <div id=slider_one class=deity-Slider> <img src=slide_1.jpg title=Experienced 
description=Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum libero lacus, feugiat sit amet auctor ut, porttitor quis justo. /> <img src=slide_2.jpg title=An Idea description=Cras id metus luctus, tristique nibh eu, cursus nulla. Etiam a neque nec erat fringilla sagittis a vulputate diam. Fusce vulputate mauris condimentum tortor scelerisque, non congue nisl luctus. Sed ac nibh non sapien tristique consectetur sit amet vitae libero. Sed porttitor lobortis pretium. Sed viverra lorem dolor, sit/> <img src=slide_3.jpg title=Deity Design description= Donec lorem mi, vehicula ut pharetra in, tempus venenatis ante. Donec in dui auctor justo vehicula congue. /> <img src=slide_4.jpg title=Knowledgeable description=Aliquam erat volutpat. Nullam blandit, mi ac viverra mollis, nibh purus auctor mauris, sed dapibus ante lectus quis nibh. Interdum et malesuada fames ac ante ipsum primis in faucibus. Nulla facilisi. /> </div> </div> </div> <script src=DietySlider.js></script> <script> // Document ready (function(){ (function(){var a=setInterval(function(){complete===document.readyState?ready(a):!1},16.6666666667)})(); var ready = function(a) { clearInterval(a); DietySlider.init({ slider: '#slider_one', transitionTime: 1000, pauseTime: 3000, directionNav: true, pauseOnHover: true, captions: true }); } })(); </script> </body></html>
_unix.224270
I recently installed Debian 8 on my workstation, which previously ran Ubuntu 15.04, but since Debian 8 came into my life I have not succeeded in running ssh with X11 forwarding.

The case is: I have a server running Ubuntu 14.04, and when I log in to that server from my laptop (also Ubuntu 14.04), I have no problem getting X11 forwarding working. But if I try to log in from my Debian 8 workstation, I get this error when trying to run xterm (output is from ssh -X serverip):

    connect /tmp/.X11-unix/X0: No such file or directory
    xterm: Xt error: Can't open display: localhost:12.0

An ls shows that the file exists:

    $ ls -la /tmp/.X11-unix/X0
    srwxrwxrwx 1 root root 0 Aug 18 17:53 /tmp/.X11-unix/X0

In the server's sshd_config I have the following options enabled:

    X11Forwarding yes
    X11UseLocalhost yes   (I have tried both yes and no here)
    X11DisplayOffset 12

and in my ssh_config on my Debian 8, I have the following lines enabled:

    Host *
    ForwardAgent yes
    ForwardX11 yes
    ForwardX11Trusted yes

As far as I can see with 'ssh -vv -X serverip', all X11 debug messages are the same on both the working laptop and the non-working workstation, and ~/.Xauthority gets created on the server when I log in.

The following environment variables are set when I log in to the server from my laptop (some variables are omitted):

    SSH_CLIENT=1.1.1.1 59061 22
    SSH_TTY=/dev/pts/1
    SSH_AUTH_SOCK=/tmp/ssh-qHuaV5m5QO/agent.32572
    LANGUAGE=en_US:en
    LOGNAME=xxx
    SSH_CONNECTION=1.1.1.1 59061 2.2.2.2 22
    DISPLAY=localhost:12.0

The following environment variables are set when I log in to the server from my Debian 8 workstation (some variables are omitted):

    SSH_CLIENT=3.3.3.3 59148 22
    SSH_TTY=/dev/pts/1
    SSH_AUTH_SOCK=/tmp/ssh-fwG6aBA4he/agent.32763
    SSH_CONNECTION=3.3.3.3 59148 2.2.2.2 22
    DISPLAY=localhost:12.0

All tests are done with ssh -X serverip, and the firewall is disabled.

Note: if I log in to the server from my laptop via the Debian workstation, X11 forwarding works fine.

Please, can anyone help me?
I have tried to google this problem for 2 days now, without any solution yet. So I really hope someone here can help me out.
ssh X11 forwarding not working, but only on Debian 8
linux;debian;ssh
null
_unix.229877
I'm trying to figure out how to generate a menu.lua for Awesome WM on Debian (actually Kali 2.0). In a nutshell, it's a Lua file representing the structure of the Applications menu so it can be reproduced in Awesome's menu.

This is the relevant section in the Arch wiki, but I can't seem to find the equivalent on Debian. There doesn't seem to be an xdg-menu counterpart.

Update

I tried menu-xdg from the Debian repos, as suggested in the comments; unfortunately this doesn't appear to be what I'm looking for:

    root@kali:/etc/menu-methods# ./menu-xdg --help
    install-menu [-vh] <menu-method>
      Read menu entries from stdin in "update-menus --stdout" format
      and generate menu files using the specified menu-method.
      Options to install-menu:
        -h --help    : this message
           --remove  : remove the menu instead of generating it.
        -v --verbose : be verbose
Generating xdg menu lua files on Debian
debian;kali linux;awesome;freedesktop;xdg
null
_unix.97157
I have this script to divide a file of 100,000 or more lines into pieces of 50,000 lines:

    desc() {
      echo $1 $2|awk '{d=$1;}
      BEGIN {a=1;b=0}
      {
        f=d"0"a;
        while ((getline x<d)>0) {
          print x>>f".txt"
          if (b++>=50000) {
            close(f);b=0;a++;f=(a<10)?d"0"a:f=da
          }
        }
      }
      END {print "Terminado ... !"}'
    }
    if [ -f $1 ]; then
      desc $1 $2
    fi

When I execute it I get this error message:

    sh-3.2$ parte.sh pa.txt
    : command not found
    '/parte.sh: line 2: syntax error near unexpected token `
    '/parte.sh: line 2: `desc()
    sh-3.2$

Can anyone help to solve it?
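As an aside on the underlying task (independent of the shell error itself): splitting a file into fixed-size line chunks is what coreutils' split -l 50000 pa.txt does directly. The same idea as a small Python sketch, with illustrative file names of my own choosing:

```python
def split_file(lines, chunk_size=50000, prefix="pa"):
    # Write successive chunk_size-line chunks to pa01.txt, pa02.txt, ...
    # Takes any iterable of lines, so it streams files of arbitrary size.
    names, out, count, part = [], None, 0, 0
    for line in lines:
        if out is None:
            part += 1
            name = "%s%02d.txt" % (prefix, part)
            names.append(name)
            out = open(name, "w")
        out.write(line)
        count += 1
        if count == chunk_size:
            out.close()
            out, count = None, 0
    if out is not None:
        out.close()
    return names
```

Usage would be, for example: with open("pa.txt") as f: split_file(f).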
command not found and syntax error near unexpected token
shell script
null
_webmaster.72817
I'm running an e-commerce website and have a featured product block on the home page.

When the banner is generated, the URL is dynamically assembled using the Google Campaign URL Builder guidelines:

    ?utm_source=homepage&utm_medium=banner&utm_content=MSI+N660+GAMING+GTX+660+2GB+Desktop+Graphics+Card&utm_campaign=featured%20product

In other words:

    Campaign source is:  homepage
    Campaign medium is:  banner
    Campaign content is: MSI N660 Gaming GTX 660 2GB Desktop Graphics Card
    Campaign name is:    featured product

The problem is Google Analytics seems to group these together regardless of the content, which is not ideal because I'm trying to track each featured product's efficacy separately.

If I navigate to Acquisition > Campaigns > featured product, I see a single item under the Source/Medium column, which in this case is homepage/banner.

I was under the impression that if I click on the homepage/banner option, it should show me the campaign contents, but it doesn't. It simply displays homepage/banner again, this time greyed out and not clickable, and I cannot see the contents.

Is there any way to work around this, or am I doing something incorrectly?
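For reference, the dynamic assembly described above can be reproduced in a few lines; this is a sketch using the question's own parameter values (note that urlencode encodes spaces as +, whereas the original URL mixed + and %20, which are equivalent encodings of a space):

```python
from urllib.parse import urlencode

params = {
    "utm_source": "homepage",
    "utm_medium": "banner",
    "utm_content": "MSI N660 GAMING GTX 660 2GB Desktop Graphics Card",
    "utm_campaign": "featured product",
}
query = "?" + urlencode(params)  # builds the ?utm_... query string
print(query)
```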
How to get a more detailed Google Analytics report about a campaign
google analytics;url parameters;campaigns
All you need to do is Select Ad Content as you secondary dimension. Below is a snapshot:Also, Have you thought about using Google Tag Manager and dataLayers to do this? Its much more efficient. You can use HTML5 data-attributes or you can setup a custom dimension.
_unix.62505
I have a bootable usb flash drive with grub2 handling booting of ISOs (mostly different spins of Ubuntu). I am editing the menu.cfg myself and have my own script to update grub because I don't want to waste time using external tools.I would like to hide 64-bit ISOs on a 32-bit system, so that I can't make the mistake of trying to boot an incompatible ISO.Is it possible for grub2 to detect whether the processor is x64 or i386 and display a different menu (or preferably enable/disable some menu options) accordingly?Edit: I'm aware of the grub2 CLI command cpuid -l, to check for long mode, but I'm not sure if or how that can be used in menu.cfg.
Can grub2 detect processor architecture, and display options accordingly?
ubuntu;grub2;live usb
null
_unix.342079
On a CentOS 7.1 system, when I run netstat -antp, there is one line that contains a listening TCP port without a process ID:

    tcp        0      0 0.0.0.0:35598           0.0.0.0:*               LISTEN      -

I can actually connect to this port:

    root# telnet 127.0.0.1 35598
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    Connection closed by foreign host.

How can I find which process actually listens on / uses this port? How is it possible not to have a process association?
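One low-level way to chase this down by hand (a sketch of the standard technique, assuming Linux's /proc layout): /proc/net/tcp lists each socket with an inode number, and /proc/PID/fd contains symlinks of the form socket:[inode], so matching the two maps a port to its owning processes. When no process has such an fd, the socket is held by the kernel itself (kernel services such as NFS-related daemons can do this), which is one way netstat ends up printing "-"; the other common cause is simply not running netstat as root.

```python
import glob, os

def port_of(local_field):
    # /proc/net/tcp stores the local address as "HEXADDR:HEXPORT"
    return int(local_field.split(":")[1], 16)

def inode_for_port(port, table="/proc/net/tcp"):
    with open(table) as f:
        next(f)                     # skip the header line
        for line in f:
            fields = line.split()
            if port_of(fields[1]) == port and fields[3] == "0A":  # 0A = LISTEN
                return fields[9]    # the socket's inode number
    return None

def pids_for_inode(inode):
    pids = []
    for fd in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            if os.readlink(fd) == "socket:[%s]" % inode:
                pids.append(int(fd.split("/")[2]))
        except OSError:
            pass                    # process exited, or permission denied
    return sorted(set(pids))
```

If pids_for_inode returns an empty list even as root, no user-space process owns the socket.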
Listening socket without process? What it means?
tcp;netstat
null
_unix.350963
I am running SUSE Linux Enterprise Server 12 SP2 and I want to install Kubernetes.

I tried this command:

    zypper in kubernetes

But I got this message:

    No provider of 'kubernetes' found.

I downloaded this file: http://download.opensuse.org/repositories/Virtualization:/containers/SLE_12_SP2/src/kubernetes-1.5.3-8.2.src.rpm

I used:

    rpm -ivh kubernetes-1.5.3-8.2.src.rpm

But there is no evidence that Kubernetes is installed.

I found a relevant .ymp file: http://software.opensuse.org/ymp/Virtualization:containers/SLE_12_SP2/kubernetes.ymp

But I do not know how to use it.

How do I install Kubernetes on SUSE Linux?
How do I install Kubernetes on Linux SUSE?
docker;suse;kubernetes
null
_reverseengineering.11888
I am bringing a .EXE, a .PDB, and a source code .C file onto my computer and attempting to look at my program in OllyDbg.

I am compiling C programs on one machine (an XP VM, actually) and running them in OllyDbg 2.01 under Windows 7 on another machine. I want to look at release code, so I set the compile and link options as described in the link. So I have a 'prog.exe' and 'prog.pdb' on the Win7 machine where OllyDbg is. I can launch prog.exe in OllyDbg, I see the labels for main() and my other functions, and I can go to them with the CTRL+G "Enter Expression to Follow" dialog.

But I also like to see the associated source code line, to be able to see it below the code in the CPU window, and to be able to double-click and open the source code .C file. Unless I recreate the entire same directory path on my Win7 (Olly) computer, Olly can't get at this source code, even though it sees the label names for code blocks. I've spent some time looking through the settings in both Visual Studio (6, for me), to try not to have absolute paths, and Olly, to change where it looks. Any ideas?
In Ollydbg, how do I change the path to a source code file without recreating entire directory structure?
ollydbg;c;pdb
null
_softwareengineering.39509
My company is giving us the possibility to sign up for some offsite training classes on design patterns.

Browsing through the brochures, I'm already feeling bored (and somewhat repelled by the marketingy, buzzwordy, silver-bullety, enterprisey managerese). I already know the basics about design patterns (I read the GoF book years ago, used a few as needed, read articles on the net, etc.). I'm worried that the training is going to be mostly watching a PowerPoint with stuff I mostly know, and arguing details over a UML diagram.

My programming experience is mostly in games, simple web development and mathy stuff, in Python, C++ or simple scripting languages; I never worked on anything enterprisey, or in Java or .NET (for some reason, Java and .NET seem especially associated with design patterns), nor am I planning to in the foreseeable future. I'm much more interested in things like functional programming and Haskell and making micro domain-specific languages to solve specific problems; I'm closer to the hacker culture (I'm mostly self-taught) than to the enterprise culture.

But maybe this is just me having too high an opinion of myself and passing up a good opportunity to learn useful stuff. Or being a snob and refusing to learn about different cultures.

So, how can I tell if a design patterns class is going to be useful? Have you found such trainings useful? Have you ever felt apprehensive about them?
How should I evaluate a training class?
design patterns;education;training
null
_unix.42465
Currently I am using gcc 4.3.6 and the Eclipse IDE for C++ development. I want to debug my project in Eclipse with gdb, but I am having a hard time debugging code when it contains STL containers. Also, I am not using the STL directly; I have wrappers for each container. I know we have to use pretty-printing for looking into STL containers, but it is not working in Eclipse. I have worked in Visual Studio in the past; I migrated to gcc and Eclipse because compilation time in VS is far longer than with gcc. However, the debugger in VS is very good. I don't know much about gcc and Eclipse. I just want to know if there is a similar debugger for Linux or Unix.
Unix/Linux C++ debugger that supports STL containers?
gcc;debugging;eclipse;ide
The debugging features provided by gdb are based on the set of symbols that comes with your compiled code. Actually there isn't a debug version available for the STL, but there are at least two ports that can add debug symbols to your code:

http://www.stlport.org/
http://code.google.com/p/stl-debug/

gdb without debugging symbols is useless, so you have to use a debug version of each library that you are using in your code if you want to test your code.
_unix.388325
Suppose I passed the kernel a parameter that it doesn't understand, for example blabla or eat=cake. What would the kernel do with these unknown parameters? The traditional behaviour would be to pass any unknown parameter on to init. In the case where the Linux kernel starts with an early user space (initramfs), would it pass them to /init in the initramfs?
What does the Linux kernel do with unknown kernel parameters?
kernel;linux kernel;init;initramfs;kernel parameters
null
_unix.98492
I have found this answer providing a way to manipulate the current xterm window's dimensions, i.e.:

    echo -ne "\e[8;30;30t"

How can I modify this to maximize the window (xterm's Alt+Enter shortcut)? Also, where do I find more info on these xterm command-line modifiers?

UPDATE: See multiple solutions below for both maximize and full screen (without title and borders).
Maximize xterm via bash script
xterm;escape characters;window management
The commands are

    echo -ne '\e[9;1t'

to maximize and

    echo -ne '\e[9;0t'

to restore the original size. It's described in the xterm control sequences documentation.
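The same sequences can be emitted from any scripting language, not just the shell; for example, a small Python sketch (the sequences only have an effect when stdout is an xterm-compatible terminal with window operations enabled):

```python
import sys

# xterm window-manipulation sequences (CSI ... t), as quoted in the answer above
SEQUENCES = {
    "maximize": "\x1b[9;1t",  # same as echo -ne '\e[9;1t'
    "restore":  "\x1b[9;0t",  # same as echo -ne '\e[9;0t'
}

def window(action):
    sys.stdout.write(SEQUENCES[action])
    sys.stdout.flush()
```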
_softwareengineering.209401
I'm a developer working on a WordPress project. I work on this alone, and I want to improve the way I use Git, but I don't know where to start.

Currently, I'm using Git to commit all of the work that I've done for the day, as long as that specific work is done. Here are some instances where I commit a specific piece of work:

A specific feature whose code can be written within a day
A bugfix

For bigger features that can take weeks to finish, I create a branch, then commit the sub-features once they're done.

But often I feel like each commit contains too much; for example, when I'm trying to solve lots of bugs, I do not commit each and every bug that I resolve separately. Sometimes I also end up working on two or more things that should be in their own commits. Then I find it hard to target just the specific files within a specific change, so I end up putting them all in one commit, with a commit message like "add search feature and add caching feature".

What I want to know are the best practices for solo developers working with Git. Solo developers out there, you're welcome to answer with your own workflow as well.
Git for a solo developer
version control;git;solo development
Solo or multiple is the same. Commit early, commit often. As soon as you go from non-broken to non-broken and changed, commit. If you feel that you're committing a lot, it is only because you're not used to it.Here is a benchmark for you. When I was at Google, the average commit was under 20 lines. That will feel crazy when you first do it, but really, it is not too much.
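On the "two features in one commit" part of the question: mixed changes can be split by staging per file (and `git add -p` goes finer, staging individual hunks within one file). A throwaway sketch — the file names are made up for illustration, and the demo builds its own temporary repository so it is self-contained:

```shell
#!/bin/sh
# Sketch: splitting two unrelated changes into two focused commits
# by staging per file. For changes tangled inside a single file,
# `git add -p` lets you stage hunk by hunk instead.
set -e
repo=$(mktemp -d)                 # throwaway repo just for the demo
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "you"

echo 'search code'  > search.php  # hypothetical file names
echo 'caching code' > cache.php

git add search.php
git commit -q -m 'add search feature'    # first logical change only

git add cache.php
git commit -q -m 'add caching feature'   # second logical change

git log --oneline                        # two focused commits instead of one
```

Each commit now describes exactly one change, which also makes later `git log` and `git bisect` sessions much more useful.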
_reverseengineering.14926
Any time I try to crack a program I get this error in OllyDbg:

What causes this error? How can I fix it?

If anyone has suggestions or advice, that would be great. I already tried pressing F9, but that doesn't help.
Access violation when reading [OLLYDEBUG]
ollydbg;debugging;debuggers;exception
null
_unix.211588
Docker images are supposed to be immutable, yet when I import one of my images on another machine, it behaves differently.

To reproduce the bug, I will start from a debian-wheezy image built using debootstrap. I build my image using a Dockerfile:

FROM debian-wheezy
RUN apt-get install -y fail2ban
RUN rm /var/run/fail2ban/fail2ban.sock

The last command prevents fail2ban from crashing at the next start: for some reason, the socket file stays there after installing fail2ban, even if I stop the service manually in the Dockerfile. Fail2ban cannot restart if the file is still there.

Launching the image and starting fail2ban will succeed. We can check the content of the /var/run/fail2ban directory:

$ docker build -t test/fail2ban .
$ docker run test/fail2ban ls /var/run/fail2ban
fail2ban.pid

However, if I export the image and import it to another machine:

$ docker save test/fail2ban > /tmp/fail2ban.tar
$ scp /tmp/fail2ban.tar user@machine:/tmp
$ ssh user@machine cat /tmp/fail2ban.tar | docker load
$ ssh user@machine docker run test/fail2ban ls /var/run/fail2ban
fail2ban.pid
fail2ban.sock

This time the socket file is there, preventing me from starting the service. Can somebody explain this behaviour and how to fix the problem?

Here is some more information:

$ uname -a
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2 x86_64 GNU/Linux
$ docker --version
Docker version 1.6.0, build 4749651
$ ssh user@machine uname -a
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1 x86_64 GNU/Linux
$ ssh user@machine docker --version
Docker version 1.6.0, build 4749651
inconsistent Docker image
docker;fail2ban
null
_unix.164778
I have a strange problem and I don't know where to look to fix it.

I am running the command lynx https://example.com/index.php from the server where the website is hosted, and I am getting a 404 Not Found error. When running lynx with this URL from another server, or visiting it in standard browsers, I see index.php normally.

Any ideas what could be the problem and how to fix it?
lynx domain shows 404 error when running from localhost
dns;lynx
null
_codereview.133850
Here is the code I have written for a HackerRank challenge which takes multiple arrays and attempts to determine if there exists an element in an array such that the sum of the elements on its left is equal to the sum of the elements on its right. If there are no elements to the left/right, then the sum is considered to be zero.

Detailed problem statement:

Input Format

The first line contains \$T\$, the number of test cases. For each test case, the first line contains \$N\$, the number of elements in the array \$A\$. The second line for each test case contains \$N\$ space-separated integers, denoting the array \$A\$.

Output Format

For each test case print YES if there exists an element in the array such that the sum of the elements on its left is equal to the sum of the elements on its right; otherwise print NO.

It works fine, but it's definitely not an optimal solution, as it fails two test cases (time outs occur). Could anybody provide any insights on optimising it further?

T = int(input())
N = []
A = []
for i in range(T):
    #N.append(int(input()))
    N = int(input())
    arr = input().strip().split(' ')
    arr = list(map(int, arr))
    A.append(arr)
#print(A)
for i in A:
    count = 0
    for j in range(len(i)):
        preSum = sum(i[0:j])
        postSum = sum(i[j+1:len(i)])
        if(preSum == postSum):
            count += 1
    if(count > 0):
        print("YES")
    else:
        print("NO")
HackerRank, Sherlock and Array
python;programming challenge;time limit exceeded
General comments

You should follow PEP8, the coding standard for Python. This means using underscore_names for variables instead of camelCase.

i is a bad generic name, except when iterating explicitly over integers. Maybe use for arr in A. I would use more descriptive variable names. Instead of A use maybe arrays?

Your input could be more succinct:

for i in range(T):
    N = int(input())
    A.append(map(int, input().split()))

This will store a map object in A, instead of a list. But since that is iterable we won't have any problems.

If the input was in one line, with the first element being the length of the array and the rest of the line being the array, it would have been even easier:

for i in range(T):
    N, *arr = map(int, input().split())
    A.append(arr)

This uses a nice feature of Python 3. N will take the first element of an iterable and arr will take all the (possible) rest. You can even specify variables at the end. Try these cases out to get a feel for it:

a, *b = []          # ValueError: not enough values to unpack (expected at least 1, got 0)
a, *b = [0]         # a = 0, b = []
a, *b = [0,1]       # a = 0, b = [1]
a, *b = [0,1,2]     # a = 0, b = [1,2]
a, *b, c = [0]      # ValueError: not enough values to unpack (expected at least 2, got 1)
a, *b, c = [0,1]    # a = 0, b = [], c = 1
a, *b, c = [0,1,2]  # a = 0, b = [1], c = 2

Performance

Instead of always calculating all sums, you could store the pre_ and post_sum and add/subtract the current element. You should also stop after having found one occurrence.

for array in A:
    found = False
    pre_sum, post_sum = 0, sum(array)
    for element in array:
        post_sum -= element
        if pre_sum == post_sum:
            found = True
            break
        pre_sum += element
    print("YES" if found else "NO")

I'm not exactly sure, but I think there is a small performance difference between these two, for large arr:

arr = list(map(int, arr))
# and
arr = [int(x) for x in arr]
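To make the running-sum idea concrete, here is a small self-contained sketch of the check as a function (the function name is mine, not part of the original solution):

```python
def has_balance_point(arr):
    """Return True if some element has equal sums on its left and right.

    Running-sum version: O(n) total instead of summing both sides
    from scratch at every index.
    """
    pre_sum, post_sum = 0, sum(arr)
    for element in arr:
        post_sum -= element          # right side no longer includes `element`
        if pre_sum == post_sum:
            return True
        pre_sum += element           # left side now includes `element`
    return False
```

Printing `"YES" if has_balance_point(arr) else "NO"` per test case then reproduces the required output.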
_softwareengineering.102459
Here's the type of situation that I've been struggling with, and I'm certainly not the first:

- Project has a budget based on the estimated time to develop the solution
- Deadline to turn the project over to the PM for QA is on some date
- I am working on multiple projects at once, and each has a budgeted number of hours and a deadline

Tracking my progress against a budget and deadline would be pretty straightforward if I knew exactly how I was going to implement it, and only had one or two projects to focus on at a time. However, I'm building websites with Drupal, and every project includes some functionality that might be available as a module - or the available module won't work at all, and it will require significant effort to build that functionality.

What strategy have you found most helpful to keep track of where you are in a project so that you can accurately report to the PM how far along you are, and identify early on if the project is in danger of running late or going over budget, especially in cases where the time required for some parts is unknown?
How do you track your progress in a project?
project management;time tracking
null
_webmaster.204
Sometimes the boss wants to know who changed something on the website or changes their mind several times on where a button should go, what color something is, or whether or not a page should show up at all. Is there a simple way for a small 2-3 person web team to keep track of these constant changes?
How can I keep track of changes to my website over time?
change management
null
_codereview.20466
I am running a simulation with 250 interacting agents and have a few functions that are called over and over again. Even with precomputing all distances between agents before the N^2 (250x250) interaction loop, my simulation is still very slow. Are there any C++ optimization tricks that I could use to speed these up?

This is the most-used function in my simulation. It calculates the squared distance between two agents in a continuous space. I have a feeling there isn't much that can be done to further optimize this, but you guys have surprised me with some tricks before:

double tGame::calcDistanceSquared(double fromX, double fromY, double toX, double toY)
{
    double diffX = fromX - toX;
    double diffY = fromY - toY;

    return ( diffX * diffX ) + ( diffY * diffY );
}

Here's another expensive function in my simulation. It calculates the angle from one agent to another agent relative to the 'from' agent's heading. As you can see, I already did a little precomputing with the atan2() function (and that DOES speed things up a bit, despite what I've read in other posts).

double tGame::calcAngle(double fromX, double fromY, double fromAngle, double toX, double toY)
{
    double Ux = 0.0, Uy = 0.0, Vx = 0.0, Vy = 0.0;

    Ux = (toX - fromX);
    Uy = (toY - fromY);

    Vx = cosLookup[(int)fromAngle];
    Vy = sinLookup[(int)fromAngle];

    int firstTerm = (int)((Ux * Vy) - (Uy * Vx));
    int secondTerm = (int)((Ux * Vx) + (Uy * Vy));

    if (fabs(firstTerm) < 1000 && fabs(secondTerm) < 1000)
    {
        return atan2Lookup[firstTerm + 1000][secondTerm + 1000];
    }
    else
    {
        return atan2(firstTerm, secondTerm) * 180.0 / cPI;
    }
}

Finally, here's the monster function that uses the calcDistanceSquared() function so much. This is run every simulation time step, and there are 2,000 time steps per simulation (and MANY simulations). The most expensive part is the calcDistanceSquared() in the N^2 loop.

void tGame::recalcPredAndPreyDistTable(double preyX[], double preyY[], bool preyDead[],
                                       double predX, double predY,
                                       double predDists[250], double preyDists[250][250])
{
    for (int i = 0; i < 250; ++i)
    {
        if (!preyDead[i])
        {
            predDists[i] = calcDistanceSquared(predX, predY, preyX[i], preyY[i]);
            preyDists[i][i] = 0.0;

            for (int j = i + 1; j < 250; ++j)
            {
                if (!preyDead[j])
                {
                    preyDists[i][j] = preyDists[j][i] =
                        calcDistanceSquared(preyX[i], preyY[i], preyX[j], preyY[j]);
                }
            }
        }
    }
}
Calculating angles and distances
c++;performance;computational geometry
null
_webmaster.31287
I'm building a website that's not security-critical in any way at all, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, as my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply:

X-Frame-Options: Deny

to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never an actual good usage of frames that's not an explicit feature of the website.
Is there any good reason I would want my website to be framed?
security;iframe
Some folks get a fair amount of traffic by allowing themselves to be framed by social network sharing sites like StumbleUpon - if your page is at all likely to be shared, I'd avoid doing this, and handle instances of framing in another way.Also, your site can already be sucked in and repurposed by benign services like Google Translate - and I believe an HTTP header like that won't prevent that kind of usage.
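For reference, if you do decide to send the header, it is typically set once at the web-server level rather than per page. A sketch for nginx and Apache (the Apache variant requires mod_headers); SAMEORIGIN is the softer alternative that still permits self-framing:

```
# nginx (inside a server block)
add_header X-Frame-Options "DENY";

# Apache (requires mod_headers)
Header always set X-Frame-Options "DENY"
```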
_unix.268075
I got a Logitech Bluetooth Multi-Device Keyboard K480 and after installing it just by following the bluetooth pairing, the default settings will leave the function keys mapped as multimedia keys, so I have to hold fn to access F1, F2, F3, etc.

That is very counter-intuitive for me, and I'd like to remap it so I can access the function keys directly and access the multimedia keys with fn.

This is F1 without holding fn and then holding it:

KeyPress event, serial 37, synthetic NO, window 0x4e00001,
    root 0xd6, subw 0x0, time 63445847, (-438,408), root:(284,460),
    state 0x10, keycode 180 (keysym 0x1008ff18, XF86HomePage), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 37, synthetic NO, window 0x4e00001,
    root 0xd6, subw 0x0, time 63445922, (-438,408), root:(284,460),
    state 0x10, keycode 180 (keysym 0x1008ff18, XF86HomePage), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyPress event, serial 37, synthetic NO, window 0x4e00001,
    root 0xd6, subw 0x0, time 63446510, (-438,408), root:(284,460),
    state 0x10, keycode 67 (keysym 0xffbe, F1), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 37, synthetic NO, window 0x4e00001,
    root 0xd6, subw 0x0, time 63446597, (-438,408), root:(284,460),
    state 0x10, keycode 67 (keysym 0xffbe, F1), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False
How can I remap the multimedia keys for function keys on this bluetooth keyboard?
keyboard layout
null
_unix.68354
When I press Tab Tab after typing _ in a terminal, Bash suggests 206 possibilities. I tried to run one of them, _git_rm, but nothing happened. What are they?

Here is a screenshot:
What's those underscore commands?
linux;bash;terminal;linux mint
These functions whose name begins with an underscore are part of the programmable completion engine. Bash follows zsh's convention here, where the function that generates completions for somecommand is called _somecommand, and if that function requires auxiliary functions, they are called _somecommand_stuff.These completion functions typically do nothing useful or raise an error if you call them manually: they're intended to be called from the completion engine.This follows on a fairly widespread practice in various programming languages to use a leading underscore to indicate that a function or variable is in some way internal to a library and not intended for the end-user (or end-programmer).
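As an illustration of the convention, here is a toy completion function registered the same way the real ones are (the command name mycmd is made up for illustration; this requires Bash):

```shell
# A toy completion function following the same underscore naming convention.
# The completion engine calls it with the current word in COMP_WORDS/COMP_CWORD
# and reads the candidate completions back from the COMPREPLY array.
_mycmd() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "start stop status" -- "$cur") )
}
complete -F _mycmd mycmd   # register it, just as bash-completion does
```

After sourcing this, typing `mycmd st<Tab>` would offer start, stop, and status — while calling `_mycmd` by hand, outside the completion machinery, does nothing visibly useful, which is exactly the behaviour observed in the question.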
_unix.145322
I need to rename the files given below and move them to some other path:

1234551abcde20140718023216001.txt.809047512.2014_07_07_13:47:44
000001abcde20140718023216001.txt.34568.001.2014_07_07_13:50:44
44444abcded20140718023216001.txt.1111111.2014_07_07_13:47:44

Expected result:

1234551abcde20140718023216001.txt.809047512
000001abcde20140718023216001.txt.34568.001
44444abcded20140718023216001.txt.1111111

I only need to remove the timestamp attached to each filename, and move the files to another directory on AIX. Please help me write the script. I am new to Unix: I am able to rename single files but not all of them, and moving them to the other path also had issues.

For your reference, here is what I tried:

#!/usr/bin/ksh
file1=`echo 1234551abcde20140718023216001.txt.809047512.2014_07_07_13:47:44 | awk -F "." '{for(i=1;i<NF;i++) if ($i != 1) f=f?f FS $i:$i; print f; f=""}'`
echo $file1
Renaming the files and move to other path
shell;scripting;rename
null
_unix.128095
I have a microcontroller device which reads data from sensors and sends it via a serial-to-USB converter (FTDI232 cable). This converter is connected to an ARMv7 mini-computer - a CuBox with Ubuntu 13.04. I have also connected a WiFi USB adapter to the CuBox.

I would like to read data from the serial port, send it via WiFi, and receive it on a Windows PC. Is this possible without network socket programming, using available tools? Something like a pipe/bridge/redirection between the serial port and WiFi.

I don't need to communicate with this box, so a constant, one-way stream of data over WiFi will be sufficient. Thank you for any resources and ideas on how to set up this communication.
How to send data from serial port over wifi?
wifi;serial port;socket;bridge
null
_softwareengineering.135993
Does anyone know if there is some kind of tool to put a number on technical debt of a code base, as a kind of code metric? If not, is anyone aware of an algorithm or set of heuristics for it?If neither of those things exists so far, I'd be interested in ideas for how to get started with such a thing. That is, how can I quantify the technical debt incurred by a method, a class, a namespace, an assembly, etc. I'm most interested in analyzing and assessing a C# code base, but please feel free to chime in for other languages as well, particularly if the concepts are language transcendent.
How can I quantify the amount of technical debt that exists in a project?
code quality;technical debt;code metrics
Technical debt is just an abstract idea that, somewhere along the lines of designing, building, testing, and maintaining a system, certain decisions were made such that the product has become more difficult to test and maintain. Having more technical debt means that it will become more difficult to continue to develop a system - you either need to cope with the technical debt and allocate more and more time for what would otherwise be simple tasks, or you need to invest resources (time and money) into reducing technical debt by refactoring the code, improving the tests, and so on.

There are a number of metrics that might give you some indication as to the quality of the code:

- Code coverage. There are various tools that tell you what percentage of your functions, statements, and lines are covered by unit tests. You can also map system and acceptance tests back to requirements to determine the percentage of requirements covered by a system-level test. The appropriate coverage depends on the nature of the application.
- Coupling and cohesion. Code that exhibits low coupling and high cohesion is typically easier to read, understand, and test. There are code analysis tools that can report the amount of coupling and cohesion in a given system.
- Cyclomatic complexity is the number of unique paths through an application. It's typically counted at the method/function level. Cyclomatic complexity is related to the understandability and testability of a module. Not only do higher cyclomatic complexity values indicate that someone will have more trouble following the code, but the cyclomatic complexity also indicates the number of test cases required to achieve coverage.
- The various Halstead complexity measures provide insight into the readability of the code. These count the operators and operands to determine volume, difficulty, and effort. Often, these can indicate how difficult it will be for someone to pick up the code and understand it, often in instances such as a code review or a new developer to the code base.
- Amount of duplicate code. Duplicated code can indicate potential for refactoring to methods. Having duplicate code means that there are more lines for a bug to be introduced, and a higher likelihood that the same defects exist in multiple places. If the same business logic exists in multiple places, it becomes harder to update the system to account for changes.

Often, static analysis tools will be able to alert you of potential problems. Of course, just because a tool indicates a problem doesn't mean there is a problem - it takes human judgement to determine if something could be problematic down the road. These metrics just give you warnings that it might be time to look at a system or module more closely.

However, these attributes focus on the code. They don't readily indicate any technical debt in your system architecture or design that might relate to various quality attributes.
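As a toy illustration of the last metric, a rough line-based duplicate counter might look like the sketch below. Real clone detectors compare token sequences rather than raw lines, so treat this only as a way to get a first signal:

```python
from collections import Counter

def duplicate_line_report(sources, min_len=20):
    """Count non-trivial lines that appear more than once across files.

    `sources` is a list of file contents as strings; `min_len` filters out
    short lines (braces, `return`, etc.) that duplicate naturally.
    Returns {line: occurrence_count} for lines seen more than once.
    """
    counts = Counter()
    for text in sources:
        for line in text.splitlines():
            stripped = line.strip()
            if len(stripped) >= min_len:   # ignore trivially short lines
                counts[stripped] += 1
    return {line: n for line, n in counts.items() if n > 1}
```

Tracking how the size of this report changes from release to release says more than any single absolute number - which is true of most of the metrics above.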
_unix.217781
I have a cPanel dedicated server with PHP 5.2.17 installed. I need to use PHP 5.3 for my subdomain; the directory that needs PHP 5.3 is /home/website30/public_html/blog/.

Can anyone provide step-by-step instructions for installing PHP 5.3 in this situation, including the steps to bind PHP 5.3 to the directory /home/website30/public_html/blog/?
Installing a different php version for a subdomain
centos;php;version;cpanel
null
_ai.3703
Cepheus is an artificial intelligence designed to play Texas Hold'em. By playing against itself and learning where it could have done better, it became very good at the game. Slate Star Codex comments:

    I was originally confused why they published this result instead of heading to online casinos and becoming rich enough to buy small countries, but it seems that it's a very simplified version of the game with only two players. More interesting, the strategy was reinforcement learning - the computer started with minimal domain knowledge, then played poker against itself a zillion times until it learned everything it needed to know.

Apparently Cepheus currently just plays against one person. Seeing as it managed to develop an amazing strategy for this very simplified environment, what's stopping it from working on real/full poker games?
What's stopping Cepheus from generalizing to full poker games?
reinforcement learning;gaming
The reason why Cepheus can't generalize has to do with the number of decision points.

The same authors recently let loose DeepStack (DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit (HUNL) Poker) which is freaking many professional poker players out.

In the DeepStack arxiv paper, they say:

    AI techniques (Cepheus) have previously shown success in the simpler game of heads-up limit Texas hold'em, where all bets are of a fixed size resulting in just under 10^14 decision points.

    ...The imperfect information game HUNL is comparable in size to go, with the number of decision points exceeding 10^160...

    Imperfect information games require more complex reasoning than similarly sized perfect information games. The correct decision at a particular moment depends upon the probability distribution over private information that the opponent holds, which is revealed through their past actions.

Using the same strategy for HUNL as Cepheus did is out of the question. Rather, taking an educated guess or using intuition based on the previous play (referred to as Continual Re-solving in the paper) is a method which can better handle this gargantuan game. For more information check out the DeepStack website.
_codereview.40145
I just started trying out SVG the other day. Eventually I hope to be able to know how to do what SE does with their reputation graphs.

For now, I've just been trying to set up an easier way to make lines. I think my code below is decent, but please let me know what I can improve on or what I'm doing poorly. I would like your thoughts before I start adding animations, creativity, and data to it.

Demo: http://jsfiddle.net/ECKNh/

var Line = {};
Line.LINES = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];

var SVGline = function (l) {
    this.l = l;
}

for (var i = Line.LINES.length; i > -1; i -= 1) {
    Line[Line.LINES[i]] = new SVGline(Line.LINES[i]);
}

SVGline.prototype.createline = function (x1, y1, x2, y2, color, w) {
    var aLine = document.createElementNS('http://www.w3.org/2000/svg', 'line');
    aLine.setAttribute('x1', x1);
    aLine.setAttribute('y1', y1);
    aLine.setAttribute('x2', x2);
    aLine.setAttribute('y2', y2);
    aLine.setAttribute('stroke', color);
    aLine.setAttribute('stroke-width', w);
    return aLine;
}

function start() {
    var aSvg = document.createElementNS('http://www.w3.org/2000/svg', 'svg');
    aSvg.setAttribute('width', 1000);
    aSvg.setAttribute('height', 400);
    var w = document.getElementById('container');
    for (var i = 1; i < 11; i += 1) {
        var x1 = Math.floor(Math.random() * 500 / 2);
        var xx = Line[i].createline(x1, 0, 200, 300, 'rgb(0,0,' + x1 + ')', i);
        aSvg.appendChild(xx);
    }
    w.appendChild(aSvg);
}

start();
Making lines with SVG and JavaScript
javascript;svg
Let me know what I can improve on or what I'm doing poorly.

If you are going to do animations with those lines, you will want to keep track of them. I am assuming that is why you have an array of LINES. However, in createline you create the SVG element and just return it; you do not keep a reference to it. So it will be hard to animate those lines: you would have to query the DOM to find them back.

I suggest you have an object that creates SVG elements, a factory so to speak, and then another object that tracks lines you have created so far, mostly to do animations and other fun stuff.

The factory object could be something like:

SVG = {
    createCanvas : function( width, height, containerId ){
        var container = document.getElementById( containerId );
        var canvas = document.createElementNS('http://www.w3.org/2000/svg', 'svg');
        canvas.setAttribute('width', width);
        canvas.setAttribute('height', height);
        container.appendChild( canvas );
        return canvas;
    },
    createLine : function (x1, y1, x2, y2, color, w) {
        var aLine = document.createElementNS('http://www.w3.org/2000/svg', 'line');
        aLine.setAttribute('x1', x1);
        aLine.setAttribute('y1', y1);
        aLine.setAttribute('x2', x2);
        aLine.setAttribute('y2', y2);
        aLine.setAttribute('stroke', color);
        aLine.setAttribute('stroke-width', w);
        return aLine;
    }
}

Initially the lines object could be as simple as:

var lines = [];
lines.addLine = function( line ){
    this.push( line );
    return line;
}

Your start function would then be something like:

function start() {
    var canvas = SVG.createCanvas( 1000, 400, 'container' ),
        lineElement, i, x1;
    for (i = 1; i < 11; i += 1) {
        x1 = Math.floor(Math.random() * 500 / 2);
        lineElement = lines.addLine( SVG.createLine(x1, 0, 200, 300, 'rgb(0,0,' + x1 + ')', i) );
        canvas.appendChild( lineElement );
    }
}
_cs.22399
There is a shop which consists of N items and there are M buyers. Each buyer wants to buy a specific set of items. However, the cost of all transactions is same irrespective of the number of items sold. So, the shopkeeper needs to maximize the number of buyers. The buyers will buy only if all the items are being sold. Items are unique. All items need not be sold.So, basically, we have a bipartite graph. We need to find a set of edges which maximize the number of nodes on Buyer vertex set such that each node on item set has only one edge. Any suggestions?
How to maximize the number of buyers in a shop?
graph theory;greedy algorithms;bipartite matching
null
_cs.33539
Reading the book Introduction to Linear Optimization by Bertsimas and Tsitsiklis, I've come across the following subject: driving the artificial variables out of the basis.

The description is as follows: suppose an artificial variable $x_{B(j)}$ is in the basis. Then, examining the $j$-row of the simplex tableau, there are two cases:

1. Either the $j$-row of $B^{-1}A$ contains a nonzero element (I understand this case!),
2. or all elements in the row are equal to $0$, which implies that the rows of $A$ are linearly dependent, and it is shown that the constraint $a_j x = b_j$ is redundant.

The book then suggests we delete the $j$-row of the simplex tableau and continue from there. Why is this possible? By deleting the $j$-row of the tableau, we remove the $j$-th basic variable from the basis. Also, does deleting the $j$-row of the simplex tableau correspond to deleting the $j$-row of $A$, forming the basis from the basic variables not including the $j$-th basic variable (now one dimension smaller), and then forming the simplex tableau corresponding to these?

Now, how can we be sure the vectors corresponding to the basic variables, excluding the $j$-th, are linearly independent and form a basis (the $j$-element having been removed from these vectors)? How do I know the simplex tableau with the $j$-row removed is actually a real simplex tableau, and what vectors and basis matrix does it correspond to?

Please give me advice and tell me whether I'm wrong and what to do; I've been really thinking this over, but haven't come to a conclusion.
Introduction to Linear Optimization: Driving the artificial variables out of the basis (case: no entries in the $j$-row are nonzero)
optimization;linear programming
null
_unix.85539
Recently, these messages started popping up straight into my prompt when I'm connected to the OpenSUSE system in question via PuTTY:

Message from syslogd@host at Aug 5 11:04:03 ...
 kernel:[ 6177.851012] EIP: [<75c0234e>] 0x75c0234e SS:ESP 0068:f324dde1

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020] Process sh (pid: 6245, ti=f2bee000 task=f32fd2b0 task.ti=f2bee000)

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020] Stack:

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020] Call Trace:

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020] Inexact backtrace:

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020]

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020] Code: Bad EIP value.

Message from syslogd@host at Aug 5 11:15:01 ...
 kernel:[ 6836.654020] EIP: [<75c0234e>] 0x75c0234e SS:ESP 0068:f2befead

I know some very basic stuff about Linux but this catches me off guard. What does this mean? How do I troubleshoot?

edit: Turns out the system is actually unreachable now; while it does reply to ping, I cannot connect to it via SSH. Is there anything I can do physically on the machine?
Messages from syslogd, what do they mean and what do I do?
kernel;opensuse;syslog
null
_codereview.11753
This is a fairly trivial example of a question I come up against often. In the example below I tend to think that allowing duplication sometimes results in clearer code. Does anyone agree/disagree on this point?

First way: an additional local variable is used so method.Invoke is called just once on the last line. I think this has slightly more logic to decode than the second way.

private static void Invoke<T>(string[] args, MethodInfo method) where T : new()
{
    T target = new T();
    target.InvokeMethod("Initialize", errorIfMethodDoesNotExist: false, accessNonPublic: true);

    string[] methodArgs = null;
    if (args.Length > 1)
    {
        methodArgs = args.Skip(1).ToArray();
    }

    method.Invoke(target, methodArgs);
}

Second way: no additional local variable, but the call to method.Invoke is duplicated in both branches of the if statement. I think this is clearer even though logic is duplicated.

private static void Invoke<T>(string[] args, MethodInfo method) where T : new()
{
    T target = new T();
    target.InvokeMethod("Initialize", errorIfMethodDoesNotExist: false, accessNonPublic: true);

    if (args.Length > 1)
    {
        method.Invoke(target, args.Skip(1).ToArray());
    }
    else
    {
        method.Invoke(target, null);
    }
}
Duplicate method call or add logic to set local variables as parameters?
c#
I think this would be even better, in this case:

private static void Invoke<T>(string[] args, MethodInfo method) where T : new()
{
    T target = new T();
    target.InvokeMethod(
        "Initialize",
        errorIfMethodDoesNotExist: false,
        accessNonPublic: true);

    string[] methodArgs = args.Skip(1).ToArray();
    method.Invoke(target, methodArgs);
}

Skip() on an empty collection returns an empty collection.
_scicomp.20856
I have a matrix which is almost upper triangular, except that the last row has nonzero elements, and I want to perform the QR decomposition on that matrix.

Does anyone know the name of such a matrix? And since I don't want to use the built-in MATLAB qr function for such a matrix, can anyone suggest a better algorithm for me to use?
QR decomposition
linear algebra;matlab;algorithms;matrices
null
_unix.251721
I'm using the following command to run a QEMU VM:qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -hda disk.qcow2By default, the guest OS has internet access, but can also access open ports on the host OS. How can I prevent the guest OS from accessing the host ports (but without limiting its internet access)?
How to disable QEMU guest access to host ports
networking;kvm;qemu
This looks somewhat problematic if you do not have experience with advanced iptables manipulation and an overview of how Linux filters traffic for local processes.

In your mode qemu runs an emulated NAT: all calls to the NIC from the guest will be translated into socket/connect/send/recv calls by the qemu process itself. This means that connections are made by the machine itself, from 127.0.0.1. At this point you can run qemu as another user, and filter that user by adding an owner match:

iptables -I OUTPUT -o lo -m owner --uid-owner username -m multiport --dports ports -j DROP

where username is the name of the user you want to filter and ports is a comma-separated list of ports you wish to disable for that machine. To run qemu as another user, you need to run it via tools like sudo, or by logging in as the user with su or login.

Without this, you end up filtering yourself, so if you add a general rule to filter ports, you will be blocked from accessing these ports as well.

Another way is to change the way qemu does networking. A good way to filter traffic well is to bind qemu to a virtual ethernet device:

1. Enable packet forwarding.
2. Install tunctl, and add a virtual network interface whose owner is you:

   tunctl -u yourname -t qemu

   (remember to add this command to something like rc.local to make it permanent)
3. Configure the qemu interface (use ip / ifconfig or another OS-provided tool) to assign a free /24 subnet to it. This subnet needs to be configured in your guest OS as well.
4. Then run qemu with -net tap,ifname=qemu,script=off and configure guest OS networking again.

Then you can easily filter guest OS traffic, which is represented by the qemu virtual interface:

iptables -I FORWARD -i qemu -m multiport --dports ports -j DROP

should work.

But now the NAT stopped working. If you need to make NAT work again, you should add a rule that rewrites IP addresses going out of your machine. If you have an eth0 interface where all your traffic goes, you enable NAT for it with:

iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
_softwareengineering.314176
PHP variables are dynamic in type, so I can do:

$x = 'hello world!';
$x = strlen($x);

Sometimes this is trivial and I could save many lines of code, but it reduces clarity. I'm using a text editor, but I guess IDEs (and programmers) could be confused if a variable changes type.

A less trivial case:

if (is_string($data))
    $lines = explode(PHP_EOL, $data);
else
    $lines = $data;

versus:

if (is_string($data))
    $data = explode(PHP_EOL, $data);

Of course I could do something like:

$lines = is_string($data) ? explode(PHP_EOL, $data) : $data;

But that's not the point.

BTW: I'm assuming $data is a string or an array, and this should be documented as such.
What is the impact of re-defining a dynamic type variable (such as in PHP)?
php;variables
null
_cogsci.3236
Sometimes a student who lacks knowledge tries to change the topic of a question so that it is about something he knows and can speak about. Is this a known phenomenon in psychology?
What is it called when a student tends to speak about what he knows?
terminology;educational psychology
null