153,011
I need to download a large file (1GB). I also have access to multiple computers running Linux, but each is limited to a 50kB/s download speed by an admin policy. How can I distribute the download of this file across several computers and merge the segments after they have all been downloaded, so that I can receive it faster?
The common protocols HTTP, FTP and SFTP support range requests, so you can request part of a file. Note that this also requires server support, so it might or might not work in practice. You can use curl with the -r or --range option to specify the range, and eventually just cat the files together. Example:

curl -r 0-104857600 -o distro1.iso 'http://files.cdn/distro.iso'
curl -r 104857601-209715200 -o distro2.iso 'http://files.cdn/distro.iso'
[…]

And eventually, when you have gathered the individual parts, you concatenate them:

cat distro* > distro.iso

You can get further information about the file, including its size, with the --head option:

curl --head 'http://files.cdn/distro.iso'

You can retrieve the last chunk with an open range:

curl -r 604887601- -o distro9.iso 'http://files.cdn/distro.iso'

Read the curl man page for more options and explanations. You can further leverage ssh and tmux to ease running and keeping track of the downloads on multiple servers.
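As an illustration, the per-machine piece could be scripted along these lines (a minimal sketch; the URL, part size and file names are hypothetical, and it assumes the server honours range requests):

    #!/bin/sh
    # Download one 100 MiB slice of the file, selected by part number.
    # Usage: ./fetch-part.sh <part-number>   (run a different part on each machine)
    URL='http://files.cdn/distro.iso'        # hypothetical URL
    PART=$1                                  # 0-based part index
    CHUNK=$((100 * 1024 * 1024))             # bytes per part
    START=$((PART * CHUNK))
    END=$((START + CHUNK - 1))
    curl -r "$START-$END" -o "part$(printf '%03d' "$PART")" "$URL"
    # Afterwards, collect the parts on one machine and run: cat part* > distro.iso

The zero-padded output names keep the shell glob in that final cat in numeric order even with ten or more parts.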
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/153011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
153,091
I have a file that only uses \n for new lines, but I need it to have \r\n for each new line. How can I do this? For example, I solved it in Vim using :%s/\n/\r\n/g , but I would like to use a script or command-line application. Any suggestions? I tried looking this up with sed or grep , but I got immediately confused by the escape sequence workarounds (I am a bit green with these commands). If interested, the application is related to my question/answer here
You can use unix2dos (which can be found on Debian):

unix2dos file

Note that this implementation won't insert a CR before every LF, only before those LFs that are not already preceded by one (and only one) CR, and it will skip binary files (those that contain byte values in the 0x0 -> 0x1f range other than LF, FF, TAB or CR).

Or use sed:

CR=$(printf '\r')
sed "s/\$/$CR/" file

Or use awk:

awk '{printf "%s\r\n", $0}' file

or:

awk -v ORS='\r\n' 1 file

Or use perl:

perl -pe 's|\n|\r\n|' file
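To verify the result, GNU sed's l command prints line endings explicitly; a CRLF-terminated line shows a \r before the end-of-line marker $:

    $ printf 'dos\r\nunix\n' | sed -n l
    dos\r$
    unix$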
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/153091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59802/" ] }
153,102
When starting XTerm the prompt starts at the first line of the terminal. When running commands the prompt moves down until it reaches the bottom, and from then on it stays there (not even Shift-Page Down or the mouse can change this). Rather than have the start of the terminal lifetime be "special", the prompt should always be at the bottom of the terminal. Please note that I have a multi-line prompt. Of course, it should otherwise work as before (resizeable, scrollable, no unnecessary newlines in the output, and no output mysteriously disappearing), so PROMPT_COMMAND='echo;echo;...' or similar is not an option. The solution ideally should not be shell-specific.

Edit: The current solution, while working in simple cases, has a few issues:

- It's Bash-specific. An ideal solution should be portable to other shells.
- It fails if other processes modify PS1. One example is virtualenv, which adds (virtualenv) at the start of PS1, which then always disappears just above the fold.
- Ctrl-l now removes the last page of history.

Is there a way to avoid these issues, short of forking XTerm?
If using bash, the following should do the trick:

TOLASTLINE=$(tput cup "$LINES")
PS1="\[$TOLASTLINE\]$PS1"

Or (less efficient, as it runs one tput command before each prompt, but works after the terminal window has been resized):

PS1='\[$(tput cup "$LINES")\]'$PS1

To prevent tput from changing the exit code, you can explicitly save and reset it:

PS1='\[$(retval=$?;tput cup "$LINES";exit $retval)\]'$PS1

Note that the variable retval is local; it doesn't affect any retval variable you might have defined otherwise in the shell. Since most terminals' cup capability is the same \e[y;xH, you could also hardcode it:

PS1='\[\e[$LINES;1H\]'$PS1

If you want it to be safe against later resetting of PS1, you can also utilize the PROMPT_COMMAND variable. If set, it is run as a command before the prompt is output. So the effect can also be achieved by:

PROMPT_COMMAND='(retval=$?;tput cup "$LINES";exit $retval)'

Of course, while resetting PS1 won't affect this, some other software might also change PROMPT_COMMAND.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
153,106
I have a file with groups of IP addresses. The file looks like this:

London:
1.1.1.0-1.1.1.200
172.25.2.0-172.25.2.100
Germany:
2.2.2.0-2.2.2.100
192.168.1.0-192.168.1.200
172.25.2.0-172.25.2.200

So when I search for an IP address ( ./program.sh 172.25.2.32 ) the output should be London and Germany.
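One way ./program.sh could be sketched with awk (the script name comes from the question; the group file name and its exact layout are assumptions based on the sample above):

    #!/bin/sh
    # Usage: ./program.sh 172.25.2.32
    # Prints every group whose ranges contain the given IPv4 address.
    awk -v ip="$1" '
      function num(a,  p) { split(a, p, "."); return ((p[1]*256 + p[2])*256 + p[3])*256 + p[4] }
      /:$/ { group = substr($0, 1, length($0) - 1); next }   # "London:" -> London
      /-/  { split($0, r, "-")
             if (num(ip) >= num(r[1]) && num(ip) <= num(r[2])) print group }
    ' groups.txt    # groups.txt is an assumed file name

The num() helper converts dotted-quad notation to a single 32-bit integer so the range check becomes a plain numeric comparison.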
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78167/" ] }
153,136
I would like to understand what is meant by a network interface being up, because the ip addr or ifconfig command shows an interface as up even when there is no IP associated with it. For example, on RHEL7:

[root@IDCDVAM887 ~]# ifconfig ens256
ens256: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:50:56:9e:19:5b  txqueuelen 1000  (Ethernet)
        RX packets 229406  bytes 59265584 (56.5 MiB)
        RX errors 0  dropped 229454  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

(or)

[root@IDCDVAM887 ~]# ip addr show ens256
5: ens256: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:50:56:9e:19:5b brd ff:ff:ff:ff:ff:ff

What is the real use of showing it as UP when the interface doesn't have an IP at all? I believe when there is no IP, there could be no communication on it? Then what is the use of it?
The LOWER_UP is the state of the Ethernet link (or other link layer protocol). It's defined as "Driver signals L1 up", which basically means the cable is fitted and it can see another device on the other end of the cable. The UP means that it has been enabled. This can be controlled by you (or a script) using the ip link set <device> up or ifconfig <device> up command. There are other protocols, such as IPX, that use Ethernet but will not have an IP address, as they are not part of the Internet Protocol stack. So it is perfectly acceptable for the link to be UP but not have an IP address.
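For instance, the administrative flag can be toggled and observed like this (interface name taken from the question):

    ip link set dev ens256 down
    ip link show dev ens256    # UP and LOWER_UP gone; state DOWN
    ip link set dev ens256 up
    ip link show dev ens256    # <...,UP,LOWER_UP> again, if a cable is present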
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82643/" ] }
153,151
How can I make cp replace a directory of the same name without removing the existing directory first? cp's default behaviour is to copy the source directory into the destination rather than replace it:

mkdir -p test/a
mkdir a
cp -a test/a a

a is now within a; it didn't replace a. How can I make cp replace directories? I want it to work the same way it does with files. I could of course delete the target first, but I don't want to have to run more than one command :)
Use a dot . after a:

cp -a test/a/. a

It actually does not replace a as you might think. It just copies the content of test/a into the directory a.
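If the goal is a true replacement, including deleting files that exist only in the target, rsync can do it in one command:

    rsync -a --delete test/a/ a/    # make a an exact copy of test/a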
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27018/" ] }
153,157
I have the following ps command to get particular properties of all the running processes along with some properties: ps --no-headers -exo "uname,ppid,pid,etime,%cpu,%mem,args" I wish to have it formatted in CSV so I can parse it. Note I have put the args at the end to make parsing easy; I don't think a , will exist in any of the other columns - please correct me if I am wrong. How do I remove the whitespace?
From the man page:

-o format
    User-defined format. format is a single argument in the form of a blank-separated or comma-separated list, which offers a way to specify individual output columns. The recognized keywords are described in the STANDARD FORMAT SPECIFIERS section below. Headers may be renamed (ps -o pid,ruser=RealUser -o comm=Command) as desired. If all column headers are empty (ps -o pid= -o comm=) then the header line will not be output. Column width will increase as needed for wide headers; this may be used to widen up columns such as WCHAN (ps -o pid,wchan=WIDE-WCHAN-COLUMN -o comm). Explicit width control (ps opid,wchan:42,cmd) is offered too. The behavior of ps -o pid=X,comm=Y varies with personality; output may be one column named "X,comm=Y" or two columns named "X" and "Y". Use multiple -o options when in doubt. Use the PS_FORMAT environment variable to specify a default as desired; DefSysV and DefBSD are macros that may be used to choose the default UNIX or BSD columns.

So try:

/bin/ps -o uname:1,ppid:1,pid:1
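Since the question ultimately wants CSV with args last, one way to finish the job is to let awk rewrite the whitespace (a sketch; it assumes none of the first six columns ever contain a space, which matches the question's own observation):

    ps --no-headers -exo 'uname,ppid,pid,etime,%cpu,%mem,args' |
    awk '{
        sub(/^ +/, "")                                      # drop leading padding
        line = $0
        for (i = 1; i <= 6; i++) sub(/^[^ ]+ +/, "", line)  # line is now just args
        printf "%s,%s,%s,%s,%s,%s,%s\n", $1, $2, $3, $4, $5, $6, line
    }'

Keeping args as the trailing field means any commas or spaces inside the command line stay in the last CSV column.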
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74366/" ] }
153,159
A friend of mine is running Ubuntu and got GRUB RESCUE. Can they use a Mint ISO to repair their GRUB, as we don't have an Ubuntu ISO?
If the Ubuntu installation is still present (and only GRUB was lost), sure, you can use any distro that has live booting to do so. chroot into the Ubuntu installation and install and update GRUB. If /dev/sda5 is the Ubuntu partition:

mount /dev/sda5 /mnt
mount -o bind /dev /mnt/dev
mount -t proc none /mnt/proc
mount -t sysfs none /mnt/sys
mount -t devpts none /mnt/dev/pts
chroot /mnt /bin/bash
# Inside the chroot
grub-install /dev/sda
update-grub
exit
# Unmount all those mounts:
for m in /mnt/{dev/pts,dev,proc,sys,}; do umount $m; done
# reboot

If all you need to do is install GRUB, and updating isn't necessary, then you don't need the chroot:

mount /dev/sda5 /mnt
grub-install --root-directory=/mnt /dev/sda

If you have a separate boot partition, remember to mount it as well, after mounting /mnt.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77239/" ] }
153,169
I'd like to use fsarchiver in the safest way possible, but I have no live medium to boot from. I vaguely remember having read about something like this, though: is it possible to remount / read-only when in single user mode?
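For reference, the usual way to do that from single-user mode is a remount (it fails with a "busy" error if some process still has files open for writing, so stop those first):

    mount -o remount,ro /     # flip the root filesystem to read-only in place
    # ... run fsarchiver or other offline work ...
    mount -o remount,rw /     # restore read-write afterwards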
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49464/" ] }
153,182
I installed OpenBSD 5.5 using the recommended defaults. The OS came with fvwm as the window manager. How do I copy text within an Xterm and paste it into another Xterm? Using the mouse? Using only the keyboard? Before making this post, I checked the man page of fvwm and there is nothing that answers my question.
Select the text to copy, and then use the middle mouse button in the other window to paste the selected text. Even if the text is no longer highlighted, you can still paste the last selected string.
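For the keyboard half of the question: if a clipboard utility such as xclip happens to be installed (it is not part of the base OpenBSD install; it is available through packages/ports), the same X primary selection can be read or set from a shell without touching the mouse:

    xclip -o -selection primary                      # print what was last highlighted
    printf 'some text' | xclip -i -selection primary # stage text for middle-click paste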
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66229/" ] }
153,190
zathura is my default PDF reader. Some files cause it trouble though, and in such cases I run :exec acroread $FILE which automatically opens the same file with Acrobat Reader. How do I add a key shortcut to the zathura configuration file ( ~/.config/zathura/zathurarc ) to do that?
I recently bumped against a similar problem and, for future reference, here is a workaround:

map <C-o> focus_inputbar ":exec acroread $FILE"

This will map Ctrl+o (or whichever your key is) to open the input bar you would normally open with : and put that text in it. You can then press Enter to launch the command. This is far from ideal and still requires a two-key press, but it is surely faster than typing the whole command by hand.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82677/" ] }
153,210
I'm piping output from clock through sed to remove leading zeroes from numbers.It looks like this: clock -sf 'S%A, %B %d. %I:%M %P' | sed 's/\b0\+\([0-9]\+\)/\1/g' That works fine and produces the output I want. However, when I try to redirect the output to a file, nothing is written to the file. The following does NOT work. clock -sf 'S%A, %B %d. %I:%M %P' | sed 's/\b0\+\([0-9]\+\)/\1/g' > testfile Nothing is written to testfile. What am I doing wrong?
You're running into an output buffering problem. sed normally buffers its output when not writing to a terminal, so nothing gets written to the file until the buffer fills up (probably every 4K bytes). Use the -u option to sed to unbuffer output. clock -sf 'S%A, %B %d. %I:%M %P' | sed -u 's/\b0\+\([0-9]\+\)/\1/g' > testfile
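An alternative that works for similarly buffered filters in general (not just sed) is GNU coreutils' stdbuf, which forces the command's stdout to be line-buffered:

    clock -sf 'S%A, %B %d. %I:%M %P' | stdbuf -oL sed 's/\b0\+\([0-9]\+\)/\1/g' > testfile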
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82690/" ] }
153,226
What is the preferred method to keep track of who is acting as root in the logs when root login is disabled (SSH) but users can run sudo -i or su - to become root? I would like to follow every command with the original username as well. RHEL 6 or any Linux rsyslog, etc.
The most robust method seems to be auditd:

    Requirement 10: Track and monitor all access to network resources and cardholder data

Auditd basically intercepts all system calls and checks them against your set of rules. So in your /etc/audit/audit.rules file you would have something like the following:

# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320

# Feel free to add below this line. See auditctl man page
-a always,exit -F euid=0 -F perm=wxa -k ROOT_ACTION

The last rule is the only non-default rule. The main drawback with this approach (and the reason I found this question while looking for alternatives) is that the raw log files are pretty cryptic and are only helpful after running the querying program, ausearch, on the raw log file. An example query for that rule would be:

ausearch -ts today -k ROOT_ACTION -f audit_me | aureport -i -f

A common-sense solution would probably be to create a cron job that queries your raw auditd logs and then ships them off to your logging solution.
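The cron idea from the last paragraph might look something like this (the path, schedule and recipient are all assumptions; the ausearch/aureport pipeline mirrors the example above):

    #!/bin/sh
    # Hypothetical /etc/cron.daily/root-actions: mail yesterday's root activity.
    ausearch -ts yesterday -k ROOT_ACTION | aureport -i -f |
        mail -s "root actions $(date +%F)" admin@example.com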
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
153,243
I rather randomly checked the status of my RAID arrays with cat /proc/mdstat and realized that one of my arrays seems to be resyncing:

md1 : active raid1 sdb7[1] sdc7[0]
      238340224 blocks [2/2] [UU]
      [==========>..........]  resync = 52.2% (124602368/238340224) finish=75.0min speed=25258K/sec

Why is this and what does it mean? I seemingly can access the mount point just fine with r/w permissions.

EDIT 1 (in response to SLM's ANSWER): I can't really see anything if I grep through dmesg, and the --detail switch doesn't tell me much either, i.e. it displays that the resync is in progress... but no hint for the reason or why it might have gotten out of sync... I guess I might just need to keep an eye on it before I start swapping out my hardware.
This would seem to be indicating that the syncing between the 2 members of the RAID is not staying in sync.

1. Investigate logs

I'd investigate your dmesg logs and see if there are any messages stating that either of the physical HDDs that make up this array are having hardware failures.

2. Check mdadm

You can also consult mdadm using the --detail switch to find out more information about the resync, like so:

$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jan 26 09:14:11 2008
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Fri Jan  1 01:29:16 2010
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 Rebuild Status : 50% complete
           UUID : 37a3bfcb:41393031:23c133e6:3b879f08
         Events : 0.2178969

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

If both devices seem fine and you cannot pinpoint which device is having an issue, you may want to temporarily run a diagnostic tool such as HDAT2 or SpinRite against each HDD to confirm their health.

3. Cabling

If the HDDs check out, then I would start scrutinizing the cabling; I typically will swap these out.

4. Controller

I'd next scrutinize the controller itself, either taking the drives out of the affected system and diagnosing them in a secondary system, or adding a 3rd party controller card to the affected system to diagnose the issue further.

5. Power supply

Believe it or not, I've had issues in the past with HDDs and RAIDs where swapping out a failing, or about to fail, power supply resolved my RAID health issues.
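While waiting for a resync like this to finish, the progress can be watched live:

    watch cat /proc/mdstat    # redraws the status every 2 seconds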
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46433/" ] }
153,248
Why is it possible to look at the contents of a block device's file like /dev/sda using cat or a hex editor?Why can I not do the same for a character device like /dev/pts/3 ?
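Both kinds of device node can in fact be read from the shell; the difference is in what a read means. A small demonstration (device names are examples, and reading raw disks needs root):

    sudo head -c 16 /dev/sda | xxd    # block device: seekable stored bytes
    head -c 16 /dev/urandom | xxd     # character device: a byte stream, also readable
    # Reading your own /dev/pts/N "works" too, but it competes with the shell
    # attached to that terminal for keystrokes, so the result looks erratic.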
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82716/" ] }
153,259
I'm wondering about the direction of recursion in general and rm specifically. rm recursion only works downwards, correct? Running:

sudo rm -R *.QTFS

will delete all *.QTFS files in the current directory and its children, correct? The current directory as displayed by ls -lha also contains . and .. links, for lack of a better word, so why doesn't recursion follow these upwards in the directory tree? Is there an artificial limit in the rm app, or are . and .. not real things?
rm recursion only works downwards correct?

rm -r x y will delete x and y and everything inside them (if they are directories), but not their parents or anything outside them.

Running: sudo rm -R *.QTFS will delete all *.QTFS files in current directory and its children, correct?

No. It will delete all files named *.QTFS, any files recursively inside directories called *.QTFS, and those directories themselves. If you want that other deletion behaviour, use find -delete.

current directory as displayed by ls -lha also contains . and .. links for the lack of a better word, so why doesn't recursion follow these upwards in the directory tree? Is there an artificial limit on rm app, or . and .. are not real things?

It's an artificial limit of rm. It's not really all that artificial, though - it's the only way it could ever work. If rm followed the parent .. links, every rm -r would remove every file on the system, by following all of the .. links all the way back to /. rm sees the .. and . entries in each directory when it lists the content, and explicitly disregards them for that reason. You can try that out yourself, in fact. Run rm -r . and most rm implementations will refuse to act, reporting an error explicitly:

$ rm -r .
rm: refusing to remove ‘.’ or ‘..’ directory: skipping ‘.’

(that message is from GNU rm; others are similar). When it encounters these entries implicitly, rather than as explicit arguments, it just ignores them and continues on. That behaviour is required by POSIX. In GNU rm and many of the BSDs, it's provided automatically by the fts_read family of hierarchy-traversal functions.

or . and .. are not real things?

. and .. are generally real directory entries, although that is filesystem-specific. They will almost always be presented as though they are real entries to all user code, regardless. Many pieces of software (not just rm) special-case their behaviour in order to catch or prevent runaway or undesirable recursion.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153259", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
153,260
On my system I have files that do not belong to any package; they are mine or from compiled programs installed with make install. How can I find all files that do not belong to any package?
In /var/lib/dpkg/info there are .list text files that list all the files contained in each package installed through Debian's package manager. Finding all files in the filesystem not matching any entry there can be achieved with something naïve like this:

find / -xdev -type f \( -exec grep -xq "{}" /var/lib/dpkg/info/*.list \; -or -print \)

This will obviously take a very long time as the whole filesystem will be scanned. If you use different partitions for system directories (such as /usr or /var), specify them after the initial /.

Warning: that does not include files created by package scripts. For instance:

- /etc/hosts.allow is not listed anywhere, but it might come from libwrap0, which possibly created it if that file didn't exist at the time of the package installation.
- Many files are compiled during installation, for example .pyc files (compiled Python libraries), .elc files (compiled Emacs Lisp libraries), etc.
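Because that command runs grep over every package list once per file, a much faster variant could compare two sorted lists in a single pass with comm; a sketch (the temp-file paths are arbitrary):

    # Sorted list of every path owned by an installed package:
    sort -u /var/lib/dpkg/info/*.list > /tmp/packaged-files
    # Sorted list of every regular file on the root filesystem:
    find / -xdev -type f | sort > /tmp/all-files
    # Paths present on disk but absent from all package lists:
    comm -13 /tmp/packaged-files /tmp/all-files

comm -13 suppresses lines unique to the first file and lines common to both, leaving only files that no package claims. The same caveats about script-generated files apply.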
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82724/" ] }
153,286
In a recent Ask Ubuntu question on checking whether some files have differing content, I saw a comment stating that, if the differing sections didn't matter, cmp would be faster than diff . This Stack Overflow answer concurs, giving the reason that cmp would stop at the first differing byte. However, GNU diff has the -q (or --brief ) flag, which is supposed to make it report only when files differ . It would seem logical that GNU diff would also stop comparing once any difference is found (like grep would stop searching after the first match when -l or -q is specified). Is cmp truly faster than diff -q , in the context of Linux systems, which are likely to have the GNU version?
Prompted by @josten, I ran a comparison of the two. The code is on GitHub. In short: the User+Sys time taken by cmp -s seemed to be a tad more than that of diff in most cases. However, the Real time taken was pretty much arbitrary: cmp ahead on some runs, diff ahead on others. Summary: any difference in performance is pure coincidence. Use whichever you wish.
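The sort of quick check anyone can reproduce looks something like this (file names are placeholders; the 100M size suffix assumes GNU head):

    # Two large files differing only in the first byte:
    yes | head -c 100M > a
    { printf 'X'; tail -c +2 a; } > b
    # Time both ways of asking "do they differ?":
    time cmp -s a b;             echo "cmp exit: $?"
    time diff -q a b >/dev/null; echo "diff exit: $?"

With a difference in the very first byte, both tools can bail out almost immediately, which is why the measured gap tends to be noise.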
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
153,294
I am examining the egress IPTABLES log of my computer for yesterday and notice the following:

IN= OUT=eth0 SRC=192.168.1.1 DST=69.46.36.10 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=12345 DF PROTO=TCP SPT=56789 DPT=4000 WINDOW=29200 RES=0x00 SYN URGP=0

After looking up the list of well-known TCP ports, there is only a Diablo II game using port 4000. I don't have the game installed on my computer. I tried but could not determine what other services could be connecting to the port. As this only happened yesterday, if I use netstat I could be staring at the screen the entire day to catch the service in action. Is there a better way to determine what program or user connected to this particular port?
You can use the audit system to log all connect() system calls.

sudo auditctl -a exit,always -F arch=b64 -S connect -k connectLog
sudo auditctl -a exit,always -F arch=b32 -S connect -k connectLog

Then you can search:

sudo ausearch -i -k connectLog -w --host 69.46.36.10

Which will show something like:

type=SOCKADDR msg=audit(02/09/14 12:31:57.966:60482) : saddr=inet host:69.46.36.10 serv:4000
type=SYSCALL msg=audit(02/09/14 12:31:57.966:60482) : arch=x86_64 syscall=connect success=no exit=-4(Interrupted system call) a0=0x3 a1=0x20384b0 a2=0x10 a3=0x7fffbf8c9540 items=0 ppid=21712 pid=25423 auid=stephane uid=stephane gid=stephane euid=stephane suid=stephane fsuid=stephane egid=stephane sgid=stephane fsgid=stephane tty=pts5 ses=4 comm=telnet exe=/usr/bin/telnet.netkit key=connect

BTW, I've seen that IP address being resolved from grm.feedjit.com and connection attempts being made to it on 400x ports by iPhones.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24509/" ] }
153,297
I am trying to understand input redirection in combination with process substitution. I am using bash 3. An example with tr is the following:

$ tr "o" "a" <(echo "Foo")
tr: extra operand `/dev/fd/63'
Try `tr --help' for more information.

I think I understand why this does not work. The process substitution <( ) creates a file descriptor, whereas tr only reads from standard input. How can I make it work with proper redirection? I know that I could simply use pipes:

$ echo "Foo" | tr "o" "a"
Faa

However, I am trying to get a better understanding. I tried some things with the help of man bash, by using <&, but I don't really know what I am doing. How can I use process substitution properly with tr?
You were really close:

tr "o" "a" < <(echo "Foo")

The substitution <() makes a file descriptor and just pastes the path into the shell. For comprehension, just execute:

<(echo blubb)

You will see the error:

-bash: /dev/fd/63: Permission denied

That's why: it just pastes /dev/fd/63 into the shell, and /dev/fd/63 is not executable, because it's a simple pipe. In the tr example above, it's echo "Foo" that writes to the pipe, and via input redirection < it's the tr command that reads from the file descriptor.
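As an aside, bash also has here-strings, which feed a literal string to a command's standard input and would have the same effect here:

    tr "o" "a" <<< "Foo"    # prints: Faa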
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14084/" ] }
153,302
I have a large text file with words. Every word is on a single line (a typical wordlist). I want to replace all characters "o" with the number "0" in every word, and the newly formed word to be pasted on the next line after the original word. For example, if we have these two in the list:

dog
someone

After the manipulation, our file should look like this:

dog
d0g
someone
s0me0ne

How can this be done with GNU tools like sed or awk?
You can use sed:

sed -i -e 'p' -e 's/o/0/g' file

Explanation:

- -i : activates in-place editing of the file
- -e 'p' : prints the unmodified line
- -e 's/o/0/g' : replaces o with 0, after which the altered line is printed

And if you want an awk solution:

awk '1;gsub("o", "0")' file > new_file

(Note that the awk version only prints the second, altered copy of a line when gsub actually made at least one substitution.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81287/" ] }
153,309
In gpg, is it possible to move a UID up or down in the list of UIDs in a PGP key? I realize it is a purely cosmetic thing, but I may want to use this to show priority among my addresses: which one should be used first if possible.

pub   4096R/0xAABBD62D0BA66C66 2014-09-02
uid                 [ultimate] Mr. Foo Bar <[email protected]>
uid                 [ultimate] Mr. Foo Bar <[email protected]>
uid                 [ultimate] Mr. Foo Bar <[email protected]>
uid                 [ultimate] Mr. Foo Bar <[email protected]>
You can make a UID appear at the top of the list by making it primary. The top UID then gets moved down to the second slot, and likewise everything else shifts one space downward. It seems as though this "shift" only happens once you save the changes to the key. If you want to get the correct order, you need to repeat these steps starting with the UID you want showing up second-to-last, all the way until the item you want displayed as the top (first) UID. The commands for doing this are (the text following $ and gpg> is what you type into the console):

$ gpg --edit-key 0xAABBD62D0BA66C66
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
# irrelevant output removed #
[ultimate] (1). Mr. Foo Bar <[email protected]>
[ultimate] (2)  Mr. Foo Bar <[email protected]>
[ultimate] (3)  Mr. Foo Bar <[email protected]>
[ultimate] (4)  Mr. Foo Bar <[email protected]>

gpg> uid 3

[ultimate] (1). Mr. Foo Bar <[email protected]>
[ultimate] (2)  Mr. Foo Bar <[email protected]>
[ultimate] (3)* Mr. Foo Bar <[email protected]>
[ultimate] (4)  Mr. Foo Bar <[email protected]>

gpg> primary

[ultimate] (1)  Mr. Foo Bar <[email protected]>
[ultimate] (2)  Mr. Foo Bar <[email protected]>
[ultimate] (3)* Mr. Foo Bar <[email protected]>
[ultimate] (4)  Mr. Foo Bar <[email protected]>

gpg> save

Then rinse and repeat, working your way backwards from the second-to-last item all the way until the item you want displayed first in the list is the primary UID.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5769/" ] }
153,325
I am writing an expect script that logs in to a remote machine and runs some scripts there, but I also need to verify the following: whether the directory /var/cti/adm/APP exists. If APP does not exist under the adm directory, then I need to create this directory and set ownership on it (as chown system). Please advise how to check whether a directory exists in an expect script and, if not, create it. Example of part of my expect script:

#!/usr/bin/expect -f
set multiPrompt {[#>$]}
send "ssh $LOGIN3@$IP\r"
sleep 0.5
expect {
    word: {send $PASS\r ; exp_continue }
    expect -re $multiPrompt
}

Example of how we can do it with bash:

[[ ! -d /.../..../... ]] && mkdir -p xxxxx
You can do this in pure Tcl without exec:

#!/usr/bin/env expect
set path dir1/dir2/dir3
file mkdir $path                      ;# [file mkdir] in Tcl is like mkdir -p
file attributes $path -owner system
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
153,339
I need to identify the position of a character in a string. For example, the string is RAMSITALSKHMAN|1223333.

grep -n '[^a-zA-Z0-9\$\~\%\#\^]'

How do I find the position of | in the given string?
You can use -b to get the byte offset, which is the same as the position for simple text (but not for UTF-8 or similar).

$ echo "RAMSITALSKHMAN|1223333" | grep -aob '|'
14:|

In the above, I use the -a switch to tell grep to treat the input as text (necessary when operating on binary files), and the -o switch to only output the matching character(s). If you only want the position, you can use a second grep to extract only the position:

$ echo "RAMSITALSKHMAN|1223333" | grep -aob '|' | grep -oE '^[0-9]+'
14

If you get weird output, check to see if grep has colors enabled. You can disable colors by passing --color=never to grep, or by prefixing the grep command with a \ (which will disable any aliases), e.g.:

$ echo "RAMSITALSKHMAN|1223333" | \grep -aob '|' --color=never | \grep -oE '^[0-9]+'
14

For a string that returns multiple matches, pipe through head -n1 to get the first match. Note that the backslash trick will not work if grep is "aliased" through an executable (script or otherwise); it only works for actual shell aliases.
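awk's index() function gives the 1-based character position directly, which is often what "position in a string" means:

    $ echo "RAMSITALSKHMAN|1223333" | awk '{ print index($0, "|") }'
    15

(grep -b reports a 0-based byte offset, hence 14 above versus awk's 15.)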
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82782/" ] }
153,344
As far as I know, when BackTrack starts it boots in text mode and the wifi (wlan0) is down, in order to be more stealthy. Kali Linux is the "update" of BackTrack, and when I boot it, it always starts in GUI mode. Is it possible to start it in text mode? Also, I noticed that it asks for a wifi connection, so I suppose that the stealthiness of wifi scanning is not valid anymore?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
153,352
I've seen discussion before about ELF magic, most recently the comments in this Security stack exchange question . I've seen it mentioned before, and I've seen it in my own boot logs.. But I'm not sure what it is. The man page on elf is a bit over my head, as I don't do C or lower level languages. As someone who uses Linux as an every day operating system, what is ELF?
Right from the man page you reference:

elf - format of Executable and Linking Format (ELF) files

ELF defines the binary format of executable files used by Linux. When you invoke an executable, the OS must know how to load the executable into memory properly, how to resolve dynamic library dependencies, and then where to jump into the loaded executable to start executing it. The ELF format provides this information. ELF magic is used to identify ELF files and is merely the very first few bytes of a file:

% od -c -N 16 /bin/ls
0000000 177   E   L   F 002 001 001  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000020

or

% readelf -h /bin/ls | grep Magic
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00

These 16 bytes unambiguously identify a file as an ELF executable. Many file formats have "magic" bytes that accomplish the same task: identifying a type of file.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73386/" ] }
153,357
I have this ~/.inputrc file that I created for certain key bindings:

# mappings for Ctrl-left-arrow and Ctrl-right-arrow for word moving
"\e[1;5C":forward-word
"\e[1;5D":backward-word
"\e[5C":forward-word
"\e[5D":backward-word
"\e\e[C":forward-word
"\e\e[D":backward-word

Whenever I try to run source ~/.inputrc, it gives me the following error:

\e[1;5C:forward-word: Command not found.
\e[1;5D:backward-word: Command not found.
\e[5C:forward-word: Command not found.
\e[5D:backward-word: Command not found.
\e\e[C:forward-word: Command not found.
\e\e[D:backward-word: Command not found.

It also doesn't work when I open a new terminal: I don't get the error, but my Ctrl key combinations are not working in the new terminal either. I created this file myself since I do not have root access to change /etc/inputrc. Can anybody help me out? Thanks.

EDIT: I've tried the file with a space after the colon (:) sign as well. It doesn't work. I also tried making it executable (chmod +x ~/.inputrc); that didn't work either.

EDIT: I realized that this procedure is only for bash, and I am running tcsh. For csh, use a .bindings file instead of the .inputrc file and use bindkey syntax.
The key bindings and ~/.inputrc file posted in the question are for bash. For csh (or tcsh), use a file ~/.bindings and use the following syntax:

bindkey '^[[1;5C' forward-word
bindkey '^[[1;5D' backward-word

Realized this after some googling.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80747/" ] }
153,381
I am running a simple shell script which has a for loop that kicks off 16 threads in the background and waits for each of them to finish before moving on to the next commands. Here is the shell script:

total_threads=16
echo ""
echo "Running Web Exclusive Item updates ... "
echo "Total Number of threads = " $total_threads
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
do
    echo "Running Thread $i .. "
    ./run_upd_web_excl_items.sh $i $total_threads &
done
wait
echo "Done."
echo ""
echo "Committing data ... "
output_val=`$ORACLE_HOME/bin/sqlplus -S $UP <<endofsql
whenever sqlerror exit 2
whenever oserror exit 3
SET HEADING OFF
SET FEEDBACK OFF
SET ECHO OFF
set serverout on
commit;
/
exit
endofsql`
echo "Done."

As I was running this, the output came out as follows:

~> . ./run_upd_web_excl_items_mitctrl.sh
Running Web Exclusive Item updates ...
Total Number of threads =  16
Running Thread 1 ..
Running Thread 2 ..
Running Thread 3 ..
Running Thread 4 ..
Running Thread 5 ..
Running Thread 6 ..
Running Thread 7 ..
Running Thread 8 ..
Running Thread 9 ..
Running Thread 10 ..
Running Thread 11 ..
Running Thread 12 ..
Running Thread 13 ..
Running Thread 14 ..
Running Thread 15 ..
Running Thread 16 ..
Total time for Thread 3 : 0 minute(s), 22.5 second(s).
Total time for Thread 2 : 0 minute(s), 23.3 second(s).
Total time for Thread 12 : 0 minute(s), 24.3 second(s).
Total time for Thread 8 : 0 minute(s), 24.8 second(s).
Total time for Thread 7 : 0 minute(s), 29.9 second(s).
Total time for Thread 1 : 0 minute(s), 30 second(s).
[1]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[2]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[3]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[7]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[8]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[12]  Done    ./run_upd_web_excl_items.sh $i $total_threads
Total time for Thread 16 : 0 minute(s), 32.1 second(s).
Total time for Thread 4 : 0 minute(s), 32.8 second(s).
[4]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[16]+ Done    ./run_upd_web_excl_items.sh $i $total_threads
Total time for Thread 10 : 0 minute(s), 33.2 second(s).
Total time for Thread 13 : 0 minute(s), 33.7 second(s).
Total time for Thread 5 : 0 minute(s), 33.8 second(s).
[5]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[10]  Done    ./run_upd_web_excl_items.sh $i $total_threads
[13]  Done    ./run_upd_web_excl_items.sh $i $total_threads
Total time for Thread 14 : 0 minute(s), 35.6 second(s).
Total time for Thread 6 : 0 minute(s), 36.8 second(s).
[6]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[14]- Done    ./run_upd_web_excl_items.sh $i $total_threads
Total time for Thread 11 : 0 minute(s), 37.7 second(s).
Total time for Thread 9 : 0 minute(s), 37.8 second(s).
[9]   Done    ./run_upd_web_excl_items.sh $i $total_threads
[11]- Done    ./run_upd_web_excl_items.sh $i $total_threads
Total time for Thread 15 : 0 minute(s), 38.8 second(s).
[15]+ Done    ./run_upd_web_excl_items.sh $i $total_threads
Done.
Committing data ...
Done.
Normally, I would have expected output like this:

~> ./run_upd_web_excl_items_mitctrl.sh
Running Web Exclusive Item updates ...
Total Number of threads =  16
Running Thread 1 ..
Running Thread 2 ..
Running Thread 3 ..
Running Thread 4 ..
Running Thread 5 ..
Running Thread 6 ..
Running Thread 7 ..
Running Thread 8 ..
Running Thread 9 ..
Running Thread 10 ..
Running Thread 11 ..
Running Thread 12 ..
Running Thread 13 ..
Running Thread 14 ..
Running Thread 15 ..
Running Thread 16 ..
Total time for Thread 1 : 0 minute(s), 26.5 second(s).
Total time for Thread 10 : 0 minute(s), 27.1 second(s).
Total time for Thread 2 : 0 minute(s), 27.5 second(s).
Total time for Thread 6 : 0 minute(s), 27.9 second(s).
Total time for Thread 3 : 0 minute(s), 27.9 second(s).
Total time for Thread 15 : 0 minute(s), 27.9 second(s).
Total time for Thread 9 : 0 minute(s), 28 second(s).
Total time for Thread 5 : 0 minute(s), 28 second(s).
Total time for Thread 16 : 0 minute(s), 28.1 second(s).
Total time for Thread 8 : 0 minute(s), 30.5 second(s).
Total time for Thread 12 : 0 minute(s), 31 second(s).
Total time for Thread 11 : 0 minute(s), 31.5 second(s).
Total time for Thread 7 : 0 minute(s), 31.9 second(s).
Total time for Thread 14 : 0 minute(s), 32 second(s).
Total time for Thread 13 : 0 minute(s), 32.7 second(s).
Total time for Thread 4 : 0 minute(s), 34.8 second(s).
Done.
Committing data ...
Done.

I am not able to figure out why the shell is printing the extra lines like:

[13]  Done    ./run_upd_web_excl_items.sh $i $total_threads

Any thoughts?
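For what it's worth, those "[n] Done" lines are bash's job-control notifications. They appear here because the script is being sourced (". ./script.sh") into an interactive shell, where job control (monitor mode) is on; the expected output came from running it as "./script.sh", i.e. in a non-interactive child shell, where such notices are not printed. Two ways around it, sketched:

    # Option 1: run the script as its own (non-interactive) process:
    ./run_upd_web_excl_items_mitctrl.sh
    # Option 2: if it must be sourced, turn monitor mode off around it:
    set +m
    . ./run_upd_web_excl_items_mitctrl.sh
    set -m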
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82800/" ] }
153,409
I'm trying to write a conky script that shows my MPD album art, a 'folder.jpg' in the album folder. My current plan is to use mpc -f %file% , which prints out the file name and path, and then cut out the actual file name (i.e. everything after the last /), and use that as the path to for conky's image object. I'm having trouble with grep/cut, particularly since some songs are nested in two folders, while others are in one. ( grep -m 1 . |cut -f1 -d / works for the single folder albums) How would I go about this? Is there a simpler way I'm missing?
You can use sed to remove the rest - everything starting with the last slash:

mpc -f %file% | head -1 | sed 's:/[^/]*$::'

The pattern /[^/]*$ matches a slash, followed by any characters except slash, up to the end of the line. It is replaced by the empty string. The head -1 ignores some status output following the line we want - but see below for how to not print it in the first place. In case you are not used to sed, the command may look unusual because of the : used. You may have seen sed commands with / as separator before, like 's/foo/bar/' - I prefer to use the separator : for readability, as the expression contains / itself. Alternatively, we could escape the / in the expression: 's/\/[^\/]*$//' - but that does not make it more readable. The way you use it with mpc gives you two additional status lines. The option -q for quiet switches off all that output. You need to combine -q with explicitly printing the currently playing file to get the line you need, and nothing more:

mpc -q -f %file% current | sed 's:/[^/]*$::'

Instead of removing everything starting with the last slash, one could explicitly print everything before the last slash:

mpc -q -f %file% current | sed 's:^\(.*\)/.*$:\1:'

That matches any characters that are followed by a / and any other characters; it matches as much as possible, so the / that is matched will be the last one.
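For completeness, dirname does exactly this kind of last-component stripping and avoids the sed entirely:

    dirname "$(mpc -q -f %file% current)"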
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82817/" ] }
153,483
Currently I'm using i3 window manager (but I guess that this applies to other non-standard window managers as well). Whenever I run nautilus it also starts a full screen desktop, which I have to close. Possible solution is to start nautilus with: nautilus --browser --no-desktop , which solves this problem only partially, as sometimes nautilus is launched automatically by other applications and in this case it would be launched without --browser --no-desktop options. Is there any gnome3 config option that allows me to suppress desktop launching?
Yes, there is a dconf value that controls this. Run the following command to disable drawing of the desktop by Nautilus: gsettings set org.gnome.desktop.background show-desktop-icons false Source: https://askubuntu.com/a/237984/81372
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5612/" ] }
153,508
I know cat can do this, but its main purpose is to concatenate rather than just to display the content. I also know about less and more , but I'm looking for something simple ( not a pager ) that just outputs the content of a file to the terminal and it's made specifically for this, if there is such thing.
The most obvious one is cat. But also have a look at head and tail. There are other shell utilities to print a file line by line: sed, awk, grep. But those are meant to manipulate the file content or to search inside the file. I made a few tests to estimate which is the most effective one. I ran all of them through strace to see which made the fewest system calls. My file has 1275 lines.

- awk: 1355 system calls
- cat: 51 system calls
- grep: 1337 system calls
- head: 93 system calls
- tail: 130 system calls
- sed: 1378 system calls

As you can see, even though cat was designed to concatenate files, it is the fastest and most effective one. sed, awk and grep printed the file line by line; that's why they make more than 1275 system calls.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/153508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80615/" ] }
153,539
Out of the two options to change permissions:

chmod 777 file.txt
chmod a+rwx file.txt

I am writing a document that details that users need to change the file permissions of a certain file, and I want to describe it using the most common way of changing file permissions. Currently it says:

- Set permissions on file.txt as per the example below:
- chmod 777 /tmp/file.txt

This is just an example, and won't change files to have full permissions for everyone.
Google gives:

- 1,030,000 results for 'chmod 777'
- 371,000 results for 'chmod a+rwx'

chmod 777 is about 3 times more popular. That said, I prefer using long options in documentation and scripts, because they are self-documenting. If you are following up your instructions with "Run ls -l | grep file.txt and verify permissions", you may want to use chmod a+rwx, because that's how ls will display the permissions.
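Either way, both spellings produce identical permission bits on a fresh file, which is easy to confirm with GNU stat:

    touch /tmp/file.txt
    chmod 777 /tmp/file.txt
    stat -c '%a %A' /tmp/file.txt    # 777 -rwxrwxrwx
    chmod a+rwx /tmp/file.txt
    stat -c '%a %A' /tmp/file.txt    # same: 777 -rwxrwxrwx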
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20752/" ] }
153,544
There are a few questions and answers on here with regard to being alerted when a process completes/exits ( 1 , 2 ) - but these all assume that the user has issued said process themselves, and thus can script it with an alert built into the governing script, or pipe the process to some kind of alert. My situation is that I would like to be alerted of the completion/exit of a process that my user is not initializing. Namely, I am bulk processing massive video files on an Ubuntu 12.04 LTS server. Certain operations on these files take a very long time, so I would like some kind of alert (email would be great) when a specific one finishes. They take so long that doing this on a one-off basis, manually, based on PID would be perfectly fine. To provide more info: let's say I'm processing a particularly big file, and I see that it has progressed on to an FFMPEG script, the process itself being a python script (that is quite complex, not written by myself, and something I do not wish to modify - though that would be the first logical approach). I imagine issuing a command or script with the PID of said running python script as an argument, and when the process with that PID is no longer running, the alert script does its thing. Any ideas?
Something like this?

(while kill -0 $pid 2>/dev/null; do sleep 1; done) && echo "finished"

Replace $pid with the process id and echo "finished" with whatever you want to do when the process exits. kill -0 sends no signal; it merely tests whether the process still exists, so the loop ends once the process is gone (the stderr redirect hides the expected "no such process" error at that point). For example:

(while kill -0 $pid 2>/dev/null; do sleep 1; done) && mail ...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22994/" ] }
153,552
I've been trying to establish an RDP connection to a Windows 7 machine from Linux with sound AND microphone redirection. I have tried rdesktop, remmina and freerdp, to no avail. Finally, with freerdp I have managed to get sound and microphone redirection, but only from different sessions. I mean:

xfreerdp -u user -d domain -p password --plugin drdynvc --data audin -- server

allows me to send the microphone to the remote machine, and:

xfreerdp -u user -d domain -p password --plugin rdpsnd --data pulse -- server

allows me to get the sound from the remote machine, but I can't manage to get both within the same session. Any ideas?
I have finally found the solution. While looking for an answer for the remmina client, a guy had xfreerdp working and not remmina, and he posted his xfreerdp command here. The thing was that when I was launching the command with the two plugins (drdynvc and rdpsnd) I was also passing two data strings, audin and pulse, and that doesn't work. The solution is passing two plugins with just audin as data:

xfreerdp -u user -d domain -p password --plugin drdynvc --plugin rdpsnd --data audin -- server
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153552", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56793/" ] }
153,566
This has been working on my vagrant VM for a few months without any issues. I didn't change anything; it has been vagrant up'd and vagrant destroy'd several times before with no problems. But now it fails and I can't find out what the problem is. I can't even install vim.

Version Information

[root@localhost ~]# uname -a
Linux localhost 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/*{release,version}
CentOS release 6.5 (Final)
CentOS release 6.5 (Final)
CentOS release 6.5 (Final)

Installing EPEL

[root@localhost ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/$(uname -i)/epel-release-6-8.noarch.rpm
Receiving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Warning: /var/tmp/rpm-tmp.RGvUnd: Header V3 RSA/SHA256 Signature, Key-ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@localhost ~]# yum clean all

# /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

Trying to check update (FAILS)

[root@localhost ~]# yum check-update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp-stud.fht-esslingen.de
 * epel: mirrors.ircam.fr
 * extras: mirror2.hs-esslingen.de
 * updates: mirror2.hs-esslingen.de
http://mirrors.ircam.fr/pub/fedora/epel/6/x86_64/repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
[...]
Trying to check update excluding epel (WORKS)

[root@localhost ~]# yum --disablerepo="epel" check-update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp-stud.fht-esslingen.de
 * extras: mirror2.hs-esslingen.de
 * updates: mirror2.hs-esslingen.de
[vagrant@localhost ~]#

URLGRABBER_DEBUG

[root@localhost ~]# URLGRABBER_DEBUG=1 yum check-update 2> debug.log

2014-09-03 07:39:10,534 MIRROR: trying repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2 -> http://mirrors.ircam.fr/pub/fedora/epel/6/x86_64/repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2
INFO:urlgrabber:MIRROR: trying repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2 -> http://mirrors.ircam.fr/pub/fedora/epel/6/x86_64/repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2
2014-09-03 07:39:10,535 combined options: { 'checkfunc' : (<bound method YumRepository.checkMD of <yum.yumRepo.YumRepository object at 0x27c8490>>, ('primary_db',), {}), 'copy_local' : 1, 'http_headers' : (), 'range' : None, 'reget' : 'simple', 'size' : '6623767', 'text' : 'epel/primary_db', 'delegate' : { 'bandwidth' : 0, 'cache_openers': True, 'checkfunc' : None, 'close_connection': 0, 'copy_local' : 0, 'data' : None, 'delegate' : None, 'failure_callback': (<bound method YumBaseCli.failureReport of <cli.YumBaseCli object at 0x276c0d0>>, (), {}), 'ftp_headers' : None, 'http_headers' : (), 'interrupt_callback': <bound method YumBaseCli.interrupt_callback of <cli.YumBaseCli object at 0x276c0d0>>, 'keepalive' : True, 'max_header_size': 2097152, 'opener' : None, 'password' : None, 'prefix' : None, 'progress_obj' : <output.YumTextMeter instance at 0x279e4d0>, 'proxies' : None, 'quote' : None, 'range' : None, 'reget' : 'simple', 'retry' : 10, 'retrycodes' : [-1, 2, 4, 5, 6, 7], 'size' : None, 'ssl_ca_cert' : None, 'ssl_cert' : None, 'ssl_cert_type': 'PEM', 'ssl_context' : None, 'ssl_key' : None, 'ssl_key_pass' : None, 'ssl_key_type' : 'PEM', 'ssl_verify_host': True, 'ssl_verify_peer': True, 'text' : None, 'throttle' : 0, 'timeout' : 30.0, 'urlparser' : <urlgrabber.grabber.URLParser instance at 0x28f1f38>, 'user_agent' : 'urlgrabber/3.9.1 yum/3.2.29', 'username' : None, } }
DEBUG:urlgrabber:combined options: { 'checkfunc' : (<bound method YumRepository.checkMD of <yum.yumRepo.YumRepository object at 0x27c8490>>, ('primary_db',), {}), 'copy_local' : 1, 'http_headers' : (), 'range' : None, 'reget' : 'simple', 'size' : '6623767', 'text' : 'epel/primary_db', 'delegate' : { 'bandwidth' : 0, 'cache_openers': True, 'checkfunc' : None, 'close_connection': 0, 'copy_local' : 0, 'data' : None, 'delegate' : None, 'failure_callback': (<bound method YumBaseCli.failureReport of <cli.YumBaseCli object at 0x276c0d0>>, (), {}), 'ftp_headers' : None, 'http_headers' : (), 'interrupt_callback': <bound method YumBaseCli.interrupt_callback of <cli.YumBaseCli object at 0x276c0d0>>, 'keepalive' : True, 'max_header_size': 2097152, 'opener' : None, 'password' : None, 'prefix' : None, 'progress_obj' : <output.YumTextMeter instance at 0x279e4d0>, 'proxies' : None, 'quote' : None, 'range' : None, 'reget' : 'simple', 'retry' : 10, 'retrycodes' : [-1, 2, 4, 5, 6, 7], 'size' : None, 'ssl_ca_cert' : None, 'ssl_cert' : None, 'ssl_cert_type': 'PEM', 'ssl_context' : None, 'ssl_key' : None, 'ssl_key_pass' : None, 'ssl_key_type' : 'PEM', 'ssl_verify_host': True, 'ssl_verify_peer': True, 'text' : None, 'throttle' : 0, 'timeout' : 30.0, 'urlparser' : <urlgrabber.grabber.URLParser instance at 0x28f1f38>, 'user_agent' : 'urlgrabber/3.9.1 yum/3.2.29', 'username' : None, } }
2014-09-03 07:39:10,535 attempt 1/10: http://mirrors.ircam.fr/pub/fedora/epel/6/x86_64/repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2
INFO:urlgrabber:attempt 1/10: http://mirrors.ircam.fr/pub/fedora/epel/6/x86_64/repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2
2014-09-03 07:39:10,535 opening local file "/var/cache/yum/x86_64/6/epel/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2" with mode ab
INFO:urlgrabber:opening local file "/var/cache/yum/x86_64/6/epel/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2" with mode ab
* About to connect() to mirrors.ircam.fr port 80 (#0)
*   Trying 129.102.1.37... * connected
* Connected to mirrors.ircam.fr (129.102.1.37) port 80 (#0)
> GET /pub/fedora/epel/6/x86_64/repodata/9fdd4609f219b3ec5cfa5408ab03b84b2bce97ab6de268b890577ee86b998618-primary.sqlite.bz2 HTTP/1.1
User-Agent: urlgrabber/3.9.1 yum/3.2.29
Host: mirrors.ircam.fr
Accept: */*

* The requested URL returned error: 404 Not Found
* Closing connection #0
2014-09-03 07:39:10,610 exception: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
INFO:urlgrabber:exception: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
2014-09-03 07:39:10,610 calling callback: (<bound method YumBaseCli.failureReport of <cli.YumBaseCli object at 0x276c0d0>>, (), {})
INFO:urlgrabber:calling callback: (<bound method YumBaseCli.failureReport of <cli.YumBaseCli object at 0x276c0d0>>, (), {})

Any ideas?
@Burhan Ali was right, it seems to be a temporary issue with the EPEL repository. I changed my /etc/yum.repos.d/epel.repo to use the baseurl instead of the mirrorlist as mentioned in this answer and it works now.
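For anyone hitting the same 404s, the workaround amounts to swapping which of the two URL lines in the repo file is active. A minimal sketch, assuming the stock epel.repo shown in the question:

# /etc/yum.repos.d/epel.repo -- uncomment baseurl, comment out mirrorlist
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch

Then clear the cached (broken) metadata and retry:

yum clean all && yum check-update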
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82914/" ] }
153,592
Is there any way to count the lines of a non-terminating input source? For example, I want to run the following for a period of time to count the number of requests: ngrep -W byline port 80 and dst host 1.2.3.4 | grep ":80" | wc Obviously, that doesn't work. When I Ctrl + C kill that, I'm not going to get any output from wc . I would rather avoid creating a file if I can.
Typing Ctrl + C from the terminal sends SIGINT to the foreground process group. If you want wc to survive this event and produce output, you need to have it ignore the signal. The solution is to run wc in a subshell and have its parent shell set SIGINT to be ignored before running wc . wc will inherit this setting and not die when SIGINT is sent to the process group. The rest of the pipeline will die, leaving wc reading from a pipe that has no process on the other end. This will cause wc to see EOF on the pipe and it will then output its counts and exit. ngrep -W byline port 80 and dst host 1.2.3.4 | grep ":80" | (trap '' INT ; wc)
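If you would rather sample for a fixed window instead of pressing Ctrl+C by hand, the same trick combines with coreutils timeout. A sketch, assuming a 60-second capture (-s INT makes timeout deliver the same signal the keyboard would, and only ngrep receives it, so the rest of the pipeline drains normally):

timeout -s INT 60 ngrep -W byline port 80 and dst host 1.2.3.4 | grep ":80" | (trap '' INT ; wc)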
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153592", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82928/" ] }
153,641
Is it possible to tell whether a terminal session is connected over SSH to a machine? Is there some kind of command that can check this for me, so I can execute something only when I am SSH'd in?
You can check if the $SSH_CLIENT and the $SSH_TTY variables are set: (through SSH) $ [[ -z $SSH_TTY ]]; echo $?1(local) $ [[ -z $SSH_TTY ]]; echo $?0 The $SSH_CLIENT variable contains the IP you're connecting from, along with the remote and the local port: (through SSH) $ echo $SSH_CLIENT15.3.25.189 54188 22
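To act on that automatically, a check like this can go in a shell startup file. A small sketch for ~/.bashrc, testing both variables since some setups clear one of them:

# run something only when this is an SSH session
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
    echo "remote login from ${SSH_CLIENT%% *}"   # first field of SSH_CLIENT is the IP
fi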
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82961/" ] }
153,659
I have a USB disk encrypted with LUKS . Immediately after mounting the disk today, I find that a recently edited text file contains seemingly random characters. All other files that I have checked, and the directory hierarchy, appear to be normal. What can cause this, and can such a file be recovered? I do have a recent backup of the entire disk, and a recent version of the text file is committed to a Git archive. I am, however, interested in a fix -- as well as preventative guidance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6764/" ] }
153,660
I tried 'man echo' in Bash and it told me that 'echo --help' will display help then exit, and similarly, that 'echo --version' will output the version and exit. But why doesn't it work? 'echo --help' simply prints '--help' literally.
man echo relates to the echo program . GNU echo supports a --help option, as do some others. When you run echo in Bash you instead get its builtin echo which doesn't. To access the echo program, rather than the builtin, you can either give a path to it: /bin/echo --help or use Bash's enable command to disable the built-in version: $ enable -n echo$ echo --help Bash has built-in versions of a lot of basic commands, because it's a little faster to do that, but you can always bypass them like this when you need to.
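You can list both versions, and pick the external one explicitly, like this (a quick illustration; the /bin path may differ per system):

$ type -a echo
echo is a shell builtin
echo is /bin/echo
$ env echo --help    # env always runs the external program, bypassing the builtin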
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82973/" ] }
153,665
I was checking the unshare command, and according to its man page, unshare - run program with some namespaces unshared from parent I also see there is a type of namespace listed as, mount namespace mounting and unmounting filesystems will not affect rest of the system. What exactly is the purpose of this mount namespace? I am trying to understand this concept with the help of an example.
Running unshare -m gives the calling process a private copy of its mount namespace, and also unshares file system attributes so that it no longer shares its root directory, current directory, or umask attributes with any other process. So what does the above paragraph say? Let us try and understand using a simple example. Terminal 1: I do the below commands in the first terminal. #Creating a new processunshare -m /bin/bash#creating a new mount pointsecret_dir=`mktemp -d --tmpdir=/tmp`#creating a new mount point for the above created directory. mount -n -o size=1m -t tmpfs tmpfs $secret_dir#checking the available mount points. grep /tmp /proc/mounts The last command gives me the output as, tmpfs /tmp/tmp.7KtrAsd9lx tmpfs rw,relatime,size=1024k 0 0 Now, I did the following commands as well. cd /tmp/tmp.7KtrAsd9lxtouch hellotouch helloagainls -lFa The output of the ls command is, ls -lFatotal 4drwxrwxrwt 2 root root 80 Sep 3 22:23 ./drwxrwxrwt. 16 root root 4096 Sep 3 22:22 ../-rw-r--r-- 1 root root 0 Sep 3 22:23 hello-rw-r--r-- 1 root root 0 Sep 3 22:23 helloagain So what is the big deal in doing all this? Why should I do it? I open another terminal now ( terminal 2 ) and do the below commands. cd /tmp/tmp.7KtrAsd9lxls -lFa The output is as below. ls -lFatotal 8drwx------ 2 root root 4096 Sep 3 22:22 ./drwxrwxrwt. 16 root root 4096 Sep 3 22:22 ../ The files hello and helloagain are not visible and I even logged in as root to check these files. So the advantage is, this feature makes it possible for us to create a private temporary filesystem that even other root-owned processes cannot see or browse through. From the man page of unshare , mount namespace Mounting and unmounting filesystems will not affectthe rest of the system (CLONE_NEWNS flag), except for filesystemswhich are explicitly marked as shared (with mount--make-shared; see /proc/self/mountinfo for the shared flags). It's recommended to use mount --make-rprivate or mount --make-rslaveafter unshare--mount to make sure that mountpoints in the new namespace are really unshared from the parental namespace. The memory being utilized for the namespace is VFS which is from kernel. And - if we set it up right in the first place - we can create entire virtual environments in which we are the root user without root permissions. References: The example is framed using the details from this blog post .Also, the quotes of this answer are from this wonderful explanation from Mike . Another wonderful read regarding this can be found from the answer from here .
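A quick way to confirm the two terminals really are in different mount namespaces is to compare their namespace links under /proc. A sketch (the inode numbers here are made up; what matters is that they differ):

# terminal 1 (inside the unshared shell)
readlink /proc/$$/ns/mnt     # e.g. mnt:[4026532210]

# terminal 2
readlink /proc/$$/ns/mnt     # e.g. mnt:[4026531840]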
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
153,687
This is my first time working with HeidiSQL. I want to download and install HeidiSQL on Linux Mint. Could anybody please provide a link to download a Linux version of HeidiSQL and explain how to install it? After googling I am not getting a proper result.
A good alternative to Heidi that runs on Linux without wine is DBeaver. It's a Java app and the user interface will be familiar for users of HeidiSQL. https://dbeaver.io/

Installation instructions

How to install DBeaver 2.3.6 on 32 bit Ubuntu, Linux Mint, Elementary OS, Debian, Crunchbang and KWheezy systems:

$ wget -c https://dbeaver.io/files/dbeaver-ce_latest_i386.deb
$ sudo dpkg -i dbeaver-ce_latest_i386.deb
$ sudo apt-get install -f

How to install DBeaver 2.3.6 on 64 bit Ubuntu, Linux Mint, Elementary OS, Debian, Crunchbang and KWheezy systems:

$ wget -c https://dbeaver.io/files/dbeaver-ce_latest_amd64.deb
$ sudo dpkg -i dbeaver-ce_latest_amd64.deb
$ sudo apt-get install -f

Source
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82993/" ] }
153,691
I just installed lxc in Arch Linux, but the qemu-debootstrap binary seems missing, This command sudo lxc-create -n test -t ubuntu -P /run/shm/1 complains about that. I couldn't find it with either pacman or yaourt . Any ideas how to fix that? I have the debootstrap script installed and that works though
Debootstrap is in the aur/debootstrap package. After the installation you will have to make a symlink in /usr/bin: cd /usr/bin ; ln -sf debootstrap qemu-debootstrap. After that, do what ouzmoutous suggests. In any case, I always advise using downloaded templates. HTH
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
153,693
I'm trying to change the cpu frequency on my laptop (running Linux), and not having any success. Here are some details:

# uname -a
Linux yoga 3.12.21-gentoo-r1 #4 SMP Thu Jul 10 17:32:31 HKT 2014 x86_64 Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz GenuineIntel GNU/Linux
# cpufreq-info
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to [email protected], please.
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 0.97 ms.
  hardware limits: 800 MHz - 2.60 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 800 MHz and 2.60 GHz.
                  The governor "powersave" may decide which speed to use within this range.
  current CPU frequency is 2.42 GHz (asserted by call to hardware).
(similar information for cpus 1, 2 and 3)
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
performance powersave

I initially had the userspace governor built into the kernel, but then I also tried building it as a module (with the same results); it was loaded while running the above commands (and I couldn't find any system messages when loading it):

# lsmod
Module                  Size  Used by
cpufreq_userspace       1525  0
(some other modules)

And here are the commands I tried for changing the frequency:

# cpufreq-set -f 800MHz
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not available,
  for example because of hardware which cannot be set to a specific frequency
  or because the userspace governor isn't loaded?
# cpufreq-set -g userspace
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not available,
  for example because of hardware which cannot be set to a specific frequency
  or because the userspace governor isn't loaded?

Any ideas?
This is because your system is using the new driver called intel_pstate . There are only two governors available when using this driver: powersave and performance . The userspace governor is only available with the older acpi-cpufreq driver (which will be automatically used if you disable intel_pstate at boot time; you then set the governor/frequency with cpupower ): disable the current driver: add intel_pstate=disable to your kernel boot line boot, then load the userspace module: modprobe cpufreq_userspace set the governor: cpupower frequency-set --governor userspace set the frequency: cpupower --cpu all frequency-set --freq 800MHz
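To make the first step persistent on a GRUB 2 system, the parameter usually goes in /etc/default/grub. A sketch assuming a Debian-style setup (the variable name and regeneration command vary slightly by distro):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable"

# then regenerate the config
update-grub    # or: grub2-mkconfig -o /boot/grub2/grub.cfg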
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/153693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34089/" ] }
153,697
To determine the maximum length of each column in a comma-separated csv file I hacked together a bash script. When I ran it on a Linux system it produced the correct output, but I need it to run on OS X, and it relies on the GNU version of wc, which can be used with the parameter -L for --max-line-length. The version of wc on OS X does not support that specific option and I'm looking for an alternative. My script (which may not be that good; it reflects my poor scripting skills, I guess):

#!/bin/bash
for((i=1;i< `head -1 $1|awk '{print NF}' FS=,`+1 ;i++)); do
    echo | xargs echo -n "Column$i: " && cut -d, -f $i $1 | wc -L
done

Which prints:

Column1: 6
Column2: 7
Column3: 4
Column4: 4
Column5: 3

For my test file:

123,eeeee,2323,tyty,315
4523,eegfeee,23,yty,343

I know installing the GNU CoreUtils through Homebrew might be a solution, but that's not a path I want to take, as I'm sure it can be solved without modifying the system.
Why not use awk? I don't have a Mac to test on, but length() is a pretty standard function in awk, so this should work.

awk file (test.awk):

{
    for (i=1;i<=NF;i++) {
        l=length($i)
        if ( l > linesize[i] ) linesize[i]=l
    }
}
END {
    for (l in linesize)
        printf "Column%d: %d\n",l,linesize[l]
}

then run

mybox$ awk -F, -f test.awk a.txt
Column4: 4
Column5: 3
Column1: 6
Column2: 7
Column3: 4
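The same logic also fits on one line, and indexing the END loop by field number prints the columns in order instead of awk's unordered 'in' traversal. This assumes every row has the same number of fields as the last one:

awk -F, '{for(i=1;i<=NF;i++) if(length($i)>m[i]) m[i]=length($i)}
    END{for(i=1;i<=NF;i++) printf "Column%d: %d\n", i, m[i]}' a.txt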
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41888/" ] }
153,707
I have a working SVN email notification setup, but now it fails with an error; could you help me? (The post-commit hook used to send mail for every commit automatically, but now nothing is sent.)

svn commit -m "[1] add some text in this file"

error:

Sending        test/test.txt
Transmitting file data .

I looked in the syslog:

tail -f /var/log/syslog
Sep  4 13:16:42 dmayavanlo1 logger: Going to execute the email notification command
Sep  4 13:16:42 dmayavanlo1 sSMTP[3116]: Unable to locate smtp.gmail.com
Sep  4 13:16:42 dmayavanlo1 logger: sendmail: Cannot open smtp.gmail.com:587
Sep  4 13:16:42 dmayavanlo1 sSMTP[3116]: Cannot open smtp.gmail.com:587
Sep  4 13:16:42 dmayavanlo1 logger: Traceback (most recent call last):
Sep  4 13:16:42 dmayavanlo1 logger: File "/home/bugzilla/mysvn/hooks/mailer.py", line 1348, in <module>
Sep  4 13:16:42 dmayavanlo1 logger: sys.argv[3:3+expected_args])
Sep  4 13:16:42 dmayavanlo1 logger: File "/usr/lib/python2.7/dist-packages/svn/core.py", line 281, in run_app
Sep  4 13:16:42 dmayavanlo1 logger: return func(application_pool, *args, **kw)
Sep  4 13:16:42 dmayavanlo1 logger: File "/home/bugzilla/mysvn/hooks/mailer.py", line 105, in main
Sep  4 13:16:42 dmayavanlo1 logger: messenger.generate()
Sep  4 13:16:42 dmayavanlo1 logger: File "/home/bugzilla/mysvn/hooks/mailer.py", line 383, in generate
Sep  4 13:16:42 dmayavanlo1 logger: group, params, paths, subpool)
Sep  4 13:16:42 dmayavanlo1 logger: File "/home/bugzilla/mysvn/hooks/mailer.py", line 653, in generate_content
Sep  4 13:16:42 dmayavanlo1 logger: renderer.render(data)
Sep  4 13:16:42 dmayavanlo1 logger: File "/home/bugzilla/mysvn/hooks/mailer.py", line 963, in render
Sep  4 13:16:42 dmayavanlo1 logger: self._render_diffs(data.diffs, '')
Sep  4 13:16:42 dmayavanlo1 logger: File "/home/bugzilla/mysvn/hooks/mailer.py", line 1042, in _render_diffs
Sep  4 13:16:42 dmayavanlo1 logger: w(line.raw)
Sep  4 13:16:42 dmayavanlo1 logger: IOError: [Errno 32] Broken pipe

2) tail -f /var/log/apache2/error.log

[Thu Sep 04 12:34:11 2014] [error] [client 192.168.1.12] Could not fetch resource information.  [301, #0]
[Thu Sep 04 12:34:11 2014] [error] [client 192.168.1.12] Requests for a collection must have a trailing slash on the URI.  [301, #0]
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78285/" ] }
153,732
I was able to archive and compress a folder with the following command:

tar -cvfz example2.tgz example1

I then removed the example1 folder and tried to unpack the archive using this command:

tar -xvfz example2.tgz

and tried

tar -zxvf example2.tgz

Neither of these commands worked. The error returned was:

gzip: example2.tgz: not in gzip format
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors

It clearly used gzip compression, since I passed tar the z qualifier in the initial command. What am I doing wrong? I am on Ubuntu 14.04.
The command you're showing in your first line ( tar -cvfz example2.tgz example1 ) doesn't work and it should not output any file example2.tgz . Didn't you get an error? Perhaps the file example2.tgz existed already? Check if you have a file called z in that folder - that's where the tgz has been saved to, because: The -f parameter specifies the file which must follow immediately afterwards: -f <file> Try tar cvzf exam.tgz example1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81926/" ] }
153,761
When I try to find a file using find -name "filename" I get an error that says:

./var/named/chroot/var/named' is part of the same file system loop as `./var/named'

I ran the ls -ldi /var/named/chroot/var/named/ /var/named command and the inode numbers are the same. Research indicates the fix is to delete the hard link /var/named/chroot/var/named/ using rm -f and recreate it as a directory, but when I do this I am advised that it can't be deleted because it is a directory already. How do I fix this? I'm running Centos 6 with Plesk 11. The mount command gives this:

/dev/vzfs on / type reiserfs (rw,usrquota,grpquota)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
none on /dev type tmpfs (rw,relatime)
none on /dev/pts type devpts (rw,relatime)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/etc/named on /var/named/chroot/etc/named type none (rw,bind)
/var/named on /var/named/chroot/var/named type none (rw,bind)
/etc/named.rfc1912.zones on /var/named/chroot/etc/named.rfc1912.zones type none (rw,bind)
/etc/rndc.key on /var/named/chroot/etc/rndc.key type none (rw,bind)
/usr/lib64/bind on /var/named/chroot/usr/lib64/bind type none (rw,bind)
/etc/named.iscdlv.key on /var/named/chroot/etc/named.iscdlv.key type none (rw,bind)
/etc/named.root.key on /var/named/chroot/etc/named.root.key type none (rw,bind)
named, that is, the DNS server, runs in a chroot. To access the configuration files, the startup script uses mount --bind to make the configuration directories visible inside the chroot. This means that /var/named/ is the same as /var/named/chroot/var/named, and /var/named/chroot/var/named/chroot/var/named, and so on. This is a recursive directory structure, so if find tried to traverse it all it would never be able to terminate its execution; instead it realizes that the two directories are actually the same, and prints that message to warn you. The message means that find won't search inside /var/named/chroot/var/named because it realized it is the same as some other directory already seen before. It is a totally harmless message and you can safely ignore it: after skipping /var/named/chroot/var/named the find operation continues normally.
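If the warning bothers you, you can also tell find not to descend into the chroot's bind mounts at all with -prune. A sketch (adjust the pruned path to the bind mounts present on your system):

find . -path ./var/named/chroot -prune -o -name "filename" -print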
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83062/" ] }
153,763
I am trying to instruct GNU Make 3.81 to not stop if a command fails (so I prefix the command with -) but I also want to check the exit status on the next command and print a more informative message. However my Makefile below fails:

$ cat Makefile
all:
	-/bin/false
	([ $$? -eq 0 ] && echo "success!") || echo "failure!"

$ make
/bin/false
make: [all] Error 1 (ignored)
([ $? -eq 0 ] && echo "success!") || echo "failure!"
success!

Why does the Makefile above echo "success!" instead of "failure!"?

update: Following and expanding on the accepted answer, below is how it should be written:

failure:
	@-/bin/false && ([ $$? -eq 0 ] && echo "success!") || echo "failure!"

success:
	@-/bin/true && ([ $$? -eq 0 ] && echo "success!") || echo "failure!"
Each update command in a Makefile rule is executed in a separate shell. So $? does not contain the exit status of the previous failed command, it contains whatever the default value is for $? in a new shell. That's why your [ $? -eq 0 ] test always succeeds.
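The practical fix follows directly: keep the commands that need $? in one recipe line so they share a shell. A sketch of the original rule written that way (the backslash continuation is still a single shell invocation; recipe lines must start with a tab):

all:
	-/bin/false; \
	([ $$? -eq 0 ] && echo "success!") || echo "failure!"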
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24044/" ] }
153,785
Under consistent network device naming scheme, what does 'eno' stand for in network interface name eno16777736 for CentOS 7 or RHEL 7?
This is Predictable Network Interface Device Names in action. en is for Ethernet o is for on-board The number is a firmware/BIOS provided index. More details in the source of udev-builtin-net_id.c
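You can ask udev how it derived a particular name via the net_id builtin. A sketch; the exact properties printed depend on your hardware and udev version:

$ udevadm test-builtin net_id /sys/class/net/eno16777736 2>/dev/null
ID_NET_NAME_ONBOARD=eno16777736
...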
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83074/" ] }
153,788
One can find several threads on the Internet such as this: http://www.gossamer-threads.com/lists/linux/kernel/972619 where people complain they cannot build Linux with -O0, and are told that this is not supported; Linux relies on GCC optimizations to auto-inline functions, remove dead code, and otherwise do things that are necessary for the build to succeed. I've verified this myself for at least some of the 3.x kernels. The ones I've tried exit after a few seconds of build time if compiled with -O0. Is this generally considered acceptable coding practice? Are compiler optimizations, such as automatic inlining, predictable enough to rely on; at least when dealing with only one compiler? How likely is it that future versions of GCC might break builds of current Linux kernels with default optimizations (i.e. -O2 or -Os)? And on a more pedantic note: since 3.x kernels cannot compile without optimizations, should they be considered technically incorrect C code?
You've combined together several different (but related) questions. A few of them aren't really on-topic here (e.g., coding standards), so I'm going to ignore those. I'm going to start with whether the kernel is "technically incorrect C code". I'm starting here because the answer explains the special position a kernel occupies, which is critical to understanding the rest.

Is the Kernel Technically Incorrect C Code?

The answer is that it definitely is "incorrect". There are a few ways in which a C program can be said to be incorrect. Let's get a few simple ones out of the way first:

A program which doesn't follow the C syntax (i.e., has a syntax error) is incorrect. The kernel uses various GNU extensions to the C syntax. Those are, as far as the C standard is concerned, syntax errors. (Of course, to GCC, they are not. Try compiling with -std=c99 -pedantic or similar...)

A program which doesn't do what it's designed to do is incorrect. The kernel is a huge program and, as even a quick check of its changelogs will prove, surely does not. Or, as we'd commonly say, it has bugs.

What Optimization means in C

[NOTE: This section contains a very loose restatement of the actual rules; for details, see the standard and search Stack Overflow.]

The C standard says that certain code must produce certain behavior. It also says certain things which are syntactically valid C have "undefined behavior"; an (unfortunately common!) example is to access beyond the end of an array (e.g., a buffer overflow). Undefined behavior is powerful. If a program contains it, even a tiny bit, the C standard no longer cares what behavior the program exhibits or what output a compiler produces when faced with it.

But even if the program contains only defined behavior, C still allows the compiler a lot of leeway. As a trivial example (note: for my examples, I'm leaving out #include lines, etc., for brevity):

void f() {
    int *i = malloc(sizeof(int));
    *i = 3;
    *i += 2;
    printf("%i\n", *i);
    free(i);
}

That should, of course, print 5 followed by a newline. That's what's required by the C standard. If you compile that program and disassemble the output, you'd expect malloc to be called to get some memory, the pointer returned stored somewhere (probably a register), the value 3 stored to that memory, then 2 added to that memory (maybe even requiring a load, add, and store), then the value copied to the stack, and also a pointer to the string "%i\n" put on the stack, then the printf function called. A fair bit of work. But instead, what you might see is as if you'd written:

/* Note that isn't hypothetical; gcc 4.9 at -O1 or higher does this. */
void f() { printf("%i\n", 5); }

and here's the thing: the C standard allows that. The C standard only cares about the results, not the way they are achieved. That's what optimization in C is about. The compiler comes up with a smarter (generally either smaller or faster, depending on the flags) way to achieve the results required by the C standard. There are a few exceptions, such as GCC's -ffast-math option, but otherwise the optimization level does not change the behavior of technically correct programs (i.e., ones containing only defined behavior).

Can You Write a Kernel Using Only Defined Behavior?

Let's continue to examine our example program. The version we wrote, not what the compiler turned it into. The first thing we do is call malloc to get some memory. The C standard tells us what malloc does, but not how it does it.
If we look at an implementation of malloc aimed at clarity (as opposed to speed), we'd see that it makes some syscall (such as mmap with MAP_ANONYMOUS) to get a large chunk of memory. It internally keeps some data structures telling it which parts of that chunk are used vs. free. It finds a free chunk at least as large as what you asked for, carves out the amount you asked for, and returns a pointer to it. It's also entirely written in C, and contains only defined behavior. If it's thread-safe, it may contain some pthread calls.

Now, finally, if we look at what mmap does, we see all kinds of interesting stuff. First, it does some checks to see if the system has enough free RAM and/or swap for the mapping. Next, it finds some free address space to put the block in. Then it edits a data structure called the page table, and probably makes a bunch of inline assembly calls along the way. It may actually find some free pages of physical memory (i.e., actual bits in actual DRAM modules)---a process which may require forcing other memory out to swap---as well. If it doesn't do that for the entire requested block, it'll instead set things up so that'll happen when said memory is first accessed. Much of this is accomplished with bits of inline assembly, writing to various magic addresses, etc. Note that it also uses large parts of the kernel, especially if swapping is required.

The inline assembly, writing to magic addresses, etc. is all outside the C specification. This isn't surprising; C runs across many different machine architectures, including a bunch that were barely imaginable in the early 1970s when C was invented. Hiding that machine-specific code is a core part of what a kernel (and to some extent a C library) is for.

Of course, if you go back to the example program, it becomes clear printf must be similar. It's pretty clear how to do all the formatting, etc. in standard C; but actually getting it on the monitor? Or piped to another program? Once again, a lot of magic is done by the kernel (and possibly X11 or Wayland).

If you think of other things the kernel does, a lot of them are outside C. For example, the kernel reads data from disks (C knows nothing of disks, PCIe buses, or SATA) into physical memory (C knows only of malloc, not of DIMMs, MMUs, etc.), makes it executable (C knows nothing of processor execute bits), and then calls it as functions (not only outside C, but very much disallowed).

The Relationship Between a Kernel and its Compiler(s)

If you remember from before, if a program contains undefined behavior, so far as the C standard is concerned, all bets are off. But a kernel really has to contain undefined behavior. So there has to be some relationship between the kernel and its compiler, at least enough that the kernel developers can be confident the kernel will work despite violating the C standard. At least in the case of Linux, this includes the kernel having some knowledge of how GCC works internally.

How likely is it to break? Future GCC versions will probably break the kernel. I can say this pretty confidently as it's happened several times before. Of course, things like the strict aliasing optimizations in GCC broke plenty of things besides the kernel, too.

Note also that the inlining that the Linux kernel is depending on is not automatic inlining, it's inlining that the kernel developers have manually specified. There are various people who have compiled the kernel with -O0 and report it basically works, after fixing a few minor problems.
(One is even in the thread you linked to.) Mostly, it's that the kernel developers see no reason to compile with -O0, requiring optimization as a side effect makes some tricks work, and no one tests with -O0, so it's not supported. As an example, this compiles and links with -O1 or higher, but not with -O0:

void f();

int main() {
    int x = 0, *y;
    y = &x;
    if (*y)
        f();
    return 0;
}
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79424/" ] }
153,793
A week ago I tried to install Steam, I downloaded the .deb package and installed it, and played a game with it. In the next day, Steam prompted me with this: Steam needs to install these additional packages: libgl1-mesa-dri:i386, libgl1-mesa-glx:i386, libc6:i386 And then I did it, I installed those packages. But it broke my system. The next time I rebooted my system, it went straight to tty, no GUI. I tried a bunch of things to fix it but the only way (for me, a new guy to Linux) was to reinstall the system. Now I'm trying to install it again and I found on the internet that installing ia32-libs would fix Steam. When I was going to install it, apt-get said that it would need to remove elementary-desktop . This is bad. Right now I don't know how to install Steam without breaking my system. What's the proper way to install it? If it's relevant, I'm using integrated graphics with an Intel Core i3 processor, Linux 3.2.0-51, distro: elementary OS Luna 64bit.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83083/" ] }
153,804
I have a file myfile that must be re-generated periodically. Re-generation takes some seconds. On the other hand, I have to periodically read the last (or next to last) file generated. What is the best way to guarantee that I am reading a completely generated file and that, once I begin reading it, I will be able to read it completely? One possible solution is myfile is actually a soft link to the last generated file, say myfile.last . regeneration is done on a new file, say myfile.new after regeneration, myfile.new is moved onto myfile.last The problem I see (and I don't know the answer to) is: if another script is copying myfile while the mv takes place, does cp finish correctly? Another possible solution would be to generate files with a timestamp on its name, say myfile-2014-09-03_12:34 and myfile is again a soft link to the last created file. This link should be changed after creation to point to the new file. Again: what are the odds that something like cp myfile anotherfile copies a corrupted file?
If you're moving within the same filesystem, mv is atomic -- it's just a rename, not copying contents. So if the last step of your generation is: mv myfile.new myfile.last The reading processes will always see either the old or new version of the file, never anything incomplete.
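A minimal sketch of the generator side, assuming a hypothetical generate_data command and a temp file on the same filesystem as the target (rename is only atomic within one filesystem):

#!/bin/sh
tmp=$(mktemp myfile.XXXXXX) || exit 1   # temp file in the current dir, same filesystem
generate_data > "$tmp"                  # the slow regeneration happens off to the side
mv "$tmp" myfile.last                   # atomic rename: readers see old or new, never partial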
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61222/" ] }
153,807
I would like to compare 2 similar files on a common column. The files will have identical headers.

file1.txt

mem_id Date Time Building
aa     bb   cc   dd
ee     ff   gg   hh
ii     jj   kk   ll

file2.txt

mem_id Date Time Building
aa     bb   cc   dd
ee     ff   2g   hh
ii     jj   kk   2l

Command

awk 'NR==FNR{for(i=1;i<=NF;i++){A[i,NR]=$i}next} {for(i=1;i<=NF;i++){if(A[i,FNR]!=$i)\
{print "ID#-"$1": Column",i"- File1.txt value=",A[i,FNR]" / File2.txt value= "$i}}}'\
file1.txt file2.txt

Current Output

ID#-ee: Column 3- File1.txt value= gg / File2.txt value= 2g
ID#-ii: Column 4- File1.txt value= ll / File2.txt value= 2l

Desired Output

mem_id#-ee: Time- file1.txt value= gg / file2.txt value= 2g
mem_id#-ii: Building- file1.txt value= ll / file2.txt value= 2l

I am very close. But I would like help with a few improvements.

1- I would like to replace the "Column 3" and "Column 4" with the actual column header (Time, Building, whatever)
2- I would like to dynamically gather the file names in the output and not have to add it as part of the command (to make it universal)
3- I would like this scriptable.

Any help would be appreciated.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79825/" ] }
153,808
I often leave tasks running within screen so that I can check on them later, but I sometimes need to check which command is running within the screen. It's usually a php script of sorts, such as screen -d -m nice -n -10 php -f convertThread.php start=10000 I don't record which screens are running which command, but I'd like to be able to get a feel for the progress made by checking what command was run within it, without killing the process. I don't see any option for this within the screen help pages.
I recently had to do this. On Stack Overflow I answered how to find the PID of the process running in screen . Once you have the PID you can use ps to get the command. Here is the contents of that answer with some additional content to address your situation: You can get the PID of the screen sessions here like so: $ screen -lsThere are screens on: 1934.foo_Server (01/25/15 15:26:01) (Detached) 1876.foo_Webserver (01/25/15 15:25:37) (Detached) 1814.foo_Monitor (01/25/15 15:25:13) (Detached)3 Sockets in /var/run/screen/S-ubuntu. Let us suppose that you want the PID of the program running in Bash in the foo_Monitor screen session. Use the PID of the foo_Monitor screen session to get the PID of the bash session running in it by searching PPIDs (Parent PID) for the known PID: $ ps -el | grep 1814 | grep bashF S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD0 S 1000 1815 1814 0 80 0 - 5520 wait pts/1 00:00:00 bash Now get just the PID of the bash session: $ ps -el | grep 1814 | grep bash | awk '{print $4}'1815 Now we want the process with that PID. Just nest the commands, and this time use the -v flag on grep bash to get the process that is not bash: $ echo $(ps -el | grep $(ps -el | grep 1814 | grep bash | awk '{print $4}') | grep -v bash | awk '{print $4}')23869 We can use that PID to find the command (Look at the end of the second line): $ ps u -p 23869USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMANDdotanco+ 18345 12.1 20.1 5258484 3307860 ? Sl Feb02 1147:09 /usr/lib/foo Put it all together: $ ps u -p $(ps -el | grep $(ps -el | grep SCREEN_SESSION_PID | grep bash | awk '{print $4}') | grep -v bash | awk '{print $4}')
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33437/" ] }
153,834
I am trying to create a script to test whether it is possible to login via telnet. I do not want to really log in; therefore, expect is not needed. I just want to see if I am able to get a login prompt. This is being done from a Linux system, so I have been trying to use nc:

nc 192.168.10.5 23 -w 1 | grep -q login
if [ $? -eq 1 ]
then
    echo "console is down"
fi

The problem is that this is causing my console to lock up. It seems like the -w is not really dropping the connection. I also tried using telnet, but I am not able to break the connection from within the script. Trying echo "\035" | telnet 192.168.10.5 breaks before I get a login prompt.
Bash provides pseudo devices that you're likely familiar with, such as /dev/null. However there are other devices, such as /dev/tcp and /dev/udp, for testing network connections, which you may use from within Bash scripts too.

excerpt from Bash's man page

Bash handles several filenames specially when they are used in redirections, as described in the following table:

/dev/fd/fd
    If fd is a valid integer, file descriptor fd is duplicated.
/dev/stdin
    File descriptor 0 is duplicated.
/dev/stdout
    File descriptor 1 is duplicated.
/dev/stderr
    File descriptor 2 is duplicated.
/dev/tcp/host/port
    If host is a valid hostname or Internet address, and port is an integer port number or service name, bash attempts to open a TCP connection to the corresponding socket.
/dev/udp/host/port
    If host is a valid hostname or Internet address, and port is an integer port number or service name, bash attempts to open a UDP connection to the corresponding socket.

Example

Here I'm testing the connection to a host in my domain named skinner and seeing if I can connect to its port 22. NOTE: Port 22 is for SSH; for telnet use port 23.

$ echo > /dev/tcp/skinner/22 && echo "it's up" || echo "it's down"
it's up

Great, so let's try a non-port:

$ echo > /dev/tcp/skinner/223 && echo "it's up" || echo "it's down"
bash: connect: Connection refused
bash: /dev/tcp/skinner/223: Connection refused
it's down

Well that works, but it's awfully ugly output. Not to worry. You can run the echo > /dev/tcp/... in a subshell and redirect all the output to /dev/null to clean it up a bit. Here's the pattern you can use within your shell scripts:

$ (echo > /dev/tcp/skinner/22) > /dev/null 2>&1 \
    && echo "it's up" || echo "it's down"
it's up
$ (echo > /dev/tcp/skinner/223) > /dev/null 2>&1 \
    && echo "it's up" || echo "it's down"
it's down
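One caveat: against a filtered port the connect attempt can hang for a long time, so in scripts it's worth wrapping the test in coreutils timeout. A small helper sketch:

# usage: port_open HOST PORT
port_open() {
    timeout 1 bash -c "echo > /dev/tcp/$1/$2" >/dev/null 2>&1
}
port_open 192.168.10.5 23 && echo "console is up" || echo "console is down"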
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83103/" ] }
153,841
(I already asked this question on superuser but got no answer so I'll try here now.) I would like to have the cursor keys and some others available as AltGr+c,h,t,n etc. (i,j,k,l if you have qwerty). I have made a custom keyboard layout file (see here ). There I basically just took a layout file and modified the relevant lines to be like key <AC09> { [ n, N, Right, Right ] }; I have this as /usr/share/X11/xkb/symbols/nonpop and I load it with setxkbmap nonpop in ~/.xinitrc (I'm not using any DE, just the Awesome WM). And it works---almost. In Firefox, for example, everything works nicely. For example AltGr+n moves the cursor right and Shift+AltGr+n selects text. Similarly in the main text area in LyX. In MonoDevelop the cursor moves with AltGr+n, but Shift+AltGr+n also only moves the cursor instead of selecting. Shift+Right selects text normally. In NetBeans and LyX dialogs pressing (Shift+)AltGr+n does absolutely nothing. I guess these other programs read the AltGr state and decide it's a bad thing. So, is there any way to make things happen the way I want? It looks like overlays could be a solution but I didn't get them to work yet.
The problem is that when you use a common modifier like AltGr , programs actually see it as being pressed when you use the AltGr arrows. So Shift + AltGr + i actually appears to a program as if you are pressing Shift + Alt + Up and it's the addition of the extra Alt that confuses the program. You can avoid this by either adding more shortcuts to the program if possible, so Shift + Alt + Up does the same action as Shift + Up , or you can use an uncommon modifier that the program is not aware of. There are other ways of doing this, but this was one of the simpler ways and works well for me. It creates a "Group2" overlay on the alphanumeric keys, such that when you switch to Group2 you will have arrows there instead of letters. You can switch to Group2 with a toggle (on/off like Caps Lock normally works) or a momentary (press to activate, like Shift normally works) partial alphanumeric_keysxkb_symbols "alpha_arrows" { key <AD07> { symbols[Group2]= [ Home ] # u }; key <AD08> { symbols[Group2]= [ Up ] # i }; key <AD09> { symbols[Group2]= [ End ] # o }; key <AD10> { symbols[Group2]= [ Prior ] # p }; key <AC07> { symbols[Group2]= [ Left ] # j }; key <AC08> { symbols[Group2]= [ Down ] # k }; key <AC09> { symbols[Group2]= [ Right ] # l }; key <AC10> { symbols[Group2]= [ Next ] # ; }; key <AB06> { symbols[Group2]= [ Delete ] # n };}; You can then pick a key to use as a Group2 activation key. You can see all the options (if you don't want to roll your own) with this: $ grep grp /usr/share/X11/xkb/rules/base.lst Then pick an option, for example this one activates the arrows only while Caps Lock is being held. It also adds a new combination Alt + Caps Lock to actually switch caps lock on and off: $ setxkbmap -option grp:caps_switch You can add this into your Xorg config file to make the change permanent - there are plenty of guides online about how to do this. Note that: key is used, as opposed to replace key because we want to add to the existing definition, not replace it. If you actually have an alternate keyboard layout (possibly some international keyboards that can also type Latin/English letters), this method may not work for you. You will have to define a new type based on ALPHANUMERIC with extended modifiers (e.g. Hyper) to activate the arrow keys.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56084/" ] }
153,862
I have a directory containing a large number of files. I want to delete all files except for file.txt . How do I do this? There are too many files to remove the unwanted ones individually and their names are too diverse to use * to remove them all except this one file. Someone suggested using rm !(file.txt) But it doesn't work. It returns: Badly placed ()'s My OS is Scientific Linux 6. Any ideas?
POSIXly: find . ! -name 'file.txt' -type f -exec rm -f {} + will remove all regular files (recursively, including hidden ones) except file.txt . To remove directories, change -type f to -type d and add -r option to rm . In bash , to use rm -- !(file.txt) , you must enable extglob : $ shopt -s extglob $ rm -- !(file.txt) (or calling bash -O extglob ) Note that extglob only works in bash and Korn shell family. And using rm -- !(file.txt) can cause an Argument list too long error. In zsh , you can use ^ to negate pattern with extendedglob enabled: $ setopt extendedglob$ rm -- ^file.txt or using the same syntax with ksh and bash with options ksh_glob and no_bare_glob_qual enabled.
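Since that find is recursive, note that with GNU find you can restrict the deletion to the current directory only:

find . -maxdepth 1 ! -name 'file.txt' -type f -exec rm -f {} +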
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/153862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78535/" ] }
153,880
Suppose I have a particular date stored in a variable date_m. I want the date 25 days before date_m. For example: if I have 15/09/2014 stored in my variable, then I want 21/08/2014 returned when I subtract 25 days from the date stored in the variable.
With the GNU implementation of date, to display yesterday's date, enter:

$ date --date="1 days ago"

OR

$ date --date="-1 day"

For your question:

$ date --date="25 days ago"

OR

$ date --date="-25 day"

For using it with variables, you can use $():

pastDate=$(date --date="-25 day")
echo "$pastDate"

For the general case of n days before a specific date:

#!/bin/bash
date1="Tue Sep  2 07:53:47 EEST 2014"
echo "Before? "
read n
date --date="$date1 -$n day"

Source
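For the exact variable from the question: GNU date will not parse the DD/MM/YYYY form, so store (or convert to) the ISO form first. A sketch:

date_m="2014-09-15"                          # ISO form of 15/09/2014
date --date="$date_m -25 days" +%d/%m/%Y     # prints 21/08/2014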
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83129/" ] }
153,885
I open a terminal and navigate to my project folder (ex: cd /proj/tickecting_app ).After this I execute this command: rvm use ruby 1.9.x@gemset . I want this command to run this every time I open the terminal. Is there a way to run this automatically? And to be more specific I navigate to different projects in separate tabs. For example: First tab cd /proj/tickecting_app and run rvm use ruby 1.9.x@gemset . Second tab cd /proj/rss_feed_app and run rvm use ruby 2.1.x@gemset . I want to run different commands depending upon the project. Is this possible?
If you are using RVM then it is just a matter of putting a file named .rvmrc in your project directory with the following content: rvm use ruby 1.9.x@gemset For your second project it is the same, just change the rvm invocation.
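So for the two tabs in the question, that would look like this (rvm applies the right ruby/gemset automatically on cd, after asking you to trust each file once):

# /proj/tickecting_app/.rvmrc
rvm use ruby 1.9.x@gemset

# /proj/rss_feed_app/.rvmrc
rvm use ruby 2.1.x@gemset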
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83134/" ] }
153,928
The question may be trivial, but with this simple for loop

for i in {1..10000000}; do
    echo "$i" line >> file
done

my shell crashed after some processing time (the tab in the terminal emulator was automatically closed) and file was not created. It is GNU bash, version 4.1.11(2)-release. Is this due to a memory limit or something else? And if yes, what should this limit be? With {1..1000000} it worked correctly.
The reason this happens is that the braces are expanded before the command is invoked. Thus you actually end up with a command like:

for i in 1 2 3 ... 10000000 ; do ...

...and thus it uses up a lot of memory, or it crashes. The solution when working with long loops is to use bash's C-style syntax:

for ((i = 1; i <= 10000000; i++))
do
    ...
done

Or, if you want to remain compatible with other shells, you can pipe to a while loop:

seq 1 10000000 | while IFS= read -r line
do
    ...
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
153,943
less itself isn't capable of doing syntax highlighting, according to this thread. However, git diff nicely shows colored output in less, its default pager. When I redirect the output of git diff into a file, no color escape sequences are visible. Does git diff know where its output is being sent, and format the output accordingly? How would one do that? I just noticed that git colors the diff output (e.g. git diff); however, it doesn't do syntax highlighting in general, e.g. git show 415fec6:log.tex doesn't enable any TeX-like highlighting. Reading the git sources, I found the following hint in diff.h:

int use_color;

I was previously referring to syntax highlighting, but that was not correct. What I mean is output coloring, see e.g.
Git uses isatty() to check whether stdout is a tty: this is used to see if a pager must be used ( pager.c ) as well as colors ( color.c ).
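You can see the same check from the shell: test 's -t operator reports whether a file descriptor is attached to a terminal, which is essentially what isatty() returns. A small illustration:

[ -t 1 ] && echo "stdout is a tty" || echo "stdout is not a tty"        # in a terminal: first branch
( [ -t 1 ] && echo "stdout is a tty" || echo "stdout is not a tty" ) | cat   # piped: second branch

That second line behaves exactly like git diff > file : stdout is no longer a terminal, so no escape sequences are emitted.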
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/153943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6958/" ] }
153,975
I want to change my Linux Mint 17 Cinnamon boot image manually. So, I replaced the image located at: /lib/plymouth/themes/mint-logo/logo.png with mine. The logo changes at shutdown but not at boot.
Just run:

sudo update-initramfs -u

The boot splash is loaded from the initramfs image, so after replacing the theme's logo you have to regenerate the initramfs before the change shows up at boot (which is why the shutdown image changed but the boot image didn't).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/153975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83189/" ] }
153,980
I have some problems installing the nginx package (nginx-full) on Debian Jessie:

# apt-get install nginx-full
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  nginx-common
Suggested packages:
  fcgiwrap nginx-doc
The following NEW packages will be installed:
  nginx-common nginx-full
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 510 kB of archives.
After this operation, 1.271 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://debian.c3sl.ufpr.br/debian/ jessie/main nginx-common all 1.6.1-1 [83,6 kB]
Get:2 http://debian.c3sl.ufpr.br/debian/ jessie/main nginx-full amd64 1.6.1-1+b1 [427 kB]
Fetched 510 kB in 1s (266 kB/s)
Selecting previously unselected package nginx-common.
(Reading database ... 170540 files and directories currently installed.)
Preparing to unpack .../nginx-common_1.6.1-1_all.deb ...
Unpacking nginx-common (1.6.1-1) ...
Selecting previously unselected package nginx-full.
Preparing to unpack .../nginx-full_1.6.1-1+b1_amd64.deb ...
Unpacking nginx-full (1.6.1-1+b1) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up nginx-common (1.6.1-1) ...
Setting up nginx-full (1.6.1-1+b1) ...
Job for nginx.service failed. See 'systemctl status nginx.service' and 'journalctl -xn' for details.
invoke-rc.d: initscript nginx, action "start" failed.
dpkg: error processing package nginx-full (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 nginx-full
E: Sub-process /usr/bin/dpkg returned an error code (1)

# systemctl status nginx.service
nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled)
   Active: failed (Result: exit-code) since Sex 2014-09-05 11:39:46 BRT; 1s ago
  Process: 2972 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)

# journalctl -xn
No journal files were found.

Does anyone know how to fix it?
A similar issue was reported on Debian bug #754407 . In the end it was just the port 80 being taken by other process (Apache2). Might this be your case as well?
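One quick way to check whether something else is already listening on port 80 (use whichever of these tools is installed):

sudo ss -tlnp | grep ':80 '
sudo netstat -tlnp | grep ':80 '

If Apache (or anything else) shows up there, stop or reconfigure it, then run dpkg --configure -a to let the interrupted nginx post-install step finish.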
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/153980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83195/" ] }
154,001
Can I use file and magic ( http://linux.die.net/man/5/magic ) to override the description of some other known formats? For example, I would like to describe the following formats: BED: http://genome.ucsc.edu/FAQ/FAQformat.html#format1 and FASTA: http://en.wikipedia.org/wiki/FASTA_format , which are 'just' text files, or BAM: http://genome.ucsc.edu/FAQ/FAQformat.html#format5.1 , which is 'just' a gzipped file starting with the magic number BAM\1 . Do you know of any examples? Is it possible to provide custom C code to test the file instead of using the magic format?
You can use the -m option to specify an alternate list of magic files, and if you include your own before the compiled magic file ( /usr/share/file/magic.mgc on my system) in that list, those patterns will be tested before the "global" ones. You can create a function, or an alias, to always use that option transparently when you issue the file command. The language used in magic files is quite powerful, so there is seldom a need to revert to custom C coding. The only time I felt inclined to do so was in the 90's, when matching HTML and XML files was difficult because there was no way (at that time) to have the flexible casing and offset matching necessary to parse <HTML and < Html and < html with one pattern. I implemented that in C as a modifier to the 'string' pattern, allowing the ignoring of case and compacting of (optional) blanks. These changes in C required adaptation of the magic files as well. And unless the file source code has significantly changed since then, you will always need to modify (or provide extra) rules in magic files that match those C code changes. So you might as well start out trying to do it with changes to the magic files only, and fall back to changing the C code if that really doesn't work out.
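As a sketch for one of the formats mentioned in the question (the file name and rule are hypothetical; check magic(5) on your system for the exact test syntax, and note that a leading > in a string test must be escaped):

# fasta.magic: FASTA text files start with a '>' header line
0       string          \>      FASTA sequence text

Then put your file first in the -m list so it wins over the compiled defaults:

file -m fasta.magic:/usr/share/file/magic.mgc sequences.fa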
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154001", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5921/" ] }
154,064
I had a previous question (I deleted it) and I still don't understand why it is so hard to understand what I'm trying to ask, but I'll try once again, more clearly. I'm copying several folders over from server A to server B, but the command for each copy ('rsync') has to be executed from the command line, once for each folder. So, to copy folder A to server X, I execute an rsync command and, once it's done copying, I execute a new rsync command to copy folder B to server X. I've done a little research that probably helps you answer my needs. I'm looking for a way to write multiple commands at once that fire one after another, each starting once the previous one has finished its actions (thus finished copying). Usually, multiple commands are separated on one line by ; semicolons or && ampersands. The difference is that with semicolons, if the first/previous command fails, the next still runs; with ampersands, if a command produces an error, it stops. Thus, I used semicolons in my process and left my computer going overnight... but it only executed the first command, even though I wrote several on one line separated by semicolons. It should have executed all of them, one by one... So, first: how do I correctly execute multiple commands? And second: what is wrong with my command?

sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-01-02-233653/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-01-09-152837/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-01-20-201229/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-02-21-130931/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-03-05-113353/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-03-19-162703/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-05-13-215016/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap ;
sudo rsync -v -v -r -h -t —progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-06-01-231321/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap

I separated them with line breaks here for readability purposes; normally it's one whole line, with each command separated by a ; semicolon.
For repeatability I suggest putting the lines into a small script, but without the keyword sudo :

#!/bin/bash
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-01-02-233653/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-01-09-152837/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-01-20-201229/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-02-21-130931/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-03-05-113353/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-03-19-162703/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-05-13-215016/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap
rsync -v -v -r -h -t --progress /Volumes/My\ Book/Backups.backupdb/MbpScs-van-iSCS/2014-06-01-231321/SSDaeffer /Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap

To run the series of commands, type:

chmod +x myscript.sh
sudo ./myscript.sh

This way, sudo is not timing out during a long command. There is no need for a semicolon in a script file. By default, each line may fail and the next is executed (to change this, use set -o errexit ). I also noticed something fishy in your commands: for the option --progress you didn't use a regular hyphen, but some other, similar-looking character. This could be another source of the errors you're seeing.
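Since the eight lines differ only in the timestamped directory, the script could equally be written as a loop; this is just a restatement of the same commands:

#!/bin/bash
src='/Volumes/My Book/Backups.backupdb/MbpScs-van-iSCS'
dst='/Volumes/BackupMyBook/ViaMacbookMove/DitIsDeMap'
for d in 2014-01-02-233653 2014-01-09-152837 2014-01-20-201229 2014-02-21-130931 \
         2014-03-05-113353 2014-03-19-162703 2014-05-13-215016 2014-06-01-231321
do
    rsync -v -v -r -h -t --progress "$src/$d/SSDaeffer" "$dst"
done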
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83197/" ] }
154,070
Recently I installed qBitTorrent so I could download some episodes of a Creative Commons-licensed TV series. I simply used apt-get install qbittorrent , ran qbittorrent , then added the torrent files, and pressed "start". I noticed that the series would take too long to download, so I quit qBitTorrent when it was at 10%. The next day, I launched qBitTorrent again, and was surprised to find the downloads complete. Either the 2.5 GBs downloaded within 3 seconds or something else occurred. Does BitTorrent continue to exchange files after starting the download in qBitTorrent and then quitting qBitTorrent?
qBitTorrent has a mode where it is minimized in the notification area. If you go to the Options, then in the Behaviour tab, you will see a tree of checkboxes reading "Show qBit in notification area", and "Close qBit to notification area". This is the only way for qBittorrent to keep operating if you click the close button.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13099/" ] }
154,076
What is the equivalent for GPT using HDDs of: # fdisk -l /dev/hda > /mnt/sda1/hda_fdisk.info I got this from https://wiki.archlinux.org/index.php/disk_cloning (under "Create disk image") for getting the extra hdd info which may be important for restoring or extracting from multi-partition images. When I do this I get an error similar to: "WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted."
Some Unix partitioning tools are deprecated; the GPT partition table is newer and some tools don't work with GPT. GNU Parted does (gparted is the GNOME front-end to Parted). For example:

root@debian:/home/mohsen# parted -l /dev/sda
Model: ATA WDC WD7500BPVT-7 (scsi)
Disk /dev/sda: 750GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system     Flags
 1      32.3kB  41.1MB  41.1MB  primary   fat16           diag
 2      41.9MB  2139MB  2097MB  primary   fat32           boot
 3      2139MB  52.1GB  50.0GB  primary   ext4
 4      52.1GB  749GB   697GB   extended
 5      52.1GB  737GB   685GB   logical   ext4
 6      737GB   749GB   12.0GB  logical   linux-swap(v1)

NOTE: GPT is an abbreviation of GUID Partition Table and is much newer.
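So the GPT-aware equivalent of the fdisk line in the question would be something like the following ( print writes the partition table of a single disk to stdout):

parted /dev/sda print > /mnt/sda1/sda_parted.info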
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55082/" ] }
154,083
I have tried to create a Linux distribution with the Linux From Scratch (LFS) website. Everything went well until step 5.7. Glibc-2.19, but when I tried:

$LFS_TGT-gcc dummy.c

I get:

/tools/lib/gcc/i686-lfs-linux-gnu/4.8.2/../../../../i686-lfs-linux-gnu/bin/ld: cannot find crt1.o: No such file or directory
/tools/lib/gcc/i686-lfs-linux-gnu/4.8.2/../../../../i686-lfs-linux-gnu/bin/ld: cannot find crti.o: No such file or directory

So I googled for a while and realized that Debian changed some directories; I searched for those files and found them in:

/usr/libx32/

From those searches I realized this happens when trying to compile 64-bit on 32-bit setups, and that I should create symbolic links to them in:

/tools/lib/gcc/i686-lfs-linux-gnu/4.8.2/

But when I did that I got:

/tools/lib/gcc/i686-lfs-linux-gnu/4.8.2/crt1.o: file not recognized: File format not recognized

At this step I really don't know what to do next. How can I fix it?
The correct symbolic link is:

ln -s /tools/lib/crt*.o /tools/lib/gcc/i686-lfs-linux-gnu/4.8.2/
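After creating the links, the sanity check from that chapter of the LFS book can be re-run to confirm the toolchain picks up the right start files (roughly from memory; adjust to your book version):

echo 'int main(){}' > dummy.c
$LFS_TGT-gcc dummy.c
readelf -l a.out | grep ': /tools'

If the link step now succeeds and readelf reports an interpreter under /tools , the temporary toolchain is sane.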
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83268/" ] }
154,118
Here is what I typed in bash and the results I got with echo :

$ echo '!$'
!$
$ echo "!$"
echo "'!$'"
'!$'

I'd like to know how echo deals with the second input. It seems to me that echo would first print the strings you entered (expanding some if necessary), then execute certain parts if they are executable. A more mind-blowing example I can construct but am not able to understand is:

$ echo '!$'
!$
$ echo "!echo "!$""
echo "echo '!$' "'!$'""
echo '!$' !$
In the command:

echo "!$"

!$ is expanded by bash before being passed to echo . Inside double quotes, if enabled, history expansion will be performed unless you escape ! with a backslash \ . bash has done the expansion; echo does nothing special here, it just prints what it got. !$ refers to the last argument of the preceding command, which is the string '!$' . In your second example:

$ echo '!$'
!$
$ echo "!echo "!$""
echo "echo '!$' "'!$'""
echo '!$' !$

In the command echo "echo '!$' "'!$'"" , the arguments passed to echo are divided into three parts:

First: "echo '!$' " , expanded to the string echo '!$' .
Second: '!$' , expanded to the string !$ .
Third: "" , expanded to an empty string.
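If you want the literal characters instead, you can simply switch history expansion off in the current shell:

set +H       # disable ! history expansion
echo "!$"    # now prints the two literal characters !$
set -H       # turn it back on if you use it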
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82973/" ] }
154,204
For example, if I have script ./foo that takes 10 parameters, and I only want to pass the 8th parameter. The only way I know how to do this currently is: ./foo '' '' '' '' '' '' '' 'bar' Is there an easier/better way?
Assuming you can change the script, you should consider using optional parameters (options) instead of required parameters (arguments). If you had each of the first 7 parameters as options, defaulting to the empty string, then you could just do:

./foo bar

If you use a POSIX-compatible shell you can use the getopts utility, or the program getopt . bash - like most shells - offers getopts as a built-in. Either way is easier than rolling your own command-line parser. Unless you implement something like "the last X non-option arguments are the values for the last Y arguments", you would otherwise have to provide option strings before each of the 7 (now empty) strings if you want to set any of those. That however is not common practice: normally an option is always an option and an argument an argument, and the order of option "insertion" is free.
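A minimal getopts sketch of that idea (the option letters and script shape are made up for illustration; only three options shown):

#!/bin/sh
a='' b='' c=''    # defaults: empty strings
while getopts a:b:c: opt; do
    case $opt in
        a) a=$OPTARG ;;
        b) b=$OPTARG ;;
        c) c=$OPTARG ;;
        *) echo "usage: $0 [-a val] [-b val] [-c val] arg" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
echo "a=$a b=$b c=$c main=$1"

With that, ./foo bar sets only the positional argument, and ./foo -c baz bar sets just the one option you care about.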
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83327/" ] }
154,232
I use something like this to unmount a range of drives:

umount /dev/sd[c-k]2

Is there any way to use the same thing with mount? Something like this:

mount /dev/sd[c-k]2 /[c2-k2]
Globbing (which is what you're doing with your wildcard matching) expands the current command line. For example:

ls [abc]1

gets expanded to:

ls a1 b1 c1

Globbing only works where the command allows multiple arguments. While

umount /dev/sdc2 /dev/sdd2

works, there's no way to express the same thing for mount , which takes exactly one device and one mount point. So you have to loop it:

for d in /[c-k]2
do
    mount "/dev/sd${d#/}" "$d"
done

(The glob expands to the mount-point directories /c2 ... /k2; stripping the leading slash yields the matching device name.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73778/" ] }
154,254
Not sure how else to phrase the question, but basically, I often find myself running a command like vagrant to bring up the VM, and then ssh into it like below: vagrant up && vagrant ssh Short of writing my own function or script, is there a way to "reuse" the vagrant portion in the second part of the command?
With (t)csh , bash or zsh history expansion you could write:

vagrant up && !#:0 ssh

( !# refers to the command line typed so far, and :0 selects its first word, here vagrant .) But, seriously, you wouldn't
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/154254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47355/" ] }
154,281
I've made a fresh install of Debian Wheezy and installed zsh on it. A few days later, I did a vanilla installation of TeX Live 2014, so I added the necessary binary paths to my $PATH. Now I've started writing little scripts, so I would like to put them somewhere easily accessible, that is ~/bin . My path looks like this:

~/bin:/opt/texbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

Now, if I want to run something from TeX Live, it's easy:

% which pdflatex
/opt/texbin/pdflatex

No problem. But when I try running something from ~/bin , ...

% which hello_world
hello_world not found

So I double-checked:

% ls -l ~/bin
total 18
-rwxr-xr-x 1 bozbalci bozbalci 5382 Sep 8 00:28 hello_world

And it shows that hello_world is doing fine in ~/bin with its execution permissions set. I've tried rehash , but it didn't work. Help?
In a shell command like

PATH=~/bin:/opt/texbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

the tilde is expanded to your home directory when the shell command is executed. Thus the resulting value of PATH is something like /home/theconjuring/bin:/opt/texbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games . Make sure that the tilde isn't within quotes ( PATH="~/bin:…" ), otherwise it stands for itself. To prepend a directory to the current value of PATH , you can use

PATH=~/bin:$PATH

In general, in shells other than zsh, $PATH outside double quotes breaks when the value contains spaces or other special characters, but in an assignment, it's safe. With export , however, you need to write export PATH=~/bin:"$PATH" (though you don't need export with PATH since it's already in the environment). In zsh, you don't need double quotes except when the variable may be empty, but if you set PATH in .profile , it's processed by /bin/sh or /bin/bash . If you're setting PATH in ~/.pam_environment , however, you can't use ~ or $HOME to stand for your home directory. This file is not parsed by a shell, it's a simple list of NAME=value lines. So you need to write the paths in full.
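The quoting pitfall in one glance:

PATH="~/bin:$PATH"    # wrong: the tilde stays a literal ~ inside quotes
PATH=~/bin:"$PATH"    # right: ~ expands to /home/you before the assignment

You can confirm what actually ended up in the variable with echo "$PATH" .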
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83366/" ] }
154,302
Why does *NIX require me to make (i.e. build) my own drivers to install them? Couldn't they just be pre-built by the author so that, as long as they target the right CPU architecture, I could install the binaries? That would save me having to have all the right dependencies to build them. I ask because I am currently faced with this problem: https://unix.stackexchange.com/questions/154188/make-asus-pce-n15-driver-errors-on-steamos . It has been suggested to me in another forum that the issue may be that they target an old version of the kernel - so what am I supposed to do in this instance? Are you telling me drivers target specific versions of the kernel and are tied to them - that sounds like a nightmare!
This isn't necessarily a *NIX thing and is more of a Linux thing, IMO, and what you're considering an annoyance is actually a good thing: the drivers are made available in source form, rather than only in binary form. There are many reasons why you want drivers available like this - different architectures and kernel versions being two of them. I completely understand your frustration, but this is how Open Source generally works. I would pursue any building of drivers with the original authors or the distro that you're trying to build them on.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66712/" ] }
154,387
I was looking at the script /etc/init.d/sudo on my Ubuntu 14.04 Linux system. While reading the script, I found this statement:

find /var/lib/sudo -exec touch -d @0 '{}' \;

What is the meaning of @0 here? What does the above statement do?
That has nothing to do with bash or sudo . The find command is using the -exec action to run the given command on each file found. In this case, the command being run is

touch -d @0

If you check man touch on a GNU system, you will find that

-d, --date=STRING
       parse STRING and use it instead of current time

So, the -d is a way of choosing the date you want touch to set for the target file. The @ tells the GNU implementation of touch that this is a date defined as seconds since the epoch . Since it is actually 0 seconds, this means the very beginning of UNIX time. An easy way to check this is to give the same date to the GNU date command:

$ TZ=UTC date -d @0
Thu Jan  1 00:00:00 UTC 1970

So, the command you showed will find all files and directories in /var/lib/sudo and set their last modified date to 00:00 UTC on Thursday January first, 1970. The reason that line exists was nicely explained in the comments below:

@crisron The purpose of the line is to ensure that all sudo passwords from any previous instance of sudo are expired. The first time a user runs sudo, for each session it'll create a file in that directory. Sudo then checks the time stamp on the file the next time you run it to decide whether or not to ask you for the password again. This line ensures that when the sudo daemon is restarted all passwords must be retyped the next time a user sudos something. – krowe
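You can reproduce the effect on a scratch file:

touch -d @0 testfile
TZ=UTC ls -l testfile    # the modification time column shows Jan  1  1970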
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154387", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55629/" ] }
154,395
I cannot copy a file over scp when the remote machine .bashrc file includes source command to update paths and some variables. Why does it happen?
You should make part or all of your .bashrc not run when your shell is non-interactive. (An scp is an example of a non-interactive shell invocation, unless someone has radically altered your systems.) Then put all commands that can possibly generate output in that section of the file. A standard way to do this in an init file is:

# Put all the commands here that should run regardless of whether
# this is an interactive or non-interactive shell.

# Example command:
umask 0027

# test if the prompt var is not set and also to prevent failures
# when `$PS1` is unset and `set -u` is used
if [ -z "${PS1:-}" ]; then
    # prompt var is not set, so this is *not* an interactive shell
    return
fi

# If we reach this line of code, then the prompt var is set, so
# this is an interactive shell.

# Put all the commands here that should run only if this is an
# interactive shell.

# Example command:
echo "Welcome, ${USER}. This is the ~/.bashrc file."

You might also see people use [ -z "${PS1:-}" ] && return instead of my more verbose if statement. If you don't want to rearrange your whole file, you can also just make certain lines run only in interactive context by wrapping them like so:

if [ -n "${PS1:-}" ]; then
    echo "This line only runs in interactive mode."
fi

If you segregate your .bashrc this way, then your scp commands should no longer have this problem.
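A quick way to verify the fix (substitute your own host name): a non-interactive remote shell must produce no output at all, so

ssh yourhost /bin/true

should print nothing. Any stray output from that command is exactly what confuses scp's protocol.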
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/154395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68913/" ] }
154,398
I have 2 applications: Producer (N instances) Consumer (1 instance) I currently write out intermediate results from the producers, and then the consumer reads these files from disk and produces a final result. I would like to minimize this I/O by "streaming" the outputs from the producers directly to the consumer. I came across named pipes (mkfifo) and a subsequent example here . This looks great, but what I can't determine is how this is actually implemented? Is the FIFO queue just being buffered through a file? If so, that probably wouldn't help me. I would like for the contents to stream "through memory" entirely without utilizing the disk. Maybe this isn't possible across processes?
No disk i/o (except perhaps when navigating through the filesystem to open the fifo file.) From the Linux fifo(7) man page : A FIFO special file (a named pipe) is similar to a pipe, except that it is accessed as part of the filesystem. [...] When processes are exchanging data via the FIFO, the kernel passes all data internally without writing it to the filesystem. Thus, the FIFO special file has no contents on the filesystem; the filesystem entry merely serves as a reference point so that processes can access the pipe using a name in the filesystem.
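A toy demonstration of that kernel-only data path (the producer/consumer commands here are stand-ins for your own programs):

mkfifo /tmp/stream
./producer > /tmp/stream &    # the writer blocks until a reader opens the fifo
./consumer < /tmp/stream      # data flows through kernel buffers, not the disk
rm /tmp/stream                # the filesystem entry was only ever a name

With N producer instances you can either let them all write to the same fifo (writes of up to PIPE_BUF bytes are atomic) or give each its own fifo and merge them, e.g. with cat fifo1 fifo2 ... .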
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1574/" ] }
154,408
I have this script:

echo $HOME
cd $HOME
cd /
cd /usr/local/src/

When I run it like this:

. script.sh

I get this output:

/home/user
: No such file or directory
: No such file or directory
: No such file or directory

If I run it normally (I added #!/bin/bash before the first line):

sh script.sh

I get this output:

: not found.sh: 2: script.sh
/home/user
script.sh: 3: Can't cd to /home/user
script.sh: 4: Can't cd to /
script.sh: 5: Can't cd to /usr/local/src/

The script was run from the $HOME directory each time. If I run each command one by one from the shell, they execute without problems (invoking cd $HOME from within $HOME simply didn't change directory). What must I do to make this work?
You have CR (^M) characters in your script. Convert it to have Unix end-of-lines (only LF). In a portable way:

tr -d '\r' < your_script > output_script

Some explanations based on Olivier Dulac's comment about what happened with CR characters: First, in the shell language, the CR character is not regarded as a special character, e.g. not regarded as a space and not ignored. I write it as ^M below. In the echo $HOME^M line, the content of $HOME followed by ^M followed by a newline was output. Outputting the CR character put the cursor on the first column, but since it was immediately followed by a newline, this had no visible effect. In the cd $HOME^M line, since there is no space between $HOME and the CR character, they are both in the same argument $HOME^M , and this directory does not exist. In the error message, the CR character after $HOME was just output, putting the cursor on the first column, so that the beginning of the line was overwritten by the rest of the message if any: ": No such file or directory" with bash (your first example), nothing with dash (your second example sh script.sh , as #!/bin/bash was ignored since you explicitly asked to run the script with sh , which seems to be dash in your case). The error message completely depends on the shell. For instance, zsh detects that the CR character is not printable and outputs a message like: cd: no such file or directory: /usr/local/src/^M (with the characters "^" and "M", not the CR character), which allows one to detect the cause of the problem much more easily. Otherwise you need to redirect/pipe stderr to some utility that can show special characters such as cat -ve as suggested by Olivier, or to hd , which gives the byte sequence for the stream.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83413/" ] }
154,427
I just wrote a bash script and I always get this EOF error. Here is my script (it only works on OS X):

#!/bin/bash

#DEFINITIONS BEGIN
en_sq() {
    echo -e "Enabling smart quotes..."
    defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool true
    status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool)
    if [ "$status" = "1" ]
        then
        echo -e "Success! Smart quotes are now enabled."
        SUCCESS="TRUE"
    else
        echo -e "Sorry, an error occured. Try again."
    fi
}

di_sq() {
    echo -e "Disabling smart quotes..."
    defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool false
    status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool)
    if [ "$status" = "0" ]
        then
        echo -e "Success! Smart quotes are now disabled."
        SUCCESS="TRUE"
    else
        echo -e "Sorry, an error occured. Try again."
    fi
}

en_sd() {
    echo -e "Enabling smart dashes..."
    defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool true
    status=$(defaults read NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool)
    if [ "$status" = "1" ]
        then
        echo -e "Success! Smart dashes are now enabled."
        SUCCESS="TRUE"
    else
        echo -e "Sorry, an error occured. Try again."
    fi
}

di_sd() {
    echo -e "Enabling smart dashes..."
    defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool false
    status=$(defaults read NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool)
    if [ "$status" = "0" ]
        then
        echo -e "Success! Smart dashes are now disabled."
        SUCCESS="TRUE"
    else
        echo -e "Sorry, an error occured. Try again."
    fi
}
#DEFINITIONS END
#---------------

#BEGIN OF CODE with properties
#This is only terminated if the user entered properties (eg ./sqd.sh 1 1)
if [ "$1" = "1" ]
    then
    en_sq
elif [ "$1" = "0" ]
    then
    di_sq
fi

if [ "$2" = "1" ]
    then
    en_sd
    #exit 0 if both, $1 and $2 are correct entered and processed.
    exit 0
elif [ "$1" = "0" ]
    then
    di_sd
    #exit 0 if both, $1 and $2 are correct entered and processed.
    exit 0
fi
#END OF CODE with properties
#---------------------------

#BEGIN OF CODE without properties
#This is terminated if the user didn't enter two properties
echo -e "\n\n\n\n\nINFO: You can use this command as following: $0 x y, while x and y can be either 0 for false or 1 for true."
echo -e "x is for the smart quotes, y for the smart dashes."
sleep 1
echo -e " \n Reading preferences...\n"

status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool)
if [ "$status" = "1" ]
    then
    echo -e "Smart quotes are enabled."
elif [ "$status" = "0" ]
    then
    echo -e "Smart quotes are disabled."
else
    echo -e "Sorry, an error occured. You have to run this on OS X""
fi

status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool)
if [ "$status" = "1" ]
    then
    echo -e "Smart dashes are enabled."
elif [ "$status" = "0" ]
    then
    echo -e "Smart dashes are disabled."
else
    echo -e "Sorry, an error occured. You have to run this on OS X!"
fi

sleep 3
echo -e "\n\n You can now enable or disable smart quotes."

until [ "$SUCCESS" = "TRUE" ]
do
    echo -e "Enter e for enable or d for disable:"
    read sq
    if [ "$sq" = "e" ]
        then
        en_sq
    elif [ "$sq" = "d" ]
        then
        di_sq
    else
        echo -e "\n\n ERROR! Please enter e for enable or d for disable!"
    fi
done

SUCCESS="FALSE"
echo -e "\n\n You can now enable or disable smart dashes."

until [ "$SUCCESS" = "TRUE" ]
do
    echo -e "Enter e for enable or d for disable:"
    read sq
    if [ "$sd" = "e" ]
        then
        en_sd
    elif [ "$sd" = "d" ]
        then
        di_sd
    else
        echo -e "\n\n ERROR! Please enter e for enable or d for disable!"
    fi
done

And here is my error:

./coding.sh: line 144: unexpected EOF while looking for matching `"'
./coding.sh: line 147: syntax error: unexpected end of file
You can see your problem if you just look at your question. Note how the syntax highlighting is messed up after line 95:

echo -e "Sorry, an error occurred. You have to run this on OS X""

As the error message tells you, you have an unmatched " . Just remove the extra " from the line above and you should be fine:

echo -e "Sorry, an error occurred. You have to run this on OS X"
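For hunting this kind of error in the future, bash can check a script's syntax without executing anything:

bash -n coding.sh

which reports the same unexpected-EOF message and is safe to run even on scripts with side effects.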
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/154427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83451/" ] }
154,434
I've got a system that is not running ntpd on which I'm attempting to update the clock using ntpdate. The system is an appliance that appears to be CentOS 6 based. When I run ntpdate 0.pool.ntp.org , I get:

8 Sep 17:52:05 ntpdate[7445]: no server suitable for synchronization found

However, when I do ntpdate -d 0.pool.ntp.org , I get:

8 Sep 17:55:14 ntpdate[9499]: ntpdate [email protected] Fri Nov 18 13:21:21 UTC 2011 (1)
Looking for host 0.pool.ntp.org and service ntp
host found : 4.53.160.75
transmit(4.53.160.75)
receive(4.53.160.75)
transmit(4.53.160.75)
receive(4.53.160.75)
transmit(4.53.160.75)
receive(4.53.160.75)
transmit(4.53.160.75)
receive(4.53.160.75)
transmit(4.53.160.75)
transmit(64.16.214.60)
receive(64.16.214.60)
transmit(64.16.214.60)
receive(64.16.214.60)
transmit(64.16.214.60)
receive(64.16.214.60)
transmit(64.16.214.60)
receive(64.16.214.60)
transmit(64.16.214.60)
transmit(54.236.224.171)
receive(54.236.224.171)
transmit(54.236.224.171)
receive(54.236.224.171)
transmit(54.236.224.171)
receive(54.236.224.171)
transmit(54.236.224.171)
receive(54.236.224.171)
transmit(54.236.224.171)
transmit(50.22.155.163)
receive(50.22.155.163)
transmit(50.22.155.163)
receive(50.22.155.163)
transmit(50.22.155.163)
receive(50.22.155.163)
transmit(50.22.155.163)
receive(50.22.155.163)
transmit(50.22.155.163)

server 4.53.160.75, port 123
stratum 2, precision -23, leap 00, trust 000
refid [4.53.160.75], delay 0.03160, dispersion 0.00005
transmitted 4, in filter 4
reference time:      d7b867d0.f9841075  Mon, Sep  8 2014 17:37:20.974
originate timestamp: d7b86c03.b6a49dae  Mon, Sep  8 2014 17:55:15.713
transmit timestamp:  d7b86c02.7e12a51e  Mon, Sep  8 2014 17:55:14.492
filter delay:  0.03189  0.03188  0.03172  0.03160  0.00000  0.00000  0.00000  0.00000
filter offset: 1.218061 1.217856 1.218023 1.217968 0.000000 0.000000 0.000000 0.000000
delay 0.03160, dispersion 0.00005
offset 1.217968

server 64.16.214.60, port 123
stratum 2, precision -23, leap 00, trust 000
refid [64.16.214.60], delay 0.04886, dispersion 0.00006
transmitted 4, in filter 4
reference time:      d7b86425.55948a73  Mon, Sep  8 2014 17:21:41.334
originate timestamp: d7b86c03.f7d91219  Mon, Sep  8 2014 17:55:15.968
transmit timestamp:  d7b86c02.bed42c3c  Mon, Sep  8 2014 17:55:14.745
filter delay:  0.04919  0.04892  0.04912  0.04886  0.00000  0.00000  0.00000  0.00000
filter offset: 1.210967 1.210879 1.210967 1.210836 0.000000 0.000000 0.000000 0.000000
delay 0.04886, dispersion 0.00006
offset 1.210836

server 54.236.224.171, port 123
stratum 3, precision -20, leap 00, trust 000
refid [54.236.224.171], delay 0.04878, dispersion 0.00011
transmitted 4, in filter 4
reference time:      d7b864eb.b06fee7d  Mon, Sep  8 2014 17:24:59.689
originate timestamp: d7b86c04.2b9d2547  Mon, Sep  8 2014 17:55:16.170
transmit timestamp:  d7b86c02.f1e80bed  Mon, Sep  8 2014 17:55:14.944
filter delay:  0.04977  0.04950  0.04878  0.04887  0.00000  0.00000  0.00000  0.00000
filter offset: 1.214091 1.214069 1.213755 1.213750 0.000000 0.000000 0.000000 0.000000
delay 0.04878, dispersion 0.00011
offset 1.213755

server 50.22.155.163, port 123
stratum 2, precision -23, leap 00, trust 000
refid [50.22.155.163], delay 0.07384, dispersion 0.00005
transmitted 4, in filter 4
reference time:      d7b869c9.2b3f3d0b  Mon, Sep  8 2014 17:45:45.168
originate timestamp: d7b86c04.75472e97  Mon, Sep  8 2014 17:55:16.458
transmit timestamp:  d7b86c03.384a83b1  Mon, Sep  8 2014 17:55:15.219
filter delay:  0.07408  0.07414  0.07384  0.07387  0.00000  0.00000  0.00000  0.00000
filter offset: 1.214115 1.214122 1.214012 1.214069 0.000000 0.000000 0.000000 0.000000
delay 0.07384, dispersion 0.00005
offset 1.214012

8 Sep 17:55:15 ntpdate[9499]: step time server 4.53.160.75 offset 1.217968 sec

Based on my previous searches, this result (with the various receive() lines and the offset) suggests that it is communicating with the remote NTP servers correctly (not blocked by a firewall). So, why does it not update my clock when I run it?
Try running it as:

ntpdate -u 0.pool.ntp.org

The -u configures ntpdate to use an unprivileged port, which it always does when you use the -d option. Therefore, if it works with -u and -d but not without either, I'd double check your firewalls. From the man page:

-u      Direct ntpdate to use an unprivileged port for outgoing packets. This is most useful when behind a firewall that blocks incoming traffic to privileged ports, and you want to synchronize with hosts beyond the firewall. Note that the -d option always uses unprivileged ports.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83455/" ] }
154,440
I am currently reading the "Malware Analyst's Cookbook and DVD". There is a chapter "Dynamic Analysis" with some recipes about hooking and monitoring API calls of processes, but it is for Windows. I want to do the same thing as recipe 9-10 explains, but for Linux. Recipe 9-10 is called "Capturing Process, Thread, and Image Load Events". It shows "how to implement a driver that alerts you when any events occur on the system while your malware sample executes". It uses API functions of the Windows Driver Kit (WDK) to call a user-defined callback function. It uses the callback functions:

Process creation callback function, registered with PsSetCreateProcessNotifyRoutine(...)
Thread creation callback function, registered with PsSetCreateThreadNotifyRoutine(...)
Image load callback function, registered with PsSetLoadImageNotifyRoutine(...)

When any events occur, it displays them as a debug message, which can then be viewed in e.g. DebugView. This seems well documented for Windows and it is easy to find information on it, but I have a bit of a problem finding information for Linux. I've found some general introductions to drivers and one on hooking, but I still haven't found any that are not so general, or are at least a bit more focused on malware analysis. I would be happy for tips on further reading or recommended tutorials on this topic.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83276/" ] }
154,459
I have a service which I am calling from another application. Below is the service URL I am calling:

http://www.betaservice.domain.host.com/web/hasChanged?ver=0

I need to do some load testing on this service URL in a multithreaded way instead of calling it sequentially, one request at a time. Is there any way, from a bash shell script, to put load on the service URL by calling it in a multithreaded way? Can I have 60-70 threads calling the above URL in parallel, as fast as possible?
I wouldn't call it multithreading as such, but you could simply launch 70 jobs in the background:

for i in {1..70}; do
    wget http://www.betaservice.domain.host.com/web/hasChanged?ver=0 2>/dev/null &
done

That will result in 70 wget processes running at once. You can also do something more sophisticated like this little script:

#!/usr/bin/env bash

## The time (in minutes) the script will run for. Change 10
## to whatever you want.
end=$(date -d "10 minutes" +%s);

## Run until the desired time has passed.
while [ $(date +%s) -lt "$end" ]; do
    ## Launch a new wget process if there are
    ## less than 70 running. This assumes there
    ## are no other active wget processes.
    if [ $(pgrep -c wget) -lt 70 ]; then
        wget http://www.betaservice.domain.host.com/web/hasChanged?ver=0 2>/dev/null &
    fi
done
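If GNU xargs is available, its -P option expresses the same idea more compactly; here 700 requests are spread over 70 parallel workers (the numbers are arbitrary):

seq 700 | xargs -P 70 -I{} wget -q -O /dev/null 'http://www.betaservice.domain.host.com/web/hasChanged?ver=0'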
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64455/" ] }
154,473
I normally use a debugger such as cgdb , so thankfully I hadn't needed to resort to system calls for debugging, until now. I have a binary that takes around 30 minutes to load in gdb and frequently crashes the debugger thereafter, probably due to the very large number of symbols. This makes my normal workflow (loading gdb , adding breakpoints, identifying problem variables, etc.) untenable. In Windows I would at this point begin adding lots of cout and system("pause") (poor man's breakpoint) statements throughout my code; but how do I do this in a unix-like environment?
You may want to read from cin to get a poor man's pause - it will wait for you to type an Enter , rather than resuming while you're getting coffee (as sleep() will).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
154,478
I have Debian. I installed it from netinst to save disk space and network transfer (small partition, UMTS connection). I decided to install GNOME. When I typed:

apt-get install gnome

it started downloading big things like LibreOffice, GIMP... I really don't like that. Why do the Debian developers assume I need LibreOffice or GIMP when I'm installing GNOME? Or did I miss something? How can I install GNOME with only the REALLY required packages?
When you install gnome package, you're installing a "Desktop environment" which includes Libre Office and some others things like Gimp, Rhythmbox, Oregano, etc. If you want to install a "clean" gnome, use the gnome-core package. Here you can see what each package includes: https://packages.debian.org/stable/gnome-core https://packages.debian.org/stable/gnome
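So in this case:

apt-get install gnome-core

If you want to see beforehand exactly what a metapackage would pull in, apt-cache can list its dependencies:

apt-cache depends gnome-core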
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16148/" ] }
154,485
In a shell script... How do I capture stdin to a variable without stripping any trailing newlines? Right now I have tried:

var=`cat`
var=`tee`
var=$(tee)

In all cases $var will not have the trailing newline of the input stream. Thanks. ALSO: If there is no trailing newline in the input, then the solution must not add one . UPDATE IN LIGHT OF THE ACCEPTED ANSWER: The final solution that I used in my code is as follows:

function filter() {
    #do lots of sed operations
    #see https://github.com/gistya/expandr for full code
}

GIT_INPUT=`cat; echo x`
FILTERED_OUTPUT=$(printf '%s' "$GIT_INPUT" | filter)
FILTERED_OUTPUT=${FILTERED_OUTPUT%x}
printf '%s' "$FILTERED_OUTPUT"

If you would like to see the full code, please see the github page for expandr , a little open-source git keyword-expansion filter shell script that I developed for information security purposes. According to rules set up in .gitattributes files (which can be branch-specific) and git config , git pipes each file through the expandr.sh shell script whenever checking it in or out of the repository. (That is why it was critical to preserve any trailing newlines, or lack thereof.) This lets you cleanse sensitive information, and swap in different sets of environment-specific values for test, staging, and live branches.
The trailing newlines are stripped before the value is stored in the variable. You may want to do something like:

var=`cat; echo x`

and use ${var%x} instead of $var . For instance:

printf "%s" "${var%x}"

Note that this solves the trailing newlines issue, but not the null byte one (if standard input is not text), since according to POSIX command substitution :

If the output contains any null bytes, the behavior is unspecified.

But shell implementations may preserve null bytes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83488/" ] }
154,491
If I have a string with nonprintable characters, new lines, or tabs, is there a way I can use echo to print this string and show codes for these characters (e.g., \n for new line, \b for backspace)?
When you remove the -e switch to echo in bash , and provided the xpg_echo option has not been enabled, it should print the strings as nothing more than their original text. So \n and \t would show up literally. Example:

$ echo "hi\t\tbye\n\n"
hi\t\tbye\n\n
$ echo -e "hi\t\tbye\n\n"
hi              bye

You can also use the printf built-in command of ksh93 , zsh or bash to finagle this as well:

$ printf "%q\n" "$(echo -e "hi\t\tbye\n\nbye")"
$'hi\t\tbye\n\nbye'

( bash output shown above; there are some variations depending on the shell). Excerpt from help printf in bash :

%q     quote the argument in a way that can be reused as shell input
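If you just want to inspect which nonprintable bytes a string contains, od spells each one out; this works in any shell:

$ printf 'hi\tbye\n' | od -c
0000000   h   i  \t   b   y   e  \n
0000007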
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34796/" ] }
154,525
I have a folder A which has files and directories, I want to move all those files and directories to another folder B , except file , file2 , directory , and directory2 . How can this be done?
With zsh :

setopt extendedglob # best in ~/.zshrc
mv A/^(file|directory)(|2)(D) B/

(the (D) to include dot (hidden) files). With bash :

shopt -s extglob dotglob failglob
mv A/!(@(file|directory)?(2)) B/

With ksh93 :

(FIGNORE='@(.|..|@(file|directory)?(2))'; mv A/* B)
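Whichever shell you use, it's worth previewing the expansion before moving anything; for example, in bash (after the shopt line above):

echo A/!(@(file|directory)?(2))

This prints the exact list of names the glob matches, so you can confirm that file , file2 , directory and directory2 are excluded before running the real mv .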
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
154,546
I need the id of the window which is active or focused. I try to use the xdotool command. There is a command:

xdotool getactivewindow

The result is saved to the window stack, and I want to get the window id from that window stack. The command xdotool getactivewindow getwindowpid does not satisfy me; I do not want to get the window id by process id.
I think xdotool getactivewindow is what you want - did you try it? It prints the window id (from the window stack) if there are no further xdotool subcommands on the command line. In xdotool getactivewindow getwindowpid for example, getactivewindow puts the id on the window stack, and getwindowpid uses this id to query the PID. Note that running that command in the terminal will always return the ID of the terminal window, because it is active. In order to get the ID from another window try

sleep 2s && xdotool getactivewindow

and select the window of interest in the two-second time span. There is a complication when using xdotool with other tools for window handling: while xdotool output uses decimal numbers for window ids, most other tools use hexadecimal numbers for output (they usually support both for input). For example, if you find a window with xdotool getactivewindow , you will not find the result in the output of xwininfo -root -tree , which lists all windows. It needs to be converted to a hexadecimal number first:

$ xdotool getactivewindow
69206716
$ printf 0x%x 69206716
0x42002bc
$ xwininfo -root -tree | grep 0x42002bc
        0x42002bc (has no name): ("konsole" "Konsole")  1154x781+0+0  +1289+498

Converting decimal to hexadecimal: printf 0x%x 69206716
Converting hexadecimal to decimal: printf %i 0x42002bc
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54141/" ] }
154,579
I use the command below to empty a file:

> file.txt

It works perfectly fine! But there are some files with root user permissions, so I tried:

sudo > different-file.txt

and I got this verbose usage message:

usage: sudo [-D level] -h | -K | -k | -V
usage: sudo -v [-AknS] [-D level] [-g groupname|#gid] [-p prompt] [-u user name|#uid]
usage: sudo -l[l] [-AknS] [-D level] [-g groupname|#gid] [-p prompt] [-U user name] [-u user name|#uid] [-g groupname|#gid] [command]
usage: sudo [-AbEHknPS] [-C fd] [-D level] [-g groupname|#gid] [-p prompt] [-u user name|#uid] [-g groupname|#gid] [VAR=value] [-i|-s] [<command>]
usage: sudo -e [-AknS] [-C fd] [-D level] [-g groupname|#gid] [-p prompt] [-u user name|#uid] file ...

This suggests that there is some syntax error in my command. What is wrong?
To empty a file as root, you can also use the truncate command: $ sudo truncate -s0 file.txt The -s0 sets the file's size to 0, effectively emptying it.
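The underlying reason for the usage message: the redirection > different-file.txt is performed by your (unprivileged) shell before sudo ever runs, so sudo is invoked with no command and just prints its usage. Two common ways to keep the redirection on the privileged side:

sudo sh -c '> different-file.txt'     # let a root shell perform the redirection
: | sudo tee different-file.txt       # or write an empty stream through tee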
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/154579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24449/" ] }
154,599
It often happens that I want to apply an operation recursively. Some commands, such as grep, use a lowercase r to indicate recursion. For example grep -r foo . Other commands seem to prefer an uppercase R: chmod -R 755 . I am constantly getting these the wrong way around and forgetting which is which. Is there any logic behind the selection of case for these arguments?
Most POSIX commands that have a recursive directory traversal option ( ls , chgrp , chmod , cp , rm ) have -R for that. rm also has -r because that's what it was initially, long before POSIX. Now, the behaviour varies when symlinks are found while walking down the tree. POSIX tried to make things consistent by adding the -L / -H / -P options to give the user a chance to decide what to do with symlinks, leaving the default when none is provided unspecified. POSIX grep has no -r or -R . GNU grep initially had neither. -r was added in 1998; it followed symlinks. -R was added as a synonym in 2001 for consistency with the other utilities; it still followed symlinks. In 2012 (grep 2.12), -r was changed so that it no longer follows symlinks, possibly because -L and -H were already used for something else. The BSDs' grep was based on GNU grep for a long time. Some of them have rewritten their own and kept more or less compatibility with GNU grep . Apple OS X addressed the symlink issue differently: -r and -R are the same and don't follow symlinks, but there's a -S option that acts like chmod / cp / find 's -L option to follow symlinks.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/154599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83550/" ] }
154,605
How can I open a text file and watch it update itself, similar to the way top works? I want to open a log file and watch it update on the fly. I have just tried:

$ tail error.log

but realised that it just shows the last lines of the log file. I am using RHEL 5.10.
You're looking for tail -f error.log (from man tail ): -f, --follow[={name|descriptor}] output appended data as the file grows; -f, --follow, and --fol‐ low=descriptor are equivalent That will let you watch a file and see any changes made to it.
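A related GNU tail option worth knowing for log files:

tail -F error.log

-F is like -f but follows the file by name, so it keeps working after the log is rotated or recreated.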
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/154605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20752/" ] }
154,692
I am trying to run Google AppEngine on my Debian machine, so I created a file init.d/gae :

. /lib/lsb/init-functions

## Initialize variables
#
name=gae
user=$name
pid=/var/run/$name.pid
prog="python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www"

case "${1}" in
    start)
        echo "Starting...Google App Engine"
        start-stop-daemon --start --make-pidfile --background --oknodo --user "$user" --name "$name" --pidfile "$pid" --startas "$prog"
        ;;
    stop)
        echo "Stopping...Google App Engine"
        ;;
    restart)
        ${0} stop
        sleep 1
        ${0} start
        ;;
    *)
        echo "Usage: ${0} {start|stop|restart}"
        exit 1
        ;;
esac

exit 0
# End scriptname

I am testing the script by invoking it manually; the script runs, but not as a daemon, or at least it doesn't detach from the terminal. I am expecting/looking for functionality similar to Apache. What switch am I missing?

EDIT: I should note that no PID file is being written or created despite the switch indicating it should be.
You have two problems I can see:

prog=python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www

will start /opt/google_appengine/dev_appserver.py with prog=python in the environment. This is before your start block, so start-stop-daemon isn't even getting involved. The quick fix is to quote the entire assignment like this:

prog='python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www'

But a better fix is to use the style from /etc/init.d/skeleton , and do

DAEMON='python /opt/google_appengine/dev_appserver.py'
DAEMON_ARGS='--host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www'

The second problem is that you're wrongly quoting $prog :

start-stop-daemon --start --make-pidfile --background --oknodo --user "$user" --name "$name" --pidfile "$pid" --startas "$prog"

tells start-stop-daemon to try to start a program called python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www . But clearly there is no program with that name; you want to start python with arguments. Removing the double quotes there is the quick fix, but a better one, again following /etc/init.d/skeleton , would be:

start-stop-daemon --start --quiet --chuid $CHUID --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83610/" ] }
154,745
Some word processing programs have a menu for entering special characters, including non-breaking spaces. It is also possible to copy the character created in the word processing program and paste it into other places, such as a terminal or a website text field. However, it is inconvenient to need to use a word processing program just to generate a non-breaking space in the first place. How can I use my keyboard directly to enter a non-breaking space?
Once upon a time I told my debian fairy that I want compose instead of caps lock, and typing compose space space now gives me the super solid unbreakable space: compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! compose space space ! For debianish systems have a look into /etc/default/keyboard ; I have the following assignment there: XKBOPTIONS="compose:caps" . Alternatively, if you're using KDE, the "advanced" tab of the kcmshell4 kcm_keyboard command lets you configure which key to map to compose. This setting affects the text terminals too... at least in debian...
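The per-session equivalent, if you don't want to edit system files:

setxkbmap -option compose:caps

After that, Compose Space Space inserts U+00A0 NO-BREAK SPACE in X applications.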
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154745", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/653/" ] }
154,747
I have 3 directories at the current path.

$ ls
a_0db_data  a_clean_0db_data  a_clean_data
$ ls a_*_data
a_0db_data:
a_clean_0db_data:
a_clean_data:
$ ls a_[a-z]*_data
a_clean_0db_data:
a_clean_data:

I expected the last ls command to match only a_clean_data . Why did it also match the one containing 0 ?

bash --version
GNU bash, version 4.2.24(1)-release (i686-pc-linux-gnu)
The [a-z] part isn't what matches the number; it's the * . You may be confusing shell globbing and regular expressions . Tools like grep accept various flavours of regexes ( basic by default, -E for extended, -P for Perl regex ). E.g. ( -v inverts the match):

$ ls a_[a-z]*_data | grep -v "[0-9]"
a_clean_data

If you want to use a bash regex, here is an example of how to test whether the variable $ref is an integer:

re='^[0-9]+$'
if ! [[ $ref =~ $re ]] ; then
    echo "error"
fi
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/154747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
154,777
Some instances of bash change the command history when you re-use and edit a previous command, others apparently don't. I've been searching and searching but can't find anything that says how to prevent commands in the history from being modified when they're reused and edited. There are questions like this one , but that seems to say how to cope with the history being edited. I've only recently come across an instance of bash that does edit the history when you reuse a command - all previous bash shells I've used have (as far as I've noticed) been configured not to change the history when you reuse and edit a command. (Perhaps I've just not been paying proper attention to my shell history for the past 15 years or so...) So that's probably the best question: CAN I tell bash NEVER to modify the history - and if so, how?
Turns out revert-all-at-newline is the answer. I needed to include

    set revert-all-at-newline on

in my ~/.inputrc file, since using the set command at the bash prompt had no effect (the shell's set builtin is unrelated to readline's set directive). Then, of course, I had to start a new shell. Also, I found that ~/.inputrc is loaded instead of /etc/inputrc if present, which means that any defaults defined in the latter are no longer active when you create ~/.inputrc. To fix this, start ~/.inputrc with $include /etc/inputrc. Thanks to @StéphaneChazelas for pointing me in the right direction.
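For reference, the resulting ~/.inputrc can be as small as this (a sketch; the $include line only matters if your system actually ships an /etc/inputrc whose defaults you want to keep):

    $include /etc/inputrc
    set revert-all-at-newline on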
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83666/" ] }
154,785
For benchmarking, I ran the command:

    for i in {1..100000000}; do
        echo "$i" line >> file
    done

Bash expanded the braces and stored the list 1 2 3 4 5 6 ... 100000000 in memory. I assumed this would somehow be deallocated at some point. After all, it's a temporary variable. A couple of days have since passed and the bash process is still occupying 17.9GB of memory. Can I force bash to clear these temporary variables? I cannot use unset, because I don't know the variable name. (Obviously unset i doesn't help.) Of course, a solution is to close the shell and open a new one. I also asked about this on the bash mailing list and got a helpful reply from Chet Ramey: That's not a memory leak. Malloc implementations need not release memory back to the kernel; the bash malloc (and others) do so only under limited circumstances. Memory obtained from the kernel using mmap or sbrk and kept in a cache by a malloc implementation doesn't constitute a leak. A leak is memory for which there is no longer a handle, by the application or by malloc itself. malloc() is basically a cache between the application and the kernel. It gets to decide when and how to give memory back to the kernel.
So, I did this thing in testing and, yeah, it consumes a lot of memory. I pointedly used a smaller number as well. I can imagine that bash hogging those resources for days on end could be a little irritating.

    ps -Fp "$$"; : {1..10000000}; ps -Fp "$$"
    UID        PID  PPID  C     SZ     RSS PSR STIME TTY          TIME CMD
    mikeserv 32601  4241  0   3957    3756   4 08:28 pts/1    00:00:00 bash -l
    UID        PID  PPID  C     SZ     RSS PSR STIME TTY          TIME CMD
    mikeserv 32601  4241 59 472722 1878712   4 08:28 pts/1    00:00:28 bash -l

As you can see, there is a significant impact on the process's consumed resources. Well, I'll try to clear that, but - as near as I can tell - it will require replacing the shell process with another. First, I'll set a marker variable just to show it comes with me. Note: this is not exported.

    var='justtesting'\''this stuff'\'''

Next I'll exec $0. This is the same kind of thing that long-running daemons must do occasionally to refresh their state. It makes sense here.

FIRST METHOD: HERE-DOCUMENT

I'll use the current shell to build a heredoc input file-descriptor for the newly exec'd shell process that will contain all of the current shell's declared variables. Probably it can be done differently but I don't know all of the proper command-line switches for bash. The new shell is invoked with the -login switch - which will see to it that your profile/rc files are sourced per usual - and whatever other shell options are currently set and stored in the special shell parameter $-. If you feel -login is not a correct way to go then using the -i switch instead should at least get the rc file run.

    exec "${0#-}" "-l$-" 3<<ENV
    $(set)
    ENV

Ok. That only took a second. How did it work?

    . /dev/fd/3 2>/dev/null
    echo "$var"; ps -Fp "$$"
    justtesting'this stuff'
    UID        PID  PPID  C   SZ  RSS PSR STIME TTY          TIME CMD
    mikeserv 32601  4241 12 4054 3800   5 08:28 pts/1    00:00:29 bash -lhimBH

Just fine, as it would seem. <&3 hangs on the new shell process's input until it is read - so I do so with . and source it. It will likely contain some default read-only variables which have already been set in the new shell by its rc files and so on, and so there will be a few errors - but I dump those to 2>/dev/null. After I do that, as you can see, I have all of the old shell process's variables here with me - including my marker $var.

SECOND METHOD: ENVIRONMENT VARIABLES

After doing a google or two on the matter, I think this may be another way worth considering. I initially considered this, but (apparently erroneously) discounted this option based on the belief that there was a kernel-enforced arbitrary length-limit to a single environment variable's value - something like ARG_MAX or LINE_MAX (which likely will affect this) but smaller for a single value. What I was correct about, though, is that an execve call will not work when the total environment is too large. And so, I believe this should be preferred only in the case you can guarantee that your current environment is small enough to allow for an exec call. In fact, this is different enough that I'll do it all again in one go.

    ps -pF "$$"; : {1..10000000}; ps -pF "$$"
    UID        PID  PPID  C     SZ     RSS PSR STIME TTY          TIME CMD
    mikeserv 26296  4241  0   3957    3788   3 14:28 pts/1    00:00:00 bash -l
    UID        PID  PPID  C     SZ     RSS PSR STIME TTY          TIME CMD
    mikeserv 26296  4241 38 472722 1878740   3 14:28 pts/1    00:00:11 bash -l

One thing I failed to do on the first go-round is migrate shell functions. Not counting keeping track of them yourself (which is probably the best way), to the best of my knowledge there is no shell-portable way to do this.
bash does allow for it, though, as declare -f works for functions in much the same way that set does for shell variables. To do this as well with the first method you need only append ; declare -f to set in the here-document. My marker variable will remain the same, but here's my marker function:

    chk () {
        printf '###%s:###\n%s\n' \
            \$VAR "${var-NOT SET}" \
            PSINFO "$(ps -Fp $$)" \
            ENV\ LEN "$(env | wc -c)"
    }

And so rather than feeding the new shell a file-descriptor, I will instead hand it two environment variables (the prefix assignments become part of the exec'd shell's environment):

    varstate=$(set) fnstate=$(declare -f) exec "${0#-}" "-l$-"

Ok. So I've just replaced the running shell, so now what?

    chk
    bash: chk: command not found

Of course. But...

    {
        echo '###EVAL/UNSET $FNSTATE###'
        eval "$fnstate"; unset fnstate
        chk
        echo '###EVAL/UNSET $VARSTATE###'
        eval "$varstate"; unset varstate
        chk
    }

OUTPUT

    ###EVAL/UNSET $FNSTATE###
    ###$VAR:###
    NOT SET
    ###PSINFO:###
    UID        PID  PPID  C   SZ  RSS PSR STIME TTY          TIME CMD
    mikeserv 26296  4241 10 3991 3736   1 14:28 pts/1    00:00:12 bash -lhimBH
    ###ENV LEN:###
    6813
    ###EVAL/UNSET $VARSTATE###
    bash: BASHOPTS: readonly variable
    bash: BASH_VERSINFO: readonly variable
    bash: EUID: readonly variable
    bash: PPID: readonly variable
    bash: SHELLOPTS: readonly variable
    bash: UID: readonly variable
    ###$VAR:###
    justtesting'this stuff'
    ###PSINFO:###
    UID        PID  PPID  C   SZ  RSS PSR STIME TTY          TIME CMD
    mikeserv 26296  4241 10 4056 3772   1 14:28 pts/1    00:00:12 bash -lhimBH
    ###ENV LEN:###
    2839
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6958/" ] }
154,818
I am searching for files whose names contain AAA, using the following command:

    find path_A -name "*AAA*"

Given the output shown by the above command, I want to move those files into another path, say path_B. Instead of moving those files one by one, can I optimize the command by moving those files right after the find command?
With GNU mv:

    find path_A -name '*AAA*' -exec mv -t path_B {} +

That will use find's -exec option, which replaces the {} with each find result in turn and runs the command you give it. As explained in man find:

    -exec command ;
        Execute command; true if 0 status is returned. All following
        arguments to find are taken to be arguments to the command until
        an argument consisting of `;' is encountered.

In this case, we are using the + version of -exec so that we run as few mv operations as possible:

    -exec command {} +
        This variant of the -exec action runs the specified command on
        the selected files, but the command line is built by appending
        each selected file name at the end; the total number of
        invocations of the command will be much less than the number of
        matched files. The command line is built in much the same way
        that xargs builds its command lines. Only one instance of `{}'
        is allowed within the command. The command is executed in the
        starting directory.
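The -t option is GNU-specific. If your mv lacks it (e.g. on BSD or busybox), a portable variant - a sketch under that assumption - uses a small shell wrapper so the destination can still come last:

    # Portable sketch: sh receives each batch of file names as "$@"
    # and mv puts path_B in the final (destination) position.
    find path_A -name '*AAA*' -exec sh -c 'mv "$@" path_B' sh {} +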
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/154818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27321/" ] }
154,832
Is there a way to flag some targets in a Makefile so that they don't appear at all in the shell when the user types make and then presses Tab? I have a Makefile with too many recipes and I don't want them all to appear in the shell autocompletion when a user presses Tab looking for targets.
After studying the problem and coming up with a solution that I wrote in the first answer, I realized that there is another very simple way. Just make the names of the targets you don't want to show begin with a dot. Example (recipe lines are indented with a literal tab):

    .PHONY: foo .hiddentarget

    foo: foo.x .hiddentarget

    foo.x: foo.c
    	touch foo.x

    .hiddentarget: foo.x
    	echo foo.x ok, I believe

And that's all you need! I came to this idea after trying to figure out how to prevent a target named .HIDDENGOALS from being called by mistake and running all the hidden goals. :)

Warning

Although I haven't seen any, I'm not sure whether there are side effects to using target names like ".targetname" (starting with a dot).
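With a Makefile like the one above, an interactive session should look something like this sketch (the exact behaviour depends on the bash-completion package providing make's target completion, and it assumes foo.c exists):

    $ make <Tab>          # completion offers only: foo
    $ make .hiddentarget  # the hidden target still runs when typed in full
    touch foo.x
    echo foo.x ok, I believe
    foo.x ok, I believe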
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/154832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24044/" ] }
154,861
Why does the following command not produce any output?

    $ tail -f /etc/passwd | tail

After reading about buffering, I tried the following to no avail:

    $ tail -f /etc/passwd | stdbuf -oL tail

Note that the following does produce output:

    $ tail /etc/passwd | tail

So does this:

    $ tail -f /etc/passwd | head

I am using tail version 8.21 (GNU coreutils).
The tail of tail -f's output is something unknown in the present, so how should the next tail know where it is? On the other hand, the head of tail -f's output is something already known and can therefore be processed. Or to put it more simply: tail is relative to the end of its input, but the output stream of tail -f has no EOF (at least not before its termination). If you find the first tail's pid and kill it, you should then see the output from the second.
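That last point is easy to try out. A sketch (the PID shown is illustrative, and pgrep's -f full-command matching is an assumption about your procps tools):

    $ tail -f /etc/passwd | tail &
    $ pgrep -f 'tail -f /etc/passwd'
    12345
    $ kill 12345
    # the pipe now delivers EOF, and the second tail prints its last 10 lines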
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83717/" ] }
154,903
The following bash commands go into an infinite loop:

    $ echo hi > x
    $ cat x >> x

I can guess that cat continues to read from x after it has started writing to stdout. What is confusing, however, is that my own test implementation of cat exhibits different behavior:

    // mycat.c
    #include <stdio.h>

    int main(int argc, char **argv) {
        FILE *f = fopen(argv[1], "rb");
        char buf[4096];
        int num_read;
        while ((num_read = fread(buf, 1, 4096, f))) {
            fwrite(buf, 1, num_read, stdout);
            fflush(stdout);
        }
        return 0;
    }

If I run:

    $ make mycat
    $ echo hi > x
    $ ./mycat x >> x

it does not loop. Given the behavior of cat and the fact that I'm flushing to stdout before fread is called again, I would expect this C code to continue reading and writing in a cycle. How are these two behaviors consistent? What mechanism explains why cat loops while the above code does not?
On an older RHEL system I've got, /bin/cat does not loop for cat x >> x. cat gives the error message "cat: x: input file is output file". I can fool /bin/cat by doing this: cat < x >> x. When I try your code above, I get the "looping" you describe. I also wrote a system-call based "cat":

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(int ac, char **av)
    {
        char buf[4096];
        int fd, cc;

        fd = open(av[1], O_RDONLY);
        while ((cc = read(fd, buf, sizeof(buf))) > 0)
            write(1, buf, cc);
        close(fd);
        return 0;
    }

This loops, too. The only buffering here (unlike for the stdio-based "mycat") is what goes on in the kernel. I think what's happening is that file descriptor 3 (the result of open(av[1])) has an offset into the file of 0. File descriptor 1 (stdout) has an offset of 3, because the ">>" causes the invoking shell to do an lseek() on the file descriptor before handing it off to the cat child process. Doing a read() of any sort, whether into a stdio buffer or a plain char buf[], advances the position of file descriptor 3. Doing a write() advances the position of file descriptor 1. Those two offsets are different numbers. Because of the ">>", file descriptor 1 always has an offset greater than or equal to the offset of file descriptor 3. So any "cat-like" program will loop, unless it does some internal buffering. It's possible, maybe even likely, that a stdio implementation of a FILE * (which is the type of the symbols stdout and f in your code) includes its own buffer. fread() may actually do a system call read() to fill the internal buffer of f. This may or may not change anything in the insides of stdout. Calling fwrite() on stdout may or may not change anything inside of f. So a stdio-based "cat" might not loop. Or it might. Hard to say without reading through a lot of ugly, ugly libc code. I did an strace on the RHEL cat - it just does a succession of read() and write() system calls. But a cat doesn't have to work this way. It would be possible to mmap() the input file, then do write(1, mapped_address, input_file_size). The kernel would do all the work. Or you could do a sendfile() system call between the input and output file descriptors on Linux systems. Old SunOS 4.x systems were rumored to do the memory mapping trick, but I don't know if anyone has ever done a sendfile-based cat. In either case the "looping" wouldn't happen, as both write() and sendfile() require a length-to-transfer parameter.
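The "internal buffering" escape hatch is easy to demonstrate from the shell. A sketch: slurp the whole file before writing anything, so the write position can never chase the read position:

    $ echo hi > x
    $ data=$(cat x)               # read the entire file first...
    $ printf '%s\n' "$data" >> x  # ...then append: no loop possible
    $ cat x
    hi
    hi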
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/154903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64170/" ] }