269,170
I have 10 unix servers. I want to log into them one by one, execute 4-5 lines of code, save the output, and exit. For example, with 10 servers, initially at the xyz server:
Login to server 1 --> execute 4-5 lines --> send output to xyz server --> exit
Login to server 2 --> execute 4-5 lines --> send output to xyz server --> exit
...
Login to server 10 --> execute 4-5 lines --> send output to xyz server --> exit
Finally I am back on the xyz server with the output files. Let's say I want to execute some time commands, e.g. setting the time back by one hour, taking the new time as output, and saving the new time in some file on the xyz server, in this format:
Server Name    New Time
===========    =========
Server1        Date and Time
Server2        Date and Time
Debian ships locales in source form. They need to be compiled explicitly. The reason for this is that compiled locales use a lot more disk space, but most people only use a few of them. Run dpkg-reconfigure locales as root, select the locales you want in the list (with your settings, you need en_GB and en_US.UTF-8 — I recommend selecting en_US and en_GB.UTF-8 as well) then press <OK> . Alternatively, edit /etc/locale.gen , uncomment the lines for the locales you want, and run locale-gen as root. (Note: on Ubuntu, this works differently: run locale-gen with the locales you want to generate as arguments, e.g. sudo locale-gen en_GB en_US en_GB.UTF-8 en_US.UTF-8 .) Alternatively, Debian now has a package locales-all which you can install instead of locales . It has all the locales pre-generated. The downside is that they use up more disk space (112MB vs 16MB).
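For reference, a minimal non-interactive sketch of the /etc/locale.gen route described above (the sed patterns assume the stock Debian file where each locale appears as a commented-out line; adjust the locale names to the ones you need):
sed -i 's/^# *en_GB.UTF-8 UTF-8/en_GB.UTF-8 UTF-8/' /etc/locale.gen
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
locale-gen                    # compile the uncommented locales
locale -a                     # list the locales now available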
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/269170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147118/" ] }
269,180
I found a way in Windows to do such a thing:
echo "This is just a sample line appended to create a big file. " > dummy.txt
for /L %i in (1,1,21) do type dummy.txt >> dummy.txt
http://www.windows-commandline.com/how-to-create-large-dummy-file/
Is there a way in UNIX to copy a file, append and then repeat the process? Something like for .. cat file1.txt > file1.txt ?
yes "Some text" | head -n 100000 > large-file With csh / tcsh : repeat 10000 echo some test > large-file With zsh : {repeat 10000 echo some test} > large-file On GNU systems, see also: seq 100000 > large-file Or: truncate -s 10T large-file (creates a 10TiB sparse file (very large but doesn't take any space on disk)) and the other alternatives discussed at "Create a test file with lots of zero bytes" . Doing cat file >> file would be a bad idea. First, it doesn't work with some cat implementations that refuse to read files that are the same as their output file. But even if you work around it by doing cat file | cat >> file , if file is larger than cat 's internal buffer, that would cause cat to run in an infinite loop as it would end up reading the data that it has written earlier. On file systems backed by a rotational hard drive, it would be pretty inefficient as well (after reaching a size greater than would possibly be cached in memory) as the drive would need to go back and forth between the location where to read the data, and that where to write it.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/269180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160688/" ] }
269,306
I am confused about stopping a job by using the percent sign with the kill command. I cannot find any documentation in the man pages for kill that indicate the percent sign can be used. Can someone explain to me if this explanation is hidden somewhere else, or why the % sign is used? kill -s 19 %1 would stop the job with an id of 1
The % sign introduces a job specification . To put it simply, a job is a process that has been started by the shell and can be running in the foreground (if it is, then you can't interact with the shell), running in the background, suspended, or already dead (but the shell hasn't noticed yet, otherwise the job would go away). %1 means the job which is the first entry in that shell's job table. Job numbers in different shell instances are unrelated, and they're unrelated to the process ID. You can use the jobs command to see a list of jobs in that shell. Other useful commands to manipulate jobs are fg and bg , to move a job to the foreground or background respectively. Other ways to manipulate jobs are pressing Ctrl + Z to suspend the foreground job and running a command with & at the end to send it directly into the background. There is an independent kill utility, and also a shell builtin called kill . The command exists as a separate utility so that it can be invoked from other programs without launching a shell. The command exists as a shell builtin so that it can be invoked even if there aren't enough resources left to launch a kill process, and so that it can understand shell internal data structures. Jobs are an internal shell data structure, so the external kill command doesn't know about them. The man page documents the external command. To find documentation about kill features related to jobs, look at the documentation of your shell, for example bash or zsh . Then refer to the section about jobs: bash , zsh . The shell manual is also where the commands jobs , fg and bg are documented.
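A short interactive session illustrating the job table and the %N notation (sleep stands in for any long-running command, and the job and process numbers shown are illustrative):
$ sleep 300 &              # start a background job
[1] 12345
$ jobs                     # list this shell's jobs
[1]+  Running                 sleep 300 &
$ kill -s STOP %1          # same idea as kill -s 19 %1 on Linux: suspend job 1
$ jobs
[1]+  Stopped                 sleep 300
$ bg %1                    # resume it in the background
$ kill %1                  # terminate it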
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29125/" ] }
269,342
I have a bunch of videos which I want to check if they are complete or not. Some of them may be downloaded partially, but they are not faulty. How can I efficiently check if these video are completely downloaded? If I had the links, I would have checked the size of them, but I don't. I tried to use ffprobe and mediainfo . ffprobe reports minor problems on partially downloaded files, but it also reports similar problems with some of completely downloaded files. Should I use ffmpeg to read the whole files and compare the length of the videos to check if they are downloaded? Is there a better solution?
ffmpeg is an OS-agnostic tool that is capable of determining if a video file has been completely downloaded. The command below instructs ffmpeg to read the input video and encode the video to nothing. During the encoding process, any errors such as missing frames are output to test.log.
ffmpeg -v error -i FILENAME.mp4 -f null - 2>test.log
If a video file is not totally downloaded, there will be many lines in the test.log file. For example, 0.1 MB missing from a video file produced 71 lines of errors. If the video is fully downloaded and hasn’t been corrupted, no errors are found, and no lines are printed to test.log.
Edit
In the example I gave above, I tested the entire file because the test video I downloaded was a torrent, which can have missing chunks throughout the file. Adding -sseof -60 to the list of arguments will check the last 60 seconds of the file, which is considerably faster.
ffmpeg -v error -sseof -60 -i FILENAME.mp4 -f null - 2>test.log
You'll need a newer version of ffmpeg; 2.8 was missing the sseof flag, so I used 3.0.
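A sketch that applies the same check to every .mp4 in a directory and flags the files whose last 60 seconds produced decode errors (the extension and the -sseof window are assumptions to adapt; -nostdin keeps ffmpeg from reading the terminal inside the loop):
for f in *.mp4; do
    if ffmpeg -nostdin -v error -sseof -60 -i "$f" -f null - 2>&1 | grep -q .; then
        echo "POSSIBLY INCOMPLETE: $f"
    else
        echo "OK: $f"
    fi
done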
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37799/" ] }
269,363
For example, we have this content in a text file named file :
001
002
004
008
010
How to extract the missing 3 5 6 7 9 ?
ffmpeg is an OS agnostic tool that is capable of determining if a video file has been completely downloaded. The command below instructs ffmpeg to read the input video and encode the video to nothing. During the encoding process, any errors such as missing frames are output to test.log. ffmpeg -v error -i FILENAME.mp4 -f null - 2>test.log If a video file is not totally downloaded, there will be many lines in the test.log file. For example, .1 MB missing from a video file produced 71 lines of errors. If the video is fully downloaded and hasn’t been corrupted, no errors are found, and no lines are printed to test.log. Edit In the example I gave above, I tested the entire file because the test video I downloaded was a torrent, which can have missing chunks throughout the file. Adding -sseof -60 to the list of arguments will check the last 60 seconds of the file, which is considerably faster. ffmpeg -v error -sseof -60 -i FILENAME.mp4 -f null - 2>test.log You'll need a newer version of ffmpeg, 2.8 was missing the sseof flag, so I used 3.0.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67765/" ] }
269,374
I used to like the idea of a rolling release distribution, but now I have moved to a location where the ISP has datacaps and I don't want to be constantly installing updates. I want to switch to Leap but don't want to reinstall everything and reconfigure my KDE. As an alternative, if there's a way to export all my KDE settings that'd probably be good enough.
Of course! First create a backup of your repos:
mv /etc/zypp/repos.d/*.repo %backup_dir%
Now your repo list is clear, then you must add the Leap repos:
zypper ar -f -c http://download.opensuse.org/distribution/leap/42.1/repo/oss/ repo-oss
zypper ar -f -c http://download.opensuse.org/distribution/leap/42.1/repo/non-oss/ repo-non-oss
zypper ar -f -c http://download.opensuse.org/update/leap/42.1/oss/ repo-update
... debug, source.. etc.. whatever you want. Refresh your repos:
zypper ref
And change distribution:
zypper dup
Now you have successfully switched to Leap.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121116/" ] }
269,422
If I have something like: echo 1 2 3 4 5 6 or echo man woman child what do I have to put behind the pipe to pick out one element of 1 2 3 4 5 6 or man woman child ? echo 1 2 3 4 5 6 | command3
If your system has the shuf command:
echo 1 2 3 4 5 | xargs shuf -n1 -e
If the input doesn't really need to be echoed via standard input, then it would be better to use:
shuf -n1 -e 1 2 3 4 5
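If shuf is not available, a rough awk alternative that prints one random whitespace-separated field from its input (srand() seeds from the time of day, so runs within the same second repeat the choice):
echo 1 2 3 4 5 6 | awk 'BEGIN { srand() } { print $(int(rand() * NF) + 1) }'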
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
269,424
How can I add two rules to firewalld in order to deny everyone and accept only set of several IP addresses? In this here case I am talking about SSH port - 22. I am using CentOS 7 and firewalld.
If your system has the shuf command echo 1 2 3 4 5 | xargs shuf -n1 -e If the input doesn't really need to be echoed via standard input, then it would be better to use shuf -n1 -e 1 2 3 4 5
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160865/" ] }
269,444
I was wondering if this is true or false.
cp can copy a single file to a different filename (i.e. "rename" the destination), but there is no way to specify different filenames for multiple files. So the answer is no, cp can not rename when copying multiple files. When cp is given more than two arguments, all of the files are copied to the final argument (a directory). e.g.
cp file1 file2 file3 /path/to/destdir/
With GNU cp there is a -t or --target-directory option which allows you to specify the destination before the source files. e.g.
cp -t /path/to/destdir/ file1 file2 file3
-t is particularly useful when used with, e.g., ... | xargs cp -t destdir/ or find ... -exec cp -t destdir/ {} + . Some other GNU tools, including mv and ln, also have the same -t aka --target-directory option. If you want to rename multiple files when you copy them, the easiest way is to write a script to do it. You can generate a large part of the script automatically. There are many methods of doing this, here's one of the easiest (using filenames matching *.txt as the example):
find . -maxdepth 1 -name '*.txt' \
    -exec echo cp \'{}\' \'/path/to/dest/newfile\' \; > mycp.sh
(I've split the find command onto two lines here to avoid a horizontal scrollbar but this can be typed all on one line.) This will produce output like this:
$ ls -1 *.txt
dict.txt
qstat.txt
x.txt
foo'bar.txt
$ find . -maxdepth 1 -name '*.txt' \
    -exec echo cp \'{}\' \'/path/to/dest/newfile\' \;
cp './qstat.txt' '/path/to/dest/newfile'
cp './x.txt' '/path/to/dest/newfile'
cp './dict.txt' '/path/to/dest/newfile'
cp './foo'bar.txt' '/path/to/dest/newfile'
Then edit mycp.sh with your preferred text editor and change newfile on each cp command line to whatever you want to rename that copied file to. If you don't want to rename some of the file(s), just delete newfile from the destination, leaving only the path as the destination. Note the final line of the output, with './foo'bar.txt' as the source filename - because the filename contains a ' character, this line needs some extra editing to change the embedded ' to '\'' , so that the line looks like this:
cp './foo'\''bar.txt' '/path/to/dest/newfile'
Alternatively, if you have GNU sed (with the -z or --null-data option for NUL-separated lines) and xargs you could do that automatically with:
find . -maxdepth 1 -name '*.txt' -print0 | sed -z -e "s/'/'\\\''/g" | xargs -0 -r -i echo cp \'{}\' \'/path/to/dest/newfile\' > mycp.sh
Once you've finished editing the script, you can run it with sh mycp.sh .
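If the new names can be derived mechanically from the old ones, a plain shell loop with parameter expansion avoids generating and editing a script at all. A sketch, where the .txt/.bak suffixes and the destination path are only placeholders:
for f in *.txt; do
    cp -- "$f" "/path/to/dest/${f%.txt}.bak"    # strip the .txt suffix, append .bak
done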
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160878/" ] }
269,474
The normal way to connect to an SSH server is ssh username@ip_address . But a user may only want to run a program on the remote machine. So the program name follows after the normal argument which is ssh username@ip_address <program_name> . For example, ssh username@ip_address ls . That argument is fine except for interactive programs (that also accept user input as well as providing output) e.g. top . The output is TERM environment variable not set. which means no (pseudo-)terminal is attached between the sshd and top programs. The solution is to add argument -t where the whole command now becomes ssh -t username@ip_address top . My question is why can't sshd by default also use a pseudo-terminal to communicate with non-interactive programs so there is no need to add the -t argument for interactive programs?
It's true that, as others have said, PTYs have a certain overhead - but the big reason for not using a PTY when running a remote command is that you lose information. Normally, when you run a command remotely via ssh, the command's stdout and stderr streams are sent to the local stdout and stderr , which means you can redirect/pipe them separately - for example:
$ ssh server ls foo bar
ls: cannot access bar: No such file or directory
foo
$ ssh server ls foo bar > stdout 2> stderr
$ cat stdout
foo
$ cat stderr
ls: cannot access bar: No such file or directory
But if you use a PTY, all output goes to stdout , because PTYs don't have separate streams for output/error:
$ ssh -t server ls foo bar > stdout 2> stderr
$ cat stdout
ls: cannot access bar: No such file or directory
foo
$ cat stderr
$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63649/" ] }
269,479
I'm trying to repair a SD card with FAT, but fsck doesn't write changes — even the magic -w option doesn't help:
$ sudo fsck.fat -aw /dev/sda1
fsck.fat 3.0.26 (2014-03-07)
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
  Automatically removing dirty bit.
Free cluster summary wrong (240886 vs. really 241296)
  Auto-correcting.
Performing changes.
/dev/sda1: 3471 files, 240319/481615 clusters
Looks like repaired ↑. But every restart of fsck , it reports the same problems, and pretends that it fixes them with the same text. Here's the verbose variant:
$ sudo fsck.fat -awv /dev/sda1
fsck.fat 3.0.26 (2014-03-07)
fsck.fat 3.0.26 (2014-03-07)
Checking we can access the last sector of the filesystem
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
  Automatically removing dirty bit.
Boot sector contents:
System ID "mkfs.fat"
Media byte 0xf8 (hard disk)
       512 bytes per logical sector
      4096 bytes per cluster
        32 reserved sectors
First FAT starts at byte 16384 (sector 32)
         2 FATs, 32 bit entries
   1926656 bytes per FAT (= 3763 sectors)
Root directory start at cluster 2 (arbitrary size)
Data area starts at byte 3869696 (sector 7558)
    481615 data clusters (1972695040 bytes)
62 sectors/track, 61 heads
  2048 hidden sectors
3860480 sectors total
Reclaiming unconnected clusters.
Checking free cluster summary.
Free cluster summary wrong (240886 vs. really 241296)
  Auto-correcting.
Performing changes.
/dev/sda1: 3471 files, 240319/481615 clusters
It's true that, as others have said, PTYs have a certain overhead - but the big reason for not using a PTY when running a remote command is that you lose information. Normally, when you run a command remotely via ssh, the command's stdout and stderr streams are sent to the local stdout and stderr , which means you can redirect/pipe them separately - for example: $ ssh server ls foo barls: cannot access bar: No such file or directoryfoo$ ssh server ls foo bar > stdout 2> stderr$ cat stdoutfoo$ cat stderrls: cannot access bar: No such file or directory But if you use a PTY, all output goes to stdout , because PTYs don't have separate streams for output/error: $ ssh -t server ls foo bar > stdout 2> stderr$ cat stdoutls: cannot access bar: No such file or directoryfoo$ cat stderr$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269479", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59928/" ] }
269,500
This is what happens when i run lsmod on an arm board (banana pi) running on kernel 4.3.0 # lsmodModule Size Used byasync_raid6_recov 1434 -2async_pq 5548 -2async_xor 3771 -2async_memcpy 1665 -2sha512_generic 8213 -2rsa_generic 3235 -2asn1_decoder 2667 -2mpi 13730 -2poly1305_generic 3386 -2pcbc 2396 -2michael_mic 2051 -2md4 3536 -2ghash_generic 1908 -2gcm 10511 -2fcrypt 8128 -2echainiv 2110 -2crypto_user 4316 -2crc32 1581 -2cmac 2657 -2chacha20poly1305 6641 -2chacha20_generic 2902 -2ccm 7537 -2async_tx 1958 -2asymmetric_keys 3866 -2arc4 1882 -2algif_aead 5293 -2ablk_helper 1775 -2cryptd 7982 -2dm_crypt 17382 -2dm_mod 84208 -2algif_skcipher 7502 -2evdev 10705 -2nvmem_sunxi_sid 2444 -2nvmem_core 7792 -2sg 23835 -2sun4i_ts 3948 -2cpufreq_dt 4349 -2ohci_platform 4551 -2ohci_hcd 28715 -2sun4i_ss 15192 -2thermal_sys 30747 -2hwmon 2571 -2uio_pdrv_genirq 2949 -2uio 7074 -2# This is /proc/modules just in case it can provide any hint # cat /proc/modulesasync_raid6_recov 1434 - - Live 0xbf140000async_pq 5548 - - Live 0xbf13b000async_xor 3771 - - Live 0xbf137000async_memcpy 1665 - - Live 0xbf133000sha512_generic 8213 - - Live 0xbf12d000rsa_generic 3235 - - Live 0xbf129000asn1_decoder 2667 - - Live 0xbf125000 (P)mpi 13730 - - Live 0xbf11d000poly1305_generic 3386 - - Live 0xbf119000pcbc 2396 - - Live 0xbf115000michael_mic 2051 - - Live 0xbf111000md4 3536 - - Live 0xbf10d000ghash_generic 1908 - - Live 0xbf109000gcm 10511 - - Live 0xbf102000fcrypt 8128 - - Live 0xbf0fd000echainiv 2110 - - Live 0xbf0f9000crypto_user 4316 - - Live 0xbf0f4000crc32 1581 - - Live 0xbf0f0000cmac 2657 - - Live 0xbf0ec000chacha20poly1305 6641 - - Live 0xbf0e7000chacha20_generic 2902 - - Live 0xbf0e3000ccm 7537 - - Live 0xbf0de000async_tx 1958 - - Live 0xbf0da000asymmetric_keys 3866 - - Live 0xbf0d6000arc4 1882 - - Live 0xbf0d2000algif_aead 5293 - - Live 0xbf0cd000ablk_helper 1775 - - Live 0xbf0c9000cryptd 7982 - - Live 0xbf0c3000dm_crypt 17382 - - Live 0xbf0b9000dm_mod 84208 - - Live 0xbf099000algif_skcipher 7502 - - Live 0xbf094000evdev 10705 - - Live 0xbf08d000nvmem_sunxi_sid 2444 - - Live 0xbf089000nvmem_core 7792 - - Live 0xbf083000sg 23835 - - Live 0xbf078000sun4i_ts 3948 - - Live 0xbf074000cpufreq_dt 4349 - - Live 0xbf069000ohci_platform 4551 - - Live 0xbf064000ohci_hcd 28715 - - Live 0xbf057000sun4i_ss 15192 - - Live 0xbf04f000thermal_sys 30747 - - Live 0xbf040000hwmon 2571 - - Live 0xbf026000uio_pdrv_genirq 2949 - - Live 0xbf024000uio 7074 - - Live 0xbf000000# The thing is I need the 'Used by' field showing the modules, otherwise I wouldn't care, I guess.
In your kernel configuration ( make config , make menuconfig etc.) you need to enable CONFIG_MODULE_UNLOAD : When CONFIG_MODULE_UNLOAD is set, the kernel counts references, as you may only unload a module if there are no references to it. When CONFIG_MODULE_UNLOAD is not set, then the kernel has no need to count how many references there are to a module, and it always returns -2 as a marker value. This answer originally came from Gentoo Forums .
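To confirm how the currently running kernel was built before recompiling, you can grep its configuration. A sketch; which of these files exists depends on how your kernel was packaged:
grep CONFIG_MODULE_UNLOAD /boot/config-$(uname -r)     # if the distro ships the config
zcat /proc/config.gz | grep CONFIG_MODULE_UNLOAD       # if CONFIG_IKCONFIG_PROC is enabled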
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160917/" ] }
269,587
I've been using grep -i more often and I found out that it is slower than its egrep equivalent, where I match against the upper or lower case of each letter:
$ time grep -iq "thats" testfile
real    0m0.041s
user    0m0.038s
sys     0m0.003s
$ time egrep -q "[tT][hH][aA][tT][sS]" testfile
real    0m0.010s
user    0m0.003s
sys     0m0.006s
Does grep -i do additional tests that egrep doesn't?
grep -i 'a' is equivalent to grep '[Aa]' in an ASCII-only locale. In a Unicode locale, character equivalences and conversions can be complex, so grep may have to do extra work to determine which characters are equivalent. The relevant locale setting is LC_CTYPE , which determines how bytes are interpreted as characters. In my experience, GNU grep can be slow when invoked in a UTF-8 locale. If you know that you're searching for ASCII characters only, invoking it in an ASCII-only locale may be faster. I expect that
time LC_ALL=C grep -iq "thats" testfile
time LC_ALL=C egrep -q "[tT][hH][aA][tT][sS]" testfile
would produce indistinguishable timings. That being said, I can't reproduce your finding with GNU grep on Debian jessie (but you didn't specify your test file). If I set an ASCII locale ( LC_ALL=C ), grep -i is faster. The effects depend on the exact nature of the string, for example a string with repeated characters reduces the performance ( which is to be expected ).
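A reproducible way to compare the invocations yourself; a sketch where the generated test file is plain ASCII and the pattern never matches, so every run has to scan the whole file:
yes 'the quick brown fox jumps over the lazy dog' | head -n 1000000 > testfile
time LC_ALL=en_US.UTF-8 grep -iq "thats" testfile
time LC_ALL=C grep -iq "thats" testfile
time LC_ALL=C egrep -q "[tT][hH][aA][tT][sS]" testfile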
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/269587", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137040/" ] }
269,593
I wanted to backup my ~/.ssh/id_rsa to id_rsa.old , and it looks like it got deleted! How is this possible? :)
root@localhost:~/.ssh# ls -l
total 16
-rw------- 1 root root 3326 Mar 12 11:22 id_rsa
-rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub
-rw------- 1 userx userx 666 Mar 8 11:09 known_hosts
-rw-r--r-- 1 userx userx 666 Feb 29 10:53 known_hosts.old
root@localhost:~/.ssh# mv id_rsa *.old
root@localhost:~/.ssh# ls -l
total 12
-rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub
-rw------- 1 userx userx 666 Mar 8 11:09 known_hosts
-rw------- 1 root root 3326 Mar 12 11:22 known_hosts.old
root@localhost:~/.ssh# touch p
root@localhost:~/.ssh# mv p *.p
root@localhost:~/.ssh# ls -l
total 12
-rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub
-rw------- 1 userx userx 666 Mar 8 11:09 known_hosts
-rw------- 1 root root 3326 Mar 12 11:22 known_hosts.old
-rw-r--r-- 1 root root 0 Mar 12 11:28 *.p
root@localhost:~/.ssh# rm *.p
root@localhost:~/.ssh# ls -l
total 12
-rw-r--r-- 1 root root 756 Mar 12 11:22 id_rsa.pub
-rw------- 1 userx userx 666 Mar 8 11:09 known_hosts
-rw------- 1 root root 3326 Mar 12 11:22 known_hosts.old
userx@localhost:~$ uname -r
4.2.0-30-generic
userx@localhost:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 15.10
Release:        15.10
Codename:       wily
userx@localhost:~$ bash --version
GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu)
It has been renamed as known_hosts.old , hence has overwritten the previous contents of known_hosts.old . As you already have a file named known_hosts.old in there so the glob pattern *.old has been expanded to known_hosts.old . In a nutshell, the following: mv id_rsa *.old has been expanded to: mv id_rsa known_hosts.old In bash , if there was not a file named known_hosts.old present there it would expand to literal *.old (given you have not enabled nullglob ).
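A habit that makes this kind of accident visible before it happens is to let echo show you what a glob expands to, and to name the backup target explicitly. A sketch with the same file names:
root@localhost:~/.ssh# echo mv id_rsa *.old      # preview the expansion without running it
mv id_rsa known_hosts.old
root@localhost:~/.ssh# cp -p id_rsa id_rsa.old   # what was intended: an explicit target name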
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/269593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160979/" ] }
269,600
I have done much research and attempted fixes on this issue, mostly involving tweaking the xstartup file. I've tried alternative VNC clients (UltraVNC and TightVNC) from a Windows 7 computer, with the same results for each client. Basically, I get either a blank grey screen with only an arrow cursor, or a failure to connect at all. I also tried a different VNC server (VNC4server) but abandoned that because, although I could connect, I got an error every time on the client window. And Tightvnc seems more widely used and user-supported. I find that, almost regardless of what I put in the ~/.vnc/xstartup file (for example, even if it has just one line (startkde &) it will work if I specify "root" as the VNC user. But then I'm logged in as root and I need instead to follow standard *nix practice of being logged in as a non-root user. So, the issue does appear to relate to privileges. However, I check for correct ownership and executable flags on files after every time I edit them. I read somewhere that the latest Tightvnc server will not allow KDE desktop to be started if there is already a desktop session running on the host (user logged in), so I start the host machine without anyone logged in. I have configured Tightvnc server as a service. My current xstartup file follows, but like I said, I have already attempted many variants of these lines, commenting out nearly everything, from suggestions gathered on the internet. #!/bin/sh # Uncomment the following two lines for normal desktop: unset SESSION_MANAGER exec /etc/X11/xinit/xinitrc & # unset DBUS_SESSION_BUS_ADDRESS [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources xsetroot -solid grey vncconfig -iconic & x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" & # x-window-manager & exec startkde & Here is the service file, /lib/systemd/system/tightvncserver.service : [Unit] Description=TightVNC remote desktop server After=sshd.service [Service] Type=dbus ExecStart=/usr/bin/vncserver -geometry 1024x768 -depth 24 :1 User=vnc Type=forking [Install] WantedBy=multi-user.target Here is the log after one reboot of the host followed by one connection attempt: 14/03/16 01:37:46 Xvnc version TightVNC-1.3.9 14/03/16 01:37:46 Copyright (C) 2000-2007 TightVNC Group 14/03/16 01:37:46 Copyright (C) 1999 AT&T Laboratories Cambridge 14/03/16 01:37:46 All Rights Reserved. 14/03/16 01:37:46 See http://www.tightvnc.com/ for information on TightVNC 14/03/16 01:37:46 Desktop name 'X' (test:1) 14/03/16 01:37:46 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t 14/03/16 01:37:46 Listening for VNC connections on TCP port 5901 /home/vnc/.vnc/xstartup: 12: /home/vnc/.vnc/xstartup: vncconfig: not found x-terminal-emulator: Unknown option 'ls'. x-terminal-emulator: Use --help to get a list of available command line options. Error: cannot create directory "/tmp/ksocket-vncw1nXNU": File exists startkde: Starting up... kdeinit4: Aborting. 
bind() failed: Address already in use Could not bind to socket '/tmp/ksocket-vncGcyXe4/kdeinit4__1' 14/03/16 01:38:09 Got connection from client 192.168.10.10 14/03/16 01:38:09 Using protocol version 3.8 14/03/16 01:38:14 Full-control authentication passed by 192.168.10.10 14/03/16 01:38:14 Pixel format for client 192.168.10.10: 14/03/16 01:38:14 32 bpp, depth 24, little endian 14/03/16 01:38:14 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0 14/03/16 01:38:14 no translation needed 14/03/16 01:38:14 Using hextile encoding for client 192.168.10.10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 19 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 18 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 17 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 16 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 9 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding 8 14/03/16 01:38:14 Using compression level 6 for client 192.168.10.10 14/03/16 01:38:14 Enabling full-color cursor updates for client 192.168.10.10 14/03/16 01:38:14 Enabling cursor position updates for client 192.168.10.10 14/03/16 01:38:14 Using image quality level 6 for client 192.168.10.10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -65530 14/03/16 01:38:14 Enabling LastRect protocol extension for client 192.168.10.10 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -223 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32768 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32767 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32764 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32766 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -32765 14/03/16 01:38:14 rfbProcessClientNormalMessage: ignoring unknown encoding -1063131698 14/03/16 01:38:43 Client 192.168.10.10 gone 14/03/16 01:38:43 Statistics: 14/03/16 01:38:43 key events received 0, pointer events 260 14/03/16 01:38:43 framebuffer updates 2, rectangles 5, bytes 776789 14/03/16 01:38:43 cursor shape updates 2, bytes 4920 14/03/16 01:38:43 cursor position updates 1, bytes 12 14/03/16 01:38:43 hextile rectangles 2, bytes 771857 14/03/16 01:38:43 raw bytes equivalent 6291480, compression ratio 8.151095 Any ideas? [EDIT, 2014/03/14, 1409 UTC]: I forgot to mention that I had it working error-free with XFCE desktop. But I much prefer KDE, and I wish to get that working if at all possible. [EDIT, 2014/03/14, 2216 UTC]: This is a follow-up to Paul H.'s suggestion, I'm putting it here because the mini-formatting of comments doesn't seem to allow blockquotes and images. Thank you, that got me further. After I give the "startkde &" command, the client window opens with a sensible-looking desktop that is starting to load and gets this far before closing (note the error message in top left): The log is as follows: 14/03/16 21:32:11 Desktop name 'X' (test:1) 14/03/16 21:32:11 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t 14/03/16 21:32:11 Listening for VNC connections on TCP port 5901 QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. QDBusConnection: session D-Bus connection created before QCoreApplication. 
Application may misbehave. 14/03/16 21:32:37 Got connection from client 192.168.10.10 14/03/16 21:32:37 Using protocol version 3.8 14/03/16 21:32:47 Full-control authentication passed by 192.168.10.10 14/03/16 21:32:47 Pixel format for client 192.168.10.10: 14/03/16 21:32:47 32 bpp, depth 24, little endian 14/03/16 21:32:47 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0 14/03/16 21:32:47 no translation needed 14/03/16 21:32:47 Using hextile encoding for client 192.168.10.10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 19 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 18 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 17 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 16 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 9 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding 8 14/03/16 21:32:47 Using compression level 6 for client 192.168.10.10 14/03/16 21:32:47 Enabling full-color cursor updates for client 192.168.10.10 14/03/16 21:32:47 Enabling cursor position updates for client 192.168.10.10 14/03/16 21:32:47 Using image quality level 6 for client 192.168.10.10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -65530 14/03/16 21:32:47 Enabling LastRect protocol extension for client 192.168.10.10 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -223 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32768 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32767 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32764 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32766 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -32765 14/03/16 21:32:47 rfbProcessClientNormalMessage: ignoring unknown encoding -1063131698 Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString) QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. kbuildsycoca4 running... kbuildsycoca4(989) KBuildSycoca::checkTimestamps: checking file timestamps kbuildsycoca4(989) KBuildSycoca::checkTimestamps: timestamps check ok kbuildsycoca4(989) kdemain: Emitting notifyDatabaseChanged () QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave. Object::connect: No such signal org::freedesktop::UPower::DeviceAdded(QString) Object::connect: No such signal org::freedesktop::UPower::DeviceRemoved(QString) QDBusConnection: name 'org.freedesktop.UDisks2' had owner '' but we thought it was ':1.11' klauncher: Exiting on signal 15 knotify4: Fatal IO error: client killed kded4: Fatal IO error: client killed konsole: Fatal IO error: client killed konsole(902) Konsole::SessionManager::~SessionManager: Konsole SessionManager destroyed with sessions still alive The first error message, ending with "application may misbehave," is supposed to be unimportant, from the bug reports I have seen. The rest, I'm not sure about..
It has been renamed as known_hosts.old , hence has overwritten the previous contents of known_hosts.old . As you already have a file named known_hosts.old in there so the glob pattern *.old has been expanded to known_hosts.old . In a nutshell, the following: mv id_rsa *.old has been expanded to: mv id_rsa known_hosts.old In bash , if there was not a file named known_hosts.old present there it would expand to literal *.old (given you have not enabled nullglob ).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/269600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160968/" ] }
269,617
Coming from Windows administration, I want to dig deeper in Linux (Debian).One of my burning questions I could not answer searching the web (didn't find it) is: how can I achieve the so called "one-to-many" remoting like in PowerShell for Windows? To break it down to the basics I would say: My view on Linux: I can ssh into a server and type my command I get the result. For an environment of 10 servers I would have to write a (perl/python?) script sending the command for each of them? My experience from Windows: I type my command and with "invoke-command" I can "send" this to a bunch of servers (maybe from a textfile) to execute simultaneously and get the result back (as an object for further work). I can even establish multiple sessions, the connection is held in the background, and selectively send commands to these sessions, and remote in and out like I need. (I heard of chef, puppet, etc. Is this something like that?) Update 2019: After trying a lot - I suggest Rex (see this comment below ) - easy setup (effectively it just needs ssh, nothing else) and use (if you know just a little bit perl it's even better, but it's optional) With Rex(ify) you can do adhoc command and advance it to a real configuration management (...meaning: it is a CM in first place, but nice for adhoc tasks, too)The website seams outdated, but currently (as of 01/2019) it's in active development and the IRC-Channel is also active. With Windows' new openssh there are even more possibilities you can try: rex -u user -p password -H 192.168.1.3 -e 'say run "hostname"'
Summary
- Ansible is a DevOps tool that is a powerful replacement for PowerShell
- RunDeck as a graphical interface is handy
- Some people run RunDeck+Ansible together

clusterssh
For sending remote commands to several servers, for a beginner, I would recommend clusterssh. To install clusterssh in Debian:
apt-get install clusterssh
Another clusterssh tutorial:
ClusterSSH is a Tk/Perl wrapper around standard Linux tools like XTerm and SSH. As such, it'll run on just about any POSIX-compliant OS where the libraries exist — I've run it on Linux, Solaris, and Mac OS X. It requires the Perl libraries Tk (perl-tk on Debian or Ubuntu) and X11::Protocol (libx11-protocol-perl on Debian or Ubuntu), in addition to xterm and OpenSSH.

Ansible
As for a remote framework for multiple systems administration, Ansible is a very interesting alternative to Puppet. It is leaner, and it does not need dedicated remote agents as it works over SSH (it has also been bought by RedHat). The Playbooks are more elaborate than the command line options. However, to start using Ansible you need a simple installation and to set up the clients list text file. Afterwards, to run a command in all servers, it is as simple as doing:
ansible all -m command -a "uptime"
The output also is very nicely formatted and separated per rule/server, and when running it in the background it can be redirected to a file and consulted later. You can start with simple rules, and Ansible usage will get more interesting as you grow in Linux, and your infra-structure becomes larger. As such it will do so much more than PowerShell. As an example, a very simple Playbook to upgrade Linux servers that I wrote:
---
- hosts: all
  become: yes
  gather_facts: False
  tasks:
    - name: updates a server
      apt: update_cache=yes
    - name: upgrade a server
      apt: upgrade=full
It also has many modules defined that let you easily write comprehensive policies. Module Index - Ansible Documentation. It also has an interesting official hub/"social" network of repositories to search for already made ansible policies by the community: Ansible Galaxy. Ansible is also widely used, and you will find lots of projects on github, like this one from myself for FreeRadius setup. While Ansible is a free open source framework, it also has a paid web panel interface, Ansible Tower, although the licensing is rather expensive. Nowadays, after RedHat bought it, Tower also has an open source version known as AWX. As a bonus, Ansible is also capable of administering Windows servers, though I have never used it for that. It is also capable of administering networking equipment (routers, switches, and firewalls), which makes it a very interesting turn-key automation solution. How to install Ansible

Rundeck
Yet again, for a remote framework easier to use, but not so potent as Ansible, I do recommend Rundeck. It is a very powerful multi-user/login graphical interface where you can automate much of your common day-to-day tasks, and even give watered down views to sysops or helpdesk people. When running the commands, it also gives you windows with the output broken down by server/task. It can run multiple jobs in the background seamlessly, and allows you to see the report and output later on. How to install RunDeck. Please note there are people running Ansible+RunDeck as a web interface; not all cases are appropriate for that.
It also goes without saying that using Ansible and/or RunDeck can be construed as a form or part of the infra-structure documentation, and over time allows to replicate and improve the actions/recipes/Playbooks. Lastly, talking about a central command server, I would create one just up for the task. Actually the technical term is a jump box. 'Jump boxes' improve security, if you set them up right .
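As a concrete starting point, a minimal sketch of the Ansible ad-hoc workflow described above (the group name and host names are placeholders for your ten servers):
# /etc/ansible/hosts -- the inventory ("clients list") file
[unixservers]
server1.example.com
server2.example.com
server3.example.com

# run a command on every host in the group and keep the output on the control machine
ansible unixservers -m command -a "date" > all_dates.txt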
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/269617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161003/" ] }
269,620
I have some zip files in a directory, each of which I need to unzip into a specific dir. I used:
ls | awk '{ print "unzip " $1 " -d " $1 }'
unzip p21286665_121020_Linux-x86-64.zip -d p21286665_121020_Linux-x86-64.zip
unzip p21841318_121020_Linux-x86-64.zip -d p21841318_121020_Linux-x86-64.zip
unzip p22098146_121020_Linux-x86-64.zip -d p22098146_121020_Linux-x86-64.zip
But I need something like this:
unzip p21286665_121020_Linux-x86-64.zip -d p21286665
unzip p21841318_121020_Linux-x86-64.zip -d p21841318
unzip p22098146_121020_Linux-x86-64.zip -d p22098146
You could also use the -F argument to split the line on underscore and thus end up with something like this:
ls | awk -F_ '{ print "unzip " $0 " -d " $1 }'
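Generating the command lines with awk works, but a plain shell loop with parameter expansion does the same job without parsing ls. A sketch:
for f in *.zip; do
    unzip "$f" -d "${f%%_*}"    # ${f%%_*} keeps only the part before the first underscore, e.g. p21286665
done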
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146337/" ] }
269,631
Assume I have a file A:
fileA
fileB
Suppose I now have a file named: fileA_someprefix_20160101.txt
Now I want to match all lines from A which prefix this filename, so I thought:
FILE_NAME="fileA_someprefix_20160101.txt"
awk '"$FILE_NAME" ~ /^$1/' A.txt
I tried different ways to escape the dollar sign, but it did not work. In all examples the field is part of the expression (left) instead of the regex. How do I do a reverse "starts with"?
You could also use the -F argument to split the line on underscore and thus end up with something like this:
ls | awk -F_ '{ print "unzip " $0 " -d " $1 }'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110044/" ] }
269,645
I have a folder which contains files (various file extensions) and subfolders. I want to keep certain file extensions and delete the rest. e.g. keep all .txt and .jpg files and delete all other files. on regular UNIX/GNU, I can use find together with the "-not" parameter to achieve this. > find . -not -name "*.jpg" -not -name "*txt" -type f -delete But sadly, this parameter is not available on busybox find. Any ideas on how it can be done? Thanks a lot
-not and -delete are non-standard extensions. There's no reason why you'd want to use -not , when there's a shorter standard equivalent: ! . For -delete , you'll have to invoke rm with the -exec predicate: find . ! -name '*.jpg' ! -name '*.txt' -type f -exec rm -f '{}' + (if you have an older version of busybox, you may need the -exec rm -f '{}' ';' which runs one rm per file). That command above is standard, so will work not only in busybox but also with other non-GNU modern implementations of find . Note that on GNU systems at least, that command (with any find implementation as long as it uses GNU fnmatch(3) ) may still remove some files whose name ends in .jpg or .txt , as the *.jpg pattern would fail to match files whose name contains invalid characters in the current locale. To work around that, you'd need: LC_ALL=C find . ! -name '*.jpg' ! -name '*.txt' -type f -exec rm -f '{}' + Also note that contrary to GNU's -delete , that approach won't work in very deep directory trees as you would then end up reaching the maximum size of a file path passed to the unlink() system call. AFAIK, with find , there's no way around that if your find implementation doesn't support -execdir nor -delete . You may also want to read the security considerations discussed in the GNU find manual if you're going to run that command in a directory writable by others.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65536/" ] }
269,648
What exactly are the mknod command parameters? I want to create a jail in chroot. So I need to do:
mknod /var/chroot/bind/dev/null c 1 3
mknod /var/chroot/bind/dev/random c 1 8
What are c , 1 , 3 and 8 ?
mknod is creating a device file, usually to be located in the /dev branch, but not necessarily, as your example shows. The first parameter tells which kind of device to create, here c for a character device. Other choices might be b for block devices, p for a fifo (pipe). The second parameter is the major number; it identifies the driver for the kernel to use. The third parameter is the minor number; it is passed to the driver for its internal usage. On Linux, major/minor numbers are documented here: devices.txt
So 1 is used for the so called memory devices handled by a single driver. 3 is representing the null device which returns EOF when read and discards whatever is written to it. 8 is representing the random device which returns random numbers. To get more information, you might have a look at the device manual pages, e.g.
man -s 4 null
man -s 4 random
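A sketch of the same two device nodes created with explicit permission modes, which chroot jails usually want (the 666 modes mirror the usual /dev defaults; the paths are taken from the question):
mknod -m 666 /var/chroot/bind/dev/null c 1 3
mknod -m 666 /var/chroot/bind/dev/random c 1 8
ls -l /var/chroot/bind/dev    # both should show up as crw-rw-rw-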
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56476/" ] }
269,661
In Linux Mint 17.3 / 18, iwconfig says the power management of my wireless card is turned on. I want to turn it off permanently, or find some workaround for this issue.
sudo iwconfig wlan0 power off
works, until I reboot the laptop. Also, if I randomly check iwconfig , sometimes it's on, despite having run this command. I read some articles about making the fix permanent. All of them contained the first step "Go to directory /etc/pm/power.d ", which in my case did not exist. I followed these steps:
sudo mkdir -p /etc/pm/power.d
sudo nano /etc/pm/power.d/wireless_power_management_off
I entered these two lines into the file:
#!/bin/bash
/sbin/iwconfig wlan0 power off
And I finished with setting proper user rights:
sudo chmod 700 /etc/pm/power.d/wireless_power_management_off
But after reboot the power management is back on. iwconfig after manually turning power management off:
eth0      no wireless extensions.
wlan0     IEEE 802.11abgn  ESSID:"SSID"
          Mode:Managed  Frequency:2.462 GHz  Access Point: 00:00:00:00:00:00
          Bit Rate=24 Mb/s   Tx-Power=22 dBm
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=42/70  Signal level=-68 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:2  Invalid misc:18   Missed beacon:0
lo        no wireless extensions.
I don't think this question applies only to Linux Mint, it is a general issue of particular wireless adapters.
Open this file with your favorite text editor, I use nano here:
sudo nano /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf
By default there is:
[connection]
wifi.powersave = 3
Change the value to 2 . Possible values for the wifi.powersave field are:
NM_SETTING_WIRELESS_POWERSAVE_DEFAULT (0): use the default value
NM_SETTING_WIRELESS_POWERSAVE_IGNORE (1): don't touch existing setting
NM_SETTING_WIRELESS_POWERSAVE_DISABLE (2): disable powersave
NM_SETTING_WIRELESS_POWERSAVE_ENABLE (3): enable powersave
(Informal source on GitHub for these values.) To take effect, just run:
sudo systemctl restart NetworkManager
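After restarting NetworkManager you can confirm that the setting took effect; a quick sketch (the interface name wlan0 comes from the question and may differ on your system):
iwconfig wlan0 | grep -i "power management"
# or, with the newer iw tool:
iw dev wlan0 get power_save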
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/269661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
269,662
I'm trying to get the output of env in a shell variable and print it.
#!/bin/sh
ENV_BEFORE=$(env)
printf $ENV_BEFORE
As a result a single variable from the env output gets printed. When using echo instead of printf all the output is printed, however without the newlines. What am I missing here?
The problem is that you're not quoting the $ENV_BEFORE variable. As explained in man bash :
Enclosing characters in double quotes preserves the literal value of all characters within the quotes, with the exception of $, `, \, and, when history expansion is enabled, !. The characters $ and ` retain their special meaning within double quotes. The backslash retains its special meaning only when followed by one of the following characters: $, `, ", \, or newline.
So, enclosing a sequence like \n in double quotes preserves its meaning. This is why, when not quoted, \n is just a normal n :
$ printf \n
n$
While, when quoted:
$ printf "\n"

$
An unquoted variable in bash invokes the split+glob operator. This means that the variable is split on whitespace (or whatever the special variable $IFS has been set to) and each resulting word is used as a glob (it will expand to match any matching file names). Your problem is with the "split" part of this. To illustrate, let's take a simpler multiline variable:
$ var=$(printf "foo\nbar\n")
Now, using the shell's set -x debug feature, you can see exactly what's going on:
$ echo $var
+ echo foo bar
foo bar
$ echo "$var"
+ echo 'foo
bar'
foo
bar
As you can see above, echo $var (unquoted) subjects $var to split+glob so it results in two separate strings, foo and bar . The newline was eaten by the split+glob. When the variable was quoted, it wasn't subjected to split+glob, the newline was kept and, because it is quoted, is also interpreted correctly and printed out. The next problem is that printf is not like echo . It doesn't just print anything you give it, it expects a format string. For example printf "color:%s" "green" will print color:green because the %s will be replaced with green . It also ignores any input that can't fit into the format string it was given. So, if you run printf foo bar , printf will treat foo as its format string and bar as the variable it is supposed to format with it. Since there is no %s or equivalent to be replaced by bar , bar is ignored and foo alone is printed:
$ printf $var
+ printf foo bar
foo
That's what happened when you ran printf $ENV_BEFORE . Because the variable wasn't quoted, the split+glob effectively replaced newlines with spaces, and printf only printed the first "word" it saw. To do it correctly, use format strings, and always quote your variables:
printf '%s\n' "$ENV_BEFORE"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269662", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
269,689
Timezone information for the Cayman Islands is incorrect. The Cayman Islands government toyed with the idea of changing the timezone to have daylight savings, in effect matching US/Eastern time, however this did not come to fruition. As such, when daylight savings in US/Eastern time started this weekend just gone by (13 March 2016), but not in Cayman, the time in Cayman is now off by 1 hour. As a workaround we've had to change timezones from "Cayman" to "Jamaica":
sudo mv /etc/localtime /etc/localtime.bak
sudo ln -s /usr/share/zoneinfo/Jamaica /etc/localtime
This is obviously not a permanent solution. What should I do to fix this permanently? How / who do I report this error to?
Time Zone data 2016a already takes this into account : America/Cayman will not observe daylight saving this year after all. Revert our guess that it would. (Thanks to Matt Johnson.) All that needs to happen now is for your distribution to update its time-zone data. You can check whether a bug has already been filed in your distribution's bug tracker, and file one if necessary... In the meantime you can also download the updated tarball and use that to update your zoneinfo file.
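Once the updated tzdata is installed you can verify that the bogus transition is gone; a sketch using zdump (no output means no DST change is scheduled for Cayman in 2016):
zdump -v America/Cayman | grep 2016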
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28680/" ] }
269,763
I'm using a Fedora 13 VM, and I need to install some old rpms, but only have the source rpm files. I know that I can use rpmbuild --rebuild to build the binaries, but for whatever reason, rpm-build isn't installed with yum, and I can only find a source rpm file of rpm-build for fc13. So it's a bit of a recursive problem. The specific rpms I need are libvncserver, and obviously rpm-build, but it would be ideal to know how to start with a generic SRPM and get it to work on Fedora 13, for any future needs that come up. How can I solve this? I'm open to any suggestions, but I must use Fedora 13.
You can probably just use the yum repo they include with the full DVD ISO. I downloaded the ISO to a RHEL6 server, mounted it on loopback and created the following in /etc/yum.repos.d/fedora.repo :
[root@vlp-xxx tmp]# cat /etc/yum.repos.d/fedora.repo
[fedora]
name='Fedora base sur DVD - monter le dvd dans /repo/dvd'
baseurl=file:///mnt/tmp
enabled=0
gpgcheck=0
Which then gave me all the Fedora 13 rpm's:
[root@vlp-xxx tmp]# yum list available --disablerepo='*' --enablerepo=fedora | head
Loaded plugins: product-id, security, subscription-manager
Available Packages
BackupPC.noarch 3.1.0-13.fc13 fedora
ConsoleKit.i686 0.4.1-5.fc13 fedora
ConsoleKit-libs.i686 0.4.1-5.fc13 fedora
ConsoleKit-x11.i686 0.4.1-5.fc13 fedora
DeviceKit-power.i686 1:0.9.0-1.fc13 fedora
GConf2.i686 2.28.1-1.fc13 fedora
GConf2-devel.i686 2.28.1-1.fc13 fedora
GConf2-gtk.i686 2.28.1-1.fc13 fedora
[...snip...]
And your package seems to be in there:
[root@vlp-xxx tmp]# yum info rpm-build --disablerepo='*' --enablerepo=fedora
Loaded plugins: product-id, security, subscription-manager
Installed Packages
[...snip...]
Available Packages
Name : rpm-build
Arch : i686
Version : 4.8.0
Release : 14.fc13
Size : 125 k
Repo : fedora
Summary : Scripts and executable programs used to build packages
URL : http://www.rpm.org/
License : GPLv2+
Description : The rpm-build package contains the scripts and executable programs
            : that are used to build packages using the RPM Package Manager.
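For completeness, a sketch of the loopback mount step alluded to above (the ISO file name and mount point are placeholders chosen to match the baseurl in the repo file):
mkdir -p /mnt/tmp
mount -o loop /tmp/Fedora-13-i386-DVD.iso /mnt/tmp
yum --disablerepo='*' --enablerepo=fedora install rpm-build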
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161100/" ] }
269,773
set -o posix will set the POSIX attribute of the bash shell. How can we check if an attribute has been set up or not?
There are two lists of options in bash. One for shopt and one for set . Check one option. To print an specific option (without changing it) for shopt, use shopt -p name : $ shopt -p xpg_echoshopt -u xpg_echo And for set , use: shopt -po name (yes, you may use shopt -op for set list). $ shopt -po xtraceset +o xtrace List options. To list all options from shopt, use shopt (or reusable shopt -p ). Also shopt -s or shopt -u could be used. The way to list all options to set is with set -o (related: set +o ). Or: shopt -o is equivalent to set -o and shopt -op is to set +o . Manual From LESS=+/'^ *shopt \[' man bash : With no options, or with the -p option, a list of all settable options is displayed, If either -s or -u is used with no optname arguments, the display is limited to those options which are set or unset, respectively. From LESS=+/'^ *set \[' man bash : If -o is supplied with no option-name, the values of the current options are printed. If +o is supplied with no option-name, a series of set commands to recreate the current option settings is displayed on the standard output. Examples $ set -oallexport offbraceexpand onemacs onerrexit offerrtrace offfunctrace offhashall onhistexpand onhistory onignoreeof offinteractive-comments onkeyword offmonitor onnoclobber offnoexec offnoglob offnolog offnotify offnounset offonecmd offphysical offpipefail offposix offprivileged offverbose offvi offxtrace off And $ shopt -spshopt -s checkwinsizeshopt -s cmdhistshopt -s expand_aliasesshopt -s extglobshopt -s extquoteshopt -s force_fignoreshopt -s histappendshopt -s histverifyshopt -s interactive_commentsshopt -s progcompshopt -s promptvarsshopt -s sourcepath It is worth mentioning about shopt -op which actually lists set options: $ shopt -opset +o allexportset -o braceexpandset -o emacsset +o errexitset +o errtraceset +o functraceset -o hashallset -o histexpandset -o historyset +o ignoreeofset -o interactive-commentsset +o keywordset -o monitorset +o noclobberset +o noexecset +o noglobset +o nologset +o notifyset +o nounsetset +o onecmdset +o physicalset +o pipefailset +o posixset +o privilegedset +o verboseset +o viset +o xtrace
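For the specific case in the question, a couple of one-liners that test whether posix mode is currently enabled; a small sketch:
shopt -po posix                                     # prints "set -o posix" or "set +o posix"
if [ -o posix ]; then echo on; else echo off; fi    # test -o checks a set -o option by name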
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269773", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
269,805
I am trying to detach a process from a bash script so that SIGINT will not be forwarded to the process when I exit the script. I have used the disown command in terminal directly, however in bash, disown does not stop SIGINT from being forwarded. The purpose of this script is to start openocd and then gdb with a single invocation. Since the script never exits (it's running gdb) SIGINT is still forwarded from gdb to openocd, which is a problem since SIGINT is used as the halt command in gdb. In a terminal it would look something like this:
$ openocd &     # run openocd daemonized
$ disown $!     # disown last pid
$ gdb           # invoke GDB
When invoked on a terminal in this order, the SIGINT is not passed from gdb to openocd. However if this same invocation is in a bash script, the SIGINT is passed. Any help would be greatly appreciated. PS: this problem is on OS X, but I am trying to use tools which are portable to all Unix systems.
To detach a process from a bash script:

nohup ./process &

If you stop your bash script with SIGINT (ctrl+c), or the shell exits sending SIGHUP for instance, the process won't be bothered and will continue executing normally. stdout & stderr will be redirected to a log file: nohup.out . If you wish to execute a detached command while being able to see the output in the terminal, then use tail :

TEMP_LOG_FILE=tmp.log
> "$TEMP_LOG_FILE"
nohup ./process &> "$TEMP_LOG_FILE" &
tail -f "$TEMP_LOG_FILE" &
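Note that nohup only shields the process from SIGHUP. If the problem is SIGINT still reaching the background process because it shares the script's process group (as in the openocd/gdb case described in the question), setsid from util-linux can start it in its own session instead; a hedged sketch, assuming setsid is available:

setsid ./process < /dev/null &> process.log &   # new session: terminal Ctrl+C no longer reaches it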
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161122/" ] }
269,809
According to Bash tips: Colors and formatting (ANSI/VT100 Control sequences) I attemped to active blink code in my program, But may be blink code has been eliminated. Is it true? If is not true, Please help me to use blink code.
The blink feature depends upon the terminal (or terminal emulator). Most terminals you will use accept the control sequences documented in ECMA-48, e.g., VT100-compatible. The control sequence may cause blinking on a given terminal, or show as a particular color, or simply ignored by a given terminal Applications usually use a terminal description (terminfo or termcap). If the terminal description does not tell how to blink, then the application will not know either. If your computer has infocmp (for terminfo), that will show the capabilities listed in the terminal description. bash only looks for blink — using the termcap name, since it is a termcap application. More generally, terminfo can also describe how to blink using sgr (which is not available in termcap descriptions). For example, this is a terminfo description of vt100 : > infocmp vt100# Reconstructed via infocmp from file: /usr/local/ncurses/share/terminfo/v/vt100vt100|vt100-am|dec vt100 (w/advanced video), am, mc5i, msgr, xenl, xon, cols#80, it#8, lines#24, vt#3, acsc=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~, bel=^G, blink=\E[5m$<2>, bold=\E[1m$<2>, clear=\E[H\E[J$<50>, cr=^M, csr=\E[%i%p1%d;%p2%dr, cub=\E[%p1%dD, cub1=^H, cud=\E[%p1%dB, cud1=^J, cuf=\E[%p1%dC, cuf1=\E[C$<2>, cup=\E[%i%p1%d;%p2%dH$<5>, cuu=\E[%p1%dA, cuu1=\E[A$<2>, ed=\E[J$<50>, el=\E[K$<3>, el1=\E[1K$<3>, enacs=\E(B\E)0, home=\E[H, ht=^I, hts=\EH, ind=^J, ka1=\EOq, ka3=\EOs, kb2=\EOr, kbs=^H, kc1=\EOp, kc3=\EOn, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA, kent=\EOM, kf0=\EOy, kf1=\EOP, kf10=\EOx, kf2=\EOQ, kf3=\EOR, kf4=\EOS, kf5=\EOt, kf6=\EOu, kf7=\EOv, kf8=\EOl, kf9=\EOw, lf1=pf1, lf2=pf2, lf3=pf3, lf4=pf4, mc0=\E[0i, mc4=\E[4i, mc5=\E[5i, rc=\E8, rev=\E[7m$<2>, ri=\EM$<5>, rmacs=^O, rmam=\E[?7l, rmkx=\E[?1l\E>, rmso=\E[m$<2>, rmul=\E[m$<2>, rs2=\E>\E[?3l\E[?4l\E[?5l\E[?7h\E[?8h, sc=\E7, sgr=\E[0%?%p1%p6%|%t;1%;%?%p2%t;4%;%?%p1%p3%|%t;7%;%?%p4%t;5%;m%?%p9%t\016%e\017%;$<2>, sgr0=\E[m\017$<2>, smacs=^N, smam=\E[?7h, smkx=\E[?1h\E=, smso=\E[7m$<2>, smul=\E[4m$<2>, tbc=\E[3g, The corresponding termcap is > infocmp -Cr vt100# Reconstructed via infocmp from file: /usr/local/ncurses/share/terminfo/v/vt100vt100|vt100-am|dec vt100 (w/advanced video):\ :5i:am:bs:ms:xn:xo:\ :co#80:it#8:li#24:vt#3:\ :@8=\EOM:DO=\E[%dB:K1=\EOq:K2=\EOr:K3=\EOs:K4=\EOp:K5=\EOn:\ :LE=\E[%dD:RA=\E[?7l:RI=\E[%dC:SA=\E[?7h:UP=\E[%dA:\ :ac=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~:\ :ae=^O:as=^N:bl=^G:cb=\E[1K:cd=\E[J:ce=\E[K:cl=\E[H\E[J:\ :cm=\E[%i%d;%dH:cr=^M:cs=\E[%i%d;%dr:ct=\E[3g:do=^J:\ :eA=\E(B\E)0:ho=\E[H:k0=\EOy:k1=\EOP:k2=\EOQ:k3=\EOR:\ :k4=\EOS:k5=\EOt:k6=\EOu:k7=\EOv:k8=\EOl:k9=\EOw:k;=\EOx:\ :kb=^H:kd=\EOB:ke=\E[?1l\E>:kl=\EOD:kr=\EOC:ks=\E[?1h\E=:\ :ku=\EOA:l1=pf1:l2=pf2:l3=pf3:l4=pf4:le=^H:mb=\E[5m:\ :md=\E[1m:me=\E[0m:mr=\E[7m:nd=\E[C:pf=\E[4i:po=\E[5i:\ :ps=\E[0i:rc=\E8:rs=\E>\E[?3l\E[?4l\E[?5l\E[?7h\E[?8h:\ :..sa=\E[0%?%p1%p6%|%t;1%;%?%p2%t;4%;%?%p1%p3%|%t;7%;%?%p4%t;5%;m%?%p9%t\016%e\017%;$<2>:\ :sc=\E7:se=\E[m:sf=^J:so=\E[7m:sr=\EM:st=\EH:ta=^I:ue=\E[m:\ :up=\E[A:us=\E[4m: (The termcap name for blink is mb , which you can see in the description). So... if you are not seeing blinking text, that could be (a) the terminal itself or (b) the terminal description. Further reading: infocmp - compare or print out terminfo descriptions terminfo - terminal capability data base Standard ECMA-48:Control Functions for Coded Character Sets
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21911/" ] }
269,858
Do mainstream Linux distributions typically log system temperature data, e.g. CPU or HDD temperature? If so, where can those logs be found?
I'm not aware of a mainstream Linux distribution which logs this type of information by default. Most mainstream Linux distributions do include various packages which can log temperatures, and some of these packages are set up to log by default. Taking Debian as an example, sensord will periodically log all the information it knows about (system temperatures, voltages etc.) to the system log, but it needs to be configured manually before it can log anything useful; hddtemp can be set up to periodically log hard drive temperatures. Many other tools can retrieve this type of information (using IPMI, SNMP, etc.) but again in most cases they need to be configured, either to be able to access the information in the first place, or to be able to interpret it, or both. This configuration requirement means that it would be difficult to set up a generic distribution which logs temperatures by default in a meaningful way. (Most of the systems I've seen have at least one, invalid, monitoring entry which would set off alarms if it was auto-configured!) Of course it's entirely possible to set up an installer image for your own systems since you know what they are and how they're configured... Once you've configured the various tools required to extract temperature information, you'd be better off using a proper monitoring tool (such as Munin ) to log the temperatures instead of relying on the system logs. That way you can also set up alerts to be notified when things start going wrong. Expanding on the sensord example, you can find its output in the system log, with sensord as the process name; so either look for sensord in /var/log/syslog (by default), or run journalctl -u sensord . You'll see periodic logs like the following (I've removed the date and hostname): sensord[2489]: Chip: acpitz-virtual-0sensord[2489]: Adapter: Virtual devicesensord[2489]: temp1: 27.8 Csensord[2489]: temp2: 29.8 Csensord[2489]: Chip: coretemp-isa-0000sensord[2489]: Adapter: ISA adaptersensord[2489]: Physical id 0: 33.0 Csensord[2489]: Core 0: 29.0 Csensord[2489]: Core 1: 30.0 Csensord[2489]: Core 2: 26.0 Csensord[2489]: Core 3: 29.0 Csensord[2489]: Chip: nct6776-isa-0a30sensord[2489]: Adapter: ISA adaptersensord[2489]: in0: +1.80 V (min = +1.60 V, max = +2.00 V)sensord[2489]: in1: +1.86 V (min = +1.55 V, max = +2.02 V)sensord[2489]: in2: +3.41 V (min = +2.90 V, max = +3.66 V)sensord[2489]: in3: +3.39 V (min = +2.83 V, max = +3.66 V)sensord[2489]: in4: +1.50 V (min = +1.12 V, max = +1.72 V)sensord[2489]: in5: +1.26 V (min = +1.07 V, max = +1.39 V)sensord[2489]: in6: +1.04 V (min = +0.80 V, max = +1.20 V)sensord[2489]: in7: +3.31 V (min = +2.90 V, max = +3.66 V)sensord[2489]: in8: +3.22 V (min = +2.50 V, max = +3.60 V)sensord[2489]: fan1: 1251 RPM (min = 200 RPM)sensord[2489]: fan2: 0 RPM (min = 0 RPM)sensord[2489]: fan3: 299 RPM (min = 200 RPM)sensord[2489]: fan4: 1315 RPM (min = 0 RPM)sensord[2489]: fan5: 628 RPM (min = 200 RPM)sensord[2489]: SYSTIN: 32.0 C (limit = 80.0 C, hysteresis = 70.0 C)sensord[2489]: CPUTIN: 33.0 C (limit = 85.0 C, hysteresis = 80.0 C)sensord[2489]: AUXTIN: 24.0 C (limit = 80.0 C, hysteresis = 75.0 C)sensord[2489]: PECI Agent 0: 31.0 C (limit = 95.0 C, hysteresis = 92.0 C)sensord[2489]: PCH_CHIP_CPU_MAX_TEMP: 57.0 C (limit = 95.0 C, hysteresis = 90.0 C)sensord[2489]: PCH_CHIP_TEMP: 0.0 Csensord[2489]: PCH_CPU_TEMP: 0.0 Csensord[2489]: beep_enable: Sound alarm enabledsensord[2489]: Chip: jc42-i2c-9-18sensord[2489]: Adapter: SMBus I801 adapter at 0580sensord[2489]: temp1: 32.8 C (min = 0.0 C, max = 
60.0 C)sensord[2489]: Chip: jc42-i2c-9-19sensord[2489]: Adapter: SMBus I801 adapter at 0580sensord[2489]: temp1: 33.5 C (min = 0.0 C, max = 60.0 C)sensord[2489]: Chip: jc42-i2c-9-1asensord[2489]: Adapter: SMBus I801 adapter at 0580sensord[2489]: temp1: 34.0 C (min = 0.0 C, max = 60.0 C)sensord[2489]: Chip: jc42-i2c-9-1bsensord[2489]: Adapter: SMBus I801 adapter at 0580sensord[2489]: temp1: 33.2 C (min = 0.0 C, max = 60.0 C) To get this I had to determine which modules were needed (using sensors-detect ): by default the system only knew about the ACPI-reported temperatures, which don't actually correspond to anything (they never vary). coretemp gives the CPU core temperatures on Intel processors, nct6776 is the motherboard's hardware monitor, and jc42 is the temperature monitor on the DIMMs. To make it useful for automated monitoring, I should at least disable the ACPI values and re-label the fans, and correct fan4 's minimum value. There are many other configuration possibilities, lm_sensors ' example configuration file gives some idea.
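If you just want a crude temperature log without a full monitoring stack, a cron entry calling lm-sensors' sensors command is enough; a minimal sketch (the file name and log path are made up):

# /etc/cron.d/temperature-log
*/5 * * * * root (date; sensors) >> /var/log/temperatures.log 2>&1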
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
269,906
If you go to the VirusTotal link , there is a tab called file info(I think; mine is dutch). You'll see a header called "Authenticode signature block and FileVersionInfo properties" I want to extract the data under the header using Linux cli. Example: Signature verification Signed file, verified signatureSigning date 7:43 AM 11/4/2014Signers[+] Microsoft Windows[+] Microsoft Windows Production PCA 2011[+] Microsoft Root Certificate Authority 2010Counter signers[+] Microsoft Time-Stamp Service[+] Microsoft Time-Stamp PCA 2010[+] Microsoft Root Certificate Authority 2010 I used the Camera.exe in Windows 10, to somehow extract the data. I extracted the .exe file, and found a CERTIFICATE file in it, there is a lot of unreadable data, but also some text, I can read, that is - roughly - the same like the above output. How can I extract Signatures from a Windows .exe file under Linux using cli
On Linux there's a tool called osslsigncode which can process Windows Authenticode signatures. Verifying a binary's signature produces output similar to what you show in your example; on a vcredist_x86.exe I have to hand I get: $ osslsigncode verify vcredist_x86.exeCurrent PE checksum : 004136A1Calculated PE checksum: 004136A1Message digest algorithm : SHA1Current message digest : 0A9F10FB285BA0064B5537023F8BC9E06E173801Calculated message digest : 0A9F10FB285BA0064B5537023F8BC9E06E173801Signature verification: okNumber of signers: 1 Signer #0: Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Code Signing PCANumber of certificates: 7 Cert #0: Subject: /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority Cert #1: Subject: /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority Cert #2: Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Code Signing PCA Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority Cert #3: Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Code Signing PCA Cert #4: Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/OU=nCipher DSE ESN:D8A9-CFCC-579C/CN=Microsoft Timestamping Service Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Timestamping PCA Cert #5: Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/OU=nCipher DSE ESN:10D8-5847-CBF8/CN=Microsoft Timestamping Service Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Timestamping PCA Cert #6: Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Timestamping PCA Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root AuthoritySucceeded You can also extract the signature: osslsigncode extract-signature vcredist_x86.exe vcredist_x86.sig
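To check a whole directory of binaries at once, the verify call can be wrapped in a small loop; a sketch assuming osslsigncode is installed and the files end in .exe:

for f in *.exe; do
    printf '== %s ==\n' "$f"
    osslsigncode verify "$f" | grep -E 'Signature verification|Subject:'
done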
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/269906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54880/" ] }
269,912
I'm building a Makefile to automate the start of docker on osx . At the end of the start docker requires to launch this command on the shell in order to configure the shell: eval "$(docker-machine env dev)" The problem is that if I try to run it on the Makefile, this has no effect on the shell. In fact, if the launch: make start and then make status I get error from docker. This is my Makefile: CURRENT_DIRECTORY := $(shell pwd)start: @sh run.sh -d @eval $(docker-machine env dev)start-compose: @docker-compose up -dclean: @docker-compose rm --forcestop: @docker-compose stopshutdown: @docker-compose stop @docker-machine stop devbuild: @docker-compose buildstatus: @docker-compose pscli: @docker-compose run --rm web bashlog: @docker-compose logs weblogall: @docker-compose logsrestart: @docker-compose stop web @docker-compose start webrestartall: @docker-compose stop @docker-compose start.PHONY: clean start start-compose stop status shutdown build cli log logall restart restartall Is there a way to launch the eval "$(docker-machine env dev)" command on the Makefile and that will affect the shell?
If you don't have dependencies to track, Make is probably not the right tool. Make launches one shell per command (or per rule in some versions), so if you need to configure the shell, you'll need to configure each instance. It is not possible for a process to modify its parent, so if you wanted Make to modify the shell in which it has been executed, that's not possible without using the same kind of trick as docker. As $ is interpreted by Make, to have it passed to the shell it needs to be escaped. Here is a structure which could be workable (note that I combine the rules into one command), but again I don't think Make is the right tool for this kind of thing and I know nothing about docker, so I may miss something relevant.

start:
	@sh run.sh -d; \
	echo $$(docker-machine env dev) > ./docker-config

start-compose:
	@eval $$(cat ./docker-config); \
	docker-compose up -d

clean:
	@eval $$(cat ./docker-config); \
	docker-compose rm --force

stop:
	@eval $$(cat ./docker-config); \
	docker-compose stop

shutdown:
	@eval $$(cat ./docker-config); \
	docker-compose stop ; \
	docker-machine stop dev
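If you are on GNU make 3.82 or newer, another option is .ONESHELL:, which runs all lines of a recipe in a single shell, so the eval at least carries over to the following commands of the same rule (still not to other rules, and never to your interactive shell); an untested sketch:

.ONESHELL:
start:
	@sh run.sh -d
	eval $$(docker-machine env dev)
	docker-compose up -d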
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57000/" ] }
269,919
Does someone know of a GUI application (X/GTK+/Qt/whatever) that can be used instead of less or more for viewing text, specifically one piped in from standard input? Ideally looking for something that can also run on Mac OSX (or maybe even just on Mac). I'm looking to introduce UNIX newbies to the wonderful world of command line text processing (with awk , sed , grep and even some perl ) and it would be useful to show them the text using a nice GUI that allows interactive search, scrolling with the mouse (I know most Linux terminals support mouse scrolling with less, but Mac terminals do not), etc. The best thing I found so far was to pipe input into zenity --text-info , but that viewer is very limited and does not even allow searching.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4323/" ] }
269,924
I need to install from scratch a debian 6 squeeze on a computer (well, actually a few of them). 2 weeks ago (Feb 29) debian 6 squeeze reached end-of-life status. I tried to modify the default /etc/apt/sources.lists file trying to point to the new dists folder structure, without success. My best bet is: deb http://debian.grn.cat/ squeeze-lts main contrib non-free But this does not work. The error I get is root@debian:/etc/apt# apt-get updateHit http://debian.grn.cat squeeze-lts Release.gpgIgn http://debian.grn.cat/ squeeze-lts/contrib Translation-enIgn http://debian.grn.cat/ squeeze-lts/contrib Translation-en_USIgn http://debian.grn.cat/ squeeze-lts/main Translation-enIgn http://debian.grn.cat/ squeeze-lts/main Translation-en_USIgn http://debian.grn.cat/ squeeze-lts/non-free Translation-enIgn http://debian.grn.cat/ squeeze-lts/non-free Translation-en_USHit http://debian.grn.cat squeeze-lts ReleaseW: Failed to fetch http://debian.grn.cat/dists/squeeze-lts/ReleaseUnable to find expected entry main/binary-i386/Packages in Meta-index file (malformed Release file?) I have checked the contents of this Release file and indeed it contains a broken reference that I may detail if required. My question is: Should I give up hacking / debugging with this issue, and move along debian 7 / 8? I am already using debian 8.1 with my development machine, but we need debian 6 for legacy deployed machines that do not have internet. Migrating these machines is not easily possible. Follows a rant: Why the debian developers want to force us to upgrade distribution? If our solution works on debian 6, we do not want to take risks, and spend time and money migrating on every release. Give us the freedom you claim to promote!
To install Debian 6 (or 6 LTS) you need to use http://archive.debian.org ; e.g. in your sources.list :

deb http://archive.debian.org/ squeeze main contrib non-free
deb http://archive.debian.org/ squeeze-lts main contrib non-free

As to your rant, while I understand your frustration, it's a cost issue: keeping a release around on the main mirror network costs storage space (and hence money, indirectly), and creates support expectations which can't be fulfilled within the existing LTS (or project more generally) framework.
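Depending on the apt version, apt-get update may also complain that the archived Release files have expired; if so, validity checking can be disabled for that run (hedged — verify the option exists on your apt):

apt-get -o Acquire::Check-Valid-Until=false update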
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50346/" ] }
269,967
I've searched around and the only thing I've found is that "yes, OpenVPN supports connections over TCP" , but I haven't found any way to coerce the openvpn server to listen the same port for both protocols at the same time. I've found some very old guides about creating tap interfaces , or recommending to have another instance of the server with the same configuration running at the same time. The former looks too complicated for something simple, and the later seems obsolete.
The same openvpn process can't listen on UDP and TCP sockets at the same time. You have two good options: use two tap interfaces for openvpn. Have two openvpn server processes, one for each tap interface; one should listen on UDP, the other on TCP. Bridge these two tap interfaces on the server. use two tun interfaces. These can't be bridged, so if you want to share the IP space between TCP and UDP clients, you'll need to use a learn-address script like the one at http://thomas.gouverneur.name/2014/02/openvpn-listen-on-tcp-and-udp-with-tun/ (however, this specific script is vulnerable to a /tmp symlink attack, so remove the logging to /tmp if you use it). The third option is to just run two openvpn instances and assign separate client IP space to both (for example, one /25 from the same /24 subnet each). This avoids bridging and the need for a learn-address script. EDIT: since I needed such a learn-address script myself, I wrote one. I place it in the public domain. #!/bin/sh## This script allows an openvpn server with several openvpn instances that# use tun interfaces to share client IP space by adjusting the routing table# to create entries towards specific clients as neededaction="$1"addr="$2"cn="$3" # not used, but it's there; you could e.g. log itcase "$action" in add) echo "sudo ip ro add $addr/32 dev $dev" >&2 sudo ip ro change $addr/32 dev $dev || sudo ip ro add $addr/32 dev $dev # if a route already existed, add will fail but change should work, and vice versa ;; delete)# even if a client connects to one OpenVPN instance first, then reconnects to the other before the first connection times out, the "add" case above sets up the correct route; it may thus not be necessary to delete routes, ever# echo "sudo ip ro del $addr/32 dev $dev" >&2# sudo ip ro del $addr/32 dev $dev# exit 0 # ignore errors ;; update) echo "sudo ip ro change $addr/32 dev $dev" >&2 sudo ip ro change $addr/32 dev $dev \ || exec sudo ip ro add $addr/32 dev $dev # 'change' can fail with ENOENT, in which case we must 'add' ;;esac This script logs to stderr, which should end up in the openvpn log.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/269967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41104/" ] }
270,005
I'm using Fedora 23, MATE edition. The computer feels slow to boot. How can I speed it up? Full details http://i.imgur.com/vrLGXDp.jpg $ systemd-analyze Startup finished in 16.571s (firmware) + 2.605s (loader) + 824ms (kernel) + 1.997s (initrd) + 48.466s (userspace) = 1min 10.464s$ systemd-analyze blame 31.448s mlocate-updatedb.service 18.211s akmods.service 16.019s firewalld.service 9.127s systemd-journald.service 7.709s accounts-daemon.service 7.368s dev-sdd3.device 7.037s systemd-udev-settle.service 5.219s abrtd.service 4.854s chronyd.service 4.629s ModemManager.service 4.081s livesys.service 3.958s unbound-anchor.service 3.920s systemd-logind.service 3.823s rsyslog.service 3.781s gssproxy.service 3.780s akmods-shutdown.service 3.698s avahi-daemon.service 3.651s mcelog.service 3.636s rtkit-daemon.service 2.735s polkit.service 2.163s systemd-udevd.service 2.150s lvm2-monitor.service 1.569s proc-fs-nfsd.mount$ systemd-analyze critical-chain The time after the unit is active or started is printed after the "@" character.The time the unit takes to start is printed after the "+" character.graphical.target @35.395s└─lightdm.service @34.563s +830ms └─systemd-user-sessions.service @34.146s +129ms └─remote-fs.target @34.143s └─remote-fs-pre.target @34.143s └─iscsi-shutdown.service @34.128s └─network.target @34.019s └─NetworkManager.service @33.009s +1.009s └─firewalld.service @16.979s +16.019s └─polkit.service @17.870s +2.735s └─basic.target @12.883s └─sockets.target @12.864s └─dbus.socket @12.844s └─sysinit.target @12.704s └─sys-fs-fuse-connections.mount @48.351s +3ms └─system.slice └─-.slice
It is a known issue, described in Red Hat Bugzilla:

    systemd's lack of random delay functionality of cron is hitting us. I've seen that there is a feature request for that. But till then, it seems that manually putting a random sleep before running updatedb is the workaround. I'd suggest either reverting to using cron for running updatedb for now, or putting a random or specific sleep before running updatedb, e.g. sleep 1h.

Workaround:

sed 's/daily/weekly/' /usr/lib/systemd/system/mlocate-updatedb.timer >/etc/systemd/system/mlocate-updatedb.timer

Now I only have to tolerate the slow boot on Monday.
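On newer systemd (229 and later) the missing random delay exists as RandomizedDelaySec=, so instead of patching the timer you could add a drop-in; a sketch, which may not apply to every Fedora 23 install:

# /etc/systemd/system/mlocate-updatedb.timer.d/delay.conf
[Timer]
RandomizedDelaySec=1h

followed by systemctl daemon-reload.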
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7732/" ] }
270,008
How can I retrieve from the command line (or a shell script) only the name of the active network interface, in Linux?If there are several active interfaces, I want just one (selected arbitrarily).
The modern way of doing this is using the ip command. For example, on my system with my wireless connection active, I get: $ ip addr show1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 6 ::1/128 scope host valid_lft forever preferred_lft forever2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 00:26:b9:dd:2c:28 brd ff:ff:ff:ff:ff:ff3: wlp3s0b1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether c4:46:19:5f:dc:f5 brd ff:ff:ff:ff:ff:ff inet 192.168.1.4/24 brd 192.168.1.255 scope global wlp3s0b1 ← valid_lft forever preferred_lft forever inet 6 fe80::c646:19ff:fe5f:dcf5/64 scope link valid_lft forever preferred_lft forever16: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 100 link/none inet 123.167.217.2/24 brd 123.167.217.255 scope global tun0 ← valid_lft forever preferred_lft forever The active interface(s) have both an inet entry and a broadcast ( brd ) address.You can show all such interfaces with: $ ip addr show | awk '/inet.*brd/{print $NF}'wlp3s0b1tun0 If you want only one, you can get the first one (only) with: $ ip addr show | awk '/inet.*brd/{print $NF; exit}'wlp3s0b1 The exit statement tells awk to stop searchingafter it finds the first match.
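For use inside a shell script, the one-liner can simply be captured into a variable; a minimal sketch based on the command above:

iface=$(ip addr show | awk '/inet.*brd/{print $NF; exit}')
printf 'Active interface: %s\n' "$iface"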
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92253/" ] }
270,014
Signals are the way of communication between process but I have some questions What are signal traps? How the traps are related to signals in operating system?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154352/" ] }
270,022
I am using Raspbian (like Debian) and I used this tutorial https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/ to setup my Raspbian as a wifi access point. Clients can connect to AP successfuly. But how can I do this - client should be able to open page http://local and it should point to my apache on AP. I don't want to set /etc/hosts on clients (they can vary) so I need to set it on AP directly and it should serve the right IP to clients when they open http://local . I followed dnsmasq this How to make a machine accessible from the LAN using its hostname but it is not working (it worked for a while but then it stopped working) How should I set my AP to serve the right name IP translation?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270022", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79705/" ] }
270,023
I want to use sed to change the following text: (3)www(5)first(3)nth(6)domain(3)com(0) to: www.first.nth.domain.com Can each group between the parenthesis separators be captured and then reconstructed in order with period separators assuming that there will be from 2 to n+3 groups (infinity)? Is there another way? I am already familiar with: s/\(\d+\)/./g but that only yields: .www.first.nth.domain.com.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270023", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161259/" ] }
270,036
I have been working on getting /etc/security/access.conf to work as expected and so far every user can still login. Details are below. I added the following line to /etc/pam.d/login account required pam_access.so I also added the following lines in /etc/security/access.conf + : root : ALL+ : group_name : ALL- : ALL : ALL group_name is a group inside of our LDAP server (FreeIPA). Any user is still able to login regardless if they are a part of ${group_name}. I can SSH into the server without any issues from any user. Can someone help point out where I am incorrect at? I am running RHEL 6.5. Thanks
From man access.conf :

    Each line of the login access control table has three fields separated by a ":" character (colon):

        permission:users/groups:origins

    The first field, the permission field, can be either a "+" character (plus) for access granted or a "-" character (minus) for access denied. The second field, the users/group field, should be a list of one or more login names, group names, or ALL (which always matches). To differentiate user entries from group entries, group entries should be written with brackets, e.g. (group).

So your /etc/security/access.conf should look like this:

+ : root : ALL
+ : (group_name) : ALL
- : ALL : ALL
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147475/" ] }
270,071
I have large tree, with many pdf files in it. I want to delete the pdf files in this tree, but only those pdf files in sub folders named rules/ There are other type of files inside rules/ . The rules/ subfolders have no other subfolders. For example, I have this tree. Everything below 'source' source/ A/ rules/*.pdf, *.txt, *.c,etc.. etc/ B/ keep_this.pdf rules/*.pdf whatever/ C/ D/ rules/*.pdf something/ and so on. There are pdf files all over the place, but I only want to delete all the pdf files which are in folders called rules/ and no other place. I think I need to use cd source find / -type d -name "rules" -print0 | xargs -0 <<<rm *.pdf?? now what?>>> But I am not sure what to do after getting list of all subfolders named rules/ Any help is appreciated. On Linux mint.
I would execute a find inside another find . For example, I would execute this command line in order to list the files that would be removed: $ find /path/to/source -type d -name 'rules' -exec find '{}' -mindepth 1 -maxdepth 1 -type f -iname '*.pdf' -print ';' Then, after checking the list, I would execute: $ find /path/to/source -type d -name 'rules' -exec find '{}' -mindepth 1 -maxdepth 1 -type f -iname '*.pdf' -print -delete ';'
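Since the rules/ directories are stated to have no subdirectories, a single find with a -path (or -ipath) test would also work; a hedged alternative, with /path/to/source standing for your tree root:

find /path/to/source -type f -ipath '*/rules/*.pdf' -print     # inspect the list first
find /path/to/source -type f -ipath '*/rules/*.pdf' -delete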
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30274/" ] }
270,075
Please look at the output below: bob ~ # df -hFilesystem Size Used Avail Use% Mounted onudev 5,7G 4,0K 5,7G 1% /devtmpfs 1,2G 1,5M 1,2G 1% /run/dev/mapper/mint--vg-root 218G 66G 142G 32% /none 4,0K 0 4,0K 0% /sys/fs/cgrouptmpfs 5,7G 528M 5,2G 10% /tmpnone 5,0M 0 5,0M 0% /run/locknone 5,7G 99M 5,6G 2% /run/shmnone 100M 48K 100M 1% /run/usertmpfs 5,7G 44K 5,7G 1% /var/tmp/dev/sda1 236M 132M 93M 59% /boot df reports that LVM partition has 218G whereas it must be 250G, well 232G if to recalculate with 1024. So where is 14G? But even 218-66=152 not 142! That is 10 more Gigabytes which are also nowhere? Other utils output: bob ~ # pvs PV VG Fmt Attr PSize PFree /dev/sda5 mint-vg lvm2 a-- 232,64g 0 bob ~ # pvdisplay --- Physical volume --- PV Name /dev/sda5 VG Name mint-vg PV Size 232,65 GiB / not usable 2,00 MiB Allocatable yes (but full) PE Size 4,00 MiB Total PE 59557 Free PE 0 Allocated PE 59557 PV UUID 3FA5KG-Dtp4-Kfyf-STAZ-K6Qe-ojkB-Tagr83bob ~ # fdisk -l /dev/sdaDisk /dev/sda: 250.1 GB, 250059350016 bytes255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00097b2a Device Boot Start End Blocks Id System/dev/sda1 * 2048 499711 248832 83 Linux/dev/sda2 501758 488396799 243947521 5 Extended/dev/sda5 501760 488396799 243947520 8e Linux LVM# sfdisk -l -uMDisk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/trackWarning: extended partition does not start at a cylinder boundary.DOS and Linux will interpret the contents differently.Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0 Device Boot Start End MiB #blocks Id System/dev/sda1 * 1 243 243 248832 83 Linux/dev/sda2 244+ 238474 238231- 243947521 5 Extended/dev/sda3 0 - 0 0 0 Empty/dev/sda4 0 - 0 0 0 Empty/dev/sda5 245 238474 238230 243947520 8e Linux LVMDisk /dev/mapper/mint--vg-root: 30369 cylinders, 255 heads, 63 sectors/tracksfdisk: ERROR: sector 0 does not have an msdos signature /dev/mapper/mint--vg-root: unrecognized partition table typeNo partitions found Linux Mint 17.3 UPDATE # lvdisplay --- Logical volume --- LV Path /dev/mint-vg/root LV Name root VG Name mint-vg LV UUID ew9fDY-oykM-Nekj-icXn-FQ1T-fiaC-0Jw2v6 LV Write Access read/write LV Creation host, time mint, 2016-02-18 14:52:15 +0200 LV Status available # open 1 LV Size 232,64 GiB Current LE 59557 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 252:0 Regarding swap. Initially it was there, in LVM. 
Then I removed it and extended root partition with the space which was used by the swap (about 12G) UPDATE2 # tune2fs -l /dev/mapper/mint--vg-roottune2fs 1.42.9 (4-Feb-2014)Filesystem volume name: <none>Last mounted on: /Filesystem UUID: 0b5ecf9b-a763-4371-b4e7-01c36c47b5ccFilesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isizeFilesystem flags: signed_directory_hash Default mount options: user_xattr aclFilesystem state: cleanErrors behavior: ContinueFilesystem OS type: LinuxInode count: 14491648Block count: 57952256Reserved block count: 2897612Free blocks: 40041861Free inodes: 13997980First block: 0Block size: 4096Fragment size: 4096Reserved GDT blocks: 1010Blocks per group: 32768Fragments per group: 32768Inodes per group: 8192Inode blocks per group: 512Flex block group size: 16Filesystem created: Thu Feb 18 14:52:49 2016Last mount time: Sun Mar 13 16:49:48 2016Last write time: Sun Mar 13 16:49:48 2016Mount count: 22Maximum mount count: -1Last checked: Thu Feb 18 14:52:49 2016Check interval: 0 (<none>)Lifetime writes: 774 GBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Journal inode: 8First orphan inode: 6160636Default directory hash: half_md4Directory Hash Seed: 51743315-0555-474b-8a5a-bbf470e3ca9fJournal backup: inode blocks UPDATE3 (Final) Thanks to Jonas the space loss has been found # df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/mint--vg-root 218G 65G 142G 32% /# resize2fs /dev/mapper/mint--vg-rootresize2fs 1.42.9 (4-Feb-2014)Filesystem at /dev/mapper/mint--vg-root is mounted on /; on-line resizing requiredold_desc_blocks = 14, new_desc_blocks = 15The filesystem on /dev/mapper/mint--vg-root is now 60986368 blocks long.# df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/mint--vg-root 229G 65G 153G 30% / and this is a diff of tune2fs command output before and after resize2fs running # diff /tmp/tune2fs_before_resize2fs /tmp/tune2fs2_after_resize2fs13,17c13,17< Inode count: 14491648< Block count: 57952256< Reserved block count: 2897612< Free blocks: 40041861< Free inodes: 13997980---> Inode count: 15253504> Block count: 60986368> Reserved block count: 3018400> Free blocks: 43028171> Free inodes: 1475983621c21< Reserved GDT blocks: 1010---> Reserved GDT blocks: 100938c38< Inode size: 256---> Inode size: 25642c42< First orphan inode: 6160636---> First orphan inode: 5904187
Let us do some research. I have noticed that difference before, but never checked in detail what to attribute the losses to. Have a look at my scenario for comparision: fdisk shows the following partition: /dev/sda3 35657728 1000214527 964556800 460G 83 Linux There will be some losses as I my filesystem lives in a luks container, but that should only be a few MiB. df shows: Filesystem Size Used Avail Use% Mounted on/dev/dm-1 453G 373G 58G 87% / (The luks container is also why /dev/sda3 does not match /dev/dm-1, but they are really the same device, with encryption inbetween, no LVM. This also shows that LVM is not responsible for your losses, I have them too.) Now lets ask the filesystem itself on that matter. Calling tune2fs -l , which outputs a lot of interesting information about ext-family filesystems, we get: root@altair ~ › tune2fs -l /dev/dm-1tune2fs 1.42.12 (29-Aug-2014)Filesystem volume name: <none>Last mounted on: /Filesystem UUID: 0de04278-5eb0-44b1-9258-e4d7cd978768Filesystem magic number: 0xEF53Filesystem revision #: 1 (dynamic)Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isizeFilesystem flags: signed_directory_hash Default mount options: user_xattr aclFilesystem state: cleanErrors behavior: ContinueFilesystem OS type: LinuxInode count: 30146560Block count: 120569088Reserved block count: 6028454Free blocks: 23349192Free inodes: 28532579First block: 0Block size: 4096Fragment size: 4096Reserved GDT blocks: 995Blocks per group: 32768Fragments per group: 32768Inodes per group: 8192Inode blocks per group: 512Flex block group size: 16Filesystem created: Wed Oct 14 09:27:52 2015Last mount time: Sun Mar 13 12:25:50 2016Last write time: Sun Mar 13 12:25:48 2016Mount count: 23Maximum mount count: -1Last checked: Wed Oct 14 09:27:52 2015Check interval: 0 (<none>)Lifetime writes: 1426 GBReserved blocks uid: 0 (user root)Reserved blocks gid: 0 (group root)First inode: 11Inode size: 256Required extra isize: 28Desired extra isize: 28Journal inode: 8First orphan inode: 26747912Default directory hash: half_md4Directory Hash Seed: 4723240b-9056-4f5f-8de2-d8536e35d183Journal backup: inode blocks Glancing over it, the first which springs into your eyes should be Reserved blocks . Multiplying that with the Block size (also from the output), we get the difference between the df Used+Avail and Size: 453GiB - (373GiB+58GiB) = 22 GiB6028454*4096 Bytes = 24692547584 Bytes ~= 23 GiB Close enough, especially considering that df rounds (using df without -h and repeating the calculation leaves only 16 MiB of the difference between Used+Avail and Size unexplained). To whom the reserved blocks are reserved is also written in the tune2fs output. It is root. This is a safety-net to ensure that non-root users cannot make the system entirely unusable by filling the disk, and keeping a few percent of disk space unused also helps against fragmentation. Now for the difference between the size reported by df and the size of the partition. This can be explained by taking a look at the inodes. ext4 preallocates inodes, so that space is unusable for file data. Multiply the Inode count by the Inode size , and you get: 30146560*256 Bytes = 7717519360 Bytes ~= 7 GiB453 GiB + 7 GiB = 460 GiB Inodes are basically directory entries. Let us ask mkfs.ext4 about details (from man mkfs.ext4): -i bytes-per-inode Specify the bytes/inode ratio. 
mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since in that case more inodes would be made than can ever be used. Be warned that it is not possible to change this ratio on a filesystem after it is created, so be careful deciding the correct value for this parameter. Note that resizing a filesystem changes the number of inodes to maintain this ratio. There are different presets to use for different scenarios. On a file server with lots of linux distribution images, it makes sense to pass e.g. -T largefile or even -T largefile4 . What -T means is defined in /etc/mke2fs.conf , in those examples and on my system:

largefile = {
    inode_ratio = 1048576
}
largefile4 = {
    inode_ratio = 4194304
}

So with -T largefile4 , the number of inodes is much less than the default (the default ratio is 16384 in my /etc/mke2fs.conf ). This means less space reserved for directory entries, and more space for data. When you run out of inodes, you cannot create new files. Increasing the number of inodes in an existing filesystem does not seem to be possible . Thus, the default number of inodes is rather conservatively chosen to ensure that the average user does not run out of inodes prematurely. I just figured that out by poking at my numbers, let me know if it (does not) work for you ☺.
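The arithmetic above can be reproduced directly from the tune2fs output; a rough sketch (field labels as printed by e2fsprogs, device name taken from this question):

dev=/dev/mapper/mint--vg-root
tune2fs -l "$dev" | awk -F: '
    /^Reserved block count/ {rb=$2}
    /^Block size/           {bs=$2}
    /^Inode count/          {ic=$2}
    /^Inode size/           {is=$2}
    END {
        printf "reserved for root: %.1f GiB\n", rb*bs/2^30
        printf "inode tables:      %.1f GiB\n", ic*is/2^30
    }'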
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37713/" ] }
270,130
Bash Manual says: When the [ form is used, the last argument to the command must be a ]. $ type [[ is a shell builtin$ type ]bash: type: ]: not found So ] isn't a reserved word, nor is it an operator, nor is it a builtin command. As a token, what is the token identifier of ] ? WORD or NAME?
] complements [ : it is the closing sign of the [ command. As the man page points out, this is actually an argument to [ , but [ happens to treat it specially, as the terminator. You can compare it to other command-closing patterns, for example ; in find .. -exec .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
270,136
I'm Belgian, and so the keyboard layout on my CentOS 7 is be-latin1 . I've set it up with # loadkeys be-latin1 . Everything up to here is fine, my chars are all corrects. The thing is, when I switch to another TTY ( Ctrl + Alt + Fx ), the keyboard layout is qwerty... How can I set this keyboard layout to be-latin1 too ?Is it possible to do this everywhere in the PC ? I tried editing the file /etc/sysconfig/keyboard , but it doesn't exist...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155821/" ] }
270,140
Suppose I have the following:

foo1=abc
i=1
a="FOO${i}"
echo ${${a}}
echo ${`echo $a`} # I also tried that

I am getting the error bash: ${${a}}: bad substitution .
You can use parameter indirection ${!parameter} i.e. in your case ${!a} :

$ foo1=abc
$ i=1
$ a="foo${i}"
$ echo "${!a}"
abc

From "Parameter Expansion" section of man bash :

    ${parameter}
    ....... If the first character of parameter is an exclamation point (!), it introduces a level of variable indirection. Bash uses the value of the variable formed from the rest of parameter as the name of the variable; this variable is then expanded and that value is used in the rest of the substitution, rather than the value of parameter itself.
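On bash 4.3 and later, a nameref is an alternative to the ${!a} indirection; a small sketch:

foo1=abc
i=1
declare -n a="foo${i}"   # 'a' is now a reference to the variable named foo1
echo "$a"                # prints: abc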
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110103/" ] }
270,171
I am studying the command here in the post about How to compile and install Qt, qwt and overclock the RPI sudo mount -o loop,offset=62914560<date>-wheezy-raspbian.img /mnt/rasp-pi-rootfs I do fdisk 2016-02-26-raspbian-jessie.img and I get Disk: 2016-02-26-raspbian-jessie.img geometry: 976/128/63 [7870464 sectors] Signature: 0xAA55 Starting Ending #: id cyl hd sec - cyl hd sec [ start - size] ------------------------------------------------------------------------ 1: 0C 0 130 3 - 8 40 32 [ 8192 - 122880] Win95 FAT32L 2: 83 8 40 33 - 489 232 63 [ 131072 - 7739392] Linux files* 3: 00 0 0 0 - 0 0 0 [ 0 - 0] unused 4: 00 0 0 0 - 0 0 0 [ 0 - 0] unused Why is offset specific in mount?
As 62914560 points exactly 60MiB into the file, I think the best guess would be that the Raspian disk image is actually partitioned. The offset tells mount (or actually losetup ) the actual offset of the root file-system (I suggest this is the second of two partitions, the first most-probably being /boot resp. the bootloader/firmware files). The problem here is that even though the loop driver actually supports partitioned images, the number of maximum partitions per loop device has to be specified as a module parameter when loading the module (or on the kernel command line). As there are many distros out there that won't do this by default, ...,offset=XXX is the most reliable way to cope with partitioned images when loop uses the default parameter (which is 0, hence no partition support). You can test whether your loop driver was loaded with partition-support by looking into /sys/module/loop/parameters/max_part . On my current system (ArchLinux), after loading loop without parameters this is: $ cat /sys/module/loop/parameters/max_part0 To enable partitioning-support, you will have to unload loop and load it again with the desired value for the max_part options, e.g. # modprobe -r loop# modprobe loop max_part=8 After this, you could try to manually set-up the loop-device for your image by doing # losetup /dev/loop0 /path/to/<date>-wheezy-raspbian.img Now, you should not only see /dev/loop0 representing the whole image, but (as long as my theory is correct ;) also have /dev/loop0p1 , /dev/loop0p2 , etc., for all partitions in the image. Edit: If you want to do this yourself the tedious way (I'd suggest simply reloading loop with the correct max_part option and simply using the partitions), you could find out which offset is required by using fdisk directly on the image-file (shown with an ArchLinux ISO, as I had it on hand, but the idea is the same): $ fdisk -l archlinux-2016.03.01-dual.isoDisk archlinux-2016.03.01-dual.iso: 268.3 MiB, 281339392 bytes, 549491 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x2237702cDevice Boot Start End Sectors Size Id Typearchlinux-2016.03.01-dual.iso1 * 0 1452031 1452032 709M 0 Emptyarchlinux-2016.03.01-dual.iso2 172 63659 63488 31M ef EFI (FAT-12/16/32) The second partition starts at sector 172 with a sector size of 512 bytes. Multiplying both values gives you the offset in bytes, thus to mount the partition, you'll use: # mount -o loop,offset=$((172*512)) archlinux-2016.03.01-dual.iso /mnt# ls -l /mnttotal 4drwxr-xr-x 4 root root 2048 Mar 1 15:49 EFIdrwxr-xr-x 3 root root 2048 Mar 1 15:49 loader Voila.
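On reasonably recent util-linux, losetup can also scan the partition table itself with -P/--partscan, which avoids both the module parameter and the manual offset; a sketch using the image path from the question:

losetup -P -f --show /path/to/<date>-wheezy-raspbian.img    # prints the device, e.g. /dev/loop0
mount /dev/loop0p2 /mnt/rasp-pi-rootfs                      # p2 = the Linux root partition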
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
270,174
I have a python script that create some files and it ask me to enter a password this script is called from bash scriptis their any way to get the entered password and echo it to text file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146337/" ] }
270,175
Running sudo mount -o rw,remount /mnt/Data reports above error. Here is the fstab file # /etc/fstab: static file system information.## Use 'blkid' to print the universally unique identifier for a# device; this may be used with UUID= as a more robust way to name devices# that works even if disks are added and removed. See fstab(5).## <file system> <mount point> <type> <options> <dump> <pass># / was on /dev/sda7 during installationUUID=c8fd3429-3454-41df-ae9c-0f98615bc314 / ext4 errors=remount-ro 0 1# /boot/efi was on /dev/sda2 during installation#UUID=1EF0-739E /boot/efi vfat defaults 0 1# swap was on /dev/sda10 during installationUUID=47da3636-057c-4fb5-ab12-383d13d914c6 none swap sw 0 0#DataUUID=C06EDC746EDC6526 /mnt/Data ntfs-3g defaults auto umask=7770 0 1UUID=1EF0-739E /boot/efi vfat defaults 0 1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161379/" ] }
270,199
For any given version or installation of Linux Mint, how would I find out which version of Ubuntu it is based on? I'm sure it must be in documentation somewhere right?
You'll find the Ubuntu version in the /etc/upstream-release/lsb-release file:

$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty

To figure out which subrelease you are using, you need to know what kernel you are running, e.g. here kernel 3.19:

$ uname -r
3.19.0-32-generic

Then you compare it with the 14.04.x Ubuntu Kernel Support schedule, which says that in my case the 3.19 kernel matches 14.04.3.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/270199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106998/" ] }
270,212
I would like to define a Bash variable PART_ID as being equal to the UUID of the /dev/sdb1 partition. The closest I have gotten to the desired answer is the output of: ls -ld /dev/disk/by-uuid/* | grep sdb1 which, for me, gives: lrwxrwxrwx 1 root root 10 Mar 16 17:02 /dev/disk/by-uuid/d26c3e60-0cfb-4118-9dec-1f1819439790 -> ../../sdb1 which is not an acceptable value for me to set PART_ID to. Rather what PART_ID should equal is d26c3e60-0cfb-4118-9dec-1f1819439790 .
Note that's the UUID of the filesystem (or other structured data with a UUID that the udev scripts know about) on the partition, not the UUID of the partition itself (not all partitioning schemes give UUIDs to partitions anyway). See also Difference between UUID from blkid and mdadm? . A few options on Linux-based systems to get the FS UUID:

fs_uuid=$(blkid -o value -s UUID /dev/sdb1)
fs_uuid=$(lsblk -no UUID /dev/sdb1)
fs_uuid=$(udevadm info -n sdb1 -q property | sed -n 's/^ID_FS_UUID=//p')
fs_uuid=$(find /dev/disk/by-uuid -lname '*/sdb1' -printf %f)

The first one may require superuser privileges or at least the right to read the device. If the filesystem is mounted, you can also use:

fs_uuid=$(findmnt -fn -o UUID /dev/sdb1)
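As a quick illustration of using the captured value, here is a hedged sketch; the device name and mount point are only examples:

fs_uuid=$(lsblk -no UUID /dev/sdb1) || exit 1
echo "PART_ID=$fs_uuid"
# the UUID can then be used wherever a device reference is expected, e.g.:
mount UUID="$fs_uuid" /mnt/data
# or turned into an fstab line (double-check before doing this on a real system):
echo "UUID=$fs_uuid /mnt/data ext4 defaults 0 2" >> /etc/fstab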
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
270,272
In the second method proposed by this page , one gets the tty in which bash is being run with the command: ps ax | grep $$ | awk '{ print $2 }' I though to myself that surely this is a bit lazy, listing all running processes only to extract one of them. Would it not be more efficient (I am also asking if this would introduce unwanted effects) to do: ps -p $$ | tail -n 1 | awk '{ print $2 }' FYI, I came across this issue because sometimes the first command would actually yield two (or more) lines. This would happen randomly, when there would be another process running with a PID that contains $$ as a substring. In the second approach, I am avoiding such cases by requesting the PID that I know I want.
Simply by typing tty : $ tty /dev/pts/20 Too simple and obvious to be true :) Edit: The first one returns you also the pty of the process running grep as you can notice: $ ps ax | grep $$28295 pts/20 Ss 0:00 /bin/bash29786 pts/20 S+ 0:00 grep --color=auto 28295 therefore you would need to filter out the grep to get only one result, which is getting ugly: ps ax | grep $$ | grep -v grep | awk '{ print $2 }' or using ps ax | grep "^$$" | awk '{ print $2 }' (a more sane variant)
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/270272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45354/" ] }
270,274
From POSIX 7: The order of word expansion shall be as follows: Tilde expansion (see Section 2.6.1), parameter expansion (see Section 2.6.2), command substitution (see Section 2.6.3), and arithmetic expansion (see Section 2.6.4) shall be performed, beginning to end. See item 5 in Section 2.3. Field splitting (see Section 2.6.5) shall be performed on the portions of the fields generated by step 1, unless IFS is null. Pathname expansion (see Section 2.6.6) shall be performed, unless set −f is in effect. Quote removal (see Section 2.6.7) shall always be performed last. Do tilde expansion, parameter expansion, command substitution, and arithmetic expansion perform in the specified order? Does the order between them matter? If yes, how shall we understand why the order is as specified? Why does pathname expansion happen after field splitting, while other expansions before field splitting? In particular, both tilde expansion and pathname expansion are about pathnames and filenames, why are they placed differently with respect to field splitting? Is there no brace expansion in POSIX? I notice "word expansion". Do expansions apply only to tokens with token identifier WORD, and not to tokens with other token identifiers (e.g. NAME, specific operator, NEWLINE, IO_NUMBER, ASSIGNMENT)?
Tilde expansion, parameter expansion, command substitution and arithmetic expansion are listed in the same step. That means that they are performed at the same time . The result of tilde expansion does not undergo parameter expansion, the result of parameter expansion does not undergo tilde expansion, and so on. For example, if the value of foo is $(bar) qux , then the word $foo expands to $(bar) qux at step 1; the text resulting from parameter expansion is not subject to any further transformation at step 1, but it then gets split by step 2. “Beginning to end” means left-to-right processing, which matters e.g. when assignments occur: a=1; echo $a$((a=2))$a prints 122 , because arithmetic expansion of $((a=2)) is performed, setting a to 2, between the parameter expansion of the first $a and the parameter expansion of the second $a . The reason for the order is historical usage. POSIX usually follows existing implementation, it rarely specifies new behavior. There are multiple shells around; for the most part, POSIX follows the Korn shell but omits most features that are not present in the Bourne shell (as the Bourne shell is largely abandoned, the next version of POSIX is likely to include new ksh features though). The reason why the Bourne shell performed parameter expansion then field splitting then globbing is that it allowed a glob to be stored in a variable: you can set a to *.txt *.pdf and then use $a to stand for the list of names of files matching *.txt followed by the list of names matching *.pdf (assuming both patterns match). (I'm not saying this is the best design possible, just that it was designed this way.) It's less clear to me why one would want command substitution to be placed at a particular step in the Bourne shell; in the Korn shell, its syntax $(…) is close to parameter expansion ${…} so it makes sense to perform them together. The placement of tilde expansion is a historical oddity. It would have made more sense to place it later, so that you could write ~$some_user and have it expand to the home directory of the user whose name is the value of the variable some_user . I don't know why it wasn't done this way. This order even requires a special statement that the result of tilde expansion does not undergo other expansions (going by the passage you quoted, if HOME is /foo bar then ~ would expand to the two words /foo and bar due to field splitting, but no shell does that and POSIX.2008 explicitly states that “the pathname resulting from tilde expansion shall be treated as if quoted”). There is no brace expansion in POSIX, otherwise the specification would state it. Word expansion is only performed on WORDs, and with caveats mentioned in the following sections (e.g. field splitting and pathname generation are only performed in contexts that allow multiple words, not e.g. between double quotes). NAMEs, NEWLINEs, IO_NUMBERs and so on don't contain anything that could be expanded anyway.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270274", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
270,334
The shortcut for "move windows to another workspace" in Xfce should be Ctrl + Alt + Shift + ← / → / ↑ / ↓ . But it doesn't work, there're no such shortcuts. Why, am I missing anything?
There isn't any. By default, the action "Move window to left/right/up/down workspace" has no shortcuts set and that has not changed since Xfce 4.6 to this date. So the shortcuts might have been deprecated earlier or not adopted at all. But there should be Those 'old' shortcuts were originally found in GNOME; the original author of this answer was aware of this, because they had been using GNOME 2 before switching to Xfce. The oldest known proof is shown by the screenshot with additional highlight as follows. Source: Xfce 4.6 tour , screenshots by Jannis Pohlmann. The original screenshot was used to describe "fill operation" for xfwm4, which luckily showing the unset window shortcuts. Revive them anyway To define shortcuts for the action "Move window to left/right/up/down workspace", user can configure in xfwm4-settings or navigate from Settings Manager in Xfce. Go to Settings Manager > Window Manager - Keyboard In the tab, scroll down until "Toggle fullscreen" entry and the relevant actions "Move window to..." are listed below it with empty column on the right For the corresponding action "Move window to upper workspace", do either double-click the empty column , or select the row and click Edit A small popup window will appear, then press the shortcut keys of choice to be assigned for previously selected action: Ctrl + Alt + Shift + ↑ for "Move window to upper workspace" and then the popup window will be closed Repeat step 3 and 4 for other actions, and finally click Close to finish. Additional notes To this date, Wikipedia still note the 'old' shortcuts in the article of Table of keyboard shortcuts under "Window Management". That has changed since the introduction of GNOME 3, with most of the shortcuts have been redefined and favours combination of Super key .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/270334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40830/" ] }
270,390
When I compile my own kernel, basically what I do is the following: I download the sources from www.kernel.org and uncompress it. I copy my previous .config to the sources and do a make menuconfig to watch for the new options and modify the configuration according to the new policy of the kernel. Then, I compile it: make -j 4 Finally, I install it: su -c 'make modules_install && make install' . After a few tests, I remove the old kernel (from /boot and /lib/modules ) and run fully with the new one (this last step saved my life several times! It's a pro-tip !). The problem is that I always get a /boot/initrd.img-4.x.x which is huge compared to the ones from my distribution. Here the content of my current /boot/ directory as an example: # ls -alFhtotal 243Mdrwxr-xr-x 5 root root 4.0K Mar 16 21:26 ./drwxr-xr-x 25 root root 4.0K Feb 25 09:28 ../-rw-r--r-- 1 root root 2.9M Mar 9 07:39 System.map-4.4.0-1-amd64-rw-r--r-- 1 root root 3.1M Mar 11 22:30 System.map-4.4.5-rw-r--r-- 1 root root 3.2M Mar 16 21:26 System.map-4.5.0-rw-r--r-- 1 root root 170K Mar 9 07:39 config-4.4.0-1-amd64-rw-r--r-- 1 root root 124K Mar 11 22:30 config-4.4.5-rw-r--r-- 1 root root 126K Mar 16 21:26 config-4.5.0drwxr-xr-x 5 root root 512 Jan 1 1970 efi/drwxr-xr-x 5 root root 4.0K Mar 16 21:27 grub/-rw-r--r-- 1 root root 19M Mar 10 22:01 initrd.img-4.4.0-1-amd64-rw-r--r-- 1 root root 101M Mar 12 13:59 initrd.img-4.4.5-rw-r--r-- 1 root root 103M Mar 16 21:26 initrd.img-4.5.0drwx------ 2 root root 16K Apr 8 2014 lost+found/-rw-r--r-- 1 root root 3.5M Mar 9 07:30 vmlinuz-4.4.0-1-amd64-rw-r--r-- 1 root root 4.1M Mar 11 22:30 vmlinuz-4.4.5-rw-r--r-- 1 root root 4.1M Mar 16 21:26 vmlinuz-4.5.0 As you may have noticed, the size of my initrd.img files are about 10 times bigger than the ones from my distribution. So, do I do something wrong when compiling my kernel? And, how can I reduce the size of my initrd.img ?
This is because all the kernel modules are not stripped. You need to strip it to down its size. Use this command: SHW@SHW:/tmp# cd /lib/modules/<new_kernel>SHW@SHW:/tmp# find . -name *.ko -exec strip --strip-unneeded {} + This will drastically reduce the size.After executing above command, you can proceed to create initramfs/initrd man strip --strip-unneeded Remove all symbols that are not needed for relocation processing.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/270390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40768/" ] }
270,453
I am installing CentOS Linux distribution. At the partition step, CentOS tells me that it has detected a sda HD in my machine and I should create partitions and assign mount points for this disk. But I found the logic a little twisted. I understand that Linux treat everything as file and sda is usually the device file representing my first SATA hard disk. But since no Linux is installed yet, there should be no file system yet. So how could there be any device file like sda ? Someone tells me that “Linux installer is also a Linux OS and hence there's a in-memory file system. My hard drive is just one tiny element of the file system”. Why doing like this? Does Windows or other OS do the same thing?
What /dev/sda means There are four levels: raw disk, raw partition of that disk, formatted filesystem on a partition, and actual files stored within a filesystem. /dev/sda means an entire disk, not a filesystem. Something with a number at the end is a partition of a disk: dev/sda1 is the first partition of the /dev/sda disk, and it's not even necessarily formatted yet! The filesystems each go on their own partitions by formatting each partition with its filesystem. So, what will generally happen is that you'll partition /dev/sda , format /dev/sda1 with a filesystem, mount /dev/sda1 's filesystem to somewhere, and then begin working with files on that filesystem. Why have a unified filesystem Linux (and UNIX in general) has the concept of the virtual filesystem. It combines all your real disks into one unified file system. This can be quite useful. You might, for example, want to put your operating system and its programs on one really fast real disk and all the users' personal files on another fairly slow but huge disk because you want the OS to be fast but you want an affordable means of handling the files of thousands of users. Unlike the usual method in Windows, which by default breaks each disk up into a separate letter and where using D:\Users might break some programs that hard code the path C:\Users , this can be done with ease and fluency. You format one partition in each disk, you mount the OS one to / and the user one to /home , and it acts like a system that put everything on one real disk, except you get that speed and affordability tradeoff you wanted.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4061/" ] }
270,505
Apologies if this is an abstract question - I'll try to be as specific as possible. When I'm at the bash shell and switch to a different account via su - foo , I'm prompted for a password. The characters I type at this password prompt are hidden from the screen with no indication of how many characters I'm typing or what they are. How is bash (or Linux in general) doing this?
What you type is displayed in the terminal because the terminal "echoes" it back. When asking for password, the echoing is turned off. See also help read and its -s option.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
270,511
In Debian >= 8, we now have apt as well as apt-get . How does apt compare to apt-get , and why did the developers decide to create a new program? A quote from the Debian Administrator's Handbook : APT is a vast project, whose original plans included a graphical interface. It is based on a library which contains the core application, and apt-get is the first front end — command-line based — which was developed within the project. apt is a second command-line based front end provided by APT which overcomes some design mistakes of apt-get. What design mistakes are they talking about?
apt is mostly intended as a new binary with some of the commonly used features of both apt-get and apt-cache (with more to be added later, probably), and with a "simplified" interface. Most of APT 's available command line functionality is exposed via apt-cache and apt-get , but these commands aren't ideal from a user experience point of view. Since those two binaries were intended as test/example commands (originally by Jason Gunthorpe, I believe), and not for serious end-user usage. The apt command is meant to be easier to use, and more "user-friendly". People often find it confusing that functionality is split between apt-get and apt-cache for instance. See comments by Michael Vogt in his blog post: apt 1.0 . I don't think it is particularly meant to be about overcoming design mistakes. So, it's not intended as an apt-get replacement. For more information, try asking the APT developers themselves. They are super-cool, but they don't hang out on Stack Exchange. Try #debian-apt on OFTC instead. Comments from Michael Vogt on the said #debian-apt channel; I postedthe question link on the IRC channel. <mvo> faheem: "design mistakes" is a bit of a strong word - we are just scared of changing anything in apt-get because it's used in a gazillion scripts by now. "apt" lets us do that plus it's easier to type and we can combine apt-get/apt-cache. so I think the answers are all fine, the key part is really that apt is more convenient to use/type. <mvo> faheem: [snip] the gist is that apt/apt-get/apt-cache all share the same library and code, just some tweaks to the default.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61235/" ] }
270,518
It seems that the package xchat is not available in the stretch repository, all I can find is xchat-gnome . Can someone tell me if it was removed and if so, why? I'm using mate desktop by the way. My sources.list deb http://ftp.de.debian.org/debian/ stretch non-free contrib main deb-src http://ftp.de.debian.org/debian/ stretch non-free contrib main deb http://security.debian.org/ stretch/updates non-free contrib main deb-src http://security.debian.org/ stretch/updates non-free contrib main deb http://ftp.de.debian.org/debian/ stretch-updates non-free contrib main deb-src http://ftp.de.debian.org/debian/ stretch-updates non-free contrib main
Since XChat has not received software updates since 2013 ( https://sourceforge.net/p/xchat/svn/HEAD/tree/ ) I ended up using the hexchat package.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107070/" ] }
270,539
I set up two new CentOS 7 boxes simultaneously, so the configurations should be identical, just different ip addresses and host names. I installed VSFTPD and configured for passive ports. One box connects fine, no issues, however the second box continuously throws me this error: GnuTLS error -15: An unexpected TLS packet was received. Here is the debug FileZilla trace: Status: Connecting to 192.168.20.68:21...Status: Connection established, waiting for welcome message...Trace: CFtpControlSocket::OnReceive()Response: 220 (vsFTPd 3.0.2)Trace: CFtpControlSocket::SendNextCommand()Command: AUTH TLSTrace: CFtpControlSocket::OnReceive()Response: 234 Proceed with negotiation.Status: Initializing TLS...Trace: CTlsSocket::Handshake()Trace: CTlsSocket::ContinueHandshake()Trace: CTlsSocket::OnSend()Trace: CTlsSocket::OnRead()Trace: CTlsSocket::ContinueHandshake()Trace: CTlsSocket::OnRead()Trace: CTlsSocket::ContinueHandshake()Trace: CTlsSocket::OnRead()Trace: CTlsSocket::ContinueHandshake()Trace: TLS Handshake successfulTrace: Protocol: TLS1.2, Key exchange: ECDHE-RSA, Cipher: AES-256-GCM, MAC: AEADStatus: Verifying certificate...Status: TLS connection established.Trace: CFtpControlSocket::SendNextCommand()Command: USER datamoverTrace: CTlsSocket::OnRead()Trace: CFtpControlSocket::OnReceive()Response: 331 Please specify the password.Trace: CFtpControlSocket::SendNextCommand()Command: PASS *******Trace: CTlsSocket::OnRead()Trace: CTlsSocket::Failure(-15)Error: GnuTLS error -15: An unexpected TLS packet was received.Trace: CRealControlSocket::OnClose(106)Trace: CControlSocket::DoClose(64)Trace: CFtpControlSocket::ResetOperation(66)Trace: CControlSocket::ResetOperation(66)Error: Could not connect to server The error is always right after the password check. I know the problem IS NOT SELinux, as I disabled that. The problem is also not the firewall, as I tried disabling the Firewall Daemon (firewalld). Here is the relevant portion of the /etc/vsftpd/vsftpd.conf file. listen=YESlisten_ipv6=NOpasv_enable=YESpasv_max_port=10100pasv_min_port=10090pasv_address=192.168.20.88ssl_enable=YESallow_anon_ssl=NOforce_local_data_ssl=YESforce_local_logins_ssl=YESssl_tlsv1=YESssl_sslv2=NOssl_sslv3=NOssl_ciphers=HIGHrequire_ssl_reuse=NOrsa_cert_file=/etc/ssl/private/vsftpd.pemrsa_private_key_file=/etc/ssl/private/vsftpd.pem I did a Google search but did not see any 15 error codes. Thoughts?
I am posting this answer in hopes that it might help someone in the future, possibly me, as I suffered solving this problem. I did not have local_root in the /etc/vsftpd/vsftpd.conf file set properly. The setting pointed to a folder, which did not exist. What through me was that I saw the failure on the password command in FileZilla, so I thought that it did not like the password. What got me thinking in the right direction was that I took the time to research why I was not receiving detailed logs. I received no logs. Once I started receiving debug logs, where I saw the FTP protocols, I saw that the FTP server said OK to the password. Sadly, there was no logging of any kind, but I came across the thought that negotiating the local root would be the next course of action after authenticating the password. I was right and that led me to the problem. Here is the code fragment in the /etc/vsftpd/vsftpd.conf file, containing the local root. # You may specify an explicit list of local users to chroot() to their home# directory. If chroot_local_user is YES, then this list becomes a list of# users to NOT chroot().# (Warning! chroot'ing can be very dangerous. If using chroot, make sure that# the user does not have write access to the top level directory within the# chroot)chroot_local_user=YES#local_root=/mnt/raid1local_root=/ftproot#chroot_list_enable=YES# (default follows)#chroot_list_file=/etc/vsftpd/chroot_list Here is how I finally turned on verbose logging, though I will turn that off now to conserve disk space and improve performance. # Activate logging of uploads/downloads.xferlog_enable=YES## If you want, you can have your log file in standard ftpd xferlog format.# Note that the default log file location is /var/log/xferlog in this case.xferlog_std_format=NOlog_ftp_protocol=YES## Activate logging of uploads/downloads.xferlog_enable=YES IMHO, I would consider the comment a bug, as xferlog_enable is more than the actual upload and download of files. This property also turns on logging. A Google research proves that log_ftp_protocol=YES requires xferlog_enable=YES .
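For completeness, the fix itself amounted to making local_root point at a directory that actually exists. A hedged sketch of the repair on CentOS 7; the path, ownership and the SELinux step are assumptions to adapt to your setup:

mkdir -p /ftproot
chown datamover:datamover /ftproot          # or whatever user/group should own it
# if SELinux is enforcing, the directory may also need an appropriate context, e.g.:
# semanage fcontext -a -t public_content_rw_t "/ftproot(/.*)?" && restorecon -Rv /ftproot
systemctl restart vsftpd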
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91119/" ] }
270,614
Bash Reference Manual says The use of time as a reserved word permits the timing of shell builtins, shell functions, and pipelines. An external time command cannot time these easily. But it only compares reserved word time and external command time , and doesn't address the question: Why does Bash implement time as a reserved word, instead of abuiltin command? What advantages can that give? What are other reserved words which can also be implemented as(builtin or external) commands? What are their advantages over theircommand counterparts? (Trying to figure out if their advantages are shared or specific to each of them) For example, the reserved word [[...]] versus the builtin command test or [ the reserved word select versus the builtin command read
The text you quote already explains why time is a keyword: The use of time as a reserved word permits the timing of shell builtins, shell functions, and pipelines. An external time command cannot time these easily. If time was only a builtin, it wouldn't be able to properly measure the time taken by a pipeline, e.g.: $ time sleep 2 | sleep 4real 0m4.002suser 0m0.000ssys 0m0.002s Here time returned 4 seconds which is the time taken by the whole pipeline. If implemented as a builtin, the shell grammar would only allow it to it return 2 seconds because a command, whether builtin or not, is only seeing its parameters, in that specific case, sleep 2 . Other keywords that cannot be implemented by builtins are the ones used for structured constructions like for, while, until, case, do, done, select, if, then, else, function . Like time , they need to be able to process the lines to be interpreted without being restricted to a simple command boundary. It is for the same reason, i.e. the ability to access to the whole shell input to be parsed and not just a command and its parameters that these keywords are implement as is. For example the [ command parameters are subject to shell expansion and processing so you cannot reliably use * in a test and > would be taken as a redirection with unexpected results. On the other hand, [[ is changing the shell behavior so you can use whatever syntax it accepts without being bothered by the shell. Here are some examples showing the difference in behavior: $ if [ * = "*" ]; then echo ok; fibash: [: too many arguments$ if [[ * = "*" ]]; then echo ok; fiok $ if [ 1 > 2 ]; then echo unexpected ; else echo expected; fi unexpected $ if [ 1 -gt 2 ]; then echo unexpected ; else echo expected; fiexpected$ if [[ 1 > 2 ]]; then echo unexpected ; else echo expected; fiexpected Note that not only does if [ 1 > 2 ] return an unexpected result but it also creates (or overwrite!) in the current directory a file named 2 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
270,629
Inspired by this question . Why does Linux need both /dev/cdrom and /media/cdrom ? Why not just access files on the cdrom through /dev/cdrom ?
/media/cdrom is a convention for the mountpoint, while /dev/cdrom is the special device that can be mounted on the former. You need both, because they serve different purposes: most applications do not read directly from the special device, but they can read from a filesystem (something that is mounted).
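In practice the two are tied together by mount; a minimal sketch (the mount point must exist, and many desktop systems do this step automatically):

mount /dev/cdrom /media/cdrom     # attach the disc's filesystem to the mount point
ls /media/cdrom                   # now the files on the disc are visible here
umount /media/cdrom               # detach it again before ejecting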
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4061/" ] }
270,640
Why does bash <command> fail to run? $ bash date/bin/date: /bin/date: cannot execute binary file$ /bin/dateFri Mar 18 05:59:24 EDT 2016$ bash -c dateFri Mar 18 06:00:39 EDT 2016
From the manual : If arguments remain after option processing, and neither the -c nor the -s option has been supplied, the first argument is assumed to be the name of a file containing shell commands. So bash date means "read the date file and execute the shell commands it contains". Assuming there is no date file in the current directory, bash searches the path and finds /bin/date which is a binary rather than a shell script, hence the error.
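A quick way to convince yourself of this behaviour (done in a scratch directory, since it creates a file called date):

$ cd "$(mktemp -d)"
$ printf 'echo "hello from the date file"\n' > date
$ bash date
hello from the date file
$ bash -c date    # -c treats its argument as a command string, so this prints the current date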
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
270,664
I'm trying to create a process that will start with X11, will be graphical and unkillable by my own user. More specifically, I have an urxvt drop-down terminal starting at startup and running a tmux session right away. What annoys me, is that I sometimes kill it with Alt + F4 when I forget that it is the drop-down terminal and I'm supposed to hide it instead. Sure, I'm not losing my tmux session since I can reattach to it from another terminal, but I'd have to re-launch the drop-down terminal (with the exact config) and reattach and that's tedious. What I would want to do, is to not be able to Alt + F4 kill this specific process . After reading this: Make a process unkillable on Linux I understood that I should launch the process as belonging to another user, yet even if I manage to do that within my own X server with: sudo xhost +local: sudo runuser -l my-secondary-user -c urxvt I face two unwanted behaviours: The process is killable by my main user (the one who owns the X session). The virtual terminal starts logged in as my-secondary-user (of course). The (2) may be rectifiable - just run a command to login as my main user instead - but I have no idea on how to fix the (1). Is there a way to make this work, or another way to do what I want? I'd rather avoid having to mess with creating kernel modules if possible. Running Arch Linux with Openbox. EDIT I got thinking that I may be able to make Alt + F4 not work for this specific window.found this: Openbox: disable Alt-F4 on per application basis and modified my rc.xml thus: <keybind key="A-F4"> <action name="If"> <application class="URxvt" title="tmux-session"> </application> <then><!-- Do nothing for urxvt-tmux-session --></then> <else> <action name="Close" /> </else> </action> </keybind> This works slightly better than before: I cannot Alt + F4 that terminal, but also cannot Alt + F4 any urxvt window now. (Guess this is because new urxvt windows are all run under the same class-name?) Besides that I can kill everything I need from the menu bar, or right clicking into my tint panel, or, of course, by command.But this approach means that: I can still kill my drop-down terminal by right-clicking into the tint2 panel (maybe fixable if I make the drop-down terminal not appearing as an icon to the panel - should be possible with some research). I cannot Alt + F4 other terminal windows (not liking it much, but maybe acceptable). Any ideas?
If switching to xterm is an option, you could use the hack below. There are a few caveats though. Once you address most of them, the solution ends up quite complicated, see the final script at the end. xterm -e 'trap "" HUP; your-application' Upon receiving the instruction to close from the window manager, xterm will send a SIGHUP to the process group of your-application, and only exit itself when the process returns. That assumes your-application doesn't reset the handler for SIGHUP and could have unwanted side effects for the children of your-application. Both of which seem to be a problem if your-application is tmux . To work around those, you could do: xterm -e sh -c 'bash -mc tmux <&1 & trap "" HUP; wait' That way, tmux would be started in a different process group, so only the sh would receive the SIGHUP (and ignore it). Now, that doesn't apply to tmux which resets the handler for those signals anyway, but in the general case, depending on your implementation of sh , the SIGINT, SIGQUIT signals and generally both will be ignored for your-application as that bash is started as an asynchronous command from a non-interactive sh . That means you couldn't interrupt your-application with Ctrl+C or Ctrl+\ . That's a POSIX requirement. Some shells like mksh don't honour it (at least not the current versions), or only in part like dash that does it for for SIGINT but not SIGQUIT. So, if mksh is available, you could do: xterm -e mksh -c 'bash -mc your-application <&1 & trap "" HUP; wait' (though that may not work in future versions of mksh if they decide to fix that non-conformance). Or if you can't guarantee that mksh or bash will be available or would rather not rely on behaviour that may change in the future, you can do their work by hand with perl and for instance write an unclosable-xterm wrapper script like: #! /bin/sh -[ "$#" -gt 0 ] || set -- "${SHELL:-/bin/sh}"exec xterm -e perl -MPOSIX -e ' $pid = fork; if ($pid == 0) { setpgrp or die "setpgrp: $!"; tcsetpgrp(0,getpid) or die "tcsetpgrp: $!"; exec @ARGV; die "exec: $!"; } die "fork: $!" if $pid < 0; $SIG{HUP} = "IGNORE"; waitpid(-1,WUNTRACED)' "$@" (to be called as unclosable-xterm your-application and its args ). Now, another side effect is that that new process group we're creating and putting in foreground (with bash -m or setpgrp + tcsetpgrp above) is no longer the session leader process group, so no longer an orphaned process group (there's a parent supposedly caring for it now ( sh or perl )). What that means is that upon pressing Ctrl+Z , that process will be suspended. Here, our careless parent will just exit, which means the process group will get a SIGHUP (and hopefully die). To avoid it, we could just ignore the SIGTSTP in the child process, but then if your-application is an interactive shell, for some implementations like mksh , yash or rc , Ctrl-Z won't work either for the jobs they run. Or we could implement a more careful parent that resumes the child each time it's stopped, like: #! /bin/sh -[ "$#" -gt 0 ] || set -- "${SHELL:-/bin/sh}"exec xterm -e perl -MPOSIX -e ' $pid = fork; if ($pid == 0) { setpgrp or die "setpgrp: $!"; tcsetpgrp(0,getpid) or die "tcsetpgrp: $!"; exec @ARGV; die "exec: $!"; } die "fork: $!" 
if $pid < 0; $SIG{HUP} = "IGNORE"; while (waitpid(-1,WUNTRACED) > 0 && WIFSTOPPED(${^CHILD_ERROR_NATIVE})) { kill "CONT", -$pid; }' "$@" Another issue is that if xterm is gone for another reason than the close from the window manager, for example if xterm is killed or loses the connection to the X server (because of xkill , the destroy action of you Window manager, or the X server crashes for instance), then those processes won't die as SIGHUP would also be used in those cases to terminate them. To work around that, you could use poll() on the terminal device (which would be torn down when xterm goes): #! /bin/sh -[ "$#" -gt 0 ] || set -- "${SHELL:-/bin/sh}"exec xterm -e perl -w -MPOSIX -MIO::Poll -e ' $pid = fork; # start the command in a child process if ($pid == 0) { setpgrp or die "setpgrp: $!"; # new process group tcsetpgrp(0,getpid) or die "tcsetpgrp: $!"; # in foreground exec @ARGV; die "exec: $!"; } die "fork: $!" if $pid < 0; $SIG{HUP} = "IGNORE"; # ignore SIGHUP in the parent $SIG{CHLD} = sub { if (waitpid(-1,WUNTRACED) == $pid) { if (WIFSTOPPED(${^CHILD_ERROR_NATIVE})) { # resume the process when stopped # we may want to do that only for SIGTSTP though kill "CONT", -$pid; } else { # exit when the process dies exit; } } }; # watch for terminal hang-up $p = IO::Poll->new; $p->mask(STDIN, POLLERR); while ($p->poll <= 0 || $p->events(STDIN) & POLLHUP == 0) {}; kill "HUP", -$pid; ' "$@"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158471/" ] }
270,676
Let me give an example (it's just an example taken from here ): $ ls -l /usr/bin/gnome-text-editor lrwxrwxrwx 1 root root 35 Mar 16 2015 /usr/bin/gnome-text-editor -> /etc/alternatives/gnome-text-editor$ ls -l /etc/alternatives/gnome-text-editorlrwxrwxrwx 1 root root 14 Mar 16 2015 /etc/alternatives/gnome-text-editor -> /usr/bin/gedit$ ls -l /usr/bin/gedit-rwxr-xr-x 1 root root 588064 Mar 27 2014 /usr/bin/gedit Here you can see that I've to use ls -l three times for reaching to destination. (3 rd time is for making sure that /usr/bin/gedit is not a link`) Is there any way (by means of making script or another command etc.) that I can get expected output like: $ <improved ls -l> /usr/bin/gnome-text-editor lrwxrwxrwx 1 root root 35 Mar 16 2015 /usr/bin/gnome-text-editor -> /etc/alternatives/gnome-text-editorlrwxrwxrwx 1 root root 14 Mar 16 2015 /etc/alternatives/gnome-text-editor -> /usr/bin/gedit Another good output may be: $ <some-command> /usr/bin/gnome-text-editor/usr/bin/gnome-text-editor > /etc/alternatives/gnome-text-editor > /usr/bin/gedit
In this case, that's a Debian "alternative", so to get more details, you could use: $ update-alternatives --display gnome-text-editorgnome-text-editor - auto mode link best version is /usr/bin/gedit link currently points to /usr/bin/gedit link gnome-text-editor is /usr/bin/gnome-text-editor slave gnome-text-editor.1.gz is /usr/share/man/man1/gnome-text-editor.1.gz/usr/bin/gedit - priority 50 slave gnome-text-editor.1.gz: /usr/share/man/man1/gedit.1.gz/usr/bin/leafpad - priority 40 slave gnome-text-editor.1.gz: /usr/share/man/man1/leafpad.1.gz More generally, on Linux, you can use the namei command to know about all the symlinks involved in the resolution of a path (also mount points with -x ): $ namei -lx /usr/bin/gnome-text-editorf: /usr/bin/gnome-text-editorDrwxr-xr-x root root /drwxr-xr-x root root usrdrwxr-xr-x root root binlrwxrwxrwx root root gnome-text-editor -> /etc/alternatives/gnome-text-editorDrwxr-xr-x root root /drwxr-xr-x root root etcdrwxr-xr-x root root alternativeslrwxrwxrwx root root gnome-text-editor -> /usr/bin/geditDrwxr-xr-x root root /drwxr-xr-x root root usrdrwxr-xr-x root root bin-rwxr-xr-x root root gedit For a more direct answer to your question, I'd do something like: #! /bin/zsh -zmodload zsh/stat || exitret=0for file do n=0 while ls -ld -- "$file" || ! ret=1 && [ -L "$file" ] do if ((++n > 40)) && [ ! -e "$file" ]; then echo >&2 too many symlinks ret=1 break fi zstat -A target +link -- "$file" || ! ret=1 || break case $target in (/*) file=$target;; (*) file=$file:h/$target esac donedoneexit "$ret" That may not give you all the information you need to understand what's going on. Compare for instance: $ ./resolve-symlink b/b/b/b/x/blrwxrwxrwx 1 stephane stephane 1 Mar 18 15:37 b/b/b/b/x/b -> alrwxrwxrwx 1 stephane stephane 4 Mar 18 15:37 b/b/b/b/x/a -> ../alrwxrwxrwx 1 stephane stephane 26 Mar 18 15:15 b/b/b/b/x/../a -> /usr/bin/gnome-text-editorlrwxrwxrwx 1 root root 35 Nov 5 2013 /usr/bin/gnome-text-editor -> /etc/alternatives/gnome-text-editorlrwxrwxrwx 1 root root 14 Mar 15 12:21 /etc/alternatives/gnome-text-editor -> /usr/bin/gedit-rwxr-xr-x 1 root root 10344 Nov 12 17:18 /usr/bin/gedit With: $ namei -lx b/b/b/b/x/bf: b/b/b/b/x/blrwxrwxrwx stephane stephane b -> .drwxr-xr-x stephane stephane .lrwxrwxrwx stephane stephane b -> .drwxr-xr-x stephane stephane .lrwxrwxrwx stephane stephane b -> .drwxr-xr-x stephane stephane .lrwxrwxrwx stephane stephane b -> .drwxr-xr-x stephane stephane .lrwxrwxrwx stephane stephane x -> 2drwxr-xr-x stephane stephane 2lrwxrwxrwx stephane stephane b -> alrwxrwxrwx stephane stephane a -> ../adrwxr-xr-x stephane stephane ..lrwxrwxrwx stephane stephane a -> /usr/bin/gnome-text-editorDrwxr-xr-x root root /drwxr-xr-x root root usrdrwxr-xr-x root root binlrwxrwxrwx root root gnome-text-editor -> /etc/alternatives/gnome-text-editorDrwxr-xr-x root root /drwxr-xr-x root root etcdrwxr-xr-x root root alternativeslrwxrwxrwx root root gnome-text-editor -> /usr/bin/geditDrwxr-xr-x root root /drwxr-xr-x root root usrdrwxr-xr-x root root bin-rwxr-xr-x root root gedit
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
270,678
I'm running git-bash on windows. I feel the issue I'm facing is more of a NIX geared question than windows. I have a shell script: build.sh myProject="../myProject/"build="gulp build"cd "${myProject}"pwd"${build}" When I run this script I get error gulp build: command not found When I run "gulp build" directly in the shell, running these same commands by hand then everything works. I tried executing the script via: . build.sh and just build.sh Same error either way. How can I run a script that can access gulp/npm? Why does this fail even when I am sourcing the script?
Quoting "${build}" prevents word splitting, so it has the same effect here as writing "gulp build" (with quotes), which would search for an executable called gulp build with a space inside the name; and not as writing gulp build , which executes gulp with a build argument. Concluding, the last line of your script should be: ${build}
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104015/" ] }
270,747
I am set up as a sudoer on my Debian machine, and I use it all the time to install software etc... I came across this interesting situation that I am scratching my head about: I tried to enable the fancy progress bar in apt using this command: sudo echo 'Dpkg::Progress-Fancy "1";' > /etc/apt/apt.conf.d/99progressbar I have permissions issues: /etc/apt/apt.conf.d/99progressbar: Permission denied However, if I su , then run the command, everything just works. Why is this so?
Because sudo cmd > file is interpreted as (sudo cmd) > file by your shell. I.e. the redirect is done as your own user. The way I get around that is using cmd | sudo tee file Addition: That will also show the output of cmd on your console, if you don't want that you'll have to redirect it.
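Applied to the original command, that looks like the following; the trailing redirect just hides tee's copy of the output, so drop it if you want to see what was written:

echo 'Dpkg::Progress-Fancy "1";' | sudo tee /etc/apt/apt.conf.d/99progressbar > /dev/null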
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61235/" ] }
270,755
I need to downgrade some apt packages, that I had previously pinned to testing, having stable as default. This is the preferences file: Package: *Pin: release a=stablePin-Priority: 1001Package: *Pin: release a=testingPin-Priority: 500Package: *Pin: release a=unstablePin-Priority: 400 When I check the policy for one of the upgraded packages, I get: apt-cache policy libstdc++5libstdc++5: Installed: 1:3.3.6-28 Candidate: 1:3.3.6-28 Version table: *** 1:3.3.6-28 0 500 http://mirror.hetzner.de/debian/packages/ testing/main amd64 Packages 400 http://mirror.hetzner.de/debian/packages/ unstable/main amd64 Packages 100 /var/lib/dpkg/status 1:3.3.6-27.2 0 990 http://mirror.hetzner.de/debian/packages/ stable/main amd64 Packages 990 http://cdn.debian.net/debian/ stable/main amd64 Packages Whenever I change the settings for testing and unstable, I see the changes in the priority reflected here. Though, the priority for the stable package won't change, whatever I tried so far. The idea is to set it >1000, to actually execute the downgrade. Any hints, how to actually change it? This is the full output for apt-cache policy : Package files: 100 /var/lib/dpkg/status release a=now 500 http://llvm.org/apt/jessie/ llvm-toolchain-jessie-3.7/main amd64 Packages release n=llvm-toolchain-jessie-3.7,c=main origin llvm.org 990 http://security.debian.org/ stable/updates/non-free amd64 Packages release v=8,o=Debian,a=stable,n=jessie,l=Debian-Security,c=non-free origin security.debian.org 990 http://security.debian.org/ stable/updates/contrib amd64 Packages release v=8,o=Debian,a=stable,n=jessie,l=Debian-Security,c=contrib origin security.debian.org 990 http://security.debian.org/ stable/updates/main amd64 Packages release v=8,o=Debian,a=stable,n=jessie,l=Debian-Security,c=main origin security.debian.org 990 http://cdn.debian.net/debian/ stable/contrib amd64 Packages release v=8.3,o=Debian,a=stable,n=jessie,l=Debian,c=contrib origin cdn.debian.net 990 http://cdn.debian.net/debian/ stable/non-free amd64 Packages release v=8.3,o=Debian,a=stable,n=jessie,l=Debian,c=non-free origin cdn.debian.net 990 http://cdn.debian.net/debian/ stable/main amd64 Packages release v=8.3,o=Debian,a=stable,n=jessie,l=Debian,c=main origin cdn.debian.net 990 http://mirror.hetzner.de/debian/security/ stable/updates/non-free amd64 Packages release v=8,o=Debian,a=stable,n=jessie,l=Debian-Security,c=non-free origin mirror.hetzner.de 990 http://mirror.hetzner.de/debian/security/ stable/updates/contrib amd64 Packages release v=8,o=Debian,a=stable,n=jessie,l=Debian-Security,c=contrib origin mirror.hetzner.de 990 http://mirror.hetzner.de/debian/security/ stable/updates/main amd64 Packages release v=8,o=Debian,a=stable,n=jessie,l=Debian-Security,c=main origin mirror.hetzner.de 400 http://mirror.hetzner.de/debian/packages/ unstable/non-free amd64 Packages release o=Debian,a=unstable,n=sid,l=Debian,c=non-free origin mirror.hetzner.de 400 http://mirror.hetzner.de/debian/packages/ unstable/contrib amd64 Packages release o=Debian,a=unstable,n=sid,l=Debian,c=contrib origin mirror.hetzner.de 400 http://mirror.hetzner.de/debian/packages/ unstable/main amd64 Packages release o=Debian,a=unstable,n=sid,l=Debian,c=main origin mirror.hetzner.de 500 http://mirror.hetzner.de/debian/packages/ testing/non-free amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=non-free origin mirror.hetzner.de 500 http://mirror.hetzner.de/debian/packages/ testing/contrib amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=contrib origin mirror.hetzner.de 500 
http://mirror.hetzner.de/debian/packages/ testing/main amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=main origin mirror.hetzner.de 990 http://mirror.hetzner.de/debian/packages/ stable/non-free amd64 Packages release v=8.3,o=Debian,a=stable,n=jessie,l=Debian,c=non-free origin mirror.hetzner.de 990 http://mirror.hetzner.de/debian/packages/ stable/contrib amd64 Packages release v=8.3,o=Debian,a=stable,n=jessie,l=Debian,c=contrib origin mirror.hetzner.de 990 http://mirror.hetzner.de/debian/packages/ stable/main amd64 Packages release v=8.3,o=Debian,a=stable,n=jessie,l=Debian,c=main origin mirror.hetzner.dePinned packages:
I don't understand what you are doing here. Why do you have a preferences setting for stable at all if you are running a stable system? As far as I know, no preferences setting is necessary for stable in that case. You don't explicitly say whether you are running a stable system (you really should say so), but if you are not, then I really have no idea what you are doing. And if the release is on stable, then the usual thing to do for testing and unstable is to set their preferences to less than 100. I usually use 50. And if you want to downgrade to stable, just do the following (assuming sane settings like the ones above) to downgrade pkgname1 and pkgname2:

apt-get install pkgname1/stable pkgname2/stable

This sets the specified packages to the target release stable. Incidentally, mixing testing and/or unstable packages with a stable system is generally a bad idea unless you know what you are doing. Some of the time it is OK, but most of the time you need to use backports, either from Debian, or self-made.
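For reference, a preferences file along those lines might look like the following sketch; the exact priorities are a matter of taste, as long as testing and unstable stay below 100 on a stable system:

Package: *
Pin: release a=testing
Pin-Priority: 50

Package: *
Pin: release a=unstable
Pin-Priority: 50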
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54212/" ] }
270,757
I installed Debian 8.3 LXDE today and I cannot adjust the resolution of my monitor. I have a 1920x1080 resolution, but allows you to install Debian to me only 1280x1024. I tried to make as here: https://wiki.archlinux.org/index.php/xrandr , using xrandr to do manually, but there is no connected interfaces: drahenfels@debian:~$ xrandr xrandr: Failed to get size of gamma for output defaultScreen 0: minimum 640 x 400, current 1280 x 1024, maximum 1280 x 1024default connected 1280x1024+0+0 0mm x 0mm1280x1024 0.00* 1152x864 0.00 1024x768 0.00 800x600 0.00 640x480 0.00 720x400 0.00 My GPU: AMD/ATI RS780L [Radeon 3000]
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161798/" ] }
270,769
In my script /usr/local/bin/backup , that I call every hour from /etc/crontab , I use rsync to copy data to an off-site server.That all worked fine, even in cases where we had somewhat more new data than can be pushed out in an hour. Last week someone copied an 11GB file on the data partition and when I found out the next day there were 14 rsync programs running in parallel, each of then getting no bandwidth and each probably working on the same huge file. I killed them all (before realising I should have kept the first one running), stopped the cron job and ran the backup script by hand. I can write out a file in the script before starting rsync and check in the script if that file is already there to prevent backup from running in parallel. Is there an easier way of doing this? My /etc/crontab entry: 5 * * * * root /usr/local/bin/backup
There are different ways of doing this, but IMO the easiest is inserting flock before the command in the crontab file: 5 * * * * root flock -n /var/lock/backup /usr/local/bin/backup The /var/lock/backup file is the lock that flock uses and -n immediately makes the command fail if the lock already exists. This could of course mean that if one backup takes 1 hour and 1 minute, that the next one starts 59 minutes later. If that is a problem you could look into using -x .
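Alternatively, you can put the locking inside the backup script itself, so that manual runs are protected too; a minimal sketch near the top of /usr/local/bin/backup:

#!/bin/sh
exec 9>/var/lock/backup || exit 1
if ! flock -n 9; then
    echo "another backup is still running, exiting" >&2
    exit 0
fi
# ... rest of the backup (rsync etc.) runs while fd 9 holds the lock ...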
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161824/" ] }
270,771
I'm working on an Azure Ubuntu VM root@dalil:/# df -hFilesystem Size Used Avail Use% Mounted onudev 1.7G 8.0K 1.7G 1% /devtmpfs 345M 392K 344M 1% /run/dev/sda1 79G 18G 58G 24% /none 4.0K 0 4.0K 0% /sys/fs/cgroupnone 5.0M 0 5.0M 0% /run/locknone 1.7G 0 1.7G 0% /run/shmnone 100M 0 100M 0% /run/usernone 64K 0 64K 0% /etc/network/interfaces.dynamic.d/dev/sdb1 59G 52M 56G 1% /mnt this is the result of the df commandwhat I want to do is to extend / with the space of the partition mounted on mntand if it is not possible does the space in mnt can be used by the system I mean when installing packages or other operations
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161826/" ] }
270,772
I have a big problem under Xfce4, namely when I try to start a program using alt+F2 , I'll start typing xfce4-terminal and before I finish it, it suggests the end of the sentence--but usually you accept the suggestion by pressing tab --here I have to move my hand all the way to the right arrow because tab is just moving the focus from the text area to the other buttons. Does anyone know how to fix this issue ?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270772", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
270,778
I love to type bash scripts, but if I prepare multiple tools, project's root directory is filled with so many shell scripts. That's why I prefer using Makefile. Makefile is good. However I want to build my makefiles just as regular bash scripts. Eg: Think that I wrote a bash script very quickly with the following content: #!/bin/bashecho "hello"cd ~do-some-work.sh my-parameter I can run this script with $ ./my-important-task.sh . If I wanted to move that script into makefile, I should do the following: SHELL := /bin/bash my-important-task: echo "hello" ;\ cd ~ ;\ do-some-work.sh my-parameter but I want the following: my-important-task: [[copy and paste the my-important-task.sh file]] Is there anyway to accomplish this goal?
If you really want to “write exactly bash scripts into Makefiles” then you'll need to do it a bit indirectly. If you just paste the script after the target line, then you'll run into two problems that just cannot be bypassed: the command lines need to be indented with a tab, and dollar signs need to be escaped. If you use GNU make (as opposed to BSD make, Solaris make, etc.), then you can define your script as a variable using the multi-line definition syntax, and then use the value function to use the raw value of the variable, bypassing expansion. In addition, as explained by skwllsp, you need to tell make to execute the command list for each target as a single shell script rather than line by line, which you can do in GNU make by defining a .ONESHELL target.

define my_important_task =
# script goes here
endef

my-important-task: ; $(value my_important_task)

.ONESHELL:
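Putting it together with the script from the question gives something like this sketch (do-some-work.sh and my-parameter are the placeholders from the question, and this requires GNU make):

SHELL := /bin/bash
.ONESHELL:

define my_important_task =
echo "hello"
cd ~
do-some-work.sh my-parameter
endef

my-important-task: ; $(value my_important_task)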
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/270778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65781/" ] }
270,828
I have seen constructs in scripts such as this: if somevar="$(somecommand 2>/dev/null)"; then...fi Is this documented somewhere? How is the return status of a variable determined and how does it relate to command substitution? (For instance, would I get the same result with if echo "$(somecommand 2>/dev/null)"; then ?)
It is documented (for POSIX) in Section 2.9.1 Simple Commands of The Open Group Base Specifications. There's a wall of text there; I direct your attention to the last paragraph: If there is a command name,execution shall continue as described in Command Search and Execution . If there is no command name,but the command contained a command substitution,the command shall complete with the exit statusof the last command substitution performed. Otherwise, the command shall complete with a zero exit status. So, for example, Command Exit Status$ FOO=BAR 0 (but see also the note from icarus, below)$ FOO=$(bar) Exit status from "bar"$ FOO=$(bar)$(quux) Exit status from "quux"$ FOO=$(bar) baz Exit status from "baz"$ foo $(bar) Exit status from "foo" This is how bash works, too. But see also the “not so simple” section at the end. phk , in his question Assignments are like commandswith an exit status except when there’s command substitution? , suggests … it appears as if an assignment itself counts as a command …with a zero exit value,but which applies before the right side of the assignment(e.g., a command substitution call…) That’s not a terrible way of looking at it. A crude scheme for determining the return status of a simple command(one not containing ; , & , | , && or || ) is: Scan the line from left to right until you reach the endor a command word (typically a program name). If you see a variable assignment,the return status for the line just might be 0. If you see a command substitution — i.e., $(…) —take the exit status from that command. If you reach an actual command (not in a command substitution),take the exit status from that command. The return status for the line is the last number you encountered. Command substitutions as arguments to the command,e.g., foo $(bar) , don’t count; you get the exit status from foo . To paraphrase phk’s notation , the behavior here is temporary_variable = EXECUTE( "bar" ) overall_exit_status = EXECUTE( "foo", temporary_variable ) But this is a slight oversimplification. The overall return status from A=$( cmd 1 ) B=$( cmd 2 ) C=$( cmd 3 ) D=$( cmd 4 ) E=mc 2 is the exit status from cmd 4 . The E= assignment that occurs after the D= assignmentdoes not set the overall exit status to 0. icarus , in his answer to phk’s question ,raises an important point: variables can be set as readonly. The third-to-last paragraph in Section 2.9.1 of the POSIX standard says, If any of the variable assignments attempt to assign a value to a variablefor which the readonly attribute is set in the current shell environment(regardless of whether the assignment is made in that environment),a variable assignment error shall occur. See Consequences of Shell Errors for the consequences of these errors. so if you say readonly AC=Garfield A=Felix T=Tigger the return status is 1. It doesn’t matter if the strings Garfield , Felix , and/or Tigger are replaced with command substitution(s) — but see notes below. Section 2.8.1 Consequences of Shell Errors has another bunch of text,and a table, and ends with In all of the cases shown in the tablewhere an interactive shell is required not to exit,the shell shall not perform any further processingof the command in which the error occurred. Some of the details make sense; some don’t: The A= assignment sometimes aborts the command line,as that last sentence seems to specify. In the above example, C is set to Garfield , but T is not set(and, of course, neither is A ). 
Similarly, C=$( cmd 1 ) A=$( cmd 2 ) T=$( cmd 3 ) executes cmd 1 but not cmd 3 . But, in my versions of bash (which include 4.1.X and 4.3.X),it does execute cmd 2 . (Incidentally, this further impeaches phk’s interpretationthat the exit value of the assignment appliesbefore the right side of the assignment.) But here’s a surprise: In my versions of bash, readonly AC= something A= something T= something cmd 0 does execute cmd 0 . In particular, C=$( cmd 1 ) A=$( cmd 2 ) T=$( cmd 3 ) cmd 0 executes cmd 1 and cmd 3 ,but not cmd 2 . (Note that this is the opposite of its behavior when there is no command.) And it sets T (as well as C ) in the environmentof cmd 0 . I wonder whether this is a bug in bash. Not so simple: The first paragraph of this answer refers to “simple commands”. The specification says, A “simple command” is a sequence of optional variable assignmentsand redirections, in any sequence,optionally followed by words and redirections,terminated by a control operator. These are statements like the ones in my first example block: $ FOO=BAR$ FOO=$(bar)$ FOO=$(bar) baz$ foo $(bar) the first three of which include variable assignments,and the last three of which include command substitutions. But some variable assignments aren’t quite so simple. bash(1) says, Assignment statements may also appear as arguments tothe alias , declare , typeset , export , readonly ,and local builtin commands ( declaration commands). For export , the POSIX specification says, EXIT STATUS 0 All name operands were successfully exported. >0 At least one name could not be exported, or the -p option was specified and an error occurred. And POSIX doesn’t support local , but bash(1) says, It is an error to use local when not within a function. The return status is 0 unless local is used outside a function,an invalid name is supplied, or name is a readonly variable. Reading between the lines, we can see that declaration commands like export FOO=$(bar) and local FOO=$(bar) are more like foo $(bar) insofar as they ignore the exit status from bar and give you an exit status based on the main command( export , local , or foo ). So we have weirdness like Command Exit Status$ FOO=$(bar) Exit status from "bar" (unless FOO is readonly)$ export FOO=$(bar) 0 (unless FOO is readonly, or other error from “export”)$ local FOO=$(bar) 0 (unless FOO is readonly, statement is not in a function, or other error from “local”) which we can demonstrate with $ export FRIDAY=$(date -d tomorrow)$ echo "FRIDAY = $FRIDAY, status = $?"FRIDAY = Fri, May 04, 2018 8:58:30 PM, status = 0$ export SATURDAY=$(date -d "day after tomorrow")date: invalid date ‘day after tomorrow’$ echo "SATURDAY = $SATURDAY, status = $?"SATURDAY = , status = 0 and myfunc() { local x=$(echo "Foo"; true); echo "x = $x -> $?" local y=$(echo "Bar"; false); echo "y = $y -> $?" echo -n "BUT! " local z; z=$(echo "Baz"; false); echo "z = $z -> $?"}$ myfuncx = Foo -> 0y = Bar -> 0BUT! z = Baz -> 1 Luckily ShellCheck catches the error and raises SC2155 ,which advises that export foo="$(mycmd)" should be changed to foo=$(mycmd)export foo and local foo="$(mycmd)" should be changed to local foofoo=$(mycmd) Credit and Reference I got the idea of concatenating command substitutions — $(bar)$(quux) — from Gilles’s answer to How can I get bash to exiton backtick failure in a similar way to pipefail? ,which contains a lot of information relevant to this question.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/270828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
270,839
In the Bash Manual, sec 6.5 Shell Arithmetic, one of the listed operators is:

    expr1 , expr2
        comma

What does the comma operator do? Are expr1 and expr2 arithmetic expressions?
, is a list operator. The list of arithmetic expressions is evaluated from left to right, and the result of the last expression is the return value:

    $ echo "$(( a=1, ++a, ++a ))"
    3

The , list operator was added in bash-2.04-devel (along with the pre/post increment/decrement operators). You may want to read expr.c to see how the other operators were implemented, and the function expcomma() for the , operator.
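As a side note (my own illustration, not from the manual), the same comma list appears in C-style for loops, where each comma-separated expression is evaluated left to right:

    for (( i=0, j=9; i < j; i++, j-- )); do
        echo "$i $j"
    done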
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
270,855
From POSIX 2013 : The XSI extensions specifying the -a and -o binary primaries and the '(' and ')' operators have been marked obsolescent. (Many expressions using them are ambiguously defined by the grammar depending on the specific expressions being evaluated.) Scripts using these expressions should be converted to the forms given below. Even though many implementations will continue to support these obsolescent forms, scripts should be extremely careful when dealing with user-supplied input that could be confused with these and other primaries and operators. Unless the application developer knows all the cases that produce input to the script, invocations like: test "$1" -a "$2" should be written as: test "$1" && test "$2" "the '(' and ')' operators have been marked obsolescent". Are ( and ) the operators that group commands, and create subshell? If they are obsolete, what are their replacement? Should test "$1" -a "$2" be replaced by test "$1" && test "$2" , or by (( test "$1" && test "$2" )) ? Don't we need the ((...)) to return 0 or 1 just like test in the original command does?
Are ( and ) the operators that group commands, and create subshell? No, the document refers to the operators to group expressions using test : test 0 -eq 0 -a \( 0 -eq 1 -o 1 -eq 1 \) If they are obsolete, what are their replacement? Their replacement are () and {} , which, similarily, group commands at the shell's level: test 0 -eq 0 && (test -0 -eq 1 || test 1 -eq 1)test 0 -eq 0 && { test -0 -eq 1 || test 1 -eq 1; } It's pretty easy to see the pattern: every operator used in test for expressions is replaced with the shell's equivalent for commands. Should test "$1" -a "$2" be replaced by test "$1" && test "$2" , or by (( test "$1" && test "$2" )) ? It should be replaced with test "$1" && test "$2" ; (()) is used for arithmetic expansion and has no bearing with commands' exit status, as its purpose is to evaluate arithmetic expressions. For example this would be valid: (( $(test 1 -eq 1 && echo $?) && $(test 0 -eq 0 && echo $?) )) (notice the command substitutions, which are replaced with the inner commands' exit statuses, 0 and 0 ; the evaluated expression is actually (( 0 && 0 )) ). But this causes a syntax error: (( test 1 -eq 1 && test 0 -eq 0 )) $ (( test 1 -eq 1 && test 0 -eq 0 ))bash: ((: test 1 -eq 1 && test 0 -eq 0 : syntax error in expression (error token is "1 -eq 1 && test 0 -eq 0 ")
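A small sketch of why the obsolescent form is fragile with user-supplied input (the exact behaviour differs between test implementations, which is the point):

    arg1='(' arg2=')'
    test "$arg1" -a "$arg2"        # some implementations parse this as a parenthesised expression, others error out
    test "$arg1" && test "$arg2"   # always means: is $arg1 non-empty AND is $arg2 non-empty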
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
270,865
I want to make a very minimal Linux OS which only has a terminal interface and basic commands/applications (busybox is my choice for the commands/apps). I don't want an installation option in my OS; I just want it to boot and run completely from RAM. I'm planning to use ISOLINUX as the bootloader. No networking, no virtualization support, no unnecessary drivers, etc. I want it to be a very, very basic OS. I've downloaded the latest stable kernel (v4.5) source code from kernel.org and have the build environment ready. One more point of confusion: does a kernel by default have any user interface (shell, terminal, ...) where I can type commands and see output?
Technically you can achieve this, though the kernel does not have any built-in user interface. The steps are roughly:

1. Create an initramfs with a static busybox and nothing else. This initramfs will need a few directories, such as proc, sys, tmp, bin, usr, etc.
2. Write a "/init" script whose main job is to:
   a. mount procfs, tmpfs and sysfs;
   b. call busybox's udev replacement, i.e. mdev;
   c. install the busybox applet links onto the virtual system by executing busybox --install -s;
   d. call /bin/sh.
3. Point the kernel build at the initramfs directory via the CONFIG_INITRAMFS_SOURCE option.
4. Compile your kernel.
5. Boot off this kernel and you will get a shell prompt with a minimal userland.

These notes are written rather formally; you can fine-tune them the way you desire. UPDATE: Follow this link for some guidelines.
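As a rough sketch of step 2 (assuming the static busybox binary has been copied to /bin/busybox inside the initramfs, and that a /dev directory exists; adjust paths to your layout), the /init script could look like this:

    #!/bin/busybox sh
    # mount the pseudo filesystems
    /bin/busybox mount -t proc  proc  /proc
    /bin/busybox mount -t sysfs sysfs /sys
    /bin/busybox mount -t tmpfs tmpfs /tmp
    # create the applet symlinks (ls, mount, cat, ...) in /bin and /sbin
    /bin/busybox --install -s
    # populate /dev with device nodes
    mdev -s
    # drop to an interactive shell
    exec /bin/sh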
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136180/" ] }
270,869
The line below causes an error (... request body malformed."} ).It is part of a user-data.yml for use with cloud-init as part of the digital ocean api to bootstrap a server upon creation. sed -ie '\$a\ \n\#Add logfile information\nlogfile /var/log/ntp.log' /etc/ntp.conf Basically, it should do the following: append an empty line add a comment at the next line add string tonext line I am loading that user-data.yml from a bash script like following: curl -X POST "https://api.digitalocean.com/v2/droplets" \-d'{"name":"'$droplet_name'","region": "'$region'","size": "'$size'","image": "'$image'","backups":false,"ipv6":false,"private_networking":false,"user_data":"'"$(cat /user-data.yaml)"'", "ssh_keys": '$root_ssh_pub_key'}' \ -H "Authorization: Bearer $api_key" \ -H "Content-Type: application/json" After some hours hacking it all together I simply might be code blind.
Technically you can achieve this.Though, kernel do not have any built-in user-interface. You need to follow steps as: 1. Create a initramfs with static busybox and nothing else.This initramfs will have few necessary directories: like proc, sys, tmp, bin, usr, etc2. Write a "/init" script, whose main job will be: a. mount the procfs,tmpfs and sysfs. b. Call busybox's udev i.e. mdev c. Install the busybox command onto virtual system by executing busybox install -s d. Calling /bin/sh3. Source the initramfs directory while compiling the kernel. You can do so by flag: CONFIG_INITRAMFS_SOURCE4. Compile your kernel.5. Boot off this kernel and you will get the shell prompt with minimal things. Though, I write above notes in a very formal way. You can fine tune it the way you desire. UPDATE: Follow this link for some guidelines.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95506/" ] }
270,872
I used to install mysql 5.6 in this way.. But now.. # echo "deb http://repo.mysql.com/apt/debian/ $(lsb_release -sc) mysql-5.6" >> /etc/apt/sources.list && echo "deb-src http://repo.mysql.com/apt/debian/ $(lsb_release -sc) mysql-5.6" >> /etc/apt/sources.list && apt-get update# apt-get install mysql-server-5.6Reading package lists... DoneBuilding dependency treeReading state information... DonePackage mysql-server-5.6 is a virtual package provided by: mysql-community-server 5.6.29-1debian8 [Not candidate version]E: Package 'mysql-server-5.6' has no installation candidate I need to reinstall mysql 5.6. Have tried this # apt-get install --reinstall mysql-community-serverReading package lists... DoneBuilding dependency treeReading state information... DoneReinstallation of mysql-community-server is not possible, it cannot be downloaded. Originally it was installed with apt-get install mysql-server-5.6
sudo apt install mariadb-server
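For a complete run (hedged; the exact package and service names depend on the Debian release):

    sudo apt update
    sudo apt install mariadb-server
    # the service name differs between releases; one of these should show it running:
    sudo systemctl status mysql
    sudo systemctl status mariadb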
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270872", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83275/" ] }
270,917
I don't want to have to change directory before executing. I use a jar file and need to execute it, so I made a very basic shell script to do that:

    #!/bin/sh
    java -jar TreeForm.jar

I saved it as TreeForm, not TreeForm.sh. Next, I created a symbolic link and made the script executable:

    ln -s /opt/TreeForm/TreeForm /usr/bin/
    chmod +x /opt/TreeForm/TreeForm

The link is created successfully. However, when I run it, I get the error below from Java:

    Error : Unable to access jarfile TreeForm.jar

So it seems the shell script does not run in (or look in) its own directory, but works in the current directory. I don't really want to modify my script to give the full path of TreeForm.jar. Is there a way to find a file relative to the script file without changing the current path or adding a path to the PATH variable? Or is there a variable that holds the path of the script, rather than the current directory?
The shell will not change to another directory unless you tell it to. If you want your script to execute commands in a different directory, call cd from your script. The path to the script is available as "$0" . If the script is a symbolic link, that's the path to the symbolic link. If you want to get the final target of the symbolic link, call realpath (available on most but not all modern unices; it's available on Linux (both GNU and BusyBox), FreeBSD and Solaris 11, but not OSX). cd "$(dirname "$(realpath "$0")")" realpath is a relatively recent addition to GNU coreutils; if your version is too old, you can use readlink -f which is older. For non-GNU systems, if realpath isn't present then readlink may be but it typically only looks through one level of symbolic links, most readlink implementations don't have the -f option to do the full resolution.
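Applied to the wrapper from the question, a minimal sketch (assuming realpath is available and TreeForm.jar sits in the same directory as the real script):

    #!/bin/sh
    dir=$(dirname "$(realpath "$0")")
    exec java -jar "$dir/TreeForm.jar" "$@"

This way the jar is found relative to the script without changing the caller's current directory.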
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/270917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73035/" ] }
270,929
Consider this from the documentation of Bash' builtin exec: exec replaces the shell without creating a new process Please provide a use case / practical example. I don’t understand how this makes sense. I googled and found about I/O redirection . Can you explain it better?
exec is often used in shell scripts which mainly act as wrappers for starting other binaries. For example:

    #!/bin/sh
    if stuff; then
        EXTRA_OPTIONS="-x -y -z"
    else
        EXTRA_OPTIONS="-a foo"
    fi
    exec /usr/local/bin/the.real.binary $EXTRA_OPTIONS "$@"

so that after the wrapper is finished running, the "real" binary takes over and there is no longer any trace of the wrapper script that temporarily occupied the same slot in the process table. The "real" binary is a direct child of whatever launched it instead of a grandchild. You also mention I/O redirection in your question. That is quite a different use case of exec and has nothing to do with replacing the shell with another process. When exec is given no command to run, like so:

    exec 3>>/tmp/logfile

then the I/O redirections on the command line take effect in the current shell process, but the current shell process keeps running and moves on to the next command in the script.
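A sketch combining both uses (paths are hypothetical): the first exec only sets up redirections for the current shell, the second replaces the wrapper with the real program:

    #!/bin/sh
    # from here on, everything this script prints goes to the log file
    exec >>/tmp/mytool.log 2>&1
    echo "wrapper starting at $(date)"
    # replace this wrapper process with the real binary
    exec /usr/local/bin/the.real.binary "$@"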
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26612/" ] }
270,948
I have been following this tutorial – Howto Configure OpenVPN Server-Client on Ubuntu 15.04 –to set up OpenVPN on my Ubuntu 15.04 VM. I have followed it through and through,and am kind of stuck with the client config file. Here is the client.conf file given in the example: dev tunproto udp# IP and Port of remote host with OpenVPN serverremote 111.222.333.444 1194resolv-retry infiniteca /etc/openvpn/keys/ca.crtcert /etc/openvpn/keys/client.crtkey /etc/openvpn/keys/client.keytls-clienttls-auth /etc/openvpn/keys/ta.key 1auth SHA1cipher BF-CBCremote-cert-tls servercomp-lzopersist-keypersist-tunstatus openvpn-status.loglog /var/log/openvpn.logverb 3mute 20 So I am guessing remote IP would be the public IP of my server and that I would need to forward port 1194 on my router. However where I define the ca, cert and key in the conf file, if I was using this on an Android device would I need to change the path to reflect where these files are on the Android device or is the example given correct? I will be generating the client key to be used on both Android and iOS devices using the OpenVPN client. Would this also work on Windows?
exec is often used in shell scripts which mainly act as wrappers for starting other binaries. For example: #!/bin/shif stuff; EXTRA_OPTIONS="-x -y -z"else EXTRA_OPTIONS="-a foo"fiexec /usr/local/bin/the.real.binary $EXTRA_OPTIONS "$@" so that after the wrapper is finished running, the "real" binary takes over and there is no longer any trace of the wrapper script that temporarily occupied the same slot in the process table. The "real" binary is a direct child of whatever launched it instead of a grandchild. You mention also I/O redirection in your question. That is quite a different use case of exec and has nothing to do with replacing the shell with another process. When exec has no arguments, like so: exec 3>>/tmp/logfile then I/O redirections on the command line take effect in the current shell process, but the current shell process keeps running and moves on to the next command in the script.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161948/" ] }
270,953
Using /bin/bash on RHEL 5.8 and I want to automate editing a file. I need the script to search the file and replace a line in the file. Line example: Other lineCurrent date 01121990Other line Search for this line and replace the date string with a predetermined date. I can't echo to a new file and enter in my date since this file will be copied from a server-specific file. Thanks!
    your_date='your desired date'
    sed -i "s/^Current date.*/Current date ${your_date}/" /path/to/file

That's the easiest way. It assumes that the lines containing a date to replace are the only lines that start with 'Current date'. Note that the user running this command must also have permission to edit that file. -i means in-place edit, i.e. you are editing the file directly. ^Current date.* means all lines starting with "Current date" and ending in anything; in other words, the entire line is replaced with what is in the second /.../ part of the sed expression. Double quotes are used around the sed statement so that variables are expanded as variables, not treated as literal strings.
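For example, to take the new date from the system clock instead of typing it (the format string is just a guess at the ddmmyyyy layout shown in the question):

    your_date=$(date +%d%m%Y)
    sed -i "s/^Current date.*/Current date ${your_date}/" /path/to/file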
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94245/" ] }
270,966
Do bash and sh execute a script differently? I executed a shell script with both bash and sh, and got the same output. So what is the difference? Also, when you just run ./executable , does it use bash or sh?
That depends on the system you are running. On many OSes, especially Linux-based ones, sh is a link to bash. In that case there are still some differences in behavior: bash tries to behave more like the traditional Bourne shell when called as sh, but it still accepts most bashisms. On some other OSes, like Debian-based ones, sh is provided by dash, not bash. That makes a much bigger difference, as dash doesn't support bashisms, being designed as a clean POSIX shell implementation. On proprietary OSes, sh is often provided by a POSIX-compliant ksh88 which, like dash, doesn't implement bashisms. On Solaris 10 and older, depending on what your PATH is, sh will likely be a legacy Bourne shell, predating POSIX. In any case, you likely got the same output in your test simply because your script was not using any bash-specific command, option or syntax. When you run ./executable , what shell will run it essentially depends on the shebang written at the beginning of the executable script. That will be bash if the shebang specifies it:

    #!/bin/bash
    ....

If there is no shebang and you call the script from a POSIX-compliant shell, the script should technically be executed by the first sh found in the PATH. Many shell interpreters such as bash, dash and ksh consider themselves POSIX-compliant, so any of them may end up interpreting the script. Note that the SHELL environment variable is not used here.
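A quick way to see a difference is to run a script containing a bashism under both interpreters (sketch; the exact error text from the non-bash shell varies):

    $ cat > t.sh <<'EOF'
    if [[ hello == h* ]]; then echo "pattern matched"; fi
    EOF
    $ bash t.sh
    pattern matched
    $ dash t.sh
    t.sh: 1: [[: not found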
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/270966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139098/" ] }
270,977
In Bash, when specifying command line arguments to a command, what characters are required to be escaped? Are they limited to the metacharacters of Bash: space, tab, | , & , ; , ( , ) , < , and > ?
The following characters have special meaning to the shell itself in some contexts and may need to be escaped in arguments: Character Unicode Name Usage ` U+0060 (Grave Accent) Backtick Command substitution ~ U+007E Tilde Tilde expansion ! U+0021 Exclamation mark History expansion # U+0023 Number sign Hash Comments $ U+0024 Dollar sign Parameter expansion & U+0026 Ampersand Background commands * U+002A Asterisk Filename expansion and globbing ( U+0028 Left Parenthesis Subshells ) U+0029 Right Parenthesis Subshells U+0009 Tab ( ⇥ ) Word splitting (whitespace) { U+007B Left Curly Bracket Left brace Brace expansion [ U+005B Left Square Bracket Filename expansion and globbing | U+007C Vertical Line Vertical bar Pipelines \ U+005C Reverse Solidus Backslash Escape character ; U+003B Semicolon Separating commands ' U+0027 Apostrophe Single quote String quoting " U+0022 Quotation Mark Double quote String quoting with interpolation ↩ U+000A Line Feed Newline Line break < U+003C Less than Input redirection > U+003E Greater than Output redirection ? U+003F Question mark Filename expansion and globbing U+0020 Space Word splitting 1 (whitespace) Some of those characters are used for more things and in more places than the one I linked. There are a few corner cases that are explicitly optional: ! can be disabled with set +H , which is the default in non-interactive shells. { can be disabled with set +B . * and ? can be disabled with set -f or set -o noglob . = Equals sign (U+003D) also needs to be escaped if set -k or set -o keyword is enabled. Escaping a newline requires quoting — backslashes won't do the job. Any other characters listed in IFS will need similar handling. You don't need to escape ] or } , but you do need to escape ) because it's an operator. Some of these characters have tighter limits on when they truly need escaping than others. For example, a#b is ok, but a #b is a comment, while > would need escaping in both contexts. It doesn't hurt to escape them all conservatively anyway, and it's easier than remembering the fine distinctions. If your command name itself is a shell keyword ( if , for , do ) then you'll need to escape or quote it too. The only interesting one of those is in , because it's not obvious that it's always a keyword. You don't need to do that for keywords used in arguments, only when you've (foolishly!) named a command after one of them. Shell operators ( ( , & , etc) always need quoting wherever they are. 1 Stéphane has noted that any other single-byte blank character from your locale also needs escaping. In most common, sensible locales, at least those based on C or UTF-8, it's only the whitespace characters above. In some ISO-8859-1 locales, U+00A0 no-break space is considered blank, including Solaris, the BSDs, and OS X (I think incorrectly). If you're dealing with an arbitrary unknown locale, it could include just about anything, including letters, so good luck. Conceivably, a single byte considered blank could appear within a multi-byte character that wasn't blank, and you'd have no way to escape that other than putting the whole thing in quotes. This isn't a theoretical concern: in an ISO-8859-1 locale from above, that A0 byte which is considered a blank can appear within multibyte characters like UTF-8 encoded "à" ( C3 A0 ). To handle those characters safely you would need to quote them "à" . This behaviour depends on the locale configuration in the environment running the script, not the one where you wrote it. 
I think this behaviour is broken multiple ways, but we have to play the hand we're dealt. If you're working with any non-self-synchronising multibyte character set, the safest thing would be to quote everything. If you're in UTF-8 or C, you're safe (for the moment).
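If the goal is simply to pass an arbitrary string through safely, you can also let bash produce an escaped version for you rather than memorising the table (bash-specific):

    var='a b$c*?(d)'
    printf '%q\n' "$var"    # prints a version of $var quoted/escaped so it can be reused as shell input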
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/270977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
271,000
I'm on debian. I would like to execute an apt-get update (and maybe something else) only if the time since last update is bigger or smaller than a certain amount with a straight command, no cron tricks or similar . Let's assume I know nothing about apt-get previous state , an update could have been never issued since the os installation, or triggered manually 2 mins ago, or issued automatically by unattended-upgrades service. Eg. if(time > 30 min) apt-get updateif(time > 2 days) something else This question is similar to another I found in askubuntu but due the different setup in debian config I can't find a timestamp file informing me when the last update command took place.
The file /var/cache/apt/pkgcache.bin is regenerated each time apt-get update runs (and isn't regenerated otherwise). For example, if you want to run apt-get update only if it hasn't been run in the past hour, you can use

    #!/bin/sh
    last_update=$(stat -c %Y /var/cache/apt/pkgcache.bin)
    now=$(date +%s)
    if [ $((now - last_update)) -gt 3600 ]; then
      apt-get update
    fi

or

    #!/bin/sh
    if [ -z "$(find /var/cache/apt/pkgcache.bin -mmin -60)" ]; then
      apt-get update
    fi

Note that if you run multiple copies of this script almost at the same time, they might all decide to run apt-get update. If that's a concern, use a lock (which is a wholly separate issue).
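For the locking concern, one sketch is to serialise the update through flock from util-linux (the lock file path is arbitrary):

    #!/bin/sh
    if [ -z "$(find /var/cache/apt/pkgcache.bin -mmin -60)" ]; then
        flock -n /var/lock/apt-refresh.lock apt-get update
    fi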
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/271000", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
271,013
From man touch : -f (ignored) But I don't get what is meant by ignored . I've tried following: $ ls -l file-rw-rw-r-- 1 pandya pandya 0 Mar 20 16:17 file $ touch -f file$ ls -l file-rw-rw-r-- 1 pandya pandya 0 Mar 20 16:18 file And noticed that it changes timestamps in spite of -f . So, I want to know what -f stands for, or what it does.
For GNU utilities, the full documentation is in the info page, where you can read: -f Ignored; for compatibility with BSD versions of `touch'. See historic BSD man pages for touch , where -f was to force the touch. If you look at the source of those old BSDs, there was no utimes() system call, so touch would open the file in read+write mode, read one byte, seek back and write it again so as to update the last access and last modification time . Obviously, you needed both read and write permissions ( touch would avoid trying to do that if access(W_OK|R_OK) returned false ). -f tried to work around that by temporarily changing the permissions temporarily to 0666 ! 0666 means read and write permission to everybody. It had to be that as otherwise (like with a more restrictive permission such as 0600 that still would have permitted the touch ) that could mean during that short window, processes that would otherwise have read or write permission to the file couldn't any more, breaking functionality . That means however that processes that would not otherwise have access to the file now have a short opportunity to open it, breaking security . That's not a very sensible thing to do. Modern touch implementations don't do that. Since then, the utime() system call has been introduced, allowing changing modification and access time separately without having to mingle with the content of the files (which means it also works with non-regular files) and only needs write access for that. GNU touch still doesn't fail if passed the -f option, but just ignores the flag. That way, scripts written for those old versions of BSD don't fail when ported to GNU systems. Not much relevant nowadays.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/271013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
271,140
Ideally I'd like a command like this rm --only-if-symlink link-to-file because I have burned myself too many times accidentally deleting the file instead of the symlink pointing to the file. This can be especially bad when sudo is involved. Now I do of course do a ls -al to make sure it's really a symlink and such but that's vulnerable to operator error (similarly named file, typo, etc) and race conditions (if somebody wanted me to delete a file for some reason). Is there some way to check if a file is a symlink and only delete it if it is in one command?
    $ rm_if_link(){ [ ! -L "$1" ] || rm -v "$1"; }

    #test
    $ touch nonlink; ln -s link
    $ rm_if_link nonlink
    $ rm_if_link link
    removed 'link'
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/271140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38485/" ] }
271,151
I have a file formatted like this: train/t/temple/east_asia/00000025.jpg 94train/t/temple/east_asia/00000865.jpg 94...train/s/swamp/00000560.jpg 92train/s/swamp/00000935.jpg 92....train/m/mountain/00000428.jpg 68train/m/mountain/00000126.jpg 68 The last number is the class number. I have 50 different classes, and each class has 1,000 lines. I would like to take a random sample of size N from each class, and store the result in another text file.
Since your lines are grouped by class, you could (with gnu tools) split the file into pieces and use the --fiter option to pipe each piece to shuf to extract N random lines from it: split --filter='shuf -n N ' infile > outfile Note that split defaults to 1000 lines - which is what you need in this particular case. If the requirements change you'll have to pass the number of lines via -l e.g. to split into pieces of 200 lines and extract 30 random lines from each piece: split -l 200 --filter='shuf -n 30' infile > outfile
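If you would rather not depend on the classes being contiguous in the file, a different (hedged) approach is to bucket the lines by the class field first and then sample each bucket; this creates temporary files in the current directory:

    N=30
    awk '{ print > ("class_" $NF ".tmp") }' infile
    for f in class_*.tmp; do shuf -n "$N" "$f"; done > outfile
    rm -f class_*.tmp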
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162057/" ] }
271,211
My system disk usage is like this: # df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/rhel-root 50G 39G 12G 77% /devtmpfs 5.8G 0 5.8G 0% /devtmpfs 5.8G 240K 5.8G 1% /dev/shmtmpfs 5.8G 50M 5.8G 1% /runtmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup/dev/mapper/rhel-home 1.3T 5.4G 1.3T 1% /home/dev/sda2 497M 212M 285M 43% /boot/dev/sda1 200M 9.5M 191M 5% /boot/efitmpfs 1.2G 16K 1.2G 1% /run/user/1200tmpfs 1.2G 16K 1.2G 1% /run/user/1000tmpfs 1.2G 0 1.2G 0% /run/user/0 I have 2 questions about devtmpfs and tmpfs : (1) devtmpfs 5.8G 0 5.8G 0% /devtmpfs 5.8G 240K 5.8G 1% /dev/shmtmpfs 5.8G 50M 5.8G 1% /runtmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup All the above spaces are 5.8G , do they share the same memory space? (2) tmpfs 1.2G 16K 1.2G 1% /run/user/1200tmpfs 1.2G 16K 1.2G 1% /run/user/1000tmpfs 1.2G 0 1.2G 0% /run/user/0 Does each user has his dedicated memory space, not shared space in /run/user partition?
For all the tmpfs mounts, "Avail" is an artificial limit. The default size for tmpfs mounts is half your RAM. It can be adjusted at mount time. ( man mount , scroll to tmpfs ). The mounts don't share the same space, in that if you filled the /dev/shm mount, /dev would not show any more "Used", and it would not necessarily stop you from writing data to /dev (Someone could contrive tmpfs mounts that share space by bind-mounting from a single tmpfs. But that's not how any of these mounts are set up by default). They do share the same space, in that they're both backed by the system memory. If you tried to fill both /dev/shm and /dev , you would be allocating space equal to your physical RAM. Assuming you have swap space, this is entirely possible. However it's generally not a good idea and would end poorly. This doesn't fit well with the idea of having multiple user-accessible tmpfs mounts. I.e. /dev/shm + /tmp on many systems. It arguably would be better if the two large mounts shared the same space. (Posix SHM is literally an interface to open files on a user-accessible tmpfs). /dev/ , /run , /sys/fs/cgroups are system directories. They should be tiny, not used for sizeable data and so not cause a problem. Debian (8) seems to be a bit better at setting limits for them; on a 500MB system I see them limited to 10, 100, 250 MB, and another 5 for /run/lock respectively. /run has about 2MB used on my systems. systemd-journal is a substantial part of it, and by default may grow to 10% of "Avail". ( RuntimeMaxUse option), which doesn't fit my model. I would bet that's why you've got 50MB there. Allowing the equivalent of 5% of physical RAM for log files... personally it's not a big problem in itself, but it's not pretty and I'd call it a mistake / oversight. It would be better if a cap was set on the same order as that 2MB mark. At the moment it suggests the size for /run should be manually set for every system, if you want to prevent death by a thousand bloats. Even 2% (from my Debian example) seems presumptuous.
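The tmpfs ceiling can be changed at mount time or on the fly; for example (as root, sizes purely illustrative):

    mount -o remount,size=2G /dev/shm                 # resize an existing tmpfs
    mount -t tmpfs -o size=512M tmpfs /mnt/scratch    # or mount a new one with an explicit cap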
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85056/" ] }
271,471
I just formatted microSD card, and would like to run a dd command. Unfortunately dd command fails: $ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2dd: /dev/rdisk2: Resource busy$ Everyone on the internet says I need to unmount the disk first. Sure, can do that and move on. But I want to understand why / what exactly in OS X is making the device busy ? How do I diagnose this? So far I tried: Listing open files: $ lsof /dev/disk2$ lsof /dev/disk2s1$ Also: $ lsof /Volumes/UNTITLED$ Listing users working on the file: $ fuser -u /dev/disk2/dev/disk2: $ fuser -u /dev/disk2s1 /dev/disk2s1:$ Also: $ fuser -u /Volumes/UNTITLED$ Check for system messages: $ sudo dmesg | grep disk$ Also: $ sudo dmesg | grep /Volumes/UNTITLED$ My environment Operating system: Darwin Eugenes-MacBook-Pro-2.local 15.3.0 Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64 x86_64 Information about my microSD: diskutil list disk2/dev/disk2 (internal, physical): #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *31.9 GB disk2 1: DOS_FAT_32 UNTITLED 31.9 GB disk2s1 P.S. I'm using OS X 10.11. Update 22/3/2016 . Figured it out. I re-ran the lsof and fuser from above using sudo , and finally got to the bottom of the issue: $ sudo fuser /Volumes/UNTITLED//Volumes/UNTITLED/: 62 282$ And: $ sudo lsof /Volumes/UNTITLED/COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEmds 62 root 8r DIR 1,6 32768 2 /Volumes/UNTITLEDmds 62 root 22r DIR 1,6 32768 2 /Volumes/UNTITLEDmds 62 root 23r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AADmds 62 root 25u REG 1,6 0 999999999 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/journalExclusionmds_store 282 root txt REG 1,6 3277 17 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexGroupsmds_store 282 root txt REG 1,6 8 23 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexCompactDirectorymds_store 282 root txt REG 1,6 312 19 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexTermIdsmds_store 282 root txt REG 1,6 3277 29 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexGroupsmds_store 282 root txt REG 1,6 1024 35 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexCompactDirectorymds_store 282 root txt REG 1,6 312 21 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexPositionTablemds_store 282 root txt REG 1,6 8192 31 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexTermIdsmds_store 282 root txt REG 1,6 2056 22 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexDirectorymds_store 282 root txt REG 1,6 8192 33 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexPositionTablemds_store 282 root txt REG 1,6 8224 34 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexDirectorymds_store 282 root txt REG 1,6 16 16 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexIdsmds_store 282 root txt REG 1,6 65536 48 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/reverseDirectoryStoremds_store 282 root txt REG 1,6 704 24 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexArraysmds_store 282 
root txt REG 1,6 65536 26 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.directoryStoreFilemds_store 282 root txt REG 1,6 32768 28 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexIdsmds_store 282 root txt REG 1,6 65536 36 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexArraysmds_store 282 root txt REG 1,6 65536 38 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.directoryStoreFilemds_store 282 root 5r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AADmds_store 282 root 17u REG 1,6 8192 12 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/psid.dbmds_store 282 root 32r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AADmds_store 282 root 41u REG 1,6 28 15 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/indexState$ From the above it's easy to see that processes called mds and mds_store have created and are holding lots of files on the volume.
Apple court, Apple rules. Try diskutil :

    $ diskutil list
    ...
    # if mounted somewhere
    $ sudo diskutil unmount $device
    # all the partitions (there's also a "force" option, see the manual)
    $ sudo diskutil unmountDisk $device
    # remember zip drives? this would launch them. good times!
    $ sudo diskutil eject $device

(In the case of a disk image, the hdiutil command may also be of interest. You can also click around in Disk Utility.app .)
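Put together for the dd case in the question (device identifiers are examples; double-check with diskutil list before writing to anything):

    $ diskutil list
    $ diskutil unmountDisk /dev/disk2
    $ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2
    $ sudo diskutil eject /dev/disk2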
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/271471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30894/" ] }
271,475
I'm working on a bash script that will split the contents of a text document depending on the data in the line. If the contents of the original file were along the lines of 01 line01 line02 line02 line How can I insert into line 3 of this file using bash to result in 01 line01 linetext to insert02 line02 line I'm hoping to do this using a heredoc or something similar in my script #!/bin/bashvim -e -s ./file.txt <<- HEREDOC :3 | startinsert | "text to insert\n" :update :quitHEREDOC The above doesn't work of course but any recommendations that I could implement into this bash script?
You can use the POSIX tool ex by line number:

    ex a.txt <<eof
    3 insert
    Sunday
    .
    xit
    eof

Or string match:

    ex a.txt <<eof
    /Monday/ insert
    Sunday
    .
    xit
    eof

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ex.html
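If GNU sed is acceptable instead of ex, the same insertion can be scripted without a heredoc (shown only as a sketch of an alternative tool):

    sed -i '3i text to insert' file.txt            # insert before the current line 3
    sed -i '/^02 line/i text to insert' file.txt   # or insert before every line matching a pattern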
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/271475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89568/" ] }
271,541
I am still getting to grips with systemd and have run into something. It's not so much a problem, but I would like to learn more about the way this is. I could not find any reference to this elsewhere. First off, I understand that custom unit files for services should go in /etc/systemd/system . However, it would be nice for management of our servers if the unit files could be located elsewhere. In the documentation, I saw that you may 'link' unit files like so: systemctl link /path/to/servicename.service This will create a link to the above in /etc/systemd/system . You are now able to start/stop this service. On the surface, this seemed like a good way for us to manage our services. However, trying to enable such a 'linked' unit file results in failure: root@test1:/etc/systemd/system# systemctl link /root/myservice.service Created symlink from /etc/systemd/system/myservice.service to /root/myservice.service.root@test1:/etc/systemd/system# systemctl status myservice.service * myservice.service - My Test Service Loaded: loaded (/root/myservice.service; linked; vendor preset: enabled)root@test1:/etc/systemd/system# systemctl enable myservice.serviceFailed to execute operation: No such file or directory Using the exact same unit file, but copied in to /etc/systemd/system instead of linked in, you get: root@test1:/etc/systemd/system# cp -p /root/myservice.service .root@test1:/etc/systemd/system# systemctl daemon-reload root@test1:/etc/systemd/system# systemctl status myservice.service * myservice.service - My Test Service Loaded: loaded (/etc/systemd/system/myservice.service; disabled; vendor preset: enabled)root@test1:/etc/systemd/system# systemctl enable myservice.serviceCreated symlink from /etc/systemd/system/multi-user.target.wants/myservice.service to /etc/systemd/system/myservice.service. From this, it seems that it is not possible to enable linked in unit files to be called at system startup. If this is the case, what is the point of the 'link' functionality? From the docs, it says: link FILENAME Link a unit file that is not in the unit file search paths into the unit file search path. This requires an absolute path to a unit file. The effect of this can be undone with disable. The effect of this command is that a unit file is available for start and other commands although it is not installed directly in the unit search path.
The man page is misleading.

    systemctl link /root/myservice.service
    systemctl enable /root/myservice.service

The first makes it possible for you to do systemctl start myservice . The second makes it possible for myservice to be started automatically (which, as @Julien pointed out, automatically adds the link). I think... I've been trying to wrap my head around this all day.
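In practice that means (using the path from the question; running systemctl daemon-reload after changing unit files does not hurt):

    # start-able by name, but not started at boot:
    systemctl link /root/myservice.service
    systemctl start myservice.service

    # linked *and* wanted by its install target, so it starts at boot:
    systemctl enable /root/myservice.service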
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162346/" ] }
271,651
Given that the Linux environment has lots of packages for manipulating strings (grep, awk, sed, ...), I would like a tool that can extract a capturing group using a php/preg-like syntax. Maybe the closest one is grep -P, but I don't get how it works. Something like cat file.txt | grep -P '/something="([\w]+)"/i' does not seem to give me only the content inside the capturing group. Could someone provide some working examples? Several, please, with some variants and their limits explained! EDIT: I have seen sed used somewhere for this purpose, but I'm still a bit confused about its syntax.
pcregrep -io1 'something="(\w+)"' myfile.txt ( -i for case insensitive matching, -o1 to print the first capture group). GNU grep supports a -P (if built with perl compatible regex support) and -o . However its -o is limited to printing the whole matched portions. You can however use perl look-around operators to work around that: grep -iPo '(?<=something=")\w+(?=")' myfile.txt (that is, a regexp that matches sequence of word component characters provided it follows something=" and is followed by " ). Or with recent enough PCRE: grep -iPo 'something="\K\w+(?=")' myfile.txt (where \K resets the start of the matched string). But if you're going to use perl regexps, you might as well use perl : perl -C -lne 'print for /something="(\w+)"/ig' myfile.txt With GNU or BSD sed , to return only the right-most match per line: sed -nE 's/.*something="(\w+)".*/\1/pi' myfile.txt Portably (as extended regex support and case insensitive matching are non-standard extensions not supported by all sed implementations): sed -n 's/.*[sS][oO][mM][eE][tT][hH][iI][nN][gG]="\([[:alnum:]_]\{1,\}\)".*/\1/p' myfile.txt That one assumes uppercase i is I . That means that in locales where uppercase i is İ for instance, the behaviour will be different from the previous solution. A standard/portable solution that can find all the occurrences on a line: awk '{while(match(tolower($0), /something="[[:alnum:]_]+"/)) { print substr($0, RSTART+11, RLENGTH-12) $0 = substr($0, RSTART+RLENGTH-1)}}' myfile.txt That may not work correctly if the input contains text whose lower case version doesn't have the same length (in number of characters). Gotchas: There will be some variations between all those solutions on what \w (and [[:alnum:]_] ) matches in locales other than the C/POSIX one. In any case it should at least include underscore, all the decimal arabic digits and the letters from the latin English alphabet (uppercase and lower case). If you want only those, fix the locale to C. As already mentioned, case insensitive matching is very much locale-dependent. If you only care about a-z vs A-Z English letters, you can fix the locaIle to C again. The . regexp operator, with GNU implementations of sed at least will never match sequences of bytes that are not part of a valid character. In a UTF-8 locale, for instance, that means that it won't match characters from a single-byte charset with the 8th bit set. Or in other words, for the sed solution to work properly, the character set used in the input file must be the same as the one in the user's locale. perl , pcregrep and GNU utilities will generally work with lines of any length, and containing any arbitrary byte value (but note the caveat above), and will consider the extra data after the last newline character as an extra line. Other implementations of those utilities may not. The patterns above are matched in-turn against each line in the input. That means that they can't match more than one line of input. Not a problem for a pattern like something="\w+" that can't span over more than one line, but in the general case, if you want your pattern to match text that may span several lines like something=".*?" , then you'd need to either: change the type of record you work on. grep --null , sed -z (GNU sed only), perl -0 , awk -v RS='\0' (GNU awk and recent versions of mawk only) can work on NUL-delimited records instead of lines (newline delimited records), GNU awk can use any regexp as the record separator (with -v RS='regexp'), perl any byte value (with -0ooo`). 
pcregrep has a -M multiline mode for that. Or use perl 's slurp mode, where the whole input is one single record (with -0777 ). Then, for the perl and pcre ones, beware that . will not match newline characters unless the s flag is enabled, for instance with

    pcregrep -Mio1 '(?s)something="(.*?)"'

or

    perl -C -l -0777 -ne 'print for /something="(.*?)"/gis'

Beware that some versions of grep and pcregrep have had bugs with -z or -M , and regexp engines in general can have built-in limits on the amount of effort they may put into matching a regexp.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
271,659
The question is about special variables. Documentation says: !!:$ designates the last argument of the preceding command . This may be shortened to !$ . ( $_ , an underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. There must be some difference I cannot catch, because: $ echo "hello" > /tmp/a.txt$ echo "!$"echo "/tmp/a.txt"/tmp/a.txt$ echo "hello" > /tmp/a.txt$ echo $_hello What is the difference?
!$ is a word designator of history expansion; it expands to the last word of the previous command in history . In other words, the last word of the previous entry in history. This word is usually the last argument to the command, but not in case of redirection. In: echo "hello" > /tmp/a.txt the whole command 'echo "hello" > /tmp/a.txt' appeared in history, and /tmp/a.txt is the last word of that command. _ is a shell parameter; it expands to the last argument of the previous command. Here, the redirection is not a part of arguments passed to the command, so only hello is the argument passed to echo . That's why $_ expanded to hello . _ is no longer one of shell standard special parameters . It works in bash , zsh , mksh and dash only when interactive, ksh93 only when two commands are on separated lines: $ echo 1 && echo $_1/usr/bin/ksh$ echo 11$ echo $_1
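A handy interactive use of $_ (illustration only):

    $ mkdir -p /tmp/project/src
    $ cd "$_"       # $_ holds the last argument of the previous command: /tmp/project/src
    $ pwd
    /tmp/project/src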
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/271659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39370/" ] }
271,661
Methods I have tried: https://wiki.gnupg.org/GnomeKeyring https://blog.josefsson.org/tag/keyring/ Removing the GNOME Keyring applications from Startup Applications http://lechnology.com/software/keeagent/installation/#disable-ssh-component-of-gnome-keyring None of these stop this process from being started when I log in: me 1865 0.0 0.0 281816 7104 ? Sl 10:50 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login This stops my Thunderbird from decrypting emails properly. When I kill the process, I can decrypt emails as expected but I don't want to have to do that every time I log in. OS Information: Debian GNU/Linux 8.3 (jessie) Can anyone help?
Actually the gnome-keyring-daemon in several cases is started via X login using the PAM (Pluggable Authentication Modules) files, but there is other ways like autostart files too GnomeKeyring/RunningDaemon . You can look in detail about the integration of PAM on official documentation . But in general you just need to detect which desktop manager are you using and delete the entries on your /etc/pam.d/<desktop_manager> . In my case, I use the lightdm . So I have a PAM file called /etc/pam.d/lightdm which has that contents: ❯ cat /etc/pam.d/lightdm#%PAM-1.0auth include system-login-auth optional pam_gnome_keyring.soaccount include system-loginpassword include system-loginsession include system-login-session optional pam_gnome_keyring.so auto_start Deleting or comment the entries which call the pam_gnome_keyring.so module, located on /lib/security , you can accomplish the full disable of the daemon on login. To be sure, look to /etc/xdg/autostart and ~/.config/autostart for files with the pattern gnome-keyring-*.desktop and append Hidden=true on each file to disable that component as well. How To on antiX 17.1 (based on Debian 'stretch') NOTE: This, or something close to it, should work for most Debian-based systems. For each user for which gnome-keyring-daemon should not start on login... For each service for which there is a file like... /etc/xdg/autostart/gnome-keyring-*.desktop Create a file of the exact same name in: ~/.config/autostart Containing only... [Desktop Entry]Hidden=true Such as... ~/.config/autostart/gnome-keyring-pkcs11.desktop~/.config/autostart/gnome-keyring-secrets.desktop~/.config/autostart/gnome-keyring-ssh.desktop Insure that each file is owned by their respective user and has permissions 644 (rw-r--r--) OPTIONAL: Disable gnome-keyring-daemon processes for 'login' The above per-user changes still allow 1 or 2 gnome-keyring-daemon processes to be started at login. But they will automatically stop after a couple of minutes if no per-user processes are started. Thus, alteration of these /etc/pam.d files is not really necessary but is provided for completeness. Comment out gnome-keyring-daemon lines in the PAM config file for the display manager (antiX uses slim ): /etc/pam.d/slim # auth optional pam_gnome_keyring.so# session optional pam_gnome_keyring.so auto_start Comment out gnome-keyring-daemon lines in the PAM config file: /etc/pam.d/common-password # password optional pam_gnome_keyring.so Reboot
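The per-user autostart overrides from the how-to can be generated in one go (sketch; run it as the affected user):

    mkdir -p ~/.config/autostart
    for f in /etc/xdg/autostart/gnome-keyring-*.desktop; do
        printf '[Desktop Entry]\nHidden=true\n' > ~/.config/autostart/"$(basename "$f")"
    done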
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117923/" ] }
271,714
How is it possible to run multiple commands and background them using bash? For example: $ for i in {1..10}; do wait file$i &; done where wait is a custom binary. Right now I get an error: syntax error near unexpected token `;' when running the above command. Once backgrounded the commands should run in parallel.
The & , just like ; is a list terminator operator. They have the same syntax and can be used interchangeably (depending on what you want to do). This means that you don't want, or need, command1 &; command2 , all you need is command1 & command2 . So, in your example, you could just write: for i in {1..10}; do wait file$i & done and each wait command will be launched in the background and the loop will immediately move on to the next.
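Applied to the loop from the question (calling the custom binary as ./wait here so it is not confused with the shell builtin of the same name), with an optional wait at the end so the script only continues once every job is done:

    for i in {1..10}; do
        ./wait "file$i" &
    done
    wait    # shell builtin: block until all background jobs have finished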
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/271714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82521/" ] }
271,742
A Makefile does not require variable values to be surrounded by quotes. For instance, this is accepted:

    a := ls -l -a > out.txt

My problem: if I want to do something like this:

    a := ls -l -a > out
    b := .txt
    c := $(a)$(b)

and the line defining variable $(a) ends with a whitespace character, then variable $(c) will look like this:

    ls -l -a > out .txt

with whitespace after out! This can cause errors. Is there a way to globally ignore whitespace at the end of line for all Makefile variable values?
No, there's no way to change the way make parses variable definitions. If you can't change the point at which variables are defined, you'll have to change the point where they're used. If you're using GNU make and the variables' values aren't supposed to have significant whitespace inside them, you can use the strip function . c := $(strip $(a))$(strip $(b))
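A self-contained GNU make sketch of the workaround (the space before the comment on the first line deliberately becomes a trailing space in $(a); $(info ...) prints while the makefile is being read, after which make will just complain about having no targets):

    a := ls -l -a > out # trailing space before this comment is part of the value
    b := .txt
    c := $(strip $(a))$(strip $(b))
    $(info [$(c)])    # prints [ls -l -a > out.txt]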
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162504/" ] }
271,749
I'm an undergraduate student and I am looking for a spectrum analyzer (or at least a collection of functions) that will output the frequency range of a sound that is played, as an array.
If you just need a library, GStreamer might be what you need. Otherwise these look pretty good:
Sonic Visualiser
Spek
Spectrum3D
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271749", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162507/" ] }
271,826
How do you make filesystems in OSX? Mac OSX doesn't have the mkfs command.
On BSD-derived Unix systems, newfs is more commonly used than mkfs. Under Mac OS X, you would use newfs_ type as the command, where type is one of hfs , msdos , exfat or udf . There are man pages for all of these. As the other answer mentions, you can use diskutil to create filestems but by using the newfs variants you can set specific filesystem parameters unavailable via diskutil.
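For example, to put a fresh FAT32 filesystem on an SD-card partition (device nodes are illustrative; check diskutil list and unmount the volume first):

    $ diskutil list
    $ diskutil unmountDisk /dev/disk2
    $ sudo newfs_msdos -F 32 -v UNTITLED /dev/disk2s1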
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162551/" ] }
271,930
I have pretty decent bandwidth here, but soon I will need to be abroad with nothing except for a small mobile connection. So I would like to obtain the biggest possible ISO of Debian. In other words, the opposite of the netinst. Is it possible to obtain a Blu-ray like ISO of Debian with ALL the distro packages? Even 25-50 GB ISO file, is just that I will soon only be able to use mobile data and need to do many installs and uninstalls, but I have to use the "cd" as source. I saw there are many DVD ISOs but they are partial and i want everything in a single ISO file. Another option I was considering instead of downloading the ISO filw, would be setup an http server on my notebook and get a full mirror of Debian, then setup the sources.list to obtain files from the internal virtual lan between the vm and the machine. I think the huge ISO option is still the easier and the best for now ;)
You won't find a single ISO image, although you could probably build one. The closest you'll get with existing downloads is three Blu-ray disk images, which you'll need to use jigdo to download; see http://cdimage.debian.org/debian-cd/current/amd64/jigdo-bd/ for details. Building a partial mirror is probably more sensible; you can use apt-mirror for that. A full mirror is overkill for your situation. It's doable of course, but it would take up approximately 300GB (for sources, all and amd64 packages)...
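If you go the partial-mirror route, apt-mirror reads /etc/apt/mirror.list; a minimal sketch for one suite might look like this (release name, components and mirror URL are placeholders to adapt):

    set base_path    /var/spool/apt-mirror
    deb http://httpredir.debian.org/debian jessie main contrib non-free
    deb http://httpredir.debian.org/debian jessie-updates main contrib non-free
    clean http://httpredir.debian.org/debian

    # then run:
    # sudo apt-mirror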
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
271,935
It's easy to generate a strong password quickly using the system dictionary:

    $ for i in {1..4}; do shuf --head-count=1 /usr/share/dict/words; done
    Amelanchier
    whitecup
    ankhs
    antispasmodics

However, this isn't exactly the easiest list of words to remember. Is there a package or file available for getting either the N most used words (for example Simplified English) or a list of words either ordered by popularity or with a popularity index, so I can choose how many to use?
You won't find a single ISO image, although you could probably build one. The closest you'll get with existing downloads is three Blu-ray disk images, which you'll need to use jigdo to download; see http://cdimage.debian.org/debian-cd/current/amd64/jigdo-bd/ for details. Building a partial mirror is probably more sensible; you can use apt-mirror for that. A full mirror is overkill for your situation. It's doable of course, but it would take up approximately 300GB (for sources, all and amd64 packages)...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/271935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }