Dataset columns: source_id (int64, 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict).
167,207
I have a laptop (ThinkPad) with 2 CPUs. Currently I can read the CPU temperatures from the files below with cat(1):

cat /sys/class/thermal/thermal_zone0/temp
cat /sys/class/thermal/thermal_zone1/temp
cat /sys/devices/platform/coretemp.0/hwmon/hwmon1/temp2_input
cat /sys/devices/platform/coretemp.0/hwmon/hwmon1/temp3_input
cat /sys/devices/LNXSYSTM:00/LNXCPU:00/thermal_cooling/subsystem/thermal_zone1/temp
cat /sys/devices/LNXSYSTM:00/LNXCPU:01/thermal_cooling/subsystem/thermal_zone0/temp

My question is: why does the kernel store this information in so many different places, and which one is the "standard" file to read a CPU's temperature? Does this happen because of systemd (I'm using Arch Linux), or do non-systemd Linux distros like Slackware have a different approach?
Actually, the temperature is not stored anywhere. /sys is an in-memory filesystem, and reading from files in /sys invokes code in the kernel that computes values on the fly. The different directories correspond to different ways that the hardware can report temperatures. The temp*_input files have an associated temp*_label that identifies which component's temperature is reported. Locations under /sys tend to vary from kernel version to kernel version (not from distribution to distribution). That's a difficulty that authors of programs that read data in /sys have to live with.
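As an illustration, here is a minimal shell sketch that pairs each temp*_input with its temp*_label; the hwmon path is taken from the question and will differ between machines and kernel versions:

for f in /sys/devices/platform/coretemp.0/hwmon/hwmon1/temp*_input; do
    label="${f%_input}_label"                           # matching label file for this sensor
    printf '%s: %s\n' "$(cat "$label")" "$(cat "$f")"   # value is in millidegrees Celsius
done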
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91040/" ] }
167,216
I have a patch with absolute paths that I wish to use, i.e. the first few lines are as follows.

--- /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml 2014-10-10 18:47:23.000000000 +1100
+++ /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml.mod 2014-11-11 09:44:17.786200477 +1100

However, it fails unless I am in the root directory.

~$ cd
~$ sudo patch -i /tmp/fix_kde_icons.patch -p0
Ignoring potentially dangerous file name /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml
Ignoring potentially dangerous file name /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml.mod
can't find file to patch at input line 3
Perhaps you used the wrong -p or --strip option?
...
~$ cd /tmp
/tmp$ sudo patch -i /tmp/fix_kde_icons.patch -p0
... #same error as above
/tmp$ cd /usr
/usr$ sudo patch -i /tmp/fix_kde_icons.patch -p0
... #same error as above
/usr$ cd /
/$ sudo patch -i /tmp/fix_kde_icons.patch -p0
patching file /usr/share/apps/plasma/packages/org.kde.pager/contents/ui/main.qml

Is there a way to make patch use the absolute path with any working directory?
Looking at the source code of GNU patch, this behavior is built in since version 2.7. As of GNU patch 2.7.1, only relative paths not containing .. are accepted, unless the current directory is the root directory. To apply a patch containing absolute paths, you can use:

(cd / && sudo patch -p0) <foo.patch

In recent versions of GNU patch, you can simply:

sudo patch -d/ -p0 <foo.patch
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/167216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18887/" ] }
167,228
I would like to print the number of folders (recursive, excluding hidden folders) in a given CWD / current directory. What command, or series of commands can I use to ascertain this information?
This will find the number of non-hidden directories in the current working directory:

ls -l | grep "^d" | wc -l

EDIT: To make this recursive, use the -R option to ls -l:

ls -lR | grep "^d" | wc -l
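As an aside, a sketch of a find-based variant that avoids parsing ls output (assumes GNU find; counts non-hidden directories recursively):

find . -mindepth 1 -type d -not -path '*/.*' | wc -l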
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33011/" ] }
167,266
I thought I was familiar with sftp commands in Unix, and to my knowledge the command used to download a compressed file from the server is:

sftp get filename.tar.gz

But when I tried this command, the file gets emptied; I mean the file size goes to 0 and I don't know why that happens. Is there any other command specifically for .tar.gz type of files? What is the procedure to get these compressed files? I'm using the Mac terminal.
get is a legal sftp command, but it can't be used that way. The correct syntax to download filename.tar.gz to your Mac is:

sftp user@host:/path/to/filename.tar.gz /some/local/folder

(Replace user with the user you use to connect to the remote server; replace host with the remote server name.) There's nothing special about tar.gz files in the above command; it is generic to any extension. To use get you have to enter interactive mode first:

1. Make a connection to the remote server: sftp user@host
2. Wait until >, the sftp prompt, appears on a new line - you can now type your commands.
3. Change the remote path to /path/to: cd /path/to/
4. Change the local path to /some/local/folder: lcd /some/local/folder
5. Use get to download the file: get filename.tar.gz
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/167266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67557/" ] }
167,278
I have installed the software package called evince . It only opens and reads PDF files but does not create them. What software package should I install on boxes that run Debian (or OpenBSD) that can create PDF documents? I do not wish to install OpenOffice or LibreOffice.
You can install cups-pdf, which gives you the ability to print to PDF from any package. The output files normally show up in ~/PDF (as the printer driver has no way to ask you where to store the file it creates). E.g. in Firefox press Ctrl + P and then select the PDF printer. (Screenshot omitted.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66229/" ] }
167,280
I have a command to list system processes by memory usage:

ps -A --sort -rss -o comm,pmem

which lists a table like

COMMAND         %MEM
firefox         28.2
chrome           5.4
compiz           4.8
atom             2.5
chrome           2.3
Xorg             2.3
skype            2.2
chrome           2.0
chrome           1.9
atom             1.9
nautilus         1.8
hud-service      1.5
evince           1.3

I would like to get the total memory share per program instead of per process of the same program. So I could get output like

COMMAND         %MEM
firefox         28.2
chrome          11.6
compiz           4.8
atom             4.4
Xorg             2.3
skype            2.2
nautilus         1.8
hud-service      1.5
evince           1.3

I thought about using awk, which I don't know much about. I ended up with something like:

ps -A --sort -rss -o comm,pmem | awk -F "\t" '
{processes[$0] += $1;}
END {
  for (i in processes) { print i,"\t",processes[i]; }
}'

But it didn't work. How can I correct this?
processes[$0] += $1; uses the whole line as the key in your associative array, which is not unique. You must use $1, the command name, as the key. Try:

$ ps -A --sort -rss -o comm,pmem | awk '
NR == 1 { print; next }
{ a[$1] += $2 }
END {
  for (i in a) {
    printf "%-15s\t%s\n", i, a[i];
  }
}'

If you want to sort the output by the second field, try:

$ ps -A --sort -rss -o comm,pmem | awk '
NR == 1 { print; next }
{ a[$1] += $2 }
END {
  for (i in a) {
    printf "%-15s\t%s\n", i, a[i] | "sort -rnk2";
  }
}'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15519/" ] }
167,300
I have a question regarding the awk / sed operators. I have a big file which has the following set of lines repeated:

Expression loweWallrhoPhi : sum=-6.97168e-09
Expression leftWallrhoPhi : sum=6.97168e-09
Expression lowerWallPhi : sum=-5.12623e-12
Expression leftWallPhi : sum=5.12623e-12
Expression loweWallrhoUSf : sum=-6.936e-09
Expression leftWallrhoUSf : sum=6.97169e-09
Expression lowerWallUSf : sum=-5.1e-12
Expression leftWallUSf : sum=5.12624e-12

I want to extract the value after the keyword sum in each case into a separate file. Is it possible to do so at one go?
With grep:

grep -oP 'sum=\K.*' inputfile > outputfile

grep with the -P (perl-regexp) parameter supports \K, which is used to ignore the previously matched characters.

With awk:

awk -F"=" '{ print $NF; }' inputfile > outputfile

In awk, the variable NF holds the total number of fields in the current record/line, which is also the number of the last field, so $NF is the value of the last field.

With sed:

sed 's/^.*sum=//' inputfile > outputfile

This replaces all characters (.*) from the start of the line (^) up to and including the last sum= with nothing.

Result:

-6.97168e-09
6.97168e-09
-5.12623e-12
5.12623e-12
-6.936e-09
6.97169e-09
-5.1e-12
5.12624e-12

With cut:

cut -d'=' -f2 inputfile > outputfile

If you want to save equal values into the same file, each value to its own file, with awk you can do:

awk -F"=" '{ print $NF > ($NF); }' inputfile
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91092/" ] }
167,303
zsh: exec format error... This is the error I was getting when trying to execute a large application. I am using Red Hat Linux. What can I do to solve this?
The file that you're running has been given the execute permission, but it isn't in a format that the kernel understands, so it can't be executed on your machine. Run file /path/to/the/executable to see what kind of a file it is. This could be an archive that you're supposed to extract, or an executable for a different architecture (e.g. a 64-bit executable on a 32-bit system), or anything else really.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91095/" ] }
167,355
I'd like to start top with sorting set to resident memory size, instead of the default CPU usage. I don't see a way to do that from command line arguments or startup file. Obviously I can't echo "Oq" | top either since I'd prevent top from using the tty. Is there a way to do this? Update: I run top on Linux (recent Ubuntu and Debian, 3.x kernels), installed e.g. as 'procps 1:3.2.8-11ubun', though I suppose that the column ordering functionality might be pretty cross-platform.
top -M sorts by resident memory usage. From the man page:

M : sort tasks by resident memory usage.

This is the version of top on my system:

$ top -v
top: procps version 3.2.7

If your Linux distribution supports the -M flag, you could use it as mentioned here. However, if your top doesn't support the -M flag, you could launch your top command and get into interactive mode by typing h to check the sort field. (I assume it is the same across various distributions.) On my system (rather, with the top version of my system), I could type F or O to select the sorting field, and the key Q of my top version lets me sort on resident memory. If you want to save your configuration you could do something as mentioned by slm here.

Saving configuration

You can use Shift + W to save your changes so they're the defaults:

W : Write configuration file

The file is stored at $HOME/.toprc and looks like this:

$ more .toprc
RCfile for "top with windows"       # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=1.000, Curwin=2
Def fieldscur=AEHIoqTWKNMBcdfgjpLrsuvyzX
    winflags=129016, sortindx=19, maxtasks=0
    summclr=2, msgsclr=5, headclr=7, taskclr=7
Job fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
    winflags=63416, sortindx=13, maxtasks=0
    summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
    winflags=65464, sortindx=13, maxtasks=0
    summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr fieldscur=ABDECGfhijlopqrstuvyzMKNWX
    winflags=65464, sortindx=12, maxtasks=0
    summclr=3, msgsclr=3, headclr=2, taskclr=7

See section 5 of the man page, "5. FILES", for more details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3633/" ] }
167,363
I have a question about Unix and Linux and their licenses. If you choose to make an operating system based on the Linux kernel then you have to distribute it for free under the GPL License, but if you choose to make an OS based on the Unix kernel (example: an OS based on FreeBSD) do you have permission to make it closed-source and to take copyrights making it a proprietary software distributing it non-free? So if somebody chose to make an OS based on FreeBSD can they sell it as their own modified version, taking copyrights or something like that? This question arose because I know that Mac OS X is based on FreeBSD and it should have used FreeBSD licenses, and OS X is a non-free, closed-source proprietary software. So, can you do that with Unix? Or does Apple have some sort of "agreement"?
> If you choose to make an operating system based on the Linux kernel then you have to distribute it for free under the GPL License,

That's not quite true. You can make an OS based on the Linux kernel with no constraint whatsoever, as long as you keep it for yourself. If you distribute an OS based on the Linux kernel, then you have to distribute the source code of the kernel (or any other part where you've used code from the Linux kernel). You don't have to distribute the rest. For example, most Linux distributions include some proprietary software; the GNU GPL doesn't constrain software that is distributed together with software covered by the GPL.

> but if you choose to make an OS based on the Unix kernel

There is no such thing as "the Unix kernel", not anymore. There are many Unix kernels, of which the Linux kernel is one. Some of them are based on the original Unix from Bell Labs (Solaris, HP-UX), others are not (*BSD, Linux, MINIX).

> (example: an OS based on FreeBSD) do you have permission to make it closed-source and to take copyrights making it a proprietary software distributing it non-free?

FreeBSD code comes under a BSD license which is extremely liberal and includes the right to distribute proprietary software based on the BSD-licensed software. FreeBSD is not derived from the original Unix product, which was a commercial product. (BSD was originally companion software for a commercial Unix, and eventually they rewrote all the parts under a free license.)

> So if somebody chose to make an OS based on FreeBSD can they sell it as their own modified version, taking copyrights or something like that? This question arose because I know that Mac OS X is based on FreeBSD and it should have used FreeBSD licenses, and OS X is a non-free, closed-source proprietary software.

Yes, the FreeBSD license allows that.

> So, can you do that with Unix? Or does Apple have some sort of "agreement"?

You can't do that with the original Unix product, but that hasn't existed as a product for a long time (and there never really was a single Unix product except at the very beginning). You can do that with the Linux kernel (and with the GNU userland, too), as long as you distribute the sources for the GPL parts that you distribute (including your modifications if you modified the sources); you can keep the source of independent components (separate programs and libraries) for yourself. You can do that with FreeBSD, with basically no constraint.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91138/" ] }
167,368
This question is merely for idle curiosity, but I suspect others will be curious as well. Searching through errno.h (of Linux 2.6) I found ENOANO "No Anode". There is no sign of a "No cathode" error. Looking through kernel source concordances, it doesn't seem to be used by a device called an anode, only as a deliberately whacky error code by some obscure device drivers. Googling revealed nothing of interest. Is it maybe a joke? Is it defined in a standards document such as POSIX, but of no use?
ENOANO appeared in Linux 0.97, which was released on 1992-08-01. For a very long time, it wasn't used anywhere; it's since then been used now and then in some drivers as "I didn't know what error code to use". It's now only in uapi/asm-generic/errno.h (i.e. in the header files for userland programs), but it was moved there automatically, so that's no indication of whether anybody cares about it.

The errno.h header in 0.97 got some attention because it is one of the files that SCO claimed was copied from Unix SVR4. At the time of the SCO claims, Linus Torvalds didn't remember how that file had been assembled; he later found that it had been generated from values known by libc 2.2.2. This was a C library for Linux, distributed with a port of GCC for Linux. That library would probably have included error codes from all kinds of unix variants that were around at the time.

Stéphane Chazelas found that the term "anode" was used in Convergent/Burroughs Unix (CENTIX) as a synonym of inode. I found another book (from 1993) mentioning "anode" as a variant of "inode", but other than that, it seems to have been pretty obscure even then. The Solaris errno.h confirms the Convergent origin: it lists ENOANO in a section titled "Convergent Error Returns" (together with a few other error codes with esoteric descriptions but at least vaguely comprehensible, like "invalid exchange", "exchange full" or "invalid slot", which a few more drivers use).

So ENOANO probably indicated that either the kernel had run out of memory for inodes, or that the filesystem's inode table was full, in some commercial Unix in the 1980s. That Unix is now forgotten, its terminology is now forgotten, and due to some quirk the error code has stayed around. At least it's not "lp0 on fire".
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85229/" ] }
167,375
I feel like this should be a really simple thing to do, but I don't know how to do it. I recently updated the config file for youtube-dl and I also upgraded to the latest version. The update message says to restart the program to complete the upgrade, but I cannot for the life of me figure out how. The documentation doesn't give a command to do so, and running service youtube-dl restart returns that the service cannot be found. I installed it using the manual installation method and thus upgraded using youtube-dl -U . I'm on Ubuntu Server 14.10.
There is no permanently running service. The message given after the update is kind of confusing, but it basically means that if you have an instance of youtube-dl running currently, it must be restarted to benefit from the update. Since it rarely takes longer to download a video than it does to update, I suspect the number of people who actually 'need' to restart anything is close to zero.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167375", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74725/" ] }
167,386
My Xorg session is on tty1, and if I want to issue a command from a tty (because I cannot do it from the Xorg session for some reason), I press Ctrl + Alt + F2, for example, and type a command. But I cannot start graphical applications from any tty except the first, since there is no Xorg session in it. So I am curious: how can I switch to tty1, where the Xorg session is running, and back?
You can switch tty as you have described by pressing:

Ctrl + Alt + F1 : (tty1, X is here on Ubuntu 17.10+)
Ctrl + Alt + F2 : (tty2)
Ctrl + Alt + F3 : (tty3)
Ctrl + Alt + F4 : (tty4)
Ctrl + Alt + F5 : (tty5)
Ctrl + Alt + F6 : (tty6)
Ctrl + Alt + F7 : (tty7, X is here when using Ubuntu 17.04 and below)

You might also be able to use Alt + Left/Right. Note that different distros assign these differently. RHEL 6, for example, assigns the X server to tty1 and a "dumb terminal" / "console" to tty2-7, while RHEL 5 assigns consoles to tty1-6, and X.Org to tty7. Some X.Org setups also make switching to any random console more difficult; RHEL 5.5, for example, has a dedicated X.Org key to switch to tty1, and from there you can get to tty2-6 more easily. Related: What is the difference between shell, console, and terminal?
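As a side note, a sketch of doing the same switch from a script or a remote shell, using chvt from the kbd package (requires root; the VT numbers are just examples, use whichever one your X session actually occupies):

sudo chvt 1    # jump to tty1
sudo chvt 2    # jump back to tty2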
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17397/" ] }
167,398
How can I make a system turn itself off and back on at different times? For example, I would have my "server" turn off at 4 A.M. and then turn back on at 5 A.M. every day. Is this possible? I am using a Raspberry Pi with the most recent version of Raspbian.
You can suspend or hibernate your system and then automatically wake it up with the rtcwake command. For example, to suspend (to RAM) and resume in 60 seconds, do:

rtcwake -s 60 -m mem

To hibernate (suspend to disk) in one hour from now and resume in two hours:

sleep 3600; rtcwake -s 3600 -m disk

You can also wake up the system at a given time with the -t option, which takes seconds since 1970 as an argument. Run man rtcwake for more info.
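For instance, a sketch of the -t form, assuming GNU date is available to compute the epoch timestamp; it hibernates now and wakes at the example time of 5 A.M.:

sudo rtcwake -m disk -t "$(date -d 'tomorrow 05:00' +%s)"

The 'tomorrow 05:00' target is only illustrative; a true power-off (rather than suspend) would additionally depend on rtcwake -m off or BIOS wake support on your board.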
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89360/" ] }
167,416
I'll start by saying that I think this problem is a little less innocent than it sounds. What I need to do: check for a folder within the PATH environment variable. It could be at the start or somewhere after. I just need to verify that that folder is there. Example of my problem - let's use /opt/gnome.

SCENARIO 1: folder is not at the beginning of PATH.

# echo "$PATH"
/sbin:/usr/sbin:/opt/gnome:/var/opt/gnome
# echo "$PATH" | grep ":/opt/gnome"
/sbin:/usr/sbin:/opt/gnome:/var/opt/gnome

Note that the grep needs to be specific enough so that it doesn't catch /var/opt/gnome. Hence the colon.

SCENARIO 2: folder is at the beginning of PATH.

# echo "$PATH"
/opt/gnome:/sbin:/usr/sbin:/var/opt/gnome
# echo "$PATH" | grep "^/opt/gnome"
/opt/gnome:/sbin:/usr/sbin:/var/opt/gnome

This is my problem - I need to search for either a colon or a start-of-line with this folder. What I would like to do is one of these two bracket expressions:

# echo $PATH | grep "[^:]/opt/gnome"
# echo $PATH | grep "[:^]/opt/gnome"

BUT [^ and [: have their own meanings. Therefore, the two commands above do not work. Is there a way I can grep for these two scenarios in one command?
If you're checking the content of the PATH environment variable, as opposed to looking for something in a file, then grep is the wrong tool. It's easier (and faster, and arguably more readable) to do it in the shell. In bash, ksh and zsh:

if [[ :$PATH: = *:/opt/gnome:* ]]; then
  : # already there
else
  PATH=$PATH:/opt/gnome
fi

Portably:

case :$PATH: in
  *:/opt/gnome:*) :;; # already there
  *) PATH=$PATH:/opt/gnome;;
esac

Note the use of :$PATH: rather than $PATH; this way, the component is always surrounded by colons in the search string even if it was at the beginning or end of $PATH. If you're searching through a line of a file, then you can use the extended regexp (i.e. requiring grep -E) (^|:)/opt/gnome($|:) to match /opt/gnome but only if it's either at the beginning of a line or following a colon, and only if it's either at the end of the line or followed by a colon.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167416", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88594/" ] }
167,422
I'm new to shell scripting. I'd like to know if there's a command similar to "echo" for displaying text in a terminal, but instead of simply displaying it immediately, it actually types it, like if someone was actually typing on the terminal? I'd also appreciate it if someone could point me to pages explaining simple scripting like menus and such.
Here is a pure bash solution:

string='foo bar base'
for ((i=0; i<=${#string}; i++)); do
  printf '%s' "${string:$i:1}"
  sleep 0.$(( (RANDOM % 5) + 1 ))
done

${#variable} is the length of the string. printf can replace echo to display the string and format the output: %s tells printf to display a string without a newline (\n). ${string:$i:1} is a bash parameter expansion used to display only a specific letter from the string. $(( )) is bash arithmetic; $(( (RANDOM % 5) + 1 )) gives a RANDOM integer from 1 to 5.

Bonus

This is a function to use with an argument:

matrix(){
  tput setaf 2 &>/dev/null # green powaaa
  for ((i=0; i<=${#1}; i++)); do
    printf '%s' "${1:$i:1}"
    sleep 0.$(( (RANDOM % 5) + 1 ))
  done
  tput sgr0 &>/dev/null
}
matrix 'foo bar base'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91175/" ] }
167,451
I'm trying to determine which process is using a large number of huge pages, but I can't find a simple Linux command (like top) to view the huge page usage. The best I could find was:

$ cat /sys/devices/system/node/node*/meminfo | fgrep Huge
Node 0 HugePages_Total:   512
Node 0 HugePages_Free:    159
Node 0 HugePages_Surp:      0
Node 1 HugePages_Total:   512
Node 1 HugePages_Free:      0
Node 1 HugePages_Surp:      0

which tells me at the granularity of nodes where the huge pages are in use, but I would like to see the huge page usage per process. I wouldn't mind iterating over all processes and cat-ing some /sys special device to get this information. A similar question here got no responses: https://stackoverflow.com/q/25731343/364818 I am not running Oracle, btw.
I found a discussion on ServerFault that discusses this. Basically:

$ sudo grep huge /proc/*/numa_maps
/proc/4131/numa_maps:80000000 default file=/anon_hugepage\040(deleted) huge anon=4 dirty=4 N0=3 N1=1
/proc/4131/numa_maps:581a00000 default file=/anon_hugepage\040(deleted) huge anon=258 dirty=258 N0=150 N1=108
/proc/4131/numa_maps:7f6c40400000 default file=/anon_hugepage\040(deleted) huge
/proc/4131/numa_maps:7f6ce5000000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N0=1
/proc/4153/numa_maps:80000000 default file=/anon_hugepage\040(deleted) huge anon=7 dirty=7 N0=6 N1=1
/proc/4153/numa_maps:581a00000 default file=/anon_hugepage\040(deleted) huge anon=265 dirty=265 N0=162 N1=103
/proc/4153/numa_maps:7f3dc8400000 default file=/anon_hugepage\040(deleted) huge
/proc/4153/numa_maps:7f3e00600000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N0=1

and getting the process name:

$ ps 4131
  PID TTY      STAT   TIME COMMAND
 4131 ?        Sl     1:08 /var/lib/jenkins/java/bin/java -jar slave.jar
$ ps 4153
  PID TTY      STAT   TIME COMMAND
 4153 ?        Sl     1:09 /var/lib/jenkins/java/bin/java -jar slave.jar

will give you an idea of which processes are using huge pages.

$ grep HugePages /proc/meminfo
AnonHugePages:   1079296 kB
HugePages_Total:    4096
HugePages_Free:     3560
HugePages_Rsvd:      234
HugePages_Surp:        0
$ sudo ~/bin/counthugepages.pl 4153
273 huge pages
$ sudo ~/bin/counthugepages.pl 4131
263 huge pages

The sum of free pages (3560) plus the pages from the two processes (273+263) equals 4096. All accounted for! The perl script to sum the dirty= fields is here: https://serverfault.com/questions/527085/linux-non-transparent-per-process-hugepage-accounting/644471#644471
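If you'd rather not fetch the perl script, here is a rough awk sketch of the same per-PID tally; it sums the dirty= fields from numa_maps (the PID is just the example from above):

sudo awk '/huge/ { for (i = 1; i <= NF; i++) if ($i ~ /^dirty=/) { sub(/^dirty=/, "", $i); n += $i } } END { print n+0, "huge pages" }' /proc/4153/numa_maps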
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27865/" ] }
167,454
I'm new to Unix, and in the process of installing a program for my dissertation I must have played with the PATH for the basic Unix commands such as ls. Every time I type ls and the directory name I want to list the files for, it comes up as:

-bash: ls: No such file or directory

What can I do to fix this? Any help is very very much appreciated!
Reset your path right now (i.e. before any sort of logout) with:

export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

It doesn't get your full PATH restored, but basic utilities will be available again. Here's an example of the sort of thing that happens: You had a PATH variable (referred to as $PATH when reading from it), something like:

$ echo $PATH
/home/durrantm/.rvm/gems/ruby-2.0.0-p247/bin:/home/durrantm/.rvm/gems/ruby-2.0.0-p247@global/bin:/home/durrantm/.rvm/rubies/ruby-2.0.0-p247/bin:/home/durrantm/.rvm/bin:/home/durrantm/.autojump/bin:/usr/local/heroku/bin:/home/durrantm/bin:/home/durrantm/.autojump/bin:/usr/local/heroku/bin:/home/durrantm/.autojump/bin:/usr/local/heroku/bin:/home/durrantm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/durrantm/.rvm/bin:/home/durrantm/.rvm/bin:/home/durrantm/.rvm/bin

You tried to add to it, but you accidentally used

PATH=PATH:other_dir

instead of

PATH=$PATH:other_dir

and the result was that your path became PATH:other_dir, and then all the utilities like ls and sed don't work. You can fix the minimal set by doing

export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

which is particularly useful if a login dot file is messing up your PATH. Check your .bashrc and/or .bash_profile files for any PATH changes. As Greg says, you can also just log out (or, as a safer option, open a new window in case opening a new window is broken!) and then echo $PATH from a new window. The same thing happens when you do path= with no values: it 'wipes out' your current path and causes these problems.

As for why cd works and ls doesn't when you have these problems:

cd is a "built-in" command that doesn't need your PATH to find the program.
ls is a program and needs to use PATH to find where it is.

You can see this with:

$ builtin ls
-bash: builtin: ls: not a shell builtin
14:47:29 mdurrant C02MH2DQFD58 /Users/mdurrant
$ builtin cd
14:47:31 mdurrant C02MH2DQFD58 /Users/mdurrant
$

No error means the command is a builtin.

Before 'moving on' (or logging out)... make sure to test any changes (particularly those to .bashrc, .profile, etc. that set PATH) by opening a new window or doing source ~/.bash_profile to run them. It's also good practice to keep the window and editor (when you are changing the .bash_profile file) open in case your changes don't work and prevent you from opening new windows to edit the file. Though you can still use TextEdit or another simple editor to change the file (avoiding the command line and vi, for example). Be careful NOT to reboot if/when your shell is broken, or you may not even be able to log in, and that is really, really bad (without another account to su from, you are hosed). It has happened to me! My 'extra account' fix was a life saver then, and is highly recommended for all (do it now!).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91207/" ] }
167,490
Recently I had to clean up a hacked server. The malicious process would appear as "who" or "ifconfig eth0" or something like that in "ps aux" output, even though the executable was just a jumble of letters, which was shown in /proc/[pid]/status. I'm curious as to how the process managed to mask itself like that.
Manipulating the name in the process list is a common practice. E.g. I have the following in my process listing:

root      9847  0.0  0.0  42216  1560 ?   Ss   Aug13   8:27 /usr/sbin/dovecot -c /etc/dovecot/d
root     20186  0.0  0.0  78880  2672 ?   S    Aug13   2:44  \_ dovecot-auth
dovecot  13371  0.0  0.0  39440  2208 ?   S    Oct09   0:00      \_ pop3-login
dovecot   9698  0.0  0.0  39452  2640 ?   S    Nov07   0:00      \_ imap-login
ericb     9026  0.0  0.0  48196  7496 ?   S    Nov11   0:00      \_ imap [ericb 192.168.170.186]

Dovecot uses this mechanism to easily show what each process is doing. It's basically as simple as manipulating the argv[0] parameter in C. argv is an array of pointers to the parameters with which the process has been started. So a command ls -l /some/directory will have:

argv[0] -> "ls"
argv[1] -> "-l"
argv[2] -> "/some/directory"
argv[3] -> null

By allocating some memory, putting some text in that memory, and then putting the address of that memory in argv[0], the process name shown will have been modified to the new text.
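You can observe the same effect from a shell without writing C: bash's exec -a sets argv[0] for the program it runs. A quick sketch:

bash -c 'exec -a totally-not-sleep sleep 60' &
ps -o pid,comm,args -p $!
# args shows "totally-not-sleep 60", while comm still shows "sleep"

(comm comes from the kernel's record of the executable name, which is why /proc/[pid]/status still exposed the real binary name on the hacked server.)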
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43173/" ] }
167,491
I got the task of logging into some 300 devices, executing the following commands, and then copying the output into one file. I can use SSH as a login method and I can give a predefined username and password. Once we are in the device, I need to log into expert mode and provide the predefined password:

$ expert
>>> provide password
$ lspci | egrep -i --color 'network|ethernet'

Then copy the output to a file. I prepared the following script:

#!/bin/bash
username=XXXX
passwd=XXX
cd /tmp
for host in `cat servers.txt`; do
  ssh $username@$host $passwd;
  expert
  echo "### $host ###" >> output.txt
  lspci | egrep -i --color 'network|ethernet' >> output.txt
done

After prompting for the password, it gives me the following:

Running commands is not allowed.
./fibertest.sh: line 9: expert: command not found

It seems like it is not running the commands on the remote machine, but on the local one.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91228/" ] }
167,497
Kubuntu is part of the Ubuntu project. Is it also compatible with all Debian software? And is all software made for (or certainly compatible with) Ubuntu (i.e. Software Center, Steam, Blender, Eclipse, etc...) compatible too? I'm asking because I'd like to install it alongside my Windows 8 (not 8.1) partition. Do you have any special recommendations for this? Should I upgrade to 8.1 first?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91232/" ] }
167,519
This code generates the error line 3: [: missing `]'. Why am I getting such an error?

#!/bin/bash
read x
if [ $x == "Y" ] || [ $x == "y"]
then
  echo "YES"
else
  echo "NO"
fi

Thanks in advance.
You need to add a space between " and ]:

$ ./test.sh
Y
YES
$ cat test.sh
#!/bin/bash
read x
if [ $x == "Y" ] || [ $x == "y" ]
then
  echo "YES"
else
  echo "NO"
fi

Cheers.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91243/" ] }
167,527
Upgraded Tomcat 6 to 8 on CentOS 7. I get this error in the logs:

/usr/local/ctera/apache-tomcat-8.0.14/bin/catalina.sh: line 421: -Djava.endorsed.dirs=/usr/local/ctera/apache-tomcat-8.0.14/endorsed: No such file or directory

This is the only entry in the logs. The directory didn't exist, so I created it, with permission 777. I still get the same error. Tomcat 6 did not produce such an error. I read a little about the endorsed directory - http://tomcat.apache.org/tomcat-8.0-doc/class-loader-howto.html - and it shouldn't be a critical issue, but it is. What should I do?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67047/" ] }
167,533
I want to build a Debian package with git-buildpackage (gbp). I passed all the steps, but at the end, when I entered gbp buildpackage, this error appeared:

gbp:error: upstream/1.5.13 is not a valid treeish

What does it mean, and what should I do?
The current tag/branch you are on is not a Debian source tree; it doesn't contain the debian/ directory in its root. This is evident because you are on an "upstream/" branch, a name used for the pristine upstream source tree in git repositories. Try using the branch stable, testing or unstable, or any branch that starts with debian, or a commit tagged using the Debian versioning scheme.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81132/" ] }
167,545
I searched the mysql.user table and did not find any user named mysql. root is the default username. I am curious to know whether a mysql user exists or not. If yes, what is the purpose of this user? Does it exist at the OS level instead of the database level?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54198/" ] }
167,559
There is program I use, say xyz , that has undesirable effects if I run the command bare with no arguments. So I'd like to prevent myself from accidentally running the xyz command with no arguments but allow it to run with arguments. How can I write a shell script so that when calling xyz with no arguments it will print an error message and otherwise pass any and all arguments to the xyz program?
You can check the special variable $#:

if [ $# -eq 0 ]; then
    echo "No arguments provided!"
    exit 1
fi
/usr/bin/xyz "$@"

Then, add an alias to your ~/.bashrc:

alias xyz="/path/to/script.sh"

Now, each time you run xyz, the alias will be launched instead. This will call the script, which checks whether you have any arguments and only launches the real xyz command if you do. Obviously, change /usr/bin/xyz to whatever the full path of the command is.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6250/" ] }
167,582
I've noticed this on occasion with a variety of applications. I've often thought it was because the output was cancelled early (Ctrl+C, for example) or something similar, and zsh is filling in a newline character. But now curiosity has gotten the best of me, since it doesn't seem to do this in bash. (Screenshots of the zsh and bash sessions omitted.) The Sequence program is something I pulled from a book while reading on Java certifications, and I just wanted to see if it would compile and run. I did notice that it does not use the println() method from the System.out package/class. Instead it uses plain old print(). Is the lack of a newline character the reason I get this symbol?
Yes, this happens because it is a "partial line". By default, zsh goes to the next line to avoid covering it with the prompt. When a partial line is preserved, by default you will see an inverse+bold character at the end of the partial line: a "%" for a normal user or a "#" for root. If set, the shell parameter PROMPT_EOL_MARK can be used to customize how the end of partial lines is shown.
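For example, a sketch for ~/.zshrc that replaces the mark with a plain symbol, or hides it entirely (the variable undergoes prompt expansion, so %% yields a literal percent sign):

PROMPT_EOL_MARK='%%'   # show a plain, unstyled percent sign
# PROMPT_EOL_MARK=''   # or show nothing at all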
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/167582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47355/" ] }
167,585
As user I have access to a remote linux machine where I can: sudo su - other_user Among other things, this allows me to add my own public key to authorized_keys for other_user , which effectively also lets me ssh into this machine as other_user . What's interesting is that, as user , I am not allowed to do the following directly, since I do not (nor I am supposed to) know other_user 's password: su - other_user Does this security policy make sense? What difference does it make to not know the password for other_user if I can ssh or sudo su - into other_user ? More generally, what differences exist in terms of what you can and cannot do depending on how you log in as other_user ?
Does this security policy make sense? Yes and no. No in the sense that it does not protect other_user's data. But it does protect other_user's password. This may seem inconsequential, but it means there is at least one important thing you cannot do: change the password so that the person who normally uses the account can't access it. Another consequence of using su and keeping passwords secret is that /var/log/auth.log should contain stuff like this:

Nov  8 08:08:10 ...(su:session): session opened for user other_user by (uid=1066)
[...]
Nov  8 09:38:10 ...(su:session): session closed for user other_user

Presuming your uid is 1066 and your password is also secret, if anything unsavoury happened to other_user's stuff during those 90 minutes, there is a strong case to be made that you did it. I've worked at places where login details like this were used to identify people doing things they knew they were not supposed to do.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
167,586
I know this is not an exciting question, but I don't understand why some programs need program -h and others program --help; sometimes it is very tedious to figure out which one to use.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78795/" ] }
167,603
I have been working on a board with an embedded ARM processor. To get it to boot, I have to add a bootloader, a Linux kernel, and a disk image containing the root file system. This disk image is available on the internet for the target board (ZedBoard). After compiling the kernel with all the necessary drivers activated, I found that many drivers are available in /lib/modules/[kernel_number]. I am a little bit confused as to how this whole thing works. Are drivers loaded by the kernel? If so, why are they already a part of the rootfs? Or does the kernel overwrite them with the ones compiled into it?
It's pretty straightforward, although we should distinguish between "driver" and "module". A driver may or may not be a module. If it is not, then it is built into the kernel loaded by the bootloader. If it is a module, then it is in a filesystem hierarchy rooted at /lib/modules/[kernel-release].[1] Note that it is possible to boot a kernel together with a small preliminary root filesystem (an "initramfs") which may contain such a repository as well. This is normal with generic kernels so they can decide what modular drivers they need to load in order to access the real filesystem, since if they can't do that, they can't access any modules there.

Are drivers loaded by the kernel? Yes.

If so, why are they already a part of the rootfs? Where else should they be stored before they are loaded? The kernel doesn't contain the rootfs within itself (except WRT some forms of initramfs); it's just the gatekeeper.

Does the kernel overwrite them with the ones compiled in it? No. If you compile a driver in, the kernel will not bother to check /lib/modules for it. I'm not sure what happens if you then ask it explicitly to load such a driver anyway; presumably it will just say no.

[1] As Celada hints at with $(uname -r), this release string is not necessarily just the version number. You can have multiple kernels with the same version and different release strings, therefore separate module stores. Likewise, you can have multiple kernels with the same release string, therefore the same module store.
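A quick sketch to see this layout on a running system (paths assume a standard modular kernel):

ls /lib/modules/"$(uname -r)"/kernel/drivers   # modular drivers shipped for this kernel
lsmod | head                                   # modules currently loaded by the kernel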
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91290/" ] }
167,610
I'm creating a shell script that would take a filename/path to a file and determine if the file is a symbolic link or a hard link. The only thing is, I don't know how to see if they are a hard link. I created 2 files, one a hard link and one a symbolic link, to use as a test file. But how would I determine if a file is a hard link or symbolic within a shell script? Also, how would I find the destination partition of a symbolic link? So let's say I have a file that links to a different partition, how would I find the path to that original file?
Jim's answer explains how to test for a symlink: by using test's -L test. But testing for a "hard link" is, well, strictly speaking not what you want. Hard links work because of how Unix handles files: each file is represented by a single inode. Then a single inode has zero or more names or directory entries or, technically, hard links (what you're calling a "file"). Thankfully, the stat command, where available, can tell you how many names an inode has. So you're looking for something like this (here assuming the GNU or busybox implementation of stat):

if [ "$(stat -c %h -- "$file")" -gt 1 ]; then
  echo "File has more than one name."
fi

The -c '%h' bit tells stat to just output the number of hardlinks to the inode, i.e., the number of names the file has. -gt 1 then checks if that is more than 1. Note that symlinks, just like any other files, can also be linked to several directories, so you can have several hardlinks to one symlink.
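On BSD and macOS, where stat takes different options, an equivalent sketch would be:

if [ "$(stat -f %l "$file")" -gt 1 ]; then
  echo "File has more than one name."
fi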
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/167610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91166/" ] }
167,631
So let's say I have a symbolic link of a file in my home directory to another file on a different partition. How would I find the target location of the linked file? By this, I mean, let's say I have file2 in /home/user/ ; but it's a symbolic link to another file1 . How would I find file1 without manually having to go through each partition/directory to find the file?
Use readlink:

readlink -f /path/file

(this gives the last target of your symlink if there's more than one level). If you just want the next level of symbolic link, use:

readlink /path/file

You can also use realpath on modern systems with GNU coreutils (e.g. Linux), FreeBSD, NetBSD, OpenBSD or DragonFly:

realpath /path/file

which is similar to readlink -f.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/167631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91166/" ] }
167,639
I have a file like this:

a AGTACTTCCAGGAACGGTGCACTCTCC
b ATGGATTTTTGGAGCAGGGAGATGGAATAGGAGCATGCTCCAT
c ATATTAAATGGATTTTTGGAGCAGGGAGATGGAATAGGAGCATGCTCCATCCACTCCACAC
d ATCAGTTTAATATCTGATACGTCCTCTATCCGAGGACAATATATTAAATGGA
e TTTGGCTAAGATCAAGTGTAGTATCTGTTCTTATAAGTTTAATATCTGATATGTCCTCTATCTGA

I want to make a file a.seq which contains the sequence AGTACTTCCAGGAACGGTGCACTCTCC. Similarly, b.seq contains ATGGATTTTTGGAGCAGGGAGATGGAATAGGAGCATGCTCCAT. In short, column 1 should be used as the output file name with extension .seq, and the file should then contain the corresponding column 2 sequence. I can do this by writing a perl script, but anything on the command line would be helpful. Hope to hear soon.
Using awk:

awk '{printf "%s\n", $2>$1".seq"}' file

From the nominated file, print the second field in each record ($2) to a file named after the first field ($1) with .seq appended to the name. As Thor points out in the comments, for a large dataset, you may exhaust the file descriptors, so it would be wise to close each file after writing:

awk '{printf "%s\n", $2>$1".seq"; close($1".seq")}' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63486/" ] }
167,642
I have a very simple HTML file with a value inside. The value is 57 in this case.

<eta version="1.0"><value uri="/user/var/48/10391/0/0/12528" strValue="57" unit="%" decPlaces="0" scaleFactor="10" advTextOffset="0">572</value></eta>

What is an easy bash-script way to extract it and write it into a variable? Is there a way that doesn't even require a wget into a file as an intermediate step, so I don't have to open and use a file where the value is stored, but can work directly with the wget? To clarify: I could do a simple wget, save to a file and check the file for the value, but is there an even better way to do the wget somewhere in RAM and not require an explicit file to be stored?

Thanks a million times, highly appreciated.
Norbert
You can extract the value in your example with grep and assign it to the variable in the following way:

$ x=$(wget -O - 'http://foo/bar.html' | grep -Po '<value.*strValue="\K[[:digit:]]*')
$ echo $x
57

Explanation:

$() : command substitution
grep -P : grep with Perl regexps enabled
grep -o : grep shows only the matched part of the line
\K : do not show in the output anything that was matched up to this point
wget -O - : prints the downloaded document to standard output (not to a file)

However, as a general approach it is better to use a dedicated parser for HTML code.
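For instance, a sketch using xmllint from libxml2 (assuming it is installed; the URL is the same placeholder as above):

x=$(wget -O - 'http://foo/bar.html' | xmllint --xpath 'string(//value/@strValue)' -)
echo "$x"   # 57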
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72765/" ] }
167,668
A shebang (#!/bin/sh) is placed on the first line of a bash script, and it's usually followed on the second line by a comment describing what action the script performs. What if, for no particular reason, you decided to place the first command far beneath the shebang and the comment by, say, 10000 lines. Would that slow the execution of the script?
To find out, I created two shell files. Each starts with a shebang line and ends with the sole command date. long.sh has 10,000 comment lines while short.sh has none. Here are the results:

$ time short.sh
Wed Nov 12 18:06:02 PST 2014

real    0m0.007s
user    0m0.000s
sys     0m0.004s

$ time long.sh
Wed Nov 12 18:06:05 PST 2014

real    0m0.013s
user    0m0.004s
sys     0m0.004s

The difference is non-zero but not enough for you to notice. Let's get more extreme. I created very_long.sh with 1 million comment lines:

$ time very_long.sh
Wed Nov 12 18:14:45 PST 2014

real    0m1.019s
user    0m0.928s
sys     0m0.088s

This has a noticeable delay.

Conclusion

10,000 comment lines have a small effect. A million comment lines cause a significant delay.

How to create long.sh and very_long.sh

To create the script long.sh, I used the following awk command:

echo "date" | awk 'BEGIN{print "#!/bin/bash"} {for (i=1;i<=10000;i++) print "#",i} 1' >long.sh

To create very_long.sh, I only needed to modify the above code slightly:

echo "date" | awk 'BEGIN{print "#!/bin/bash"} {for (i=1;i<=1000000;i++) print "#",i} 1' >very_long.sh
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
167,717
I'm running a command from a script like tar -c -f ar.tar a b c d where b, c, and d may not exist, and may be directories. The solutions that I've come up with are piping the output of ls -d to grep , then splicing it into the tar command, or turning on extended globs for @(a|b|c|d) . Is there a neater way of doing this? I'm on Debian Wheezy, which doesn't seem to have an --include parameter.
You can try:

tar cf ar.tar $(ls a b c d)

where c means create and f ar.tar specifies the tar file. $(ls a b c d) will list on stdout the files that are really present (and print an error on stderr for the others).
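Note that the command substitution splits on whitespace. A sketch that handles names with spaces by testing for existence instead of parsing ls output:

files=()
for f in a b c d; do
    [ -e "$f" ] && files+=("$f")   # keep only the paths that exist
done
tar -cf ar.tar "${files[@]}"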
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91360/" ] }
167,727
I ran this command yesterday, on what I thought was a test machine, but it was a file server connected through SSH:

sudo rm -rf /tmp/* !(lost+found)

My terminal emulator is Konsole. My system is Debian 7. Question: Did this command delete files other than the files in /tmp?
The correct syntax in bash is the following:

rm /tmp/!(lost+found)

As @goldilocks wrote in the comments, the original command expands to more than you intended: it deletes all the files in the /tmp folder, then goes on and deletes all the files in the current working folder (in your case, the home folder). You can try to check if you can recover some of your data. There is a question about Linux data recovery here.
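As a reminder, in bash the !() pattern only works with extended globbing enabled, so a complete sketch is:

shopt -s extglob
rm /tmp/!(lost+found)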
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
167,755
I have a string like rev00000010 and I only want the last number, 10 in this case. I have tried this:

TEST='rev00000010'
echo "$TEST" | sed '/^[[:alpha:]][0]*/d'
echo "$TEST" | sed '/^rev[0]*/d'

Both return nothing, although the regex seems to be correct (tried with regexr).
The commands you passed to sed mean: if a line matches the regex, delete it. That's not what you want.

echo "$TEST" | sed 's/rev0*//'

This means: on each line, remove rev followed by any number of zeroes. Also, you don't need sed for such a simple thing. Just use bash and its parameter expansion:

shopt -s extglob        # Turn on extended globbing.
echo "${TEST##rev*(0)}" # Remove everything from the beginning up to `rev`
                        # followed by the maximal number of zeroes.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20479/" ] }
167,798
I have a log file which needs to be parsed and analysed. The file contains something like the below:

File:

20141101 server contain dump
20141101 server contain nothing
{uekdmsam ikdas
jwdjamc ksadkek}
ssfjddkc * kdlsdl
sddsfd jfkdfk
20141101 server contain dump

Based on the above scenario, I have to check: if a line doesn't start with a date or a number, I have to append it to the previous line.

Output file:

20141101 server contain dump
20141101 server contain nothing {uekdmsam ikdas jwdjamc ksadkek} ssfjddkc * kdlsdl sddsfd jfkdfk
20141101 server contain dump
A version in perl, using negative lookaheads:

$ perl -0pe 's/\n(?!([0-9]{8}|$))//g' test.txt
20141101 server contain dump
20141101 server contain nothing {uekdmsam ikdas jwdjamc ksadkek} ssfjddkc * kdlsdlsddsfd jfkdfk
20141101 server contain dump

-0 allows the regex to be matched across the entire file, and \n(?!([0-9]{8}|$)) is a negative lookahead, meaning a newline not followed by 8 digits or by the end of the line (which, with -0, will be the end of the file).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84560/" ] }
167,802
I am trying to install nautilus-dropbox-1.6.2.tar.bz2. I ran ./configure, but got the error:

configure: error: Package requirements (libnautilus-extension >= 2.16.0) were not met:

No package 'libnautilus-extension' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables NAUTILUS_CFLAGS
and NAUTILUS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

I have done a Google search for this and found that some people suggested that one needs to download nautilus-devel. However, I have no idea what that is or how to download it. I searched for it and found it is for Fedora. I don't use Fedora. Anyways, is that what I need to do? If not, then what else can I do?
On Debian/Ubuntu-based systems, installing the development package for the Nautilus extension library removes the problem:

sudo apt-get install libnautilus-extension-dev
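If you are on an RPM-based distribution instead (the question mentions Fedora's nautilus-devel), the equivalent is likely:

sudo yum install nautilus-devel

(the exact package name may vary between distributions and releases).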
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167802", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90948/" ] }
167,808
Are there any image viewers which will automatically reload the view when the image file is written to? I normally use Debian variants of Linux, but I appreciate answers related to any Unix or Linux environment.
The old GNOME image viewer, Eye of GNOME, seems to automatically reload the image when it is edited in a program such as GIMP. There is also a reload plugin, so you can use a button to reload the image. Works in version 3.8.2.
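If you prefer a lightweight viewer, feh can poll the file instead of watching it. A sketch, assuming your build of feh supports the option:

feh --reload 1 image.png    # re-read the image from disk every second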
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13389/" ] }
167,814
I have a text file I'm outputting to a variable in my shell script. I only need the first 50 characters, however. I've tried using

cat ${filename} | cut -c1-50

but I'm getting far more than the first 50 characters. That may be due to cut looking at lines (not 100% sure), while this text file could be one long string; it really depends. Is there a utility out there I can pipe into to get the first X characters from a cat command?
head -c 50 file

This returns the first 50 bytes. Mind that the command is not always implemented the same on all OSes. On Linux and macOS it behaves this way; on Solaris (11) you need to use the GNU version in /usr/gnu/bin/ .
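If you need something strictly POSIX-portable, dd reads an exact byte count everywhere (note this counts bytes, not multibyte characters):

dd if=file bs=1 count=50 2>/dev/null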
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/167814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88163/" ] }
167,823
I'm trying to understand the difference between these two commands:

sudo find / -name .DS_Store -delete

and

sudo find / -name ".DS_Store" -exec rm {} \;

I noticed that the -exec ... {} method is preferred. Why? Which one is safer/faster/better? I've used both on my MacBook and everything appears to work well.
-delete will perform better because it doesn't have to spawn an external process for each and every matched file, but make sure to use it after -name , otherwise it acts as the whole expression and deletes the entire file tree under the starting points. For example,

find . -name .DS_Store -type f -delete

It is possible that you may see -exec rm {} + often recommended because -delete does not exist in all versions of find . I can't check right now but I'm pretty sure I've used a find without it. Both methods should be "safe". A common method for avoiding the overhead of spawning an external process for each matched file is:

find / -name .DS_Store -print0 | xargs -0 rm

(but note that there is a portability problem here too: not all versions of find have -print0 !)
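Where -delete is unavailable, the + terminator to -exec is a POSIX way to get the same batching effect as xargs, without the -print0 portability concern:

find / -name .DS_Store -type f -exec rm -- {} +    # many files per rm invocation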
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/167823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44651/" ] }
167,826
How can I update my Buildroot without losing my configuration, packages, etc.? And how can I update the Linux kernel that is configured? Is it just change the url from git repository in menuconfig? If someone helps me I will be grateful.
Buildroot keeps your whole configuration in the .config file at the top of the source tree, so updating without losing it is mostly a matter of saving that file, updating the sources, and re-running the configurator. A sketch of the usual workflow, assuming a git checkout of Buildroot (exact make targets can vary between releases, so check the manual shipped in docs/manual):

make savedefconfig BR2_DEFCONFIG=../my_defconfig   # save the current config as a minimal defconfig
git pull                                           # update Buildroot itself
make defconfig BR2_DEFCONFIG=../my_defconfig       # load your config into the updated tree
make menuconfig                                    # review any new/changed options, then rebuild

The packages you enabled are part of that configuration, so they come back with it; Buildroot re-downloads and rebuilds sources as needed. For the kernel: yes, the version (or a custom tarball/git repository URL) is set under the Kernel menu in make menuconfig. Change it there, then force a rebuild of the kernel, e.g. with make linux-dirclean followed by make.
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/167826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91426/" ] }
167,835
I was just advised to re-ask my question from https://security.stackexchange.com/questions/72795/command-execution-dvwa-creating-file-in-tmp here. I use CentOS 7, and I'm trying to understand the command execution attack. I found a tutorial which described a task to create a file. Simply running

cat /etc/passwd | tee -a /tmp/passwd

should create a copy of /etc/passwd . And it does (running cat /tmp/passwd from the same place where I ran the previous command returns exactly what I was expecting). But there is no /tmp/passwd if I try to run this command from the server's terminal (not from the site). I did no special setup for Apache and PHP. Where should I search for the missing /tmp/passwd ? As @terdon asks, mount | grep tmp returns:

devtmpfs on /dev type devtmpfs (rw,nosuid,size=1966708k,nr_inodes=491677,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
/dev/sda7 on /tmp type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda7 on /var/tmp type xfs (rw,relatime,attr2,inode64,noquota)
In Fedora 20, the directory you're looking for is in one of the (possibly multiple) /var/tmp/systemd-private-${FOO} folders. This is because the web server runs with a private /tmp namespace, so files it writes to /tmp never appear in the real /tmp . I haven't been able to verify that on a RHEL 7 or CentOS 7 system yet, but I strongly suspect it will be in the same /var/tmp/systemd-private-${FOO} area.
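The private /tmp comes from systemd's PrivateTmp= option in the service unit. A hedged way to check and look inside (the service name and exact paths vary by distro and version):

systemctl show httpd -p PrivateTmp          # PrivateTmp=yes means the service has its own /tmp
sudo ls /var/tmp/systemd-private-*/tmp/     # the service's "/tmp" files live in here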
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91378/" ] }
167,843
I have installed tmux but now text mode vim colorschemes do not fill the background colour properly. With colorscheme xoria256 in the normal Ubuntu 14.10 terminal everything looks fine, but when I run vim in the exact same terminal inside tmux, the desktop shows through anywhere there is no text in vim. I have a 256 color terminal. My .tmux.conf:

~ cat .tmux.conf
set -g mode-mouse on
set -g default-terminal "screen-256color"

and I have a 256 colour terminal:

~ tput colors
256

How do I get tmux to work properly with vim 256-colour colorschemes which work fine in the normal terminal?
This happens when TERM isn't set to the correct screen[-256color] value in Vim's environment, usually because some shell startup script overrides it. If that is the case, for example you have TERM=xterm-256color set unconditionally, either remove it or make sure the script checks the original value of TERM before changing it, e.g.

if [[ "$TERM" = xterm ]]; then
    TERM=xterm-256color
fi
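You can verify what Vim will see by checking the variable from a shell running inside tmux:

echo $TERM    # should print screen-256color (or screen), not xterm-256color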
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46385/" ] }
167,893
I am triple booting Ubuntu, Debian, and Fedora. When I installed Fedora from a live CD I got excited and kept hitting next, not realizing I was not installing to GPT, but rather LVM. After doing this I cannot boot from the hard disk. The EFI menu doesn't even show my hard drive as a boot option (although it detects it in hardware). I have a work-around currently, which is odd in how it works: I use a live-boot USB (Yumi) and choose to run Linux from hard drive, and I can choose between the distros I have on my computer. However, I need this USB to boot into a distribution. I am unsure exactly how to restore my system. My computer came with Ubuntu installed; it is an ASUS XC200 (netbook). I called ASUS tech support; they wanted to re-image. I will not give up so easily. My /dev/sda1 (FAT32, with boot flag) has an EFI directory on it for Ubuntu (I assume Ubuntu was loading GRUB and chainloading Debian). How do I start to fix this? And what information do people need? (I have no CD/DVD player.)

Note with efibootmgr :

Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.

When I run modprobe there is nothing with efivars.

Update/things I tried so far: I tried the answers posted below [ 1 ], [ 2 ]; good research, and in most cases I believe they would work. They did not, however, in my situation.

Current tools/disks: lost my extra flash drives with Kali, Debian and Ubuntu 14.04; still have Yumi with Ubuntu 12.04.

Steps taken recently (after following answers):

- Ran live Ubuntu
- Wiped /dev/sda except the FAT partition (GPT/ESP)
- Tried to install Ubuntu; didn't work, problem with GRUB and EFI on my GPT partition
- Used fsck just in case (fine)
- Used parted/gparted to wipe all, then make GPT and other partitions (set boot flag on ESP)
- Tried the install again (didn't work, same error)
- Partitions looked funny (missing space)... scratched head
- Wiped partitions / made a partition for the live USB on the hard drive
- Used dd to write the live USB to /dev/sda4 (I believe this was the number)
- This booted, but needed my USB to be in place, so it was useless
- Tried to use gfdisk; made me reboot, lost session
- Split my live USB
- Downloaded the Arch .iso, and dd'd it onto the 2nd USB partition (live USB)
- Kept the Ubuntu live USB session up, went through a partial install (up to chroot of Arch while in the live session)
- Had problems with things working right
- Ran Arch live, went through the install (zapping and initial creation of partitions worked better than in parted/gparted)
- Used the directions to set up syslinux (from within the Arch install guide)
- Basically rewrote all my EFI setup from scratch
- Running great on Arch
- Unsure whether/how to answer my own question
Forget grub entirely - it is nothing but a distraction. It isn't even a boot- loader anymore; on EFI systems the bootloader is built in to the firmware. grub is just a boot- manager in that context - and almost definitely entirely redundant. What's more - it is probably the grub install that broke everything in the first place. These are the things you need:

1. A FAT-formatted GPT partition of type ef00 .
2. A UEFI-compatible system kernel located on that partition (such as the linux kernel).
3. The path to that system kernel saved to a UEFI environment variable (commonly Boot0000-{UUID} , but this also depends on the value of BootOrder-{UUID} ).

Strictly speaking, that is all. You can achieve the above setup with nothing more than the gdisk and efibootmgr command-line tools. Pragmatically, a boot-manager does make sense - but grub is the most complicated of all of those available. As is elsewhere recommended, rEFInd is probably the best of the bunch. I have written a step-by-step tutorial before on how to partition, format, and setup a rEFInd -enabled EFI system partition here . Here also is another answer on this subject, in which you might find some further explanation about the assertions I make here.
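For completeness, a minimal sketch of registering a kernel directly with the firmware using efibootmgr. All the names here are hypothetical: it assumes the ESP is partition 1 of /dev/sda, the root filesystem is on /dev/sda2, and a kernel built with an EFI stub plus its initramfs have been copied onto the ESP:

sudo efibootmgr -c -d /dev/sda -p 1 -L "Linux" \
    -l '\vmlinuz-linux' \
    -u 'root=/dev/sda2 rw initrd=\initramfs-linux.img'

Note that your earlier efibootmgr error means the EFI variables interface wasn't available; you must be booted in EFI mode (not BIOS/legacy mode) for any of this to work.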
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167893", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68239/" ] }
167,928
I had read the post on: bash - replace space with new line , but it did not help to solve my issue.

[My Directory]
[root@testlabs Config]# ls
Archive
Backup_Files
config_file.tgz
hostname1-config.uac
hostname2-config.uac
hostname3-config.uac
My-config_17Oct2014.tgz
non_extension-config_file1
non_extension-config_file2
[root@testlabs Config]#

I need to echo a list of MD5 checksum results from a file. I am able to do so by doing this:

##IDENTIFY MD5 CHECKSUM##
//To output the md5sum of the original file with a correct format to a temp file [md5<space><space>file_name]
ls $FULL_FILE_NAME | md5sum $FULL_FILE_NAME > /tmp/md5sum.tmp
//To compare the md5sum of the orignal file with the md5sum of the backup copies in the archive directory
ls $CONFIG_ARCHIVE_DIR | grep -i --text $CONFIG_FILE_HOSTNAME-$FILE_TIMESTAMP.tgz | md5sum -c /tmp/md5sum.tmp >> /tmp/md5sum2.tmp

##COMPARING MD5 CHECKSUM##
if [ -s /tmp/md5sum2.tmp ]; then
    echo ""
    echo "Comparison of MD5 for files archived:"
    echo '---------------------------------------'
    /bin/sort -d /tmp/md5sum2.tmp
fi

and this will be the result when it is executed (echo of the CONTENTS of /tmp/md5sum2.tmp):

Comparison of MD5 for files archived:
---------------------------------------
config_file.tgz: OK
hostname1-config.uac: OK
hostname2-config.uac: OK
hostname3-config.uac: OK
My-config_17Oct2014.tgz: OK
non_extension-config_file1: OK
non_extension-config_file1: OK

## WANTED ## However, I would like the result to be displayed in this way:

Comparison of MD5 for files archived:
---------------------------------------
- config_file.tgz: OK
- hostname1-config.uac: OK
- hostname2-config.uac: OK
- hostname3-config.uac: OK
- My-config_17Oct2014.tgz: OK
- non_extension-config_file1: OK
- non_extension-config_file2: OK

I tried doing this (echo the CONTENTS of /tmp/md5sum2.tmp into /tmp/md5sum3.tmp with the '-' in front):

1)

##COMPARING MD5 CHECKSUM##
if [ -s /tmp/md5sum2.tmp ]; then
    echo ""
    echo "Comparison of MD5 for files archived:"
    echo '---------------------------------------'
    /bin/sort -d /tmp/md5sum2.tmp
    for CONFIG_FILES in `/bin/cat /tmp/md5sum2.tmp`
    do
        /bin/sort -d /tmp/md5sum2.tmp | grep $CONFIG_FILES > /tmp/md5sum3.tmp
    done
    for MD5_COMPARE in $(/bin/sort -d /tmp/md5sum3.tmp)
    do
        echo -e " - $MD5_COMPARE\n"
    done
fi

Result 1)

Comparison of MD5 for files archived:
---------------------------------------
- config_file.tgz:
- OK
- hostname1-config.uac:
- OK
- hostname2-config.uac:
- OK
- hostname3-config.uac:
- OK
- My-config_17Oct2014.tgz.tgz:
- OK
- non_extension-config_file1:
- OK
- non_extension-config_file2:
- OK

2)

for MD5_COMPARE in $(/bin/sort -d /tmp/md5sum3.tmp)
do
    echo -n " - $MD5_COMPARE"
done

Result 2)

Comparison of MD5 for files archived:
---------------------------------------
- config_file.tgz: - OK - hostname1-config.uac: - OK - hostname2-config.uac: - OK - hostname3-config.uac: - OK - My-config_17Oct2014.tgz: - OK - non_extension-config_file1: - OK - non_extension-config_file2: - OK
To read a file line by line, the standard procedure is

while IFS= read -r MD5_COMPARE
do
    echo " - $MD5_COMPARE"
done < /tmp/md5sum2.tmp | /bin/sort -d

But sed should work too:

/bin/sort -d /tmp/md5sum2.tmp | sed 's/^/ - /'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91488/" ] }
167,929
I have some files, of which some are very large (like several GB), which I need to concatenate into one big file and then zip, so something like this:

cat file1 file2 file3 file4 | gzip > compress.gz

which produces extremely high CPU and memory load on the machine or even makes it crash, because cat streams several GB through the pipe. I can't use tar archives, I really need one big chunk compressed by gzip. How can I produce the same gz file in a sequential way, so that I don't have to cat several GB first, but still have all files in the same .gz in the end?
cat doesn't use any significant CPU time (unless maybe on-disk decryption or decompression is involved and accounted to the cat process which is the one reading from disk) or memory. It just reads the content of the files and writes it to the pipe in small chunks in a loop. However, here, you don't need it. You can just do:

gzip -c file1 file2 file3 file4 > compress.gz

(not that it will make a significant difference). You can lower the priority of the gzip process (wrt CPU scheduling) with the nice command. Some systems have an ionice command for the same with I/O.

nice -n 19 ionice -c idle pigz -c file1 file2 file3 file4 > compress.gz

On Linux would run a parallel version of gzip with as little impact on the system as possible. Having compress.gz on a different disk (if using rotational storage) would make it more efficient. The system may cache the data that cat or gzip/pigz reads in memory if it has memory available to do so. It does that in case you need that data again. In the process, it may evict other cached data that is more useful. Here, that data likely doesn't need to be available. With GNU dd , you can use iflag=nocache to advise the system not to cache the data:

for file in file1 file2 file3 file4; do
  ionice -c idle dd bs=128k status=none iflag=nocache < "$file"
done | nice pigz > compress.gz
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50666/" ] }
167,935
From which log can I get details about sudo commands executed by any user? It should contain the working directory, the command, and the user. It would be helpful if you could also provide a shell script to do so.
Depending on your distro; simply:

$ sudo grep sudo /var/log/secure

or

$ sudo grep sudo /var/log/auth.log

which gives:

Nov 14 09:07:31 vm1 sudo: pam_unix(sudo:auth): authentication failure; logname=gareth uid=1000 euid=0 tty=/dev/pts/19 ruser=gareth rhost= user=gareth
Nov 14 09:07:37 vm1 sudo: gareth : TTY=pts/19 ; PWD=/home/gareth ; USER=root ; COMMAND=/bin/yum update
Nov 14 09:07:53 vm1 sudo: gareth : TTY=pts/19 ; PWD=/home/gareth ; USER=root ; COMMAND=/bin/grep sudo /var/log/secure

The user running the command is after the sudo: - gareth in this case. PWD is the directory. USER is the user that gareth is running as - root in this example. COMMAND is the command run. Therefore, in the example above, gareth used sudo to run yum update and then ran this example. Before that he typed in the incorrect password. Note also that there may be rolled log files, like /var/log/secure* . On newer systems:

$ sudo journalctl _COMM=sudo

gives a very similar output.
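Since you asked for a script, here is a rough sketch that pulls those fields out of each entry (adjust LOG for your distro):

#!/bin/sh
LOG=/var/log/auth.log               # /var/log/secure on RHEL/CentOS
grep 'sudo:.*COMMAND=' "$LOG" |
awk -F' ; ' '{ sub(/.* sudo: */, "", $1); print $1, "|", $2, "|", $4 }'

The first field still carries the TTY; split it further if you want only the user name.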
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91015/" ] }
167,944
I have installed virtualbox on Debian Jessie according to instructions on the debian wiki , by running:

apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') virtualbox

During installation some errors were reported. Now I want to re-configure virtualbox-dkms but I receive this error:

Loading new virtualbox-4.3.18 DKMS files...
Building only for 3.16-3-amd64
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.

Note: uname -r shows 3.16-3-amd64 but my source folder in /usr/src is named linux-headers-3.16.0-4-amd64 . I don't know what to do!
Run:

$ sudo apt-get update
$ sudo apt-get install linux-headers-`uname -r`

If that second command still fails to find anything, then:

$ apt-cache search linux-headers-

to list all the linux-headers packages available. At least one should match the kernel you are running (as displayed by uname -r ). Then:

sudo apt-get install linux-headers-<version number>
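Once the matching headers are in place, re-run the step that originally failed so DKMS rebuilds the module:

sudo dpkg-reconfigure virtualbox-dkms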
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91470/" ] }
167,968
I am using OpenWRT on the Arduino YUN and I am trying to get the exact date in milliseconds (DD/MM/YYYY h:min:sec:ms) by getting the time from a timeserver. Unfortunately date +%N just returns %N , not the nanoseconds. I heard %N is not included in the OpenWRT date . So is there any way to get the date (including milliseconds) in the format I want?
Actually, there is also a package called coreutils-date ! I didn't know about it. With it, all the standard GNU date functionality is included.
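A sketch of putting it together on OpenWrt (whether the GNU binary takes precedence over BusyBox's date can depend on the release):

opkg update
opkg install coreutils-date
date +"%d/%m/%Y %H:%M:%S:%3N"    # %3N (milliseconds) is a GNU extension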
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90247/" ] }
167,972
I have a bash script that starts up a python3 script (let's call it startup.sh ), with the key line:

nohup python3 -u <script> &

When I ssh in directly and call this script, the python script continues to run in the background after I exit. However, when I run this:

ssh -i <keyfile> -o StrictHostKeyChecking=no <user>@<hostname> "./startup.sh"

the process ends as soon as ssh has finished running it and closes the session. What is the difference between the two?

EDIT: The python script is running a web service via Bottle.

EDIT2: I also tried creating an init script that calls startup.sh and ran ssh -i <keyfile> -o StrictHostKeyChecking=no <user>@<hostname> "sudo service start <servicename>" , but got the same behavior.

EDIT3: Maybe it's something else in the script. Here's the bulk of the script:

chmod 700 ${key_loc}
echo "INFO: Syncing files."
rsync -azP -e "ssh -i ${key_loc} -o StrictHostKeyChecking=no" ${source_client_loc} ${remote_user}@${remote_hostname}:${destination_client_loc}
echo "INFO: Running startup script."
ssh -i ${key_loc} -o StrictHostKeyChecking=no ${remote_user}@${remote_hostname} "cd ${destination_client_loc}; chmod u+x ${ctl_script}; ./${ctl_script} restart"

EDIT4: When I run the last line with a sleep at the end:

ssh -i ${key_loc} -o StrictHostKeyChecking=no ${remote_user}@${remote_hostname} "cd ${destination_client_loc}; chmod u+x ${ctl_script}; ./${ctl_script} restart; sleep 1"
echo "Finished"

It never reaches echo "Finished" , and I see the Bottle server message, which I never saw before:

Bottle vx.x.x server starting up (using WSGIRefServer())...
Listening on <URL>
Hit Ctrl-C to quit.

I see "Finished" if I manually SSH in and kill the process myself.

EDIT5: Using EDIT4, if I make a request to any endpoint, I get a page back, but Bottle errors out:

Bottle vx.x.x server starting up (using WSGIRefServer())...
Listening on <URL>
Hit Ctrl-C to quit.
----------------------------------------
Exception happened during processing of request from ('<IP>', 55104)
I would disconnect the command from its standard input/output and error flows:

nohup python3 -u <script> </dev/null >/dev/null 2>&1 &

ssh needs an indication that there is no more output and that no more input is required. Having something else be the input and redirecting the output means ssh can safely exit, as input/output is not coming from or going to the terminal. This means the input has to come from somewhere else, and the output (both STDOUT and STDERR) should go somewhere else. The </dev/null part specifies /dev/null as the input for <script> . Why that is useful here:

Redirecting /dev/null to stdin will give an immediate EOF to any read call from that process. This is typically useful to detach a process from a tty (such a process is called a daemon). For example, when starting a background process remotely over ssh, you must redirect stdin to prevent the process waiting for local input. https://stackoverflow.com/questions/19955260/what-is-dev-null-in-bash/19955475#19955475

Alternatively, redirecting from another input source should be relatively safe as long as the current ssh session doesn't need to be kept open. With the >/dev/null part the shell redirects the standard output into /dev/null , essentially discarding it. >/path/to/file will also work. The last part, 2>&1 , redirects STDERR to STDOUT.

There are three standard sources of input and output for a program. Standard input usually comes from the keyboard if it's an interactive program, or from another program if it's processing the other program's output. The program usually prints to standard output, and sometimes prints to standard error. These three file descriptors (you can think of them as "data pipes") are often called STDIN, STDOUT, and STDERR. Sometimes they're not named, they're numbered! The built-in numberings for them are 0, 1, and 2, in that order. By default, if you don't name or number one explicitly, you're talking about STDOUT. Given that context, you can see the command above is redirecting standard output into /dev/null, which is a place you can dump anything you don't want (often called the bit-bucket), then redirecting standard error into standard output (you have to put an & in front of the destination when you do this). The short explanation, therefore, is "all output from this command should be shoved into a black hole." That's one good way to make a program be really quiet! What does > /dev/null 2>&1 mean? | Xaprb
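Applied to the script from the question, the last ssh line would become something like this (variables as in the original; redirect to a log file instead of /dev/null if you want to keep the Bottle output):

ssh -i ${key_loc} -o StrictHostKeyChecking=no ${remote_user}@${remote_hostname} \
  "cd ${destination_client_loc}; chmod u+x ${ctl_script}; nohup ./${ctl_script} restart </dev/null >/dev/null 2>&1 &"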
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52894/" ] }
167,975
I need to install on a LAMP server exactly the same PHP extensions that are installed on an old server. How do I find out exactly which extensions are installed, so that I can install the same ones on the new server?
You can use the command line switch -m to php to see what modules are installed.

$ php -m | head
[PHP Modules]
bz2
calendar
ctype
curl
date
dbase
dom
exif
fileinfo
...

You could also use php -i to get phpinfo(); output via the command line, which would include this info as well.

References: extension_loaded - PHP documentation
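To compare the two servers, a simple sketch: dump the sorted module list on the old one and diff it on the new one.

php -m | sort > /tmp/php-modules-old.txt            # on the old server
php -m | sort | diff /tmp/php-modules-old.txt -     # on the new server, after copying the file over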
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91524/" ] }
167,995
The following bash script won't work. I need to calculate the date depending on the number of days since 14th Oct 1582, where the argument will be the number of days.

d="$1"
date -d '14 Oct 1582 + "$d" days'

For example, the command ./datedays.sh 154748 should give Wed Jun 21 00:00:00 BST 2006 ; instead it gives the error

date: invalid date ‘14 Oct 1582 + "$d" days’
The problem is the single quotes: they prevent the shell from expanding $d , so date literally receives the string 14 Oct 1582 + "$d" days , which is exactly what the error message shows. Use double quotes so the variable is substituted before date sees it:

d="$1"
date -d "14 Oct 1582 + $d days"

With GNU date, ./datedays.sh 154748 then prints the expected Wed Jun 21 00:00:00 BST 2006 (the timezone label depends on your TZ setting).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91540/" ] }
168,004
I'm trying to delete a line after N lines using awk and I can't seem to get it right. The file format is like this:

YYYYYY
XXXXXX
XXXXXX
YYYYYY
XXXXXX
XXXXXX

A real example would be:

office
331
office
361
office
363
office
311

How can I delete the YY lines, or the lines that say "office"? I need to delete a line every two lines regardless of their content.
If you have GNU sed, you could use the n~m ( n skip m ) address notation:

sed '1~3d' file

which deletes every third line, starting at the first.
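A portable alternative for seds without the GNU n~m extension is awk, keeping every line whose number is not 1 modulo 3:

awk 'NR % 3 != 1' file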
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43731/" ] }
168,034
I'm attempting to modify an npm package with multiple dependencies. As such, npm install -g . takes a long time to execute. Do I have other options besides removing the dependencies from package.json ?
The --no-optional option is now implemented. According to the documentation at https://docs.npmjs.com/cli/install : "The --no-optional argument will prevent optional dependencies from being installed."
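Applied to the command from the question (note this skips only the packages listed under optionalDependencies, not regular dependencies):

npm install -g --no-optional .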
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15581/" ] }
168,052
On the page where you can download 32-bit Debian DVDs , there are three different ISO images listed:

debian-7.7.0-i386-DVD-1.iso 2014-10-18 14:23 3.7G
debian-7.7.0-i386-DVD-2.iso 2014-10-18 14:23 4.4G
debian-7.7.0-i386-DVD-3.iso 2014-10-18 14:23 4.3G

What is the difference between these different ISOs?
Debian contains too much software for a single DVD, so the packages are split across three DVDs. All the basics are on the first DVD and the more "exotic" packages on the last one. Usually you only use the first DVD to set up a base system and download everything else from the servers. But if you are in a completely offline situation you can use all three DVDs, and when you try to install things, Debian will ask you for the DVD it needs.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5769/" ] }
168,062
I want to save an SSH key passphrase in gnome-keyring and then use it automatically when I need it. How to do this?
If gnome-keyring-daemon is already running, you can use ssh-add to add your key to the service: ssh-add /path/to/private/key For example: ssh-add ~/.ssh/id_rsa
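You can confirm the key ended up in gnome-keyring's agent rather than a plain ssh-agent (socket paths vary by distro):

echo $SSH_AUTH_SOCK    # should point at a gnome-keyring socket
ssh-add -l             # list the keys the agent currently holds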
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36751/" ] }
168,095
I know that echo -e is not an ordinary command; I want to output the literal string -e . I have tried echo '-e' and echo \-e but they still don't work.
The best solution is not to use echo , but to use printf instead.

printf '%s\n' -e

This works with arbitrary variables:

var=-e
printf '%s\n' "$var"

...meaning you don't need to do any special preparation/modification elsewhere in your code based on the knowledge that a value will be echoed. Incidentally, the POSIX shell command specification for echo acknowledges that it is unportable as implemented, and contains a note on that subject:

It is not possible to use echo portably across all POSIX systems unless both -n (as the first argument) and escape sequences are omitted. The printf utility can be used portably to emulate any of the traditional behaviors of the echo utility as follows (assuming that IFS has its standard value or is unset): The historic System V echo and the requirements on XSI implementations in this volume of POSIX.1-2008 are equivalent to:

printf "%b\n" "$*"

The BSD echo is equivalent to:

if [ "X$1" = "X-n" ]
then
    shift
    printf "%s" "$*"
else
    printf "%s\n" "$*"
fi

New applications are encouraged to use printf instead of echo.

(Emphasis added.) That said, on GNU systems, an alternative exists: requesting standards-compliant behavior.

$ POSIXLY_CORRECT=1 /bin/echo -e
-e
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91607/" ] }
168,169
How can I configure Vim so that when I change one member of a matching pair of HTML tags, the other is automatically changed to match? For example: given <span>content</span> , I want to change <span> to <div> , and then the closing </span> automatically changes to </div> . Optimally this would happen upon returning from insert mode to normal mode, but it would be OK if I had to hit a special keybinding to make this happen.
The excellent surround.vim makes this simple — with the cursor anywhere within the tag you want to change, in normal mode, type cst<div> ( c hange s urrounding t ag to <div> ). The tag name will be changed to "div" in both the opening and closing tag, and you'll be back in normal mode.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90260/" ] }
168,181
I want to write a bash script to print all possible lower and upper case permutations of a certain word, e.g. harley:

harley
harleY
harlEy
harLey
...
HARLey
HARLEy
HARLEY

My naive solution is to write an n-deep nested for-loop (n is len(word)) for this specific word:

#!/bin/bash
for a in {h,H}; do
  for b in {a,A}; do
    ...
  done
done

However, I would have to code the script for each different word again. Is there a better way to accomplish this?
A slightly better solution:

echo {h,H}{a,A}{r,R}{l,L}{e,E}{y,Y}

For full scalability:

echo harley \
| perl -nle 'print "echo ", join "",map { "{" . lc . "," .uc ."}" } split //' \
| xargs -I {} bash -c "{}"

If you absolutely must have one word per line, go with

for w in {h,H}{a,A}{r,R}{l,L}{e,E}{y,Y};do echo $w;done

thanks to mattdm's comment. The corresponding scalable version would be:

echo harley \
| perl -nle 'print join "",map { "{" . lc . "," .uc ."}" } split //' \
| xargs -I {} bash -c 'for w in {};do echo $w;done'

For fun, try replacing "harley" with "supercalifragilisticexpialidocious". It's been 5 minutes and my computer is still crunching on this one and will probably never finish :)
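For completeness, a pure-bash sketch that builds the brace pattern from any word (needs bash 4 for the case operators; non-letter characters will produce duplicate expansions):

perms() {
  local w=$1 pat= c i
  for (( i = 0; i < ${#w}; i++ )); do
    c=${w:i:1}
    pat+="{${c,,},${c^^}}"      # e.g. {h,H}
  done
  eval "printf '%s\n' $pat"     # eval so brace expansion runs on the built pattern
}
perms harley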
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72471/" ] }
168,199
If you take this code: echo -e '\t\t\tString' | grep '^[\t]*String' the result is blank because it doesn't match, yet this: echo -e '\t\t\tString' | grep $'^[\t]*String' works. I swear that I must have used the first line's code a hundred times in my scripts and in the terminal, without ever using the "$" character like that, and it's always seemed to work. Has there been some recent change? Why does it need the "$" character? Or am I doing something wrong?
ANSI-C Quoting

According to the Bash manual, this is called ANSI-C quoting . The manual says:

Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard.

In practice, this means that '\t' will not be expanded into a tab character, but $'\t' will. The output should be equivalent to using echo -e , but can be used anywhere you'd use a string without requiring command substitution . Utilities like GNU sed perform their own expansion of escape characters, but GNU grep doesn't. The Bash shell, not grep, expands escaped characters within ANSI-C quoted strings. Without the ANSI-C quoting, the regular expression you posted contains no tab characters to match the input.
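You can see the difference directly by dumping the bytes each quoting style produces:

printf '%s' '\t'  | od -c    # two characters: \   t
printf '%s' $'\t' | od -c    # one character:  \t (a real tab)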
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91667/" ] }
168,212
I want to change the case of the n-th letter of a string in BASH (or any other *nix tools, e.g. sed , awk , tr , etc). I know that you can change the case of a whole string using:

${str,,} # to lowercase
${str^^} # to uppercase

Is it possible to change the case of the 3rd letter of "Test" to uppercase?

$ export str="Test"
$ echo ${str^^:3}
TeSt
In bash you could do:

$ str="abcdefgh"
$ foo=${str:2}                # from the 3rd letter to the end
$ echo "${str:0:2}${foo^}"    # take the first two letters from str and capitalize the first letter of foo
abCdefgh

In Perl:

$ perl -ple 's/(?<=..)(.)/uc($1)/e; ' <<<$str
abCdefgh

Or:

$ perl -ple 's/(..)(.)/$1.uc($2)/e; ' <<<$str
abCdefgh
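A sketch generalized to any position n (1-based), using only bash 4 parameter expansions:

str=Test n=3
tmp=${str:n-1:1}
echo "${str:0:n-1}${tmp^}${str:n}"    # TeSt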
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91603/" ] }
168,221
In my testing (in Bash and Z Shell), I saw no problems with defining functions or aliases or executable shell scripts which have hyphens in the name, but I'm not confident that this will be okay in all shells and in all use cases. The reason I would like to do this is that a hyphen is easier to type than an underscore, and therefore faster and smoother. One reason I'm hesitant to trust that it's not a problem is that in some languages (Ruby for example) the hyphen would be interpreted as a minus sign even without spaces around it. It wouldn't surprise me if something like this might happen in some shells, where the hyphen is interpreted as signaling an option even without a space. Another reason I'm a little suspicious is that my text editor screws up the syntax highlighting for functions with hyphens. (But of course it's entirely possible that that's just a bug in its syntax highlighting configuration for shell scripts.) Is there any reason to avoid hyphens?
POSIX and Hyphens: No Guarantee

According to the POSIX standard, a function name must be a valid name, and a name can consist of:

3.231 Name

In the shell command language, a word consisting solely of underscores, digits, and alphabetics from the portable character set. The first character of a name is not a digit.

Additionally, an alias must be a valid alias name , which can consist of:

3.10 Alias Name

In the shell command language, a word consisting solely of underscores, digits, and alphabetics from the portable character set and any of the following characters: '!', '%', ',', '@'.

Implementations may allow other characters within alias names as an extension.

(Emphasis mine.) A hyphen is not listed among the characters that must be allowed in either case. So, if they are used, portability is not guaranteed.

Examples of Shells That Do Not Support Hyphens

dash is the default shell ( /bin/sh ) on the Debian/Ubuntu family and it does not support hyphens in function names:

$ a-b() { date; }
dash: 1: Syntax error: Bad function name

Interestingly enough, it does support hyphens in aliases, though, as noted above, this is an implementation characteristic , not a requirement:

$ a_b() { printf "hello %s\n" "$1"; }
$ alias a-b='a_b'
$ a-b world
hello world

The busybox shell (only the ash based one) also does not support hyphens in function names:

$ a-b() { date; }
-sh: Syntax error: Bad function name

Summary of Hyphen Support by Shell

The following shells are known to support hyphens in function names:

- pdksh and derivatives, bash, zsh
- some ash derivatives such as the sh of FreeBSD (since 2010) or NetBSD (since 2016)
- busybox sh when the selected shell at compile time is hush instead of ash
- csh and tcsh (in their aliases; those shells have no function support). Those shells have a radically different syntax anyway, so there's no hope of cross-shell compatibility with those.
- rc and derivatives (again with a radically different syntax)
- fish (again with a radically different syntax)

The following shells are known not to support hyphens in function names:

- the Bourne shell and derivatives such as ksh88 and bosh (in the Bourne shell, functions and variables shared the same namespace; you couldn't have a variable and a function by the same name)
- ksh93, yash, the original ash and some of its derivatives (busybox ash (the default choice for sh), dash)

Conclusions

Hyphens are non-standard. Stay away from them if you want cross-shell compatibility. Use underscores instead of hyphens: underscores are accepted everywhere.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/168221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
168,232
I am using the following command to create messages on the fly and send them:

echo "Subject:Hello \n\n I would like to buy a hamburger\n" | sendmail [email protected]

It seems that when you send the information from a file, by doing something like

sendmail [email protected] < mail.txt

then sendmail sees each line as a header and parses it. But the way I sent it above, everything ends up in the subject line. If one wants to echo a message complete with headers into sendmail, then what is the format? How does one do it?
Your echo statement should really output newlines, not the sequence \ followed by n . You can do that by providing the -e option:

echo -e "Subject:Hello \n\n I would like to buy a hamburger\n" | sendmail [email protected]

To understand the difference, have a look at the output of the following two commands:

echo "Subject:Hello \n\n I would like to buy a hamburger\n"
echo -e "Subject:Hello \n\n I would like to buy a hamburger\n"
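A here-document keeps the header/body structure readable without worrying about escape sequences at all (the blank line separates the headers from the body):

sendmail [email protected] <<'EOF'
Subject: Hello

I would like to buy a hamburger
EOF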
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91690/" ] }
168,240
I have a file with the following contents:

[root@server list]# cat filenames.txt
[AAA','ACMEDEMO2','ACMEDEMO3','ACMEDEMO4','RENTCOGH','TESTENT','DORASINE','LOKAWINK','BBB]
[root@qa4app01 list]#

I want this as a list to use in my python script, so I am trying to change AAA' to 'AAA' and 'BBB to 'BBB' , and I thought I would use sed to replace [ with [' and ] with '] . To confirm, I tried this:

[root@server list]# cat filenames.txt | sed "s/]/']/g"
[AAA','ACMEDEMO2','ACMEDEMO3','ACMEDEMO4','MENSCOGH','TESTENT','DORASINE','LOKAWINK','BBB']
[root@server list]#

It worked and I was able to replace the ] with a '] . So for AAA I just need to do the same substitution with the left square bracket, and I tried this:

[root@server list]# cat filenames.txt | sed -e "s/]/']/g" -e "s/[/['/g"
sed: -e expression #2, char 8: unterminated `s' command
[root@server list]#

Okay. I thought for some reason appending sed commands was not working properly, and to check this I tried the sed on the left square bracket separately:

[root@server list]# cat filenames.txt | sed "s/[/['/g"
sed: -e expression #1, char 8: unterminated `s' command
[root@server list]#

Strange. It looks as if sed is treating left and right square brackets differently. I was able to get away without escaping the ] , while sed refuses to work with [ without an escape character. I eventually got what I wanted by adding escape characters to the left square bracket like below:

[root@server list]# cat filenames.txt | sed -e "s/]/']/g" -e "s/\[/\['/g"
['AAA','ACMEDEMO2','ACMEDEMO3','ACMEDEMO4','MENSCOGH','TESTENT','DORASINE','LOKAWINK','BBB']
[root@server list]#

I am in the process of writing a python + shell script which will be handling files with numerous instances of both square brackets. My question is whether the right and left square brackets are treated differently by sed or bash . The sed version is GNU sed 4.2.1.

P.S.: I have decided to use escape characters with both brackets to play safe, but I want to be absolutely sure it won't break the script if I don't use the escape character (for readability reasons) in places where I will be working only with the right square bracket ] .
[ introduces a character class and ] closes it. If sed sees only ] (i.e. only the closing bracket), that is OK: sed assumes it is not a special character. But a lone [ (without a closing ] ) confuses sed , so it must be escaped. On a side note, there is no need for

- the g (global) flag (there is only one substitution per line)
- escaping [ in the replacement part

I managed your change with

sed -e "s/]/']/" -e "s/\[/['/"
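Both substitutions can also go in a single sed expression, separated by a semicolon:

sed "s/\[/['/; s/]/']/" filenames.txt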
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68296/" ] }
168,242
My office has one default gateway and behind that a local network with locally assigned IP addresses for all computers, including mine. I have admin rights on my office PC, which runs Ubuntu, and it is essential that I can access the computer during weekends through SSH. At the office I do not have a public IP, but I always get the same local IP from DHCP. I'm free to set up any software I like on my PC, although I cannot set up port forwarding on the main firewall. I get a public IP on my home computer, which also runs Linux. Please note I cannot install TeamViewer-like software. How can I solve my problem?
It's easy:

1. [execute from office machine] Set up a connection Office -> Home (as Home has a public IP). This will set up a reverse tunnel from your office machine to home:

ssh -CNR 19999:localhost:22 homeuser@home

2. [execute from home machine] Connect to your office from home. This will use the tunnel from step 1:

ssh -p 19999 officeuser@home

Please ensure that ssh tunneling is not against your company policies, because sometimes you can get fired for such a connection scheme (e.g. my employer would fire me for that).

P.S. In the first step you may want to use autossh or something like that, so your tunnel connection will be automatically restored in case of an unstable network.
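A sketch of step 1 with autossh, so the reverse tunnel survives network hiccups (option names per the autossh and ssh manuals):

autossh -M 0 -f -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -CNR 19999:localhost:22 homeuser@home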
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168242", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90328/" ] }
168,255
I see I can do

$ [ -w /home/durrantm ] && echo "writable"
writable

or

$ test -w /home/durrantm && echo "writable"
writable

or

$ [[ -w /home/durrantm ]] && echo "writable"
writable

I like using the third syntax. Are they equivalent in all ways, for all negative and edge cases? Are there any differences in portability, e.g. between bash on Ubuntu and on OS X, or between older and newer bash versions (before/after 4.0), and do they both expand expressions the same way?
[ is a synonym for the test command, and it is simultaneously a bash builtin and a separate command. But [[ is a shell keyword: it is not part of POSIX and only works in some shells (bash, ksh, zsh). So, for reasons of portability, you are better off using single [] or test :

[ -w "/home/durrantm" ] && echo "writable"
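One practical difference worth knowing: [[ ]] does not word-split unquoted variables, while [ does:

x="a b"
[ $x = "a b" ]      # error: bash: [: too many arguments
[[ $x = "a b" ]]    # true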
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
168,262
Some time ago I tried to install Steam on my CentOS 5 server, tried almost everything I found on the Internet, and it seems that I have managed to leave libstdc++ installed and not installed at the same time. cPanel is failing to update because it doesn't find the correct version installed, but yum is unable to install it because it's already installed. How can I fix this situation and reach a consistent state?

# yum install libstdc++-4.1.2-55.el5
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * contrib: mirror.wiredtree.com
addons     | 1.9 kB 00:00
base       | 1.1 kB 00:00
centosplus | 1.9 kB 00:00
contrib    | 1.9 kB 00:00
extras     | 2.1 kB 00:00
updates    | 1.9 kB 00:00
wiredtree  | 951 B  00:00
Excluding Packages in global exclude list
Finished
Setting up Install Process
Package matching libstdc++-4.1.2-55.el5.i386 already installed. Checking for update.
Nothing to do

# yum remove libstdc++-4.1.2-55.el5
Loaded plugins: fastestmirror
Setting up Remove Process
No Match for argument: libstdc++-4.1.2-55.el5
Loading mirror speeds from cached hostfile
 * contrib: mirror.wiredtree.com
addons     | 1.9 kB 00:00
base       | 1.1 kB 00:00
centosplus | 1.9 kB 00:00
contrib    | 1.9 kB 00:00
extras     | 2.1 kB 00:00
updates    | 1.9 kB 00:00
wiredtree  | 951 B  00:00
Excluding Packages in global exclude list
Finished
Package(s) libstdc++-4.1.2-55.el5 available, but not installed.
No Packages marked for removal

# yum reinstall libstdc++-4.1.2-55.el5
Loaded plugins: fastestmirror
Setting up Reinstall Process
Loading mirror speeds from cached hostfile
 * contrib: mirror.wiredtree.com
addons     | 1.9 kB 00:00
base       | 1.1 kB 00:00
centosplus | 1.9 kB 00:00
contrib    | 1.9 kB 00:00
extras     | 2.1 kB 00:00
updates    | 1.9 kB 00:00
wiredtree  | 951 B  00:00
Excluding Packages in global exclude list
Finished
No Match for argument: libstdc++-4.1.2-55.el5
Package(s) libstdc++-4.1.2-55.el5 available, but not installed.
Nothing to do

# yum --showduplicates list libstdc++ | expand
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * contrib: mirror.wiredtree.com
Excluding Packages in global exclude list
Finished
Installed Packages
libstdc++.i386    4.3.2-7         installed
Available Packages
libstdc++.i386    4.1.2-55.el5    base
Thanks to Anthony Geoghegan pointing me in the right direction, I was able to find a working solution:

rpm -e --justdb --nodeps libstdc++

That will remove the package from the RPM database without touching the files; then a simple yum install will work.
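Afterwards you can verify the stale entry is gone and complete the downgrade (commands as in the question):

rpm -q libstdc++                      # should now report "not installed"
yum install libstdc++-4.1.2-55.el5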
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91705/" ] }
168,284
When should I use -eq vs = vs == ? E.g.

[[ $num -eq 0 ]]
[[ $num = 'zzz' ]]

I've observed a pattern of using -eq (and -ne , etc.) for numbers and = for strings. Is there a reason for this, and when should I use == ?
Because that's the definition for those operands. From the POSIX test documentation, OPERANDS section :

s1 = s2
    True if the strings s1 and s2 are identical; otherwise, false.
...
n1 -eq n2
    True if the integers n1 and n2 are algebraically equal; otherwise, false.

== is not defined by POSIX; it's an extension of bash , derived from ksh . You shouldn't use == when you want portability. From the bash documentation - Bash Conditional Expressions :

string1 == string2
string1 = string2
    True if the strings are equal. '=' should be used with the test command for POSIX conformance.
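The difference matters in practice because -eq compares algebraically while = compares character-for-character:

[ 0 -eq 00 ] && echo equal    # prints "equal"
[ 0 = 00 ]   && echo equal    # prints nothing: "0" and "00" differ as strings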
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
168,315
I have a file containing this:

1415602803,LOGIN SUCCESS,AUTH,user2,192.168.203.63,10.146.124.73,59996,22
1415602807,LOGIN SUCCESS,AUTH,user1,172.24.31.10,172.32.1.1,48191,22
1415602811,LOGIN FAILED,AUTH,root,172.24.166.153,10.146.124.73,52506,22
1415602815,LOGIN FAILED,AUTH,user3,192.168.123.55,10.146.32.99,55750,22

I want to convert the timestamp to a date in this format:

2014-11-10 02:00:03,LOGIN SUCCESS,AUTH,user2,192.168.203.63,10.146.124.73,59996,22
2014-11-10 02:00:07,LOGIN SUCCESS,AUTH,user1,172.24.31.10,172.32.1.1,48191,22
2014-11-10 02:00:11,LOGIN FAILED,AUTH,root,172.24.166.153,10.146.124.73,52506,22
2014-11-10 02:00:15,LOGIN FAILED,AUTH,user3,192.168.123.55,10.146.32.99,55750,22

How can I do that? I know this works:

perl -pe 's/(\d+)/localtime($1)/e'

(from this question ), but the output format is Mon Nov 10 02:00:03 2014 . I know this command can convert timestamps into my desired output: date -d@1415602803 +"%F %H:%M:%S" , but I couldn't make it work with awk using system("cmd") because of all the quotations and whatnot.
Found something here: Stackoverflow - Convert from unixtime at command line . Came up with this:

awk -F"," '{OFS=","; $1=strftime("%Y-%m-%d %H:%M:%S", $1); print $0}' file

- -F"," to use a field separator of , ,
- OFS="," so that the output fields are also separated by a , ,
- $1=strftime("%Y-%m-%d %H:%M:%S", $1) to change the value of the first field $1 into the specified format, and
- print $0 to print the whole line.

Note that strftime() is a gawk extension, so this needs GNU awk.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38128/" ] }
168,340
I am trying to build a C++ program on Unix. I got the error:

Linking CXX executable ../../bin/ME
/usr/bin/ld: cannot find -lboost_regex-mt

I heard that I just need to set the location of libboost* in my LD_LIBRARY_PATH env variable and then invoke make as I originally did, by typing

-L /usr/lib64 -l boost_regex-mt

or

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64

But where is LD_LIBRARY_PATH? How do I set the LD_LIBRARY_PATH env variable?
how do I set the LD_LIBRARY_PATH env variable?

You already set it when you did this:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64

But that will not solve your problem. $LD_LIBRARY_PATH is consulted at time of execution, to provide a list of additional directories in which to search for dynamically linkable libraries. It is not consulted at link time (except maybe for locating libraries required by the build tools themselves!). In order to tell the linker where to find libraries at build time, you need to use the -L linker option. You already did that too:

-L /usr/lib64

If you are still getting the error, then you need to make sure that the library is actually there. Do you have a file libboost_regex-mt.so or libboost_regex-mt.a in that (or any) directory? Note that a file like libboost_regex-mt.so.othersuffix doesn't count for this purpose. If you don't have that, then you probably need to install your distribution's development package for this library.
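A quick way to check (adjust the path for your system):

ls /usr/lib64/libboost_regex-mt.so /usr/lib64/libboost_regex-mt.a 2>/dev/null

If neither exists, install the Boost development package; the name varies by distro (e.g. boost-devel on RPM systems, libboost-regex-dev on Debian/Ubuntu).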
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90948/" ] }
168,354
Often I'll use tmux to start a task that will be running for a while. I will periodically go back and check on it using tmux -a and then disconnect if it still hasn't completed, and check again later. Is there any way to just see a brief snapshot of what's going on in the session without fully attaching? I'm looking for something like theoretically doing a tail on the session to get the last bit of output (but if I can avoid creating another file with a copy of the output, all the better). Maybe attaching and having it immediately detach would also work. I'm attempting to save keystrokes; perhaps such a command could be executed remotely, i.e. ssh root@server tmux --tail ?
I think capture-pane might suit your needs:

tmux capture-pane -pt "$target_pane"

(see "target-pane" in the man page for the ways to specify a pane). By default, that command will dump the current contents of the specified pane. You can specify a range of lines by using the -S and -E options (start and end line numbers): the first line is 0, and negative numbers refer to lines from the pane's "scroll back" history. So adding -S -10 gets you the most recent ten lines of history plus the current contents of the pane:

tmux capture-pane -pt "$target_pane" -S -10

The -p option was added in 1.8. If you are running an earlier version then you can do this instead:

tmux capture-pane -t "$target_pane" \; save-buffer - \; delete-buffer

But mind those semicolons if you are issuing this command via ssh, since the remote shell will add an additional level of shell interpretation (the semicolons need to be passed as arguments to the final tmux command; they must not be interpreted by either the local or the remote shell).
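So for the remote one-liner from the question, something like this should work (the target pane "0" here is a placeholder; tmux picks the most recently used session if none is named):

ssh root@server 'tmux capture-pane -pt 0 -S -10'

For tmux older than 1.8, escape the semicolons so they reach tmux rather than the remote shell:

ssh root@server 'tmux capture-pane -t 0 \; save-buffer - \; delete-buffer'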
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
168,357
I am trying to create a graphical front-end for my script. Inside the script I use tar to create a tar archive. From the graphical program I get the full path of the directory that I want to archive:

tar -cvf temp.tar /home/username/dir1/dir2/selecteddir

My tar archive includes home, username, dir1, dir2 and selecteddir, while I want tar to create an archive containing only selecteddir.
You can use the -C option of tar to accomplish this:

tar -C /home/username/dir1/dir2 -cvf temp.tar selecteddir

From the man page of tar :

-C directory
    In c and r mode, this changes the directory before adding the following files. In x mode, change directories after opening the archive but before extracting entries from the archive.
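You can verify the stored paths afterwards:

tar -tf temp.tar | head    # entries should start with selecteddir/, not home/...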
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/168357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73285/" ] }
168,400
I have the misfortune of dealing with filenames that contain spaces. I want to concatenate files of which filenames contain spaces. I also want to sort the filenames numerically. Obviously the following fails: cat $(ls *.sql | sort -n) since foo bar.sql is passed as two arguments to cat . What is the usual approach here?
No need for ls here. It's the shell that lists the directory content by expanding the *.sql glob. On a GNU or FreeBSD system:

printf '%s\0' *.sql | sort -nz | xargs -r0 cat --

(using \0 instead of \n together with -z/-0 makes sure it also works with file names containing newline characters). Note that the numeric sorting with -n assumes the number is at the start of the filename. Or if you have zsh :

cat ./*.sql(.n)

(The n glob qualifier enables numeric sorting; it also works when the number is not at the start, provided all file names have the same prefix (like file12.sql , file2.sql ). I added . as well to only include regular files. Add D if you also want hidden files like .foo.sql .)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
168,404
Given a mountpoint such as /dev/sda1 how can I list the contents of this file system using shell script. My objective is to delete the oldest file in this drive.
/dev/sda1 is a block device. It may contain a file system. When mounted, that file system may be available at some mount point like /home or / , and within that file system, some directories may in turn be mount points for other file systems (whether on other block devices, virtual ones like /proc , or network ones...). If /dev/sda1 is mounted on / , to remove the oldest (in terms of last modification time) regular file, on a recent GNU system, you can do:

find / -xdev -type f -printf '%T@:%p\0' | sort -zn | sed -z 's/[^:]*://;q' | xargs -r0p rm -f

The -xdev flag tells find to stick to one file system, that is, not to descend into other file systems mounted within / in this case. Note that other file systems may hide files on the file system of their mount point. For instance, if /dev/sda1 is mounted on / but contains a /home/some-old-file , and /dev/sda2 is mounted on /home , then /home/some-old-file will not be accessible. On Linux at least, you can work around that by bind-mounting / in another directory:

mount --bind / /mnt/side-access-to-root

Then all the files in the file system mounted at / will be available through /mnt/side-access-to-root . Then, you can omit the -xdev and you could use zsh globbing to remove the oldest file:

rm -i /mnt/side-access-to-root/**/*(D.Om[1])
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91015/" ] }
168,408
As far as I know the device drivers are located in the Linux kernel. For example let's say a GNU/Linux distro A has the same kernel version as a GNU/Linux distro B. Does that mean that they have the same hardware support?
The short answer is no. The set of drivers built for a given kernel version is configurable at compile time, and drivers can additionally be shipped as loadable modules. The devices actually supported by a distro therefore depend on which drivers were compiled into the kernel, which loadable modules were built, and which of those modules are actually installed. There are also devices supported by drivers not included in the kernel per se that a distro might ship. I have not run into problems lately, but when I started with Linux at home I went with SuSE: although they had the same, or similar, kernel versions as RedHat, SuSE included ISDN drivers and packages "out of the box" (that was back in 1998).
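You can inspect what a given kernel build actually includes; a sketch (the config file location varies by distro):

grep CONFIG_IWLWIFI /boot/config-$(uname -r)       # =y built in, =m module, unset: absent
ls /lib/modules/$(uname -r)/kernel/drivers | head  # module categories shipped for this kernel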
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91805/" ] }
168,436
I would like to open a terminal, split it into, let's say, 9 parts (3x3), and execute a bash script in each part, a different script for each one. Can this be done using Perl, Python, or even bash? How can I switch between those little terminals without using keyboard shortcuts? Oh, by the way, I'm using terminator . And if there is some other terminal emulator that enables such functionality, which is it?
To plagiarize myself , you can set up a profile with your desired settings (instructions adapted from here ):

Run terminator and set up the layout you want. You can use Ctrl + Shift + E to split windows vertically and Ctrl + Shift + O (that's O as in oodles, not zero) to split horizontally. For this example, I have created a layout with 6 panes.

Right click on the terminator window and choose Preferences . Once the Preferences window is open, go to Layouts and click Add . That will populate the Layouts list with your new layout.

Find each of the terminals you have created in the layout and click on them. Then on the right, enter the command you want to run in them on startup.

IMPORTANT: Note that the command is followed by ; bash . If you don't do that, the terminals will not be accessible, since they will run the command you give and exit. You need to launch a shell after each command to be able to use the terminals.

Once you have set all the commands, click Close and then exit terminator .

Open the terminator config file ~/.config/terminator/config and delete the section under layouts for the default config. Then change the name of the layout you created to default. It should look something like this:

[global_config]
[keybindings]
[profiles]
  [[default]]
[layouts]
  [[default]]
    [[[child0]]]
      position = 446:100
      type = Window
      order = 0
      parent = ""
      size = 885, 550
    [[[child1]]]
      position = 444
      type = HPaned
      order = 0
      parent = child0
    [[[child2]]]
      position = 275
      type = VPaned
      order = 0
      parent = child1
    [[[child5]]]
      position = 219
      type = HPaned
      order = 1
      parent = child1
    [[[child6]]]
      position = 275
      type = VPaned
      order = 0
      parent = child5
    [[[child9]]]
      position = 275
      type = VPaned
      order = 1
      parent = child5
    [[[terminal11]]]
      profile = default
      command = 'df -h; bash'
      type = Terminal
      order = 1
      parent = child9
    [[[terminal10]]]
      profile = default
      command = 'export foo="bar" && cd /var/www/; bash'
      type = Terminal
      order = 0
      parent = child9
    [[[terminal3]]]
      profile = default
      command = 'ssh -Yp 24222 [email protected]'
      type = Terminal
      order = 0
      parent = child2
    [[[terminal4]]]
      profile = default
      command = 'top; bash'
      type = Terminal
      order = 1
      parent = child2
    [[[terminal7]]]
      profile = default
      command = 'cd /etc; bash'
      type = Terminal
      order = 0
      parent = child6
    [[[terminal8]]]
      profile = default
      command = 'cd ~/dev; bash'
      type = Terminal
      order = 1
      parent = child6
[plugins]

The final result is that when you run terminator it will open with 6 panes, each of which has run or is running the commands you have specified. Also, you can set up as many different profiles as you wish and either launch terminator with the -p switch giving a profile name, or manually switch to whichever profile you want after launching.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/168436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78357/" ] }
168,452
I have the following setup:

Linux 1                               Linux 0
eth1              eth0-------------------eth0
14.14.14.80       19.19.19.20            19.19.19.10
2005::5/64        2004::3/64             2001::3/64

From Linux0, I am able to ping 14.14.14.80 or 19.19.19.20 ( 19.19.19.20 was added as the default GW), and IPv4 forwarding was enabled on Linux1. For IPv6, I cannot add 2004::3/64 as the default IPv6 gateway on Linux0. I tried ip -6 route add default via 2004::3 and ip -6 route add default via 2004:: but I get the error RTNETLINK answers: No route to host What am I missing here?
You need to add the route to the gateway first: ip -6 route add 2004::3 dev eth0
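With that on-link route in place, the default-route command from the question should then be accepted: ip -6 route add default via 2004::3 The kernel refuses a gateway it has no interface route to, which is exactly what the RTNETLINK error was reporting.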
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68053/" ] }
168,454
If I run the sudo df -h command, I get the output below:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        12G  9.5G  1.1G  91% /
/dev/sda4       3.8G  1.5G  2.1G  41% /home
/dev/sda1        99M   75M   20M  80% /boot
tmpfs           3.9G     0  3.9G   0% /dev/shm
/dev/sdc1        51G  2.6G   46G   6% /u000

But how will I know the list of directories under /dev/sda2 ? For example, if I run the ls / command, I get all the directories under root:

$ ls /
bin   cdunix     dev  etc   lib    lost+found  misc  mnt1  mtp     net  PatchInstall  root  selinux  sys       tmp   usr
boot  cron_4058  esm  home  lib64  media       mnt   mnt2  NB_DIR  opt  proc          sbin  srv      tftpboot  u000  var

But is there any command or way through which I can also list their filesystem too? Since there is very little space remaining on /dev/sda2 , how can I free up more space from this partition?
If I am reading this question correctly, there is a program called tree . It lists all directories in a tree-like structure. With it installed, you can do something like: tree -x where -x means "stay on the current file-system only", like find -xdev . UPDATE: I have tried tree -P /dev/xvda and it seemed to show the directories under that filesystem. The -P option stands for pattern. So, to answer your question, you should be able to use it to list directories in filesystems. To list only the first level under the / directory, try the command: tree -LP 1 /dev/xvda where -L is the level, i.e. the maximum display depth of the directory tree. Refer to the tree man page for details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54198/" ] }
168,458
I'm working with a fresh install of OpenBSD (5.6 amd64). I'm attempting to build Boost libraries, and quite a few compilations are failing with cc1plus running out of memory errors. I've read elsewhere that increasing swap can solve this problem. However, for me that's not working. Currently I have 4 gig of swap set up. However, none of it is even being used. Swapctl always shows total: 8390592 512-blocks allocated, 0 used, 8390592 available Even while the compiles fail, this remains the same. There's something unusual about my install I should mention. It's installed on a USB stick, and I've used full disk encryption via the softraid0 method. So my /dev/sd1b is my 4g of swap, /dev/sd1a is raid, /dev/sd2 (the encrypted raid) is partitioned as normal by the installer, except no swap there. My question is why is my swap space not being used at all, even as the compiler runs out of memory?
By default OpenBSD doesn't allow processes to use infinite memory. These limits are defined in /etc/login.conf . If you hit those limits, you'll get an out of memory error even though the OS as a whole still has plenty left. Most of the time this is nice, since one rogue process won't be able to suck up all memory and bring the system to its knees. Sometimes, however, it gets in the way. Fortunately you can change it.
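As a hedged illustration (the class name and values here are examples, not tuned recommendations), raising the per-process data size for the staff login class in /etc/login.conf looks roughly like:

staff:\
        :datasize-cur=4096M:\
        :datasize-max=4096M:\
        :tc=default:

After editing, rebuild the capability database with cap_mkdb /etc/login.conf if /etc/login.conf.db exists, and log in again for the new limits to take effect.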
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91834/" ] }
168,462
I am not sure if this is possible. When I do ls -l , it lists all the files in the current directory. Is there a way to list only the files that weren't created/modified on Saturdays, with a shell command?
ls by itself cannot filter on the day of the week, but you can combine a loop with GNU date , which prints a file's last-modification time when given -r . A sketch (it assumes GNU date ; %a formats the time as the abbreviated weekday name): for f in ./*; do [ "$(date -r "$f" +%a)" != Sat ] && ls -ld "$f"; done This lists every entry in the current directory whose modification time does not fall on a Saturday.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26056/" ] }
168,463
On my server, I have several public SSH keys in ~/.ssh/authorized_keys . I would like to temporarily block/disallow/deactivate one key. I want to prevent the user from logging in with this key now, but I might want to re-enable it later (i.e. I don't want to delete the key entirely). What is the correct/recommended way to do it? Shall I just put a comment # at the beginning of the line in authorized_keys , in front of the key? To clarify, I don't want to block a specific user. One user account is shared among several people, each person connecting with his own SSH key. I want to block one specific SSH key.
You could prefix the key with a forced command that tells the user what's going on. For example: restrict,command="printf 'Your key has been disabled\n'" ssh-rsa AAAAB2...19Q== [email protected] or for OpenSSH before v7.2: command="printf 'Your key has been disabled\n'",no-pty,no-port-forwarding ssh-rsa AAAAB2...19Q== [email protected] Then they get:

$ ssh servername
PTY allocation request failed on channel 0
Your key has been disabled
Connection to servername closed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
168,476
The context: there are 2 variables that get divided to a floating point result, like so:

printf "%0.5f\n" $(echo 305/15 | bc -l)
20.33333

How can I always round up to the next integer, i.e. 21? This is not about rounding a value above 20.5 to the nearest integer; I want the result to be either exactly the integer, or the next integer if the division is above it by any amount. So how can I evaluate that? With an if statement? If I put a float there, the shell complains that it expects an integer. I don't fully understand how to leverage the information in a Q&A such as this one to effect a conversion "upward" to the next integer. Am I missing something?
You can use bc features for that: echo "a=305; b=15; if ( a%b ) a/b+1 else a/b" | bc
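Wrapped as a small reusable function (a sketch; the name ceil_div is arbitrary): ceil_div() { echo "a=$1; b=$2; if ( a%b ) a/b+1 else a/b" | bc; } so that ceil_div 305 15 prints 21, while an exact division such as ceil_div 300 15 prints 20.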
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
168,482
I am writing a script to read the output of a command into variables a and b. This is the script:

#!/bin/bash
read a b < <(awk '/Application Server/ && !seen[$7]++{printf "%s ", $7}' /tmp/ServerState)
echo "The value of a is $a"
echo "The value of b is $b"

and I am getting the syntax error:

line 3: syntax error near unexpected token `<'
line 3: `read a b < <(awk /Application Server/ && !seen[$7]++{echo "%s ", $7} /tmp/ServerState)'

But when I type the same command in the console, it works for me without any issue:

app@user:/tmp> read a b < <(awk '/Application Server/ && !seen[$7]++{printf "%s ", $7}' /tmp/ServerState)
app@user:/tmp> echo $a
FAILED
app@user:/tmp> echo $b
STARTED

Any help on this is really appreciated.
sh (which on most (Debian-derived) systems is linked to dash ) doesn't allow process substitution . Try invoking the script with bash script.sh . Calling it as ./script.sh works the same way, since that executes it via the shebang line, which is /bin/bash in your script.
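If the script must also run under a plain POSIX sh , one hedged alternative sidesteps process substitution entirely by loading the two fields into the positional parameters: set -- $(awk '/Application Server/ && !seen[$7]++{printf "%s ", $7}' /tmp/ServerState); a=$1 b=$2 Word splitting on the default IFS assigns the two fields, assuming neither value contains whitespace itself.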
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91838/" ] }
168,505
I just saw a weird shortcut in dconf-editor: <Primary><Alt>KP_End What is <Primary> ? I also saw an Above-Tab key. I completely understand what that is referring to, but where are those key names defined?
<Primary> is a gtk+ thing. gtk+ 2.24.7 & gtk+ 3.2.1 introduced the concept of a platform-agnostic accelerator modifier, <Primary> , which can be used instead of <Control> : a new facility is provided in Gtk+ (as of this writing it is in Git for Gtk+-2.24, and released in Gtk+-3.2.0) to use the <Primary> descriptor in place of <Control> for accelerators and bindings. This will map the accelerator to Command on OSX and to Control for anything else. 1 As per this commit : gtk: allow to specify accelerators in a platform-independent way. Introduce <Primary> in accelerator strings, which resolves to GDK_CONTROL_MASK on X11/win32, and to GDK_META_MASK on quartz. It is defined (along with other accelerators) in gtkaccelgroup.c :

gtk_accelerator_name (guint accelerator_key,
                      GdkModifierType accelerator_mods)
{
  static const gchar text_release[] = "<Release>";
  static const gchar text_primary[] = "<Primary>";
  static const gchar text_shift[] = "<Shift>";
  static const gchar text_control[] = "<Control>";
  static const gchar text_mod1[] = "<Alt>";
  static const gchar text_mod2[] = "<Mod2>";
  static const gchar text_mod3[] = "<Mod3>";
  static const gchar text_mod4[] = "<Mod4>";
  static const gchar text_mod5[] = "<Mod5>";
  static const gchar text_meta[] = "<Meta>";
  static const gchar text_super[] = "<Super>";
  static const gchar text_hyper[] = "<Hyper>";

<Above_tab> is a window manager thing. As per this commit in metacity : We want switching between the windows of an application to be an easily accessible operation. The convenient and memorable keybinding is the key above the tab key - but the keysym for that key isn't consistent across different keyboard layouts. Add code that figures out the key from the XKB geometry and a magic keysym name "Above_Tab" that refers to this key and switch the default binding for cycle_group to Above_Tab See also this commit in mutter and the source files meta-accel-parse.c and meta-accel-parse.h
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9417/" ] }
168,511
I am writing a script where I am using a combination of logical 'OR' and logical 'AND' statements. This is the script:

#!/bin/bash
echo "Enter the value of a"
read $a
echo "Enter the value of b"
read $b
if
[[ $a != STARTED && $b == STARTED ] || [ $b != STARTED && $a == STARTED ]]; then
echo "Either of the JVMs is not yet up, so lets wait for some more time"
i=$(($i+1))
sleep 1s
fi

and I am getting the following error while executing it:

line 13: syntax error in conditional expression
line 13: syntax error near `]'
line 13: `[[ $a != STARTED && $b == STARTED ] || [ $b != STARTED && $a == STARTED ]]; then'

I am using the bash shell. Any help on this is really appreciated.
You have mismatched [[ with ] . [[ should always be closed with ]] and [ with ] . Use: if [[ $a != STARTED && $b == STARTED ]] || [[ $b != STARTED && $a == STARTED ]]; then Better yet, since you are using [[ anyway: if [[ ($a != STARTED && $b == STARTED) || ($b != STARTED && $a == STARTED) ]]; then The other mistake, which I didn't notice until formatting was applied, is that you're doing:

read $a
read $b

You should be doing:

read a
read b

With the first form, $a and $b are replaced by the shell with their contents, so if you hadn't set them before this line, the final command would be: read (in which case the value read would be stored in the REPLY variable.) And if you had set a to something (like a="blah blah" ), it would look like: read blah blah
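Putting both corrections together, a sketch of the repaired script (kept as close to the original as possible):

#!/bin/bash
echo "Enter the value of a"
read a
echo "Enter the value of b"
read b
if [[ ($a != STARTED && $b == STARTED) || ($b != STARTED && $a == STARTED) ]]; then
    echo "Either of the JVMs is not yet up, so lets wait for some more time"
    i=$((i+1))
    sleep 1s
fi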
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91838/" ] }
168,528
When logging in to remote machine X, by default, date will give the time in the local timezone:

$ date
Mon Nov 17 22:45:47 CET 2014

Note that TZ is not set:

$ export | grep TZ
$

So, I set TZ to my own local timezone for every machine I'm on:

$ export TZ=/usr/share/zoneinfo/Canada/Eastern
$ date
Mon Nov 17 16:46:13 EST 2014

The question is, once I have this set, how do I get the time for the system default timezone , i.e. the timezone that applies if I, as a user, do not manually set TZ ? Unsetting TZ gives me the UTC time, which is not what I seek:

$ TZ= date
Mon Nov 17 21:47:13 UTC 2014

Interestingly, TZ= date gives UTC time even when I hadn't yet set TZ to anything; yet when I hadn't set TZ to anything, a simple date gave the date in the system default timezone...
Try unsetting the TZ variable (which is different from setting it to "" ):

$ (date; export TZ=/usr/share/zoneinfo/Canada/Eastern; date; unset TZ; date)
Tue Nov 18 03:25:25 IST 2014
Mon Nov 17 16:55:25 EST 2014
Tue Nov 18 03:25:25 IST 2014

Note how I tried it in a subshell, so that my current shell remains unaffected.
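If you also want to know which zone the system default actually is, on many Linux systems it is recorded in /etc/timezone (Debian-style) or as the target of the /etc/localtime symlink, e.g. readlink /etc/localtime (treat both locations as system-dependent rather than universal).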
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15654/" ] }
168,580
I am aware of the fact that mkdir -p /path/to/new/directory will create a new directory, along with its parent directories (if needed). If I have to create a new file along with its parent directories (where some or all of the parent directories are not present), I could use mkdir -p /path/to/directory && touch /path/to/directory/NEWFILE . But is there any other command to achieve this?
AFAIK, there is nothing standard like that, but you can do it yourself:

ptouch() {
    for p do
        _dir="$(dirname -- "$p")"
        mkdir -p -- "$_dir" && touch -- "$p"
    done
}

Then you can do: ptouch /path/to/directory/file1 /path/to/directory/file2 ...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168580", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48188/" ] }
168,594
In Windows you get a count of the number of subdirectories within a directory. Is there any equivalent on Linux? I'd like it to count recursively and not stop at a single level.
Use find to count all directories in a tree starting from the current directory: find . -mindepth 1 -type d | wc -l Note that -mindepth is required to exclude the current directory from the count. You can also limit the depth of the search with the -maxdepth option, like this: find . -mindepth 1 -maxdepth 1 -type d | wc -l More find options are available; you can check the man page for details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89511/" ] }
168,628
I'd like to compare directories containing binary files. Actually, I'm not interested in what the actual differences between the files are, but in knowing whether they differ (and which files differ). Previously I used meld , but it cannot compare binary files. What file comparison tool can do this? NOTE: It doesn't matter if it's a graphical tool or just has a command-line interface.
This can easily be done with diff . For example:

$ ls -l foo/
total 2132
-rwxr-xr-x 1 terdon terdon 1029624 Nov 18 13:13 bash
-rwxr-xr-x 1 terdon terdon 1029624 Nov 18 13:13 bash2
-rwxr-xr-x 1 terdon terdon  118280 Nov 18 13:13 ls
$ ls -l bar/
total 1124
-rwxr-xr-x 1 terdon terdon 1029624 Nov 18 13:14 bash
-rwxr-xr-x 1 terdon terdon  118280 Nov 18 13:14 ls
$ diff bar/ foo/
Only in foo/: bash2

In the example above, the foo/ and bar/ directories contain binary files and bash2 is only in foo/ . So, you could run something simple like: $ diff bar/ foo/ && echo "The directories' contents are identical" That will show you the different files, if any, or print "The directories' contents are identical" if they are. To compare subdirectories and any files they may contain as well, use diff -r . Combine it with -q to suppress the output for text files.
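For instance, the recursive, quiet form would be: diff -rq bar/ foo/ With binary files, diff already prints just a short "Binary files ... differ" notice, and -q extends that terse reporting to text files as well, so the output is a plain list of what differs.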
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39370/" ] }
168,674
How does one iterate a string with the form "[A-Z][0-9]*" , or for example "A000001" ? After receiving the variable I split:

current_=$(mysql -h"$mysqlhost" -u"$mysqluser" -p"$PASS" "$DBNAME" -se "SELECT current_ FROM $GLOBALDB;")
current_number=$(echo $current_ | grep -oh "[0-9]*")
current_letter=$(echo $current_ | grep -oh "[A-Z]*")

However, when I try to add 1:

# add 1 & keep all leading zeros "000001"
next_number=$(printf %06d $(($current_number + 1)))

It counts to "000009" and rolls back over to "000000". And I join as follows: next_=$(echo "$current_letter$next_number") In regards to the letter iteration, I was thinking of using an associative array? Or brace expansion {A..Z} , but that is a whole different question.
In bash , numbers with leading zeros are considered as octal. To force bash to consider them as decimal, you can add a 10# prefix: next_number=$(printf %06d "$((10#$current_number + 1))") Or with bash 3.1 or above, to avoid the forking: printf -v next_number %06d "$((10#$current_number + 1))" (note that it doesn't work for negative numbers as 10#-010 is seen as 10#0 - 010 in bash , so both $((10#-10)) and $((-10#-10)) expand to -8 ). See also:

$ printf 'A%06d\n' {5..12}
A000005
A000006
A000007
A000008
A000009
A000010
A000011
A000012

Or:

$ printf '%s\n' {A..C}{00008..00012}
A00008
A00009
A00010
A00011
A00012
B00008
B00009
B00010
B00011
B00012
C00008
C00009
C00010
C00011
C00012

Or:

$ seq -f A%06g 5 12
A000005
A000006
A000007
A000008
A000009
A000010
A000011
A000012
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61110/" ] }
168,679
I'm trying to make a backup script, as the log files get bigger and bigger. What I have so far copies the current file (for example the secure file in /var/log/ ) and removes the content from that file. But there are some files with names like secure.1 , secure.2 , and I would like to count them all and, if the number is bigger than 2, archive them all. I can't find a method to find these files or count them. The first thing that came to me was: find /var/log/ -name *.1 | wc -l and this will always print 1, as there is one file secure.1 . How can I count over a range of numbers, like {1..5} in a for loop, or similar? Is there a way to gather these files, combine them into one, and then back them up or delete them, or whatever... or, first of all, how can I find all the files whose names end with a number?
With simple -name : find /var/log -name '*.[2-9]' or for any digit: find /var/log -name '*.[[:digit:]]' or if other chars are possible after digit: find /var/log -name '*.[2-9]*'
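To then act on the count the way the question describes, archiving when more than two rotations exist, here is a rough sketch (the archive name and destination are arbitrary placeholders):

count=$(find /var/log -name 'secure.[0-9]*' | wc -l)
if [ "$count" -gt 2 ]; then
    tar czf /var/backups/secure-logs.tar.gz /var/log/secure.[0-9]*
fi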
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/168679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78167/" ] }
168,695
In order to test some scripts 1 in as many environments as possible, I've set up several VMs with UNIX (or Unix-like operating systems): Linux Solaris OS X 2 FreeBSD I assume, though, that in many ways it's actually more important to test using different versions of each shell that I plan to care about, than to test using different operating systems. Since I'd rather not have my VMs multiply in seemingly endless permutations, I'd like to install multiple versions of each shell on any given VM. For instance, if I test under [:bash,:zsh,:fish,:ksh,:csh,:tcsh,:sh,:dash,:ash] then I've got 9 shells, and if you assume I'm testing an average of 3 versions of each then I've got over 100 VMs: # operating_systems * shells * shell_versions 4 * 9 * 3 Is there any practical way to install and use multiple versions of a given shell on a single machine or virtual machine? Can I (e.g.) install Bash 1, Bash 2, Bash 3, and Bash 4 all on one Linux VM? I realize that some combinations are less important and can probably be ignored, and ultimately I'd want to test multiple versions of each OS as well, but those are really separate from this question, so I'm putting such issues aside to consider whether this is possible. So: is there any practical way to install and use multiple versions of a given shell on a single machine? 1 I'm using the term "script" loosely. One of the first things I want to test is something that will be sourced by one's shell rc files, whether that be .zshrc , .bash_profile , or whatever, so it won't have its own shebang line. Hence the desire to make one bit of code work across multiple shells. Other things that would be useful across shells would be functions and aliases that I'd want to use on different machines although they won't necessarily all have my favorite shell (Z Shell) but might make me use Bash or Korn. Also any useful snippets that I might want to use in shell scripts on multiple machines, even when I can't put my favorite shell in the shebang line. 2 Totally tangential note, only included for the sake of not saying anything misleading: I haven't actually gotten the OS X VM set up, since this is quite a hassle, but I hope to, and included it in the list so no one would say "Hey! Why aren't you including OS X!?"
If you're happy to build from source, you can install each version into a separate prefix, then adjust the path in your scripts accordingly. bash, fish, ksh, tcsh, zsh, and dash all support the --prefix argument to configure, so you can download each version, run ./configure --prefix=/opt/SHELL-VERSION; make; make install . Then to use each version, set PATH to have /opt/SHELL-VERSION/bin at the front. csh is a bit different and will require more manual work; if you're sure you want it, you can extract the sources from the FreeBSD source tree and edit the Makefile, but most people actually use tcsh anyway. I don't think there's a canonical source for ash but it will probably have a similar way of going about things.
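As a concrete sketch for one shell (the version number and paths are illustrative, not a recommendation):

wget https://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
tar xf bash-4.3.tar.gz
cd bash-4.3
./configure --prefix=/opt/bash-4.3
make && sudo make install

After that, PATH=/opt/bash-4.3/bin:$PATH bash --version should report that specific build, and your test harness can prepend the matching /opt/SHELL-VERSION/bin for each run.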
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
168,704
When I was installing the Mint Debian edition, unlike with the classic edition, the installation automatically formatted my home partition even though I did not specify to format it. The formatting previously was ext4, as it is now. I believe the data is still there, as it was a quick format. I have now booted the computer up on a live USB to prevent writing on it, and ran TestDisk. Is there any way to fall back to a previous superblock so I can recover my data?
Take a look at the e2fsprogs package. It seems that you can get all your backup superblocks from dumpe2fs /dev/sd<partition-id> | grep -i superblock and then have e2fsck check the FS for you, or just try to do mount -o sb=<output-of-dumpe2fs> /dev/sd<partition-id> /your/mountpoint with a backup superblock. See this for reference: http://www.cyberciti.biz/faq/linux-find-alternative-superblocks/ . testdisk works well to recover partition tables, not clobbered file systems. Photorec is a last resort when you have really messed things up and can't get any of the filesystem structure recovered.
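For example, if dumpe2fs lists a backup superblock at block 32768 (a common location on ext4, but use one actually reported for your partition), a sketch of the repair run would be: e2fsck -b 32768 /dev/sd<partition-id> Run it against the unmounted partition; e2fsck will then read the filesystem metadata from that backup copy.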
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52233/" ] }
168,710
When searching on multiple files, the output of grep looks something like this: {FILE}:{MATCH} This is a problem since if I double-click on the {FILE} portion, I will at least also select (and copy) the : and possibly some of the match. How can I get grep to instead format its output something like this: {FILE} : {MATCH} Basically, I want to insert a space before the colon (and optionally one after).
grep -T will work 7/8ths of the time.

% for f in a ab abc abcd abcde abcdef abcdefg abcdefgh; do echo pattern > $f; done
% grep -T pattern *
a       :pattern
ab      :pattern
abc     :pattern
abcd    :pattern
abcde   :pattern
abcdef  :pattern
abcdefg:pattern
abcdefgh        :pattern

From the GNU grep manual : -T --initial-tab Make sure that the first character of actual line content lies on a tab stop, so that the alignment of tabs looks normal. This is useful with options that prefix their output to the actual content: -H , -n , and -b . In order to improve the probability that lines from a single file will all start at the same column, this also causes the line number and byte offset (if present) to be printed in a minimum-size field width.
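If you want literal spaces around the separator rather than tab alignment, one rough alternative (it assumes the file names themselves contain no colons) is to post-process the output: grep pattern * | sed 's/:/ : /' The sed substitution only touches the first colon on each line, producing the {FILE} : {MATCH} form asked for.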
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91995/" ] }
168,763
I have re-installed Debian 7 (Wheezy) after a while. For the first time, I am using the open-free Nvidia drivers (not nouveau) and vesafb for virtual consoles. I cannot, for the life of me, stop the screen from blanking. There is no screensaver, nothing; it just goes blank after a couple of minutes of inactivity. This is not just during VLC (which has had such an issue in the past) but during anything. To make it worse, it seems to happen at random. Sometimes the screen will not go blank for hours, and sometimes it will. Steps I have taken so far: Added a few lines in /etc/X11/xorg.conf to stop DPMS:

Section "ServerLayout"
    Option "BlankTime" "0"
    Option "StandbyTime" "0"
    Option "SuspendTime" "0"
    Option "OffTime" "0"
    ...

Section "Monitor"
    ...
    Option "DPMS" "false"

Added in my .xinitrc file:

xset s off      # don't activate screensaver
xset -dpms      # disable DPMS (Energy Star) features.
xset s noblank  # don't blank the video device

Disabled ALL screensavers and power saving modes under KDE settings. Added the following loop in my /etc/init.d/rc.local :

for index in $(seq 1 6)
do
    setterm -blank 0 -powerdown 0 -powersave off > /dev/tty${index}
done

Patched my xdg-screensaver with a patch I found that was forcing VLC to spawn a screensaver. (I have since stopped using VLC and reverted to Dragon player.) This is turning into a nightmare, and is truly very annoying. Before I nuke vesafb and setterm (which I have the feeling are somehow responsible for this), I would like to know if anyone has ever run into this problem, and how they managed to solve it.
DPMS can be darn resistant! Try this command: xset dpms 0 0 0 && xset s noblank && xset s off If it works, add it to whatever autostart file KDE uses. By the way, VLC has the option Preferences >> Video >> Disable screensaver . If that option is checked, the screen won't blank while VLC plays a video but DPMS will be turned on afterwards (regardless whether it was on before starting VLC). Therefore leave that option unchecked, and VLC shouldn't cause any problems with blanking.
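To make the settings stick across logins, one hedged option is an XDG autostart entry, e.g. a file ~/.config/autostart/noblank.desktop (the file name and Name field are arbitrary) containing:

[Desktop Entry]
Type=Application
Name=Disable screen blanking
Exec=sh -c "xset dpms 0 0 0; xset s noblank; xset s off"

KDE picks such entries up at session start, so the xset calls are reapplied automatically.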
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67204/" ] }
168,794
I have a requirement to know which ports on my Solaris machine are free to be used for any kind of network communication. I tried the netstat -a command, but the information returned didn't give a convincing picture of which ports I can use for a new application that I am writing. I'd appreciate any assistance in this regard.
You can use ports 1-65535 on your system, of which the first 1024 are privileged (root only). So instead of hunting for a free port, you can get the list of ports already in use with the command below: netstat -tunlep | grep LISTEN | awk '{print $4}' Then you can use any port from 1-65535 except those.
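Note that the -tunlep flags above come from Linux netstat. To spot-check a single candidate port on Solaris itself, whose netstat joins address and port with a dot (e.g. *.22 ), something along the lines of netstat -an | grep '\.8080' should come back empty if port 8080 is unused; treat the exact pattern as an assumption and adjust it to your netstat output.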
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/168794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92052/" ] }
168,807
I am looking for a way to mount a ZIP archive as a filesystem so that I can transparently access files within the archive. I only need read access -- the ZIP will not be modified. RAM consumption is important since this is for a (resource constrained) embedded system. What are the available options?
fuse-zip is an option and claims to be faster than the competition. # fuse-zip -r archivetest.zip /mnt archivemount is another: # archivemount -o readonly archivetest.zip /mnt Both will probably need to open the whole archive, therefore won't be particularly quick. Have you considered extracting the ZIP to a HDD or USB-stick beforehand and simply mounting that read-only? There are also other libraries like fuse-archive and ratarmount which supposedly are more performant under certain situations and provide additional features.
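Either way, when you are done, the archive is detached like any other FUSE mount: fusermount -u /mnt (or umount /mnt as root).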
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/168807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67390/" ] }