source_id | question | response | metadata
---|---|---|---|
170,879 | I would like to add the time of the last change to a file to its filename, using bash. For example, this line of ls -la output: -rw-rw-r-- 1 beginner beginner 5382 Dec 1 17:18 B_F1-1.xml should become -rw-rw-r-- 1 beginner beginner 5382 Dec 1 17:18 B_F1-1_20141201T1718.xml How could I do this to all files in .? | You can try something like: EXT=${FILE#*.}; NAME=${FILE%%.*}; mv "$FILE" "${NAME}_$(date --reference "$FILE" '+%Y%m%dT%H%M').$EXT" in a script, if your date supports --reference , which picks up the last modification date of the reference file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/170879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87359/"
]
} |
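A minimal sketch of applying that snippet to every file in the current directory, assuming GNU date (for --reference) and simple one-dot filenames as in the question:

```sh
#!/bin/sh
# Insert each file's mtime (e.g. _20141201T1718) before its extension.
for FILE in *; do
    [ -f "$FILE" ] || continue   # skip directories and other non-files
    EXT=${FILE#*.}               # part after the first dot
    NAME=${FILE%%.*}             # part before the first dot
    mv -- "$FILE" "${NAME}_$(date --reference "$FILE" '+%Y%m%dT%H%M').$EXT"
done
```

Note that names with several dots (e.g. a.b.xml) split at the first dot, so NAME would be a and EXT would be b.xml.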
170,911 | I can't seem to grab the month, day and year using awk '{print $2, $3, $6}' date Does anyone know how to? | Why use awk at all? date +"%b %d %Y" gives you the values without the hassle. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/170911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91790/"
]
} |
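A hedged illustration of the two approaches (the sample output is assumed, not from a real session). Note that the awk variant only works if date's output is piped into awk; naming date as an argument makes awk read a file called date:

```sh
$ date +"%b %d %Y"                  # let date format the fields itself
Dec 05 2014
$ date | awk '{print $2, $3, $6}'   # pipe into awk; fields depend on locale
Dec 5 2014
```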
170,930 | I am trying to capture HTTP traffic to port 8007 in a file and then view it later.

# tcpdump -i eth0 -s0 -n -w /tmp/capture port 8007 &
# tcpdump -r /tmp/capture -A | grep '10.2.1.50'

I expected to see packet data in ASCII but that does not happen. What I get instead is something like:

23:03:16.819935 IP 10.2.1.50.8007 > 10.2.1.120.57469: . ack 1369 win 272 <nop,nop,timestamp 188139705 215355175>
23:03:16.819943 IP 10.2.1.120.57469 > 10.2.1.50.8007: P 1369:1592(223) ack 1 win 12 <nop,nop,timestamp 215355175 188139703>
23:03:16.819947 IP 10.2.1.50.8007 > 10.2.1.120.57469: . ack 1592 win 272 <nop,nop,timestamp 188139705 215355175>
23:03:17.029587 IP 10.2.1.50.8007 > 10.2.1.120.57469: P 1:780(779) ack 1592 win 272 <nop,nop,timestamp 188139758 215355175>
23:03:17.029736 IP 10.2.1.50.8007 > 10.2.1.153.49989: F 822:822(0) ack 3494 win 272 <nop,nop,timestamp 188139758 1641992210>
23:03:17.040759 IP 10.2.1.120.57469 > 10.2.1.50.8007: . ack 780 win 15 <nop,nop,timestamp 215355396 188139758>
23:03:17.079305 IP 10.2.1.153.49989 > 10.2.1.50.8007: . ack 823 win 15 <nop,nop,timestamp 1642053303 188139758>

How do I fix the write or read to see the actual content? I have tried other options such as -v but that's not for content. I am using SLES 11 SP2. Is tcpdump the right tool for this? Thanks a lot.

--EDIT

# tcpdump --version
tcpdump version 3.9.8
libpcap version 0.9-PRE-CVS

I also tried with the -X option but still do not see the payload data. | You can only see the headers and not the packet contents because you piped the output to grep, so you are getting just the lines in which the IP address is present. The -A option to tcpdump does print the packet contents as well. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/170930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91672/"
]
} |
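A quick sketch of reading the capture without filtering away the payload; the context size (-A 20) is an arbitrary choice:

```sh
# Keep N lines of ASCII payload after each matching header line...
tcpdump -r /tmp/capture -A | grep -A 20 '10.2.1.50'
# ...or skip grep entirely and page through everything:
tcpdump -r /tmp/capture -A | less
```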
170,933 | When I'm in Insert mode in vim and press Shift + Insert to paste my code into my file, vim mangles my indentation. How can I solve this problem? | Vim is acting as if you had typed all of your pasted code by hand, so Vim will add additional indentation and otherwise change whitespace as it normally would, such as with your autoindent setting. To paste code in Vim: :set paste to enable paste mode. Paste your code. :set nopaste to disable paste mode so your normal typing will work as expected again. See :help paste for more information, including which options are disabled/altered when paste mode is on. It's possible to set up mappings for this sort of thing if you do it a lot. See :help imap for more information on that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/170933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21911/"
]
} |
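If you paste often, a toggle key saves typing the two commands; a minimal sketch for your vimrc (the F2 key is an arbitrary choice):

```vim
" Press F2 to flip between paste and nopaste; with 'showmode' set,
" the status line shows (paste) while paste mode is active.
set pastetoggle=<F2>
```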
170,950 | I would like to set up my Ubuntu computer such that I can't hear any audio between 9:59pm and 8:00am. How can I do that? | Found this method on AskUbuntu that shows using a crontab entry along with amixer to mute/unmute the sound. It's titled: How do I automatically mute/unmute sound during a certain time period (e.g. night)? General steps:

1. Create a crontab entry: $ crontab -e
2. Add these entries to the crontab:

59 21 * * * amixer set Master mute
00 08 * * * amixer set Master unmute

3. Save (if you are using Vim, then e.g. Shift + Z + Z ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/170950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93482/"
]
} |
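If the mute commands have no effect, the mixer control may not be called Master on your card; a hedged way to check before trusting the cron jobs:

```sh
amixer scontrols            # list control names; look for Master/PCM/Speaker
amixer set Master mute      # test interactively...
amixer set Master unmute    # ...and undo
```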
170,961 | I have a list of .ts files: out1.ts ... out749.ts out8159.ts out8818.ts How can I get the total duration (running time) of all these files? | I have no .ts here (only .mp4 ) but this should work for all video files: Use ffprobe (part of ffmpeg ) to get the time in seconds, e.g.:

ffprobe -v quiet -of csv=p=0 -show_entries format=duration Inception.mp4
275.690000

So for all video files you could use a for loop and awk to calculate the total time in seconds:

for f in ./*.mp4
do ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
done | awk '{sum += $1}; END{print sum}'
2735.38

To further process the output to convert the total to DD:HH:MM:SS , see the answers here . Another way is via exiftool which has an internal ConvertDuration :

exiftool -n -q -p '${Duration;our $sum;$_=ConvertDuration($sum+=$_) }' ./*.mp4 | tail -n1
0:45:35 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/170961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81823/"
]
} |
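A sketch of the seconds-to-H:MM:SS conversion step mentioned above, applied to the question's .ts files:

```sh
# Sum the durations, then format the total (awk's % works on floats).
total=$(for f in ./*.ts; do
    ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
done | awk '{sum += $1} END{print sum}')
awk -v t="$total" 'BEGIN{printf "%d:%02d:%02d\n", t/3600, (t%3600)/60, t%60}'
```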
170,986 | Using pacemaker in a two-node master/slave configuration. In order to perform some tests, we want to switch the master role from node1 to node2 , and vice versa. For instance, if the current master is node1 , doing # crm resource migrate r0 node2 does indeed move the resource to node2 . Then, ideally, # crm resource migrate r0 node1 would migrate back to node1 . The problem is that migrate added a line to the configuration to perform the switch:

location cli-prefer-r0 r0 role=Started inf: node2

and in order to migrate back I first have to remove that line... Is there a better way to switch the master from one node to the other? | I know this is a bit old, but it seems like no one answered this satisfactorily, and the requester never posted whether his problem was solved. So here is an explanation. When you perform # crm resource migrate r0 node2 a cli-prefer-* rule is created. Now when you want to move r0 back to node1, you don't do # crm resource migrate r0 node1 but instead perform # crm resource unmigrate r0 Using unmigrate (or unmove ) gets rid of the cli-prefer-* rule automatically. If you try to delete this rule manually in the cluster config, really bad things happen in the cluster, or at least bad things happened in my case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/170986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3527/"
]
} |
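A hypothetical session illustrating the rule's lifetime (node and resource names follow the question):

```sh
crm resource migrate r0 node2          # creates the cli-prefer-r0 rule
crm configure show | grep cli-prefer   # the temporary constraint is visible
crm resource unmigrate r0              # removes the rule cleanly
```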
170,998 | I use unprivileged lxc containers in Arch Linux . Here are the basic system infos:

[chb@conventiont ~]$ uname -a
Linux conventiont 3.17.4-Chb #1 SMP PREEMPT Fri Nov 28 12:39:54 UTC 2014 x86_64 GNU/Linux

It's a custom/compiled kernel with user namespaces enabled:

[chb@conventiont ~]$ lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

[chb@conventiont ~]$ systemctl --version
systemd 217
+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN

Unfortunately, systemd does not play well with lxc currently. Especially setting up cgroups for a non-root user seems not to be working well, or I am just too unfamiliar with how to do this. lxc will only start a container in unprivileged mode when it can create the necessary cgroups in /sys/fs/cgroup/XXX/* . This however is not possible for lxc because systemd mounts the root cgroup hierarchy in /sys/fs/cgroup/* . A workaround seems to be to do the following:

for d in /sys/fs/cgroup/*; do
  f=$(basename $d)
  echo "looking at $f"
  if [ "$f" = "cpuset" ]; then
    echo 1 | sudo tee -a $d/cgroup.clone_children;
  elif [ "$f" = "memory" ]; then
    echo 1 | sudo tee -a $d/memory.use_hierarchy;
  fi
  sudo mkdir -p $d/$USER
  sudo chown -R $USER $d/$USER
  echo $$ > $d/$USER/tasks
done

This code creates the corresponding cgroup directories in the cgroup hierarchy for an unprivileged user. However, something happens which I don't understand. Before executing the aforementioned I will see this:

[chb@conventiont ~]$ cat /proc/self/cgroup
8:blkio:/
7:net_cls:/
6:freezer:/
5:devices:/
4:memory:/
3:cpu,cpuacct:/
2:cpuset:/
1:name=systemd:/user.slice/user-1000.slice/session-c1.scope

After executing the aforementioned code I see, in the shell I ran it in:

[chb@conventiont ~]$ cat /proc/self/cgroup
8:blkio:/chb
7:net_cls:/chb
6:freezer:/chb
5:devices:/chb
4:memory:/chb
3:cpu,cpuacct:/chb
2:cpuset:/chb
1:name=systemd:/chb

But in any other shell I still see:

[chb@conventiont ~]$ cat /proc/self/cgroup
8:blkio:/
7:net_cls:/
6:freezer:/
5:devices:/
4:memory:/
3:cpu,cpuacct:/
2:cpuset:/
1:name=systemd:/user.slice/user-1000.slice/session-c1.scope

Hence, I can start my unprivileged lxc container in the shell in which I executed the code mentioned above, but not in any other. Can someone explain this behaviour? Has someone found a better way to set up the required cgroups with a current version of systemd ( >= 217 )? | A better and safer solution is to install cgmanager and run it with systemctl start cgmanager (on a systemd -based distro). You can then have your root user, or any user with sudo rights on the host, create cgroups for your unprivileged user in all controllers with:

sudo cgm create all $USER
sudo cgm chown all $USER $(id -u $USER) $(id -g $USER)

Once they have been created for your unprivileged user, she/he can move processes he has access to into his cgroup for every controller by using:

cgm movepid all $USER $PPID

Safer, faster, and more reliable than the shell script I posted.
Manual solution: To answer 1.:

for d in /sys/fs/cgroup/*; do
  f=$(basename $d)
  echo "looking at $f"
  if [ "$f" = "cpuset" ]; then
    echo 1 | sudo tee -a $d/cgroup.clone_children;
  elif [ "$f" = "memory" ]; then
    echo 1 | sudo tee -a $d/memory.use_hierarchy;
  fi
  sudo mkdir -p $d/$USER
  sudo chown -R $USER $d/$USER
  echo $$ > $d/$USER/tasks
done

I was ignorant about what exactly was going on when I wrote that script, but reading the cgroups documentation and experimenting a bit helped me understand. What I am basically doing in this script is creating a new cgroup session for the current user, which is what I already stated above. When I run these commands in the current shell, or run them in a script evaluated in the current shell and not in a subshell (via . script - the . is important for this to work!), I do not just open a new session for the user but also add the current shell as a process that runs in this new cgroup. I can achieve the same effect by running the script in a subshell, then descending into each chb subcgroup of the cgroup hierarchy and using echo $$ > tasks to add the current shell to every member of the chb cgroup hierarchy. Hence, when I run lxc in that current shell, my container will also become a member of all the chb subcgroups that the current shell is a member of; that is to say, my container inherits the cgroup status of my shell. This also explains why it doesn't work in any other shell that is not part of the chb subcgroups. I still pass on 2. We'll probably need to wait either for a systemd update or further kernel developments to make systemd adopt a consistent behaviour, but I prefer the manual setup anyway as it forces you to understand what you're doing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/170998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
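A quick sanity check before starting the container, assuming the per-user layout created above (the container name is a placeholder):

```sh
cat /proc/self/cgroup         # each controller line should now end in /$USER
lxc-start -n mycontainer -d   # only processes of this shell inherit the cgroups
```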
171,025 | I am currently doing preparation for my GCSE computing controlled assessment on Linux. I type ls > list and ls >> list into the command line, but it does not seem to do anything. I have googled it but I can't find what it exactly does. What do ls > list and ls >> list do? | Both redirect stdout to a file.

ls > list

If the file exists, it'll be truncated and replaced; if it does not exist, it'll be created.

ls >> list

If the file does not exist, it'll be created. If it exists, the output is appended to the end of the file.

Find out more: IO Redirection | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93519/"
]
} |
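A short demonstration of the difference (counts depend on the directory, of course):

```sh
$ ls > list      # creates "list", or truncates it if it already exists
$ wc -l list     # note the line count...
$ ls >> list     # run again with >>: the new listing is appended
$ wc -l list     # ...roughly doubles instead of staying the same
```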
171,091 | I have large 3-column files (~10,000 lines) and I would like to remove lines when the contents of the third column of that line appear in the third column of another line. The files' sizes make sort a bit cumbersome, and I can't use something like the below code because the entire lines aren't identical; just the contents of column 3. awk '!seen[$0]++' filename | Just point your awk command at the column you want to deduplicate on (in your case the third column): awk '!seen[$3]++' filename This command tells awk which lines to print. The variable $3 holds the entire contents of column 3, and the square brackets are array access. So, for each third-column value of a line in filename, the node of the array named seen is incremented, and the line is printed if the content of that node (column 3) was not ( ! ) previously set. By doing this, the first line for each distinct third-column value is always kept. The above works if the columns in your input file are delimited with spaces/tabs; if the delimiter is something else, you will need to tell awk with its -F option. For example, if the columns are delimited with a comma ( , ) and you want to remove lines based on the third column, use the command as follows: awk -F',' '!seen[$3]++' filename | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93551/"
]
} |
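A tiny worked example of the behaviour (the sample data is invented here):

```sh
$ printf '%s\n' 'a 1 x' 'b 2 y' 'c 3 x' > data
$ awk '!seen[$3]++' data    # the first line per distinct column-3 value wins
a 1 x
b 2 y
```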
171,113 | I've got the beginnings of a script that I'm putting together to check Xen hosts... this question is twofold really. Firstly, I've got the below code snippet:

TMPFILE001=/tmp/FILE001.rx
TMPFILE002=/tmp/FILE002.rx
TMPFILE003=/tmp/FILE003.rx
xe vm-list params=uuid,is-control-domain | grep -B1 true | grep uuid | awk {'print $5'} > $TMPFILE001
xe vm-list params=uuid --minimal | tr ',' '\n' > $TMPFILE002

So this gives me two lists. I want to remove anything that appears in FILE002.rx from FILE001.rx and output that to FILE003.rx. Ideally I'd not have any files used in this at all, but I was struggling to get it working by trying to capture them in variables. So if possible it would be better to have the script run, compare the output of the two commands, and only show the remainder once the output of command 2 has been taken away from command 1. Output of command 1:

cat /tmp/FILE001.rx
468190e5-c78b-4337-8094-20638353b470
5926bbf3-c48c-4c2a-8113-d7e58520cfe7
3f98ee14-5e60-4e9b-852a-f924ffe80791

Output of command 2 (trimmed):

cat /tmp/FILE002.rx | head -5
37ae6b9e-8a00-ab10-7e17-3d70ab275b9b
d8208537-0d69-1332-9301-4699d194430f
1cf06668-5450-4d44-a7ba-c2c7b6bcd6b2
7d11c6e3-48ef-0e5e-c6ed-e8f2548533fd
7d3d0ba3-a6d6-d2a7-99f0-06fa74e625fa | Look at the comm command. Take the following two files:

f1.txt
item1
item2
item3
item4
item5
item6

f2.txt
item1
item2
item22
item3
item4

Output:

$ comm -23 f1.txt f2.txt
item5
item6

man page entry for comm | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91655/"
]
} |
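Note that comm expects sorted input, which the xe output may not be; a file-free sketch using process substitution (bash) with sorting added, reusing the question's two commands:

```sh
comm -23 <(xe vm-list params=uuid,is-control-domain | grep -B1 true \
           | grep uuid | awk '{print $5}' | sort) \
         <(xe vm-list params=uuid --minimal | tr ',' '\n' | sort)
```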
171,115 | I know the following commands will print up to the first occurrence of a certain pattern but will not include the later occurrences:

sed -n '1,/<pattern>/p' <file>
sed '/<pattern>/q' <file>

For example, let's say I have a file with the following lines:

this is a cow
this is a goat
this is a some fish
this is a fishie
this is a fish
this is a lion
this is a cat

Now the outputs:

$ sed '/fish/q' file
this is a cow
this is a goat
this is a some fish

$ sed -n '1,/fish/p' file
this is a cow
this is a goat
this is a some fish

I want the output starting from the very first line up to the line containing the last occurrence of fish, i.e. my desired output is:

this is a cow
this is a goat
this is a some fish
this is a fishie
this is a fish

How to do that using sed ? | Try this:

$ tac infile | sed -n '/fish/,$p' | tac

In the general case, if you run the sed command below, you will get all lines from the first matched pattern to the end of the input file:

$ sed -n '/fish/,$p' file
this is a some fish
this is a fishie
this is a fish
this is a lion
this is a cat

So my solution is: if we run tac on the input file, the last matched pattern becomes the first one. See the result of tac infile:

$ tac infile
this is a cat
this is a lion
this is a fish
this is a fishie
this is a some fish
this is a goat
this is a cow

The tac command is the same as the cat command, except that tac prints files in reverse order. Now if we run our sed command, we get all lines from the first matched pattern to the end of the input:

$ tac infile | sed -n '/fish/,$p'
this is a fish
this is a fishie
this is a some fish
this is a goat
this is a cow

We only need to run tac again to restore the original line order:

$ tac infile | sed -n '/fish/,$p' | tac
this is a cow
this is a goat
this is a some fish
this is a fishie
this is a fish

Done! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68757/"
]
} |
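If tac is unavailable, a two-pass awk sketch gives the same result: the first pass records the line number of the last match, the second prints up to it:

```sh
awk 'NR==FNR { if (/fish/) last = FNR; next } FNR <= last' infile infile
```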
171,132 | Say I have some hundred custom re-mappings with iab for Java and some other hundred or so re-mappings for Haskell, then I'd want to divide these into different files to make it more manageable. What I'm looking for is to create something like this: ~/. ├── .vimrc └── .vim └── custom ├── java.vim └── haskell.vim Where .vimrc might look something like:

import java
import haskell

Is something like this possible to do, or am I just overcomplicating things? I guess what I am trying to achieve is what one does in LaTeX with the \input command... | Yes, the vim command you're looking for is :source or :runtime to pull them from runtimepath . For example, you could do this in your .vimrc :

runtime custom/java.vim
runtime custom/haskell.vim

presuming ~/.vim is in your runtimepath (which it is by default). You could also drop your scripts in the ~/.vim/plugin directory; see write-plugin in the docs. Vim automatically runs all the scripts in the plugin directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33928/"
]
} |
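A one-line alternative sketch if you'd rather not list each file:

```vim
" Source every *.vim under ~/.vim/custom; runtime! loads all matches,
" not just the first one found on 'runtimepath'.
runtime! custom/*.vim
```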
171,141 | We have a server where another script sFTPs in and downloads files every day. Question: Is it possible to detect that the file was downloaded, and then for me to automatically archive the file after they are done? To clarify - we host the files and someone else comes and downloads them. This is the script they use:

let $command = 'sftp -b /usr/tmp/file.sftp someuser@myserver'
show 'FTP command is ' $command
call system using $command #status

##file.sftp##
# Set local directory on PeopleSoft server
lcd /var/tmp
# Set remote directory on the remote server
cd ar/in
# Transfer all remote files to PeopleSoft
get file.dat
get file2.dat
# quit the session
bye | There are 3 avenues that I can conceive of that might provide you with a solution.

1. Custom sftp subsystem

You could wrap the sftp-server daemon via sshd_config and "override" it with your own script that could then intercept what sftp-server is doing, and then act when you see that a file was downloaded. Overriding the default sftp-server in sshd_config is easy:

Subsystem sftp /usr/local/bin/sftp-server

Figuring out what to do in the wrapper script would be the hard part. In /usr/local/bin/sftp-server :

#!/bin/sh
# ...do something...
chroot /my/secret/stuff /usr/libexec/openssh/sftp-server
# ...do something...

2. Watch the logs

If you turn up the debugging of sftp-server you can get it to show logs of when files are being opened/closed and read/written to/from the SFTP server. You could write a daemon/script that watches these logs and then backs the file up when needed. Further details on how to achieve this are already partially covered in my answer to this U&L Q&A titled: Activity Logging Level in SFTP as well as here in this blog post titled: SFTP file transfer session activity logging . The SFTP logs can be enhanced so they look like this:

Sep 16 16:07:19 localhost sftpd-wrapper[4471]: user sftp1 session start from 172.16.221.1
Sep 16 16:07:19 localhost sftp-server[4472]: session opened for local user sftp1 from [172.16.221.1]
Sep 16 16:07:40 localhost sftp-server[4472]: opendir "/home/sftp1"
Sep 16 16:07:40 localhost sftp-server[4472]: closedir "/home/sftp1"
Sep 16 16:07:46 localhost sftp-server[4472]: open "/home/sftp1/transactions.xml" flags WRITE,CREATE,TRUNCATE mode 0644
Sep 16 16:07:51 localhost sftp-server[4472]: close "/home/sftp1/transactions.xml" bytes read 0 written 192062308
Sep 16 16:07:54 localhost sftp-server[4472]: session closed for local user sftp1 from [172.16.221.1]

You would then need to develop a daemon/script that would monitor the logs for the open/close event pairs. These represent a completed file transfer. You could also make use of syslog, which could monitor for the "close" log events, and it could be used to perform the copying of the transferred files.

3. Incron

You could make use of the inotify events that the Linux kernel produces every time a file is accessed. There is a service called incron which works similarly to cron. Where cron works based on time, incron works based on file events. So you could set up an incron entry that would monitor your SFTP upload directories and, any time a specific file event is detected, copy the file. Have a look at the inotify man page for a description of the various events. I believe you'd want to watch for a read() ( IN_ACCESS ) followed by a close() ( IN_CLOSE_WRITE ). These would be for files that were copied from the SFTP server.
Incron rules look like this:

<directory> <file change mask> <command or action> options
/var/www/html IN_CREATE /root/scripts/backup.sh
/sales IN_DELETE /root/scripts/sync.sh
/var/named/chroot/var/master IN_CREATE,IN_ATTRIB,IN_MODIFY /sbin/rndc reload

This article titled: Linux incrond inotify: Monitor Directories For Changes And Take Action shows much more of the details needed, if you want to try and go with this option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93588/"
]
} |
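A hypothetical incrontab entry for the download scenario above; $@ expands to the watched directory and $# to the file name, and IN_CLOSE_NOWRITE fires when a file opened read-only (i.e. downloaded) is closed. Paths are examples:

```sh
# incrontab -e
/var/tmp/outgoing IN_CLOSE_NOWRITE mv $@/$# /var/tmp/archive/
```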
171,143 | I was reading some sites and came across (LTR) next to the kernel version number. Can somebody explain what it means? | LTR stands for Long Term Release. This is also known as an LTSR, short for Long Term Support Release. These releases are supported for a longer time and are meant to be used in production environments, where stability is preferred over new features. For the kernel you are reading about, the LTR cycle is about 3 years. This means that if you are a user who needs stability and you download an LTR kernel, it will be supported upstream for the next 3 years. The definitive source for Linux kernels is the Linux Kernel Archives . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93371/"
]
} |
171,150 | Is there a way to pipe back-to-back values into a command? e.g. if I call command.pl and want to pipe in response , then y to confirm (because it doesn't have a way to auto-confirm)? I've tried echo "y" | echo "response" | ./command.pl but it doesn't work because the ordering of the pipe is from left to right. In essence, I need to run the one pipe first, then pipe the next command after that. Is there a trick with parentheses or quotes or something else? | Have you tried grouping your two answers in a single echo separated by a newline?

echo -e "response\ny" | ./command.pl

Note the -e flag is necessary with bash to enable interpretation of backslash escapes (unless bash is in Unix conformance mode). Or more portably:

echo 'response
y' | ./command.pl

Or:

printf 'response\ny\n' | ./command.pl

Or:

printf '%s\n' response y | ./command.pl

EDIT: I forgot to mention, but the problem with your initial command was that echo doesn't take any input via its STDIN. The output of the command echo "y" never reached ./command.pl . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171150",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36145/"
]
} |
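A here-document is another common way to feed several lines of canned input; a small sketch:

```sh
./command.pl <<'EOF'
response
y
EOF
```

The quoted 'EOF' delimiter keeps the shell from expanding anything inside the body.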
171,176 | A friend gave me an old Dell to fix for her daughter. So I put Lubuntu on it. I had a disk with Lubuntu on it that came with one of my magazines, so I installed it. This machine won't boot from USB and has no DVD drive. So I installed. It seemed to install automatically without any guidance. After about an hour or so a login box appeared. I can only assume it's installed. What is the default username and password? I didn't have to set one, yet it's asking me for one. I've looked online and couldn't find anything. | I believe these are the defaults: username: lubuntu password: blank (no password) That's literally nothing for the password. Linux Format magazine: if you're using the compilation CD/DVD that comes with this magazine, the username should be "ubuntu", again with a blank password. Ubuntu 14.04 compilation disc References What is the default user/password? How to disable autologin in Lubuntu? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93602/"
]
} |
171,262 | I get shared library errors whenever I seem to install software manually. Upon executing echo $LD_LIBRARY_PATH it shows up as... nothing. I've tried adding /usr/local/lib to a .conf file in /etc/ld.so.conf.d but it seems like it never executes. This doesn't work, either (quotes or otherwise):

LD_LIBRARY_PATH="/usr/local/lib"
export LD_LIBRARY_PATH
sudo ldconfig -v

The value will be set temporarily, but it will not retain the value if I so much as exit a terminal window. Rebooting does nothing, either. | Add the following to your .bashrc :

vim ~/.bashrc
...
export LD_LIBRARY_PATH=/usr/local/lib

This will let you restart your computer and still have that path assigned. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62368/"
]
} |
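If other software also relies on LD_LIBRARY_PATH, appending is safer than overwriting; a hedged variant for .bashrc:

```sh
# Prepend /usr/local/lib while keeping any existing value; the ${VAR:+...}
# expansion avoids a stray leading colon when the variable was unset.
export LD_LIBRARY_PATH="/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Then run source ~/.bashrc (or open a new terminal) to pick it up without rebooting.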
171,300 |

#!/usr/bin/bash
kill9="9"
kill15="15"
if [ $1 == $kill9 ]; then
  set -- "$1" "$kill9"
else
  set -- "$1" "$kill15"
fi
echo $1

I want $1 to become 9 if I type -9, and 15 if I type -15. My script above is wrong. How can I do so? |

#!/usr/bin/bash
kill9="9"
kill15="15"
if [ "$1" = "-$kill9" ]; then
  set -- "$kill9"
else
  set -- "$kill15"
fi
printf '%s\n' "$1" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
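An equivalent sketch with case, which also rejects anything that isn't one of the two flags:

```sh
#!/bin/sh
case $1 in
  -9)  set -- 9 ;;
  -15) set -- 15 ;;
  *)   echo "usage: $0 -9|-15" >&2; exit 1 ;;
esac
printf '%s\n' "$1"
```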
171,302 | Am trying to create a DNS server on Solaris 10 in VirtualBox. Steps which I did:

Step I

vi /etc/named.conf
options {
    directory "/var/named";
};
zone "." {
    type hint;
    file "db.cache';
};
###Reverse Zones###
zone "0.0.127.in-addr.arpa" {
    type master;
    file "db.127.0.0';
};
zone "16.168.192.in-addr.arpa" {
    type master;
    file "db.192.168.16';
};
###Forward Zone###
zone "data.serv" {
    type master;
    file "db.data.serv";
};

Step II

cd /var/named
mv named.root db.cache   # after downloading named.root from the Internet

Step III

vi db.127.0.0
@ IN SOA ns1.data.serv. postmaster.data.serv. (
    2014092502 ; Serial Number
    7200       ; Refresh Interval
    3600       ; Retry Interval
    86400      ; Expiry
    600 )      ; Minimum TTL
#NS|A|CNAME|PTR|MX
    NS ns1.data.serv.
1   IN PTR localhost.

Step IV

vi db.192.168.16
@ IN SOA ns1.data.serv. postmaster.data.serv. (
    2014092502 ; Serial Number
    7200       ; Refresh Interval
    3600       ; Retry Interval
    86400      ; Expiry
    600 )      ; Minimum TTL
    NS ns1.data.serv.
128 IN PTR ns1.data.serv.

Step V

vi db.data.serv
@ IN SOA ns1.data.serv. postmaster.data.serv. (
    2014092502 ; Serial Number
    7200       ; Refresh Interval
    3600       ; Retry Interval
    86400      ; Expiry
    600 )      ; Minimum TTL
    NS ns1.data.serv.
ns1 IN PTR 192.168.16.128

svcadm restart dns/server

bash-3.2# dig @localhost ns1.data.serv
; <<>> DiG 9.6-ESV-R8 <<>> @localhost ns1.data.serv
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Is there anything else which needs to be done? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/965/"
]
} |
171,314 | I have a set of log files that I need to review, and I would like to search for specific strings in the same files at once. Is this possible? Currently I am using grep -E 'fatal|error|critical|failure|warning|' /path_to_file How do I use this to search for the strings in multiple files at once? If this is something that needs to be scripted, can someone provide a simple script to do this? | grep -E 'fatal|error|critical|failure|warning' *.log (note the trailing | has been dropped from the pattern: an empty alternation matches every line, which would defeat the filter). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/171314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81926/"
]
} |
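A recursive variant sketch for a whole log directory (the path is an example):

```sh
# -R descends into subdirectories; -n prints line numbers; with more
# than one file, grep prefixes each match with the file name.
grep -REn 'fatal|error|critical|failure|warning' /var/log/myapp/
```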
171,341 | Maybe I'm overlooking something, but is there a way to get the bash history for just the current session you are using? For example, if I run:

ssh host
$ pwd
$ ls
$ cd /tmp

I just want to see those 3 commands and nothing else. | A slightly roundabout way: history -a ~/current_history This will save the current session's unsaved bash history to ~/current_history , which you can then view. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9400/"
]
} |
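For a quick look without writing a file, the tail of the in-memory history list works too (the count is arbitrary):

```sh
history 4   # the last 4 entries; once you've typed a few commands,
            # these are this session's, not ~/.bash_history's
```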
171,346 | If you've been following unix.stackexchange.com for a while, you should hopefully know by now that leaving a variable unquoted in list context (as in echo $var ) in Bourne/POSIX shells (zsh being the exception) has a very special meaning and shouldn't be done unless you have a very good reason to. It's discussed at length in a number of Q&A here (Examples: Why does my shell script choke on whitespace or other special characters? , When is double-quoting necessary? , Expansion of a shell variable and effect of glob and split on it , Quoted vs unquoted string expansion ) That has been the case since the initial release of the Bourne shell in the late 70s and hasn't been changed by the Korn shell (one of David Korn's biggest regrets (question #7) ) or bash (which mostly copied the Korn shell), and that's how that has been specified by POSIX/Unix. Now, we're still seeing a number of answers here and even occasionally publicly released shell code where variables are not quoted. You'd have thought people would have learnt by now. In my experience, there are mainly 3 types of people who omit to quote their variables: beginners. Those can be excused as admittedly it's a completely unintuitive syntax. And it's our role on this site to educate them. forgetful people. people who are not convinced even after repeated hammering, who think that surely the Bourne shell author did not intend us to quote all our variables . Maybe we can convince them if we expose the risk associated with this kind of behaviour. What's the worst thing that can possibly happen if you forget to quote your variables? Is it really that bad? What kind of vulnerability are we talking of here? In what contexts can it be a problem? | Preamble

First, I'd say it's not the right way to address the problem. It's a bit like saying " you should not murder people because otherwise you'll go to jail ". Similarly, you don't quote your variable because otherwise you're introducing security vulnerabilities. You quote your variables because it is wrong not to (but if the fear of the jail can help, why not). A little summary for those who've just jumped on the train. In most shells, leaving a variable expansion unquoted (though that (and the rest of this answer) also applies to command substitution ( `...` or $(...) ) and arithmetic expansion ( $((...)) or $[...] )) has a very special meaning. The best way to describe it is that it is like invoking some sort of implicit split+glob operator¹.

cmd $var

in another language would be written something like:

cmd(glob(split($var)))

$var is first split into a list of words according to complex rules involving the $IFS special parameter (the split part) and then each word resulting of that splitting is considered as a pattern which is expanded to a list of files that match it (the glob part). As an example, if $var contains *.txt,/var/*.xml and $IFS contains , , cmd would be called with a number of arguments, the first one being cmd and the next ones being the txt files in the current directory and the xml files in /var . If you wanted to call cmd with just the two literal arguments cmd and *.txt,/var/*.xml , you'd write:

cmd "$var"

which would be, in your other more familiar language:

cmd($var)

What do we mean by vulnerability in a shell ?

After all, it's been known since the dawn of time that shell scripts should not be used in security-sensitive contexts. Surely, OK, leaving a variable unquoted is a bug, but that can't do that much harm, can it?
Well, despite the fact that anybody would tell you that shell scripts should never be used for web CGIs, or that thankfully most systems don't allow setuid/setgid shell scripts nowadays, one thing that shellshock (the remotely exploitable bash bug that made the headlines in September 2014) revealed is that shells are still extensively used where they probably shouldn't: in CGIs, in DHCP client hook scripts, in sudoers commands, invoked by (if not as ) setuid commands... Sometimes unknowingly. For instance system('cmd $PATH_INFO') in a php / perl / python CGI script does invoke a shell to interpret that command line (not to mention the fact that cmd itself may be a shell script and its author may have never expected it to be called from a CGI). You've got a vulnerability when there's a path for privilege escalation, that is when someone (let's call him the attacker ) is able to do something he is not meant to. Invariably that means the attacker providing data, that data being processed by a privileged user/process which inadvertently does something it shouldn't be doing, in most of the cases because of a bug. Basically, you've got a problem when your buggy code processes data under the control of the attacker . Now, it's not always obvious where that data may come from, and it's often hard to tell if your code will ever get to process untrusted data. As far as variables are concerned, in the case of a CGI script it's quite obvious: the data are the CGI GET/POST parameters and things like cookies, path, host... parameters. For a setuid script (running as one user when invoked by another), it's the arguments or environment variables. Another very common vector is file names. If you're getting a file list from a directory, it's possible that files have been planted there by the attacker . In that regard, even at the prompt of an interactive shell, you could be vulnerable (when processing files in /tmp or ~/tmp for instance). Even a ~/.bashrc can be vulnerable (for instance, bash will interpret it when invoked over ssh to run a ForcedCommand like in git server deployments with some variables under the control of the client). Now, a script may not be called directly to process untrusted data, but it may be called by another command that does. Or your incorrect code may be copy-pasted into scripts that do (by you 3 years down the line or one of your colleagues). One place where it's particularly critical is in answers on Q&A sites, as you'll never know where copies of your code may end up.

Down to business; how bad is it?

Leaving a variable (or command substitution) unquoted is by far the number one source of security vulnerabilities associated with shell code. Partly because those bugs often translate to vulnerabilities but also because it's so common to see unquoted variables. Actually, when looking for vulnerabilities in shell code, the first thing to do is look for unquoted variables. It's easy to spot, often a good candidate, generally easy to track back to attacker-controlled data. There's an infinite number of ways an unquoted variable can turn into a vulnerability. I'll just give a few common trends here.

Information disclosure

Most people will bump into bugs associated with unquoted variables because of the split part (for instance, it's common for files to have spaces in their names nowadays and space is in the default value of IFS). Many people will overlook the glob part. The glob part is at least as dangerous as the split part.
Globbing done upon unsanitised external input means the attacker can make you read the content of any directory. In:

echo You entered: $unsanitised_external_input

if $unsanitised_external_input contains /* , that means the attacker can see the content of / . No big deal. It becomes more interesting though with /home/* which gives you a list of user names on the machine, /tmp/* , /home/*/.forward for hints at other dangerous practises, /etc/rc*/* for enabled services... No need to name them individually. A value of /* /*/* /*/*/*... will just list the whole file system.

Denial of service vulnerabilities

Taking the previous case a bit too far and we've got a DoS. Actually, any unquoted variable in list context with unsanitized input is at least a DoS vulnerability. Even expert shell scripters commonly forget to quote things like:

#! /bin/sh -
: ${QUERYSTRING=$1}

: is the no-op command. What could possibly go wrong? That's meant to assign $1 to $QUERYSTRING if $QUERYSTRING was unset. That's a quick way to make a CGI script callable from the command line as well. That $QUERYSTRING is still expanded though, and because it's not quoted, the split+glob operator is invoked. Now, there are some globs that are particularly expensive to expand. The /*/*/*/* one is bad enough as it means listing directories up to 4 levels down. In addition to the disk and CPU activity, that means storing tens of thousands of file paths (40k here on a minimal server VM, 10k of which directories). Now /*/*/*/*/../../../../*/*/*/* means 40k x 10k and /*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/* is enough to bring even the mightiest machine to its knees. Try it for yourself (though be prepared for your machine to crash or hang):

a='/*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*' sh -c ': ${a=foo}'

Of course, if the code is:

echo $QUERYSTRING > /some/file

then you can fill up the disk. Just do a google search on shell cgi or bash cgi or ksh cgi , and you'll find a few pages that show you how to write CGIs in shells. Notice how half of those that process parameters are vulnerable. Even David Korn's own one is vulnerable (look at the cookie handling).

Up to arbitrary code execution vulnerabilities

Arbitrary code execution is the worst type of vulnerability, since if the attacker can run any command, there's no limit on what he may do. That's generally the split part that leads to those. That splitting results in several arguments being passed to commands when only one is expected. While the first of those will be used in the expected context, the others will be in a different context, so potentially interpreted differently. Better with an example:

awk -v foo=$external_input '$2 == foo'

Here, the intention was to assign the content of the $external_input shell variable to the foo awk variable. Now:

$ external_input='x BEGIN{system("uname")}'
$ awk -v foo=$external_input '$2 == foo'
Linux

The second word resulting from the splitting of $external_input is not assigned to foo but considered as awk code (here that executes an arbitrary command: uname ). That's especially a problem for commands that can execute other commands ( awk , env , sed (GNU one), perl , find ...), especially with the GNU variants (which accept options after arguments). Sometimes, you wouldn't suspect commands to be able to execute others, like ksh , bash or zsh 's [ or printf ...
for file in *; do
  [ -f $file ] || continue
  something-that-would-be-dangerous-if-$file-were-a-directory
done

If we create a directory called x -o yes , then the test becomes positive, because it's a completely different conditional expression we're evaluating. Worse, if we create a file called x -a a[0$(uname>&2)] -gt 1 , with all ksh implementations at least (which includes the sh of most commercial Unices and some BSDs), that executes uname because those shells perform arithmetic evaluation on the numerical comparison operators of the [ command.

$ touch x 'x -a a[0$(uname>&2)] -gt 1'
$ ksh -c 'for f in *; do [ -f $f ]; done'
Linux

Same with bash for a filename like x -a -v a[0$(uname>&2)] . Of course, if they can't get arbitrary execution, the attacker may settle for lesser damage (which may help to get arbitrary execution). Any command that can write files or change permissions or ownership, or that has any main or side effect, could be exploited. All sorts of things can be done with file names.

$ touch -- '-R ..'
$ for file in *; do [ -f "$file" ] && chmod +w $file; done

And you end up making .. writeable (recursively with GNU chmod ). Scripts doing automatic processing of files in publicly writable areas like /tmp are to be written very carefully.

What about [ $# -gt 1 ] ?

That's something I find exasperating. Some people go down all the trouble of wondering whether a particular expansion may be problematic to decide if they can omit the quotes. It's like saying: Hey, it looks like $# cannot be subject to the split+glob operator, let's ask the shell to split+glob it . Or: Hey, let's write incorrect code just because the bug is unlikely to be hit . Now how unlikely is it? OK, $# (or $! , $? or any arithmetic substitution) may only contain digits (or - for some²) so the glob part is out. For the split part to do something though, all we need is for $IFS to contain digits (or - ). With some shells, $IFS may be inherited from the environment, but if the environment is not safe, it's game over anyway. Now if you write a function like:

my_function() {
  [ $# -eq 2 ] || return
  ...
}

What that means is that the behaviour of your function depends on the context in which it is called. Or in other words, $IFS becomes one of the inputs to it. Strictly speaking, when you write the API documentation for your function, it should be something like:

# my_function
# inputs:
#   $1: source directory
#   $2: destination directory
#   $IFS: used to split $#, expected not to contain digits...

And code calling your function needs to make sure $IFS doesn't contain digits. All that because you didn't feel like typing those 2 double-quote characters. Now, for that [ $# -eq 2 ] bug to become a vulnerability, you'd need somehow for the value of $IFS to come under the control of the attacker . Conceivably, that would not normally happen unless the attacker managed to exploit another bug. That's not unheard of though. A common case is when people forget to sanitize data before using it in an arithmetic expression. We've already seen above that it can allow arbitrary code execution in some shells, but in all of them, it allows the attacker to give any variable an integer value. For instance:

n=$(($1 + 1))
if [ $# -gt 2 ]; then
  echo >&2 "Too many arguments"
  exit 1
fi

And with a $1 with value (IFS=-1234567890) , that arithmetic evaluation has the side effect of setting IFS , and the next [ command fails, which means the check for too many args is bypassed.
There's another case where quotes are needed around variables and other expansions: when the expansion is used as a pattern.

[[ $a = $b ]]              # a ksh construct also supported by bash
case $a in ($b) ...; esac

do not test whether $a and $b are the same (except with zsh ) but whether $a matches the pattern in $b . And you need to quote $b if you want to compare as strings (same thing in "${a#$b}" or "${a%$b}" or "${a##*$b*}" where $b should be quoted if it's not to be taken as a pattern). What that means is that [[ $a = $b ]] may return true in cases where $a is different from $b (for instance when $a is anything and $b is * ) or may return false when they are identical (for instance when both $a and $b are [a] ). Can that make for a security vulnerability? Yes, like any bug. Here, the attacker can alter your script's logical code flow and/or break the assumptions that your script is making. For instance, with code like:

if [[ $1 = $2 ]]; then
  echo >&2 '$1 and $2 cannot be the same or damage will incur'
  exit 1
fi

the attacker can bypass the check by passing '[a]' '[a]' . Now, if neither that pattern matching nor the split+glob operator apply, what's the danger of leaving a variable unquoted? I have to admit that I do write:

a=$b
case $a in
...

There, quoting doesn't harm but is not strictly necessary. However, one side effect of omitting quotes in those cases (for instance in Q&A answers) is that it can send a wrong message to beginners: that it may be all right not to quote variables . For instance, they may start thinking that if a=$b is OK, then export a=$b would be as well (which it's not in many shells, as it's in arguments to the export command so in list context) or env a=$b .

What about zsh ?

zsh did fix most of those design awkwardnesses. In zsh (at least when not in sh/ksh emulation mode), if you want splitting , or globbing , or pattern matching , you have to request it explicitly: $=var to split, and $~var to glob or for the content of the variable to be treated as a pattern. However, splitting (but not globbing) is still done implicitly upon unquoted command substitution (as in echo $(cmd) ). Also, a sometimes unwanted side effect of not quoting a variable is the empties removal . The zsh behaviour is similar to what you can achieve in other shells by disabling globbing altogether (with set -f ) and splitting (with IFS='' ). Still, in:

cmd $var

there will be no split+glob , but if $var is empty, instead of receiving one empty argument, cmd will receive no argument at all. That can cause bugs (like the obvious [ -n $var ] ). That can possibly break a script's expectations and assumptions and cause vulnerabilities. As the empty variable can cause an argument to be just removed , that means the next argument could be interpreted in the wrong context. As an example:

printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2

If $attacker_supplied1 is empty, then $attacker_supplied2 will be interpreted as an arithmetic expression (for %d ) instead of a string (for %s ), and any unsanitized data used in an arithmetic expression is a command injection vulnerability in Korn-like shells such as zsh .

$ attacker_supplied1='x y' attacker_supplied2='*'
$ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
[1] <x y>
[2] <*>

fine, but:

$ attacker_supplied1='' attacker_supplied2='psvar[$(uname>&2)0]'
$ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
Linux
[1] <2>
[0] <>

The uname arbitrary command was run.
Also note that while zsh doesn't do globbing upon substitutions by default, as globs in zsh are much more powerful than in other shells, that means they can do a lot more damage if ever you enabled the globsubst option at the same time as the extendedglob one, or without disabling bareglobqual , and left some variables unintentionally unquoted. For instance, even:

set -o globsubst
echo $attacker_controlled

would be an arbitrary command execution vulnerability, because commands can be executed as part of glob expansions, for instance with the e valuation glob qualifier:

$ set -o globsubst
$ attacker_controlled='.(e[uname])'
$ echo $attacker_controlled
Linux
.

emulate sh # or ksh
echo $attacker_controlled

doesn't cause an ACE vulnerability (though it's still a DoS one like in sh) because bareglobqual is disabled in sh/ksh emulation. There's no good reason to enable globsubst other than in those sh/ksh emulations when wanting to interpret sh/ksh code.

What about when you do need the split+glob operator?

Yes, that's typically when you do want to leave your variable unquoted. But then you need to make sure you tune your split and glob operators correctly before using it. If you only want the split part and not the glob part (which is the case most of the time), then you do need to disable globbing ( set -o noglob / set -f ) and fix $IFS . Otherwise you'll cause vulnerabilities as well (like David Korn's CGI example mentioned above).

Conclusion

In short, leaving a variable (or command substitution or arithmetic expansion) unquoted in shells can be very dangerous indeed, especially when done in the wrong contexts, and it's very hard to know which are those wrong contexts. That's one of the reasons why it is considered bad practice . Thanks for reading so far. If it goes over your head, don't worry. One can't expect everyone to understand all the implications of writing their code the way they write it. That's why we have good practice recommendations , so they can be followed without necessarily understanding why. (And in case that's not obvious yet, please avoid writing security-sensitive code in shells.) And please quote your variables in your answers on this site!

¹ In ksh93 and pdksh and derivatives, brace expansion is also performed unless globbing is disabled (in the case of ksh93 versions up to ksh93u+, even when the braceexpand option is disabled).

² In ksh93 and yash , arithmetic expansions can also include things like 1,2 , 1e+66 , inf , nan . There are even more in zsh , including # which is a glob operator with extendedglob , but zsh never does split+glob upon arithmetic expansion, even in sh emulation
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/171346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
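A minimal sketch of the "tune split+glob before using it" pattern from the conclusion, for the common split-only case:

```sh
# Split a colon-separated list safely: disable globbing, set IFS to the
# one delimiter, expand unquoted on purpose, then restore the defaults.
list='a:b *:c'
set -f             # no glob
IFS=:              # split on ':' only
set -- $list       # deliberate unquoted expansion: split happens here
set +f; unset IFS  # restore defaults
printf '<%s>\n' "$@"   # -> <a> <b *> <c>
```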
171,430 | I am currently using Esc + p to recall the previous command run in the terminal. Similarly, there is one like Esc + Backspace to delete only a certain character. I want to know more of such shortcut combos and some more information about such shortcut keys. | You can list all currently active keybinds in tcsh with the bindkey command:

% bindkey
Standard key bindings
"^@"  -> set-mark-command
"^A"  -> beginning-of-line
"^B"  -> backward-char
"^C"  -> tty-sigintr
... etc ...

In this output, ^[ is the escape character; this is Esc followed by your key (e.g. p ). Some terminal emulators may also send Alt as the escape character. M- is Meta ( Alt ), and ^ is Control . You can also use bindkey to set commands; see the manpage entry on bindkey for more information. A list of keybinds for xterm can be found here ; the manpage also has a section on it, but it's not very to-the-point... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
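A hedged example of defining your own binding and inspecting a single key (the chosen key and command are arbitrary):

```tcsh
bindkey '^[k' history-search-backward   # Esc-k searches history backward
bindkey '^R'                            # show what Ctrl-R is bound to
```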
171,440 | In my Bash script I have this line: mv -f $HOME_DIR/$dir $HOME_SAVE/BACKUP The problem is that when $dir is empty (no $dir directory) the mv command moves the directory $HOME_DIR . We want only to move the $dir directories. How can I avoid this case? I can use this: ls $HOME_DIR | wc -l # and verify if the number is -gt 1 But this is not an elegant solution. | You should use: mv -f -- "$HOME_DIR/${dir:?}" "$HOME_SAVE/BACKUP/" The expansion will fail, the script will quit, and an error message will be emitted to stderr if $dir 's value is empty. You can even specify the message printed, like: mv -f -- "$HOME_DIR/${dir:?\$dir is UNSET or NULL!}" "$HOME_SAVE/BACKUP/" Alternatively - and less ultimately - you can specify default and/or substitute values for $dir which will only be applied if it is :- unset or null or :+ set and not null, using a similar form. All of these examples - above and below - are representative of a few of several standard POSIX-specified parameter expansion forms. In the below example, when $dir is set and not null the first portion evaluates to your source directory, else to nothing at all, and the second portion evaluates to its value if any, but if none it evals instead to your target dir. mv is specified to fail when its first and second args name the same pathname, and so nothing is moved at all. mv -f -- "${dir:+$HOME_DIR/}${dir:-$HOME_SAVE/BACKUP/}" "$HOME_SAVE/BACKUP/" 2>/dev/null That is probably overkill though, as I suppose: mv -f -- "$HOME_DIR/$dir" ${dir:+"$HOME_SAVE/BACKUP/"} 2>/dev/null ...should do just as well - mv can't move anything nowhere . Note that in a POSIX-conforming shell the quotes even inside the {} curlies serve to protect the expansion, because in that case it is not $dir 's value expanded but instead ${dir:+word} word 's value expanded. Putting them within the braces serves to eval the expansion to nothing at all - not even a null string - when ${dir} is unset or null. That probably doesn't matter really - I'm fairly certain a null filename is invalid pretty much everywhere - but it's how I usually do it. This is not safe to do with ${dir:-"word"} however - in that case you would get either ${dir} 's unquoted expansion or word 's quoted expansion. You might also optionally invoke the -i nteractive option to mv only if $dir is null, like: mv "-i${dir:+f}" -- "$HOME_DIR/$dir" "$HOME_SAVE/BACKUP/" </dev/tty ...and so you would be sure at least not to accidentally overwrite any files in .../BACKUP without someone first pressing a y (or whatever) at the terminal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
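A quick demonstration of the ${dir:?} behaviour, run in a subshell so the failure doesn't close your interactive shell (paths and message format are examples):

```sh
( dir=''; HOME_DIR=/src
  mv -f -- "$HOME_DIR/${dir:?dir is empty}" /backup/ )
# -> sh: dir: dir is empty    (mv never runs; the subshell exits non-zero)
```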
171,456 | I have an attendance device running a Linux OS. I can connect to this device via a telnet session. There are some files on the device that I would like to download, and I would like to upload new files to replace them. How can I do that? I have very little knowledge of Linux. Would you please help! | This depends on which tools are installed on the client device / supported by the kernel. Possible methods for file transfer (unordered):

- ssh / sftp
- encoding binary files into a displayable format with base64/uuencode and then copying from/into your telnet terminal window
- over a simple tcp connection with netcat or socat, or with bash and /dev/tcp
- upload / download with wget or curl from a web server
- an ftp server with a command line ftp client
- a samba or nfs mount

Read Simple file transfer and How to get file to a host when all you have is a serial console? for more possibilities. Copy desktop.jpg from the device to your pc with the netcat/nc method: on your pc, disable temporarily (or reconfigure if possible) any firewall and run netcat -l -p 10000 > desktop.jpg and on the device busybox nc A.B.C.D -p 10000 < desktop.jpg where you need to replace A.B.C.D with the IP address of your pc. As soon as the transfer is successful, the netcat process on your pc should stop automatically. If not, something could have gone wrong and you can stop it with Ctrl + C (compare source and destination checksums to verify). For the other direction, just exchange < and > on both sides. Make a backup of the original desktop.jpg first ( cp desktop.jpg desktop_orig.jpg ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93773/"
]
} |
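The reverse direction (PC to device) spelled out, under the same assumptions as the answer (A.B.C.D is the PC's address):

```sh
# On the PC: serve the new file on port 10000
netcat -l -p 10000 < new.jpg
# On the device: receive it (only after backing up the original!)
busybox nc A.B.C.D -p 10000 > desktop.jpg
# Verify on both ends; busybox usually includes an md5sum applet:
md5sum desktop.jpg
```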
171,465 | I have a Debian Wheezy system with a couple of 500 GB HDDs in RAID-1 ( mdadm mirror), on top of which sit LVM logical volumes with 5 partitions ( boot , root , usr , var and tmp ), total size of 47.15 GiB. 418.38 GiB in the physical volume are free. GRUB is installed on both drives. One of the HDDs failed and now the array is degraded, but the data is intact. What I want is to swap these 2 HDDs for 80 GB SSDs without the need to reinstall the system from scratch. The subtle point here is that I need to shrink the LVM physical volume to match the SSD's size, but the logical volumes are not contiguous (there is a lot of free space in the beginning), so I have to somehow move logical volumes within the physical one. And there is no lvmove command in Debian. How do I achieve this? Some console output:

Versions:

root@wheezy:~# uname -a && mdadm --version && lvm version
Linux wheezy 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
mdadm - v3.2.5 - 18th May 2012
LVM version: 2.02.95(2) (2012-03-06)
Library version: 1.02.74 (2012-03-06)
Driver version: 4.22.0

Array details:

root@wheezy:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Dec 4 12:20:22 2014
     Raid Level : raid1
     Array Size : 488148544 (465.53 GiB 499.86 GB)
  Used Dev Size : 488148544 (465.53 GiB 499.86 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent
    Update Time : Thu Dec 4 13:08:59 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
           Name : wheezy:0 (local to host wheezy)
           UUID : 44ea4079:b3b837d3:b9bb2ca1:1b95272a
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed

LVM brief details:

root@wheezy:~# pvs && vgs && lvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md0   system lvm2 a--  465.53g 418.38g
  VG     #PV #LV #SN Attr   VSize   VFree
  system   1   5   0 wz--n- 465.53g 418.38g
  LV   VG     Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  boot system -wi----- 152.00m
  root system -wi-----   2.00g
  tmp  system -wi-----  10.00g
  usr  system -wi-----  20.00g
  var  system -wi-----  15.00g

Segmentation of the PV:

root@wheezy:~# pvs -v --segments /dev/md0
    Using physical volume(s) on command line
  PV         VG     Fmt  Attr PSize   PFree   Start  SSize LV   Start Type   PE Ranges
  /dev/md0   system lvm2 a--  465.53g 418.38g      0 89600      0     free
  /dev/md0   system lvm2 a--  465.53g 418.38g  89600    38 boot 0     linear /dev/md0:89600-89637
  /dev/md0   system lvm2 a--  465.53g 418.38g  89638   512 root 0     linear /dev/md0:89638-90149
  /dev/md0   system lvm2 a--  465.53g 418.38g  90150  5120 usr  0     linear /dev/md0:90150-95269
  /dev/md0   system lvm2 a--  465.53g 418.38g  95270  3840 var  0     linear /dev/md0:95270-99109
  /dev/md0   system lvm2 a--  465.53g 418.38g  99110  1280      0     free
  /dev/md0   system lvm2 a--  465.53g 418.38g 100390  2560 tmp  0     linear /dev/md0:100390-102949
  /dev/md0   system lvm2 a--  465.53g 418.38g 102950 16226      0     free | You don't need to shrink the pv or rebuild the array. You just need to create a new array out of the new drives and add that as a new pv ( pvcreate + vgextend ), then pvmove all of the existing lvs off the old pv, then remove the old pv ( vgreduce ) and take that drive out of service. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42158/"
]
} |
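A hedged sketch of the pvmove sequence from the answer above, assuming the new SSD mirror is assembled as /dev/md1 from hypothetical devices /dev/sdc and /dev/sdd, and reusing the volume group name system from the question: mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd then pvcreate /dev/md1 && vgextend system /dev/md1 && pvmove /dev/md0 && vgreduce system /dev/md0 && pvremove /dev/md0 ; pvmove with only a source argument relocates every allocated extent off /dev/md0, after which the old array can be stopped with mdadm --stop /dev/md0.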
171,489 | So, I was moving my laptop around (and I have the bad habit of setting things on the keyboard...) and I woke up to discover this: $Display all 2588 possibilities? (y or n) What command would display something like this? I'm using Bash. | Hitting the TAB key auto-completes the command or file/directory name you are typing, depending on what the shell expects at that point. Hitting the TAB key twice displays all the available completions. e.g. Command completion: I want to edit my crontab. Typing cront and hitting TAB completes the command: crontab . File/Directory completion: I want to back up my crontab. crontab -l >> Typing part of the destination, /ho then TAB , gives /home/ ; typing us next and hitting TAB gives /home/user/ . Now, when you hit TAB twice without typing anything, the shell is still expecting something, either a command or a file/directory, so it offers to display all the possibilities: every command available to you plus all the files/directories located in the current directory. The 2588 possibilities output means the total number of commands/files/directories available to complete. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147281/"
]
} |
171,503 | Summary : I'm trying to figure out why my tmux session dies when I disconnect from ssh Details : I have tmux installed on an Arch Linux system. When I start a tmux session I can detach from it and then attach again while the ssh session is active. But if I end my ssh session then the tmux session gets killed. I know this is not the normal behavior because I have another system where the tmux session continues running even if the ssh session is ended and I can attach to the tmux session after establishing a new ssh connection. The system that has a problem and the one that works correctly have very similar configurations so I'm not sure what to check. I'm running tmux version 1.9a. The system that has a problem (that I have root access for) has a Linux kernel version of 3.17.4-1 and the system that works correctly has kernel version 3.16.4-1-ARCH (I don't have root on that system). I doubt that the kernel version is the source of the problem though, that's just one difference I noticed. I thought I'd ask to see if anyone has seen a similar problem and knows of a possible solution. The precise steps that lead to the problem are: ssh to machine run tmux to start tmux ctrl-B D to detach (at this point I could reattach with tmux attach ) close ssh session (at this point the tmux session is killed, I've been able to observe this when I'm logged in as root in a different terminal) reconnect with ssh and run tmux attach and I get the message no sessions and running tmux ls returns failed to connect to server: Connection refused . This makes sense because the server is not running. What doesn't make sense to me is why it gets killed in step 4 when I disconnect from the ssh session. strace data: In response to one of the comments I used strace to see what system calls the tmux server process makes. It looks like when I exit my ssh session (by typing exit or with ctrl-d ) the tmux process is being killed. Here's a snippet of the final part of the strace output. poll([{fd=4, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}], 3, 424) = ? ERESTART_RESTARTBLOCK (Interrupted by signal)--- SIGTERM {si_signo=SIGTERM, si_code=SI_USER, si_pid=1, si_uid=0} ---sendto(3, "\17", 1, 0, NULL, 0) = 1+++ killed by SIGKILL +++ I compared this with a different system where tmux works properly and on that system the tmux process continues running even after I exit. So the root cause appears to be that the tmux process is being terminated when I close the ssh session. I'll need to spend some time troubleshooting this to figure out why, but I thought I would update my question since the strace suggestion was useful. | Theory Some init systems including systemd provide a feature to kill all processes belonging to the service. The service typically starts a single process that creates more processes by forking, and those processes can do that as well. All such processes are typically considered part of the service. In systemd this is done using cgroups . In systemd, all processes belonging to a service are killed when the service is stopped by default. The SSH server is obviously part of the service. When you connect to the server, the SSH server typically forks and the new process handles your SSH session. By forking from the SSH session process or its children, other server side processes are started, including your screen or tmux . KillMode and socket activation The default behavior can be changed using the KillMode directive. The upstream project doesn't AFAIK include any .service files and so those vary by distribution. There are typically two ways to enable SSH on your system. One is the classic ssh.service that maintains a long running SSH daemon listening on the network. The other is via socket activation handled by the ssh.socket that in turn starts [email protected] which only runs for a single SSH session. Solutions If your processes get killed at the end of the session, it is possible that you are using socket activation and it gets killed by systemd when it notices that the SSH session process exited. In that case there are two solutions. One is to avoid using socket activation by using ssh.service instead of ssh.socket . The other is to set KillMode=process in the Service section of [email protected] . The KillMode=process setting may also be useful with the classic ssh.service , as it avoids killing the SSH session process or the screen or tmux processes when the server gets stopped or restarted. Future notes This answer apparently gained a level of popularity. While it worked for the OP it might happen that it doesn't work for someone in the future due to systemd-logind development or configuration. Please check documentation on logind sessions if you experience behavior different from the description in this answer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18797/"
]
} |
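A hedged sketch of the KillMode=process fix described above, assuming a systemd version that has systemctl edit and the Debian/Ubuntu unit name ssh@.service (elsewhere it may be sshd@.service): run sudo systemctl edit ssh@.service , add the two lines [Service] and KillMode=process to the drop-in, save, then run sudo systemctl daemon-reload ; detached tmux or screen sessions should then survive the end of the SSH session.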
171,515 | I'm really not into mail configuration, but somehow I managed to configure exim4 and it does send e-mails. I spent so many hours on it that I can't even count them. (And I don't know why it is so complicated, while the needs are usually very similar: to have e-mail addresses which should be used to actually send e-mails - and that's almost everything, not taking into account security, which is mostly related to authentication). When I'm sending e-mail, the FROM field is automatically set to 'root' (the Linux user). I'd like to have a custom field (e.g. "Contact me"), and I couldn't find any answer on the internet on how to do this. Second, some say that tying Linux users to email addresses is not a good thing e.g.: http://t-machine.org/index.php/2014/06/27/webmail-on-your-debian-server-exim4-dovecot-roundcube/ But the tutorials I found use them. I'm not using a database as in the URL above, but I would still prefer to have no Linux user related to email - is it something difficult to achieve? How could I do this? Here's /etc/exim4/update-exim4.conf.conf content: # /etc/exim4/update-exim4.conf.conf## Edit this file and /etc/mailname by hand and execute update-exim4.conf# yourself or use 'dpkg-reconfigure exim4-config'## Please note that this is _not_ a dpkg-conffile and that automatic changes# to this file might happen. The code handling this will honor your local# changes, so this is usually fine, but will break local schemes that mess# around with multiple versions of the file.## update-exim4.conf uses this file to determine variable values to generate# exim configuration macros for the configuration file.## Most settings found in here do have corresponding questions in the# Debconf configuration, but not all of them.## This is a Debian specific filedc_eximconfig_configtype='internet'dc_other_hostnames='url.com; mail.url.com; url; localhost; localhost.localdomain'dc_local_interfaces='127.0.0.1; my_ip'dc_readhost=''dc_relay_domains=''dc_minimaldns='false'dc_relay_nets=''dc_smarthost=''CFILEMODE='644'dc_use_split_config='true'dc_hide_mailname=''dc_mailname_in_oh='true'dc_localdelivery='maildir_home' Just in case, I substituted my domain with url and the ip with my_ip. And /etc/email-addresses is very short: root: [email protected] | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93815/"
]
} |
171,519 | What exactly is happening here? root@bob-p7-1298c:/# ls -l /tmp/report.csv && lsof | grep "report.csv"-rw-r--r-- 1 mysql mysql 1430 Dec 4 12:34 /tmp/report.csvlsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. | FUSE and its access rights lsof by default checks all mounted file systems including FUSE - file systems implemented in user space which have special access rights in Linux. As you can see in this answer on Ask Ubuntu a mounted GVFS file system (a special case of FUSE) is normally accessible only to the user who mounted it (the owner of gvfsd-fuse ). Even root cannot access it. To override this restriction it is possible to use the mount options allow_root and allow_other . The option must also be enabled in the FUSE daemon, which is described for example in this answer ...but in your case you do not need to (and should not) change the access rights. Excluding file systems from lsof In your case lsof does not need to check the GVFS file systems, so you can exclude the stat() calls on them using the -e option (or you can just ignore the warning): lsof -e /run/user/1000/gvfs Checking certain files by lsof You are using lsof to get information about all processes running on your system and only then filtering the complete output using grep . If you want to check just certain files and the related processes, use the -f option without a value directly following it, then specify a list of files after the "end of options" separator -- . This will be considerably faster. lsof -e /run/user/1000/gvfs -f -- /tmp/report.csv General solution To exclude all mounted file systems on which stat() fails you can run something like this (in bash ): x=(); for a in $(mount | cut -d' ' -f3); do test -e "$a" || x+=("-e$a"); donelsof "${x[@]}" -f -- /tmp/report.csv Or to be sure to use stat() ( test -e could be implemented a different way): x=(); for a in $(mount | cut -d' ' -f3); do stat --printf= "$a" 2>/dev/null || x+=("-e$a"); done | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61110/"
]
} |
171,603 | I have a temp file with some lower-case and upper-case contents. Input Contents of my temp file: hiJigarGANDHIjiga I want to convert all upper to lower . Command I tried the following command: sed -e "s/[A-Z]/[a-z]/g" temp but got the wrong output. Output I want it as: hijigargandhijiga What needs to be in the substitute part of the argument to sed ? | If your input only contains ASCII characters, you could use tr like: tr A-Z a-z < input or (less easy to remember and type IMO; but not limited to ASCII latin letters, though in some implementations including GNU tr , still limited to single-byte characters, so in UTF-8 locales, still limited to ASCII letters): tr '[:upper:]' '[:lower:]' < input if you have to use sed : sed 's/.*/\L&/g' < input (here assuming the GNU implementation). With POSIX sed , you'd need to specify all the transliterations and then you can choose which letters you want to convert: sed 'y/AǼBCΓDEFGH.../aǽbcγdefgh.../' < input With awk : awk '{print tolower($0)}' < input | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/171603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
171,609 | I have to check if the fifth field is empty in a CSV file. This is my file: 1,abc,543,87,DATA,fsg; 1,abc,543,87,,fsg; 1,abc,543,87,DATA,fsg; 1,abc,543,88,,fsg; 1,abc,543,,DATA,fsg; As you can see, the second and fourth lines have an empty fifth field. I want to print all such lines. The result should be this: 1,abc,543,87,,fsg;1,abc,543,88,,fsg; | Another awk : $ awk -F, '!length($5)' file1,abc,543,87,,fsg; 1,abc,543,88,,fsg; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59734/"
]
} |
171,677 | I'm attempting to write a script that will be run in a given directory with many single-level subdirectories. The script will cd into each of the subdirectories, execute a command on the files in the directory, and cd out to continue onto the next directory. What is the best way to do this? | for d in ./*/ ; do (cd "$d" && somecommand); done | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/171677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93933/"
]
} |
171,715 | I have an OpenPGP smart card key (YubiKey NEO) as well as a local secret key installed in my GnuPG keyring. I'd like to encrypt and sign a file with my card's key, not the key in my keyring. How can I specify which key I'd like to sign with? If my filesystem secret key id is DEADBEEF and my smartcard key is DEADBEE5 , how do I sign with that key? | You should specify --default-key : gpg -s --default-key DEADBEE5 input > output and check afterwards with gpg -d < output | head -1 From the gpg man page ( --sign section): The key to be used for signing is chosen by default or can be set with the --local-user and --default-key options. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
171,763 | For example, we have N files (file1, file2, file3 ...) We need the first 20% of each of them; the result directory should look like (file1_20, file2_20, file3_20 ...). I was thinking of using wc to get the number of lines in the file, then multiplying by 0.2, then using head to get that 20% and redirecting it to a new file, but I don't know how to automate it. | So creating a single example to work from: root@crunchbang-ibm3:~# echo {0..100} > file1 root@crunchbang-ibm3:~# cat file1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 We can grab the size of the file in bytes with stat : root@crunchbang-ibm3:~# stat --printf %s "file1"294 And then using bc we can multiply the size by .2 root@crunchbang-ibm3:~# echo "294*.2" | bc58.8 However we get a float, so let's convert it to an integer for head ( dd might work here too ): root@crunchbang-ibm3:~# printf %.0f "58.8" 59 And finally the first twenty percent (give or take a byte) of file1: root@crunchbang-ibm3:~# head -c "59" "file1" 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 Putting it together we could then do something like this mkdir -p a_new_directoryfor f in file*; do file_size=$(stat --printf %s "$f") percent_size_as_float=$(echo "$file_size*.2" | bc) float_to_int=$(printf %.0f "$percent_size_as_float") grab_twenty=$(head -c "$float_to_int" "$f") new_fn=$(printf "%s_20" "$f") # new name file1_20 printf "$grab_twenty" > a_new_directory/$new_fndone where f is a placeholder for any item found in the directory in which the for loop is run that matches file* , which when done: root@crunchbang-ibm3:~# cat a_new_directory/file1_200 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 update (to grab 20% of lines): To grab the first approximate 20% of lines we could replace stat --printf %s "$f" with: wc -l < "$f" Since we are using printf and bc we can effectively round up from .5 ; however, if a file is only 1 or 2 lines long it will miss them. So we would want to not only round up, but default to at least grabbing 1 line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36211/"
]
} |
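A hedged sketch of the line-based variant outlined in the update above; the directory name a_new_directory is carried over from the answer, while the variable names and the minimum-one-line guard are my assumptions about what "default to at least grabbing 1 line" would look like: mkdir -p a_new_directory; for f in file*; do lines=$(wc -l < "$f"); n=$(printf %.0f "$(echo "$lines*.2" | bc)"); [ "$n" -lt 1 ] && n=1; head -n "$n" "$f" > "a_new_directory/${f}_20"; done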
171,788 | It is easy to convert a symlink into a hardlink with ln -f ( example ) It would also be easy to convert a hardlink (filenames link and original ) back to a symbolic link to link->original in the case where you know both files and define yourself which one is the "original file". You could easily create a simple script convert-known-hardlink-to-symlink that would result in something like: convert-known-hardlink-to-symlink link original$ ls -li3802465 lrwxrwxrwx 1 14 Dec 6 09:52 link -> original3802269 -rw-rw-r-- 1 0 Dec 6 09:52 original But it would be really useful if you had a script where you could define a working directory (default ./ ) and a search directory (default / ) in which to search for files with the same inode, and then convert all those hard-links to symbolic-links. The result would be that in the defined working directory all files that are hard-links are replaced with symbolic-links to the first found file with the same inode instead. A start would be find . -type f -links +1 -printf "%i: %p (%n)\n" | I created a script that will do this. The script converts all hard-links it finds in a source directory (first argument) that are the same as in the working directory (optional second argument) into symbolic links: https://gist.github.com/rubo77/7a9a83695a28412abbcd It has an option -n for a dry-run, which doesn't change anything but shows what would be done. Main part: WORKING_DIR=./ # relative source directory from working directory: SOURCE_DIR=../otherdir/with/hard-links/with-the-same-inodes # find all files in WORKING_DIR cd "$WORKING_DIR" find "." -type f -links +1 -printf "%i %p\n" | \ while read working_inode working_on do find "$SOURCE_DIR" -type f -links +1 -printf "%i %p\n" | sort -nk1 | \ while read inode file do if [[ $inode == $working_inode ]]; then ln -vsf "$file" "$working_on" fi done done The -links +1 test finds all files that have MORE than 1 link; hardlinked files have a link count of at least two. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
171,832 | I have a file in UTF-8 that contains texts in multiple languages. A lot of them are people's names. I need to convert it to ASCII and I need the result to look as decent as possible. There are many ways to approach converting from a wider encoding to a narrower one. The simplest transformation would be to replace all non-ASCII characters with some placeholder, like '_'. If I know the language the file is written in, there are additional possibilities, like romanization. What Unix tool or programming language library available on Unix can give me a decent (best-effort) conversion from UTF-8 to ASCII? Most of the text is in European, Latin-based languages. | This will work for some things: iconv -f utf-8 -t ascii//TRANSLIT echo ĥéĺłœ π | iconv -f utf-8 -t ascii//TRANSLIT returns helloe ? . Any characters that iconv doesn’t know how to convert will be replaced with question marks. iconv is POSIX, but I don’t know if all systems have the TRANSLIT option. It works for me on Linux. Also, the IGNORE option will silently discard characters that cannot be represented in the target character set (see man iconv_open ). An inferior but POSIX-compliant option is to use tr . This command replaces all non-ASCII code points with a question mark. It reads UTF-8 text one byte at a time. “É” might be replaced with E? or ? , depending on whether it was encoded using a combining accent or a precomposed character. echo café äëïöü | tr -d '\200-\277' | tr '\300-\377' '[?*]' That example returns caf? ????? , using precomposed characters. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23890/"
]
} |
171,842 | I'm curious about df -H and df -h , and man df tells me: -h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G) -H, --si likewise, but use powers of 1000 not 1024 So what is the rationale behind the use of powers of 1000? Maybe a side question (or even related): root@host:~# dfFilesystem 1K-blocks Used Available Use% Mounted on Is a 1K block 1024 or 1000 bytes? | I speculate that this is due to storage manufacturers using the SI decimal prefixes near-universally. Further on in the manpage (assuming GNU df ): SIZE is an integer and optional unit (example: 10M is 10*1024*1024). Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, ... (powers of 1000). So 1K is 1024. In another GNU tool, dd , this bug discussion provides some insight: I recall considering this when I added this kind of diagnostic to GNU dd back in 2004, and going with powers-of-1000 abbreviations because secondary storage devices are normally measured that way. For this reason, I expect many users will prefer powers-of-1000 here. This is particularly true for transfer rates: it's rare to see "GiB/s" in real-world prose. The commit which added this feature to df in 1997 only says what, and not why. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11941/"
]
} |
171,938 | How can I display the welcome message "welcome panni" every time I log into unix? | Normally, a welcome message can be shown by customizing the /etc/motd file (which stands for Message Of The Day). /etc/motd is not a script but a text file whose contents are shown before the first prompt of a login session. You can also add some messages in the /etc/profile or /etc/bashrc scripts using the echo or print commands (note that /etc/bashrc assumes you are using the bash shell). Here are examples of commands that can be added to the /etc/profile file to obtain a result like the one you expected: echo "Welcome ${USER}" or echo "Welcome $(whoami)" OBS1: If the system is correctly configured, the results of the above should be the same, but the ways they work are different: the first one expands the $USER environment variable while the second executes the command whoami . OBS2: Note that /etc/profile is run once per session and only for login shells. This means that the message will be shown when the user logs in on the console or connects via rsh / ssh to the machine, but not when he/she simply opens a terminal in an X session, for example. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/171938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94094/"
]
} |
171,946 | When I do this command: wget --report-speed=type the only type it accepts is bits . It won't take numbers, kilobits/kilobytes or bytes. The help page ( wget --help ) says: --report-speed=TYPE Output bandwidth as TYPE. TYPE can be bits. suggesting that the TYPE can be something else? What options does it take that I haven't found, and (if this option doesn't do this) how can I force the speed to be displayed as bytes or kilobytes? | This is a rather new addition to wget (1.13.4 doesn't have it) and there is no other value for that option; the manual is quite clear: ‘--report-speed=type’ Output bandwidth as type. The only accepted value is ‘bits’. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/171946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/69049/"
]
} |
171,964 | Is there a way to tweak Alt + Tab in Gnome 3 so it will iterate windows only in the current workspace? I'm using Debian wheezy. | This command also does the trick: gsettings set org.gnome.shell.app-switcher current-workspace-only true | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83261/"
]
} |
171,968 | I'm trying to run this simple sed command sed '/VALUES\ \([0-9]/d!' It still gives me an error saying "RE error: parentheses not balanced" (yes, the quotes are there). What can I do? I mean, I could just add another ) but it wouldn't match my regex. | The escaped ( has special meaning in sed : it is used for grouping and back-references . To match a literal ( , simply use it without the backslash: /VALUES ([0-9]/d! If you're attempting to match \( , then escape the \ instead: \\( Escaping the (space) makes no difference. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/171968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
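A hedged follow-up to the answer above: even with the parenthesis fixed, the trailing ! in the question's command is not valid sed syntax (negation is written before the command, as in /addr/!d ), so a corrected command that deletes the matching lines would presumably be sed '/VALUES ([0-9]/d' file , or sed '/VALUES ([0-9]/!d' file to keep only the matching lines.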
172,000 | I have input coming from a pipe which contains special escape characters. For illustration, the stream looks like this: printf "foo.bar\033[00m" > filecat file How can I remove the trailing .bar\033[00m ? I have tried the following, but that does not work: cat file | sed 's/\.bar\033[00m//' | If your file contains control characters such as printf "foo.bar\033[00m" > file then to remove the specific, single occurrence of .bar\033[00m write the following: sed $'s/\.bar\033\[00m//' To remove all kinds of escape sequences in the entire file: sed $'s/\033\[[0-9;]*m//g' The dollar-before-single-quoted-string ( $'some text' ) instructs the shell to apply ANSI C quoting to the string's content, like printf does. This is required to produce the "escape" ASCII character ( 0x1B / 033 /...). The character can also be produced via keyboard shortcuts (no $' necessary): sed 's/\.bar Ctrl-v ESC \[00m//' After hitting Ctrl-v ESC you should see ^[ on the screen, but not as literal ^ and [ (two characters); it is one single control character. Original answer If you want in the output just foo then printf '%s' 'foo.bar\033[00m' | sed 's/\.bar\\033\[00m//' Notice that both \ and [ have to be escaped by another \ . Additionally I've added '%s' to printf to print the input characters as a literal string, otherwise \033[ could be interpreted as an escape code followed by an ANSI colour. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
172,044 | Is there a way to have terminal applications (gnome-terminal, terminator, etc) automatically move selected text to the CLIPBOARD system buffer, and not just the PRIMARY (i.e. terminal-local) buffer? I found this which describes the primitives, but not to the extent that I could make the behavior change: primary / clipboard intro ... I've got a specific hosted VM use case where I'm very frequently copying text from a bash or vim session in the Linux guest back to the Windows host... and after 20 years of Linux I'm so used to the buffer "just being there" that I'm trying to replicate that behavior... | The primary is not local to the terminal; you can paste it in other X applications by using the middle mouse button. What you should install is autocutsel : Autocutsel tracks changes in the server's cutbuffer and CLIPBOARD selection. When the CLIPBOARD is changed, it updates the cutbuffer. When the cutbuffer is changed, it owns the CLIPBOARD selection. On my Ubuntu 12.04 system I can just install this with apt-get install autocutsel ; your distro might have it as well. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93838/"
]
} |
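A hedged usage sketch for the autocutsel route above; -fork and -selection are real autocutsel options, but running two instances (one per selection) is a common convention rather than anything the tool mandates: add autocutsel -fork to your X session startup to keep the CLIPBOARD synchronized, and autocutsel -selection PRIMARY -fork to also mirror the PRIMARY selection, so text merely selected in the terminal lands in the clipboard that the VM host integration sees.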
172,049 | I can extract a list of patterns using the following command: fgrep -A 1 -f patternlist.txt filename.fasta but is there a way I can extract them without creating another file (patternlist.txt in this case), using another command's output instead? Such as: cut -d " " Cell_cycle.txt -f 1 | grep ...???... filename.fasta EDIT: The Cell_cycle.txt looks like this: $ cat Cell_cycle_Kegg_pathwayctg2977_3.g207.t1 K06626P05_Ctg654_12.g311.t2 K03094P06_Ctg710_7.g346.t1 K05868 I want to take the first column and extract those sequences from the fasta file. EDIT 2: I have a list of sequences in UniqueSeq_28Dec2014.fasta >ctg1474_1.g69.t1 (first line)atgaaatgttggtgcagcgccctggcacttctcc...... (second line)>ctg1475_1.g70.t1 (third line)atgaaattgcagcgccctggcacttctcctgcag...... (fourth line) I want to print the first two sequences (from lines 1 to 4). However, I do not want to use head -4 UniqueSeq_28Dec2014.fasta , which would also give my output; I want to do it using process substitution. I tried the following command, but it does not seem to work. I just see 4 empty lines. grep -A 1 -Ff <(grep '>' UniqueSeq_28Dec2014.fasta |head -4) UniqueSeq_28Dec2014.fasta | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77543/"
]
} |
172,095 | x=1while [ $x -le 50 ]do echo $x $x=(($x + 1))done I wrote the above code. What seems to be an easy task in many programming languages is giving me this error: solution.sh: line 5: syntax error near unexpected token `('solution.sh: line 5: ` $x=(($x + 1))' How do I debug errors in bash? Is there an IDE? | On line 5: change $x=(($x + 1)) to x=$(($x + 1)) . Instead of using an entire bash script, you can just use seq 1 50 . If the case were x=$(($x + 2)) , you could use seq 1 2 50 , where 2 denotes the step/increment. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94206/"
]
} |
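For completeness, a hedged corrected version of the question's script with the line-5 fix applied and the variable references quoted; this is just the obvious repair implied by the answer, nothing beyond it: x=1; while [ "$x" -le 50 ]; do echo "$x"; x=$((x + 1)); done which prints 1 through 50, the same output as seq 1 50 .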
172,113 | Should be easy .... zpool status -l rpool or zpool status -v or so I thought . My issue is we're running Solaris 10 on a HP DL380 G5 and I suspect the non-native hardware is confusing things. We have 2x zpools, one of which is made up of several disks. However, when I run zpool status -l rpool it just lists a single disk. We have reason to believe a disk is failing or has failed and want to remove it from the zpool, but we can't list the physical disks .... What can I do? Martin | zpool status does not support a -l option; you must be confusing it with something else. # cat /etc/release Oracle Solaris 10 1/13 s10x_u11wos_24a X86 Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved. Assembled 17 January 2013# zpool help statususage: status [-vx] [-T d|u] [pool] ... [interval [count]] You write you have two pools but you are running the command against the root pool, which is unlikely to be the one with several disks. Just run zpool status -v without specifying a pool name and both of your pools should be reported with their disks. Should you for some reason still be missing a disk in the report, you can use zpool history to get an idea about what commands were used on the pools. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94220/"
]
} |
172,146 | I have a text file containing the following: "randomtextA""randomrandomtextB" The separator is " How can I split the content into multiple files as follows using a bash command? File 1 : randomtextA File 2 : randomrandomtextB I came across examples using csplit or awk but they do not cover this text layout. | Simple awk command: awk 'NR%2==0{ print >("File " ++i) }' RS='"' infile RS defines " as the record separator and NR is the record number. The text between each pair of quotes lands in the even-numbered records (the part before the first " counts as record 1), so if the record number is even, print the current record $0 into a new File # . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94246/"
]
} |
172,179 | I'm a long-time KDE user; I haven't seen GNOME since maybe Mandrake Linux 10 something. Yesterday I took a look at GNOME Shell on Youtube and thought it might be worth a look. It actually is, but I can't figure out one thing. I've got a shell script that has run on my KDE laptop installations for ages: synclient |grep -E 'TapT|RightB|EdgeScr'|awk '{print $1}'|while read item; do synclient $item=0; donesynclient VertScrollDelta=-111synclient HorizScrollDelta=1synclient RightEdge=999999999synclient TopEdge=0synclient CircularScrolling=0synclient BottomEdge=999999999synclient RightButtonAreaLeft=9999999synclient ClickFinger3=2synclient HorizScrollDelta=0synclient HorizTwoFingerScroll=0 This disables right-click and all taps, inverts the scroll directions, disables horizontal scrolling, and so on. Neither the KDE nor the GNOME touchpad menu allows this kind of configuration. (TBH, of all the operating systems only the Mac has a GUI to set up the touchpad exactly like that :)). I can't figure out how to run this after the GNOME Shell session starts. I've already tried this with no results: [1] % cat /home/neko/.config/autostart/script.desktop [Desktop Entry]Name="Auto stuff"GenericName="Auto startup stuff"Comment="Synclient mostly"Exec=/home/neko/bin/auto_stuff.shTerminal=falseType=ApplicationX-Gnome-Autostart=true Any other suggestions, please? Thank you. | You can use the program gnome-session-properties. Just execute it from your shell prompt (gnome-terminal): $ gnome-session-properties This will open a GUI where you can configure (i.e., add, edit, remove, enable and disable) startup programs. Nice and easy. Enjoy. Update: As noted by don_crissti (thanks) in the comments below, the gnome-session-properties startup programs functionality has migrated to gnome-tweak-tool since Gnome 3.12 (which btw is a great tool, but a little messier as it concentrates just too many functions). So, for versions 3.12 and above, this is the new place to look. OBS: And btw, using this method, you don't need to create a .desktop file; you can simply specify the command line of the /home/neko/bin/auto_stuff.sh script you created (or any other script or command you want). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93959/"
]
} |
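If you'd rather keep the ~/.config/autostart route from the question, a hedged note: desktop-entry values are not shell-quoted, and the GNOME autostart key is spelled X-GNOME-Autostart-enabled, so a corrected script.desktop would presumably read [Desktop Entry] Type=Application Name=Auto stuff Comment=Synclient mostly Exec=/home/neko/bin/auto_stuff.sh Terminal=false X-GNOME-Autostart-enabled=true with one key per line, and the script itself must be executable ( chmod +x /home/neko/bin/auto_stuff.sh ).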
172,186 | Ok, I'm simply trying to strip out double-quotes in my filenames. Here's the command I came up with (bash). $ find . -iname "*\"*" -print0 | xargs -0 -I {} mv {} {} | tr -d \" The problem is the 'mv {} {} | tr -d \"' part. I think it's a precedence problem: bash seems to be interpreting it as (mv {} {}) | (tr -d \") , and what I'm left with are both filenames stripped of double-quotes. That's not what I want, obviously, because then it fails to rename the file. Instead, I want the first filename to have the quotes, and the second not to, more like this: mv {} ({} | tr -d \") . How do I accomplish this? I've tried brackets and curly braces, but I'm not really sure what I'm doing when it comes to explicitly setting command execution precedence. | Assuming you have the rename command installed, use: find . -name '*"*' -exec rename 's/"//g' {} + The rename command takes a Perl expression to produce the new name. s/"//g performs a global substitution on the name, replacing all the quotes with an empty string. To do it with mv you need to pipe to a shell command, so you can execute subcommands: find . -name '*"*' -exec sh -c 'mv "$0" "$(printf %s "$0" | tr -d "\"")"' {} \; What you wrote pipes the output of xargs to tr ; it doesn't use tr to form the argument to mv . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3488/"
]
} |
172,199 | I tried the following link to install via these commands. $ sudo apt-get update$ sudo apt-get install chromium-browser$ sudo add-apt-repository ppa:skunk/pepper-flash$ sudo apt-get update$ sudo apt-get install pepperflashplugin-nonfree$ sudo update-pepperflashplugin-nonfree --install However, the package pepperflashplugin-nonfree is nowhere to be seen; what repository does it lie in? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63803/"
]
} |
172,240 | I have a temp file, and I want grep to print the whole word that matches the pattern instead of just the pattern itself. I tried grep -o <pattern> file but it's not giving me the desired output. Input xi29 vddf vss vddf vss int_s s2 rstb mg91a02_l_nd2_bulk_vt1 ln1=16n ln2=16n lp1=16n lp2=16n nf_n1=1 nf_n2=1 nf_p1=1 nf_p2=1 xi28 vddf vss vddf vss d1 d mg91a02_l_inv_bulk_vt1 ln=16n lp=16n nf_n=1 nf_p=1 nfin_n=2 nfin_p=2 m=1 xi25 vddf vss vddf vss int_m2 int_m1 mg91a02_l_inv_bulk_vt1 ln=16n lp=16n nf_n=1 nf_p=1 Command grep -o 'mg91a02' temp Output (obtained) mg91a02mg91a02mg91a02 Output (desired) mg91a02_l_nd2_bulk_vt1mg91a02_l_inv_bulk_vt1mg91a02_l_inv_bulk_vt1 | try grep -E -o 'mg91a02\w+' where -E : extended regexp -o : print only the matched part \w : a word character (a letter, digit or underscore) + : one or more times | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
172,320 | I need a script that checks and lists all text files for project conventions. With conventions I mean for example: UTF-8 encoding No trailing white spaces Newline at the end of file No non-ascii chars LF for line endings I do not want to reinvent the wheel. Maybe there is a tool doing this. Do you know of any? | Detecting UTF-8 encoding : file will usually give you the encoding: file --brief --mime-encoding myfile.txt Note that it may be either 'us-ascii' or 'utf-8', depending on whether it finds some UTF-8 characters, so you'll need to accept both. The following points will mostly require you to pipe the output into wc -l (to count the number of lines of the output) and check whether it's 0 or not. Alternatively, they should usually have a return value of 0 if they found something, or 1 if not (in which case your requirements are fulfilled): No trailing white spaces : That's a job for grep , I guess: grep -e '\s\+$' myfile.txt Newline at the end of file : If the last character according to hexdump or xxd is 0a , there is a newline, and it's fine: xxd -ps myfile.txt | tail -n 1 | grep '0a$' ( note that, unlike for the other points denoted here, you want this to find something ) No non-ascii chars : This is the same as "UTF-8 encoding", except maybe a little more strict. If you really want to be sure there are only ASCII characters in a file (see @Anthon's answer), you'll probably need something like xxd -g1 myfile.txt | cut -c 10-56 | grep '[a-f89][a-f0-9]' This searches for any characters outside the ASCII range (0x00-0x7F). It's not very elegant, though. LF for line endings : file without any options will tell you something like ASCII text, with CRLF line terminators For a script, probably something like the following could do: xxd -g1 myfile.txt | cut -c 10-56 | grep '0d' Fixing UTF-8 encoding : There is iconv (1) . Essentially it takes a "from" encoding ( -f ), a "to" encoding ( -t ), and the file. The "to" encoding is probably utf-8 , whereas the "from" encoding can be obtained using file as described at the top of my post: file_encoding="$(file --brief --mime-encoding myfile.txt)"iconv -f "$file_encoding" -t 'utf-8' myfile.txt No trailing white spaces : That's a job for sed , although I prefer the POSIX way (i.e. no -i switch) which means using printf + ex . Either way, the regex will look something like s/\s\+$//g . For the POSIX-compliant way: printf "%%s/\\s\\+\$//g\nwq\n" | ex -s myfile.txt For the non-POSIX-compliant way: sed -i 's/\s\+$//g' myfile.txt Newline at the end of file : Unix applications usually append a missing newline at the end of a file when they save it. To exploit that, this is a bit of a hack: printf "wq\n" | ex -s myfile.txt ( this will actually just open, save, quit ) No non-ascii chars : See "UTF-8 encoding" above. LF for line endings : There is dos2unix (1) . It should do exactly what you need. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94371/"
]
} |
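A hedged sketch tying the detection checks above into one report-only loop over the files given as arguments; the one-line style and the tail -c 1 trick (command substitution strips a trailing newline, so a non-empty result means the last byte is not a newline) are my assumptions, not part of the answer: for f in "$@"; do enc=$(file --brief --mime-encoding "$f"); case $enc in us-ascii|utf-8) ;; *) echo "$f: encoding $enc";; esac; grep -Eq '[[:blank:]]+$' "$f" && echo "$f: trailing whitespace"; [ -n "$(tail -c 1 "$f")" ] && echo "$f: no newline at EOF"; grep -q "$(printf '\r')" "$f" && echo "$f: CR line endings"; done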
172,349 | I've tried installing the newest version of SVN but nothing is working. On yum there is 1.7. When downloading the rpm/zip, an error occurs when trying to configure. Setting up some additional repositories failed as well. Does anybody have a proven way to do it? | Step 0: Remove the old version of subversion (if already installed) #yum remove subversion Step 1: Set up a Yum Repository Create a new repo file /etc/yum.repos.d/wandisco-svn.repo and add the following content [WandiscoSVN]name=Wandisco SVN Repobaseurl=http://opensource.wandisco.com/centos/7/svn-1.8/RPMS/$basearch/enabled=1gpgcheck=0 Step 2: Install the Subversion Package # yum clean all# yum install subversion Step 3: Verify the Subversion Version # svn --versionsvn, version 1.8.13 (r1667537) compiled Apr 2 2015, 15:54:41 on i686-pc-linux-gnuCopyright (C) 2014 The Apache Software Foundation.This software consists of contributions made by many people;see the NOTICE file for more information.Subversion is open source software, see http://subversion.apache.or | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39918/"
]
} |
172,364 | On my Debian installation I installed chromium 39 and the latest version of libnss3, but Netflix failed to play. I tried on my Ubuntu installation and it too failed. I tried installing chrome from the website and it WORKED. I looked at the versions of chrome and chromium. They're both 39.0.2171.XY. AFAIK chrome 38+ works. Why doesn't Netflix work in chromium while chrome does? Is there a way I can have Netflix run in chromium? | It is because chrome packages the... Widevine Content Decryption Module - Version: 1.4.6.667 Enables Widevine licenses for playback of HTML audio/video content. (version: 1.4.6.667) ...whereas chromium does not, and in August 2014 Netflix switched to allowing HTML5 content by default. Visit: chrome://plugins ...to see a list. You will need that plugin installed in chromium for it to work. You might also add the google talk plugin and pdf plugin while you're at it, but if you do so you pretty much just installed chrome, as those are some of the primary differences. In fact, though, until late summer 2015 you couldn't install that component singly to chromium - we can chalk that one up to another (short-lived) win for Digital Restrictive Management, I guess. With some serious downtime and expert hacking you might be able to compile your own package (a chromium compile is no Sunday drive, by the way) - but you might have to hack the plugin out of chrome . Failing that, you can use the Ubuntu chrome ppa source, I suppose: wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -echo "deb http://dl.google.com/linux/chrome/deb/ stable main" | sudo tee -a /etc/apt/sources.list.d/google.list ...should probably do the trick, I guess, if you're ok with using the closed-source chrome binary. As of August 2015, though, you can now install the Widevine module separately, as the chromium maintainer has patched the source to accept its use. For example, on an Arch Linux system there is the chromium-widevine AUR package. Have a look at its PKGBUILD script to see how it's done - it doesn't look very complicated. Essentially the chrome...deb debian package file is downloaded, from it are extracted only a few Widevine-relevant files, their version numbers captured, and then these are copied into the relevant chromium installation paths. There is also the Pipelight project which should enable you to use the Silverlight plugin (via wine ) to watch Netflix video (and so not the HTML5 method which works with chrome ) in chromium . It is a somewhat heavy-handed approach in my opinion, but it is a popular option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
172,382 | I work a lot with imaged drives, meaning I do a dd-copy of the drive in question and then work on the image instead of the drive itself. For most work, I use kpartx to map the drive's partitions to devices under /dev/mapper/. What I'm wondering here is if there's a way to find out which of the mappings belongs to which image. Consider this: root@vyvyan:/tmp# kpartx -a -v Image1 add map loop1p1 (254:4): 0 10240 linear /dev/loop1 2048add map loop1p2 (254:5): 0 10240 linear /dev/loop1 12288add map loop1p3 (254:6): 0 52848 linear /dev/loop1 22528root@vyvyan:/tmp# kpartx -a -v Image2add map loop2p1 (254:7): 0 33508 linear /dev/loop2 2048add map loop2p2 (254:8): 0 39820 linear /dev/loop2 35556 Now, let's say I forget which image went to which mapping. Is there a way to let kpartx - or the kernel, or anything else - tell me which image goes where? EDIT Also, if I accidentally rm the image-file while kpartx has added the mappings, how do you remove the mappings? kpartx wants the actual image to be present. | losetup (the command normally used to set them up) will tell you: $ /sbin/losetup --listNAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE/dev/loop0 0 0 0 0 /var/tmp/jigdo/debian-7.6.0-amd64-CD-1.iso Note that with older versions you may have to use -a instead of --list , and this outputs in a different and now deprecated format. The information comes from /sys : $ cat /sys/class/block/loop0/loop/backing_file /var/tmp/jigdo/debian-7.6.0-amd64-CD-1.iso Another, possibly more portable, option is to get it from udisks: $ udisksctl info -b /dev/loop0/org/freedesktop/UDisks2/block_devices/loop0:⋮ org.freedesktop.UDisks2.Loop: Autoclear: false BackingFile: /var/tmp/jigdo/debian-7.6.0-amd64-CD-1.iso SetupByUID: 1000⋮ losetup will also happily remove them for you, using the -d option. That just requires the loop device as a parameter; it doesn't care about the backing file/device. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/172382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11865/"
]
} |
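A hedged note on the EDIT in the question above (removing the mappings after the image has been rm'd): kpartx also accepts the loop device itself as an argument, so presumably kpartx -d /dev/loop1 tears down the /dev/mapper/loop1p* entries and losetup -d /dev/loop1 then frees the loop device, with no need for the image file to still exist.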
172,385 | I am getting an error when I try to copy a file from my local computer to a remote server with ssh . I get the same error whether I use cp or scp . Here is my input and the resulting error: [[email protected] /]# cp /home/username/some.xml [email protected]:/path/to/directory/ cp: cannot stat ‘/home/username/some.xml’: No such file or directory I have checked, and there is definitely a file at the path /home/username/some.xml on my local machine. Both the local computer and the remote server are running CentOS 7 . How can I resolve this error and copy successfully? | If you are logged into the local machine , you would use scp like this: scp /home/username/some.xml [email protected]:/path/to/directory/ If you are logged into the remote machine (as in the OP), use scp like this: scp [email protected]:/home/username/some.xml /path/to/directory Substitute the IP addresses as directed in the commands. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
172,399 | I'm trying to take the contents of a file and insert it after a matching pattern in another file using sed. My question is very similar to this question , but I wish to insert the contents of a file inline rather than on a new line. How can I do this? Using the example question I referenced, the first answer does exactly what I want; however, I want the insertion to happen inline: sed '/First/r file1.txt' infile.txt The actual data I want to insert is a JSON file: [ { "foo": "bar", "baz": "biff", "data": [ { "a": 1945619, "b": [ { "c": 512665, "d": "futz" } ] } ] }] | In your linked question there is already a good awk answer; just modify it a little bit by using printf instead of print to insert the content without a newline: awk '/First/ { printf $0; getline < "File1.txt" }1' infile.txt Result: Some Text hereFirstThis is text to be inserted into the File.SecondSome Text here You may want to add a space or other delimiter after "First" with printf $0 " "; ... If the inserted file has many lines, then: awk '/First/{printf $0; while(getline line<"File1.txt"){print line};next}1' infile.txt Result: Some Text hereFirst[ { "foo": "bar", "baz": "biff", "data": [ { "a": 1945619, "b": [ { "c": 512665, "d": "futz" } ] } ] }]SecondSome Text here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26274/"
]
} |
172,481 | Suppose that I want to delete all files in a folder that are greater than 1 MB. $ find . -size +1M | xargs -0 rm This will not delete files that have spaces in their names. So I want it to quote all arguments it sends to rm . If find gives it Some report.docx it should pass "Some report.docx" to rm . How can I do that? | As you're already using that non-standard 1M , chances are your find implementation also supports -delete . So, simply use: find . -type f -size +1M -delete Where supported, that's by far the safest and most efficient. If you insist on using xargs and rm with find , just add -print0 in your command: find . -type f -size +1M -print0 | xargs -r0 rm -f -- ( -print0 and -0 are non-standard, but pretty common. -r (to avoid running rm at all if find doesn't find anything) is less common, but if your xargs doesn't support it, you can just omit it, as rm with -f won't complain if called without arguments). The standard syntax would be: find . -type f -size +1048576c -exec rm -f -- {} + Another way: find . -type f -size +1M -execdir rm -f -- {} + (that's safer than -exec / xargs -0 and would work with very deep directory trees (where full file paths would end up larger than PATH_MAX ), but that's also non-standard, and runs at least one rm for each directory that contains at least one big file, so would be less efficient). From man find on a GNU system: -print0 True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/172481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
172,506 | When I execute a program in Bash , for example ls , it sends its output to standard output ( fd 1 ). And the output of the executed program is displayed in the terminal. How does Bash/terminal get the output of the ls command? | As a child process of the shell, ls inherits the open file descriptors of the shell. And the standard file descriptors (stdin, stdout, stderr (or 0, 1, 2)) are connected to a pseudo-terminal, which is handled by the terminal emulator. For example (on a Linux system): $ ls /proc/$$/fd -ltotal 0lrwx------ 1 muru muru 64 Dec 10 16:15 0 -> /dev/pts/3lrwx------ 1 muru muru 64 Dec 10 16:15 1 -> /dev/pts/3lrwx------ 1 muru muru 64 Dec 10 16:15 2 -> /dev/pts/3lrwx------ 1 muru muru 64 Dec 10 16:15 255 -> /dev/pts/3$ ls /proc/$(pgrep terminator -f)/fd -l | grep pts/3lrwx------ 1 muru muru 64 Dec 10 16:15 26 -> /dev/pts/3 That is, the output of ls , or for that matter the shell itself, is not handled by the shell, but by the terminal emulator (GNOME Terminal, terminator, xterm, etc.). You can test this out: On Linux, find a pseudoterminal ( pts ) used by your terminal emulator (say GNOME Terminal): $ ls -l /proc/$(pgrep -n gnome-terminal)/fd | grep ptslrwx------ 1 muru muru 64 Dec 10 18:00 1 -> /dev/pts/1lrwx------ 1 muru muru 64 Dec 10 18:00 15 -> /dev/pts/20lrwx------ 1 muru muru 64 Dec 10 18:00 2 -> /dev/pts/1 Now, the non-standard fds (those other than 0,1,2) of gnome-terminal would be used by it to provide input and output for a shell. The terminal emulator reads in data sent to that PTS and (after some processing, for colours and such) presents it on the screen. In this case, that would be 15 , connected to pts/20 . If I write something to that pts, I can expect it to appear in that terminal. Further reading: What is stored in /dev/pts files and can we open them? The other case, where I do things like: echo $(ls)a=$(date)vim `command -v some_script` is called Command Substitution . In command substitution, the output of the command is captured by the shell itself, and never reaches the terminal, unless you do print it out (for example, echo $(ls) ). This case is handled in Hauke Laging's answer . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90372/"
]
} |
172,520 | I've found some strange traffic through tcpdump (the traffic is always ongoing): 13:00:13.203754 IP 1.2.3.4.1028 > 188.113.188.16.56881: UDP, length 10313:00:13.204396 IP 1.2.3.4.1028 > 180.183.209.27.29546: UDP, length 10313:00:13.204972 IP 1.2.3.4.1028 > 95.188.250.39.6881: UDP, length 10313:00:13.205509 IP 1.2.3.4.1028 > 125.39.30.33.5493: UDP, length 10313:00:13.206048 IP 1.2.3.4.1028 > 46.194.14.254.32232: UDP, length 10313:00:13.206526 IP 1.2.3.4.1028 > 151.52.30.111.6881: UDP, length 10313:00:13.207097 IP 1.2.3.4.1028 > 70.27.63.150.64389: UDP, length 10313:00:13.207555 IP 1.2.3.4.1028 > 108.12.215.184.42880: UDP, length 10313:00:13.208082 IP 1.2.3.4.1028 > 37.105.27.136.54752: UDP, length 10313:00:13.209671 IP 1.2.3.4.1028 > 61.53.14.223.6881: UDP, length 10613:00:13.266142 IP 46.35.235.75.10995 > 1.2.3.4.1028: UDP, length 28913:00:13.276353 IP 86.162.78.254.20206 > 1.2.3.4.1028: UDP, length 28913:00:13.345021 IP 108.12.215.184.42880 > 1.2.3.4.1028: UDP, length 28913:00:13.349955 IP 46.194.14.254.32232 > 1.2.3.4.1028: UDP, length 28913:00:13.357145 IP 70.27.63.150.64389 > 1.2.3.4.1028: UDP, length 28913:00:13.373275 IP 37.105.27.136.54752 > 1.2.3.4.1028: UDP, length 28913:00:13.785877 IP 61.53.14.223.6881 > 1.2.3.4.1028: UDP, length 31113:00:13.880421 IP 1.2.3.4.1028 > 86.38.202.92.63287: UDP, length 143813:00:13.913168 IP 122.174.79.84.34858 > 1.2.3.4.1028: UDP, length 28913:00:14.057212 IP 86.38.202.92.63287 > 1.2.3.4.1028: UDP, length 20... many more lines with same or different hosts 1.2.3.4 is my WAN address (changed it so as not to make it too publicly available). Port 1028 is never opened in my firewall; I even tried to DROP it: $IPT -I INPUT -p udp --sport 1028 -j DROP$IPT -I INPUT -p udp --dport 1028 -j DROP$IPT -I FORWARD -p udp --sport 1028 -j DROP$IPT -I FORWARD -p udp --dport 1028 -j DROP$IPT -I OUTPUT -p udp --sport 1028 -j DROP$IPT -I OUTPUT -p udp --dport 1028 -j DROP$IPT -A INPUT -p udp --sport 1028 -j DROP$IPT -A INPUT -p udp --dport 1028 -j DROP$IPT -A FORWARD -p udp --sport 1028 -j DROP$IPT -A FORWARD -p udp --dport 1028 -j DROP$IPT -A OUTPUT -p udp --sport 1028 -j DROP$IPT -A OUTPUT -p udp --dport 1028 -j DROP Rules look like this: root@server-14:/# iptables -n -LChain INPUT (policy DROP)target prot opt source destinationDROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:1028ACCEPT all -- 127.0.0.1 0.0.0.0/0ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 67,68ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:221 state NEW,ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 state ESTABLISHEDDROP udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1028Chain FORWARD (policy ACCEPT)target prot opt source destinationDROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1028Chain OUTPUT (policy ACCEPT)target prot opt source destinationDROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:1028DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:1028 lsof -p 1028 doesn't show anything; netstat -antlpu doesn't show it either. I tried with -A , with -I , even (as you can see) with both of them. It just doesn't work; the traffic continues to flow, sometimes at such rates that the internet connection becomes wobbly. I'm even starting to think that I've been put into some sort of botnet or something... | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86779/"
]
} |
172,528 | man 4 random has a very vague description of Linux kernel entropy sources: The random number generator gathers environmental noise from device drivers and other sources into an entropy pool. The paper Entropy transfers in the Linux Random Number Generator isn't much more specific, either. It lists: add_disk_randomness() , add_input_randomness() , and add_interrupt_randomness() . These functions are from random.c , which includes the following comment: Sources of randomness from the environment include inter-keyboard timings, inter-interrupt timings from some interrupts, and other events which are both (a) non-deterministic and (b) hard for an outside observer to measure. Further down, there is a function add_hwgenerator_randomness(...) indicating support for hardware random number generators. All of this information is rather vague (or, in the case of the source code, requires deep knowledge of the Linux kernel to understand). What are the actual entropy sources used, and does the Linux kernel support any hardware random number generators out-of-the-box? | Most commodity PC hardware has a random number generator these days. VIA Semiconductor has put them in their processors for many years; the Linux kernel has the via-rng driver for that. I count 34 source modules in the drivers/char/hw_random/ directory in the latest source tree, including drivers for Intel and AMD hardware, and for systems that have a TPM device. You can run the rng daemon (rngd) to push random data to the kernel entropy pool. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40798/"
]
} |
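Following up on the answer above, a minimal sketch for inspecting the entropy pool and any hardware RNG from a shell. The paths are the standard Linux procfs/sysfs locations; the hw_random entries only exist once a hardware RNG driver is loaded:

# How many bits of entropy the kernel currently estimates it has
cat /proc/sys/kernel/random/entropy_avail

# Which hardware RNGs the kernel knows about, and which one is active
cat /sys/class/misc/hw_random/rng_available
cat /sys/class/misc/hw_random/rng_current

# Feed the hardware RNG into the kernel pool via the rng daemon
sudo rngd -r /dev/hwrng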
172,541 | I have a script, that does not exit when I want it to. An example script with the same error is: #!/bin/bashfunction bla() { return 1}bla || ( echo '1' ; exit 1 )echo '2' I would assume to see the output: :~$ ./test.sh1:~$ But I actually see: :~$ ./test.sh12:~$ Does the () command chaining somehow create a scope? What is exit exiting out of, if not the script? | () runs commands in the subshell, so by exit you are exiting from subshell and returning to the parent shell. Use braces {} if you want to run commands in the current shell. From bash manual: (list) list is executed in a subshell environment. Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. { list; } list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command. The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell metacharacter. It's worth mentioning that the shell syntax is quite consistent and the subshell participates also in the other () constructs like command substitution (also with the old-style `..` syntax) or process substitution, so the following won't exit from the current shell either: echo $(exit)cat <(exit) While it may be obvious that subshells are involved when commands are placed explicitly inside () , the less visible fact is that they are also spawned in these other structures: command started in the background exit & doesn't exit the current shell because (after man bash ) If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0. the pipeline exit | echo foo still exits only from the subshell. However different shells behave differently in this regard. For example bash puts all components of the pipeline into separate subshells (unless you use the lastpipe option in invocations where job control is not enabled), but AT&T ksh and zsh run the last part inside the current shell (both behaviours are allowed by POSIX). Thus exit | exit | exit does basically nothing in bash, but exits from the zsh because of the last exit . coproc exit also runs exit in a subshell. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/172541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
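A minimal demonstration of the subshell/group difference described in the answer above, as a runnable script (the exit status 7 is an arbitrary example value):

#!/bin/bash
( exit 7 )                 # subshell: only the subshell terminates
echo "after subshell: $?"  # prints 7; the script is still running
{ exit 7; }                # current shell: the script really ends here
echo "never printed"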
172,548 | I know that, if a coloured terminal is available, one can colour the output of it using escape characters . But is there a possibility to find out, which colour the output is currently being displayed as? Or better, what colour the text would be, if I would output it right now? I'm asking to not break any previous colour settings, when using these escape characters. The 'default foreground colour' escape character is getting it's information from the colour scheme, rather than the text colour before I changed it. | In general, obtaining the current colours is impossible. The control sequence processing of a terminal happens "inside" the terminal, wherever that happens to be. With a terminal emulator such as xterm or the one built into an operating system kernel that provides the kernel virtual terminals, the internal state of the emulator, including its notion of the current "graphic rendition" (i.e. colour and attributes), is on the machine itself and is theoretically accessible. But for a real terminal this information is in some RAM location on a physically separate machine connected via a serial link. That said, some terminals include a mechanism for reading out such information as part of their terminal protocol, that is sent over that serial link. They provide control sequences that a program can send to the terminal, that cause it to send back information about its internal state, as terminal input. mikeserv has shown you the control sequences that the xterm terminal emulator responds to. But these are specific to xterm . The built-in terminal emulators in the Linux kernel and the various BSD kernels are different terminal types, for example, and don't implement any such control sequences at all. The same goes for whole families of real terminals. DEC VT525 terminals implement a read-out mechanism, but have a set of control sequences that bears no relationship to those used by xterm . One sends the DECRQSS (Request Selection or Setting) sequence to request the current graphic rendition, and the terminal responds by sending the DECRPSS (Report Selection or Setting). Specifically: Host sends: DCS $ q m ST (DECRQSS with the control function part of SGR as the setting) Terminal responds: DCS 0 $ r 0 ; 3 3 ; 4 4 m ST (DECRPSS with the parameters and control function part of an SGR control sequence that sets the current foreground and background colours) Of course, a careful reading of your question reveals that you are waving a chocolate-covered banana at those European currency systems again. What you're actually trying to do, for which you've selected a solution and then asked how to do part of that solution, is preserve the previous state whilst you write some colourized output. Not only is there a DEC VT control sequence for doing this, there's a SCO console terminal sequence for it that is recognized by xterm and various kernel built-in terminal emulators, and a termcap/terminfo entry that tells you what they are for your terminal. The termcap entries are sc and rc . The terminfo entries are save_cursor and restore_cursor . The names are somewhat misleading as to effect (although they do act as a warning that you are relying upon something that is de facto rather than de jure ). The actual DECSC, DECRC, SCOSC, and SCORC control sequences save and restore the current graphic rendition as well. Given that the article that you pointed to is all about generating control sequences from shell scripts, the command that you are now looking for is tput . 
Further reading Jonathan de Boyne Pollard. 2007. Put down the chocolate-covered banana and step away from the European currency systems. . Frequently Given Answers. VT420 Programmer Reference Manual . EK-VT420-RM-002. February 1992. Digital. VT520/VT525 Video Terminal Programmer Information . EK-VT520-RM. July 1994. Digital. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
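To make the last point concrete, a small sketch of the save/restore approach with tput. sc, rc and setaf are the termcap/terminfo capabilities discussed above, so this only works on terminals whose entries define them, and the rendition-restoring behaviour of rc is the de facto one the answer describes:

tput sc                     # save cursor (and, de facto, the current rendition)
tput setaf 1                # switch to red foreground
printf 'colourised text'
tput rc                     # restore what was saved
# alternatively, tput sgr0 simply resets all attributes outright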
172,549 | I need to get all intefaces names in Arch linux. If I issue command ifconfig, I get following response: [root@pi ~]# ifconfigeth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.154 netmask 255.255.255.0 broadcast 192.168.0.255 ether b8:27:eb:3c:03:fe txqueuelen 1000 (Ethernet) RX packets 119099 bytes 96958556 (92.4 MiB) RX errors 0 dropped 8 overruns 0 frame 0 TX packets 18304 bytes 5456443 (5.2 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 loop txqueuelen 0 (Local Loopback) RX packets 285 bytes 88221 (86.1 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 285 bytes 88221 (86.1 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 That is everything ok, but I just need interface names. How do I get just interface names? | One way could be to use ifconfig with -s (short list), and cut out the part you need: $ ifconfig -a -sIface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flgeth0 1500 0 1374267176 0 116420 0 2848281091 0 0 0 BMRUlo 65536 0 761767047 0 0 0 761767047 0 0 0 LRUvboxnet0 1500 0 0 0 0 0 0 0 0 0 BMvirbr0 1500 0 0 0 0 0 0 0 0 0 BMU$ ifconfig -s -a | awk '$1 !~ /Iface/ {print $1}'eth0lovboxnet0virbr0 Or a similar method with ip : $ ip -o link show | awk -F': ' '{print $2}'loeth0virbr0vboxnet0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40993/"
]
} |
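Since sysfs exposes one directory per interface, an even shorter variant of the above needs no parsing at all (sample output; the names will differ per machine):

$ ls /sys/class/net
eth0  lo  vboxnet0  virbr0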
172,559 | In our cluster, we are restricting our processes resources, e.g. memory ( memory.limit_in_bytes ). I think, in the end, this is also handled via the OOM killer in the Linux kernel (looks like it by reading the source code ). Is there any way to get a signal before my process is being killed? (Just like the -notify option for SGE's qsub , which will send SIGUSR1 before the process is killed.) I read about /dev/mem_notify here but I don't have it - is there something else nowadays? I also read this which seems somewhat relevant. I want to be able to at least dump a small stack trace and maybe some other useful debug info - but maybe I can even recover by freeing some memory. One workaround I'm currently using is this small script which frequently checks if I'm close (95%) to the limit and if so, it sends the process a SIGUSR1 . In Bash, I'm starting this script in background ( cgroup-mem-limit-watcher.py & ) so that it watches for other procs in the same cgroup and it quits automatically when the parent Bash process dies. | It's possible to register for a notification for when a cgroup's memory usage goes above a threshold. In principle, setting the threshold at a suitable point below the actual limit would let you send a signal or take other action. See: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172559",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2954/"
]
} |
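A minimal sketch of the polling watcher mentioned in the question, against the cgroup-v1 memory controller files. The cgroup path, the 5-second interval and the 95% threshold are assumptions; the eventfd-based cgroup.event_control interface from the linked kernel documentation avoids polling entirely, but needs a small C helper rather than plain shell. TARGET_PID is a placeholder for however the watched process is located:

#!/bin/bash
cg=/sys/fs/cgroup/memory/mygroup        # assumed cgroup mount and group name
limit=$(cat "$cg/memory.limit_in_bytes")
while sleep 5; do
    usage=$(cat "$cg/memory.usage_in_bytes")
    if [ "$usage" -gt $((limit * 95 / 100)) ]; then
        kill -USR1 "$TARGET_PID"        # assumed: PID of the process to warn
    fi
done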
172,570 | Why does bash sometimes refuse to accept my orders by simply starting a new line beginning with a greater-than sign instead of executing the command?Every press on enter adds a new line, and the only way to escape this seems to be hitting Ctrl + C . As an example: a small command that I sometimes need to limit the fan speed of my laptop is not working anymore: RegenbogenBook:Resources Vincent$ smc -k F0Mx -w $(python -c 'print hex(2800 << 2)[2:]’)>>> I have the impression that I'm only missing something really obvious, but this kind of basic thing is just not covered in any FAQs or accessible via search... | You're probably cutting and pasting the command (or parts of it) from a document instead of typing it in manually. Usually this doesn't make any difference, but in this case, the second quote character was inserted as a "right single quotation mark" ( ’ ) instead of an "apostrophe" ( ' ). The difference is subtle -- see this page for more details: http://en.wikipedia.org/wiki/Quotation_mark_glyphs The reason this probably happened is that when you first typed in the command to the document to save it for future reference, your word processor automatically converted the second apostrophe into a right single quotation mark. It does this to make the character look nicer on the screen, but bash doesn't recognize this character as a valid closing quote, so you run into the problem. It prints " > " to prompt for further input, because it still thinks the original quote has not been closed. The fix is to change that character to an apostrophe -- just retype it manually into bash from the keyboard. And you can also correct it in your document so that future cut+pasting will work fine. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94506/"
]
} |
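A quick way to confirm which character you actually have, building on the answer above: U+2019 (right single quotation mark) encodes to three bytes in UTF-8, while the plain apostrophe is a single byte:

$ printf %s "’" | od -An -tx1
 e2 80 99
$ printf %s "'" | od -An -tx1
 27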
172,573 | How to check whether eth0 has connection speed 10Mbit, 100Mbit or 1Gbit? I tried ethtool , but it says No data available . I also tried dmesg | grep -i duplex and it's empty.
[root@dioptase ~]# lspci
00:0a.0 Ethernet controller: Digital Equipment Corporation DECchip 21140 [FasterNet] (rev 20)
[root@dioptase ~]# ethtool eth0
Settings for eth0:
No data available
[root@dioptase ~]# ethtool -i eth0
driver: tulip
version: 1.1.15
firmware-version:
bus-info: 0000:00:0a.0
[root@dioptase ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:15:5D:6F:1E:09
          inet addr:192.168.140.106  Bcast:192.168.140.255  Mask:255.255.255.0
          inet6 addr: 2a00:1120:0:1002:215:5dff:fe6f:1e09/64 Scope:Global
          inet6 addr: fe80::215:5dff:fe6f:1e09/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:95671897 errors:6 dropped:0 overruns:0 frame:6
          TX packets:16524440 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23933711964 (22.2 GiB)  TX bytes:19761966217 (18.4 GiB)
          Interrupt:9 Base address:0xe000 | Do NOT use mii-tool. It was last updated years ago and does not support anything over fast ethernet. There are a few ways you can determine ethernet speed. The most recommended one is
cat /sys/class/net/<interface>/speed
The output will be 10, 100, 1000, ...etc. In fact you can get almost all of the data you need about your network card from /sys/class/net/<interface>/ Another option (not sure why it didn't work for you):
lspci | grep -iE --color 'network|ethernet'
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172573",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73160/"
]
} |
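A few related sysfs attributes that sit next to speed and are often useful together (same caveat as above: one directory per interface under /sys/class/net; the values shown are just sample output):

$ cat /sys/class/net/eth0/speed
1000
$ cat /sys/class/net/eth0/duplex
full
$ cat /sys/class/net/eth0/operstate
up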
172,581 | I need to repair a network related package (Samba) that is preventing my system from booting (Mint 17). I have a bootable USB stick with the same OS. How would I fix the broken package on my hard disk through the USB operating system? | Do NOT use mii-tool. It was last updated years ago and does not support anything over fast ethernet. There are a few ways you can determine ethernet speed. The most recommended one is
cat /sys/class/net/<interface>/speed
The output will be 10, 100, 1000, ...etc. In fact you can get almost all of the data you need about your network card from /sys/class/net/<interface>/ Another option (not sure why it didn't work for you):
lspci | grep -iE --color 'network|ethernet'
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/172581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86050/"
]
} |
172,624 | I've piped a command to less , and now I want to save the command's output to a file. How do I do that? In this case, I don't want to use tee , I want a solution directly from less, so that I don't have to rerun a long-running command if I forgot to use tee . This question is similar to this one, the only difference is that I want to save all of the lines, not a subset: Write lines to a file from less | From less , type s then type the file name you want to save to, then Enter . From the man page, under COMMANDS : s filename Save the input to a file. This only works if the input is a pipe, not an ordinary file. man page also states that, depending on your particular installation, the s command might not be available. In that case, you could go to line 1 with: g or < or ESC-< Go to line N in the file, default 1 (beginning of file). and pipe the whole content to cat with: | <m> shell-command <m> represents any mark letter. Pipes a section of the input file to the given shell command. The section of the file to be piped is between the first line on the current screen and the position marked by the letter. <m> may also be ^ or $ to indicate beginning or end of file respectively. so either: g|$cat > filename or: <|$cat > filename i.e. type g or < ( g or less-than ) | $ ( pipe then dollar ) then cat > filename and Enter . This should work whether input is a pipe or an ordinary file. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/172624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9041/"
]
} |
172,655 | I want to know how to clear all variables which I defined in the command prompt without closing the terminal? For example, if I set a variable in the command prompt as: $ a=1 now I want to delete the variable $a (and many other variables defined in a similar way) without closing the terminal. I could use unset but it will be hectic if there are a large number of variables | If you do (GNU coreutils) exec env --ignore-environment /bin/bash you will use a fresh, new environment | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
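A short session showing the effect of the answer above. env -i is the short form of --ignore-environment; --norc is an optional addition here so that bash does not immediately re-populate variables from your startup files:

$ a=1 b=2
$ export b
$ exec env -i /bin/bash --norc
$ echo "a=$a b=$b"
a= b=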
172,666 | When I try to log in to gmail with mutt, it flashes a quick Webalert with a url, something like accounts.gmail.com or something. It's too quick for me to see or copy it. Then it says Login failed. Then I get an email from Gmail saying: Google Account: sign-in attempt blockedHi Adam, We recently blocked a sign-in attempt to your Google Account [[email protected]]. Sign in attempt detailsDate & Time: Wednesday, December 10, 2014 11:55:21 PM UTC Location: Utah, USA If this wasn't youPlease review your Account Activity page at https://security.google.com/settings/security/activity to see if anything looks suspicious. Whoever tried to sign in to your account knows your password; we recommend that you change it right away. If this was youYou can switch to an app made by Google such as Gmail to access your account (recommended) or change your settings at https://www.google.com/settings/security/lesssecureapps so that your account is no longer protected by modern security standards. To learn more, see https://support.google.com/accounts/answer/6010255. Sincerely,The Google Accounts team I can go to the link and enable "Access for less secure apps" and then I can log in just fine, but is there a way to login with mutt without having to turn on this less secure option in Gmail? Update: I'm on mac os x YosemiteWhen I run mutt -v, in the compile options, it does contain +USE_SSL_OPENSSLI'm not using google 2-step verificationI'm not using an application specific passwordHere are the messages that I get when I try to log in: Reading imaps://imap.gmail.com:993/INBOX...Looking up imap.gmail.com...Connecting to imap.gmail.com...TLSv1.2 connection using TLSv1/SSLv3 (ECDHE-RSA-AES128-GCM-SHA256)Logging in...[WEBALERT https://accounts.google.com/ContinueSignIn?sarp=1&scc=1&plt=AKgnsbsm0P...... I found this answer, but it didn't work: https://stackoverflow.com/a/25209735/1665818 | I finally got it to work by enabling Google 2-step verification and using an app-specific password for mutt. More detail: I enabled 2-step verification on my Google account, which means that when I log in to Google, I have to enter a pin number from either a text or from the Google Authenticator app. Then I had to get an app-specific password for mutt. You can generate an app specific password here . Then I used that app-specific password for logging into mutt instead of my normal password. And then I don't have to enter a pin number. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/172666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64643/"
]
} |
172,685 | I'm writing a shell script that needs to do a bunch of commands and every command depends on every previous command. If any command fails the entire script should fail and I call an exit function. I could check the exit code of each command but I am wondering if there is a mode I can enable or a way to get bash to do that automatically. For example, take these commands:
cd foo || myfunc
rm a || myfunc
cd bar || myfunc
rm b || myfunc
Is there a way where I can somehow signal to the shell before executing these commands that it should call myfunc if any of them fail, so that instead I can write something cleaner like:
cd foo
rm a
cd bar
rm b | You can use a bash trap on ERR to run your function whenever a command returns a status greater than zero, and have that function exit the script. Something like:
myfunc() {
    echo 'Error raised. Exiting!'
    exit 1
}
trap 'myfunc' ERR
# file does not exist, causing error
ls asd
echo 123
Note that the ERR trap by itself does not stop the script (it is not an implicit set -o errexit / set -e ), which is why myfunc calls exit explicitly; ERR traps are also a bash extension, not POSIX. And the ERR trap is not executed if the failed command is part of the command list immediately following an until or while keyword, part of the test following the if or elif reserved words, part of a command executed in a && or || list, or if the command's return status is being inverted using ! . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/172685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19416/"
]
} |
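A variant of the above that combines the ERR trap with set -e, and uses $BASH_COMMAND to report which command failed. A sketch; the file name asd is just a placeholder for something that does not exist:

#!/bin/bash
set -e
trap 'echo "error in: $BASH_COMMAND" >&2' ERR
ls asd          # fails: the trap fires, then set -e aborts the script
echo "never reached"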
173,704 | I created a cache pool following this article . The process seems to be successful. After doing an upgrade-grub and rebooting, Grub complains that it cannot find the root volume (showing a UUID). My bad, but I cannot recall if this was the id of the original volume or the cached one. The question: is there an article detailing the tasks needed to use the cached volume as the root? | The following Debian Forum topic and a blog post based on it have the missing information. Outline
-1) Back up your LVM configuration and have a live CD ready.
0) Make sure you have a separate /boot partition (your cached root will only be available later). This can be a 200MB partition and can be part of the same VG as your cached root.
1) You need dm-cache in your kernel image (instead of as a module). Check your config and make sure you have CONFIG_DM_CACHE=y . If it is a module (=m) you will need to recompile a kernel where this is set to y . It is probably a good idea to use menuconfig and set this option from there (it will make sure dm-cache's dependency chain is also =y ).
Device Drivers --->
    Generic Driver Options --->
--- Multiple devices driver support (RAID and LVM)
<*> Device mapper support
<*>   Cache target (EXPERIMENTAL)
2) Install thin-provisioning-tools (will do fsck -like functions on the cache at boot-time).
3) Create a file in /etc/initramfs-tools/hooks with the following content. This will make sure the executable from step 2 and some dependencies are inside your init ramdisk image.
#!/bin/sh
PREREQ="lvm2"
prereqs()
{
    echo "$PREREQ"
}
case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac
if [ ! -x /usr/sbin/cache_check ]; then
    exit 0
fi
. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/sbin/cache_check
manual_add_modules dm_cache dm_cache_mq
4) Run update-initramfs -u -k all to re-generate all your initrd images. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23586/"
]
} |
173,708 | I'm trying to force a newly created user to change a password at first login using ssh. For security reasons I want to give him a secure password until he logs in for the first time. I did the following so far:
useradd -s /bin/bash -m -d /home/foo foo
passwd foo
Doing chage -d 0 foo only gives me the error Your account has expired; please contact your system administrator on ssh login. | Change the password age to 0 days. Syntax:
chage -d 0 {user-name}
In this case:
chage -d0 foo
This works for me over ssh as well. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/173708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17859/"
]
} |
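Putting the pieces from the question and answer together, with a verification step at the end. The user name foo follows the question, and the chage -l line is roughly what the shadow utilities print once a change is forced:

# useradd -s /bin/bash -m -d /home/foo foo
# passwd foo
# chage -d 0 foo
# chage -l foo | head -1
Last password change                    : password must be changed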
173,719 | I'm trying to configure sshd for our internal network to accept public key authentication if a user has set up their key, or ask for a password if the user has not, but not both. So a user should be able to log in passwordless if they have their public key configured, or be asked for a password if they haven't set up a public key. Ubuntu 14.04 OpenSSH-server 1:6.6p1-2ubuntu2 | Change the password age to 0 days. Syntax:
chage -d 0 {user-name}
In this case:
chage -d0 foo
This works for me over ssh as well. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/173719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94600/"
]
} |
173,749 | I'm writing a shell script that needs to be run with root privileges. I can check if a user has root privileges with sudo -nv || echo "no sudo" , but that doesn't help me if his credentials are still cached by sudo but he didn't call my script with it. So I have no way of reacting to a user not calling my script with sudo. I could put sudo in front of every command that needs it, so just checking to see if the user has root privileges would be enough, but it seems to me that there should be a better solution. I'm looking for a command that I can put into my script that asks the user for root privileges and, if provided, executes the rest of the script as if the user had called it with root privileges in the first place. What I want:
#!/bin/bash
if ! command; then # what I'm looking for
    echo "This script needs root privileges."
    exit 1
fi
mv /bin/cmd1 /bin/cmd2 # requires root | Test if you are root , and if not, restart with sudo , for example:
#! /bin/bash
if [[ $EUID -ne 0 ]]; then
    exec sudo /bin/bash "$0" "$@"
fi
| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/173749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
173,754 | Suppose I have windows w1, w2 open, and sub-windows w1-a, w1-b with w1 as their parent window. Is there a way to move w1-a up to its parent window level? | I found the answer by watching the video tutorial on the i3 website:
Move the focused window up: Shift + $mod + Up
Focus parent: $mod + a
I hope this might be useful for other people who are also new to i3wm. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/173754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92712/"
]
} |
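For reference, the corresponding lines from a stock i3 config, so the binding can also be changed there. Focus child ships commented out by default:

bindsym $mod+a focus parent
# bindsym $mod+d focus child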
173,756 | I accidentally left a trailing slash on my directory name when renaming a folder in Linux, mv /images/images/ .. Now the folder has vanished, but I know it's still there due to the space it's using. I'm thinking it's been renamed to '..' but I cannot seem to rename it back? Anyone know the answer? :) | I found the answer by watching the video tutorial on the i3 website:
Move the focused window up: Shift + $mod + Up
Focus parent: $mod + a
I hope this might be useful for other people who are also new to i3wm. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/173756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94627/"
]
} |
173,779 | I have a following small bash script var=awhile true :do echo $var sleep 0.5 read -n 1 -s vardone It just prints the character entered by the user and waits for the next input. What I want to do is actually not block on the read, i.e. every 0.5 second print the last character entered by the user. When user presses some key then it should continue to print the new key infinitely until the next key press and so on. Any suggestions? | From help read : -t timeout time out and return failure if a complete line of input is not read within TIMEOUT seconds. The value of the TMOUT variable is the default timeout. TIMEOUT may be a fractional number. If TIMEOUT is 0, read returns immediately, without trying to read any data, returning success only if input is available on the specified file descriptor. The exit status is greater than 128 if the timeout is exceeded So try: while truedo echo "$var" IFS= read -r -t 0.5 -n 1 -s holder && var="$holder"done The holder variable is used since a variable loses its contents when used with read unless it is readonly (in which case it's not useful anyway), even if read timed out: $ declare -r a $ read -t 0.5 abash: a: readonly variablecode 1 I couldn't find any way to prevent this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/173779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92901/"
]
} |
173,785 | I need to list only the files in a directory that do NOT begin with qc_dl , qc_dt or qd_df , but that also DO end in .sas . I can list the files that do not begin with "qc_dl" as such: ls -I "qc_dl*" But then I do not know how to only select the SAS programs from the resultant list. Furthermore, I can select all SAS files that begin either with "qc_dl", "qc_dt" or "qd_df", as follows: ls qc_d{l,t,f}*.sas However, I cannot combine the two commands to only list SAS files that DO NOT begin with qc_dl/t/f . | From man bash: If the extglob shell option is enabled using the shopt builtin, several extended pattern matching operators are recognized. In the following description, a pattern-list is a list of one or more patterns separated by a |. Composite patterns may be formed using one or more of the following sub-patterns: ?(pattern-list) Matches zero or one occurrence of the given patterns *(pattern-list) Matches zero or more occurrences of the given patterns +(pattern-list) Matches one or more occurrences of the given patterns @(pattern-list) Matches one of the given patterns !(pattern-list) Matches anything except one of the given patterns You can then use more powerful pattern matching when this is enabled. To turn it on: ~$ shopt -s extglob Then you can list files that end with .sas and do not begin with qc_d : ~$ lsa.sas a.txt b.sas b.txt c.sas c.txt qc_dc.sas qc_df.sas qc_dl.sas qc_dt.sas~$ ls !(qc_d*).sasa.sas b.sas c.sas To turn it off: ~$ shopt -u extglob | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94640/"
]
} |
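If only the exact prefixes from the question should be excluded, a tighter pattern is possible (still with extglob enabled, and following the asker's own qc_d{l,t,f} spelling). This keeps files such as qc_dc.sas that the broader qc_d* pattern above would hide:

$ ls !(qc_d[ltf]*).sas
a.sas  b.sas  c.sas  qc_dc.sas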
173,805 | I have a remote Linux (Debian) machine which I'd like to access from a very restricted network. In fact the only two open ports are port 80 (for HTTP) and 443 (HTTPS). On this machine I have nginx server which is running on port 80 and 443. I haven't done anything like this before and am fairly inexperienced with any server software other than nginx (and Minecraft which isn't particularly hard to do). If there is a simple way to achieve this please tell me. The ssh server on this machine is this: OpenSSH_6.0p1 Debian-4+deb7u2, OpenSSL 1.0.1e 11 Feb 2013 | There is sslh . It can multiplex the connections depending on what type of client is asking. So if a webbrowser comes along it will forward it to nginx and if a ssh client tries to connect forward it to the sshd. The README.md will hook you up with a nice explanation on how it has to be configured. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45867/"
]
} |
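A minimal invocation as a sketch (option names per sslh's usage at the time; the ports and addresses are assumptions for the setup in the question, with nginx re-bound to listen only on localhost:443 so sslh can own the public port):

sslh -p 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443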
173,851 | I am trying to get the output of some programs and assign them to variables. I am using backticks to do it, but I can switch to a different method if necessary. What I notice is that often I do not get the expected output in the variable. A simple case is this below. The actual examples I have are more complicated. Whatever the output from the program I'm running in the backticks I want it in the variable. Is that possible? test=`echo [asdf]` If I echo $test it shows just a. | Don't use backticks , use $() . Use single quotes around literal strings and double around variables , so echo "$test" . Don't use echo , use printf . After all: $ test="$(printf '%s' '[asdf]')" $ printf "%s\n" "$test" [asdf] | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/173851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19416/"
]
} |
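The quoting advice matters here because the unquoted [asdf] is a glob: if the directory happens to contain a file named a (or s, d, or f), the shell expands the pattern before echo ever runs, which is exactly the symptom described. A quick demonstration:

$ ls
a
$ echo [asdf]
a
$ echo '[asdf]'
[asdf]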
173,878 | For example, I have been trying to say "There are 10 people online at the moment" in my script file. I can never seem to get the command working without the "people online at the moment" part on the next line. At the moment, I have w='who | wc -l' echo "There are $w people online at the moment" However, I always end up with the output There are who | wc -l users online at the moment How do you get the command working in the middle? I've been trying to look and copy examples, but it doesn't seem to help my command substitution issue. | You want the output of who | wc -l assigned to w , not that string, which is what you get because of the quotes around it. You should use command substitution $(...) : w=$(who | wc -l)echo "There are $w people online at the moment" (you can also use the backquotes, but you cannot easily nest those). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94695/"
]
} |
173,898 | I can show the target file that a link points to using ls -l : snowch$ ls -l /usr/local/bin/mvnlrwxr-xr-x 1 snowch admin 29 12 Dec 08:58 /usr/local/bin/mvn -> ../Cellar/maven/3.2.3/bin/mvn Is there a way to show less output without having to pipe through another command such as awk? E.g: snowch$ ls ?? /usr/local/bin/mvn/usr/local/bin/mvn -> ../Cellar/maven/3.2.3/bin/mvn I'm running 3.2.53 on OS X 10.9.5. The output from several commands is shown below: snowch$ ls -H /usr/local/bin/mvn/usr/local/bin/mvnsnowch$ ls -L /usr/local/bin/mvn/usr/local/bin/mvnsnowch$ file /usr/local/bin/mvn/usr/local/bin/mvn: POSIX shell script text executablesnowch$ file -b /usr/local/bin/mvnPOSIX shell script text executable | ls unfortunately doesn't have an option to retrieve file attributes and display them in an arbitrary way. Some systems have separate commands for that (for instance GNU has a stat command or the functionality in GNU find ). On most modern systems, with most files, this should work though: $ ln -s '/foo/bar -> baz' the-file$ LC_ALL=C ls -ldn the-file | sed ' 1s/^\([^[:blank:]]\{1,\}[[:blank:]]\{1,\}\)\{8\}//'the-file -> /foo/bar -> baz That works by removing the first 8 blank delimited fields of the first line of the output of ls -l . That should work except on systems where the gid is not displayed there or the first 2 fields are joined together when there's a large number of links. With GNU stat : $ LC_ALL=C stat -c '%N' the-file'the-file' -> '/foo/bar -> baz' With GNU find : $ find the-file -prune \( -type l -printf '%p -> %l\n' -o -printf '%p\n' \)the-file -> /foo/bar -> baz With FreeBSD/OS/X stat: f=the-fileif [ -L "$f" ]; then stat -f "%N -> %Y" -- "$f"else printf '%s\n' "$f"fi With zsh stat: zmodload zsh/statf=the-filezstat -LH s -- "$f"printf '%s\n' ${s[link]:-$f} Many systems also have a readlink command to specifically get the target of a link: f=the-fileif [ -L "$f" ]; then printf '%s -> ' "$f" readlink -- "$f"else printf '%s\n' "$f"fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24554/"
]
} |
173,900 | I'm trying to add one number from the command line, and one number as a default. For example: when the user types in the number 50 the script will add 10 (as the default number).
./script 50
The sum of 50 + 10 is 60.
This is what I have so far.
echo -n "Please enter a number: "
read number
default = 10
sum = $((default + number)) // this line does not seem to work
echo "The sum of $number and 10 is $sum."
Do I have the syntax wrong? I'm not sure if I'm on the right track. Am I adding the numbers wrong? Should I use awk instead? let sum = $default + $number | You should not have spaces around the = in "default = 10" and "sum = ..." (shell variable assignments cannot contain spaces). Inside $((...)) the $ before default and number is optional, but it does no harm. The script then works as expected for me when written as below:
#!/bin/bash
echo -n "Please enter a number: "
read number
default=10
sum=$(($default + $number))
echo "The sum of $number and 10 is $sum."
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94695/"
]
} |
173,916 | I encountered BASEDIR=$(pwd) in a script. Are there any advantages or disadvantages over using BASEDIR="$PWD" , other than, maybe, that $PWD could be overwritten? | If bash encounters $(pwd) it will execute the command pwd and replace $(pwd) with this command's output. $PWD is a variable that is almost always set. pwd has been a builtin shell command for a long time. So $PWD will fail if this variable is not set, and $(pwd) will fail if you are using a shell that does not support the $() construct, which in my experience is pretty often the case. So I would use $PWD . Like every nerd, I have my own shell scripting tutorial | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/173916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
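One behavioural point worth adding: both $PWD and a plain pwd give the "logical" path, so they agree even across symlinks; it is pwd -P that differs. A small session as illustration (the directory and link names are arbitrary):

$ mkdir -p /tmp/d && ln -s /tmp/d /tmp/link && cd /tmp/link
$ echo "$PWD"
/tmp/link
$ pwd
/tmp/link
$ pwd -P
/tmp/d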
173,939 | I have a script that execute a java program, and I want to create a service to start this script at boot time. So I've created a script called run.sh. /test/run.sh #!/bin/bashjava -cp myjar:/test/lib/* com.xxxx.util.AmazonS3FileDownloader I have also created a file called test in /etc/init.d /etc/init.d/test #!/bin/bash/test/run.sh For testing purposes I gave the test folder /test all rights (chmod 777 /test). drwxrwxrwx 7 testuser testuser 4096 Dec 12 13:28 test And this is what inside /etc/ini.d folder -rwxr-xr-x 1 root root 2062 Dec 12 13:18 test If I run this command. Everything is fine. No error, the program is running fine. $ /test/run.sh but for a reason I ignore if I try to do the same thing but using the service. It doesn;t work. $ service test start I've got permission denied when creating the receipts_download.log in /test folder. log4j:ERROR setFile(null,true) call failed.java.io.FileNotFoundException: receipts_download.log (Permission denied) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:221) at java.io.FileOutputStream.<init>(FileOutputStream.java:142) at org.apache.log4j.FileAppender.setFile(FileAppender.java:290) at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:194) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:164) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:257) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:133) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:97) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:689) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:647) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:544) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:440) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:476) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:471) at org.apache.log4j.LogManager.<clinit>(LogManager.java:125) at org.apache.log4j.Logger.getLogger(Logger.java:118) at com.xxxxx.util.AmazonS3FileDownloader.<init>(Unknown Source) at com.xxxxx.util.AmazonS3FileDownloader.main(Unknown Source) /test has all permission and why I can run $ /test/run.sh without problem but not $ service test start Thanks for your help. | If bash encounters $(pwd) it will execute the command pwd and replace $(pwd) with this command's output. $PWD is a variable that is almost always set. pwd is a builtin shell command since a long time. So $PWD will fail if this variable is not set and $(pwd) will fail if you are using a shell that does not support the $() construct which is to my experience pretty often the case. So I would use $PWD . As every nerd I have my own shell scripting tutorial | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/173939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94739/"
]
} |
173,947 | Can someone tell me the differences from: du -s dir 3705012 dirdu -s --apparent-size dir3614558 dir these dirs are inside a block device (created using cryptsetup). Or better: why I need add --apparent-size only with files inside a crypted block device? | The "apparent size" of a file is how much valid data is actually in the file. It is the actual amount of data that can be read from the file. Block-oriented devices can only store in terms of blocks, not bytes. As a result, the disk usage is always rounded up to the next highest block. A "block" in this case may not equate to a physical block on the storage device, either, depending on how the file system allocates space. In the case of your encrypted device, the file system may expand the amount of space used to include overhead to support the encryption/decryption information. It probably also encrypts or randomizes the unused space between the end of file and the end of the block containing it, which may make it appear larger to du . None of this takes into account sparse file handling, which may not be supported in an encrypted filesystem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40628/"
]
} |
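The two numbers are easy to see side by side with a one-byte file and a sparse file (a 4K filesystem block size is assumed; truncate is from GNU coreutils):

$ printf x > tiny
$ truncate -s 1G sparse
$ du -h tiny sparse
4.0K    tiny
0       sparse
$ du -h --apparent-size tiny sparse
1       tiny
1.0G    sparse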
173,958 | I am working on a script that will copy ONLY files that have been created within the last day off to another folder. The issue I am having is the script I have copies all of the files in the source directory instead of just the files less than a day old. This is what I have: find . -mtime -1 -exec cp --preserve --parents -a "{}" /somefolder \; The above code copies all of the files in the source directory. If I remove all of the arguments for 'cp' then it works: find . -mtime -1 -exec cp "{}" /somefolder \; The above code copies only the newest files as I want but I need to preserve the attributes using the cp arguments. I have also tried variables and for loops thinking maybe the -exec option was the issue: files="$(find -mtime -1)"for file in "$files"docp --parents --preserve -a file /somefolder However, the above for loop results in the same issue, all files are copied. If I echo $files only the files I need are shown. How can I get this to work? | Some implementations of cp have a -t option, so you can put the target directory before the files: find . -mtime -1 -exec cp -a --parents -t /somefolder "{}" \+ Note that -a implies --preserve so I removed the latter. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/173958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94752/"
]
} |
174,021 | I am trying to chroot into an old HD to change a forgotten password, but chroot says permission denied? What gives? I am root! The hard drive I am trying to chroot into contains an old version of edUbuntu 7.10; might that have anything to do with it?
root@h:~# chroot /media/usb0/
chroot: failed to run command `/bin/bash': Permission denied | To chroot into (or recover) an Ubuntu or Debian system, boot from an Ubuntu live CD; if the installed system is 32-bit use a 32-bit live CD, and if it is 64-bit use a 64-bit one. Identify the Linux partitions using
# sudo blkid
Output:
sysadmin@localhost:~$ sudo blkid
[sudo] password for sysadmin:
/dev/sda1: UUID="846589d1-af7a-498f-91de-9da0b18eb54b" TYPE="ext4"
/dev/sda5: UUID="36e2f219-da45-40c5-b340-9dbe3cd89bc2" TYPE="swap"
/dev/sda6: UUID="f1d4104e-22fd-4b06-89cb-8e9129134992" TYPE="ext4"
Here my / partition is /dev/sda6. Mount the / partition to a mount point using
# sudo mount /dev/sda6 /mnt
Then mount the Linux access points: devices, proc, and sys.
Linux devices:
# sudo mount --bind /dev/ /mnt/dev
Proc (system information):
# sudo mount --bind /proc/ /mnt/proc
Sys (kernel information to user space):
# sudo mount --bind /sys /mnt/sys
If we need to enable networking inside the chroot (optional):
# cp /etc/resolv.conf /mnt/etc/resolv.conf
Change the Linux root to be the partition we mounted earlier:
# sudo chroot /mnt
Now try to change the root password; it will work. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93273/"
]
} |
174,028 | I have written a shell script to monitor a directory using the inotifywait utility of inotify-tools. I want that script to run continuously in the background, but I also want to be able to stop it when desired. To make it run continuously, I used while true, like this:
while true;
do
    # a set of commands that use the inotifywait utility
done
I have saved it in a file in /bin and made it executable. To make it run in the background, I used nohup <script-name> & and closed the terminal. I don't know how to stop this script. I have looked at the answers here and a very closely related question here . UPDATE 1: On the basis of the answer of @InfectedRoot below, I have been able to solve my problem using the following strategy. First use ps -aux | grep script_name and use sudo kill -9 <pid> to kill the processes. I then had to pgrep inotifywait and use sudo kill -9 <pid> again for the id returned. This works but I think this is a messy approach; I am looking for a better answer. UPDATE 2: The answer consists of killing 2 processes. This is important because running the script on the command line initiates 2 processes: 1, the script itself, and 2, the inotify process. | To improve, use killall , and also combine the commands:
ps -aux | grep script_name
killall script_name inotifywait
Or do everything in one line (note that column 2 of the ps output is the PID, so use kill rather than killall on it):
kill `ps -aux | grep script_name | grep -v grep | awk '{ print $2 }'` && killall inotifywait
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94810/"
]
} |
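pgrep/pkill can replace the ps|grep|awk pipeline entirely; -f matches against the full command line, which helps when the script is being run as an interpreter argument. A simpler equivalent of the one-liner above:

pkill -f script_name
pkill inotifywait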
174,037 | I have a string "rtcpOnNbActive true" stored in a variable x . I want to extract "true" as substring and store in a variable. How can I do this? | Try this way: y=$(echo $x | awk '{print $2}')echo $y echo $x display the value of x . awk '{print $2}' prints the second field of the previously displayed x . $( ... ) hold the output and let assign it to y . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94818/"
]
} |
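Two pure-shell alternatives that avoid the echo|awk pipeline (assuming, as in the question, a single space between the two words):

x='rtcpOnNbActive true'
y=${x#* }              # strip everything up to the first space
read -r _ y <<< "$x"   # or: let read split the fields (bash)
echo "$y"              # -> true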
174,049 | I have loads of files which look like this: data1.csvdata2.csv..data(n).csv My use case is that when I will call my script it will change data1.csv to data.csv and remaining files as it is and next time when I will call my script second time, it will move data1.csv to processed folder and change data2.csv to data.csv and so on. | Try this way: y=$(echo $x | awk '{print $2}')echo $y echo $x display the value of x . awk '{print $2}' prints the second field of the previously displayed x . $( ... ) hold the output and let assign it to y . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72128/"
]
} |
174,062 | I was going through a tutorial on setting up a custom initramfs where it states: The only thing that is missing is /init, the executable in the root of the initramfs that is executed by the kernel once it is loaded. Because sys-apps/busybox includes a fully functional shell, this means you can write your /init binary as a simple shell script (instead of making it a complicated application written in Assembler or C that you have to compile). and gives an example of init as a shell script that starts with #!/bin/busybox sh So far, I was under the impression that init is the main process that is launched and that all the other user space process are eventually children of init. However, in the given example, the first process is actually bin/busybox/ sh from which later init is spawned. Is this a correct interpertation? If I were, for example, have a available interpreter available at that point, I could write init as a Python script etc.? | init is not "spawned" (as a child process), but rather exec 'd like this: # Boot the real thing.exec switch_root /mnt/root /sbin/init exec replaces the entire process in place. The final init is still the first process (pid 1), even though it was preceded with those in the Initramfs. The Initramfs /init , which is a Busybox shell script with pid 1, exec s to Busybox switch_root (so now switch_root is pid 1); this program changes your mount points so /mnt/root will be the new / . switch_root then again exec s to /sbin/init of your real root filesystem; thereby it makes your real init system the first process with pid 1, which in turn may spawn any number of child processes. Certainly it could just as well be done with a Python script, if you somehow managed to bake Python into your Initramfs. Although if you don't plan to include busybox anyway, you would have to painstakingly reimplement some of its functionality (like switch_root , and everything else you would usually do with a simple command). However, it does not work on kernels that do not allow script binaries ( CONFIG_BINFMT_SCRIPT=y ), or rather in such a case you'd have to start the interpreter directly and make it load your script somehow. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/174062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
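For concreteness, a skeletal Busybox /init of the kind such tutorials use, ending in the exec chain described above. The root device name is an assumption and error handling is omitted:

#!/bin/busybox sh
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev        # if enabled in the kernel
mount -o ro /dev/sda1 /mnt/root    # assumed root device
umount /proc /sys
exec switch_root /mnt/root /sbin/init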
174,085 | To copy a bunch of zip files from one location to another: $ cp *.zip destination/ To unzip a bunch of zip files to a directory: $ unzip \*.zip -d destination/ man unzip says: The backslash before the asterisk is only required if the shell expands wildcards, as in Unix What confuses me is that the two commands do not seem to behave the same way. Why wouldn't they? Is there a way to know or guess how each command behaves, short of just remembering or reading the man pages every time? Is it different because cp is implemented to take several source arguments while unzip isn't? unzip can take a string with wildcards, why not implement taking in several source files? What is the standard way to take source arguments? Is it cp that actually does too much? | If you have two zip files a.zip and b.zip in your current directory, then $ cp *.zip destination/ expands to $ cp a.zip b.zip destination/ The semantics for cp is to copy both a.zip and b.zip to destination. If you type $ cp \*.zip destination/ it simply "expands" to $ cp '*.zip' destination/ i.e. it will try to copy a single file named "*.zip" to destination, which is not what you want. On the other hand if you type $ unzip *.zip -d destination/ it will again expand to $ unzip a.zip b.zip -d destination/ The semantics for unzip is to find a file named "b.zip" inside the archive "a.zip", which is again not what you want. If you type $ unzip \*.zip -d destination/ the unzip command does not simply try to unzip the file called *.zip but it will unzip all files ending with .zip . The difference is that both commands interpret their arguments differently. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38106/"
]
} |
174,091 | I updated my Ubuntu Linux to 14.04 and now the mouse and keyboard don't work in the login screen. I tried re-installing grub from a Live USB, but it's still not working. I would really appreciate your help. | Reinstall input device drivers sudo apt-get install --reinstall xserver-xorg-input-all | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94859/"
]
} |
174,092 | I want to delete all files that have two numbers and a dot at the beginning of their names, for example:
01. abc
02. xyz | rm [0-9][0-9].* will do it for files in the current directory (no quotes — you want to match files). The . doesn't need to be escaped, because this is a shell glob and not a regular expression (if it were a regex, the dot would be a wildcard). If you are looking to do this recursively, find is probably your best bet. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
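And the recursive variant hinted at in the last sentence, using find. -delete is a GNU find extension; run the command without it first to preview the matches:

find . -type f -name '[0-9][0-9].*' -delete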
174,116 | I'm interested in creating a script which prints the total number of received and sent packets on an interface. The output should be like:
interface
TX: number
RX: number
Can anyone help me please? | You can figure out the number of packets received and transmitted across eth0 by running the following commands:
cat /sys/class/net/eth0/statistics/rx_packets
cat /sys/class/net/eth0/statistics/tx_packets
You could then use this fact to write a simple shell script which will poll these files every second, and then calculate and output a PPS value (packets per second). A minimal version of such a script is sketched below. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/174116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92488/"
]
} |
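A minimal version of the script the question asks for, reading the same sysfs counters. The interface name is passed as an argument, defaulting to eth0 as an assumption:

#!/bin/bash
iface=${1:-eth0}
tx=$(cat "/sys/class/net/$iface/statistics/tx_packets")
rx=$(cat "/sys/class/net/$iface/statistics/rx_packets")
printf '%s\nTX: %s\nRX: %s\n' "$iface" "$tx" "$rx"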