Columns: source_id (int64, 1 to 74.7M); question (string, lengths 0 to 40.2k); response (string, lengths 0 to 111k); metadata (dict)
377,452
How can I write a specific number of zero bytes to a binary file automatically? I write:

#!/bin/bash
for i in {0..127}; do
    echo -e -n "\x00" >> file.bin
done

But looking at the file with Bless shows: 2D 65 20 2D 6E 20 5C 78 30 30 0A, which corresponds to the literal text -e -n \x00 .
printf should be portable and supports octal character escapes:

i=0
while [ "$i" -le 127 ]; do
    printf '\000'
    i=$((i+1))
done >> file.bin

( printf isn't required to support hex escapes like \x00 , but a number of shells support that.) See Why is printf better than echo? for the troubles with echo .
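As a quick sanity check (a sketch, assuming the loop above has already run), the file should contain exactly 128 NUL bytes:

$ wc -c file.bin
128 file.bin
$ od -An -tx1 file.bin | head -n 1     # every byte should be 00
 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00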
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160836/" ] }
377,483
I'm not sure what the general methodology of daemonizing a script is. For example, I've searched online, and if I were trying to write a python script that checked the time every second on my computer, all I could think of is using systemd to start it up and then, in Python, writing the script as a never-ending loop with a timer. This neither makes much sense to me nor does it seem like a good way of daemonizing. All I would do with systemd is use it to run the script (and any script) at startup, so systemd itself doesn't seem very useful. I think I may be daemonizing my script wrong, so do you know what the better ways are to use systemd to turn a python script into a daemon process? Thanks
systemd is not a catch-all. It won't be the solution to every problem, however it does give you a lot of tools to help you solve problems. The usefulness of those tools comes down to how well you can use them. Let's look at a very basic service file, check-time.service (note that I created this service file by hand, using other service files located in /usr/lib/systemd/system/ as references):

[Unit]
Description=Checks the time every second

[Service]
Type=simple
ExecStart=/usr/bin/check-time.py

The service file belongs in /usr/lib/systemd/system/ or /etc/systemd/system/ to be used by systemd.

Line By Line

[Unit], [Service]: the section headers. These just group together directives. You can find references to which directives belong where in the systemd man pages: the [Unit] section, the [Service] section, and the [Install] section.

Description: A free-form string describing the unit. This is intended for use in UIs to show descriptive information along with the unit name. The description should contain a name that means something to the end user. "Apache2 Web Server" is a good example. Bad examples are "high-performance light-weight HTTP server" (too generic) or "Apache2" (too specific and meaningless for people who do not know Apache).

Type: Configures the process start-up type for this service unit. One of simple, forking, oneshot, dbus, notify or idle. If set to simple (the default if neither Type= nor BusName=, but ExecStart= are specified), it is expected that the process configured with ExecStart= is the main process of the service. In this mode, if the process offers functionality to other processes on the system, its communication channels should be installed before the daemon is started up (e.g. sockets set up by systemd, via socket activation), as systemd will immediately proceed starting follow-up units.

ExecStart: Commands with their arguments that are executed when this service is started. The value is split into zero or more command lines according to the rules described below (see section "Command Lines" below).

Summary

This service file would simply run the command /usr/bin/check-time.py when started. If the command exits then it would be considered "dead"; so long as it continues to run it is considered "Active". How useful is this service file? Well, not very. As it is, the only thing it does is allow you to run the python script using systemctl start check-time.service instead of the normal full path. However, there is a wealth of additional options that can be useful.

Useful Options

WantedBy: If you want the service to start on boot, set WantedBy= to your default target.
Restart: Determines when systemd should automatically restart a service, such as "always" or "on-failure".
Literally hundreds of other options that include limiting hardware usage, which user to use to execute the process, setting environment variables, setting dependencies, and many more.

systemd is useful for all the additional functionality it provides, not simply because it can wrap things.
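Putting those options together, a sketch of an extended unit file that starts at boot and restarts on failure (the script path follows the example above; the target name is the usual default and is an assumption here):

[Unit]
Description=Checks the time every second

[Service]
Type=simple
ExecStart=/usr/bin/check-time.py
# restart the script automatically if it ever crashes
Restart=on-failure

[Install]
# pull the service in when the normal multi-user boot target starts
WantedBy=multi-user.target

After placing it in /etc/systemd/system/, something like systemctl daemon-reload followed by systemctl enable --now check-time.service would register it for boot and start it immediately.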
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142064/" ] }
377,618
After I run set -a var=99 , I can find a line like this in the output of set :

...
TERM=xterm
UID=0
USER=root
VIRTUAL_ENV_DISABLE_PROMPT=1
_=var=99
colors=/etc/DIR_COLORS
...

Could anyone tell me what "_=" means? I note that echo $var gives nothing. If I run set -a again, then the output of set won't include this variable any more. What is happening?
The underscore is actually a special shell variable. What you are seeing here is the underscore ( _ ) variable with a value of var=99 . It is readonly, and maintained by the shell. It is: Set at shell startup, containing the absolute file name of the shell or script being executed as passed in the argument list. After that, it expands to the last argument to the previous command, after expansion. When checking mail, this parameter holds the name of the mail file. It is also set to the full pathname of each command executed and placed in the environment exported to that command. Your example falls into the second category. You typed: set -a var=99 So the last argument there was var=99 , and that was the value you were setting (you weren't setting var to 99 ). Hence, _ was set to that. And that is what is being reported: _=var=99 It's a bit confusing, but the first = indicates the assignment to the variable _ , and the second is part of the value. It is also worth mentioning that the -a option to set will cause all subsequently defined variables to be exported.
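A quick interactive illustration of that "last argument" behaviour (a sketch; the echoed values are just examples):

$ echo first second third
first second third
$ echo "$_"          # _ holds the last argument of the previous command
third
$ set -a var=99
$ echo "$_"
var=99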
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237289/" ] }
377,672
Let's say the group name of user1 is xyz. Is it possible, in the same Linux machine, that the username of some other user (not user1) is xyz?
Yes, it is possible but I wouldn't recommend it as it would be confusing. Actually it is common practice in most UNIXes and Linux distros, when user xyz is created, to automatically create a group named xyz and assign to it user xyz as its (only) member.
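For illustration, a sketch of how that situation could be set up (names follow the question; the uid/gid numbers shown are examples):

# create a user called xyz; most distros also create a group xyz for it
$ sudo useradd -m xyz

# give a different user, user1, that group as its primary group
$ sudo useradd -m -g xyz user1

$ id user1
uid=1002(user1) gid=1001(xyz) groups=1001(xyz)
$ id xyz
uid=1001(xyz) gid=1001(xyz) groups=1001(xyz)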
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240124/" ] }
377,676
Example script:

#!/bin/sh -e
sudo useradd -m user_a
sudo useradd -m user_b -g user_a
sudo chmod g+w /home/user_a
set +e
sudo su user_a <<EOF
cd
umask 027
>> file_a
>> file_b
>> file_c
ls -l file_*
EOF
sudo su user_b <<EOF
cd
umask 000
rm -f file_*
ls -l ~user_a/
set -x
mv ~user_a/file_a .
cp ~user_a/file_b .
ln ~user_a/file_c .
set +x
ls -l ~/
EOF
sudo userdel -r user_b
sudo userdel -r user_a

Output:

-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c
total 0
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c
+ mv /home/user_a/file_a .
+ cp /home/user_a/file_b .
+ ln /home/user_a/file_c .
ln: failed to create hard link ‘./file_c’ => ‘/home/user_a/file_c’: Operation not permitted
+ set +x
total 0
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a
-rw-r----- 1 user_b user_a 0 Jul 11 12:26 file_b
userdel: user_b mail spool (/var/mail/user_b) not found
userdel: user_a mail spool (/var/mail/user_a) not found
Which system are you running? On Linux, that behaviour is configurable, through /proc/sys/fs/protected_hardlinks (or sysctl fs.protected_hardlinks ). The behaviour is described in proc(5):

/proc/sys/fs/protected_hardlinks (since Linux 3.6)
When the value in this file is 0, no restrictions are placed on the creation of hard links (i.e., this is the historical behavior before Linux 3.6). When the value in this file is 1, a hard link can be created to a target file only if one of the following conditions is true:
- The calling process has the CAP_FOWNER capability ...
- The filesystem UID of the process creating the link matches the owner (UID) of the target file ...
- All of the following conditions are true: the target is a regular file; the target file does not have its set-user-ID mode bit enabled; the target file does not have both its set-group-ID and group-executable mode bits enabled; and the caller has permission to read and write the target file (either via the file's permissions mask or because it has suitable capabilities).

And the rationale for that should be clear:

The default value in this file is 0. Setting the value to 1 prevents a longstanding class of security issues caused by hard-link-based time-of-check, time-of-use races, most commonly seen in world-writable directories such as /tmp.

On Debian systems protected_hardlinks and the similar protected_symlinks default to one, so making a link without write access to the file doesn't work:

$ ls -ld . ./foo
drwxrwxr-x 2 root itvirta 4096 Jul 11 16:43 ./
-rw-r--r-- 1 root root       4 Jul 11 16:43 ./foo
$ mv foo bar
$ ln bar bar2
ln: failed to create hard link 'bar2' => 'bar': Operation not permitted

Setting protected_hardlinks to zero lifts the restriction:

# echo 0 > /proc/sys/fs/protected_hardlinks
$ ln bar bar2
$ ls -l bar bar2
-rw-r--r-- 2 root root 4 Jul 11 16:43 bar
-rw-r--r-- 2 root root 4 Jul 11 16:43 bar2
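For reference, the same toggle through the sysctl interface mentioned above (a sketch; the drop-in file name is my own choice, and given the rationale quoted above you would normally leave the value at 1 on shared systems):

# check the current setting
$ sysctl fs.protected_hardlinks
fs.protected_hardlinks = 1

# change it for the running kernel only (same effect as the echo above)
$ sudo sysctl -w fs.protected_hardlinks=0

# make the change persistent across reboots
$ echo 'fs.protected_hardlinks = 0' | sudo tee /etc/sysctl.d/99-hardlinks.conf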
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
377,718
I recently formatted my /home partition using BTRFS, but since it was my first touch with this FS, I didn't know about sub-volumes. Yesterday I reinstalled my Linux Mint and selected to mount my existing home partition as /home . I was surprised that my files first seemed to be lost, but then I noticed that Linux Mint had created a sub-volume called @home next to my existing user home folder. The current disk layout is:

250 GiB SSD
-- 64 GiB / (BTRFS)
-- 8 GiB swap
-- ~170 GiB /home (BTRFS)

When I try to move my home folder into the @home sub-volume, I get the error that there isn't enough space left (?!), although there's ~50 GiB left on this partition and I want to move, not copy, the files. I don't have any other disk right now that I could reformat to a non-NTFS format, which would be required to keep any symlinks. So my questions are: how do I move the files from the folder into the sub-volume correctly, and why doesn't moving the files work?
You can attempt a COW (copy-on-write) copy with cp -a --reflink=always /home/<whatever> /home/@home/ . It's a true copy as far as the Linux VFS (virtual filesystem) is concerned, but within BTRFS the files would share the same blocks/extents, so no additional space is required until you modify the files. If the copy succeeds, modify /etc/fstab to mount the subvolume instead of the whole filesystem:

/dev/sdXn /home btrfs subvol=@home

Then, reboot. If all is well, you can remove the original files:

mount /dev/sdXn /mnt
pushd /mnt
rm -fR <whatever>
popd
umount /mnt

Of course, you should have backups before attempting any of this.

Next

Once everything is good, please read the BTRFS wiki, in particular all the articles under Guides and Usage Information. BTRFS is pretty neat and all, but it doesn't work like your traditional Linux filesystems (extN, ReiserFS, etc.). It's not one of those things one can jump into and just figure out as you go. To use BTRFS well, you gotta know what you're doing. And reading the documentation is the best way to do that. I happen to love BTRFS, and I hope you enjoy it as well.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240156/" ] }
377,758
How do I give myself read/write/execute permissions for all files in the root directory without making the system unusable? Would this command do it?

sudo chown -R jacob3 /
sudo chmod -R a+rwx /

This is for my personal computer, where I (jacob3) am the only user. No private information is stored on my device without being encrypted. The reason I want to do this is to avoid having to use sudo. Also, I'm not actually going to do this. This is more of a hypothetical, in the sense of how this would be done.
Changing the ownership of all files on the system is a very very very bad idea. Consider just for starters that the first command you propose will change the owner of sudo , which means it will no longer have root privileges to allow you to run the second command. Ponder this. You are immediately breaking fundamental tools and you haven't even finished doing what it is you think you want to do. I would strongly suggest that you instead think about what the problem is that you are trying to solve for which you suggest your proposed solution. sudo is provided for a reason; avail yourself of it.
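To see concretely why sudo would break, look at its file mode; it relies on being a set-user-ID binary owned by root (output abridged, details vary by distribution):

$ ls -l /usr/bin/sudo
-rwsr-xr-x 1 root root ... /usr/bin/sudo

The "s" in the owner position means sudo runs with its owner's privileges. After chown -R jacob3 /, that owner is no longer root (and chown typically clears the set-user-ID bit as well), so sudo can no longer elevate privileges to undo the damage.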
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178962/" ] }
377,808
I am trying to download a file with wget, but in the background and inside a screen session that detaches. My initial command was:

wget http://www.example.com/file.zip -O temp/file.zip; mv temp/file.zip downloads/file.zip;

That would nicely move the file once downloaded, preventing me from handling downloads/ files while they are still downloading in the background. Now I need to run that in the background with screen, so I run and detach it:

screen -dm wget http://www.example.com/file.zip -O temp/file.zip;

But how can I still pass the move command so that it runs when the first one has completed?

Edit: I tried quotes based on DopeGhoti's answer:

screen -dm 'wget http://mirror.leaseweb.com/speedtest/100mb.bin -O 1.bin; mv 1.bin 2.bin'
cannot identify account 'wget http:'.

and this:

screen 'wget http://mirror.leaseweb.com/speedtest/100mb.bin -O 1.bin; mv 1.bin 2.bin'
cannot exec 'wget http://mirror[...] no such file or directory

Edit: I tried with full /usr/bin/wget and /usr/bin/mv paths; it complains about a missing session name. I gave it a session name with -S foo , and now it exits silently, with no such screen to resume and no files downloaded:

screen -dm -S foo '/usr/bin/wget http://mirror.leaseweb.com/speedtest/100mb.bin -O 1.bin; /usr/bin/mv 1.bin 2.bin'
It works if I specify bash -c :

screen -dm bash -c 'command1; command2;'

User meuh provided this solution in comments, thanks.
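Applied to the original download, the whole thing would look something like this (a sketch; the -S name just labels the session and is my own addition, and && is used instead of ; so the move only happens after a successful download):

screen -dmS filedl bash -c 'wget http://www.example.com/file.zip -O temp/file.zip && mv temp/file.zip downloads/file.zip'

# later, check on it or reattach:
screen -ls
screen -r filedl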
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166991/" ] }
377,812
I'm trying to create a new variable array out of the unique values that are in another array, but I'm not getting the desired result.

Original Array

# echo ${owner[*]}
390920ad-2858-e651-a4af-a9eaa6acaebb 390920ad-2858-e651-a4af-a9eaa6acaebb e14c2413-7179-44f8-dfc3-b8624dcb10bb 390920ad-2858-e651-a4af-a9eaa6acaebb e14c2413-7179-44f8-dfc3-b8624dcb10bb 390920ad-2858-e651-a4af-a9eaa6acaebb e14c2413-7179-44f8-dfc3-b8624dcb10bb 390920ad-2858-e651-a4af-a9eaa6acaebb 390920ad-2858-e651-a4af-a9eaa6acaebb 390920ad-2858-e651-a4af-a9eaa6acaebb e14c2413-7179-44f8-dfc3-b8624dcb10bb 390920ad-2858-e651-a4af-a9eaa6acaebb e14c2413-7179-44f8-dfc3-b8624dcb10bb 390920ad-2858-e651-a4af-a9eaa6acaebb 390920ad-2858-e651-a4af-a9eaa6acaebb 390920ad-2858-e651-a4af-a9eaa6acaebb 390920ad-2858-e651-a4af-a9eaa6acaebb 0a452389-5ed2-e46f-ad15-cc538c82650d 4232f23d-ed48-4b14-c0ea-aa911fd24920 8ee1b05f-2473-4c37-bfc5-ae393921b939

Command I'm using

uniq=($(printf "%s\n" "${owner[@]}" | sort -u))

Issue (it's storing all the unique values as a single value under index 0)

# echo ${uniq[0]}
0a452389-5ed2-e46f-ad15-cc538c82650d
390920ad-2858-e651-a4af-a9eaa6acaebb
4232f23d-ed48-4b14-c0ea-aa911fd24920
8ee1b05f-2473-4c37-bfc5-ae393921b939
e14c2413-7179-44f8-dfc3-b8624dcb10bb
# echo ${uniq[1]}
#

Anyone know a better way of grabbing the unique values from this array? Using bash on SmartOS (similar to Solaris).

EDIT: I've tried the following as well, which will store each value under its own index number but does not remove the duplicate values:

uniq=($(printf "%s " "${owner[@]}" | sort -u))
uniq=($(printf "%s\n" "${owner[@]}" | sort -u | tr '\n' ' '))

should do it. Or, as noted in a comment, modify your IFS.
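To verify the result, an interactive sketch (it assumes the owner array from the question is populated; the expected output is the five distinct values it contains):

$ uniq=($(printf "%s\n" "${owner[@]}" | sort -u | tr '\n' ' '))
$ echo "${#uniq[@]}"      # number of unique elements
5
$ printf '%s\n' "${uniq[@]}"
0a452389-5ed2-e46f-ad15-cc538c82650d
390920ad-2858-e651-a4af-a9eaa6acaebb
4232f23d-ed48-4b14-c0ea-aa911fd24920
8ee1b05f-2473-4c37-bfc5-ae393921b939
e14c2413-7179-44f8-dfc3-b8624dcb10bb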
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
377,849
I have a two-digit month value (01 to 12). I need to get the three-letter month abbreviation (like JAN, FEB, MAR etc.). I am able to get it in mixed case using the following command:

date -d "20170711" | date +"%b"

The output is "Jul". I want it to be "JUL". Is there a standard date option to get it?
^  use upper case if possible

Result:

$ date +%^b
JUL

Bonus: how I got this answer: man date Enter /case Enter n
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239690/" ] }
377,874
Looking for some help turning a CSV into variables. I tried using IFS, but seems you need to define the number of fields. I need something that can handle varying number of fields. *I am modifying my original question with the current code I'm using (taken from the answer provided by hschou) which includes updated variable names using type instead of row, section etc. I'm sure you can tell by my code, but I am pretty green with scripting, so I am looking for help to determine if and how I should add another loop or take a different approach to parsing the typeC data because although they follow the same format, there is only one entry for each of the typeA and typeB data, and there can be between 1-15 entries for the typeC data. The goal being only 3 files, one for each of the data types. Data format: Container: PL[1-100] TypeA: [1-20].[1-100].[1-1000].[1-100]-[1-100] TypeB: [1-20].[1-100].[1-1000].[1-100]-[1-100] TypeC (1 to 15 entries): [1-20].[1-100].[1-1000].[1-100]-[1-100] *There is no header in the CSV, but if there were it would look like this (Container, typeA, and typeB data always being in position 1,2,3, and typeC data being all that follow): Container,typeA,typeB,typeC,tycpeC,typeC,typeC,typeC,.. CSV: PL3,12.1.4.5-77,13.6.4.5-20,17.3.577.9-29,17.3.779.12-33,17.3.802.12-60,17.3.917.12-45,17.3.956.12-63,17.3.993.12-42PL4,12.1.4.5-78,13.6.4.5-21,17.3.577.9-30,17.3.779.12-34PL5,12.1.4.5-79,13.6.4.5-22,17.3.577.9-31,17.3.779.12-35,17.3.802.12-62,17.3.917.12-47PL6,12.1.4.5-80,13.6.4.5-23,17.3.577.9-32,17.3.779.12-36,17.3.802.12-63,17.3.917.12-48,17.3.956.12-66PL7,12.1.4.5-81,13.6.4.5-24,17.3.577.9-33,17.3.779.12-37,17.3.802.12-64,17.3.917.12-49,17.3.956.12-67,17.3.993.12-46PL8,12.1.4.5-82,13.6.4.5-25,17.3.577.9-34 Code: #!/bin/bash#Set input file_input="input.csv"# Pull variables in from csv# read file using while loopwhile read; do declare -a COL=( ${REPLY//,/ } ) echo -e "containerID=${COL[0]}\ntypeA=${COL[1]}\ntypeB=${COL[2]}" >/tmp/typelist.txt idx=1 while [ $idx -lt 10 ]; do echo "typeC$idx=${COL[$((idx+2))]}" >>/tmp/typelist.txt let idx=idx+1#whack off empty variablessed '/\=$/d' /tmp/typelist.txt > /tmp/typelist2.txt && mv /tmp/typelist2.txt /tmp/typelist.txt#set variables from temp file. /tmp/typelist.txtdonesleep 1#Parse data in this loop.#echo -e "\n"echo "Begin Processing for $container"#echo $typeA#echo $typeB#echo $typeC#echo -e "\n"#Strip - from sub data for extra parsing typeAsub="$(echo "$typeA" | sed 's/\-.*$//')"typeBsub="$(echo "$typeB" | sed 's/\-.*$//')"typeCsub1="$(echo "$typeC1" | sed 's/\-.*$//')"#strip out first two decimils for extra parsingtypeAprefix="$(echo "$typeA" | cut -d "." -f1-2)"typeBprefix="$(echo "$typeB" | cut -d "." -f1-2)"typeCprefix1="$(echo "$typeC1" | cut -d "." 
-f1-2)"#echo $typeAsub#echo $typeBsub#echo $typeCsub1#echo -e "\n"#echo $typeAprefix#echo $typeBprefix#echo $typeCprefix1#echo -e "\n"echo "Getting typeA dataset for $typeA"#call api script to pull data ; echo out for testecho "API-gather -option -b "$typeAsub" -g all > "$container"typeA-dataset"sleep 1 echo "Getting typeB dataset for $typeB"#call api script to pull data ; echo out for testecho "API-gather -option -b "$typeBsub" -g all > "$container"typeB-dataset"sleep 1 echo "Getting typeC dataset for $typeC1"#call api script to pull data ; echo out for testecho "API-gather -option -b "$typeCsub" -g all > "$container"typeC-dataset"sleep 1 echo "Getting additional typeC datasets for $typeC2-15"#call api script to pull data ; echo out for testecho "API-gather -option -b "$typeCsub2-15" -g all >> "$container"typeC-dataset"sleep 1 echo -e "\n"done < "$_input"exit 0 Speed isnt a concern, but if I've done anything really stupid up there, feel free to slap me in the right direction. :)
^ use upper case if possible Result: $ date +%^bJUL Bonus: how I got this answer: man date Enter /case Enter n
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240270/" ] }
377,876
Hopefully somebody can help me or at least get me pointed in the correct direction. I've got a NAT instance on AWS which is providing outbound traffic for a host to the internet. This works just fine. What I need to do is take an arbitrary port (let's use port 26 as an example) from the HOST and, on the NAT machine, translate that to appear to come from the NAT machine on port 25. This rule is what I'm trying to make work:

iptables -A POSTROUTING -t nat -o eth0 -p tcp --dport 26 -j SNAT --to 172.17.2.125:25

The rule currently will happily change the IP address but leaves the outgoing port unaltered. A redirect will happily change the port but won't change the source IP (so it won't leave the box on the correct public IP). I've thought about using something like pfSense or some other firewall product which I think might be able to do this, but it seems like overkill for something which seems "simple". I've tried googling like crazy but I must be googling the wrong terms... I've tried the "transparent proxy" but that didn't seem to work either.

Updating this a bit:

HOST = 172.17.1.125, outbound SMTP connection via port 26
NAT = 172.17.2.126
DESTINATION = 216.1.2.3 port 25 (standard SMTP port)

What is needed is to have HOST (internal) -> NAT -> DESTINATION. To make this happen correctly, the NAT server must translate port 26 traffic to 172.17.2.126 port 25.
^ use upper case if possible Result: $ date +%^bJUL Bonus: how I got this answer: man date Enter /case Enter n
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240281/" ] }
377,883
Of course we know cat /proc/cpuinfo | grep "cpu cores" will give output like this:

[root@14:47 ~]# cat /proc/cpuinfo | grep "cpu cores"
cpu cores : 2
cpu cores : 2
cpu cores : 2
cpu cores : 2

But actually I want to get the total number of cpu cores, i.e. I want the result to be:

cpu cores: 8

How can I get such a result?
If you are only interested in the sum , you can also use GNU awk: cat /proc/cpuinfo |grep "cpu cores" | awk -F: '{ num+=$2 } END{ print "cpu cores", num }' Edit: This is the correct answer for the OP's intention to sum the numbers of the last column. However the question about finding out how many cores (physical/virtual) are on the machine is given in the other answers to the question.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237289/" ] }
377,894
I see that users including myself at work do ssh to 127.0.0.1. In my case I do it as a different user and to a non-standard port as part of reaching my Linux work environment (the hosting). I wonder why I do that? The command I use is ssh [email protected] -p xxxxx where xxxxx is the port number (not port 22 but something completely different). The procedure is part of me inheriting old Linux installations at work. What would be the difference between ssh [email protected] and just su user ?
If you use ssh [email protected] , you are effectively logging in again, so you get the standard login environment for that user. su user will just change user. A better comparison would be with su - user , which does set the environment. In that case, there seems to be little difference unless you are doing something special. I wonder if the people who are using ssh are just using inherited company lore because no-one knew about - with su !
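A small demonstration of the environment difference the answer describes (a sketch; "user" and the marker variable are placeholders):

$ cd /tmp
$ export MARKER=from_my_session
$ su user -c 'echo marker=$MARKER; pwd'      # plain su: environment and cwd carried over
Password:
marker=from_my_session
/tmp
$ su - user -c 'echo marker=$MARKER; pwd'    # su -: fresh login environment, like ssh
Password:
marker=
/home/user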
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/377894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
377,920
With the first version of Linux, is the correct version number 0.01 (as seen in Tanenbaum's OS book) or should the first version be written 0.0.1 including the dot?
The correct version is β€œ0.01”, as used in the tarball at the time ( available here ) and in the release notes .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/377920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
377,926
I tried different solutions but none of them work. I generated a key with ssh-keygen on a machine. I add this key on a linux server in /root/.ssh/authorized_keys , and it works perfectly. On a second linux server, I did the same. But it doesn't work. I tried different things : Put my key in /root/.ssh/authorized_keys2 Edited /etc/ssh/ssh_config and /etc/ssh/sshd_config (by commenting and uncommenting lines) Gave the right rights (700 to .ssh directory and 600 to authorized_keys files) Nothing seems to work... Still asking me a password. What could be the problem ? Thanks a lot. EDIT : `sshd_config Port 22# Use these options to restrict which interfaces/protocols sshd will bind to#ListenAddress ::#ListenAddress 0.0.0.0Protocol 2# HostKeys for protocol version 2HostKey /etc/ssh/ssh_host_rsa_key#HostKey /etc/ssh/ssh_host_dsa_key#HostKey /etc/ssh/ssh_host_ecdsa_keyHostKey /etc/ssh/ssh_host_ed25519_key#Privilege Separation is turned on for securityUsePrivilegeSeparation yes# Lifetime and size of ephemeral version 1 server keyKeyRegenerationInterval 3600ServerKeyBits 1024# LoggingSyslogFacility AUTHLogLevel INFO# Authentication:LoginGraceTime 120#PermitRootLogin without-passwordPermitRootLogin noStrictModes yesAllowUsers userRSAAuthentication yesPubkeyAuthentication yes#AuthorizedKeysFile %h/.ssh/authorized_keys# Don't read the user's ~/.rhosts and ~/.shosts filesIgnoreRhosts yes# For this to work you will also need host keys in /etc/ssh_known_hostsRhostsRSAAuthentication no# similar for protocol version 2HostbasedAuthentication no# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication#IgnoreUserKnownHosts yes# To enable empty passwords, change to yes (NOT RECOMMENDED)PermitEmptyPasswords no# Change to yes to enable challenge-response passwords (beware issues with# some PAM modules and threads)ChallengeResponseAuthentication no# Change to no to disable tunnelled clear text passwords#PasswordAuthentication yes# Kerberos options#KerberosAuthentication no#KerberosGetAFSToken no#KerberosOrLocalPasswd yes#KerberosTicketCleanup yes# GSSAPI options#GSSAPIAuthentication no#GSSAPICleanupCredentials yesX11Forwarding yesX11DisplayOffset 10PrintMotd noPrintLastLog yesTCPKeepAlive yes#UseLogin no#MaxStartups 10:30:60#Banner /etc/issue.net# Allow client to pass locale environment variablesAcceptEnv LANG LC_*Subsystem sftp /usr/lib/openssh/sftp-server# Set this to 'yes' to enable PAM authentication, account processing,# and session processing. If this is enabled, PAM authentication will# be allowed through the ChallengeResponseAuthentication and# PasswordAuthentication. 
Depending on your PAM configuration,# PAM authentication via ChallengeResponseAuthentication may bypass# the setting of "PermitRootLogin without-password".# If you just want the PAM account and session checks to run without# PAM authentication, then enable this but set PasswordAuthentication# and ChallengeResponseAuthentication to 'no'.UsePAM yesCiphers [email protected],[email protected],aes256-ctr,aes128-ctrMACs [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 My ssh -vv result OpenSSH_6.7p1 Debian-5+deb8u3, OpenSSL 1.0.1t 3 May 2016debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to IP [IP] port 22.debug1: Connection established.debug1: permanently_set_uid: 0/0debug1: identity file /root/.ssh/id_rsa type 1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /root/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u3debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u3debug1: match: OpenSSH_6.7p1 Debian-5+deb8u3 pat OpenSSH* compat 0x04000000debug2: fd 3 setting O_NONBLOCKdebug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: [email protected],ssh-ed25519,[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dssdebug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email 
protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1debug2: kex_parse_kexinit: ssh-rsa,ssh-ed25519debug2: kex_parse_kexinit: [email protected],[email protected],aes256-ctr,aes128-ctrdebug2: kex_parse_kexinit: [email protected],[email protected],aes256-ctr,aes128-ctrdebug2: kex_parse_kexinit: [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160debug2: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup [email protected]: kex: server->client aes128-ctr [email protected] nonedebug2: mac_setup: setup [email protected]: kex: client->server aes128-ctr [email protected] nonedebug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_GROUPdebug2: bits set: 1580/3072debug1: SSH2_MSG_KEX_DH_GEX_INIT sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_REPLYdebug1: Server host key: ED25519 67:fd:fd:6e:a5:c1:32:96:9d:33:32:1a:cf:83:94:eadebug1: Host '[IP]:22' is known and matches the ED25519 host key.debug1: Found key in /root/.ssh/known_hosts:1debug2: bits set: 1546/3072debug2: kex_derive_keysdebug2: set_newkeys: mode 1debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: SSH2_MSG_SERVICE_REQUEST sentdebug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: /root/.ssh/id_rsa (0x7f84b8ce74a0),debug2: key: /root/.ssh/id_dsa ((nil)),debug2: key: /root/.ssh/id_ecdsa ((nil)),debug2: key: /root/.ssh/id_ed25519 ((nil)),debug1: Authentications that can continue: publickey,passworddebug1: Next authentication method: publickeydebug1: Offering RSA public key: /root/.ssh/id_rsadebug2: we sent a publickey packet, wait for replydebug1: Authentications that can continue: publickey,passworddebug1: Trying private key: /root/.ssh/id_dsadebug1: Trying private key: /root/.ssh/id_ecdsadebug1: Trying private key: /root/.ssh/id_ed25519debug2: we did not send a packet, disable method user@IP's password: /var/log/auth.log (on the distant server) Jul 12 12:24:43 ns3111463 sshd[12971]: input_userauth_request: invalid user root [preauth]Jul 12 12:24:44 ns3111463 sshd[12971]: Connection closed by IP [preauth]Jul 12 12:24:46 ns3111463 sshd[12973]: Connection closed by IP [preauth]Jul 12 12:24:48 ns3111463 sshd[11787]: Received signal 15; terminating.Jul 12 12:24:48 ns3111463 sshd[12977]: Server listening on 0.0.0.0 port 22.Jul 12 12:24:48 ns3111463 sshd[12977]: Server listening on :: port 22.Jul 12 12:24:50 ns3111463 sshd[12978]: Connection closed by IP [preauth]Jul 12 12:24:51 ns3111463 sshd[12980]: User root from IP not allowed because not listed in AllowUsersJul 12 12:24:51 ns3111463 sshd[12980]: input_userauth_request: invalid user root [preauth]Jul 12 12:24:51 ns3111463 sshd[12980]: Connection closed by IP [preauth]Jul 
12 12:25:49 ns3111463 sshd[12977]: Received signal 15; terminating.Jul 12 12:25:49 ns3111463 sshd[13029]: Server listening on 0.0.0.0 port 22.Jul 12 12:25:49 ns3111463 sshd[13029]: Server listening on :: port 22.Jul 12 12:25:50 ns3111463 sshd[13030]: User root from IP not allowed because not listed in AllowUsersJul 12 12:25:50 ns3111463 sshd[13030]: input_userauth_request: invalid user root [preauth]Jul 12 12:25:51 ns3111463 sshd[13030]: Connection closed by IP [preauth]Jul 12 12:25:53 ns3111463 sshd[13032]: Connection closed by IP [preauth]Jul 12 12:37:41 ns3111463 sshd[13552]: Connection closed by IP [preauth]Jul 12 12:38:09 ns3111463 sshd[13598]: Connection closed by IP [preauth]
The correct version is β€œ0.01”, as used in the tarball at the time ( available here ) and in the release notes .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/377926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240314/" ] }
377,931
I would appreciate any help with the query below. It will require a bash script; I am new to this scripting technology. I have the below file at some location, say with filename MemberFile.txt:

#
[ID ] #1
[ADDRE1 ] Address Line #1
[ADDRE2 ] Mumbai City
[ADDRE3 ] India
#
[ID ] #2
[ADDRE1 ] House No 2
[ADDRE3 ] Green Society
[ADDRE4 ] Kolkatta
#
[ID ] #3
[ADDRE1 ] Plot Num 77
[ADDRE2 ] House No # [567]
[ADDRE3 ] greener Apt
#

The file can have millions of such records. I want to quickly iterate through each record and get and store the value for [ADDRE3 ] . Also check if that record contains either of the words 'society' or 'Num' (case insensitive). If yes, then get the value of tag [ID ] in that record. Expected output is #2 and #3. Please note that the below represents one record:

[ID ] #1
[ADDRE1 ] Address Line #1
[ADDRE2 ] Mumbai City
[ADDRE3 ] India
The correct version is β€œ0.01”, as used in the tarball at the time ( available here ) and in the release notes .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/377931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240104/" ] }
377,973
I recently upgraded my kernel from 3.16.4 (Debian jessie) to 4.9.0 (Debian stretch). Everything was fine, until I tried to "Hibernate" (suspend to disk). When I use the Hibernate option in LXDE, it appears to hibernate. I can hear the disk spindle ticking and writing data. But the problem appears when resuming from hibernation. The kernel successfully restores the image from swap, but then freezes and reboots, with all that work lost. I could not find an answer anywhere on the internet. Other people's problems come down to not setting /etc/initramfs-tools/conf.d/resume, wrong kernel parameters, or a wrong entry in /etc/fstab. I have these correct: the correct UUID in /etc/initramfs-tools/conf.d/resume, a correct fstab, and no resume kernel parameter set. I moved the swap partition outside of the extended partition to a primary one. The UUID was saved and applied to the new swap. The system reaches "Restoring image 100%" and then "Suspending consoles", and then it powers off and boots normally, with all work lost. Tried a clean install, but without luck. Happens only on i386 (32-bit x86); amd64 (64-bit x86) does not suffer.

Disk partition table layout:

NAME     FSTYPE LABEL    UUID        MOUNTPOINT
sda
├─sda1   ext4   HDD      <ROOT-UUID> /
└─sda2   swap   HDD-SWAP <SW-UUID>   [SWAP]
sr0

The sda2 was logical (residing inside the extended partition) before the upgrade.

Fstab:

UUID=<ROOT-UUID> / ext4 errors=remount-ro 0 1
UUID=<SW-UUID> none swap sw 0 0

/etc/initramfs-tools/conf.d/resume:

RESUME=UUID=<SW-UUID>

Kernel cmdline:

BOOT_IMAGE=/boot/vmlinuz-4.9.0-3-686-pae root=UUID=<ROOT-UUID> ro quiet

System information:

Computer: Compaq CQ60-120ec
Swap Size: 3.5GiB
Processor: AMD Athlon X2 64 QL-66
GPU: Nvidia Geforce 8200M G
Memory: 2G DDR2 667MHz
Desktop Environment: LXDE
Debian Version: 9 (stretch)
Kernel version: 4.9.0-3
Graphics Driver: nvidia legacy 304xxx

(I know the processor is 64-bit, but it came with a 32-bit OS originally, so I thought it was 32-bit until I examined /proc/cpuinfo.)
The issue is due to a conflict between hibernate and kASLR on x86-32. This can be solved by disabling kASLR with the nokaslr kernel boot option. x86-64 is not affected. For Grub this can be done by editing /etc/default/grub and adding nokaslr to the boot options, e.g.:

GRUB_CMDLINE_LINUX_DEFAULT="quiet nokaslr"

Then run update-grub to update the configuration and reboot to give it a try. I had exactly the same issue and it seems that only the PAE kernel is affected by that issue. The same kernel without PAE works without issues. The workaround for me was to install linux-image-686 and uninstall linux-image-686-pae and linux-image-4.9.0-4-686-pae. The exact kernel version may change over time due to upgrades, but basically the currently running PAE kernel needs to be replaced with a kernel without PAE. It has actually nothing to do with PAE support of the CPU, as my CPU supports PAE according to /proc/cpuinfo. But PAE is anyway not of much use on old notebooks. It has also nothing to do with kernel 4.9 PAE, as the same issue happens with kernel 4.13 PAE from Debian backports.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/377973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198649/" ] }
377,979
Let's take a simple for loop:

#!/bin/bash
for i in `seq 1 10`;
do
    echo $i
done

AFAIK a semicolon in bash scripts makes the shell execute the current command synchronously and then go to the next one. Pressing enter does literally the same, except it doesn't allow you to enter the following command, flushing the buffer immediately. So why can't the shell interpret the following line?

for i in `seq 1 10`; do; echo $i; done

How does this for loop actually work?
The syntax of a for loop from the bash manual page is for name [ [ in [ word ... ] ] ; ] do list ; done The semicolons may be replaced with carriage returns, as noted elsewhere in the bash manual page: "A sequence of one or more newlines may appear in a list instead of a semicolon to delimit commands." However, the reverse is not true; you cannot arbitrarily replace newlines with semicolons. Your multiline script can be converted to a single line as long as you observe the above syntax rules and do not insert an extra semicolon after the do : for i in `seq 1 10`; do echo $i; done
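To see the rule in action, an interactive sketch (the exact wording of the error message may vary slightly between bash versions):

$ for i in `seq 1 3`; do; echo $i; done      # extra ';' right after 'do' is invalid
bash: syntax error near unexpected token `;'
$ for i in `seq 1 3`; do echo $i; done       # valid one-liner
1
2
3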
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/377979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176344/" ] }
377,986
We are currently testing a new application on Linux that previously worked on Windows. It appears there were some hard coded paths in the program and a bunch of test files with long filenames with backslashes were created. For example: directory\subdirectory\subdirectory\subdirectory\filename.ext I need to take these files, create the directory they were supposed to be in and move them there. So, for example, the file above should actually be: directory/subdirectory/subdirectory/subdirectory/filename.ext How can I do this?
The syntax of a for loop from the bash manual page is for name [ [ in [ word ... ] ] ; ] do list ; done The semicolons may be replaced with carriage returns, as noted elsewhere in the bash manual page: "A sequence of one or more newlines may appear in a list instead of a semicolon to delimit commands." However, the reverse is not true; you cannot arbitrarily replace newlines with semicolons. Your multiline script can be converted to a single line as long as you observe the above syntax rules and do not insert an extra semicolon after the do : for i in `seq 1 10`; do echo $i; done
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/377986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240372/" ] }
378,016
When looking for matches with grep , I often notice that the subsequent search takes significantly less time than the first -- e.g. 25s vs. 2s. Obviously, it's not by reusing the data structures from its last run -- those should've been deallocated. Running a time command on grep , I noticed an interesting phenomenon:

real    24m36.561s
user    1m20.080s
sys     0m7.230s

Where does the rest of the time go? Is there anything I can do to make it run fast every time? (e.g. having another process read the files, before grep searches them.)
It is quite often related to the page cache . The first time, the data has to be read (physically) from the disk. The second time (for not too big files) it is likely to be sitting in the page cache. So you could issue first a command like cat(1) to bring the (not too big) file into the page cache (i.e. in RAM), then a second grep(1) (or any program reading the file) would generally run faster. (however, the data still needs to be read from the disk at some time) See also (sometimes useful in your application programs, but practically rarely) readahead(2) & posix_fadvise(2) and perhaps madvise(2) & sync(2) & fsync(2) etc.... Read also LinuxAteMyRAM . BTW, this is why it is recommended, when benchmarking a program, to run it several times. Also, this is why it could be useful to buy more RAM (even if you don't run programs using all of it for their data). If you want to understand more, read some book like e.g. Operating Systems : Three Easy Pieces
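A rough way to watch this happen (a sketch; the directory name and timings are illustrative only):

$ free -h                        # note the "buff/cache" column
$ time grep -r pattern ./logs    # first run: reads from disk, slow
$ time grep -r pattern ./logs    # second run: served from the page cache, much faster

# pre-warm the cache before the first grep:
$ cat ./logs/* > /dev/null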
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/378016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145784/" ] }
378,021
Trying to create GPG keys which will be used for an apt repository hosted on my Centos7 box. I created a new user "apt", and then tried to create the keys, but at the very end it states that I need a pass phrase, then instantly closes stating "cancelled by user". No it wasn't! I have since successfully repeated these same steps as root and as my standard username, which happens to be in the wheel group. Two questions: Is it a good idea to use different gpg keys for different uses such as this apt repository, and should keys ever be created as root? Why am I not able to create a gpg key for this user? Do I need to first create some other key for this user? Thanks

[apt@devserver ~]$ gpg --gen-key
gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Thu 12 Jul 2018 04:32:05 PM UTC
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: somename
Email address: [email protected]
Comment:
You selected this USER-ID:
    "somename <[email protected]>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
gpg: cancelled by user
gpg: Key generation canceled.
[apt@devserver ~]$
As to the "cancelled by user" error: GnuPG tries to make sure it's reading the passphrase directly from the terminal, not (e.g.) piped from stdin. To do so, it tries to open the tty directly. Unfortunately, file permissions get in the way β€” the tty device is owned by the user you log in as. So only that user and root can open it. GnuPG appears to report the error incorrectly, saying you canceled (when in fact it got a permission denied). As to if you should have a separate key for the repository: yes. There are a couple of reasons that come to mind: A repository can be maintained by more than one person. All of them will need access to the key. You obviously don't want to give them access to your personal key. The software processing new packages will need access to the key. For many repositories, that means you have to keep the key available on an Internet-connected machine. This necessitates a lower level of security than you'd ideally have on your personal key. If you're processing uploads automatically, you may even need to store the key with no passphrase. Obviously lowers security. In case of compromise of your personal key, it's nice to only have to revoke that. Same with compromise of the repository key. It makes revoking a compromised key cheaper. It's pretty normal to use your personal key to sign the repository key. As to running key generation as root: not ideal (don't run things as root without good reason), but likely not really an issue.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/378021", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137912/" ] }
378,022
In the example script below it is used to send to one or two email addresses, depending on which process is using the script. Do I need to add $3 to account for an additional email address, or is $2 sufficient?

(for file in /usr/app/tst/$1/MS_CASE_ST*.csv; do
    uuencode ${file} $(basename ${file})
done) | mailx -s "MS_CASE_ST*.csv file contains data. Please Research" $2

An example of how the script, example.sh, is executed:

$ ./example.sh output [email protected] [email protected]
$2 will always be only the second argument. $@ is an array of all the arguments, so if you want the second until the end you could do: ... | mailx -s "MS_CASE_ST*.csv file contains data. Please research" "${@:2}" the :2 is specifying an offset in the parameter expansion when expanding the $@ array
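A quick interactive check of that expansion (the addresses are hypothetical, just to stand in for the script's arguments):

$ set -- output user1@example.com user2@example.com   # simulate the positional parameters
$ echo "$2"
user1@example.com
$ echo "${@:2}"
user1@example.com user2@example.com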
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378022", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210133/" ] }
378,108
I am running CentOS/RHEL 6 and having the same issues as described in the referenced question below. I have tried all of the settings suggested in this almost identical question, but to no avail. On the server side I have the following sshd_config settings:

X11Forwarding yes
X11DisplayOffset 10

xauth is installed on the server, and after successfully connecting over SSH I do get a MAGIC-COOKIE in ~/.Xauthority . I do not get any xauth-related errors. When I ssh in using -X (and add verbosity for troubleshooting, -vvv ), I successfully connect. When I try to run xclock it fails with an error of " Can't open display: localhost:10.0 ". This is a STDOUT error and not an error from the ssh -vvv . I do NOT receive any failed X11 attempts in ssh. Then I try to verify the $DISPLAY variable, but get no output (it's not set). Is there some other setting somewhere that sets $DISPLAY properly? In this particular case, I can force the setting with export DISPLAY=localhost:10.0 , which then returns correctly after running echo $DISPLAY . Unfortunately, I still do NOT get any X-Windows program (e.g., xclock ) to come back. I still get the " Can't open display: localhost:10.0 " error. I'm at a loss. Any suggestions? Anything else that can set $DISPLAY during an SSH session?
Turns out the guidance here is correct. However, I did run into a unique issue that may help others. I started troubleshooting with -vvv and because there was so much data, I missed a critical warning (lesson learned is to start broader ( -v )). The host key for the server changed (new build) and I disabled key checking in my ssh_config, so because it was a mismatched key X11 forwarding was disabled by SSH.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/378108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169617/" ] }
378,148
It is well-known that empty text files have zero bytes. However, each of them contains metadata, which according to my research is stored in inodes, and that does use space. Given this, it seems logical to me that it is possible to fill a disk purely by creating empty text files. Is this correct? If so, how many empty text files would I need to fill a disk of, say, 1GB? To do some checks, I ran df -i , but this apparently shows the % of inodes being used(?) rather than how much space they take up.

Filesystem             Inodes  IUsed    IFree IUse% Mounted on
udev                   947470    556   946914    1% /dev
tmpfs                  952593    805   951788    1% /run
/dev/sda2            28786688 667980 28118708    3% /
tmpfs                  952593     25   952568    1% /dev/shm
tmpfs                  952593      5   952588    1% /run/lock
tmpfs                  952593     16   952577    1% /sys/fs/cgroup
/dev/sda1                   0      0        0     - /boot/efi
tmpfs                  952593     25   952568    1% /run/user/1000
/home/lucho/.Private 28786688 667980 28118708    3% /home/lucho
This output suggests 28786688 inodes overall, after which the next attempt to create a file in the root filesystem (device /dev/sda2 ) will return ENOSPC ("No space left on device"). Explanation: on the original *nix filesystem design, the maximum number of inodes is set at filesystem creation time. Dedicated space is allocated for them. You can run out of inodes before you run out of space for data, or vice versa. The most common default Linux filesystem ext4 still has this limitation. For information about inode sizes on ext4, look at the manpage for mkfs.ext4. Linux supports other filesystems without this limitation. On btrfs , space is allocated dynamically. "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside inodes for extended attributes ). Of course you can still run out of disk space by creating too much metadata / directory entries. Thinking about it, tmpfs is another example where inodes are allocated dynamically. It's hard to know what the maximum number of inodes reported by df -i would actually mean in practice for these filesystems. I wouldn't attach any meaning to the value shown. "XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule. "BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h " – Peter Cordes in a comment to this answer
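For the static-allocation case described above, the inode budget is chosen when the filesystem is created; a sketch with mke2fs options (the device name is hypothetical):

# one inode per 16 KiB of space instead of the default bytes-per-inode ratio
$ mkfs.ext4 -i 16384 /dev/sdb1

# or request an absolute number of inodes
$ mkfs.ext4 -N 2000000 /dev/sdb1

# afterwards, df -i shows the fixed total in the "Inodes" column
$ df -i /mnt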
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/378148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192321/" ] }
378,163
I was wondering if it is possible to convert my boot system from xfs to ext4. If it is possible, how do I do so?
You can do this using fstransform , which is a tool to convert a filesystem type into another: fstransform /dev/sda1 ext4 Currently it supports all main Linux filesystems i.e. ext2, ext3, ext4, jfs, ntfs, reiserfs, xfs.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234183/" ] }
378,270
When I use find , it often finds multiple results like:

find -name pom.xml
./projectA/pom.xml
./projectB/pom.xml
./projectC/pom.xml

I often want to select only a specific result (e.g. edit ./projectB/pom.xml ). Is there a way to enumerate find output and select a file to pass into another application? Like:

find <print line nums?> -name pom.xml
1 ./projectA/pom.xml
2 ./projectB/pom.xml
3 ./projectC/pom.xml

!! | <get 2nd entry> | xargs myEditor

?

[Edit] I've bumped into some peculiar bugs with some of the solutions mentioned, so I'd like to explain steps to reproduce:

git clone http://git.eclipse.org/gitroot/platform/eclipse.platform.swt.git
cd eclipse.platform.swt.git
<now try looking for 'pom.xml' and 'feature.xml' files>

[Edit] Solution 1: So far a combination of 'nl' (enumerate output), head & tail seems to work if I combine them into functions and use $(!!). i.e:

find -name pom.xml | nl   # look for files, enumerate output
# I then define a function called "nls"
nls () {
    head -n $1 | tail -n 1
}
# I then type: (suppose I want to select item #2)
<my command> $(!!s 2)
# I press enter, it expands like: (suppose my command is vim)
vim $(find -name pom.xml | nls 2)
# bang, file #2 opens in vim and Bob's your uncle.

[Edit] Solution 2: Using "select" seems to work quite well as well. e.g.:

findexec () {
    # Usage: findexec <cmd> <name/pattern>
    # ex: findexec vim pom.xml
    IFS=$'\n';
    select file in $(find -type f -name "$2"); do
        #$EDITOR "$file"
        "$1" "$file"
        break
    done;
    unset IFS
}
Use bash 's built-in select :

IFS=$'\n'; select file in $(find -type f -name pom.xml); do
    $EDITOR "$file"
    break
done; unset IFS

For the "bonus" question added in the comment:

declare -a manifest
IFS=$'\n'; select file in $(find -type f -name pom.xml) __QUIT__; do
    if [[ "$file" == "__QUIT__" ]]; then
        break
    else
        manifest+=("$file")
    fi
done; unset IFS
for file in ${manifest[@]}; do
    $EDITOR "$file"
done
# This for loop can, if $EDITOR == vim, be replaced with
# $EDITOR -p "${manifest[@]}"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/378270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77271/" ] }
378,282
Given these file names:

$ ls -1
file
file name
otherfile

bash itself does perfectly fine with embedded whitespace:

$ for file in *; do echo "$file"; done
file
file name
otherfile
$ select file in *; do echo "$file"; done
1) file
2) file name
3) otherfile
#?

However, sometimes I might not want to work with every file, or even strictly in $PWD , which is where find comes in. It also handles whitespace nominally:

$ find -type f -name file\*
./file
./file name
./directory/file
./directory/file name

I'm trying to concoct a whitespace-safe version of this scriptlet which will take the output of find and present it to select :

$ select file in $(find -type f -name file); do echo $file; break; done
1) ./file
2) ./directory/file

However, this explodes with whitespace in the filenames:

$ select file in $(find -type f -name file\*); do echo $file; break; done
1) ./file             3) name               5) ./directory/file
2) ./file             4) ./directory/file   6) name

Ordinarily, I would get around this by messing around with IFS . However:

$ IFS=$'\n' select file in $(find -type f -name file\*); do echo $file; break; done
-bash: syntax error near unexpected token `do'
$ IFS='\n' select file in $(find -type f -name file\*); do echo $file; break; done
-bash: syntax error near unexpected token `do'

What is the solution to this?
If you only need to handle spaces and tabs (not embedded newlines) then you can use mapfile (or its synonym, readarray) to read into an array. E.g. given

$ ls -1
file
other file
somefile

then

$ IFS= mapfile -t files < <(find . -type f)
$ select f in "${files[@]}"; do ls "$f"; break; done
1) ./file
2) ./somefile
3) ./other file
#? 3
./other file

If you do need to handle newlines, and your bash version provides a null-delimited mapfile [1], then you can modify that to IFS= mapfile -t -d '' files < <(find . -type f -print0). Otherwise, assemble an equivalent array from null-delimited find output using a read loop:

$ touch $'filename\nwith\nnewlines'
$ files=()
$ while IFS= read -r -d '' f; do files+=("$f"); done < <(find . -type f -print0)
$ select f in "${files[@]}"; do ls "$f"; break; done
1) ./file
2) ./somefile
3) ./other file
4) ./filename
with
newlines
#? 4
./filename?with?newlines

[1] the -d option was added to mapfile in bash version 4.4 iirc
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/378282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20246/" ] }
378,301
I want to launch the wine executable (Version 2.12), but I get the following error ( $ =shell prompt): $ winebash: /usr/bin/wine: No such file or directory$ /usr/bin/winebash: /usr/bin/wine: No such file or directory$ cd /usr/bin$ ./winebash: ./wine: No such file or directory However, the file is there: $ which wine/usr/bin/wine The executable definitely is there and no dead symlink: $ stat /usr/bin/wine File: /usr/bin/wine Size: 9712 Blocks: 24 IO Block: 4096 regular fileDevice: 802h/2050d Inode: 415789 Links: 1Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2017-07-13 13:53:00.000000000 +0200Modify: 2017-07-08 03:42:45.000000000 +0200Change: 2017-07-13 13:53:00.817346043 +0200 Birth: - It is a 32-bit ELF: $ file /usr/bin/wine/usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped I can get the dynamic section of the executable: $ readelf -d /usr/bin/wineDynamic section at offset 0x1efc contains 27 entries: Tag Type Name/Value 0x00000001 (NEEDED) Shared library: [libwine.so.1] 0x00000001 (NEEDED) Shared library: [libpthread.so.0] 0x00000001 (NEEDED) Shared library: [libc.so.6] 0x0000001d (RUNPATH) Library runpath: [$ORIGIN/../lib32] 0x0000000c (INIT) 0x7c000854 0x0000000d (FINI) 0x7c000e54 [more addresses without file names] However, I cannot list the shared object dependencies using ldd : $ ldd /usr/bin/wine/usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory strace shows: execve("/usr/bin/wine", ["wine"], 0x7fff20dc8730 /* 66 vars */) = -1 ENOENT (No such file or directory)fstat(2, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 4), ...}) = 0write(2, "strace: exec: No such file or di"..., 40strace: exec: No such file or directory) = 40getpid() = 23783exit_group(1) = ?+++ exited with 1 +++ Edited to add suggestion by @jww : The problem appears to happen before dynamically linked libraries are requested, because no ld debug messages are generated: $ LD_DEBUG=all winebash: /usr/bin/wine: No such file or directory Even when only printing the possible values of LD_DEBUG , the error occurs instead $ LD_DEBUG=help winebash: /usr/bin/wine: No such file or directory Edited to add suggestion of @Raman Sailopal: The problem seems to lie within the executable, as copying the contents of /usr/bin/wine to another already created file produces the same error root:bin # cp cat testcmd root:bin # testcmd --helpUsage: testcmd [OPTION]... [FILE]...Concatenate FILE(s) to standard output.[rest of cat help page]root:bin # dd if=wine of=testcmd 18+1 records in18+1 records out9712 bytes (9.7 kB, 9.5 KiB) copied, 0.000404061 s, 24.0 MB/sroot:bin # testcmdbash: /usr/bin/testcmd: No such file or directory What is the problem or what can I do to find out which file or directory is missing? uname -a : Linux laptop 4.11.3-1-ARCH #1 SMP PREEMPT Sun May 28 10:40:17 CEST 2017 x86_64 GNU/Linux
This:

$ file /usr/bin/wine
/usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped

Combined with this:

$ ldd /usr/bin/wine
/usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory

Strongly suggests that the system does not have the /lib/ld-linux.so.2 ELF interpreter. That is, this 64-bit system does not have any 32-bit compatibility libraries installed. Thus, @user1334609's answer is essentially correct.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/378301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54138/" ] }
378,307
I know about pwdx , but it would be really nice if top showed the PWD: in other words, I want to see PWD and CPU/mem usage side by side. Does anyone have a script or one-liner to combine the output from top / ps with pwdx with periodic refresh?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/378307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81987/" ] }
378,310
I made some Live USBs using both Tuxboot and USB Image Writer but I can't boot any of them. I've tried most of them on other computers and they all work. When I press ESC while booting I'm taken to the grub menu, but only my main system appears there (Linux Mint 18.1) and an option to go to the BIOS config. If I go to the BIOS and change the boot order so that EFI Flash goes first the system reboots but boots normally on my main OS. The next time I access the BIOS the order is returned to the original settings. A few notes: I have a samsung NP900 laptop with SSD I'm using the linux kernel v4.10 on a Mint 18.1 I have secure boot set as custom, and I haven't tried without it because I'm worried by system won't boot The output of efibootmgr is BootCurrent: 0006Timeout: 0 secondsBootOrder: 0006,0005,0004,0002,0000Boot0000* Windows Boot ManagerBoot0002* Windows Boot ManagerBoot0004* UEFI: Generic Flash Disk 5.00Boot0005* UEFI: Generic Flash Disk 5.00Boot0006* ubuntu Those two Windows entries are there, but I don't have any Windows installed. I changed the boot order with sudo efibootmgr -o 0004,0005,0006,0000,0002 and rebooted but again, system booted into main OS. And after I checked boot order again it was set as 0006,0005,0004,0002,0000 , which is not what it was when I restarted it.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/378310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121808/" ] }
378,373
I want to create a dummy, virtual output on my Xorg server on current Intel iGPU (on Ubuntu 16.04.2 HWE, with Xorg server version 1.18.4). It is the similiar to Linux Mint 18.2, which one of the xrandr output shows the following: Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767...eDP1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 0mm x 0mm...VIRTUAL1 disconnected (normal left inverted right x axis y axis)... In the Linux Mint 18.2, I can turn off the built-in display ( eDP1 ) and turn on the VIRTUAL1 display with any arbitrary mode supported by the X server, attach x11vnc to my main display and I'll get a GPU accelerated remote desktop. But in Ubuntu 16.04.2, that's not the case. The VIRTUAL* display doesn't exist at all from xrandr . Also, FYI, xrandr's output names is a little bit different on Ubuntu 16.04.2, where every number is prefixed with a - . E.g. eDP1 in Linux Mint becomes eDP-1 in Ubuntu, HDMI1 becomes HDMI-1 , and so on. So, how to add the virtual output in Xorg/xrandr? And how come Linux Mint 18.2 and Ubuntu 16.04.2 (which I believe uses the exact same Xorg server, since LM 18.2 is based on Ubuntu, right?) can have a very different xrandr configurations? Using xserver-xorg-video-dummy is not an option, because the virtual output won't be accelerated by GPU.
Create a 20-intel.conf file:

sudo vi /usr/share/X11/xorg.conf.d/20-intel.conf

Add the following configuration information into the file:

Section "Device"
    Identifier "intelgpu0"
    Driver "intel"
    Option "VirtualHeads" "2"
EndSection

This tells the Intel GPU to create 2 virtual displays. You can change the number of VirtualHeads to your needs. Then logout and login. You should see VIRTUAL1 and VIRTUAL2 when you run xrandr. Note if you were using the modesetting driver previously (which is the modern default) switching to the intel driver will cause the names of displays to change from, eg, HDMI-1 or DP-1 to HDMI1 or DP1.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/378373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240651/" ] }
378,494
I have a bunch of files in the following format: 2014-11-19.8.ext2014-11-26.1.ext2014-11-26.2.blah.ext2014-11-26_3.ext2014-11-26.4.stuff_here.ext2014-12-03.1. could be anything.ext2014-12-032b.ext2014-11-26 613 adva.ext My goal is to iterate over the entire list of files and to take the date formatting from YYYY-MM-DD and store that in a variable in the format of YYYYMMDD for further processing (in my case It's going to be pushed into a touch command). So normally I would match against this regular expression: (\d{4})-(\d{2})-(\d{2}).* And then use $1$2$3 to get my desired pattern, however I'm not sure how to do this in bash / zsh . How can this be done within a shell script as such?
Using parameter expansion:

$ touch 2014-11-19.8.ext 2014-11-26.1.ext
$ for f in *.ext; do d="${f:0:4}${f:5:2}${f:8:2}"; echo "$d"; done
20141119
20141126

${f:0:4} means 4 characters starting from index 0, and f is the variable name; replace echo "$d" with your code.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47012/" ] }
378,547
I would like to split a string into substrings, separated by some separator (which is also a string itself). How can I do that using bash only? (for minimalism, and my main interest) or If allowing some text processing program? (for convenience when the program is available) Thanks. Simple example, split 1--123--23 by -- into 1 , 123 and 23 . split 1?*123 by ?* into 1 and 123
Pure bash solution, using IFS and read. Note that the strings shouldn't contain $'\2' (or whatever else you use for IFS; unfortunately $'\0' doesn't work, but e.g. $'\666' does):

#!/bin/bash
split_by () {
    string=$1
    separator=$2
    tmp=${string//"$separator"/$'\2'}
    IFS=$'\2' read -a arr <<< "$tmp"
    for substr in "${arr[@]}" ; do
        echo "<$substr>"
    done
    echo
}
split_by '1--123--23' '--'
split_by '1?*123' '?*'

Or use Perl:

perl -E 'say for split quotemeta shift, shift' -- "$separator" "$string"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/378547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
378,554
I have a 10+ TB NFS /home for 100+ users plus public folders. I need to change the username and UID of 10 of those users. At first, I was thinking of running the following: find /home -uid 812 -exec chown NEWUSER {} \; Now, the issue on this is that it will go through all my 10TB once and change whatever file it finds with uid 812 to NEWUSER, which is what I want. But it will take a pretty long time, and will do it just for that user; then I will have to run the command again for each of the other 9 users, turning it from a pretty long time to a pretty long time * 9. Besides the fact that I don't like scripting, I guess a script would be a friend here, but I don't know where to start. I want to use the find command and check all the files from /home. then: IF FILEOWNER IS 813 then NEWOWNER IS NEWUSER1IF FILEOWNER IS 814 then NEWOWNER IS NEWUSER2 ... and so on. Do you think there is a way to do this so I don't have to scan 10 GB of data 10 times -- to just scan once?
Do the whole thing using find . find /home ! -type l \( \ -uid 812 -exec chown NEWUSER {} + \ -o -uid 813 -exec chown ANOTHER {} + \ -o -uid 814 -exec chown SOMEONE {} + \ -o -uid 815 -exec chown SOMEGUY {} + \) One directory structure traversal. All chown s complete. We exclude symlinks as otherwise the chown would apply to the target of the symlink. On some systems, you can change the owner of a symlink with chown -h . On those, you could add the -h and remove the ! -type l . Note that if any of the files are setuid , this will screw that up. You can handle that also using find . find 's business is evaluating expressions β€” not locating files. Yes, find certainly locates files; but that's really just a side effect. β€” Unix Power Tools
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240803/" ] }
378,560
I've recently installed Debian 9 and encountered the following error. During installation I've set up 'root' password and I've set up 'user' with his own password. Later when I log into 'user' account and want to install some package I have this problem. If I run: sudo apt-get install 'package' then I get this message: 'user' is not in sudoers list And if I try to log into 'root' terminal with: su and enter password, I get: su: Authentification error P.S. I understand that question may be really silly, but I've not found any information about it in internet, so I need to ask it here.
It seems you have been bitten by a bug in the Debian 9 installer, as described in this forum topic: http://forums.debian.net/viewtopic.php?f=17&t=133604 There's a workaround given in the (currently) last post in that thread. As I recall, the sudo command is not configured in Debian unless there is no root password given on install. Formerly, the sudo command was not even installed by default in Debian.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240809/" ] }
378,566
From https://www.gnu.org/software/bash/manual/bashref.html#The-Set-Builtin set [--abefhkmnptuvxBCEHPT] [-o option-name] [argument …]set [+abefhkmnptuvxBCEHPT] [+o option-name] [argument …] ... -- If no arguments follow this option, then the positional parameters are unset. Otherwise, the positional parameters are set to the arguments, even if some of them begin with a β€˜-’. - Signal the end of options, cause all remaining arguments to be assigned to the positional parameters. The -x and -v options are turned off. If there are no arguments, the positional parameters remain unchanged. Using β€˜+’ rather than β€˜-’ causes these options to be turned off. The options can also be used upon invocation of the shell. The current set of options may be found in $-. The remaining N arguments are positional parameters and are assigned, in order, to $1, $2, … $N. The special parameter # is set to N. It seems that there are three ways to set the position parameters: set -- argumentset - argumentset argument What are their differences? Thanks.
The difference between -- and - is that when - is used, the -x and -v options are also unset. $ set -vx$ echo "$-"himvxBHs # The options -v and -x are set.$ set - a b c$ echo "$- <> $@" # The -x and -v options are turned off.himBHs <> a b c That's the usual way in which shells accepted the - , however, in POSIX, this option is "unspecified": If the first argument is '-', the results are unspecified. The difference between set -- and plain set is quite commonly used. It is clearly explained in the manual: -- If no arguments follow this option, then the positional parameters are unset. Otherwise, the positional parameters are set to the args, even if some of them begin with a -. The -- signals the "end of options" and any argument that follows even if it start with a - will be used as a Positional argument. $ set -- -a -b -e -f arg1$ echo "$@"-a -b -e -f arg1 Instead: $ set -a -b -e -f arg1$ echo "$@"arg1 But also some shell options have changed. Not using any of - or -- will allow the setting of set options with variables that expand to options names (even if quoted): $ echo "$-"himBHs$ a='-f'$ set "$a"$ echo "$-"fhimBHs
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/378566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
378,584
In my-script, "$1" is a string of tokens, separated by whitespace. how can I "spread" this single argument into multiple arguments to pass to another program for example ./my-script2 ...$1 # spread the string into multiple arguments, delineating by whitepace Hopefully you understand the question, had trouble searching for answer to this. I tried this: ./my-script2 <<< "$1" and this: ./my-script2 "" <<< $1 and other combinations, but that didn't seem to work. I need to support Bash v3 and above.
./my-script2 $1 Unquoted parameters are subject to word splitting when expanded : The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting. This is the specified POSIX behaviour , though some shells (notably zsh) disable it by default. If you've changed the value of IFS from the default, that's on you.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
378,644
I want to install mongodb 3.4 on debian stretch. Unfortunately debian stretch packages are only mongodb 3.2 ( https://packages.debian.org/stretch/mongodb ). The mongodb docs only mentions debian 7 and 8 ( https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/ ). When using debian 8 commands I can't install mongodb packages because they have unmet dependencies. When allowing jessie-backports the unmet dependencies error is gone but I am unsure if I should do this and install jessie-backports packages in stretch. How would you install mongodb 3.4 on debian stretch? Thanks for any advice.
The error when you attempt to use the Debian 8 instructions suggests lots of missing dependencies, but in fact will work if you install the single actual missing dependency ( libssl1.0.0 ). For reference, to work out what was missing, I downloaded the mongod binary and had a look at ldd : adam@debian9:~/mongo/mongodb-linux-x86_64-debian81-3.4.6/bin$ ldd mongod linux-vdso.so.1 (0x00007ffd0e15d000) libssl.so.1.0.0 => not found libcrypto.so.1.0.0 => not found librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f93c6dff000) *snip* If you have a look at what is installed in Debian 9, basically we just have versions of libssl that are too new. The libssl and libcrypto libraries are both installed by the libssl package and it is pretty much standalone. Hence, we can just grab the Debian 8 libssl1.0.0 package and install that. The amd64 version of the package can be found here (just Google for libssl1.0.0 Jesse and your arch for another version). To install that package, download the file (in my case it was to Downloads) and then install it with dpkg : adam@debian9:~$ sudo dpkg -i /home/adam/Downloads/libssl1.0.0_1.0.1t-1+deb8u6_amd64.deb Selecting previously unselected package libssl1.0.0:amd64.(Reading database ... 126471 files and directories currently installed.)Preparing to unpack .../libssl1.0.0_1.0.1t-1+deb8u6_amd64.deb ...Unpacking libssl1.0.0:amd64 (1.0.1t-1+deb8u6) ...Setting up libssl1.0.0:amd64 (1.0.1t-1+deb8u6) ... With that complete we quickly re-check ldd : adam@debian9:~/mongo/mongodb-linux-x86_64-debian81-3.4.6/bin$ ldd mongod linux-vdso.so.1 (0x00007ffdf25de000) libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f86bc12d000) libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f86bbd31000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f86bbb29000)*snip* Success! Now let's retry the package installation of mongodb-org : adam@debian9:~$ sudo apt install mongodb-orgReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following additional packages will be installed: mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-toolsThe following NEW packages will be installed: mongodb-org mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.Need to get 66.8 MB of archives.After this operation, 270 MB of additional disk space will be used.Do you want to continue? [Y/n] Get:1 http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4/main amd64 mongodb-org-shell amd64 3.4.6 [7,980 kB]Get:2 http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4/main amd64 mongodb-org-server amd64 3.4.6 [14.2 MB]Get:3 http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4/main amd64 mongodb-org-mongos amd64 3.4.6 [8,103 kB]Get:4 http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4/main amd64 mongodb-org-tools amd64 3.4.6 [36.5 MB]Get:5 http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4/main amd64 mongodb-org amd64 3.4.6 [3,820 B]Fetched 66.8 MB in 7s (9,509 kB/s) Selecting previously unselected package mongodb-org-shell.(Reading database ... 
126491 files and directories currently installed.)Preparing to unpack .../mongodb-org-shell_3.4.6_amd64.deb ...Unpacking mongodb-org-shell (3.4.6) ...Selecting previously unselected package mongodb-org-server.Preparing to unpack .../mongodb-org-server_3.4.6_amd64.deb ...Unpacking mongodb-org-server (3.4.6) ...*snip*Adding system user `mongodb' (UID 119) ...Adding new user `mongodb' (UID 119) with group `nogroup' ...Not creating home directory `/home/mongodb'.Adding group `mongodb' (GID 123) ...Done.Adding user `mongodb' to group `mongodb' ...Adding user mongodb to group mongodbDone.Setting up mongodb-org (3.4.6) ... Finally, let's make sure the service starts and we can connect with a shell: adam@debian9:~$ sudo systemctl start mongodadam@debian9:~$ mongoMongoDB shell version v3.4.6connecting to: mongodb://127.0.0.1:27017MongoDB server version: 3.4.6 And there you have it - Jesse packages working on Stretch. I'm sure that there will be an official release soon that will make this obsolete, but in the meantime this is a relatively painless workaround.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219653/" ] }
378,733
At https://nixos.org/ I can view the recent releases of NixOS. Is there a command I can run to see which version is on my machine?
From the Nixos manual : nixos-version
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236812/" ] }
378,752
I have seen several times the use of "list context" and "string context". I know and understand the use of such descriptions in perl. They apply to $ and @ . However, when used in shell descriptions: two contexts in shell syntax: list context and string context. the absence of quotes (in list contexts) You always need quotes around variables in all list contexts leaving a variable unquoted in list context (as in echo $var) They seem diffuse as a term that has not been defined anywhere or at best, poorly documented. There is no definition in POSIX for that, acording to google Is this ( from this ) the gist of it ? : In a nutshell, double quotes are necessary wherever a list of words or a pattern is expected. They are optional in contexts where a raw string is expected by the parser. But it seems like a dificult term to use. How could we find "what the result should be" when "the result is needed" to know if it is a string or list context. Or could it be preciselly and correctly defined?
There is no such concept in the standard shell language. There are no "contexts" only expansion steps. Quotes are first identified in the tokenization which produces words. They glue words together so that abc"spaces here"xyz is one "word". The important thing to understand is that quotes are preserved through the subsequent expansion steps, and the original quotes are distinguished from quotes that might arise out of expansions. Parameters are expanded without regard for double quotes. Later, though, a field splitting process takes place which harkens back to the first tokenization. Once again, quotes prevent splitting and, once again, are preserved. Pathname expansion ("globbing") takes place after this splitting. The preserved quotes prevent it: globbing operators are not recognized inside quotes. Finally the quotes are removed by a late stage called "quote removal". Of course, only the original quotes! POSIX does a good job of presenting the process in a way that is understandable; attempts to demystify it with extraneous concepts (that may be misleading) are only going to muddle the understanding. People throwing around ad hoc concepts like "list context" should formalize their thinking to the point that it can provide a complete alternative specification for all of the processing, which is equivalent (produces the same results). And then, avoid mixing concepts between the parallel designs: use one explanation or the other. A "list context" or "string context" makes sense in a theory of shell expansion in which these are well defined, and the processing steps are organized around these concepts. If I were to guess, then "list context" refers to the idea that the shell is working with a list of tokenized words such as the two-word list {foo} {abc" x "def} . The quotes are not part of the second word: its content is actually abc x def ; they are semantic quotes which prevent the splitting on whitespace. Inside these quotes, we have "string context". However, a possible implementation of these expansion steps is not to actually have quotes which are identified as the original quotes, but some sort of list data structure, so that {foo} {abc" x "def} is, say, a list of lists in which the quoted parts are identified as different kinds of nodes (and the quotes are gone). Using Lisp notation it could be: (("foo") ;; one-element word ("abc" (:dq-str " x ") "def")) ;; three-element word The nodes without a label are literal text, :dq-str is a double-quote region. Another type could be :sq-str for a single quoted item. The expansion can walk this structure, and then do different things based on whether it's looking at a string object, a :dq-str expression or whatever. File expansion and field splitting would be suppressed within both :dq-str or :sq-str . But parameter expansion does take place within :dq-str . "Quote removal" would then correspond to a final pass which takes the pieces and catenates the strings, flattening the interior list structure and losing the type indicating symbols, resulting in: ("foo" "abc x def") ;; plain string list, usable as command arguments Now here, note how in the second item we have ("abc" (:dq-str " x ") "def") . The first and last items are unwrapped: they are direct elements of the list and so we can say these are in the "list context". Whereas, the middle " x " is wrapped in a :dq-str expression, so that is "(double quoted) string context". What "list" refers to in "list context" is anyone's guess without a clearly defined model such as this. 
Is it the master word list? Or a list of chunks representing one word?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/378752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
378,990
I want to change in file /var/www with /home/lokesh/www with sed command sed -i 's///var//www///home//lokesh//www/g' lks.php but this give error sed: couldn't open file ww///home//lokesh//www/g: No such file or directory
Not sure if you know, but sed has a great feature where you do not need to use a / as the separator. So, your example could be written as:

sed -i 's#/var/www#/home/lokesh/www#g' lks.php

It does not need to be a # either, it could be any single character. For example, using a 3 as the separator:

echo "foo" | sed 's3foo3bar3g'
bar
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/378990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77823/" ] }
379,002
I am using a CentOS and I have enabled the USB driver for GSM and CDMA modems using make menuconfig . But, how does it work? After changing in menuconfig, is the modification performed in the moment? Or do I have to compile the whole kernel in order to get this configuration?
With make menuconfig you only change the configuration file .config which is used in the compilation process. One doesn't need to use this menuconfig tool - there are other scripts for that or one can even edit .config by hand (although this is error prone and thus not recommended). So in order to finish the task you've started you need to compile the kernel with new settings, copy that kernel to /boot (or wherever your boot loader is reading), optionally update link /usr/src/linux to point to correct source, add to grub (or other bootloader you use) a line with new kernel, and after that just reboot, select the previously set line in the grub menu, and voilΓ .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201019/" ] }
379,103
From Gawk's manual: When awk statements within one rule are short, you might want to put more than one of them on a line. This is accomplished by separating the statements with a semicolon (β€˜;’). This also applies to the rules themselves. Thus, the program shown at the start of this section could also be written this way: /12/ { print $0 } ; /21/ { print $0 } NOTE: The requirement that states that rules on the same line must be sepa rated with a semicolon was not in the original awk language; it was added for consistency with the treatment of statements within an action. But I have seen from https://stackoverflow.com/q/20262869/156458 awk '$2=="no"{$3="N/A"}1' file Aren't $2=="no"{$3="N/A"} and 1 two statements? why are they not separated by anything? Thanks.
Very good question! I think the key is this: "Thus, the program shown at the start of this section could also be written this way:" Is not mandatory to be written in this way. It is a kind of alternative way. This means (and has been proved in action) that below statements are both correct : $ awk '/12/ { print $0 } /21/ { print $0 }' file$ awk '/12/ { print $0 } ; /21/ { print $0 }' file I think this semicolon usage is to cover really short - idiomatic code , for example cases that we omit the action part and we want to apply multiple rules on the same line: $ awk '/12//21/' fileawk: cmd. line:2: /12//21/awk: cmd. line:2: ^ unexpected newline or end of string In this case using a semicolon is mandatory to separate rules (=conditions): $ awk '/12/;/21/' file Since the {action} part is ommited in both rules/both conditions, the default action will be performed for every rule = {print $0}
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
379,170
I used to work with Fedora until F15 and later I saw there were three different flavors (workstation, server and atomic(?)). These versions are mutually exclusive? What are their differences and purpose?
The difference is in the packages that are installed. Fedora Workstation installs a graphical X Windows environment (GNOME) and office suites. Fedora Server installs no graphical environment (useless in a server) and provides installation of DNS, mailserver, webserver, etc. Fedora Atomic is designed around Kubernetes and containers.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193334/" ] }
379,175
How do I make postfix send emails from user@mydomain instead of root@hostname ? Even after installing and entering my domain when it asked, it's still being sent with the hostname and not the domain I provided. In my main.cf file myorigin = /etc/mailname and /etc/mailname contains: gateblogs.com which is my domain. I have managed to fix my problem temporarily by changing my hostname to my domain name. However, how can I change who the email is from; currently mail is shown from root I want it to be something else.
Postfix itself does not "set" the from address for a mail (as long as you haven't really tweaked the postfix configuration). The from address header of an e-mail is set by the mail client which asks postfix to deliver the mail (Postfix is an MTA). Hence you are looking in the wrong place - as far as I understand your question. You mentioned that you are using the mail command to test your configuration. This neat little command is - by default - using the system user name under which the command gets executed. In your case this seems to be the user root. Try executing mail as a different user and you'll see the from part change. And since the command mail does - by default - not append the domain part to the "from" header of the mail it hands over to postfix, postfix appends the myorigin part automatically to root. However, mail does not limit you to that default "from" header; you can read about it on the web or in the manual page of mail. Also consider using sendmail. Please note that postfix is a beast when it comes to configurability. You can accomplish almost everything you want postfix to do if you really understand its architecture and configuration files. But since you are asking a rather 'newbie-ish' question you may not yet want to go down that road...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241173/" ] }
379,181
This question is not about how to write a properly escaped string literal. I couldn't find any related question that isn't about how to escape variables for direct consumption within a script or by other programs. My goal is to enable a script to generate other scripts. This is because the tasks in the generated scripts will run anywhere from 0 to n times on another machine, and the data from which they are generated may change before they're run (again), so doing the operations directly, over a network will not work. Given a known variable that may contain special characters such as single quotes, I need to write that out as a fully escaped string literal, e.g. a variable foo containing bar'baz should appear in the generated script as: qux='bar'\''baz' which would be written by appending "qux=$foo_esc" to the other lines of script. I did it using Perl like this: foo_esc="'`perl -pe 's/('\'')/\\1\\\\\\1\\1/g' <<<"$foo"`'" but this seems like overkill. I have had no success in doing it with bash alone. I have tried many variations of these: foo_esc="'${file//\'/\'\\\'\'}'"foo_esc="'${file//\'/'\\''}'" but either extra slashes appear in the output (when I do echo "$foo" ), or they cause a syntax error (expecting further input if done from the shell).
Bash has a parameter expansion option for exactly this case : ${parameter@Q} The expansion is a string that is the value of parameter quoted in a format that can be reused as input. So in this case: foo_esc="${foo@Q}" This is supported in Bash 4.4 and up. There are several options for other forms of expansion as well, and for specifically generating complete assignment statements ( @A ).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/379181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161288/" ] }
379,235
There is a record: 45 * * * 1 script.sh and 45 0-23 * * 1 script.sh The desired effect is to run the script 45 minutes after every hour on Mondays. Are they identical? If not, what is the difference?
Yes, they are identical. I'd suggest the first syntax as it is more concise.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209233/" ] }
379,250
I created one table of employees. 1 Andy Account 2 Grecie HR 3 Jyorge Marketing 4 Seeya HR 5 Princy Account 6 Siya Production Here the names of 4th employee and 6th employee are same but spellings are different. So i want to display only that records using grep command. I tried it like: grep S[iee]ya emp and grep S[[i][ee]]ya emp but it didn't work.Any solution?
You need the OR operator '|' in grep:

grep -E 'S(i|ee)ya' emp
4 Seeya HR
6 Siya Production
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184505/" ] }
379,272
When I do git push I get the command prompt like Username for 'https://github.com': then I enter my username manually like Username for 'https://github.com': myusername and then I hit Enter and I get prompt for my password Password for 'https://[email protected]': I want the username to be written automatically instead of manually having to type it all the time. I tried doing it with xdotool but it didn't work out. I have already done git config --global user.name myusernamegit config --global user.email [email protected] but still it always asks for me to type manually
In Terminal, enter the following to enable credential memory:

$ git config --global credential.helper cache

You may update the default password cache timeout (in seconds):

# This cache timeout is in seconds
$ git config --global credential.helper 'cache --timeout=3600'

You may also use (but please use the single quotes, else double quotes may break for some characters):

$ git config --global user.name 'your user name'
$ git config --global user.password 'your password'
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/379272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224025/" ] }
379,347
I have the following two test files: test1 test2 Both of them are blank. Now I issue the following commands: $ cat > test1 Enter This is a test file Enter Ctrl + D $ cat > test2 Enter This is another test file Enter ^C Ctrl + C $ Now I check the contents of the two files $ cat test1This is a test file$ cat test2This is another test file$ So is there any real difference in the outcome if we use the above two methods to achieve the same outcome?
When the cat command is running, the terminal is in canonical input mode . This means, in short, that the terminal's line discipline is handling line editing, and is responding to all of the special characters configured for the terminal (viewable and settable with the stty command). The cat command is simply read() ing from its standard input until a read() call returns zero bytes read, the POSIX convention for hitting end of file. Terminals do not really have an "end". But there is a circumstance where read() of a terminal device returns zero bytes. When the line discipline receives the "EOF" special character, whatever that happens to be configured as at the time, it causes read() to return with whatever is in the editing buffer at that point. If the editing buffer was empty, that returns zero bytes read from read() , causing cat to exit. cat also exits in response to signals whose default actions are to terminate the process. The line discipline also generates signals in response to special characters. The "INTR" and "QUIT" special characters cause the INT and QUIT signals to be sent to the foreground process (group), which will be/contain the cat process. The default action of these signals is to terminate the cat process. Which leads to the observable differences: Ctrl + D only has this action when it is the EOT special character. This is usually the case, but it is not necessarily the case. Similarly, Ctrl + C only has its action when it is the INTR special character. Ctrl + D will not cause cat to terminate when the line is not in fact empty at the time. An interrupt generated by Ctrl + C will, though. A naΓ―ve implementation of cat in the C language will block buffer standard output if it finds it directed at a file, as in the question. In theory, this could lead to buffered and not yet output lines being lost if cat is terminated by SIGINT . In practice, the BSD and GNU C libraries implement a buffering mode that is not described in the C or C++ language standards. Standard output when redirected to file or pipe is smart buffered . It is block buffered; except that whenever the C library finds itself about to read() the beginning of a new line from any file descriptor that is open to a terminal device, it flushes standard output. (The BSD and GNU C libraries do not quite implement the same semantics and do more than this, strictly speaking, but this behaviour is a common subset.) Thus an interrupt signal will not cause lost buffered output when cat is built on top of such a C library. Of course, if cat is part of a command pipeline, some other process could be buffering the data, downstream of cat before those data reach an output file. So again when the line discipline sends SIGINT , which (by default) terminates all of the processes in the pipeline, input data buffered and not yet written will be lost; whereas terminating cat normally with the "EOF" special character will cause the pipeline to terminate normally, with all of the data passing to the downstream process before it receives an EOF indication from its read() of its standard input. Note that this bears very little relationship to what happens when your interactive shell is reading a line of input from you. When your shell is waiting for input, the terminal is in non-canonical input mode , in which mode the line discipline does not do any special handling of special characters. 
How your shell treats Ctrl + D and Ctrl + C is entirely up to the input editing library that your shell uses (libedit, readline, or ZLE) and how that editing library has been configured (with key bindings and suchlike). Further reading POSIX terminal interface . Wikipedia.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240931/" ] }
379,352
Yes, I know it is a step into a lesser secure system, but the current setting makes it reasonable (the key is not important, but the signing has to be automatized). Google results say this: List the keys with a gpg --list-keys Edit the key with a gpg --edit-key C0DEEBED.... A gpg command line console starts, there a passwd command changes the passphrase Giving the password twice (in my case, simple enter) changes the key. However, it doesn't work, because gpg2 simply doesn't allow an empty password. What to do?
As of gpg version 2.2.17, gpg --edit-key <keyid> seems to work fine for removing a passphrase. Issue the command, then type passwd in the prompt. It will ask you to provide your current passphrase and then the new one. Just type Enter for no passphrase. Then type quit to quit the program.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52236/" ] }
379,374
How can I do logrotate? I can see no effect when I do logrotate: root@me-Latitude-E5550:/etc/logrotate.d# cd ..root@me-Latitude-E5550:/etc# cd ..root@me-Latitude-E5550:/# logrotate -d /etc/logrotate.confIgnoring /etc/logrotate.conf because of bad file mode.Handling 0 logsroot@me-Latitude-E5550:/# chmod 644 /etc/logrotate.d/*root@me-Latitude-E5550:/# cd /etc/logrotate.droot@me-Latitude-E5550:/etc/logrotate.d# lsapport custom pm-utils speech-dispatcher upstartapt dpkg ppp ufwcups-daemon lightdm rsyslog unattended-upgradesroot@me-Latitude-E5550:/etc/logrotate.d# cd ..root@me-Latitude-E5550:/etc# cd ..root@me-Latitude-E5550:/# logrotate -d /etc/logrotate.confIgnoring /etc/logrotate.conf because of bad file mode.Handling 0 logsroot@me-Latitude-E5550:/# cd /var/logroot@me-Latitude-E5550:/var/log# ls -larthtotal 34Mdrwxrwxrwx 2 root root 4,0K Feb 18 2016 speech-dispatcherdrwxrwxrwx 2 root root 4,0K Mai 19 2016 upstartdrwxrwxrwx 2 root root 4,0K Jul 19 2016 fsck-rwxrwxrwx 1 root root 31 Jul 19 2016 dmesg-rwxrwxrwx 1 root root 57K Jul 19 2016 bootstrap.logdrwxrwxrwx 3 root root 4,0K Jul 19 2016 hpdrwxr-xr-x 14 root root 4,0K Jul 19 2016 ..drwxrwxrwx 2 root root 4,0K Jun 14 13:17 aptdrwxrwxrwx 2 root root 4,0K Jun 14 13:20 installerdrwxrwxrwx 2 root root 4,0K Jun 19 09:50 unattended-upgrades-rwxrwxrwx 1 root root 3,8K Jun 19 09:55 fontconfig.log-rwxrwxrwx 1 root root 554 Jun 19 15:02 apport.log.1drwxrwxrwx 2 root root 4,0K Jun 20 07:35 lightdm-rwxrwxrwx 1 root root 836K Jun 20 07:35 syslog.1-rwxrwxrwx 1 root root 32K Jun 20 14:31 faillog-rw------- 1 root utmp 768 Jul 14 09:28 btmpdrwxrwxrwx 2 root root 4,0K Jul 14 10:38 dist-upgrade-rwxrwxrwx 1 root root 43K Jul 14 10:45 alternatives.log-rwxrwxrwx 1 root root 286K Jul 14 11:04 lastlogdrwxrwxrwx 2 root root 4,0K Jul 18 08:59 sysstat-rwxrwxrwx 1 root root 1,8M Jul 18 11:13 dpkg.logdrwxrwxrwx 2 root root 4,0K Jul 18 12:19 cups-rw-r----- 1 root adm 14K Jul 18 12:20 apport.log-rw-r--r-- 1 root root 32K Jul 18 18:38 Xorg.0.log.old-rwxrwxrwx 1 root root 1,9K Jul 18 18:44 gpu-manager.log-rw-r--r-- 1 root root 1009 Jul 18 18:44 boot.logdrwxrwxr-x 13 root syslog 4,0K Jul 18 18:44 .-rw-rw-r-- 1 root utmp 85K Jul 18 18:45 wtmp-rw-r--r-- 1 root root 29K Jul 18 21:17 Xorg.0.log-rwxrwxrwx 1 root root 5,7M Jul 18 21:18 kern.log-rwxrwxrwx 1 root root 580K Jul 18 21:50 auth.log-rwxrwxrwx 1 root root 24M Jul 18 21:54 syslogroot@me-Latitude-E5550:/var/log# logrotate -f /etc/logroate.conferror: cannot stat /etc/logroate.conf: No such file or directoryroot@me-Latitude-E5550:/var/log# logrotate -f /etc/logrotate.confroot@me-Latitude-E5550:/var/log# ls -larthtotal 34Mdrwxrwxrwx 2 root root 4,0K Feb 18 2016 speech-dispatcherdrwxrwxrwx 2 root root 4,0K Mai 19 2016 upstartdrwxrwxrwx 2 root root 4,0K Jul 19 2016 fsck-rwxrwxrwx 1 root root 31 Jul 19 2016 dmesg-rwxrwxrwx 1 root root 57K Jul 19 2016 bootstrap.logdrwxrwxrwx 3 root root 4,0K Jul 19 2016 hpdrwxr-xr-x 14 root root 4,0K Jul 19 2016 ..drwxrwxrwx 2 root root 4,0K Jun 14 13:17 aptdrwxrwxrwx 2 root root 4,0K Jun 14 13:20 installerdrwxrwxrwx 2 root root 4,0K Jun 19 09:50 unattended-upgrades-rwxrwxrwx 1 root root 3,8K Jun 19 09:55 fontconfig.log-rwxrwxrwx 1 root root 554 Jun 19 15:02 apport.log.1drwxrwxrwx 2 root root 4,0K Jun 20 07:35 lightdm-rwxrwxrwx 1 root root 836K Jun 20 07:35 syslog.1-rwxrwxrwx 1 root root 32K Jun 20 14:31 faillog-rw------- 1 root utmp 768 Jul 14 09:28 btmpdrwxrwxrwx 2 root root 4,0K Jul 14 10:38 dist-upgrade-rwxrwxrwx 1 root root 43K Jul 14 10:45 alternatives.log-rwxrwxrwx 1 root root 286K Jul 14 
11:04 lastlogdrwxrwxrwx 2 root root 4,0K Jul 18 08:59 sysstat-rwxrwxrwx 1 root root 1,8M Jul 18 11:13 dpkg.logdrwxrwxrwx 2 root root 4,0K Jul 18 12:19 cups-rw-r----- 1 root adm 14K Jul 18 12:20 apport.log-rw-r--r-- 1 root root 32K Jul 18 18:38 Xorg.0.log.old-rwxrwxrwx 1 root root 1,9K Jul 18 18:44 gpu-manager.log-rw-r--r-- 1 root root 1009 Jul 18 18:44 boot.logdrwxrwxr-x 13 root syslog 4,0K Jul 18 18:44 .-rw-rw-r-- 1 root utmp 85K Jul 18 18:45 wtmp-rw-r--r-- 1 root root 29K Jul 18 21:17 Xorg.0.log-rwxrwxrwx 1 root root 5,7M Jul 18 21:18 kern.log-rwxrwxrwx 1 root root 580K Jul 18 21:50 auth.log-rwxrwxrwx 1 root root 24M Jul 18 21:55 syslogroot@me-Latitude-E5550:/var/log#
You are changing permissions on the files in /etc/logrotate.d, but the "bad file mode" error is about /etc/logrotate.conf itself. You need to:

chmod 644 /etc/logrotate.conf

and

chown root:root /etc/logrotate.conf

and then it should work.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234608/" ] }
379,389
My goal is to get the helm binary from helm-v2.5.0-linux-amd64.tar.gz into my /usr/local/bin in as few steps as possible. The challenge is that the .tar.gz file contains the binary in a linux-amd64 subdirectory. So when I do this: $ wget -P /tmp https://storage.googleapis.com/kubernetes-helm/helm-v2.5.0-linux-amd64.tar.gz$ tar -xzvf /tmp/helm-v2.5.0-linux-amd64.tar.gz -C /usr/local/bin linux-amd64/helm I end up with /usr/local/bin/linux-amd64/helm instead of /usr/local/bin/linux-amd64/helm . Is there a tar parameter I am missing or do I need to include some mv & cleanup steps?
You need to use --strip-components=NUMBER : --strip-components=NUMBER Strip NUMBER leading components from file names on extraction. which means that your command should look like this: tar -xzvf /tmp/helm-v2.5.0-linux-amd64.tar.gz -C /usr/local/bin --strip-components=1 linux-amd64/helm
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241328/" ] }
379,398
How can I switch off vim to automatically enter edit mode on middle mouse button click? Background: I like to create command strings like JJJj to concat three lines and then go one line down, select this command string with the mouse and paste it again and again into vim using the middle mouse button. However, recently, vim automatically enters insert mode on middle mouse click rendering this procedure dysfunctional. I don't know whether it's a change in vim itself or of some default configuration of my distribution (Gentoo), but I need a way to switch that off again. To clarify one thing: I don't have set mouse=a set (neither in my ~/.vimrc nor somewhere else. So for instance, left mouse clicks do not move the cursor to the clicked to location. This is probably the reason, why [cas]es tip to use set mouse= in ~/.vimrc doesn't solve this issue. Also, I'm talking about vim (in xterm ), not gvim . System details: Gentoo Linux Vanilla Linux 4.9.13 X.Org X Server 1.19.3 XTerm(327) vim 8.0.386 (seems modified by Gentoo)
This is fully answered here . See the excellent linked answer for details and alternative solution. Short answer: put :set t_BE= in your .vimrc file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125840/" ] }
379,464
I have this shell script saved in a file: it does some basic string substitution. #!/bin/shhtml_file=$1echo "html_file = $html_file"substr=.pdfpdf_file="${html_file/.html/.pdf}"echo "pdf_file = $pdf_file" If I paste it into the command line, it works fine: $ html_file="/home/max/for_pauld/test_no_base64.html" echo "html_file = $html_file" substr=.pdf pdf_file="${html_file/.html/.pdf}" echo "pdf_file = $pdf_file" gives html_file = /home/max/for_pauld/test_no_base64.htmlpdf_file = /home/max/for_pauld/test_no_base64.pdf That's the output from echo above - it's working as intended. But, when I call the script, with $ saucer "/home/max/for_pauld/test_no_base64.html" I get this output: html_file = /home/max/for_pauld/test_no_base64.html/home/max/bin/saucer: 5: /home/max/bin/saucer: Bad substitution Is my script using a different version of bash or something? Do I need to change my shebang line?
What is sh sh (or the Shell Command Language) is a programming language described by the POSIXstandard .It has many implementations ( ksh88 , dash , ...). bash can also beconsidered an implementation of sh (see below). Because sh is a specification, not an implementation, /bin/sh is a symlink(or a hard link) to an actual implementation on most POSIX systems. What is bash bash started as an sh -compatible implementation (although it predates the POSIX standard by a few years), but as time passed it has acquired many extensions. Many of these extensions may change the behavior of valid POSIX shell scripts, so by itself bash is not a valid POSIX shell. Rather, it is a dialect of the POSIX shell language. bash supports a --posix switch, which makes it more POSIX-compliant. It also tries to mimic POSIX if invoked as sh . sh = bash? For a long time, /bin/sh used to point to /bin/bash on most GNU/Linux systems. As a result, it had almost become safe to ignore the difference between the two. But that started to change recently. Some popular examples of systems where /bin/sh does not point to /bin/bash (and on some of which /bin/bash may not even exist) are: Modern Debian and Ubuntu systems, which symlink sh to dash by default; Busybox , which is usually run during the Linux system boot time as part of initramfs . It uses the ash shell implementation. BSDs, and in general any non-Linux systems. OpenBSD uses pdksh , a descendant of the Korn shell. FreeBSD's sh is a descendant of the original UNIX Bourne shell. Solaris has its own sh which for a long time was not POSIX-compliant; a free implementation is available from the Heirloom project . How can you find out what /bin/sh points to on your system? The complication is that /bin/sh could be a symbolic link or a hard link.If it's a symbolic link, a portable way to resolve it is: % file -h /bin/sh/bin/sh: symbolic link to bash If it's a hard link, try % find -L /bin -samefile /bin/sh/bin/sh/bin/bash In fact, the -L flag covers both symlinks and hardlinks,but the disadvantage of this method is that it is not portable β€”POSIX does not require find to support the -samefile option,although both GNU find and FreeBSD find support it. Shebang line Ultimately, it's up to you to decide which one to use, by writing the Β«shebangΒ» line. E.g. #!/bin/sh will use sh (and whatever that happens to point to), #!/bin/bash will use /bin/bash if it's available (and fail with an error message if it's not). Of course, you can also specify another implementation, e.g. #!/bin/dash Which one to use For my own scripts, I prefer sh for the following reasons: it is standardized it is much simpler and easier to learn it is portable across POSIX systems β€” even if they happen not to have bash , they are required to have sh There are advantages to using bash as well. Its features make programming more convenient and similar to programming in other modern programming languages. These include things like scoped local variables and arrays. Plain sh is a very minimalistic programming language.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/379464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27368/" ] }
379,473
I want to disable cron temporarily on Debian. As in, shut it down completely, so that it doesn't start up even on reboot. How can I do this? My current use case is that I want to shut down cron as part of a process for moving my system to new disks. NOTE: I don't want to edit individual cron files.
With systemd: sudo systemctl disable cron If you want to disable the daemon and stop it: sudo systemctl disable cronsudo systemctl stop cron (Usually you’d expect sudo systemctl disable cron --now to do the trick, but apparently it doesn’t for cron , for some reason.) With sysvinit : sudo update-rc.d cron disablesudo service cron stop All these have corresponding enable variants to re-enable the service.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
379,572
I have the following string: /tmp/test/folder1/test.txt I wish to use sed to substitute / for \/ - for example: \/tmp\/test\/folder1\/test.txt So I issue: echo "/tmp/test/folder1/test.txt" | sed "s/\//\\\\//g" Although it returns: sed: -e expression #1, char 9: unknown option to `s' I am escaping the forward slash and backslash - so not sure where I have gone wrong here
You need to escape (with backslash \ ) all substituted slashes / and all backslashes \ separately, so: $ echo "/tmp/test/folder1/test.txt" | sed 's/\//\\\//g'\/tmp\/test\/folder1\/test.txt but that's rather unreadable. However, sed allows to use almost any character as a separator instead of / , this is especially useful when one wants to substitute slash / itself, as in your case, so using for example semicolon ; as separator the command would become simpler: echo "/tmp/test/folder1/test.txt" | sed 's;/;\\/;g' Other cases: If one wants to stick with slash as a separator and use double quotes then all escaped backslashes have to be escaped one more time to preserve their literal values: echo "/tmp/test/folder1/test.txt" | sed "s/\//\\\\\//g" if one doesn't want quotes at all then yet another backslash is needed: echo "/tmp/test/folder1/test.txt" | sed s/\\//\\\\\\//g
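If the goal is just to build the escaped string inside a script, bash/ksh/zsh parameter expansion (not plain POSIX sh) avoids sed and its quoting headaches entirely:
p="/tmp/test/folder1/test.txt"
printf '%s\n' "${p//\//\\/}"    # prints \/tmp\/test\/folder1\/test.txt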
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/379572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241485/" ] }
379,606
I made a short script which can export various KDE settings from user home directory to use as a basis for a quick setup of the desktop environment on a different machine. I was successful with all the settings that were of interest to me, but only one is elusive: I can't seem to find where the chosen keyboard layouts are stored. Basically, I would like to get to all the configuration that can be manipulated from the KDE settings application under the System Settings > Hardware > Input Devices > Keyboard > Layout tab (particularly the layouts themselves and the keyboard shortcut to switch between them). Does anyone have any idea? Maybe these settings are not specific to KDE and manipulate different configuration files? Thanks for any tips.
After some time of searching and playing with grep , I was able to locate the configuration file: ~/.config/kxkbrc .
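A trick that makes this kind of hunt much quicker: change the setting once in the System Settings dialog, then immediately list which files under ~/.config were just modified, for example:
find ~/.config -type f -mmin -1    # files changed in the last minute
The freshly touched file (here kxkbrc ) is usually the one holding the option you just changed.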
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79861/" ] }
379,683
For example: I have string1 = 'abcd' , and string2 = 'xwyz' . I want to replace the 3rd character of string1 ('c') with the 4th character of string2 ('z'). Of course, string indexing starts from 0. How can I accomplish this?
With bash substring manipulation: s1="abcd"s2="xwyz"s1=${s1:0:2}${s2:3}${s1:3} ${s1:0:2} - the 1st slice containing ab (till the 3rd character c ) ${s2:3} - the 4th character of the s2 string to be inserted ${s1:3} - the last (4th) character of the s1 string Final s1 value: echo $s1abzd Or with GNU awk tool: gawk -v s2=$s2 -v FPAT='[a-z]' '{$3=substr(s2,4)}1' OFS="" <<< $s1abzd <<< $s1 - the first string s1 is considered as input content -v s2=$s2 - passing the second string s2 as a variable into awk script FPAT='[a-z]' - regex pattern defining a field value ( [a-z] - any alphabetic character) Alternatively, you could also apply the "empty" field separator FS="" treating each character as separate field: gawk -v s2=$s2 'BEGIN{ FS=OFS="" }{$3=substr(s2,4)}1' <<< $s1abzd
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241594/" ] }
379,705
Before we run this mkfs command to create a filesystem on a device, for example /dev/sde : how to know if some data already exist on this device?: mkfs.ext4 -j -m 0 /dev/sde -F I performed the mkfs.ext4 on /dev/sde , but did not get any warning about data that exist(-ed) on this drive, therefore all data got lost. How to avoid this in the future? # mkfs.ext4 -j -m 0 /dev/sde -Fmke2fs 1.42.9 (28-Dec-2013)Filesystem label=OS type: LinuxBlock size=4096 (log=2)Fragment size=4096 (log=2)Stride=0 blocks, Stripe width=0 blocks1310720 inodes, 5242880 blocks0 blocks (0.00%) reserved for the super userFirst data block=0Maximum filesystem blocks=2153775104160 block groups32768 blocks per group, 32768 fragments per group8192 inodes per groupSuperblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000Allocating group tables: doneWriting inode tables: doneCreating journal (32768 blocks): doneWriting superblocks and filesystem accounting information: done
how to know if some data already exist on /dev/sde You can try to mount it. You can try examining the disk partition table. But if you don't use the proper tool that can understand what's actually on the disk, whatever tool you use will likely report that there's no data on the disk. So in the final analysis, you need to know what's on the disk before you do something that will destroy the data on it. As a system administrator, you have the power to destroy data. You need to be careful. therefore all data get lost ... how to avoid this? Don't run mkfs.ext4 -j -m 0 /dev/sde -F on a disk that has data you don't want to lose. Seriously - that's the "fix" - don't do it . You ran a command to make a new filesystem on /dev/sde and even used the -F "force" option to ensure the command run no matter what it might do. Per the mkfs.ext4 man page : -F Force mke2fs to create a filesystem, even if the specified device is not a partition on a block special device, or if other parameters do not make sense. ... The data is gone. Learn from this and be more careful in the future.
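That said, there are tools that will report existing filesystem signatures on a device before you overwrite it, so a quick check costs nothing; for example:
lsblk -f /dev/sde    # shows filesystem type, label and UUID for the device and its partitions
blkid /dev/sde       # prints nothing if no known signature is found
wipefs /dev/sde      # without -a it only lists signatures, it does not erase anything
None of these can tell you whether the data still matters to someone, but an unexpected ext4 or LVM signature is a strong hint to stop and check before running mkfs.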
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
379,725
The locate program of findutils scans one or more databases of filenames and displays any matches. This can be used as a very fast find command if the file was present during the last file name database update. There are many kinds of databases nowadays, relational databases (with query language e.g. SQL), NoSQL databases document-oriented databases (e.g. MongoDB) Key-value database (e.g. Redis) Column-oriented databases (e.g. Cassandra) Graph database So what kind of database does updatedb update and locate use? Thanks.
Implementations of locate / updatedb typically use specific databases tailored to their requirements, rather than a generic database engine. You’ll find those specific databases documented by each implementation; for example: GNU findutils ’ is documented in locatedb(5) , and is pretty much just a list of files (with a specific compression algorithm); mlocate ’s is documented in mlocate.db(5) , and can also be considered a list of directories and files (with metadata).
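You can also peek at these databases directly; for instance with mlocate (paths and options vary by implementation and distribution, so check locate(1) on your machine):
locate -S                            # prints the database path plus counts of directories, files and bytes
file /var/lib/mlocate/mlocate.db     # file(1) identifies the mlocate database format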
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/379725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
379,774
I had a good running installation of Debian Jessie, but then I ran apt-get update && apt-get upgrade && apt-get dist-upgrade . And then after rebooting, it came directly to the BIOS. I realized that Grub was missing, so I ran a live cd and entered Rescue mode , mounted my root partition, + the boot partition and ran these commands: Grub finds the linux image: root@debian:~# update-grubGenerating grub configuration file ...Found background image: /usr/share/images/desktop-base/desktop-grub.pngFound linux image: /boot/vmlinuz-4.9.0-3-amd64Found initrd image: /boot/initrd.img-4.9.0-3-amd64Found linux image: /boot/vmlinuz-4.9.0-0.bpo.3-amd64Found initrd image: /boot/initrd.img-4.9.0-0.bpo.3-amd64Found linux image: /boot/vmlinuz-3.16.0-4-amd64Found initrd image: /boot/initrd.img-3.16.0-4-amd64Found Ubuntu 16.10 (16.10) on /dev/sdb2Adding boot menu entry for EFI firmware configurationdone And then grub-install : root@debian:~# grub-install /dev/sdaInstalling for x86_64-efi platform.Could not prepare Boot variable: No such file or directorygrub-install: error: efibootmgr failed to register the boot entry: Input/output error. lsblk : root@debian:~# lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 223.6G 0 disk β”œβ”€sda1 8:1 0 92.6G 0 part /β”œβ”€sda2 8:2 0 130.4G 0 part └─sda3 8:3 0 573M 0 part /boot/efi Did I do something wrong? Is there too little space on my /boot/efi partition? root@debian:~# ls -l /boot/efi/EFI/debian/total 120-rwx------ 1 root root 121856 Jul 20 20:29 grubx64.efi efibootmgr doesn't show a Debian installation: root@debian:~# efibootmgr --verbose | grep debian Edit : I keep getting this error every time I try and create a boot loader using efibootmgr : grub-install: info: executing efibootmgr -c -d /dev/sda -p 3 -w -L grub -l \EFI\grub\grubx64.efi.Could not prepare Boot variable: No such file or directorygrub-install: error: efibootmgr failed to register the boot entry: Input/output error.
Fixed the efibootmgr errors by mounting the Boot variables for efibootmgr : # mount -t efivarfs efivarfs /sys/firmware/efi/efivars And then efibootmgr gave me errors about space : Could not prepare Boot variable: No space left on device Fixed that by deleting dump files : # rm /sys/firmware/efi/efivars/dump-* And then ran the usual update-grub grub-install -v --target=x86_64-efi --recheck /dev/sda and it ran successfully!
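If you hit the "No space left on device" error and want to see what is eating the NVRAM before deleting anything, the efivarfs entries are ordinary files, so the usual tools work; for example:
ls -lS /sys/firmware/efi/efivars | head    # largest variables first
du -sh /sys/firmware/efi/efivars           # rough total
The dump-* variables are typically crash dumps left behind by the kernel's EFI pstore backend, which is why removing them frees enough space for efibootmgr to write its boot entry.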
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/379774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237568/" ] }
379,810
I can't find a way to find what program implements the org.freedesktop.Notifications service. Is it possible to ask DBus to tell me what program provides it? The reason for asking this question is quite banal: I found a new desktop notifications daemon I'd like to use, but it won't start and instead complains with this message Name Lost. Is Another notification daemon running? However, I am unable to determine what program is holding the name. I already uninstalled every other notification daemon, restarted X server, and even rebooted the machine. However, when I run this command: dbus-send --session --dest=org.freedesktop.DBus --type=method_call \--print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames string "org.freedesktop.Notifications" is present in the output, so something is holding the name, and I can't start my desired daemon.
The d-bus debug utility d-feet which is available as a package in many systems seems to be able to find the process id and command providing a service. For example, I ran it on a Fedora 23 xfce4 X11 systemd platform and selected Session Bus and entered the service name org.freedesktop.Notifications . It introspected the service, activating it, and showed the pid and the /usr/lib64/xfce4/notifyd/xfce4-notifyd command.
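The same information is available from the command line, because the bus driver itself can report the process id behind a name; something along these lines should work in any session where dbus-send is installed:
dbus-send --session --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.GetConnectionUnixProcessID string:org.freedesktop.Notifications
# the reply contains a uint32 PID; then identify the program, replacing <PID> with that number
ps -o pid,cmd -p <PID>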
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193672/" ] }
379,835
I have the following file: AA,trueAA,falseBB,falseCC,falseBB,trueDD,true I am trying to look for duplicates and remove the line that has the column value equals to true . as output it should be: AA,falseBB,falseCC,falseDD,true
awk -F, '$2 == "false" {data[$1]=$2 } $2=="true" { if ( data[$1]!="false" ) { data[$1]=$2 } } END { OFS=","; for (item in data) { print item,data[item] }}' input To expand the script vertically for explanation: BEGIN { FS="," # Set the input separator; this is what -F, does.}$2 == "false" { # For any line whose second field is "false", we data[$1]=$2 # will use that value no matter what.}$2=="true" { # For lines whose second field is "true", if ( data[$1]!="false" ) { # only keep if if we haven't yet seen a data[$1]=$2 # "false" }}END { # Now that we have tabulated our data, we OFS="," # can print it out by iterating through for (item in data) { # the array we created. print item,data[item] }}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/379835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116988/" ] }
379,842
In NixOS, I installed the package yarn as usual by running $ nix-env -i yarn . Now I am attempting to run yarn via $ yarn start . But this leads me to the following error. $ yarn start yarn start v0.20.3 $ webpack-dev-server --env dev sh: webpack-dev-server: command not found error Command failed with exit code 127. When I try to install webpack-dev-server in my usual NixOS way I get a 'matches no derivations' error. $ nix-env -i webpack-dev-servererror: selector β€˜webpack-dev-server’ matches no derivations I read that webpack-dev-server is an npm package, and am unsure of a couple questions regarding the relevance of that in this case. Does it make sense to use npm, a different package manager than nix,under Nix? If answer to (1) is yes, then how to install npm on NixOS? I do notsee npm available when searching via nix-env , as $ nix-env -qa npm also matches no derivations. What is the correct way to install webpack-dev-server on NixOS? EDIT I attempted to install webpack-dev-server following the commented link and was able to install node2nix , but am not able to follow through on step 2 listed in the readme there. I located the file referenced in step 2 in /nix/store at /nix/store/sgk7sxgqxrv2axkxjwc3y15apcqbrv1z-nixos-17.03.1482.1b57bf274a/nixos/pkgs/development/node-packages/node-packages.json I can open that file to view the npm packages listed, but the permissions are read-only, even running with sudo -- so I would need to edit it's permissions in order to alter it. It seems that I should not be editing this /nix/store file directly and should instead be manipulating it indirectly via nix. Am I correct that I should not be editing this file directly? If so, how else can I complete step 2 by using nix or something to add webpack-dev-server to it?
There are multiple ways to use npm packages through nix: For my personal projects, I use nix-shell then within the shell I use npm scripts to prevent the need for npm global packages (like with gulp). The process looks something like this (and is probably very similar for yarn): $ nix-shell -p nodejs-8_x[nix-shell:yourproject]$ npm install # installs npm deps to project-local node_modules[nix-shell:yourproject]$ npm exec (...) # using scripts configured in package.json This works well for me since none of my packages have binary dependencies. This post describes the creation of a default.nix for your project so you won't have to specify dependencies for every invocation of nix-shell, but it's optional. Another way is using node2nix (formerly npm2nix): node2nix -i node-packages.json # creates ./default.nixnix-shell # nix-shell will look for a default.nix, which the above will have generated Which will cause Nix to manage all npm packages in the project. It may be a good idea to become familiar with nix-shell, since trying to install node packages / any dependency in your nix profile (through nix-env or nox) defeats the purpose of nix by polluting the "global" namespace.
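For reference, the node-packages.json that node2nix consumes is, in its simplest form, just a JSON list of the npm package names you want, so for the case in the question it could look roughly like:
[
  "webpack-dev-server"
]
node2nix then generates the Nix expressions from that list; check the node2nix README for the exact invocation matching your nodejs version, since the details have changed between releases.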
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236812/" ] }
379,845
I am attempting to set up chrooted SFTP access to a RHEL 6.5 server. I have gone through the standard steps of editing the sshd_config file to match any users in the group an chroot them like so: Match group prisoners ChrootDirectory /home/%u AllowTCPForwarding no X11Forwarding no ForceCommand internal-sftp as well as to set Subsystem sftp internal-sftp The user 'test' has a directory as follows: [root@ip-10-0-1-158 ~]# ls -l /home/testtotal 4drwxrwxr-x. 3 root prisoners 4096 Jul 20 17:55 SFTP (I have also recursively set both ownership and access permissions on this directory) and is in the proper group: [root@ip-10-0-1-158 ~]# sudo -u test iduid=501(test) gid=498(prisoners) groups=498(prisoners) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 When I try to edit or create files in the /home/test/SFTP directory as the test user via my ssh session, everything works as intended. If I log in via WinSCP, I authenticate properly and can see the contents of the /home/test directory (but not modify them). However, it does not allow me to view, edit, or create files in the /home/test/SFTP directory. WinSCP error message: Error listing directory '/SFTP'.Permission denied.Error code: 3Error message from server: Permission denied Any help would be greatly appreciated. Note:I have successfully set up a similar sftp chrooted access on RHEL 7 and am struggling to understand why the user permissions seem to not be working via SFTP.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241731/" ] }
379,947
When running dd command to copy an ISO on Linux, I get a single progress print that stays open for a long time (many minutes). Then another at the end. The problem seems to be that a very large cache is being used which confuses dd 's output. sudo dd bs=4M if=my.iso of=/dev/sdc status=progress Output (first line shows for a long time). 1535115264 bytes (1.5 GB, 1.4 GiB) copied, 1.00065 s, 1.5 GB/s403+1 records in403+1 records out1692844032 bytes (1.7 GB, 1.6 GiB) copied, 561.902 s, 3.0 MB/s Is there a way to prevent this from happening so the progress output is meaningful?
From the first line we can tell dd has read and written 1.5GB in one second. Even an SSD can't write that fast. What happened is that the /dev/sdc block device accepted it (writeback), but didn't send it to disk but buffered it and started writing to disk at the rate the disk can take it. Something like 3MiB/s. The system can't buffer data indefinitely like that, there's only so much data it will accept to hold in that non-committed dirty state. So after a while (in your case, after more than 1.5GB have been written but less than 2 seconds have passed (as progress lines are written every second)), dd 's write() system call will block until the data has been flushed to the disk (during which it cannot write progress messages). When it gets through, dd can send the few extra missing megabytes, and that happens within less than a second, so you get only one extra progress line. To see a different behaviour, you could force the writes to be synchronous, that is not to return unless the data has been committed to disk. For instance by using oflag=sync or oflag=dsync or oflag=direct (not that I would advise doing that though).
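If what bothers you is mainly that the final figure looks wrong, a gentler option than oflag=sync is to have dd flush once at the end, so the closing statistics include the time spent emptying the cache:
sudo dd bs=4M if=my.iso of=/dev/sdc conv=fsync status=progress
# dd calls fsync() before exiting, so the last line reports the true overall rate
(The per-second progress lines will still jump around, for the reason described above.)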
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/379947", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63928/" ] }
379,982
I'm using tail -f a.txt to watch updates on a file called a.txt . If I update the file using something like ls -a >> a.txt in a second virtual console, the changes will display in real-time in the first one. If I update the file using Vim in a second virtual console, the changes will not display in the first one. I don't necessarily expect it to trigger an update in that window - but why exactly doesn't this update the terminal running the tail -f command?
If you edit a file with vim , typically it reads the file into memory, then writes a new file. So tail is now operating on an out of date copy of the file (which remains in the file system until tail (and any other program) stops using it. You can make tail follow the filename (rather than the file) by using: tail -F yourfile Note the upper case F .
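With GNU tail , -F is simply shorthand for following the name and retrying when the file is temporarily missing or replaced, which is exactly the situation an editor's save-by-rename creates:
tail -F yourfile
# equivalent on GNU tail: tail --follow=name --retry yourfile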
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/379982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237109/" ] }
380,005
Goal: Have crontab running at start up logging output from arp command in a txt file. > Crontab:> > # daemon's notion of time and timezones.> #> # Output of the crontab jobs (including errors) is sent through> # email to the user the crontab file belongs to (unless redirected).> #> # For example, you can run a backup of all your user accounts> # at 5 a.m every week with:> # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/> #> # For more information see the manual pages of crontab(5) and cron(8)> #> # m h dom mon dow command* * * * * arp -n > results.txt Unfortunately, instead of writing the output of arp -n it overwrites results.txt with a blank file. The weird thing is if I use arp -n > results.txt in the terminal I get: GNU nano 2.2.6 File: results.txt Address HWtype HWaddress Flags Mask Iface192.168.42.19 (incomplete) wlan0192.168.42.14 ether (incomplete) C wlan0192.168.42.13 (incomplete) wlan0192.168.42.18 (incomplete) wlan0192.168.1.1 ether (incomplete) C eth0192.168.1.25 ether (incomplete) C eth0192.168.42.12 ether (incomplete) C wlan0192.168.1.240 ether (incomplete) C eth0192.168.42.11 (incomplete) wlan0192.168.42.16 M A wlan0 Does anyone know how to fix this so I can get it running and updating the file using crontab?
The problem seems to be that probably crontab does not know the PATH where the arp command lives. I would use: * * * * * /usr/sbin/arp -n >> results.txt However, I would use arpwatch to monitor ARP changes. It work as a daemon, and as it registers the MAC changes in a file over time, together with the epoch time of the change. It also is able to send messages to syslog and emails. From man arpwatch Arpwatch keeps track for ethernet/ip address pairings. It syslogs activity and reports certain changes via email. Arpwatch uses pcap(3) to listen for arp packets on a local ethernet interface. Report Messages Here's a quick list of the report messages generated by arpwatch(1) (and arpsnmp(1)): new activity This ethernet/ip address pair has been used for the first time six months or more. new station The ethernet address has not been seen before. flip flop The ethernet address has changed from the most recently seen address to the second most recently seen address. (If either the old or new ethernet address is a DECnet address and it is less than 24 hours, the email version of the report is suppressed.) changed ethernet address The host switched to a new ethernet address. Syslog Messages Here are some of the syslog messages; note that messages that are reported are also sysloged. ethernet broadcast The mac ethernet address of the host is a broadcast address. ip broadcast The ip address of the host is a broadcast address. bogon The source ip address is not local to the local subnet. ethernet broadcast The source mac or arp ethernet address was all ones or all zeros. ethernet mismatch The source mac ethernet address didn't match the address inside the arp packet. reused old ethernet address The ethernet address has changed from the most recently seen address to the third (or greater) least recently seen address. (This is similar to a flip flop.) suppressed DECnet flip flop A "flip flop" report was suppressed because one of the two addresses was a DECnet address. Files /var/lib/arpwatch - default directory arp.dat - ethernet/ip address database ethercodes.dat - vendor ethernet block list
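Coming back to the original crontab problem: if you are unsure which full path to put in the crontab on your machine, ask the shell first, for example:
command -v arp    # or: type -a arp
which arp
Whatever path that prints (here /usr/sbin/arp) is what belongs in the crontab line, since cron's default PATH usually does not include /sbin or /usr/sbin.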
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/380005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241868/" ] }
380,013
> uname -rFATAL: kernel too old> cat /proc/cmdlineFATAL: kernel too old There are 3 *.vmlinuz-linux files in /boot. How do I determine which kernel is currently running? Note that I'm running in a limited environment with a minimal shell. I've also tried: > sh -c 'read l < /proc/version; echo $l'FATAL: kernel too old> dd if=/proc/versionFATAL: kernel too old Any thoughts?
You have upgraded your libc (the most basic system library) and now no program works. To be precise, no dynamically linked program works. In your particular scenario, rebooting should work. The now-installed libc requires a newer kernel, and if you reboot, you should get that newer kernel. As long as you still have a running shell, there's often a way to recover, but it can be tricky if you didn't plan for it. If you don't have a shell then usually there's no solution other than rebooting. Here you may not be able to recover without rebooting, but you can at least easily find out what kernel is running. Just use a way to read /proc/version that doesn't require an external command. read v </proc/version; echo $vecho $(</proc/version) # in zsh/bash/ksh If you still have a copy of the old libc around, you can run programs with it. For example, if the old libc is in /old/lib and you have executables that work with this old libc in /old/bin , you can run LD_LIBRARY_PATH=/old/lib /old/lib/ld-linux.so.2 /old/bin/uname If you have some statically linked binaries, they'll still work. I recommend installing statically linked system utilities for this kind of problem (but you have to do it before the problem starts). For example, on Debian/Ubuntu/Mint/…, install one or more of busybox-static (collection of basic Linux command line tools including a shell), sash (shell with some extra builtins), zsh-static (just a shell but with quite a few handy tools built in). busybox-static unamesash -c '-cat /proc/version'zsh-static -c '</proc/version'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/380013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4530/" ] }
380,025
Original Problem I have a file on one filesystem: /data/src/file and I want to hard link it to: /home/user/proj/src/file but /home is on one disk, and /data is on another so I get an error: $ cd /home/user/proj/src$ ln /data/src/file .ln: failed to create hard link './file' => '/data/src/file': Invalid cross-device link Okay, so I learned I can't hard link across devices. Makes sense. Problem at hand So I thought I'd get fancy and bind mount a src folder that's on /data 's file system: $ mkdir -p /data/other/src$ cd /home/user/proj$ sudo mount --bind /data/other/src src/$ cd src$ # (now we're technically on `/data`'s file system, right?)$ ln /data/src/file .ln: failed to create hard link './file' => '/data/src/file': Invalid cross-device link Why does this still not work? Workaround I know I have this setup right because I can make the hard link as long as I'm in the "real" /data directory instead of the bound one. $ cd /data/other/src$ ln /data/src/file .$ # OK$ cd /home/user/proj/src$ ls -lhtotal 35M-rw------- 2 user user 35M Jul 17 22:22 file$ Some System Info $ uname -aLinux <host> 4.10.0-24-generic #28-Ubuntu SMP Wed Jun 14 08:14:34 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux$ findmnt...β”œβ”€/home /dev/sdb8 ext4 rw,relatime,data=orderedβ”‚ └─/home/usr/proj/src /dev/sda2[/other/src]β”‚ ext4 rw,relatime,data=ordered└─/data /dev/sda2 ext4 rw,relatime,data=ordered$ mountpoint -d /data8:2$ mountpoint -d /home/usr/proj/src/8:2 Note : I manually changed the file and directory names to make the situation more clear, so there may be a typo or two in the command readouts.
There's a disappointing lack of comments in the code . It's as if no-one ever thought it useful, since the time bind mounts were implemented in v2.4. Surely all you'd need to do is substitute .mnt->mnt_sb where it says .mnt ... Because it gives you a security boundary around a subtree. PS: that had been discussed quite a few times, but to avoid searches: consider e.g. mount --bind /tmp /tmp; now you've got a situation when users can't create links to elsewhere no root fs, even though they have /tmp writable to them. Similar technics works for other isolation needs - basically, you can confine rename/link to given subtree. IOW, it's a deliberate feature. Note that you can bind a bunch of trees into chroot and get predictable restrictions regardless of how the stuff might get rearranged a year later in the main tree, etc. -- Al Viro There's a concrete example further down the thread Whenever we get mount -r --bind working properly (which I use to place copies of necessary shared libraries inside chroot jails while allowing page cache sharing), this feature would break security. mkdir /usr/lib/libs.jailfor i in $LIST_OF_LIBRARIES; doln /usr/lib/$i /usr/lib/libs.jail/$idonemount -r /usr/lib/libs.jail /jail/libchown prisoner /usr/log/jailmount /usr/log/jail /jail/usr/logchrootuid /jail prisoner /bin/untrusted & Although protections should be enough, but I'd rather avoid having the prisoner link /jail/lib/libfoo.so (write returns EROFS) to /jail/usr/log where it's potentially writeable.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/380025", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237316/" ] }
381,052
JQ looks like a great tool, but I'm struggling with it. Here is what I am trying to do:Extract just the values from this chef knife search and generate a CSV. given this command and output: knife search node "name:foo*" -a name -a cpu.total -a memory.total -Fj { "results": 2, "rows": [ { "foo-01": { "name": "foo-01", "cpu.total": 12, "memory.total": "16267368kB" } }, { "foo-02": { "name": "foo-02", "cpu.total": 12, "memory.total": "16264296kB" } } ]} I would like to get the values extracted to CSV like this: foo-01,12,16267368kBfoo-02,12,16264296kB (I can deal with the quotes)
... | jq -r '.rows[] | .[] | [.name, .["cpu.total"], .["memory.total"]] | map(tostring) | join(",")' This: Expands the array in .rows into the output stream ( .rows.[] ). Pipes that stream into the next step ( | ). Expands the object it's given into the (in this case) single value it contained ( .[] ). Creates an array with the results of .name , .["cpu.total"] , and .["memory.total"] each evaluated on that object ( .[ .name, ... ] ). Converts all the values of that array into strings ( map(tostring) ). Joins the elements of each array with a comma ( join(",") ). jq -r outputs raw data , rather than quoting and escaping it. The output is then: foo-01,12,16267368kBfoo-02,12,16264296kB as you wanted. Depending on your CSV parser & the real data, you might need extra quoting around the strings, which you can add in, or use @csv in place of the last two steps. ... | jq -r '.rows[] | .[] | [.name, .["cpu.total"], .["memory.total"]] | @csv' We could skip the map by converting only the one value inside, which takes some extra brackets: ... | jq -r '.rows[]|.[]|[.name, (.["cpu.total"] | tostring), .["memory.total"]] | join(",")' And probably the ugliest alternative: ... | jq -r '.rows[]|to_entries|.[]|.key + "," + (.value["cpu.total"] | tostring) + "," + .value["memory.total"]' In this case, we don't rely on the .name field, and build up the whole string manually. If you need a highly customised format, this is the most flexible option.
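For completeness, this is the output the @csv variant above produces; note that @csv double-quotes the string fields for you but leaves the number alone:
"foo-01",12,"16267368kB"
"foo-02",12,"16264296kB"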
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/242902/" ] }
381,065
I am using rsync to generate a list of files that have changed and then using that list to upload the files to s3: rsync -av somefiles/ someotherfiles/ > list.txtwhile read F ; do echo $F aws s3 cp $lcDir/$F s3://durktest/blender/$Fdone <list.txt Example of what's in the list swresample-2.dllswscale-4.dlltahoe.logucrtbase.dllvcomp140.dllvcruntime140.dll2.78/2.78/config/2.78/config/bookmarks.txt2.78/config/recent-files.txt2.78/config/userpref.blend2.78/datafiles/2.78/datafiles/colormanagement/2.78/datafiles/colormanagement/config.ocio2.78/datafiles/colormanagement/filmic/2.78/datafiles/colormanagement/filmic/filmic_desat65cube.spi3d2.78/datafiles/colormanagement/filmic/filmic_false_color.spi3d2.78/datafiles/colormanagement/filmic/filmic_to_0-35_1-30.spi1d2.78/datafiles/colormanagement/filmic/filmic_to_0-48_1-09.spi1d2.78/datafiles/colormanagement/filmic/filmic_to_0-60_1-04.spi1d2.78/datafiles/colormanagement/filmic/filmic_to_0-70_1-03.spi1d2.78/datafiles/colormanagement/filmic/filmic_to_0-85_1-011.spi1d2.78/datafiles/colormanagement/filmic/filmic_to_0.99_1-0075.spi1d2.78/datafiles/colormanagement/filmic/filmic_to_1.20_1-00.spi1d2.78/datafiles/colormanagement/luts/2.78/datafiles/colormanagement/luts/aces_to_xyz.spimtx2.78/datafiles/colormanagement/luts/adx_adx10_to_cdd.spimtx Is there any better way to do this besides just using a fuse driver and doing rsync directly to s3?
There is a better way to do this : use aws s3 sync Example aws s3 sync somefiles/ s3://durktest/blender read AWS CLI Command Reference for more
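A few flags are worth knowing before pointing sync at a bucket, since it only uploads what changed; a dry run in particular is a cheap sanity check:
aws s3 sync somefiles/ s3://durktest/blender --dryrun            # show what would be transferred, change nothing
aws s3 sync somefiles/ s3://durktest/blender --delete            # also remove objects that no longer exist locally
aws s3 sync somefiles/ s3://durktest/blender --exclude "*.log"   # skip files matching a pattern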
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/381065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/242915/" ] }
381,071
Linux systems have been having a historical (>5 yrs) problem in configuring audio devices especially commonplace headphones with combo jacks.Since many people want to use their favorite linux systems for video chatting, there are records of frustrating unresolved problems all across various forums. I get that the drivers for the external mic (in headphones with combo jacks) are not currently available (or not developed(?)). So given that, the user should be able to use the internal microphone for input and headphone for output. Along this line I went down the rabbit hole (digged up upto 5-6 yr old issues) and tried lot of things only to get no success in the end (I am using common combo jack headphones and hp laptop running ubuntu 16.04). Many people have variously reported this issue. Here's what commonly happens.. When headphone is not connected, Internal microphone and speakers work well. PulseAudio shows: pacmd list-cards shows: ports: analog-input-internal-mic: Internal Microphone (priority 8900, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-input-microphone" analog-input-mic: Microphone (priority 8700, latency offset 0 usec, available: no) properties: device.icon_name = "audio-input-microphone" analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-speakers" analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: no) properties: device.icon_name = "audio-headphones" When headphone is connected, Output through headphones work well.But external microphone (located on headphone) does not work (stuttering noise, drivers not present, alright), but then Internal microphone is 'unplugged' too. (So there is no way to record any sound.) PulseAudio shows: pacmd list-cards shows: ports: analog-input-internal-mic: Internal Microphone (priority 8900, latency offset 0 usec, available: no) properties: device.icon_name = "audio-input-microphone" analog-input-mic: Microphone (priority 8700, latency offset 0 usec, available: yes) properties: device.icon_name = "audio-input-microphone" analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: no) properties: device.icon_name = "audio-speakers" analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: yes) properties: device.icon_name = "audio-headphones" So, headphones gives the output, that's great, but is there any way to force internal microphone for input? (somehow make available: yes )
Linux doesn't have a "historical problem in configuring audio devices". The problem actually are the manufacturers of devices, mostly laptops, who just ship the laptop with pre-installed Windows drivers that can be configured acurately to deal with the laptop hardware, because the manufacturer of course knows exactly how the hardware is set up. On the other hand, the manufacturer doesn't want to share the details, doesn't document them and keeps them secret; after all, they have supplied the Windows driver. That's why often all non-standard features of laptops have to be found out through painful and time-consuming reverse engineering to be able to use them under Linux. With regard to audio, basically all modern devices uses the Intel HDA architecture, which is self-describing and provides a graph representation of the codec (the analog audio chip). In principle, the BIOS is supposed to supply the "pin configuration", i.e. which pin of the codec chip is connected to internal speakers and mics, to line-in, line-out and headphone or headphone combo jacks. However, especially in the last years, manufacturers didn't seem to find it necessary to correctly configure the BIOS (after all, they already supplied the Windows driver). So the cycle is: New laptop model comes out, somebody with the actual hardware and enough technical understanding figures out how the BIOS is lying and what the configuration should really look like, the kernel developers add a quirk (special treatment of that hardware) in the drivers to deal with that, with the newest driver the other people who use the same hardware don't notice and are happy. Until the next model comes out, and the cycle starts again. In the meantime, lots of unhappy users where it doesn't work leave their traces all over the internet asking in forums everywhere, often without getting the right answer. So yes, drivers for the external mic are developed and available (in fact, they are the same drivers as for the rest of the codec). You can see all the quirks (non-standard behaviour) that have accumulated by inspecting the kernel source files in /sound/pci/hda/ and grep for "quirk". Which means if you happen to have a laptop where the external headphone mic is not working, someone (maybe you?) has to dive into the technical details, make it work, and report it to the ALSA kernel developers. You can look at what the codec chips report about their internal structure with cat /proc/asound/card*/codec\#* This gives the internal graph, you can follow it manually, or try programs like codecgraph to visualize it (doesn't always give good results). The "Pin complex" represent the pins. Look at the one you have, try to guess which might represent the external mic, even though it's mislabeled. Use hdajackretask to correctly label the pin if you've identified it (or guess until you do). There are ways to make this relabelling default on boot until the driver gets updated. As for combining internal mic and headphones: ALSA has a mixer element called "Auto-Mute mode". This will cause ALSA to mute internal mic/speaker and unmute external mic/headphone (or vice versa) when the headphone jack is plugged or unplugged. Disable this in alsamixer or amixer , mute and unmute internal and external mic/speaker/headphone as desired. You can also configure this in Pulseaudio: Start pavucontrol , select the right ports in the input and output tab. 
If both internal and external "ports" are not available, or if you want to make it the default, that's also possible, though it's a bit of a pain to do: Look at the files in /usr/share/pulseaudio/alsa-mixer/ , read the comments that explain how they work. You need a new "configuration" for your soundcard that combines two "paths", one for the internal mic, and one for the external headphone, using the correct ALSA mixer elements. This will probably take some time to make it work correctly; Pulseaudio is not very intuitive in this respect. No, there's no easy way to do what you want; you'll have to get your hands dirty.
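To experiment with the Auto-Mute element mentioned above from a script rather than inside alsamixer , amixer can flip it directly; the control name and card number below are what I'd expect on a typical HDA codec, so adjust them to whatever amixer scontrols lists on your machine:
amixer -c 0 scontrols                          # list the mixer controls on card 0; look for 'Auto-Mute Mode'
amixer -c 0 sset 'Auto-Mute Mode' Disabled     # stop ALSA from auto-muting/unmuting on jack plug/unplug
After that, unmute and capture-enable the internal mic from alsamixer's capture view (F4) or with further amixer sset calls.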
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/381071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
381,076
Before showing the menu or splash screen, Grub2 loads something like a console for messages such as errors. It shows for about 200ms, but the cursor is very visible. It is an aesthetic problem, but I would appreciate being able to control this behavior and hide the cursor.
Linux doesn't have a "historical problem in configuring audio devices". The problem actually are the manufacturers of devices, mostly laptops, who just ship the laptop with pre-installed Windows drivers that can be configured acurately to deal with the laptop hardware, because the manufacturer of course knows exactly how the hardware is set up. On the other hand, the manufacturer doesn't want to share the details, doesn't document them and keeps them secret; after all, they have supplied the Windows driver. That's why often all non-standard features of laptops have to be found out through painful and time-consuming reverse engineering to be able to use them under Linux. With regard to audio, basically all modern devices uses the Intel HDA architecture, which is self-describing and provides a graph representation of the codec (the analog audio chip). In principle, the BIOS is supposed to supply the "pin configuration", i.e. which pin of the codec chip is connected to internal speakers and mics, to line-in, line-out and headphone or headphone combo jacks. However, especially in the last years, manufacturers didn't seem to find it necessary to correctly configure the BIOS (after all, they already supplied the Windows driver). So the cycle is: New laptop model comes out, somebody with the actual hardware and enough technical understanding figures out how the BIOS is lying and what the configuration should really look like, the kernel developers add a quirk (special treatment of that hardware) in the drivers to deal with that, with the newest driver the other people who use the same hardware don't notice and are happy. Until the next model comes out, and the cycle starts again. In the meantime, lots of unhappy users where it doesn't work leave their traces all over the internet asking in forums everywhere, often without getting the right answer. So yes, drivers for the external mic are developed and available (in fact, they are the same drivers as for the rest of the codec). You can see all the quirks (non-standard behaviour) that have accumulated by inspecting the kernel source files in /sound/pci/hda/ and grep for "quirk". Which means if you happen to have a laptop where the external headphone mic is not working, someone (maybe you?) has to dive into the technical details, make it work, and report it to the ALSA kernel developers. You can look at what the codec chips report about their internal structure with cat /proc/asound/card*/codec\#* This gives the internal graph, you can follow it manually, or try programs like codecgraph to visualize it (doesn't always give good results). The "Pin complex" represent the pins. Look at the one you have, try to guess which might represent the external mic, even though it's mislabeled. Use hdajackretask to correctly label the pin if you've identified it (or guess until you do). There are ways to make this relabelling default on boot until the driver gets updated. As for combining internal mic and headphones: ALSA has a mixer element called "Auto-Mute mode". This will cause ALSA to mute internal mic/speaker and unmute external mic/headphone (or vice versa) when the headphone jack is plugged or unplugged. Disable this in alsamixer or amixer , mute and unmute internal and external mic/speaker/headphone as desired. You can also configure this in Pulseaudio: Start pavucontrol , select the right ports in the input and output tab. 
If both internal and external "ports" are not available, or if you want to make it the default, that's also possible, though it's a bit of a pain to do: Look at the files in /usr/share/pulseaudio/alsa-mixer/ , read the comments that explain how they work. You need a new "configuration" for your soundcard that combines two "paths", one for the internal mic, and one for the external headphone, using the correct ALSA mixer elements. This will probably take some time to make it work correctly, Pulseaudio is not very intuitive in this resepect. No, there's no easy way to do what you want; you'll have to get your hands dirty.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/381076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125205/" ] }
381,113
If I am in a deep directory, let's say: ~/Desktop/Dropbox/School/2017/C/A3/ then when I open up terminal, it says bob@bob-ubuntu:~/Desktop/Dropbox/School/2017/C/A3/$ and then I write my command. That is very long, and every line I write in the terminal goes to the next line. I want to know if there's a way so that it only displays my current directory. I want it to display: bob@bob-ubuntu: A3/$ This way it's much clearer, and I can always do pwd to see my entire directory. I just don't want the entire directory visible in the terminal because it takes too much space.
You need to modify PS1 in your shell startup file (probably .bashrc ). If it's there already, its setting will contain \w , which is what gives your working directory. Change that to \W (upper case). The line in the bashrc file looks like this: if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\W\[\033[00m\]\$ ' Log out and in again, or do: . .bashrc or (you need the '~/' prefix if you are in another directory) source ~/.bashrc (or whatever your file is). If it isn't there, add something like: PS1='\u@\h: \W:\$' to .bashrc or whatever. Look up PS1 in the bash manual page to get more ideas. Be careful; bash can use more than one initialisation file, e.g. .bashrc and .bash_profile ; it may be that PS1 is set in a system-wide one. But you can override that in one of your own files.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/381113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230247/" ] }
381,221
I am running Debian stretch and following this guide for building a package from source for Debian. Sometimes the building process takes hours; when I run dpkg-buildpackage -rfakeroot again, it starts building from scratch. dpkg-buildpackage --help does not show any option to resume. How can I resume package building?
To continue a build that was interrupted for some reason, you can call the appropriate targets of debian/rules directly: debian/rules build will compile the sources, then fakeroot debian/rules binary will run the installation and prepare the packages.
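Another option worth knowing is dpkg-buildpackage 's own "no pre-clean" flag, which skips the clean step that normally throws away the previous build tree:
dpkg-buildpackage -rfakeroot -nc    # -nc (--no-pre-clean): do not run the clean target first, so existing build output is reused
Whether this actually resumes where the build stopped depends on the package's build system; make-based builds usually pick up where they left off.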
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87903/" ] }
381,228
In my ~/.profile I have a last block which should load my personal bin/ directory like this: # set PATH so it includes user's private bin if it existsif [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH"fi But it is seemingly not loaded: echo $PATH/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games Why is this not working? (My shell is bash.) Edit for Tigger echo $0 => bashecho $HOME => /home/studentwhoami => studentless /etc/*-release => PRETTY_NAME="Debian GNU/Linux 9 (stretch)"NAME="Debian GNU/Linux"VERSION_ID="9"VERSION="9 (stretch)"ID=debianHOME_URL="https://www.debian.org/"SUPPORT_URL="https://www.debian.org/support"BUG_REPORT_URL="https://bugs.debian.org/"
From the top of ~/.profile : # ~/.profile: executed by the command interpreter for login shells.# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login# exists.# see /usr/share/doc/bash/examples/startup-files for examples.# the files are located in the bash-doc package. So (if you are using bash as your shell) I'm guessing either ~/.bash_profile or ~/.bash_login is on your system. Select one and edit it to include: export PATH=$PATH:$HOME/bin Then save and source ~/.bash_login or logout and log back in. Edit : You say that ~/.bash_profile and ~/.bash_login are both missing from your $HOME . I think we need to confirm a few things. Please post the results of the following in your original question: echo $0echo $HOMEwhoamiless /etc/*-release Edit 2 : Personally, I do not know why ~/.profile is not being included in your case based on the information provided and documentation. While testing I did notice that my ~/.profile is scanned when I ssh in but not when I launch a new terminal. But, there is a simple solution to allow $HOME/bin to be included in your interactive shell. Edit (create if not present) ~/.bashrc and add the following line to it: export PATH=$PATH:$HOME/bin Save, logout and log back in, or source ~/.bashrc . The export line could be expanded to check that $HOME/bin exists if you like with: if [ -d "$HOME/bin" ]then export PATH=$PATH:$HOME/binfi Why ~/.bashrc instead of another file? Personal preference, and it seems to be more reliable too.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/381228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
381,230
I have a file in UTF-8 encoding with BOM and want to remove the BOM. Are there any linux command-line tools to remove the BOM from the file? $ file test.xmltest.xml: XML 1.0 document, UTF-8 Unicode (with BOM) text, with very long lines
If you're not sure if the file contains a UTF-8 BOM, then this (assuming the GNU implementation of sed ) will remove the BOM if it exists, or make no changes if it doesn't. sed '1s/^\xEF\xBB\xBF//' < orig.txt > new.txt You can also overwrite the existing file with the -i option: sed -i '1s/^\xEF\xBB\xBF//' orig.txt If you are using the BSD version of sed (eg macOS) then you need to have bash do the escaping: sed $'1s/\xef\xbb\xbf//' < orig.txt > new.txt
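If you already know the file starts with a BOM (as file reported here), an even blunter approach is simply to drop the first three bytes; unlike the sed version this is unconditional, so don't run it on files that might not have one:
tail -c +4 orig.txt > new.txt    # start output at byte 4, i.e. skip the 3-byte UTF-8 BOM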
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/381230", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80062/" ] }
381,276
Xfce panel is very nice, but the GUI configuration is tedious, especially if you want to replicate the same panel multiple times. Because of that, I want to be able to manually edit the XML file defining the Xfce panel's settings. If I believe correctly, the file is located in $HOME/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-panel.xml . I am aware that xfce4-panel won't take configuration changes on the fly and it must be restarted. Additionally, I am also aware that the panel will write its current configuration to the aforementioned file before finishing. Thus, my workflow is the following: $ xfce4-panel -q$ ... # Edit config file$ xfce4-panel Surprisingly, when doing that, the panel not only gets its old configuration, but also overwrites the supposedly "config" file with the old values, discarding my edits. The panel must be reading a different configuration from somewhere else, but apparently it's not a file. I strace'd the panel executable and I found no open calls of something that resembled a configuration file (just in case it's relevant, here are all the open calls for xfce4-panel: https://pastebin.com/eHdEATMV ) How can I manually edit Xfce's panel configuration file so changes take place?
I had the same problem: I wanted to copy the panel configuration from one machine to another, and it just didn't want to use the one I copied, but always used the old one even though I logged out. It turns out, even when logging out, the following process kept running under that user: /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd I suppose this is where xfce stores its configuration at runtime. Since it kept running, it didn't see the change in the files and even overwrote it. A bug about that has already been reported, it seems: https://bugzilla.xfce.org/show_bug.cgi?id=13445
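Because xfconfd is the process that actually owns the settings, the least fragile way to script panel changes is to talk to it instead of editing the XML behind its back; xfconf-query does exactly that (the property names below are only examples, so list them first to see what your panel uses):
xfconf-query -c xfce4-panel -l                             # list every property in the xfce4-panel channel
xfconf-query -c xfce4-panel -p /panels/panel-1/size        # read one property
xfconf-query -c xfce4-panel -p /panels/panel-1/size -s 30  # set it; the running panel usually picks the change up without a restart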
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93991/" ] }
381,282
To copy the contents of a folder, I've read that the way to do it is: cp -rfva ../foldersource/. ./ but this works too: cp -rfva ../foldersource/* ./ Is there any difference? For example, if I want to delete the contents of a folder with . : rm -rf ../foldersource/. I get the error: rm: refusing to remove '.' or '..' directory but with the asterisk it is OK: rm -rf ../foldersource/* So, is the asterisk the better option that works anywhere?
There is a fundamental difference between these two argument forms. And it's an important one to understand what is happening. With ../foldersource/. the argument is passed unchanged to the command, whether it's cp or rm or something else. It's up to the command whether that trailing dot has special or unique semantics different from the standard Unix convention of simply pointing to the directory it resides in; both rm and cp seem to treat it as a special case. With ../foldersource/* the argument is first expanded by the shell before the command is ever even executed and passed any arguments. Thus, rm never sees ../foldersource/* ; it sees the expanded version ../foldersource/file1.ext ../foldersource/file2.ext ../foldersource/childfolder1 and so on. This is important because operating systems limit the combined size of the arguments and environment passed to a command (ARG_MAX), so a glob that expands to a huge number of files can fail with an "Argument list too long" error.
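There is one more practical difference worth remembering: by default the shell's * does not match hidden (dot) files, while the . form hands the whole directory to the command, hidden files included. A quick way to see it (the file names are just for illustration):
mkdir -p demo copy && touch demo/visible demo/.hidden
echo demo/*          # expands to demo/visible only; .hidden is skipped by the glob
cp -a demo/. copy/   # copies .hidden as well, because the directory itself is the argument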
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243072/" ] }
381,291
On every reboot the USB port assignments of an attached scanner are incorrect. My goal is to create a script that runs on reboot eliminating my interaction. Here's what I do manually to correct the port assignments. 1) lsusb -d 04f9:0272 #the output identifies the correct ports of the scanner 2) sudo chmod a+w /dev/bus/usb/001/002 #scanner now works The following script creates the variables but chmod fails reporting "no such file or directory". buss=$(lsusb -d 04f9:0272 |awk '{print $2}') devis=$(lsusb -d 04f9:0272 |awk '{print $4}') sudo chmod a+w /dev/bus/usb/$buss/$devis The correct values are displayed when I echo $buss or $devis. I know I will need to do more to automate this process.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223003/" ] }
381,391
I run this command to find the biggest files: du -Sh | sort -rh | head -5 Then I do rm -rf someFile . Is there a way to automatically delete the files found from the former command?
If you're using GNU tools (which are standard on linux), you could do something like this: stat --printf '%s\t%n\0' ./* | sort -z -rn | head -z -n 5 | cut -z -f 2- | xargs -0 -r echo rm -f -- (remove the 'echo' once you've tested it). The stat command prints out the filesize and name of each file in the current directory separated by a tab, and with each record terminated by a NUL (\0) byte. the sort command sorts each NUL-terminated record in reverse numeric order. The head command lists only the first five such records, then cut removes the file size field from each record. Finally xargs takes that (still NUL-terminated) input and uses it as arguments for echo rm -f . Because this uses NUL as the record (filename) terminator, it copes with filenames that have any valid character in them. If you want a minimum file size, then you could insert awk or something between the stat and the sort . e.g. stat --printf '%s\t%n\0' ./* | awk 'BEGIN {ORS = RS = "\0" } ; $1 > 25000000' | sort -z -rn | ... NOTE: GNU awk doesn't have a -z option for NUL-terminated records, but does allow you to set the record separator to whatever you want. We have to set both the output record separator (ORS) and the input record separator (RS) to NUL. Here's another version that uses find to explicitly limit itself to regular files (i.e. excluding directories, named pipes, sockets, etc) in the specified directory only ( -maxdepth 1 , no subdirs) which are larger than 25M in size (no need for awk ). This version doesn't need stat because GNU find also has a printf feature. BTW, note the difference in the format string - stat uses %n for the filename, while find uses %p . find . -maxdepth 1 -type f -size +25M -printf '%s\t%p\0' | sort -z -rn | head -z -n 5 | cut -z -f 2- | xargs -0 -r echo rm -f -- To run it for a different directory, replace the . in the find command. e.g. find /home/web/ .... shell script version: #!/bin/shfor d in "$@" ; do find "$d" -maxdepth 1 -type f -size +25M -printf '%s\t%p\0' | sort -z -rn | head -z -n 5 | cut -z -f 2- | xargs -0 -r echo rm -f --done save it as, e.g., delete-five-largest.sh somewhere in your PATH and run it as delete-five-largest.sh /home/web /another/directory /and/yet/another This runs the find ... once for each directory specified on the command line. This is NOT the same as running find once with multiple path arguments (which would look like find "$@" ... , without any for loop in the script). It deletes the 5 largest files in each directory, while running it without the for loop would delete only the five largest files found while searching all of the directories. i.e. five per directory vs five total.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61100/" ] }
381,408
So I'm trying to install Debian 9.0 to my laptop (UEFI) via DVD and everything worked fine except that the mousepad doesn't work (yet?) and that I'm getting this error: Debootstrap error Failed to determine the codename for the release. at the step "install the base system" after the partitioning. Any suggestions for what I should try to get it working? This thread somewhat suggests that some change to my partitioning could solve this issue. I selected "Guided - Use entire disk and set up encrypted LVM". Changing the 2 ext4 partitions to ext3 didn't help. Any help is welcome! Edit : I skipped to step "Check integrity of the CD" and it says The CD of yours is not a valid Debian-CD-ROM. Please change the CD. Note that during installation I connected to the Internet and that I used the same CD for the installation on another PC. Please help. Edit : Related question of mine . I rebooted and did the integrity-check offline where it says: "Integrity test failed The ./pool/main/k/kde-l10n-de_16.04.3-1_all.deb file failed the MD5 checksum verification. Your CR-ROM or this file may have been corrupted."
This question is old but I just came across a working fix for this. As it turns out, the issue was caused by the USB drive being unmounted during the LVM setup process. It might've been a bad USB connector or USB drive. There is a very easy fix for which you don't even have to reboot or redo any of the setup. Press esc to enter the menu of the installer. Select Enter a shell (or command prompt) Run fdisk -l to find out the name and partition of your USB install drive Run mount /dev/sdc1 /cdrom (replace sdc1 with your USB drive) Run exit , then go back to Install the base system from the menu It will continue to install as normal. All credits and thanks go to this guy On Debian 10 busybox there is no fdisk command, so you can list disks and their partitions using ls /dev/sd* . Once you find your USB partition go to step 4.
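Condensed, the shell detour looks roughly like this (sdc1 is only an example; use whichever partition matches your USB stick):
ls /dev/sd*              # or fdisk -l where available, to spot the installer medium
mount /dev/sdc1 /cdrom   # remount it where the installer expects the "CD"
exit                     # back to the installer menu, then resume "Install the base system"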
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/381408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
381,498
I'm trying to write my own shell without looking at any bash source code, but there's one thing I'm not able to do. Whenever I run "su" from any custom shell including my own, it takes my password and takes me to the bash prompt with the hash indicating root power. I've entered code to make sure my shell gives the hash prompt itself when it has root power, but that's only when it is run as root, since whenever I try to become root with su from within my shell, it forcibly takes me to bash. Is there any way to make my own su provision, maybe even my custom su executable which just asks for the root password and gives you the privileges, sending you back to the shell you were using without taking you to bash? Thanks a lot.
The man page for su is quite clear on this point: The command will be executed by the shell specified in /etc/passwd for the target user. and -s, --shell SHELL The shell that will be invoked. -m, -p, --preserve-environment Preserve the current environment [...] The reason you land in a bash shell after calling su - is that this is the default shell for root. I can see three ways to override this default shell: Call su -s /path/to/your/shell instead of just su Ensure export SHELL=/path/to/yourshell has been set and then call su -m Change the default shell for root in /etc/passwd (not recommended)
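For example, assuming your shell is installed at /usr/local/bin/mysh (a made-up path; substitute your own), either of these should land you in it as root:
su -s /usr/local/bin/mysh          # tell su explicitly which shell to invoke
SHELL=/usr/local/bin/mysh su -m    # preserve the environment, including SHELL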
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381498", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238647/" ] }
381,573
Extracting results from a command in a terminal I ran a nmap scan on my local network using this command: nmap -sP 192.168.1.* When I ran that command I get something that looks similar to this: Nmap scan report for macbook.att.net (192.168.1.21)Host is up (0.019s latency).MAC Address: 71:DF:4B:44:80:F1 (Apple)Nmap scan report for lenovo.att.net (192.168.1.15)Host is up (0.045s latency).MAC Address: 21:EA:7D:84:08:A1 (Liteon Technology) How can I run that command, but only output the results like this: 1. Apple (192.168.1.21)2. Liteon Technology (192.168.1.15) What I have tried so far So far, I have tried to use grep , but it's not working out as well as I expected. I just need to know how to take the results from that nmap scan and organize it in a list with just what's between the "( )" and also the IP Address.
You could try the awk command as follows: nmap -sP 192.168.1.* | awk -F"[)(]" '/^Nmap/{Nmap=$(NF-1); C+=1} /^MAC Address/{print C"."$(NF-1) "("Nmap")" }' output:
1. Apple (192.168.1.21)
2. Liteon Technology (192.168.1.15)
Explanations: with awk's -F option you are telling awk that the input fields are delimited by ( and/or ) , which is what the bracketed character group in -F"[)(]" specifies. The '/.../{...} /.../{...}' part is awk's script body; for each input line it runs the first block /^Nmap/{Nmap=$(NF-1); C+=1} , the second block /^MAC Address/{print C"."$(NF-1) "("Nmap")" } , or neither, depending on whether the line starts ( ^ is the start-of-line anchor) with Nmap or with MAC Address . Whenever a pattern matches, the code inside its braces {...} runs. What the first block does: when a line starts with Nmap, it stores the second-to-last field in the variable Nmap ( NF is the number of fields in the record given those delimiters, so $(NF-1) is the field just before the last one, which here is the IP address); C+=1 is a counter used to number the entries in the output list. What the second block does: when a line starts with MAC Address, it prints the counter C , a dot . , the second-to-last field of that line (the vendor name) and, in parentheses, the value of Nmap , i.e. the IP address from the previously matched line.
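To see how that field separator slices a single line, you can pipe one sample line through awk by hand (purely illustrative):
echo 'MAC Address: 21:EA:7D:84:08:A1 (Liteon Technology)' | awk -F'[)(]' '{print $(NF-1)}'
# prints: Liteon Technology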
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239874/" ] }
381,577
I am copying files from server A to server B via SFTP. The files are copied with permissions of 700. If I change the permissions of the file on server B, SFTP is not working, as it says "Permission denied". But I want to give permissions to other users. Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243286/" ] }
381,581
I have a script that should create a macvlan bridge in the host when it starts up. The host is an up-to-date Arch Linux. This is intended to allow host and guest to share the same network *and talk to each other*. I found instructions given at: https://www.furorteutonicus.eu/2013/08/04/enabling-host-guest-networking-with-kvm-macvlan-and-macvtap (Regarding the execution at startup, I also consulted How to write startup script for Systemd? and https://stackoverflow.com/questions/21830670/systemd-start-service-after-specific-service ). The problem, however, is that the script is not effective at first try. It creates the macvlan device and the routing table, but doesn't make it possible for the host to ping the guest and vice-versa. But when executed a second time, it works - that is, despite an error message which reads "create_macvlan_bridge.sh[4489]: RTNETLINK answers: File exists" . Host can now ping guest as expected. It's supposed to work at first try , though, and I can't figure out why it's not. Can anyone help? [Update] I noticed the result of ip a shows a second inet entry for macvlan0@enp10s0 after the second execution: macvlan0@enp10s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdiscnoqueue state UP group default qlen 1000 link/ether da:a2:21:d1:95:24 brd ff:ff:ff:ff:ff:ff inet 192.168.1.3/24 scope global macvlan0 valid_lft forever preferred_lft forever inet 192.168.1.22/24 brd 192.168.1.255 scope global secondary macvlan0 valid_lft forever preferred_lft forever Notice how this second ip address was provided by the dhcp from the router, and it has secondary attribute. The weird thing is that, after the second execution, the guest can ping the host at 192.168.1.3 and at the "secondary" address. Code is below. Script: /usr/local/bin/create_macvlan_bridge.sh #!/bin/sh# Evert Mouw, 2013# Modified by Marc Ranolfi, 2017-07-24# ------------# wait for network availability# ------------TESTHOST=kernel.orgwhile ! ping -q -c 1 $TESTHOST > /dev/nulldo echo "$0: Cannot ping $TESTHOST, waiting another 5 secs..." sleep 5done# ------------# network config# ------------HWLINK=enp10s0MACVLN=macvlan0IP=192.168.1.3/24NETWORK=192.168.1.0/24GATEWAY=192.168.1.1# ------------# setting up $MACVLN interface# ------------ip link add link $HWLINK $MACVLN type macvlan mode bridgeip address add $IP dev $MACVLNip link set dev $MACVLN up# ------------# routing table# ------------# empty routesip route flush dev $HWLINKip route flush dev $MACVLN# add routesip route add $NETWORK dev $MACVLN metric 0# add the default gatewayip route add default via $GATEWAY Systemd unit file: /etc/systemd/system/create_macvlan_bridge.service [Unit]Description=Create_macvlan_bridgeWants=network-online.targetAfter=network.target network-online.target dhcpcd.service[Service]Type=oneshotExecStart=/usr/local/bin/create_macvlan_bridge.sh[Install]WantedBy=multi-user.target
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154671/" ] }
381,616
I have a Makefile. Somewhere in the makefile there is a variable defined: FOO=hello Later on I need to append some text to FOO 's content. I tried it like this: FOO=$(FOO)_world I expected that echo $(FOO) would output hello_world . Instead I get an error: *** Recursive variable 'FOO' references itself (eventually). Stop. Using the += operator is not an option, because it would add a space in between.
You need the := (a simply expanded variable) in place of the recursive = :
FOO := hello
FOO := $(FOO)_world
$(info FOO=$(FOO))
which prints: FOO=hello_world
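A throwaway check of this behaviour (sketch only; it writes a temporary makefile and runs it):
cat > /tmp/Makefile.test <<'EOF'
FOO := hello
FOO := $(FOO)_world
$(info FOO=$(FOO))
all: ; @:
EOF
make -f /tmp/Makefile.test   # prints FOO=hello_world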
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/381616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243323/" ] }
381,621
I need to transform some data into assignments. I'm pretty sure this is a straightforward job for awk, but I am far from comfortable with it. Each data element (and the columns) are tab-separated. Data elements may contain spaces and special characters, but no TABs. Example input:
column1 column2 column3
rowA1 rowA2 rowA3
rowB1 rowB2 rowB3
Expected output:
column1 = rowA1
column2 = rowA2
column3 = rowA3
column1 = rowB1
column2 = rowB2
column3 = rowB3
(with an arbitrary number of rows, not exceeding hundreds) Any clue how to do this? (with awk or any standard command-line tool on Linux)
For example (the BEGIN block sets the field separator to a tab, since the fields are tab-separated and may contain spaces):
BEGIN { FS = "\t" }
{
  if (NR==1) {
    for (i=1; i<=NF; ++i) {
      arr[i] = $i
    }
  } else {
    for (i=1; i<=NF; ++i) {
      print(arr[i], "=", $i)
    }
  }
  print("")
}
To run: awk -f script.awk input
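The same idea as a one-liner, as an untested sketch (again with -F'\t' so tab-separated fields containing spaces stay intact; "input" is the data file):
awk -F'\t' 'NR==1 { split($0, hdr); next } { for (i = 1; i <= NF; i++) print hdr[i] " = " $i; print "" }' input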
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206854/" ] }
381,652
I am writing a script, there is a section where the script needs to fetch the server name and port detail from a reference file. I have used the below logic to fetch the details, however I am looking for some better option. Please advise. HOST=$(grep SERVER_NAME $HOME/env.ref | awk -F '=' '{print $2}')PORT=$(grep SERVER_PORT $HOME/env.ref | awk -F '=' '{print $2}')if [ "${HOST}" -a "{PORT}" ]thenecho "Details extracted"elseecho "Details not found"fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181240/" ] }
381,657
i have 2 virtual centos7 nodes , root can login passwordless among themself, i have configured stonith like this but the services are not coming up, fencing is not happening , im new to this, could someone help me rectify issue~ [root@node1 cluster]# pcs stonith create nub1 fence_virt pcmk_host_list="node1"[root@node1 cluster]# pcs stonith create nub2 fence_virt pcmk_host_list="node2"[root@node1 cluster]# pcs stonith show nub1 (stonith:fence_virt): Stopped nub2 (stonith:fence_virt): Stopped[root@node1 cluster]#[root@node1 cluster]#[root@node1 cluster]#[root@node1 cluster]#[root@node1 cluster]# pcs statusCluster name: myclusterStack: corosyncCurrent DC: node2 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorumLast updated: Tue Jul 25 07:03:37 2017 Last change: Tue Jul 25 07:02:00 2017 by root via cibadmin on node12 nodes and 3 resources configuredOnline: [ node1 node2 ]Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started node1 nub1 (stonith:fence_virt): Stopped nub2 (stonith:fence_virt): StoppedFailed Actions:* nub1_start_0 on node1 'unknown error' (1): call=56, status=Error, exitreason='none', last-rc-change='Tue Jul 25 07:01:34 2017', queued=0ms, exec=7006ms* nub2_start_0 on node1 'unknown error' (1): call=58, status=Error, exitreason='none', last-rc-change='Tue Jul 25 07:01:42 2017', queued=0ms, exec=7009ms* nub1_start_0 on node2 'unknown error' (1): call=54, status=Error, exitreason='none', last-rc-change='Tue Jul 25 07:01:26 2017', queued=0ms, exec=7010ms* nub2_start_0 on node2 'unknown error' (1): call=60, status=Error, exitreason='none', last-rc-change='Tue Jul 25 07:01:34 2017', queued=0ms, exec=7013msDaemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled[root@node1 cluster]# pcs stonith fence node2Error: unable to fence 'node2'Command failed: No route to host[root@node1 cluster]# pcs stonith fence nub2Error: unable to fence 'nub2'Command failed: No such device[root@node1 cluster]# ping node2PING node2 (192.168.100.102) 56(84) bytes of data.64 bytes from node2 (192.168.100.102): icmp_seq=1 ttl=64 time=0.247 ms64 bytes from node2 (192.168.100.102): icmp_seq=2 ttl=64 time=0.304 ms^C--- node2 ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1001msrtt min/avg/max/mdev = 0.247/0.275/0.304/0.032 ms
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206348/" ] }
381,707
I am trying to activate NordVPN CyberSec by completing the following instructions in Debian 9. I should be able to do the changes as root and with sudo like described for Ubuntu in the thread Should I edit my resolv.conf file to fix wrong DNS problem? and in the thread Linux: How do i edit resolv.conf but I cannot. If you are using Linux or Mac OS X, please open the terminal and type in: su You will be asked for your root password, please type it in and press enter rm -r /etc/resolv.conf nano /etc/resolv.conf When the text editor opens, please type in these lines: nameserver 103.86.99.99nameserver 103.86.96.96 Now you have to close and save the file, you can do that by clicking Ctrl + X and pressing Y . Then please continue typing in the terminal: chattr +i /etc/resolv.conf reboot now That is it. Your computer will reboot and everything should work correctly. If you will ever need to change your DNS addresses, please open the terminal and type in the following: su You will be asked for your root password, please type it in and press enter chattr -i /etc/resolv.conf nano /etc/resolv.conf Change DNS addresses, save and close the file. chattr +i /etc/resolv.conf I do the first step as su /root but get the following. Trying to change the file /etc/resolv.conf content there with sudo , I get operation not permitted . root@masi:/etc# ls -la * | grep resolv.conf-rw-r--r-- 1 root root 89 Jan 22 2017 resolv.conf-rw-r--r-- 1 root root 89 Jul 25 17:10 resolv.conf~-rw-r--r-- 1 root root 0 Jan 22 2017 resolv.conf.tmp-rwxr-xr-x 1 root root 1301 Nov 12 2015 update-resolv-confroot@masi:/etc# sudo mv resolv.conf resolv.conf.tmp2mv: cannot move 'resolv.conf' to 'resolv.conf.tmp2': Operation not permitted OS: Debian 9
As per your steps, you protected the file /etc/resolv.conf from being deleted/overwritten with chattr +i (immutable). So you won't be able to move it to another file without running sudo chattr -i /etc/resolv.conf first. From man chattr: A file with the 'i' attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file and no data can be written to the file. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
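You can confirm whether the immutable flag is currently set with lsattr before deciding which chattr call you need (the 'i' in the attribute column is the flag in question):
lsattr /etc/resolv.conf
sudo chattr -i /etc/resolv.conf   # clear it, edit or move the file, then chattr +i again if you want it locked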
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
381,717
I need to find the best approach that search the unused disk in my linux OS for example from the following ouput we can see that sde is not mounted and seems to be free disk ( we need free disk in order to create on him FS and then mount to some folder ) please advice what is the best approach to find the free disk ? with command line or command line with awk / sed / perl etc , in order to capture the unused disk sda is for the OS lsblk | grep disk | grep -v fd0 sda 8:0 0 150G 0 disksdb 8:16 0 20G 0 disk /jededsdc 8:32 0 20G 0 disk /var/mmnsdd 8:48 0 20G 0 disk /var/nrddsde 8:64 0 20G 0 disk expected output should be sde there are some other command to view the disk as sfdisk -s , or fdisk -l , but what we want to find is which disk is a free disk ( without FS / mounted )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
381,761
What does declare name without providing any option do? Does it declare a string variable's name? What does declare -g without providing a variable name do? Does it display the values of all the variables with the global attribute? I didn't find answers in the description of declare in the Bash manual. Thanks.
Variable handling and scoping in shell and especially bash can be very obscure and unintuitive (and sometimes buggy). ksh had typeset for a similar feature. ksh , zsh , yash have typeset . bash has typeset as an alias of declare for compatibility with ksh , zsh has declare as an alias of typeset for compatibility with bash . Most shells have export , readonly and local that implement part of what typeset does. One of the reasons why bash authors chose declare over typeset may be because typeset does not only set the type, it also declares a variable: introduce it in a given scope, possibly with a type, attributes and/or value. In bash , variables can be: unknown (like when they never have been set or declared) declared (after declare ) set (when given a value, possibly empty). They can be of different types: scalar array associative array and have several attributes: integer exported readonly all lower-case/upper-case named references (though the distinction between type and attribute can be quite blurry). Not all combinations of types and attributes are supported or effective. Now, declare declares the variable in the current scope. bash , even though it implements dynamic scoping treats the outer-most scope specially. It calls it the global scope. declare behaves very differently when it's called in that global scope and when in a function (I'm not talking of the kind of separate scope that is introduced by subshells or associated with the environment). When you do declare var inside a function and provided that same variable hadn't been declared in that same scope, it declares a new variable which is initially unset and that shadows a potential var variable that would have been present in the parent scope (the caller of the function). That's dynamic scoping implemented via some sort of stack. When the function exits, the status, type, attributes and value of the variable as it was when the function was invoked is restored (popped from the stack). Outside of any function however (in the global scope), declare does declare a variable, but does not initialise it as unset if it was set before (same as when using declare a second time within the same function scope). If a type is specified, the value of the variable may be converted, though not all conversion paths are allowed (only scalar to array/hash), and attributes may be added or removed. In bash , within a function declare -g acts on the variable at the bottom of the stack, in the outer-most ("global") scope. declare -g was inspired from ksh93 's typeset -g . But ksh93 implements static scoping where the global scope is different and separate from each function scope. Doing the same with dynamic scoping makes little sense. In all other shells that have typeset -g ( mksh , zsh , yash ), typeset -g is used to change some attribute of a variable without instantiating a new local one. In bash , people generally use it for the same purpose, but because it affects the variable of the outer-most scope instead of the current variable, it doesn't always work. For instance: integer() { typeset -gi "$1"; } To make a variable an integer works in mksh / yash / zsh . It works in bash only on variables that have not been declared local by the caller: $ bash -c 'f() { declare a; integer a; a=1+1; echo "$a"; }; integer() { typeset -gi "$1"; }; f'1+1$ bash -c 'f() { integer a; a=1+1; echo "$a"; }; integer() { typeset -gi "$1"; }; f'2 Note that export var is neither typeset -x var nor typeset -gx var . 
It adds the export attribute without declaring a new variable if the variable already existed. Same for readonly vs typeset -r . Also note that unset in bash only unsets a variable if it has been declared in the current scope (leaves it declared though except in the global scope; it removes attributes and values and the variable is no longer array or hash; also note that on namerefs, it unsets the referenced variable). Otherwise, it just pops one variable layer from the stack mentioned above. With bash 5.0 or above, that can be fixed by setting the localvar_unset option. So to sum up: declare var When called in a function and if var has not been declared before in that same function, declares a variable of type scalar with no attributes and that is initially unset. If called outside of any function or if var had already been declared in the same function, it has no effect as we're not specifying any new type or attribute. declare -g var Wherever it's called would declare a var in the outer-most ("global") scope: make it declared , of type scalar , no attribute, no value if it was previously unknown in that scope (which for all intent and purpose is the same as an unknown variable except that it would show in the output of typeset -p ), or do nothing otherwise. In any case, you might not be able to access that variable in the context you're running that command: f() { local a; g; }; g() { typeset -g a=123; echo "$a"; }; f outputs nothing.
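A minimal sketch of the common use of declare -g, creating or updating a variable in the outer-most scope from inside a function (plain bash 4.2 or later, nothing else assumed):
f() { declare -g gvar=hello; }   # no local copy is created; the global variable is set
f
echo "$gvar"                     # prints: hello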
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
381,771
I am trying to setup my VPN connections of NordVPN such that I can use internet without VPN nameservers and internet with VPN. Code in /etc/resolv.conf setup by the official NordVPN instructions in the thread How to do these NordVPN changes for CyberSec in Debian? + my first line which allows me to use internet when not using VPN nameserver 8.8.4.4nameserver 103.86.99.99nameserver 103.86.96.96 RuiFRibeiro's comment about the situation where he points out that my settings are creating DNS in turns . This means you are using Google DNS in and out of VPN, and thus having DNS leaking outside of the VPN. Their DNS servers must be used while inside the VPN, and Google outside - in fact you may be using DNS in turns, more complicated, but you got the idea. Actually I have setup VPNs for use in our organization, and I intercept DNS requests, and it does not matter whichever DNS the client has configured. I am surprised both they do not do that, and also they do not provide clearer instructions. Supported /etc/resolv.conf by NordVPN This way, you cannot access internet without VPN, but you will have no leaks while using VPN. nameserver 103.86.99.99nameserver 103.86.96.96 Dynamic setting Pseudocode If no openvpn active, use Google nameservers, etc 8.8.4.4 . If openvpn active, use NordVPN nameservers such that the key method can be you might change resolv.conf if calling a script to activate the VPN (RuiFRibeiro) with iptables rules intercepting DNS when going the VPN route (RuiFRibeiro) checking for the presence of a VPN connection/interface in a script piggybacking the dhcp client - - ugly hack (RuiFRibeiro) ... NordVPN answer acceptable by me I received a few answers from them but accept only the following ones. I asked them a schedule when this bottleneck will be solved. Currently Cybersec feature does not work with Linux machines as there will be internet connection only when connected to the VPN. If you wish to have no leaks on your Linux machine while connected to the VPN and internet while not connected, use these DNS addresses. These are our DNS servers: 162.242.211.137 and 78.46.223.24 We are sorry to inform you, that CyberSec for Linux is not in the priority list at the moment. ETA is unknown. Future wishes for NordVPN Some binary blob to fix the issue but I want documentation what it does Use OpenVPN directly instead of IPesc or PPPT OS: Debian 9
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
381,776
I want a quick and easy way to tell how many times a service managed by systemd has restarted (i.e. like a counter). Does that exist?
The counter has since been added and can be accessed with the following command: systemctl show foo.service -p NRestarts It will return a value if the service is in a restart loop; otherwise it will return nothing. https://github.com/systemd/systemd/pull/6495
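For example, to watch a flapping unit's counter climb (foo.service is just a placeholder name), something like:
systemctl show foo.service -p NRestarts          # prints e.g. NRestarts=3 on systemd versions that have the counter
watch -n 5 'systemctl show foo.service -p NRestarts'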
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/381776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145572/" ] }
381,785
Mint 18.2 64-bit, Cinnamon 3.4.3. This is running in a VM on my machine so I'm not worried about login security. I've been looking around on the Mint forums and only found a lot of threads about troubleshooting problems with autologin. The setting is not in the Login Window settings screen. The setting is not in the Users and Groups settings screen. Where is this setting located now?
It seems there isn't a GUI setting for auto-login, except during initial installation. Bahamut already pointed to the right solution. When you look at the file /etc/X11/default-display-manager , it points to lightdm . Then for Mint 18.2 (Cinnamon), the correct config file for lightdm is /etc/lightdm/lightdm.conf . The contents of the file could contain multiple settings:
[Seat:*]
autologin-guest=false
autologin-enable=true
autologin-user=USERNAME
autologin-user-timeout=0
allow-guest=false
Where USERNAME is the username that should be auto-logged-in. autologin-enable=true - this line was added for Linux Mint 18.3 MATE 64-bit, but not tested for 18.2 Cinnamon. Remove it if necessary... PS: Even if it's too late to help the OP, it could be of help for other people. Let me know if someone can't get it working.
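To double-check which display manager is in use and what is currently configured before editing (paths as given above; the conf file may not exist yet on a fresh install):
cat /etc/X11/default-display-manager
grep -n 'autologin' /etc/lightdm/lightdm.conf 2>/dev/null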
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/381785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29679/" ] }
381,831
Is there a way to make xdotool or xte or any other alternative to work in Fedora 26?I'm trying to emulate keypresses (using xbindkeys), e.g. pressing Alt+B would emulate pressing Ctrl+Shift+B . But apparently neither xdotool nor xte work in Wayland (for security reasons [?]) Is there a workaround? Otherwise how do I rebind keys to other keys?
I'm using this little script. It needs the evemu package installed and a sudo configuration that allows running evemu-event without a password prompt. EVDEVICE is the device used to emulate the input; /dev/input/event8 is my keyboard (use sudo evemu-record to find yours).
#!/bin/bash
# keycomb.sh
EVDEVICE=/dev/input/event8
for key in $@; do
  sudo evemu-event $EVDEVICE --type EV_KEY --code KEY_$key --value 1 --sync
done
# reverse order
for (( idx=${#@}; idx>0; idx-- )); do
  sudo evemu-event $EVDEVICE --type EV_KEY --code KEY_${!idx} --value 0 --sync
done
You can e.g. switch tabs with ./keycomb.sh RIGHTCTL PAGEDOWN . Please note: this script does no validation on parameters, use with care ;) EDIT Feb 2021: Finally I found a project doing it right: https://github.com/sezanzeb/key-mapper
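The passwordless sudo the script relies on can be set up with a sudoers drop-in, roughly like this (the username and binary path are assumptions; check the path with which evemu-event):
echo 'yourname ALL=(root) NOPASSWD: /usr/bin/evemu-event' | sudo tee /etc/sudoers.d/evemu
sudo visudo -cf /etc/sudoers.d/evemu   # syntax-check the new drop-in before relying on it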
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/381831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43520/" ] }