If Linux stops responding, one might be forced to make an unclean shutdown, and in that case one might unknowingly just turn off the power. I read that you should try to make a soft reset with ctrl+prntSc + R +E +I + S+ U+ B with Linux if possible, because suddenly turning off the power might corrupt the file system. What are the details of this, and is it true to say that Linux is less resilient than MS-Windows in this case?
I don't think Ctrl+PrtScr will do much; what you need is SysRq (usually on the same physical key as PrtScr, accessed by holding Alt when pressing that key; for that reason it's a little unclear to me whether the "magic" combinations are actually SysRq+<letter> or Alt+SysRq+<letter>). The B function will boot the system, so your combination is a waste of time: only the B will ever be done, and just booting is about as bad as power-cycling. What can (sometimes) be gained by SysRq+R,E,I,S,U,B (to me + indicates that you need to press all the keys at once, and pressing eight keys at once is hard and not what you want to do; also note that "BUSIER" is actually the classic combination completely backwards) is a nicer shutdown where as much data as possible is written to the disk(s) nicely, so a fsck is not needed on the next boot and the risk of data loss is minimised. There's a lot of information, including a complete list of SysRq combinations and some mnemonics, on the Wikipedia page for Magic SysRq.
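As a quick check before relying on this, note that many distributions restrict the magic SysRq functions with a bitmask, so it is worth verifying they are enabled at all; something along these lines works on most systems (the sysctl.d file name is just an example):

cat /proc/sys/kernel/sysrq                                        # 1 = all functions enabled, 0 = disabled
echo 1 | sudo tee /proc/sys/kernel/sysrq                          # enable for the current boot
echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/90-sysrq.conf    # make it persistent across reboots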
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
438,725
The Thinkpad T480s has a "clickpad": a touchpad where (parts of) the touchpad itself is pressable instead of having physical dedicated buttons. Running X.org 7.7, there is a horizontal stripe at the bottom of the touchpad that acts as the mouse buttons 1, 2, and 3 (i.e. left, middle and right); basically it looks like this:

+-----------------+
|                 |
|                 |
|                 |
|                 |
|11111 22222 33333|
|11111 22222 33333|
+-----------------+

How do I disable button 2 and reallocate that area to between buttons 1 and 3? I.e. I would like the following layout:

+-----------------+
|                 |
|                 |
|                 |
|                 |
|11111111 33333333|
|11111111 33333333|
+-----------------+

Note this question is different from mtrack: how to get vertical button zones? since I am trying to do this in the context of XInput, not mtrack. Also, the hardware is not Synaptics. The hardware in question is identified by XInput as

⎡ Virtual core pointer                  id=2   [master pointer (3)]
⎜   ↳ Virtual core XTEST pointer        id=4   [slave pointer (2)]
⎜   ↳ ETPS/2 Elantech Touchpad          id=11  [slave pointer (2)]
⎜   ↳ ETPS/2 Elantech TrackPoint        id=12  [slave pointer (2)]
If I type:

$ xinput get-button-map 'DLL07BF:01 06CB:7A13 Touchpad'

I get:

1 2 3 4 5 6 7

I tried using:

$ xinput set-button-map 13 1 2 0 4 5 6 7

It disabled middle and right click.
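For reference, set-button-map takes the device name (or id) followed by the logical button each physical button should emit, with 0 meaning "disabled"; using the device name from the question's xinput list output (untested here), disabling only the middle zone or making it act as a left click would look like:

xinput set-button-map 'ETPS/2 Elantech Touchpad' 1 0 3 4 5 6 7   # middle zone emits nothing
xinput set-button-map 'ETPS/2 Elantech Touchpad' 1 1 3 4 5 6 7   # middle zone acts as button 1

Note that this remaps which button each zone emits; it does not move the boundary between the zones themselves.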
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6706/" ] }
438,851
I'm running Kali Linux 2018.1 on a VM. I want to run a bunch of commands that I have stored in the file start.sh at startup. I know how to do this on a normal distro by going into /etc/rc.local , but that doesn't exist in Kali. Here are some of the commands I want to run.

apt-get clean && apt-get update && apt-get upgrade -y
openvas-start
/etc/init.d/nessusd start

Any suggestions?
You can add this script to /etc/crontab : @reboot /path/to/your/start.sh From man 5 crontab : @reboot : Run once after reboot.
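Note that the system-wide /etc/crontab expects a user field between the schedule and the command (a per-user crontab edited with crontab -e omits it), so an entry there would look more like this sketch:

# /etc/crontab format: schedule, user, command
@reboot root /path/to/your/start.sh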
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/438851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261464/" ] }
438,871
I am using the below to run a page every hour in my Virtualmin install on CentOS 7:

wget https://domain.tld/index.php?page=cron > /dev/null 2>&1

but it creates the following files every hour when the cron job runs:

index.php?page=cron
index.php?page=cron.1
index.php?page=cron.2

etc. Please let me know how to avoid the creation of these files.
wget , by default, saves the fetched web page in a file whose name corresponds to the document at the end of the URL (it does not send it to its standard output). If that file already exists, it adds a number to the end of the name. If you don't want to save the document, then specify that you'd like to save it in /dev/null : wget -O /dev/null 'https://domain.tld/index.php?page=cron' >/dev/null 2>&1 or wget --output-document=/dev/null --quiet 'https://domain.tld/index.php?page=cron' It's also a good idea to quote the URL, as URLs sometimes contain characters that may be interpreted as filename globbing characters or command terminators by the shell (like & and [ and ] etc.).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287054/" ] }
438,884
I am trying to manually mount, on my Linux machine, shared folders from Windows Server 2012 R2. The syntax is right but I'm stuck on the same issue, error 13:

#mount.cifs //ip/division /mnt/division -o username=bob@dude-uk,password=myscretpass,vers=2.1

dmesg:
Status code returned 0xc000006d STATUS_LOGON_FAILURE
CIFS VFS: Send error in SessSetup = -13
CIFS VFS: cifs_mount failed w/return code = -13

If I try other vers= options I get the same issue. If I remove the vers= option then syslog claims: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount. If I specify the sec= option then I get error 126:

#mount.cifs //ip/division /mnt/division -o username=bob@dude-uk,password=myscretpass,vers=2.1,sec=krb5
mount error(126): Required key not available

The keyutils package is installed. If I try other sec= options I get error 22, or error 13 if I let it prompt for the password:

#mount.cifs //ip/division /mnt/division -o username=bob@dude-uk
Password for bob@dude-uk@//ip/division:
mount error(13): Permission denied

Nemo (the file explorer in Linux Mint) can mount the shared folders. macOS can mount the shared folders. My kernel is 4.13 and mount.cifs is 6.4. I tried to mount manually before setting up my fstab. Do you have any idea?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287066/" ] }
438,927
I'm trying to convert all m4a files to mp3. My code looks like this:

find . -name '*.m4a' -print0 | while read -d '' -r file; do
  ffmpeg -i "$file" -n -acodec libmp3lame -ab 128k "${file%.m4a}.mp3"
done

but it only works for the first file; for the next ones it shows an error:

Parse error, at least 3 arguments were expected, only 1 given in string '<All files in one line>'
Enter command: <target>|all <time>|-1 <command>[ <argument>]

The files contain spaces, ampersands and parentheses.
When reading a file line by line, if a command inside the loop also reads stdin, it can exhaust the input file. Continue reading here: Bash FAQ 89. So the code should look like this:

find . -name '*.m4a' -print0 | while read -d '' -r file; do
  ffmpeg -i "$file" -n -acodec libmp3lame -ab 128k "${file%.m4a}.mp3" < /dev/null
done
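ffmpeg also has a -nostdin option that stops it from reading standard input at all, which achieves the same thing without the redirection; a variant of the same loop:

find . -name '*.m4a' -print0 | while read -d '' -r file; do
  ffmpeg -nostdin -i "$file" -n -acodec libmp3lame -ab 128k "${file%.m4a}.mp3"
done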
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
438,937
I'm trying to write a script that uses sed to process lines in a text file to generate sample documentation. I've got most of the script working but I'm stuck with one edge case. Consider the following file:

line-1
line-2, part2
line-3-should-be-a-very-long,
 line-3-continued
line-4

The problem is that some, but not all, lines end in a special token (it happens to be a comma). The token indicates that the line should be concatenated with the following one to produce one long line. So in my example line-3-should-be-a-very-long, should be concatenated with line-3-continued to give me line-3-should-be-a-very-long, line-3-continued (I do want to keep the comma). There is no special action on line 2 even though it contains a comma which is NOT at the end of the line. The rest of the processing is done by piping a few sed and grep commands together, so a sed solution would fit well.
$ sed '/,$/{N;s/\n//;}' file
line-1
line-2
line-3-should-be-a-very-long, line-3-continued
line-4

If the blanks should be deleted:

$ sed '/,$/{N;s/\n[[:blank:]]*//;}' file
line-1
line-2
line-3-should-be-a-very-long,line-3-continued
line-4

(if you want a single space to remain between the lines, replace // in the code by / / )

If lines can be continued multiple times, as in

line-1
line-2
line-3-should-be-a-very-long,
 line-3-continued,
 line-3-continued-further
line-4

then,

$ sed '/,$/{:loop;N;s/\n[[:blank:]]*//;/,$/bloop;}' file
line-1
line-2
line-3-should-be-a-very-long,line-3-continued,line-3-continued-further
line-4

This last sed script explained with annotations:

/,$/{                 # if the current line ends with a comma, then...
  :loop               # define label "loop"
  N                   # append next line from input (a newline will be inserted in-between)
  s/\n[[:blank:]]*//  # delete that newline and any blanks (tabs or spaces) directly after it
  /,$/bloop           # if the line now ends with comma, branch to the "loop" label
}
# implicit output of (possibly) modified line at end
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438937", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219426/" ] }
438,939
I have a Server (Debian) that is serving some folders through NFS and a Client (Debian) that connects to the NFS Server (with NFSv4) and mounts that exported folder. So far everything is fine, I can connect and modify the content of the folders. But the users are completely messed up. From what I understand this is due to NFS using the UIDs to set the permissions, and as the UIDs of the users from the Client and the Server differ, this happens, which is still expected. But from what I understood, by enabling NFSv4, IDMAPD should kick in and use the username instead of the UIDs. The users do exist on the Server and Client side, they just have different UIDs. But for whatever reason IDMAPD doesn't work or doesn't seem to do anything. So here is what I've done so far: On the Server Side: installed nfs-kernel-server , populated /etc/exports with the proper export settings --> /rfolder ip/24(rw,sync,no_subtree_check,no_root_squash) and changed /etc/default/nfs-common to have NEED_IDMAPD=yes . On the Client Side: installed nfs-common , changed /etc/default/nfs-common to have NEED_IDMAPD=yes and mount the folder with " mount -t nfs4 ip:/rfolder /media/lfolder ". Rebooted and restarted both several times, but still nothing. When I create a folder from the Server with user A , on the Client I see that the folder owner is some user X . When I create a file from the Client with user A , on the Server side it says it's from some user Y . I checked with HTOP that the rpc.idmap process is running on the Server and it is indeed, although on the Client it doesn't appear to be running. By trying to manually start the service on the Client I just got an error message stating that IDMAP requires the nfs-kernel-server dependency to run. So I installed it on the Client side, and now I have the rpc.idmap process running on both Client and Server . Restarted both, and the issue still persists. Any idea what is wrong here? Or how to configure this properly?
There are a couple of things to note when using NFSv4 id mapping on mounts which use the default AUTH_SYS authentication ( sec=sys mount option) instead of Kerberos. NOTE: With AUTH_SYS idmapping only translates the user/group names. Permissions are still checked against local UID/GID values. Only way to get permissions working with usernames is with Kerberos. On recent kernels, only the server uses rpc.idmapd (documented in man rpc.idmapd ). When using idmap, the user names are transmitted in user@domain format. Unless a domain name is configured in /etc/idmapd.conf , idmapd uses the system's DNS domain name. For idmap to map the users correctly, the domain name needs to be same on the client and on the server. Secondly, kernel disables id mapping for NFSv4 sec=sys mounts by default . Setting nfs4_disable_idmapping parameter to false enables id mapping for sec=sys mounts. On server: echo "N" > /sys/module/nfsd/parameters/nfs4_disable_idmapping and on client(s): echo "N" > /sys/module/nfs/parameters/nfs4_disable_idmapping You need to clear idmap cache with nfsidmap -c on clients for the changes to be visible on mounted NFSv4 file systems. To make these changes permanent , create configuration files in /etc/modprobe.d/ , on server ( modprobe.d/nfsd.conf ): options nfsd nfs4_disable_idmapping=N on client(s) ( modprobe.d/nfs.conf ): options nfs nfs4_disable_idmapping=N
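If the mapping still fails after the tweaks above, a common cause is the NFSv4 domain differing between the machines; keeping it explicit and identical on both sides looks roughly like this (example.com is a placeholder for your domain):

# /etc/idmapd.conf on the server and on every client
[General]
Domain = example.com

Then restart the idmap service on the server (e.g. systemctl restart nfs-idmapd on systemd distributions) and run nfsidmap -c on the clients to flush the cached mappings.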
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/438939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259611/" ] }
438,940
When I run lsmod or sudo lsmod , I get an error that says:

libkmod: ERROR ../libkmod/libkmod-module.c:1655 kmod_module_new_from_loaded: could not open /proc/modules: No such file or directory
Error: could not get list of modules: No such file or directory

I searched on a lot of forums but am unable to find a solution for this. I'm running Debian on Windows Subsystem for Linux. I was recently also trying to edit the sysctl.conf file for the purpose of disabling ipv6 . I had added the following lines:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

And when I tried sudo sysctl -p , it returned this error:

sysctl: cannot stat /proc/sys/net/ipv6/conf/all/disable_ipv6: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/default/disable_ipv6: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/lo/disable_ipv6: No such file or directory

I'm not sure if the above 2 errors are connected. I was trying to run a Linux shell on Windows. Any solution to the problem?
In both cases, you’re trying to interact with the kernel. Any Linux environment running on WSL isn’t running a Linux kernel, it’s running on the Windows kernel; so anything tied to the Linux kernel (including modules and system controls) won’t work. In the IPv6 case, you need to configure the network using Windows’ tools.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/438940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245216/" ] }
438,969
Yesterday I, from the gui, shut my computer down to physically move it. Then I turned it back on. Today when I ssh into it from a different computer I'm told " * System restart required * " It's reasonable to think I had a reboot left over from packages I installed last week, but that would mean a full power down isn't a superset of a reboot. I'm using Ubuntu 16.04.4 LTS
At the core, there is no difference between Shutdown or Reboot, with regards to the "System Restart Required" message. Both a shutdown and a reboot will clear it. However, this only applies when you don't have a new pending update that requires a reboot to completely apply, and automatic updates could run on your system since the 'last reboot' you mentioned. As such, you need to be mindful of whether your system has unattended-upgrades installed and enabled. If this is the case, your system gets updates once a day or so, and if you have automatic updates set up to install all available updates (not just security-only updates), then it will autorun and autoinstall updates at its configured time point. The best way to determine that is to look at /var/log/apt/history.log , where automatic updates will show up. This can explain an 'unexpected' "Restart Required" message because since the last reboot your system might have gotten new updates that triggered the message.
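On Ubuntu the flag itself is just a file, so you can see which packages raised it and whether automatic updates have been running; a quick check (paths are the Ubuntu defaults):

cat /var/run/reboot-required.pkgs    # packages that requested the reboot
tail /var/log/apt/history.log        # recent (possibly unattended) apt activity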
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/438969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190964/" ] }
439,046
Declaring JSON in bash is kind of annoying because you have to escape a lot of characters. Say I have an array like this:

value1="foo"
value2="bar"
arr=("key1" "$value1" "key2" "$value2")

Is there a way to somehow join the array with ":" and "," characters? The only thing I can think of is a loop where you add the right characters, something like this:

data=""
for i in "${arr[@]}"; do
  data="$data\"$i\""
done
With jo , which makes it easy to generate JSON on the command line:

$ jo -p key1="$value1" key2="$value2"
{
   "key1": "foo",
   "key2": "bar"
}

or, depending on what you want the end result to be,

$ jo -a -p "$(jo key1="$value1")" "$(jo key2="$value2")"
[
   {
      "key1": "foo"
   },
   {
      "key2": "bar"
   }
]

Note that jo will also properly encode the values in the strings $value1 and $value2 .
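If jo isn't available, jq can build the same object safely from shell variables (jq takes care of the quoting and escaping):

jq -n --arg key1 "$value1" --arg key2 "$value2" '{key1: $key1, key2: $key2}'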
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
439,106
I need to create a persistent volume for Docker. The volume must be named extra-addons and located in /mnt/ . I run this command: sudo docker volume create /mnt/extra-addons I got this error message: Error response from daemon: create /mnt/extra-addons: "/mnt/extra-addons" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path Note that when I simply run: sudo docker volume create extra-addons , I do not face this problem but when I inspect the volume in question using sudo docker inspect extra-addons , I see it is located in a place I do not want: [ { "CreatedAt": "2018-04-21T14:40:25+03:00", "Driver": "local", "Labels": {}, "Mountpoint": "/var/lib/docker/volumes/extra-addons/_data", "Name": "extra-addons", "Options": {}, "Scope": "local" }] I mean I rather want to see the volume like this: /mnt/extra-addons Any idea?
I found the solution: I had to install the local-persist plugin, then point the volume being created at the desired mount point as follows:

sudo docker volume create -d local-persist -o mountpoint=/mnt/ --name=extra-addons

Check if I got what I expected:

sudo docker volume inspect extra-addons

Result:

[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "local-persist",
        "Labels": {},
        "Mountpoint": "/mnt/",
        "Name": "extra-addons",
        "Options": {
            "mountpoint": "/mnt/"
        },
        "Scope": "local"
    }
]

That was what I was looking for.
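An alternative that avoids the third-party plugin is to let Docker's built-in local driver do a bind mount; assuming the directory /mnt/extra-addons already exists, something like this should behave the same way:

sudo docker volume create --driver local \
  --opt type=none --opt o=bind \
  --opt device=/mnt/extra-addons \
  extra-addons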
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155809/" ] }
439,109
I'm trying to optimize the IO schedulers and to use a proper scheduler for rotational and for non rotational drives (different). When I run:

cat /sys/block/sd*/queue/rotational

I get:

1   <-- for sda
1   <-- for sdb

although sdb is the usb flash drive and it shouldn't be rotational.

$ udevadm info -a -n /dev/sda | grep queue
ATTRS{queue_depth}=="31"
ATTRS{queue_ramp_up_period}=="120000"
ATTRS{queue_type}=="simple"
$ udevadm info -a -n /dev/sdb | grep queue
ATTRS{queue_depth}=="1"
ATTRS{queue_type}=="none"

so there is no such attribute as: ATTR{queue/rotational}=="0" or ...=="1"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
439,122
I am new to Linux. When I make a file read-only I am still able to delete that file. I read on the net that deleting a file depends upon the permissions of the folder in which it lies. To make things clear, let's say I have a directory test (with all permissions) in which I have a read-only file abc.txt . Even if this file is read-only I can easily delete it. Now consider the case where I have a subdirectory named sub under test . This directory is read-only. When I want to delete this subdirectory it throws an error saying that it can't delete this directory. In Linux a directory is also treated as a file. But the behaviour differs for read-only files vs read-only directories. What's the reason for this?
Because Unix was defined that way, and POSIX requires that behavior. And Linux tries to be Unix & POSIX compliant. You might have some misconception about what files are (caveat: they are not exactly the same on Unix and on Windows). BTW, they matter for many system calls (listed in syscalls(2)), with several system calls giving a file descriptor from a file path (see also path_resolution(7)). In contrast to some other OSes, a file (on Linux or Unix or POSIX systems) does not have only one name (or path): some files have no names, others have several; indeed most files have one name. Remember that files are an abstraction provided to user-space (and applications, including utility programs, running in processes) by the operating system kernel. And system calls are the only way for programs (and processes running them) to interact with the kernel. Your disk does not know about files (but your OS does). A disk simply contains blocks of bytes. It is your OS which understands them as files. A file is (on Unix & Linux) an inode. See also inode(7). The inode contains meta-data about that file (which you could query with stat(2), fstat etc.), including type, creation time, permissions, ownership, size, etc., and contains (or more often points to) the file data (a sequence of bytes). But deleting a read only directory poses no problem. A directory is a special kind of file (there are other kinds of files than plain files and directories, e.g. fifo(7)-s, symlink(7)-s, etc). It contains a dictionary mapping strings to inodes. How that happens is specific to every file system type. Use opendir(3) (and later closedir) and readdir(3) to read it. ... permissions of the folder in which it lies ... Misconception. Folders do not exist on Linux (they are a GUI artefact sometimes displayed by your desktop environment); you are probably talking about directories. File systems contain various kinds of files (including directories and symbolic links). A given inode can appear in several directories (you might say that a file could have several paths; then they all have the same "power" and similar "role"). Use the link(2) system call (perhaps via the ln(1) command) for adding some additional path to a file. Use the unlink(2) system call for removing a path to a file. In some cases, an inode can exist without appearing in any directory. A common case (used for implementing temporary files) is when you create a file (e.g. using creat(2) or open) and then unlink (or remove(3)) that file just after (e.g. in the same process, but perhaps not). When an inode becomes unreachable (because there is no open file descriptor to it, and because it is no longer mentioned in any directory) the kernel removes that inode (and the data blocks related to it). When you "remove" a file (e.g. using the rm(1) utility), the /bin/rm program (and the process running that command) is just using unlink (and you are writing into a directory containing some mapping between names and inodes). If nothing more "points to" that inode, it indeed gets removed. Since the kernel is writing into a directory, it requires your process to have write permission for that. See also credentials(7). A directory needs mkdir(2) to be made, and rmdir(2) to be removed (from its parent): if you use unlink(2) to remove it, that would fail with EISDIR. But rmdir(2) requires the directory to be empty (because the kernel requires the file hierarchy to be a directed acyclic graph, and circular references are forbidden, by some kind of reference counting).
Both the mkdir and rmdir syscalls handle the magic . and .. entries of directories. But deleting a read only directory poses no problem. Why this thing does not depend on the parent directory? It does in general (but the sticky bit on directories has some specific meaning). About the edited question: in your edited question, you claim (incorrectly, or else some important detail is missing): Now consider the case where I have a subdirectory named sub under test . This directory is read only. When I want to delete this subdirectory it throws an error saying that can't delete this directory. I cannot reproduce your claim (please provide some MCVE). For readability, I am considering directories testdir and subdir instead of your names (but that does not change anything; however your test is confusable with test(1)):

% /bin/mkdir testdir
% /bin/mkdir testdir/subdir
% /bin/ls -la testdir
total 12
drwxr-xr-x 3 basile basile 4096 Apr 24 13:09 .
drwxr-xr-x 6 basile basile 4096 Apr 24 13:08 ..
drwxr-xr-x 2 basile basile 4096 Apr 24 13:09 subdir
% /bin/chmod a-w testdir/subdir
% /bin/ls -la testdir
total 12
drwxr-xr-x 3 basile basile 4096 Apr 24 13:09 .
drwxr-xr-x 6 basile basile 4096 Apr 24 13:08 ..
dr-xr-xr-x 2 basile basile 4096 Apr 24 13:09 subdir
% /bin/rmdir testdir/subdir
% /bin/ls -la testdir
total 8
drwxr-xr-x 2 basile basile 4096 Apr 24 13:14 .
drwxr-xr-x 6 basile basile 4096 Apr 24 13:08 ..

Remember that rmdir(1) (it uses the rmdir(2) system call) requires the removed directory to be empty, and some files (whose name starts with a dot) could be "hidden" by your shell or by ls . List all files of the directory to be removed with ls -a . You might read Operating Systems: Three Easy Pieces
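A small shell session illustrates the name/inode distinction described above (file names are arbitrary):

$ echo hello > a
$ ln a b          # add a second name (hard link) for the same inode
$ ls -li a b      # same inode number, link count is 2
$ rm a            # unlink one name; the inode and its data survive
$ cat b
hello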
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190042/" ] }
439,135
It is easy to find that ext2 filesystem labels can be set with tune2fs and e2label . GParted GUI offers to give partition labels when creating partitions of any type, but not to change the label of an existing partition. I am only interested in MBR partitions (not GPT) and preferably console tools. In particular, I am using the JFS filesystem. Can I give it a label to be used in /etc/fstab ? Human-readable label, not the GUID?
Compare the description of an MBR partition table entry with the description of a GPT/GUID partition entry. You'll see that while the GPT/GUID partition has dedicated locations for both a "unique partition GUID" and a "partition name", there are none of those available for MBR. So you just can't do this on MBR; it's available only for GPT. There's still a unique 32-bit identifier for the whole MBR (at position 0x1B8) that might be usable, along with the partition number. It can be changed using fdisk 's expert options:

# fdisk /dev/ram0
[...]
Command (m for help): x
Expert command (m for help): i
Enter the new disk identifier: 0xdf201070
Disk identifier changed from 0xdeadbeaf to 0xdf201070.
Expert command (m for help): r
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

What you should probably use, like with tune2fs for ext2, is jfs_tune to label the filesystem. For example:

# jfs_tune -L mylabel /dev/ram0p1
jfs_tune version 1.1.15, 04-Mar-2011
Volume label updated successfully.
# blkid |grep ram0
/dev/ram0: PTUUID="df201070" PTTYPE="dos"
/dev/ram0p1: LABEL="mylabel" UUID="e1805bac-44fb-4f4e-860b-64a1d303400f" TYPE="jfs" PARTUUID="df201070-01"

All "variables" output by blkid are probably usable in /etc/fstab ; you should do tests.
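Once the filesystem is labelled, an fstab entry can refer to it by that label; a sketch (the mount point and options here are only examples):

# /etc/fstab
LABEL=mylabel  /mnt/data  jfs  defaults  0  2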
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151881/" ] }
439,145
In a POSIX-compatible way that works with multiple implementations, how can I print the list of currently defined environment variables without their values? On some implementations (mksh, FreeBSD /bin/sh), just using export by itself will fit the bill:

$ export
FOO2
FOO

But for some other implementations (bash, zsh, dash), export also shows the value. With bash, for example:

$ export
export FOO2='as df\
 asdk=fja:\
asd=fa\
asdf'
export FOO='sjfkasjfd kjasdf:\
 asdkj=fkajdsf:\
 :askjfkajsf=asdfkj:\
 safdkj'
$ printenv | sed -n l
FOO2=as\tdf\$
 asdk=fja:\$
asd=fa\$
asdf$
FOO=sjfkasjfd kjasdf:\$
 asdkj=fkajdsf:\$
\t:askjfkajsf=asdfkj:\$
 safdkj$
$

Other options like env or printenv don't have options to print just the variable names without values, at least not on the Linux and FreeBSD platforms I have tried. Piping to awk/sed/etc. or trimming the list with parameter expansion techniques (e.g., ${foo%%=*} ) is acceptable, but it has to work with values that may span lines and have = and whitespace in the value (see example above). Answers specific to particular shell implementations are interesting, but I am primarily looking for something that is compatible across implementations.
It's pretty easy in awk. awk 'BEGIN{for(v in ENVIRON) print v}' However beware some awk implementations add environment variables of their own (e.g. GNU awk adds AWKPATH and AWKLIBPATH to ENVIRON ). The output is ambiguous if the name of an environment variable contains a newline, which is extremely unusual but technically possible.A pure sh solution would be difficult. Your best bet is to start with export -p but massaging it in pure sh is difficult. You can use sed to massage the output of export -p , then use eval to get the shell to remove what it quoted. Bash and zsh print non-standard prefixes. report () { echo "${1%%=*}"; };eval "$(export -p | sed "s/^export /report /; s/^declare -x /report /; s/typeset -x /report /")" Note that depending on the shell, export -p may or may not show variables whose name is not valid in the shell, and if it doesn't, then it may or may not quote the names properly. For example, dash, mksh and zsh omit variables whose name includes a newline, BusyBox dash and ksh93 print them raw, and bash prints them raw without their value. If you need to defend against untrusted input, don't rely on a pure POSIX solution, and definitely don't call eval on anything derived from the output of export -p .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100450/" ] }
439,151
I'm on a Windows 10 machine with Windows Subsystem for Linux enabled/configured (Ubuntu). To explain my problem let me present you with two scenarios: Scenario 1: I start a cmd.exe prompt I run bash in the cmd.exe prompt (inside bash ) I run a given command, called dwiextract in my case (from a neuroimaging analysis software package) Works fine suggesting a successful installation of the software package. Scenario 2: I start a cmd.exe prompt I attempt to pass the exact same command directly to bash from cmd.exe by using the following syntax: bash -c dwiextract I get command not found . (Note I learned about bash -c here and have used it successfully in other occasions.) The following image shows exactly what I've done: My question: Shouldn't these two scenarios be equivalent. Why does Scenario 1 work and Scenario 2 does not work? Many thanks.
Running bash as an interactive shell (using -i option) solved my problem. That is: bash -c -i <command> .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153973/" ] }
439,196
I hear that for named pipes, writes that are smaller than about 512 bytes are atomic (the writes won't interleave). Is there a way to increase that amount for a specific named pipe? Something like: mkfifo mynamedpipe --buf=2500 Supposedly this is the full documentation: http://www.gnu.org/software/coreutils/manual/html_node/mkfifo-invocation.html#mkfifo-invocation and man mkfifo takes me to that page.
A fifo file is just a type of file which, when opened for both reading and writing, instantiates a pipe like a pipe() system call would. On Linux at least, the data that transits through that pipe is not stored on the file system at all (only in the kernel as kernel memory), and the size attribute of the fifo file is not relevant and is always 0. On Linux, you can change the size of a pipe buffer (whether that pipe has been instantiated with pipe() or via opening a fifo file) with the F_SETPIPE_SZ fcntl() , though for unprivileged users, that's bound by /proc/sys/fs/pipe-max-size . Either the writer or the reader of the pipe can issue that fcntl() , though it makes more sense for the writer to do it. In the case of a named pipe, you'd need to do that for each pipe instantiated through the fifo file.

$ mkfifo fifo
$ exec 3<> fifo          # instantiate the pipe
$ seq 20000 > fifo
^C                       # seq hangs because it's trying to write more than 64KiB
$ exec 3<&- 3<> fifo     # close the first pipe and instantiate a new one
$ (perl -MFcntl -e 'fcntl(STDOUT, 1031, 1048576)'; seq 20000) > fifo
$                        # went through this time

Above, I used perl to issue the fcntl() , hardcoding the value of F_SETPIPE_SZ (1031 on my system).
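To see or raise the ceiling that unprivileged F_SETPIPE_SZ calls are bound by:

cat /proc/sys/fs/pipe-max-size            # current limit in bytes (typically 1048576)
sudo sysctl -w fs.pipe-max-size=4194304   # raise it, e.g. to 4 MiB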
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
439,280
I can not find a way to do that from bash. So is there a way to define a bash readline shortcut that will insert a dynamically generated string at the position of the cursor? E.g., I want to insert the date:

bind '"\C-xx": my-dynamical-date'

aaa bbb
--------
   ^ cursor is here

# After pressing "\C-xx":
aaa Sun Apr 22 22:19:00 CST 2018 bbb
------------------------------------
                                ^ cursor is here

So how do I define the my-dynamical-date readline command?
A bit silly but it could be something like this: bind '"\C-xx":"$(date) \e\C-e\ef\ef\ef\ef\ef"' It first enters a literal $(date) , then calls shell-expand-line and then moves 5 words forward. To save the keybinding, add the following to inputrc : "\C-xx":"$(date) \e\C-e\ef\ef\ef\ef\ef"
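On bash 4 and later, bind -x can instead call a shell function that edits READLINE_LINE and READLINE_POINT directly, which avoids the \ef cursor juggling; a sketch for ~/.bashrc:

_insert_date() {
  local d
  d=$(date)
  # splice the date into the line buffer at the cursor, then move the cursor past it
  READLINE_LINE="${READLINE_LINE:0:READLINE_POINT}${d}${READLINE_LINE:READLINE_POINT}"
  (( READLINE_POINT += ${#d} ))
}
bind -x '"\C-xx": _insert_date'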
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205981/" ] }
439,312
I want to create a copy of the RAM used by a certain program to a file, and then restore that state again later. Something similar to the effect of ctrl+z & fg, but I also want to free the RAM from that program.
I want to create a copy of the RAM used by a certain program to a file. And then restore that state again later. Misconception: a process has its virtual address space and uses virtual memory. The RAM itself is a resource managed by the OS kernel. Read Operating System: Three Easy Pieces (a process does not directly use RAM). (You'd better edit and improve your question a lot to explain much more about it: what kind of program do you have in mind, in what context, why, for how long do you need to "stop" the program, do you need to restart an improved version of it later...? Without these details we cannot help much more.) On Linux, you could use proc(5) to query the virtual address space of some given process. Try cat /proc/$$/maps and cat /proc/self/maps . I also want to free the RAM No need for that, since the kernel is managing the RAM (sometimes thrashing could happen). See also madvise(2), posix_fadvise(2), mmap(2), mlock(2). When a process terminates, the kernel will release its virtual address space, and later reuses the RAM allocated for it. When a process stops (e.g. through Ctrl Z sending SIGTSTP , see signal(7) & termios(3)), the kernel might reuse its RAM for other purposes (and use swap space to store dirty pages, i.e. page them out, of that stopped process). Read about demand paging & http://linuxatemyram.com/ What you want is related to application checkpointing and orthogonal persistence. On Unix and Linux (and most other OSes, including Windows, Android, MacOSX, ...) it is not possible or very difficult in general (how would you handle opened file descriptors, child processes, sockets, ASLR, semaphores, threads, file locking, graphical user interfaces, shared libraries, etc.?). But you could write an application with such a feature (and you can find libraries helping with that); of course you'll follow some additional conventions and restrictions to make persistence or checkpointing feasible and practical. If you want that system-wide, consider hibernation. Persistence is something to be thought of very early at design time of your application (it may be difficult to add afterwards). Notice that databases (sqlite, RDBMS, nosql databases, ...) and indexed files (gdbm ...) can be viewed as a common way to achieve some kind of persistence (you could view your heap as a cyclic graph of objects). Persisting code-related data (e.g. classes, vtables, closures, function pointers ...) is hard in general. You can find some libraries for checkpointing, e.g. BLCR or CRIU. Of course they work in a limited context, on applications developed to use them. Lastly, from the algorithmic point of view, persisting the entire state (or checkpointing it) is very close to copying precise garbage collectors. So reading something about them, e.g. the GC handbook, is useful. However, genuine persistence or checkpointing is difficult to implement and should be thought of early at design time of your application. In many cases, it is hard enough to require a full rewrite of an application not providing it. Staying compatible with the evolution of the code is even harder (e.g. being able to restart with a newer version of your code applied to an old checkpoint). You might get inspired by dynamic software updating techniques. Some programming language implementations (e.g. OCaml, Python, Java, ...) provide serialization or marshalling facilities that could help. Others have some way of checkpointing (e.g. SBCL save-lisp-and-die , PolyML export ).
Homoiconicity and reflection are helpful programming language features. See also RefPerSys and Bismon as examples of persistent systems.
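If CRIU fits the constraints above (same kernel on restore, resources it knows how to handle, etc.), the basic checkpoint/restore cycle looks roughly like this, where <pid> is the process to freeze and /tmp/ckpt is an arbitrary image directory:

sudo criu dump -t <pid> -D /tmp/ckpt --shell-job      # write image files; by default the process is killed, freeing its RAM
sudo criu restore -D /tmp/ckpt --shell-job            # later: recreate the process from the images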
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/253453/" ] }
439,327
There is quite a useful program for managing packages in Debian-based operating systems. It is called Synaptic . Is there a similar program on Red Hat-based systems, such as CentOS ?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
439,340
I recently read that I can eke more performance out of my CPU by setting the governor to "performance" instead of "powersave". According to the Arch wiki , this will "run the CPU at the maximum frequency" instead of the "minimum frequency". I found this wording confusing, so I also read the kernel documentation . 2.1 Performance The CPUfreq governor "performance" sets the CPU statically to the highest frequency within the borders of scaling_min_freq and scaling_max_freq. 2.2 Powersave The CPUfreq governor "powersave" sets the CPU statically to the lowest frequency within the borders of scaling_min_freq and scaling_max_freq. What does "statically" mean? To me, it contrasts with "dynamic", and implies frequency would never change, i.e. with powersave the CPU frequency would always be a single value, equal to scaling_min_freq . However, this is clearly not the case. I am currently running "powersave" by default. I can monitor the CPU frequencies with $ watch grep \"cpu MHz\" /proc/cpuinfo and see them changing dynamically. What does the kernel documentation mean by "statically"? What factors affect the CPU frequency, and how do these change with "powersave" and "performance"? Hence, what are the implications of changing from the former to the latter? Would a higher frequency be used? During what circumstances? Specifically, will this affect power draw, heat and lifespan of my CPU?
For the record, the (up-to-date) cpufreq documentation is here. What does "statically" mean? To me, it contrasts with "dynamic", and implies frequency would never change, i.e. with powersave the CPU frequency would always be a single value, equal to scaling_min_freq . You're right. Back in the old cpufreq driver days, there were two kinds of governors: dynamic ones and static ones. The difference was that dynamic governors ( ondemand and conservative ) could switch between CPU frequencies based on CPU utilization whereas static governors ( performance and powersave ) would never change the CPU frequency. However, as you have noticed, with the new driver this is clearly not the case. This is because the new driver, which is called intel_pstate , operates differently. The p-states aka operation performance points involve active power management and race to idle, which means scaling voltage and frequency. For more details see the official documentation. As to your actual question, What are the implications of setting the CPU governor to "performance" ? it's also answered in the same document. As with all Skylake+ processors, the operating mode of your CPU is - by default - "Active Mode with HWP", so the implications of using the performance governor are (emphasis mine): HWP + performance In this configuration intel_pstate will write 0 to the processor's Energy-Performance Preference ( EPP ) knob (if supported) or its Energy-Performance Bias ( EPB ) knob (otherwise), which means that the processor's internal P-state selection logic is expected to focus entirely on performance. This will override the EPP / EPB setting coming from the sysfs interface (see Energy vs Performance Hints below). Also, in this configuration the range of P-states available to the processor's internal P-state selection logic is always restricted to the upper boundary (that is, the maximum P-state that the driver is allowed to use). In a nutshell: intel_pstate is actually a governor and a hardware driver all in one. It supports two policies: the performance policy always picks the highest p-state: maximize the performance and then go back down to a virtual zero energy draw state, also called "Race to Idle"; the powersave policy attempts to balance performance with energy savings: it selects the appropriate p-state based on CPU utilization (load at this specific p-state, which will probably go down when going to a higher p-state) and capacity (maximum performance in the highest p-state).
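On an intel_pstate system you can inspect and switch these knobs through sysfs; for example (energy_performance_preference is only present when EPP is supported):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor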
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18887/" ] }
439,349
I am beginning to work with microcontrollers and programming them using the C language. All my programming experience is with the Python language. I know that if I want to test a script I have written in Python, I can simply launch a terminal and type in “python” with the path of the file I want to run. I tried a web search, but most results didn't seem to understand what I was asking. How do I run C from the terminal?
C is not an interpreted language like Python or Perl. You cannot simply type C code and then tell the shell to execute the file. You need to compile the C file with a C compiler like gcc, then execute the binary file it outputs. For example, running gcc file.c will output a binary file with the name a.out . You can then tell the shell to execute the binary file by specifying the file's full path ./a.out . Edit: As some comments and other answers have stated, there are some C interpreters that exist. However, I would argue that C compilers are more popular.
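A typical compile-and-run cycle, assuming a source file called hello.c:

gcc -Wall -o hello hello.c   # compile; -o picks the output name instead of a.out
./hello                      # run the resulting binary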
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270306/" ] }
439,467
I’m having fun with OpenSSH, and I know the /etc/ssh directory is for the ssh daemon and the ~/.ssh directory is for a particular user. Both directories contain private and public keys. But what is the difference between those keys? I’m confused because the ones I use as a user are in my home directory, and what are the roles of the keys found in /etc/ssh ?
/etc/ssh provides configuration for the system: default configuration for users ( /etc/ssh/ssh_config ), and configuration for the daemon ( /etc/ssh/sshd_config ). The various host files in /etc/ssh are used by the daemon: they contain the host keys, which are used to identify the server — in the same way that users are identified by key pairs (stored in their home directory), servers are also identified by key pairs. Multiple key pairs are used because servers typically offer multiple types of keys: RSA, ECDSA, and Ed25519 in your case. (Users can also have multiple keys.) The various key files are used as follows: your private key, if any, is used to identify you to any server you’re connecting to (it must then match the public key stored in the server’s authorized keys for the account you’re trying to connect to); the server’s private key is used by the client to identify the server; such identities are stored in ~/.ssh/known_hosts , and if a server’s key changes, SSH will complain about it and disable certain features to mitigate man-in-the-middle attacks; your public key file stores the string you need to copy to remote servers (in ~/.ssh/authorized_keys ); it isn’t used directly; the server’s public key files store strings you can copy to your known hosts list to pre-populate it; it also isn’t used directly. The last part isn’t used all that often; the default SSH model is known as “TOFU” (trust on first use): a connection is trusted by default the first time it’s used, and SSH only cares about unexpected changes . In some cases though it’s useful to be able to trust the first connection too: a server’s operator can communicate the server’s public keys, and users can add these to their known hosts before the first connection. See the ssh_config and sshd_config manpages for details ( man ssh_config and man sshd_config on your system). The format used for known hosts is described in the sshd manpage .
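To pre-populate your known hosts before the first connection, ssh-keyscan can fetch a server's public host key (the host name below is a placeholder; verify the fingerprint against one obtained out of band):

ssh-keyscan -t ed25519 server.example.com >> ~/.ssh/known_hosts
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub   # run on the server: print the fingerprint to compare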
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169380/" ] }
439,471
I want to put a command string in a variable. This is what I am doing ssh=">/dev/null ssh -i key domain" Then I want to call this command: $ssh ls >&2 But this fails with: bash: 1>/dev/null: No such file or directory Is this the expected behavior?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38892/" ] }
439,478
I'm trying to set xinput properties for a USB input device whenever it is connected. I have seen solutions that require a script to run in the background and poll USB devices, but I would like to find a triggered approach rather than one involving user-space polling. I have tried creating a udev rule that runs a script on device connection, but it appears that the connected device is not yet visible to xinput when the udev add rule is triggered. This appears to be a constant order of events rather than a race condition as adding a sleep command to the script also delays the device being listed in xinput list . Is there any reliable method of setting xinput properties for devices when they are connected?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4292/" ] }
439,486
I've moved from Gnome to i3 on Manjaro, and I'm almost done with configuring the window manager, and terminal colors and whatnot. After some time I just decided to listen to some music, and after a couple of minutes I realized that the volume keys and playback keys don't work. I have a Razer Blackwidow Stealth 2014 keyboard, so those media keys are actually together with the Function keys. For example: Play/Pause is on F6 , and it acts as a media key when I press the Fn key, like in Fn + F6 .
The search for the answer After some time messing around with the controls, I've found a post on the old i3 FAQ board: https://faq.i3wm.org/question/3747/enabling-multimedia-keys.1.html It says to paste the following into i3's .config file (below is a lightly modified version, with some lines removed which are not relevant to this particular question):

# Pulse Audio controls
bindsym XF86AudioRaiseVolume exec --no-startup-id pactl set-sink-volume 0 +5% #increase sound volume
bindsym XF86AudioLowerVolume exec --no-startup-id pactl set-sink-volume 0 -5% #decrease sound volume
bindsym XF86AudioMute exec --no-startup-id pactl set-sink-mute 0 toggle # mute sound

# Screen brightness controls
bindsym XF86MonBrightnessUp exec xbacklight -inc 20 # increase screen brightness
bindsym XF86MonBrightnessDown exec xbacklight -dec 20 # decrease screen brightness

# Media player controls
bindsym XF86AudioPlay exec playerctl play-pause
bindsym XF86AudioPause exec playerctl play-pause
bindsym XF86AudioNext exec playerctl next
bindsym XF86AudioPrev exec playerctl previous

And it didn't work either; however, the process of finding the answer is correct. The real answer To me, at least, the problem was that after copying those lines, the keys would not work. After some more research, I found out that the volume commands could be a little different, using amixer instead of PulseAudio's pactl . At the end, those were left like this:

# Media volume controls
bindsym XF86AudioMute exec amixer sset 'Master' toggle
bindsym XF86AudioLowerVolume exec amixer sset 'Master' 5%-
bindsym XF86AudioRaiseVolume exec amixer sset 'Master' 5%+

and they started working. The playback keys were a little trickier. I deduced that the .config tells which command is executed to do the action. Then I proceeded to try playerctl play-pause in my terminal. Of course it didn't work, because playerctl was not installed . After installing it (using sudo pacman -S playerctl ) those keyboard commands worked just fine too.
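If a key still does nothing, it may be emitting a different keysym than the one bound; xev prints the keysym name (e.g. XF86AudioPlay) for whatever key is pressed while its window has focus:

xev | grep keysym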
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439486", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281695/" ] }
439,497
Suppose that I were using sha1pass to generate a hash of some sensitive password on the command line. I can use sha1pass mysecret to generate a hash of mysecret but this has the disadvantage that mysecret is now in the bash history. Is there a way to accomplish the end goal of this command while avoiding revealing mysecret in plain text, perhaps by using a passwd -style prompt? I'm also interested in a generalized way to do this for passing sensitive data to any command. The method would change when the sensitive data is passed as an argument (such as in sha1pass ) or on STDIN to some command. Is there a way to accomplish this? Edit : This question attracted a lot of attention and there have been several good answers offered below. A summary is: As per @Kusalananda's answer , ideally one would never have to give a password or secret as a command-line argument to a utility. This is vulnerable in several ways as described by him, and one should use a better-designed utility that is capable of taking the secret input on STDIN @vfbsilva's answer describes how to prevent things from being stored in bash history @Jonathan's answer describes a perfectly good method for accomplishing this as long as the program can take its secret data on STDIN. As such, I've decided to accept this answer. sha1pass in my OP was just an example, but the discussion has established that better tools exist that do take data on STDIN. as @R.. notes in his answer , use of command expansion on a variable is not safe. So, in summary, I've accepted @Jonathan's answer since it's the best solution given that you have a well-designed and well-behaved program to work with.Though passing a password or secret as a command-line argument is fundamentally unsafe, the other answers provide ways of mitigating the simple security concerns.
If using the zsh or bash shell, use the -s option to the read shell builtin to read a line from the terminal device without it being echoed. IFS= read -rs VARIABLE < /dev/tty Then you can use some fancy redirection to use the variable as stdin. sha1pass <<<"$VARIABLE" If anyone runs ps , all they'll see is "sha1pass". That assumes that sha1pass reads the password from stdin (on one line, ignoring the line delimiter) when not given any argument.
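Putting the two pieces together, a minimal sketch (still assuming the tool reads the secret on stdin; printf is used so it also works in shells without the <<< here-string):

    IFS= read -rs VARIABLE < /dev/tty        # prompts silently; type the secret and press Enter
    printf '%s\n' "$VARIABLE" | sha1pass     # the secret never appears on any command line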
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7868/" ] }
439,514
I have a file with one column with names that repeat a number of times each. I want to condense each repeat into one, while keeping any other repeats of the same name that are not adjacent to other repeats of the same name. E.g. I want to turn the left side to the right side: Golgb1 Golgb1 Golgb1 AknaGolgb1 Spata20Golgb1 Golgb1Golgb1 AknaAknaAknaAknaSpata20Spata20Spata20Golgb1Golgb1Golgb1AknaAknaAkna This is what I've been using: perl -ne 'print if ++$k{$_}==1' file.txt > file2.txt However, this method only keeps one representative from the left (i.e. Golb1 and Akna are not repeated). Is there a way to keep unique names for each block, while keeping names that repeat in multiple, non-adjacent blocks?
uniq will do this for you: $ uniq inputfileGolgb1AknaSpata20Golgb1Akna
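For comparison, a minimal awk sketch that does the same adjacent-duplicate collapse; it can be handy if you later need extra per-line logic:

    awk 'NR == 1 || $0 != prev { print } { prev = $0 }' inputfile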
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213707/" ] }
439,521
I'm trying to stream from a source (a TV box transmitting in multicast at rtp://@X.X.X.X:Y) to the Internet (to my mobile phone, for example, or to another device inside my LAN), but I cannot achieve it. The command I'm using is something like this ffmpeg -i rtp://@X.X.X.X:Y -vcodec copy -f mpegts udp://127.0.0.1:1234 But it does not work as I expected; I'm able to open VLC and play the stream on the same machine where I'm running ffmpeg, but not on another machine in the same LAN. Can somebody help me? Thank you! EDIT: I finally solved it by installing a piece of software called "udpxy" that forwards multicast content to the clients. I installed it on a Raspberry Pi and it works perfectly for this purpose. Thank you for all your explanations; they helped me understand what I want to do and the limitations I have when using a transcoder. I guess I could do the same with ffmpeg as with udpxy, but this way I can publish the TV box IPs directly.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287509/" ] }
439,529
I am trying to run VMware on kali linux but when I try to run it show message that Before you can run VMware several modules must be compiled and loaded into the running kernel Here is log: 2018-04-23T20:11:48.254+04:30| vthread-1| I125: Log for VMware Workstation pid=8508 version=14.1.0 build=build-7370693 option=Release2018-04-23T20:11:48.254+04:30| vthread-1| I125: The process is 64-bit.2018-04-23T20:11:48.254+04:30| vthread-1| I125: Host codepage=UTF-8 encoding=UTF-82018-04-23T20:11:48.254+04:30| vthread-1| I125: Host is Linux 4.15.0-2-amd64 Kali GNU/Linux Rolling2018-04-23T20:11:48.254+04:30| vthread-1| I125: DictionaryLoad: Cannot open file "/usr/lib/vmware/settings": No such file or directory.2018-04-23T20:11:48.254+04:30| vthread-1| I125: [msg.dictionary.load.openFailed] Cannot open file "/usr/lib/vmware/settings": No such file or directory.2018-04-23T20:11:48.254+04:30| vthread-1| I125: PREF Optional preferences file not found at /usr/lib/vmware/settings. Using default values.2018-04-23T20:11:48.254+04:30| vthread-1| I125: DictionaryLoad: Cannot open file "/home/linux/.vmware/config": No such file or directory.2018-04-23T20:11:48.254+04:30| vthread-1| I125: [msg.dictionary.load.openFailed] Cannot open file "/home/linux/.vmware/config": No such file or directory.2018-04-23T20:11:48.254+04:30| vthread-1| I125: PREF Optional preferences file not found at /home/linux/.vmware/config. Using default values.2018-04-23T20:11:48.254+04:30| vthread-1| I125: DictionaryLoad: Cannot open file "/home/linux/.vmware/preferences": No such file or directory.2018-04-23T20:11:48.254+04:30| vthread-1| I125: [msg.dictionary.load.openFailed] Cannot open file "/home/linux/.vmware/preferences": No such file or directory.2018-04-23T20:11:48.254+04:30| vthread-1| I125: PREF Optional preferences file not found at /home/linux/.vmware/preferences. Using default values.2018-04-23T20:11:48.326+04:30| vthread-1| W115: Logging to /tmp/vmware-root/vmware-8508.log2018-04-23T20:11:48.340+04:30| vthread-1| I125: Obtaining info using the running kernel.2018-04-23T20:11:48.340+04:30| vthread-1| I125: Created new pathsHash.2018-04-23T20:11:48.340+04:30| vthread-1| I125: Setting header path for 4.15.0-2-amd64 to "/lib/modules/4.15.0-2-amd64/build/include".2018-04-23T20:11:48.340+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".2018-04-23T20:11:48.340+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h2018-04-23T20:11:48.340+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.2018-04-23T20:11:48.340+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check2018-04-23T20:11:48.348+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".2018-04-23T20:11:48.348+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. 
Whoohoo!2018-04-23T20:11:48.571+04:30| vthread-1| I125: found symbol version file /lib/modules/4.15.0-2-amd64/build/Module.symvers2018-04-23T20:11:48.571+04:30| vthread-1| I125: Reading symbol versions from /lib/modules/4.15.0-2-amd64/build/Module.symvers.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Read 20056 symbol versions2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmmon module.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmnet module.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmblock module.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vmci module.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Reading in info for the vsock module.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Setting vsock to depend on vmci.2018-04-23T20:11:48.597+04:30| vthread-1| I125: Invoking modinfo on "vmmon".2018-04-23T20:11:48.600+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.2018-04-23T20:11:48.600+04:30| vthread-1| I125: Invoking modinfo on "vmnet".2018-04-23T20:11:48.602+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.2018-04-23T20:11:48.602+04:30| vthread-1| I125: Invoking modinfo on "vmblock".2018-04-23T20:11:48.604+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.2018-04-23T20:11:48.604+04:30| vthread-1| I125: Invoking modinfo on "vmci".2018-04-23T20:11:48.606+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.2018-04-23T20:11:48.606+04:30| vthread-1| I125: Invoking modinfo on "vsock".2018-04-23T20:11:48.608+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 0.2018-04-23T20:11:48.623+04:30| vthread-1| I125: to be installed: vmmon status: 02018-04-23T20:11:48.623+04:30| vthread-1| I125: to be installed: vmnet status: 02018-04-23T20:11:48.639+04:30| vthread-1| I125: Obtaining info using the running kernel.2018-04-23T20:11:48.639+04:30| vthread-1| I125: Setting header path for 4.15.0-2-amd64 to "/lib/modules/4.15.0-2-amd64/build/include".2018-04-23T20:11:48.639+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".2018-04-23T20:11:48.639+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h2018-04-23T20:11:48.639+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.2018-04-23T20:11:48.639+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check2018-04-23T20:11:48.646+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".2018-04-23T20:11:48.646+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. 
Whoohoo!2018-04-23T20:11:48.867+04:30| vthread-1| I125: found symbol version file /lib/modules/4.15.0-2-amd64/build/Module.symvers2018-04-23T20:11:48.867+04:30| vthread-1| I125: Reading symbol versions from /lib/modules/4.15.0-2-amd64/build/Module.symvers.2018-04-23T20:11:48.892+04:30| vthread-1| I125: Read 20056 symbol versions2018-04-23T20:11:48.893+04:30| vthread-1| I125: Kernel header path retrieved from FileEntry: /lib/modules/4.15.0-2-amd64/build/include2018-04-23T20:11:48.893+04:30| vthread-1| I125: Update kernel header path to /lib/modules/4.15.0-2-amd64/build/include2018-04-23T20:11:48.893+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".2018-04-23T20:11:48.893+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h2018-04-23T20:11:48.893+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.2018-04-23T20:11:48.893+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check2018-04-23T20:11:48.900+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".2018-04-23T20:11:48.900+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo!2018-04-23T20:11:48.902+04:30| vthread-1| I125: Found compiler at "/usr/bin/gcc"2018-04-23T20:11:48.906+04:30| vthread-1| I125: Got gcc version "7".2018-04-23T20:11:48.906+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove.2018-04-23T20:11:48.910+04:30| vthread-1| I125: Got gcc version "7".2018-04-23T20:11:48.910+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove.2018-04-23T20:11:48.912+04:30| vthread-1| I125: Trying to find a suitable PBM set for kernel "4.15.0-2-amd64".2018-04-23T20:11:48.912+04:30| vthread-1| I125: No matching PBM set was found for kernel "4.15.0-2-amd64".2018-04-23T20:11:48.912+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove.2018-04-23T20:11:48.912+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".2018-04-23T20:11:48.912+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h2018-04-23T20:11:48.912+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.2018-04-23T20:11:48.912+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check2018-04-23T20:11:48.922+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".2018-04-23T20:11:48.922+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. 
Whoohoo!2018-04-23T20:11:48.925+04:30| vthread-1| I125: The GCC version matches the kernel GCC minor version like a glove.2018-04-23T20:11:48.925+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".2018-04-23T20:11:48.925+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h2018-04-23T20:11:48.925+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.2018-04-23T20:11:48.925+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check2018-04-23T20:11:48.937+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".2018-04-23T20:11:48.937+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo!2018-04-23T20:11:48.937+04:30| vthread-1| I125: Using temp dir "/tmp".2018-04-23T20:11:48.940+04:30| vthread-1| I125: Obtaining info using the running kernel.2018-04-23T20:11:48.940+04:30| vthread-1| I125: Setting header path for 4.15.0-2-amd64 to "/lib/modules/4.15.0-2-amd64/build/include".2018-04-23T20:11:48.940+04:30| vthread-1| I125: Validating path "/lib/modules/4.15.0-2-amd64/build/include" for kernel release "4.15.0-2-amd64".2018-04-23T20:11:48.940+04:30| vthread-1| I125: Failed to find /lib/modules/4.15.0-2-amd64/build/include/linux/version.h2018-04-23T20:11:48.940+04:30| vthread-1| I125: /lib/modules/4.15.0-2-amd64/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.2018-04-23T20:11:48.940+04:30| vthread-1| I125: using /usr/bin/gcc-7 for preprocess check2018-04-23T20:11:48.951+04:30| vthread-1| I125: Preprocessed UTS_RELEASE, got value "4.15.0-2-amd64".2018-04-23T20:11:48.951+04:30| vthread-1| I125: The header path "/lib/modules/4.15.0-2-amd64/build/include" for the kernel "4.15.0-2-amd64" is valid. Whoohoo!2018-04-23T20:11:49.171+04:30| vthread-1| I125: found symbol version file /lib/modules/4.15.0-2-amd64/build/Module.symvers2018-04-23T20:11:49.171+04:30| vthread-1| I125: Reading symbol versions from /lib/modules/4.15.0-2-amd64/build/Module.symvers.2018-04-23T20:11:49.196+04:30| vthread-1| I125: Read 20056 symbol versions2018-04-23T20:11:49.196+04:30| vthread-1| I125: Invoking modinfo on "vmmon".2018-04-23T20:11:49.200+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.2018-04-23T20:11:49.200+04:30| vthread-1| I125: Invoking modinfo on "vmnet".2018-04-23T20:11:49.203+04:30| vthread-1| I125: "/sbin/modinfo" exited with status 256.2018-04-23T20:11:49.594+04:30| vthread-1| I125: Setting destination path for vmmon to "/lib/modules/4.15.0-2-amd64/misc/vmmon.ko".2018-04-23T20:11:49.595+04:30| vthread-1| I125: Extracting the vmmon source from "/usr/lib/vmware/modules/source/vmmon.tar".2018-04-23T20:11:49.606+04:30| vthread-1| I125: Successfully extracted the vmmon source.2018-04-23T20:11:49.606+04:30| vthread-1| I125: Building module with command "/usr/bin/make -j4 -C /tmp/modconfig-stxrjw/vmmon-only auto-build HEADER_DIR=/lib/modules/4.15.0-2-amd64/build/include CC=/usr/bin/gcc IS_GCC_3=no"2018-04-23T20:11:52.158+04:30| vthread-1| W115: Failed to build vmmon. 
Failed to execute the build command.2018-04-23T20:11:52.161+04:30| vthread-1| I125: Setting destination path for vmnet to "/lib/modules/4.15.0-2-amd64/misc/vmnet.ko".2018-04-23T20:11:52.161+04:30| vthread-1| I125: Extracting the vmnet source from "/usr/lib/vmware/modules/source/vmnet.tar".2018-04-23T20:11:52.170+04:30| vthread-1| I125: Successfully extracted the vmnet source.2018-04-23T20:11:52.170+04:30| vthread-1| I125: Building module with command "/usr/bin/make -j4 -C /tmp/modconfig-stxrjw/vmnet-only auto-build HEADER_DIR=/lib/modules/4.15.0-2-amd64/build/include CC=/usr/bin/gcc IS_GCC_3=no"2018-04-23T20:11:56.805+04:30| vthread-1| I125: Successfully built vmnet. Module is currently at "/tmp/modconfig-stxrjw/vmnet.o".2018-04-23T20:11:56.805+04:30| vthread-1| I125: Found the vmnet symvers file at "/tmp/modconfig-stxrjw/vmnet-only/Module.symvers".2018-04-23T20:11:56.805+04:30| vthread-1| I125: Installing vmnet from /tmp/modconfig-stxrjw/vmnet.o to /lib/modules/4.15.0-2-amd64/misc/vmnet.ko.2018-04-23T20:11:56.809+04:30| vthread-1| I125: Registering file "/lib/modules/4.15.0-2-amd64/misc/vmnet.ko".2018-04-23T20:11:57.108+04:30| vthread-1| I125: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0.2018-04-23T20:11:57.109+04:30| vthread-1| I125: Registering file "/usr/lib/vmware/symvers/vmnet-4.15.0-2-amd64".2018-04-23T20:11:57.404+04:30| vthread-1| I125: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0. I tried to google but was unable to find relevant post.
Issue At Hand You are reporting that you are unable to run VMware on Kali Linux. According to the errors you have posted your Operating System is missing the VMware modules necessary to run. I will take this time to point out that Kali Linux is not meant as a general purpose Operating System . You may continue to run into these kinds of errors using software not designed for Kali Linux. Running virtualization or hypervisor software is not an intended function of Kali Linux. One possible solution to your issue would be to run your virtualization software on Ubuntu, Debian, or any other general purpose operating system instead. If you wish to continue using Kali Linux or encounter the same error in a different Operating System the following steps may work as a possible solution to the above error. Possible Solutions I will be referencing this post as it contains a few different possible fixes. First off try and run this command: sudo vmware-modconfig --console --install-all This should install all VMware modules. You should now be able to run Vmware as expected. Look over this VMware forum post as they cover additional scripts you may need to run to verify the install process. Alternatively, you could try this first: sudo apt-get install build-essential linux-headers-$(uname -r) open-vm-dkmssudo ln -s /usr/src/linux-headers-$(uname -r)/include/generated/uapi/linux/version.h /usr/src/linux-headers-$(uname -r)/include/linux/version.h After which run: sudo vmware-config-tools.pl . It might be necessary to run sudo vmware-modconfig --console --install-all again after this is complete. Starting from Scratch You may need to start over with a fresh install of VMware. Purge the existing installation by running sudo vmware-installer -u vmware-player . Then rerun the installer script, i.e: ./VMware-*.bundle. I would also verify that your graphics drivers and all other parts of your system are fully up to date. Conclusion Again, I suggest you use a different Operating System than Kali Linux to complete this task. Please read over this post in its entirety before going with a possible fix. Remember you need to install the proper kernel headers for your kernel to get this to work. I am also including a link to a guide on installing VMware on Kali Linux . There are even some comments in that post on how to troubleshoot the issue further. I am also including a link to the Official Kali Linux documentation on how to install VMware tools as well as a link to another stack exchange post that appears to be related to this issue. Please comment if there are any questions about this answer. I appreciate corrections to any misconceptions and feedback on how to improve my posts. Best of Luck!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439529", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/285464/" ] }
439,539
Suppose you have a file like this: NW_006521251.1 428 84134NW_006521251.1 511 84135NW_006521038.1 202 84155NW_006521038.1 1743 84153NW_006521038.1 1743 84154NW_006520495.1 198 84159NW_006520086.1 473 84178NW_006520086.1 511 84180 I want to keep the unique rows based on columns 1 and 2 (i.e. not just column two as this number may repeat under a different label in column one). Such that I get this as output (removes the second repeat of NW_006521038.1 1743 from the list): NW_006521251.1 428 84134 NW_006521251.1 511 84135 NW_006521038.1 202 84155 NW_006521038.1 1743 84153 NW_006520495.1 198 84159 NW_006520086.1 473 84178 NW_006520086.1 511 84180 Is there a way to do this with awk?Using uniq file doesn't work.
There is a "famous" awk idiom for exactly this. You want to do: awk '!seen[$1,$2]++' file That creates an associative array "seen" with the 2 columns as the key. Use the post-increment operator so that, for the first time you encounter that key, the value is zero. Then use the negation operator for a "true" result the first time you see the key.
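If the order of the output does not matter, a sort-based sketch yields the same set of unique (column 1, column 2) rows; unlike the awk idiom it reorders the lines and is not guaranteed to keep the first occurrence of each pair:

    sort -k1,1 -k2,2 -u file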
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213707/" ] }
439,620
Is there a way to list the installed files with pkg for a certain package?
pkg info -l PACKAGENAME or pkg info --list-files PACKAGENAME . You can find the -l option in man pkg-info . (And you can in turn find the pkg info subcommand and a pointer to its aforementioned manual page in man pkg .)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150422/" ] }
439,633
I need to create a while loop that if dmesg returns some/any value, then it should kill a determined process. Here is what I have. #!/bin/bashwhile [ 1 ];doBUG=$(dmesg | grep "BUG: workqueue lockup" &> /dev/null) if [ ! -z "$BUG" ]; then killall someprocessnameelse break fi done I don't know if instead of ! -z I should do [ test -n "$BUG" ] I think with -n it says something about expecting a binary. I don't know if the script will even work because the BUG lockup halts every process, but still there are few more lines in dmesg until the computer gets completely borked - maybe I can catch-up and kill the process.
Some issues: You are running this in a busy loop, which will consume as much resources as it can. This is one instance where sleep ing could conceivably be justified. However, recent versions of dmesg have a flag to follow the output , so you could rewrite the whole thing as (untested) while truedo dmesg --follow | tail --follow --lines=0 | grep --quiet 'BUG: workqueue lockup' killall someprocessnamedone The code should be indented to be readable. It is really strange, but [ is the same as test - see help [ .
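An alternative shape of the same idea, as a sketch (someprocessname is the placeholder from the question): let grep watch the followed kernel log and react once per matching line instead of polling in a loop:

    dmesg --follow | grep --line-buffered 'BUG: workqueue lockup' |
    while read -r line; do
        killall someprocessname    # fire once for every new lockup message
    done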
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251794/" ] }
439,674
In light of memtest86+ not working with UEFI , is there an open source alternative or something I can use from grub to test memory?
Yes, there is, and it is now Memtest86+ v6 itself. This is a new version of Memtest86+, based on PCMemTest , which is a rewrite of Memtest86+ which can be booted from UEFI. Its authors still label it as not ready for production, but it does work in many configurations. Binaries of Memtest86+ v6 are available on memtest.org . Alternatively, the Linux kernel itself contains a memory test tool: the memtest option will run a memory check with up to 17 patterns (currently). If you add memtest to your kernel boot parameters, it will run all tests at boot, and reserve any failing addresses so that they’re not used. If you want fewer tests, you can specify the number of patterns ( memtest=8 for example). This isn’t as extensive as Memtest86+’s tests, but it still gives pretty good results. Some distribution kernels don’t include this feature; you can check whether it’s available by looking for CONFIG_MEMTEST in your kernel configuration (try /boot/config-$(uname -r) ). The kernel won’t complain if you specify memtest but it doesn’t support it; when it does run, you should see output like [ 0.000000] early_memtest: # of tests: 17[ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern 4c494e5558726c7a[ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern 4c494e5558726c7a[ 0.000000] 0x000000000500d000 - 0x0000000007fe0000 pattern 4c494e5558726c7a[ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern eeeeeeeeeeeeeeee[ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern eeeeeeeeeeeeeeee[ 0.000000] 0x000000000500d000 - 0x0000000007fe0000 pattern eeeeeeeeeeeeeeee[ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern dddddddddddddddd[ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern dddddddddddddddd[ 0.000000] 0x000000000500d000 - 0x0000000007fe0000 pattern dddddddddddddddd[ 0.000000] 0x0000000000010000 - 0x0000000000099000 pattern bbbbbbbbbbbbbbbb[ 0.000000] 0x0000000000100000 - 0x0000000003800000 pattern bbbbbbbbbbbbbbbb... while the kernel boots (or in its boot logs, later). You can use QEMU to get a feel for this: qemu-system-x86_64 -kernel /boot/vmlinuz-$(uname -r) -append "memtest console=ttyS0" -nographic (or whichever qemu-system-... is appropriate for your architecture), and look for “early_memtest”. To exit QEMU after the kernel panics, press Ctrl a , c , q , Enter .
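For example, to make the kernel's boot-time test permanent on a GRUB-based distribution, a sketch (the regeneration command varies: update-grub on Debian/Ubuntu derivatives, grub2-mkconfig -o /boot/grub2/grub.cfg on others):

    # in /etc/default/grub:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet memtest=4"
    # then regenerate the GRUB configuration and reboot:
    sudo update-grub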
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
439,680
I'm trying to use shebang with /usr/bin/env form to execute my script under custom interpret. This is how my file looks: $ cat test.rb #!/usr/bin/env winrubyprint "Input someting: "puts "Got: #{gets}"sleep(100) but it fails when executed: $ ./test.rb /usr/bin/env: ‘winruby’: No such file or directory and I do not understand why tv185035@WCZTV185035-001 ~$ winruby --versionruby 2.5.1p57 (2018-03-29 revision 63029) [x64-mingw32]tv185035@WCZTV185035-001 ~$ env winruby --versionenv: ‘winruby’: No such file or directorytv185035@WCZTV185035-001 ~$ which winruby/home/tv185035/bin/winruby The winruby exists, is in path and is executable. But env fails to find it. I took a look at man env but it didn't tell me anything useful. EDIT: $ cat ~/bin/winruby #!/usr/bin/bashwinpty /cygdrive/g/WS/progs/Ruby25-x64/bin/ruby.exe "$@"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439680", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112194/" ] }
439,689
man test only explains what -n means, wit a lowercase n. How does the capital -N work in this script? #!/bin/bash# Check for an altered certificate (means there was a renew)if [[ -N '/etc/letsencrypt/live/mx1.example.com/fullchain.pem' ]]; then # Reload postfix /bin/systemctl reload postfix # Restart dovecot /bin/systemctl restart dovecotfi
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/439689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
439,710
The following simple bash script prints the row number (from the file list.txt ): function get_row{ [[ $1 = 1 ]] && awk '{print $1}' list.txt[[ $1 = 2 ]] && awk '{print $2}' list.txt } get_row 1get_row 2 But I would like to write it more elegantly. Is it possible to set $1 as variable as awk '{print $val}' ,so that if I call a function as get_row 1 , the variable $val gets the value 1 ?
Do you want something like this? function get_column{ awk -v val=$1 '{print $val}' list.txt} The above returns the column selected by the $1 argument passed to the function. If you really need to print the line whose line number is given in $1 , use the following instead. function get_row{ awk -v val=$1 'NR==val{print ; exit}' list.txt} Or let the shell evaluate and set the val value and print that within awk, as follows: function get_column{ awk '{print $val}' val=$1 list.txt} function get_row{ awk 'NR==val{print ; exit}' val=$1 list.txt} Here you are passing val containing only numbers; if val contained a backslash escape sequence you would run into a problem, because awk does C escape sequence processing on values passed via -v val= , so a shell variable with val="\\n" would be turned into a newline by awk .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
439,744
I have a folder containing files and folders. folder/file1.jpgfolder/file2.jpgfolder/file3.jpgfolder/subfolder1/file1.txtfolder/subfolder1/file2.txtfolder/subfolder2/file1.txtfolder/subfolder3/destination/ I want to move all folders (and their content) in a new folder, but not the files. Eg. folder/file1.jpgfolder/file2.jpgfolder/file3.pngdestination/subfolder1/file1.txtdestination/subfolder1/file2.txtdestination/subfolder2/file1.txtdestination/subfolder3/ I know that, if I wanted to select all of my jpeg files (for example), I would do mv folder/*.jpg destination . But what is the command to select all folders?
For that, you just need to include an additional / at the end of * , like this: mv folder/*.jpg destination (match only jpg files) mv folder/* destination (match anything found) mv folder/*/ destination (match only the folders) This will move only the folders inside "folder" to the destination, and not the files inside "folder" (note that the files inside the subfolders are moved along with them).
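An equivalent sketch with find, which also catches hidden directories and copes with very large numbers of entries (the -t option to mv is a GNU coreutils extension):

    find folder -mindepth 1 -maxdepth 1 -type d -exec mv -t destination {} +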
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287678/" ] }
439,752
I have an input: ab TOCHANGEcd e TOCHANGE where I need to change the patterns "TOCHANGE" using an external file : line1line2... so that I get the following output : ab line1 cde line2 I tried the following command : while read k ; do sed -i "s/TOCHANGE/$k/g" input ; done < externalfile but I got : ab line1 cde line1
With perl : perl -pi -e 's{TOCHANGE}{chomp ($repl = <STDIN>); $repl}ge' input <externalfile With awk , assuming TOCHANGE doesn't occur in externalfile (or more generally that replacements don't generate new occurrences of TOCHANGE which could also happen for instance on an input that contains TOTOCHANGE FROMTOCHANGE and externalfile contains CHANGE and WHATEVER ): POSIXLY_CORRECT=1 PAT=TOCHANGE awk ' { while ($0 ~ ENVIRON["PAT"]) { getline repl < "externalfile" gsub(/[&\\]/, "\\\\&", repl) sub(ENVIRON["PAT"], repl) } print }' < input > input.new ( POSIXLY_CORRECT=1 is needed for GNU awk where without which it wouldn't work correctly for replacement strings that contain backslash characters). Note that $PAT above is taken as an extended regular expression. You may need to escape ERE operators if you want them to be treated literally (like PAT='TO\.CHANGE' to replace TO.CHANGE strings).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47338/" ] }
439,755
Memtester has outputted the following response, memtester version 4.3.0 (64-bit)Copyright (C) 2001-2012 Charles Cazabon.Licensed under the GNU General Public License version 2 (only).pagesize is 4096pagesizemask is 0xfffffffffffff000want 10240MB (10737418240 bytes)got 10240MB (10737418240 bytes), trying mlock ...locked.Loop 1/1: Stuck Address : testing 1FAILURE: possible bad address line at offset 0x12325b7a8.Skipping to next test... Random Value : okFAILURE: 0xa003776ad640ac0c != 0xe003776ad640ac0c at offset 0x7a4f2680. Compare XOR : FAILURE: 0xe7139f89d94112c0 != 0x27139f89d94112c0 at offset 0x7a4f2680.FAILURE: 0x4e53ee3a9704bdf5 != 0x4a53ee3a9704bdf5 at offset 0x950b4930. Compare SUB : FAILURE: 0x96ecab120464e9c0 != 0xd6ecab120464e9c0 at offset 0x7a4f2680.FAILURE: 0x7f67022cef637b99 != 0x2b67022cef637b99 at offset 0x950b4930.FAILURE: 0x96c38c9f6e6dd229 != 0xd6c38c9f6e6dd229 at offset 0xe40d2b50. Compare MUL : FAILURE: 0x00000001 != 0x00000002 at offset 0x69394a08.FAILURE: 0x00000001 != 0x00000000 at offset 0x950b4930.FAILURE: 0x400000000000001 != 0x00000001 at offset 0xea6b07a8.FAILURE: 0x400000000000000 != 0x00000000 at offset 0xfb853610.FAILURE: 0x00000000 != 0x800000000000000 at offset 0x12bf3ed10. Compare DIV : FAILURE: 0x777fd9f1ddc6c1cd != 0x777fd9f1ddc6c1cf at offset 0x69394a08.FAILURE: 0x777fd9f1ddc6c1cd != 0x7f7fd9f1ddc6c1cd at offset 0x12bf3ed10. Compare OR : FAILURE: 0x367600d19dc6c040 != 0x367600d19dc6c042 at offset 0x69394a08.FAILURE: 0x367600d19dc6c040 != 0x767600d19dc6c040 at offset 0x7a4f2680.FAILURE: 0x367600d19dc6c040 != 0x3e7600d19dc6c040 at offset 0x12bf3ed10. Compare AND : Sequential Increment: ok Solid Bits : testing 0FAILURE: 0x4000000000000000 != 0x00000000 at offset 0x12325b7a8. Block Sequential : testing 0FAILURE: 0x400000000000000 != 0x00000000 at offset 0xfb853610. Checkerboard : testing 1FAILURE: 0xaaaaaaaaaaaaaaaa != 0xeaaaaaaaaaaaaaaa at offset 0x7a4f2680. Bit Spread : testing 1FAILURE: 0xdffffffffffffff5 != 0xfffffffffffffff5 at offset 0x102e353e8. Bit Flip : testing 0FAILURE: 0x4000000000000001 != 0x00000001 at offset 0x12325b7a8. Walking Ones : testing 40FAILURE: 0xdffffeffffffffff != 0xfffffeffffffffff at offset 0x102e353e8. Walking Zeroes : testing 0FAILURE: 0x400000000000001 != 0x00000001 at offset 0xea6b07a8.FAILURE: 0x400000000000001 != 0x00000001 at offset 0xfb853610. 8-bit Writes : -FAILURE: 0xfeefa0a577dfa825 != 0xdeefa0a577dfa825 at offset 0x4bd600e8. 16-bit Writes : -FAILURE: 0xf3dfa5fff79e950b != 0xf7dfa5fff79e950b at offset 0x2b04cca8.FAILURE: 0x3ffb3fc56e7532c1 != 0x7ffb3fc56e7532c1 at offset 0xe40d2b50.Done. Clearly this shows bad memory. Is it possible to mark this memory as bad in the kernel or hypervisor and keep using it? Or is to put it in File 13 and buy replacement?
Unless you can detect errors reasonably quickly, e.g. with ECC memory or by rebooting regularly with memtest , it’s better to replace the module. You risk silent data corruption. You can tell the kernel to ignore memory by reserving it, with the memmap option (see the kernel documentation for details): memmap=nn[KMG]$ss[KMG] [KNL,ACPI] Mark specific memory as reserved. Region of memory to be reserved is from ss to ss+nn . Example: Exclude memory from 0x18690000-0x1869ffff memmap=64K$0x18690000 or memmap=0x10000$0x18690000 Some bootloaders may need an escape character before '$', like Grub2, otherwise '$' and the following number will be eaten. The difficult part here is figuring out what address ranges to reserve; memtester gives you addresses from its virtual address space, which don’t match physical addresses as needed for memmap . The simplest approach is to boot with memtest , you'll see something like this 4c494e5558726c7a bad mem addr 0x000000012f9eaa78 - 0x000000012f9eaa80 reserved4c494e5558726c7a bad mem addr 0x00000001b86fe928 - 0x00000001b86fe930 reserved0x000000012f9eaa80 - 0x00000001b86fe928 pattern 4c494e5558726c7a The kernel will then inactivate the range that it detects to be bad. You can continue booting with memtest , or use the reserved address ranges to construct memmap arguments instead.
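A small sketch to check afterwards that the option took effect (exact labels may vary by kernel version):

    cat /proc/cmdline                   # confirm the memmap= parameter reached the kernel
    sudo grep -i reserved /proc/iomem   # the reserved physical ranges should include yours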
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
439,801
I have a Linux based process controller that occasionally locks up to the point where you can't ping it (i.e. I can ping it, then it becomes no longer pingable without any modifications to network settings). I'm curious, what process/system is responsible for actually responding to pings? It appears that this process is crashing.
The kernel network stack handles ICMP messages, which are what the ping command sends. If you do not get replies, then, leaving aside network problems or filtering and host-based filtering/rate-limiting/black-holing/etc., it means the machine is probably overloaded by something (which can be transient), or the kernel crashed, which is rare but can happen (faulty hardware, etc.), not necessarily because of the ICMP traffic (but trying to overload it with such traffic can be a good test at the beginning of a server's life to see how it holds up). In the latter case of a kernel crash you should have ample information in the log files or on the console. Also note that ping is almost always the wrong tool to check whether a service is online or not, for various reasons, but mostly because it does not mimic real application traffic, by definition. For example, if you need to check that a webserver is still live, you should instead do an HTTP query to it (TCP port 80 or 443); if you need to check a mailserver, you do an SMTP query (TCP port 25); if a DNS server, a UDP and a TCP query to port 53; etc.
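A few sketches of such service-level checks (host.example is a placeholder):

    curl -sS -o /dev/null -w '%{http_code}\n' http://host.example/   # web server: expect 200, 301, ...
    dig @host.example example.com +short                             # DNS server: expect an answer
    printf 'QUIT\r\n' | nc -w 5 host.example 25                      # SMTP server: expect a 220 banner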
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/439801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287718/" ] }
439,851
I have a text file that looks like this (111)1111111(111)-111-1111(111)111-1111111.111.1111 that I'm using to practice group capturing with regex and sed. The command I am running on the file (called test) is sed 's/(?\(\d(3}\)[-.]?\(\d{3}\)[-.]?\(\d{4}\)/\1\2\3' test > output Expecting the output that is just all 1's on every line. However, what I'm getting is just the entire file with no changes. What's going wrong?
In standard basic regex, (?\(\d(3}\)[-.]? means: a literal left parenthesisa literal question mark(start of a group)a literal character 'd'a literal left parenthesis the number '3'a literal closing brace(end of group)a dash or a dota question mark i.e., this will print x : echo '(?d(3}-?' |sed 's/(?\(\d(3}\)[-.]?/x/' You're very likely to want sed -E to enable extended regular expressions (ERE), and to then use ( and ) for grouping, and \( and \) for literal parenthesis. Also note that \d is part of Perl regexes, not standard ones, and while GNU sed supports some \ X escapes, they're not standard (and I don't think it supports \d ). Same for \? , GNU sed supports it in BRE to mean what ? means in ERE, but it's not standard. With all that in mind: $ echo '(123)-456-7890' | sed -E 's/\(?([0-9]{3})\)?[-.]?([0-9]{3})[-.]?([0-9]{4})/\1\2\3/'1234567890 Though you might almost as well just brute force it and just remove everything but the digits: $ echo '(123)-456-7890' | sed -e 's/[^0-9]//g'1234567890 (that would of course also accept stuff like (123)-4.5-6-7a8b9c0 ...) See also: The regex(7) man page Why does my regular expression work in X but not in Y?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/284384/" ] }
439,903
Claws Mail uses the MH format according to its manual . Mbox vs Maildir: Mail Storage Formats states that The Unix world has two ways of storing mail messages, the traditional mbox format and the newer maildir format. Postfix and Dovecot supports the two mail storage format so you can use any format, but I highly recommend you use the maildir format. Is the MH format used by Claws Mail different from both Mbox and Maildir? Is a special designed format?
MH is the name of a set of programs used to handle email; their current incarnation is nmh . The “format” they define, which uses one file per message and a simple directory layout , is not the same as either mbox or Maildir. As you mention, this is Claws Mail’s native format. Maildir is effectively a descendant of MH, fixing a number of problems around synchronisation with its tmp , new and cur sub-directories. MH and Maildir aren’t directly compatible, so Claws Mail on its own can’t use Maildir directories; it has a plugin to add support but it’s unmaintained.
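As an illustration of that layout, an MH-style mail folder is simply a directory in which each message is a plain file named with a sequence number (paths are indicative only; Claws Mail's default mailbox location and bookkeeping files differ from nmh's):

    ~/Mail/inbox/1                 # one complete mail message per numbered file
    ~/Mail/inbox/2
    ~/Mail/inbox/.mh_sequences     # nmh's record of message sequences (e.g. unseen)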
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/439903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102745/" ] }
439,924
Saying that I want to check the size of each file in some directory. Here is what I do: du -sh * Also, I can do: ls | xargs du -sh The two commands do exactly the same thing. I want to know if both of them are exactly the same, such as their cost, their efficiency etc. (The first command is lighter than the second I guess?)
One is correct, the other isn’t. du -sh * (should be du -sh -- * to avoid problems with filenames starting with - ) relies on the shell to expand the glob * ; du sees all the non-hidden files and directories in the current directory as individual arguments. This handles special characters correctly. ls | xargs du -sh relies on xargs to process ls ’s output. xargs splits its input on whitespace (at least space, tab and newline, more with some implementations), also understanding some form of quoting, and runs du (one (even for an empty input¹) or more invocations) with every single whitespace-separated string as individual arguments. Both appear equivalent if your current directory doesn’t contain files with whitespace, single quote, double quote or backslash characters in their names, and if there are few enough files (but at least one) that xargs runs only one du invocation, but they’re not. In terms of efficiency, du -sh * uses one process, ls | xargs du -sh uses at least three. There is one scenario where the pipe approach will work, while the glob won’t: if you have too many files in the current directory, the shell won’t be able to run du with all their names in one go, but xargs will run du as many times as necessary to cover all the files, in which case you would see several lines, and files with more than one hard link may be counted several times. See also Why *not* parse `ls`? ¹ If there's no non-hidden file in the current directory du -sh -- * will either fail with an error by your shell, or with some shells like bash run du with a literal * as argument and du will complain about that * file not existing. While with ls | xargs du -sh -- , most xargs implementations (exceptions being some BSD) will run du with no argument and so give the disk usage of the current directory (so also including the disk usage of the directory file itself and all hidden files and directories in it)
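If you do want an explicit pipeline (for instance to batch the arguments deliberately), a NUL-delimited sketch with GNU find and xargs avoids the filename pitfalls described above; note that, unlike the glob, it also includes hidden entries:

    find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 du -sh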
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
439,959
Is there an easy (preferably command line) way to reverse the pages in a PDF file?
PDFtk can also do this (it’s available in most distributions as pdftk ): pdftk myfile.pdf cat end-1 output myfilereversed.pdf
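If qpdf is installed, the same reversal can be written with its page-range syntax, where z denotes the last page (a sketch):

    qpdf --empty --pages myfile.pdf z-1 -- myfilereversed.pdf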
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10047/" ] }
439,965
I'm building a lighting node, and run the command [sudo mount -a] I get this error saying my UUID I can't be found. Can someone please help?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287831/" ] }
439,973
Which command can I use to: Enter the last modified directory in the current directory. If no directory exists or is possible to enter, do nothing. I'm looking for an alias named ca that will work at least in zsh , but also in bash and ash , if possible.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/439973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3920/" ] }
440,004
I see for syslog logging, kill -HUP is used. /var/log/cron/var/log/maillog/var/log/messages/var/log/secure/var/log/spooler{ sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true endscript} I understood that -HUP is used because daemons like syslog, when they catch the SIGHUP, will try to restart itself and thus all the openfiles will be refreshed. I do not understand why they need to be refreshed. If syslog does only appending new log to the log files, the open files would be in write mode. If that is the case, when the log switching happens and at some point when the old log file entry in the filesystem is removed, won't it be creating a new file automatically when it needs to append a new log line (as afterall syslog service is running as root)? I think the difference is more in the understanding of w and u modes. I am unable to come to a quick conclusion on it. Also, why use only kill -HUP, why not restarting the service. Will there be any difference?
Generally, services keep their log files open while they are running. This means that they do not care if the log files are renamed/moved or deleted: they will continue to write to the open file handle. When logrotate moves the files, the services keep writing to the same file. Example: crond writes to /var/log/cron.log. Then logrotate renames the file to /var/log/cron.log.1, so crond keeps writing to the open file, now named /var/log/cron.log.1. Sending the HUP signal to crond forces it to close the existing file handle and open a new file handle to the original path /var/log/cron.log, which creates a new file. The use of the HUP signal rather than another one is at the discretion of the program. Some services, like php-fpm, listen for the USR1 signal to reopen their file handles without terminating.
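For programs that cannot reopen their log file on any signal, logrotate also offers the copytruncate directive in place of a postrotate signal; a sketch (the path is a placeholder, and a few lines written during the copy may be lost):

    /var/log/myapp.log {
        weekly
        rotate 4
        compress
        copytruncate
    }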
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74221/" ] }
440,088
I'm using Ubuntu 16.04 with Bash and I tried to read in Wikipedia , in here and in here , but I failed to understand the meaning of "command substitution" in shell-scripting in general, and in Bash in particular, as in: $(command) or `command` What is the meaning of this term? Edit: When I first published this question I already knew the pure concept of substitution and also the Linux concept of variable substitution (replacing a variable with its value by execution), yet I still missed the purpose of this shell feature from the documentation for whatever reason or group of reasons. My answer after question locked Command substitution is an operation with dedicated syntax to both execute a command and to have this command's output hold (stored) by a variable for later use. An example with date : thedate="$(date)" We can then print the result using the command printf : printf 'The date is %s\n' "$thedate" The command substitution syntax is $() . The command itself is date . Combining both we get $(date) , its value is the result of the substitution (that we could get after execution ). We save that value in a variable, $thedate , for later use. We display the output value held by the variable with printf , per the command above. Note: \n in printf is a line-break.
"Command substitution" is the name of the feature of the shell language that allows you to execute a command and have the output of that command replace (substitute) the text of the command. There is no other feature of the shell language that allows you to do that. A command substitution, i.e. the whole $(...) expression, is replaced by its output, which is the primary use of command substitutions. The command that the command substitution executes, is executed in a subshell, which means it has its own environment that will not affect the parent shell's environment. Not all subshell executions are command substitutions though (see further examples at end). Example showing that a command substitution is executed in a subshell: $ s=123$ echo "hello $( s=world; echo "$s" )"hello world$ echo "$s"123 Here, the variable s is set to the string 123 . On the next line, echo is invoked on a string containing the result of a command substitution. The command substitution sets s to the string world and echoes this string. The string world is the output of the command in the command substitution and thus, if this was run under set -x , we would see that the second line above would have been expanded to echo 'hello world' , which produces hello world on the terminal: $ set -x$ echo "hello $( s=world; echo "$s" )"++ s=world++ echo world+ echo 'hello world'hello world ( bash adds an extra level of + prompts to every level of a command substitution subshell in the trace output, other shells may not do this) Lastly, we show that the command inside the command substitution was run in its own subshell, because it did not affect the value of s in the calling shell (the value of s is still 123 , not world ). There are other situations where commands are executed in subshells, such as in echo 'hello' | read message In bash , unless you set the lastpipe option (only in non-interactive instances), the read is executed in a subshell, which means that $message will not be changed in the parent shell, i.e. doing echo "$message" after the above command will echo an empty string (or whatever value $message was before). A process substitution in bash also executes in a subshell: cat < <( echo 'hello world' ) This too is distinct from a command substitution.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/440088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
440,129
I was trying to find the naming convention for Linux commands. For commands like cp , rm , mv , etc it seems to be based on first and second last character like move is mv list is ls copy is cp change directory is cd , the first character of two words, which makes sense while sometimes ignoring vowels in some commands, which makes sense. On the other hand, the command mkdir is not based on the previous perception: "make directory" is mkdir which should be more like md . As we have naming convention for a variable in a bash script and another guide line (for example 1 , 2 ), I am wondering whether any similar convention exists for these commands.
Historically, UNIX commands are short because, at the time the OS was created, memory was scarce, networks were slow, keyboards were hard to use, and terminals (when available -- most of the time you got the output as a paper printout) had low resolution. So it made sense to economise as much as possible. Clearly one couldn't shorten every command to the absolute minimum, and yours is a good example: md could better be attributed to a command that generates an MD hash, and a common shorthand for "directory" is "dir", so choosing mkdir for a command that creates a directory makes perfect sense. See also my answer here: Why are UNIX/POSIX system call namings so illegible? To sum up, there is no convention - as far as I know - for UNIX command names, apart from the guidelines mentioned above, which come from historical technical limitations.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282797/" ] }
440,197
I am trying to execute this simple bash script but I am getting a syntax error. I followed this simple documentation here https://www.cyberciti.biz/faq/bash-for-loop/ but no luck. I am not sure what am I doing wrong? #!/bin/bashfor i in 1 2 3 4 5do echo "Count $i"done; error: setup.sh: 3: setup.sh: Syntax error: word unexpected (expecting "do") I am executing that script from my Windows Subsystem Linux (WSL) here.
Your script file is a DOS text file. The extra carriage returns at the end of each line confuses bash (it sees do\r rather than do ). Convert it to a Unix text file using a tool such as dos2unix , or make sure that your editor saves it as a Unix text file.
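If you would rather check and fix the file by hand than install dos2unix, something like the following works (a sketch; setup.sh is the filename from the error message):
$ file setup.sh               # a DOS file is reported with "CRLF line terminators"
$ cat -A setup.sh | head      # DOS lines end in ^M$ instead of just $
$ sed -i 's/\r$//' setup.sh   # GNU sed: strip the trailing carriage return from every line
After that the script should run with either sh or bash.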
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82080/" ] }
440,212
I have found some "almost good" examples, but none of them worked as it should. I have a file a.txt and a file checkme.txt .I want to execute the md5sum checkme.txt command and write it as a 10th line of the a.txt file. I tried to use the sed command, but it didn't work. Can anyone help me, please?
You can combine command substitution with sed 's insert command. With GNU sed : sed -i "10i $(md5sum checkme.txt)" a.txt runs md5sum checkme.txt and inserts its output as the new 10th line of a.txt (the old line 10 and everything after it shift down by one). If you want to overwrite the existing 10th line instead, use the change command c in place of i : sed -i "10c $(md5sum checkme.txt)" a.txt Note that the one-line 10i text form and in-place editing with -i are GNU extensions; with a strictly POSIX sed you would write the insert command with a backslash and a newline, redirect to a temporary file and then move that back over a.txt .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288047/" ] }
440,213
sudo yum list all | grep openconnect NetworkManager-openconnect.x86_64 0.8.6.0-3.el6 epel openconnect.i686 7.07-2.el6 epel openconnect.x86_64 7.07-2.el6 epel openconnect-devel.i686 7.07-2.el6 epel openconnect-devel.x86_64 7.07-2.el6 epel Tried to install NetworkManager-openconnect sudo yum install NetworkManager-openconnect Ended up in dependencies error. Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libgnome-keyring.so.0()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: NetworkManager >= 1:0.8.1Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libgdk-x11-2.0.so.0()(64bit)Error: Package: openconnect-7.07-2.el6.x86_64 (epel) Requires: libpcsclite.so.1()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libgconf-2.so.4()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libnm-util.so.1()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libgtk-x11-2.0.so.0()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libgdk_pixbuf-2.0.so.0()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libnm-glib-vpn.so.1()(64bit)Error: Package: NetworkManager-openconnect-0.8.6.0-3.el6.x86_64 (epel) Requires: libnm-glib.so.2()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Can you please try to help me to install occlient on Amazon linux?
The dependency errors show what is going on: NetworkManager-openconnect from EPEL 6 is the desktop GUI plugin and pulls in GTK, GNOME keyring and NetworkManager libraries that are simply not available on Amazon Linux (a headless, server-oriented distribution), and even the plain openconnect package wants libpcsclite.so.1 for smart-card support. On an EC2 instance you normally do not need the NetworkManager plugin at all - the command-line client is enough. So either install only the openconnect package after satisfying its one missing library (on RHEL/CentOS-style repositories that library usually comes from a pcsc-lite-libs package; check what your Amazon Linux repositories call it with yum search pcsc), or build openconnect from source, which mainly needs libxml2, zlib and a TLS library plus their -devel packages. Once installed you connect with something like sudo openconnect vpn.example.com . The package names above are the usual ones but may differ on your particular Amazon Linux release, so verify them before relying on them.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288048/" ] }
440,239
I have some service which produces logs in the file logs.log . I have some other command which interacts with this service. Let's say it is some foo.sh . What I need, is to cut and save logs from logs.log exactly during foo.sh running. In other words I need that part of service's logs when it interacts with my foo.sh (so I don't care about foo.sh 's logs). I would expect that this command will do the trick, but it continues reading the file when foo.sh has already finished: > foo.sh | tail -f logs.log > foo_part.log Is any nice way to perform this trick?
This is made rather straightforward by sending your background processes to, well, the background: foo.sh &mypid=$!tail -f /path/to/logs.log > /path/to/partial.log &tailpid=$!wait $mypidkill -TERM $tailpid $! captures the PID of the last job sent to run in the background, so we can wait on your script to finish, and then kill the tail process when we no longer need it.
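If your tail is GNU tail, its --pid option does the cleanup for you: tail exits by itself shortly after the watched process ends, so no explicit kill is needed (a sketch with the same placeholder paths):
foo.sh &
tail -n 0 -f --pid=$! /path/to/logs.log > /path/to/partial.log
Here tail runs in the foreground and returns once foo.sh has finished (it checks the PID roughly once a second); -n 0 starts at the current end of the file, so partial.log holds only what was logged after foo.sh started.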
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62243/" ] }
440,284
I'm trying to run a script which downloads a file from a VM, but I cannot specify the VM that is passed in. However, I want to ensure that the download is successful regardless of the state of the passed in VM, and I know it is a unix/linux system. My question is - is there a specific file I can use to download that would be guaranteed to exist on all unix/linux systems? Thanks!
Just about the only thing that's guaranteed to exist is the root directory itself. There are a lot of standards that are almost universally adhered to, but it's entirely possible for a bored systems engineer to roll out a Linux build where /etc lives in /andsoforth , /bin lives in /cupboard , /usr lives in /luser , and so forth. In such a system, all bets are off.
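In practice, if the goal is just to have something to download for a test, it is safer to probe a short list of very likely candidates than to hard-code a single path (a sketch - $vm and the use of ssh/scp are placeholders for however you actually reach the machine):
for f in /etc/passwd /etc/hosts /bin/sh; do
    if ssh "$vm" test -r "$f"; then
        scp "$vm:$f" ./downloaded-test-file && break
    fi
done
Any reasonably standard system will have at least one of those, and the loop degrades gracefully on the exotic ones.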
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78706/" ] }
440,364
When I use this command: root:~# systemctl Output is: System has not been booted with systemd as init system (PID 1). Can't operate This problem occurred in "Kali Linux" and "Debian 9"How can I resolve this problem?
To start and stop services without having to worry about which init system is in use, you should use service : service openvas start will use whatever command is appropriate to start the openvas service.
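For example (a sketch - substitute the service names that exist on your system; the available set differs per distribution):
service openvas status      # ask the wrapper to report the service state
service openvas restart     # stop and start it again
service --status-all        # on Debian-based systems, list all known services
Under the hood the wrapper calls systemctl, the SysV init script or upstart as appropriate, which is exactly why it keeps working in environments such as WSL or containers where systemd is not PID 1.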
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277996/" ] }
440,369
I have the following directory structure (example) - main|--src |--com |- company |--org |- apache|--resources |--abc|--etc I need the disk space used by each sub-directory under the main directory. So, the output would be something like - user@main> du -{command_switches}1M src20M resources3M etc I have tried several switches available - sh, Sh, ash, aSH - but I could not get the required result.
Some du implementations support -d ¹ to limit the depth at which disk usage is displayed (not at which disk usage is accounted ), so du -hd 1 . should work for the current directory. Portably, you can always do: find . ! -name . -prune -type d -exec du -s {} + (add -h if your du implementation supports it) Though note that if there are a lot of directories in the current directory, find may end up running several invocations of du which could mean that some hard links are counted several times. Some du implementations also don't prevent hard links from being counted several times if they're encountered through the traversal of different arguments. ¹, with older versions of GNU du , you may need --max-depth instead. The -d short option equivalent was only added in coreutils 8.8 for compatibility with FreeBSD
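To see which subdirectories are actually the big ones, it is handy to sort the output by size; both -h flags below are GNU extensions, so this is a sketch for a GNU userland:
du -hd 1 . | sort -rh | head -n 10    # the ten largest entries directly under the current directory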
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199417/" ] }
440,390
The following sequence gives me the return value of the first command, not the 2nd as I would have expected (no matter if I run the 1st command in a subshell): sudo systemctl start x; sudo systemctl is-active --quiet x; echo $?;(sudo systemctl start x); sudo systemctl is-active --quiet x; echo $?; The service x is broken and could not be started - so he's not running. The following command, ran stand-alone, gives me a correct return value of 3 as it should be: sudo systemctl is-active --quiet x; echo $?; So, why am I getting the return value of the first command ( 0 ) when running command; command; echo $? instead of the return value ( 3 ) of the second with echo $? ? I'm on GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) . I know, that if I split it on 2 lines, it works: sudo systemctl start x;sudo systemctl is-active --quiet x; echo $?; But I need to have it as a one-liner, as I'm putting it in PHP shell_exec() function. And running twice shell_exec() has the same result as putting the commands in one line.
When I encounter an issue like this, I tend to follow Sherlock Holmes’ mantra, and consider what is left, however implausible, once the impossible is eliminated. Of course with computers nothing is impossible, however some things are so unlikely we can ignore them at first. (This makes more sense with the original title, “ command; command; echo $? — return value is not correct, why?”) In this case, if sudo systemctl start x; sudo systemctl is-active --quiet x; echo $?; shows that $? is 0, that means that systemctl is-active really did indicate success. The fact that a separate systemctl is-active shows that the service isn’t active strongly suggests that there’s a race between the service and the human operator typing the commands; basically, that the service does start, to a sufficient extent for systemctl start to finish, and systemctl is-active to run and find the service active, but then the service fails, so a human-entered systemctl is-active finds it inactive. Adding a short delay between systemctl start and systemctl is-active should avoid the false positive.
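A minimal way to confirm the race while keeping everything on one line (as the PHP shell_exec() constraint requires) is to give the unit a moment to fail before testing it - the 2-second delay here is an arbitrary choice:
sudo systemctl start x; sleep 2; sudo systemctl is-active --quiet x; echo $?
If the echoed value now comes out as 3 (inactive/failed) instead of 0, the race is confirmed.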
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264975/" ] }
440,426
When a script runs, commands in it may output some text to stdout/stderr. Bash itself may also output some text. But if a few scripts are running at the same time, it is hard to identify where does an error come from. So is it possible to insert a prefix to all output of the script? Something like: #!/bin/bashprefix 'PREFIX' &2echo "wrong!" >&2 Then: $ ./script.shPREFIXwrong!
You can redirect stderr/stdout to a process substitution that adds the prefix of choice. For example, this script: #! /bin/bashexec > >(trap "" INT TERM; sed 's/^/foo: /')exec 2> >(trap "" INT TERM; sed 's/^/foo: (stderr) /' >&2)echo fooecho bar >&2date Produces this output: foo: foofoo: (stderr) barfoo: Fri Apr 27 20:04:34 IST 2018 The first two lines redirect stdout and stderr respectively to sed commands that add foo: and foo: (stderr) to the input. The calls to the shell built-in command trap make sure that the subshell does not exit when terminating the script with Ctrl+C or by sending the SIGTERM signal using kill $pid . This ensures that your shell won't forcefully terminate your script because the stdout file descriptor disappears when sed exits because it received the termination signal as well. Effectively you can still use exit traps in your main script and sed will still be running to process any output generated while running your exit traps. The subshell should still exit after your main script ends so sed process won't be left running forever.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/440426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50636/" ] }
440,467
When I boot my Raspberry Pi3 (4.14.34-v7+), I find the following error in the dmesg and other logs, after boot up. However, I am not currently using triggerhappy , so could probably disable that service. But in case I want to use in the future, I would like to know what is causing that error. systemd-udevd[157]: Process '/usr/sbin/th-cmd --socket /var/run/thd.socket --passfd --udev' failed with exit code 1. There are two entries in the systemd services: systemctl status triggerhappy.servicesystemctl status triggerhappy.socket And the code trying to be executed seem to come from: /lib/udev/rules.d/60-triggerhappy.rules : Why does this fail during boot? (It seem to run later though...)
Why: This error is being caused by a combination of things: 1) The command th-cmd --socket /var/run/thd.socket --passfd --udev produces a segfault . This seems to be because triggerhappy hasn't been patched to address a number of issues reported over the last 4 years... https://github.com/wertarbyte/triggerhappy/issues 2) Unfortunately, the error will still appear in syslog even if you disable triggerhappy Eg: $ sudo systemctl disable triggerhappy.service$ sudo systemctl disable triggerhappy.socket This is because disabling triggerhappy doesn't remove the udev rules here: /lib/udev/rules.d/60-triggerhappy.rules . Solution (If you're not using triggerhappy anyway - like on a headless system): $ sudo apt-get remove triggerhappy
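If you want to keep triggerhappy installed for later but silence the boot-time error, you can mask just the udev rule instead of removing the package (a sketch - it relies on rules in /etc/udev/rules.d overriding same-named rules in /lib/udev/rules.d):
sudo ln -s /dev/null /etc/udev/rules.d/60-triggerhappy.rules
sudo systemctl disable triggerhappy.service triggerhappy.socket
Delete the symlink again when you actually want to use triggerhappy.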
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56324/" ] }
440,534
I've a bash file on my project root with this line $ ls | grep -P '^some_pattern_matching_regex_goeshere.txt$' | xargs rm -f The above line removes all the .txt files from the project root but when I push all the .txt files to another folder e.g process_logs/ and try the same commands with ls , grep and rm its doesn't work. This is what I tried but not worked to removed files on the process_logs directory. $ ls process_logs/ | grep -P '^some_pattern_matching_regex_goeshere.txt$' | xargs rm -f N.B : I've also tried the command with simple regex pattern like ls process_logs/ | grep -P '^*.txt$' | xargs rm -f to remove files from directory but It doesn't work though.
Try using find instead. If you don't want find to be recursive, you can use depth options: find /process_logs -maxdepth 1 -mindepth 1 -type f -name 'some_shell_glob_pattern_here' -delete Parsing the output of ls is not recommended because ls is not always accurate because ls prints a human-readable version of the filename, which may not match the actual filename. For more info see the parsing ls wiki article.
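If the file names really need a regular expression rather than a shell glob, GNU find can do the matching itself, so grep is not needed at all (a sketch - the pattern is a placeholder; note that -regex matches the whole path, hence the leading .*/ ):
find process_logs/ -maxdepth 1 -mindepth 1 -type f -regextype posix-extended -regex '.*/report_[0-9]+\.txt' -delete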
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288313/" ] }
440,558
How can I describe or explain "buffers" in the output of free ? $ free -h total used free shared buff/cache availableMem: 501M 146M 19M 9.7M 335M 331MSwap: 1.0G 85M 938M$ free -w -h total used free shared buffers cache availableMem: 501M 146M 19M 9.7M 155M 180M 331MSwap: 1.0G 85M 938M I don't have any (known) problem with this system. I am just surprised and curious to see that "buffers" is almost as high as "cache" (155M v.s. 180M). I thought "cache" represented the page cache of file contents, and tended to be the most significant part of "cache/buffers". I'm not sure what "buffers" are though. For example, I compared this to my laptop which has more RAM. On my laptop, the "buffers" figure is an order of magnitude smaller than "cache" (200M v.s. 4G). If I understood what "buffers" were then I could start to look at why the buffers grew to such a larger proportion on the smaller system. From man proc (I ignore the hilariously outdated definition of "large"): Buffers %lu Relatively temporary storage for raw disk blocks that shouldn't get tremendously large (20MB or so). Cached %lu In-memory cache for files read from the disk (the page cache). Doesn't include SwapCached. $ free -Vfree from procps-ng 3.3.12$ uname -r # the Linux kernel version4.9.0-6-marvell$ systemd-detect-virt # this is not inside a virtual machinenone$ cat /proc/meminfoMemTotal: 513976 kBMemFree: 20100 kBMemAvailable: 339304 kBBuffers: 159220 kBCached: 155536 kBSwapCached: 2420 kBActive: 215044 kBInactive: 216760 kBActive(anon): 56556 kBInactive(anon): 73280 kBActive(file): 158488 kBInactive(file): 143480 kBUnevictable: 10760 kBMlocked: 10760 kBHighTotal: 0 kBHighFree: 0 kBLowTotal: 513976 kBLowFree: 20100 kBSwapTotal: 1048572 kBSwapFree: 960532 kBDirty: 240 kBWriteback: 0 kBAnonPages: 126912 kBMapped: 40312 kBShmem: 9916 kBSlab: 37580 kBSReclaimable: 29036 kBSUnreclaim: 8544 kBKernelStack: 1472 kBPageTables: 3108 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 1305560 kBCommitted_AS: 1155244 kBVmallocTotal: 507904 kBVmallocUsed: 0 kBVmallocChunk: 0 kB$ sudo slabtop --once Active / Total Objects (% used) : 186139 / 212611 (87.5%) Active / Total Slabs (% used) : 9115 / 9115 (100.0%) Active / Total Caches (% used) : 66 / 92 (71.7%) Active / Total Size (% used) : 31838.34K / 35031.49K (90.9%) Minimum / Average / Maximum Object : 0.02K / 0.16K / 4096.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 59968 57222 0% 0.06K 937 64 3748K buffer_head 29010 21923 0% 0.13K 967 30 3868K dentry 24306 23842 0% 0.58K 4051 6 16204K ext4_inode_cache 22072 20576 0% 0.03K 178 124 712K kmalloc-32 10290 9756 0% 0.09K 245 42 980K kmalloc-96 9152 4582 0% 0.06K 143 64 572K kmalloc-node 9027 8914 0% 0.08K 177 51 708K kernfs_node_cache 7007 3830 0% 0.30K 539 13 2156K radix_tree_node 5952 4466 0% 0.03K 48 124 192K jbd2_revoke_record_s 5889 5870 0% 0.30K 453 13 1812K inode_cache 5705 4479 0% 0.02K 35 163 140K file_lock_ctx 3844 3464 0% 0.03K 31 124 124K anon_vma 3280 3032 0% 0.25K 205 16 820K kmalloc-256 2730 2720 0% 0.10K 70 39 280K btrfs_trans_handle 2025 1749 0% 0.16K 81 25 324K filp 1952 1844 0% 0.12K 61 32 244K kmalloc-128 1826 532 0% 0.05K 22 83 88K trace_event_file 1392 1384 0% 0.33K 116 12 464K proc_inode_cache 1067 1050 0% 0.34K 97 11 388K shmem_inode_cache 987 768 0% 0.19K 47 21 188K kmalloc-192 848 757 0% 0.50K 106 8 424K kmalloc-512 450 448 0% 0.38K 45 10 180K ubifs_inode_slab 297 200 0% 0.04K 3 99 12K eventpoll_pwq 288 288 100% 1.00K 72 4 288K kmalloc-1024 288 288 100% 0.22K 16 18 64K mnt_cache 287 283 0% 1.05K 
41 7 328K idr_layer_cache 240 8 0% 0.02K 1 240 4K fscrypt_info
What is the difference between "buffers" and the other type of cache? Why is this distinction so prominent? Why do some people say "buffer cache" when they talk about cached file content? What are Buffers used for? Why might we expect Buffers in particular to be larger or smaller? 1. What is the difference between "buffers" and the other type of cache? Buffers shows the amount of page cache used for block devices. "Block devices" are the most common type of data storage device. The kernel has to deliberately subtract this amount from the rest of the page cache when it reports Cached . See meminfo_proc_show() : cached = global_node_page_state(NR_FILE_PAGES) - total_swapcache_pages() - i.bufferram;...show_val_kb(m, "MemTotal: ", i.totalram);show_val_kb(m, "MemFree: ", i.freeram);show_val_kb(m, "MemAvailable: ", available);show_val_kb(m, "Buffers: ", i.bufferram);show_val_kb(m, "Cached: ", cached); 2. Why is this distinction made so prominent? Why do some people say "buffer cache" when they talk about cached file content? The page cache works in units of the MMU page size, typically a minimum of 4096 bytes. This is essential for mmap() , i.e. memory-mapped file access.[1][2] It is designed to share pages of loaded program / library code between separate processes, and allow loading individual pages on demand. (Also for unloading pages when something else needs the space, and they haven't been used recently). [1] Memory-mapped I/O - The GNU C Library manual. [2] mmap - Wikipedia. Early UNIX had a "buffer cache" of disk blocks, and did not have mmap(). Apparently when mmap() was first added, they added the page cache as a new layer on top. This is as messy as it sounds. Eventually, UNIX-based OS's got rid of the separate buffer cache. So now all file cache is in units of pages. Pages are looked up by (file, offset), not by location on disk. This was called "unified buffer cache", perhaps because people were more familiar with "buffer cache".[3] [3] UBC: An Efficient Unified I/O and Memory Caching Subsystem for NetBSD ("One interesting twist that Linux adds is that the device block numbers where a page is stored on disk are cached with the page in the form of a list of buffer_head structures. When a modified page is to be written back to disk, the I/O requests can be sent to the device driver right away, without needing to read any indirect blocks to determine where the page's data should be written."[3]) In Linux 2.2 there was a separate "buffer cache" used for writes, but not for reads. "The page cache used the buffer cache to write back its data, needing an extra copy of the data, and doubling memory requirements for some write loads".[4] Let's not worry too much about the details, but this history would be one reason why Linux reports Buffers usage separately. [4] Page replacement in Linux 2.4 memory management , Rik van Riel. By contrast, in Linux 2.4 and above, the extra copy does not exist. "The system does disk IO directly to and from the page cache page."[4] Linux 2.4 was released in 2001. 3. What are Buffers used for? Block devices are treated as files, and so have page cache. This is used "for filesystem metadata and the caching of raw block devices".[4] But in current versions of Linux, filesystems do not copy file contents through it, so there is no "double caching". I think of the Buffers part of the page cache as being the Linux buffer cache. Some sources might disagree with this terminology. How much buffer cache the filesystem uses, if any, depends on the type of filesystem. 
The system in the question uses ext4. ext3/ext4 use the Linux buffer cache for the journal, for directory contents, and some other metadata. Certain file systems, including ext3, ext4, and ocfs2, use the jbd orjbd2 layer to handle their physical block journalling, and this layerfundamentally uses the buffer cache. -- Email article by Ted Tso , 2013 Prior to Linux kernel version 2.4, Linux had separate page and buffer caches. Since 2.4, the page and buffer cache are unified and Buffers is raw disk blocks not represented in the page cache—i.e., not file data. ... The buffer cache remains, however, as the kernel still needs to perform block I/O in terms of blocks, not pages. As most blocks represent file data, most of the buffer cache is represented by the page cache. But a small amount of block data isn't file backed—metadata and raw block I/O for example—and thus is solely represented by the buffer cache. -- A pair of Quora answers by Robert Love , last updated 2013. Both writers are Linux developers who have worked with Linux kernel memory management. The first source is more specific about technical details. The second source is a more general summary, which might be contradicted and outdated in some specifics. It is true that filesystems may perform partial-page metadata writes, even though the cache is indexed in pages. Even user processes can perform partial-page writes when they use write() (as opposed to mmap() ), at least directly to a block device. This only applies to writes, not reads. When you read through the page cache, the page cache always reads full pages. Linus liked to rant that the buffer cache is not required in order to do block-sized writes, and that filesystems can do partial-page metadata writes even with page cache attached to their own files instead of the block device. I am sure he is right to say that ext2 does this. ext3/ext4 with its journalling system does not. It is less clear what the issues were that led to this design. The people he was ranting at got tired of explaining. ext4_readdir() has not been changed to satisfy Linus' rant. I don't see his desired approach used in readdir() of other filesystems either. I think XFS uses the buffer cache for directories as well. bcachefs does not use the page cache for readdir() at all; it uses its own cache for btrees. I'm not sure about btrfs. 4. Why might we expect Buffers in particular to be larger or smaller? In this case it turns out the ext4 journal size for my filesystem is 128M. So this explains why 1) my buffer cache can stabilize at slightly over 128M; 2) buffer cache does not scale proportionally with the larger amount of RAM on my laptop. For some other possible causes, see What is the buffers column in the output from free? Note that "buffers" reported by free is actually a combination of Buffers and reclaimable kernel slab memory. To verify that journal writes use the buffer cache, I simulated a filesystem in nice fast RAM (tmpfs), and compared the maximum buffer usage for different journal sizes. 
# dd if=/dev/zero of=/tmp/t bs=1M count=1000...# mkfs.ext4 /tmp/t -J size=256...# LANG=C dumpe2fs /tmp/t | grep '^Journal size'dumpe2fs 1.43.5 (04-Aug-2017)Journal size: 256M# mount /tmp/t /mnt# cd /mnt# free -w -m total used free shared buffers cache availableMem: 7855 2521 4321 285 66 947 5105Swap: 7995 0 7995# for i in $(seq 40000); do dd if=/dev/zero of=t bs=1k count=1 conv=sync status=none; sync t; sync -f t; done# free -w -m total used free shared buffers cache availableMem: 7855 2523 3872 551 237 1223 4835Swap: 7995 0 7995 # dd if=/dev/zero of=/tmp/t bs=1M count=1000...# mkfs.ext4 /tmp/t -J size=16...# LANG=C dumpe2fs /tmp/t | grep '^Journal size'dumpe2fs 1.43.5 (04-Aug-2017)Journal size: 16M# mount /tmp/t /mnt# cd /mnt# free -w -m total used free shared buffers cache availableMem: 7855 2507 4337 285 66 943 5118Swap: 7995 0 7995# for i in $(seq 40000); do dd if=/dev/zero of=t bs=1k count=1 conv=sync status=none; sync t; sync -f t; done# free -w -m total used free shared buffers cache availableMem: 7855 2509 4290 315 77 977 5086Swap: 7995 0 7995 History of this answer: How I came to look at the journal I had found Ted Tso's email first, and was intrigued that it emphasized write caching. I would find it surprising if "dirty", unwritten data was able to reach 30% of RAM on my system. sudo atop shows that over a 10 second interval, the system in question consistently writes only 1MB. The filesystem concerned would be able to keep up with over 100 times this rate. (It's on a USB2 hard disk drive, max throughput ~20MB/s). Using blktrace ( btrace -w 10 /dev/sda ) confirms that the IOs which are being cached must be writes, because there is almost no data being read. Also that mysqld is the only userspace process doing IO. I stopped the service responsible for the writes (icinga2 writing to mysql) and re-checked. I saw "buffers" drop to under 20M - I have no explanation for that - and stay there. Restarting the writer again shows "buffers" rising by ~0.1M for each 10 second interval. I observed it maintain this rate consistently, climbing back to 70M and above. Running echo 3 | sudo tee /proc/sys/vm/drop_caches was sufficient to lower "buffers" again, to 4.5M. This proves that my accumulation of buffers is a "clean" cache, which Linux can drop immediately when required. This system is not accumulating unwritten data. ( drop_caches does not perform any writeback and hence cannot drop dirty pages. If you wanted to run a test which cleaned the cache first, you would use the sync command). The entire mysql directory is only 150M. The accumulating buffers must represent metadata blocks from mysql writes, but it surprised me to think there would be so many metadata blocks for this data.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/440558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
440,582
I want that line from the file which have highest length among all the lines using awk command.
With awk you keep track of the longest line seen so far and print it once the whole file has been read: awk 'length > max { max = length; longest = $0 } END { print longest }' file If several lines are tied for the maximum length this prints the first of them. To print every line of maximal length you need two passes over the file: awk 'NR == FNR { if (length > max) max = length; next } length == max' file file
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/440582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288367/" ] }
440,586
My system is full of very sensitive data, so I need to encrypt as much of it as possible. I have an encrypted Debian installation which asks for a long password every time during boot.Is there a simple way to set it up so that I can input that password remotely? If some other distribution can do it, I don't mind installing something else instead of Debian.
You can enable this by installing dropbear-initramfs and following the instructions to configure your SSH keys. This will start an SSH server from the initramfs, allowing you to connect remotely and enter your encryption passphrase.
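On Debian stretch the whole setup is roughly the following (a sketch from memory - the package's own README.Debian is the authoritative reference, and cryptroot-unlock comes from cryptsetup's initramfs hooks):
sudo apt install dropbear-initramfs
sudo tee -a /etc/dropbear-initramfs/authorized_keys < ~/.ssh/id_ed25519.pub
sudo update-initramfs -u
# after the next reboot, from another machine:
ssh root@your-server          # drops you into the initramfs busybox shell
cryptroot-unlock              # prompts for the LUKS passphrase and resumes booting
The key file and host name above are placeholders; add whichever public key you want to allow in.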
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288371/" ] }
440,637
I know how to do it on Red Hat based systems. yum --disablerepo=* --enablerepo=epel update The above command will temporarily disable all repos, enable epel and update only epel packages. yum update --disablerepo=remi-safe,updates This will also disable two repos while updating all other enabled repos. What is the equivalent of the above on Ubuntu, for instance? I know we can comment out the repo in /etc/apt/sources.list.d but this will permanently disable the repo, right? Is there a way that I can run apt-get update while temporarily disabling one repo, for instance?
The easiest way I've found to manage repos is to have them in individual files in /etc/apt/sources.list.d/ . That way, disabling the repo is as easy as moving the file from /etc/apt/sources.list.d/repo.list to /etc/apt/sources.list.d/repo.list.bak , and re-enabling the repo is as easy as going the other way. You could even create a script which temporarily disables a repo by moving the file, running update/install/whatever, and then moving the file back again.
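A throwaway wrapper along those lines might look like this (a sketch - repo.list stands for whichever file you want to skip, and it must be run as root):
#!/bin/sh
# Temporarily disable one repo for a single apt run, restoring it afterwards.
set -e
mv /etc/apt/sources.list.d/repo.list /etc/apt/sources.list.d/repo.list.disabled
trap 'mv /etc/apt/sources.list.d/repo.list.disabled /etc/apt/sources.list.d/repo.list' EXIT
apt-get update
apt-get upgrade
The trap puts the file back even if the update fails, so the repo is never left disabled by accident.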
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100193/" ] }
440,640
As I understand from answers given elsewhere on StackExchange - the process is suppossed to be simple: a) git checkout masterb) git fetch originc) git pull origind) git push myremote # named aixtoolse) git checkout bpo-XXXXXf) git merge masterg) git push myremote Until step g) everything works as expected. And when it is a tag from the 'origin' rather than a 'PR' I am trying to keep in sync, the process works fine. Help, wisdom, guidance, et al is much appreciated: step g currently michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git statusOn branch bpo-11191Your branch is ahead of 'aixtools/bpo-11191' by 564 commits. (use "git push" to publish your local commits)nothing to commit, working tree cleanmichael@x071:[/data/prj/python/git0/gcc-python3-3.7]git push aixtoolsUsername for 'https://github.com':Password for 'https://[email protected]':To https://github.com/aixtools/cpython.git ! [rejected] bpo-11191 -> bpo-11191 (non-fast-forward)error: failed to push some refs to 'https://github.com/aixtools/cpython.git'hint: Updates were rejected because the tip of your current branch is behindhint: its remote counterpart. Integrate the remote changes (e.g.hint: 'git pull ...') before pushing again.hint: See the 'Note about fast-forwards' in 'git push --help' for details. A day further: I am thinking, after reading up more on reset and merge, and some of the examples there - that rather than a pull to my local master, commit of local master, and then merge local branch with local 'master' I should also just 'pull' from 'other remote' as a kindof merge. In any case - I restored a backup of the local situation (always make a copy when I try some 'new' for me with git) And I hope the following answers the comment/query regarding the status of the 'local' michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git branch* bpo-11191 mastermichael@x071:[/data/prj/python/git0/gcc-python3-3.7]git statusOn branch bpo-11191Your branch is up-to-date with 'aixtools/bpo-11191'.nothing to commit, working tree cleanmichael@x071:[/data/prj/python/git0/gcc-python3-3.7]git diff aixtools/bpo-11191michael@x071:[/data/prj/python/git0/gcc-python3-3.7] Or michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git log --oneline | head -15a3284f (HEAD -> bpo-11191, aixtools/bpo-11191) Fix test_run and related tests Almost fix test_search_cppmichael@x071:[/data/prj/python/git0/gcc-python3-3.7] aixtools/bpo-11191 | head -1 <5a3284f (HEAD -> bpo-11191, aixtools/bpo-11191) Fix test_run and related tests Almost fix test_search_cpp So, maybe the question needs to be rephrased: should I merge locally? pull from "other remote" aka 'project owners'? something else entirely? Next attempt - using merge (maybe I should have merged from a different branch. "master" was what is now 3.7 (I am guessing) - so maybe I should have merged from cpython/3.7. So, now I feel "howto"ish again. Basically, I have a PR that I would like to keep/verify that it is still "in sync" with current developments - and update the PR with the current status. A (desired by me) side-effect is that the PR will be re-evaluated again by the "verification process". I may be facing a situation that "cpython" does not merge my PR before the next release. Should that happen I would like to use, as much as possible, git "merge" and/or "pull" of my open PR (aka bpo-XXXXX) into a "AIX-release" branch. 
The steps I see for that would be: a) checkout the official release branchb) create and checkout a new "aix-release" branchc) merge my (local) branches, one at a time, into the new branchd) commit (and push) the new "release" So, to simplify/verify the merge into the new release I would like to know, and "store" the bpo-XXXXX with the most recent "sync" with the master and/or named branches. I am aware that - when it comes to git - I am a Sunday driver. It may be obvious to you. But I think for many (at least I hope I am not alone :smile:) git is not straightforward in what does what. Very powerful - yes - but I feel overwhelmed by it's power. Thx for your assistance! (FYI: last attempt, new failure - I am very curious about what specific steps were used to -- checkout, merge, and push - and have it all work? Did you get my bpo, push that, then merge, then commit (local), then push - or use some other workflow. Probably - what I am looking for is the workflow that came ' naturally' to you. michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git merge cpython/master...Merge made by the 'recursive' strategy....michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git statusOn branch bpo-11191Your branch is ahead of 'aixtools/bpo-11191' by 564 commits. (use "git push" to publish your local commits)nothing to commit, working tree cleanmichael@x071:[/data/prj/python/git0/gcc-python3-3.7]git push aixtoolsUsername for 'https://github.com':Password for 'https://[email protected]':To https://github.com/aixtools/cpython.git ! [rejected] bpo-11191 -> bpo-11191 (non-fast-forward)error: failed to push some refs to 'https://github.com/aixtools/cpython.git'hint: Updates were rejected because the tip of your current branch is behindhint: its remote counterpart. Integrate the remote changes (e.g.hint: 'git pull ...') before pushing again.hint: See the 'Note about fast-forwards' in 'git push --help' for details.michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git pull cpythonremote: Counting objects: 1908, done.remote: Compressing objects: 100% (4/4), done.remote: Total 1908 (delta 752), reused 751 (delta 751), pack-reused 1153Receiving objects: 100% (1908/1908), 1.11 MiB | 2.01 MiB/s, done.Resolving deltas: 100% (1334/1334), completed with 161 local objects.From https://github.com/aixtools/cpython fea0a12..b94d739 3.7 -> cpython/3.7You asked to pull from the remote 'cpython', but did not specifya branch. 
Because this is not the default configured remotefor your current branch, you must specify a branch on the command line.michael@x071:[/data/prj/python/git0/gcc-python3-3.7]git pull cpython 3.7From https://github.com/aixtools/cpython * branch 3.7 -> FETCH_HEADAuto-merging README.rstCONFLICT (content): Merge conflict in README.rstAuto-merging Python/importlib_external.hCONFLICT (content): Merge conflict in Python/importlib_external.hAuto-merging Python/importlib.hCONFLICT (content): Merge conflict in Python/importlib.hAuto-merging Python/compile.cCONFLICT (content): Merge conflict in Python/compile.cAuto-merging Objects/frameobject.cCONFLICT (content): Merge conflict in Objects/frameobject.cAuto-merging Lib/test/test_sys_settrace.pyCONFLICT (content): Merge conflict in Lib/test/test_sys_settrace.pyAuto-merging Lib/test/test_random.pyCONFLICT (content): Merge conflict in Lib/test/test_random.pyAuto-merging Lib/pydoc_data/topics.pyCONFLICT (content): Merge conflict in Lib/pydoc_data/topics.pyAuto-merging Lib/importlib/_bootstrap_external.pyCONFLICT (content): Merge conflict in Lib/importlib/_bootstrap_external.pyAuto-merging Lib/enum.pyAuto-merging Include/patchlevel.hCONFLICT (content): Merge conflict in Include/patchlevel.hAuto-merging Doc/whatsnew/3.7.rstCONFLICT (content): Merge conflict in Doc/whatsnew/3.7.rstAuto-merging Doc/tools/templates/indexsidebar.htmlCONFLICT (content): Merge conflict in Doc/tools/templates/indexsidebar.htmlAuto-merging Doc/tools/extensions/pyspecific.pyAuto-merging Doc/library/re.rstCONFLICT (content): Merge conflict in Doc/library/re.rstAuto-merging Doc/library/dis.rstCONFLICT (content): Merge conflict in Doc/library/dis.rstAuto-merging Doc/library/configparser.rstCONFLICT (content): Merge conflict in Doc/library/configparser.rstAuto-merging .travis.ymlCONFLICT (content): Merge conflict in .travis.ymlAuto-merging .github/appveyor.ymlCONFLICT (content): Merge conflict in .github/appveyor.ymlAutomatic merge failed; fix conflicts and then commit the result.
The rejected push means that aixtools/bpo-11191 on GitHub now contains commits that your local bpo-11191 does not - which is exactly what you would expect after restoring the local repository from an older backup - so the push is not a fast-forward. git status only compares against the remote-tracking ref from your last fetch, which is why it can say you are "ahead by 564 commits" while the push still fails. You have two ways forward. If your local branch is the state you want the pull request to have, overwrite the remote branch: git push --force-with-lease aixtools bpo-11191 ( --force-with-lease is a safer --force : it refuses to clobber remote commits you have not fetched). Otherwise integrate the remote branch first: git fetch aixtools , then git merge aixtools/bpo-11191 (or rebase onto it), resolve any conflicts, commit, and push normally. As for keeping the PR in sync with upstream, the usual workflow is essentially your last attempt: git fetch cpython , git checkout bpo-11191 , git merge cpython/master (or cpython/3.7 if the PR targets the 3.7 branch), resolve the listed conflicts, git add the resolved files, git commit to finish the merge, and finally git push aixtools bpo-11191 . The conflict list you got is not a failure of the procedure - it just means the merge has to be completed by hand before the push can succeed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201205/" ] }
440,691
I have a question.Studying processes management I observed a strange behavior, on a CentOS 7.I know that killing a parent process, the child processes are killed also. But not in the following case. I ran the command dd, just for example: [root@server2 ~]# dd if=/dev/zero of=/dev/null &[1] 1756[root@server2 ~]# ps fax | grep -B2 dd1737 pts/2 S 0:00 \_ su -1741 pts/2 S 0:00 \_ -bash1756 pts/2 R 1:18 \_ dd if=/dev/zero of=/dev/null After that I tried to kill (with SIGKILL signal) the parent process, that is the bash, but this action doesn't kill the dd process: [root@server2 ~]# kill -9 1741Killed[user@server2 ~]# The shell terminates but as you can see in the top command output, the dd process is still working: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND1756 root 20 0 107948 612 512 R 99.9 0.1 10:06.98 dd Do you have any idea about it please?
By default killing a parent process does not kill the children processes. I suggest you look for other questions about how to kill both the parent and child using the process group (a negative PID). A good answer about how to do this in detail can be found at Process descendants
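If you do want the children to go down together with the job, signal the process group instead of a single PID (a sketch assuming an interactive bash with job control, where every background job already gets its own process group):
dd if=/dev/zero of=/dev/null &
pgid=$(ps -o pgid= -p $! | tr -d ' ')   # look up the job's process group
kill -TERM -- -"$pgid"                  # the leading dash makes kill target the whole group
Killing the group takes out every process in that job (for example all members of a pipeline) in one go.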
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271131/" ] }
440,698
I already know that I can check if multiple dependencies required to install a package in Debian or Ubuntu exist in my repositories by running the following command: apt policy first-package second-package ... last-package This command also tells me if I have each package currently installed or not. My question is how to quickly check if multiple dependency packages exist in a supported version of Debian or Ubuntu that I do not currently have installed. Because I do not have that OS currently installed I can't check if the dependency packages exist locally and offline, but I want to check if the required dependency packages exist in the default repositories from the terminal. One possible use for this information is to check if an application that is installed in Ubuntu can also be installed in the latest version of Ubuntu before installing the latest version of Ubuntu or upgrading the existing OS to the latest version.
The ideal tool for this is rmadison , which is a simple Perl script with few dependencies (the URI module and wget or curl ), so it can run pretty much everywhere. It interrogates the Madison services hosted by Debian and Ubuntu to determine the availability of packages: rmadison gcc-7 tells you which versions of GCC 7 are available in the various Debian suites, rmadison -u ubuntu gcc-7 does the same for Ubuntu. You can restrict the output to a specific version: rmadison -u ubuntu -s bionic gcc-7
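If you have a whole dependency list to check, a small loop over rmadison keeps the output readable (a sketch - the package names are placeholders):
for p in first-package second-package last-package; do
    echo "== $p"
    rmadison -u ubuntu -s bionic "$p"
done
A package that does not exist in that suite simply produces no lines under its header.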
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38285/" ] }
440,748
We have Linux Redhat version 7.2 , with xfs file system. from /etc/fstab/dev/mapper/vgCLU_HDP-root / xfs defaults 0 0UUID=7de1ab5c-b605-4b6f-bdf1-f1e8658fb9 /boot xfs defaults 0 0/dev/mapper/vg/dev/mapper/vgCLU_HDP-root / xfs defaults 0 0UUID=7de1dc5c-b605-4a6f-bdf1-f1e869f6ffb9 /boot xfs defaults 0 0/dev/mapper/vgCLU_HDP-var /var xfs defaults 0 0 var /var xfs defaults 0 0 The machines are used for hadoop clusters. I just thinking what is the best file-system for this purpose? So what is better EXT4, or XFS regarding that machines are used for hadoop cluster?
This is addressed in this knowledge base article ; the main consideration for you will be the support levels available: Ext4 is supported up to 50TB, XFS up to 500TB. For really big data, you’d probably end up looking at shared storage, which by default means GFS2 on RHEL 7, except that for Hadoop you’d use HDFS or GlusterFS. For local storage on RHEL the default is XFS and you should generally use that unless you have specific reasons not to.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440748", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
440,752
Is it possible to, somehow, access xterm's scrollback buffer as a (read-only) file or a character device? The core issue (to avoid x/y "problemming"), is this:sometimes the command I've just executed creates non-deterministic output, and I'd like to use its output somehow without pre-thought of tee-ing it. Right now, the only way to do this (that I'm aware of) is to use the mouse to select the text into primary selection.
You could do this by telling xterm to print the data using the print-everything action (normally not bound to a key). Alternatively, there's an escape sequence documented in XTerm Control Sequences : CSI ? Pm i Media Copy (MC), DEC-specific. Ps = 1 -> Print line containing cursor. Ps = 4 -> Turn off autoprint mode. Ps = 5 -> Turn on autoprint mode. Ps = 1 0 -> Print composed display, ignores DECPEX. Ps = 1 1 -> Print all pages. which could be invoked as printf '\033[?11i' But either approach (to write to a file) would need a printerCommand configured.
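For completeness, the printerCommand resource is just a shell command that receives the dump on standard input, so pointing it at a file gives you the scrollback as plain text (a sketch for ~/.Xresources - the output path is a placeholder):
XTerm*printerCommand: cat >> /tmp/xterm-scrollback.txt
XTerm*printAttributes: 0
Load it with xrdb -merge ~/.Xresources, start a new xterm, and trigger the dump either with the escape sequence printf '\033[?11i' or by binding the print-everything action to a key.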
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288491/" ] }
440,762
Under a folder, there are files created by multiple users. How can I list all the files/directories created by a specific user, using ls or something else?
The filesystem does not record who created a file, only who currently owns it, so the closest you can get is listing by owner. The simplest tool for that is find : find /path/to/folder -maxdepth 1 -user username lists everything directly under the folder that is owned by username ; drop -maxdepth 1 to descend into subdirectories, and add -type f or -type d to restrict the results to files or directories. If you want the usual long listing, hand the matches to ls : find /path/to/folder -maxdepth 1 -user username -exec ls -ld {} +
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14207/" ] }
440,765
I recorded a video with my camera and when I open the directory containing the video files, the modified time is always wrong. Here is a screenshot from the video clearly showing the correct time and date as provided by the camera in the bottom left corner: However here is the output of ls -ltr : brett@brett-HP-Laptop-17-bs0xx:~/Vidéos$ ls -ltrtotal 9604-rw-r--r-- 1 brett brett 9832867 avr 27 05:04 REC_0039.MOVbrett@brett-HP-Laptop-17-bs0xx:~/Vidéos$ The modified time showed by Linux is several hours behind the actual time this video was filmed. Why is this the case and how can I display the correct time in my file manager?
An offset of a whole number of hours is almost always a time-zone problem rather than a wrong clock. The camera stamps the file in its own local time, but the FAT/exFAT filesystems used on memory cards store timestamps with no time-zone information, so when the card is mounted (and when the file is then copied with its timestamp preserved) the system has to guess how to interpret them, and that guess can be off by exactly your UTC offset; the camera usually has no daylight-saving setting either. Things to check: that the camera's clock and, if present, its time-zone setting are correct; that your system zone is right ( timedatectl ); and, if you mount the card yourself, the vfat mount options tz=UTC or time_offset= , which control this interpretation. For files that are already copied, you can rewrite the modification time from the metadata embedded in the video, for example with exiftool: exiftool '-FileModifyDate<CreateDate' REC_0039.MOV (this assumes the clip carries a CreateDate tag, which most camera MOV files do).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251899/" ] }
440,803
I'm trying to use Remmina on Ubuntu to remote into one of the servers at my work. However, after entering the connection information in I get the following error: "You requested an H264 GFX mode for ser [email protected], but your libfreedp does not support H264. Please check colour depth settings." I am quite new to Ubuntu in general so I am not really sure what to do about the above error. Could anybody help me out? Cheers
Quoting from the following GitLab issue link: in the profile's Basic settings, change the colour depth until you find one that is supported by your server. remmina issue explained If you have trouble finding the profile's Basic settings, check the Remmina user's guide.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/440803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288534/" ] }
440,812
i'd like to display the output of lsb_release -ds in my Conky display. ftr, in my current installation that would output Linux Mint 18.3 Sylvia . i had thought of assigning the output of that command to a local variable but it seems Conky doesn't do local vars. maybe assigning the output of that command to a global (system) variable? but that's a kludge and it's not at all clear that Conky can access global vars. sounds like an exec... might do it but the docs stress that that's resource inefficient and since this is a static bit of info (for any given login session) it seems a waste to keep running it over and over. so, what to do? suggestions most welcome.
You should prefer the execi version of exec, with an interval, where you can give the number of seconds before repeating: ${execi 999999 lsb_release -ds}
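In a conky 1.10-style (Lua syntax) configuration that line simply goes into the text section, for example (a sketch - the surrounding layout is only illustrative):
conky.text = [[
OS: ${execi 86400 lsb_release -ds}
Kernel: ${kernel}
]]
An interval of 86400 seconds re-runs the command at most once a day, which for a string that only changes on upgrade is effectively "run once".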
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288517/" ] }
440,819
My data resides on a SSD and — thanks to re-writes and Write Amplification — any modification of an atime will result in not only the inode being modified, but the whole block that it resides on being erased and rewritten. That is, obviously, undesirable as it would cause a large amount of unnecessary wear on the drive. When Duplicity backs up files, does it modify the atime attribute of the source files in the process? If it does modify the atime, does it do so on the initial (full) backup, the incremental backups, or both?
You should prefer the execi version of exec, with an interval, where you can give the number of seconds before repeating: ${execi 999999 lsb_release -ds}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24141/" ] }
440,840
I'm trying to install the most up-to-date NVIDIA driver in Debian Stretch. I've downloaded NVIDIA-Linux-x86_64-390.48.run from here , but when I try to do sudo sh ./NVIDIA-Linux-x86_64-390.48.run as suggested, an error message appears. ERROR: An NVIDIA kernel module 'nvidia-drm' appears to already be loaded in your kernel. This may be because it is in use (for example, by an X server, a CUDA program, or the NVIDIA Persistence Daemon), but this may also happen if your kernel was configured without support for module unloading. Please be sure to exit any programs that may be using the GPU(s) before attempting to upgrade your driver. If no GPU-based programs are running, you know that your kernel supports module unloading, and you still receive this message, then an error may have occured that has corrupted an NVIDIA kernel module's usage count, for which the simplest remedy is to reboot your computer. When I try to find out who is using nvidia-drm (or nvidia_drm ), I see nothing. ~$ sudo lsof | grep nvidia-drmlsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete.~$ sudo lsof -e /run/user/1000/gvfs | grep nvidia-drm~$ And when I try to remove it, it says it's being used. ~$ sudo modprobe -r nvidia-drmmodprobe: FATAL: Module nvidia_drm is in use.~$ I have rebooted and started in text-only mode (by pressing Ctrl+Alt+F2 before giving username/password), but I got the same error. Besides it, how do I "know that my kernel supports module unloading"? I'm getting a few warnings on boot up related to nvidia, no idea if they're related, though: Apr 30 00:46:15 debian-9 kernel: nvidia: loading out-of-tree module taints kernel.Apr 30 00:46:15 debian-9 kernel: nvidia: module license 'NVIDIA' taints kernel.Apr 30 00:46:15 debian-9 kernel: Disabling lock debugging due to kernel taintApr 30 00:46:15 debian-9 kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 375.82 Wed Jul 19 21:16:49 PDT 2017 (using threaded interrupts)
I imagine you want to stop the display manager, which is what I'd suspect is using the Nvidia drivers. After changing to a text console (pressing Ctrl + Alt + F2 ) and logging in as root, use the following command to disable the graphical target, which is what keeps the display manager running:

    # systemctl isolate multi-user.target

At this point, I'd expect you'd be able to unload the Nvidia drivers using modprobe -r (or rmmod directly):

    # modprobe -r nvidia-drm

Once you've managed to replace/upgrade it and you're ready to start the graphical environment again, you can use this command:

    # systemctl start graphical.target
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/440840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134202/" ] }
440,841
I was copying files in Linux Mint to an NTFS external USB disk when I moved my laptop; the USB connection was probably lost, and now I get the error $MFTMirr does not match $MFT . The mount error message says to use Windows to fix the drive errors. However, I want to use Linux to fix this (IMHO common) error.
I ran sudo ntfsfix /dev/sdb1 (where sdb1 is the device name from the error message) and it fixed the problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266260/" ] }
440,844
I'm trying to send a bug report for the app file, /usr/bin/file But consulting the man and sending an email BUGS Please report bugs and send patches to the bug tracker at http://bugs.gw.com/ or the mailing list at ⟨[email protected]⟩ (visit http://mx.gw.com/mailman/listinfo/file first to subscribe). Made me find out the mail address does not exist. Is there another way of communicating to the community?Hopefully this here question is already part of it :) So here's my email: possible feature failure: the --extension option doesn't seem to output anything $ file --extension "ab.gif" ab.gif: ??? It would be useful to easily be able to use the output of this to rename a file to its correct extension. something like file --likely_extension would only output the likely detected extension or an error if the detection was too low like thus: $ file --likely_extension "ab.gif"gif better though would be a --correct_extension option: $ file --correct_extension "ab.jpg"$ lsab.gif Tyvm for this app :)
You are following the correct procedure to file an issue or enhancement request: if a program’s documentation mentions how to do so, follow those instructions. Unfortunately it often happens that projects die, or that the instructions in the version you have are no longer accurate. In these cases, things become a little harder. One possible general approach is to file a bug with your distribution; success there can be rather hit-or-miss though... (I should mention that it’s usually better to report a bug to the distribution you got your package from, if you’re using a package; this is especially true if the packaged version is older than the current “upstream” version, and if you haven’t checked whether the issue is still present there.) For file specifically, the official documentation has been updated to mention that the bug tracker and mailing list are down, and it also provides a direct email address for the current maintainer, which you could use to contact him.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/440844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150158/" ] }
440,905
To be more precise: I know what the -s option stands for; I use it daily. But I saw someone in a tutorial who was moving the document root of his website from /var/www/html/project to ~/www/project to increase security (he later on changed the rights and so on, but that's not significant in this context). Then he created the following symlink:

    ln -sT ~/www/project /var/www/html/project

I was wondering what the -T is for, because normally I would have just used -s . From the man page I get the following sparse information regarding the -T option:

    -T, --no-target-directory
        treat LINK_NAME as a normal file always

I don't really understand what this is for. Why should I use -T in conjunction with -s when creating a symlink? Is there any great benefit from doing so?
ln ’s synopsis is as follows: ln [OPTION]... [-T] TARGET LINK_NAME (1st form)ln [OPTION]... TARGET (2nd form)ln [OPTION]... TARGET... DIRECTORY (3rd form)ln [OPTION]... -t DIRECTORY TARGET... (4th form) Without -T , if LINK_NAME already exists and is a directory (or symlink verified to eventually resolve to a directory), the first and third forms are ambiguous, and ln chooses the third form: it creates the link inside the directory. Thus ln -s ~/www/project /var/www/html/project will create a link named project inside /var/www/html/project if the latter already exists. -T removes the ambiguity, and forces ln to consider the first form only: if the link doesn’t exist, the link is created as named; if there is already a file or directory with the given LINK_NAME , ln fails with an error (unless -f is specified too). So ln -sT ~/www/project /var/www/html/project guarantees that you end up either with a link /var/www/html/project pointing to ~/www/project , or with an error message (and non-zero exit code).
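A quick illustration of the difference (the paths are invented for the example, and the exact error text may vary slightly between versions):

    mkdir -p /tmp/demo/target /tmp/demo/existing

    # 3rd form: "existing" is a directory, so the link quietly ends up *inside* it
    ln -s /tmp/demo/target /tmp/demo/existing
    ls /tmp/demo/existing          # -> target

    # with -T, ln refuses instead of silently nesting the link
    ln -sT /tmp/demo/target /tmp/demo/existing
    # ln: failed to create symbolic link '/tmp/demo/existing': File exists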
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264975/" ] }
440,927
This is a text processing question. I have 2 files.

File 1:

    joeblogs
    johnsmith
    chriscomp

File 2:

    12:00:00 (AAA) OUT: "string" joeblogs@hostname
    12:00:00 (AAA) OUT: "string" joeblogs@hostname
    12:00:00 (AAA) OUT: "string" johnsmith@hostname
    12:00:00 (AAA) OUT: "string" joeblogs@hostname
    12:00:00 (AAA) OUT: "string" chriscomp@hostname

File 1 contains a list of unique usernames that appear in a log (file 2). Desired output:

    12:00:00 (AAA) OUT: "string" USER1@hostname
    12:00:00 (AAA) OUT: "string" USER1@hostname
    12:00:00 (AAA) OUT: "string" USER2@hostname
    12:00:00 (AAA) OUT: "string" USER1@hostname
    12:00:00 (AAA) OUT: "string" USER3@hostname

I guess I don't need both files. File 1 is generated by parsing file 2 for the unique usernames. My logic was to get a list of usernames that I know appear in file 2, and loop through it, replacing with sed . Something like:

    for i in $(cat file1); do sed -e 's/$i/USER[X]'; done

Where USER[X] increments with each unique username. However I can't do this. I don't even think that logic is sound. Can I have help to achieve the desired output? awk / sed / grep / bash are all welcome.
Since, as you have realized, you "don't need the 2 files", use the following awk solution to process the initial log file in one pass:

    awk '{
        u_name = substr($5, 1, index($5, "@"))
        if (!(u_name in users)) users[u_name] = ++c
        sub(/^[^@]+/, "USER" users[u_name], $5)
    }1' file.log

The output:

    12:00:00 (AAA) OUT: "string" USER1@hostname
    12:00:00 (AAA) OUT: "string" USER1@hostname
    12:00:00 (AAA) OUT: "string" USER2@hostname
    12:00:00 (AAA) OUT: "string" USER1@hostname
    12:00:00 (AAA) OUT: "string" USER3@hostname
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/440927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288657/" ] }
440,978
From what little I know, in openpgp you have a private key which you keep locked or hidden somewhere and a public key which you can freely share with anybody. Now I have seen many people attaching .asc file. If I click on that, it reveals the other person's public key. Is having an .asc file nothing but using the putting your public key and then renaming it as something like signature.asc or is something else involved as well ? The .asc file seems to be an archive file (like a .rar or zip file) $ cat shirish-public-key.txt-----BEGIN PGP SIGNATURE-----publickeystring$-----END PGP SIGNATURE----- How can I make/transform it into an .asc file ? I could just do - $ mv shirish-public-key.txt shirish.asc but I don't know if that is the right thing to do or not. Update - I tried but it doesn't work :( $ gpg --armor export shirish-public-key.txt > pubkey.ascgpg: WARNING: no command supplied. Trying to guess what you mean ...usage: gpg [options] [filename] Update 2 - Still it doesn't work - $ gpg --armor --export shirish-public-key.txt > pubkey.asc gpg: WARNING: nothing exported seems it can't figure out that the public key is in a text file . Update 3 - This is what the contents of the file look like See http://paste.debian.net/1022979/ But if I run - $ gpg --import shirish-public-key.txt gpg: invalid radix64 character 3A skipped gpg: invalid radix64 character 2E skipped gpg: invalid radix64 character 2E skipped gpg: invalid radix64 character 2E skipped gpg: invalid radix64 character 3A skipped gpg: invalid radix64 character 3A skipped gpg: invalid radix64 character 2E skipped gpg: CRC error; 1E6A49 - B36DCC gpg: [don't know]: invalid packet (ctb=55) gpg: read_block: read error: Invalid packet gpg: import from 'shirish-public-key.txt' failed: Invalid keyring gpg: Total number processed: 0 Seems something is wrong somewhere. FWIW gpg is version 2.2.5 from Debian testing (am running testing with all updates) $ gpg --versiongpg (GnuPG) 2.2.5libgcrypt 1.8.2Copyright (C) 2018 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.Home: /home/shirish/.gnupgSupported algorithms:Pubkey: RSA, ELG, DSA, ECDH, ECDSA, EDDSACipher: IDEA, 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH, CAMELLIA128, CAMELLIA192, CAMELLIA256Hash: SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224Compression: Uncompressed, ZIP, ZLIB, BZIP2
Usually, a .asc file is an ASCII-armored representation of key material (or a signature). Your shirish-public-key.txt looks like it’s just that, so if you’re sure it contains the right information you could simply rename it, as you suggest. (I doubt it contains your public key though — that should start with -----BEGIN PGP PUBLIC KEY BLOCK----- .) If a file contains “binary” data (which I’m guessing is what you mean when you say it looks like an archive), it’s not an ASCII file and wouldn’t usually be named with a .asc extension. To export your key in this format, from your keyring rather than an existing file (thus ensuring it contains the correct data), run gpg --armor --export YOUR_FINGERPRINT > pubkey.asc To make things easier, files are often named by their key id; in my case: gpg --armor --export "79D9 C58C 50D6 B5AA 65D5 30C1 7597 78A9 A36B 494F" > 0x759778A9A36B494F.asc There are various options you can use to tweak the exported data; for example, --export-options export-minimal will strip most signatures from the key, greatly reducing its size (but also its utility for people who care about the web of trust).
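On the receiving side the extension doesn't matter much either; the content does. For example (the file name is invented):

    # import a key someone sent you as an .asc attachment
    gpg --import alice-pubkey.asc

    # confirm it is now in your keyring
    gpg --list-keys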
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/440978", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
441,044
How can I remove a line containing a specific word, keeping in mind that if another word is also found on that line, the line should not be deleted? For example, I'm deleting lines containing sup or gnd with :%s/.*sup.*//g or :%s/.*gnd.*//g , but this is also deleting some lines that I need to keep. I don't want to delete lines which have module on the same line as gnd or sup . Any idea how to handle this?
You could use: :v/module/s/\v.*(sup|gnd).*// :v/pattern/cmd runs cmd on the lines that do not match the pattern \v turns on very-magic so that all the ( , | characters are treated as regexp operator without the need to prefix them with \ . Note that it empties but otherwise doesn't remove the lines that contain sup or gnd . To remove them, since you can't nest g / v commands, you could use vim 's negative look ahead regexp operator in one g command instead: :g/\v^(.*module)@!.*(sup|gnd)/d :g/pattern/cmd runs cmd on the lines the do match the pattern (pattern)@! (with very-magic ) matches with zero width if the pattern is not matched at that position, so ^(.*module)@! matches at the beginning of a line that doesn't contain module . There's also the option of piping to sed or awk : :%!sed -e /module/b -e /sup/d -e /gnd/d /module/b branches off (and the line is printed) for lines that contain module . for the lines that don't, we carry on with the next two commands that delete the line if it contains sup or gnd respectively. or: :%!awk '/sup|gnd/ && ! /module/' If you wanted to find those files that need those lines removed and remove them, you'd probably want to skip vim and do the whole thing with text processing utilities. On a GNU system: find . ! -name '*.back' -type f -size +3c -exec gawk ' /sup|gnd/ && ! /module/ {printf "%s\0", FILENAME; nextfile}' '{}' + | xargs -r0 sed -i.back -e /module/b -e /sup/d -e /gnd/d (here saving a backup of the original files as the-file.back , change -i.back to -i if you don't need the backup). find finds the regular files whose name doesn't end in .back and whose size is at least 4 bytes (smaller ones can't possibly contain a line that contains sup or gnd (and the line delimiter)) and runs gawk with the paths of the corresponding files as arguments. when gawk finds a line in any of those files that match, it prints the path of the file ( FILENAME special variable) delimited with a NUL character (for xargs -0 ) and skips to the next file. xargs -r0 processes that gawk outputs to run sed with those file paths as arguments. sed edits the file in place.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288742/" ] }
441,061
I am trying to transfer a file from my local machine to a remote machine. When I use scp without -v option it gives only following output: .--. or '\033[0;1;33;93m.-\033[0;1;32;92m-.\033[0m' When I try scp with -v option I get following output, seems files transferred succesfully: -- $ scp -v file.sh user@IP:/home/user/foodebug1: channel 0: new [client-session]debug1: Requesting [email protected]: Entering interactive session.debug1: pledge: networkdebug1: Sending environment.debug1: Sending env LC_PAPER = tr_TR.UTF-8debug1: Sending env LC_ADDRESS = tr_TR.UTF-8debug1: Sending env LC_MONETARY = tr_TR.UTF-8debug1: Sending env LC_NUMERIC = tr_TR.UTF-8debug1: Sending env LC_ALL = en_US.UTF-8debug1: Sending env LC_TELEPHONE = tr_TR.UTF-8debug1: Sending env LC_IDENTIFICATION = tr_TR.UTF-8debug1: Sending env LANG = en_US.UTF-8debug1: Sending env LC_MEASUREMENT = tr_TR.UTF-8debug1: Sending env LC_CTYPE = UTF-8debug1: Sending env LC_TIME = tr_TR.UTF-8debug1: Sending env LC_NAME = tr_TR.UTF-8debug1: Sending command: scp -v -t /home/user/foo .--.debug1: client_input_channel_req: channel 0 rtype exit-status reply 0debug1: channel 0: free: client-session, nchannels 1debug1: fd 0 clearing O_NONBLOCKdebug1: fd 1 clearing O_NONBLOCKTransferred: sent 2504, received 2668 bytes, in 1.7 secondsBytes per second: sent 1510.2, received 1609.1debug1: Exit status 0 Please see sshd_config file here . Please note that I can ssh into the remote machine. Also ssh user@IP pwd returns /home/user . [Q] scp successfully transfers file but it does not show up on the remote machine. What might be the reason for this and how could I solve it?
Make sure you don't have startup scripts in your shell that echo data to the terminal. This might be in .bashrc or .profile When scp connects to the remote host, it expects to see the SSH server headers followed by an opened stdin stream. If your .profile on the remote host echoes any output, it causes scp to fail silently. If this is the case, you might want to remove this or put a guard condition to ensure that nothing gets printed in the absence of a controlling tty device. See tty command for that.
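A common guard, for example near the top of ~/.bashrc on the remote host, is to bail out early for non-interactive shells so that nothing is ever printed when scp, sftp or rsync connect:

    case $- in
        *i*) ;;        # interactive shell: carry on with the rest of the file
        *)   return ;; # non-interactive (e.g. scp/sftp/rsync): stop here, print nothing
    esac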
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198423/" ] }
441,076
In a Linux application I don't want to use "my own" configuration parser, but rather use one which is (or should be) already available, simply to keep the maintenance of the application config simple and to avoid adding extra libraries.
There really isn't a 'standard' configuration parser library. If you peruse through /etc , you will find some combination of: XML config Windows INI style config Basic KEY=VALUE config JSON (mostly seen with web applications). YAML (mostly seen with newer stuff, especially if written in Python). Things that look like JSON or YAML or XML, but aren't (see for example configuration for Nginx (looks like JSON, but isn't), Apache (looks like XML, but isn't), and Unbound (looks like YAML, but isn't). Various shell languages. Various things that look like shell languages but technically aren't. Source snippets in various scripting languages. Possibly other things I haven't thought of. As far as what to use with your app: Please for the love of all things holy avoid XML. It's unnecessarily verbose, very complicated to parse (and thus takes a long time and lots of memory), and brings a number of security issues. It's also non-trivial to get the proper balance between elements and attributes, and you will usually end up regretting choices made regarding that at some point later. The only real advantage here is that you're pretty much guaranteed to have a working XML parser on any system you come across. Windows style INI files are generally a safe bet, though they limit the complexity of your config structures. Lots of libraries exist for this, and your system probably already has at least one. They aren't as prevalent on Linux (classic config files are traditionally KEY=VALUE pairs without section headers), but they're still widely used, and they're easy to understand. Basic KEY=VALUE pairs (one per line ideally) are so trivial to parse that you don't even need a library for it, but are very limited in what they can do. JSON is safe and easy to parse, is widely supported (pretty much every major language has at least one parser these days), and supports arbitrary nesting of config structures. However, it doesn't support comments (some parsers might, but the results won't be interoperable), which is not great for files designed to be edited with a text editor. YAML is my personal favorite, it's reasonably safe and easy to parse, looks very natural to most people, supports comments, and has very minimal overhead. The only big thing here is that indentation really matters, as it accounts for about 80% of the syntax, which combined with the fact that YAML requires spaces for indentation (no tabs), can make it a bit of a hassle to work with if you don't have a good editor. If you're using a scripting language, you might consider using source snippets for config, but be very careful doing this. Unless you're very careful about how you parse them, you're pretty much letting users do arbitrary things to your internal logic if they want to, which is a customer support nightmare (you will eventually get people complaining that you broke their config, which happened to include stuff poking at core program internals that you changed).
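As an aside, the KEY=VALUE style really is small enough to handle in a few lines of plain shell, which is part of its appeal. A rough sketch (the key names and the config path are invented, and it assumes one KEY=VALUE per line with no spaces around the '='):

    while IFS='=' read -r key value; do
        case $key in
            ''|\#*)      continue ;;     # skip blank lines and comments
            LISTEN_PORT) port=$value ;;
            LOG_FILE)    logfile=$value ;;
            # unknown keys are silently ignored in this sketch
        esac
    done < /etc/myapp.conf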
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288778/" ] }
441,151
    $ ls -l /usr/bin/sudo
    -rwsr-xr-x 1 root root 136808 Jul 4 2017 /usr/bin/sudo

so sudo is runnable by any user, and any user who runs sudo will have root as the effective user ID of the process, because the set-user-id bit of /usr/bin/sudo is set. From https://unix.stackexchange.com/a/11287/674 , the most visible difference between sudo and su is that sudo requires the user's password and su requires root's password. Which user's password does sudo ask for? Is it the user represented by the real user ID of the process? If yes, can't any user gain superuser privilege by running sudo and then providing their own password? Can Linux restrict that for some users? Is it correct that sudo asks for the password after execve() starts to execute main() of /usr/bin/sudo ? Since the euid of the process has already been changed to root (because the set-user-id bit of /usr/bin/sudo is set), what is the point of sudo asking for a password later? Thanks. I have read https://unix.stackexchange.com/a/80350/674 , but it doesn't answer the questions above.
In its most common configuration, sudo asks for the password of the user running sudo (as you say, the user corresponding to the process’ real user id). The point of sudo is to grant extra privileges to specific users (as determined by the configuration in sudoers ), without those users having to provide any other authentication than their own. However, sudo does check that the user running sudo really is who they claim to be, and it does that by asking for their password (or whatever authentication mechanism is set up for sudo , usually using PAM — so this could involve a fingerprint, or two-factor authentication etc.). sudo doesn’t necessarily grant the right to become root, it can grant a variety of privileges. Any user allowed to become root by sudoers can do so using only their own authentication; but a user not allowed to, can’t (at least, not by using sudo ). This isn’t enforced by Linux itself, but by sudo (and its authentication setup). sudo does indeed ask for the password after it’s started running; it can’t do otherwise ( i.e. it can’t do anything before it starts running). The point of sudo asking for the password, even though it’s root, is to verify the running user’s identity (in its typical configuration).
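For concreteness, a few illustrative sudoers entries (the user names, group and commands are invented; edit the real file with visudo):

    # alice may run any command as any user, after authenticating with *her own* password
    alice    ALL=(ALL:ALL) ALL

    # bob may only restart one service, again after giving his own password
    bob      ALL=(root) /usr/bin/systemctl restart nginx

    # members of the wheel group may reboot without being asked for a password at all
    %wheel   ALL=(ALL) NOPASSWD: /usr/sbin/reboot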
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/441151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
441,182
So I have an IP address 5x.2x.2xx.1xx that I want to map to localhost. In my hosts file I have:

    cat /etc/hosts
    127.0.1.1 test test
    127.0.0.1 localhost

    # The following lines are desirable for IPv6 capable hosts
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts
    5x.2x.2xx.1xx 127.0.0.1

What I want to accomplish is that when I connect on this machine to 5x.2x.2xx.1xx, I go to localhost. What I really want is to connect to MySQL using mysql -uroot 5x.2x.2xx.1xx -p and, instead of pointing to that IP address, use the local MySQL server. At the moment it isn't working, since it still redirects to the server's IP (5x.2x.2xx.1xx). I've also tried sudo service nscd restart , with no luck.
/etc/hosts can be used if you want to map a specific DNS name to a different IP address than it really has, but if the IP address is already specified by the application, that and any other techniques based on manipulating hostname resolution will be useless: the application already has a perfectly good IP address to connect to, so it does not need any hostname resolution services. If you want to redirect traffic that is going out to a specified IP address back to your local system, you'll need iptables for that. sudo iptables -t nat -I OUTPUT --dst 5x.2x.2xx.1xx -p tcp --dport 3306 -j REDIRECT --to-ports 3306 This will redirect any outgoing connections from your system to the default MySQL port 3306 of 5x.2x.2xx.1xx back to port 3306 of your own system. Replace the 5x.2x.2xx.1xx and 3306 with the real IP address and port numbers, obviously. The above command will be effective immediately, but will not persist over a reboot unless you do something else to make the settings persistent, but perhaps you don't even need that?
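If it helps, a couple of follow-up commands for managing that rule (assuming it ended up as rule number 1 in the chain; check the listing first):

    # list the NAT OUTPUT chain with rule numbers to confirm the redirect is in place
    sudo iptables -t nat -L OUTPUT -n --line-numbers

    # delete it again later by its rule number
    sudo iptables -t nat -D OUTPUT 1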
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95422/" ] }
441,240
After updating from Fedora 27 to 28, when I try to log in I am facing a login loop. After giving the correct password, the login screen comes up again. I checked journalctl and saw that gnome-shell is crashing. I have attached a log as well. I have tried both Wayland and Xorg; the result is the same. Log: https://hastebin.com/raw/tuvuxemilu
I had a similar problem. I logged in on a virtual terminal (Ctrl + Alt + F2) and disabled all GNOME Shell extensions with the following command:

    gsettings set org.gnome.shell enabled-extensions "[]"

and then I was able to log in again. Apparently an extension called system-monitor crashed my login every time.
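If you want to bring extensions back selectively afterwards, something along these lines may help (the extension UUID is only an example):

    # before clearing, it can be worth noting what was enabled
    gsettings get org.gnome.shell enabled-extensions

    # later, re-enable extensions one at a time to find the one that crashes the shell
    gsettings set org.gnome.shell enabled-extensions "['[email protected]']"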
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272940/" ] }
441,254
I need to upload files to an SFTP server as part of an automated pipeline process (so can not be interactive). I do not have ssh access so I can not use scp or rsync. I have had some success using the solution proposed in this answer : sftp user@server <<EOFput localPath remotePathEOF However I am looking for something a bit more solid, as I will have no indication if this fails. For example, if I wanted to download from an SFTP server I can use the following syntax: sftp user@server:remotePath localPath Is there an equivalent one-liner for uploading?
You can use the "Batch Mode" of sftp. From the manual:

    -b batchfile
        Batch mode reads a series of commands from an input batchfile instead of stdin. Since it
        lacks user interaction it should be used in conjunction with non-interactive authentication.
        A batchfile of '-' may be used to indicate standard input. sftp will abort if any of the
        following commands fail: get, put, reget, reput, rename, ln, rm, mkdir, chdir, ls, lchdir,
        chmod, chown, chgrp, lpwd, df, symlink, and lmkdir. Termination on error can be suppressed
        on a command by command basis by prefixing the command with a '-' character (for example,
        -rm /tmp/blah*).

Which means you create a temporary file with the commands and execute the commands in the file with "sftp -b tempfile user@server". There are other tools around for such things, e.g. lftp.
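For the one-liner the question asks about, a batchfile of '-' lets you feed the commands on stdin and still get a useful exit status, for example:

    sftp -b - user@server <<EOF
    put localPath remotePath
    EOF
    echo "sftp exit status: $?"    # non-zero if the put (or the connection) failed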
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30276/" ] }
441,265
We need to fix filesystem corruption on sdb on Red Hat 6; sdb is an XFS file system.

    df -h | egrep "Filesystem|/data"
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdb        8.2T  7.0T  1.0T  86% /data

Because the data on sdb is huge, we want to know which of the options below is best, or whether there is another way to fix the file system.

option 1

    umount /data
    fsck -y /dev/sdb
    mount /data

option 2

    umount /data
    e2fsck -y /dev/sdb
    mount /data

option 3

    umount /data
    xfs_repair /dev/sdb
    mount /data

Second: what are the risks of running fsck on such a huge amount of data?
You can use "Batch Mode" of sftp.From manual: > -b batchfile> Batch mode reads a series of commands from an input batchfile instead of stdin. Since it lacks user interaction it> should be used in conjunction with non-interactive authentication. A batchfile of ‘-’ may be used to indicate> standard input. sftp will abort if any of the following commands fail: get, put, reget, reput, rename, ln, rm,> mkdir, chdir, ls, lchdir, chmod, chown, chgrp, lpwd, df, symlink, and lmkdir. Termination on error can be sup‐> pressed on a command by command basis by prefixing the command with a ‘-’ character (for example, -rm /tmp/blah*). Which means, you create a temporary file with the commands and execute the commands in the file with "sftp -b tempfile user@server" There are other tools around for such things, e.g. lftp
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441265", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
441,314
I want to delete all the files in a folder which were not created today. I know how to get the list of files which were created today using

    find . -type f -mtime -1

but I am not sure how to get the list of all files which were not created today. Basically, I have to find whether a folder contains files with an old timestamp (anything other than today). If it does, I have to delete only those old files.
find . -type f -mtime +0 -exec rm -f {} + or find . -type f ! -mtime -1 -exec rm -f {} + Would remove the regular files whose content has been last modified more than 24 hours ago ( -mtime +0 meaning: whose age in days (rounded down to an integer, days are 24 hours, or 86400 Unix epoch second duration) is strictly greater than 0). Some find implementations have a -delete predicate which you can use in place of -exec rm -f {} + which would make it safer and more efficient. For files that have been last modified earlier than today 00:00:00, with GNU find , you can add the -daystart predicate. That will include the files that were last modified yesterday even if less than 24 hours ago. With some find implementations, you can also do: find . ! -newermt 00:00:00 -delete To delete files that have been last modified before (or at exactly) 00:00:00 today.
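As a usage note, it can be worth doing a dry run before deleting anything, for example:

    # preview the files that match before removing them
    find . -type f -mtime +0 -print

    # then delete, either with -exec rm as above or with GNU find's built-in -delete
    find . -type f -mtime +0 -delete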
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203247/" ] }
441,323
We have Red Hat machines, version 6.x, and we want to fix the file system on one of our disks:

    UUID=198s5364-a29c-429e-b16d-e772acd /data_SA xfs rw,noatime,inode64,allocsize=16m 1 2

but the disk is referenced by UUID, so is the following syntax right?

    xfs_repair UUID=198s5364-a29c-429e-b16d-e772acd

From the man page:

    SYNOPSIS
        xfs_repair [ -dfLnPv ] [ -m maxmem ] [ -c subopt=value ] [ -o subopt[=value] ] [ -t interval ] [ -l logdev ] [ -r rtdev ] device
        xfs_repair -V
You should find your device UUID in /dev/disk/by-uuid : xfs_repair /dev/disk/by-uuid/198s5364-a29c-429e-b16d-e772acd
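If you prefer to see which block device that UUID resolves to first, something like this should work (the UUID is copied from the question, so treat it as a placeholder):

    # resolve the UUID to a device node
    blkid -U 198s5364-a29c-429e-b16d-e772acd
    readlink -f /dev/disk/by-uuid/198s5364-a29c-429e-b16d-e772acd

    # xfs_repair needs the filesystem unmounted
    umount /data_SA
    xfs_repair /dev/disk/by-uuid/198s5364-a29c-429e-b16d-e772acd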
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/441323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
441,364
I know about memory overcommitment and I profoundly dislike it and usually disable it. I am not thinking of setuid -based system processes (like those running sudo or postfix ) but of an ordinary Linux process started on some command line by some user not having admin privileges. A well written program could malloc (or mmap which is often used by malloc ) more memory than available and crash when using it. Without memory overcommitment, that malloc or mmap would fail and the well written program would catch that failure. The poorly written program (using malloc without checks against failure) would crash when using the result of a failed malloc . Of course virtual address space (which gets extended by mmap so by malloc ) is not the same as RAM (RAM is a resource managed by the kernel, see this ; processes have their virtual address space initialized by execve(2) and extended by mmap & sbrk so don't consume directly RAM, only virtual memory ). Notice that optimizing RAM usage could be done with madvise(2) (which could give a hint, using MADV_DONTNEED to the kernel to swap some pages onto the disk), when really needed. Programs wanting some overcommitment could use mmap(2) with MAP_NORESERVE . My understanding of memory overcommitment is as if every memory mapping (by execve or mmap ) is using implicitly MAP_NORESERVE My perception of it is that it is simply useful for very buggy programs. But IMHO a real developer should always check failure of malloc , mmap and related virtual address space changing functions (e.g. like here ). And most free software programs whose source code I have studied have such check, perhaps as some xmalloc function.... Are there real life programs, e.g. packaged in a typical Linux distributions, which actually need and are using memory overcommitment in a sane and useful way? I know none of them! What are the disadvantages of disabling memory overcommitment? Many older Unixes (e.g. SunOS4, SunOS5 from the previous century) did not have it, and IMHO their malloc (and perhaps even the general full-system performance, malloc -wise) was not much worse (and improvements since then are unrelated to memory overcommitment). I believe that memory overcommitment is a misfeature for lazy programmers. The user of that program could setup some resource limit for setrlimit(2) called with RLIMIT_AS by the parent process (e.g. ulimit builtin of /bin/bash ; or limit builtin of zsh , or any modern equivalent for e.g. at , crontab , batch , ...), or a grand-parent process (up to eventually /sbin/init of pid 1 or its modern systemd variant).
The reason for overcommitting is to avoid underutilization of physical RAM. There is a difference between how much virtual memory a process has allocated and how much of this virtual memory has been actually mapped to physical page frames. In fact, right after a process is started, it reserves very little RAM. This is due to demand paging: the process has a virtual memory layout, but the mapping from the virtual memory address to a physical page frame isn't established until the memory is read or written. A program typically never uses its whole virtual memory space, and the memory areas touched varies during the run of the program. For example, mappings to page frames containing initialization code that is executed only at the start of the run can be discarded and the page frames can be used for other mappings. The same applies to data: when a program calls malloc , it reserves a sufficiently large contiguous virtual address space for storing data. However, mappings to physical page frames are not established until the pages are actually used, if ever . Or consider the program stack: every process gets a fairly big contiguous virtual memory area set aside for the stack (typically 8 MB). A process typically uses only a fraction of this stack space; small and well-behaving programs use even less. A Linux computer typically has a lot of heterogeneous processes running in different stages of their lifetimes. Statistically, at any point in time, they do not collectively need a mapping for every virtual page they have been assigned (or will be assigned later in the program run). A strictly non-overcommitting scheme would create a static mapping from virtual address pages to physical RAM page frames at the moment the virtual pages are allocated. This would result in a system that can run far fewer programs concurrently, because a lot of RAM page frames would be reserved for nothing. I don't deny that overcommitting memory has its dangers, and can lead to out-of-memory situations that are messy to deal with. It's all about finding the right compromise.
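Not part of the explanation above, but as a small sketch of how to inspect and tune this on a typical Linux box (exact numbers are system dependent):

    # 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict accounting
    cat /proc/sys/vm/overcommit_memory

    # strict, non-overcommitting accounting, as the question describes preferring
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=100   # percent of RAM (plus all swap) allowed for commitments

    # compare allocated virtual memory with resident memory, e.g. for the current shell
    grep -E 'VmSize|VmRSS' /proc/$$/status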
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/441364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50557/" ] }
441,434
If I create a file as an unprivileged user, and change the permissions mode to 400 , it's seen by that user as read-only, correctly: $ touch somefile$ chmod 400 somefile$ [ -w somefile ] && echo rw || echo roro All is well. But then root comes along: # [ -w somefile ] && echo rw || echo rorw What the heck? Sure, root can write to read-only files, but it shouldn't make a habit of it: Best Practice would tend to dictate that I should be able to test for the write permission bit, and if it's not, then it was set that way for a reason. I guess I want to understand both why this is happening, and how can I get a false return code when testing a file that doesn't have the write bit set?
test -w aka [ -w doesn't check the file mode. It checks if it's writable. For root, it is. $ help test | grep '\-w' -w FILE True if the file is writable by you. The way I would test would be to do a bitwise comparison against the output of stat(1) (" %a Access rights in octal"). (( 0$(stat -c %a somefile) & 0200 )) && echo rw || echo ro Note the subshell $(...) needs a 0 prefixed so that the output of stat is interpreted as octal by (( ... )) .
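Wrapped up as a small helper, in case that is handier (a sketch; it only looks at the owner's write bit):

    has_write_bit() {
        # succeeds if the owner write bit (0200) is set in the file's mode,
        # regardless of which user is asking
        (( 0$(stat -c %a "$1") & 0200 ))
    }

    has_write_bit somefile && echo rw || echo ro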
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/441434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67464/" ] }
441,438
I have been trying to install GUI, i.e. gnome and lxde, into Debian 9 stretch in google cloud computing instance. I have even increased the cpu, ram, harddisk size. However, the installation is always stuck at "Setting up dbus (1.10.26-0+deb9u1)" My last attempt is letting it sit for 6 hours now. It's still stuck there. What can I do? Thanks and Regards Edit1: I found this line. Does this have to do with this error? Setting up rtkit (0.11-4+b1) ...Created symlink /etc/systemd/system/graphical.target.wants/rtkit-daemon.service → /lib/systemd/system/rtkit-daemon.service.Job for rtkit-daemon.service failed because a timeout was exceeded.See "systemctl status rtkit-daemon.service" and "journalctl -xe" for details.rtkit-daemon.service couldn't start. Edit2: I shutdown the instance and get thefollowing. Not sure if this may mean anything or not related at all - again because I forced shutdown the system. Setting up dbus (1.10.26-0+deb9u1) ...Job for dbus.service canceled.invoke-rc.d: initscript dbus, action "start" failed.● dbus.service - D-Bus System Message Bus Loaded: loaded (/lib/systemd/system/dbus.service; static; vendor preset: enabled) Active: failed (Result: exit-code) since Wed 2018-05-02 21:38:17 UTC; 31min ago Docs: man:dbus-daemon(1) Main PID: 15748 (code=exited, status=1/FAILURE)May 02 21:37:52 instance-2 systemd[1]: Started D-Bus System Message Bus.May 02 21:37:52 instance-2 dbus-daemon[15748]: Failed to start message bus: Could not get UID and GID for username "messagebus"May 02 21:38:17 instance-2 systemd[1]: dbus.service: Main process exited, code=exited, status=1/FAILUREMay 02 21:38:17 instance-2 systemd[1]: dbus.service: Unit entered failed state.May 02 21:38:17 instance-2 systemd[1]: dbus.service: Failed with result 'exit-code'.dpkg: error processing package dbus (--configure): subprocess installed post-installation script returned error exit status 1
test -w aka [ -w doesn't check the file mode. It checks if it's writable. For root, it is. $ help test | grep '\-w' -w FILE True if the file is writable by you. The way I would test would be to do a bitwise comparison against the output of stat(1) (" %a Access rights in octal"). (( 0$(stat -c %a somefile) & 0200 )) && echo rw || echo ro Note the subshell $(...) needs a 0 prefixed so that the output of stat is interpreted as octal by (( ... )) .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/441438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289033/" ] }
441,442
I'm on a sort of frankendebian stretch/sid (not the best idea, I know; planning on reinstalling soon). Tab completion works for git branch names in git repo directories: :~/project $ git checkout <TAB><TAB>Display all 200 possibilities? (y or n):~/project $ git checkout private-rl_<TAB><TAB>private-rl_1219_misspelled_locale_zhtw private-rl_1950_scheduler_offset private-rl_bootstrap_rake_tasksprivate-rl_1854_ldap_filter_reset private-rl_bootstrap_rake_task But some of the branches it shows don't exist anymore: :~/project $ git branch* develop private-rl_1219_misspelled_locale_zhtw stable This also happens for deleted remote branches. What's going on here? Does the git completion script keep a cache of old branches that can be flushed somehow? How can I stop these branches from accumulating in my tab-completion results?
I figured it out, thanks to some gentle prodding from @PatrickMevzek: The branches I was seeing were actually references to remote branches that had already been deleted . To quote the top answer from the SO thread linked above, $ git remote prune origin fixed it for me.
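A couple of related commands that may be convenient (standard git options, not specific to this setup):

    # prune stale remote-tracking branches on every fetch
    git fetch --prune

    # or make pruning the default behaviour for all fetches
    git config --global fetch.prune true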
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441442", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176219/" ] }
441,449
I downloaded Emacs in Linux 14.04, and when I type emacs filename.c , it opens Emacs in the terminal instead of the external Graphical User Interface. How do I force Emacs to open outside of the terminal?
I figured it out, thanks to some gentle prodding from @PatrickMevzek: The branches I was seeing were actually references to remote branches that had already been deleted . To quote the top answer from the SO thread linked above, $ git remote prune origin fixed it for me.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289043/" ] }
441,511
Can we confirm the log message "recovering journal" from fsck should be interpreted as indicating the filesystem was not cleanly unmounted / shut down the last time? Or, are there other possible reasons to be aware of? May 03 11:52:34 alan-laptop systemd-fsck[461]: /dev/mapper/alan_dell_2016-fedora: recovering journalMay 03 11:52:42 alan-laptop systemd-fsck[461]: /dev/mapper/alan_dell_2016-fedora: clean, 365666/2621440 files, 7297878/10485760 blocksMay 03 11:52:42 alan-laptop systemd[1]: Mounting /sysroot...May 03 11:52:42 alan-laptop kernel: EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)May 03 11:52:42 alan-laptop systemd[1]: Mounted /sysroot. Compare fsck of /home from the same boot, which shows no such message: (ignore the -1 hour jump, it's due to "RTC time in the local time zone") May 03 10:52:57 alan-laptop systemd[1]: Starting File System Check on /dev/mapper/alan_dell_2016-home...May 03 10:52:57 alan-laptop systemd-fsck[743]: /dev/mapper/alan_dell_2016-home: clean, 1469608/19857408 files, 70150487/79429632 blocksMay 03 10:52:57 alan-laptop systemd[1]: Started File System Check on /dev/mapper/alan_dell_2016-home.May 03 10:52:57 alan-laptop audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsc>May 03 10:52:57 alan-laptop systemd[1]: Mounting /home...May 03 10:52:57 alan-laptop systemd[1]: Mounted /boot/efi.May 03 10:52:57 alan-laptop kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)May 03 10:52:57 alan-laptop systemd[1]: Mounted /home.May 03 10:52:57 alan-laptop systemd[1]: Reached target Local File Systems. Version $ rpm -q --whatprovides $(which fsck.ext4)e2fsprogs-1.43.8-2.fc28.x86_64 Motivation This happened immediately after an offline update; it was most likely triggered by a PackageKit bug: Bug 1564462 - offline update performed unclean shutdown where it effectively uses systemctl reboot --force . I'm concerned that there's a bug in Fedora here, because systemd forced shutdown is still supposed to kill all processes and then unmount the filesystems cleanly where possible. The above messages are from Fedora 28, systemd-238-7.fc28.1.x86_64 . Fedora 27 was using a buggy version of systemd which could have failed to unmount filesystems: systemd-shutdown[1]: Failed to parse /proc/self/mountinfo #6796 however the fix should be included in systemd 235 and above . So I'm concerned there's yet another bug lurking somewhere. The filesystem is on LVM. I seem to remember that shutdown is associated with a few screenfuls of repeated messages in a few seconds immediately before the screen goes black. I think they are from inside the shutdown initrd . I don't know if this represents a problem or not.
The “recovering journal” message is output by e2fsck_run_ext3_journal , which is only called if ext2fs_has_feature_journal_needs_recovery indicates that the journal needs recovery . This “feature” is a flag which is set by the kernel whenever a journalled Ext3/4 file system is mounted , and cleared when the file system is unmounted , when recovery is completed (when mounting an unclean file system, or remounting a file system read-only), and when freezing the file system (before taking a snapshot). Ignoring snapshots, this means that e2fsck only prints the message when it encounters a file system which hasn’t been cleanly unmounted, so its presence is proof of an unclean unmount (and perhaps shutdown, assuming the unmount was supposed to take place during shutdown).
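If you want to see the flag yourself, the superblock can be inspected with dumpe2fs (device name taken from the question). The needs_recovery feature is listed while the filesystem is mounted read-write and disappears again after a clean unmount, so e2fsck only ever encounters it after a crash or forced shutdown:

    # as root; -h prints only the superblock information
    dumpe2fs -h /dev/mapper/alan_dell_2016-fedora | grep -i 'features'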
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/441511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
441,543
I'm trying to compose a bash script intended for everyday use in a terminal. It should start an application as a background process and discard its stderr output. Here's what I've got:

    for app in $@; do
        $app 2>/dev/null
    done

It seemed to work just fine with bare applications started without parameters, like

    script.sh firefox gedit

but failed with the following:

    script.sh "vlc video.mp4"

My question is: how can I enhance this basic script to handle applications that take parameters/files as their input? Or is there already a tool I can use?
There are two issues:

1. The use of $@ without quotes. This would make the loop iterate over vlc and video.mp4 as two separate items, even if these were within the same quoted string in the invocation of the script.
2. Using a command in a variable. If the command is anything more complicated than a single simple command, then this won't work. You would have to eval the given string instead.

Taking this into account, your script could look like

    #!/bin/sh
    for cmd do    # or: for cmd in "$@"; do
        eval "$cmd" 2>/dev/null
    done

Calling this as

    ./script 'echo "hello world"' 'vim "$HOME/.profile"' 'tr a-z A-Z <"$HOME/.profile" | grep -c EXPORT'

would first run echo "hello world" , and when that finishes, it would open vim for editing the named file in the second command. The last command is more complex but handled by the fact that we use eval (it just changes all alphabetic characters to uppercase and counts the number of times the string EXPORT occurs in a file). It is run as soon as the vim session exits. With this, you could even do

    ./script 'd=$HOME' 'ls "$d"'

i.e., set variables that are used in later commands. This works because the commands invoked by eval are run in the same environment as the script. This would not work if you start the commands as background tasks though, as the title of your question suggests.
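If you do want the commands backgrounded, as the original script intended, a variant along these lines is possible, with the caveat just mentioned that the commands can then no longer share variables:

    #!/bin/sh
    for cmd do
        eval "$cmd" 2>/dev/null &    # each command runs in its own background subshell
    done
    wait    # optional: wait for all of them to finish before the script exits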
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264449/" ] }
441,575
I have a service running software that generates some configuration files if they don't exist, and read them if they do exist. The problem I have been facing is that these files sometimes get corrupt, making the software unable to start, and thus making the service fail. In this case I would like to remove these files and restart the service. I tried creating a service that should get executed in case of failure, by doing this: [Service]ExecStart=/bin/run_programOnFailure=software-fail.service where this service is: [Service]ExecStart=/bin/rm /file/to/deleteExecStop=systemctl --user start software.service The problem, however, is that this service doesn't start, even when the service has failed. I tried doing systemctl --user enable software-fail.service but then it starts every time the system starts, just like any other service. My temporary solution is to use ExecStopPost=/bin/rm /file/to/delete but this is not a satisfying way of solving it, as it will always delete the file upon stopping the service, no matter if it was because of failure or not. Output when failing: ● software.service - Software Loaded: loaded (/home/trippelganger/.config/systemd/user/software.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Fri 2018-05-04 09:05:26 CEST; 5s ago Process: 1839 ExecStart=/bin/run_program (code=exited, status=1/FAILURE) Main PID: 1839 (code=exited, status=1/FAILURE)May 04 09:05:26 trippelganger systemd[595]: software.service: Main process exited, code=exited, status=1/FAILUREMay 04 09:05:26 trippelganger systemd[595]: software.service: Unit entered failed state.May 04 09:05:26 trippelganger systemd[595]: software.service: Failed with result 'exit-code'. Output of systemctl --user status software-fail.serviceis: ● software-fail.service - Delete corrupt files Loaded: loaded (/home/trippelganger/.config/systemd/user/software-fail.service; disabled; vendor preset: enabled) Active: inactive (dead)
NOTE: You probably want to use ExecStopPost= instead of OnFailure= here (see my other answer), but this is trying to address why your OnFailure= setup is not working. The problem with OnFailure= not starting the unit might be that it's in the wrong section: it needs to be in the [Unit] section, not [Service]. You can try this instead:

    # software.service
    [Unit]
    Description=Software
    OnFailure=software-fail.service

    [Service]
    ExecStart=/bin/run_program

And:

    # software-fail.service
    [Unit]
    Description=Delete corrupt files

    [Service]
    ExecStart=/bin/rm /file/to/delete
    ExecStop=/bin/systemctl --user start software.service

I can make it work with this setup. But note that using OnFailure= is not ideal here, since you can't really tell why the program failed, and chaining another start of it in ExecStop= by calling /bin/systemctl start directly is pretty hacky... The solution using ExecStopPost= and looking at the exit status is definitely superior. If you define OnFailure= inside [Service] , systemd (at least version 234 from Fedora 27) complains with:

    software.service:6: Unknown lvalue 'OnFailure' in section 'Service'

Not sure if you're seeing that in your logs or not... (Maybe this was added in recent systemd?) That should be a hint of what is going on there.
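For completeness, a sketch of the ExecStopPost= approach mentioned above. It relies on systemd passing $SERVICE_RESULT to ExecStopPost= commands (systemd 232 or newer), and the file path is the placeholder from the question:

    # software.service, cleanup-on-failure variant
    [Service]
    ExecStart=/bin/run_program
    # "$$" passes a literal "$" through systemd so the shell expands the variable
    ExecStopPost=/bin/sh -c '[ "$$SERVICE_RESULT" = success ] || rm -f /file/to/delete'
    # and let systemd handle the restart itself, after the cleanup has run
    Restart=on-failure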
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/441575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240685/" ] }