source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
194,936 | I have a string that I would like to manipulate. The string is H08W2345678 ; how would I be able to manipulate it so the output is just W2345678 ? Similarly, if I wanted to drop the last 4 characters from H08W2345678 so that I get H08W234 , how would I do this? | Just using bash (or ksh93, where that syntax comes from, or zsh ): string="H08W2345678"
echo "${string:3}"
W2345678
echo "${string:0:-4}"
H08W234 See the Wooledge wiki for more on string manipulation . | {
"source": [
"https://unix.stackexchange.com/questions/194936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
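A small sketch combining both operations from the answer above; the ${var:offset:length} form needs bash (negative lengths need bash 4.2 or later), while the #/% pattern-removal forms work in any POSIX shell. The variable name is only illustrative.
string="H08W2345678"
echo "${string:3}"        # drop the first 3 characters -> W2345678
echo "${string:0:-4}"     # drop the last 4 characters  -> H08W234 (bash 4.2+)
# POSIX-portable equivalents using pattern removal:
echo "${string#???}"      # strip three leading characters -> W2345678
echo "${string%????}"     # strip four trailing characters -> H08W234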
195,116 | I am running Oracle Linux 7 (CentOS / RedHat based distro) in a VirtualBox VM on a Mac with OS X 10.10. I have a Synology Diskstation serving as an iSCSI target. I have successfully connected to the Synology, partitioned the disk and created a filesystem. It is referenced as /dev/sdb and the partition is /dev/sdb1 . Now, what I would like to do is create a mount point so I can easily access it: mount /dev/sdb1 /mnt/www That command works. But obviously, it isn't persistent across a reboot. No worries...into /etc/fstab we go. First, I got the UUID of the partition to ensure I am always using the correct device: blkid /dev/sdb1
Result:
/dev/sdb1: UUID="723eb295-8fe0-409f-a75f-a26eede8904f" TYPE="ext3" Now, I inserted the following line into my /etc/fstab UUID=723eb295-8fe0-409f-a75f-a26eede8904f /mnt/www ext3 defaults 0 0 Upon reboot, the system crashes and goes into maintenance mode. If I remove the line I inserted, all works again. However, I am following the instructions verbatim from Oracle-Base. I know I am missing something... can anyone point me in the right direction? | Just change the parameter "defaults" to "_netdev", like this: UUID=723eb295-8fe0-409f-a75f-a26eede8904f /mnt/www ext3 _netdev 0 0 This way the mount point will be mounted only after the network has started correctly. | {
"source": [
"https://unix.stackexchange.com/questions/195116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107777/"
]
} |
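After editing the fstab entry it is worth testing it without a reboot; a short sketch, run as root, using the mount point from the question and the standard util-linux tools.
umount /mnt/www 2>/dev/null   # make sure the filesystem is not already mounted
mount -a                      # mount everything in fstab; any error shows up here instead of at boot
findmnt /mnt/www              # confirm the entry is active and mounted read-write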
195,337 | In a file system where filenames are in UTF-8, I have a file with a faulty name; it is displayed as: D�sinstaller , actual name according to zsh: D$'\351'sinstaller , Latin1 for Désinstaller , itself a French barbarism for "uninstall." Zsh would not match it with [[ $file =~ '^.*$' ]] but would match it with a globbing * —this is the behavior I expect. Now I still expect to find it when running find . -name '*' —as a matter of fact, I would never expect a filename to fail this test. However, with LANG=en_US.utf8 , the file does not show up, and I have to set LANG=C (or en_US , or '' ) for it to work. Question: What is the implementation behind, and how could I have predicted that outcome? Infos: Arch Linux 3.14.37-1-lts, find (GNU findutils) 4.4.2 | That's a really nice catch. From a quick look at the source code for GNU find, I would say this boils down to how fnmatch behaves on invalid byte sequences ( pred_name_common in pred.c ): b = fnmatch (str, base, flags) == 0;
(...)
return b; This code tests the return value of fnmatch for equality with 0, but does not check for errors; this results in any errors being reported as "doesn't match". It has been suggested, many years ago, to change the behavior of this libc function to always return true on the * pattern, even on broken file names, but from what I can tell the idea must have been rejected (see the thread starting at https://sourceware.org/ml/libc-hacker/2002-11/msg00071.html ): When fnmatch detects an invalid multibyte character it should fall back to
single byte matching, so that "*" has a chance to match such a string. And why is this better or more correct? Is there existing practice? As mentioned by Stéphane Chazelas in a comment, and also in the same 2002 thread, this is inconsistent with the glob expansion performed by shells, which do not choke on invalid characters. Perhaps even more puzzling is the fact that reversing the test will match only those files that have broken names (create files in bash with touch $'D\351marrer' $'Touch\303\251' $'\346\227\245\346\234\254\350\252\236' ): $ find -name '*'
.
./Touché
./日本語
$ find -not -name '*'
./D?marrer So, to answer your question, you could have predicted this by knowing the behavior of your fnmatch in this case, and knowing how find handles this function's return value; you probably could not have found out solely by reading the documentation. | {
"source": [
"https://unix.stackexchange.com/questions/195337",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14298/"
]
} |
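The experiment from the answer can be reproduced in a short shell session (bash $'...' quoting, and the GNU findutils version discussed above); the directory name is only a placeholder.
mkdir /tmp/badnames && cd /tmp/badnames
touch $'D\351marrer' $'Touch\303\251' $'\346\227\245\346\234\254\350\252\236'
LANG=en_US.utf8 find . -name '*'    # the Latin-1 name is silently skipped
LANG=C find . -name '*'             # all three names are listed
printf '%s\n' *                     # the shell glob matches all of them either way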
195,466 | On my server I have directory /srv/svn . Is it possible to set this directory to have multiple group ownerships, for instance devFirmA , devFirmB and devFirmC ? The point is, I want to manage multiple users across multiple repositories under Subversion version control, and I do not know how to merge the permissions of /srv/svn , the root directory of the repositories. I have, for instance, three firms, FirmA , FirmB and FirmC . Now, inside /srv/svn I've created three directories, FirmA , FirmB , FirmC and inside them I've created a repository for each project, and now I do not know how to establish a permission scheme, since all elements inside /srv/svn are owned by root:root , which is not ok, or am I wrong? | This is an extremely common problem, if I understand it accurately, and I encounter it constantly. If I used ACLs for every trivial grouping problem, I would have tons of unmanageable systems. ACLs are the best practice only when you cannot do it any other way, not for this situation. This is the method I very strongly recommend. First you need to set your umask to 002, so that a group can share with itself. I usually create a file like /etc/profile.d/firm.sh , and then add a test command with the umask. [ $UID -gt 10000 ] && umask 002 Next you need to set the directories to their respective groups, chgrp -R FirmA /srv/svn/FirmA
chgrp -R FirmB /srv/svn/FirmB
chgrp -R FirmC /srv/svn/FirmC Finally you need to set the SGID bit properly, so the group will always stay to the one you set. This will prevent a written file from being set to the writer's GID. find /srv/svn/FirmA -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmB -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmC -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmA -type f -print0 | xargs -0 chmod 664
find /srv/svn/FirmB -type f -print0 | xargs -0 chmod 664
find /srv/svn/FirmC -type f -print0 | xargs -0 chmod 664 Now finally if you want to prevent the directories from being accessed by other users. chmod 2770 /srv/svn/FirmA
chmod 2770 /srv/svn/FirmB
chmod 2770 /srv/svn/FirmC | {
"source": [
"https://unix.stackexchange.com/questions/195466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40993/"
]
} |
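Since the per-firm commands in the answer follow one pattern, they can be collapsed into a loop; a sketch, assuming the group names match the directory names exactly.
for firm in FirmA FirmB FirmC; do
    dir=/srv/svn/$firm
    chgrp -R "$firm" "$dir"                             # hand the tree to the firm's group
    find "$dir" -type d -print0 | xargs -0 chmod 2775   # SGID + group-writable directories
    find "$dir" -type f -print0 | xargs -0 chmod 664    # group-writable files
    chmod 2770 "$dir"                                   # keep other users out of the top level
done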
195,490 | I want to try gitlab community edition in my fedora laptop. From the downloads page, it has binary packages for Ubuntu and CentOS 6 and CentOS 7. Which one should I be installing in fedora(release 21) or should I compile from source? | This is an extremely common problem, if I understand it accurately, and I encounter it constantly. If I used ACLs for every trivial grouping problem, I would have tons of unmanageable systems. They are using the best practice when you cannot do it any other way, not for this situation. This is the method I very strongly recommend. First you need to set your umask to 002, this is so a group can share with itself. I usually create a file like /etc/profile.d/firm.sh , and then add a test command with the umask. [ $UID -gt 10000 ] && umask 002 Next you need to set the directories to their respective groups, chgrp -R FirmA /srv/svn/FirmA
chgrp -R FirmB /srv/svn/FirmB
chgrp -R FirmC /srv/svn/FirmC Finally you need to set the SGID bit properly, so the group will always stay to the one you set. This will prevent a written file from being set to the writer's GID. find /srv/svn/FirmA -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmB -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmC -type d -print0 | xargs -0 chmod 2775
find /srv/svn/FirmA -type f -print0 | xargs -0 chmod 664
find /srv/svn/FirmB -type f -print0 | xargs -0 chmod 664
find /srv/svn/FirmC -type f -print0 | xargs -0 chmod 664 Now finally if you want to prevent the directories from being accessed by other users. chmod 2770 /srv/svn/FirmA
chmod 2770 /srv/svn/FirmB
chmod 2770 /srv/svn/FirmC | {
"source": [
"https://unix.stackexchange.com/questions/195490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2832/"
]
} |
195,571 | Sometimes I need to look up certain words through all the manual pages. I am aware of apropos , but if I understand its manual right, it restricts search to the descriptions only. Each manual page has a short description available within it. apropos searches the descriptions for instances of keyword. For example, if I look up a word like 'viminfo', I get no results at all... $ apropos viminfo
viminfo: nothing appropriate. ... although this word exists in a later section of the manual of Vim (which is installed on my system). -i {viminfo}
When using the viminfo file is enabled, this option sets the filename to use, instead of the default "~/.vim‐
info". This can also be used to skip the use of the .viminfo file, by giving the name "NONE". So how can I look up a word through every section of every manual? | From man man : -K, --global-apropos
Search for text in all manual pages. This is a brute-force
search, and is likely to take some time; if you can, you should
specify a section to reduce the number of pages that need to be
searched. Search terms may be simple strings (the default), or
regular expressions if the --regex option is used. This directly opens the manpage ( vim , then ex , then gview , ...) for me, so you could add another option, like -w to get an idea of which manpage will be displayed. $ man -wK viminfo
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/run-one.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/run-one.1.gz
/usr/share/man/man1/run-one.1.gz
... | {
"source": [
"https://unix.stackexchange.com/questions/195571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110008/"
]
} |
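As the quoted manual suggests, restricting the brute-force search to one section makes it much faster; with GNU man-db that can be done with -s (a sketch, and the section number is only an example).
man -s 1 -wK viminfo    # search only section 1 and print the matching page files
man -s 1 -K viminfo     # same search, but open the matching pages one after another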
195,765 | I want to create a USB-to-USB data transfer system in Linux (preferably Ubuntu). For this I want to use no external hardware or switch ( except this cable ). It's going to be like mounting a USB drive to a system, but in this scenario one of the Linux systems is going to be mounted on the other. How can I create this? Are there any kernel modules available, given my experience with kernel programming is very basic? | Yes this is possible, but it is not possible by cutting two USB cables with USB-A connectors (what is normally going into the USB on your motherboard) and cross connecting the data cables. If you connect the USB power lines on such a self made cable, you are likely to end up frying your on-board USB handling chip . Don't try this at home! On most computer boards the chips handling USB are host only. Not only that but, it also handles a lot of the low level communication to speed things up and reduce the load on the CPU. It is not as if you could program your computer to handle the pins on the USB port to act as if a non-host. The devices capable, on the chip level, of switching between acting as a host and connecting to a host are few, as this requires a much more expensive chip¹. This is e.g. why intelligent devices like my smart-phone, GPS and ebook, although they all run Linux or something similar, do not allow me to use ssh to communicate when connected via a normal USB cable. Those devices go into some dumb mode when connected, where the host (my desktop system) can use its storage as a USB disc. After disconnecting the device uses the same interface as a host as to get to the data (although no cable connection is required, this happens internally). With that kind of devices even if Linux runs on both, there is no communication between the systems, i.e. the linuxes . This independent of a normal micro or mini USB cable connecting them to my desktop. Between two desktop PCs the above is normally impossible to do as you would require a USB-A to USB-A cable, which is is not common (as it would not work with the normal chips that are driving the connections anyway). Any solution doing USB to USB with two USB-A connectors that I have seen, is based on a cable that has some electronics in between. (much like a USB → Serial plugged into a Serial → USB cable, but then all in one piece). These normally require drivers to do the transfer, although you might be able to use UUCP or something else over such a cable, like you would over a "normal" serial port. This probably requires inetd and proper configuration to login on the other computer as well. ¹ The only device I have that is software changeable in this way is a Arduino board with exactly such a special chip. Just this chip made the board twice as expensive as a normal Arduino board. | {
"source": [
"https://unix.stackexchange.com/questions/195765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110115/"
]
} |
195,794 | I installed Unified Remote using dpkg : dpkg -i urserver.deb How do I uninstall it so I can reinstall from scratch? | First of all you should check if this package is correctly installed in your system and being listed by dpkg tool: dpkg -l '*urserver*' It should have an option ii in the first column of the output - that means 'installed ok installed'. If you'd like to remove the package itself (without the configuration files), you'll have to run: dpkg -r urserver If you'd like to delete (purge) the package completely (with configuration files), you'll have to run: dpkg -P urserver You may check if the package has been removed successfully - simply run again: dpkg -l urserver If the package has been removed without configuration files, you'll see the rc status near the package name, otherwise, if you have purged the package completely, the output will be empty. | {
"source": [
"https://unix.stackexchange.com/questions/195794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67364/"
]
} |
195,898 | Reading "What is the difference between Halt and Shutdown commands?" , I generally have an idea of what the command shutdown does, with or without -h/-r options. The "halt" command performs power off of the system to run-level 0 of
the system. The "shutdown" command performs a power off of the system to run-level
1 without the -h or -r option. What about the command "poweroff" ? Does it go into run-level 0 or 1?
Is this the only main difference between these three commands? | And now, the systemd answer. You're using, per the tag on your question, Red Hat Enterprise Linux. Since version 7, that has used systemd. None of the other answers are correct for the world of systemd; nor even are some of the assumptions in your question. Forget about runlevels ; they exist, but only as compatibility shims. The systemd documentation states that the concept is "obsolete". If you're starting to learn this stuff on a systemd operating system, don't start there. Forget about the manual page that marcelm quoted; it's not from the right toolset at all, and is a description of another toolset's command, incorrect for systemd's. It's the one for the halt command from the van Smoorenburg "System 5" init utilities. Ignore the statements that /sbin/halt is a symbolic link to /sbin/reboot ; that's not true with systemd. There is no separate reboot program at all. Ignore the statements that halt or reboot invoke a shutdown program with command-line arguments; they are also not true with systemd. There is no separate shutdown program at all. Every system management toolset has its version of these utilities. systemd, upstart, nosh , van Smoorenburg init , and BSD init all have their own halt , poweroff , and so forth. On each their mechanics are slightly different. So are their manual pages. In the systemd toolset halt , poweroff , reboot , telinit , and shutdown are all symbolic links to /bin/systemctl . They are all backwards compatibility shims, that are simply shorthands for invoking systemd's primary command-line interface: systemctl . They all map to (and in fact are) that same single program. (By convention, the shell tells it which name it has been invoked by.) targets, not runlevels Most of those commands are shorthands for telling systemd, using systemctl , to isolate a particular target . Isolation is explained in the systemctl manual page (q.v.), but can be, for the purposes of this answer, thought of as starting a target and stopping any others. The standard targets used in systemd are listed on the systemd.special (8) manual page. The diagrams on the bootup (7) manual page in the systemd toolset, in particular the last one, show that there are three "final" targets that are relevant here: halt.target — Once the system has reached the state of fully isolating this target, it will have called the reboot(RB_HALT_SYSTEM) system call. The kernel will have attempted to enter a ROM monitor program, or simply halted the CPU (using whatever mechanism is appropriate for doing so). reboot.target — Once the system has reached the state of fully isolating this target, it will have called the reboot(RB_AUTOBOOT) system call (or the equivalent with the magic command line). The kernel will have attempted to trigger a reboot. poweroff.target — Once the system has reached the state of fully isolating this target, it will have called the reboot(RB_POWER_OFF) system call. The kernel will have attempted to remove power from the system, if possible. These are the things that you should be thinking about as the final system states, not run levels. Notice from the diagram that the systemd target system itself encodes things that are, in other systems, implicit rather than explicit: such as the notion that each of these final targets encompasses the shutdown.target target, so that one describes services that must be stopped before shutdown by having them conflict with the shutdown.target target. 
systemctl tries to send requests to systemd-logind when the calling user is not the superuser. It also passes delayed shutdowns over to systemd-shutdownd . And some shorthands trigger wall notifications. Those complexities aside, which would make this answer several times longer, assuming that you are currently the superuser and not requesting a scheduled action: systemctl isolate halt.target has the shorthands: shutdown -H now systemctl halt plain unadorned halt systemctl isolate reboot.target has the shorthands: shutdown -r now telinit 6 systemctl reboot plain unadorned reboot systemctl isolate poweroff.target has the shorthands: shutdown -P now telinit 0 shutdown now systemctl poweroff plain unadorned poweroff systemctl isolate rescue.target has the shorthands: telinit 1 systemctl rescue systemctl isolate multi-user.target has the shorthands: telinit 2 telinit 3 telinit 4 systemctl isolate graphical.target has the shorthand: telinit 5 After parsing the various differing command-line syntaxes, these all eventually end up in the same code paths inside the systemctl program. Notes: The traditional behaviour of option-less shutdown now has been to switch to single-user mode . This is not the case with systemd. rescue.target — single-user mode being renamed rescue mode in systemd — is not reachable with the shutdown command. telinit really does wholly ignore all of those runlevel N .target and default.target symbolic links in the filesystem that the manual pages describe. The aforegiven mappings are hardwired into the systemctl program, in a table. systemd has no notion of a current run level . The operation of these commands is not conditional upon any "if you are in run-level N ". The --force option to the halt , reboot , and poweroff commands is the same as saying --force --force to the systemctl halt , systemctl reboot , and systemctl poweroff commands. This makes systemctl try to call reboot() directly. Normally it just tries to isolate targets. telinit is not the same as init . They are different programs in the systemd world, the latter being another name for the systemd program, not for the systemctl program. The systemd program is not necessarily compiled with any van Smoorenburg compatibility at all, and on some systemd operating systems complains about being invoked incorrectly if one attempts init N . Further reading Are there any good reasons for halting system without cutting power? Why does `init 0` result in "Excess Arguments" on Arch install? Stephen Wadeley (2014). "8. Managing Services with systemd" Red Hat Enterprise Linux 7 System Administrators' Guide . Red Hat. Lennart Poettering (2013-10-07). systemctl . systemd manual pages. freedesktop.org. Lennart Poettering (2013-10-07). systemd.special . systemd manual pages. freedesktop.org. Lennart Poettering (2013-10-07). bootup . systemd manual pages. freedesktop.org. Jonathan de Boyne Pollard (2018). init . nosh Guide . Softwares. | {
"source": [
"https://unix.stackexchange.com/questions/195898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91488/"
]
} |
195,939 | I want to know the meaning of {} + in the exec command, and what is the difference between {} + and {} \; .
To be exact, what is the difference between these two: find . -type f -exec chmod 775 {} +
find . -type f -exec chmod 775 {} \; | Using ; (semicolon) or + (plus sign) is mandatory in order to terminate the shell commands invoked by -exec / execdir . The difference between ; (semicolon) or + (plus sign) is how the arguments are passed into find's -exec / -execdir parameter. For example: using ; will execute multiple commands (separately for each argument), Example: $ find /etc/rc* -exec echo Arg: {} ';'
Arg: /etc/rc.common
Arg: /etc/rc.common~previous
Arg: /etc/rc.local
Arg: /etc/rc.netboot All following arguments to find are taken to be arguments to the command. The string {} is replaced by the current file name being processed. using + will execute the least possible commands (as the arguments are combined together). It's very similar to how xargs command works, so it will use as many arguments per command as possible to avoid exceeding the maximum limit of arguments per line. Example: $ find /etc/rc* -exec echo Arg: {} '+'
Arg: /etc/rc.common /etc/rc.common~previous /etc/rc.local /etc/rc.netboot The command line is built by appending each selected file name at the end. Only one instance of {} is allowed within the command. See also: man find Using semicolon (;) vs plus (+) with exec in find at SO Simple unix command, what is the {} and \; for at SO | {
"source": [
"https://unix.stackexchange.com/questions/195939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110379/"
]
} |
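The difference in how many times the command is invoked can be made visible with a tiny wrapper; a sketch using sh -c so that the number of batched arguments is printed (the paths are just examples).
find /etc/rc* -exec sh -c 'echo "called with $# file(s)"' sh {} \;   # one invocation per file
find /etc/rc* -exec sh -c 'echo "called with $# file(s)"' sh {} +    # as few invocations as possible, each with many arguments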
196,009 | As many (most?) others, I edit my crontab via crontab -e , where I keep all routine operations such as incremental backup, ntpdate, various rsync operations, as well as making my desktop background Christmas themed once a year. From what I've understood, on a fresh install or new user, this also automatically creates the file if it doesn't exist. However, I want to copy this file to another user, so where is the actual file that I'm editing? If this varies between distros, I'm using CentOS 5 and Mint 17 | The location of cron files for individual users is /var/spool/cron/crontabs/ on Debian-based systems such as Mint (on CentOS/RHEL it is /var/spool/cron/ ). From man crontab : Each user can have their own crontab, and though these are files in /var/spool/cron/crontabs , they are not intended to be edited directly. | {
"source": [
"https://unix.stackexchange.com/questions/196009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107480/"
]
} |
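Because the question is really about copying one user's crontab to another, it is usually safer to go through the crontab command than to copy the spool file by hand; a sketch that must be run as root, with placeholder user names.
crontab -l -u pi | crontab -u otheruser -   # dump pi's table and install it for otheruser
crontab -l -u otheruser                     # verify the result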
196,078 | I often encounter the "Another app is currently holding the yum lock; waiting for it to exit..." message when trying to install an app and I have to kill yum manually. How can I avoid that? Is there any simple method to unlock yum? It seems that only one instance of yum can be running. Is it the same with other package managers (apt-get, pacman)? | I think it is caused by PackageKit. You have to check for PackageKit and disable it (I assume it is CentOS 7 with systemctl , otherwise you can use service and chkconfig ) (as mentioned in comments, the service name is packagekit not packagekitd ): systemctl stop packagekit
systemctl disable packagekit Another approach (On CentOS/RHEL 6, Fedora 19 or earlier) is to open /etc/yum/pluginconf.d/refresh-packagekit.conf with a text editor, and change enabled=1 to enabled=0 . Or you can completely remove it: yum remove PackageKit | {
"source": [
"https://unix.stackexchange.com/questions/196078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18997/"
]
} |
196,098 | I use xubuntu 14.04, 64 bit. Every now and then, when I try to paste some text in xfce4-terminal, instead of the expected text to be pasted, it is surrounded by 0~ and 1~ , such as: 0~mvn clean install1~ The text is supposed to be mvn clean install -- I verified this by pasting the content in various other applications (gnome-terminal, gedit and others). Every application pastes correctly the content, except xfce4-terminal. I couldn't find any references for this on the internet (unfortunately, it is hard to search for text with special characters on google.com...). Why does this happen? | The issue is that your terminal is in bracketed paste mode, but doesn’t seem to support it properly. The issue was fixed in VTE, but xfce4-terminal is still using an old and unmaintained version of it. You can try temporarily turning bracketed paste mode off by using: printf "\e[?2004l" | {
"source": [
"https://unix.stackexchange.com/questions/196098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13945/"
]
} |
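If the stray 0~ / 1~ markers keep coming back, the reset can be run for every new shell; a sketch for ~/.bashrc, assuming the escape sequence from the answer is what the terminal needs.
# ~/.bashrc: switch bracketed paste mode off in terminals that mishandle it
case "$TERM" in
    xterm*) printf '\e[?2004l' ;;
esac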
196,166 | Is there a simple way to find out which initsystem is being used e.g by a recent Debian wheezy or Fedora system? I'm aware that Fedora 21 uses systemd initsystem but that is because I read that and because all relevant scripts/symlinks are stored in /etc/systemd/ . However, I'm not sure about e.g Debian squeeze or CentOS 6 or 7 and so on. Which techniques exist to verify such initsystem? | You can poke around the system to find indicators. One way is to check for the existence of three directories: /usr/lib/systemd tells you you're on a systemd based system. /usr/share/upstart is a pretty good indicator that you're on an Upstart-based system. /etc/init.d tells you the box has SysV init in its history The thing is, these are heuristics that must be considered together, possibly with other data, not certain indicators by themselves. The Ubuntu 14.10 box I'm looking at right now has all three directories. Why? Because Ubuntu just switched to systemd from Upstart in that version, but keeps Upstart and SysV init for backwards compatibility. In the end, I think the best answer is "experience." You will see that you have logged into a CentOS 7 box and know that it's systemd. How do you learn this? Playing around, RTFMing, etc. The same way you gain all experience. I realize this is not a very satisfactory answer, but that's what happens when there is fragmentation in the market, creating nonstandard designs. It's like asking how you know whether ls accepts -C , or --color , or doesn't do color output at all. Again, the answer is "experience." | {
"source": [
"https://unix.stackexchange.com/questions/196166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43380/"
]
} |
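The heuristics from the answer can be bundled into a short script; this is only a sketch, since the checks are indicators rather than proof, with /run/systemd/system and the name of PID 1 added as two further hints.
if [ -d /run/systemd/system ]; then
    echo "systemd is managing this boot"
elif [ -d /usr/share/upstart ] || initctl version 2>/dev/null | grep -q upstart; then
    echo "probably Upstart"
elif [ -d /etc/init.d ]; then
    echo "probably SysV init (or a compatibility layer)"
fi
ps -p 1 -o comm=    # the name of process 1 is another strong hint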
196,168 | The command less can be used to replace tail in tail -f file to provide features like handling binary output and navigating the scrollback: less +F file The + prefix means "pretend I type that after startup", and the key F starts following. But can less also replace tail --follow=name file which follows file even if the actual file gets deleted or moved away, like a log file that is moved to file.log.1 , and then a new file is created with the same name as the followed file? | Yes, less can follow by file name The feature has a fairly obscure syntax: less --follow-name +F file.log With less, --follow-name is different from the tail option --follow=name . It does not make less follow the file, instead it modifies the behaviour of the command key F inside of less to follow based on the file name, not the file descriptor. Also, there is no normal option to start less in follow mode. But you can use the command line to give keystrokes to execute after startup, by prefixing them with + . Combining the modifier option with +F , less will actually start in the (modified) follow mode. Use +F alone for the equivalent of plain tail -f : less +F file.log | {
"source": [
"https://unix.stackexchange.com/questions/196168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63775/"
]
} |
196,239 | If I have a string that looks like this: "this_is_the_string" Inside a bash script, I would like to convert it to PascalCase, ie UpperCamelCase to look like this: "ThisIsTheString" I found that converting to lowerCamelCase can be done like this: "this_is_the_string" | sed -r 's/([a-z]+)_([a-z])([a-z]+)/\1\U\2\L\3/' Unfortunately I am not familiar enough with regexes to modify this. | $ echo "this_is_the_string" | sed -r 's/(^|_)([a-z])/\U\2/g'
ThisIsTheString The pattern matches (^|_) , the start of the string or an underscore (first group), followed by ([a-z]) , a single lower-case letter (second group); the replacement \U\2 upper-cases the second group, and the g flag applies the substitution globally.
"source": [
"https://unix.stackexchange.com/questions/196239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110304/"
]
} |
196,512 | I'm running Ubuntu Desktop 14.04 as a VM on a mac with vmware fusion. I'm getting space warning issues and now want to expand from 20GB to 200GB. I powered off the VM and on the vmware side increased the allocated disk space: Power off the VM VMWare Fusion -> Virtual Machine -> Settings -> Hard Disk (SCSI) It then warned me that I should increase the partition size within the guest VM, which is unfortunate because I was hoping this would be automatic. Looking at the disk usage analyzer inside of Ubuntu, it only currently sees the original 20 GB. How do I increase this to the 200 GB I allocated? I'm looking for better direction than what is posted here . From the Disks app, I see: | From Ubuntu (in VM) Install gparted by executing sudo apt-get install gparted in Terminal. Open gparted either from terminal or from dash. Then extend you disk, maybe you may have to move your extended partition at the end of disk. | {
"source": [
"https://unix.stackexchange.com/questions/196512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
196,537 | I have a folder SOURCE that contains several sub-level folders, each with its own files. I want to copy this folder in a new folder COPY where I need to copy the directory structure but keep the files as symbolic links to the original files in SOURCE and its subfolders. | Here's the solution on non-embedded Linux and Cygwin: cp -as SOURCE/ COPY Note that SOURCE must be an absolute path and have a trailing slash. If you want to give a relative path, you can use cp -as "$(pwd)/SOURCE/" COPY | {
"source": [
"https://unix.stackexchange.com/questions/196537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77329/"
]
} |
196,549 | I'm making a curl request where it displays an html output in the console like this <b>Warning</b>: Cannot modify header information - headers already sent by (output started at /home/domain/public_html/wp-content/themes/explicit/functions/ajax.php:87) in <b>/home/domain/public_html/wp-content/themes/explicit/functions/ajax.php</b> on line <b>149</b><br />...... etc I need to hide these outputs when running the CURL requests, tried running the CURL like this curl -s 'http://example.com' But it still displays the output, how can I hide the output? Thanks | From man curl -s, --silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. It will still output the data you ask for,
potentially even to the terminal/stdout unless you redirect it . So if you don't want any output use: curl -s 'http://example.com' > /dev/null | {
"source": [
"https://unix.stackexchange.com/questions/196549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110666/"
]
} |
196,565 | I wanted to format the Unix files conditionally, I am currently working on diff command and wanted to know if it is possible to format the text of the diff command output. Example: Matched values should be displayed in green. Unmatched values should be displayed in red. Suppose I have two files file1 and file2 and my command is diff file1 file2 . Now I wanted that suppose output contain 5 mismatch then those mismatch should be displayed in Red color. How to achieve this using unix? In short "Change color to red for the output of diff command for values which mismatch" | diff --color option was added to GNU diffutils 3.4 (2016-08-08) This is the default diff implementation on most distros, which will soon be getting it. Ubuntu 18.04 has diffutils 3.6 and therefore has it. On 3.5 it looks like this: Tested: diff --color -u \
<(seq 6 | sed 's/$/ a/') \
<(seq 8 | grep -Ev '^(2|3)$' | sed 's/$/ a/') Apparently added in commit c0fa19fe92da71404f809aafb5f51cfd99b1bee2 (Mar 2015). Word-level diff Like diff-highlight . Not possible it seems, feature request: https://lists.gnu.org/archive/html/diffutils-devel/2017-01/msg00001.html Related threads: https://stackoverflow.com/questions/1721738/using-diff-or-anything-else-to-get-character-level-diff-between-text-files diff within a line https://superuser.com/questions/496415/using-diff-on-a-long-one-line-file ydiff does it though, see below. ydiff side-by-side word level diff https://github.com/ymattw/ydiff Is this Nirvana? python3 -m pip install --user ydiff
diff -u a b | ydiff -s Outcome: If the lines are too narrow (default 80 columns), fit to screen with: diff -u a b | ydiff -w 0 -s Contents of the test files: a 1
2
3
4
5 the original line the original line the original line the original line
6
7
8
9
10
11
12
13
14
15 the original line the original line the original line the original line
16
17
18
19
20 b 1
2
3
4
5 the original line teh original line the original line the original line
6
7
8
9
10
11
12
13
14
15 the original line the original line the original line the origlnal line
16
17
18
19
20 ydiff Git integration ydiff integrates with Git without any configuration required. From inside a git repository, instead of git diff , you can do just: ydiff -s and instead of git log : ydiff -ls See also: https://stackoverflow.com/questions/7669963/how-can-i-get-a-side-by-side-diff-when-i-do-git-diff/14649328#14649328 Tested on Ubuntu 16.04, git 2.18.0, ydiff 1.1. | {
"source": [
"https://unix.stackexchange.com/questions/196565",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108771/"
]
} |
196,603 | On the man page , it just says: -m Job control is enabled. But what does this actually mean? I came across this command in a SO question , I have the same problem as OP, which is "fabric cannot start tomcat". And set -m solved this. The OP explained a little, but I don't quite understand: The issue was in background tasks as they will be killed when the
command ends. The solution is simple: just add "set -m;" prefix before command. | Quoting the bash documentation (from man bash ): JOB CONTROL
Job control refers to the ability to selectively stop
(suspend) the execution of processes and continue (resume)
their execution at a later point. A user typically employs
this facility via an interactive interface supplied jointly
by the operating system kernel's terminal driver and bash. So, quite simply said, having set -m (the default for
interactive shells) allows one to use built-ins such as fg and bg ,
which would be disabled under set +m (the default for non-interactive shells). It's not obvious to me what the connection is between job control and
killing background processes on exit, however, but I can confirm that
there is one: running set -m; (sleep 10 ; touch control-on) & will
create the file if one quits the shell right after typing that
command, but set +m; (sleep 10 ; touch control-off) & will not. I think the answer lies in the rest of the documentation for set -m : -m Monitor mode. [...] Background pro‐
cesses run in a separate process group and a line con‐
taining their exit status is printed upon their comple‐
tion. This means that background jobs started under set +m are not actual
"background processes" ("Background processes are those whose process
group ID differs from the terminal's"): they share the same process
group ID as the shell that started them, rather than having their own
process group like proper background processes. This explains the
behavior observed when the shell quits before some of its background
jobs: if I understand correctly, when quitting, a signal is sent to
the processes in the same process group as the shell (thus killing
background jobs started under set +m ), but not to those of other
process groups (thus leaving alone true background processes started
under set -m ). So, in your case, the startup.sh script presumably starts a
background job. When this script is run non-interactively, such as
over SSH as in the question you linked to, job control is disabled,
the "background" job shares the process group of the remote shell, and
is thus killed as soon that shell exits. Conversely, by enabling job
control in that shell, the background job acquires its own process
group, and isn't killed when its parent shell exits. | {
"source": [
"https://unix.stackexchange.com/questions/196603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55914/"
]
} |
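The process-group difference described above is easy to observe; a sketch that prints the PGID of a background sleep under both settings, assuming it is run as a script (i.e. in a non-interactive shell).
set +m; sleep 300 &
ps -o pid,pgid,comm -p $!    # same PGID as the shell: killed along with it
set -m; sleep 300 &
ps -o pid,pgid,comm -p $!    # its own PGID: a proper background process
kill $(jobs -p) 2>/dev/null  # tidy up the two sleeps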
196,677 | I asked Google the same question and didn't like the results I got. What is /tmp/.X11-unix/ ? | On my fairly up-to-date Arch laptop, /tmp/.X11-unix/ is a directory with one entry: X0 , a Unix-domain socket . The X11 server (usuall Xorg these days) communicates with clients like xterm , firefox, etc via some kind of reliable stream of bytes. A Unix domain socket is probably a bit more secure than a TCP socket open to the world, and probably a bit faster, as the kernel does it all, and does not have to rely on an ethernet or wireless card. My X11 server shows up as: bediger 294 293 0 Apr09 tty1 01:23:26 /usr/lib/xorg-server/Xorg -nolisten tcp :0 vt1 -auth /tmp/serverauth.aK3Lrv5hMV The "-nolisten tcp" keeps it from opening TCP port 6000 for communications. The command lsof -U can tell you what processes are using which Unix domain sockets. I see Xorg as connected to /tmp/.X11-unix/X0 . | {
"source": [
"https://unix.stackexchange.com/questions/196677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
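A quick way to see which clients are attached to that socket, assuming lsof and iproute2's ss are installed (run as root to see other users' processes).
ss -xlp | grep X11-unix          # who is listening on /tmp/.X11-unix/X0
lsof -U | grep /tmp/.X11-unix    # processes holding a connection to the X socket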
196,715 | I have a file that I want to pad until it reaches 16 MiB (16777216 bytes). Currently it is 16515072 bytes. The difference is 262144 bytes. How do I pad it? This doesn't seem to be working: cp smallfile.img largerfile.img
dd if=/dev/zero of=largerfile.img bs=1 count=262144 | Besides the answers to get a physical padding you may also leave most of the padding space in the file just empty ("holes"), by seek ing to the new end-position of the file and writing a single character: dd if=/dev/zero of=largerfile.txt bs=1 count=1 seek=16777215 (which has the advantage to be much more performant, specifically with bs=1 , and does not occupy large amounts of additional disk space). That method seems to work even without adding any character, by using if=/dev/null and the final desired file size: dd if=/dev/null of=largerfile.txt bs=1 count=1 seek=16777216 A performant variant of a physical padding solution that uses larger block-sizes is: padding=262144 bs=32768 nblocks=$((padding/bs)) rest=$((padding%bs))
{
dd if=/dev/zero bs=$bs count=$nblocks
dd if=/dev/zero bs=$rest count=1
} 2>/dev/null >>largerfile.txt | {
"source": [
"https://unix.stackexchange.com/questions/196715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
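Whether the padding ended up physical or as a hole can be checked afterwards; a sketch using stat and du from GNU coreutils, with the file name from the answer.
stat -c '%s bytes (apparent size)' largerfile.txt   # should report 16777216
du -B1 --apparent-size largerfile.txt               # same number
du -B1 largerfile.txt                               # allocated bytes: smaller if the padding is a hole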
196,907 | I have a service (docker registry) that runs on port 5000 , I have installed nginx to redirect http request from 8080 to 5000 . If I make a curl to localhost:5000 it works, but when I make a curl to localhost:8080 I get a Bad gateway error. nginx config file: upstream docker-registry {
server localhost:5000;
}
server {
listen 8080;
server_name registry.mydomain.com;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
proxy_pass http://docker-registry;
}
location /_ping {
auth_basic off;
proxy_pass http://docker-registry;
}
location /v1/_ping {
auth_basic off;
proxy_pass http://docker-registry;
}
} In /var/log/nginx/error.log I have: [crit] 15595#0: *1 connect() to [::1]:5000 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: registry.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://[::1]:5000/", host: "localhost:8080" Any idea? | I assume it's a Linux box, so most likely SELinux is preventing the connection, as there is no policy allowing it. You should be able to just run # setsebool -P httpd_can_network_connect true and then restart nginx. | {
"source": [
"https://unix.stackexchange.com/questions/196907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64116/"
]
} |
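Before and after flipping the boolean it is worth confirming that SELinux really is the culprit; a sketch assuming the audit tools (ausearch) and policycoreutils are installed.
getenforce                                   # Enforcing means SELinux can block the proxy
getsebool httpd_can_network_connect          # expect "off" before the fix
ausearch -m avc -ts recent | grep nginx      # look for the denied connect()
setsebool -P httpd_can_network_connect on    # apply the persistent fix, then restart nginx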
196,915 | I have to replace special characters using the shell, so I use sed, but I get some errors that I don't understand. <%_ by [@, ("_" = dash)
_%> by ] For the first 2 characters my syntax is : sed -i y/\<%\/\]\/ test.htm It works, but how can I add the dash character ?
The second should be this way sed -i y/\%>\/\]\/ but I get this error: bash: /]/: is a folder Can you help me please? | I assume it's a Linux box, so most likely SELinux is preventing the connection, as there is no policy allowing it. You should be able to just run # setsebool -P httpd_can_network_connect true
"source": [
"https://unix.stackexchange.com/questions/196915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110890/"
]
} |
197,124 | I am running an Ubuntu 12.04 desktop system. So far I have only installed some programs (I have sudo rights). When I check the list of users on the system, I see a long list, like more than 20 users—when were these users created (e.g. daemon, sys, sync, games, pulse, etc.)? How are these related to new programs being installed? If I run a program on my system, it should run with my UID. But on doing a ps , I see many other programs running with different UID (like root, daemon, avahi, syslog, colord etc.) — how were these programs started with different UIDs? | User accounts are used not only for actual, human users, but also to run system services and sometimes as owners of system files. This is done because the separation between human users' resources (processes, files, etc.) and the separation between system services' resources requires the same mechanisms under the hood. The programs that you run normally run with your user ID. It's only system daemons that run under their own account. Either the configuration file that indicates when to run the daemon also indicates what user should run it, or the daemon switches to an unprivileged account after starting. Some daemons require full administrative privileges, so they run under the root account. Many daemons only need access to a specific hardware device or to specific files, so they run under a dedicated user account. This is done for security: that way, even if there's a bug or misconfiguration in one of these services, it can't lead to a full system attack, because the attacker will be limited to what this service can do and won't be able to overwrite files, spy on processes, etc. Under Ubuntu, user IDs in the range 0–99 are created at system installation. 0 is root; many of the ones in the range 1–99 exist only for historical reasons and are only kept for backward compatibility with some local installations that use them (a few extra entries don't hurt). User IDs in the range 100–999 are created and removed dynamically when services that need a dedicated user ID are installed or removed. The range from 1000 onwards is for human users or any other account created by the system administrator. The same goes for groups. | {
"source": [
"https://unix.stackexchange.com/questions/197124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63934/"
]
} |
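The UID ranges described above can be seen directly in /etc/passwd; a sketch using awk, where the 1000 boundary matches Ubuntu's defaults (some other distributions use 500).
awk -F: '$3 == 0                  {print "root account:", $1}' /etc/passwd
awk -F: '$3 >= 1 && $3 < 1000     {print "system/service account:", $1}' /etc/passwd
awk -F: '$3 >= 1000 && $3 < 65534 {print "regular user:", $1}' /etc/passwd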
197,127 | Near as I can tell the zip -T option only determines if files can be extracted -- it doesn't really test the archive for internal integrity. For example, I deliberately corrupted the local (not central directory) CRC for a file, and zip didn't care at all, reporting the archive as OK. Is there some other utility to do this? There's a lot of internal redundancy in ZIP files, and it would be nice to have a way of checking it all. Of course, normally the central directory is all you need, but when repairing a corrupted archive often all you have is a fragment, with the central directory clobbered or missing. I'd like to know if archives I create are as recoverable as possible. | unzip -t Test archive files. This option extracts each specified file in memory and compares the CRC (cyclic redundancy check, an enhanced checksum) of the expanded file with the original's stored CRC value. [ source: https://linux.die.net/man/1/unzip ] | {
"source": [
"https://unix.stackexchange.com/questions/197127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110982/"
]
} |
197,134 | I want to take a 78gb folder and store it in a single file (for upload into a cloud service), as if I am compressing it in an archive, but I don't want any compression (I don't have that much CPU time available). Is there anyway that I can accomplish this, perhaps a terminal command I don't know about? | Use tar : tar -cf my_big_folder.tar /my/big/folder Restore the archive with tar -xf my_big_folder.tar -C / -C will change to the root directory to restore your archive since the archive created above contains absolute paths. EDIT : Due to the relatively big size of the archive, it'd be best to send it [directly] to its final location, using SSH or a mount point of the cloud resource/folder. For example, as Cole Johnson suggests : tar -cf /network/mount/point/my_big_folder.tar /my/big/folder or tar -c /my/big/folder | ssh example.com "cat > my_big_folder.tar" EDIT : As Blacklight Shining also suggests , If you want to avoid absolute paths, you can change to the big folder's parent and tar from there: tar -cf /network/mount/point/my_big_folder.tar \
-C /my/big/folder/location the_big_folder or tar -cC /my/big/folder/location the_big_folder | \
ssh example.com "cat > my_big_folder.tar" Personal reflexions Whether to include relative or absolute paths is a matter of personal preference. There are cases absolute paths are obvious, e.g. for a restore in a disaster recovery situation. For local projects or collections it's common to archive a directory tree from the desired folder's parent so as to avoid cluttering the current directory, in case the archive is accidentally unpacked in-place. If big_folder lies somewhere deep in a standard *NIX hierarchy , it may make some sense to start archiving the first non-standard folder where big_folder deviates from and its directory tree from there. Finally — going pedantic here — tar archive members are always relative since a) they may be restored in any directory and b) tar removes the leading / when creating an archive. I personally tend to always use -C when unpacking an archive. | {
"source": [
"https://unix.stackexchange.com/questions/197134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110984/"
]
} |
197,160 | Within a desktop environment we can resize terminals ( GNOME Terminal for example) for our convenience. How can I know the size of the terminal in terms of pixels or number of columns and rows? | If you issue the command stty size it returns the size of the current terminal in rows and columns. Example: $ stty size
24 80 You can read the rows and columns into variables like this (thanks to Janis' comment ): $ read myrows mycols < <(stty size) Obtaining the size in pixels requires knowledge of your screen's resolution and I don't think stty has direct access to such information. | {
"source": [
"https://unix.stackexchange.com/questions/197160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
197,437 | I am creating a linux distro and now I need an init program. I can code in c really well and I know quite a bit about linux (not much but I've been using arch linux for development for 4 years), so I thought I should try writing my own basic init script in C. I was just wondering, what tasks does init do to set the system up for a simple shell? (When I ask "what does init do?", I do know what init is and what it's for. I just don't know what tasks it does.) I don't need code and I possibly don't even need basic commands but I do need the order that they are run in. | System 5 init will tell you only a small part of the story. There's a sort of myopia that affects the Linux world. People think that they use a thing called "System 5 init ", and that is both what is traditional and the best place to start. Neither is in fact the case. Tradition isn't in fact what such people say it to be, for starters. System 5 init and System 5 rc date to AT&T UNIX System 5, which was almost as far after the first UNIX as we are now (say) after the first version of Linux-Mandrake. 1st Edition UNIX only had init . It did not have rc . The 1st Edition assembly language init ( whose code has been restored and made available by Warren Toomey et al. ) directly spawned and respawned 12 getty processes, mounted 3 hardwired filesystems from a built-in table, and directly ran a program from the home directory of a user named mel . The getty table was also directly in the program image. It was another decade after UNIX System 5 that the so-called "traditional" Linux init system came along. In 1992, Miquel van Smoorenburg (re-)wrote a Linux init + rc , and their associated tools, which people now refer to as "System 5 init ", even though it isn't actually the software from UNIX System 5 (and isn't just init ). System 5 init / rc isn't the best place to start, and even if one adds on knowledge of systemd that doesn't cover half of what there is to know. There's been a lot of work in the area of init system design (for Linux and the BSDs) that has happened in the past two decades alone. All sorts of engineering decisions have been discussed, made, designed, implemented, and practised. The commercial Unices did a lot, too. Existing systems to study and and learn from Here is an incomplete list of some of the major init systems other than those two, and one or two of their (several) salient points: Joachim Nilsson's finit went the route of using a more human-readable configuration file. Felix von Leitner's minit went for a filesystem-is-the-database configuration system, small memory footprints, and start/stop dependencies amongst things that init starts. Gerrit Pape's runit went for what I have previously described as the just spawn four shell scripts approach. InitNG aimed to have dependencies, named targets, multiple configuration files, and a more flexible configuration syntax with a whole load more settings for child processes. upstart went for a complete redesign, modelling the system not as services and interdependencies at all, but as events and jobs triggered by them. The design of nosh includes pushing all of the service management out (including even the getty spawning and zombie reaping) into a separate service manager, and just handling operating-system-specific "API" devices/symlinks/directories and system events. sinit is a very simple init. It executes /bin/rc.init whose job it is to start programs, mount filesystem, etc. For this you can use something like minirc . 
Moreover, about 10 years ago, there was discussion amongst daemontools users and others of using svscan as process #1, which led to projects like Paul Jarc's svscan as process 1 study , Gerrit Pape's ideas , and Laurent Bercot's svscan as process 1 . Which brings us to what process #1 programs do. What process #1 programs do Notions of what process #1 is "supposed" to do are by their natures subjective. A meaningful objective design criterion is what process #1 at minimum must do. The kernel imposes several requirements on it. And there are always some operating-system-specific things of various kinds that it has to do. When it comes to what process #1 has traditionally done, then we are not at that minimum and never really have been. There are several things that various operating system kernels and other programs demand of process #1 that one simply cannot escape. People will tell you that fork() ing things and acting as the parent of orphaned processes is the prime function of process #1. Ironically, this is untrue. Dealing with orphaned processes is (with recent Linux kernels, as explained at https://unix.stackexchange.com/a/177361/5132 ) a part the system that one can largely factor out of process #1 into other processes, such as a dedicated service manager . All of these are service managers, that run outwith process #1: the IBM AIX srcmstr program, the System Resource Controller Gerrit Pape's runsvdir from runit Daniel J. Bernstein's svscan from daemontools, Adam Sampson's svscan from freedt , Bruce Guenter's svscan from daemontools-encore, and Laurent Bercot's s6-svscan from s6 Wayne Marshall's perpd from perp the Service Management Facility in Solaris 10 the service-manager from nosh Similarly, as explained at https://superuser.com/a/888936/38062 , the whole /dev/initctl idea doesn't need to be anywhere near process #1. Ironically, it is the highly centralized systemd that demonstrates that it can be moved out of process #1. Conversely, the mandatory things for init , that people usually forget in their off-the-top-of-the-head designs, are things such as handling SIGINT , SIGPWR , SIGWINCH , and so forth sent from the kernel and enacting the various system state change requests sent from programs that "know" that certain signals to process #1 mean certain things. (For example: As explained at https://unix.stackexchange.com/a/196471/5132 , BSD toolsets "know" that SIGUSR1 has a specific meaning.) There are also once-off initialization and finalization tasks that one cannot escape, or will suffer greatly from not doing, such as mounting "API" filesystems or flushing the filesystem cache. The basics of dealing with "API" filesystems are little different to the operation of init rom 1st Edition UNIX: One has a list of information hardwired into the program, and one simply mount() s all of the entries in the list. You'll find this mechanism in systems as diverse as BSD (sic!) init , through the nosh system-manager , to systemd. "set the system up for a simple shell" As you have observed, init=/bin/sh doesn't get "API" fileystems mounted, crashes in an ungainly fashion with no cache flush when one types exit ( https://unix.stackexchange.com/a/195978/5132 ), and in general leaves it to the (super)user to manually do the actions that make the system minimally usable. 
To see what one actually has no choice but to do in process #1 programs, and thus set you on a good course for your stated design goal, your best option is to look at the overlaps in the operation of Gerrit Pape's runit, Felix von Leitner's minit, and the system-manager program from the nosh package. The former two show two attempts to be minimalist, yet still handle the stuff that it is impossible to avoid. The latter is useful, I suggest, for its extensive manual entry for the system-manager program, which details exactly what "API" filesystems are mounted, what initialization tasks are run, and what signals are handled; in a system that by design has the system manager just spawn three other things (the service manager, an accompanying logger, and the program to run the state changes) and only do the unavoidable in process #1. | {
"source": [
"https://unix.stackexchange.com/questions/197437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111203/"
]
} |
197,448 | In the current version of Raspbian, I know it is possible to change the password of the currently logged-in user from the command line like so: sudo passwd which will then prompt the user to enter a new password twice. This will produce output like so: Changing password for pi.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully I was wondering if there is a possible way to change a password programmatically, like from a shell script. I'm trying to make a configuration script to deploy on my Raspberry Pis and I don't want to have to manually type in new passwords for them. | You're looking for the chpasswd command. You'd do something like this: echo 'pi:newpassword' | chpasswd # change user pi password to newpassword Note that it needs to be run as root, at least with the default PAM configuration. But presumably running as root isn't a problem for a system deployment script. Also, you can do multiple users at once by feeding it multiple lines of input; a short sketch of that follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/197448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146317/"
]
} |
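(To illustrate the last point of the answer above, here is a hedged sketch of feeding chpasswd several users at once; the user names and passwords are made up, and it must still be run as root.)
#!/bin/sh
# Hypothetical provisioning snippet: one user:password pair per line.
chpasswd <<'EOF'
pi:newpassword
alice:anotherpassword
bob:yetanotherpassword
EOF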
197,567 | What is the simplest and most versatile way to send files over the network to other computers? By that I mean computers that other people are using at the moment. I don't think SSH works if the computer has an active session open. So far I am using netcat , which works alright. But are there any other simple ways to do this? One problem I have with netcat , is that the receiver needs to know the file ending and has to come up with a name for the stream. | You're complicating your life needlessly. Use scp . To transfer a file myfile from your local directory to directory /foo/bar on machine otherhost as user user , here's the syntax: scp myfile user@otherhost:/foo/bar . EDIT: It is worth noting that transfer via scp/SSH is encrypted while transfer via netcat or HTTP isn't. So if you are transferring sensitive files, always use the former. | {
"source": [
"https://unix.stackexchange.com/questions/197567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111041/"
]
} |
197,568 | I am in the process of moving some Linux servers onto a virtualized environment with their filesystems mounted from LVM volumes, which are in turn hosted on a remote NAS via iSCSI. I am able to start them up and they run perfectly with no issues. However, the NAS server is Windows-based and, when Microsoft issues patches, it automatically applies them and reboots. When it reboots, all of the virtual servers' filesystems detect errors and go into read-only mode. I have attempted to remount them as read/write, but the kernel has the filesystem flagged as write-protected, so this fails. The only way I've been able to find to recover is to shut the virt down, fsck its LVM volume, and restart it. The virts mount these LVM volumes with an fstab entry of the form: /dev/xvda2 / ext3 noatime,nodiratime,errors=remount-ro 0 1 or /dev/xvda2 / ext4 errors=remount-ro 0 1 The virtual host OS also has an LVM/iSCSI mount from the NAS server (in the same volume group, even) which continues working in read/write mode despite these interruptions. Its fstab entry is: /dev/mapper/nas6-dom0 /mnt/nas6 ext4 _netdev 0 0 This leads me to suspect that removing errors=remount-ro from the guests' fstab entries would provide fault-tolerance, but I'm a bit uneasy about doing that - if an actual error develops in the filesystem, I would expect that allowing continued writes to the fs could make things much worse in short order. What is the best practice for resolving this such that the virtual guests will be able to continue running after the NAS reboots itself? | You're complicating your life needlessly. Use scp . To transfer a file myfile from your local directory to directory /foo/bar on machine otherhost as user user , here's the syntax: scp myfile user@otherhost:/foo/bar . EDIT: It is worth noting that transfer via scp/SSH is encrypted while transfer via netcat or HTTP isn't. So if you are transferring sensitive files, always use the former. | {
"source": [
"https://unix.stackexchange.com/questions/197568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20546/"
]
} |
197,670 | There is a service I want to run only when another service fails ( [Unit] OnFailure=foo ), but I don't want this service ( foo ) to start up automatically on boot. One option is running systemctl disable foo , but I'm looking for another way. Background: I am creating an OS image, and I don't want to have to boot the machine up, run that command ( systemctl disable foo ), then shut it down before declaring my image final. | systemctl enable works by manipulating symlinks in /etc/systemd/system/ (for system daemons). When you enable a service, it looks at the WantedBy lines in the [Install] section, and plops symlinks in those .wants directories. systemctl disable does the opposite. You can just remove those symlinks—doing that by hand is fully equivalent to using systemctl disable . | {
"source": [
"https://unix.stackexchange.com/questions/197670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
197,792 | I'm trying to join all of the arguments to a Bash function into one single string with spaces separating each argument. I also need to have the string include single quotes around the whole string. Here is what I have so far... $array=("$@")
str="\'"
for arg in "${array[@]}"; do
let $str=$str+$arg+" "
done
let $str=$str+"\'" Obviously this does not work but I'm wondering if there is a way to achieve this? | I believe that this does what you want. It will put all the arguments in one string, separated by spaces, with single quotes around all: str="'$*'" $* produces all the scripts arguments separated by the first character of $IFS which, by default, is a space. Inside a double quoted string, there is no need to escape single-quotes. Example Let us put the above in a script file: $ cat script.sh
#!/bin/sh
str="'$*'"
echo "$str" Now, run the script with sample arguments: $ sh script.sh one two three four 5
'one two three four 5' This script is POSIX. It will work with bash but it does not require bash . A variation: concatenating with slashes instead of spaces We can change from spaces to another character by adjusting IFS : $ cat script.sh
#!/bin/sh
old="$IFS"
IFS='/'
str="'$*'"
echo "$str"
IFS=$old For example: $ sh script.sh one two three four
'one/two/three/four' | {
"source": [
"https://unix.stackexchange.com/questions/197792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111391/"
]
} |
197,824 | What is the difference between: find . and find . -print What does -print actually do? $ find .
.
./hello.txt
./hello
./hello/txt
./hello/hello2
./hello/hello2/hello3
./hello/hello2/hello3/txt
./hello/hello2/txt
$ find . -print
.
./hello.txt
./hello
./hello/txt
./hello/hello2
./hello/hello2/hello3
./hello/hello2/hello3/txt
./hello/hello2/txt | From the findutils find manpage : If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). ( -print is a find expression .) The POSIX documentation confirms this: If no expression is present, -print shall be used as the expression. So find . is exactly equivalent to find . -print ; the first has no expression so -print is added internally. The explanation of what -print does comes further down in the manpage: -print True; print the full file name on the standard output, followed by a newline. If you are piping the output of find into another program and there is the faintest possibility that the files which you are searching for might contain a newline, then you should seriously consider using the -print0 option instead of -print . See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. | {
"source": [
"https://unix.stackexchange.com/questions/197824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111372/"
]
} |
197,830 | I have two text files, one file contains entries such as Id Value
1 apple
2 orange
3 mango
4 banana
5 strawberry
6 papaya In other file I have entries like Id Value
6 strawberry
4 banana
3 orange
1 mango
2 papaya
5 straw berry I have to match between Ids and the corresponding strings in the value column and find the string correctness. How can this be done? | From the findutils find manpage : If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). ( -print is a find expression .) The POSIX documentation confirms this: If no expression is present, -print shall be used as the expression. So find . is exactly equivalent to find . -print ; the first has no expression so -print is added internally. The explanation of what -print does comes further down in the manpage: -print True; print the full file name on the standard output, followed by a newline. If you are piping the output of find into another program and there is the faintest possibility that the files which you are searching for might contain a newline, then you should seriously consider using the -print0 option instead of -print . See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. | {
"source": [
"https://unix.stackexchange.com/questions/197830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111431/"
]
} |
198,000 | I have read the following article: How do I bypass/ignore the gpg signature checks of apt? It outlines how to configure apt to not check the signatures of packages at all . However, I'd like to limit the effect of this setting to a single (in this case locally hosted) repository. That is: all official repositories should use the GPG signature check as usual, except for the local repo . How would I go about doing that? Failing that, what would be the advantage (security-wise) of signing the packages during an automated build (some meta-packages and a few programs) and then doing all that secure apt prescribes? After all the host with the repo would then also be the one on which the secret GPG key resides. | You can set options in your sources.list : deb [trusted=yes] http://localmachine/debian wheezy main The trusted option is what turns off the GPG check. See man 5 sources.list for details. Note: this was added in apt 0.8.16~exp3. So it's in wheezy (and of course jessie), but not squeeze. | {
"source": [
"https://unix.stackexchange.com/questions/198000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
198,003 | How can I pick which kernel GRUB2 should load by default? I recently installed a the linux realtime kernel and now it loads by default. I'd like to load the regular one by default. So far I only managed to pick the default OS.. and for some reason the /boot/grub.cfg already assumes that I want to load the rt-kernel and put it into the generic linux menu entry (in my case Arch Linux). | I think most distributions have moved additional kernels into the advanced options sub menu at this point, as TomTom found was the case with his
Arch. I didn't want to alter my top level menu structure in order to select a previous kernel as the default. I found the answer here . To summarize: Find the $menuentry_id_option for the submenu: $ grep submenu /boot/grub/grub.cfg
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { Find the $menuentry_id_option for the menu entry for the kernel you want to use: $ grep gnulinux /boot/grub/grub.cfg
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {
menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { Comment out your current default grub in /etc/default/grub and replace it with the sub-menu's $menuentry_id_option from step one, and the selected kernel's $menuentry_id_option from step two separated by > . In my case the modified GRUB_DEFAULT is: #GRUB_DEFAULT=0
GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc" Update grub to make the changes. For Debian this is done like so: $ sudo update-grub Done. Now when you boot, the advanced menu should have an asterisk and you should boot into the selected kernel. You can confirm this with uname . $ uname -a
Linux NAME 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.0-0 (2018-09-13) x86_64 GNU/Linux Changing this back to the most recent kernel is as simple as commenting out the new line and uncommenting #GRUB_DEFAULT=0 : GRUB_DEFAULT=0
#GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc" then rerunning update-grub . Specifying IDs for all the entries from the top level menu is mandatory. The format for setting the default boot entry can be found in the documentation . | {
"source": [
"https://unix.stackexchange.com/questions/198003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111041/"
]
} |
198,045 | I have a script which requires a directory as one argument.
I want to support two forms: one is like a/b/c (no slash at the end) and the other is like a/b/c/ (with a slash at the end). My question: given either form, how can I keep the first form unchanged and strip the trailing slash from the second form to convert it to the first? | dir=${1%/} will take the script's first parameter and remove a trailing slash if there is one. A small usage sketch follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/198045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87568/"
]
} |
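(A small hedged demonstration of the ${1%/} answer above; the script name is invented.)
$ cat normalize.sh
#!/bin/sh
dir=${1%/}    # strip a single trailing slash, if present
printf 'using: %s\n' "$dir"
$ sh normalize.sh a/b/c
using: a/b/c
$ sh normalize.sh a/b/c/
using: a/b/c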
198,065 | How can I skip the first 6 lines/rows in a text file (input.txt) and process the rest with awk? The format of my awk script (program.awk) is: BEGIN {
}
{
process here
}
END {
} My text file is like this: 0
3
5
0.1 4.3
2.0 1.5
1.5 3.0
0.3 3.3
1.5 2.1
.
.
. I want to process the file starting from: 0.3 3.3
1.5 2.1
.
.
. | Use either of the two patterns: NR>6 { this_code_is_active } or this: NR<=6 { next }
{ this_code_is_active } Use FNR instead of NR if you have many files as arguments to awk and want to skip 6 lines in every file. | {
"source": [
"https://unix.stackexchange.com/questions/198065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111564/"
]
} |
198,128 | Consider this line: ${libdir}/bin/licenseTool check "${SERIAL}" "${VERSION}" "${PRODUCT}" ${libdir} | grep '^200' >/dev/null What's the point of looking for the pattern in the output if the result of that is thrown away? And, if a line like that appears as the last thing in a bash script, is its exit value returned to the script's caller, or ignored? (I'm speculating on whether we can assume this is done for side effects only or returns something to the caller somehow.) | Your suspicion is correct; the exit status of the last command of the script will be passed to the calling environment. So the answer is that this script will return an exit status of 0 if grep matched something in the data, an exit status of 1 if there was no match, and an exit status of 2 if some error occurred. A caller-side sketch follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/198128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106567/"
]
} |
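(A hedged sketch of how a caller sees that exit status; check.sh is an invented stand-in for a script whose last line is the grep shown above.)
if ./check.sh; then
    echo "matched a 200 line"        # grep exited 0
else
    echo "no match or error ($?)"    # 1 = no match, 2 = grep error
fi
Using grep -q '^200' instead of redirecting to /dev/null would behave the same way for this purpose.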
198,138 | I can do diff filea fileb to see the difference between files. I can also do head -1 filea to see the first line of filea or fileb. How can I combine these commands to show the difference between the first line of filea and the first line of fileb? | If your shell supports process substitution , try: diff <(head -n 1 filea) <(head -n 1 fileb) | {
"source": [
"https://unix.stackexchange.com/questions/198138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101831/"
]
} |
198,178 | I often saw the words "kernel ring buffer", "user level", "log level" and some other words appear together. e.g. /var/log/dmesg Contains kernel ring buffer information. /var/log/kern.log Contains only the kernel's messages of any loglevel /var/log/user.log Contains information about all user level logs Are they all about logs? How are they related and different? By "level", I would imagine a hierarchy of multiple levels? Is "user level" related to "user space"? Are they related to runlevel or protection ring in some way? | Yes, all of this has to do with logging. No, none of it has to do with runlevel or "protection ring". The kernel keeps its logs in a ring buffer. The main reason for this is so that the logs from the system startup get saved until the syslog daemon gets a chance to start up and collect them. Otherwise there would be no record of any logs prior to the startup of the syslog daemon. The contents of that ring buffer can be seen at any time using the dmesg command, and its contents are also saved to /var/log/dmesg just as the syslog daemon is starting up. All logs that do not come from the kernel are sent as they are generated to the syslog daemon so they are not kept in any buffers. The kernel logs are also picked up by the syslog daemon as they are generated but they also continue to be saved (unnecessarily, arguably) to the ring buffer. The log levels can be seen documented in the syslog(3) manpage and are as follows: LOG_EMERG : system is unusable LOG_ALERT : action must be taken immediately LOG_CRIT : critical conditions LOG_ERR : error conditions LOG_WARNING : warning conditions LOG_NOTICE : normal, but significant, condition LOG_INFO : informational message LOG_DEBUG : debug-level message Each level is designed to be less "important" than the previous one. A log file that records logs at one level will also record logs at all of the more important levels too. The difference between /var/log/kern.log and /var/log/mail.log (for example) is not to do with the level but with the facility, or category. The categories are also documented on the manpage. | {
"source": [
"https://unix.stackexchange.com/questions/198178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
198,444 | I would like to execute a script every 30 min after booting into the system. I know you can use cron, but I don't plan to use this feature often therefore I'd like to try it with systemd. So far I have only found the monotonic timers which allows to execute something once (at least I think so). How would the foo.timer and [email protected] look like in case I wanted to execute something every 30 minutes from boot/system start? [email protected] [Unit]
Description=run foo
Wants=foo.timer
[Service]
User=%I
Type=simple
ExecStart=/bin/bash /home/user/script.sh foo.timer [Unit]
Description=run foo
[Timer]
where I am stuck... ??? | You need to create two files: one for the service and one for the timer, with the same name. Example: /etc/systemd/system/test.service [Unit]
Description=test job
[Service]
Type=oneshot
ExecStart=/bin/bash /tmp/1.sh /etc/systemd/system/test.timer [Unit]
Description=test
[Timer]
OnUnitActiveSec=10s
OnBootSec=10s
[Install]
WantedBy=timers.target After that, reload systemd with the command systemctl daemon-reload and start your timer with systemctl start test.timer , or enable it by default ( systemctl enable test.timer ). Test content of 1.sh : #!/bin/bash
echo `date` >> /tmp/2 The command to check all available timers is: systemctl list-timers --all A 30-minute variant of this timer is sketched after this entry. More detailed info is on the project page and there are examples on the ArchLinux page | {
"source": [
"https://unix.stackexchange.com/questions/198444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111041/"
]
} |
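(For the 30-minute interval the question above asks about, here is a hedged variant of the same timer; the unit name foo is made up and should match a foo.service.)
# /etc/systemd/system/foo.timer
[Unit]
Description=run foo every 30 minutes
[Timer]
OnBootSec=10min
OnUnitActiveSec=30min
[Install]
WantedBy=timers.target
Then run systemctl daemon-reload and systemctl start foo.timer (or enable it if you also want it started at boot).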
198,590 | What is a “bind mount”? How do I make one? What is it good for? I've been told to use a bind mount for something, but I don't understand what it is or how to use it. | What is a bind mount? A bind mount is an alternate view of a directory tree. Classically, mounting creates a view of a storage device as a directory tree. A bind mount instead takes an existing directory tree and replicates it under a different point. The directories and files in the bind mount are the same as the original. Any modification on one side is immediately reflected on the other side, since the two views show the same data. For example, after issuing the Linux command- mount --bind /some/where /else/where the directories /some/where and /else/where have the same content, which is the content of /some/where . (If /else/where was not empty, its previous content is now hidden.) Unlike a hard link or symbolic link, a bind mount doesn't affect what is stored on the filesystem. It's a property of the live system. How do I create a bind mount? bindfs The bindfs filesystem is a FUSE filesystem which creates a view of a directory tree. For example, the command bindfs /some/where /else/where makes /else/where a mount point under which the contents of /some/where are visible. Since bindfs is a separate filesystem, the files /some/where/foo and /else/where/foo appear as different files to applications (the bindfs filesystem has its own st_dev value). Any change on one side is “magically” reflected on the other side, but the fact that the files are the same is only apparent when one knows how bindfs operates. Bindfs has no knowledge of mount points, so if there is a mount point under /some/where , it appears as just another directory under /else/where . Mounting or unmounting a filesystem underneath /some/where appears under /else/where as a change of the corresponding directory. Bindfs can alter some of the file metadata: it can show fake permissions and ownership for files. See the manual for details, and see below for examples. A bindfs filesystem can be mounted as a non-root user, you only need the privilege to mount FUSE filesystems. Depending on your distribution, this may require being in the fuse group or be allowed to all users. To unmount a FUSE filesystem, use fusermount -u instead of umount , e.g. fusermount -u /else/where nullfs FreeBSD provides the nullfs filesystem which creates an alternate view of a filesystem. The following two commands are equivalent: mount -t nullfs /some/where /else/where
mount_nullfs /some/where /else/where After issuing either command, /else/where becomes a mount point at which the contents of /some/where are visible. Since nullfs is a separate filesystem, the files /some/where/foo and /else/where/foo appear as different files to applications (the nullfs filesystem has its own st_dev value). Any change on one side is “magically” reflected on the other side, but the fact that the files are the same is only apparent when one knows how nullfs operates. Unlike the FUSE bindfs, which acts at the level of the directory tree, FreeBSD's nullfs acts deeper in the kernel, so mount points under /else/where are not visible: only the tree that is part of the same mount point as /some/where is reflected under /else/where . The nullfs filesystem may be usable under other BSD variants (OS X, OpenBSD, NetBSD) but it is not compiled as part of the default system. Linux bind mount Under Linux, bind mounts are available as a kernel feature. You can create one with the mount command, by passing either the --bind command line option or the bind mount option. The following two commands are equivalent: mount --bind /some/where /else/where
mount -o bind /some/where /else/where Here, the “device” /some/where is not a disk partition like in the case of an on-disk filesystem, but an existing directory. The mount point /else/where must be an existing directory as usual. Note that no filesystem type is specified either way: making a bind mount doesn't involve a filesystem driver, it copies the kernel data structures from the original mount. mount --bind also support mounting a non-directory onto a non-directory: /some/where can be a regular file (in which case /else/where needs to be a regular file too). A Linux bind mount is mostly indistinguishable from the original. The command df -T /else/where shows the same device and the same filesystem type as df -T /some/where . The files /some/where/foo and /else/where/foo are indistinguishable, as if they were hard links. It is possible to unmount /some/where , in which case /else/where remains mounted. With older kernels (I don't know exactly when, I think until some 3.x), bind mounts were truly indistinguishable from the original. Recent kernels do track bind mounts and expose the information through <code/proc/ PID /mountinfo, which allows findmnt to indicate bind mount as such . You can put bind mount entries in /etc/fstab . Just include bind (or rbind etc.) in the options, together with any other options you want. The “device” is the existing tree. The filesystem column can contain none or bind (it's ignored, but using a filesystem name would be confusing). For example: /some/where /readonly/view none bind,ro If there are mount points under /some/where , their contents are not visible under /else/where . Instead of bind , you can use rbind , also replicate mount points underneath /some/where . For example, if /some/where/mnt is a mount point then mount --rbind /some/where /else/where is equivalent to mount --bind /some/where /else/where
mount --bind /some/where/mnt /else/where/mnt In addition, Linux allows mounts to be declared as shared , slave , private or unbindable . This affects whether that mount operation is reflected under a bind mount that replicates the mount point. For more details, see the kernel documentation . Linux also provides a way to move mounts: where --bind copies, --move moves a mount point. It is possible to have different mount options in two bind-mounted directories. There is a quirk, however: making the bind mount and setting the mount options cannot be done atomically, they have to be two successive operations. (Older kernels did not allow this.) For example, the following commands create a read-only view, but there is a small window of time during which /else/where is read-write: mount --bind /some/where /else/where
mount -o remount,ro,bind /else/where I can't get bind mounts to work! If your system doesn't support FUSE, a classical trick to achieve the same effect is to run an NFS server, make it export the files you want to expose (allowing access to localhost ) and mount them on the same machine. This has a significant overhead in terms of memory and performance, so bind mounts have a definite advantage where available (which is on most Unix variants thanks to FUSE). Use cases Read-only view It can be useful to create a read-only view of a filesystem, either for security reasons or just as a layer of safety to ensure that you won't accidentally modify it. With bindfs: bindfs -r /some/where /mnt/readonly With Linux, the simple way: mount --bind /some/where /mnt/readonly
mount -o remount,ro,bind /mnt/readonly This leaves a short interval of time during which /mnt/readonly is read-write. If this is a security concern, first create the bind mount in a directory that only root can access, make it read-only, then move it to a public mount point. In the snippet below, note that it's important that /root/private (the directory above the mount point) is private; the original permissions on /root/private/mnt are irrelevant since they are hidden behind the mount point. mkdir -p /root/private/mnt
chmod 700 /root/private
mount --bind /some/where /root/private/mnt
mount -o remount,ro,bind /root/private/mnt
mount --move /root/private/mnt /mnt/readonly Remapping users and groups Filesystems record users and groups by their numerical ID. Sometimes you end up with multiple systems which assign different user IDs to the same person. This is not a problem with network access, but it makes user IDs meaningless when you carry data from one system to another on a disk. Suppose that you have a disk created with a multi-user filesystem (e.g. ext4, btrfs, zfs, UFS, …) on a system where Alice has user ID 1000 and Bob has user ID 1001, and you want to make that disk accessible on a system where Alice has user ID 1001 and Bob has user ID 1000. If you mount the disk directly, Alice's files will appear as owned by Bob (because the user ID is 1001) and Bob's files will appear as owned by Alice (because the user ID is 1000). You can use bindfs to remap user IDs. First mount the disk partition in a private directory, where only root can access it. Then create a bindfs view in a public area, with user ID and group ID remapping that swaps Alice's and Bob's user IDs and group IDs. mkdir -p /root/private/alice_disk /media/alice_disk
chmod 700 /root/private
mount /dev/sdb1 /root/private/alice_disk
bindfs --map=1000/1001:1001/1000:@1000/1001:@1001/1000 /root/private/alice_disk /media/alice_disk See How does one permissibly access files on non-booted system's user's home folder? and mount --bind other user as myself another examples. Mounting in a jail or container A chroot jail or container runs a process in a subtree of the system's directory tree. This can be useful to run a program with restricted access, e.g. run a network server with access to only its own files and the files that it serves, but not to other data stored on the same computer). A limitation of chroot is that the program is confined to one subtree: it can't access independent subtrees. Bind mounts allow grafting other subtrees onto that main tree. This makes them fundamental to most practical usage of containers under Linux. For example, suppose that a machine runs a service /usr/sbin/somethingd which should only have access to data under /var/lib/something . The smallest directory tree that contains both of these files is the root. How can the service be confined? One possibility is to make hard links to all the files that the service needs (at least /usr/sbin/somethingd and several shared libraries) under /var/lib/something . But this is cumbersome (the hard links need to be updated whenever a file is upgraded), and doesn't work if /var/lib/something and /usr are on different filesystems. A better solution is to create an ad hoc root and populate it with using mounts: mkdir /run/something
cd /run/something
mkdir -p etc/something lib usr/lib usr/sbin var/lib/something
mount --bind /etc/something etc/something
mount --bind /lib lib
mount --bind /usr/lib usr/lib
mount --bind /usr/sbin usr/sbin
mount --bind /var/lib/something var/lib/something
mount -o remount,ro,bind etc/something
mount -o remount,ro,bind lib
mount -o remount,ro,bind usr/lib
mount -o remount,ro,bind usr/sbin
chroot . /usr/sbin/somethingd & Linux's mount namespaces generalize chroots. Bind mounts are how namespaces can be populated in flexible ways. See Making a process read a different file for the same filename for an example. Running a different distribution Another use of chroots is to install a different distribution in a directory and run programs from it, even when they require files at hard-coded paths that are not present or have different content on the base system. This can be useful, for example, to install a 32-bit distribution on a 64-bit system that doesn't support mixed packages, to install older releases of a distribution or other distributions to test compatibility, to install a newer release to test the latest features while maintaining a stable base system, etc. See How do I run 32-bit programs on a 64-bit Debian/Ubuntu? for an example on Debian/Ubuntu. Suppose that you have an installation of your distribution's latest packages under the directory /f/unstable , where you run programs by switching to that directory with chroot /f/unstable . To make home directories available from this installations, bind mount them into the chroot: mount --bind /home /f/unstable/home The program schroot does this automatically. Accessing files hidden behind a mount point When you mount a filesystem on a directory, this hides what is behind the directory. The files in that directory become inaccessible until the directory is unmounted. Because BSD nullfs and Linux bind mounts operate at a lower level than the mount infrastructure, a nullfs mount or a bind mount of a filesystem exposes directories that were hidden behind submounts in the original. For example, suppose that you have a tmpfs filesystem mounted at /tmp . If there were files under /tmp when the tmpfs filesystem was created, these files may still remain, effectively inaccessible but taking up disk space. Run mount --bind / /mnt (Linux) or mount -t nullfs / /mnt (FreeBSD) to create a view of the root filesystem at /mnt . The directory /mnt/tmp is the one from the root filesystem. NFS exports at different paths Some NFS servers (such as the Linux kernel NFS server before NFSv4) always advertise the actual directory location when they export a directory. That is, when a client requests server:/requested/location , the server serves the tree at the location /requested/location . It is sometimes desirable to allow clients to request /request/location but actually serve files under /actual/location . If your NFS server doesn't support serving an alternate location, you can create a bind mount for the expected request, e.g. /requested/location *.localdomain(rw,async) in /etc/exports and the following in /etc/fstab : /actual/location /requested/location bind bind A substitute for symbolic links Sometimes you'd like to make symbolic link to make a file /some/where/is/my/file appear under /else/where , but the application that uses file expands symbolic links and rejects /some/where/is/my/file . A bind mount can work around this: bind-mount /some/where/is/my to /else/where/is/my , and then realpath will report /else/where/is/my/file to be under /else/where , not under /some/where . Side effects of bind mounts Recursive directory traversals If you use bind mounts, you need to take care of applications that traverse the filesystem tree recursively, such as backups and indexing (e.g. to build a locate database). 
Usually, bind mounts should be excluded from recursive directory traversals, so that each directory tree is only traversed once, at the original location. With bindfs and nullfs, configure the traversal tool to ignore these filesystem types, if possible. Linux bind mounts cannot be recognized as such: the new location is equivalent to the original. With Linux bind mounts, or with tools that can only exclude paths and not filesystem types, you need to exclude the mount points for the bind mounts. Traversals that stop at filesystem boundaries (e.g. find -xdev , rsync -x , du -x , …) will automatically stop when they encounter a bindfs or nullfs mount point, because that mount point is a different filesystem. With Linux bind mounts, the situation is a bit more complicated: there is a filesystem boundary only if the bind mount is grafting a different filesystem, not if it is grafting another part of the same filesystem. Going beyond bind mounts Bind mounts provide a view of a directory tree at a different location. They expose the same files, possibly with different mount options and (with bindfs) different ownership and permissions. Filesystems that present an altered view of a directory tree are called overlay filesystems or stackable filesystems . There are many other overlay filesystems that perform more advanced transformations. Here are a few common ones. If your desired use case is not covered here, check the repository of FUSE filesystems . loggedfs — log all filesystem access for debugging or monitoring purposes ( configuration file syntax , Is it possible to find out what program or script created a given file? , List the files accessed by a program ) Filter visible files clamfs — run files through a virus scanner when they are read filterfs — hide parts of a filesystem rofs — a read-only view. Similar to bindfs -r , just a little more lightweight. Union mounts — present multiple filesystems (called branches ) under a single directory: if tree1 contains foo and tree2 contains bar then their union view contains both foo and bar . New files are written to a specific branch, or to a branch chosen according to more complex rules. There are several implementations of this concept, including: aufs — Linux kernel implementation, but rejected upstream many times funionfs — FUSE implementation mhddfs — FUSE, write files to a branch based on free space overlay — Linux kernel implementation, merged upstream in Linux v3.18 unionfs-fuse — FUSE, with caching and copy-on-write features Modify file names and metadata ciopfs — case-insensitive filenames (can be useful to mount Windows filesystems) convmvfs — convert filenames between character sets ( example ) posixovl — store Unix filenames and other metadata (permissions, ownership, …) on more restricted filesystems such as VFAT ( example ) View altered file contents avfs — for each archive file, present a directory with the content of the archive ( example , more examples ). There are also many FUSE filesystems that expose specific archives as directories . fuseflt — run files through a pipeline when reading them, e.g. 
to recode text files or media files ( example ) lzopfs — transparent decompression of compressed files mp3fs — transcode FLAC files to MP3 when they are read ( example ) scriptfs — execute scripts to serve content (a sort of local CGI) ( example ) Modify the way content is stored chironfs — replicate files onto multiple underlying storage ( RAID-1 at the directory tree level ) copyfs — keep copies of all versions of the files encfs — encrypt files pcachefs — on-disk cache layer for slow remote filesystems simplecowfs — store changes via the provided view in memory, leaving the original files intact wayback — keep copies of all versions of the files | {
"source": [
"https://unix.stackexchange.com/questions/198590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
198,703 | I'm trying to run yum update and I'm running this error: rpmdb: PANIC: fatal region error detected; run recovery
error: db3 error(-30974) from dbenv->open: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 - (-30974)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed I checked a page like this one but running yum clean all gives the same error. How can I solve this? | This is how I fixed my problem. You can fix this by cleaning out the rpm database. But first, in order to minimize the risk, make sure you create a backup of the files in /var/lib/rpm/ using the cp command: mkdir /root/backups.rpm.mm_dd_yyyy/
cp -avr /var/lib/rpm/ /root/backups.rpm.mm_dd_yyyy/ Then try this to fix the problem: # rm -f /var/lib/rpm/__db*
# db_verify /var/lib/rpm/Packages
# rpm --rebuilddb
# yum clean all And finally verify that the error has gone with the following yum command: # yum update A consolidated sketch of these steps follows this entry.
"source": [
"https://unix.stackexchange.com/questions/198703",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112017/"
]
} |
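(A consolidated hedged sketch of the steps above; the script name and backup path are invented, and db_verify comes from the Berkeley DB utilities, which may be packaged separately.)
#!/bin/sh
# Hypothetical recover-rpmdb.sh: back up, then rebuild the rpm database.
set -e
backup=/root/backups.rpm.$(date +%Y%m%d)
mkdir -p "$backup"
cp -avr /var/lib/rpm/ "$backup"/
rm -f /var/lib/rpm/__db*
db_verify /var/lib/rpm/Packages
rpm --rebuilddb
yum clean all
yum update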
198,756 | rsync -avP /home/user/.profile hpux3:/home/user/.profile
bash: rsync: command not found If I did ssh to hpux3 machine rsync
version 3.1.1 protocol version 31
Copyright (C) 1996-2014 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
output truncated I have set PATH in $HOME/.profile and $HOME/.bashrc . Should I set it in the /etc/profile file? | Your .profile is only read when you log in interactively. When rsync connects to another machine to execute a command, /etc/profile and ~/.profile are not read. If your login shell is bash, then ~/.bashrc may be read (this is a quirk of bash — ~/.bashrc is read by non-login interactive shells, and in some circumstances by login non-interactive shells). This doesn't apply to all versions of bash though. The easiest way to make rsync work is probably to pass the --rsync-path option, e.g. rsync --rsync-path=/home/elbarna/bin/rsync -avP /home/user/.profile hpux3:/home/user/.profile If you log in over SSH with key-based authentication, you can set the PATH environment variable via your ~/.ssh/authorized_keys . See sh startup files over ssh for explanations of how to arrange to load .profile when logging in over SSH with a key. | {
"source": [
"https://unix.stackexchange.com/questions/198756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
198,787 | If I have an array with 5 elements, for example: [a][b][c][d][e] Using echo ${myarray[4]} I can see what it holds. But what if I didn't know the number of elements in a given array? Is there a way of reading the last element of an unknown length array? i.e. The first element reading from the right to the left for any array? I would like to know how to do this in bash. | As of bash 4.2 , you can just use a negative index ${myarray[-1]} to get the last element. You can do the same thing for the second-last, and so on; in Bash: If the subscript used to reference an element of an indexed array
evaluates to a number less than zero, it is interpreted as relative to
one greater than the maximum index of the array, so negative indices
count back from the end of the array, and an index of -1 refers to the
last element. The same also works for assignment. When it says "expression" it really means an arithmetic expression; you can write any arithmetic expression there to compute the index, including one that computes it from the length of the array ${#myarray[@]} explicitly, like ${myarray[${#myarray[@]} - 1]} for earlier versions. A short demonstration of both forms follows this entry. | {
"source": [
"https://unix.stackexchange.com/questions/198787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
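(A quick hedged demonstration of both forms from the answer above.)
$ myarray=(a b c d e)
$ echo "${myarray[-1]}"                     # bash 4.2 and later
e
$ echo "${myarray[${#myarray[@]} - 1]}"     # also works in older bash
e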
198,794 | When I open a terminal window with the GNOME Terminal emulator in the desktop GUI the shell TERM environment variable defaults to the value xterm . If I use CTL + ALT + F1 to switch to a console TTY window and echo $TERM the value is set to linux . My motivation for asking is that inside my ~/.bashrc file a variable is used to determine if a color shell is provided or just good old fashioned monochrome. # set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color) color_prompt=yes;;
esac In both the console shell and the Gnome Terminal emulator shell if I type export TERM=xterm-color
source /.bashrc both shells change to color mode (something I'd like to have happen always in both). Where do the default TERM values get set please and where is the best place to change their defaults, if at all possible? There appears to be nothing in the terminal emulator GUI to select or set the default TERM value. I did consider just adding the line export TERM=xterm-color to the top of my ~/.bashrc file but my gut instinct tells this is not the best solution and my Google searches haven't yet led me to a good answer. I'm running Ubuntu 15.04 Desktop Edition (Debian Based). | In lots of places, depending On virtual terminals and real terminals, the TERM environment variable is set by the program that chains to login , and is inherited all of the way along to the interactive shell that executes once one has logged on. Where, precisely, this happens varies from system to system, and according to the kind of terminal. real terminals Real, serial, terminals can vary in type, according to what's at the other end of the wire. So conventionally the getty program is invoked with an argument that specifies the terminal type, or is passed the TERM program from a service manager's service configuration data. On van Smoorenburg init systems, one can see this in /etc/inittab entries, which will read something along the lines of S0:3:respawn:/sbin/agetty ttyS0 9600 vt100-nav The last argument to agetty in that line, vt100-nav , is the terminal type set for /dev/ttyS0 . So /etc/inittab is where to change the terminal type for real terminals on such systems. On systemd systems, one used to be able to see this in the /usr/lib/systemd/system/[email protected] unit file ( /lib/systemd/system/[email protected] on un-merged systems), which used to read Environment=TERM=vt100 setting the TERM variable in the environment passed to agetty . On the BSDs, init takes the terminal type from the third field of each terminal's entry in the /etc/ttys database, and sets TERM from that in the environment that it executes getty with. So /etc/ttys is where one changes the terminal type for real terminals on the BSDs. systemd's variability The [email protected] service unit file, or drop-in files that apply thereto, is where to change the terminal type for real terminals on systemd systems. Note that such a change applies to all terminal login services that employ this service unit template. (To change it for only individual terminals, one has to manually instantiate the template, or add drop-ins that only apply to instantiations.) systemd has had at least four mechanisms during its lifetime for picking up the value of the TERM environment variable. At the time of first writing this answer, as can be seen, there was an Environment=TERM= something line in the template service unit files. At other times, the types linux and vt102 were hard-wired into the getty and serial-getty service unit files respectively. More recently, the environment variable has been inherited from process #1, which has set it in various ways. As of 2020, the way that systemd decides what terminal type to specify in a service's TERM environment variable is quite complex, and not documented at all. The way to change it remains a drop-in configuration file with Environment=TERM= something . But where the default value originates from is quite variable. 
Subject to some fairly complex to explain rules that involve the TTYPath= settings of individual service units, it can be one of three values : a hardwired linux , a hardwired vt220 (no longer vt102 ), or the value of the TERM environment variable that process #1 inherited, usually from the kernel/bootstrap loader. (Ironically, the getttyent() mechanism still exists in the GNU C library, and systemd could have re-used the /etc/ttys mechanism.) kernel virtual terminals Kernel virtual terminals, as you have noted, have a fixed type. Unlike NetBSD, which can vary the kernel virtual terminal type on the fly, Linux and the other BSDs have a single fixed terminal type implemented in the kernel's built-in terminal emulation program. On Linux, that type matches linux from the terminfo database. (FreeBSD's kernel terminal emulation since version 9 has been teken . Prior to version 9 it was cons25 OpenBSD's is pccon .) On systems using mingetty or vc-get-tty (from the nosh package) the program "knows" that it can only be talking to a virtual terminal, and they hardwire the "known" virtual terminal types appropriate to the operating system that the program was compiled for. On systemd systems, one used to be able to see this in the /usr/lib/systemd/system/[email protected] unit file ( /lib/systemd/system/[email protected] on un-merged systems), which read Environment=TERM=linux setting the TERM variable in the environment passed to agetty . For kernel virtual terminals, one does not change the terminal type. The terminal emulator program in the kernel doesn't change, after all. It is incorrect to change the type. In particular, this will screw up cursor/editing key CSI sequence recognition. The linux CSI sequences sent by the Linux kernel terminal emulator are different to the xterm or vt100 CSI sequences sent by GUI terminal emulator programs in DEC VT mode. (In fact, they are highly idiosyncratic and non-standard, and different both to all real terminals that I know of, and to pretty much all other software terminal emulators apart from the one built into Linux.) GUI terminal emulators Your GUI terminal emulator is one of many programs, from the SSH dæmon to screen , that uses pseudo-terminals. What the terminal type is depends from what terminal emulator program is running on the master side of the pseudo-terminal, and how it is configured. Most GUI terminal emulators will start the program on the slave side with a TERM variable whose value matches their terminal emulation on the master side. Programs like the SSH server will attempt to "pass through" the terminal type that is on the client end of the connection. Usually there is some menu or configuration option to choose amongst terminal emulations. The gripping hand The right way to detect colour capability is not to hardwire a list of terminal types in your script. There are an awful lot of terminal types that support colour. The right way is to look at what termcap/terminfo says about your terminal type. colour=0
if tput Co > /dev/null 2>&1
then
test "`tput Co`" -gt 2 && colour=1
elif tput colors > /dev/null 2>&1
then
test "`tput colors`" -gt 2 && colour=1
fi Further reading Jonathan de Boyne Pollard (2018). TERM . nosh Guide . Softwares. | {
"source": [
"https://unix.stackexchange.com/questions/198794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112077/"
]
} |
198,849 | Sometimes, I'd like to know the name of a glyph. For example, if I see − , I may want to know if it's a hyphen - , an en-dash – , an em-dash — , or a minus symbol − . Is there a way that I can copy-paste this into a terminal to see what it is? I am unsure if my system knows the common names to these glyphs, but there is certainly some (partial) information available, such as in /usr/share/X11/locale/en_US.UTF-8/Compose . For example, <Multi_key> <exclam> <question> : "‽" U203D # INTERROBANG Another example glyph: . | Try the unicode utility: $ unicode ‽
U+203D INTERROBANG
UTF-8: e2 80 bd UTF-16BE: 203d Decimal: ‽
‽
Category: Po (Punctuation, Other)
Bidi: ON (Other Neutrals) Or the uconv utility from the ICU package: $ printf %s ‽ | uconv -x any-name
\N{INTERROBANG} You can also get information via the recode utility: $ printf %s ‽ | recode ..dump
UCS2 Mne Description
203D point exclarrogatif Or with Perl: $ printf %s ‽ | perl -CLS -Mcharnames=:full -lne 'print charnames::viacode(ord) for /./g'
INTERROBANG Note that those give information on the characters that make-up that glyph, not on the glyph as a whole. For instance, for é (e with combining acute accent): $ printf é | uconv -x any-name
\N{LATIN SMALL LETTER E}\N{COMBINING ACUTE ACCENT} Different from the standalone é character: $ printf é | uconv -x any-name
\N{LATIN SMALL LETTER E WITH ACUTE} You can ask uconv to recombine those (for those that have a combined form): $ printf 'e\u0301b\u0301' | uconv -x '::nfc;::name;'
\N{LATIN SMALL LETTER E WITH ACUTE}\N{LATIN SMALL LETTER B}\N{COMBINING ACUTE ACCENT} (é has a combined form, but not b́). | {
"source": [
"https://unix.stackexchange.com/questions/198849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
198,925 | [username@notebook ~]$ cat foo.sh
#!/bin/bash
echo "$0"
[username@notebook ~]$ ./foo.sh
./foo.sh
[username@notebook ~]$ Question : How can I output the "foo.sh"? No matter how was it executed. | Use basename : #!/bin/bash
basename -- "$0" If you want to assign it to a variable, you'd do: my_name=$(basename -- "$0") | {
"source": [
"https://unix.stackexchange.com/questions/198925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105588/"
]
} |
199,203 | In Vim, if I paste this script: #!/bin/sh
VAR=1
while ((VAR < 10))
do
echo "VAR1 is now $VAR"
((VAR = VAR +2))
done
echo "finish" I get these strange results: #!/bin/sh
#VAR=1
#while ((VAR < 10))
# do
# echo "VAR1 is now $VAR"
# ((VAR = VAR +2))
# done
# echo "finish"
Hash signs (#) and tabs have appeared. Why? | There are two reasons: automatic insertion of comments, and automatic indenting. For pasting in vim while auto-indent is enabled, you must change to paste mode by typing: :set paste Then you can change to insert mode and paste your code. After pasting is done, type: :set nopaste to turn off paste mode. Since this is a common and frequent action, vim offers a toggle for paste mode: set pastetoggle=<F2> You can change F2 to whatever key you want, and now you can turn pasting on and off easily. To turn off auto-insertion of comments, you can add these lines to your vimrc : augroup auto_comment
au!
au FileType * setlocal formatoptions-=c formatoptions-=r formatoptions-=o
augroup END vim also provides a pasting register for you to paste text from the system clipboard. You can use "*p or "+p depending on your system. On a system without X11, such as OSX or Windows, you have to use the * register. On an X11 system, like Linux, you can use both. Further reading Accessing the system clipboard How can I paste something to the VIM from the clipboard fakeclip | {
"source": [
"https://unix.stackexchange.com/questions/199203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
199,615 | I often use bc utility for converting hex to decimal and vice versa. However, it is always bit trial and error how ibase and obase should be configured. For example here I want to convert hex value C0 to decimal: $ echo "ibase=F;obase=A;C0" | bc
180
$ echo "ibase=F;obase=10;C0" | bc
C0
$ echo "ibase=16;obase=A;C0" | bc
192 What is the logic here? obase ( A in my third example) needs to be in the same base as the value which is converted( C0 in my examples) and ibase ( 16 in my third example) has to be in the base where I am converting to? | What you actually want to say is: $ echo "ibase=16; C0" | bc
192 for hex-to-decimal, and: $ echo "obase=16; 192" | bc
C0 for decimal-to-hex. You don't need to give both ibase and obase for any conversion involving decimal numbers, since these settings default to 10. You do need to give both for conversions such as binary-to-hex. In that case, I find it easiest to make sense of things if you give obase first: $ echo "obase=16; ibase=2; 11000000" | bc
C0 If you give ibase first instead, it changes the interpretation of the following obase setting, so that the command has to be: $ echo "ibase=2; obase=10000; 11000000" | bc
C0 This is because in this order, the obase value is interpreted as a binary number, so you need to give 10000₂=16 to get output in hex. That's clumsy. Now let’s work out why your three examples behave as they do. echo "ibase=F;obase=A;C0" | bc 180 That sets the input base to 15 and the output base to 10, since a single-digit value is interpreted in hex, according to POSIX . This asks bc to tell you what C0₁₅ is in base A₁₅=10, and it is correctly answering 180₁₀, though this is certainly not the question you meant to ask. echo "ibase=F;obase=10;C0" | bc C0 This is a null conversion in base 15. Why? First, because the single F digit is interpreted in hex, as I pointed out in the previous example. But now that you've set it to base 15, the following output base setting is interpreted that way, and 10₁₅=15, so you have a null conversion from C0₁₅ to C0₁₅. That's right, the output isn't in hex as you were assuming, it's in base 15! You can prove this to yourself by trying to convert F0 instead of C0 . Since there is no F digit in base 15, bc clamps it to E0 , and gives E0 as the output. echo "ibase=16; obase=A; C0" 192 This is the only one of your three examples that likely has any practical use. It is changing the input base to hex first , so that you no longer need to dig into the POSIX spec to understand why A is interpreted as hex, 10 in this case. The only problem with it is that it is redundant to set the output base to A₁₆=10, since that's its default value. | {
"source": [
"https://unix.stackexchange.com/questions/199615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
199,638 | What is the equivalent to sudo apt-get install texlive-full on Fedora system?
I read it is yum install texlive-scheme-full. Am I correct? | Yes. dnf install texlive-scheme-full (or yum install texlive-scheme-full , in older versions) is the way to go. While the installed packages are not fully equivalent the intention is the same. As stated here: https://ask.fedoraproject.org/en/question/44989/how-to-install-latex-for-fedora-19/ there are the following schemes: texlive-scheme-basic : basic scheme (plain and latex)
texlive-scheme-context : ConTeXt scheme
texlive-scheme-full : full scheme (everything)
texlive-scheme-gust : GUST TeX Live scheme
texlive-scheme-medium : medium scheme (small + more packages and languages)
texlive-scheme-minimal : minimal scheme (plain only)
texlive-scheme-small : small scheme (basic + xetex, metapost, a few languages)
texlive-scheme-tetex : teTeX scheme (more than medium, but nowhere near full)
texlive-scheme-xml : XML scheme and various collections (if you want some finer control over what you install): texlive-collection-basic : Essential programs and files
texlive-collection-bibtexextra : BibTeX additional styles
texlive-collection-binextra : TeX auxiliary programs
texlive-collection-context : ConTeXt and packages
texlive-collection-fontsextra : Additional fonts
texlive-collection-fontsrecommended : Recommended fonts
texlive-collection-fontutils : Graphics and font utilities
texlive-collection-formatsextra : Additional formats
texlive-collection-games : Games typesetting
texlive-collection-genericextra : Generic additional packages
texlive-collection-genericrecommended : Generic recommended packages
texlive-collection-htmlxml : HTML/SGML/XML support
texlive-collection-humanities : Humanities packages
texlive-collection-langafrican : African scripts
texlive-collection-langarabic : Arabic
texlive-collection-langcjk : Chinese/Japanese/Korean
texlive-collection-langcyrillic : Cyrillic
texlive-collection-langczechslovak : Czech/Slovak
texlive-collection-langenglish : US and UK English
texlive-collection-langeuropean : Other European languages
texlive-collection-langfrench : French
texlive-collection-langgerman : German
texlive-collection-langgreek : Greek
texlive-collection-langindic : Indic scripts
texlive-collection-langitalian : Italian
texlive-collection-langother : Other languages
texlive-collection-langpolish : Polish
texlive-collection-langportuguese : Portuguese
texlive-collection-langspanish : Spanish
texlive-collection-latex : LaTeX fundamental packages
texlive-collection-latexextra : LaTeX additional packages
texlive-collection-latexrecommended : LaTeX recommended packages
texlive-collection-luatex : LuaTeX packages
texlive-collection-mathextra : Mathematics packages
texlive-collection-metapost : MetaPost and Metafont packages
texlive-collection-music : Music packages
texlive-collection-omega : Omega packages
texlive-collection-pictures : Graphics, pictures, diagrams
texlive-collection-plainextra : Plain TeX packages
texlive-collection-pstricks : PSTricks
texlive-collection-publishers : Publisher styles, theses, etc
texlive-collection-science : Natural and computer sciences
texlive-collection-xetex : XeTeX and packages | {
"source": [
"https://unix.stackexchange.com/questions/199638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108183/"
]
} |
199,686 | I have some confusion regarding fork and clone. I have seen that: fork is for processes and clone is for threads fork just calls clone, clone is used for all processes and threads Are either of these accurate? What is the distinction between these 2 syscalls with a 2.6 Linux kernel? | fork() was the original UNIX system call. It can only be used to create new processes, not threads. Also, it is portable. In Linux, clone() is a new, versatile system call which can be used to create a new thread of execution. Depending on the options passed, the new thread of execution can adhere to the semantics of a UNIX process, a POSIX thread, something in between, or something completely different (like a different container). You can specify all sorts of options dictating whether memory, file descriptors, various namespaces, signal handlers, and so on get shared or copied. Since clone() is the superset system call, the implementation of the fork() system call wrapper in glibc actually calls clone() , but this is an implementation detail that programmers don't need to know about. The actual real fork() system call still exists in the Linux kernel for backward compatibility reasons even though it has become redundant, because programs that use very old versions of libc, or another libc besides glibc, might use it. clone() is also used to implement the pthread_create() POSIX function for creating threads. Portable programs should call fork() and pthread_create() , not clone() . | {
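You can see this from user space with strace, assuming it is installed: when bash forks a child to run an external command, the glibc fork() wrapper typically shows up as a clone() call in the trace rather than fork():
strace -f -e trace=clone,fork,vfork,execve bash -c '/bin/true; /bin/true'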
"source": [
"https://unix.stackexchange.com/questions/199686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
199,694 | This is a CentOS 5.7 32-bit VM running inside a VMware 5.5 host.
I see high values of load average with low CPU usage. The VM has 4 vCPUs and the load sometimes reaches 20. When I run vmstat I see high values in the 'r' column. The question is: how do I find which processes are inside the kernel run queue? I've tried whatever combinations of ps I've found on the internet with no luck, things like ps r -A . vmstat output: [ ~]# vmstat 1 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
9 0 8 822516 322880 1593592 0 0 1 65 9 6 1 1 98 0 0
7 0 8 823136 322880 1593584 0 0 0 0 9387 97411 8 9 84 0 0
53 0 8 823508 322880 1593588 0 0 0 236 8332 108913 9 12 79 0 0
64 0 8 818424 322888 1597548 0 0 0 116 9027 140988 10 11 79 0 0
69 0 8 820284 322888 1597548 0 0 0 0 9095 128715 8 10 83 0 0
64 0 8 820284 322888 1597692 0 0 0 0 8701 119305 9 11 80 0 0
3 0 8 819540 322888 1597688 0 0 0 4704 9531 112734 8 8 84 0 0
81 0 8 818052 322888 1599452 0 0 0 224 8324 102409 10 13 77 0 0
8 0 8 816192 322888 1601788 0 0 0 3240 9181 98478 9 11 80 0 0
7 0 8 815076 322888 1601872 0 0 0 0 9250 104422 10 9 81 0 0 mpstat 1 10
06:04:03 PM CPU usr nice sys iowait irq soft steal guest idle
06:04:04 PM all 9.32 0.00 8.82 0.00 0.25 4.03 0.00 0.00 77.58
06:04:05 PM all 9.85 0.00 8.84 0.00 0.25 4.29 0.00 0.00 76.77
06:04:06 PM all 8.29 0.00 5.78 0.00 0.50 4.77 0.00 0.00 80.65
06:04:07 PM all 9.82 0.00 7.81 0.00 0.25 4.28 0.00 0.00 77.83
06:04:08 PM all 8.84 0.00 5.30 0.00 0.25 4.29 0.00 0.00 81.31
06:04:09 PM all 10.05 0.00 9.05 0.00 0.50 4.02 0.00 0.00 76.38
06:04:10 PM all 9.60 0.00 7.32 0.00 0.51 4.04 0.00 0.00 78.54
06:04:11 PM all 8.33 0.00 5.81 0.00 0.25 4.29 0.00 0.00 81.31
06:04:12 PM all 9.57 0.00 7.05 0.00 0.25 4.03 0.00 0.00 79.09
06:04:13 PM all 7.83 0.00 5.05 0.00 0.25 3.79 0.00 0.00 83.08
Average: all 9.15 0.00 7.08 0.00 0.33 4.18 0.00 0.00 79.25 | fork() was the original UNIX system call. It can only be used to create new processes, not threads. Also, it is portable. In Linux, clone() is a new, versatile system call which can be used to create a new thread of execution. Depending on the options passed, the new thread of execution can adhere to the semantics of a UNIX process, a POSIX thread, something in between, or something completely different (like a different container). You can specify all sorts of options dictating whether memory, file descriptors, various namespaces, signal handlers, and so on get shared or copied. Since clone() is the superset system call, the implementation of the fork() system call wrapper in glibc actually calls clone() , but this is an implementation detail that programmers don't need to know about. The actual real fork() system call still exists in the Linux kernel for backward compatibility reasons even though it has become redundant, because programs that use very old versions of libc, or another libc besides glibc, might use it. clone() is also used to implement the pthread_create() POSIX function for creating threads. Portable programs should call fork() and pthread_create() , not clone() . | {
"source": [
"https://unix.stackexchange.com/questions/199694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112633/"
]
} |
199,836 | I've taken a backup of the file where my dconf database is stored ( ~/.config/dconf/user which is a binary file), and now I need to move some keys from the backup to the dconf in use. How can I view the content of the backed up dconf without putting it "in place" and view it with for example dconf-editor ? | To view the content of that file you could rename it - e.g. test - place it under ~/.config/dconf/ and then have dconf read/dump the settings from that file. By default , dconf reads the user-db found in $XDG_CONFIG_HOME/dconf/ : A "user-db" line specifies a user database. These databases are found in $XDG_CONFIG_HOME/dconf/ . The name of the file to open in that
directory is exactly as it is written in the profile. This file is
expected to be in the binary dconf database format. Note that XDG_CONFIG_HOME cannot be set/modified per terminal or session,
because then the writer and reader would be working on different DBs
(the writer is started by DBus and cannot see that variable). As a result, you would need a custom profile that points to that particular db file - e.g. user-db:test and then instruct dconf to dump the data (using the custom profile) via the DCONF_PROFILE environment variable: cd
cp /path_to_backup_dconf/user ~/.config/dconf/test
printf %s\\n "user-db:test" > db_profile
DCONF_PROFILE=~/db_profile dconf dump / > old_settings The result is a file ( old_settings ) containing the settings from your backed up dconf file, e.g.: [org/gnome/desktop/interface]
font-name='DejaVu Sans Oblique 10'
document-font-name='DejaVu Sans Oblique 10'
gtk-im-module='gtk-im-context-simple'
clock-show-seconds=true
icon-theme='HighContrast'
monospace-font-name='DejaVu Sans Mono Oblique 10'
[org/gnome/desktop/input-sources]
sources=@a(ss) []
xkb-options=@as []
[org/gnome/desktop/wm/preferences]
num-workspaces=4
titlebar-font='DejaVu Sans Bold Oblique 10'
....... You could then remove those files: rm -f ~/db_profile ~/.config/dconf/test and load the old settings into the current database: dconf load / < old_settings If you want to dump only specific settings just provide the path: DCONF_PROFILE=~/db_profile dconf dump /org/gnome/desktop/wm/preferences/
[/]
num-workspaces=4
titlebar-font='DejaVu Sans Bold Oblique 10' but note that for each path you should have a different file and when you load it you should specify the path accordingly: dconf load /org/gnome/desktop/wm/preferences/ < old_wm_settings Also note that, due to upstream changes, older dconf databases might contain paths, keys and values that are invalid in newer versions so full compatibility between db-files created by different versions of dconf isn't always guaranteed. In that case, you would have to inspect the resulting old_settings file and manually remove or edit the entries that are invalid before loading it into your current database. | {
"source": [
"https://unix.stackexchange.com/questions/199836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36186/"
]
} |
199,839 | After some issues with Ubuntu, I have decided to go to Mint. Now new issues appear - I have three monitors, two horizontal and one vertical. The problem is that after each reboot, Mint does not remember anything about the horizontal monitor and switches it back to vertical. Any idea how to tell Mint that I do not want to redo my monitor setup every time after a reboot? I am using nVidia and I am making the setup via the nVidia X server settings. Info for my video: $ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Device 041e (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation Device 0fc8 (rev a1) | To view the content of that file you could rename it - e.g. test - place it under ~/.config/dconf/ and then have dconf read/dump the settings from that file. By default , dconf reads the user-db found in $XDG_CONFIG_HOME/dconf/ : A "user-db" line specifies a user database. These databases are found in $XDG_CONFIG_HOME/dconf/ . The name of the file to open in that
directory is exactly as it is written in the profile. This file is
expected to be in the binary dconf database format. Note that XDG_CONFIG_HOME cannot be set/modified per terminal or session,
because then the writer and reader would be working on different DBs
(the writer is started by DBus and cannot see that variable). As a result, you would need a custom profile that points to that particular db file - e.g. user-db:test and then instruct dconf to dump the data (using the custom profile) via the DCONF_PROFILE environment variable: cd
cp /path_to_backup_dconf/user ~/.config/dconf/test
printf %s\\n "user-db:test" > db_profile
DCONF_PROFILE=~/db_profile dconf dump / > old_settings The result is a file ( old_settings ) containing the settings from your backed up dconf file, e.g.: [org/gnome/desktop/interface]
font-name='DejaVu Sans Oblique 10'
document-font-name='DejaVu Sans Oblique 10'
gtk-im-module='gtk-im-context-simple'
clock-show-seconds=true
icon-theme='HighContrast'
monospace-font-name='DejaVu Sans Mono Oblique 10'
[org/gnome/desktop/input-sources]
sources=@a(ss) []
xkb-options=@as []
[org/gnome/desktop/wm/preferences]
num-workspaces=4
titlebar-font='DejaVu Sans Bold Oblique 10'
....... You could then remove those files: rm -f ~/db_profile ~/.config/dconf/test and load the old settings into the current database: dconf load / < old_settings If you want to dump only specific settings just provide the path: DCONF_PROFILE=~/db_profile dconf dump /org/gnome/desktop/wm/preferences/
[/]
num-workspaces=4
titlebar-font='DejaVu Sans Bold Oblique 10' but note that for each path you should have a different file and when you load it you should specify the path accordingly: dconf load /org/gnome/desktop/wm/preferences/ < old_wm_settings Also note that, due to upstream changes, older dconf databases might contain paths, keys and values that are invalid in newer versions so full compatibility between db-files created by different versions of dconf isn't always guaranteed. In that case, you would have to inspect the resulting old_settings file and manually remove or edit the entries that are invalid before loading it into your current database. | {
"source": [
"https://unix.stackexchange.com/questions/199839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112719/"
]
} |
199,840 | How can I log and plot a graph of all available hardware temperatures (CPU, SSD, etc). CPU load over a given time (say a day or a week) in linux? The CPU is i7 haswell if this matters, I have both, an SSD and HDD in this box. | To view the content of that file you could rename it - e.g. test - place it under ~/.config/dconf/ and then have dconf read/dump the settings from that file. By default , dconf reads the user-db found in $XDG_CONFIG_HOME/dconf/ : A "user-db" line specifies a user database. These databases are found in $XDG_CONFIG_HOME/dconf/ . The name of the file to open in that
directory is exactly as it is written in the profile. This file is
expected to be in the binary dconf database format. Note that XDG_CONFIG_HOME cannot be set/modified per terminal or session,
because then the writer and reader would be working on different DBs
(the writer is started by DBus and cannot see that variable). As a result, you would need a custom profile that points to that particular db file - e.g. user-db:test and then instruct dconf to dump the data (using the custom profile) via the DCONF_PROFILE environment variable: cd
cp /path_to_backup_dconf/user ~/.config/dconf/test
printf %s\\n "user-db:test" > db_profile
DCONF_PROFILE=~/db_profile dconf dump / > old_settings The result is a file ( old_settings ) containing the settings from your backed up dconf file, e.g.: [org/gnome/desktop/interface]
font-name='DejaVu Sans Oblique 10'
document-font-name='DejaVu Sans Oblique 10'
gtk-im-module='gtk-im-context-simple'
clock-show-seconds=true
icon-theme='HighContrast'
monospace-font-name='DejaVu Sans Mono Oblique 10'
[org/gnome/desktop/input-sources]
sources=@a(ss) []
xkb-options=@as []
[org/gnome/desktop/wm/preferences]
num-workspaces=4
titlebar-font='DejaVu Sans Bold Oblique 10'
....... You could then remove those files: rm -f ~/db_profile ~/.config/dconf/test and load the old settings into the current database: dconf load / < old_settings If you want to dump only specific settings just provide the path: DCONF_PROFILE=~/db_profile dconf dump /org/gnome/desktop/wm/preferences/
[/]
num-workspaces=4
titlebar-font='DejaVu Sans Bold Oblique 10' but note that for each path you should have a different file and when you load it you should specify the path accordingly: dconf load /org/gnome/desktop/wm/preferences/ < old_wm_settings Also note that, due to upstream changes, older dconf databases might contain paths, keys and values that are invalid in newer versions so full compatibility between db-files created by different versions of dconf isn't always guaranteed. In that case, you would have to inspect the resulting old_settings file and manually remove or edit the entries that are invalid before loading it into your current database. | {
"source": [
"https://unix.stackexchange.com/questions/199840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
199,863 | I am looking for a command to create multiple (thousands of) files containing at least 1KB of random data. For example, Name size
file1.01 2K
file2.02 3K
file3.03 5K
etc. How can I create many files like this? | Since you don't have any other requirements, something like this should work: #! /bin/bash
for n in {1..1000}; do
dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=1 count=$(( RANDOM + 1024 ))
done (this needs bash at least for {1..1000} ). | {
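A variant of the same idea that avoids the byte-at-a-time dd; head -c is assumed to be GNU coreutils, and the size range just carries over the answer's RANDOM + 1024 choice:
#! /bin/bash
for n in {1..1000}; do
    head -c "$(( RANDOM + 1024 ))" /dev/urandom > "file$( printf %03d "$n" ).bin"
done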
"source": [
"https://unix.stackexchange.com/questions/199863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105827/"
]
} |
199,966 | I have docker installed on CentOS 7 and I am running firewallD. From inside my container, going to the host (default 172.17.42.1) With firewall on container# nc -v 172.17.42.1 4243
nc: connect to 172.17.42.1 port 4243 (tcp) failed: No route to host with firewall shutdown container# nc -v 172.17.42.1 4243
Connection to 172.17.42.1 4243 port [tcp/*] succeeded! I've read the docs on firewalld and I don't fully understand them. Is there a way to simply allow everything in a docker container (I guess on the docker0 adapter) unrestricted access to the host? | Maybe better than earlier answer; firewall-cmd --permanent --zone=trusted --change-interface=docker0
firewall-cmd --permanent --zone=trusted --add-port=4243/tcp
firewall-cmd --reload | {
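To confirm the change took effect you can list the trusted zone afterwards (output varies per system):
firewall-cmd --zone=trusted --list-all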
"source": [
"https://unix.stackexchange.com/questions/199966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110922/"
]
} |
200,125 | I have an HP15 r007TX laptop with Debian 8 (Jessie) installed. Whenever I close the lid and then reopen it, the laptop stops working. It gets stuck showing a blank screen. From there nothing happens and I have to hard reboot it. I even changed the setting to do nothing when the laptop lid is closed and I still have the issue. | To disable the Lid Switch: Open the file /etc/systemd/logind.conf as root. Find this: HandleLidSwitch If it's commented, uncomment it and change the value to ignore. The line after editing should be: HandleLidSwitch=ignore Restart the computer and your problem should be gone. Or better, restart the logind service: sudo service systemd-logind restart ( Source ) | {
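A non-interactive way to make the same edit, as a sketch that assumes a HandleLidSwitch line (commented or not) already exists in the file:
sudo sed -i 's/^#\?HandleLidSwitch=.*/HandleLidSwitch=ignore/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind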
"source": [
"https://unix.stackexchange.com/questions/200125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112900/"
]
} |
200,239 | I have ServerAliveInterval and in case of few machines also ClientAliveInterval set to 540 in SSH client/server configuration files (I suppose setting it to more than that would not be a good idea). I work with many SSH sessions which currently freeze after a few minutes. How can I fix it? What I want is to have a session to not freeze at all, so that if I open a session at 8 and don't use it for 4 hours, for example, to still use it again at 12 without having to log-in again. | The changes you've made in /etc/ssh/ssh_config and /etc/ssh/sshd_config are correct but will still not have any effect. To get your configuration working, make these configuration changes on the client: /etc/ssh/ssh_config Host *
ServerAliveInterval 100 ServerAliveInterval The client will send a null packet to the server every 100 seconds to keep the connection alive NULL packet Is sent by the server to the client. The same packet is sent by the client to the server. A TCP NULL packet does not contain any controlling flag like SYN, ACK, FIN etc. because the server does not require a reply from the client. The NULL packet is described here: https://www.rfc-editor.org/rfc/rfc6592 Then configuring the sshd part on the server. /etc/ssh/sshd_config ClientAliveInterval 60
TCPKeepAlive yes
ClientAliveCountMax 10000 ClientAliveInterval The server will wait 60 seconds before sending a null packet to the client to keep the connection alive TCPKeepAlive Is there to ensure that certain firewalls don't drop idle connections. ClientAliveCountMax Server will send alive messages to the client even though it has not received any message back from the client. Finally restart the ssh server service ssh restart or service sshd restart depending on what system you are on. | {
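If you only want this behaviour for particular machines, the client settings can also live in a per-host block in ~/.ssh/config; the host alias and name below are placeholders:
Host myserver
    HostName server.example.com
    ServerAliveInterval 100
    ServerAliveCountMax 3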
"source": [
"https://unix.stackexchange.com/questions/200239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
200,355 | I'm using ack to search for a string. When I run it without a file argument, I get line numbers: $> ack function
themes/README.txt
7:Drupal's sub-theme functionality to ensure easy maintenance and upgrades.
sites/default/default.services.yml
48: # - The dump() function can be used in Twig templates to output information
... But when I try to specify a file, I don't get line numbers. $> ack function themes/README.txt
Drupal's sub-theme functionality to ensure easy maintenance and upgrades. I've done some googling for a switch, but found no results. How do I get ack to show me line numbers on results from a single file? | When you don't provide any file, ack searches all files in the current directory and its subdirectories. If a file contains the matching pattern, ack prints that filename, the line number and the line which matched the pattern. This behaviour does not apply for a single file (see the ack documentation , search for the -H option). Since ack doesn't have a -n option like grep , which prints each matched line with its line number, you have two choices to work around this issue. Force ack to print the filename with -H : ack -H pattern file or pass /dev/null as the second file: ack pattern file /dev/null | {
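With the file from the question, the two workarounds look like this; either one makes ack print the line numbers again:
ack -H function themes/README.txt
ack function themes/README.txt /dev/null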
"source": [
"https://unix.stackexchange.com/questions/200355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/394/"
]
} |
200,381 | In the script below - which prompts the user to confirm that they want to proceed with running a potentially bad script - when the user enters Y at the prompt - it will break out of the case block, only to be sent back into the while loop again. #! /bin/bash
set -e
echo
echo "bad install start"
echo "-----------------------------------------"
while true; do
read -p "this script will probably fail - do you want to run anyway?" yn
case $yn in
[Yy]*)
##### WHAT GOES HERE?? #####
;;
[Nn]*)
exit ;;
*)
echo "answer y or n" ;;
esac
echo "script has broken out of case back into while loop"
done
echo -e "\e[33m Installing bad packagename \e[0m"
apt-get install sdfsdfdfsd
echo "rest of script - will i keep running?" When n is entered, the script exists entirely as desired. I'd like to know how to make it so that when Y is entered the script breaks out of both the case and the while block, but does not exit entirely. Is there something I can put in for the placeholder ("What goes here??") to do that? | In the case where the user entered "y", you can exit both while and case: break [n]
Exit from within a for, while, until, or select loop. If n is
specified, break n levels. n must be ≥ 1. If n is greater than
the number of enclosing loops, all enclosing loops are exited.
The return value is 0 unless n is not greater than or equal to
1. In your case, you want to do break 2 . | {
"source": [
"https://unix.stackexchange.com/questions/200381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
200,582 | I am partitioning eMMC using following commands in the script, parted /dev/mmcblk0 --script mklabel gpt
parted /dev/mmcblk0 --script mkpart primary ext4 32MB 132MB
parted /dev/mmcblk0 --script mkpart primary ext4 233MB 433MB
parted /dev/mmcblk0 --script mkpart primary ext4 433MB 533MB
parted /dev/mmcblk0 --script mkpart primary ext4 533MB 593MB
parted /dev/mmcblk0 --script mkpart primary ext4 593MB 793MB
parted /dev/mmcblk0 --script mkpart primary ext4 793MB 3800MB
parted /dev/mmcblk0 --script align-check min 1 Is this the correct way to create partitions in the script? Is there any better way? After creating the first partition I am getting the following warning: Warning: The resulting partition is not properly aligned for best performance. Do I need to worry about it?
I tried parted /dev/mmcblk0 --script align-check min 1 but I'm not sure that's the solution. Any pointers for that? I am going through this link in the meantime; any other suggestions? Edit :
Just a quick reference for frostschutz reply, MiB = Mebibyte = 1024 KiB
KiB = Kibibyte = 1024 Bytes
MB = Megabyte = 1,000 KB
KB = Kilobyte = 1,000 Bytes | It's correct in principle but you might consider reducing it to a single parted call. parted --script /device \
mklabel gpt \
mkpart primary 1MiB 100MiB \
mkpart primary 100MiB 200MiB \
... Your alignment issue is probably because you use MB instead of MiB . You should not need an actual align-check command when creating partitions on MiB boundaries / on a known device. | {
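Putting both suggestions together for the layout in the question (a sketch; the MiB figures simply reuse your MB numbers, so adjust them as needed):
parted --script /dev/mmcblk0 \
    mklabel gpt \
    mkpart primary ext4 32MiB 132MiB \
    mkpart primary ext4 233MiB 433MiB \
    mkpart primary ext4 433MiB 533MiB \
    mkpart primary ext4 533MiB 593MiB \
    mkpart primary ext4 593MiB 793MiB \
    mkpart primary ext4 793MiB 3800MiB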
"source": [
"https://unix.stackexchange.com/questions/200582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60966/"
]
} |
200,616 | I am looking for a command line or bash script that would add 5 spaces before the beginning of each line in a file. For example: abc after adding the 5 spaces:      abc | With GNU sed: sed -i -e 's/^/     /' <file> will replace the start of each line with 5 spaces. The -i modifies the file in place, -e gives some code for sed to execute. s tells sed to do a substitution, ^ matches the start of the line, then the part between the second two / characters is what will replace the part matched in the beginning, i.e., the start of the line in this example. | {
"source": [
"https://unix.stackexchange.com/questions/200616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
200,637 | Is there some way of saving all the terminal output to a file with a command? I'm not talking about redirection command > file.txt Not the history history > file.txt , I need the full terminal text Not with hotkeys ! Something like terminal_text > file.txt | You can use script . It will basically save everything printed on the terminal in that script session. From man script : script makes a typescript of everything printed on your terminal.
It is useful for students who need a hardcopy record of an
interactive session as proof of an assignment, as the typescript file
can be printed out later with lpr(1). You can start a script session by just typing script in the terminal, all the subsequent commands and their outputs will all be saved in a file named typescript in the current directory. You can save the result to a different file too by just starting script like: script output.txt To logout of the script session (stop saving the contents), just type exit . Here is an example: $ script output.txt
Script started, file is output.txt
$ ls
output.txt testfile.txt foo.txt
$ exit
exit
Script done, file is output.txt Now if I read the file: $ cat output.txt
Script started on Mon 20 Apr 2015 08:00:14 AM BDT
$ ls
output.txt testfile.txt foo.txt
$ exit
exit
Script done on Mon 20 Apr 2015 08:00:21 AM BDT script also has many options e.g. running quietly -q ( --quiet ) without showing/saving program messages, it can also run a specific command -c ( --command ) rather than a session, it also has many other options. Check man script to get more ideas. | {
"source": [
"https://unix.stackexchange.com/questions/200637",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
201,666 | There are createuser & dropuser commands: createuser - define a new PostgreSQL user account
dropuser - remove a PostgreSQL user account Is there a corresponding way to list the user accounts? These two commands do not require the user to invoke psql nor understand details of using it. | Use the psql shell and: \deu[+] [PATTERN] such as: postgres=# \deu+
List of user mappings
Server | User name | FDW Options
--------+-----------+-------------
(0 rows) And for all users: postgres=# \du
List of roles
Role name | Attributes | Member of
------------+------------------------------------------------+-----------
chpert.net | | {}
postgres | Superuser, Create role, Create DB, Replication | {} Also such as MySQL, you can do : $ psql -c "\du"
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------+-----------
chpert | | {}
postgres | Superuser, Create role, Create DB, Replication | {}
test | | {} | {
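For scripting you can also query the catalog directly and get plain output; run it as a role that is allowed to connect, e.g. postgres:
psql -tAc "SELECT rolname FROM pg_roles ORDER BY rolname;"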
"source": [
"https://unix.stackexchange.com/questions/201666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3999/"
]
} |
201,757 | I have a device under test (DUT) and I measure its power usage with a Power Analyzer Datalogger, using the data from /dev/ttyUSB0 . The problem is that the DUT is now remote from the workstation I used to gather data with, but on the same network. I need to use a 2nd PC, which is directly connected via USB to the Power Analyzer, as a sort of USB proxy, and ssh to create a kind of symbolic link on the measuring machine to the USB device of the "proxy" machine. Given the above setup, how can the 1st PC access /dev/ttyUSB0 of the 2nd PC which is directly connected, in a way that a program reading the stream from the 1st PC will not notice the difference? | socat might work here. On the 2nd PC you could let socat listen for data on /dev/ttyUSB0 and serve it to a tcp port, e.g: socat /dev/ttyUSB0,raw,echo=0 tcp-listen:8888,reuseaddr Then on the 1st PC you can connect to the 2nd PC with socat and provide the data on a pseudo terminal /dev/ttyVUSB0 for your application: socat PTY,raw,echo=0,link=/dev/ttyVUSB0 tcp:<ip_of_pc2>:8888 This isn't tested and socat supports many options, so tweaking may be needed. | {
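Since the listener exits when the client disconnects, a simple way to keep the bridge up on the 2nd PC is to restart it in a loop (same device, port and options as above):
while true; do
    socat /dev/ttyUSB0,raw,echo=0 tcp-listen:8888,reuseaddr
    sleep 1
done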
"source": [
"https://unix.stackexchange.com/questions/201757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
202,161 | I am writing a rolling upgrade playbook and would like to print out the hostname of the current host being upgraded. I put inventory_hostname and ansible_hostname in task names but that did not work - name: upgrade softare on {{inventory_hostname}}
- name: current host is {{ansible_hostname}} debug works fine - name: Test a variable
debug: var=inventory_hostname
TASK: [Test a variable] *******************************************************
ok: [SERV14] => {
"var": {
"inventory_hostname": "SERV14"
}
} So what should I do to be able to use those variables in task name descriptions. Thanks | Starting from v2.0 Ansible supports variable substitution in task/handler names: https://github.com/ansible/ansible/issues/10347 , so these examples will work as expected: - name: upgrade software on {{inventory_hostname}}
- name: current host is {{ansible_hostname}} | {
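So a task in the rolling-upgrade play can be written like this; the yum module and package name are placeholders for whatever the play actually does:
- name: upgrade software on {{ inventory_hostname }}
  yum:
    name: mypackage
    state: latest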
"source": [
"https://unix.stackexchange.com/questions/202161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39569/"
]
} |
202,302 | I'm learning bash scripting and found this on my /usr/share/bash-completion, line 305: local cword words=() What does it do? All tutorials online are just in the format local var=value | Although I like answer given by jordanm I think it's equally important to show less experienced Linux users how to cope with such questions by themselves. The suggested way is faster and more versatile than looking for answers at random pages showing up at Google search results page. First, all commands that can be run in Bash without typing an explicit path to it such as ./command can be divided into two categories: Bash shell builtins and external commands . Bash shell builtins come installed with Bash and are part of it while external commands are not part of Bash. This is important because Bash shell builtins are documented inside man bash and their documentation can be also invoked with help command while external commands are usually documented in their own man pages or take some kind of flag like -h, --help . To check whether a command is a Bash shell builtin or an external command: $ type local
local is a shell builtin It will display how command would be interpreted if used as a command name (from help type ). Here we can see that local is a shell builtin. Let's see another example: $ type vim
vim is /usr/bin/vim Here we can see that vim is not a shell builtin but an external command located in /usr/bin/vim . However, sometimes the same command could be installed both as an external command and be a shell builtin at the same time. Add -a to type to list all possibilities, for example: $ type -a echo
echo is a shell builtin
echo is /usr/bin/echo
echo is /bin/echo Here we can see that echo is both a shell builtin and an external command. However, if you just typed echo and pressed Return a shell builtin would be called because it appears first on this list. Note that all these versions of echo do not need to be the same. For example, on my system /usr/bin/echo takes --help flag while the Bash builtin one doesn't. Ok, now when we know that local is a shell builtin let's find out how it works: $ help local
local: local [option] name[=value] ...
Define local variables.
Create a local variable called NAME, and give it VALUE. OPTION can
be any option accepted by `declare'.
Local variables can only be used within a function; they are visible
only to the function where they are defined and its children.
Exit Status:
Returns success unless an invalid option is supplied, an error occurs,
or the shell is not executing a function. Note the first line: name[=value] . Everything between [ and ] is optional . It's a common convention used in many man pages and other forms of documentation in the *nix world. That being said, the command you asked about in your question is perfectly legal. In turn, the ... character means that the previous argument can be repeated. You can also read about this convention in some versions of man man : The following conventions apply to the SYNOPSIS section and can be used
as a guide in other sections.
bold text type exactly as shown.
italic text replace with appropriate argument.
[-abc] any or all arguments within [ ] are optional.
-a|-b options delimited by | cannot be used together.
argument ... argument is repeatable.
[expression] ... entire expression within [ ] is repeatable. So, at the end of the day, I hope that now you'll have an easier time understanding how different commands in Linux work. | {
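To see the construct from the question in action, paste a tiny function into an interactive bash; both variables are local to the function, and words=() initialises an empty array:
myfunc() {
    local cword words=()
    words=(one two three)
    cword=${#words[@]}
    echo "$cword words"
}
myfunc    # prints: 3 words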
"source": [
"https://unix.stackexchange.com/questions/202302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67546/"
]
} |
202,332 | $ source /etc/environment
$ sudo source /etc/environment
[sudo] password for t:
sudo: source: command not found It seems that a different shell than bash is run to execute source /etc/environment and that shell doesn't have source as builtin. But my and the root's default shells are both bash . $ echo $SHELL
/bin/bash If sudo indeed uses a different shell, why is that? I saw slm's reply , but I don't understand it in my case. | source is a shell builtin, so it cannot be executed without a shell. However, by default, sudo does not run a shell. From sudo Process model When sudo runs a command, it calls fork(2), sets up the execution environment as described above, and calls the execve system call in the child process If you want to explicitly execute a shell, use the -s option: # sudo -s source /etc/environment This is still of limited use because once that shell exits, the environment changes are lost.
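If the underlying goal is to load the variables from /etc/environment into your current shell, one common approach (assuming the file only contains simple VAR=value lines) is:
set -a               # export every variable that gets assigned
. /etc/environment
set +a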
"source": [
"https://unix.stackexchange.com/questions/202332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
202,383 | I basically need to do this: DUMMY=dummy
sudo su - ec2-user -c 'echo $DUMMY' This doesn't work. How can I pass the env variable $DUMMY to su? -p doesn't work with -l. | You can do it without calling a login shell: sudo DUMMY=dummy su ec2-user -c 'echo "$DUMMY"' or: sudo DUMMY=dummy su -p - ec2-user -c 'echo "$DUMMY"' The -p option of the su command preserves environment variables.
"source": [
"https://unix.stackexchange.com/questions/202383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95383/"
]
} |
202,391 | Using the following command, could someone please explain what exactly is the purpose of the ending curly braces ({}) and plus sign (+)? And how would the command operate differently if they were excluded from the command? find . -type d -exec chmod 775 {} + | The curly braces will be replaced by the results of the find command, and the chmod will be run on each of them. The + makes find attempt to run as few commands as possible (so, chmod 775 file1 file2 file3 as opposed to chmod 775 file1 , chmod 775 file2 , chmod 775 file3 ). Without them the command just gives an error. This is all explained in man find : -exec command ; Execute command ; true if 0 status is returned.
All following
arguments to find are taken to be arguments to the command until
an argument consisting of ‘ ; ’ is encountered.
The string ‘ {} ’ is replaced by the current file name being processed everywhere
it occurs in the arguments to the command, not just in arguments
where it is alone, as in some versions of find . … -exec command {} + This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of invocations of the command will be much less than the number of
matched files. … | {
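A quick way to see the difference is to compare the forms yourself; the first runs one chmod per directory, the second batches them, and the xargs line is shown only as a rough equivalent of the + form:
find . -type d -exec chmod 775 {} \;
find . -type d -exec chmod 775 {} +
find . -type d -print0 | xargs -0 chmod 775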
"source": [
"https://unix.stackexchange.com/questions/202391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64938/"
]
} |
202,400 | I am testing a hard disk with SmartMonTools . Hard disk status prior to the testings (only one short test performed days ago): $ sudo smartctl -l selftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 5167 - So I start the long test : $ sudo smartctl -t long /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 130 minutes for test to complete.
Test will complete after Sat May 9 16:05:27 2015
Use smartctl -X to abort test. The test is supposed to be running , then, but if I try to see its progress: $ sudo smartctl -l selftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 5167 - ... all I get is the same results, like if there were no running/performing tests right now. The '-H' parameter gives no more info: $ sudo smartctl -H /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED And, as long as there is no process running (this test is performed by the hard disk controller alone), a ps -e style search won't help either. How can I know if there is a SMART self test running right now? | In smartctl -a <device> look for Self-test execution status . Example when no test is running: Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run. Example while a test is running: Self-test execution status: ( 249) Self-test routine in progress...
90% of test remaining. When running selective self-test ( -t select ) there will also be a progress shown here: SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 125045423 Self_test_in_progress [90% left] (2881512-2947047) | {
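To keep an eye on a running test you can simply poll that field; the device path is taken from the question:
watch -n 300 "smartctl -a /dev/sda | grep -A1 'Self-test execution status'"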
"source": [
"https://unix.stackexchange.com/questions/202400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
202,430 | I want to create a "copy" of a directory tree where each file is a hardlink to the original file Example: I have a directory structure: dirA/
dirA/file1
dirA/x/
dirA/x/file2
dirA/y/
dirA/y/file3 Here is the expected result, a "copy" of the directory tree where each file is a hardlink to the original file: dirB/ # normal directory
dirB/file1 # hardlink to dirA/file1
dirB/x/ # normal directory
dirB/x/file2 # hardlink to dirA/x/file2
dirB/y/ # normal directory
dirB/y/file3 # hardlink to dirA/y/file3 | On Linux (more precisely with the GNU and busybox implementations of cp as typically found on systems that have Linux as a kernel) and recent FreeBSD, this is how: cp -al dirA dirB For a more portable solution, see answer using pax and cpio by Stéphane Chazelas | {
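You can confirm the result by comparing inode numbers, since hard-linked files share the same inode:
cp -al dirA dirB
ls -li dirA/file1 dirB/file1    # identical inode numbers, link count 2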
"source": [
"https://unix.stackexchange.com/questions/202430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114765/"
]
} |
202,797 | If we use echo 1234 >> some-file then the documentation says that the output is appended. My guess is that, if some-file does not exist, then O_CREAT will make a new file. If > was used, then O_TRUNC will truncate an existing file. In the case of >> :
Will the file be opened as O_WRONLY (or O_RDWR), seeked to the end, with the write operation then done, simulating O_APPEND ?
Or will the file be opened as O_APPEND , leaving it to the kernel to make sure appending happens ? I am asking this because a conserver process is overwriting some markers inserted by echo, when the output file is from NFS mount point, & NFS Documentation says O_APPEND is not supported on server, so client kernel will have to handle it. I guess conserver process is using O_APPEND , but not sure of bash >> on linux, hence asking the question here. | I ran this: strace -o spork.out bash -c "echo 1234 >> some-file" to figure out your question. This is what I found: open("some-file", O_WRONLY|O_CREAT|O_APPEND, 0666) = 3 No file named "some-file" existed in the directory in which I ran the echo command. | {
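If you repeat the experiment on a current system the call may appear as openat() instead of open(), so a filter like this keeps the trace short (filename taken from the question):
strace -f -e trace=open,openat bash -c 'echo 1234 >> some-file' 2>&1 | grep some-file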
"source": [
"https://unix.stackexchange.com/questions/202797",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54246/"
]
} |
202,891 | So I just installed the latest Kali Linux on my laptop which was based on Debian 7 (oldstable). I then dist-upgrad-ed the whole thing to Debian 8. I've always wanted Wayland instead of X11, so I installed the necessary packages. Then created a minimal ~./config/weston.ini configuration. Now, from the Gnome log-in screen: I can boot to Gnome on Wayland or LXDE (among others). The previous with very limited success and the latter (LXDE) almost perfectly, though the panel needs setting up (I have to look up freedesktop). Anyways, in LXDE, the GUI is more responsive than it was on the oldstable and possibly as fast when it was running windows 7. I was pleased. But I want to know if this is because of all the library/module upgrades from Debian 7 to 8 or from using Wayland (if I really am using Wayland at all). I skimmed through htop and found a /usr/bin/Xorg running and no process named "wayland". So which one am I currently running? | Obtain the session ID to pass in by issuing: loginctl That will show you something like: SESSION UID USER SEAT TTY
c2 1000 yourusername seat0
1 sessions listed. In that example, c2 is the session ID. Then: loginctl show-session <SESSION_ID> -p Type If you want all this on a single command: loginctl show-session $(awk '/tty/ {print $1}' <(loginctl)) -p Type | awk -F= '{print $2}' Use the one corresponding to your user name. Refer to: https://fedoraproject.org/wiki/How_to_debug_Wayland_problems So, for me it is: $ loginctl show-session 2 -p Type
Type=wayland | {
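On many systemd-based setups the session type is also exported into the environment, so a quick check from inside the session is (may be empty on older setups):
echo "$XDG_SESSION_TYPE"    # prints x11 or wayland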
"source": [
"https://unix.stackexchange.com/questions/202891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100633/"
]
} |
202,918 | I have to edit some files placed on a server that I can reach via ssh. I would prefer to edit these files in my customized vim on my workstation (I do not have rights to change the vim settings on the remote server). Sometimes I would also like to edit a file with Sublime Text or another GUI editor. Of course, I can download these files, edit them locally and upload them back to the server. Is there a more elegant solution? | You could do this by mounting the remote folder as a file-system using sshfs. To do this, first some pre-requisites: #issue all these cmds on local machine
sudo apt-get install sshfs
sudo adduser <username> fuse #Not required for new Linux versions (including Ubuntu > 18.04) Now, do the mounting process: mkdir ~/remoteserv
sshfs -o idmap=user <username>@<ipaddress>:/remotepath ~/remoteserv After this, just go into the mounted folder and use your own local customized vim. | {
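To detach the mount again, and optionally make it more tolerant of dropped connections, something along these lines works; reconnect and ServerAliveInterval are standard sshfs/ssh options:
fusermount -u ~/remoteserv
sshfs -o idmap=user,reconnect,ServerAliveInterval=15 <username>@<ipaddress>:/remotepath ~/remoteserv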
"source": [
"https://unix.stackexchange.com/questions/202918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39370/"
]
} |
202,945 | I'm trying to run some experiments with Linux and look for the smallest distribution by installation size. (RAM, CPU doesn't really matter) | Update: ttylinux is unmaintained at the moment! If you're still interested start here or here . Depending on your platform, ttylinux is maybe something for you: This smallest ttylinux system has an 8 MB file system and runs on i486
computers within 28 MB of RAM, but provides a complete command line
environment and is ready for Internet access. Started in 2001 and latest release is from 2015-03-05 so it is still maintained. | {
"source": [
"https://unix.stackexchange.com/questions/202945",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115043/"
]
} |
203,043 | I have the following files: Codigo-0275_tdim.matches.tsv
Codigo-0275_tdim.snps.tsv
FloragenexTdim_haplotypes_SNp3filter17_single.tsv
FloragenexTdim_haplotypes_SNp3filter17.tsv
FloragenexTdim_SNP3Filter17.fas
S134_tdim.alleles.tsv
S134_tdim.snps.tsv
S134_tdim.tags.tsv I want to count the number of files that have the word snp (case sensitive) in their names. I tried using grep -a 'snp' | wc -l but then I realized that grep searches within the files. What is the correct command to scan through the file names?
"source": [
"https://unix.stackexchange.com/questions/203043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114559/"
]
} |
203,129 | I want to find files which are greater than 1 GB and older than 6 months across the entire server. How do I write a command for this? | Use find : find /path -mtime +180 -size +1G -mtime means search for modification times that are more than 180 days old (+180). And the -size parameter searches for files greater than 1GB. | {
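To restrict the match to regular files, avoid descending into other mounted filesystems, and see details for each hit, a slightly extended version is:
find / -xdev -type f -mtime +180 -size +1G -exec ls -lh {} +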
"source": [
"https://unix.stackexchange.com/questions/203129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114762/"
]
} |
203,290 | I am using Linux Mint 17.1 Rebecca for about 2 days and accidentally typed my password into the terminal which is now displayed in the history list of commands I have previously typed. I want to clear the terminal history completely. I have tried using the following commands in the terminal which I thought would clear the history forever but they do not: history -c
reset
tput reset The above commands "will" clear the history from the terminal but when I exit and bring up a new one all my previous history is still there and can all be listed again using the - history command and also by pressing the UP arrow on my keyboard. I do not want this to happen until I have totally cleared my history, then I want to continue using it. How can I clear my terminal history completely - forever and start fresh? Please Note: I do not want to exit the terminal without saving history just clear it forever in this one instance. | reset or tput reset only does things to the terminal. The history is entirely managed by the shell, which remains unaffected. history -c clears your history in the current shell. That's enough (but overkill) if you've just typed your password and haven't exited that shell or saved its history explicitly. When you exit bash, the history is saved to the history file, which by default is .bash_history in your home directory. More precisely, the history created during the current session is appended to the file; entries that are already present are unaffected. To overwrite the history file with the current shell's history, run history -w . Instead of removing all your history entries, you can open .bash_history in an editor and remove the lines you don't want to keep. You can also do that inside bash, less conveniently, by using history to display all the entries, then history -d to delete the entries you don't want, and finally history -w to save. Note that if you have multiple running bash instances that have read the password, each of them might save it again. Before definitively purging the password from the history file, make sure that it is purged from all running shell instances. Note that even after you've edited the history file, it's possible that your password is still present somewhere on the disk from an earlier version of the file. It can't be retrieved through the filesystem anymore, but it might still be possible (but probably not easy) to find it by accessing the disk directly. If you use this password elsewhere and your disk gets stolen (or someone gets access to the disk), this could be a problem. | {
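Putting the selective approach from the answer into concrete commands; the entry number 1234 is a made-up example:
history             # locate the offending entry, e.g. number 1234
history -d 1234     # remove it from the current shell's history
history -w          # overwrite ~/.bash_history with the cleaned-up history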
"source": [
"https://unix.stackexchange.com/questions/203290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115277/"
]
} |
203,364 | On Debian 8 jessie I've removed python: perry@perry:~$ sudo apt-get remove python
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'python2.7' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded. But somehow I can still launch python from the terminal. perry@perry:~$ python
Python 2.7.9 (default, Apr 29 2015, 18:34:06)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> I haven't installed it from source or from any other place but apt. How is this possible and how can I remove python completely? | It turned out that the additional package python-minimal had python installed. One does then not only have to do: sudo apt-get remove python but also: sudo apt-get remove python-minimal | {
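To see which package actually ships the interpreter (and what else is still installed) before removing anything, you can ask dpkg:
dpkg -S "$(command -v python)"      # e.g. python-minimal: /usr/bin/python
dpkg -l 'python*' | grep '^ii'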
"source": [
"https://unix.stackexchange.com/questions/203364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115042/"
]
} |
203,371 | When I try to run ./script.sh I got Permission denied but when I run bash script.sh everything is fine. What did I do wrong? | Incorrect POSIX permissions It means you don't have the execute permission bit set for script.sh . When running bash script.sh , you only need read permission for script.sh . See What is the difference between running “bash script.sh” and “./script.sh”? for more info. You can verify this by running ls -l script.sh . You may not even need to start a new Bash process. In many cases, you can simply run source script.sh or . script.sh to run the script commands in your current interactive shell. You would probably want to start a new Bash process if the script changes current directory or otherwise modifies the environment of the current process. Access Control Lists If the POSIX permission bits are set correctly, the Access Control List (ACL) may have been configured to prevent you or your group from executing the file. E.g. the POSIX permissions would indicate that the test shell script is
executable. $ ls -l t.sh
-rwxrwxrwx+ 1 root root 22 May 14 15:30 t.sh However, attempting to execute the file results in: $ ./t.sh
bash: ./t.sh: Permission denied The getfacl command shows the reason why: $ getfacl t.sh
# file: t.sh
# owner: root
# group: root
user::rwx
group::r--
group:domain\040users:rw-
mask::rwx
other::rwx In this case, my primary group is domain users which has had execute permissions revoked by restricting the ACL with sudo setfacl -m 'g:domain\040users:rw-' t.sh . This restriction can be lifted by either of the following commands: sudo setfacl -m 'g:domain\040users:rwx' t.sh
sudo setfacl -b t.sh See: Access Control Lists, Arch Linux Wiki Using ACLs with Fedora Core 2 Filesystem mounted with noexec option Finally, the reason in this specific case for not being able to run the script is that the filesystem the script resides on was mounted with the noexec option. This option overrides POSIX permissions to prevent any file on that filesystem from being executed. This can be checked by running mount to list all mounted filesystems; the mount options are listed in parentheses in the entry corresponding to the filesystem, e.g. /dev/sda3 on /tmp type ext3 (rw,noexec) You can either move the script to another mounted filesystem or remount the filesystem allowing execution: sudo mount -o remount,exec /dev/sda3 /tmp Note: I’ve used /tmp as an example here since there are good security reasons for keeping /tmp mounted with the noexec,nodev,nosuid set of options. | {
"source": [
"https://unix.stackexchange.com/questions/203371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59854/"
]
} |
203,497 | I've been using Windows and Mac OS for the past 5 years and now I'm considering to use Linux on a daily basis. I've installed Ubuntu on a virtual machine and trying to understand how I can use Linux for my daily job (as a js programmer / web designer). Sorry for the novice question but it occurs to me that sometimes when I install a program through make config & make install it changes my system in ways that is not revertible easily. In windows when you install a program, you can uninstall it and hopefully if it plays by the book there will be no traces of the program left in the file system or registery, etc. In Mac OS you simply delete an App like a file. But in Linux there is apt-get and then there is make . I didn't quite understand how I can keep my Linux installation clean and tidy. It feels like any new app installation may break my system. But then Linux has a reputation of being very robust, so there must be something I don't understand about how app installation and uninstallation affects the system. Can anyone shed some light into this? Update: when installing an app, its files can spread anywhere really (package managers handle part of the issue) but there is a cool hack around that: use Docker for installing apps and keep them in their sandbox, specially if you're not gonna use them too often. It is also possible to run GUI apps like Firefox entirely in a Docker "sandbox". | A new install will seldom break your system (unless you do weird stuff like mixing source and binary). If you use precompiled binaries in Ubuntu then you can remove them and not have to worry about breaking your system, because a binary should list what it requires to run and your package manager will list what programs rely on that program for you to review. When you use source, you need to be more careful so you don't remove something critical (like glib). There are no warnings or anything else when you uninstall from source. This means you can completely break your machine. If you want to uninstall using apt-get then you'll use apt-get remove package as previously stated. Any programs that rely on that package will be uninstalled as well and you'll have a chance to review them. If you want to uninstall then generally the process is make uninstall . There is no warning (as I said above). make config will not alter your system, but make install will. As a beginner, I recommend using apt-get or whatever distro you use for binary packages. It keeps things nice and organized and unless you really want to it won't break your system. Hopefully, that clears everything up. | {
"source": [
"https://unix.stackexchange.com/questions/203497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115426/"
]
} |
203,606 | CoreOS does not include a package manager but my preferred text editor is nano , not vi or vim . Is there any way around this? gcc is not available so its not possible to compile from source: core@core-01 ~/nano-2.4.1 $ ./configure
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... no
checking whether make supports nested variables... no
checking for style of include used by make... none
checking for gcc... no
checking for cc... no
checking for cl.exe... no
configure: error: in `/home/core/nano-2.4.1':
configure: error: no acceptable C compiler found in $PATH To put this in context, I was following this guide when I found I wanted to use nano . | To do this on a CoreOS box, following the hints from the guide here : Boot up the CoreOS box and connect as the core user Run the /bin/toolbox command to enter the stock Fedora container. Install any software you need. To install nano in this case, it would be as simple as doing a dnf -y install nano (dnf has replaced yum) Use nano to edit files. "But wait -- I'm in a container!" Don't worry -- the host's file system is mounted at /media/root when inside the container. So just save a sample text file at /media/root/home/core/test.txt , then exit the container, and finally go list the files in /home/core . Notice your test.txt file? If any part of this is too cryptic or confusing, please ask follow up questions. :-) In the recent CoreOS 47.83.202103292105-0, the host is placed in /host instead of /media/root . | {
"source": [
"https://unix.stackexchange.com/questions/203606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50131/"
]
} |
203,622 | I've just made a mistake and I have removed the 'Name' column from the Archive Manager program interface. Here is the actual interface: Now, I can't see the ' Name ' column. I've tried to modify everything in the options but I couldn't get it back. How do I restore the 'Name' column? If the program has a config file, where can I find it? I'm using Ubuntu 14.04 | To do this on a CoreOS box, following the hints from the guide here : Boot up the CoreOS box and connect as the core user Run the /bin/toolbox command to enter the stock Fedora container. Install any software you need. To install nano in this case, it would be as simple as doing a dnf -y install nano (dnf has replaced yum) Use nano to edit files. "But wait -- I'm in a container!" Don't worry -- the host's file system is mounted at /media/root when inside the container. So just save a sample text file at /media/root/home/core/test.txt , then exit the container, and finally go list the files in /home/core . Notice your test.txt file? If any part of this is too cryptic or confusing, please ask follow up questions. :-) In the recent CoreOS 47.83.202103292105-0, the host is placed in /host instead of /media/root . | {
"source": [
"https://unix.stackexchange.com/questions/203622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115508/"
]
} |
203,846 | Having migrated to Linux from Windows, I would like to find an alternative software to Winmerge or rather learn command line tools to compare and sync two folders on Linux. I would be grateful if you could tell me how to do the following tasks on the command line... (I have studied diff and rsync, but I still need some help.) We have two folders: "/home/user/A" and "/home/user/B" Folder A is the place where regular files and folders are saved and folder B is a backup folder that serves as a complete mirror of folder A. (Nothing is directly saved or modified by the user in folder B.) My questions are: How to list files that exist only in folder B? (E.g. the ones deleted from folder A since the last synchronization.) How to copy files that exist in only folder B back into folder A? How to list files that exist in both folders but have different timestamps or sizes? (The ones that have been modified in folder A since the last synchronization. I would like to avoid using checksums, because there are tens of thousands of files and it'd make the process too slow.) How to make an exact copy of folder A into folder B? I mean, copy everything from folder A into folder B that exists only in folder A and delete everything from folder B that exists only in folder B, but without touching the files that are the same in both folders. | This puts folder A into folder B: rsync -avu --delete "/home/user/A" "/home/user/B" If you want the contents of folders A and B to be the same, put /home/user/A/ (with the slash) as the source. This takes not the folder A itself but all of its content and puts it into folder B. Like this: rsync -avu --delete "/home/user/A/" "/home/user/B" -a Do the sync preserving all filesystem attributes -v run verbosely -u only copy files with a newer modification time (or size difference if the times are equal) --delete delete the files in the target folder that do not exist in the source Manpage: https://download.samba.org/pub/rsync/rsync.html | {
"source": [
"https://unix.stackexchange.com/questions/203846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
203,948 | The proc file system allows the kernel to communicate information about each running process on a Linux system. Why is proc called a file system? It’s not a real file system like ext4 . It’s just a collection of files containing information about the running processes. | /proc is a filesystem because user processes can navigate through it with familiar system calls and library calls, like opendir() , readdir() , chdir() and getcwd() . Even open() , read() and close() work on a lot of the "files" that appear in /proc . For most intents and almost all purposes, /proc is a filesystem, despite the fact that its files don’t occupy blocks on some disk. I suppose we should all clarify what definition of the term “file system” we are currently using. In the context of ext4, when we write “file system”, we’re probably talking about the combination of a layout of disk blocks, specification of metadata information about the disk blocks that also resides somewhere on disk, and the code that deals with that on-disk layout. In the context of /usr , /tmp , /var/run and so on, we’re writing about an understanding or a shared conceptualization of how to name some things. Those two uses of the term “file system” are indeed quite different. /proc is really the second kind of “file system”, as you’ve noted. | {
"source": [
"https://unix.stackexchange.com/questions/203948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65878/"
]
} |
204,480 | I want to run multiple commands (processes) on a single shell. All of them have own continuous output and don't stop. Running them in the background breaks Ctrl - C . I would like to run them as a single process (subshell, maybe?) to be able to stop all of them with Ctrl - C . To be specific, I want to run unit tests with mocha (watch mode), run server and run some file preprocessing (watch mode) and see output of each in one terminal window. Basically I want to avoid using some task runner. I can realize it by running processes in the background ( & ), but then I have to put them into the foreground to stop them. I would like to have a process to wrap them and when I stop the process it stops its 'children'. | To run commands concurrently you can use the & command separator. ~$ command1 & command2 & command3 This will start command1 , then runs it in the background. The same with command2 . Then it starts command3 normally. The output of all commands will be garbled together, but if that is not a problem for you, that would be the solution. If you want to have a separate look at the output later, you can pipe the output of each command into tee , which lets you specify a file to mirror the output to. ~$ command1 | tee 1.log & command2 | tee 2.log & command3 | tee 3.log The output will probably be very messy. To counter that, you could give the output of every command a prefix using sed . ~$ echo 'Output of command 1' | sed -e 's/^/[Command1] /'
[Command1] Output of command 1 So if we put all of that together we get: ~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /'
[Command1] Starting command1
[Command2] Starting command2
[Command1] Finished
[Command3] Starting command3 This is a highly idealized version of what you are probably going to see. But it's the best I can think of right now. If you want to stop all of them at once, you can use the built-in trap . ~$ trap 'kill %1; kill %2' SIGINT
~$ command1 & command2 & command3 This will execute command1 and command2 in the background and command3 in the foreground, which lets you kill it with Ctrl + C . When you kill the last process with Ctrl + C the kill %1; kill %2 commands are executed, because we connected their execution with the reception of an INTerrupt SIGnal, the thing sent by pressing Ctrl + C . They respectively kill the 1st and 2nd background process (your command1 and command2 ). Don't forget to remove the trap after you're finished with your commands, using trap - SIGINT . Complete monster of a command: ~$ trap 'kill %1; kill %2' SIGINT
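~$ # (set the trap first, in the same interactive shell, so that pressing Ctrl + C while the jobs below are running also executes the kill commands)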
~$ command1 | tee 1.log | sed -e 's/^/[Command1] /' & command2 | tee 2.log | sed -e 's/^/[Command2] /' & command3 | tee 3.log | sed -e 's/^/[Command3] /' You could, of course, have a look at screen . It lets you split your console into as many separate consoles as you want. So you can monitor all commands separately, but at the same time. | {
"source": [
"https://unix.stackexchange.com/questions/204480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112420/"
]
} |
204,522 | PulseAudio is always running on my system, and it always instantly restarts if it crashes or I kill it. However, I never actually start PulseAudio. I have checked /etc/init.d/ and /etc/X11/Xsession.d/ , and I have checked systemctl list-units -a , and PulseAudio is nowhere to be found. How come PulseAudio seemingly magically starts by itself without me ever running it, and how does it instantly restart when it dies? I'm using Debian 8 (jessie) with xinit and the i3 window manager, and PulseAudio 5. | It seems any process linking to the libpulse* family of shared objects--either before or after running X and the i3 window manager--may implicitly autospawn PulseAudio server, under your user process, as a byproduct of attempts to interface with the audio subsystem. PulseAudio creator Lennart Poettering seems to confirm this, in a 2015-05-29 email to the systemd-devel mailing list : "pulseaudio is generally not a system service but a user service.
Unless your user session is fully converted to be managed by systemd
too (which is unlikely) systemd is hence not involved at all with
starting it. "PA is usually started from the session setup script or service. In
Gnome that's gnome-session, for example. It's also auto-spawned
on-demand if the libraries are used and note that it is missing." For example, on Debian Stretch (Testing), web browser IceWeasel links to two libpulse* shared objects: 1) libpulsecommon-7.1.so; and 2) libpulse.so.0.18.2: k@bucket:~$ ps -ef | grep iceweasel
k 17318 1 5 18:58 tty2 00:00:15 iceweasel
k 17498 1879 0 19:03 pts/0 00:00:00 grep iceweasel
k@bucket:~$ sudo pmap 17318 | grep -i pulse
00007fee08377000 65540K rw-s- pulse-shm-2442253193
00007fee0c378000 65540K rw-s- pulse-shm-3156287926
00007fee11d24000 500K r-x-- libpulsecommon-7.1.so
00007fee11da1000 2048K ----- libpulsecommon-7.1.so
00007fee11fa1000 4K r---- libpulsecommon-7.1.so
00007fee11fa2000 8K rw--- libpulsecommon-7.1.so
00007fee121af000 316K r-x-- libpulse.so.0.18.2
00007fee121fe000 2044K ----- libpulse.so.0.18.2
00007fee123fd000 4K r---- libpulse.so.0.18.2
00007fee123fe000 4K rw--- libpulse.so.0.18.2 You may see which running processes link to libpulse*. For example, first get a list of libpulse* shared objects, then run lsof on each (note: this comes from Debian Stretch (Testing), so your output may differ): sudo find / -type f -name "*libpulse*"
*snip*
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsedsp.so
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
/usr/lib/x86_64-linux-gnu/libpulse.so.0.18.2
/usr/lib/x86_64-linux-gnu/libpulse-simple.so.0.1.0
/usr/lib/x86_64-linux-gnu/libpulse-mainloop-glib.so.0.0.5
/usr/lib/libpulsecore-7.1.so
/usr/lib/ao/plugins-4/libpulse.so
sudo lsof /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
gnome-she 864 Debian-gdm mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-set 965 Debian-gdm mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-set 1232 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
gnome-she 1286 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
chrome 2730 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so
pulseaudi 18356 k mem REG 252,1 524312 274980 /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-7.1.so To tell these processes not to autospawn PulseAudio, edit ~/.config/pulse/client.conf and add line autospawn = no PulseAudio and its libraries respect that setting, generally. The libpulse* linking by running processes may also indicate why PulseAudio respawns so quickly. The FreeDesktop.org page, " Running PulseAudio ", seems to confirm this: "...typically some background application will immediately reconnect,
causing the server to get immediately restarted." You seem to indicate you start the i3 window manager via the console (by running xinit) and do not use a display manager or desktop environment. The rest of this answer details info for those that do use GNOME, KDE, and so forth. ADDITIONAL INFO, FOR GNOME/KDE AUTOSTART Package PulseAudio (5.0-13), in Debian Jessie (Stable) amd64, installs the following four system files : /etc/xdg/autostart/pulseaudio-kde.desktop /etc/xdg/autostart/pulseaudio.desktop /usr/bin/start-pulseaudio-x11 /usr/bin/start-pulseaudio-kde Some graphical session managers automatically run FreeDesktop.org autostart scripts on user login. The PulseAudio autostart script, in turn, tells graphical session managers to run the appropriate PulseAudio startup script: /usr/bin/start-pulseaudio-x11
/usr/bin/start-pulseaudio-kde These scripts call PulseAudio client /usr/bin/pactl to load PulseAudio modules, which spawns the PulseAudio server as a byproduct (note: if you have autospawn set to "no", pactl respects that and will not autospawn PulseAudio server). More detail, at the FreeDesktop.org page " Running PulseAudio ". Some display managers, in addition and in other distributions, may start PulseAudio (for example, SDDM, on ArchLinux . Though maintainers may have resolved this, by now). | {
"source": [
"https://unix.stackexchange.com/questions/204522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17214/"
]
} |
204,607 | When I use grep -o to search in multiple files, it outputs each result prefixed with the file name. How can I prevent this prefix? I want the results without the file names. | With the GNU implementation of grep (the one that also introduced -o ) or compatible, you can use the -h option. -h, --no-filename
Suppress the prefixing of file names on output. This is the
default when there is only one file (or only standard input) to
search. With other implementations, you can always concatenate the files with cat and grep that output: cat ./*.txt | grep regexp Or use sed or awk instead of grep : awk '/regexp/' ./*.txt (extended regexps like with grep -E ). sed '/regexp/!d' ./*.txt (basic regexps like with grep without -E . Many sed implementations now also support a -E option for extended regexps). | {
"source": [
"https://unix.stackexchange.com/questions/204607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11086/"
]
} |
204,641 | I currently have an extra HDD which I am using as my workspace. I am trying to get it to mount automatically on reboots using the following line added to /etc/fstab /dev/sdb1 /media/workspace auto defaults 0 1 This works to auto mount it, however I would like to restrict read/write access to users belonging to a specific group. How would I go about doing this in /etc/fstab? Can I simply just use chown or chmod to control the access? | If the filesystem type is one that doesn't have permissions, such as FAT, you can add umask , gid and uid to the fstab options. For example: /dev/sdb1 /media/workspace auto defaults,uid=1000,gid=1000,umask=022 0 1 uid=1000 is the user id. gid=1000 is the group id. umask=022 this will set permissions so that the owner has read, write, execute. Group and Others will have read and execute. To see your changes you do not need to reboot. Just umount and mount again without arguments. For example: umount /media/workspace
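# if umount complains that the target is busy, something like fuser -vm /media/workspace (or lsof /media/workspace) should show which processes are still using it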
mount /media/workspace But make sure you do not have any process (even your shell) using that directory. | {
"source": [
"https://unix.stackexchange.com/questions/204641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116153/"
]
} |
204,661 | My OpenVAS isn't starting in Kali Linux. root@kali:~# openvas-mkcert
One or more files do already exist and would be overriden:
/var/lib/openvas/CA/cacert.pem
/var/lib/openvas/private/CA/cakey.pem
/var/lib/openvas/CA/servercert.pem
/var/lib/openvas/private/CA/serverkey.pem
You need to remove or rename them and re-run openvas-mkcert.
If you run openvas-mkcert with '-f', the files will be overwritten.
root@kali:~# openvas-nvt-sync
[i] This script synchronizes an NVT collection with the 'OpenVAS NVT Feed'.
[i] The 'OpenVAS NVT Feed' is provided by 'The OpenVAS Project'.
[i] Online information about this feed: 'http://www.openvas.org/openvas-nvt-feed.html'.
[i] NVT dir: /var/lib/openvas/plugins
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected].
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
[i] Feed is already current, no synchronization necessary.
root@kali:~# openvas-mkcert-client -n om -i
Generating RSA private key, 1024 bit long modulus
...++++++
.......................++++++
e is 65537 (0x10001)
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:State or Province Name (full name) [Some-State]:Locality Name (eg, city) []:Organization Name (eg, company) [Internet Widgits Pty Ltd]:Organizational Unit Name (eg, section) []:Common Name (eg, your name or your server's hostname) []:Email Address []:Using configuration from /tmp/openvas-mkcert-client.3524/stdC.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'DE'
localityName :PRINTABLE:'Berlin'
commonName :PRINTABLE:'om'
Certificate is to be certified until May 19 17:49:55 2016 GMT (365 days)
Write out database with 1 new entries
Data Base Updated
Your client certificates are in /tmp/openvas-mkcert-client.3524 .
You will have to copy them by hand.
root@kali:~# openvasmd --rebuild
root@kali:~# openvasmd --backup
root@kali:~# openvasad -c 'add_user' -n openvasadmin -r
bash: openvasad: command not found
root@kali:~# openvasad -c 'add_user' -n openvasadmin -r admin
bash: openvasad: command not found
root@kali:~# openvassd
root@kali:~# openvas-mkcert
One or more files do already exist and would be overriden:
/var/lib/openvas/CA/cacert.pem
/var/lib/openvas/private/CA/cakey.pem
/var/lib/openvas/CA/servercert.pem
/var/lib/openvas/private/CA/serverkey.pem
You need to remove or rename them and re-run openvas-mkcert.
If you run openvas-mkcert with '-f', the files will be overwritten.
root@kali:~# openvas-mkcert -f
-------------------------------------------------------------------------------
Creation of the OpenVAS SSL Certificate
-------------------------------------------------------------------------------
This script will now ask you the relevant information to create the SSL certificate of OpenVAS.
Note that this information will *NOT* be sent to anybody (everything stays local), but anyone with the ability to connect to your OpenVAS daemon will be able to retrieve this information.
CA certificate life time in days [1460]:
Server certificate life time in days [365]:
Your country (two letter code) [DE]: PL
Your state or province name [none]:
Your location (e.g. town) [Berlin]: Wroclaw
Your organization [OpenVAS Users United]:
-------------------------------------------------------------------------------
Creation of the OpenVAS SSL Certificate
-------------------------------------------------------------------------------
Congratulations. Your server certificate was properly created.
The following files were created:
. Certification authority:
Certificate = /var/lib/openvas/CA/cacert.pem
Private key = /var/lib/openvas/private/CA/cakey.pem
. OpenVAS Server :
Certificate = /var/lib/openvas/CA/servercert.pem
Private key = /var/lib/openvas/private/CA/serverkey.pem
Press [ENTER] to exit
root@kali:~# openvas-nvt-sync
[i] This script synchronizes an NVT collection with the 'OpenVAS NVT Feed'.
[i] The 'OpenVAS NVT Feed' is provided by 'The OpenVAS Project'.
[i] Online information about this feed: 'http://www.openvas.org/openvas-nvt-feed.html'.
[i] NVT dir: /var/lib/openvas/plugins
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected].
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
[i] Feed is already current, no synchronization necessary.
root@kali:~# openvas-mkcert-client -n om -i
Generating RSA private key, 1024 bit long modulus
.............................++++++
..++++++
e is 65537 (0x10001)
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:State or Province Name (full name) [Some-State]:Locality Name (eg, city) []:Organization Name (eg, company) [Internet Widgits Pty Ltd]:Organizational Unit Name (eg, section) []:Common Name (eg, your name or your server's hostname) []:Email Address []:Using configuration from /tmp/openvas-mkcert-client.3871/stdC.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'DE'
localityName :PRINTABLE:'Berlin'
commonName :PRINTABLE:'om'
Certificate is to be certified until May 19 17:59:47 2016 GMT (365 days)
Write out database with 1 new entries
Data Base Updated
Your client certificates are in /tmp/openvas-mkcert-client.3871 .
You will have to copy them by hand.
root@kali:~# openvasmd --rebuild
root@kali:~# openvassd
bind() failed : Address already in use
root@kali:~# This is not working: [i] This script synchronizes an NVT collection with the 'OpenVAS NVT Feed'.
[i] The 'OpenVAS NVT Feed' is provided by 'The OpenVAS Project'.
[i] Online information about this feed: 'http://www.openvas.org/openvas-nvt-feed.html'.
[i] NVT dir: /var/lib/openvas/plugins
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected].
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
@ERROR: max connections (200) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
[e] Error: rsync failed.
[i] This script synchronizes a SCAP data directory with the OpenVAS one.
[i] SCAP dir: /var/lib/openvas/scap-data
[i] Will use rsync
[i] Using rsync: /usr/bin/rsync
[i] Configured SCAP data rsync feed: rsync://feed.openvas.org:/scap-data
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected].
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
@ERROR: max connections (200) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
[e] Error: rsync failed. Your SCAP data might be broken now.
[i] This script synchronizes a CERT advisory directory with the OpenVAS one.
[i] CERT dir: /var/lib/openvas/cert-data
[i] Will use rsync
[i] Using rsync: /usr/bin/rsync
[i] Configured CERT data rsync feed: rsync://feed.openvas.org:/cert-data
OpenVAS feed server - http://www.openvas.org/
This service is hosted by Intevation GmbH - http://intevation.de/
All transactions are logged.
Please report synchronization problems to [email protected].
If you have any other questions, please use the OpenVAS mailing lists
or the OpenVAS IRC chat. See http://www.openvas.org/ for details.
@ERROR: max connections (200) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
Error: rsync failed. Your CERT data might be broken now.
Stopping OpenVAS Manager: openvasmd.
Stopping OpenVAS Scanner: openvassd. And, the terminal freezes at this point. Starting OpenVas Services
Starting Greenbone Security Assistant: ERROR.
Starting OpenVAS Scanner: ERROR.
Starting OpenVAS Manager: ERROR.
root@kali:~# How to solve this problem? | If the filesystem type is one that doesn't have permissions, such as FAT, you can add umask , gid and uid to the fstab options. For example: /dev/sdb1 /media/workspace auto defaults,uid=1000,gid=1000,umask=022 0 1 uid=1000 is the user id. gid=1000 is the group id. umask=022 this will set permissions so that the owner has read, write, execute. Group and Others will have read and execute. To see your changes you do not need to reboot. Just umount and mount again without arguments. For example: umount /media/workspace
mount /media/workspace But make sure you do not have any process (even your shell) using that directory. | {
"source": [
"https://unix.stackexchange.com/questions/204661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
204,689 | If you search something in Vim by prepending searchterm with a forward slash, e.g. /searchterm , Vim puts that searchterm into the search string history table . You then are able to navigate through past search terms by typing in forward slash ( / ) and using Up / Down arrow keys. That search string history table is persistent across Vim restarts. Everything above is also true for command (typed with : prepended) history table. How do I clear those history tables? | The history is persisted in the viminfo file; you can configure what (and how many of them) is persisted via the 'viminfo' (and 'history' ) options. You can clear the history via the histdel() function, e.g. for searches: :call histdel('/') You can even delete just certain history ranges or matching lines. Alternatively, you could also just edit the ~/.viminfo file directly (when Vim is closed, and either with another editor, or with vim -i NONE ). | {
"source": [
"https://unix.stackexchange.com/questions/204689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58428/"
]
} |
204,985 | I was recently given username/password access to a list of servers and want to propagate my SSH public key to these servers, so that I can login more easily. So that it's clear: There is not any pre-existing public key on the remote servers that I can utilize to automate this This constitutes the very first time I'm logging into these servers, and I'd like to not have to constantly type my credentials in to access them Nor do I want to type in my password over and over using ssh-copy-id in a for loop. | Rather than type your password multiple times you can make use of pssh and its -A switch to prompt for it once, and then feed the password to all the servers in a list. NOTE: Using this method doesn't allow you to use ssh-copy-id , however, so you'll need to roll your own method for appending your SSH pub key file to your remote account's ~/.ssh/authorized_keys file. Example Here's an example that does the job: $ cat ~/.ssh/my_id_rsa.pub \
| pssh -h ips.txt -l remoteuser -A -I -i \
' \
umask 077; \
mkdir -p ~/.ssh; \
afile=~/.ssh/authorized_keys; \
cat - >> $afile; \
sort -u $afile -o $afile \
'
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password:
[1] 23:03:58 [SUCCESS] 10.252.1.1
[2] 23:03:58 [SUCCESS] 10.252.1.2
[3] 23:03:58 [SUCCESS] 10.252.1.3
[4] 23:03:58 [SUCCESS] 10.252.1.10
[5] 23:03:58 [SUCCESS] 10.252.1.5
[6] 23:03:58 [SUCCESS] 10.252.1.6
[7] 23:03:58 [SUCCESS] 10.252.1.9
[8] 23:03:59 [SUCCESS] 10.252.1.8
[9] 23:03:59 [SUCCESS] 10.252.1.7 The above script is generally structured like so: $ cat <pubkey> | pssh -h <ip file> -l <remote user> -A -I -i '...cmds to add pubkey...' High level pssh details cat <pubkey> outputs the public key file to pssh pssh uses the -I switch to ingest data via STDIN -l <remote user> is the remote server's account (we're assuming you have the same username across the servers in the IP file) -A tells pssh to ask for your password and then reuse it for all the servers that it connects to -i tells pssh to send any output to STDOUT rather than store it in files (its default behavior) '...cmds to add pubkey...' - this is the trickiest part of what's going on, so I'll break this down by itself (see below) Commands being run on remote servers These are the commands that pssh will run on each server: ' \
umask 077; \
mkdir -p ~/.ssh; \
afile=~/.ssh/authorized_keys; \
cat - >> $afile; \
sort -u $afile -o $afile \
' In order: set the remote user's umask to 077, this is so that any directories or files we're going to create, will have their permissions set accordingly like so: $ ls -ld ~/.ssh ~/.ssh/authorized_keys
drwx------ 2 remoteuser remoteuser 4096 May 21 22:58 /home/remoteuser/.ssh
-rw------- 1 remoteuser remoteuser 771 May 21 23:03 /home/remoteuser/.ssh/authorized_keys create the directory ~/.ssh and ignore warning us if it's already there set a variable, $afile , with the path to authorized_keys file cat - >> $afile - take input from STDIN and append to authorized_keys file sort -u $afile -o $afile - uniquely sorts authorized_keys file and saves it NOTE: That last bit is to handle the case where you run the above multiple times against the same servers. This will eliminate your pubkey from getting appended multiple times. Notice the single ticks! Also pay special attention to the fact that all these commands are nested inside of single quotes. That's important, since we don't want $afile to get evaluated until after it's executing on the remote server. ' \
..cmds... \
' I've expanded the above so it's easier to read here, but I generally run it all on a single line like so: $ cat ~/.ssh/my_id_rsa.pub | pssh -h ips.txt -l remoteuser -A -I -i 'umask 077; mkdir -p ~/.ssh; afile=~/.ssh/authorized_keys; cat - >> $afile; sort -u $afile -o $afile' Bonus material By using pssh you can forgo having to construct files and either provide dynamic content using -h <(...some command...) or you can create a list of IPs using another of pssh 's switches, -H "ip1 ip2 ip3" . For example: $ cat .... | pssh -h <(grep -A1 dp15 ~/.ssh/config | grep -vE -- '#|--') ... The above could be used to extract a list of IPs from my ~/.ssh/config file. You can of course also use printf to generate dynamic content too: $ cat .... | pssh -h <(printf "%s\n" srv0{0..9}) .... For example: $ printf "%s\n" srv0{0..9}
srv00
srv01
srv02
srv03
srv04
srv05
srv06
srv07
srv08
srv09 You can also use seq to generate formatted numbers sequences too! References & similar tools to pssh If you don't want to use pssh as I've done so above there are some other options available. sshpt Ansible's authorized_key_module | {
"source": [
"https://unix.stackexchange.com/questions/204985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
205,010 | I'm renaming network interfaces by modifying the files in /etc/sysconfig/network-scripts . eth0 -> nic0 eth1 -> nic1 The content of the network scripts looks like this, after modification: # cat /etc/sysconfig/network-scripts/ifcfg-nic0
DEVICE=nic0
BOOTPROTO=static
ONBOOT=yes
HWADDR=xx:xx:xx:xx:xx:xx
USERCTL=no
IPV6INIT=no
MASTER=bond0
SLAVE=yes A reboot activates the new config. But how do I activate this configuration without rebooting? A systemctl restart network doesn't do the trick. I can shut down one interface by its old name ( ifdown eth0 ) but ifup results in below message no matter if the old or new name was provided: ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device nic0 does not seem to be present, delaying initialization. /etc/init.d/network status shows this output: Configured devices:
lo bond0 nic0 nic1
Currently active devices:
lo eth0 eth1 bond0 Both, ifconfig and ip a show the old interface names. | You can rename the device using the ip command: /sbin/ip link set eth1 down
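# (the interface generally has to be down before the kernel will allow the rename; bring it back up afterwards)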
/sbin/ip link set eth1 name eth123
/sbin/ip link set eth123 up Edit : I am leaving the below for the sake of completeness and posterity (and for informational purposes,) but I have confirmed swill's comment and Marco Macuzzo's answer that simply changing the name and device of the interface /etc/sysconfig/network-scripts/ifcfg-eth0 (and renaming the file) will cause the device to be named correctly as long as the hwaddr= field is included in the configuration file. I recommend using this method instead after the referenced update. You may also want to make sure that you configure a udev rule, so that this will work on the next reboot too. The path for udev moved in CentOS 7 to /usr/lib/udev/rules.d/60-net.rules but you are still able to manage it the same way. If you added "net.ifnames=0 biosdevname=0" to your kernel boot string to return to the old naming scheme for your nics, you can remove ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{type}=="1", PROGRAM="/lib/udev/rename_device", RESULT=="?*", NAME="$result" And replace it with ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:3f:a7", NAME="eth123" You need one entry per nic. Be sure to use the correct MAC address and update the NAME field. If you did not use "net.ifnames=0 biosdevname=0", be careful as there could be unintended consequences. | {
"source": [
"https://unix.stackexchange.com/questions/205010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101263/"
]
} |