source_id | question | response | metadata |
---|---|---|---|
28,605 | My sound volume control does not work when I'm logged in with my regular user, though the sound works correctly. What's weirder is that the sound control works when I'm logged in as guest. With my regular user I can't see any output peripherals in Preferences->Audio but it will show up when logged in as guest. The home directory of my regular Ubuntu user is also used as the home directory of my Mac Os X user. Could it be the case there is some kind of conflict? I already tried deleting the .pulse directory and .pulse-cookie but nothing changed. Only .pulse-cookie was recreated when I logged out/logged in. Do you have any idea of what could be happening? | Use encfs (available as a package on most distributions). To set up: mkdir ~/.encrypted ~/encrypted
encfs ~/.encrypted ~/encrypted
# enter a passphrase
mv existing-directory ~/encrypted The initial call to encfs sets up an encrypted filesystem. After that point, every file that you write under ~/encrypted is not stored directly on the disk, it is encrypted and the encrypted data is stored under ~/.encrypted . The encfs command leaves a daemon running, and this daemon handles the encryption (and decryption when you read a file from under ~/encrypted ). In other words, for files under ~/encrypted , actions such as reads and writes do not translate directly to reading or writing from the disk. They are performed by the encfs process, which encrypts and decrypts the data and uses the ~/.encrypted directory to store the ciphertext. When you've finished working with your files for the time being, unmount the filesystem so that the data can't be accessed until you type your passphrase again: fusermount -u ~/encrypted After that point, ~/encrypted will be an empty directory again. When you later want to work on these files again, mount the encrypted filesystem: encfs ~/.encrypted ~/encrypted
# enter your passphrase This, again, makes the encrypted files in ~/.encrypted accessible under the directory ~/encrypted . You can change the mount point ~/encrypted as you like: encfs ~/.encrypted /somewhere/else (but mount the encrypted directory only once at a time). You can copy or move the ciphertext (but not while it's mounted) to a different location or even to a different machine; all you need to do to work on the files is pass the location of the ciphertext as the first argument to encfs and the location of an empty directory as the second argument. | {
"source": [
"https://unix.stackexchange.com/questions/28605",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14089/"
]
} |
28,611 | I need to run a software system that is intended to be installed as an appliance on a dedicated machine. In order to save energy, I plan to run the system on a VirtualBox VM instead. The host is a standard Linux box with a SysV-init system; the guest is a heavily modified Linux and I would prefer not to have to alter it further. I am using the OSE version of VirtualBox. I have already figured out how to start the VM when the host boots ( Edit: this is done, as Nikhil mentioned below, through the command VBoxManage startvm ), but how can I gracefully shut down the VM? Any script running on the host would need to wait until the guest has fully shut down. Can anyone suggest what, for example, a service file doing this would have to look like? | Have you tried acpipowerbutton from this command set? VBoxManage controlvm <uuid>|<name>
pause|resume|reset|poweroff|savestate|
acpipowerbutton|acpisleepbutton| Edit after reading the comments: You can use acpid or other ACPI utilities to make it graceful. Also, can you provide more information about how you shut down the machine at the moment? A plain shutdown wouldn't wait for unfinished jobs, and a fixed time delay may be too long. I assume you aren't using a window manager, so try this tool. I've just seen this daemon; you might find it useful. | {
"source": [
"https://unix.stackexchange.com/questions/28611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7932/"
]
} |
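A minimal sketch of the host-side shutdown logic asked about in the entry above, building on the acpipowerbutton suggestion. It assumes the guest honors ACPI power-button events, and the VM name "appliance" is a placeholder to replace with your own:

#!/bin/sh
# Ask the guest for a clean ACPI shutdown, then wait until it has powered off.
VM="appliance"
VBoxManage controlvm "$VM" acpipowerbutton
while VBoxManage showvminfo "$VM" --machinereadable | grep -q '^VMState="running"'; do
    sleep 2
done

A script like this could be called from the host's SysV stop action so that the init system blocks until the guest has fully shut down.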
28,636 | I'm starting to get a collection of computers at home and to support them I have my "server" Linux box running a RAID array. It's currently mdadm RAID-1 , going to RAID-5 once I have more drives (and then RAID-6 I'm hoping for). However I've heard various stories about data getting corrupted on one drive and you never noticing due to the other drive being used, up until the point when the first drive fails, and you find your second drive is also screwed (and 3rd, 4th, 5th drive). Obviously backups are important and I'm taking care of that also, however I know I've previously seen scripts which claim to help against this problem and allow you to check your RAID while it's running. However, looking for these scripts again now I'm finding it hard to find anything which seems similar to what I ran before and I feel I'm out of date and not understanding whatever has changed. How would you check a running RAID to make sure all disks are still performing normally? I monitor SMART on all the drives and also have mdadm set to email me in case of failure but I'd like to know my drives occasionally "check" themselves too. | The point of RAID with redundancy is that it will keep going as long as it can, but obviously it will detect errors that put it into a degraded mode, such as a failing disk. You can show the current status of an array with mdadm --detail (abbreviated as mdadm -D ): # mdadm -D /dev/md0
<snip>
0 8 5 0 active sync /dev/sda5
1 8 23 1 active sync /dev/sdb7 Furthermore the return status of mdadm -D is nonzero if there is any problem such as a failed component (1 indicates an error that the RAID mode compensates for, and 2 indicates a complete failure). You can also get a quick summary of all RAID device status by looking at /proc/mdstat . You can get information about a RAID device in /sys/class/block/md*/md/* as well; see Documentation/md.txt in the kernel documentation. Some /sys entries are writable as well; for example you can trigger a full check of md0 with echo check >/sys/class/block/md0/md/sync_action . In addition to these spot checks, mdadm can notify you as soon as something bad happens. Make sure that you have MAILADDR root in /etc/mdadm.conf (some distributions (e.g. Debian) set this up automatically). Then you will receive an email notification as soon as an error (a degraded array) occurs . Make sure that you do receive mail send to root on the local machine (some modern distributions omit this, because they consider that all email goes through external providers — but receiving local mail is necessary for any serious system administrator). Test this by sending root a mail: echo hello | mail -s test root@localhost . Usually, a proper email setup requires two things: Run an MTA on your local machine. The MTA must be set up at least to allow local mail delivery. All distributions come with suitable MTAs, pick anything (but not nullmailer if you want the email to be delivered locally). Redirect mail going to system accounts (at least root ) to an address that you read regularly. This can be your account on the local machine, or an external email address. With most MTAs, the address can be configured in /etc/aliases ; you should have a line like root: djsmiley2k for local delivery, or root: [email protected] for remote delivery. If you choose remote delivery, make sure that your MTA is configured for that. Depending on your MTA, you may need to run the newaliases command after editing /etc/aliases . | {
"source": [
"https://unix.stackexchange.com/questions/28636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10525/"
]
} |
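To complement the answer above: the /sys interface it mentions can be driven from cron to get the occasional self-check the asker wants. A minimal sketch, assuming the array is md0 (many distributions already ship a similar weekly "checkarray" job):

# root's crontab: start a full consistency check every Sunday at 01:00
0 1 * * 0 echo check > /sys/class/block/md0/md/sync_action

Progress of a running check shows up in /proc/mdstat, and any inconsistencies found are counted in /sys/class/block/md0/md/mismatch_cnt.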
28,675 | Somehow I managed to close a screen window without screen 'noticing' it, so the session is still flagged as attached . This prevents me from re-attaching to this session. What can I do? me@iupr-serv8:~$ screen -r
There are several suitable screens on:
25028.pts-19.XXX-serv8 (01/05/2012 07:15:34 PM) (Attached)
24658.pts-19.XXX-serv8 (01/05/2012 07:11:38 PM) (Detached)
24509.pts-19.XXX-serv8 (01/05/2012 07:10:00 PM) (Detached)
18676.pts-5.XXX-serv8 (01/02/2012 06:55:33 PM) (Attached)
Type "screen [-d] -r [pid.]tty.host" to resume one of them.
me@XXX-serv8:~$ screen -r 25028
There is a screen on:
25028.pts-19.XXX-serv8 (01/05/2012 07:15:33 PM) (Attached)
There is no screen to be resumed matching 25028. [update] In the end I found out that the session was not lost, but that the ID of the first session is 0 . The second session then has the ID 1 . | Try detaching it first with screen -d . If that doesn't work, you can try, in increasing order of emphasis , -d|-D [pid.tty.host]
does not start screen, but detaches the elsewhere running screen session. It has the
same effect as typing "C-a d" from screen's controlling terminal. -D is the equivalent
to the power detach key. If no session can be detached, this option is ignored. In
combination with the -r/-R option more powerful effects can be achieved:
-d -r Reattach a session and if necessary detach it first.
-d -R Reattach a session and if necessary detach or even create it first.
-d -RR Reattach a session and if necessary detach or create it. Use the first session if
more than one session is available.
-D -r Reattach a session. If necessary detach and logout remotely first.
-D -R Attach here and now. In detail this means: If a session is running, then reattach.
If necessary detach and logout remotely first. If it was not running create it and
notify the user. This is the author's favorite.
-D -RR Attach here and now. Whatever that means, just do it. | {
"source": [
"https://unix.stackexchange.com/questions/28675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11381/"
]
} |
28,679 | I am learning command line from a book called " Linux Command Line and Shell Scripting Bible, Second Edition ." The book states this: Some Linux implementations contain a table of processes to start
automatically on bootup. On Linux systems, this table is usually
located in the special file /etc/inittabs. Other systems (such as the popular Ubuntu Linux distribution) utilize
the /etc/init.d folder, which contains scripts for starting and
stopping individual applications at boot time. The scripts are started
via entries under the /etc/rcX.d folders, where X is a run level. Probably because I am new to Linux, I did not understand what the second quoted paragraph meant. Can someone explain it in much plainer language? | Let's forget init.d or rcX.d and keep things very simple. Imagine you were writing a program whose sole responsibility is to run or kill other scripts one by one. However, your next problem is to make sure they run in order. How would you do that? Let's imagine this program looks inside a scripts folder for the scripts to run. To set the priority of the scripts you would name them in, let's say, numerical order. This order is what dictates the relation between init.d and rc. In other words, init.d contains the scripts to run and the rcX.d folders contain their order of execution. The X value in rcX.d is the run level, which can be loosely translated as the OS's current state. If you dig inside the rcX.d folders you will find entries with this format: Xxxabcd, where X is replaced with K or S (which stands for whether the script should be killed or started in the current run level), xx is the order number, and abcd is the script name (the name itself is irrelevant; what matters is the script the entry points to, which is what will be run) | {
"source": [
"https://unix.stackexchange.com/questions/28679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9610/"
]
} |
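A conceptual sketch of what the rc machinery described above boils down to (heavily simplified, not the actual Debian/Ubuntu rc script): on entering run level 2 it runs the K* entries with "stop" and then the S* entries with "start", in the order given by their two-digit numbers.

#!/bin/sh
# Simplified idea of what the run-level changer does for run level 2
for script in /etc/rc2.d/K*; do
    "$script" stop
done
for script in /etc/rc2.d/S*; do
    "$script" start
done

The shell glob expands in lexical order, which is why the xx number in Kxxname/Sxxname controls the execution order.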
28,749 | Most recent Linux distributions include bash as the default shell, although there are other, (arguably) better shells available. I'm trying to understand whether this is some historical leftover that nobody wants to change, or whether there are some good reasons that make bash the first choice. | The short answer is because Linux is really GNU/Linux. Only the kernel is Linux; the base collection of utilities providing the Unix-like environment is provided by GNU, and the GNU shell is bash. As I said, that's the short answer ;) Edited to add some additional commentary... Let me preface this by saying that I'm not a Unix historian, so I can only answer IMHO. A few points: first of all, bash is the kitchen sink of shells, as emacs is to editors. At the time bash was released there were no free ksh implementations; tcsh was a free csh replacement, but Stallman had a rant against csh for shell programming. As an interactive shell bash had excellent history/command recall, along with the saving of history from session to session. It was a drop-in replacement for sh, bsh and ksh for shell programming and made for a decent interactive shell. Like a snowball rolling downhill, bash has gained momentum and size. Yes, there are dozens of other shells; shells that are better suited to an individual purpose or taste, but for a single all-around shell bash does a decent job and has had a lot of eyes on it for over 20 years. | {
"source": [
"https://unix.stackexchange.com/questions/28749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11704/"
]
} |
28,756 | I have a directory tree that contains many small files, and a small number of larger files. The average size of a file is about 1 kilobyte. There are 210158 files and directories in the tree (this number was obtained by running find | wc -l ). A small percentage of files gets added/deleted/rewritten several times per week. This applies to the small files, as well as to the (small number of) larger files. The filesystems that I tried (ext4, btrfs) have some problems with positioning of files on disk. Over a longer span of time, the physical positions of files on the disk (rotating media, not solid state disk) are becoming more randomly distributed. The negative consequence of this random distribution is that the filesystem is getting slower (such as: 4 times slower than a fresh filesystem). Is there a Linux filesystem (or a method of filesystem maintenance) that does not suffer from this performance degradation and is able to maintain a stable performance profile on a rotating media? The filesystem may run on Fuse, but it needs to be reliable. | Performance I wrote a small Benchmark ( source ), to find out, what file system performs best with hundred thousands of small files: create 300000 files (512B to 1536B) with data from /dev/urandom rewrite 30000 random files and change the size read 30000 sequential files read 30000 random files delete all files sync and drop cache after every step Results (average time in seconds, lower = better): Using Linux Kernel version 3.1.7
Btrfs:
create: 53 s
rewrite: 6 s
read sq: 4 s
read rn: 312 s
delete: 373 s
ext4:
create: 46 s
rewrite: 18 s
read sq: 29 s
read rn: 272 s
delete: 12 s
ReiserFS:
create: 62 s
rewrite: 321 s
read sq: 6 s
read rn: 246 s
delete: 41 s
XFS:
create: 68 s
rewrite: 430 s
read sq: 37 s
read rn: 367 s
delete: 36 s Result: While Ext4 had good overall performance, ReiserFS was extremely fast at reading sequential files. It turned out that XFS is slow with many small files - you should not use it for this use case. Fragmentation issue The only way to prevent file systems from scattering files over the drive is to keep the partition only as big as you really need it, but pay attention not to make the partition too small, to prevent intra-file fragmentation. Using LVM can be very helpful. Further reading The Arch Wiki has some great articles dealing with file system performance: https://wiki.archlinux.org/index.php/Beginner%27s_Guide#Filesystem_types https://wiki.archlinux.org/index.php/Maximizing_Performance#Storage_devices | {
"source": [
"https://unix.stackexchange.com/questions/28756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
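For readers who want to reproduce something like the benchmark described above, here is a rough sketch of just the "create" phase (not the author's actual script, which is only linked as "source" in the original). It writes many small files of 512-1535 bytes filled from /dev/urandom, using bash's RANDOM:

#!/bin/bash
# Create lots of small files with random sizes and random content.
mkdir -p testdir && cd testdir || exit 1
for i in $(seq 1 300000); do
    size=$(( 512 + RANDOM % 1024 ))
    head -c "$size" /dev/urandom > "file$i"
done
sync

Remember to sync and drop the page cache between phases (echo 3 > /proc/sys/vm/drop_caches, as root) if you want numbers comparable to the table above.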
28,771 | Possible Duplicate: How to delete part of a path in an interactive shell? Is there a short-cut in bash that lets you delete the last part of a path? Example: /usr/local/bin should become /usr/local/ (or /usr/local ) I know of Ctrl + w but it deletes the complete last word and I'd like to retain that functionality, too. | In a path, it's quite easy, dirname takes off the last component of the path. And since it's a program (as opposed to a builtin) it's completely portable between shells. $ dirname /usr/local/bin
/usr/local It appears you mean while editing an active line at the prompt. In that case Nikhil's comment of esc backspace (consecutively, not both at the same time) is correct. | {
"source": [
"https://unix.stackexchange.com/questions/28771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14164/"
]
} |
28,791 | I'm currently working on a bash script that installs and sets up various programs on a stock Linux system (currently, Ubuntu). Because it installs programs and copies a number of files to various folders that require elevated privileges, I've already done the standard "I need elevated privileges"-and-exit. However, I would like, if possible, to be able to prompt the user for their sudo password and elevate the script's privileges automatically if the user doesn't run the script command with sudo (such as launching it from the GUI file manager), without the user having to restart the script. As this is designed to run on stock Linux installs, any option that modifies the system won't work for my purposes. All options need to be contained to the script itself. Is this possible within Bash? If so, what's the best (secure, yet concise) way to do this? | I run sudo directly from the script: if [ $EUID != 0 ]; then
sudo "$0" "$@"
exit $?
fi | {
"source": [
"https://unix.stackexchange.com/questions/28791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8522/"
]
} |
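A slight variant of the accepted snippet above that some may prefer: exec replaces the current shell with the elevated copy, so no explicit exit is needed, and id -u avoids relying on the bash-specific EUID variable. This is only a sketch of the same idea, not a different mechanism.

#!/bin/bash
if [ "$(id -u)" -ne 0 ]; then
    exec sudo "$0" "$@"   # re-run this script under sudo and replace the current process
fi
# ... privileged part of the script continues here ...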
28,803 | How can I use ffmpeg to reduce the size of a video by lowering the quality (as minimally as possible, naturally, because I need it to run on a mobile device that doesn't have much available space)? I forgot to mention that when the video can use subtitles (*.srt or *.sub), I'd like to convert them too to fit the parameters of the converted video file. | Update 2020: This answer was written in 2009. Since 2013 a video format much better than H.264 is widely available, namely H.265 (better in that it compresses more for the same quality, or gives higher quality for the same size). To use it, replace the libx264 codec with libx265, and push the compression lever further by increasing the CRF value — add, say, 4 or 6, since a reasonable range for H.265 may be 24 to 30. Note that lower CRF values correspond to higher bitrates, and hence produce higher quality videos. ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4 To see this technique applied using the older H.264 format, see this answer , quoted below for convenience: Calculate the bitrate you need by dividing your target size (in bits) by the video length (in seconds). For example for a target size of 1 GB (one giga byte , which is 8 giga bits ) and 10 000 seconds of video (2 h 46 min 40 s), use a bitrate of 800 000 bit/s (800 kbit/s): ffmpeg -i input.mp4 -b 800k output.mp4 Additional options that might be worth considering is setting the Constant Rate Factor , which lowers the average bit rate, but retains better quality. Vary the CRF between around 18 and 24 — the lower, the higher the bitrate. ffmpeg -i input.mp4 -vcodec libx264 -crf 20 output.mp4 | {
"source": [
"https://unix.stackexchange.com/questions/28803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6215/"
]
} |
28,827 | Often when I start looking at history of commands some of the characters from a command displayed aren't erased, for example: What's happening: prompt$ some_command
prompt$ some_commanother_command What should have happened: prompt$ some_command
prompt$ another_command I can't erase those characters and bash ignores them when executing the command. They also disappear when a new prompt is being displayed (after pressing Enter for example). I asked other people who work on Linux at my workplace and they said that they have that problem sometimes too, but they didn't have an idea on how solve that issue. I have Ubuntu 11.10 and I'm using guake. Here is my PS1 : \e[0;31m\u \A ${PWD##*/}\e[0;32m$(parse_git_branch)\e[0;31m$\e[m where parse_git_branch is parse_git_branch () {
git name-rev HEAD 2> /dev/null | sed 's#HEAD\ \(.*\)# (\1)#'
} As far as I know my colleagues have that problem even with less "fancy" PS1. | Use \[...\] around the parts of PS1 that have length 0. It helps bash to get the length of the prompt right. Even with this measure, your command line can get spoiled when using multibyte characters (at least mine does). Hitting Ctrl+L also helps in such cases (but clears the screen at the same time). | {
"source": [
"https://unix.stackexchange.com/questions/28827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12459/"
]
} |
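Applied to the PS1 from the question above, the advice means wrapping every escape sequence in \[ ... \]. A sketch of the corrected prompt (assuming the rest of the asker's setup stays the same):

PS1='\[\e[0;31m\]\u \A ${PWD##*/}\[\e[0;32m\]$(parse_git_branch)\[\e[0;31m\]$\[\e[m\]'

With the zero-width colour codes bracketed this way, bash computes the prompt length correctly and history recall no longer leaves stray characters on the line.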
28,845 | I have a file with one million lines. Each line has a field called transactionid , which has repetitive values. What I need to do is count them distinctly: no matter how many times a value is repeated, it should be counted only once. | OK, assuming that your file is a text file with fields separated by a comma (','), you would also need to know the position of the 'transactionid' field. Assuming that 'transactionid' is the 7th field: awk -F ',' '{print $7}' text_file | sort | uniq -c
"source": [
"https://unix.stackexchange.com/questions/28845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6115/"
]
} |
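If what is wanted is just the total number of distinct transaction IDs rather than a per-value count, a small variation of the pipeline above does it (still assuming a comma-separated file with the ID in field 7):

awk -F ',' '{print $7}' text_file | sort -u | wc -l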
28,888 | During an audit of /var/log/auth.log on one of my public webservers, I found this: Jan 10 03:38:11 Bucksnort sshd[3571]: pam_unix(sshd:auth): authentication failure;
logname= uid=0 euid=0 tty=ssh ruser= rhost=61.19.255.53 user=bin
Jan 10 03:38:13 Bucksnort sshd[3571]: Failed password for bin from 61.19.255.53
port 50647 ssh2 At first blush, this looks like typical ssh login spam from random hackers; however, as I looked closer I noticed something else. Most failed /var/log/auth.log entries say invalid user in them, like this one: Jan 9 10:45:23 Bucksnort sshd[3006]: Failed password for invalid user sales
from 123.212.43.5 port 10552 ssh2 The disquieting thing about that failed login message for bin is that it is a valid user in /etc/passwd that even has a login shell: [mpenning@Bucksnort ~]$ grep ^bin /etc/passwd
bin:x:2:2:bin:/bin:/bin/sh I thought I had covered the all the default usernames that could login remotely when I disabled PermitRootLogin in /etc/ssh/sshd_config ; discovering this entry opened new possibilities in my paranoid mind. If somehow services ran under bin , then it is remotely possible that someone could somehow insert an ssh key into the bin user's directory from a running service on the box, so I would like to completely disable login for the bin user, if possible. Questions This server is remote, and expensive to fix (i.e. I will pay for remote hands to hook up a KVM, plus KVM rental). I am trying to figure out what I might break if I change the /etc/passwd entry for bin to look like this: bin:x:2:2:bin:/bin:/bin/false I ran the following commands trying to figure out what bin is needed for... However, these commands came up with no files and I could find no processes owned by bin . What does the bin user do anyway? $ sudo find / -group bin $ sudo find / -user bin Are there any other users that should get their login shells set to /bin/false ? FYI, I have already have /bin/false on www-data . Am I being too paranoid? I am running Debian, if that matters. | A user who has a valid shell and no password can still log in by non-password-based methods, the most common being an ssh key. A valid shell is necessary to run cron jobs. A valid shell is also necessary for su bin -c 'wibble' to work (on Linux at least, su bin -s /bin/sh -c 'wibble' will also work). In the case of bin , most systems never run a command as bin in normal operation, so setting the shell to /bin/false would be ok. There is no risk of any direct attack allowing bin to log in over SSH, because that would require creating /bin/.ssh/authorized_keys as the user bin or as root. In other words, the only way to get in is to be in. However, having a valid shell does increase the risk of misconfiguration. It can also permit some remote attacks with services other than SSH; for example a user reports that an attacker could set a password for daemon remotely via Samba, then use that password to log in over SSH. You can plug the SSH hole by listing the names of the system users in a DenyUsers directive in /etc/ssh/sshd_config (unfortunately, you can't use a numerical range). Or, conversely, you can put an AllowGroups directive and only allow the groups that contain physical users (e.g. users if you grant all your physical users that group membership). There are bugs filed over this issue in Debian ( #274229 , #330882 , #581899 ), currently open and classified as “wishlist”. I tend to agree that these are bugs and system users should have /bin/false as their shell unless it appears necessary to do otherwise. | {
"source": [
"https://unix.stackexchange.com/questions/28888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6766/"
]
} |
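As a practical follow-up to the answer above, a sketch of the two changes it discusses; the exact list of accounts to block is an assumption and should be adapted to your own /etc/passwd:

# give the bin account a non-shell (as root)
usermod -s /bin/false bin

# and/or block system accounts at the SSH level, in /etc/ssh/sshd_config:
# DenyUsers bin daemon sys
# (reload sshd afterwards, e.g. service ssh reload)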
28,941 | How can I check which DNS server am I using (in Linux)? I am using network manager and a wired connection to my university's LAN. (I am trying to find out why my domain doesn't get resolved) | You should be able to get some reasonable information in: $ cat /etc/resolv.conf | {
"source": [
"https://unix.stackexchange.com/questions/28941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10950/"
]
} |
28,972 | I understand that source-based distributions like Gentoo or Slackware do not need *-dev versions of programs. They include the source code as well as header files for compiling everything locally. But I never saw *-dev packages in Arch Linux , although it is package based. I ran across lots of *-dev packages in other distributions. | The -dev packages usually contain header files, examples, documentation and such, which are not needed just to run the program (or to use a library as a dependency). They are left out to save space. Arch Linux usually just ships these files with the package itself. This costs a bit more disk space for the installation but reduces the number of packages you have to manage. | {
"source": [
"https://unix.stackexchange.com/questions/28972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6958/"
]
} |
28,976 | So I need to compress a directory with max compression. How can I do it with xz ? I mean I will need tar too because I can't compress a directory with only xz . Is there a oneliner to produce e.g. foo.tar.xz ? | With a recent GNU tar on bash or derived shell: XZ_OPT=-9 tar cJf tarfile.tar.xz directory tar's lowercase j switch uses bzip, uppercase J switch uses xz. The XZ_OPT environment variable lets you set xz options that cannot be passed via calling applications such as tar . This is now maximal . See man xz for other options you can set ( -e / --extreme might give you some additional compression benefit for some datasets). XZ_OPT=-e9 tar cJf tarfile.tar.xz directory | {
"source": [
"https://unix.stackexchange.com/questions/28976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
28,983 | I somehow managed to create a file that doesn't seem to have a filename. I found some information regarding how to get more details of the file in the following thread. However, I tried some of the suggestions listed and can't seem to delete the file. I'm not sure what I did to create it, but it happened while trying to copy an xml file. Some info on the file is as follows; > ls -lb
total 296
-rw-r--r-- 1 voyager endeavor 137627 Jan 12 12:49 \177
> file *
: XML document
> ls -i
417777 I tried to use find with the -inum switch and then pipe that to rm, as that seemed like the most foolproof way of getting rid of it. However, the example given at the bottom of the thread linked below failed for me. The example was: > find -inum 41777 -exec ls -al {} \;
find: illegal option -- i
find: [-H | -L] path-list predicate-list so I tried using the path list first like the following, but that didn't work either: > find . -inum 41777 -exec ls -al {} \; I'm not sure what the non-printable character \177 is or how I can pass that to an rm command, but I really want to make sure I don't mess up any other files/directories in my attempt to delete this file. | The file has a name, but it's made of non-printable characters. If you use ksh93, bash, zsh, mksh or FreeBSD sh, you can try to remove it by specifying its non-printable name. First ensure that the name is right with: ls -ld $'\177' If it shows the right file, then use rm: rm $'\177' Another (a bit more risky) approach is to use rm -i -- * . With the -i option rm requires confirmation before removing a file, so you can skip all files you want to keep but the one. Good luck! | {
"source": [
"https://unix.stackexchange.com/questions/28983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14255/"
]
} |
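As an aside on the find-by-inode route the asker tried: the inode reported by ls -i above is 417777, not the 41777 used in the attempted commands. A sketch of that alternative, assuming GNU find (drop -maxdepth 1 on finds that lack it); rm -i asks for confirmation before deleting:

find . -maxdepth 1 -inum 417777 -exec rm -i {} \;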
29,078 | I have a 22TB disk on /dev/sdb . How do I create a 22TB partition? Don't really care about the file system - ext4 or zfs is fine. Running CentOS 6.2 - Partition will be used as a data dump. Only a single stream of data so being picky/choosy over what file system isn't really a concern right now. The disk is formed from 12x2TB nearline SAS drives and a Dell Perc controller. I just want a 22TB partition. | The simplest solution is to use GPT partitioning , a 64-bit version of Linux, and XFS : GPT is necessary because the MS-DOS-style MBR partition table created by fdisk is limited to 2 TiB disks. So, you need to use parted or another GPT-aware partitioning program instead of fdisk . ( gdisk , gparted , etc.) A 64-bit kernel is necessary because 32-bit kernels limit you to filesystems smaller than you're asking for. You either hit a size limit based on 32-bit integers or end up not being able to address enough RAM to support the filesystem properly. XFS is not the only solution, but in my opinion it is the easiest one for RHEL systems. You cannot use ext4 for this in RHEL 6. Although the filesystem was designed to support 1 EiB filesystems , there is an artificial 16 TiB volume size limit in the version of e2fsprogs included in RHEL 6 and its derivatives. Both Red Hat and CentOS call this out in their docs. (The ext4 16 TiB limit was raised considerably in RHEL 7 to 50 TiB.) ZFS may not be practical in your situation . Because of its several legal and technical restrictions, I can't outright recommend it unless you need something only ZFS gives you. Having ruled out your two chosen filesystems, I suggest XFS. It is the default filesystem in RHEL 7, it was available as a supported filesystem in all RHEL 6 versions, and was backported to the later RHEL 5 releases after RHEL 6 came out. Here's the process: Check whether you have mkfs.xfs installed by running it without arguments. If it's not present, install the userland XFS tools: # yum install xfsprogs If that failed, it's probably because you're on an older OS that doesn't have this in its default package repository. You really should upgrade, but if that is impossible, you can get this from CentOSPlus or EPEL . You may also need to install the kmod_xfs package. Create the partition: Since you say your 22 TiB volume is on /dev/sdb , the commands for parted are: # parted /dev/sdb mklabel gpt
# parted -a optimal -- /dev/sdb mkpart primary xfs 1 -1 That causes it to take over the entire volume with a single partition. Actually, it ignores the first 1 MiB of the volume, to achieve the 4 KiB alignment required to get the full performance from Advanced Format HDDs and SSDs . You could skip this step and format the entire volume with XFS. That is, you would use /dev/sdb in the example below instead of /dev/sdb1 . This avoids the problem of sector alignment. In the case of a volume that only your Linux-based OS will see, there are no downsides worth speaking about, but I'd caution against doing this on a removable volume or on an internal volume in a multi-booting computer, since some OSes (Windows and macOS, for instance) will offer to format a partitionless hard drive for you every time it appears. Putting the filesystem on a partition solves this. Format the partition: # mkfs.xfs -L somelabel /dev/sdb1 Add the /etc/fstab entry: LABEL=somelabel /some/mount/point xfs defaults 0 0 Mount up! # mount /some/mount/point If you want to go down the LVM path, the above steps are basically just a more detailed version of the second set of commands in user bsd 's answer below . You have to do his first set of commands before the ones above. LVM offers certain advantages at a complexity cost. For instance, you can later "grow" an LVM volume group by adding more physical volumes to it, thereby making space to grow the logical volume ("partition" kinda, sorta), which in turn lets you grow the filesystem living on the logical volume. (See what I mean about complexity? :)) | {
"source": [
"https://unix.stackexchange.com/questions/29078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7340/"
]
} |
29,117 | I would like to keep track of changes in /etc/. Basically I'd like to know if a file was changed, whether by yum update or by a user, and roll it back if I don't like the change.
I thought of using a VCS like git, LVM or btrfs snapshots or a backup program for this. What would you recommend? | It sounds like you want etckeeper from Joey Hess of Debian, which manages files under /etc using version control. It supports git, mercurial, darcs and bazaar. git is the VCS best supported by etckeeper and the VCS users are most likely to know. It's possible that your distribution has chosen to modify etckeeper so its default VCS is not git. You should only be using etckeeper with a VCS other than git if you're in love with the other VCS. | {
"source": [
"https://unix.stackexchange.com/questions/29117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14161/"
]
} |
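A short sketch of getting started with etckeeper on an RPM-based system like the asker's (the package is assumed to be available from your distribution's repositories; on Debian-based systems substitute apt-get for yum):

yum install etckeeper
etckeeper init            # puts /etc under version control (git by default)
etckeeper commit "initial import of /etc"
cd /etc && git log        # browse history; git diff shows uncommitted changes

Once installed, etckeeper also hooks into the package manager so that changes made by package updates are committed automatically.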
29,128 | Linux's /proc/<pid>/environ does not update a process's environment. As I understand it, the file contains the initial environment of the process. How can I read a process's current environment? | /proc/$pid/environ does update if the process changes its own environment. But many programs don't bother changing their own environment, because it's a bit pointless: a program's environment is not visible through normal channels, only through /proc and ps , and even not every unix variant has this kind of feature, so applications don't rely on it. As far as the kernel is concerned, the environment only appears as the argument of the execve system call that starts the program. Linux exposes an area in memory through /proc , and some programs update this area while others don't. In particular, I don't think any shell updates this area. As the area has a fixed size, it would be impossible to add new variables or change the length of a value. | {
"source": [
"https://unix.stackexchange.com/questions/29128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
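A practical note on reading the file discussed above: the entries in /proc/$pid/environ are separated by NUL bytes, so a plain cat runs them together. A small sketch that prints one variable per line (replace $pid with the process ID you are inspecting):

tr '\0' '\n' < /proc/$pid/environ

Keep in mind, per the answer, that this shows the environment the process was started with unless the program has rewritten that memory area itself.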
29,140 | I've accidentally attached to a 2nd GNU screen session from within an existing screen session and cannot detach or issue commands to the inner screen. I remember figuring out how to do that before but completely forgot and would like to keep it as reference. One way is to detach the inner screen by doing screen -dr from shell, but what is the key combination to do that from within screen itself? | ctrl-a a d | {
"source": [
"https://unix.stackexchange.com/questions/29140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147007/"
]
} |
29,182 | Somehow, I am finding it difficult to understand tweaking around * parameters with cron. I wanted a job to run every hour and I used the below setting: * */1 * * * But it does not seem to do the job. Could someone please explain the meaning of above and what is needed for the job? | * means every . */n means every nth . (So */1 means every 1 .) If you want to run it only once each hour, you have to set the first item to something else than * , for example 20 * * * * to run it every hour at minute 20 . Or if you have permission to write in /etc/cron.hourly/ (or whatever it is on your system), then you could place a script there. | {
"source": [
"https://unix.stackexchange.com/questions/29182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10968/"
]
} |
29,191 | I'm new to scripting, but somewhat familiar with Linux. I am actually using this on a Fedora16 build, with Gnome3 and gnome-terminal... fyi. I flip through all sorts of ssh terminals all day long and I thought it would be nice to create a ssh script called lambda that contains information that is sent to a terminal_wrapper script that accepts all sorts of variables and creates a new gnome-terminal based on those given variables. The information I would like would be to rename the title, change bg and fg colors, and start an ssh in a new gnome-terminal window. I feel like I'm close to getting the ssh to work, but I'm missing something. Here is what I've got for my ssh script for lambda: terminal_wrapper user lambda.company.com And here is my terminal_wrapper: gnome-terminal --title=$2 -e ssh $1 $2 It opens a new window for me, renames it to the lambda.company.com, and then sits there and closes after a second. Any help? Thanks. | * means every . */n means every nth . (So */1 means every 1 .) If you want to run it only once each hour, you have to set the first item to something else than * , for example 20 * * * * to run it every hour at minute 20 . Or if you have permission to write in /etc/cron.hourly/ (or whatever it is on your system), then you could place a script there. | {
"source": [
"https://unix.stackexchange.com/questions/29191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8901/"
]
} |
29,214 | Possible Duplicate: How to move 100 files from a folder containing thousands? Is it possible to copy only the first 1000 files from a directory to another? Thanks in advance | The following copies the first 1000 files found in the current directory to $destdir . Though the actual files depend on the output returned by find . $ find . -maxdepth 1 -type f |head -1000|xargs cp -t "$destdir" You'll need the GNU implementation of cp for -t , a GNU-compatible find for -maxdepth . Also note that it assumes that file paths don't contain blanks, newline, quotes or backslashes (or invalid characters or are longer than 255 bytes with some xargs implementations). EDIT: To handle file names with spaces, newlines, quotes etc, you may want to use null-terminated lines (assuming a version of head that has the -z option): find . -maxdepth 1 -type f -print0 | head -z -n 1000 | xargs -0 -r -- cp -t "$destdir" -- | {
"source": [
"https://unix.stackexchange.com/questions/29214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14381/"
]
} |
29,245 | How can I list recursively all files that were changed between 22.12.2011 and 24.12.2011? | Generally speaking, when you're looking for files in a directory and its subdirectories recursively, use find . The easiest way to specify a date range with find is to create files at the boundaries of the range and use the -newer predicate. touch -t 201112220000 start
touch -t 201112240000 stop
find . -newer start \! -newer stop | {
"source": [
"https://unix.stackexchange.com/questions/29245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6425/"
]
} |
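With GNU find (4.3.3 or later) the same date range can be expressed without the helper files, using -newermt; this is only an alternative sketch, with the same boundaries as the touch-based version above:

find . -type f -newermt 2011-12-22 ! -newermt 2011-12-24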
29,247 | I'm on a macbook running Lion. In Terminal I'm connected to my schools server with ssh . I navigated to a folder on the server and have a file I want to copy to my local machine, but I don't know what the IP address of my local machine is. How can I get it? I'm in the folder on the server, and I want to copy read.txt onto my local machine's hard drive. I've tried scp ./read.txt [my computer name].local/newRead.txt but it doesn't work. | You don't need to know your own host's IP address in order to copy files to it. Simply use scp to copy the file from the remote host: $ scp [email protected]:path/to/read.txt ~/path/to/newRead.txt If you want to copy to your local host from your remote host, get your own IP address with ifconfig and issue the following: $ scp path/to/read.txt [email protected]:path/to/newRead.txt where 1.2.3.4 is your local IP address. A convenient way to extract a host's IP address is using this function: ipaddr() { (awk '{print $2}' <(ifconfig eth0 | grep 'inet ')); } where eth0 is your network interface. Stick it in ~/.bash_profile in order to run it as a regular command - ipaddr . | {
"source": [
"https://unix.stackexchange.com/questions/29247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6389/"
]
} |
29,355 | I want to sort only files by update dates including sub-directories. I found out ls -lrtR | grep ^- . but it doesn't seem to sort by update dates. And I need to save this list into a file. Is it possible? Apr 01 2010 InsideDoosanServiceImpl.class // in A directory
Apr 08 2010 MainController.class // in B directory
Apr 07 2010 RecommendController.class // in B directory
Apr 01 2010 MainDao.class // in B directory I mean the whole list is not ordered by date; it is ordered first by folder and then by date. I want a list ordered purely by date, including all sub-directories. | I am not sure what exactly you mean by update dates , but you are using the -r option, which according to man does this - -r
Reverse the order of the sort to get reverse lexicographical order or the oldest entries first (or largest files last, if combined with sort by size). I think this should be good enough for you if you need files sorted by time.
"source": [
"https://unix.stackexchange.com/questions/29355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80425/"
]
} |
29,386 | Here is what I have tried, and I got an error: $ cat /home/tim/.ssh/id_rsa.pub | ssh [email protected] 'cat >> .ssh/authorized_keys'
Password:
cat: >>: No such file or directory
cat: .ssh/authorized_keys: No such file or directory | OpenSSH comes with a command to do this, ssh-copy-id . You just give it the remote address and it adds your public key to the authorized_keys file on the remote machine: $ ssh-copy-id [email protected] You may need to use the -i flag to locate your public key on your local machine: $ ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected] | {
"source": [
"https://unix.stackexchange.com/questions/29386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
29,401 | I have a server with SSH running on a non-standard port. Instead of 22, it runs on 8129. To log in, I use: ssh -p 8129 hostname Now, whenever I need to set up a key for password-less login, I have to copy the public key and add it to authorized_keys manually. I discovered that the command ssh-copy-id could be used to simplify this process, but it seems like it does not have an option to specify the port of the ssh server. Is there some way to tell ssh-copy-id to use port 8129, or should I just forget about this command and copy/paste manually as before? | $ ssh-copy-id "-p 8129 user@host" Source: http://it-ride.blogspot.com/2009/11/use-ssh-copy-id-on-different-port.html NOTE: The port must be in front of the user@host or it will not resolve Editor's note : as pointed out in comments and shown in other answers , ssh-copy-id as shipped by more recent versions of OpenSSH supports the -p <port_number> syntax (no quotes needed). | {
"source": [
"https://unix.stackexchange.com/questions/29401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11704/"
]
} |
29,402 | What would be the most straightforward way of making a GET request to a url over HTTPS, and getting the raw, unparsed response? Could this be achieved with curl? If so, what options would you need to use? | If you want to use curl , this should work: curl -D - https://www.google.com/ Note, however, that this is not exactly the raw response. For instance chunked transfer encoding will not be visible in the response. Using --raw solves this, also verbose mode ( -v ) is useful, too and -i shows the headers before the response body: curl -iv --raw https://www.google.com/ If you want to use a pager like less on the result, it is also necessary to disable the progress-bar ( -s ): curl -ivs --raw https://www.google.com/ | less Depending on what you want to do this may or may not be a problem. What you do get is all HTTP response headers and the document at the requested URL. | {
"source": [
"https://unix.stackexchange.com/questions/29402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2559/"
]
} |
29,421 | How can I find how many lines a text file contains without opening the file in an editor or a viewer application? Is there a handy Unix console command to see the number? | Indeed there is. It is called wc , originally for word count, I believe, but it can do lines, words, characters, bytes (and with some implementations, the length in bytes of the longest line or the display width of the widest one). The -l option tells it to count lines (in effect, it counts the newline characters, so only properly delimited lines): wc -l mytextfile Or to only output the number of lines: wc -l < mytextfile (beware that some implementations insert blanks before that number). | {
"source": [
"https://unix.stackexchange.com/questions/29421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
29,425 | I need to get access to modifier-key state for a console app I'm writing (a personalized editor). Are there any packages/libs/whatever that provide this access? I cobbled the following from somewhere, but it only works if you're root, and I don't really want to mess about at root-level. #include <iostream>
#include <string>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <termios.h>
#include <fcntl.h>
#include <linux/input.h>
#include <unistd.h>
#include <errno.h>
int kbhit(void)
{
struct termios oldt, newt;
int ch;
int oldf;
tcgetattr(STDIN_FILENO, &oldt);
newt = oldt;
newt.c_lflag &= ~0000172 ; //~(ICANON | ECHO);
tcsetattr(STDIN_FILENO, TCSANOW, &newt);
oldf = fcntl(STDIN_FILENO, F_GETFL, 0);
fcntl(STDIN_FILENO, F_SETFL, oldf | O_NONBLOCK);
ch = getchar();
tcsetattr(STDIN_FILENO, TCSANOW, &oldt);
fcntl(STDIN_FILENO, F_SETFL, oldf);
return ch;
}
enum MODKEYS
{
SHIFT_L = 1,
SHIFT_R = 2,
CTRL_L = 4,
CTRL_R = 8,
ALT_L = 16,
ALT_R = 32,
};
int chkmodifiers()
{
int mods=0,keyb,mask;
char key_map[KEY_MAX/8 + 1]; // Create a byte array the size of the number of keys
//event1 - got by inspecting /dev/input/...
FILE *kbd = fopen("/dev/input/event1", "r");
if (kbd == NULL)
{
printf("(chkmodifiers) ERROR: %s\n", strerror(errno)); //permission - got to be root!
return 0;
}
memset(key_map, 0, sizeof(key_map));
ioctl(fileno(kbd), EVIOCGKEY(sizeof(key_map)), key_map); // Fill the keymap with the current keyboard state
keyb = key_map[KEY_LEFTSHIFT/8];
mask = 1 << (KEY_LEFTSHIFT % 8);
if (keyb & mask) mods += SHIFT_L;
keyb = key_map[KEY_RIGHTSHIFT/8];
mask = 1 << (KEY_RIGHTSHIFT % 8);
if (keyb & mask) mods += SHIFT_R;
keyb = key_map[KEY_LEFTCTRL/8];
mask = 1 << (KEY_LEFTCTRL % 8);
if (keyb & mask) mods += CTRL_L;
keyb = key_map[KEY_RIGHTCTRL/8];
mask = 1 << (KEY_RIGHTCTRL % 8);
if (keyb & mask) mods += CTRL_R;
keyb = key_map[KEY_LEFTALT/8];
mask = 1 << (KEY_LEFTALT % 8);
if (keyb & mask) mods += ALT_L;
keyb = key_map[KEY_RIGHTALT/8];
mask = 1 << (KEY_RIGHTALT % 8);
if (keyb & mask) mods += ALT_R;
return mods;
}
int main()
{
puts("Press a key!");
char ch=0;
int n=0,m;
while (ch != 'q')
{
n = kbhit();
if (n != -1)
{
m = chkmodifiers();
ch = (char)n;
printf("You pressed '%c' [%d]\n", ch, n);
if ((m & SHIFT_L) == SHIFT_L) printf(" .. and ls\n");
if ((m & SHIFT_R) == SHIFT_R) printf(" .. and rs\n");
if ((m & CTRL_L) == CTRL_L) printf(" .. and lc\n");
if ((m & CTRL_R) == CTRL_R) printf(" .. and rc\n");
if ((m & ALT_L) == ALT_L) printf(" .. and la\n");
if ((m & ALT_R) == ALT_R) printf(" .. and ra\n");
}
}
return 0;
} | Indeed there is. It is called wc , originally for word count, I believe, but it can do lines, words, characters, bytes (and with some implementations, the length in bytes of the longest line or the display width of the widest one). The -l option tells it to count lines (in effect, it counts the newline characters, so only properly delimited lines): wc -l mytextfile Or to only output the number of lines: wc -l < mytextfile (beware that some implementations insert blanks before that number). | {
"source": [
"https://unix.stackexchange.com/questions/29425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2141/"
]
} |
29,450 | I am inside a screen (screen -Ra). I have a long command, and I am at the end. Instead of keeping the left arrow, how can you go to the beginning of the line? CTRL-A works when I am in a normal window, but when I am inside a screen pressing CTRL-A gives me a message "No other window" - seems like CTRL-A is dispatched to the screen. How do you go to the beginning of a line in a screen? | Use Ctrl - a a , or change screen's escape keystroke (option -e ). | {
"source": [
"https://unix.stackexchange.com/questions/29450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11498/"
]
} |
29,457 | I have a growing log file for which I want to display only the last 15 lines. Here is what I know I can do: tail -n 15 -F mylogfile.txt As the log file is filled, tail appends the new lines to the display. I am looking for a solution that only ever displays the last 15 lines, getting rid of the older lines as the file is updated. Do you have any ideas? | It might suffice to use watch:
"source": [
"https://unix.stackexchange.com/questions/29457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13528/"
]
} |
29,509 | I have an array of "options" of a command. my_array=(option1 option2 option3) I want to call this command in a bash script, using the values from array as options. So, command $(some magic here with my_array) "$1" becomes: command -option1 -option2 -option3 "$1" How can I do it? Is it possible? | I would prefer a plain bash way: command "${my_array[@]/#/-}" "$1" One reason for this are the spaces. For example if you have: my_array=(option1 'option2 with space' option3) The sed based solutions will transform it in -option1 -option2 -with -space -option3 (length 5), but the above bash expansion will transform it into -option1 -option2 with space -option3 (length still 3). Rarely, but sometimes this is important, for example: bash-4.2$ my_array=('Ffoo bar' 'vOFS=fiz baz')
bash-4.2$ echo 'one foo bar two foo bar three foo bar four' | awk "${my_array[@]/#/-}" '{print$2,$3}'
two fiz baz three | {
"source": [
"https://unix.stackexchange.com/questions/29509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2689/"
]
} |
29,545 | The logged in user is a member of a group that has a write permission on a folder. But when this user is trying to write something, "permission is denied". The log below summarizes the question: subv:/www/tracer/ whoami
frank
subv:/www/tracer/
subv:/www/tracer/ ls -ltr
total 4
drwxrwxr-x 2 root tracer 4096 Jan 20 12:25 convert.tracer.com
subv:/www/tracer/ groups frank
frank : frank tracer
subv:/www/tracer/ > convert.tracer.com/test
-bash: convert.tracer.com/test: Permission denied
subv:/www/tracer/ Output of "ls -bail /www/tracer/convert.tracer.com/": subv:~/ ls -bail /www/tracer/convert.tracer.com/
total 8
38010883 drwxrwxr-x 2 root tracer 4096 Jan 20 12:25 .
38010882 drwxr-xr-x 3 root root 4096 Jan 20 12:25 ..
subv:~/ | Group membership is re-read on login. groups seem to report the groups you are in according to /etc/group and does not reflect membership of groups in the current session. Use the command id -Gn to show the groups that you are currently an active member of. Solution: relogin to apply the group changes. | {
"source": [
"https://unix.stackexchange.com/questions/29545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11498/"
]
} |
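If logging out is inconvenient, one commonly used stopgap (in the spirit of the answer above) is to start a new shell that has the group active; this only affects that shell, and the group name here is taken from the question:

newgrp tracer
id -Gn    # should now list tracer among the active groups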
29,570 | Which command should I use to remove a user from a group in Debian? When adding a user to a group, it can be done with: usermod -a -G group user However, I could not find a similar command (accepting a group and user as arguments) for removing the user from the group. The closest I could get is: usermod -G all,existing,groups,except,for,group user Is there a command like usermod OPTION group user with OPTION an option to make usermod (or a similar program) remove the user from group? | You can use gpasswd : # gpasswd --delete user group The new group config will be assigned at the next login. If the user is logged in, the effects of the command aren't seen immediately. | {
"source": [
"https://unix.stackexchange.com/questions/29570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8250/"
]
} |
29,574 | According the the Unix and Linux Administration Handbook and man , logrotate has options for daily , weekly , and monthly , but is there a way to add an hourly option? This blog post mentions you can set size 1 and remove the time option (eg: daily ) and then manually call logrotate with cron - I suppose something like logrotate -f /etc/logrotate.d/my-hourly-file but is there a more elegant solution for rotating logs hourly? | Depending on your OS. Some (all?) Linux distributions have a directory /etc/cron.hourly where you can put cron jobs to be executed every hour. Others have a directory /etc/cron.d/ . There you can put cron-jobs that are to be executed as any special user with the usual cron-settings of a crontab entry (and you have to specify the username). If you use either of these instead of the standard log rotatation script in /etc/cron.daily/ you should copy that script there and cp /dev/null to the original position. Else it will be reactivated by a logrotate patch-update. For proper hourly rotation, also take care that the dateext directive is not set. If so, by default the first rotated file will get the extension of the current date like YYYYMMDD. Then, the second time logrotate would get active within the same day, it simply skips the rotation even if the size threshold has exceeded. The reason is that the new name of the file to get rotated already exists, and logrotate does not append the content to the existing old file.
For example on RHEL and CentOS, the dateext directive is given by default in /etc/logrotate.conf . After removing or commenting that line, the rotated files will simply get a running number as extension until reaching the rotate value. In this way, it's possible to perform multiple rotations a day. | {
"source": [
"https://unix.stackexchange.com/questions/29574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
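To make the cron.hourly route above concrete, a sketch of the small wrapper one would drop into /etc/cron.hourly (paths are the usual RHEL/Debian ones but may differ on your system), after moving or emptying the stock /etc/cron.daily/logrotate script as the answer describes:

#!/bin/sh
# /etc/cron.hourly/logrotate
/usr/sbin/logrotate /etc/logrotate.conf

Remember to make the file executable and, per the note above, to keep dateext out of the configuration if you expect more than one rotation per day.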
29,577 | What is the difference between hard and soft limits in ulimit? For number of open files, I have a soft limit of 1024 and a hard limit of 10240.
It is possible to run programs opening more than 1024 files. What is the soft limit for? | A hard limit can only be raised by root (any process can lower it). So it is useful for security: a non-root process cannot overstep a hard limit. But it's inconvenient in that a non-root process can't have a lower limit than its children. A soft limit can be changed by the process at any time. So it's convenient as long as processes cooperate, but no good for security. A typical use case for soft limits is to disable core dumps ( ulimit -Sc 0 ) while keeping the option of enabling them for a specific process you're debugging ( (ulimit -Sc unlimited; myprocess) ). The ulimit shell command is a wrapper around the setrlimit system call, so that's where you'll find the definitive documentation. Note that some systems may not implement all limits. Specifically, some systems don't support per-process limits on file descriptors (Linux does); if yours doesn't, the shell command may be a no-op. | {
"source": [
"https://unix.stackexchange.com/questions/29577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10788/"
]
} |
29,578 | I want to create a log file for a cron script that has the current hour in the log file name. This is the command I tried to use: 0 * * * * echo hello >> ~/cron-logs/hourly/test`date "+%d"`.log Unfortunately I get this message when that runs: /bin/sh: -c: line 0: unexpected EOF while looking for matching ``'
/bin/sh: -c: line 1: syntax error: unexpected end of file I have tried escaping the date part in various ways, but without much luck. Is it possible to make this happen in-line in a crontab file or do I need to create a shell script to do this? | Short answer: Escape the % as \% : 0 * * * * echo hello >> ~/cron-logs/hourly/test`date "+\%d"`.log Long answer: The error message suggests that the shell which executes your command doesn't see the second back tick character: /bin/sh: -c: line 0: unexpected EOF while looking for matching '`' This is also confirmed by the second error message your received when you tried one of the other answers: /bin/sh: -c: line 0: unexpected EOF while looking for matching ')' The crontab manpage confirms that the command is read only up to the first unescaped % sign: The "sixth" field (the rest of the line) specifies the command to
be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in
the SHELL variable of the cronfile. Percent-signs ( % ) in the command, unless escaped with backslash ( \ ), will be changed into newline characters , and all data after the first % will be sent to
the command as standard input. | {
"source": [
"https://unix.stackexchange.com/questions/29578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
29,608 | I notice that some scripts which I have acquired from others have the shebang #!/path/to/NAME while others (using the same tool, NAME) have the shebang #!/usr/bin/env NAME . Both seem to work properly. In tutorials (on Python, for example), there seems to be a suggestion that the latter shebang is better. But, I don't quite understand why this is so. I realize that, in order to use the latter shebang, NAME must be in the PATH whereas the first shebang does not have this restriction. Also, it appears (to me) that the first would be the better shebang, since it specifies precisely where NAME is located. So, in this case, if there are multiple versions of NAME (e.g., /usr/bin/NAME, /usr/local/bin/NAME), the first case specifies which to use. My question is why is the first shebang preferred to the second one? | It isn't necessarily better. The advantage of #!/usr/bin/env python is that it will use whatever python executable appears first in the user's $PATH . The disadvantage of #!/usr/bin/env python is that it will use whatever python executable appears first in the user's $PATH . That means that the script could behave differently depending on who runs it. For one user, it might use the /usr/bin/python that was installed with the OS. For another, it might use an experimental /home/phred/bin/python that doesn't quite work correctly. And if python is only installed in /usr/local/bin , a user who doesn't have /usr/local/bin in $PATH won't even be able to run the script. (That's probably not too likely on modern systems, but it could easily happen for a more obscure interpreter.) By specifying #!/usr/bin/python you specify exactly which interpreter will be used to run the script on a particular system . Another potential problem is that the #!/usr/bin/env trick doesn't let you pass arguments to the intrepreter (other than the name of the script, which is passed implicitly). This usually isn't an issue, but it can be. Many Perl scripts are written with #!/usr/bin/perl -w , but use warnings; is the recommended replacement these days. Csh scripts should use #!/bin/csh -f -- but csh scripts are not recommended in the first place. But there could be other examples. I have a number of Perl scripts in a personal source control system that I install when I set up an account on a new system. I use an installer script that modifies the #! line of each script as it installs it in my $HOME/bin . (I haven't had to use anything other than #!/usr/bin/perl lately; it goes back to times when Perl often wasn't installed by default.) A minor point: the #!/usr/bin/env trick is arguably an abuse of the env command, which was originally intended (as the name implies) to invoke a command with an altered environment. Furthermore, some older systems (including SunOS 4, if I recall correctly) didn't have the env command in /usr/bin . Neither of these is likely to be a significant concern. env does work this way, a lot of scripts do use the #!/usr/bin/env trick, and OS providers aren't likely to do anything to break it. It might be an issue if you want your script to run on a really old system, but then you're likely to need to modify it anyway. Another possible issue, (thanks to Sopalajo de Arrierez for pointing it out in comments) is that cron jobs run with a restricted environment. In particular, $PATH is typically something like /usr/bin:/bin . 
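(A quick way to see that restricted environment for yourself — the output file name here is just an illustrative choice — is to add a temporary crontab entry such as * * * * * env > /tmp/cron-env.txt , inspect the resulting file, and then remove the entry again.)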
So if the directory containing the interpreter doesn't happen to be in one of those directories, even if it's in your default $PATH in a user shell, then the /usr/bin/env trick isn't going to work. You can specify the exact path, or you can add a line to your crontab to set $PATH ( man 5 crontab for details). Kevin's comment points out that Python's virtualenv creates a special case, where the environment installs a Python interpreter in a special directory that's inserted at the front of $PATH . For that particular environment (and perhaps others like it), the #!/usr/bin/env python trick (or python3 ?) is likely to be the best solution. (I haven't used virtualenv myself.) | {
"source": [
"https://unix.stackexchange.com/questions/29608",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13401/"
]
} |
29,654 | When I'm using tail -f and I want to return to the shell, I always use CTRL+C . Or when I am typing a command and feel like aborting it and starting over, I simply press CTRL+C to get back to an empty command line prompt. Is this considered bad practice? I sometimes feel there might be a better way to break away from something, but really have no idea. | Ctrl + C sends a SIGINT to the program. This tells the program that you want to interrupt (and end) its process. Most programs correctly catch this and cleanly exit. So, yes, this is a "correct" way to end most programs. There are other keyboard shortcuts for sending other signals to programs, but this is the most common. | {
"source": [
"https://unix.stackexchange.com/questions/29654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14584/"
]
} |
29,671 | How can I convert a .cue / .bin (cdr track) image into a single .iso file? I have
Fedora 16 (x86-64) Linux 3.1.9-1.fc16.x86_64 #1 SMP Fri Jan 13 16:37:42 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux | You should look at bchunk , which is specifically meant for this type of conversion. You should be able to install it with sudo yum install bchunk , but I'm only 95% sure it's in the standard repo. bchunk will create an ISO from any data tracks, and CDR files for any CD audio. If you want everything in one ISO, bchunk is not appropriate. The syntax is like this: bchunk IMAGE.bin IMAGE.cue IMAGE.iso To create a single ISO with all the tracks in it, take a look at bin2iso . bin2iso is most likely not included in your standard repo, although RPMs do exist unofficially online. I would recommend using PowerISO over bin2iso , as bin2iso is not updated very often. bin2iso <cuefile> You would also be able to do the conversion with PowerISO .
It is commercial software, but the Linux version is freeware. Sometimes, if I have problems with the free software tools for a particular image conversion, I give PowerISO a go. | {
"source": [
"https://unix.stackexchange.com/questions/29671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14182/"
]
} |
29,697 | Am I blind or is there no option like --in-place for sort ? In order to save results to the input file, sed uses -i ( --in-place ). Redirecting the output of sort to the input file sort < f > f results in making it empty. If there is no --in-place option, maybe there is some trick for doing this in a handy way? (The only thing that comes to my mind: sort < f > /tmp/f$$ ; cat /tmp/f$$ > f ; rm /tmp/f$$ Moving is not the right choice, because file permissions might be changed. That's why I overwrite with the contents of the temp file, which I then remove.) | sort has the -o (or --output ) option that takes a filename as argument.
The program writes the data to a temporary file,
then overwrites the original input file after the sort is complete
(which can happen only after all the input data have been read).
(This is essentially the same thing as what sed -i does.) From GNU sort info page: -o OUTPUT-FILE --output= OUTPUT-FILE Write output to OUTPUT-FILE instead of standard output. Normally, sort reads all input before opening OUTPUT-FILE , so you can
safely sort a file in place by using commands like sort -o F F and cat F | sort -o F . However, sort with --merge ( -m )
can open the output file before reading all input, so a command
like cat F | sort -m -o F - G is not safe, as sort might start
writing F before cat is done reading it. On newer systems, -o cannot appear after an input file if POSIXLY_CORRECT is set, e.g., sort F -o F . Portable scripts
should specify -o OUTPUT-FILE before any input files. and from The Open Group Base Specifications Issue 7 : -o output Specify the name of an output file to be used instead of the standard
output. This file can be the same as one of the input files. There have been reports that sort might discard (i.e., destroy)
some or all of your data
if you are out of disk space or out of disk quota,
or the system crashes while sort is writing the output file,
or some other error occurs. In short, to sort a file in place, the following may be used: sort -o filename filename | {
"source": [
"https://unix.stackexchange.com/questions/29697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9689/"
]
} |
29,724 | I thought the following would group the output of my_command in an array of lines: IFS='\n' array_of_lines=$(my_command); so that $array_of_lines[1] would refer to the first line in the output of my_command , $array_of_lines[2] to the second, and so forth. However, the command above doesn't seem to work well. It seems to also split the output of my_command around the character n , as I have checked with print -l $array_of_lines , which I believe prints elements of an array line by line. I have also checked this with: echo $array_of_lines[1]
echo $array_of_lines[2]
... In a second attempt, I thought adding eval could help: IFS='\n' array_of_lines=$(eval my_command); but I got the exact same result as without it. Finally, following the answer on List elements with spaces in zsh , I have also tried using parameter expansion flags instead of IFS to tell zsh how to split the input and collect the elements into an array, i.e.: array_of_lines=("${(@f)$(my_command)}"); But I still got the same result (splitting happening on n ) With this, I have the following questions: Q1. What are "the proper" ways of collecting the output of a command in an array of lines? Q2. How can I specify IFS to split on newlines only? Q3. If I use parameter expansion flags as in my third attempt above (i.e. using @f ) to specify the splitting, does zsh ignore the value of IFS ? Why didn't it work above? | TL, DR: array_of_lines=("${(@f)$(my_command)}") First mistake (→ Q2): IFS='\n' sets IFS to the two characters \ and n . To set IFS to a newline, use IFS=$'\n' . Second mistake: to set a variable to an array value, you need parentheses around the elements: array_of_lines=(foo bar) . This would work, except that it strips empty lines, because consecutive whitespace counts as a single separator: IFS=$'\n' array_of_lines=($(my_command)) You can retain the empty lines except at the very end by doubling the whitespace character in IFS : IFS=$'\n\n' array_of_lines=($(my_command)) To keep trailing empty lines as well, you'd have to add something to the command's output, because this happens in the command substitution itself, not from parsing it. IFS=$'\n\n' array_of_lines=($(my_command; echo .)); unset 'array_of_lines[-1]' (assuming the output of my_command doesn't end in a non-delimited line; also note that you lose the exit status of my_command ) Note that all the snippets above leave IFS with its non-default value, so they may mess up subsequent code. To keep the setting of IFS local, put the whole thing into a function where you declare IFS local (here also taking care of preserving the command's exit status): collect_lines() {
local IFS=$'\n\n' ret
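 # run the command ("$@"); the extra "." line keeps trailing empty lines, and exit $ret carries its status out of the command substitution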
array_of_lines=($("$@"; ret=$?; echo .; exit $ret))
ret=$?
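 # drop the dummy "." element appended above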
unset 'array_of_lines[-1]'
return $ret
}
collect_lines my_command But I recommend not messing with IFS ; instead, use the f expansion flag to split on newlines (→ Q1): array_of_lines=("${(@f)$(my_command)}") Or to preserve trailing empty lines: array_of_lines=("${(@f)$(my_command; echo .)}")
unset 'array_of_lines[-1]' The value of IFS doesn't matter there. I suspect that you used a command that splits on IFS to print $array_of_lines in your tests (→ Q3). | {
"source": [
"https://unix.stackexchange.com/questions/29724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
29,790 | I have an external program that produces an output file (largish, 20K lines possible). I need to insert a new line between the existing line 1 and line 2. I've been looking at awk and sed - I use one liners in each fairly regularly - but I haven't been able to come up with the right switches to do this. | awk 'NR==1{print; print "new line"} NR!=1' | {
"source": [
"https://unix.stackexchange.com/questions/29790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10554/"
]
} |
29,791 | I have a user, say user1 , which has modifications to its .bash_profile , one of them changing the PATH , e.g.: export PATH=/some/place:$PATH . This change works fine if I log on as user1 or do a su - user1 . But if I try to run a command via su as root , e.g.: su -c test.sh oracle (test contains echo $PATH ) It doesn't seem to have the modified PATH (or root's PATH , for that matter). I've also tried copying .bash_profile to .profile , to no avail. Why is this happening? | Using su without -l or - starts bash as an interactive, but non-login shell, which doesn't read from either of the files you specified. Use the -l or - option or put the relevant config into /root/.bashrc . Quick summary of config files: Login shell ( -l / --login ) reads /etc/profile first, and then the first it finds of: ~/.bash_profile , ~/.bash_login , and ~/.profile . Interactive but non-login shell ( -i ) reads /etc/bash.bashrc and ~/.bashrc , in that order (unless the --rcfile option is used and tells it to look elsewhere). Non-interactive shells, e.g. started from within another program without using the -l or -i flags, reads the file specified in the BASH_ENV environment variable. When run as sh as a login shell, it will read /etc/profile and ~/.profile , in that order. When run as sh as an interactive non-login, it reads the file specified in ENV . | {
"source": [
"https://unix.stackexchange.com/questions/29791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1384/"
]
} |
29,794 | My Goal Have a USB drive (4GB) live boot to DSL (for size) with some extra software (git, hg, .vimrc + plugins, etc...) and also reserve a portion of the drive as writable (maybe symlink my home folder, etc...) Ultimately have a portable development environment. Ideas / Suggestions on how to accomplish this? | Using su without -l or - starts bash as an interactive, but non-login shell, which doesn't read from either of the files you specified. Use the -l or - option or put the relevant config into /root/.bashrc . Quick summary of config files: Login shell ( -l / --login ) reads /etc/profile first, and then the first it finds of: ~/.bash_profile , ~/.bash_login , and ~/.profile . Interactive but non-login shell ( -i ) reads /etc/bash.bashrc and ~/.bashrc , in that order (unless the --rcfile option is used and tells it to look elsewhere). Non-interactive shells, e.g. started from within another program without using the -l or -i flags, reads the file specified in the BASH_ENV environment variable. When run as sh as a login shell, it will read /etc/profile and ~/.profile , in that order. When run as sh as an interactive non-login, it reads the file specified in ENV . | {
"source": [
"https://unix.stackexchange.com/questions/29794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14645/"
]
} |
29,845 | I would like to copy a set of files from directory A to directory B, with the caveat that if a file in directory A is identical to a file in directory B, that file should not be copied (and thus its modification time should not be updated). Is there a way to do that with existing tools, without writing my own script to do it? To elaborate a bit on my use-case: I am autogenerating a bunch of .c files in a temporary directory (by a method that has to generate all of them unconditionally), and when I re-generate them, I'd like to copy only the ones that have changed into the actual source directory, leaving the unchanged ones untouched (with their old creation times) so that make will know that it doesn't need to recompile them. (Not all the generated files are .c files, though, so I need to do binary comparisons rather than text comparisons.) (As a note: This grew out of the question I asked on https://stackoverflow.com/questions/8981552/speeding-up-file-comparions-with-cmp-on-cygwin/8981762#8981762 , where I was trying to speed up the script file I was using to do this operation, but it occurs to me that I really should ask if there's a a better way to do this than writing my own script -- especially since any simple way of doing this in a shell script will invoke something like cmp on every pair of files, and starting all those processes takes too long.) | rsync is probably the best tool for this. There are a lot of options on this command so read man page . I think you want the --checksum option or the --ignore-times | {
"source": [
"https://unix.stackexchange.com/questions/29845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14395/"
]
} |
29,851 | I'm aware it's best to create temporary files with mktemp , but what about named pipes? I prefer things to be as POSIX compliant as possible, but Linux-only is acceptable. Avoiding Bashisms is my only hard criterion, as I write in dash . | tmppipe=$(mktemp -u)
mkfifo -m 600 "$tmppipe" Unlike regular file creation, which is prone to being hijacked by an existing file or a symbolic link, the creation of a name pipe through mkfifo or the underlying function either creates a new file in the specified place or fails. Something like : >foo is unsafe because if the attacker can predict the output of mktemp then the attacker can create the target file for himself. But mkfifo foo would fail in such a scenario. If you need full POSIX portability, mkfifo -m 600 /tmp/myfifo is safe against hijacking but prone to a denial of service; without access to a strong random file name generator, you would need to manage retry attempts. If you don't care for the subtle security problems around temporary files, you can follow a simple rule: create a private directory, and keep everything in there. tmpdir=
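# cleanup removes the private directory; when invoked for a signal it re-raises that signal after cleaning up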
cleanup () {
trap - EXIT
if [ -n "$tmpdir" ] ; then rm -rf "$tmpdir"; fi
if [ -n "$1" ]; then trap - $1; kill -$1 $$; fi
}
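# create the private directory and make sure it is removed on exit or on common signals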
tmpdir=$(mktemp -d)
trap 'cleanup' EXIT
trap 'cleanup HUP' HUP
trap 'cleanup TERM' TERM
trap 'cleanup INT' INT
mkfifo "$tmpdir/pipe" | {
"source": [
"https://unix.stackexchange.com/questions/29851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10987/"
]
} |
29,869 | I want to convert some files from JPEG to PDF. I am using the following command: $ convert image1.jpg image1.pdf But I have 100 images. How should I convert all of them to corresponding PDFs? I tried $ convert image*.jpg image*.pdf It doesn't work. | In bash: for f in *.jpg; do
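 # the ./ prefix keeps filenames that start with "-" from being treated as options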
convert ./"$f" ./"${f%.jpg}.pdf"
done | {
"source": [
"https://unix.stackexchange.com/questions/29869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7689/"
]
} |
29,878 | Assuming a script that outputs a list of files: $> bash someScript.sh
path/to/some/file
path/to/the/file/i/want/to/work/with
path/to/yet/another/file Now I want to have the second file-path as parameter for another command, e.g. vim . Is there a way to directly access it? And I want to point out that I do not necessarily want to access the second file, but the next time it could be the third or the 27th. I want to be able to select that Nth line as easily as possible. Right now I do mouse-selecting and insert by middle-clicking or type the path with tab-completion. Now I wonder if there is an easier way. Problem with my own solution is though that I would have to edit all my scripts this way. It would be fun if there was a more general solution to this issue, that would work with any kind of command, e.g. find . | Sure. bash someScript.sh | sed -n '2 p' will filter your output and just print the second line of it. To make that a parameter to vim : vim "$(bash someScript.sh | sed -n '2 p')" | {
"source": [
"https://unix.stackexchange.com/questions/29878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
29,899 | I want to use find but sort the results reverse chronologically as with ls -ltr . Is this possible through any combo of flags or pipelines? | Use find 's -printf command to output both the time (in a sortable way) and the file, then sort. If you use GNU find, find . your-options -printf "%T+ %p\n" | sort For convenience here is an explanation of the -printf "%T+ %p\n" from man find : %Tk File's last modification time in the format specified by k , which is the same as for %A . where k in this case is set to + + Date and time, separated by + , for example `2004-04-28+22:22:05.0'. This is a GNU extension. The time is given in the current timezone (which may be affected by setting the TZ environment variable). The seconds field includes a fractional part. %p File's name. | {
"source": [
"https://unix.stackexchange.com/questions/29899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6973/"
]
} |
29,902 | I am in the process of migrating a machine from RHEL 4 to 5. Rather than actually do an upgrade we have created a new VM (both machines are in a cloud) and I am in the process of copying across data between the two. I have come across the following file, which I need to remove from the new machine but am unable to, even when running as root: -rw------- 1 2003 2003 219 jan 11 14:22 .bash_history This file is inside /home/USER/, where USER is the account of the guy who built the machine. He doesn't have an account on the old machine, so I am trying to remove his home folder so that the new machine tallies with the old one, but I get the following error: rm: ne peut enlever `.bash_history': Opération non permise (translated from the French: cannot remove XXX, operation not permitted) I have tried using the following command but this has made no difference: chattr -i .bash_history Is the only choice to create a user with the ID 2003, or is there another way around it? Edit I have tried using rm -f , and I get the same error. I get the same kind of error using chmod 777 first. I have been able to chown the folder that contains the file I am trying to delete, so it is: drwx------ 2 root root 1024 jan 24 15:58 USER Edit2 Running the lsattr command as suggested by Angus gave the following output: -----a------- USER/.bash_history
------------- USER/..
------------- USER/. The file is flagged as append-only - after changing this flag using chattr -a .bash_history I was able to delete the file. | Check the permissions of the directory . To delete a file inside it, the directory must be writable by you ( chmod ugo+w . ) and must not be immutable or append-only ( chattr -i -a . ). Check with ls -la and lsattr -a . | {
"source": [
"https://unix.stackexchange.com/questions/29902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2163/"
]
} |
29,906 | I have the following code that will remove lines with the pattern banana and 2 lines after it: sed '/banana/I,+2 d' file So far, so good! But I need it to remove 2 lines before banana , but I can't get it with a “minus sign” or whatever (similar to what grep -v -B2 banana file should do but doesn't): teresaejunior@localhost ~ > LC_ALL=C sed '-2,/banana/I d' file
sed: invalid option -- '2'
teresaejunior@localhost ~ > LC_ALL=C sed '/banana/I,-2 d' file
sed: -e expression #1, char 16: unexpected `,'
teresaejunior@localhost ~ > LC_ALL=C sed '/banana/I,2- d' file
sed: -e expression #1, char 17: unknown command: `-' | Sed doesn't backtrack: once it's processed a line, it's done. So “find a line and print the previous N lines” isn't going to work as is, unlike “find a line and print the next N lines” which is easy to graft on. If the file isn't too long, since you seem to be ok with GNU extensions, you can use tac to reverse the lines of the file. tac | sed '/banana/I,+2 d' | tac Another angle of attack is to maintain a sliding window in a tool like awk. Adapting from Is there any alternative to grep's -A -B -C switches (to print few lines before and after )? (warning: minimally tested): #!/bin/sh
{ "exec" "awk" "-f" "$0" "$@"; } # -*-awk-*-
# The array h contains the history of lines that are eligible for being "before" lines.
# The variable skip contains the number of lines to skip.
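# count down the window of lines being suppressed after a match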
skip { --skip }
match($0, pattern) { skip = before + after }
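# print the line that is leaving the look-behind buffer, unless it is inside a suppressed window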
NR > before && !skip { print h[NR-before] }
{ delete h[NR-before]; h[NR] = $0 }
END { if (!skip) {for (i=NR-before+1; i<=NR; i++) print h[i]} } Usage: /path/to/script -v pattern='banana' -v before=2 | {
"source": [
"https://unix.stackexchange.com/questions/29906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9491/"
]
} |
29,930 | I have a list of hosts in the network providing shares via SAMBA. How can I determine either the IP address or the host name of one particular host, e.g. the one with the name “SASAK02”? The output of smbtree is as follows: WORKGROUP
\\SASAK02
\\SAURA-PC1
\\PC-VAN-DAMME | Try nmblookup <wins-hostname> . | {
"source": [
"https://unix.stackexchange.com/questions/29930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12779/"
]
} |
29,964 | I was just running a few commands in a terminal and I started wondering, does Unix/Linux take shortcuts when running piped commands? For example, let's say I have a file with one million lines, the first 10 of which contain hello world . If you run the command grep "hello world" file | head does the first command stop as soon as it finds 10 lines, or does it continue to search the entire file first? | Sort of. The shell has no idea what the commands you are running will do, it just connects the output of one to the input of the other. If grep finds more than 10 lines that say "hello world" then head will have all 10 lines it wants, and close the pipe. This will cause grep to be killed with a SIGPIPE, so it does not need to continue scanning a very large file. | {
"source": [
"https://unix.stackexchange.com/questions/29964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5485/"
]
} |
29,999 | I have two virtual machines both running on a Linux host (Fedora 16). I set both adapters as attached to NAT. When I boot them up they both have their default gateway set to 10.0.2.2 . They also both have the same IP address (10.0.2.15) . They are both on the same adapter (adapter 1). I don't know why they are getting assigned the same IP address, and shouldn't the default gateway be 10.0.2.1 since the subnet address is 10.0.2.0 and the netmask is 255.255.255.0. Is there something I am missing, has this happened to anyone before? How do I get the VirtualBox DHCP working properly? | VirtualBox DHCP is working properly. There is nothing wrong with having all of your machines getting the same address in NAT configuration. All VMs are isolated from each other so there is no risk of conflict. They are also not on the same adapter. Each VM has its own virtualized hardware including NICs. The default gateway also need not to be 10.0.2.1. Although it is a common practice to have it at the lower IP address, it can be any IP in the subnet range. Also, there is no "real" dhcp service, everything is hardcoded in the VirtualBox code, although if you are not happy with the default IP addresses you can fine tune the NAT engine . | {
"source": [
"https://unix.stackexchange.com/questions/29999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11213/"
]
} |
30,075 | I am using screen to split my terminals, but I would like to be able to resize the horizontal dimension of the split screens. If I do C-a :resize 10 I only change the vertical dimension to 10 lines. How do I achieve the same but for the horizontal dimension? | At least on Debian and Ubuntu, the resize command, when applied to a full-height region, performs a horizontal resizing. If it works for you, then first split vertically, next resize the width, then split horizontally. | {
"source": [
"https://unix.stackexchange.com/questions/30075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14761/"
]
} |
30,091 | I'm looking at a bash script someone else wrote that uses mktemp : TEMP=`mktemp --directory` However, this line does not work on my machine (OS X 10.6). How would I fix this line so that it is cross-un*x-like-platform compatible? EDIT: An alternative command would be sufficient as well. | The following is what I ended up using to reliably create a temporary directory that works on both Linux and Darwin (all versions before Mac OS X 10.11), without hardcoding $TMPDIR or /tmp : mytmpdir=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir') Background: The GNU mktemp command requires no arguments. Plain mktemp will work and creates a temporary file in the system's temporary directory. Plain mktemp -d will create a directory instead of a file, which is what you'd want to use on Linux. (gnu-coreutils)$ man mktemp
> ..
> If DIR is not specified, uses $TMPDIR if set, else /tmp.
> .. By default, GNU mktemp uses the template tmp.XXXXXXXXXX for the name of the sub directory (or file). To customise this template, the -t option can be used. OSX's mktemp has no default template and requires a template to be specified. Unfortunately, where GNU mktemp takes the template as -t option, on OSX this is passed as positional argument. Instead, OSX's mktemp has a -t option that means something else. The -t option on OSX is documented as a "prefix" for the template. It is expanded to {prefix}.XXXXXXXX , so it adds the Xs to it automatically (e.g. mktemp -d -t example could create example.zEJZWCTQ in the temp directory). I was surprised to find that in many Linux environments, $TMPDIR is not set by default. Many CLI programs do support it when set, but still need a default for /tmp . This means passing $TMPDIR/example.XXXXXXXX to mktemp or mkdir is dangerous because it may produce /example.XXXXXXXX in the root directory of the local disk (due to $TMPDIR being unset and becoming an empty string). On OSX, $TMPDIR is always set and (at least in the default shell) it is not set to /tmp (which is a symlink to /private/tmp ) but to /var/folders/dx/*****_*************/T . So whatever we do for OSX, should honour that default behaviour. In conclusion, the following is what I ended up using to reliably create a temporary directory that works on both Linux and Darwin (Mac OS X), without hardcoding either $TMPDIR or /tmp : mytmpdir=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir') The first part is for Linux. This command will fail on Darwin (Mac OS X) with error status code 1 responding with "usage: ...". That's why we ignore stderr and instead then execute the Mac variant. The mytmpdir prefix is only used on Mac (where that option is required to be set). | {
"source": [
"https://unix.stackexchange.com/questions/30091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14765/"
]
} |
30,127 | Occasionally I have a thought that I want to write into a file while I am at the terminal. I would want these notes all in the same file, just listed one after the other. I would also like a date / time tag on each one. Is it possible to do this without having to open the file each time? Can I just enter it into the terminal and have it appended to the file each time with a command or script? I am using GNU BASH. | Write yourself a shell script called "n". Put this in it: #!/bin/sh
notefile=/home/me/notefile
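# each note starts with a timestamp line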
date >> $notefile
emacs $notefile -f end-of-buffer I recommend this instead of cat >> notefile because: One day you'll be in such a hurry that you'll fumblefinger the >> and type > instead and blow away your file. Emacs starts in five one-hundredths of a second on my Mac Mini. It takes a tenth of a second to start on a ten year old Celeron-based system I have sitting around. If you can't wait that long to start typing, then you're already a machine and don't need to take notes. :) If you insist on avoiding a text editor, use a shell function: n () { date >> /home/me/notefile; cat >> /home/me/notefile; } which should work in all shells claiming Bourne shell compatibility. | {
"source": [
"https://unix.stackexchange.com/questions/30127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9483/"
]
} |
30,140 | I have a list of servers: cat list.txt
10.10.10.10 servera
10.11.10.10 serverb How can I check that I can log in via ssh to them or not? I mean by default I should be able to log in via ssh key auth.., so in short, I need a solution that sorts the lines (servers) in the list.txt like this: servers that I can log in via ssh key servers that prompts for password (of course password is unknown..) servers that are unreachable | You can do that with a combination of the BatchMode option and "parsing" the output. ( ssh always returns 255 if it fails to connect for whatever reason, so you can't use the return code to distinguish between types of failures.) With BatchMode on, no password prompt or other interaction is attempted, so a connect that requires a password will fail. (I also put a ConnectTimeout in there which should be adjusted to fit your needs. And picked really bad filenames.) #! /bin/bash
rm good no_auth other
while read ip host ; do
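 # BatchMode=yes makes ssh fail instead of prompting for a password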
status=$(ssh -o BatchMode=yes -o ConnectTimeout=5 $ip echo ok 2>&1)
case $status in
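 # the remote echo came back, so key-based login works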
ok) echo $ip $host >> good ;;
*"Permission denied"*) echo $ip $host $status >> no_auth ;;
*) echo $ip $host $status >> other ;;
esac
done < list.txt You could detect other types of errors (like missing server public key) if you need more detailed classification. If you need the results in a single, sorted file, just cat the various output files together as you see fit. | {
"source": [
"https://unix.stackexchange.com/questions/30140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
30,168 | I recently switched to zsh (finally) and am loving it! So far one thing that I am missing is Ctrl + R to do incremental history search. I have the history set up properly HISTSIZE=10000
SAVEHIST=10000
HISTFILE=~/.zsh_history and I used vi key bindings bindkey -v But Ctrl + R does not work. It removes the line above the current line, which is not the behavior it should have in vim either. Any suggestions? | If I recall correctly, you need to explicitly set it, even with bindkey -v . Use something like this: bindkey -v
bindkey '^R' history-incremental-search-backward | {
"source": [
"https://unix.stackexchange.com/questions/30168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3850/"
]
} |
30,173 | A huge (up to 2 GiB) text file of mine contains about 100 exact duplicates of every line in it (useless in my case, as the file is a CSV-like data table). What I need is to remove all the repetitions while (preferably, but this can be sacrificed for a significant performance boost) maintaining the original sequence order. In the result each line is to be unique. If there were 100 equal lines (usually the duplicates are spread across the file and won't be neighbours) there is to be only one of the kind left. I have written a program in Scala (consider it Java if you don't know about Scala) to implement this. But maybe there are faster C-written native tools able to do this faster? UPDATE: the awk '!seen[$0]++' filename solution seemed working just fine for me as long as the files were near 2 GiB or smaller but now as I am to clean-up a 8 GiB file it doesn't work any more. It seems taking infinity on a Mac with 4 GiB RAM and a 64-bit Windows 7 PC with 4 GiB RAM and 6 GiB swap just runs out of memory. And I don't feel enthusiastic about trying it on Linux with 4 GiB RAM given this experience. | An awk solution seen on #bash (Freenode): awk '!seen[$0]++' filename If you want to edit the file in-place, you can use the following command (provided that you use a GNU awk version that implements this extension): awk -i inplace '!seen[$0]++' filename | {
"source": [
"https://unix.stackexchange.com/questions/30173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
30,189 | I have multiple Amazon EC2 accounts and want to quickly be able to switch variables, such as $EC2_HOME , using a script. I have a shell script set up like this: #!/bin/sh
export EC2_HOME=/home/me/.ec2
echo $EC2_HOME When I run the script I know that EC2_HOME is set, but I thought that using export would make the variable stick around after the script completed. It does not, as running echo $EC_HOME does not show anything. I know this must be very rudimentary Linux scripting knowledge, but I don't know it. I tried looking for related questions without luck - so my apologies if this is a duplicate. | You should source your script, with . ./script or source ./script | {
"source": [
"https://unix.stackexchange.com/questions/30189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
30,190 | I am using php-fpm on Debian with nginx for php5 support.
I would like php-fpm to run under the user and group php-user instead of www-data. I thought the init.d script would have the user mentioned, or use a file which has www-data written in it, yet I don't see it. How do I spawn this process under the user php-user:php-user? Here is the php5-fpm init.d script on my server. I tried looking at the start-stop-daemon man pages but didn't see it. I'm sure this is simple but I don't know how to do it. #!/bin/sh
### BEGIN INIT INFO
# Provides: php-fpm php5-fpm
# Required-Start: $remote_fs $network
# Required-Stop: $remote_fs $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts php5-fpm
# Description: Starts PHP5 FastCGI Process Manager Daemon
### END INIT INFO
# Author: Ondrej Sury <[email protected]>
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="PHP5 FastCGI Process Manager"
NAME=php5-fpm
DAEMON=/usr/sbin/$NAME
DAEMON_ARGS="--fpm-config /etc/php5/fpm/php-fpm.conf"
PIDFILE=/var/run/php5-fpm.pid
TIMEOUT=30
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions
#
# Function to check the correctness of the config file
#
do_check()
{
[ "$1" != "no" ] && $DAEMON $DAEMON_ARGS -t 2>&1 | grep -v "\[ERROR\]"
FPM_ERROR=$($DAEMON $DAEMON_ARGS -t 2>&1 | grep "\[ERROR\]")
if [ -n "${FPM_ERROR}" ]; then
echo "Please fix your configuration file..."
$DAEMON $DAEMON_ARGS -t 2>&1 | grep "\[ERROR\]"
return 1
fi
return 0
}
#
# Function that starts the daemon/service
#
do_start()
{
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
|| return 1
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
$DAEMON_ARGS 2>/dev/null \
|| return 2
# Add code here, if necessary, that waits for the process to be ready
# to handle requests from services started subsequently which depend
# on this one. As a last resort, sleep for some time.
}
#
# Function that stops the daemon/service
#
do_stop()
{
# Return
# 0 if daemon has been stopped
# 1 if daemon was already stopped
# 2 if daemon could not be stopped
# other if a failure occurred
start-stop-daemon --stop --quiet --retry=TERM/$TIMEOUT/KILL/5 --pidfile $PIDFILE --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
# Wait for children to finish too if this is a daemon that forks
# and if the daemon is only ever run from this initscript.
# If the above conditions are not satisfied then add some other code
# that waits for the process to drop all resources that could be
# needed by services started subsequently. A last resort is to
# sleep for some time.
start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
[ "$?" = 2 ] && return 2
# Many daemons don't delete their pidfiles when they exit.
rm -f $PIDFILE
return "$RETVAL"
}
#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
#
# If the daemon can reload its configuration without
# restarting (for example, when it is sent a SIGHUP),
# then implement that here.
#
start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
return 0
}
case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_check $VERBOSE
case "$?" in
0)
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
1) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
check)
do_check yes
;;
reload|force-reload)
log_daemon_msg "Reloading $DESC" "$NAME"
do_reload
log_end_msg $?
;;
restart)
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|reload|force-reload}" >&2
exit 1
;;
esac
: | Look in your pool configuration file /etc/php5/fpm/pool.d/www.conf rather than the init script. There you will find the user and group options; the pool section itself appears as [www] . Set them to the account you want, e.g. user = php-user and group = php-user , and restart php5-fpm. | {
"source": [
"https://unix.stackexchange.com/questions/30190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
30,286 | I am neither concerned about RAM usage (as I've got enough) nor about losing data in case of an accidental shut-down (as my power is backed, the system is reliable and the data are not critical). But I do a lot of file processing and could use some performance boost. That's why I'd like to set the system up to use more RAM for file system read and write caching, to prefetch files aggressively (e.g. read-ahead the whole file accessed by an application in case the file is of sane size or at least read-ahead a big chunk of it otherwise) and to flush writing buffers less frequently. How to achieve this (may it be possible)? I use ext3 and ntfs (I use ntfs a lot!) file systems with XUbuntu 11.10 x86. | Improving disk cache performance in general is more than just increasing the file system cache size unless your whole system fits in RAM in which case you should use RAM drive ( tmpfs is good because it allows falling back to disk if you need the RAM in some case) for runtime storage (and perhaps an initrd script to copy system from storage to RAM drive at startup). You didn't tell if your storage device is SSD or HDD. Here's what I've found to work for me (in my case sda is a HDD mounted at /home and sdb is SSD mounted at / ). First optimize the load-stuff-from-storage-to-cache part: Here's my setup for HDD (make sure AHCI+NCQ is enabled in BIOS if you have toggles): echo cfq > /sys/block/sda/queue/scheduler
echo 10000 > /sys/block/sda/queue/iosched/fifo_expire_async
echo 250 > /sys/block/sda/queue/iosched/fifo_expire_sync
echo 80 > /sys/block/sda/queue/iosched/slice_async
echo 1 > /sys/block/sda/queue/iosched/low_latency
echo 6 > /sys/block/sda/queue/iosched/quantum
echo 5 > /sys/block/sda/queue/iosched/slice_async_rq
echo 3 > /sys/block/sda/queue/iosched/slice_idle
echo 100 > /sys/block/sda/queue/iosched/slice_sync
hdparm -q -M 254 /dev/sda Worth noting for the HDD case is high fifo_expire_async (usually write) and long slice_sync to allow a single process to get high throughput (set slice_sync to lower number if you hit situations where multiple processes are waiting for some data from the disk in parallel). The slice_idle is always a compromise for HDDs but setting it somewhere in range 3-20 should be okay depending on disk usage and disk firmware. I prefer to target for low values but setting it too low will destroy your throughput. The quantum setting seems to affect throughput a lot but try to keep this as low as possible to keep latency on sensible level. Setting quantum too low will destroy throughput. Values in range 3-8 seem to work well with HDDs. The worst case latency for a read is ( quantum * slice_sync ) + ( slice_async_rq * slice_async ) ms if I've understood the kernel behavior correctly. The async is mostly used by writes and since you're willing to delay writing to disk, set both slice_async_rq and slice_async to very low numbers. However, setting slice_async_rq too low value may stall reads because writes cannot be delayed after reads any more. My config will try to write data to disk at most after 10 seconds after data has been passed to kernel but since you can tolerate loss of data on power loss also set fifo_expire_async to 3600000 to tell that 1 hour is okay for the delay to disk. Just keep the slice_async low, though, because otherwise you can get high read latency. The hdparm command is required to prevent AAM from killing much of the performance that AHCI+NCQ allows. If your disk makes too much noise, then skip this. Here's my setup for SSD (Intel 320 series): echo cfq > /sys/block/sdb/queue/scheduler
echo 1 > /sys/block/sdb/queue/iosched/back_seek_penalty
echo 10000 > /sys/block/sdb/queue/iosched/fifo_expire_async
echo 20 > /sys/block/sdb/queue/iosched/fifo_expire_sync
echo 1 > /sys/block/sdb/queue/iosched/low_latency
echo 6 > /sys/block/sdb/queue/iosched/quantum
echo 2 > /sys/block/sdb/queue/iosched/slice_async
echo 10 > /sys/block/sdb/queue/iosched/slice_async_rq
echo 1 > /sys/block/sdb/queue/iosched/slice_idle
echo 20 > /sys/block/sdb/queue/iosched/slice_sync Here it's worth noting the low values for different slice settings. The most important setting for an SSD is slice_idle which must be set to 0-1. Setting it to zero moves all ordering decisions to native NCQ while setting it to 1 allows kernel to order requests (but if the NCQ is active, the hardware may override kernel ordering partially). Test both values to see if you can see the difference. For Intel 320 series, it seems that setting slide_idle to 0 gives the best throughput but setting it to 1 gives best (lowest) overall latency. If you have recent enough kernel, you can use slide_idle_us to set the value in microseconds instead of milliseconds and you could use something like echo 14 > slice_idle_us instead. Suitable value seems to be close to 700000 divided by max practical IOPS your storage device can support so 14 is okay for pretty fast SSD devices. For more information about these tunables, see https://www.kernel.org/doc/Documentation/block/cfq-iosched.txt . Update in year 2020 and kernel version 5.3 (cfq is dead): #!/bin/bash
modprobe bfq
for d in /sys/block/sd?; do
# HDD (tuned for Seagate SMR drive)
echo bfq >"$d/queue/scheduler"
echo 4 >"$d/queue/nr_requests"
echo 32000 >"$d/queue/iosched/back_seek_max"
echo 3 >"$d/queue/iosched/back_seek_penalty"
echo 80 >"$d/queue/iosched/fifo_expire_sync"
echo 1000 >"$d/queue/iosched/fifo_expire_async"
echo 5300 >"$d/queue/iosched/slice_idle_us"
echo 1 >"$d/queue/iosched/low_latency"
echo 200 >"$d/queue/iosched/timeout_sync"
echo 0 >"$d/queue/iosched/max_budget"
echo 1 >"$d/queue/iosched/strict_guarantees"
# additional tweaks for SSD (tuned for Samsung EVO 850):
if test $(cat "$d/queue/rotational") = "0"; then
echo 36 >"$d/queue/nr_requests"
echo 1 >"$d/queue/iosched/back_seek_penalty"
# slice_idle_us should be ~ 0.7/IOPS in µs
echo 16 >"$d/queue/iosched/slice_idle_us"
echo 10 >"$d/queue/iosched/fifo_expire_sync"
echo 250 >"$d/queue/iosched/fifo_expire_async"
echo 10 >"$d/queue/iosched/timeout_sync"
echo 0 >"$d/queue/iosched/strict_guarantees"
fi
done The setup is pretty similar but I now use bfq instead of cfq because the latter is not available with modern kernels. I try to keep nr_requests as low as possible to allow bfq to control the scheduling more accurately. At least Samsung SSD drives seem to require a pretty deep queue to be able to run with high IOPS. Update: Many Samsung SSDs have a firmware bug and can hang the whole device if nr_requests is too high and OS submits lots of requests rapidly. I've seen random freeze about once every 2 months if I use high nr_requests (e.g. 32 or 36), but the value 6 has been stable this far. The official fix is to set it to 1 but it hurts the performance a lot! For more details, see https://bugzilla.kernel.org/show_bug.cgi?id=203475 and https://bugzilla.kernel.org/show_bug.cgi?id=201693 – basically, if you have a Samsung SSD device and see failed command: WRITE FPDMA QUEUED in the kernel log, you've been bitten by this bug. I'm using Ubuntu 18.04 with kernel package linux-lowlatency-hwe-18.04-edge which has bfq only as a module so I need to load it before being able to switch to it. I also nowadays also use zram but I only use 5% of RAM for zram. This allows Linux kernel to use swapping related logic without touching the disks. However, if you decide to go with zero disk swap, make sure your apps do not leak RAM or you're wasting money. Now that we have configured kernel to load stuff from disk to cache with sensible performance, it's time to adjust the cache behavior: According to benchmarks I've done, I wouldn't bother setting read ahead via blockdev at all. Kernel default settings are fine. Set system to prefer swapping file data over application code (this does not matter if you have enough RAM to keep whole filesystem and all the application code and all virtual memory allocated by applications in RAM). This reduces latency for swapping between different applications over latency for accessing big files from a single application: echo 15 > /proc/sys/vm/swappiness If you prefer to keep applications nearly always in RAM you could set this to 1. If you set this to zero, kernel will not swap at all unless absolutely necessary to avoid OOM. If you were memory limited and working with big files (e.g. HD video editing), then it might make sense to set this close to 100. I nowadays (2017) prefer to have no swap at all if you have enough RAM. Having no swap will usually lose 200-1000 MB of RAM on long running desktop machine. I'm willing to sacrifice that much to avoid worst case scenario latency (swapping application code in when RAM is full). In practice, this means that I prefer OOM Killer to swapping. If you allow/need swapping, you might want to increase /proc/sys/vm/watermark_scale_factor , too, to avoid some latency. I would suggest values between 100 and 500. You can consider this setting as trading CPU usage for lower swap latency. The default is 10 and the maximum possible is 1000. Higher value should (according to kernel documentation ) result in higher CPU usage for kswapd processes and lower overall swapping latency. Next, tell kernel to prefer keeping directory hierarchy in memory over file contents and the rest of the page cache in case some RAM needs to be freed (again, if everything fits in RAM, this setting does nothing): echo 10 > /proc/sys/vm/vfs_cache_pressure Setting vfs_cache_pressure to a low value makes sense because in most cases, the kernel needs to know the directory structure and other
filesystem metadata before it can use file contents from the cache and flushing the directory cache too soon will make the file cache next to worthless. However, page cache contains also other data but just the file contents so this setting should be considered like the overall importance of metadata caching vs rest of the system. Consider going all the way down to 1 with this setting if you have lots of small files (my system has around 150K 10 megapixel photos and counts as a "lots of small files" system). Never set it to zero or the directory structure is always kept in memory even if the system runs out of memory. Setting this to a big value is sensible only if you have only a few big files that are constantly being re-read (again, HD video editing without enough RAM would be an example case). Official kernel documentation says that "increasing vfs_cache_pressure significantly beyond 100 may have negative performance impact". Year 2021 update: After running with kernel version 5.4 for long enough, I've come to the conclusion that the very low vfs_cache_pressure setting (I used to run with 1 for years) may now be causing long stalls / bad latency when memory pressure gets high enough . However, I never noticed such behavior with kernel version 5.3 or lesser. Year 2022 update: I've been running kernel 5.4.x series for another year and I've come to the conclusion that vfs_cache_presure has changed permanently. The kernel memory manager behavior that I used to get with kernel version 5.3 or older with values in range 1..5 seems to match real world behavior with 5.4 values in range 100..120. The newer kernels make this adjustment matter more so I'd recommend the value vfs_cache_presure=120 nowadays for low latency overall. Kernel version 5.3 or older should use a very low but non-zero value here in my opinion. Exception: if you have a truly massive amount of files and directories and you rarely touch/read/list all files setting vfs_cache_pressure higher than 100 may be wise. This only applies if you do not have enough RAM and cannot keep the whole directory structure in RAM and still have enough RAM for normal file cache and processes (e.g. company wide file server with lots of archival content). If you feel that you need to increase vfs_cache_pressure above 100 you're running without enough RAM. Increasing vfs_cache_pressure may help but the only real fix is to get more RAM. Having vfs_cache_pressure set to high number sacrifices average performance for having a more stable performance overall (that is, you can avoid really bad worst case behavior but have to deal with worse overall performance). Finally tell the kernel to use up to 99% of the RAM as cache for writes and instruct kernel to use up to 50% of RAM before slowing down the process that's writing (default for dirty_background_ratio is 10 ). Warning: I personally would not do this but you claimed to have enough RAM and are willing to lose the data. echo 99 > /proc/sys/vm/dirty_ratio
echo 50 > /proc/sys/vm/dirty_background_ratio And tell that 1h write delay is ok to even start writing stuff on the disk (again, I would not do this): echo 360000 > /proc/sys/vm/dirty_expire_centisecs
echo 360000 > /proc/sys/vm/dirty_writeback_centisecs For more information about these tunables, see https://www.kernel.org/doc/Documentation/sysctl/vm.txt If you put all of those to /etc/rc.local and include following at the end, everything will be in cache as soon as possible after boot (only do this if your filesystem really fits in the RAM): (nice find / -type f -and -not -path '/sys/*' -and -not -path '/proc/*' -print0 2>/dev/null | nice ionice -c 3 wc -l --files0-from - > /dev/null)& Or a bit simpler alternative which might work better (cache only /home and /usr , only do this if your /home and /usr really fit in RAM): (nice find /home /usr -type f -print0 | nice ionice -c 3 wc -l --files0-from - > /dev/null)& | {
"source": [
"https://unix.stackexchange.com/questions/30286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
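A hedged sketch of how the vm settings discussed in the answer above can be made persistent across reboots. The file name 99-cache-tuning.conf is an arbitrary choice, the values simply mirror the answer's examples (the dirty_* ones risk data loss on power failure), and sysctl --system assumes a procps-ng style sysctl.
cat <<'EOF' | sudo tee /etc/sysctl.d/99-cache-tuning.conf
# keep directory/inode metadata cached (see the kernel-version caveats above)
vm.vfs_cache_pressure = 120
# very aggressive writeback delays, only acceptable if losing cached writes is OK
vm.dirty_ratio = 99
vm.dirty_background_ratio = 50
vm.dirty_expire_centisecs = 360000
vm.dirty_writeback_centisecs = 360000
EOF
sudo sysctl --system                 # reload every sysctl configuration file
sysctl vm.vfs_cache_pressure         # confirm the value the kernel is now using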
30,303 | I want to create a DEB file manually. I would like to just provide a folder which contains data to install, and a script to be executed after installation. Is this possible? | Making a source package My recommendation is to make a source package. Install build-essential , debhelper , dh-make . Change to the directory where the files you want to install are (the directory name must be of the form $PACKAGE-$VERSION , e.g. myapp-4.2-1 for your first attempt at packaging Myapp V4.2), and run dh_make --createorig . Answer the questions. Debhelper will create the basic infrastructure needed to build a package by generating files in a subdirectory called debian , both some mandatory files and templates for optional files. You may need to modify some of these files: Edit debian/rules to build what needs building and install the files in the right place. If you just need to copy some files and not to compile stuff, just edit the file debian/install to specify which files need to be installed where. Edit debian/copyright to add license information about your package and information on where to get the latest version (if relevant). Edit debian/changelog to remove the reference to an ITP (that's only relevant if you're working for the Debian project). Rename debian/postinst.ex to debian/postinst and add your post-installation commands there. If you later update your package, run debchange -i to add a changelog entry or edit the file in Emacs (with dpkg-dev-el installed). Run dpkg-buildpackage -rfakeroot -us -uc to build the .deb package (remove -us -uc if you want to sign the package with your PGP key). Making a binary package directly If you decide to make a binary package directly without building it from a source package, which is not really easier because there aren't as many tools to facilitate the process, you'll need some basic familiarity with the format of deb packages. It is described in the Debian Policy Manual , in particular ch. 3 (format of binary packages) , ch. 5 (control files) , ch. 6 (installation scripts) and appendix B (binary package manipulation) . You make sure that your package installs the expected files /usr/share/doc/copyright (containing the license of the package contents, as well as where to find the latest version of the package) and /usr/share/doc/changelog.Debian.gz (containing the changelog of the deb package). You don't need these if you're only going to use the package in-house, but it's better to have them. On Debian and derivatives If you have the Debian tools available, use dpkg-deb to construct the package. In the directory containing the data to install, add a directory called DEBIAN at the top level, containing the control files and maintainer scripts. $ ls mypackage-42
DEBIAN etc usr var
$ dpkg-deb -b mypackage-42 The hard way If you don't have the Debian tools, build an archive of the files you want to package called data.tar.gz , a separate archive of the control files called control.tar.gz (no subdirectories), and a text file called debian-binary and containing the text 2.0 . cd mypackage-42
tar czf ../data.tar.gz [a-z]*
cd DEBIAN
tar czf ../../control.tar.gz *
cd ../..
echo 2.0 > debian-binary
ar r mypackage-42.deb debian-binary control.tar.gz data.tar.gz You need at least a control file with the fields Package , Maintainer , Priority , Architecture , Installed-Size , Version , and any necessary dependency declaration. The script to be executed after installation is called postinst . Be sure to make it executable. It goes alongside control . Converting a binary package from a different format If you already have a binary package from another distribution, you can use alien to convert it. | {
"source": [
"https://unix.stackexchange.com/questions/30303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
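To complement the "hard way" section of the answer above, here is a minimal sketch of the control and postinst files it mentions; every field value below (package name, version, maintainer) is a placeholder, not something taken from the question.
cd mypackage-42
mkdir -p DEBIAN
cat > DEBIAN/control <<'EOF'
Package: mypackage
Version: 42
Architecture: all
Maintainer: Jane Doe <jane@example.com>
Priority: optional
Description: Example package assembled by hand
 Continuation lines of the description are indented with one space.
EOF
cat > DEBIAN/postinst <<'EOF'
#!/bin/sh
set -e
# post-installation commands go here
echo "mypackage installed"
EOF
chmod 0755 DEBIAN/postinst           # maintainer scripts must be executable
cd ..
dpkg-deb -b mypackage-42             # or follow the manual tar/ar steps shown above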
30,370 | I want to have a shell script like this: my-app &
echo $my-app-pid But I do not know how to get the PID of the just-executed command. I know I can just use the jobs -p my-app command to grep the PID. But if I want to execute the shell multiple times, this method will not work, because the jobspec is ambiguous. | The PID of the last executed command is in the $! shell variable: my-app &
echo $! | {
"source": [
"https://unix.stackexchange.com/questions/30370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5075/"
]
} |
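A small follow-up sketch showing the usual pattern of saving $! immediately so it can be used later; my-app is the placeholder command name from the question.
my-app &
mypid=$!                      # capture the PID right away, before starting anything else
echo "my-app is running with PID $mypid"
# ... later, act on that specific instance ...
kill "$mypid"
wait "$mypid" 2>/dev/null     # reap the process and collect its exit status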
30,400 | I have 2 computers, localpc and remoteserver . I need localpc to execute some commands on remoteserver . One of the things it needs to do is start a backup script that runs for a number of hours. I would like the command on localpc to “fire” and then run totally independently on remoteserver , like localpc was never there in the first place. This is what I have done so far: remoteserver has the script: /root/backup.sh localpc is scheduled to run this: ssh root@remoteserver 'nohup /root/backup.sh' & Am I doing this the right way? Is there a better way to do this? Will I run into any trouble doing it this way? | You should probably use screen on the remote host, to have a real detached command: ssh root@remoteserver screen -d -m ./script | {
"source": [
"https://unix.stackexchange.com/questions/30400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7340/"
]
} |
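Building on the screen answer above: giving the session a name makes it easy to check on the long-running backup later. The session name backup is an arbitrary choice.
# start the script on the remote host inside a detached, named screen session
ssh root@remoteserver 'screen -d -m -S backup /root/backup.sh'
# later, from an interactive login on remoteserver:
screen -ls          # list running sessions
screen -r backup    # re-attach to watch progress; detach again with Ctrl-a d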
30,454 | I have seen many developers using this command to set the option to vi. I never understood the real use of this? When using bash commands, what help does switching to vi provide? | By setting your readline editing to either emacs (the default) or vi ( set -o vi ) you are essentially standardizing your editing commands, across the shell and your editor of choice 1 . Thus, if you want to edit a command in the shell you use the same commands 2 that you would if you were in your text editor. This means only having to remember one command syntax and (if that were not advantage enough) would probably make your editing in both environments faster and less error prone... You can further leverage this relationship in vi-mode by pulling up any command from your shell history, hitting Escape to enter command mode and then hitting v , which will open your $EDITOR with the command loaded for more complex editing with the full power of vim. Once you have finished editing the command to your satisfaction, :wq and the command is executed back in your shell. 1. Assuming, of course, that you use Emacs or Vi/m as your editor. 2. Or, more accurately, a subset thereof... | {
"source": [
"https://unix.stackexchange.com/questions/30454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10654/"
]
} |
30,465 | As far as I know vi is more commonly found on out-of-the-box unix systems while vim often has to be installed. Also vim stands for vi improved , but improved how? What are the main differences? | Vim tries to resemble the syntax and semantic of Vi command as much as possible. But being an "improved version", Vim adds new commands and features. It also changes the semantic of some Vi commands to better match the current expectations of its programmers. A detailed list of changes between vim and Vi can be found using the command :help compatible in Vim. | {
"source": [
"https://unix.stackexchange.com/questions/30465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
30,470 | I saw this in the end of an awesome shell script but I can't understand the logic here because I think it's being short-handed for a longer command. spark ${@:-`cat`} This apears at the end of this script . Any ideas? + Marks for some one who extends it into a full segment of code, even if its slower (Better for explanation) | It's the first special case of parameter substitution in man bash : ${parameter:-word} Use Default Values. If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted. In the case you mentioned, either the user has provided arguments on the command line and they will be used, or the user is asked to input them on standard input afterwards. | {
"source": [
"https://unix.stackexchange.com/questions/30470",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3368/"
]
} |
30,478 | Sometimes I need to divide one number by another. It would be great if I could just define a bash function for this. So far, I am forced to use expressions like echo 'scale=25;65320/670' | bc but it would be great if I could define a .bashrc function that looked like divide () {
bc -d $1 / $2
} | I have a handy bash function called calc : calc () {
bc -l <<< "$@"
} Example usage: $ calc 65320/670
97.49253731343283582089
$ calc 65320*670
43764400 You can change this to suit yourself. For example: divide() {
bc -l <<< "$1/$2"
} Note: <<< is a here string which is fed into the stdin of bc . You don't need to invoke echo . | {
"source": [
"https://unix.stackexchange.com/questions/30478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1389/"
]
} |
30,489 | On most FHS systems, there is a /tmp folder as well as a /var/tmp folder. What is the functional difference between the two? | /tmp is meant as fast (possibly small) storage with a short lifetime. Many systems clean /tmp very fast - on some systems it is even mounted as RAM-disk. /var/tmp is normally located on a physical disk, is larger and can hold temporary files for a longer time. Some systems also clean /var/tmp , but less often. Also note that /var/tmp might not be available in the early boot-process, as /var and/or /var/tmp may be mountpoints. Thus it is a little bit comparable to the difference between /bin and /usr/bin . The first is available during early boot - the latter after the system has mounted everything. So most boot-scripts will use /tmp and not /var/tmp for temporary files. Another (upcoming) location on Linux for temporary files is /dev/shm . | {
"source": [
"https://unix.stackexchange.com/questions/30489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
30,497 | I have basic understanding of chmod and CentOS file permissions. 777 stands for 111111111 and rwx for each root, group, user, etc. What I can't get my head around is setting up Apache, FTP and PHP to all work together correctly. I have a proFTPd server and Apache server. How should I setup the permissions on the proFTPd server so that Apache server can read and execute the PHP files? On my initial setup, the files uploaded by FTP cannot be read by Apache. Should I put the FTP user and the apache server into a group? How does the permission system work (FTP-Apache-PHP) on standard hosting systems? | /tmp is meant as fast (possibly small) storage with a short lifetime. Many systems clean /tmp very fast - on some systems it is even mounted as RAM-disk. /var/tmp is normally located on a physical disk, is larger and can hold temporary files for a longer time. Some systems also clean /var/tmp , but less often. Also note that /var/tmp might not be available in the early boot-process, as /var and/or /var/tmp may be mountpoints. Thus it is a little bit comparable to the difference between /bin and /usr/bin . The first is available during early boot - the latter after the system has mounted everything. So most boot-scripts will use /tmp and not /var/tmp for temporary files. Another (upcoming) location on Linux for temporary files is /dev/shm . | {
"source": [
"https://unix.stackexchange.com/questions/30497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
30,583 | When I try to telnet to a port on a server, and if there is no program listening on that port telnet dies with a "Unable to connect ... " error. I understand that. But, why do we need a firewall if there is no program listening on any ports? | There may not be a service running right now, but how about tomorrow? You have them all turned off, but what about your users? Anyone on a unix/windows/mac system can open a port > 1024 on any machine they have access to. What about malware? What about a virus? They can also open up ports and start serving information to the world, or start listening for connections from the network. A firewall's main purpose is not to block the ports for services you know are disabled, it is to block the ports on services you might not know about. Think of it as a default deny with only certain holes punched in for services you authorize. Any user or program started by a user can start a server on a system they have access to, a firewall prevents someone else from connecting to that service. A good admin knows what services need to be exposed, and can enable them. A firewall is mostly to mitigate the risk from unknown servers running on your system or your network, as well as to manage what is allowed into the network from a central place. It's important to know what is running on your machine/server and only enable what you need, but a firewall provides that extra bit of protection against the things you don't know about. | {
"source": [
"https://unix.stackexchange.com/questions/30583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254/"
]
} |
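To make the "default deny with a few holes punched" idea above concrete, a hedged iptables sketch; the allowed port and rule order are examples only, so adapt it before use and be careful not to lock yourself out of a remote machine.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # replies to traffic we started
iptables -A INPUT -i lo -j ACCEPT                                        # local loopback traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                            # the one service we mean to expose
iptables -P INPUT DROP    # everything else inbound, including ports opened by users or malware, is dropped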
30,693 | I'm not very familiar with all the tricks of grep/find/awk/xargs quite yet. I have some files matching a particular pattern, say *.xxx . These files are in random places throughout a certain directory. How can I find all such files, and move them to a folder in my home directory on Unix (that may not exist yet)? | mkdir ~/dst
find source -name "*.xxx" -exec mv -i {} -t ~/dst \; | {
"source": [
"https://unix.stackexchange.com/questions/30693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10122/"
]
} |
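An equivalent sketch that moves the matches in batches instead of running one mv per file; it assumes GNU mv (for -t) and a find that supports -exec ... +.
mkdir -p ~/dst
# -t names the destination up front so the list of matched files can come last
find source -name '*.xxx' -type f -exec mv -t ~/dst {} +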
30,724 | This might be a bad idea. The more I think about it the more I come to the realization that I probably shouldn't do it... but I've been trying and failing so I REALLY want to know how to do it, even if it's a bad idea. What I want is for the bashrc file to be sourced every time I run the clear command. The reason for this is completely materialistic. I have system information echoed out when I source bashrc and it's cool to me and I'd like that to be at the top every time I clear. I've tried to set up some aliases for clear but I keep running into infinite loops. The obvious fix is to change the aliases to something else besides clear so that I can run the clear command in the alias without interfering but I type clear so often that it's kind of ingrained in my brain at this point. I'd like to be able to type clear and make it clear AND source the bashrc file. | alias clear='source ~/.bashrc; \clear' The \ tells bash that you want to invoke the external command, not the alias. | {
"source": [
"https://unix.stackexchange.com/questions/30724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15045/"
]
} |
30,741 | ... and what are the differences between them? I formulated my question like this to make it clear I'm not interested in a flamewar of opinions, rather in an objective comparison between the different flavors of BSD Unix. Ideally I could get feedback from users who have experience in all of them. Background I recently discovered that there's much more to Unix than merely Linux. I use Solaris at work, it opened my eyes. Now I'm interested in new unices, I want to try a new one and I'm naturally curious about BSDs. The problem I'm not asking for advice or opinions on what BSD to install ; I want to know the differences (and common points) between them so I can make up my own mind. The problem is that it's difficult to get proper comparisons between them. If you're lucky, you get some hasty definition like this one: FreeBSD = Popular all-rounder.
NetBSD = Portable (runs on a lot of platforms, including a toaster)
OpenBSD = Security above anything else. (It might be true, but it's not really useful. I'm sure FreeBSD is portable and secure as well ...) If you're unlucky you get caught in one of those inevitable Unix legends about projects splitting, forking, rebranding on intellectual/moral grounds, how Theo de Raadt is an extremist and how MacOS X and FreeBSD had a common ancestor over 20 years ago. Fascinating, but not really informative, is it? The BSDs The BSDs I am interested in are: FreeBSD OpenBSD NetBSD and optionally Dragonfly Darwin ... My questions In order to understand the differences better, here's a list of somewhat related questions about the different distributions (can we use this term?). If you present your answer under some form of tabular data, you are my all-time hero! Do they use the same kernel? Do they use the same userland tools? (what are the differences, if any?) Do they use the same package/source management system? Do they use the same default shell? Are binaries portable between them? Are sources portable between them? Do they use different directory trees? How big are their respective communities? Are they the same order of magnitude? How much of the current development is common? What are the main incompatibilities between them? I don't know how easy those questions are to answer, and how relevant to the StackExchange format this question really is. I just never came across a simple document listing the differences between BSDs in a clear way, useful for fairly experienced users to look at and make a choice easily. | I don't think I will provide you and everyone with the perfect answer, however, using a BSD system everyday for work, I am sure I can give you a useful insight in the BSD world.
I didn't ever use NetBSD, I won't talk a lot about it. Do they use the same kernel? No, although there are similarities due to the historic forks. Each project evolved separately. Do they use the same userland tools? (what are the differences, if any?) They all follow POSIX. You can expect a set of tools to have the same functionality between *BSD.
It's also common to see some obvious differences in process/network management tools within the BSDs. Do they use the same package/source management system? They provide a packaging system, different for each OS. Do they use the same default shell? No, for example FreeBSD uses csh, OpenBSD uses ksh. Are binaries portable between them? No: (XXXX@freebsd-6 101)file `which ls`
/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (FreeBSD), for FreeBSD 5.5, dynamically linked (uses shared libs), stripped They don't really support stable and fast binary emulation. Don't rely on it. Are sources portable between them? Some yes, as long as you don't use kernel code or libc code (which is tied up tightly to the OS) for example. Do they use different directory trees? No, they are very similar to Linux here.
However FreeBSD advocates the use of /usr/local/etc for third party software's configuration files. OpenBSD puts all in /etc...
They put all third party in /usr/local, whereas Linux distribution will do as they see fit.
In general you can say that *BSD are very conservative about that, things belong where they belong, and that's not something to make up. How big are their respective communities? Are they the same order of magnitude? FreeBSD's is the largest and most active, you can reach it through a lot of different forums, mailing lists, IRC channels and such...
OpenBSD has a good community but mostly visible through IRC and mailing lists. Actually if you think you need a good community, FreeBSD is the way to go.
NetBSD and OpenBSD communities are centered around development, talk about new improvements etc. They don't really like to do basic user-support or advertising. They expect everyone to be advanced unix users and able to read the documentation before asking anything. How much of the current development is common? Due to really free licenses, code can flow among the projects, OpenBSD often patches their code following NetBSD (as their sources have a lot in common), FreeBSD takes and integrates OpenBSD's Packet Filter, etc. It's obviously harder when it comes to drivers and other kernel things. What are the main incompatibilities between them? They are not compatible in a binary form, but they are mostly compatible in syntax and code. You can rely on that to achieve portability in your code. It will build and/or execute easily on all flavors of BSD, except if you're going too close to the kernel (ifconfig, pfctl...). Here's how you can enjoy learning from the BSD world: Try to replace your home router with an openbsd box, play with pf and the network. You will see how easy it is to make what you want. It's clean, reliable and secure.
Use a FreeBSD as a desktop, they support a lot of GPUs, you can use flash to some extent, there's some compatibility with Linux binaries. You can safely build your custom kernel (actually this is recommended). It's overall a good learning experience.
Try NetBSD on very old hardware or even toasters . Although they are different, each of them tries to be a good OS, and it will match users more than situations. As a learning experience, try them all (Net/Open/Free), but later you might find yourself using only 1 for most situations (since you're more knowledgeable in a specific system or fit in more with the community). The other BSDs are hybrids or just slightly modified versions, I find it better to stay close to the source of the software development (use packet filter on OpenBSD, configure yourself your desktop on FreeBSD, ...). As a personal note, I'm happy to see an enthusiast like you, and I hope you will find a lot of good things in the BSD world. BSD is not about hating windows or other OSs, it's about liking Unix. | {
"source": [
"https://unix.stackexchange.com/questions/30741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
30,751 | I want to get a list of all environment variables (shell variables? exported variables?) and their values at a given time, in zsh. What is the proper way to do this? | It sounds like you want env . | {
"source": [
"https://unix.stackexchange.com/questions/30751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3850/"
]
} |
30,759 | If you were helping someone to learn the concept of pipes on the command line, what example would you use? The example that actually came up was as follows: cat whatever.txt | less I feel like that's not the best example, namely because there's only one step. What's a good, but fundamental, use of | ? Ideally the example I'll present will use programs that have outputs themselves that can be run independently and then shown piped together. | I'm going to walk you through a somewhat complex example, based on a real life scenario. Problem Let's say the command conky stopped responding on my desktop, and I want to kill it manually. I know a little bit of Unix, so I know that what I need to do is execute the command kill <PID> . In order to retrieve the PID, I can use ps or top or whatever tool my Unix distribution has given me. But how can I do this in one command? Answer $ ps aux | grep conky | grep -v grep | awk '{print $2}' | xargs kill DISCLAIMER: This command only works in certain cases. Don't copy/paste it in your terminal and start using it, it could kill processes unsuspectingly. Rather learn how to build it . How it works 1- ps aux This command will output the list of running processes and some info about them. The interesting info is that it'll output the PID of each process in its 2nd column. Here's an extract from the output of the command on my box: $ ps aux
rahmu 1925 0.0 0.1 129328 6112 ? S 11:55 0:06 tint2
rahmu 1931 0.0 0.3 154992 12108 ? S 11:55 0:00 volumeicon
rahmu 1933 0.1 0.2 134716 9460 ? S 11:55 0:24 parcellite
rahmu 1940 0.0 0.0 30416 3008 ? S 11:55 0:10 xcompmgr -cC -t-5 -l-5 -r4.2 -o.55 -D6
rahmu 1941 0.0 0.2 160336 8928 ? Ss 11:55 0:00 xfce4-power-manager
rahmu 1943 0.0 0.0 32792 1964 ? S 11:55 0:00 /usr/lib/xfconf/xfconfd
rahmu 1945 0.0 0.0 17584 1292 ? S 11:55 0:00 /usr/lib/gamin/gam_server
rahmu 1946 0.0 0.5 203016 19552 ? S 11:55 0:00 python /usr/bin/system-config-printer-applet
rahmu 1947 0.0 0.3 171840 12872 ? S 11:55 0:00 nm-applet --sm-disable
rahmu 1948 0.2 0.0 276000 3564 ? Sl 11:55 0:38 conky -q 2- grep conky I'm only interested in one process, so I use grep to find the entry corresponding to my program conky . $ ps aux | grep conky
rahmu 1948 0.2 0.0 276000 3564 ? Sl 11:55 0:39 conky -q
rahmu 3233 0.0 0.0 7592 840 pts/1 S+ 16:55 0:00 grep conky 3- grep -v grep As you can see in step 2, the command ps outputs the grep conky process in its list (it's a running process after all). In order to filter it, I can run grep -v grep . The option -v tells grep to match all the lines excluding the ones containing the pattern. $ ps aux | grep conky | grep -v grep
rahmu 1948 0.2 0.0 276000 3564 ? Sl 11:55 0:39 conky -q NB: I would love to know a way to do steps 2 and 3 in a single grep call. 4- awk '{print $2}' Now that I have isolated my target process. I want to retrieve its PID. In other words I want to retrieve the 2nd word of the output. Lucky for me, most (all?) modern unices will provide some version of awk , a scripting language that does wonders with tabular data. Our task becomes as easy as print $2 . $ ps aux | grep conky | grep -v grep | awk '{print $2}'
1948 5- xargs kill I have the PID. All I need is to pass it to kill . To do this, I will use xargs . xargs kill will read from the input (in our case from the pipe), form a command consisting of kill <items> ( <items> are whatever it read from the input), and then execute the command created. In our case it will execute kill 1948 . Mission accomplished. Final words Note that depending on what version of unix you're using, certain programs may behave a little differently (for example, ps might output the PID in column $3). If something seems wrong or different, read your vendor's documentation (or better, the man pages). Also be careful as long pipes can be dangerous. Don't make any assumptions especially when using commands like kill or rm . For example, if there was another user named 'conky' (or 'Aconkyous') my command may kill all his running processes too! What I'm saying is be careful, especially for long pipes. It's always better to build it interactively as we did here, than make assumptions and feel sorry later. | {
"source": [
"https://unix.stackexchange.com/questions/30759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15070/"
]
} |
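On the "NB" in the answer above (doing steps 2 and 3 with a single grep): two common shortcuts, keeping conky as the example process.
# the bracket trick: the pattern [c]onky matches "conky" but never the grep command itself
ps aux | grep '[c]onky' | awk '{print $2}' | xargs kill
# or drop the ps/grep/awk chain entirely
pgrep conky    # print the matching PIDs
pkill conky    # send them SIGTERM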
30,761 | File: i am someone1.
i am someone2.
i am someone3
~
~ Documentation says G takes me end of the file, but it only takes me to the beginning of last line. I want to be able to come to last character of the file, 3 in this case, and press a and type a period. How do I do that? | If you type A after G you will enter insert mode at the end of the last line. If you just want to go to the last character, then G-End will suffice | {
"source": [
"https://unix.stackexchange.com/questions/30761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4605/"
]
} |
30,903 | I'm having trouble with escaping characters in bash. I'd like to escape single and double quotes while running a command under a different user. For the purposes of this question let's say I want to echo the following on the screen: 'single quote phrase' "double quote phrase" How can I escape all the special chars, if I also need to switch to a different user: sudo su USER -c "echo \"'single quote phrase' \"double quote phrase\"\"" Of course, this doesn't produce the right result. | You can use the following string literal syntax: > echo $'\'single quote phrase\' "double quote phrase"'
'single quote phrase' "double quote phrase" From man bash Words of the form $'string' are treated specially. The word expands
to string, with backslash-escaped characters replaced as specified by the ANSI C standard. Backslash escape sequences, if present, are decoded as follows: \a alert (bell)
\b backspace
\e
\E an escape character
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\\ backslash
\' single quote
\" double quote
\nnn the eight-bit character whose value is the octal value nnn (one to three digits)
\xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits)
\cx a control-x character | {
"source": [
"https://unix.stackexchange.com/questions/30903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6442/"
]
} |
30,925 | It's taken me almost 10 years of Linux usage to ask this question. It was all trial and error and random late-night internet surfing. But people shouldn't need 10 years for this. If I were just starting out with Linux, I'd want to know: When to alias, when to script, and when to write a function? Where aliases are concerned, I use aliases for very simple operations that don't take arguments. alias houston='cd /home/username/.scripts/' That seems obvious. But some people do this: alias command="bash bashscriptname" (and add it to the .bashrc file) Is there a good reason to do that? I'm trying really hard, but I genuinely can't think of any circumstances in which I'd want to do that. So, if there is an edge case where that would make a difference, please answer below. Because that's where I would just put something in my PATH and chmod +x it, which is another thing that came after years of Linux trial-and-error. Which brings me to the next topic. For instance, I added a hidden folder ( .scripts/ ) in the home directory to my PATH by just adding a line to my .bashrc ( PATH=$PATH:/home/username/.scripts/ ), so anything executable in there automagically autocompletes. If I needed to. I don't really need that, though, do I? I would only use that for languages that are not the shell, like Python. If it's the shell, I can just write a function inside the very same .bashrc : funcname () {
somecommand -someARGS "$@"
} As I stated, I found a lot of this out through trial and error. And I only truly saw the beauty of functions when my computer died and I was forced to use the computers of the people around me when they weren't using them. Instead of moving a whole directory of scripts from computer to computer, I ended up just replacing everyone else's .bashrc with my own, since they had never even made a single modification. But did I miss anything? So, what would you tell a beginning Linux user about when to alias, when to script, and when to write a function? If it's not obvious, I'm assuming the people who answer this will make use of all three options. If you only use aliases, or only use scripts, or only use functions— or if you only use aliases and scripts or aliases and functions or scripts and functions —this question isn't really aimed at you. | An alias should effectively not (in general) do more than change the default options of a command. It is nothing more than simple text replacement on the command name. It can't do anything with arguments but pass them to the command it actually runs. So if you simply need to add an argument at the front of a single command, an alias will work. Common examples are # Make ls output in color by default.
alias ls="ls --color=auto"
# make mv ask before overwriting a file by default
alias mv="mv -i" A function should be used when you need to do something more complex than an alias but that wouldn't be of use on its own. For example, take this answer on a question I asked about changing grep 's default behavior depending on whether it's in a pipeline: grep() {
if [[ -t 1 ]]; then
command grep -n "$@"
else
command grep "$@"
fi
} It's a perfect example of a function because it is too complex for an alias (requiring different defaults based on a condition), but it's not something you'll need in a non-interactive script. If you get too many functions or functions too big, put them into separate files in a hidden directory, and source them in your ~/.bashrc : if [ -d ~/.bash_functions ]; then
for file in ~/.bash_functions/*; do
. "$file"
done
fi A script should stand on its own. It should have value as something that can be re-used, or used for more than one purpose. | {
"source": [
"https://unix.stackexchange.com/questions/30925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1389/"
]
} |
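For the final point of the answer above ("a script should stand on its own"), a minimal sketch of such a script; the name backup-notes and the paths are invented for illustration, and it assumes ~/bin is on your PATH.
mkdir -p ~/bin
cat > ~/bin/backup-notes <<'EOF'
#!/bin/sh
# Stand-alone script: callable from cron, other scripts, or any interactive shell.
set -eu
src=${1:-"$HOME/notes"}
dest=${2:-"$HOME/backup/notes-$(date +%F).tar.gz"}
mkdir -p "$(dirname "$dest")"
tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
echo "backed up $src to $dest"
EOF
chmod +x ~/bin/backup-notes
backup-notes    # now it can be run like any other command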
30,953 | I often find myself sending folders with 10K - 100K of files to a remote machine (within the same network on-campus). I was just wondering if there are reasons to believe that, tar + rsync + untar Or simply tar (from src to dest) + untar could be faster in practice than rsync when transferring the files for the first time . I am interested in an answer that addresses the above in two scenarios: using compression and not using it. Update I have just run some experiments moving 10,000 small files (total size = 50 MB), and tar+rsync+untar was consistently faster than running rsync directly (both without compression). | When you send the same set of files, rsync is better suited because it will only send differences. tar will always send everything and this is a waste of resources when a lot of the data are already there. The tar + rsync + untar loses this advantage in this case, as well as the advantage of keeping the folders in-sync with rsync --delete . If you copy the files for the first time, first packeting, then sending, then unpacking (AFAIK rsync doesn't take piped input) is cumbersome and always worse than just rsyncing, because rsync won't have to do any task more than tar anyway. Tip: rsync version 3 or later does incremental recursion, meaning it starts copying almost immediately before it counts all files. Tip2: If you use rsync over ssh , you may also use either tar+ssh tar -C /src/dir -jcf - ./ | ssh user@server 'tar -C /dest/dir -jxf -' or just scp scp -Cr srcdir user@server:destdir General rule, keep it simple. UPDATE: I've created 59M demo data mkdir tmp; cd tmp
for i in {1..5000}; do dd if=/dev/urandom of=file$i count=1 bs=10k; done and tested several times the file transfer to a remote server (not in the same lan), using both methods time rsync -r tmp server:tmp2
real 0m11.520s
user 0m0.940s
sys 0m0.472s
time (tar cf demo.tar tmp; rsync demo.tar server: ; ssh server 'tar xf demo.tar; rm demo.tar'; rm demo.tar)
real 0m15.026s
user 0m0.944s
sys 0m0.700s while keeping separate logs from the ssh traffic packets sent wc -l rsync.log rsync+tar.log
36730 rsync.log
37962 rsync+tar.log
74692 total In this case, I can't see any advantage in less network traffic by using rsync+tar, which is expected when the default mtu is 1500 and while the files are 10k size. rsync+tar had more traffic generated, was slower for 2-3 seconds and left two garbage files that had to be cleaned up. I did the same tests on two machines on the same lan, and there the rsync+tar did much better times and much much less network traffic. I assume cause of jumbo frames. Maybe rsync+tar would be better than just rsync on a much larger data set. But frankly I don't think it's worth the trouble, you need double space in each side for packing and unpacking, and there are a couple of other options as I've already mentioned above. | {
"source": [
"https://unix.stackexchange.com/questions/30953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
30,970 | I want to make a fresh new copy of a large number of files from one local drive to another. I've read that rsync does a checksum comparison of files when sending them to a remote machine over a network. Will rsync make the comparison when copying the files between two local drives? If it does do a verification - is it a safe bet? Or is it better to do a byte by byte comparison? | rsync always uses checksums to verify that a file was transferred correctly. If the destination file already exists, rsync may skip updating the file if the modification time and size match the source file, but if rsync decides that data need to be transferred, checksums are always used on the data transferred between the sending and receiving rsync processes. This verifies that the data received are the same as the data sent with high probability, without the heavy overhead of a byte-level comparison over the network. Once the file data are received, rsync writes the data to the file and trusts that if the kernel indicates a successful write, the data were written without corruption to disk. rsync does not reread the data and compare against the known checksum as an additional check. As for the verification itself, for protocol 30 and beyond (first supported in 3.0.0), rsync uses MD5 . For older protocols, the checksum used is MD4 . While long considered obsolete for secure cryptographic hashes, MD5 and MD4 remain adequate for checking file corruption. Source: the man page and eyeballing the rsync source code to verify. | {
"source": [
"https://unix.stackexchange.com/questions/30970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15169/"
]
} |
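As a follow-up to the answer above: if you want rsync itself to re-read files that already exist on the destination and compare them by checksum instead of by size and modification time, the -c option does that (much slower, since every file is read on both sides). The paths are placeholders.
rsync -av  /mnt/source/ /mnt/destination/    # normal run: skips files whose size and mtime already match
rsync -avc /mnt/source/ /mnt/destination/    # checksum run: re-reads and compares existing files too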
30,974 | I successfully installed NVIDIA-Linux-x86_64-290.10.run on my Debian Squeeze desktop but I do not see /var/log/nvidia-installer.log clearly stating which file(s) have been added/replaced. Does anybody know which file(s) it installs or modifies ? | rsync always uses checksums to verify that a file was transferred correctly. If the destination file already exists, rsync may skip updating the file if the modification time and size match the source file, but if rsync decides that data need to be transferred, checksums are always used on the data transferred between the sending and receiving rsync processes. This verifies that the data received are the same as the data sent with high probability, without the heavy overhead of a byte-level comparison over the network. Once the file data are received, rsync writes the data to the file and trusts that if the kernel indicates a successful write, the data were written without corruption to disk. rsync does not reread the data and compare against the known checksum as an additional check. As for the verification itself, for protocol 30 and beyond (first supported in 3.0.0), rsync uses MD5 . For older protocols, the checksum used is MD4 . While long considered obsolete for secure cryptographic hashes, MD5 and MD4 remain adequate for checking file corruption. Source: the man page and eyeballing the rsync source code to verify. | {
"source": [
"https://unix.stackexchange.com/questions/30974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15171/"
]
} |
31,008 | I have four files that I created using an svndump test.svn
test2.svn
test.svn.gz
test2.svn.gz now when I run this md5sum test2.svn test.svn test.svn.gz test2.svn.gz Here is the output 89fc1d097345b0255825286d9b4d64c3 test2.svn
89fc1d097345b0255825286d9b4d64c3 test.svn
8284ebb8b4f860fbb3e03e63168b9c9e test.svn.gz
ab9411efcb74a466ea8e6faea5c0af9d test2.svn.gz So I can't understand why gzip is compressing the files differently. Is it putting a timestamp somewhere before compressing? I had a similar issue with mysqldump as it was using the date field on top | gzip stores some of the original file's metadata in its record header, including the file modification time and filename, if available. See the GZIP file format specification . So it's expected that your two gzip files aren't identical. You can work around this by passing gzip the -n flag, which stops it from including the original filename and timestamp in the header. | {
"source": [
"https://unix.stackexchange.com/questions/31008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3368/"
]
} |
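A quick demonstration of the -n workaround described above, reusing the file names from the question; the exact header contents can vary between gzip versions, so treat this as a sketch.
md5sum test.svn test2.svn                 # identical content, different names and mtimes
gzip -c  test.svn  > test.svn.gz
gzip -c  test2.svn > test2.svn.gz
md5sum test.svn.gz test2.svn.gz           # differ: the header records each input's name/timestamp
gzip -cn test.svn  > test.svn.gz
gzip -cn test2.svn > test2.svn.gz
md5sum test.svn.gz test2.svn.gz           # identical: -n leaves the name and timestamp out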
31,071 | I tried writing a shell script which can do automatic login into a ssh server using password which is mentioned in the script. I have written the following code: set timeout 30
/usr/bin/ssh -p 8484 [email protected]
expect
{
"[email protected]'s password"
{
send "password\r"
}
} This code is not running properly, still it is asking for the password. Can somebody please help me in solving this | I once wrote an expect script to log in to a ssh server (like your case) and my script was something like this: #!/usr/bin/expect
spawn ssh [email protected]
expect "password"
send "MyPassword\r"
interact I think maybe the interact is missing in your script. | {
"source": [
"https://unix.stackexchange.com/questions/31071",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3979/"
]
} |
31,107 | How can I kill a process based on its command line arguments? killall , pgrep , and pkill seem to only work based on the process name. I need this to be able to differentiate between a number of applications running inside Java virtual machines, where java is the process name for all of them and the actual application name can be found by looking at the command line arguments. This can be done manually with ps aux | grep myapp.jar and then manually killing the pid from the output, but I'd like a command to do something equivalent automatically. | pgrep / pkill take a -f flag. From the man page: -f The pattern is normally only matched against the process name.
When -f is set, the full command line is used. For example: $ sleep 30& sleep 60&
[1] 8007
[2] 8008
$ pkill -f 'sleep 30'
[1] - terminated sleep 30
$ pgrep sleep
8008 | {
"source": [
"https://unix.stackexchange.com/questions/31107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15344/"
]
} |
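Applied to the Java scenario from the question, where every application shows up as java and only the arguments differ; myapp.jar is the question's placeholder name.
pgrep -f 'myapp\.jar'    # list the matching PIDs first to confirm the pattern
pkill -f 'myapp\.jar'    # then signal only the JVM whose command line mentions myapp.jar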
31,114 | I have link -> file I do find -L . -name 'link' And get ./link Why is that? man find says: -L : Follow symbolic links. When find examines or prints information about files, the information used shall be
taken from the properties of the file to which the link points, not from the link itself (unless it is a broken
symbolic link or find is unable to examine the file to which the link points). | The general rule is that if a command operates on links (i.e. directory entries, which are pointers to inodes) then the command treats symlinks as themselves rather than as the object the link points to. Otherwise the command operates on what the symlink points to. Thus cp follows symlinks by default and copies the contents of the file pointed to by the link. But when you ask cp to deal with directory entries by specifying -R , it stops following symlinks. mv always works with directory entries, and so it never follows symlinks. The find command's normal activity is to operate on directory entries, so symlinks are not followed by default. Adding -L causes find to follow symlinks for all properties except the one that cannot be ignored when doing directory search, the name. One of the purposes of find -name is to provide input for commands like mv and rm , which operate on directory entries. There would be unpleasant and surprising results if find -L dir -name could produce names that pointed outside the directory tree rooted at dir . | {
"source": [
"https://unix.stackexchange.com/questions/31114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7591/"
]
} |
31,118 | The UNIX system call for process creation, fork(), creates a child process by copying the parent process. My understanding is that this is almost always followed by a call to exec() to replace the child process' memory space (including text segment). Copying the parent's memory space in fork() always seemed wasteful to me (although I realize the waste can be minimized by making the memory segments copy-on-write so only pointers are copied). Anyway, does anyone know why this duplication approach is required for process creation? | It's to simplify the interface. The alternative to fork and exec would be something like Windows' CreateProcess function. Notice how many parameters CreateProcess has, and many of them are structs with even more parameters. This is because everything you might want to control about the new process has to be passed to CreateProcess . In fact, CreateProcess doesn't have enough parameters, so Microsoft had to add CreateProcessAsUser and CreateProcessWithLogonW . With the fork/exec model, you don't need all those parameters. Instead, certain attributes of the process are preserved across exec . This allows you to fork , then change whatever process attributes you want (using the same functions you'd use normally), and then exec . In Linux, fork has no parameters, and execve has only 3: the program to run, the command line to give it, and its environment. (There are other exec functions, but they're just wrappers around execve provided by the C library to simplify common use cases.) If you want to start a process with a different current directory: fork , chdir , exec . If you want to redirect stdin/stdout: fork , close/open files, exec . If you want to switch users: fork , setuid , exec . All these things can be combined as needed. If somebody comes up with a new kind of process attribute, you don't have to change fork and exec . As larsks mentioned, most modern Unixes use copy-on-write, so fork doesn't involve significant overhead. | {
"source": [
"https://unix.stackexchange.com/questions/31118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15243/"
]
} |
31,161 | I would like to frequently switch between directories that are in totally unrelated paths, for example /Project/Warnest/docs/ and ~/Dropbox/Projects/ds/test/ . But I don't want to type cd /[full-path]/ all the time. Are there any shortcut commands to switch to previously worked directories? One solution I could think of is to add environment variables to my bash .profile for the frequently used directories and cd to them using those variables. But is there any other solution to this? | If you're just switching between two directories, you can use cd - to go back and forth. | {
"source": [
"https://unix.stackexchange.com/questions/31161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13683/"
]
} |
31,224 | How do I recursively grep files within a given folder except for a couple of file types? For example, I'm looking for a string within my workspace folder but it ends up searching inside sql files and generates serialized strings. So in this case, I'd like to grep the workspace folder except sql files. I'm preferably looking for a one-liner if possible. | If you have GNU grep you can use the --exclude=GLOB option, like grep -r --exclude='*.sql' pattern dir/ | {
"source": [
"https://unix.stackexchange.com/questions/31224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7768/"
]
} |
31,248 | After pushd ing too many times, I want to clear the whole stack of paths. How would I popd all the items in the stack? I'd like to popd without needing to know how many are in the stack. The bash manual doesn't seem to cover this . Why do I need to know this? I'm fastidious and want to clean out the stack. | dirs -c is what you are looking for. | {
"source": [
"https://unix.stackexchange.com/questions/31248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7768/"
]
} |
31,306 | I want to do a grep for \resources\ . How do I do this? I've tried: grep \resources\
grep \\resources\\
grep "\resources\"
grep "\\resources\\" None of these work . | The backslash is a special character for many applications: including the shell: you need to escape it using another backslash or more elegantly, using single quotes when possible: $ printf '%s\n' foo\\bar 'foo\bar'
foo\bar
foo\bar Here the command received two arguments with value foo\bar , which were echoed as-is on the terminal. (Above, I used printf instead of echo as many echo implementations also do their own interpreting of backslash (here would expand \b into a backspace character)). But backslash is also a special character for grep . This command recognizes many special sequences like \( , \| , \. , and so on. So similarly you need to feed grep with a double \\ for an actual backslash character. This means that using the shell you need to type: grep 'foo\\bar' or equivalently: grep foo\\\\bar (both lines tell the shell to transmit foo\\bar as argument to grep ). Many other commands interpret backslashes in some of their arguments… and two levels of escaping are needed (one to escape the shell interpretation, one to escape the command interpretation). By the way, for the shell, single quotes '…' prevent any kind of character interpretation, but double quotes only prevents some of them: in particular $ , ` and \ remain active characters within "…" . | {
"source": [
"https://unix.stackexchange.com/questions/31306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15361/"
]
} |
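For the original \resources\ case there is also a shortcut that sidesteps grep's own backslash handling entirely: -F makes the pattern a fixed string, so only the shell's quoting matters.
grep -F '\resources\' filename     # fixed-string search: the backslashes are literal to grep
grep '\\resources\\' filename      # equivalent regex form, escaped as explained above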
31,312 | Need some pointing in right direction on script to fetch and regex or sed. Site24x7 provides a URL with a CSV list of their source IP's used for monitoring. (they also provide other formats, CSV seems the least messed up as their structure leaves a lot to be desired. https://www.site24x7.com/multi-location-web-site-monitoring.html ) Like so: Country,City,IP Address External
Australia,Sydney,"101.0.67.53"
Australia,Melbourne,"125.214.65.59"
Belgium,Brussels,"87.238.165.164"
Brazil,São Paulo,"200.170.83.170"
Brazil,Rio de Janeiro,"201.20.20.237"
Canada,Toronto,"208.69.56.166,
208.69.56.171,
208.69.56.172 "
Canada,Montreal,"199.204.45.153,
199.204.45.154,
199.204.45.155,
199.204.45.156" I need to save it as an Allow include file in apache. Like so: Allow from \
72.5.230.111 \
72.5.230.65 \
72.5.230.84 | The backslash is a special character for many applications: including the shell: you need to escape it using another backslash or more elegantly, using single quotes when possible: $ printf '%s\n' foo\\bar 'foo\bar'
foo\bar
foo\bar Here the command received two arguments with value foo\bar , which were echoed as-is on the terminal. (Above, I used printf instead of echo as many echo implementations also do their own interpreting of backslash (here would expand \b into a backspace character)). But backslash is also a special character for grep . This command recognizes many special sequences like \( , \| , \. , and so on. So similarly you need to feed grep with a double \\ for an actual backslash character. This means that using the shell you need to type: grep 'foo\\bar' or equivalently: grep foo\\\\bar (both lines tell the shell to transmit foo\\bar as argument to grep ). Many other commands interpret backslashes in some of their arguments… and two levels of escaping are needed (one to escape the shell interpretation, one to escape the command interpretation). By the way, for the shell, single quotes '…' prevent any kind of character interpretation, but double quotes only prevents some of them: in particular $ , ` and \ remain active characters within "…" . | {
"source": [
"https://unix.stackexchange.com/questions/31312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15363/"
]
} |
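Since the CSV-to-Allow-list conversion asked about here is not covered above, a hedged sketch; site24x7.csv and allow-site24x7.conf are assumed file names and the pattern is only a rough IPv4 match, so review the output before using it.
# pull every IPv4-looking token out of the CSV (one per line) and build the Allow block,
# adding a trailing backslash to every line except the last one
{
  echo 'Allow from \'
  grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' site24x7.csv | sed '$!s/$/ \\/'
} > allow-site24x7.conf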
31,334 | I'm reading a book, it says: Every process has at least three communication channels available to it: “standard
input” (STDIN), “standard output” (STDOUT), and “standard error” (STDERR). Most commands accept their input from STDIN and write their output to STDOUT.
They write error messages to STDERR. This convention lets you string
commands together like building blocks to create composite pipelines. The shell interprets the symbols < , > , and >> as instructions to reroute a command’s input or output to or from a file. To connect the STDOUT of one command to the STDIN of another, use the | symbol, commonly known as a pipe. ps -ef | grep httpd So basically what this is saying is that standard input is a command that allows user to write to a file, while standard output is a command that has the bash shell write output to the shell, and standard error is just like output but it is only invoked when there is an error in file system. Then we get to the part of connecting STDOUT and STDIN and I'm lost. | Standard input and standard output are not commands. Imagine commands as machines in a factory with an assembly line. Most machines are designed to have one conveyor belt to feed data in and one conveyor belt to feed data out; they are the standard input and the standard output respectively. The standard error is an opening on the side of the machine where it can eject rejects. +-------+ +------------------+ +------------------+ +------+
| input | | machine A | | machine B | |output|
| reser |=====|<stdin stdout>|=======|<stdin stdout>|=====|bucket|
| ‑voir | → | stderr | → | stderr | → | |
+-------+ +------------------+ +------------------+ +------+
|| || The diagram above shows a conveyor belt that goes through two machines. The data comes from the input reservoir on the left, is fed to machine A, then the output is conveyed further to machine B (for which it is input), and machine B's output is deposited in the output bucket on the right. In unix terms, this is called a pipeline . The metaphor is that of plumbing: a pipe connects machine A to machine B. The shell syntax for the pipeline above is <input-file.txt commandA | commandB >output-file.txt The < redirection symbol tells the shell to connect commandA 's standard input to the file input-file.txt before launching commandA . (You can put the redirection before or after the command name.) The > redirection symbol tells the shell to connect commandB 's standard output to output-file.txt . The pipe (" | ") symbol in the middle tells the shell to connect commandA 's standard output to commandB 's standard input before launching them. Commands can have more than one input and more than one output, but that's material for another day . | {
"source": [
"https://unix.stackexchange.com/questions/31334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15378/"
]
} |
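A concrete version of the pipeline syntax above that also shows the "reject chute" (standard error); commandA, commandB and the file names are the same placeholders used in the answer.
# commandA reads input-file.txt, commandB writes output-file.txt,
# and each command's error messages land in their own log instead of the pipe
<input-file.txt commandA 2>a-errors.log | commandB >output-file.txt 2>b-errors.log
# or merge stderr into stdout so errors ride the same conveyor belt as the data
commandA <input-file.txt 2>&1 | commandB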
31,378 | When I restart httpd, I get the following error. What am I missing? [root@localhost ~]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: Syntax error on line 22 of /etc/httpd/conf.d/sites.conf:
Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration I have installed mod_ssl using yum install mod_ssl openssh Package 1:mod_ssl-2.2.15-15.el6.centos.x86_64 already installed and latest version
Package openssh-5.3p1-70.el6_2.2.x86_64 already installed and latest version My sites.conf looks like this <VirtualHost *:80>
# ServerName shop.itmanx.com
ServerAdmin [email protected]
DocumentRoot /var/www/html/magento
<Directory /var/www/html>
Options -Indexes
AllowOverride All
</Directory>
ErrorLog logs/shop-error.log
CustomLog logs/shop-access.log
</VirtualHost>
<VirtualHost *:443>
ServerName secure.itmanx.com
ServerAdmin [email protected]
SSLEngine on
SSLCertificateFile /etc/httpd/ssl/secure.itmanx.com/server.crt
SSLCertificateKeyFile /etc/httpd/ssl/secure.itmanx.com/server.key
SSLCertificateChainFile /etc/httpd/ssl/secure.itmanx.com/chain.crt
DocumentRoot /var/www/html/magento
<Directory /var/www/html>
Options -Indexes
AllowOverride All
</Directory>
ErrorLog logs/shop-ssl-error.log
CustomLog logs/shop-ssl-access.log
</VirtualHost> | Probably you do not load the ssl module. You should have a LoadModule directive somewhere in your apache configuration files. Something like: LoadModule ssl_module /usr/lib64/apache2-prefork/mod_ssl.so Usually apache configuration template has (on any distribution) a file called (something like) loadmodule.conf in which you should find a LoadModule directive for each module you load into apache at server start. | {
"source": [
"https://unix.stackexchange.com/questions/31378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15401/"
]
} |
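A quick way to check the diagnosis in the answer above on a CentOS-style layout; the paths are typical defaults rather than guaranteed locations.
httpd -M 2>/dev/null | grep -i ssl    # is ssl_module among the loaded modules?
grep -R 'mod_ssl.so' /etc/httpd/      # locate (or confirm the absence of) the LoadModule line
# on CentOS the mod_ssl package normally ships /etc/httpd/conf.d/ssl.conf containing:
#   LoadModule ssl_module modules/mod_ssl.so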