Dataset columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
118,811
I installed Debian onto my machine last night. Now, I don't understand why I can't run GUI apps from a terminal when running as root. For example: sudo -i glxgears Generates the following output: No protocol specified Error: couldn't open display :0 But when I first open the terminal I can run glxgears from the user account. It's only after I do sudo -i that the problem crops up. This happens for any GUI app that I try to run. I think it's probably related to X11, but I'm not sure.
Accessing the X server requires two things: the $DISPLAY variable pointing to the correct display (usually :0 ) and proper authentication information. The authentication information can be explicitly specified via $XAUTHORITY , and defaults to ~/.Xauthority otherwise. If $DISPLAY and $XAUTHORITY are set for your user, sudo will set them for the new shell, too, and everything should work fine. If they are not set, they will probably default to the wrong values and you cannot start any X applications. In Debian $XAUTHORITY is usually not set explicitly. Just add export XAUTHORITY=~/.Xauthority to your .bashrc or explicitly say XAUTHORITY=~/.Xauthority sudo ... and everything should work. You can also use xauth list to check whether proper authentication information is available.
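A minimal recap of that fix as runnable commands (a sketch assuming a bash shell and the default ~/.Xauthority location):
    XAUTHORITY=~/.Xauthority sudo -i glxgears    # pass the authority file explicitly for one command
    xauth list                                   # check which displays have authentication cookies available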
{ "source": [ "https://unix.stackexchange.com/questions/118811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38705/" ] }
118,853
Please compare the following two lines: -rws---r-x 1 root root 21872 2009-10-13 21:06 prg1 -rwx---r-x 1 root root 21872 2009-10-13 21:06 prg2 Does the setuid bit on prg1 , along with the read and execute bits for 'other' mean that any user can run it with root privileges? The prg2 also has read and execute for 'other', but does not have the setuid bit set, so does that mean it can still be run by any user but without root privileges?
That is the "setuid" bit, which tells the OS to execute that program with the userid of its owner. This is typically used with files owned by root to allow normal users to execute them as root with no external tools (such as sudo ). You can set the setuid bit using chmod , e.g. chmod 4755 , which will give a file the normal permissions that 755 gives ( rwxr-xr-x ) and add the setuid bit to give rwsr-xr-x . You can clear the setuid bit by issuing a normal chmod command with a 0 prepended to it. For example, to set permissions back to rwxr-xr-x you would use chmod 0755 .
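A short illustration of setting and clearing the bit, assuming a test binary named prg2 that you own:
    chmod 4755 prg2    # or, in symbolic form: chmod u+s prg2
    ls -l prg2         # the mode now shows rwsr-xr-x
    chmod 0755 prg2    # or: chmod u-s prg2 (clears the setuid bit again)
    ls -l prg2         # back to rwxr-xr-x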
{ "source": [ "https://unix.stackexchange.com/questions/118853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62316/" ] }
118,856
I'm writing a shell script for Linux, using Bash, to translate any video-file into a MP4. For that, I'm using avconv with libvorbis for audio. Inside my script, I have a question for the user : read -p "- Audio Quality [scale from -2 to 10] ? " if [ -n "$REPLY" ] ; then ABITRATE="-aq $REPLY" fi My "ABITRATE" string goes into the final avconv command-line. But I would like to give the user the opportunity to answer that question with a value in Kb (Kilobit), and translate it into the scale that libvorbis uses. The "scale from -2 to 10" is this : Quality Kbit/s Normalization ----------------------------- -2 ~32 y -1 ~48 y 0 ~64 y 1 ~80 y 2 ~96 y 3 ~112 y 4 ~128 n 5 ~160 n 6 ~192 n 7 ~224 n 8 ~256 n 9 ~320 n 10 ~500 n I would like to know how to check if my $REPLY is in a range of number. For example, I would like my script to do something like this : if [ $REPLY is a number between 1 and 32 ] ; then REPLY="-2" elif [ $REPLY is a number between 33 and 48 ] ; then REPLY="-1" fi Is this possible (I'm willing to say 'yes of course, shouldn't be hard' but I don't know the syntax to use) ?
The [ command/shell builtin has comparison tests, so you can just do if [ "$REPLY" -ge 1 ] && [ "$REPLY" -le 32 ]; then REPLY=-2; elif [ "$REPLY" -ge 33 ] && [ "$REPLY" -le 48 ]; then REPLY=-1; fi where -ge means greater-or-equal-to (and so on). The [ command is just a command, not special syntax (it's actually the same as test : check out man test ), so it NEEDS the space after it. If you write [$REPLY it will try to find a command named [$REPLY and execute it, which won't work. The same goes for the closing ] . Here, we're using the && shell operator to run the second command only if the first is successful. [ also supports -a to AND two tests together, but it's deprecated and its usage should be discouraged as it causes arguments not to be parseable reliably. Edit: to test whether the number is an integer (if that can happen in your code), first do the test if [[ "$REPLY" =~ ^[0-9]+$ ]]; then existing code else echo "$REPLY is not an integer" >&2 && exit 1; fi Of course all these bracket expressions return 0 (true) or 1 (false) and can be combined. Not only can you put everything in the same bracket, you can also do if [[ "$REPLY" =~ ^[0-9]+$ ]] && [ "$REPLY" -ge 1 ] && [ "$REPLY" -le 32 ]; then ... or something similar.
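Building on those comparisons, here is one way the kbit-to-quality mapping from the question could be sketched in bash; the cut-off values are simply the Kbit/s column from the table in the question, so treat them as adjustable assumptions:
    if [[ "$REPLY" =~ ^[0-9]+$ ]]; then
        # interpret the reply as a bitrate in kbit/s and map it to the libvorbis -2..10 scale
        if   [ "$REPLY" -le 32 ];  then REPLY=-2
        elif [ "$REPLY" -le 48 ];  then REPLY=-1
        elif [ "$REPLY" -le 64 ];  then REPLY=0
        elif [ "$REPLY" -le 80 ];  then REPLY=1
        elif [ "$REPLY" -le 96 ];  then REPLY=2
        elif [ "$REPLY" -le 112 ]; then REPLY=3
        elif [ "$REPLY" -le 128 ]; then REPLY=4
        elif [ "$REPLY" -le 160 ]; then REPLY=5
        elif [ "$REPLY" -le 192 ]; then REPLY=6
        elif [ "$REPLY" -le 224 ]; then REPLY=7
        elif [ "$REPLY" -le 256 ]; then REPLY=8
        elif [ "$REPLY" -le 320 ]; then REPLY=9
        else                            REPLY=10
        fi
        ABITRATE="-aq $REPLY"
    fi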
{ "source": [ "https://unix.stackexchange.com/questions/118856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62317/" ] }
118,880
For uninstalling an app (or package) should I use apt-get remove package-name or apt-get purge package-name ? What is advantage of any of them to other one?
If you have customized the package/software at all, either by editing the config files directly, or via a GUI, you may want to keep your customizations. Usually in Unix/Linux systems, configurations are saved in text files, even if the configuration/customization is done via the GUI. Each Debian binary deb package has a list of files which it identifies as config files. dpkg , and thus apt honor this identification when removing packages, and also on upgrades. By default apt/dpkg will not remove config files on package removal. You have to request a purge. On upgrade it will ask you to choose between the current version and the new version (if they differ) before overwriting config files. Even in that case, it saves a copy of the original file. Here Debian is trying to help you, based on the assumption that your config files may contain valuable information. So, if you have not configured the package, or you don't want to keep your configurations, you can use apt-get purge . If you do keep the config files, then if/when you reinstall the package, Debian will attempt to reuse the saved configuration information. If the version of the package you are trying to (re)install has config files that conflict with the configuration files that are already installed, it will again ask you before overwriting, as it does on upgrade. Minor comment: If you have removed the package and later want to remove the config files,it used to be the case that apt would not remove the config files if the package was not installed. However, for some years now, running apt-get purge will remove config files even if the package is no longer installed. This was fixed in the 0.8.0~pre1 version of apt, released on Fri, 13 Aug 2010, or possibly in the 0.8.15~exp1 version of apt, released Fri, 10 Jun 2011. See Debian Bug Report: apt-get --purge does not work as expected , dated 24th June 2002.
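A brief illustration of the difference, using a hypothetical package name; the last command lists packages in dpkg's "rc" state, i.e. removed but with config files still present:
    apt-get remove somepackage          # removes the package, keeps its config files
    apt-get purge somepackage           # removes the package and its config files
    dpkg -l | awk '/^rc/ {print $2}'    # show removed-but-not-purged packages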
{ "source": [ "https://unix.stackexchange.com/questions/118880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55747/" ] }
118,883
As far as I understand, the following command: sudo rsync --delete -azvr /home/oshiro/Desktop/source/ /home/oshiro/Desktop/destination Is this all I need to create a simple synchronisation of files from one location to another? Or does the above command do anything more in the background which I don't know about? For example, does it create some sort of versioning, where I can specify a past time to get files as they were in the past? That's a feature which I do not want to enable at this stage. All I want is a bit-identical copy of the files from one location to another. Can I get rid of any of those parameters, or do I need to add further parameters to make the copies bit-identical with no versioning? Basically, I don't want to create backups the way the Mac Time Machine backup system creates backups.
Rsync doesn't do any kind of versioning or keep any history unless instructed with options such as --backup . There are backup tools that use rsync, but rsync itself isn't a backup tool any more than four wheels make a car. Rsync just handles the synchronization. Regarding the options you used or might want to use: -a means “copy almost everything” (copy directories recursively, copy symbolic links as such, preserve all metadata, etc.). Use this option unless you're doing something unusual. In addition to -a , you may want to use -H to preserve hard links, -A to preserve ACLs ( -a only preserves traditional unix permissions), or -X to preserve extended attributes. -r is already included in -a . -v means verbose. -z is useless for a local copy. --delete deletes files in the destination that are not present in the source. So this is the basic command to make the destination identical to the source (absent hard links, ACLs and extended attributes): rsync -a --delete SOURCE/ DESTINATION/
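Putting those options together, a sketch of a fuller invocation; the -H, -A and -X flags are only needed if you care about hard links, ACLs and extended attributes, and --dry-run (also spelled -n) lets you preview what would change before touching anything:
    rsync -aHAX --delete --dry-run SOURCE/ DESTINATION/    # preview only
    rsync -aHAX --delete SOURCE/ DESTINATION/              # actually synchronise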
{ "source": [ "https://unix.stackexchange.com/questions/118883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4430/" ] }
118,933
If a shell is asked to perform a probably useless ( or partially useless ) command known to terminate, such as cat hugeregularfile.txt > /dev/null , can it skip that command's execution ( or execute a cheaper equivalent, say, touch -a hugeregularfile.txt )? More generally, is the shell similar to C compilers in that it may perform any transformation on the source code, so long as the externally observable behaviour is as-if the abstract machine evaluated it? EDIT Nota Bene: My question as originally posed had a title that asked whether the shell is permitted to do these optimizations, not whether it should or even whether implementations that can do them exist. I'm interested in the theory more than the practice, although both are welcome.
No, that would be a bad idea. cat hugeregularfile.txt > /dev/null and touch -a hugeregularfile.txt are not the same. cat will read the whole file, even if you redirect the output to /dev/null . And reading the whole file might be exactly what you want. For example in order to cache it so that later reads will be significantly faster. The shell can't know your intention. Similarly, a C compiler will never optimize out reading a file, even if you don't look at the stuff you read.
{ "source": [ "https://unix.stackexchange.com/questions/118933", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55491/" ] }
119,098
I have the md5sum of a file and I don't know where it is on my system. Is there any easy option of find to identify a file based on its md5 ? Or do I need to develop a small script ? I'm working on AIX 6 without the GNU tools.
Using find : find /tmp/ -type f -exec md5sum {} + | grep '^file_md5sum_to_match' If you are searching through / then you can exclude /proc and /sys ; see the following find command examples. I also did some testing: find takes more time but less CPU and RAM, whereas the Ruby script takes less time but more CPU and RAM. Test Result Find [root@dc1 ~]# time find / -type f -not -path "/proc/*" -not -path "/sys/*" -exec md5sum {} + | grep '^304a5fa2727ff9e6e101696a16cb0fc5' 304a5fa2727ff9e6e101696a16cb0fc5 /tmp/file1 real 6m20.113s user 0m5.469s sys 0m24.964s Find with -prune [root@dc1 ~]# time find / \( -path /proc -o -path /sys \) -prune -o -type f -exec md5sum {} + | grep '^304a5fa2727ff9e6e101696a16cb0fc5' 304a5fa2727ff9e6e101696a16cb0fc5 /tmp/file1 real 6m45.539s user 0m5.758s sys 0m25.107s Ruby Script [root@dc1 ~]# time ruby findm.rb File Found at: /tmp/file1 real 1m3.065s user 0m2.231s sys 0m20.706s
{ "source": [ "https://unix.stackexchange.com/questions/119098", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53092/" ] }
119,126
I want to display Memory usage, Disk Usage and CPU Load in the following format: Memory Usage: 33/512MB (6%) Disk usage: 4.2/20GB (23%) CPU Load: 0.01 How do I do that?
Try this, it works on my Debian system. The details may vary depending on the implementation of these tools that your OS uses: #!/bin/sh free -m | awk 'NR==2{printf "Memory Usage: %s/%sMB (%.2f%%)\n", $3,$2,$3*100/$2 }' df -h | awk '$NF=="/"{printf "Disk Usage: %d/%dGB (%s)\n", $3,$2,$5}' top -bn1 | grep load | awk '{printf "CPU Load: %.2f\n", $(NF-2)}' If you save the above as a script and run it, you will get (example from my system): $ ./foo.sh Memory Usage: 4986/7994MB (62.37%) Disk Usage: 23/68GB (35%) CPU Load: 0.78 Note that the script above is giving the disk usage for the / partition. You did not specify what you wanted so I'm guessing that's what you're after.
{ "source": [ "https://unix.stackexchange.com/questions/119126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62444/" ] }
119,269
I want to get the IP address using a shell script, without knowing the interface name (eth0, eth1, eth2, ...). How do I get that particular IP address? I am not interested in the localhost address; I want the private IP address.
To list all IP addresses, regardless of name, try this: ifconfig | perl -nle 's/dr:(\S+)/print $1/e' or: ifconfig | awk '/inet addr/{print substr($2,6)}' Specify the interface name (e.g. eth0) right after ifconfig if you only want the IP of a specific interface: ifconfig eth0 | perl -nle 's/dr:(\S+)/print $1/e' or: ifconfig eth0 | awk '/inet addr/{print substr($2,6)}'
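Note that ifconfig output differs between systems (newer Linux releases drop the "addr:" prefix, which breaks the patterns above). As an alternative not used in the answer, on Linux systems with iproute2 you could parse ip instead; a hedged sketch:
    hostname -I                                                          # quick list of all configured addresses
    ip -4 -o addr show scope global | awk '{print $4}' | cut -d/ -f1     # one address per line, netmask stripped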
{ "source": [ "https://unix.stackexchange.com/questions/119269", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61108/" ] }
119,303
From man renice : Users other than the super-user may only alter the priority of processes they own, and can only monotonically increase their ``nice value'' (for security reasons) within the range 0 to PRIO_MAX (20) [...] So, I can renice my own processes upwards (give them lower priority) but never downwards: $ renice 10 22316 22316 (process ID) old priority 0, new priority 10 $ renice 9 22316 renice: failed to set priority for 22316 (process ID): Permission denied Why is this? I can understand why normal users cannot set nice values lower than 0, but why since I can decrease the priority to 10 can't I increase it again to 9? What "security reason" is there for this? I have the right to launch a process with a nice value of 9, so why can't I renice it to 9? EDIT: I should learn to scroll down. Turns out this is listed as a bug in man renice : BUGS Non super-users can not increase scheduling priorities of their own processes, even if they were the ones that decreased the priorities in the first place. That's even more confusing. If they consider this behavior to be a bug, why not change it? The renice command appeared in 4.0BSD which I think is from 1980. This should be very easy to fix so on the one hand they seem to have chosen to leave it and on the other they list it as a bug.
Since linux 2.6.12, that depends on the value of the RLIMIT_NICE limit ( ulimit -e ). Which can take values from 0 to 40. That limit is more the limit on the priority of the process (the greater that number, the higher the priority a user can set for a process). You'll notice the default value is 20 on ubuntu 10.04 and 0 in Debian jessie for instance. A value of n for that limit means that a process without the CAP_NICE capability can only increase a process priority to up to n , which means decrease niceness down to a niceness of 20 - n . So for a value of 0, that means no non-privileged user can lower the niceness below 20, so no non-privileged user can lower the niceness. With a value of 20, non-privileged users can decrease the niceness back to 0. It's up to the administrator to choose whether they allow users to lower their process priority, and to what level by setting the hard limit for that. As to why an administrator may not want users to lower their process priority, see Flup's answer .
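For reference, a sketch of how an administrator could relax this for one user via pam_limits; the user name is just an example, and a nice limit of 0 here would let that user renice their own processes back down to niceness 0:
    # check the current limit from bash ("scheduling priority"):
    ulimit -e
    # /etc/security/limits.conf -- example entry for a hypothetical user "alice":
    alice    -    nice    0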
{ "source": [ "https://unix.stackexchange.com/questions/119303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
119,310
I recently updated my fedora to 20 and wanted to install vim.but running sudo yum install vim returned this error: Transaction check error: file /usr/share/man/man1/vim.1.gz from install of vim-common-2:7.4.179-1.fc20.x86_64 conflicts with file from package vim-minimal-2:7.4.027-2.fc20.x86_64 Error Summary ------------- How to fix this problem?
Before you remove vim-minimal, log in as the root user or do: sudo -s After that, remove vim-minimal with the command: yum remove vim-minimal Then you can install vim: yum install vim and after that reinstall sudo (it may have been removed along with vim-minimal): yum install sudo
{ "source": [ "https://unix.stackexchange.com/questions/119310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62538/" ] }
119,358
I have a problem copying files to a directory on Ubuntu 12.04. I create a directory in the home directory so that the path where I want to copy to is: /home/sixven/camp_sms/inputs But when I run the following command in the terminal to create a sample file as follows: francisco-vergara@Francisco-Vergara:/home/sixven/camp_sms/inputs$ touch test_file.txt touch: can not make `touch' on «test_file.txt»: permission denied I can not copy files directly in that directory. How can I assign permissions with the chown & chmod commands to copy the files? I do not know which user and group to use.
First of all you have to know that the default permission of directories in Ubuntu is 755, which means you can't create a file in a directory you don't own. You are trying, as user:francisco-vergara , to create a file in the directory /home/sixven/camp_sms/inputs which is owned by user:sixven . So how to solve this: You can either change the permission of the directory and enable others to create files inside: sudo chmod -R 777 /home/sixven/camp_sms/inputs This command will change the permission of the directory recursively and enable all other users to create/modify and delete files and directories inside. Or you can change the ownership of this directory and make user:francisco-vergara the owner: sudo chown -R francisco-vergara:francisco-vergara /home/sixven/camp_sms/inputs But like this the user:sixven can't write in this folder any more, and thus you may end up going around in circles. So I advise you to use option 1. Or, if this directory will be accessed by both users, you can do the following trick: change ownership of the directory to user:francisco-vergara and keep the group owner group:sixven . sudo chown -R francisco-vergara /home/sixven/camp_sms/inputs Like that both users can still use the directory. But as I said before, it's easier and more efficient to use option 1.
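An alternative, not in the answer above, that avoids a world-writable directory: put both users in a shared group and set the setgid bit so new files inherit that group; a sketch using the names from the question:
    sudo usermod -aG sixven francisco-vergara        # add the second user to the sixven group
    sudo chmod 2775 /home/sixven/camp_sms/inputs     # group-writable, setgid keeps the group on new files
    # francisco-vergara must log out and back in for the new group membership to apply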
{ "source": [ "https://unix.stackexchange.com/questions/119358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4441/" ] }
119,364
Is it possible to export a block device such as a DVD or CDROM and make it so that it's mountable on another computer as a block device? NOTE: I'm not interested in doing this using NFS or Samba, I actually want the optical drive to show up as an optical drive on a remote computer.
I think you might be able to accomplish what you want using network block devices (NBD). Looking at the wikipedia page on the subject there is mention of a tool called nbd . It's comprised of a client and server component. Example In this scenario I'm setting up a CDROM on my Fedora 19 laptop (server) and I'm sharing it out to an Ubuntu 12.10 system (client). installing $ apt-cache search ^nbd- nbd-client - Network Block Device protocol - client nbd-server - Network Block Device protocol - server $ sudo apt-get install nbd-server nbd-client sharing a CD Now back on the server (Fedodra 19) I do a similar thing using its package manager YUM. Once complete I pop a CD in and run this command to share it out as a block device: $ sudo nbd-server 2000 /dev/sr0 ** (process:29516): WARNING **: Specifying an export on the command line is deprecated. ** (process:29516): WARNING **: Please use a configuration file instead. $ A quick check to see if it's running: $ ps -eaf | grep nbd root 29517 1 0 12:02 ? 00:00:00 nbd-server 2000 /dev/sr0 root 29519 29071 0 12:02 pts/6 00:00:00 grep --color=auto nbd Mounting the CD Now back on the Ubuntu client we need to connect to the nbd-server using nbd-client like so. NOTE: the name of the nbd-server is greeneggs in this example. $ sudo nbd-client greeneggs 2000 /dev/nbd0 Negotiation: ..size = 643MB bs=1024, sz=674983936 bytes (On some systems - e.g. Fedora - one has to modprobe nbd first.) We can confirm that there's now a block device on the Ubuntu system using lsblk : $ sudo lsblk -l NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk sda1 8:1 0 243M 0 part /boot sda2 8:2 0 1K 0 part sda5 8:5 0 465.5G 0 part ubuntu-root (dm-0) 252:0 0 461.7G 0 lvm / ubuntu-swap_1 (dm-1) 252:1 0 3.8G 0 lvm [SWAP] sr0 11:0 1 654.8M 0 rom nbd0 43:0 0 643M 1 disk nbd0p1 43:1 0 643M 1 part And now we mount it: $ sudo mount /dev/nbd0p1 /mnt/ mount: block device /dev/nbd0p1 is write-protected, mounting read-only $ did it work? The suspense is killing me, and we have liftoff: $ sudo ls /mnt/ EFI GPL isolinux LiveOS There's the contents of a LiveCD of CentOS that I mounted in the Fedora 19 laptop and was able to mount it as a block device of the network on Ubuntu.
{ "source": [ "https://unix.stackexchange.com/questions/119364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
119,432
On my Debian system I've customized my Gnome (Shell) keyboard shortcuts, via System Settings > Keyboard > Shortcuts. Where do I find the file with these settings so that I can copy the file onto a flash drive for backup and then use it to replace the keyboard shortcuts on other Gnome systems?
Gnome 3 uses DCONF to store the preferences in a single binary file: ~/.config/dconf/user . As per the Gnome docs, it is recommended to save only the settings that you need and restore them with either dconf or gsettings . However, gsettings is only able to restore the value(s) for one single key at a time (plus, the value must be quoted) and that makes it a bit awkward for this kind of task. Which leaves us with dconf . So, in this particular case, save the current settings for gnome-shell keyboard shortcuts 1 : dconf dump /org/gnome/shell/keybindings/ > bkp Here's a bkp sample: [/] toggle-message-tray=['<Super>m'] open-application-menu=['<Super>F1'] toggle-application-view=['<Control>F1'] focus-active-notification=['<Super>n'] toggle-recording=['<Control><Shift><Alt>r'] Load the settings on another system: dconf load /org/gnome/shell/keybindings/ < bkp 1: WM and Media Keys shortcuts belong to different schemas: /org/gnome/desktop/wm/keybindings/ /org/gnome/mutter/keybindings/ /org/gnome/mutter/wayland/keybindings/ /org/gnome/settings-daemon/plugins/media-keys/ Note that dconf only dumps non-default values so if you run e.g. dconf dump /org/gnome/desktop/wm/keybindings/ and don't get any output that means there's no custom WM shortcut defined. As a side note, dconf-editor is a tool that helps visualizing dconf settings structure, i.e. schema [:path] key value , the type and the default values of any key etc. For the record, saving the preferences with gsettings : gsettings list-recursively org.gnome.shell.keybindings > bkp bkp sample: org.gnome.shell.keybindings focus-active-notification ['<Super>n'] org.gnome.shell.keybindings open-application-menu ['<Super>F1'] org.gnome.shell.keybindings toggle-application-view ['<Super>a'] org.gnome.shell.keybindings toggle-message-tray ['<Super>m'] org.gnome.shell.keybindings toggle-recording ['<Control><Shift><Alt>r'] Now loading the preferences (as I said, for each line in the backup file you need a separate command and don't forget to quote the values): gsettings set org.gnome.shell.keybindings focus-active-notification "['<Super>n']" gsettings set org.gnome.shell.keybindings open-application-menu "['<Super>F1']" gsettings set org.gnome.shell.keybindings toggle-application-view "['<Super>a']" gsettings set org.gnome.shell.keybindings toggle-message-tray "['<Super>m']" gsettings set org.gnome.shell.keybindings toggle-recording "['<Control><Shift><Alt>r']"
{ "source": [ "https://unix.stackexchange.com/questions/119432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55152/" ] }
119,546
I'm starting with bash and I found the following: if test $first -lt $second then echo $first is lower than $second else if test $first -gt $second then echo $first is higher than $second else echo $first and $second are equals fi fi For reading the script and executing it, I know what it does, but not what -lt and -gt are for. Can somebody tell me what is the name of that kind of 'tool' and what they(-lt and -gt) do? Thanks!
It's short for less than and greater than . It's used for integer comparison in bash. You can read more by typing man test : .... INTEGER1 -gt INTEGER2 INTEGER1 is greater than INTEGER2 .... INTEGER1 -lt INTEGER2 INTEGER1 is less than INTEGER2 ....
{ "source": [ "https://unix.stackexchange.com/questions/119546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62668/" ] }
119,598
I have a script being run automatically that I can't find in the crontab for the expected users, so I'd like to search all users' crontabs for it. Essentially I want to run a crontab -l for all users.
Well depends on the script but easily you can find your crontab as root with crontab -l -u <user> Or you can find crontab from spool where is located file for all users cat /var/spool/cron/crontabs/<user> To show all users' crontabs with the username printed at the beginning of each line: cd /var/spool/cron/crontabs/ && grep . *
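To dump every user's crontab in one pass instead of inspecting the spool directly, a small sketch assuming local accounts in /etc/passwd and root privileges:
    for u in $(cut -f1 -d: /etc/passwd); do
        echo "== $u =="
        crontab -l -u "$u" 2>/dev/null
    done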
{ "source": [ "https://unix.stackexchange.com/questions/119598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11898/" ] }
119,606
Is there something inherent to Linux operating systems that makes them poor managers of battery power by default? I would have thought a light distro like Lubuntu would have a clear battery life advantage over Windows, yet this doesn't seem to be the case. Is it a hardware vendor issue - are laptops just designed to work more power efficiently with Windows OSes? For example, in my experience on the same laptop, a given linux distribution always seems to have poor battery life compared to Windows. My old laptop (a Thinkpad X61) lasted nearly half as long when booted into Lubuntu than it did when using Windows XP. On a newer model, I get a similar poor performance using Fedora 20 vs Windows 8.1.
A modern computer contains hundreds of parts that can be turned on and off or clocked faster or slower independently. The granularity is smaller than visible chips, smaller even than cores. A large part of power saving consists on turning parts on and off at the best time. Each part should be turned off when not in use, but only if it's going to remain unused for a long enough time to be worth it (when the part restarts, it needs some time and hence some power to reboot and restore its old state). Thus good power management requires well-written drivers. Hardware manufacturers are very cagey about giving out details about their hardware to people who write drivers. Even people who write closed-source drivers under a non-disclosure agreement often lack detailed documentation — and people who are writing open-source drivers for an operating system that the manufacturer doesn't care much about have it a lot worse. So on PCs, Linux starts at a disadvantage, not for technical reasons, but for social reasons. Linux is used in many embedded devices and higher-end devices that run on batteries. The kernel matters a lot for battery life; most smartphones run Android, which is based on a Linux kernel. To save battery, make sure not to run a “screen saver” (go for a plain black screen), and stay away from 3D effects (which stress the GPU). Don't keep web pages with CPU-intensive animations and other effects (Chrome has a nice view of per-tab CPU consumption). Enable CPU frequency scaling while not connected to the mains; it makes your computer slower by slowing down the main CPU, which saves power. Run Powertop to see where your power is going given the way you use your computer.
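Two of the suggestions above as concrete commands (a sketch; cpupower ships with the kernel tools package on many distributions and the available governor names may vary):
    sudo powertop                                # inspect where the power is going
    sudo cpupower frequency-set -g powersave     # select a power-saving CPU frequency governor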
{ "source": [ "https://unix.stackexchange.com/questions/119606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62701/" ] }
119,612
I am reading an article on executing bash shell scripts. Method 1: Create a directory for your shell scripts and add the directory to the contents of the PATH variable, so that you can execute the script as shown below. When this executes, the variables, functions and aliases created in this subshell are only known to the particular bash session of that subshell. When that shell exits and the parent regains control, everything is cleaned up and all changes to the state of the shell made by the script, are forgotten. $ hello_world.sh Method 2: A script can also explicitly be executed by a given shell. The specified shell will start as a subshell of your current shell and execute the script. $ sh script_name.sh Method 3: If you don't want to start a new shell but execute the script in the current shell, you source it: source script_name.sh I really don't understand the benefits and disadvantages of each of these methods. Can anyone clarify?
{ "source": [ "https://unix.stackexchange.com/questions/119612", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15378/" ] }
119,627
In Linux and, to my knowledge, all Unix systems, terminal emulators run interactive, non-login shells by default. This means that, for bash, the started shell will: When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc , if these files exist. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of /etc/bash.bashrc and ~/.bashrc . And for login shells: When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile , if that file exists. After reading that file, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile , in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior. On OSX, however, the default shell (which is bash) started in the default terminal (Terminal.app) actually sources ~/.bash_profile or ~.profile etc. In other words, it acts like a login shell. Main question : Why is the default interactive shell a login shell on OSX? Why did OSX choose to do this? This means that all instructions/tutorials for shell based things that mention changing things in ~/.bashrc will fail on OSX or vice versa for ~/.profile . Still, while many accusations can be leveled at Apple, hiring incompetent or idiotic devs is not one of them. Presumably, they had a good reason for this, so why? Subquestions: Does Terminal.app actually run an interactive login shell or have they changed bash's behavior? Is this specific to Terminal.app or is it independent of the terminal emulator?
The way it's supposed work is that, at the point when you get a shell prompt, both .profile and .bashrc have been run. The specific details of how you get to that point are of secondary relevance, but if either of the files didn't get run at all, you'd have a shell with incomplete settings. The reason terminal emulators on Linux (and other X-based systems) don't need to run .profile themselves is that it will normally have been run already when you logged in to X. The settings in .profile are supposed to be of the kind that can be inherited by subprocesses, so as long as it's executed once when you log in (e.g. via .Xsession ), any further subshells don't need to re-run it. As the Debian wiki page linked by Alan Shutko explains: "Why is .bashrc a separate file from .bash_profile , then? This is done for mostly historical reasons, when machines were extremely slow compared to today's workstations. Processing the commands in .profile or .bash_profile could take quite a long time, especially on a machine where a lot of the work had to be done by external commands (pre-bash). So the difficult initial set-up commands, which create environment variables that can be passed down to child processes, are put in .bash_profile . The transient settings and aliases which are not inherited are put in .bashrc so that they can be re-read by every subshell." All the same rules hold on OSX, too, except for one thing — the OSX GUI doesn't run .profile when you log in, apparently because it has its own method of loading global settings. But that means that a terminal emulator on OSX does need to run .profile (by telling the shell it launches that it's a login shell), otherwise you'd end up with a potentially crippled shell. Now, a kind of a silly peculiarity of bash, not shared by most other shells, is that it will not automatically run .bashrc if it's started as a login shell. The standard work-around for that is to include something like the following commands in .bash_profile : [[ -e ~/.profile ]] && source ~/.profile # load generic profile settings [[ -e ~/.bashrc ]] && source ~/.bashrc # load aliases etc. Alternatively, it's possible to have no .bash_profile at all, and just include some bash-specific code in the generic .profile file to run .bashrc if needed. If the OSX default .bash_profile or .profile doesn't do this, then that's arguably a bug. In any case, the proper work-around is to simply add those lines to .bash_profile . Edit: As strugee notes , the default shell on OSX used to be tcsh, whose behavior is much saner in this respect: when run as an interactive login shell, tcsh automatically reads both .profile and .tcshrc / .cshrc , and thus does not need any workarounds like the .bash_profile trick shown above. Based on this, I'm 99% sure that the failure of OSX to supply an appropriate default .bash_profile is because, when they switched from tcsh to bash, the folks at Apple simply didn't notice this little wart in bash's startup behavior. With tcsh, no such tricks were needed — starting tcsh as a login shell from an OSX terminal emulator Just Plain Works and does the right thing without such kluges.
{ "source": [ "https://unix.stackexchange.com/questions/119627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
119,648
I'm reading an example bash shell script: #!/bin/bash # This script makes a backup of my home directory. cd /home # This creates the archive tar cf /var/tmp/home_franky.tar franky > /dev/null 2>&1 # First remove the old bzip2 file. Redirect errors because this generates some if the archive # does not exist. Then create a new compressed file. rm /var/tmp/home_franky.tar.bz2 2> /dev/null bzip2 /var/tmp/home_franky.tar # Copy the file to another host - we have ssh keys for making this work without intervention. scp /var/tmp/home_franky.tar.bz2 bordeaux:/opt/backup/franky > /dev/null 2>&1 # Create a timestamp in a logfile. date >> /home/franky/log/home_backup.log echo backup succeeded >> /home/franky/log/home_backup.log I'm trying to understand the use of /dev/null 2>&1 here. At first, I thought this script uses /dev/null in order to gracefully ignore errors, without causing the script to crash (kind of like try catch exception handling in programming languages). Because I don't see how using tar to compress a directory into a tar file could possibly cause any type of errors.
No, this will not prevent the script from crashing. If any errors occur in the tar process (e.g.: permission denied, no such file or directory, ...) the script will still crash. This is because of using > /dev/null 2>&1 will redirect all your command output (both stdout and stderr ) to /dev/null , meaning no outputs are printed to the terminal. By default: stdin ==> fd 0 stdout ==> fd 1 stderr ==> fd 2 In the script, you use > /dev/null causing: stdin ==> fd 0 stdout ==> /dev/null stderr ==> fd 2 And then 2>&1 causing: stdin ==> fd 0 stdout ==> /dev/null stderr ==> stdout
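The order of the redirections is what matters here; a quick way to see it, using ls on a non-existent file purely as something that writes to stderr:
    ls nosuchfile > /dev/null 2>&1    # nothing printed: stdout goes to /dev/null, then stderr follows it
    ls nosuchfile 2>&1 > /dev/null    # error still printed: stderr was duplicated to the terminal before stdout was redirected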
{ "source": [ "https://unix.stackexchange.com/questions/119648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15378/" ] }
120,015
I want to find out the list of dynamic libraries a binary loads when run (With their full paths). I am using CentOS 6.0. How to do this?
You can do this with ldd command: NAME ldd - print shared library dependencies SYNOPSIS ldd [OPTION]... FILE... DESCRIPTION ldd prints the shared libraries required by each program or shared library specified on the command line. .... Example: $ ldd /bin/ls linux-vdso.so.1 => (0x00007fff87ffe000) libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007ff0510c1000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007ff050eb9000) libacl.so.1 => /lib/x86_64-linux-gnu/libacl.so.1 (0x00007ff050cb0000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff0508f0000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ff0506ec000) /lib64/ld-linux-x86-64.so.2 (0x00007ff0512f7000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff0504ce000) libattr.so.1 => /lib/x86_64-linux-gnu/libattr.so.1 (0x00007ff0502c9000)
{ "source": [ "https://unix.stackexchange.com/questions/120015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
120,077
I had a directory which had around 5 million files. When I tried to run the ls command from inside this directory, my system consumed a huge amount of memory and it hung after sometime. Is there an efficient way to list the files other than using the ls command?
ls actually sorts the files and tries to list them which becomes a huge overhead if we are trying to list more than a million files inside a directory. As mentioned in this link, we can use strace or find to list the files. However, those options also seemed unfeasible to my problem since I had 5 million files. After some bit of googling, I found that if we list the directories using getdents() , it is supposed to be faster, because ls , find and Python libraries use readdir() which is slower but uses getdents() underneath. We can find the C code to list the files using getdents() from here : /* * List directories using getdents() because ls, find and Python libraries * use readdir() which is slower (but uses getdents() underneath. * * Compile with * ]$ gcc getdents.c -o getdents */ #define _GNU_SOURCE #include <dirent.h> /* Defines DT_* constants */ #include <fcntl.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <sys/stat.h> #include <sys/syscall.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) struct linux_dirent { long d_ino; off_t d_off; unsigned short d_reclen; char d_name[]; }; #define BUF_SIZE 1024*1024*5 int main(int argc, char *argv[]) { int fd, nread; char buf[BUF_SIZE]; struct linux_dirent *d; int bpos; char d_type; fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY); if (fd == -1) handle_error("open"); for ( ; ; ) { nread = syscall(SYS_getdents, fd, buf, BUF_SIZE); if (nread == -1) handle_error("getdents"); if (nread == 0) break; for (bpos = 0; bpos < nread;) { d = (struct linux_dirent *) (buf + bpos); d_type = *(buf + bpos + d->d_reclen - 1); if( d->d_ino != 0 && d_type == DT_REG ) { printf("%s\n", (char *)d->d_name ); } bpos += d->d_reclen; } } exit(EXIT_SUCCESS); } Copy the C program above into directory in which the files need to be listed. Then execute the below commands. gcc getdents.c -o getdents ./getdents Timings example : getdents can be much faster than ls -f , depending on the system configuration. Here are some timings demonstrating a 40x speed increase for listing a directory containing about 500k files over an NFS mount in a compute cluster. Each command was run 10 times in immediate succession, first getdents , then ls -f . The first run is significantly slower than all others, probably due to NFS caching page faults. (Aside: over this mount, the d_type field is unreliable, in the sense that many files appear as "unknown" type.) command: getdents $bigdir usr:0.08 sys:0.96 wall:280.79 CPU:0% usr:0.06 sys:0.18 wall:0.25 CPU:97% usr:0.05 sys:0.16 wall:0.21 CPU:99% usr:0.04 sys:0.18 wall:0.23 CPU:98% usr:0.05 sys:0.20 wall:0.26 CPU:99% usr:0.04 sys:0.18 wall:0.22 CPU:99% usr:0.04 sys:0.17 wall:0.22 CPU:99% usr:0.04 sys:0.20 wall:0.25 CPU:99% usr:0.06 sys:0.18 wall:0.25 CPU:98% usr:0.06 sys:0.18 wall:0.25 CPU:98% command: /bin/ls -f $bigdir usr:0.53 sys:8.39 wall:8.97 CPU:99% usr:0.53 sys:7.65 wall:8.20 CPU:99% usr:0.44 sys:7.91 wall:8.36 CPU:99% usr:0.50 sys:8.00 wall:8.51 CPU:100% usr:0.41 sys:7.73 wall:8.15 CPU:99% usr:0.47 sys:8.84 wall:9.32 CPU:99% usr:0.57 sys:9.78 wall:10.36 CPU:99% usr:0.53 sys:10.75 wall:11.29 CPU:99% usr:0.46 sys:8.76 wall:9.25 CPU:99% usr:0.50 sys:8.58 wall:9.13 CPU:99%
{ "source": [ "https://unix.stackexchange.com/questions/120077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
120,096
I have a PC with Intel(R) Pentium(R) CPU G640 @ 2.80 GHz and 8 GB of RAM. I am running Scientific Linux 6.5 on it with EXT3 filesystem. On this setup, what is the fastest way I can do a sort -u on a 200 gigabyte file? Should I split the file into smaller files (smaller than 8 GB), sort -u them, put them together, then split them again in a different size, sort -u again, etc.? Or are there any sorting scripts, programs that could handle files this big with my limited amount of RAM?
GNU sort (which is the default on most Linux systems), has a --parallel option. From http://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html : ‘--parallel=n’ Set the number of sorts run in parallel to n. By default, n is set to the number of available processors, but limited to 8, as there are diminishing performance gains after that. Note also that using n threads increases the memory usage by a factor of log n. Also see nproc invocation. Since your cpu has 2 cores, you could do: sort --parallel=2 -uo list-sorted.txt list.txt It is better to specify the actual number of cores since there may appear to be more due to the processor having hyper-threading . You could also experiment with nice to influence the processor scheduling priority and ionice to influence I/O scheduling. You can increase the priority over other processes like this, I don't think this will give you large savings as they are usually better for making sure a background process doesn't use too much resources. Never-the-less you can combine them with something like: nice -n -20 ionice -c2 -n7 sort --parallel=2 -uo list-sorted.txt list.txt Note also that as Gilles commented, using a single GNU sort command will be faster than any other method of breaking down the sorting as the algorithm is already optimised to handle large files. Anything else will likely just slow things down.
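Because the 200 GB file is far bigger than RAM, it can also help to point sort at a temporary directory with plenty of free space and to cap its in-memory buffer explicitly; the 6G figure and the temp path below are only placeholders for this 8 GB machine:
    sort --parallel=2 -u -S 6G -T /path/to/big/tmp -o list-sorted.txt list.txt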
{ "source": [ "https://unix.stackexchange.com/questions/120096", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61246/" ] }
120,097
It seems somewhat similar to the question here but I think what I'm asking is a little different. I've got a CentOS 5.x box running with mysql on it. The root linux user can edit/modify files in /var/lib/mysql/ sub-directories but most (if not all) of these directories have file system ownerships of mysql:mysql . When I run groups root I see: [root@foo ~]# groups root root : root bin daemon sys adm disk wheel I get that root is a privileged account and can/will have access to everything but why don't I see mysql in the list of groups it belongs to?
{ "source": [ "https://unix.stackexchange.com/questions/120097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
120,153
I need to write a bash script wherein I have to create a file which holds the details of IP Addresses of the hosts and their mapping with corresponding MAC Addresses. Is there any possible way with which I can find out the MAC address of any (remote) host when IP address of the host is available?
If you just want to find out the MAC address of a given IP address you can use the command arp to look it up, once you've pinged the system 1 time. Example $ ping skinner -c 1 PING skinner.bubba.net (192.168.1.3) 56(84) bytes of data. 64 bytes from skinner.bubba.net (192.168.1.3): icmp_seq=1 ttl=64 time=3.09 ms --- skinner.bubba.net ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 3.097/3.097/3.097/0.000 ms Now look up in the ARP table: $ arp -a skinner.bubba.net (192.168.1.3) at 00:19:d1:e8:4c:95 [ether] on wlp3s0 fing If you want to sweep the entire LAN for MAC addresses you can use the command line tool fing to do so. It's typically not installed so you'll have to go download it and install it manually. $ sudo fing 10.9.8.0/24 Using ip If you find you don't have the arp or fing commands available, you could use iproute2's command ip neigh to see your system's ARP table instead: $ ip neigh 192.168.1.61 dev eth0 lladdr b8:27:eb:87:74:11 REACHABLE 192.168.1.70 dev eth0 lladdr 30:b5:c2:3d:6c:37 STALE 192.168.1.95 dev eth0 lladdr f0:18:98:1d:26:e2 REACHABLE 192.168.1.2 dev eth0 lladdr 14:cc:20:d4:56:2a STALE 192.168.1.10 dev eth0 lladdr 00:22:15:91:c1:2d REACHABLE References Equivalent of iwlist to see who is around?
{ "source": [ "https://unix.stackexchange.com/questions/120153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48188/" ] }
120,157
I accidentally typed ssh 10.0.05 instead of ssh 10.0.0.5 and was very surprised that it worked. I also tried 10.005 and 10.5 and those also expanded automatically to 10.0.0.5 . I also tried 192.168.1 and that expanded to 192.168.0.1 . All of this also worked with ping rather than ssh , so I suspect it would work for many other commands that connect to an arbitrary user-supplied host. Why does this work? Is this behavior documented somewhere? Is this behavior part of POSIX or something? Or is it just some weird implementation? (Using Ubuntu 13.10 for what it's worth.)
Quoting from man 3 inet_aton : a.b.c.d Each of the four numeric parts specifies a byte of the address; the bytes are assigned in left-to-right order to produce the binary address. a.b.c Parts a and b specify the first two bytes of the binary address. Part c is interpreted as a 16-bit value that defines the rightmost two bytes of the binary address. This notation is suitable for specifying (outmoded) Class B network addresses. a.b Part a specifies the first byte of the binary address. Part b is interpreted as a 24-bit value that defines the rightmost three bytes of the binary address. This notation is suitable for specifying (outmoded) Class C network addresses. a The value a is interpreted as a 32-bit value that is stored directly into the binary address without any byte rearrangement. In all of the above forms, components of the dotted address can be specified in decimal, octal (with a leading 0), or hexadecimal, with a leading 0X). Addresses in any of these forms are collectively termed IPV4 numbers-and-dots notation. The form that uses exactly four decimal numbers is referred to as IPv4 dotted-decimal notation (or sometimes: IPv4 dotted-quad notation). For fun, try this: $ nslookup unix.stackexchange.com Non-authoritative answer: Name: unix.stackexchange.com Address: 198.252.206.140 $ echo $(( (198 << 24) | (252 << 16) | (206 << 8) | 140 )) 3338456716 $ ping 3338456716 # What? What did we ping just now? PING stackoverflow.com (198.252.206.140): 48 data bytes 64 bytes from 198.252.206.140: icmp_seq=0 ttl=52 time=75.320 ms 64 bytes from 198.252.206.140: icmp_seq=1 ttl=52 time=76.966 ms 64 bytes from 198.252.206.140: icmp_seq=2 ttl=52 time=75.474 ms
{ "source": [ "https://unix.stackexchange.com/questions/120157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62975/" ] }
120,164
I installed Fedora 20 on my machine some time ago. A few days ago I installed Windows 7, and now I cannot boot into Fedora because I don't have a boot menu. Is there any way to recover the Fedora boot menu? Is there any free software to do it without using the Fedora CD?
{ "source": [ "https://unix.stackexchange.com/questions/120164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62978/" ] }
120,206
I expected that: $ rm *(1)* would remove all files containing (1) in the name. I was wrong. It removed all files in the directory. Why?
From man bash : *(pattern-list) Matches zero or more occurrences of the given patterns You have a glob expression which matches files beginning with zero or more 1 s - which is all files. One simple way to disable this globbing behaviour is to \ escape the parentheses: rm *\(1\)* Otherwise you can use shopt -u extglob to disable the behaviour and shopt -s extglob to re-enable it: shopt -u extglob rm *(1)* shopt -s extglob Note that as Stephane says , extglob is enabled by bash-completion so disabling it may cause completion functions not to work properly.
{ "source": [ "https://unix.stackexchange.com/questions/120206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
120,210
I have a program that needs to write to a file every second. I thought there would be too much overhead if I opened and closed the file each time, so I decided to keep the file I/O stream open. But I don't want to rely on an unreliable hunch. How can I measure which is better: keeping the file stream open, or opening and closing it for every write? I want to check this in a CentOS environment.
{ "source": [ "https://unix.stackexchange.com/questions/120210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44001/" ] }
120,221
How can I tell whether my harddrive is laid out using an MBR or GPT format?
You can use parted -l to determine the type of partition table. Eg: $ sudo parted -l Model: ATA TOSHIBA THNSNS25 (scsi) Disk /dev/sda: 256GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 4194kB 32.2GB 32.2GB primary ext4 boot 2 32.2GB 256GB 224GB primary ext4 Model: ATA Hitachi HDT72101 (scsi) Disk /dev/sdb: 1000GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 32.2GB 32.2GB primary ext4 boot 2 32.2GB 996GB 964GB primary ext4 3 996GB 1000GB 4295MB primary linux-swap(v1) The Partition Table field shows that I am using a msdos MBR partition table (the one still commonly used for Linux and Windows) on both disks. From the man page parted can create (and thus hopefully identify) the following types of partition table (or more broadly `disk label'): bsd dvh gpt - this is a GPT partition table loop - this is raw disk access without a partition table mac msdos - this is a standard MBR partition table pc98 sun Update It is worth adding the command for listing a single partition since this is not obvious without some knowledge of parted and it can be a pain finding the data you need if there are multiple drives. For /dev/sda you would do: parted /dev/sda print
{ "source": [ "https://unix.stackexchange.com/questions/120221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
120,257
I have a script which gives me fine-grained control over my backlight brightness and requires sudo to run. It's essentially this: backlight="/sys/class/backlight/acpi_video0/brightness" echo $1 | tee $backlight and lives at ~/bin/backlight-adjust . The script needs sudo privileges, because tee $backlight is writing to a privileged location. So it'll fail if it's not run with sudo . This approach has a problem, because I can't just run sudo backlight-adjust , because ~/bin is not in the $PATH in the sudo environment, only in my environment. So I'd have to run sudo env "PATH=$PATH" backlight-adjust or something similar. Alternatively, I could have written it like this: backlight="/sys/class/backlight/acpi_video0/brightness" echo $1 | sudo tee $backlight and have it prompt me for the password. The second approach works better for me because I don't have to remember to type sudo; it'll prompt me. And I can keep my $PATH intact. This feels more convenient overall, but are there any reasons why I shouldn't do it the second way? (I'm running Xubuntu 14.04 and my shell is GNU bash 4.2.45, if that makes a difference.)
Personally, I would use a different approach. Make an alias for your script. Add this line to your ~/.bashrc (or equivalent in other shells) alias backlight-adjust='sudo ~/bin/backlight-adjust' That way, you don't need to worry about remembering to run it with sudo and you don't need to add the sudo to the script. It will be completely transparent to you and simply ask for your password when you try and run backlight-adjust .
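Another common pattern, if you would rather keep typing just backlight-adjust without an alias, is to have the script re-execute itself under sudo when it is not already running as root (a rough sketch of the idea, not a drop-in for every setup):
#!/bin/bash
# re-run this script through sudo if we are not root yet
if [ "$(id -u)" -ne 0 ]; then
    exec sudo "$0" "$@"
fi
backlight="/sys/class/backlight/acpi_video0/brightness"
echo "$1" > "$backlight"
Either way you will still be asked for your password, unless you add a NOPASSWD rule for it in sudoers.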
{ "source": [ "https://unix.stackexchange.com/questions/120257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63035/" ] }
120,265
I'm getting the error message that the system is not able to locate the package kamailio. I navigated the repo on that site and I can see that there is a kamailio package, but it's located under http://repo.pouf.org/raspbian/pool/main/k/ folder. I've also tried to change the sources.list file to read: deb http://repo.pouf.org/raspbian/dists/ wheezy main But that didn't fix the issue.
Looks like you just haven't updated your package lists; this is missing from the link that you gave - sudo apt-get update This should download the list files from the repos in /etc/apt/sources.list so that apt-get install knows what packages to look for. Note also that you should do this regularly as the repository will change over time. In particular, do it before installing software if it hasn't been done for a while!
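So the full sequence is just:
$ sudo apt-get update
$ sudo apt-get install kamailio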
{ "source": [ "https://unix.stackexchange.com/questions/120265", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52684/" ] }
120,311
When I sum up the sizes of my files, I get one figure. If I run du , I get another figure. If I run du on all the files on my partition, it doesn't match what df claims is used. Why are there so many different figures for the total size of my files? Can't computers add? Speaking of adding: when I add the “Used” and “Available” columns of df , I don't get the total figure. And that total figure is smaller than the size of my partition. And if I add up my partition sizes I don't get my disk size! What gives?
Adding up numbers is easy. The problem is, there are many different numbers to add. How much disk space does a file use? The basic idea is that a file containing n bytes uses n bytes of disk space, plus a bit for some control information: the file's metadata (permissions, timestamps, etc.), and a bit of overhead for the information that the system needs to find where the file is stored. However there are many complications. Microscopic complications Think of each file as a series of books in a library. Smaller files make up just one volume, but larger files consist of many volumes, like an encyclopedia. In order to be able to locate the files, there is a card catalog which references every volume. Each volume has a bit of overhead due to the covers. If a file is very small, this overhead is relatively large. Also the card catalog itself takes up some room. Going a bit more technical, in a typical simple filesystem, the space is divided in blocks . A typical block size is 4KiB. Each file takes up an integer number of blocks. Unless the file size is a multiple of the block size, the last block is only partially used. So a 1-byte file and a 4096-byte file both take up 1 block, whereas a 4097-byte file takes up two blocks. You can observe this with ls or du : if your filesystem has a 4KiB block size, then ls -s and du will report 4KiB for a 1-byte file. If a file is large, then additional blocks are needed just to store the list of blocks that make up the file (these are indirect blocks ; more sophisticated filesystems may optimize this in the form of extents ). Those don't show in the file size as reported by ls -l or GNU du --apparent-size . du and ls -s , which report disk usage as opposed to size, does account for them. Some filesystems try to reuse the free space left in the last block to pack several file tails in the same block . Some filesystems (such as ext4 since Linux 3.8 use 0 blocks for tiny files (just a few bytes) that entirely fit in the inode. Macroscopic complications Generally, as seen above, the total size reported by du is the sum of the sizes of the blocks or extents used by the file. The size reported by du may be smaller if the file is compressed. Unix systems traditionally support a crude form of compression: if a file block contains only null bytes, then instead of storing a block of zeroes, the filesystem can omit that block altogether. A file with omitted blocks like this is called a sparse file . Sparse files are not automatically created when a file contains a large series of null bytes, the application must arrange for the file to become sparse. Some filesystems such as btrfs and zfs support general-purpose compression . Advanced complications Two major features of very modern filesystems such as zfs and btrfs make the relationship between file size and disk usage significantly more distant: snapshots and deduplication. Snapshots are a frozen state of the filesystem at a certain date. Filesystems that support this feature can contain multiple snapshots taken at different dates. These snapshots take room, of course. At one extreme, if you delete all the files from the active version of the filesystem, the filesystem won't become empty if there are snapshots remaining. Any file or block that hasn't changed since a snapshot, or between two snapshots was taken exists identically in the snapshot and in the active version or other snapshot. This is implemented via copy-on-write . 
In some edge cases, it's possible that deleting a file on a full filesystem will fail due to insufficient available space — because removing that file would require making a copy of a block in the directory, and there's no more room for even that one block. Deduplication is a storage optimization technique that consists of avoiding storing identical blocks. With typical data, looking for duplicates isn't always worth the effort. Both zfs and btrfs support deduplication as an optional feature. Why is the total from du different from the sum of the file sizes? As we've seen above, the size reported by du for each file is normally is the sum of the sizes of the blocks or extents used by the file. Note that by default, ls -l lists sizes in bytes, but du lists sizes in KiB, or in 512-byte units (sectors) on some more traditional systems ( du -k forces the use of kilobytes). Most modern unices support ls -lh and du -h to use “human-readable” numbers using K, M, G, etc. suffices (for KiB, MiB, GiB) as appropriate. When you run du on a directory, it sums up the disk usage of all the files in the directory tree, including the directories themselves. A directory contains data (the names of the files, and a pointer to where the file's metadata is), so it needs a bit of storage space. A small directory will take up one block, a larger directory will require more blocks. The amount of storage used by a directory sometimes depends not only on the files it contains but also the order in which they were inserted and in which some files are removed (with some filesystems, this can leave holes — a compromise between disk space and performance), but the difference will be tiny (an extra block here and there). When you run ls -ld /some/directory , the directory's size is listed. (Note that the “total NNN” line at the top of the output from ls -l is an unrelated number, it's the sum of the sizes in blocks of the listed items, expressed in KiB or sectors.) Keep in mind that du includes dot files which ls doesn't show unless you use the -A or -a option. Sometimes du reports less than the expected sum. This happens if there are hard links inside the directory tree: du counts each file only once. Use du -l switch to count files N times if they have N hard links. On some file systems like ZFS on Linux, du does not report the full disk space occupied by extended attributes of a file. Beware that if there are mount points under a directory, du will count all the files on these mount points as well, unless given the -x option. So if for instance you want the total size of the files in your root filesystem, run du -x / , not du / . If a filesystem is mounted to a non-empty directory , the files in that directory are hidden by the mounted filesystem. They still occupy their space, but du won't find them. Deleted files When a file is deleted , this only removes the directory entry, not necessarily the file itself. Two conditions are necessary in order to actually delete a file and thus reclaim its disk space: The file's link count must drop to 0: if a file has multiple hard links, removing one doesn't affect the others. As long as the file is open by some process, the data remains. Only when all processes have closed the file is the file deleted. The output fuser -m or lsof on a mount point includes the processes that have a file open on that filesystem, even if the file is deleted. even if no process has the deleted file open, the file's space may not be reclaimed if that file is the backend of a loop device. 
losetup -a (as root ) can tell you which loop devices are currently set up and on what file. The loop device must be destroyed (with losetup -d ) before the disk space can be reclaimed. If you delete a file in some file managers or GUI environments, it may be put into a trash area where it can be undeleted. As long as the file can be undeleted, its space is still consumed. What are these numbers from df exactly? A typical filesystem contains: Blocks containing file (including directories) data and some metadata (including indirect blocks, and extended attributes on some filesystems). Free blocks. Blocks that are reserved to the root user. superblocks and other control information. Inodes A journal Only the first kind is reported by du . When it comes to df , what goes into the “used”, “available” and total columns depends on the filesystem (of course used blocks (including indirect ones) are always in the “used” column, and unused blocks are always in the “available” column). Filesystems in the ext2/ext3/ext4 reserve 5% of the space to the root user. This is useful on the root filesystem, to keep the system going if it fills up (in particular for logging, and to let the system administrator store a bit of data while fixing the problem). Even for data partitions such as /home , keeping that reserved space is useful because an almost-full filesystem is prone to fragmentation. Linux tries to avoid fragmentation (which slows down file access, especially on rotating mechanical devices such as hard disks) by pre-allocating many consecutive blocks when a file is being written, but if there are not many consecutive blocks, that can't work. Traditional filesystems, up to and including ext4 but not btrfs, reserve a fixed number of inodes when the filesystem is created. This significantly simplifies the design of the filesystem, but has the downside that the number of inodes needs to be sized properly: with too many inodes, space is wasted; with too few inodes, the filesystem may run out of inodes before running out of space. The command df -i reports how many inodes are in use and how many are available (filesystems where the concept is not applicable may report 0). Running tune2fs -l on the volume containing an ext2/ext3/ext4 filesystem reports some statistics including the total number and number of free inodes and blocks. Another feature that can confuse matter is subvolumes (supported in btrfs , and in zfs under the name datasets ). Multiple subvolumes share the same space, but have separate directory tree roots. If a filesystem is mounted over the network (NFS, Samba, etc.) and the server exports a portion of that filesystem (e.g. the server has a /home filesystem, and exports /home/bob ), then df on a client reflects the data for the whole filesystem, not just for the part that is exported and mounted on the client. What's using the space on my disk? As we've seen above, the total size reported by df does not always take all the control data of the filesystem into account. Use filesystem-specific tools to get the exact size of the filesystem if needed. For example, with ext2/ext3/ext4, run tune2fs -l and multiply the block size by the block count. When you create a filesystem, it normally fills up the available space on the enclosing partition or volume. Sometimes you might end up with a smaller filesystem when you've been moving filesystems around or resizing volumes. On Linux, lsblk presents a nice overview of the available storage volumes. 
For additional information or if you don't have lsblk , use specialized volume management or partitioning tools to check what partitions you have. On Linux, there's lvs , vgs , pvs for LVM , fdisk for traditional PC-style (“MBR”) partitions (as well as GPT on recent systems), gdisk for GPT partitions, disklabel for BSD disklabels, Parted , etc. Under Linux, cat /proc/partitions gives a quick summary. Typical installations have at least two partitions or volumes used by the operating system: a filesystem (sometimes more), and a swap volume. Some computers have a partition containing the BIOS or other diagnostic software. Computers with UEFI have a dedicated bootloader partition. Finally, note that most computer programs use units based on powers of 1024 = 2 10 (because programmers love binary and powers of 2). So 1 kB = 1024 B, 1 MB = 1048576 B, 1 GB = 1073741824, 1 TB = 1099511627776 B, … Officially, these units are known as kibibyte KiB, mebibyte MiB, etc., but most software just reports k or kB, M or MB, etc. On the other hand, hard disk manufacturers systematically use metric (1000-based units). So that 1 TB drive is only 931 GiB or 0.904 TiB.
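A quick way to see the "apparent size versus disk usage" distinction in practice is a sparse file (a sketch; the exact figures depend on your filesystem and block size, and GNU coreutils is assumed for the options):
$ dd if=/dev/zero of=sparse.img bs=1 count=1 seek=1G   # one byte written at a 1 GiB offset
$ ls -lh sparse.img                   # apparent size: a little over 1G
$ du -h sparse.img                    # disk usage: a few KiB, the hole occupies no blocks
$ du -h --apparent-size sparse.img    # back to roughly 1G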
{ "source": [ "https://unix.stackexchange.com/questions/120311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
120,368
Some applications simulate a virtual USB or CD Rom drive as if a USB drive is attached to the computer. Is there any configuration or application that provides a virtual USB drive, not for the the operating system itself, but for other equipments which accept USB drive, through a USB port. So I'll have a virtual hard disk (e.g. a *.vdi file) in the computer, which is connected, through a USB socket, as a USB drive to some other equipment (e.g. a cell phone or a laptop).
Edit: While this answer was correct at the time (with a few rare exceptions), since then there's been more developments. We now have USB-C, for example, which supports both device and host modes. Many devices - especially SBCs - come with USB-C and a controller which can run in both modes. The main problem is still mostly on the Windows PC side, where there's a lack of any USB-C device mode drivers with the OS. Linux, however, does include USB-C device mode drivers (aka "USB Gadget" drivers - although you may need to compile a custom kernel if they haven't been included in your distribution.) You would need to add a USB Device/Peripheral controller to the computer, as opposed to the USB Host Controller they tend to come with. Something like this: https://www.maximintegrated.com/en/products/interface/controllers-expanders/MAX3420E.html Unfortunately, you'd have to find a way to wire it onto your motherboard. Technically, it can be done. Practically, you'd have to redesign the motherboard to include it. You might be lucky enough to find an SPI or I2C bus exposed somewhere on your motherboard to allow you to add it, but they're usually wired directly into whatever they're being used for unless you're using a dev board or single-board computer with exposed GPIO and other ports such as a Raspberry Pi. The other option would be a USB On-the-Go Controller. Motherboards designed for embedded and portable devices tend to have a USB OTG (On-the-go) contoller, which can function as either a Host or Device controller. For example, the aforementioned Raspberry Pi has an On-the-Go Controller, but on all models except the Pi Zero that gets rewired to a host port or an onboard USB hub denying the use of USB device functionality. The BeagleBone Black has an OTG port. That's not all though - once you've got the hardware, you'd also need the software. Linux has some useful kernel USB Gadget drivers ("USB gadget" is another term for USB peripheral/device) such as g_serial and g_ethernet that allow you to plug your device into another computer and be visible as a serial or ethernet-over-USB device (there are others for exposing a device as mass storage, which allow you to use a file as a block device and expose the computer as a mass storage gadget). The BeagleBone Black tends to come with this enabled by default, so you can simply plug it into your PC over USB and see it as a networked device - and I believe it also appears as a mass storage device by using a composite driver (which allows it to appear as multiple USB device types over a single connection.) The Pi Zero can use these , but does not by default. For Windows or other OSes, you'd probably have to write that device driver yourself. So, theoretically, you can do it. You can tear down your desktop PC, try and find an unused compatible bus on the motherboard somewhere (most likely some unused pins on a controller IC), or a way to extend an internal I2C or SPI bus, or something you can tear out and replace, and solder a USB OTG or device controller chip onto it. Then you can install Linux and use a gadget driver, or write your own for another OS. Practically, unless you're a top-notch electronics engineer, you're not going to be able to do it. At least, not until someone comes out with that elusive adapter with a device or OTG port on it that plugs into a USB port (theoretically, that could be done with a microcontroller such an Arduino wired to a pair of USB device controller ICs), and writes the drivers to run it.
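To make the idea concrete, on a board whose OTG port is usable (a Pi Zero or BeagleBone, say) the mass-storage gadget is typically set up along these lines (a rough sketch with an arbitrary image path and size; on a Pi Zero the dwc2 overlay also has to be enabled first):
# create a 64 MiB backing file and format it
$ sudo dd if=/dev/zero of=/piusb.img bs=1M count=64
$ sudo mkfs.vfat /piusb.img
# present it to the host plugged into the OTG port as a USB flash drive
$ sudo modprobe g_mass_storage file=/piusb.img removable=1 stall=0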
{ "source": [ "https://unix.stackexchange.com/questions/120368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62511/" ] }
120,380
How can I tell for sure what userland C library my system uses? Possible reasons to need this information include: There's a gigantic source package I am considering downloading which I'm sure will do proper checks and lists a mininum library version, but I'd rather save myself a potential hassle by checking first if it will work. I am concerned about ABI compatibility with some third party binaries I want to try and install outside the system's package management system. I have a source package who's documentation mentions the need for a minimum version of my system's library, but the build process does not perform any checks. I am building a cross-compiler targeting a specific system and do not want to risk forward compatibility problems.
GNU/Linux systems usually use either glibc (Fedora/Redhat family, Arch) or its close cousin, eglibc (Debian/Ubuntu family); since eglibc is now being merged back into glibc ( see EGLIBC 2.19 Branch Created under "News" ), in the near future they will all be glibc again. The easiest way to check the exact version is to ask ldd , which ships with the C library. On Fedora 20: > ldd --version ldd (GNU libc) 2.18 That's glibc 2.18. On Raspbian (Debian 7 port for ARMv6 Broadcom SoC): > ldd --version ldd (Debian EGLIBC 2.13-38+rpi2) 2.13 That's eglibc 2.13. If for whatever reason you have mixed and matched some parts or otherwise aren't sure about ldd , you can query the C library directly. > whereis libc.so libc: /usr/lib64/libc.a /usr/lib64/libc.so /usr/share/man/man7/libc.7.gz None of those is executable but they provide a clue about where to find one. > $(find /usr/lib64/ -executable -name "*libc.so*") --version GNU C Library (GNU libc) stable release version 2.18, by Roland McGrath et al. However, it is not necessarily so easy, because the C library does not have to reside somewhere whereis can find it. > whereis libc.so libc: /usr/share/man/man7/libc.7.gz Unfortunately, the man page does not provide a version number. ldd still comes in handy, since any working, dynamically linked executable on the system (e.g., almost everything in /usr/bin ) will link to the C library. > ldd /usr/bin/touch /usr/lib/arm-linux-gnueabihf/libcofi_rpi.so (0xb6eed000) librt.so.1 => /lib/arm-linux-gnueabihf/librt.so.1 (0xb6ed0000) libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb6da1000) /lib/ld-linux-armhf.so.3 (0xb6efb000) libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb6d82000) libc.so.6 is on the third line. > /lib/arm-linux-gnueabihf/libc.so.6 --version GNU C Library (Debian EGLIBC 2.13-38+rpi2) stable release version 2.13, by Roland McGrath et al.
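On glibc-based systems there is also a shortcut that avoids hunting for the library file at all:
$ getconf GNU_LIBC_VERSION
glibc 2.18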
{ "source": [ "https://unix.stackexchange.com/questions/120380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
120,451
I can pretty quickly monitor the running time of a process with time : x@y ~ $ time foo real 0m14.299s user 0m4.770s sys 0m0.440s Is there a way I can get the same data for I/O and CPU usage of an argument, recorded to STDOUT? A simple command or utility like time would be ideal, where I just pass the argument of the thing I want to run: x@y ~ $ stats foo wallclock runtime 0m14.299s I/O reads 290,420 KB I/O writes 239,429 KB peak CPU usage 18.62% mean CPU usage 1.44% # etc.
look at the time man page on your system, some implementations have format options to include I/O, CPU and Memory stats (-f). For instance, GNU time , with -v will show all available info (here on Linux): /usr/bin/time -v ls Command being timed: "ls" User time (seconds): 0.00 System time (seconds): 0.00 Percent of CPU this job got: 0% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 3664 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 273 Voluntary context switches: 2 Involuntary context switches: 2 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 On BSDs , use -l instead. Note that this is the actual /usr/bin/time program, not the keyword that some shells like bash provide which you invoke with time pipeline .
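With GNU time you can also pick out just the fields you care about using a format string, which gets close to the "stats" output sketched in the question (the label text is arbitrary):
$ /usr/bin/time -f 'wallclock %e s\nCPU %P\nmax RSS %M KB\nFS inputs %I\nFS outputs %O' foo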
{ "source": [ "https://unix.stackexchange.com/questions/120451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63035/" ] }
120,484
The date command doesn't offer such thing, which is kind of sad since RFC-3339 is the modern, widespread, sane format used everywhere (except in email which is neither modern nor sane). My timezone offset is currently -08:00 so the simplest form of this command should print the current time as 2013-09-05T14:58:33.102-08:00 .
It seems like you can do several formats using the switch to the GNU implementation of date (version 5.90 or above), --rfc3339= . Examples $ date --rfc-3339=date 2014-03-19 $ date --rfc-3339=seconds 2014-03-19 18:00:05-04:00 $ date --rfc-3339=ns 2014-03-19 18:00:08.179780629-04:00 If you want the T to be added, as a hack: $ date --rfc-3339=seconds | sed 's/ /T/' 2014-03-19T18:35:03-04:00 If you want it in milliseconds: $ date --rfc-3339=ns | sed 's/ /T/; s/\(\....\).*\([+-]\)/\1\2/g' 2014-03-19T18:42:52.362-04:00 References Date and Time on the Internet: Timestamps
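Alternatively, GNU date can build the same string straight from a format, with no sed needed; %:z (the offset with a colon, e.g. -08:00) is a GNU extension:
$ date +%Y-%m-%dT%H:%M:%S%:z
2014-03-19T18:35:03-04:00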
{ "source": [ "https://unix.stackexchange.com/questions/120484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46435/" ] }
120,506
It seems I can shutdown using sudo shutdown by specifying a time or minutes. Is there a way to specify datetime for shutdown?
You can do this directly from the shutdown command , see man shutdown : SYNOPSIS /sbin/shutdown [-akrhPHfFnc] [-t sec] time [warning message] [...] time When to shutdown. So, for example: shutdown -h 21:45 That will run shutdown -h at 21:45. For commands that don't offer this functionality, you can try one of: A. Using at The at daemon is designed for precisely this. Depending on your OS, you may need to install it. On Debian based systems, this can be done with: sudo apt-get install at There are three ways of giving a command to at : Pipe it: $ echo "ls > a.txt" | at now + 1 min warning: commands will be executed using /bin/sh job 3 at Thu Apr 4 20:16:00 2013 Save the command you want to run in a text file, and then pass that file to at : $ echo "ls > a.txt" > cmd.txt $ at now + 1 min < cmd.txt warning: commands will be executed using /bin/sh job 3 at Thu Apr 4 20:16:00 2013 You can also pass at commands from STDIN: $ at now + 1 min warning: commands will be executed using /bin/sh at> ls Then, press Ctrl D to exit the at shell. The ls command will be run in one minute. You can give very precise times in the format of [[CC]YY]MMDDhhmm[.ss] , as in $ at -t 201403142134.12 < script.sh This will run the script script.sh at 21:34 and 12 seconds on the 14th of March 2014. B. Using cron (though this is not a good idea for shutdown) The other approach is using the cron scheduler which is designed to perform tasks at specific times. It is usually used for tasks that will be repeated but you can also give a specific time. Each user has their own "crontabs" which control what jobs are executed and when. The general format of a crontab is: * * * * * command to be executed - - - - - | | | | | | | | | +----- day of week (0 - 6) (Sunday=0) | | | +------- month (1 - 12) | | +--------- day of month (1 - 31) | +----------- hour (0 - 23) +------------- min (0 - 59) So, for example, this will run ls every day at 14:04: 04 14 * * * ls To set up a cronjob for a specific date: Create a new crontab by running crontab -e . This will bring up a window of your favorite text editor. Add this line to the file that just opened. This particular example will run at 14:34 on the 15th of March 2014 if that day is a Friday (so, OK, it might run more than once): 34 14 15 3 5 /path/to/command Save the file and exit the editor. This SO answer suggests a way to have it run only once but I have never used it so I can't vouch for it.
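Putting the pieces together for the original question, scheduling and cancelling a shutdown looks like this (most shutdown implementations accept -c to cancel a pending shutdown):
$ sudo shutdown -h 21:45                              # later today at 21:45
$ echo 'shutdown -h now' | sudo at -t 201403142145    # 21:45 on 14 March 2014
$ sudo shutdown -c                                    # cancel a shutdown scheduled with shutdown
A job queued with at can be removed with atrm after finding its number with atq.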
{ "source": [ "https://unix.stackexchange.com/questions/120506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38353/" ] }
120,543
What does column 'tcp6' mean on output netstat? Please anyone explain the follow output of netstat: tcp6 0 0 dmz.local.net:www 5.140.235.6%14631:49964 ESTABLISHED 21393/apache2 What does tcp6 mean?
tcp6 simply means TCP protocol over IP v6 . tcp6 0 0 dmz.local.net:www 5.140.235.6%14631:49964 ESTABLISHED 21393/apache2 As from the netstat manual : tcp6 : The protocol used. Here it is TCP over IPv6 0 : The count of bytes not copied by the user program connected to this socket. 0 : The count of bytes not acknowledged by the remote host. Local Address dmz.local.net : www : Address and port number of the local end of the socket. Unless the (-n) option is specified, the socket address is resolved to its canonical host name (FQDN), and the port number is translated into the corresponding service name. 5.140.235.6%14631 : 49964 : Address and port number of the remote end of the socket. ESTABLISHED : The state of the socket. The state ESTABLISHED means the socket has an established connection. 21393 / apache2 : Slash-separated pair of the process id (PID) and process name of the process that owns the socket. To sum up: your local apache2 process (pid= 21393), listening on the standard www port ( 80 ) has established a TCP (over IPv6 ) connection with the remote host 5.140.235.6%14631 on port 49964 (unresolved IPv6 address which is a link-local IPv6 address: an address that a computer assigns itself in order to facilitate local communications). For more about IPv6 : wikipedia IPv6 address notation this superuser thread about % in IPv6, that is Scope ID .
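If you only want the IPv6 TCP connections and the owning processes, the newer ss tool (from iproute2) gives a comparable view, and netstat can be restricted to IPv6 as well:
$ sudo ss -t -6 -p -n
$ sudo netstat -tnp -A inet6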
{ "source": [ "https://unix.stackexchange.com/questions/120543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63168/" ] }
120,615
How can I select a bunch of text and comment it all out? Currently I go to the first line, go to insert mode then type # left-arrow down-arrow and then I repeat that sequence, perhaps saving a few keystrokes by using the . repeat feature to do each line. Is there anyway I could (for instance) select either multiple lines in visual mode or by using a range of lines and an ex ('colon') command and for that range comment out all the lines with a # to make them a "block comment". The ability to quickly 'de-comment' (remove the # 's) for a block comment would also be nice.
Visual Block Mode First, move the cursor to the first char of the first line of the code block you want to comment, then type: Ctrl + v and vim will go into VISUAL BLOCK mode. Use j to move the cursor down until you reach the last line of your code block. Then type: Shift + i and vim goes to INSERT mode with the cursor at the first char of the first line. Finally, type # then ESC and the code block is now commented. To decomment, do the same thing, but instead of typing Shift + I , just type x to remove all the # characters after highlighting them in VISUAL BLOCK mode.
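If you prefer the range/ex-command approach mentioned in the question, a substitution over the range does the same job; for example, to comment lines 10 to 20 and then uncomment them again:
:10,20s/^/#/
:10,20s/^#//
After selecting lines in visual mode, pressing : pre-fills the range for you, so the command becomes:
:'<,'>s/^/#/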
{ "source": [ "https://unix.stackexchange.com/questions/120615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
120,642
I was under the impression that the maximum length of a single argument was not the problem here so much as the total size of the overall argument array plus the size of the environment, which is limited to ARG_MAX . Thus I thought that something like the following would succeed: env_size=$(cat /proc/$$/environ | wc -c) (( arg_size = $(getconf ARG_MAX) - $env_size - 100 )) /bin/echo $(tr -dc [:alnum:] </dev/urandom | head -c $arg_size) >/dev/null With the - 100 being more than enough to account for the difference between the size of the environment in the shell and the echo process. Instead I got the error: bash: /bin/echo: Argument list too long After playing around for a while, I found that the maximum was a full hex order of magnitude smaller: /bin/echo \ $(tr -dc [:alnum:] </dev/urandom | head -c $(($(getconf ARG_MAX)/16-1))) \ >/dev/null When the minus one is removed, the error returns. Seemingly the maximum for a single argument is actually ARG_MAX/16 and the -1 accounts for the null byte placed at the end of the string in the argument array. Another issue is that when the argument is repeated, the total size of the argument array can be closer to ARG_MAX , but still not quite there: args=( $(tr -dc [:alnum:] </dev/urandom | head -c $(($(getconf ARG_MAX)/16-1))) ) for x in {1..14}; do args+=( ${args[0]} ) done /bin/echo "${args[@]}" "${args[0]:6534}" >/dev/null Using "${args[0]:6533}" here makes the last argument 1 byte longer and gives the Argument list too long error. This difference is unlikely to be accounted for by the size of the environment given: $ cat /proc/$$/environ | wc -c 1045 Questions: Is this correct behaviour, or is there a bug somewhere? If not, is this behaviour documented anywhere? Is there another parameter which defines the maximum for a single argument? Is this behaviour limited to Linux (or even particular versions of such)? What accounts for the additional ~5KB discrepancy between the actual maximum size of the argument array plus the approximate size of the environment and ARG_MAX ? Additional info: uname -a Linux graeme-rock 3.13-1-amd64 #1 SMP Debian 3.13.5-1 (2014-03-04) x86_64 GNU/Linux
Answers Definitely not a bug. The parameter which defines the maximum size for one argument is MAX_ARG_STRLEN . There is no documentation for this parameter other than the comments in binfmts.h : /* * These are the maximum length and maximum number of strings passed to the * execve() system call. MAX_ARG_STRLEN is essentially random but serves to * prevent the kernel from being unduly impacted by misaddressed pointers. * MAX_ARG_STRINGS is chosen to fit in a signed 32-bit integer. */ #define MAX_ARG_STRLEN (PAGE_SIZE * 32) #define MAX_ARG_STRINGS 0x7FFFFFFF As is shown, Linux also has a (very large) limit on the number of arguments to a command. A limit on the size of a single argument (which differs from the overall limit on arguments plus environment) does appear to be specific to Linux. This article gives a detailed comparison of ARG_MAX and equivalents on Unix like systems. MAX_ARG_STRLEN is discussed for Linux, but there is no mention of any equivalent on any other systems. The above article also states that MAX_ARG_STRLEN was introduced in Linux 2.6.23, along with a number of other changes relating to command argument maximums (discussed below). The log/diff for the commit can be found here . It is still not clear what accounts for the additional discrepancy between the result of getconf ARG_MAX and the actual maximum possible size of arguments plus environment. Stephane Chazelas' related answer , suggests that part of the space is accounted for by pointers to each of the argument/environment strings. However, my own investigation suggests that these pointers are not created early in the execve system call when it may still return a E2BIG error to the calling process (although pointers to each argv string are certainly created later). Also, the strings are contiguous in memory as far as I can see, so no memory gaps due do alignment here. Although is very likely to be a factor within whatever does use up the extra memory. Understanding what uses the extra space requires a more detailed knowledge of how the kernel allocates memory (which is useful knowledge to have, so I will investigate and update later). ARG_MAX Confusion Since the Linux 2.6.23 (as result of this commit ), there have been changes to the way that command argument maximums are handled which makes Linux differ from other Unix-like systems. In addition to adding MAX_ARG_STRLEN and MAX_ARG_STRINGS , the result of getconf ARG_MAX now depends on the stack size and may be different from ARG_MAX in limits.h . Normally the result of getconf ARG_MAX will be 1/4 of the stack size. Consider the following in bash using ulimit to get the stack size: $ echo $(( $(ulimit -s)*1024 / 4 )) # ulimit output in KiB 2097152 $ getconf ARG_MAX 2097152 However, the above behaviour was changed slightly by this commit (added in Linux 2.6.25-rc4~121). ARG_MAX in limits.h now serves as a hard lower bound on the result of getconf ARG_MAX . If the stack size is set such that 1/4 of the stack size is less than ARG_MAX in limits.h , then the limits.h value will be used: $ grep ARG_MAX /usr/include/linux/limits.h #define ARG_MAX 131072 /* # bytes of args + environ for exec() */ $ ulimit -s 256 $ echo $(( $(ulimit -s)*1024 / 4 )) 65536 $ getconf ARG_MAX 131072 Note also that if the stack size set lower than the minimum possible ARG_MAX , then the size of the stack ( RLIMIT_STACK ) becomes the upper limit of argument/environment size before E2BIG is returned (although getconf ARG_MAX will still show the value in limits.h ). 
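You can check the per-argument ceiling on your own kernel directly, since MAX_ARG_STRLEN is PAGE_SIZE * 32 (a quick probe; bash, GNU head and a 4 KiB page size are assumed here):
$ echo $(( $(getconf PAGE_SIZE) * 32 ))
131072
$ /bin/true "$(head -c $(( $(getconf PAGE_SIZE) * 32 - 1 )) /dev/zero | tr '\0' x)"
$ /bin/true "$(head -c $(( $(getconf PAGE_SIZE) * 32 )) /dev/zero | tr '\0' x)"
bash: /bin/true: Argument list too long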
A final thing to note is that if the kernel is built without CONFIG_MMU (support for memory management hardware), then the checking of ARG_MAX is disabled, so the limit does not apply, although MAX_ARG_STRLEN and MAX_ARG_STRINGS still do. Further Reading Related answer by Stephane Chazelas - https://unix.stackexchange.com/a/110301/48083 A detailed page covering most of the above, including a table of ARG_MAX (and equivalent) values on other Unix-like systems - http://www.in-ulm.de/~mascheck/various/argmax/ Seemingly the introduction of MAX_ARG_STRLEN caused a bug with Automake which was embedding shell scripts into Makefiles using sh -c - http://www.mail-archive.com/[email protected]/msg05522.html
{ "source": [ "https://unix.stackexchange.com/questions/120642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48083/" ] }
120,655
Does anybody know what this error means and how to get Lispworks Personal edition to run on Redhat? Error: Error during GUI startup: Could not register handle for external module "-lgthread-2.0": libgthread-2.0.so: cannot open shared object file: No such file or directory. Here is the whole output: sudo /usr/bin/lispworks-personal-6-1-1-x86-linux LispWorks(R): The Common Lisp Programming Environment Personal Edition Copyright (C) 1987-2012 LispWorks Ltd. All rights reserved. Version 6.1.1 Saved by LispWorks as lispworks-personal-6-1-1-x86-linux, at 06 Dec 2012 16:51 User root on HostName Error during GUI startup: Could not register handle for external module "-lgthread-2.0": libgthread-2.0.so: cannot open shared object file: No such file or directory. DESCRIPTION: Output Backtrace <and a simple test case, if possible> IMPACT: Broken/Annoying/Data Loss/Missing Error/New Feature/Performance Loss URGENCY: ASAP/Current Release/Next Release/Future Release/None PRODUCT CONFIGURATION: LispWorks Personal Edition 6.1.1 Process name: /usr/bin/lispworks-personal-6-1-1-x86-linux ID: 9597 Started at: 2014/03/20 23:04:07 Save history: 1: lispworks-6-1-0-0-x86-linux-release-base Saved by davef as lispworks-6-1-0-0-x86-linux-release-base, at 03 Nov 2011 13:25 2: lispworks-6-1-0-0-x86-linux-release-gtk-shaken Saved by davef as lispworks-6-1-0-0-x86-linux-release-gtk-shaken, at 03 Nov 2011 14:00 3: lispworks-6-1-1-0-x86-linux-release-gtk-shaken Saved by davef as lispworks-personal-6-1-1-x86-linux, at 06 Dec 2012 16:51 LispWorks 6.1.1 - Personal Edition Loaded Modules: Public patches: Private patches: CAPI-GTK-DESTROY-REPRESENTATION Foreign modules: #<FLI::INTERNAL-MODULE :LISP : exports = 0> #<FLI::INTERNAL-MODULE :CALLBACKS : exports = 0> #<FLI::EXTERNAL-MODULE "-lgthread-2.0" : handle = #x00000000; exports = 0> Signal Handlers 2 SYSTEM::SIGINT-HANDLER 13 SYSTEM::THE-NULL-FUNCTION 17 SYSTEM::GET-CHILDREN-INF HOST CONFIGURATION: Zundrum (x86_64), Linux 2.6.32-431.5.1.el6.x86_64 Red Hat Enterprise Linux Workstation release 6.5 (Santiago) Kernel \r on an \m LWSerialNumber: Unknown Site: Unknown GTK+ not loaded Backtrace: #<The COMMON-LISP-USER package, 1/16 internal, 0/4 external> Call to (SUBFUNCTION 1 ENVIRONMENT:START-ENVIRONMENT) {offset 186} SYSTEM::C : #<SIMPLE-ERROR 200C2DAB> Binding frame: CONDITIONS::*IN-SIGNAL-CATCH* : T Handler frame: NIL Call to SIGNAL {offset 1446} CONDITIONS::DATUM : #<SIMPLE-ERROR 200C2DAB> CONDITIONS::ARGUMENTS : NIL Binding frame: CONDITIONS::*IN-SIGNAL-CATCH* : NIL Catch frame: CONDITIONS::SIGNAL-CATCH Binding frame: CONDITIONS::*BROKEN-ON-SIGNALS* : NIL Call to CONDITIONS::CONDITIONS-ERROR {offset 430} CONDITIONS::DATUM : "Could not register handle for external module ~S:~% ~A." CONDITIONS::ARGUMENTS : ("-lgthread-2.0" "libgthread-2.0.so: cannot open shared object file: No such file or directory") Call to ERROR {offset 67} SYSTEM::ESTRING : "Could not register handle for external module ~S:~% ~A." 
SYSTEM::EARGS : ("-lgthread-2.0" "libgthread-2.0.so: cannot open shared object file: No such file or directory") Binding frame: FLI::*DLOPEN-FLAGS* : T Call to FLI::CONNECT-TO-EXTERNAL-MODULE {offset 319} FLI::MODULE : #<FLI::EXTERNAL-MODULE "-lgthread-2.0" : handle = #x00000000; exports = 0> TYPE : :MANUAL FLI::ERRORP : T Call to FLI::CREATE-EXTERNAL-MODULE {offset 275} FLI::NAME : "-lgthread-2.0" FLI::CONNECTION-STYLE : :IMMEDIATE FLI::FILENAME : NIL FLI::MODULE : #<FLI::EXTERNAL-MODULE "-lgthread-2.0" : handle = #x00000000; exports = 0> OPEN : T FLI::LIFETIME : :SESSION FLI::DLOPEN-FLAGS : FLI::DEFAULT FLI::ADD-LIB-PATH : NIL Call to FLI:REGISTER-MODULE {offset 146} FLI::NAME : "-lgthread-2.0" FLI::CONNECTION-STYLE : :IMMEDIATE FLI::LIFETIME : :SESSION FLI::REAL-NAME : NIL FLI::FILE-NAME : NIL FLI::DLOPEN-FLAGS : FLI::DEFAULT FLI::ADD-LIB-PATH : NIL Call to LWGTK:INITIALIZE-GTK-LIBRARY {offset 999} Call to CAPI-GTK-LIBRARY::ENSURE-GTK-INITIALIZED {offset 21} Call to (METHOD CAPI-LIBRARY:LIBRARY-READY-TO-START ((EQL :GTK))) {offset 11} CAPI-GTK-LIBRARY::LOOK-AND-FEEL : :DONT-KNOW Call to CLOS::CACHE-MISS-FUNCTION {offset 311} CLOS::ARGS : (:GTK) CLOS::.CACHE-INFO. {Closed} : #<CLOS::CACHE-INFO CAPI-LIBRARY:LIBRARY-READY-TO-START [8/2] > CLOS::.GF. {Closed} : #<STANDARD-GENERIC-FUNCTION CAPI-LIBRARY:LIBRARY-READY-TO-START 217FEBA2> Call to CAPI-INTERNALS:START-ENVIRONMENT {offset 60} CAPI::ARGS : (:START-FUNCTIONS ((LISPWORKS-TOOLS::START-LISPWORKS-TOOLS :TOOLS (LISPWORKS-TOOLS:LISPWORKS-ECHO-PODIUM LISPWORKS-TOOLS:LISTENER))) :ENVIRONMENT :CAPI) CAPI::ENVIRONMENT : :CAPI PACKAGE : NIL CAPI::LIBRARY : NIL CAPI::START-FUNCTIONS : ((LISPWORKS-TOOLS::START-LISPWORKS-TOOLS :TOOLS (LISPWORKS-TOOLS:LISPWORKS-ECHO-PODIUM LISPWORKS-TOOLS:LISTENER))) Call to ENVIRONMENT::START-CAPI-ENVIRONMENT {offset 24} LISPWORKS-TOOLS::ARGS : NIL Call to CLOS::CACHE-MISS-FUNCTION {offset 311} CLOS::ARGS : (#<ENVIRONMENT::CAPI-ENVIRONMENT 21BCCF4B> NIL) CLOS::.CACHE-INFO. {Closed} : #<CLOS::CACHE-INFO ENVIRONMENT-INTERNALS:ENVIRONMENT-START [8/2] > CLOS::.GF. {Closed} : #<STANDARD-GENERIC-FUNCTION ENVIRONMENT-INTERNALS:ENVIRONMENT-START 20979E5A> Handler frame: ((ERROR . #<Function 1 subfunction of ENVIRONMENT:START-ENVIRONMENT 21E1A0EA>)) Call to ENVIRONMENT:START-ENVIRONMENT {offset 158} SYSTEM::ARGS : NIL SYSTEM::OLD {Closed} : #<Function ENVIRONMENT:START-ENVIRONMENT 20979E92> Binding frame: MP:*INITIAL-PROCESSES* : (("The idle process" (:PRIORITY -536870912 :RESTART-ACTION :CONTINUE :INTERNAL-SERVER :IDLE) MP::PROCESS-IDLE-FUNCTION)) Call to ENVIRONMENT::I-RESTART-WITH-ENVIRONMENT-AUX {offset 210} ENVIRONMENT::TTY-LISTENER-P : NIL Call to SYSTEM::RESTART-HOOK {offset 96} FUNCTION : SYSTEM::%TOP-LEVEL Restart frame: (SYSTEM::TOP-LEVEL) Catch frame: (SYSTEM::IN-START-FUNCTION-ONCE . RESTART-CASE) Catch frame: (SYSTEM::IN-START-FUNCTION-ONCE . 
1) Catch frame: SYSTEM::EXIT-LISPWORKS Call to SYSTEM::IN-START-FUNCTION-ONCE {offset 421} Catch frame: SYSTEM::START-UP Catch frame: SYSTEM::IN-START-FUNCTION Call to SYSTEM::IN-START-FUNCTION {offset 57} Call to SYSTEM::CALL-IN-START-FUNCTION {offset 12} Catch frame: (NIL) Call to SYSTEM::START-FUNCTION {offset 50} SYSTEM::GC-MESSAGES : :DONT-KNOW SYSTEM::START-FUNCTION Generation 0: Total Size 515K, Allocated 490K, Free 17K Segment 20090128: Total Size 507K, Allocated 490K, Free 13K minimum free space 64K, Awaiting promotion = 0K, sweeps before promotion =10 Segment 21EDE100: Total Size 7K, Allocated 0K, Free 3K minimum free space 0K, Awaiting promotion = 0K, sweeps before promotion =2 Generation 1: Total Size 308K, Allocated 110K, Free 189K Segment 2070F0C0: Total Size 68K, Allocated 0K, Free 64K minimum free space 3K, Awaiting promotion = 0K, sweeps before promotion =4 Segment 200540A8: Total Size 240K, Allocated 110K, Free 125K minimum free space 0K, static Generation 2: Total Size 68K, Allocated 0K, Free 64K Segment 20F1C640: Total Size 68K, Allocated 0K, Free 64K minimum free space 117K, Awaiting promotion = 0K, sweeps before promotion =4 Generation 3: Total Size 30387K, Allocated 30247K, Free 128K Segment 2010F0C0: Total Size 6144K, Allocated 6139K, Free 0K minimum free space 3K, Awaiting promotion = 0K, sweeps before promotion =10 Segment 20F2D6B8: Total Size 16066K, Allocated 15934K, Free 128K minimum free space 0K, Awaiting promotion = 0K, sweeps before promotion =10 Segment 20720138: Total Size 8177K, Allocated 8173K, Free 0K minimum free space 0K, Awaiting promotion = 0K, sweeps before promotion =10 Total Size 31616K, Allocated 30848K, Free 398K EDIT: Here is the output from yum install glib2-devel Setting up Install Process Package glib2-devel-2.26.1-7.el6_5.x86_64 already installed and latest version Nothing to do Here is libgthread*.so sudo find /*/*/libgthread*.so /usr/lib64/libgthread-2.0.so Here is ldd lispworks... ldd /usr/bin/lispworks-personal-6-1-1-x86-linux linux-gate.so.1 => (0x00a8b000) libdl.so.2 => /lib/libdl.so.2 (0x00d45000) libpthread.so.0 => /lib/libpthread.so.0 (0x00402000) libc.so.6 => /lib/libc.so.6 (0x0087e000) /lib/ld-linux.so.2 (0x0085c000) EDIT: Here the error I get on my Fedora 20 x86_64 KDE ~]$ lispworks-personal-6-1-1-x86-linux LispWorks(R): The Common Lisp Programming Environment Personal Edition Copyright (C) 1987-2012 LispWorks Ltd. All rights reserved. Version 6.1.1 Saved by LispWorks as lispworks-personal-6-1-1-x86-linux, at 06 Dec 2012 16:51 User root on Zundrum Error during GUI startup: Could not register handle for external module "-lgtk-x11-2.0": libgtk-x11-2.0.so: cannot open shared object file: No such file or directory. 
DESCRIPTION: Output Backtrace <and a simple test case, if possible> IMPACT: Broken/Annoying/Data Loss/Missing Error/New Feature/Performance Loss URGENCY: ASAP/Current Release/Next Release/Future Release/None PRODUCT CONFIGURATION: LispWorks Personal Edition 6.1.1 Process name: /home/kristjan/bin/lispworks-personal-6-1-1-x86-linux ID: 2527 Started at: 2014/03/25 18:37:44 Save history: 1: lispworks-6-1-0-0-x86-linux-release-base Saved by davef as lispworks-6-1-0-0-x86-linux-release-base, at 03 Nov 2011 13:25 2: lispworks-6-1-0-0-x86-linux-release-gtk-shaken Saved by davef as lispworks-6-1-0-0-x86-linux-release-gtk-shaken, at 03 Nov 2011 14:00 3: lispworks-6-1-1-0-x86-linux-release-gtk-shaken Saved by davef as lispworks-personal-6-1-1-x86-linux, at 06 Dec 2012 16:51 LispWorks 6.1.1 - Personal Edition Loaded Modules: Public patches: Private patches: CAPI-GTK-DESTROY-REPRESENTATION Foreign modules: #<FLI::INTERNAL-MODULE :LISP : exports = 0> #<FLI::INTERNAL-MODULE :CALLBACKS : exports = 0> #<FLI::EXTERNAL-MODULE "-lgthread-2.0" {/lib/libgthread-2.0.so.0}: handle = #x09D30DE0; exports = 0> #<FLI::EXTERNAL-MODULE "-lgtk-x11-2.0" : handle = #x00000000; exports = 0> Signal Handlers 2 SYSTEM::SIGINT-HANDLER 13 SYSTEM::THE-NULL-FUNCTION 17 SYSTEM::GET-CHILDREN-INF HOST CONFIGURATION: Zundrum (x86_64), Linux 3.13.6-200.fc20.x86_64 Fedora release 20 (Heisenbug) Kernel \r on an \m (\l) LWSerialNumber: Unknown Site: Unknown GTK+ not loaded Backtrace: #<The COMMON-LISP-USER package, 1/16 internal, 0/4 external> Call to (SUBFUNCTION 1 ENVIRONMENT:START-ENVIRONMENT) {offset 186} SYSTEM::C : #<SIMPLE-ERROR 200B5DA3> Binding frame: CONDITIONS::*IN-SIGNAL-CATCH* : T Handler frame: NIL Call to SIGNAL {offset 1446} CONDITIONS::DATUM : #<SIMPLE-ERROR 200B5DA3> CONDITIONS::ARGUMENTS : NIL Binding frame: CONDITIONS::*IN-SIGNAL-CATCH* : NIL Catch frame: CONDITIONS::SIGNAL-CATCH Binding frame: CONDITIONS::*BROKEN-ON-SIGNALS* : NIL Call to CONDITIONS::CONDITIONS-ERROR {offset 430} CONDITIONS::DATUM : "Could not register handle for external module ~S:~% ~A." CONDITIONS::ARGUMENTS : ("-lgtk-x11-2.0" "libgtk-x11-2.0.so: cannot open shared object file: No such file or directory") Call to ERROR {offset 67} SYSTEM::ESTRING : "Could not register handle for external module ~S:~% ~A." SYSTEM::EARGS : ("-lgtk-x11-2.0" "libgtk-x11-2.0.so: cannot open shared object file: No such file or directory") Binding frame: FLI::*DLOPEN-FLAGS* : T Call to FLI::CONNECT-TO-EXTERNAL-MODULE {offset 319} FLI::MODULE : #<FLI::EXTERNAL-MODULE "-lgtk-x11-2.0" : handle = #x00000000; exports = 0> TYPE : :MANUAL FLI::ERRORP : T Call to FLI::CREATE-EXTERNAL-MODULE {offset 275} FLI::NAME : "-lgtk-x11-2.0" FLI::CONNECTION-STYLE : :IMMEDIATE FLI::FILENAME : NIL FLI::MODULE : #<FLI::EXTERNAL-MODULE "-lgtk-x11-2.0" : handle = #x00000000; exports = 0> OPEN : T FLI::LIFETIME : :SESSION FLI::DLOPEN-FLAGS : FLI::DEFAULT FLI::ADD-LIB-PATH : NIL Call to FLI:REGISTER-MODULE {offset 146} FLI::NAME : "-lgtk-x11-2.0" FLI::CONNECTION-STYLE : :IMMEDIATE FLI::LIFETIME : :SESSION FLI::REAL-NAME : NIL FLI::FILE-NAME : NIL FLI::DLOPEN-FLAGS : FLI::DEFAULT FLI::ADD-LIB-PATH : NIL Call to LWGTK:INITIALIZE-GTK-LIBRARY {offset 999} Call to CAPI-GTK-LIBRARY::ENSURE-GTK-INITIALIZED {offset 21} Call to (METHOD CAPI-LIBRARY:LIBRARY-READY-TO-START ((EQL :GTK))) {offset 11} CAPI-GTK-LIBRARY::LOOK-AND-FEEL : :DONT-KNOW Call to CLOS::CACHE-MISS-FUNCTION {offset 311} CLOS::ARGS : (:GTK) CLOS::.CACHE-INFO. 
{Closed} : #<CLOS::CACHE-INFO CAPI-LIBRARY:LIBRARY-READY-TO-START [8/2] > CLOS::.GF. {Closed} : #<STANDARD-GENERIC-FUNCTION CAPI-LIBRARY:LIBRARY-READY-TO-START 217FEBA2> Call to CAPI-INTERNALS:START-ENVIRONMENT {offset 60} CAPI::ARGS : (:START-FUNCTIONS ((LISPWORKS-TOOLS::START-LISPWORKS-TOOLS :TOOLS (LISPWORKS-TOOLS:LISPWORKS-ECHO-PODIUM LISPWORKS-TOOLS:LISTENER))) :ENVIRONMENT :CAPI) CAPI::ENVIRONMENT : :CAPI PACKAGE : NIL CAPI::LIBRARY : NIL CAPI::START-FUNCTIONS : ((LISPWORKS-TOOLS::START-LISPWORKS-TOOLS :TOOLS (LISPWORKS-TOOLS:LISPWORKS-ECHO-PODIUM LISPWORKS-TOOLS:LISTENER))) Call to ENVIRONMENT::START-CAPI-ENVIRONMENT {offset 24} LISPWORKS-TOOLS::ARGS : NIL Call to CLOS::CACHE-MISS-FUNCTION {offset 311} CLOS::ARGS : (#<ENVIRONMENT::CAPI-ENVIRONMENT 21BCCF4B> NIL) CLOS::.CACHE-INFO. {Closed} : #<CLOS::CACHE-INFO ENVIRONMENT-INTERNALS:ENVIRONMENT-START [8/2] > CLOS::.GF. {Closed} : #<STANDARD-GENERIC-FUNCTION ENVIRONMENT-INTERNALS:ENVIRONMENT-START 20979E5A> Handler frame: ((ERROR . #<Function 1 subfunction of ENVIRONMENT:START-ENVIRONMENT 21E1A0EA>)) Call to ENVIRONMENT:START-ENVIRONMENT {offset 158} SYSTEM::ARGS : NIL SYSTEM::OLD {Closed} : #<Function ENVIRONMENT:START-ENVIRONMENT 20979E92> Binding frame: MP:*INITIAL-PROCESSES* : (("The idle process" (:PRIORITY -536870912 :RESTART-ACTION :CONTINUE :INTERNAL-SERVER :IDLE) MP::PROCESS-IDLE-FUNCTION)) Call to ENVIRONMENT::I-RESTART-WITH-ENVIRONMENT-AUX {offset 210} ENVIRONMENT::TTY-LISTENER-P : NIL Call to SYSTEM::RESTART-HOOK {offset 96} FUNCTION : SYSTEM::%TOP-LEVEL Restart frame: (SYSTEM::TOP-LEVEL) Catch frame: (SYSTEM::IN-START-FUNCTION-ONCE . RESTART-CASE) Catch frame: (SYSTEM::IN-START-FUNCTION-ONCE . 1) Catch frame: SYSTEM::EXIT-LISPWORKS Call to SYSTEM::IN-START-FUNCTION-ONCE {offset 421} Catch frame: SYSTEM::START-UP Catch frame: SYSTEM::IN-START-FUNCTION Call to SYSTEM::IN-START-FUNCTION {offset 57} Call to SYSTEM::CALL-IN-START-FUNCTION {offset 12} Catch frame: (NIL) Call to SYSTEM::START-FUNCTION {offset 50} SYSTEM::GC-MESSAGES : :DONT-KNOW SYSTEM::START-FUNCTION Generation 0: Total Size 515K, Allocated 260K, Free 247K Segment 20090128: Total Size 507K, Allocated 260K, Free 243K minimum free space 64K, Awaiting promotion = 0K, sweeps before promotion =10 Segment 21EDE100: Total Size 7K, Allocated 0K, Free 3K minimum free space 0K, Awaiting promotion = 0K, sweeps before promotion =2 Generation 1: Total Size 308K, Allocated 110K, Free 189K Segment 2070F0C0: Total Size 68K, Allocated 0K, Free 64K minimum free space 3K, Awaiting promotion = 0K, sweeps before promotion =4 Segment 200540A8: Total Size 240K, Allocated 110K, Free 125K minimum free space 0K, static Generation 2: Total Size 68K, Allocated 0K, Free 64K Segment 20F1C640: Total Size 68K, Allocated 0K, Free 64K minimum free space 117K, Awaiting promotion = 0K, sweeps before promotion =4 Generation 3: Total Size 30387K, Allocated 30247K, Free 128K Segment 2010F0C0: Total Size 6144K, Allocated 6139K, Free 0K minimum free space 3K, Awaiting promotion = 0K, sweeps before promotion =10 Segment 20F2D6B8: Total Size 16066K, Allocated 15934K, Free 128K minimum free space 0K, Awaiting promotion = 0K, sweeps before promotion =10 Segment 20720138: Total Size 8177K, Allocated 8173K, Free 0K minimum free space 0K, Awaiting promotion = 0K, sweeps before promotion =10 Total Size 31616K, Allocated 30618K, Free 628K
{ "source": [ "https://unix.stackexchange.com/questions/120655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
120,677
The mount.cifs command fails to run on a Gentoo system with systemd:

ae429-1105 etc # mount -t cifs //file.abc.edu.au/user /home/directory/path -o credentials=/etc/user,rw,iocharset=utf8,file_mode=0777,dir_mode=0777
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

The existence and accessibility of the mountpoint /home/directory/path and the credentials file /etc/user have been confirmed. The relevant modules and services have also been enabled, i.e.,

ae429-1105 etc # lsmod | egrep 'fuse|cifs'
fuse      72589  5
cifs     312131  0

and

ae429-1105 etc # systemctl -t service -a | grep Samba
nmbd.service     loaded active   running Samba NetBIOS name server
smbd.service     loaded active   running Samba SMB/CIFS server
winbindd.service loaded inactive dead    Samba Winbind daemon

This problem has been reported by many users, e.g. one example. ALSO NOTE that the same command executed on my Ubuntu/Debian system mounts successfully. Other information on the problematic machine:

ae429-1105 etc # mount.cifs --version
mount.cifs version: 6.1

The version of mount.cifs installed on Debian/Ubuntu is 6.0.
You might need to provide the vers= option to the mount command to force version 3.0 if you're trying to mount a share from a newer version of Windows. One of our fileservers was recently upgraded to 2012R2 and that's when my mount stopped working. Setting it to vers=3.0 fixed the issue. Like most Samba/CIFS errors the "No such file or directory" message isn't much help. As an example: # mount -t cifs //win2012r2/someshare -o cred=/home/foo/.cifs_user,vers=3.0 /mnt/tmp ..where I have my domain, username and password contained in the .cifs_user file: user=MyUser password=MyPassword domain=MyDomain Apparently, smbmount uses a newer version of the SMB protocol by default since it worked without issue or any special options. Notice below that the default protocol version is 1.0. From the mount.cifs man page: vers=arg SMB protocol version. Allowed values are: · 1.0 - The classic CIFS/SMBv1 protocol. · 2.0 - The SMBv2.002 protocol. This was initially introduced in Windows Vista Service Pack 1, and Windows Server 2008. Note that the initial release version of Windows Vista spoke a slightly different dialect (2.000) that is not supported. · 2.1 - The SMBv2.1 protocol that was introduced in Microsoft Windows 7 and Windows Server 2008R2. · 3.0 - The SMBv3.0 protocol that was introduced in Microsoft Windows 8 and Windows Server 2012. · 3.02 or 3.0.2 - The SMBv3.0.2 protocol that was introduced in Microsoft Windows 8.1 and Windows Server 2012R2. · 3.1.1 or 3.11 - The SMBv3.1.1 protocol that was introduced in Microsoft Windows 10 and Windows Server 2016. · 3 - The SMBv3.0 protocol version and above. · default - Tries to negotiate the highest SMB2+ version supported by both the client and server. If no dialect is specified on mount vers=default is used. To check Dialect refer to /proc/fs/cifs/DebugData Note too that while this option governs the protocol version used, not all features of each version are available. The default since v4.13.5 is for the client and server to negotiate the highest possible version greater than or equal to 2.1. In kernels prior to v4.13, the default was 1.0. For kernels between v4.13 and v4.13.5 the default is 3.0.
{ "source": [ "https://unix.stackexchange.com/questions/120677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44259/" ] }
120,710
nslookup does not come preinstalled in RHEL 7 Beta. I noticed even dig and host was not pre installed. I read couple of links which mentions nslookup is dead/deprecated, so is there an alternative for nslookup introduced in RHEL7 which comes preinstalled?
Have you tried getent hosts ? [root@test ~]# getent hosts unix.stackexchange.com 198.252.206.140 unix.stackexchange.com It's not give full details like nameserver,other resource record like other tools ( dig ) do, so if you want full details then you need to install bind-utils package. or just using ping to know ip
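If you do want dig, host and nslookup themselves, they ship in the bind-utils package on RHEL/CentOS, so installing it gives you the full toolset (output of the lookups below is whatever your resolver returns):

$ sudo yum install bind-utils
$ dig +short unix.stackexchange.com
$ host unix.stackexchange.com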
{ "source": [ "https://unix.stackexchange.com/questions/120710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62686/" ] }
120,786
Is there a one-liner that will list all executables from $PATH in Bash?
This is not an answer, but it's showing binary, a command which you could run compgen -c (assuming bash ) Other useful commands compgen -a # will list all the aliases you could run. compgen -b # will list all the built-ins you could run. compgen -k # will list all the keywords you could run. compgen -A function # will list all the functions you could run. compgen -A function -abck # will list all the above in one go.
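If you literally only want the executables found in the directories of $PATH (leaving out aliases, builtins, keywords and functions), a rough bash sketch, assuming GNU find for -executable and -printf, would be:

IFS=: read -ra dirs <<< "$PATH"
for d in "${dirs[@]}"; do
    # list regular executable files directly inside each PATH directory
    find "$d" -maxdepth 1 -type f -executable -printf '%f\n' 2>/dev/null
done | sort -u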
{ "source": [ "https://unix.stackexchange.com/questions/120786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
120,788
I have the following in one of my shell functions:

function _process () {
  awk -v l="$line" '
  BEGIN {p=0}
  /'"$1"'/ {p=1}
  END{ if(p) print l >> "outfile.txt" }
  '
}

so when called as _process $arg, $arg gets passed as $1 and is used as a search pattern. It works this way because the shell expands $1 in place of the awk pattern. Also, l can be used inside the awk program, being declared with -v l="$line". All fine. Is it possible, in the same manner, to give the pattern to search for as a variable? The following will not work:

awk -v l="$line" -v search="$pattern" '
BEGIN {p=0}
/search/ {p=1}
END{ if(p) print l >> "outfile.txt" }
'

as awk will not interpret /search/ as a variable, but literally instead.
Use awk's ~ operator, and you don't need to provide a literal regex on the right-hand side: function _process () { awk -v l="$line" -v pattern="$1" ' $0 ~ pattern {p=1; exit} END {if(p) print l >> "outfile.txt"} ' } Here calling exit upon the first match as we don't need to read the rest. You don't even need awk , grep would be enough and likely more efficient and avoid the problem of awk 's -v var='value' doing backslash processing: function _process () { grep -qe "$1" && printf '%s\n' "$line" } Depending on the pattern, you may want grep -Eqe "$1"
{ "source": [ "https://unix.stackexchange.com/questions/120788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60095/" ] }
120,957
I want to run a command when the user becomes inactive (the system is idle). For example: echo "You started to be inactive." Also, when the user becomes active again (the system is not idle anymore): echo "You started to be active, again." I need a shell script that will do this. Is this possible without a timer/interval? Maybe some system events?
This thread on the ArchLinux forums contains a short C program that queries the xscreensaver for information about how long the user has been idle, which seems to be quite close to your requirements:

#include <X11/extensions/scrnsaver.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        return(1);
    }
    XScreenSaverInfo *info = XScreenSaverAllocInfo();
    XScreenSaverQueryInfo(dpy, DefaultRootWindow(dpy), info);
    printf("%u\n", info->idle);
    return(0);
}

Save this as getIdle.c and compile with

gcc -o getIdle getIdle.c -lXss -lX11

to get an executable file getIdle. This program prints the "idle time" (user does not move/click with mouse, does not use keyboard) in milliseconds, so a bash script that builds upon this could look like this:

#!/bin/bash

idle=false
idleAfter=3000   # consider idle after 3000 ms

while true; do
    idleTimeMillis=$(./getIdle)
    echo $idleTimeMillis   # just for debug purposes.

    if [[ $idle = false && $idleTimeMillis -gt $idleAfter ]] ; then
        echo "start idle"  # or whatever command(s) you want to run...
        idle=true
    fi

    if [[ $idle = true && $idleTimeMillis -lt $idleAfter ]] ; then
        echo "end idle"    # same here.
        idle=false
    fi

    sleep 1   # polling interval
done

This still needs regular polling, but it does everything you need...
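As an alternative that avoids compiling anything, the small xprintidle utility (packaged by many distributions, though you may need to install it) prints the same X idle time in milliseconds, so the loop above can be sketched roughly like this:

idle=false
idleAfter=3000
while true; do
    t=$(xprintidle)
    if [ "$idle" = false ] && [ "$t" -gt "$idleAfter" ]; then
        echo "You started to be inactive."
        idle=true
    elif [ "$idle" = true ] && [ "$t" -lt "$idleAfter" ]; then
        echo "You started to be active, again."
        idle=false
    fi
    sleep 1
done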
{ "source": [ "https://unix.stackexchange.com/questions/120957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45370/" ] }
120,988
My system suddenly crashed and I have restarted it. Where can I find the last/previous crash log, since there is no /var/log/syslog* anymore?
journalctl --since=today Reference
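A few more journalctl invocations that help when hunting for the previous crash, assuming the journal is persistent (i.e. /var/log/journal exists or Storage=persistent is set in /etc/systemd/journald.conf):

# list the boots the journal knows about
journalctl --list-boots
# messages from the previous boot only
journalctl -b -1
# only warnings and more severe messages from the previous boot
journalctl -b -1 -p warning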
{ "source": [ "https://unix.stackexchange.com/questions/120988", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
121,013
For this question, let's consider a bash shell script, though this question must be applicable to all types of shell script. When someone executes a shell script, does Linux load all the script at once (into memory maybe) or does it read script commands one by one (line by line)? In other words, if I execute a shell script and delete it before the execution completes, will the execution be terminated or will it continue as it is?
If you use strace you can see how a shell script is executed when it's run. Example Say I have this shell script. $ cat hello_ul.bash #!/bin/bash echo "Hello Unix & Linux!" Running it using strace : $ strace -s 2000 -o strace.log ./hello_ul.bash Hello Unix & Linux! $ Taking a look inside the strace.log file reveals the following. ... open("./hello_ul.bash", O_RDONLY) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7fff0b6e3330) = -1 ENOTTY (Inappropriate ioctl for device) lseek(3, 0, SEEK_CUR) = 0 read(3, "#!/bin/bash\n\necho \"Hello Unix & Linux!\"\n", 80) = 40 lseek(3, 0, SEEK_SET) = 0 getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=4*1024}) = 0 fcntl(255, F_GETFD) = -1 EBADF (Bad file descriptor) dup2(3, 255) = 255 close(3) ... Once the file's been read in, it's then executed: ... read(255, "#!/bin/bash\n\necho \"Hello Unix & Linux!\"\n", 40) = 40 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 3), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc0b38ba000 write(1, "Hello Unix & Linux!\n", 20) = 20 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(255, "", 40) = 0 exit_group(0) = ? In the above we can clearly see that the entire script appears to be being read in as a single entity, and then executed there after. So it would "appear" at least in Bash's case that it reads the file in, and then executes it. So you'd think you could edit the script while it's running? NOTE: Don't, though! Read on to understand why you shouldn't mess with a running script file. What about other interpreters? But your question is slightly off. It's not Linux that's necessarily loading the contents of the file, it's the interpreter that's loading the contents, so it's really up to how the interpreter's implemented whether it loads the file entirely or in blocks or lines at a time. So why can't we edit the file? If you use a much larger script however you'll notice that the above test is a bit misleading. In fact most interpreters load their files in blocks. This is pretty standard with many of the Unix tools where they load blocks of a file, process it, and then load another block. You can see this behavior with this U&L Q&A that I wrote up a while ago regarding grep , titled: How much text does grep/egrep consume each time? . Example Say we make the following shell script. $ ( echo '#!/bin/bash'; for i in {1..100000}; do printf "%s\n" "echo \"$i\""; done ) > ascript.bash; $ chmod +x ascript.bash Resulting in this file: $ ll ascript.bash -rwxrwxr-x. 1 saml saml 1288907 Mar 23 18:59 ascript.bash Which contains the following type of content: $ head -3 ascript.bash ; echo "..."; tail -3 ascript.bash #!/bin/bash echo "1" echo "2" ... echo "99998" echo "99999" echo "100000" Now when you run this using the same technique above with strace : $ strace -s 2000 -o strace_ascript.log ./ascript.bash ... read(255, "#!/bin/bash\necho \"1\"\necho \"2\"\necho \"3\"\necho \"4\"\necho \"5\"\necho \"6\"\necho \"7\"\necho \"8\"\necho \"9\"\necho \"10\"\necho ... ... \"181\"\necho \"182\"\necho \"183\"\necho \"184\"\necho \"185\"\necho \"186\"\necho \"187\"\necho \"188\"\necho \"189\"\necho \"190\"\necho \""..., 8192) = 8192 You'll notice that the file is being read in at 8KB increments, so Bash and other shells will likely not load a file in its entirety, rather they read them in in blocks. References The #! 
magic, details about the shebang/hash-bang mechanism on various Unix flavours
{ "source": [ "https://unix.stackexchange.com/questions/121013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55673/" ] }
121,071
In addition to the more widespread useradd , Debian based systems also contain an additional adduser command which provides a higher level interface for adding users and some related tasks. There are various questions/answers on other SE sites which detail the basic differences between these commands, for example: ServerFault - What's the difference between 'useradd' and 'adduser'? Superuser - What's the difference between “adduser” and “useradd”? Ask Ubuntu - What is the difference between adduser and useradd? Most of the answers essentially say that adduser provides nicer interface for adding users interactively, but don't give much detail on what happens when adduser is run that doesn't compared to useradd . So: What does adduser do that useradd doesn't? What commands do I need to use to produce equivalent results?
First off, the respective man page snippets highlight the differences between the two commands and give some indication of what is going on. For adduser : adduser and addgroup add users and groups to the system according to command line options and configuration information in /etc/adduser.conf. They are friendlier front ends to the low level tools like useradd, groupadd and usermod programs, by default choosing Debian policy conformant UID and GID values, creating a home directory with skeletal configuration, running a custom script, and other features. Then for useradd : useradd is a low level utility for adding users. On Debian, administrators should usually use adduser(8) instead. Further investigation of adduser reveals that it is a perl script providing a high level interface to, and thus offering some of the functionality of, the following commands: useradd groupadd passwd - used to add/change users passwords. gpasswd - used to add/change group passwords. usermod - used to change various user associated parameters. chfn - used to add/change additional information held on a user. chage - used to change password expiry information. edquota - used to change disk usage quotas. A basic run of the adduser command is as follows: adduser username This simple command will do a number of things: Create the user named username . Create the user's home directory (default is /home/username and copy the files from /etc/skel into it. Create a group with the same name as the user and place the user in it. Prompt for a password for the user. Prompt for additional information on the user. The useradd program can most of accomplish most of this, however it does not do so by default and needs additional options. Some of the information requires more commands: useradd -m -U username passwd username chfn username Note that adduser ensures that created UIDs and GIDs conform with the Debian policy . Creating normal users with useradd seems to be ok, provided UID_MIN / UID_MAX in /etc/login.defs matches the Debian policy. What is a problem though is that Debian specifies a particular range for system user UIDs which only seems to be supported in /etc/adduser.conf , so naively adding a system user with useradd and not specifying a UID/GUID in the correct range leaves the potential for serious problems. Another common use for adduser is to simplify the process of adding a user to a group. Here, the following command: adduser username newgroup is equivalent to the following usermod command: usermod -a -G newgroup username The main drawback from usermod in this case is that forgetting to pass the append option (i.e.: -a ) would end up removing the user from all groups before adding them to "newgroup" (i.e.: -G alone means "replace with"). One downside to using adduser here though is that you can only specify one group at a time.
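As a concrete illustration of the system-user point above, here is a hedged comparison (the name myservice is just an example, and the two commands are only roughly equivalent): adduser picks a UID from Debian's system range automatically, while useradd has to be told explicitly to create a system account.

# Debian-friendly: UID/GID chosen from the system ranges in /etc/adduser.conf
adduser --system --group --home /var/lib/myservice myservice

# roughly equivalent with the low-level tool
useradd --system --user-group --create-home --home-dir /var/lib/myservice \
        --shell /usr/sbin/nologin myservice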
{ "source": [ "https://unix.stackexchange.com/questions/121071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48083/" ] }
121,087
How is the random string M1uG*xgRCthKWwjIjWc*010iSthY9buc being detected as too simplistic/systematic for a password according to passwd and cracklib-check ? Try it on your machine and see echo "M1uG*xgRCthKWwjIjWc*010iSthY9buc" | cracklib-check Note that this is not my password, but another randomly generated string from the same random password generator that produces the same result.
Since cracklib is open source, the answer can be found in the source code . "Too simplistic/systematic" means that there are too many characters that are preceded by one of their alphabetical neighbors. Hence "ab" or "ba" are considered bad, but "ac" or "ca" are OK since the b is omitted. Before this patch from 2010-03-02 , it allows at most four characters that exhibit this trait. E.g., "bar12345" would fail, because the characters "a", "2", "3", "4" and "5" are alphabetical neighbors of the preceding characters. slm found out in his answer that M1uG*xgRCthKWwjIjWc*010iS was OK, while M1uG*xgRCthKWwjIjWc*010iSt is not. Let's analyze. Here are the characters that cracklib-check thinks are indications of a systematic password: M1uG*xgRCthKWwjIjWc*010iS ^^ ^^ which is below the max of four, but adding the t: M1uG*xgRCthKWwjIjWc*010iSt ^^ ^^ ^ pushes it above the limit, since T follows S (it appears the test is case insensitive). The patch changes the max limit so it depends on the total password length, to avoid false positives like this.
{ "source": [ "https://unix.stackexchange.com/questions/121087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41855/" ] }
121,161
Right now I'm using echo "Hello World" >> file.txt to append some text to a file but I also need to add text below a certain string let's say [option] , is it possible with sed ? EG: Input file Some text Random [option] Some stuff Output file Some text Random [option] *inserted text* Some stuff
Append line after match sed '/\[option\]/a Hello World' input Insert line before match sed '/\[option\]/i Hello World' input Additionally you can take backup and edit input file in-place using -i.bkp option to sed
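For example, to perform the edit from the question in place while keeping a backup of the original (GNU sed syntax; the -i and a/i forms differ slightly on BSD/macOS sed):

sed -i.bkp '/\[option\]/a *inserted text*' file.txt
# the unmodified file is kept as file.txt.bkp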
{ "source": [ "https://unix.stackexchange.com/questions/121161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10425/" ] }
121,289
Is there a way to check the permissions of the root folder, /? I mean the folder's permissions, not its content's (/var, /usr, etc.) permissions? Running ls /.. shows the content's permissions.
You can also use the -d switch of ls : $ ls -ld / drwxr-xr-x 28 root root 126976 Mar 20 17:11 / From man ls : -l use a long listing format -d, --directory list directory entries instead of contents, and do not derefer‐ ence symbolic links
{ "source": [ "https://unix.stackexchange.com/questions/121289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62602/" ] }
121,293
I want to write a Bash script to convert every .pdf file in the current directory into a .png file. For example: $ls . a.pdf b.pdf $./pdf2png.sh Converting pdfs to pngs a.pdf -> a.png b.pdf -> b.png This is my best attempt: #!/bin/bash convert -verbose -density 500 -resize '800' a.pdf a.png convert -verbose -density 500 -resize '800' b.pdf b.png
If you have really strange names, ones that contain newlines or backslashes and the like, you could do something like this: find . -type f -name '*.pdf' -print0 | while IFS= read -r -d '' file do convert -verbose -density 500 -resize 800 "${file}" "${file%.*}.png" done That should be able to deal with just about anything you throw at it. Tricks used: find ... -print0 : causes find to print its results separated by the null character, let's us deal with newlines. IFS= : this will disable word splitting, needed to deal with white space. read -r : disables interpreting of backslash escape characters, to deal with files that contain backslashes. read -d '' : sets the record delimiter to the null character, to deal with find's output and correctly handle file names with newline characters. ${file%.*}.png : uses the shell's inbuilt string manipulation abilities to remove the extension.
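If you prefer to let find spawn the loop itself instead of piping into a while loop, a roughly equivalent sketch (same convert options as above, limited to the current directory like the original question) is:

find . -maxdepth 1 -type f -name '*.pdf' -exec sh -c '
    for f in "$@"; do
        convert -verbose -density 500 -resize 800 "$f" "${f%.pdf}.png"
    done
' sh {} +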
{ "source": [ "https://unix.stackexchange.com/questions/121293", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55996/" ] }
121,395
I use an Apple wired keyboard on Linux. By default the function keys (F1, F2, F3, etc) require the fn key to be pressed for them to work. Without the fn key, these keys control the features like Screen Brightness, Volume, and Music Track Control. Is there any way to swap these around, so the Function keys do not require the fn modifier, but the other functions (Brightness etc) do?
You need to add 0 or 2 into /sys/module/hid_apple/parameters/fnmode . i.e.: echo 2 > /sys/module/hid_apple/parameters/fnmode There seems to be some confusion regarding what the difference between the two values might be. Quoting the Ubuntu documentation : 0 = disabled : Disable the 'fn' key. Pressing 'fn'+'F8' will behave like you only press 'F8' 1 = fkeyslast : Function keys are used as last key. Pressing 'F8' key will act as a special key. Pressing 'fn'+'F8' will behave like a F8. 2 = fkeysfirst : Function keys are used as first key. Pressing 'F8' key will behave like a F8. Pressing 'fn'+'F8' will act as special key (play/pause). Note that this also works for me on Fedora. As several people have commented, this change is temporary. You can stick it in your login shell's RC file or into cron so that you don't have to worry about it. You can also change your driver settings to make this change permanent, like so: echo options hid_apple fnmode=2 | sudo tee -a /etc/modprobe.d/hid_apple.conf sudo update-initramfs -u -k all # reboot when convenient credits to https://askubuntu.com/a/7553
{ "source": [ "https://unix.stackexchange.com/questions/121395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63476/" ] }
121,410
I'm struggling with cpupower on Arch Linux. I want to set the governor to ondemand or even to conservative. First, if I do

$ sudo cpupower frequency-info --governors

I only get performance powersave. So I look for available modules like this

ls -1 /lib/modules/`uname -r`/kernel/drivers/cpufreq/

...and I get

acpi-cpufreq.ko.gz
amd_freq_sensitivity.ko.gz
cpufreq_conservative.ko.gz
cpufreq_powersave.ko.gz
cpufreq_stats.ko.gz
cpufreq_userspace.ko.gz
p4-clockmod.ko.gz
pcc-cpufreq.ko.gz
powernow-k8.ko.gz
speedstep-lib.ko.gz

So, first of all, no module for "ondemand" seems to be available. What am I missing? Then I try to enable at least conservative:

$ sudo modprobe cpufreq_conservative

then I check that the module is actually loaded

$ lsmod | grep cpufreq

and check if it is now available

$ sudo cpupower frequency-info --governors

but unfortunately I still get the same: performance powersave only, and if I try to enable conservative

$ sudo cpupower frequency-set -g conservative

it says that the module is not available. So basically I have two questions: What do I need to install in order to have the ondemand module? How can I enable it?
Assuming your governor is the intel_pstate (default for Intel Sandy Bridge and Ivy Bridge CPUs as of kernel 3.9). This issue is not specific to Arch, but all distros using the new Intel pstate driver for managing CPU frequency/power management. See Arch Linux CPU frequency scaling . Theodore Ts'o wrote his explanation on Google+ : intel_pstate can be disabled at boot-time with kernel arg intel_pstate=disable The problem with the ondemand governor is that it doesn't know the specific capabilities of the CPU Executing some tasks with higher frequency will consume less power than would a lower frequency taking more time, e.g., arithmetic stuff, but not true for all tasks, e.g., loading something from memory The intel_pstate driver knows the details of the how the CPU works and it does a better job than the generic ACPI solution intel_pstate offers only two governors, powersave and performance . Intel claims that the intel_pstate "powersave" is faster than the generic acpi governor with "performance" To change back to the ACPI driver, reboot and set the kernel arg intel_pstate=disable Then execute modprobe acpi-cpufreq and you should have the ondemand governor available. You can make the changes permanent by editing /etc/default/grub and adding GRUB_CMDLINE_LINUX_DEFAULT="intel_pstate=disable" And then updating grub.cfg ala grub-mkconfig -o /boot/grub/grub.cfg Follow the instructions for Arch kernel module loading and add the acpi-cpufreq module.
{ "source": [ "https://unix.stackexchange.com/questions/121410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57287/" ] }
121,450
I have the problem that I get too much information after the match for grep -RnisI --color=auto "pseudomonas" * I want to get only like 20 characters or 10 words after and before the match. What is the right tool to do such a thing?
cat file.txt | grep -o -P '.{0,20}string.{0,20}' This should do it for you Update: If you don't want to cat, you can just use the grep with the file as a parameter: grep -o -P '.{0,20}pseudomonas.{0,20}' FileName.html Also, The -P uses Perl Regex, which the man pages says is experimental, if you want to avoid that flag, you could just use egrep instead: grep -Eo '.{0,20}yourstring.{0,20}' yourtestfile.txt
{ "source": [ "https://unix.stackexchange.com/questions/121450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
121,523
I need to check with a script, whether eth0 is configured. If so, the script will do nothing. Otherwise it will start wlan0 . (I don't want both eth0 and wlan0 to be up at the same time). What would be the easiest way to check, whether eth0 is already up? I am using Debian Wheezy CLARIFICATION: I would like to check not only that the cable in eth0 is plugged in, but rather that the interface is configured (i.e. it has either static IP set, or it has received a DHCP IP). If cable is plugged in, but eth0 is not configured correctly, I want to start wlan0
You can do it many ways. Here an example: $ cat /sys/class/net/eth0/operstate up
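Since the clarification asks whether the interface is actually configured (has an address), not just whether the link is up, a rough sketch combining both checks — assuming the usual Debian ifup/ifdown tools for bringing up wlan0 — could be:

#!/bin/sh
if [ "$(cat /sys/class/net/eth0/operstate)" = "up" ] &&
   ip -4 addr show dev eth0 | grep -q 'inet '
then
    echo "eth0 is up and has an IPv4 address - leaving wlan0 alone"
else
    ifup wlan0
fi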
{ "source": [ "https://unix.stackexchange.com/questions/121523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36112/" ] }
121,527
When I exit the last terminal in a tmux session it will automatically close tmux and return me to the parent terminal. Is it possible to instead have tmux switch to another, already running, session instead of closing? I create a new named session for every task that I work on, such as TRxxx, email, scratch etc and when I am done with one of them I close all the terminals by running exit in every one. When you run exit in the last terminal of a session tmux will return you to the parent terminal but is it possible to get it to just switch to one of the other open sessions instead so I don't have to reattach? This question is similar to " Kill a tmux session and select another tmux session " but I want to close my terminals the nice way by using exit rather than killing the window.
I added this to my ~/.tmux.conf : set-option -g detach-on-destroy off When I destroy the last shell in a session, it switches to another active session. Once all sessions are closed, tmux exits.
{ "source": [ "https://unix.stackexchange.com/questions/121527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28838/" ] }
121,533
I'm new to bash scripting, and I'm trying to configure my video outputs so that my laptop display gets turned off when I connect an external monitor via VGA. That's the script I came up with, pretty straightforward: #!/bin/bash myvar="$(xrandr -q)" if [[ $myvar == *"VGA connected"* ]] then xrandr --output VGA --auto; xrandr --output LVDS --off; else xrandr --output LVDS --auto; fi All is working as it should except for the xrandr --output LVDS --off bit, as my laptop display simply adjusts its resolution to match the external monitor's one and stays on. Can't figure out the bug on this one. Any help is appreciated.
I added this to my ~/.tmux.conf : set-option -g detach-on-destroy off When I destroy the last shell in a session, it switches to another active session. Once all sessions are closed, tmux exits.
{ "source": [ "https://unix.stackexchange.com/questions/121533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63527/" ] }
121,570
I want to rename multiple files in the same directory using Bash scripting. Names of the files are as follows: file2602201409853.p file0901201437404.p file0901201438761.p file1003201410069.p file2602201410180.p I want to rename to the following format: file2503201409853.p file2503201437404.p file2503201438761.p file2503201410069.p file2503201410180.p I was reading about the rename command, and try to do it this way, but it does nothing, I think I have questions about the syntax. Then I read that you can make a loop using the mv command as follows: for file in cmpsms*2014*.p; do mv "$file" "${file/cmpsms*2014*.p/cmpsms25032014*.p}" done But I can not rename the files. What am I doing wrong?
You were right to consider rename first. The syntax is a little strange if you're not used to regexes but it's by far the quickest/shortest route once you know what you're doing: rename 's/\d{4}/2503/' file* That simply matches the first 4 numbers and swaps them for the ones you specified. And a test harness ( -vn means be verbose but don't do anything) using your filenames: $ rename 's/\d{4}/2503/' file* -vn file0901201437404.p renamed as file2503201437404.p file0901201438761.p renamed as file2503201438761.p file1003201410069.p renamed as file2503201410069.p file2602201409853.p renamed as file2503201409853.p file2602201410180.p renamed as file2503201410180.p
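If the rename utility isn't available, the mv loop from the question can be fixed with plain bash parameter expansion instead. This assumes every name really is "file" + an 8-digit date + the rest, exactly as in your listing:

for f in file*.p; do
    # characters 4-11 are the old date; keep everything from index 12 on
    mv -- "$f" "file25032014${f:12}"
done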
{ "source": [ "https://unix.stackexchange.com/questions/121570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4441/" ] }
121,702
What happens when I write cat /proc/cpuinfo . Is that a named pipe (or something else) to the OS which reads the CPU info on the fly and generate that text each time I call it?
Whenever you read a file under /proc , this invokes some code in the kernel which computes the text to read as the file content. The fact that the content is generated on the fly explains why almost all files have their time reported as now and their size reported as 0 — here you should read 0 as “don't know”. Unlike usual filesystems, the filesystem which is mounted on /proc , which is called procfs , doesn't load data from a disk or other storage media (like FAT, ext2, zfs, …) or over the network (like NFS, Samba, …) and doesn't call user code (unlike FUSE ). Procfs is present in most non-BSD unices. It started its life in AT&T's Bell Labs in UNIX 8th edition as a way to report information about processes (and ps is often a pretty-printer for information read through /proc ). Most procfs implementations have a file or directory called /proc/123 to report information about the process with PID 123. Linux extends the proc filesystem with many more entries that report the state of the system, including your example /proc/cpuinfo . In the past, Linux's /proc acquired various files that provide information about drivers, but this use is now deprecated in favor of /sys , and /proc now evolves slowly. Entries like /proc/bus and /proc/fs/ext4 remain where they are for backward compatibility, but newer similar interfaces are created under /sys . In this answer, I'll focus on Linux. Your first and second entry points for documentation about /proc on Linux are: the proc(5) man page ; The /proc filesystem in the kernel documentation . Your third entry point, when the documentation doesn't cover it, is reading the source . You can download the source on your machine, but this is a huge program, and LXR , the Linux cross-reference, is a big help. (There are many variants of LXR; the one running on lxr.linux.no is the nicest by far but unfortunately the site is often down.) A little knowledge of C is required, but you don't need to be a programmer to track down a mysterious value. The core handling of /proc entries is in the fs/proc directory. Any driver can register entries in /proc (though as indicated above this is now deprecated in favor of /sys ), so if you don't find what you're looking for in fs/proc , look everywhere else. Drivers call functions declared in include/linux/proc_fs.h . Kernel versions up to 3.9 provide the functions create_proc_entry and some wrappers (especially create_proc_read_entry ), and kernel versions 3.10 and above provide instead only proc_create and proc_create_data (and a few more). Taking /proc/cpuinfo as an example, a search for "cpuinfo" leads you to the call to proc_create("cpuinfo, …") in fs/proc/cpuinfo.c . You can see that the code is pretty much boilerplate code: since most files under /proc just dump some text data, there are helper functions to do that. There is merely a seq_operations structure, and the real meat is in the cpuinfo_op data structure, which is architecture-dependent, usually defined in arch/<architecture>/kernel/setup.c (or sometimes a different file). Taking x86 as an example, we're led to arch/x86/kernel/cpu/proc.c . There the main function is show_cpuinfo , which prints out the desired file content; the rest of the infrastructure is there to feed the data to the reading process at the speed it requests it. You can see the data being assembled on the fly from data in various variables in the kernel, including a few numbers computed on the fly such as the CPU frequency . A big part of /proc is the per-process information in /proc/<PID> . 
These entries are registered in fs/proc/base.c , in the tgid_base_stuff array ; some functions registered here are defined in other files. Let's look at a few examples of how these entries are generated: cmdline is generated by proc_pid_cmdline in the same file. It locates te data in the process and prints it out. clear_refs , unlike the entries we've seen so far, is writable but not readable. Therefore the proc_clear_refs_operations structures defines a clear_refs_write function but no read function. cwd is a symbolic link (a slightly magical one), declared by proc_cwd_link , which looks up the process's current directory and returns it as the link content. fd is a subdirectory. The operations on the directory itself are defined in the proc_fd_operations data structure (they're boilerplate except for the function that enumerates the entries, proc_readfd , which enumerates the process's open files) while operations on the entries are in `proc_fd_inode_operations . Another important area of /proc is /proc/sys , which is a direct interface to sysctl . Reading from an entry in this hierarchy returns the value of the corresponding sysctl value, and writing sets the sysctl value. The entry points for sysctl are in fs/proc/proc_sysctl.c . Sysctls have their own registration system with register_sysctl and friends.
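You can see the "generated on the fly" behaviour directly from the shell: the file claims to be empty and to have been modified just now, yet reading it produces data (the size and timestamp shown below are only illustrative):

$ ls -l /proc/cpuinfo
-r--r--r-- 1 root root 0 Mar 25 10:12 /proc/cpuinfo
$ wc -c < /proc/cpuinfo
3421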
{ "source": [ "https://unix.stackexchange.com/questions/121702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
121,718
I have a JSON output from which I need to extract a few parameters in Linux. This is the JSON output: { "OwnerId": "121456789127", "ReservationId": "r-48465168", "Groups": [], "Instances": [ { "Monitoring": { "State": "disabled" }, "PublicDnsName": null, "RootDeviceType": "ebs", "State": { "Code": 16, "Name": "running" }, "EbsOptimized": false, "LaunchTime": "2014-03-19T09:16:56.000Z", "PrivateIpAddress": "10.250.171.248", "ProductCodes": [ { "ProductCodeId": "aacglxeowvn5hy8sznltowyqe", "ProductCodeType": "marketplace" } ], "VpcId": "vpc-86bab0e4", "StateTransitionReason": null, "InstanceId": "i-1234576", "ImageId": "ami-b7f6c5de", "PrivateDnsName": "ip-10-120-134-248.ec2.internal", "KeyName": "Test_Virginia", "SecurityGroups": [ { "GroupName": "Test", "GroupId": "sg-12345b" } ], "ClientToken": "VYeFw1395220615808", "SubnetId": "subnet-12345314", "InstanceType": "t1.micro", "NetworkInterfaces": [ { "Status": "in-use", "SourceDestCheck": true, "VpcId": "vpc-123456e4", "Description": "Primary network interface", "NetworkInterfaceId": "eni-3619f31d", "PrivateIpAddresses": [ { "Primary": true, "PrivateIpAddress": "10.120.134.248" } ], "Attachment": { "Status": "attached", "DeviceIndex": 0, "DeleteOnTermination": true, "AttachmentId": "eni-attach-9210dee8", "AttachTime": "2014-03-19T09:16:56.000Z" }, "Groups": [ { "GroupName": "Test", "GroupId": "sg-123456cb" } ], "SubnetId": "subnet-31236514", "OwnerId": "109030037527", "PrivateIpAddress": "10.120.134.248" } ], "SourceDestCheck": true, "Placement": { "Tenancy": "default", "GroupName": null, "AvailabilityZone": "us-east-1c" }, "Hypervisor": "xen", "BlockDeviceMappings": [ { "DeviceName": "/dev/sda", "Ebs": { "Status": "attached", "DeleteOnTermination": false, "VolumeId": "vol-37ff097b", "AttachTime": "2014-03-19T09:17:00.000Z" } } ], "Architecture": "x86_64", "KernelId": "aki-88aa75e1", "RootDeviceName": "/dev/sda1", "VirtualizationType": "paravirtual", "Tags": [ { "Value": "Server for testing RDS feature in us-east-1c AZ", "Key": "Description" }, { "Value": "RDS_Machine (us-east-1c)", "Key": "Name" }, { "Value": "1234", "Key": "cost.centre", }, { "Value": "Jyoti Bhanot", "Key": "Owner", } ], "AmiLaunchIndex": 0 } ] } I want to write a file that contains heading like instance id, tag like name, cost center, owner. and below that certain values from the JSON output. The output here given is just an example. How can I do that using sed and awk ? Expected output : Instance id Name cost centre Owner i-1234576 RDS_Machine (us-east-1c) 1234 Jyoti
The availability of parsers in nearly every programming language is one of the advantages of JSON as a data-interchange format. Rather than trying to implement a JSON parser, you are likely better off using either a tool built for JSON parsing such as jq or a general purpose scripting language that has a JSON library. For example, using jq, you could pull out the ImageId from the first item of the Instances array as follows:

jq '.Instances[0].ImageId' test.json

Alternatively, to get the same information using Ruby's JSON library:

ruby -rjson -e 'j = JSON.parse(File.read("test.json")); puts j["Instances"][0]["ImageId"]'

I won't answer all of your revised questions and comments, but the following is hopefully enough to get you started. Suppose that you had a Ruby script that could read the JSON from STDIN and output the second line of your example output[0]. That script might look something like:

#!/usr/bin/env ruby
require 'json'

data = JSON.parse(ARGF.read)
instance_id = data["Instances"][0]["InstanceId"]
name = data["Instances"][0]["Tags"].find {|t| t["Key"] == "Name" }["Value"]
owner = data["Instances"][0]["Tags"].find {|t| t["Key"] == "Owner" }["Value"]
cost_center = data["Instances"][0]["SubnetId"].split("-")[1][0..3]

puts "#{instance_id}\t#{name}\t#{cost_center}\t#{owner}"

How could you use such a script to accomplish your whole goal? Well, suppose you already had the following: a command to list all your instances, and a command to get the JSON above for any instance on your list and output it to STDOUT. One way would be to use your shell to combine these tools:

echo -e "Instance id\tName\tcost centre\tOwner"
for instance in $(list-instances); do
    get-json-for-instance $instance | ./ugly-ruby-script.rb
done

Now, maybe you have a single command that gives you one JSON blob for all instances, with more items in that "Instances" array. Well, if that is the case, you'll just need to modify the script a bit to iterate through the array rather than simply using the first item. In the end, the way to solve this problem is the way to solve many problems in Unix: break it down into easier problems, find or write tools to solve the easier problems, and combine those tools with your shell or other operating system features.

[0] Note that I have no idea where you get cost-center from, so I just made it up.
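For the specific table you asked for, a single jq filter can also produce it directly. This is a sketch that assumes a reasonably recent jq (for @tsv) and that cost centre should come from the "cost.centre" tag rather than being derived from the subnet:

jq -r '.Instances[] |
    [ .InstanceId,
      (.Tags[] | select(.Key == "Name")        | .Value),
      (.Tags[] | select(.Key == "cost.centre") | .Value),
      (.Tags[] | select(.Key == "Owner")       | .Value)
    ] | @tsv' test.json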
{ "source": [ "https://unix.stackexchange.com/questions/121718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63687/" ] }
121,757
I have multiple hard disks which get connected to my server and I'm not sure which one is what in the view of sdXY. If I could see the serial numbers of my hard disks from terminal, I could easily identify them. Is there any way I can get the serial numbers from the terminal?
Another solution which does not require root privileges: udevadm info --query=all --name=/dev/sda | grep ID_SERIAL This is actually the library that lsblk , mentioned by don_crissti, leverages, but my version of lsblk does not include the option for printing the serial number. See the man page of udevadm for more.
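A few other options, depending on what is installed: newer util-linux versions do expose a serial column in lsblk, and smartmontools reads the serial straight from the drive. The exact output format varies by version and hardware.

$ lsblk -o NAME,SERIAL
$ sudo smartctl -i /dev/sda | grep -i serial
$ ls -l /dev/disk/by-id/    # serial numbers are embedded in the symlink names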
{ "source": [ "https://unix.stackexchange.com/questions/121757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19072/" ] }
121,761
I am struggling with understanding if and how the following is possible. Say that I have a machine T (target) which I want to access from remote (ideally via ssh ). T is behind a router/firewall R and I cannot forward port (e.g.) 22 of R to port 22 of T . In a word, no direct ssh access to T is possible. Now say that I have a machine A on which I have full control. I can ssh from T to A , i.e. T: ssh user@A succeeds. Q1: can I use this to access shell of T from A ? I.e., can I use the connection created from T to A , to use T from A ? T ---> ssh ----> A # this is possible T <--- ? shell ? <---- A # is this possible? Q2: If Q1 is possible: Let's say that I have a third machine L (e.g. my laptop), and I aim at having access to the shell of T from L . Can I ssh-tunnel A access to L ? T ----> ssh ----> A <---- ssh < ---- L T <------- ?? %&£€ ?? <------- L # is this possible? any help appreciated.
Another solution which does not require root privileges: udevadm info --query=all --name=/dev/sda | grep ID_SERIAL This is actually the library that lsblk , mentioned by don_crissti, leverages, but my version of lsblk does not include the option for printing the serial number. See the man page of udevadm for more.
{ "source": [ "https://unix.stackexchange.com/questions/121761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63711/" ] }
121,794
I have a 1 TB file. I would like to read from byte 12345678901 to byte 19876543212 and put that on standard output on a machine with 100 MB RAM. I can easily write a perl script that does this. sysread delivers 700 MB/s (which is fine), but syswrite only delivers 30 MB/s. I would like something more efficient, preferably something that is installed every Unix system and that can deliver in the order of 1 GB/s. My first idea is: dd if=1tb skip=12345678901 bs=1 count=$((19876543212-12345678901)) But that is not efficient. Edit: I have no idea how I measured syswrite wrong. This delivers 3.5 GB/s: perl -e 'sysseek(STDIN,shift,0) || die; $left = shift; \ while($read = sysread(STDIN,$buf, ($left > 32768 ? 32768 : $left))){ \ $left -= $read; syswrite(STDOUT,$buf); }' 12345678901 $((19876543212-12345678901)) < bigfile and avoids the yes | dd bs=1024k count=10 | wc nightmare.
This is slow because of the small block size. Using a recent GNU dd ( coreutils v8.16 + ), the simplest way is to use the skip_bytes and count_bytes options: in_file=1tb start=12345678901 end=19876543212 block_size=4096 copy_size=$(( $end - $start )) dd if="$in_file" iflag=skip_bytes,count_bytes,fullblock bs="$block_size" \ skip="$start" count="$copy_size" Update fullblock option added above as per @Gilles answer . At first I thought that it might be implied by count_bytes , but this is not the case. The issues mentioned are a potential problem below, if dd s read/write calls are interrupted for any reason then data will be lost. This is not likely in most cases (odds are reduced somewhat since we are reading from a file and not a pipe). Using a dd without the skip_bytes and count_bytes options is more difficult: in_file=1tb start=12345678901 end=19876543212 block_size=4096 copy_full_size=$(( $end - $start )) copy1_size=$(( $block_size - ($start % $block_size) )) copy2_start=$(( $start + $copy1_size )) copy2_skip=$(( $copy2_start / $block_size )) copy2_blocks=$(( ($end - $copy2_start) / $block_size )) copy3_start=$(( ($copy2_skip + $copy2_blocks) * $block_size )) copy3_size=$(( $end - $copy3_start )) { dd if="$in_file" bs=1 skip="$start" count="$copy1_size" dd if="$in_file" bs="$block_size" skip="$copy2_skip" count="$copy2_blocks" dd if="$in_file" bs=1 skip="$copy3_start" count="$copy3_size" } You could also experiment with different block sizes, but the gains won't be very dramatic. See - Is there a way to determine the optimal value for the bs parameter to dd?
{ "source": [ "https://unix.stackexchange.com/questions/121794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
121,802
To enable an option, we can use setopt . e.g.: setopt extended_glob How can we check if an option is currently enabled ?
In zsh , you can use setopt to show options enabled and unsetopt to show which are not enabled: $ setopt autocd histignorealldups interactive monitor sharehistory shinstdin zle $ unsetopt noaliases allexport noalwayslastprompt alwaystoend noappendhistory autocd autocontinue noautolist noautomenu autonamedirs ..... In bash , you can use shopt -p .
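To test a single option in a script rather than reading through the full lists, zsh also supports the [[ -o option ]] conditional:

if [[ -o extended_glob ]]; then
    echo "extended_glob is enabled"
else
    echo "extended_glob is not enabled"
fi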
{ "source": [ "https://unix.stackexchange.com/questions/121802", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32459/" ] }
121,829
There is literally nothing on google that I can find that will help me answer this question. I presume it is passing some other parameter to ls -i ?
Yes, the argument -i will print the inode number of each file or directory the ls command is listing. As you want to print the inode number of a directory, I would suggest using the argument -d to only list directories. For printing the inode number the directory /path/to/dir, use the following command line: ls -id /path/to/dir From man ls : -d, --directory list directory entries instead of contents, and do not derefer‐ ence symbolic links -i, --inode print the index number of each file
{ "source": [ "https://unix.stackexchange.com/questions/121829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63748/" ] }
121,865
I create a 1TB file with random data with dd if=/dev/urandom of=file bs=1M count=1000000 . Now I check with kill -SIGUSR1 <PID> the progress and get the following: 691581+0 Datensätze ein 691580+0 Datensätze aus 725174190080 Bytes (725 GB) kopiert, 86256,9 s, 8,4 MB/s 800950+1 Datensätze ein 800950+0 Datensätze aus 839856947200 Bytes (840 GB) kopiert, 99429,5 s, 8,4 MB/s dd: warning: partial read (809620 bytes); suggest iflag=fullblock 803432+1 Datensätze ein 803431+1 Datensätze aus 842459273876 Bytes (842 GB) kopiert, 99791,3 s, 8,4 MB/s I can't interpret the warning. What does it say? Is my file really random after the warning or is there a problem? What does +0 or +1 in 800950+1 Datensätze ein and 800950+0 Datensätze aus mean? After the warning it is +1. Is it a errorcount?
Summary: dd is a cranky tool which is hard to use correctly. Don't use it, despite the numerous tutorials that tell you so. dd has a “unix street cred” vibe attached to it — but if you truly understand what you're doing, you'll know that you shouldn't be touching it with a 10-foot pole. dd makes a single call to the read system call per block (defined by the value of bs ). There is no guarantee that the read system call returns as much data as the specified buffer size. This tends to work for regular files and block devices, but not for pipes and some character devices. See When is dd suitable for copying data? (or, when are read() and write() partial) for more information. If the read system call returns less than one full block, then dd transfers a partial block. It still copies the specified number of blocks, so the total amount of transfered bytes is less than requested. The warning about a “partial read” tells you exactly this: one of the reads was partial, so dd transfered an incomplete block. In the block counts, +1 means that one block was read partially; since the output count is +0 , all blocks were written out as read. This doesn't affect the randomness of the data: all the bytes that dd writes out are bytes that it read from /dev/urandom . But you got fewer bytes than expected. Linux's /dev/urandom accommodates arbitrary large requests (source: extract_entropy_user in drivers/char/random.c ), so dd is normally safe when reading from it. However, reading large amounts of data takes time. If the process receives a signal, the read system call returns before filling its output buffer. This is normal behavior, and applications are supposed to call read in a loop; dd doesn't do this, for historical reasons ( dd 's origins are murky, but it seems to have started out as a tool to access tapes, which have peculiar requirements, and was never adapted to be a general-purpose tool). When you check the progress, this sends the dd process a signal which interrupts the read. You have a choice between knowing how many bytes dd will copy in total (make sure not to interrupt it — no progress check, no suspension), or knowing how many bytes dd has copied so far, in which case you can't know how many more bytes it will copy. The version of dd in GNU coreutils (as found on non-embedded Linux and on Cygwin) has a flag fullblock which tells dd to call read in a loop (and ditto for write ) and thus always transfer full blocks. The error message suggests that you use it; you should always use it (in both input and output flags), except in very special circumstances (mostly when accessing tapes) — if you use dd at all, that is: there are usually better solutions (see below). dd if=/dev/urandom iflag=fullblock of=file bs=1M count=1000000 Another possible way to be sure of what dd will do is to pass a block size of 1. Then you can tell how many bytes were copied from the block count, though I'm not sure what will happen if a read is interrupted before reading the first byte (which is not very likely in practice but can happen). However, even if it works, this is very slow. The general advice on using dd is do not use dd . Although dd is often advertised as a low-level command to access devices, it is in fact no such thing: all the magic happens in the device file (the /dev/… ) part, dd is just an ordinary tool with a high potential for misuse resulting in data loss. In most cases, there is a simpler and safer way to do what you want, at least on Linux. 
For example, to read a certain number of bytes at the beginning of a file, just call head : head -c 1000000m </dev/urandom >file I made a quick benchmark on my machine and did not observe any performance difference between dd with a large block size and head . If you need to skip some bytes at the beginning, pipe tail into head : dd if=input of=output count=C bs=B seek=S <input tail -c +$((S*B+1)) | head -c $((C*B)) >output If you want to see progress, call lsof to see the file offset. This only works on a regular file (the output file on your example), not on a character device. lsof -a -p 1234 -d 1 cat /proc/1234/fdinfo/1 You can call pv to get a progress report (better than dd 's), at the expense of an additional item in the pipeline (performance-wise, it's barely perceptible).
{ "source": [ "https://unix.stackexchange.com/questions/121865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63760/" ] }
121,874
I can't find the dig command on my new CentOS installation. I've tried dnf install dig but it say that it cannot find the package. How do I install dig on CentOS?
The DIG tool is part of the BIND Utilities so you need to install them. To install the BIND Utilities, type the following: $ dnf install bind-utils
{ "source": [ "https://unix.stackexchange.com/questions/121874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63767/" ] }
121,880
I need to setup SSHFP records in the DNS for my host. I have done some searching but I haven't found any good example. What are SSHFP records? What does SSHFP records look like? How do I create SSHFP records?
What are SSHFP records? SSHFP records are DNS records that contain fingerprints for public keys used for SSH. They're mostly used with DNSSEC enabled domains. When an SSH client connects to a server it checks the corresponding SSHFP record. If the records fingerprint matches the servers, the server is legit and it's safe to connect. What does SSHFP records look like? SSHFP records consist of three things: Public key algorithm Fingerprint type Fingerprint (in hex) Public key algorithm There are five different algorithms defined in SSHFP as of 2021 . Each algorithm is represented by an integer. The algorithms are : 1 - RSA 2 - DSA 3 - ECDSA 4 - Ed25519 6 - Ed448 Fingerprint type Two fingerprint types are defined in SSHFP as of 2012 . Each fingerprint type is represented by an integer. These are : 1 - SHA-1 2 - SHA-256 How do I generate SSHFP records? You can use ssh-keygen locally to generate the records using the -r parameter, followed by the hostname (which does not affect the fingerprints so you can specify whatever you like instead). You can use ssh-keyscan to generate records for a remote server using the -D parameter, followed by the hostname. Example Using ssh-keygen and CentOS: [root@localhost ~]# ssh-keygen -r my.domain.com my.domain.com IN SSHFP 1 1 450c7d19d5da9a3a5b7c19992d1fbde15d8dad34 my.domain.com IN SSHFP 2 1 72d30d211ce8c464de2811e534de23b9be9b4dc4 Note Sometimes ssh-keygen will ask for the location of the public certificate. If it asks, you will have to run ssh-keygen multiple times and every time specify a different certificate to make sure that you generate all necessary SSHFP records. Your public keys are usually located in /etc/ssh .
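To make the OpenSSH client actually consult those records when connecting, it needs VerifyHostKeyDNS enabled, either per invocation or in ~/.ssh/config (my.domain.com is the example host from above):

$ ssh -o VerifyHostKeyDNS=yes my.domain.com

# or permanently, in ~/.ssh/config:
Host my.domain.com
    VerifyHostKeyDNS yes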
{ "source": [ "https://unix.stackexchange.com/questions/121880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63767/" ] }
121,967
I have a bash shell script in which I pipe some data through about 5 or 6 different programs then the final results into a tab delimited file. I then do the same again for a separate similar dataset and output to a second file. Then both files are input into another program for comparative analysis. e.g. to simplify Data1 | this | that |theother | grep |sed | awk |whatever > Data1Res.csv Data2 | this | that |theother | grep |sed | awk |whatever > Data2Res.csv AnalysisProg -i Data1res.csv Data2res.csv My question is : how can I make step1 and step2 run at the same time (e.g. using &) but only launch step3 (AnalysisProg) when both are complete? thx ps AnalysisProg will not work on a stream or fifo.
Use wait . For example: Data1 ... > Data1Res.csv & Data2 ... > Data2Res.csv & wait AnalysisProg will: run the Data1 and Data2 pipes as background jobs wait for them both to finish run AnalysisProg. See, e.g., this question .
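If you also want to notice when one of the background pipelines fails before feeding its output to AnalysisProg, you can record the PIDs and wait on them individually (note that a pipeline's exit status is that of its last command):

Data1 ... > Data1Res.csv & pid1=$!
Data2 ... > Data2Res.csv & pid2=$!

wait "$pid1" || { echo "Data1 pipeline failed" >&2; exit 1; }
wait "$pid2" || { echo "Data2 pipeline failed" >&2; exit 1; }

AnalysisProg -i Data1Res.csv Data2Res.csv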
{ "source": [ "https://unix.stackexchange.com/questions/121967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53432/" ] }
122,238
I'm trying to upgrade to a newer version (that has a bug fix) than my current 1.6. I am on Ubuntu and recently upgraded to Ubuntu 13.04. Ideally I want to use tmux version 1.8 or even 1.9. I've downloaded newer versions but can't get them working. I downloaded 1.9a but when I try and run it, it just hangs. I tried this download: http://sourceforge.net/p/tmux/tmux-code/ci/master/tree/README#l26 and did the $ sh autogen.sh $ ./configure && make but I get $ ./tmux $ protocol version mismatch (client 8, server 6) I tried to download and use a 1.8.4 version but the download didn't seem to have files I could use.
Pretty awesome hack, if you need your tmux working and not want to lose all your sessions: $ tmux attach protocol version mismatch (client 7, server 6) $ pgrep tmux 3429 $ /proc/3429/exe attach original post on Google Plus - https://plus.google.com/110139418387705691470/posts/BebrBSXMkBp
{ "source": [ "https://unix.stackexchange.com/questions/122238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
122,305
Is there a simple option on extundelete how I can try to undelete a file called /var/tmp/test.iso that I just deleted? (it is not so important that I would start to remount the drive read-only or such things. I can also just re-download that file again) I am looking for a simple command with that I could try if I manage to fast-recover it. I know, it is possible with remounting the drive in read-only : (see How do I simply recover the only file on an empty disk just deleted? ) But is this also possible somehow on the still mounted disk? For info: if the deleted file is on an NTFS partition it is easy with ntfsundelete e.g. if you know the size was about 250MB use sudo ntfsundelete -S 240m-260m -p 100 /dev/hda2 and then undelete the file by inode e.g. with sudo ntfsundelete /dev/hda2 --undelete --inodes 8270
Looking at the usage guide on extundelete, it seems as though you're limited to a few ways of undeleting files. Restoring all extundelete is designed to undelete files from an unmounted partition to a separate (mounted) partition. extundelete will restore any files it finds to a subdirectory of the current directory named “RECOVERED_FILES”. To run the program, type “extundelete --help” to see various options available to you. Typical usage to restore all deleted files from a partition looks like this: $ extundelete /dev/sda4 --restore-all Restoring a single file In addition to this method highlighted in the command line usage: --restore-file path/to/deleted/file Attempts to restore the file which was deleted at the given filename, called as "--restore-file dirname/filename". So you should be able to accomplish what you want by doing this: $ extundelete --restore-file /var/tmp/test.iso /dev/sda4 NOTE: In both cases you need to know the device (here /dev/sda4) to perform this command. You'll have to remount the filesystem as readonly. This is one of the conditions of using extundelete and there isn't any way around this.
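A hedged sketch of that remount step; the device and mount point below are placeholders, so adjust them to wherever /var/tmp lives on your system, and note that a remount can fail as "busy" if something still has files open for writing there:
$ sudo mount -o remount,ro /var        # by mount point
$ sudo mount -o remount,ro /dev/sda4   # or by device
Run extundelete while it is read-only, then put it back:
$ sudo mount -o remount,rw /var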
{ "source": [ "https://unix.stackexchange.com/questions/122305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
122,333
For example, I have a file myold_file . Then I use ln to create a hard link called mylink : ln myold_file mylink Then, even by using ls -a , I cannot tell which is the old one. Is there any way to tell?
You can't, because they are literally the same file, only reached by different paths. The first one has no special status.
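A quick way to see this for yourself: both names share a single inode and the link count rises to 2 (illustrative output; the inode number, owner and dates will differ on your system):
$ ls -li myold_file mylink
1234567 -rw-r--r-- 2 user user 42 Apr  7 10:00 mylink
1234567 -rw-r--r-- 2 user user 42 Apr  7 10:00 myold_file
$ find . -xdev -samefile myold_file    # GNU find: list every name pointing at that inode
./mylink
./myold_file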
{ "source": [ "https://unix.stackexchange.com/questions/122333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54709/" ] }
122,343
What does $# mean in shell? I have code such as if [ $# -eq 0 ] then I want to understand what $# means, but Google search is very bad for searching these kinds of things.
You can always check the man page of your shell. man bash says: Special Parameters # Expands to the number of positional parameters in decimal. Therefore a shell script can check how many parameters are given with code like this: if [ "$#" -eq 0 ]; then echo "you did not pass any parameter" fi
{ "source": [ "https://unix.stackexchange.com/questions/122343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54709/" ] }
122,345
I have an issue while updating an SVN repository from my server. See the error below: SVN: OPTIONS of 'http://ipaddress/svn/folder': could not connect to server (http://ipaddress) However, I can ping the SVN server and connect to it via SSH.
{ "source": [ "https://unix.stackexchange.com/questions/122345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63175/" ] }
122,424
On a Linux desktop system, I want to execute a command when the user logs in. After reading some other posts, I tried to put the command in ~/.bashrc , but without success. Moreover, the system uses a graphical interface for user login, so the command should not depend on a shell being started. I also tried appending the command to one of the scripts contained in /etc/profile.d , with no results. Is there another way to do this? Is there any file that the system reads after login?
There is no guarantee that the graphical display manager will read the classic startup files. This changes between distributions and between display managers. One of the following should work though. Use your desktop environment's native method to set startup applications. The details will depend on the DE you are using, but you can create a script that runs your command and add it to the list of startup applications. For example, on my system (Cinnamon), you can do this through "System Settings" => "Startup Applications". Use ~/.xprofile , this is sourced by at least the GDM, LDM, LightDM and LXDM login managers. If neither of the above work, try adding the command to ~/.profile : This is the main initialization file for login shells and is also read by some graphical shells on login. As @derobert pointed out in the comments, you can also use the free desktop standards : The Autostart Directories are $XDG_CONFIG_DIRS/autostart as defined in accordance with the "Referencing this specification" section in the "desktop base directory specification". If the same filename is located under multiple Autostart Directories only the file under the most important directory should be used. Example: If $XDG_CONFIG_HOME is not set the Autostart Directory in the user's home directory is ~/.config/autostart/ Example: If $XDG_CONFIG_DIRS is not set the system wide Autostart Directory is /etc/xdg/autostart/ Example: If $XDG_CONFIG_HOME and $XDG_CONFIG_DIRS are not set and the two files /etc/xdg/autostart/foo.desktop and ~/.config/autostart/foo.desktop exist then only the file ~/.config/autostart/foo.desktop will be used because ~/.config/autostart/ is more important than /etc/xdg/autostart/ The ~/.bashrc is completely irrelevant here, it is only read by interactive non-login shells, so is ignored on login shells, graphical or not.
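As a hedged illustration of the freedesktop autostart mechanism described above (the file name and Exec path are placeholders for whatever you actually want to run):
$ mkdir -p ~/.config/autostart
$ cat > ~/.config/autostart/mycommand.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My login command
Exec=/home/youruser/bin/mycommand.sh
EOF
Most desktop environments will execute the Exec line once the graphical session has started.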
{ "source": [ "https://unix.stackexchange.com/questions/122424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
122,597
Sort the files in a directory recursively based on last-modified date. I have modified a lot of files in my directory and want to know which ones they are by sorting them by last-modified date, and I want some things excluded from the listing: the tree is under svn, so it contains a lot of .svn files that I don't want to show in the sort.
find -printf "%TY-%Tm-%Td %TT %p\n" | sort -n will give you something like 2014-03-31 04:10:54.8596422640 ./foo 2014-04-01 01:02:11.9635521720 ./bar
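The question also asks to keep the .svn bookkeeping out of the listing; a hedged variation that prunes those directories before printing:
find . -name .svn -prune -o -printf "%TY-%Tm-%Td %TT %p\n" | sort -n
Because -printf is an explicit action on the right-hand side of -o, the pruned .svn directories and everything beneath them are skipped entirely.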
{ "source": [ "https://unix.stackexchange.com/questions/122597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34348/" ] }
122,605
I have a folder with a number of files in it ABC.* (there are roughly 100 such files). I want to duplicate them all to new files with names starting with DEF.* So, I want ABC.Page1 ABC.Page2 ABC.Topic12 ...etc copied to DEF.Page1 DEF.Page2 DEF.Topic12 ...etc What is the simplest way to do this with a batch command (in BASH or similar)? I am thinking something involving sed or awk or xargs, but I'm having difficulty figuring out the syntax. I could write a Python script, but I'm thinking there is probably a command line solution that is not too complicated.
How about something like this in bash: for file in ABC.*; do cp "$file" "${file/ABC/DEF}";done you can test it by putting echo in front of the cp command: for file in ABC.*; do echo cp "$file" "${file/ABC/DEF}";done
{ "source": [ "https://unix.stackexchange.com/questions/122605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38705/" ] }
122,616
I have configured sudo to run without a password, but when I try to ssh 'sudo Foo' , I still get the error message sudo: sorry, you must have a tty to run sudo . Why does this happen and how can I work around it?
That's probably because your /etc/sudoers file (or any file it includes) has: Defaults requiretty ...which makes sudo require a TTY. Red Hat systems (RHEL, Fedora...) have been known to require a TTY in default sudoers file. That provides no real security benefit and can be safely removed. Red Hat have acknowledged the problem and it will be removed in future releases. If changing the configuration of the server is not an option, as a work-around for that mis-configuration, you could use the -t or -tt options to ssh which spawns a pseudo-terminal on the remote side, but beware that it has a number of side effects. -tt is meant for interactive use. It puts the local terminal in raw mode so that you interact with the remote terminal. That means that if ssh I/O is not from/to a terminal, that will have side effects. For instance, all the input will be echoed back, special terminal characters ( ^? , ^C , ^U ) will cause special processing; on output, LF s will be converted to CRLF s... (see this answer to Why is this binary file being changed? for more details. To minimise the impact, you could invoke it as: ssh -tt host 'stty raw -echo; sudo ...' < <(cat) The < <(cat) will avoid the setting of the local terminal (if any) in raw mode. And we're using stty raw -echo to set the line discipline of the remote terminal as pass through (effectively so it behaves like the pipe that would be used instead of a pseudo-terminal without -tt , though that only applies after that command is run, so you need to delay sending something for input until that happens). Note that since the output of the remote command will go to a terminal, that will still affect its buffering (which will be line-based for many applications) and bandwidth efficiency since TCP_NODELAY is on. Also with -tt , ssh sets the IPQoS to lowdelay as opposed to throughput . You could work around both with: ssh -o IPQoS=throughput -tt host 'stty raw -echo; sudo cmd | cat' < <(cat) Also, note that it means the remote command cannot detect end-of-file on its stdin and the stdout and stderr of the remote command are merged into a single stream. So, not so good a work around after all. If you've a got a way to spawn a pseudo-terminal on the remote host (like with expect , zsh , socat , perl 's IO::Pty ...), then it would be better to use that to create the pseudo-terminal to attach sudo to (but not for I/O), and use ssh without -t . For example, with expect : ssh host 'expect -c "spawn -noecho sh -c { exec sudo cmd >&4 2>&5 <&6 4>&- 5>&- 6<&-} exit [lindex [wait] 3]" 4>&1 5>&2 6<&0' Or with script (here assuming the implementation from util-linux ): ssh host 'SHELL=/bin/sh script -qec " sudo cmd <&3 >&4 2>&5 3<&- 4>&- 5>&- " /dev/null 3<&0 4>&1 5>&2' (assuming (for both) that the login shell of the remote user is Bourne-like).
{ "source": [ "https://unix.stackexchange.com/questions/122616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34334/" ] }
122,681
I am trying to write a script that installs packages, but if it fails at any point later in the script rolls back whatever it installed. Of course if the user has already previously installed a package I don't want to uninstall it out from under them. How can my script tell whether a package has been previously installed via yum?
I found the following on a semi-related StackOverflow question ; the answer I needed didn't actually quite answer the question there (and was not selected as the correct answer) so I figured I'd post it here for others to find easier. yum list installed PACKAGE_NAME This command returns some human-readable output, but more importantly returns an exit status code; 0 indicates the package is installed, 1 indicates the package is not installed (does not check whether the package is valid, so yum list installed herpderp-beepbopboop will return a "1" just as yum list installed traceroute will if you don't have traceroute installed). You can subsequently check "$?" for this exit code. Since the output is somewhat counter-intuitive, I used @Chris Downs' "condensed" version below in a wrapper function to make the output more "logical" (i.e. 1=installed 0=not installed): function isinstalled { if yum list installed "$@" >/dev/null 2>&1; then true else false fi } usage would be if isinstalled $package; then echo "installed"; else echo "not installed"; fi EDIT: Replaced return statements with calls to true and false which help make the function more readable/intuitive, while returning the values bash expects (i.e. 0 for true, 1 for false). If you're just checking for one package in your script, you may just be better off testing yum list installed directly, but (IMHO) the function makes it easier to understand what's going on, and its syntax is much easier to remember than yum with all the redirects to supress its output.
{ "source": [ "https://unix.stackexchange.com/questions/122681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62503/" ] }
122,795
When editing an authorised_keys file in Nano, I want to wrap long lines so that I can see the end of the lines (i.e tell whose key it is). Essentially I want it to look like the output of cat authorised_keys So, I hit Esc + L which is the meta key for enabling long line wrapping on my platform and I see the message to say long line wrapping has been enabled but the lines do not wrap as I expect. I'm using Terminal on OSX 10.8.5
To see the word wrapping style you described, use nano's "soft wrapping": Esc + $ . The Esc + L command you (and everyone) tried does "hard wrapping." Note on keystroke notation - if you are new to Linux, the notation Esc + $ means press and release Esc and then press $ . The full key press sequence then is Esc , Shift+4 . (It does not mean hold down escape while pressing $ .) Source: https://www.nano-editor.org/dist/v2.9/nano.html (search for --softwrap) Note on softwrap and formatting mistakes - If you are new to nano, be a little careful of softwrap. If you are editing a configuration file or something else that is sensitive to newlines or indents, formatting mistakes can be made. Until you get comfortable with softwrap’s behaviors, I suggest doing a quick check with softwrap off (do the key sequence again) before saving. Note on the goodness provided by others in their answers below - because different operating systems and different versions of nano do things a little differently: If you like softwrap on all of the time, set it in your .nanorc, as described in x0a's answer below, as it is a bit more thorough than Prashant's. If you have a Raspberry Pi, note chainsawmascara's answer about needing an extra keystroke for softwrap to go into effect. If you have a Mac, like lodeOfCode's answer below, you can always update nano and thus bask in the warm glow of softwrap!
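A hedged one-liner for the .nanorc route referred to above, assuming your nano build supports soft wrapping:
$ echo "set softwrap" >> ~/.nanorc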
{ "source": [ "https://unix.stackexchange.com/questions/122795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50131/" ] }
122,801
I've decided to try tmux: have been reading the docs and googling around, trying to find a way to have two users sharing a session, each with a different cursor. However, giving 777 permissions to the socket, or creating a group, chgrp ing the socket and adding both users to it, seems to let that same socket be used to share a session with only one cursor: both users can write, but always in the same cursor position. Right now both users are in the same home server over ssh, and the idea is to be able to have: A terminal in a, let's say, left pane, where I can type commands Another terminal in a right pane, where I can see another user typing commands in his own terminal The same thing for the other user What I'm doing at the moment is using two sessions (not shared) and a script -f and tail -f combination that kinda works for reading each other's key strokes, but I reckon there is probably some way of doing this using tmux sharing capabilities. Is there a way to get this idea working with write support in each other's terminal? What is the better way to do this?
This question is a bit old, but I was looking for something similar, and found it here . It creates a second session that shares windows with the first, but has its own view and cursor. tmux new-session -s alice tmux new-session -t alice -s bob If the sharing is happening between two user accounts, you may still have to mess with permissions (which it sounds like you had working already). Edit: As suggested, a quote from another answer : First, add a group for tmux users export TMUX_GROUP=tmux addgroup $TMUX_GROUP Create a directory with the group set to $TMUX_GROUP and use the setgid bit so that files created within the directory automatically have the group set to $TMUX_GROUP. mkdir /var/tmux chgrp $TMUX_GROUP /var/tmux chmod g+ws /var/tmux Next make sure the users that want to share the session are members of $TMUX_GROUP usermod -aG $TMUX_GROUP user1 usermod -aG $TMUX_GROUP user2
{ "source": [ "https://unix.stackexchange.com/questions/122801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48795/" ] }
122,845
I have been looking at a few scripts other people wrote (specifically Red Hat), and a lot of their variables are assigned using the following notation VARIABLE1="${VARIABLE1:-some_val}" or some expand other variables VARIABLE2="${VARIABLE2:-`echo $VARIABLE1`}" What is the point of using this notation instead of just declaring the values directly (e.g., VARIABLE1=some_val )? Are there benefits to this notation or possible errors that would be prevented? Does the :- have specific meaning in this context?
This technique allows for a variable to be assigned a value if another variable is either empty or is undefined. NOTE: This "other variable" can be the same or another variable. excerpt ${parameter:-word} If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted. NOTE: This form also works, ${parameter-word} . If you'd like to see a full list of all forms of parameter expansion available within Bash then I highly suggest you take a look at this topic in the Bash Hacker's wiki titled: " Parameter expansion ". Examples variable doesn't exist $ echo "$VAR1" $ VAR1="${VAR1:-default value}" $ echo "$VAR1" default value variable exists $ VAR1="has value" $ echo "$VAR1" has value $ VAR1="${VAR1:-default value}" $ echo "$VAR1" has value The same thing can be done by evaluating other variables, or running commands within the default value portion of the notation. $ VAR2="has another value" $ echo "$VAR2" has another value $ echo "$VAR1" $ $ VAR1="${VAR1:-$VAR2}" $ echo "$VAR1" has another value More Examples You can also use a slightly different notation where it's just VARX=${VARX-<def. value>} . $ echo "${VAR1-0}" has another value $ echo "${VAR2-0}" has another value $ echo "${VAR3-0}" 0 In the above $VAR1 & $VAR2 were already defined with the string "has another value" but $VAR3 was undefined, so the default value was used instead, 0 . Another Example $ VARX="${VAR3-0}" $ echo "$VARX" 0 Checking and assigning using := notation Lastly I'll mention the handy operator, := . This will do a check and assign a value if the variable under test is empty or undefined. Example Notice that $VAR1 is now set. The operator := did the test and the assignment in a single operation. $ unset VAR1 $ echo "$VAR1" $ echo "${VAR1:=default}" default $ echo "$VAR1" default However if the value is set prior, then it's left alone. $ VAR1="some value" $ echo "${VAR1:=default}" some value $ echo "$VAR1" some value Handy Dandy Reference Table This makes the difference between assignment and substitution explicit: Assignment sets a value for the variable whereas substitution doesn't. References Parameter Expansions - Bash Hackers Wiki 10.2. Parameter Substitution Bash Parameter Expansions
{ "source": [ "https://unix.stackexchange.com/questions/122845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31762/" ] }
122,854
How can I use find to generate a list of directories which contain the most numbers of files. I'd like the list to be from highest to lowest. I'd only like the listing to go 1 level deep, and I'd typically run this command from the top of my filesystem, i.e. / .
Using GNU tools: find / -xdev -type d -print0 | while IFS= read -d '' dir; do echo "$(find "$dir" -maxdepth 1 -print0 | grep -zc .) $dir" done | sort -rn | head -50 This uses two find commands. The first finds directories and pipes them to a while loop runs the next find for each directory. The second lists all the child files/directories in the first level while grep counts them. The grep allows -print0 to be used with the second find since wc does not have a -z equivalent. This stops filenames with a newline from being counted twice (although using wc and no -print0 wouldn't make much difference). The result of the second find is placed in the argument to echo so it and the directory name can easily be placed on the same line (the $(..) construct automatically trims the newline at the end of grep ). Lines are then sorted by number and the 50 largest numbers shown with head . Note that this will also include the top level directories of mount points. A simple way to get around this is to use a bind mount and then use the directory of the mount. To do this: sudo mount --bind / /mnt A more portable solution uses a different shell instance for each directory (also answered here ): find / -xdev -type d -exec sh -c ' echo "$(find "$0" | grep "^$0/[^/]*$" | wc -l) $0"' {} \; | sort -rn | head -50 Sample output: 9225 /var/lib/dpkg/info 6322 /usr/share/qt4/doc/html 4927 /usr/share/man/man3 2301 /usr/share/man/man1 2097 /usr/share/doc 2097 /usr/bin 1863 /usr/lib/x86_64-linux-gnu 1679 /var/cache/apt/archives 1628 /usr/share/qt4/doc/src/images 1614 /usr/share/qt4/doc/html/images 1308 /usr/share/scilab/modules/overloading/macros 1083 /usr/src/linux-headers-3.13-1-common/include/linux 1071 /usr/src/linux-headers-3.13-1-amd64/include/config 847 /usr/include/qt4/QtGui 774 /usr/include/qt4/Qt 709 /usr/share/man/man8 616 /usr/lib 611 /usr/share/icons/oxygen/32x32/actions 608 /usr/share/icons/oxygen/22x22/actions 598 /usr/share/icons/oxygen/16x16/actions 579 /usr/share/bash-completion/completions 574 /usr/share/icons/oxygen/48x48/actions 570 /usr/share/vim/vim74/syntax 546 /usr/share/scilab/modules/m2sci/macros/sci_files 531 /usr/lib/i386-linux-gnu/wine/wine 530 /usr/lib/i386-linux-gnu/wine/wine/fakedlls 496 /etc/ssl/certs 457 /usr/share/mime/application 454 /usr/share/man/man2 450 /usr/include/qt4/QtCore 443 /usr/lib/python2.7 419 /usr/src/linux-headers-3.13-1-common/include/uapi/linux 413 /usr/share/fonts/X11/misc 413 /usr/include/linux 375 /usr/share/man/man5 374 /usr/share/lintian/overrides 372 /usr/share/cmake-2.8/Modules 370 /usr/share/fonts/X11/75dpi 370 /usr/share/fonts/X11/100dpi 356 /usr/share/icons/gnome/24x24/actions 356 /usr/share/icons/gnome/22x22/actions 356 /usr/share/icons/gnome/16x16/actions 353 /usr/share/icons/gnome/48x48/actions 353 /usr/share/icons/gnome/32x32/actions 341 /usr/lib/ghc/ghc-7.6.3 326 /usr/sbin 324 /usr/share/scilab/modules/compatibility_functions/macros 324 /usr/share/scilab/modules/cacsd/macros 320 /usr/share/terminfo/a 319 /usr/share/i18n/locales
{ "source": [ "https://unix.stackexchange.com/questions/122854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
122,918
While trying to fix an obscure hardware bug, it was suggested that adding a couple of parameters to the kernel might resolve the issue. I can of course do that, but I was wondering whether I can make these changes to a running kernel. In particular, I know that procfs and sysfs provide a way to make changes to the running kernel, but I am unsure how to map kernel parameter names to file paths. (I also presume that not all settings are changable at runtime, and these particular parameters might well not be configurable once the system is booted.) The specific parameters I'm interested in are i8042.nomux=1 i8042.reset I'm particularly unsure as to whether it's possible to issue the reset command on a running system. If these parameters are tweakable at runtime, where would I find them?
There are three kinds of things that can be called kernel parameters. Core kernel parameters are options passed on the kernel command line. They can only be passed at boot time. They are documented in kernel-parameters.txt (this file also lists module parameters; core kernel parameters are the ones without a . ). Some of these parameters only matter at boot time (e.g. root ). For the ones that are used throughout the lifetime of the system, there may or may not be a mechanism to change them at runtime, there's no general rule. Module parameters are like kernel parameters, but they specify a particular component of the kernel, usually a specific driver. Despite the name, these parameters apply whether the corresponding driver is compiled directly in the kernel or as a module. When the component is included in the main kernel image, you need to pass COMPONENT_NAME.PARAMETER_NAME=VALUE on the kernel command line. When the component is loaded as a module, you need to pass PARAMETER_NAME=VALUE to insmod . Some module parameters are visible through sysfs . The directory /sys/module/MODULE_NAME/parameters contains one file per parameter; reading that file gives you the current value of the parameter. Writing to that file sets the parameter, if it can be modified; most parameters cannot be modified (and so the file is read-only). The directory /sys/module/kernel/parameters lists some of the core kernel parameters. Module parameters are documented haphazardly; some of them are listed in kernel-parameters.txt , and the file contains references for some modules. If you can't find the documentation, search the source . Module parameters are declared by the module_param macro or one of its companions module_param_named , module_param_cb , etc. The last parameter of these macros determines the file permission (e.g. 0600 or S_IRUSR | S_IWUSR would rw------- i.e. readable and writable by root and inaccessible by anyone else). When the permission is 0, the entry in sysfs doesn't appear at all. i8042.nomux and i8042.reset are parameters of the i8042 driver . Looking at the source code, both have the permission 0, so these two parameters are not modifiable or even queryable at runtime. You can set the parameters only when the driver is started. If the driver is compiled as a module, then unloading the module and loading it again allows you to supply different parameters when you reload it. If the driver is directly in the kernel or if your system's configuration makes it effectively impossible to unload the module, you need to reboot. Finally, another kind of parameters in the kernel is sysctl . These settings can be viewed and changed with the sysctl command or via /proc/sys . I think the separation between sysctl and kernel parameters is mostly historical; hardware-related settings are traditionally kernel parameters while software-releated settings are traditionally sysctl but the distinction can be fuzzy at times.
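A hedged sketch of poking at the sysfs interface described above. usbcore is used here only because it happens to export a writable parameter on many systems; the i8042 parameters, as explained, are declared with permission 0 and therefore never appear:
$ ls -l /sys/module/usbcore/parameters/
$ cat /sys/module/usbcore/parameters/autosuspend
$ echo 5 | sudo tee /sys/module/usbcore/parameters/autosuspend
$ ls /sys/module/i8042/parameters/ 2>/dev/null     # nothing shows up for nomux/reset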
{ "source": [ "https://unix.stackexchange.com/questions/122918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26776/" ] }
122,977
I was trying to find out how to pass some text to a file without overwriting what's already there using the > command, and I realised I don't know what it's called. Searching for "right arrow" or "right chevron" or "more than command" didn't turn up anything. I've always just called it "pass to".
> is not a command but a file descriptor redirection. This means that the shell parses this assignment, removes it from the command line and changes the environment for the new process in which it is started. The new process does not notice this part of the command line. That's the reason why you can put it everywhere: At the beginning, at the end or in between. Look for the REDIRECTION block in man bash . In order to append to an existing file you need to use >> .
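A minimal illustration of the difference (the file name is arbitrary):
$ echo "first line"  > notes.txt    # > creates the file, or truncates it if it already exists
$ echo "second line" >> notes.txt   # >> appends instead of truncating
$ cat notes.txt
first line
second line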
{ "source": [ "https://unix.stackexchange.com/questions/122977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46764/" ] }
123,029
It has come to my attention that a server of mine has been hacked and infected with a known Chinese botnet. It was a prototype/testing virtual machine with its own static IP (US address), so no harm was caused (it just took me a while to figure it out). Now I would like to know what IP(s) were used for the intrusion, to know whether the attack originated from China. Is there a way to view a history of received SSH connections on the server? Edit: The system is Linux Debian 7
Look at the output of the last command and anything with an IP address or hostname instead of a blank space came in over the network. If sshd is the only way of doing that on this system, then there you go. Alternatively (if this is Linux), you can check /var/log/secure (on RH-based distros) or /var/log/auth.log (on Debian-based distros) where sshd will usually keep track of connections made even if they don't result in successful logins (which hits utmp / wtmp , which is what last will read from). Example: Apr 3 16:21:01 xxxxxxvlp05 sshd[6266]: Connection closed by xxx.xxx.13.76 ... Apr 3 09:09:49 xxxxxxvlp05 sshd[26275]: Failed password for invalid user __super from xxx.xxx.13.76 port 45229 ssh2 IIRC Solaris's sshd (which may not necessarily be OpenSSH's sshd ) will log this information to /var/adm/messages EDIT: @derobert makes an excellent point. It's important to remember that on any system, if your superuser account is compromised, then all bets are off since log files such as /var/log/wtmp or /var/adm/messages can be modified by the attacker. This can be mitigated if you shove logs off-server to a secure location. For example, at one shop I used to work at we had an "Audit Vault" machine that was secured so as to only receive the audit log files from the various servers in the data center. I would recommend having a similar setup in the future (since "I have a test machine" sounds like you're operating in a large-ish shop)
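On Debian 7 specifically, a hedged pair of starting points for digging through the sshd entries mentioned above (log rotation means you may also need auth.log.1 and the gzipped older files):
$ grep 'sshd' /var/log/auth.log | grep -E 'Accepted|Failed'
$ grep 'sshd' /var/log/auth.log | grep -oE 'from [0-9a-fA-F.:]+' | sort | uniq -c | sort -rn
The second line gives a rough per-source-address count of connection attempts.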
{ "source": [ "https://unix.stackexchange.com/questions/123029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56196/" ] }
123,074
If I change the swappiness value, for example from 60 to 0, do I always need to reboot the machine for the change to take effect? Even when modifying it with: sysctl -w vm.swappiness=0
Everything is well explained in the Wikipedia page you gave. # Set the swappiness value as root echo 10 > /proc/sys/vm/swappiness # Alternatively, run this as a non-root user # This does the same as the previous command sudo sysctl -w vm.swappiness=10 # Verify the change cat /proc/sys/vm/swappiness 10 At this point, the system will manage swap the way you just configured it, BUT if you reboot NOW, your change will be forgotten and the system will work with the default value (assuming 60, meaning that it will start to swap at 40% RAM occupation). You have to add the line below to /etc/sysctl.conf to keep your change permanent: vm.swappiness = 10 Hope it’s clearer for you now!
{ "source": [ "https://unix.stackexchange.com/questions/123074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61246/" ] }
123,088
Apart from upgrading the kernel, are there any changes to a Linux system that require a reboot? I know there are situations where a reboot makes things easier, but are there any that cannot be accomplished except with a reboot? To clarify: I'm thinking of a typical desktop or server system that isn't suffering from a hardware malfunction.
A couple of things come to mind: Recover from a kernel panic A kernel panic, by definition, cannot be recovered from without restarting the kernel. Recover from hangs which leave you without terminal access If the system is unresponsive and you're stranded without a way to issue commands to recover, the only thing you might be able to do is to reboot. Usually, you'd want to avoid manual power cycling. For these kinds of situations, the Linux kernel has Magic SysRq support which can be used to reboot the machine in an emergency. As long as CONFIG_MAGIC_SYSRQ option has been enabled in the kernel configuration, and the kernel.sysrq sysctl option is enabled, you can issue commands directly to the kernel with magic SysRq key combinations: Note that Alt + SysRq below means press and hold down Alt , then press and hold SysRq (typically the PrintScrn key). Alt + SysRq + r : regain control of keyboard Alt + SysRq + e : send SIGTERM to all processes, except init , giving them a chance to terminate gracefully Alt + SysRq + i : send SIGKILL to all processes, except init , forcing them to terminate Alt + SysRq + s : attempt to sync all mounted filesystems Alt + SysRq + u : remount all filesystem read-only Alt + SysRq + b : reboot, or Alt + SysRq + o : shutdown A mnemonic for the magic SysRq key combinations to attempt a graceful reboot is: " R eboot E ven I f S ystem U tterly B roke " For headless servers, there's even an iptables target enabling remote SysRq sequences over a network. Recover from unbootable state If the system has already been brought to a state where a regular boot is not possible (e.g. as a result of a failed system upgrade, corrupt filesystem etc.), then the only way to access a recovery console on the system might be to reboot using appropriate boot-time options. Change boot-time kernel parameters Some kernel parameters (e.g. audit to enable / disable kernel auditing) can only be set when the kernel is loaded at boot-time.
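A hedged companion to the key combinations above, for when you only have an SSH session and no physical keyboard: the same SysRq commands can be written to /proc/sysrq-trigger (shown with the relatively harmless 's' sync command; 'b' would reboot immediately):
$ cat /proc/sys/kernel/sysrq              # non-zero means SysRq is at least partly enabled
$ echo s | sudo tee /proc/sysrq-trigger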
{ "source": [ "https://unix.stackexchange.com/questions/123088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62988/" ] }
123,243
I found a question about how to remove lines longer than 2048 chars: How to delete line if longer than XY? Q: But how can I remove lines shorter than 4 chars? That is, remove lines in a file whose length is 1, 2 or 3. UPDATE: Thanks for the many GOOD answers, but I can only mark one as OK
You could use sed . The following would remove lines that are 3 characters long or smaller: sed -r '/^.{,3}$/d' filename In order to save the changes to the file in-place, supply the -i option. If your version of sed doesn't support extended RE syntax, then you could write the same in BRE: sed '/^.\{,3\}$/d' filename which would work with all sed variants. You could also use awk : awk 'length($0)>3' filename Using perl : perl -lne 'length()>3 && print' filename
{ "source": [ "https://unix.stackexchange.com/questions/123243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61246/" ] }
123,247
I really like the idea of Mutt , reading mail in the terminal. I'm not really pleased with some inconsistencies and the IMAP handling. I set about trying to find some alternatives to Mutt, but I can't seem to find any. What alternatives to the Mutt e-mail client exist for the Linux terminal?
The obvious answer is Alpine , which used to be Pine , but was freed by the University of Washington. Pine is non-free software, Alpine is free software. Alpine is quite similar to Mutt, but Mutt is generally considered to be more powerful and flexible. The current active branch of Alpine is a fork called Re-Alpine, since the University of Washington has largely ceased development of Alpine as of 2008. The Wikipedia pages on Pine and Alpine cover the history adequately. I'd recommend trying to figure out your issues with Mutt instead of jumping to another mail client. Alpine inherits a polished user interface from Pine, but has some significant limitations and inflexibilities compared to Mutt. Therefore, you may find using it comes with its own problems. Personally, I've used Pine since 1994, and switched to Alpine when that became available. I've thought over the years that I ought to be using Mutt instead, but never managed a successful transition. Incidentally, IMAP was created by the late Mark Crispin , who used to work at the University of Washington developing IMAP. He was therefore also, unsurprisingly, responsible for Pine's IMAP support. In the Pine credits he is listed thus: C-Client library & IMAPd: Mark Crispin
{ "source": [ "https://unix.stackexchange.com/questions/123247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64334/" ] }
123,440
I am running the following command, but it is not performed recursively: find . -name *.java I know there are java files further down in the current directory but it is performing the find on the current directory only. I am using OS X, 10.9.
The problem is, you didn't quote your -name parameter. Do this instead: find . -name '*.java' Explanation Without the quotes, the shell interprets *.java as a glob pattern and expands it to any file names matching the glob before passing it to find . This way, if you had, say, foo.java in the current directory, find 's actual command line would be: find . -name foo.java which would obviously list the file in the current directory only (unless you happen to have some similarly-named files further down the tree). Quoting prevents glob expansion and passes the command line to find as-is. Incidentally, if the glob had failed to match (no *.java files in the current directory), you would get one of two behaviors depending on how your shell is set up to handle globs that don't match (this is governed by the nullglob option in Bash, for example): If a glob that doesn't match is not expanded by the shell, find will (accidentally, mind you) exhibit correct behavior. If a glob that doesn't match is expanded into an empty string by the shell, find will complain that it is missing an argument to -name .
{ "source": [ "https://unix.stackexchange.com/questions/123440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11498/" ] }
123,473
What is the difference between the two commands env and printenv ? They both show the environment variables, and the output is exactly the same aside from _ . Are there any historical reasons for there being two commands instead of one?
Are there any historical reasons for there being two commands instead of one? There was, just history manner. Bill Joy wrote the first version of printenv command in 1979 for BSD. UNIX System III introduced env command in 1980. GNU followed UNIX System's env in 1986. BSD followed GNU/UNIX System's env in 1988. MINIX followed BSD's printenv in 1988. GNU followed MINX/BSD's printenv in 1989. GNU shell programming utilities 1.0 included printenv and env in 1991. GNU Shell Utilities merged into GNU coreutils in 2002, which is what you find in GNU/Linux nowadays. Note that the "followed" doesn't means the source code was the same, probably they were rewritten to avoid license lawsuits. So, the reason why both commands existed is because Bill Joy wrote printenv even before env existed. After 10 years of merging/compatibility and GNU come across it, you are now seeing both similar commands on the same page. This history indicated as follows: (I tried to minimize the answer and only provided 2 essential source code snippets here. The rest you can click the attached links to see more) [fall of 1975] Also arriving in the fall of 1975 were two unnoticed graduate students, Bill Joy and Chuck Haley; they both took an immediate interest in the new system. Initially, they began working on a Pascal system that Thompson had hacked together while hanging around the 11/70 machine room. [1977] Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. //rf: https://en.wikipedia.org/wiki/Berkeley_Software_Distribution [February, 1979] 1979(see "Bill Joy, UCB February, 1979") /1980(see "copyright[] =") , printenv.c //rf: http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/src/ucb/printenv.c /* * Copyright (c) 1980 Regents of the University of California. * All rights reserved. The Berkeley software License Agreement * specifies the terms and conditions for redistribution. */ #ifndef lint char copyright[] = "@(#) Copyright (c) 1980 Regents of the University of California.\n\ All rights reserved.\n"; #endif not lint #ifndef lint static char sccsid[] = "@(#)printenv.c 5.1 (Berkeley) 5/31/85"; #endif not lint /* * printenv * * Bill Joy, UCB * February, 1979 */ extern char **environ; main(argc, argv) int argc; char *argv[]; { register char **ep; int found = 0; argc--, argv++; if (environ) for (ep = environ; *ep; ep++) if (argc == 0 || prefix(argv[0], *ep)) { register char *cp = *ep; found++; if (argc) { while (*cp && *cp != '=') cp++; if (*cp == '=') cp++; } printf("%s\n", cp); } exit (!found); } prefix(cp, dp) char *cp, *dp; { while (*cp && *dp && *cp == *dp) cp++, dp++; if (*cp == 0) return (*dp == '='); return (0); } [1979] Hard to determine released in 2BSD OR 3BSD //rf: https://en.wikipedia.org/wiki/Berkeley_Software_Distribution 3BSD The printenv command appeared in 3.0 BSD. //rf: http://www.freebsd.org/cgi/man.cgi?query=printenv&sektion=1#end 3.0 BSD introduced at 1979 //rf: http://gunkies.org/wiki/3_BSD 2BSD The printenv command first appeared in 2BSD //rf: http://man.openbsd.org/printenv.1 [June, 1980] UNIX Release 3.0 OR "UNIX System III" //rf: ftp://pdp11.org.ru/pub/unix-archive/PDP-11/Distributions/usdl/SysIII/ [xiaobai@xiaobai pdp11v3]$ sudo grep -rni printenv . //no such printenv exist. [xiaobai@xiaobai pdp11v3]$ sudo find . 
-iname '*env*' ./sys3/usr/src/lib/libF77/getenv_.c ./sys3/usr/src/lib/libc/vax/gen/getenv.c ./sys3/usr/src/lib/libc/pdp11/gen/getenv.c ./sys3/usr/src/man/man3/getenv.3c ./sys3/usr/src/man/docs/c_env ./sys3/usr/src/man/docs/mm_man/s03envir ./sys3/usr/src/man/man7/environ.7 ./sys3/usr/src/man/man1/env.1 ./sys3/usr/src/cmd/env.c ./sys3/bin/env [xiaobai@xiaobai pdp11v3]$ man ./sys3/usr/src/man/man1/env.1 | cat //but got env already ENV(1) General Commands Manual ENV(1) NAME env - set environment for command execution SYNOPSIS env [-] [ name=value ] ... [ command args ] DESCRIPTION Env obtains the current environment, modifies it according to its arguments, then executes the command with the modified environment. Arguments of the form name=value are merged into the inherited environment before the command is executed. The - flag causes the inherited environment to be ignored completely, so that the command is executed with exactly the environment specified by the arguments. If no command is specified, the resulting environment is printed, one name-value pair per line. SEE ALSO sh(1), exec(2), profile(5), environ(7). ENV(1) [xiaobai@xiaobai pdp11v3]$ [xiaobai@xiaobai pdp11v3]$ cat ./sys3/usr/src/cmd/env.c //diff with http://minnie.tuhs.org/cgi-bin/utree.pl?file=pdp11v/usr/src/cmd/env.c version 1.4, you will know this file is slightly older, so we can concluded that this file is "env.c version < 1.4" /* * env [ - ] [ name=value ]... [command arg...] * set environment, then execute command (or print environment) * - says start fresh, otherwise merge with inherited environment */ #include <stdio.h> #define NENV 100 char *newenv[NENV]; char *nullp = NULL; extern char **environ; extern errno; extern char *sys_errlist[]; char *nvmatch(), *strchr(); main(argc, argv, envp) register char **argv, **envp; { argc--; argv++; if (argc && strcmp(*argv, "-") == 0) { envp = &nullp; argc--; argv++; } for (; *envp != NULL; envp++) if (strchr(*envp, '=') != NULL) addname(*envp); while (*argv != NULL && strchr(*argv, '=') != NULL) addname(*argv++); if (*argv == NULL) print(); else { environ = newenv; execvp(*argv, argv); fprintf(stderr, "%s: %s\n", sys_errlist[errno], *argv); exit(1); } } addname(arg) register char *arg; { register char **p; for (p = newenv; *p != NULL && p < &newenv[NENV-1]; p++) if (nvmatch(arg, *p) != NULL) { *p = arg; return; } if (p >= &newenv[NENV-1]) { fprintf(stderr, "too many values in environment\n"); print(); exit(1); } *p = arg; return; } print() { register char **p = newenv; while (*p != NULL) printf("%s\n", *p++); } /* * s1 is either name, or name=value * s2 is name=value * if names match, return value of s2, else NULL */ static char * nvmatch(s1, s2) register char *s1, *s2; { while (*s1 == *s2++) if (*s1++ == '=') return(s2); if (*s1 == '\0' && *(s2-1) == '=') return(s2); return(NULL); } [xiaobai@xiaobai pdp11v3]$ [1985] BSD first printenv manual //rf: http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/src/man/man1/printenv.1 I couldn't find the manual related to env, but the closest is getenv and environ //http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/src/man [1986] First version of GNU env //rf: ftp://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/1.0-RELEASE/ports/shellutils/src/env.c [1987] MINIX 1st released //rf: https://en.wikipedia.org/wiki/Andrew_S._Tanenbaum Tanenbaum wrote a clone of UNIX, called MINIX (MINi-unIX), for the IBM PC. It was targeted at students and others who wanted to learn how an operating system worked. 
[1988] BSD 1st env.c //http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/src/usr.sbin/cron/env.c /* Copyright 1988,1990,1993,1994 by Paul Vixie * All rights reserved [October 4, 1988] MINIX version 1.3 //rf: https://groups.google.com/forum/#!topic/comp.os.minix/cQ8kaiq1hgI ... 32932 190 /minix/commands/printenv.c //printenv.c already exist //rf: http://www.informatica.co.cr/linux/research/1990/0202.htm [1989] The first version of GNU printenv , refer to [August 12, 1993]. [July 16, 1991] "Shellutils" - GNU shell programming utilities 1.0 released //rf: https://groups.google.com/forum/#!topic/gnu.announce/xpTRtuFpNQc The programs in this package are: basename date dirname env expr groups id logname pathchk printenv printf sleep tee tty whoami yes nice nohup stty uname [August 12, 1993] printenv.c //rf: ftp://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/1.0-RELEASE/ports/shellutils/src/printenv.c , GNU Shell Utilities 1.8 //rf: ftp://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/1.0-RELEASE/ports/shellutils/VERSION /* printenv -- print all or part of environment Copyright (C) 1989, 1991 Free Software Foundation. ... [1993] printenv.c which found on DSLinux source code in 2006 //rf: (Google) cache:mailman.dslinux.in-berlin.de/pipermail/dslinux-commit-dslinux.in-berlin.de/2006-August/000578.html --- NEW FILE: printenv.c --- /* * Copyright (c) 1993 by David I. Bell [November 1993] The first version of FreeBSD was released. //rf: https://en.wikipedia.org/wiki/FreeBSD [september 1, 2002] http://git.savannah.gnu.org/cgit/coreutils.git/tree/README-package-renamed-to-coreutils The GNU fileutils, textutils, and sh-utils(see "Shellutils" at July 16, 1991 above) packages have been merged into one, called the GNU coreutils. Overall, env use cases compare with printenv : print environment variables, but printenv can do the same Disable shell builtin but can achieve with enable cmd too. set variable but pointless due to some shells already can do it without env , e.g. $ HOME=/dev HOME=/tmp USER=root /bin/bash -c "cd ~; pwd" /tmp #!/usr/bin/env python header, but still not portable if env not in /usr/bin env -i , disable all env. I find it useful to figure out the critical environment variables for certain program, to make it run from crontab . e.g. [1] In interactive mode, run declare -p > /tmp/d.sh to stores attributes variables. [2] In /tmp/test.sh , write: . /tmp/d.sh; eog /home/xiaobai/Pictures/1.jpg [3] Now run env -i bash /tmp/test.sh [4] If it success to display image, remove half of variables in /tmp/d.sh and run env -i bash /tmp/test.sh again. If something failed, undo it. Repeat the step to narrow down. [5] Finally I figure out eog requires $DISPLAY to run in crontab , and absent of $DBUS_SESSION_BUS_ADDRESS will slow down the display of image. target_PATH="$PATH:$(sudo printenv PATH)"; is useful to directly use the root path without having to further parse the output of env or printenv . e.g: xb@dnxb:~$ sudo env | grep PATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin xb@dnxb:~$ sudo printenv | grep PATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin xb@dnxb:~$ sudo printenv PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin xb@dnxb:~$ sudo env PATH env: ‘PATH’: No such file or directory xb@dnxb:~$
{ "source": [ "https://unix.stackexchange.com/questions/123473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20120/" ] }
123,511
I have a font with the name Media Gothic . How can I find the file name of that font in Linux? I need to copy that file to another system. I've tried: find /usr/share/fonts/ -name '*media*' But this gives no results. gothic gives some other fonts. TTF is a binary format so I can't use grep .
Have you tried? fc-list | grep -i "media" Also give fc-scan and fc-match a try.
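A hedged follow-up for getting straight to the backing file; the family name is taken from the question, so adjust it if your font reports a slightly different name:
$ fc-match -v "Media Gothic" | grep -i 'file:'
$ fc-list : family file | grep -i "media gothic"
The first command prints the file fontconfig would actually use for that family; the second lists family and file for every installed font and filters the result.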
{ "source": [ "https://unix.stackexchange.com/questions/123511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
123,518
Many new laptop and desktop computers do not have 9-pin/25-pin serial ports. Why do many Linux distributions still contain /dev/ttyS0 , dev/ttyS1 device files? Since udev can create the device files dynamically, why are /dev/ttyS0 , /dev/ttyS1 still created statically? Each time I boot up, /dev/ttyS0 and /dev/ttyS1 are in there. By the way: I am using Debian 7.0.
These /dev nodes appear because the standard PC serial port driver is compiled into the kernel you're using, and it is finding UARTs . That causes /sys/devices/platform/serial8250 (or something compatible) to appear, so udev creates the corresponding /dev nodes. These UARTs are most likely one of the many features of your motherboard's chipset. Serial UARTs in the chipset are quite common still, even though it is becoming less and less common for a DB-9 connector to be attached to these IC UART pins. On some motherboards, there is a header connector for each serial port, and you have to buy an adapter cable if you want to route that connector to the back of the PC: Other motherboards using the same chipset might not even expose the header connector, even though the feature is available in silicon, purely to save a bit of PCB space and a few cents for the header connector. A few serial UARTs add negligible cost to a mass-produced PC chipset IC, whereas it adds a few dollars to the final retail cost of a motherboard to run a DB-9 connector out to the board edge. There is also a cost in PCB space; space at the board edge is especially precious. There is no standard way to probe for the existence of a device connected to an RS-232 serial port. Contrast USB, where the mere presence of a port on the motherboard doesn't cause a /dev node to be created, but plugging a device in does, because there is a fairly complex negotiation between the device and the host OS. In effect, the device announces itself to the OS, so udev can react by creating an appropriate /dev node for the device.
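Two hedged ways to confirm this on your own machine, i.e. that the kernel's 8250 driver detected on-board UARTs at boot:
$ dmesg | grep -i ttyS
$ ls /sys/class/tty/ | grep '^ttyS'
If dmesg reports something like "ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A", the chipset UART exists even when no DB-9 connector is wired out to it.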
{ "source": [ "https://unix.stackexchange.com/questions/123518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47298/" ] }
123,602
I've always wondered this but never took the time to find out, so I'll do so now - how portable is the usage shown here of either /proc/$$/fd/$N or /dev/fd/$N ? I understand POSIX guarantees /dev/null, /dev/tty, and /dev/console (though I only found that out the other day after reading the comments on this answer ) but what about these others? As far as I can tell they're pretty common, but in what systems can I not expect to find them? Why not? Is it more likely to find one than the other? Will they always exhibit like attributes? I tend to use these devices pretty extensively in all manner of ways, and I'd like to know if there's a chance I'd come up short just trying. Also the above questions should be understood to be only what I think I'd like to know, but, since I obviously have to ask in the first place, I may not know best in this regard and they should not be considered stringent requirements for an answer. Just clue me in if you can, please.
The /proc/ PID /fd/ NUM symlinks are quasi-universal on Linux, but they don't exist anywhere else (except on Cygwin which emulates them). /proc/ PID /fd/ NUM also exist on AIX and Solaris, but they aren't symlinks. Portably, to get information about open files, install lsof . Unices with /proc/ PID /fd Linux Under Linux, /proc/ PID /fd/ NUM is a slightly magic symbolic link to the file that the process with the ID PID has open on the file descriptor NUM . This link is magic in that, for example, it can be used to access the file even if the file is removed. The link will track the file through renames, too. /proc/self is a magic symbolic link which points to /proc/ PID where PID is the process that accesses the link. This feature is present on virtually all Linux systems. It's provided by the driver for the proc filesystem , which is technically optional but used for so many things (including making the ps work — it reads from /proc/ PID ) that it's almost never left out even on embedded systems. Cygwin Cygwin emulates Linux's /proc/ PID /fd/ NUM (for Cygwin processes) and /proc/self . Solaris (since version 2.6), AIX There are /proc/ PID /fd entries for each file descriptor, but they appear as the same type as the opened file, so they provide no information about the path of the file. They do however report the same stat information as fstat would report to the process that has the file open, so it's possible to determine on which filesystem the file is located and its inode number. Directories appear as symbolic links, however they are magic symlinks which can only be followed, and readlink returns an empty string. Under AIX, the procfiles command displays some information about a process's open files. Under Solaris, the pfiles command displays some information about a process's open files. This does not include the path to the file (on Solaris, it does since Solaris 10, see below). Solaris (since version 10 ) In addition to /proc/ PID /fd/ NUM , modern Solaris versions have /proc/ PID /path/ NUM which contains symbolic links similar to Linux's symlinks in /proc/ PID /fd/ NUM . The pfiles command shows information about a process's open files, including paths. Plan9 /proc/ PID /fd is a text file which contains one record (line) per file descriptor opened by the process. The file name is not tracked there. QNX /proc/ PID / is a directory, but it doesn't contain any information about file descriptors. Unices with /proc but no direct access to file descriptors (Note: sometimes it's possible to obtain information about a process's open files by riffling through its memory image which is accessible under /proc . I don't count that as “direct access”.) Unices where /proc/ PID is a file The proc filesystem itself started out in UNIX 8th edition, but with a different structure, and went through Plan 9 and back to some unices. I think that all operating systems with a /proc have an entry for each PID, but on many systems, it's a regular file, not a directory. The following systems have a /proc/ PID which needs to be read with ioctl : Solaris up to 2.5 OSF/1 now known as Tru64 IRIX (?) SCO (?) MINIX 3 MINIX 3 has a procfs server which provides several Linux-like components including /proc/ PID / directories. However there is no /proc/ PID /fd . FreeBSD FreeBSD has /proc/ PID / directories, but they do not provide information about open file descriptors. (There is however /proc/ PID /file which is similar to Linux's /proc/ PID /exe , giving access to the executable through a symbolic link.) 
FreeBSD's procfs is deprecated . Unices without /proc HP-UX OpenBSD NetBSD Mac OS X File descriptor information through other channels Fuser The fuser command lists the processes that have a specified file open, or a file open on the specified mount point. This command is standard (available on all XSI -compliant systems, i.e. POSIX with the X/Open System Interface Extension). You can't go from a process to file names with this utility. Lsof Lsof stands for “list open files”. It is a third-party tool , available (but usually not part of the default installation) for most unix variants. Obtaining information about open files is very system-dependent, as the analysis above might have made you suspect. The lsof maintainer has done the work of combining it all under a single interface. You can read the FAQ to see what kinds of difficulties lsof has to put up with. On most unices, obtaining information about the names of open files requires parsing kernel data structures. Quoting from FAQ 3.3 “Why doesn't lsof report full path names?”: Lsof can't obtain path name components from the kernel name caches of the following dialects: AIX Only the Linux kernel records full path names in the structures it maintains about open files; instead, most kernels convert path names to device and node number doublets and use them for subsequent file references once files have been opened. If you need to parse information from lsof 's output, be sure to use the -F mode (one field per line), preferably the -F0 mode (null-delimited fields). To get information about a specific file descriptor of a specific process, use the -a option with -p PID and -d NUM , e.g. lsof -a -p 123 -d 0 -F0n . /dev/fd/ NUM for file descriptors of the current process Many unix variants provide a way to for a process to access its open files via a file name: opening /dev/fd/ NUM is equivalent to calling dup( NUM ) . These names are useful when a program wants a file name but you want to pass an already-open file (e.g. a pipe or socket); for example the shells that implement process substitution use them where available (using a temporary named pipe where /dev/fd is unavailable). Where /dev/fd exists, there are also usually (always?) synonyms (sometimes symbolic links, sometimes hard links, sometimes magic files with equivalent properties) /dev/stdin = /dev/fd/0 , /dev/stdout = /dev/fd/1 , /dev/stderr = /dev/fd/2 . Under Linux, /dev/fd is a symbolic link to /proc/self/fd . Under most unices ( IRIX , OpenBSD , NetBSD , SCO, Solaris , …), the entries in /dev/fd are character devices. They usually appear whether the file descriptor is open or not, and entries may not be available for file descriptors above a certain number. Under FreeBSD and OSX, the fdescfs filesystem provides a dynamic /dev/fd directory which follows the open descriptors of the calling process. A static /dev/fd is available if /dev/fd is not mounted. Under OSF/1 (Tru64), /dev/fd is provided via fdfs . There is no /dev/fd on AIX or HP-UX.
{ "source": [ "https://unix.stackexchange.com/questions/123602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52934/" ] }
123,634
I am printing some files from a remote computer to a network printer with the lpr command. It apparently worked, but some minutes later, when I typed lpstat or lpq, the job had already disappeared; it had probably already printed the file. Is there a way to check the history or the log of my successfully completed jobs in the printer queue?
Yes, a program exists: lpstat - print cups status information.

    $ lpstat -W completed

From the lpstat man page:

    -W which-jobs
        Specifies which jobs to show, completed or not-completed (the default). This option must appear before the -o option and/or any printer names, otherwise the default (not-completed) value will be used in the request to the scheduler.

Or, if you prefer, via the following web pages:

https://localhost:631/printers/[NameOfPrinter]?which_jobs=completed
http://localhost:631/jobs?which_jobs=completed

Kind regards
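A couple of hedged variations on the same idea; the queue name office-printer is a placeholder, and whether a page log exists at all depends on how cupsd is configured:

    # Completed jobs for one specific queue (queue name is an example)
    lpstat -W completed -o office-printer

    # Long listing of completed jobs, if your lpstat build supports -l
    lpstat -W completed -l

    # Many CUPS servers also record finished pages in a page log; the path and
    # whether it is enabled depend on cupsd.conf (PageLog / PageLogFormat)
    grep office-printer /var/log/cups/page_log 2>/dev/null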
{ "source": [ "https://unix.stackexchange.com/questions/123634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64622/" ] }
123,711
CVE-2014-0160 a.k.a. Heartbleed is a vulnerability in OpenSSL. It looks scary. How do I determine whether I am affected? If I'm affected, what do I need to do? Apparently upgrading isn't enough.
This vulnerability has a high potential impact because if your system has been attacked, it will remain vulnerable even after patching, and attacks may not have left any traces in logs. Chances are that if you patched quickly and you aren't a high-profile target, nobody will have gotten around to attacking you, but it's hard to be sure.

Am I vulnerable?

The buggy version of OpenSSL

The buggy software is the OpenSSL library 1.0.1 up to 1.0.1f, and OpenSSL 1.0.2 up to beta1. Older versions (0.9.x, 1.0.0) and versions where the bug has been fixed (1.0.1g onwards, 1.0.2 beta 2 onwards) are not affected. It's an implementation bug, not a flaw in the protocol, so only programs that use the OpenSSL library are affected.

You can use the command line tool openssl version -a to display the OpenSSL version number. Note that some distributions port the bug fix to earlier releases; if your package's change log mentions the Heartbleed bug fix, that's fine, even if you see a version like 1.0.1f. If openssl version -a mentions a build date (not the date on the first line) of 2014-04-07 around evening UTC or later, you should be fine. Note that the OpenSSL package may have 1.0.0 in its name even though the version is 1.0.1 (1.0.0 refers to the binary compatibility).

Affected applications

Exploitation is performed through an application which uses the OpenSSL library to implement SSL connections. Many applications use OpenSSL for other cryptographic services, and that's fine: the bug is in the implementation of a particular feature of the SSL protocol, the “heartbeat”.

You may want to check which programs are linked against the library on your system. On systems that use dpkg and apt (Debian, Ubuntu, Mint, …), the following command lists installed packages other than libraries that use libssl1.0.0 (the affected package):

    apt-cache rdepends libssl1.0.0 | tail -n +3 | xargs dpkg -l 2>/dev/null | grep '^ii' | grep -v '^ii lib'

If you run some server software that's on this list and listens to SSL connections, you're probably affected. This concerns web servers, email servers, VPN servers, etc. You'll know that you've enabled SSL because you had to generate a certificate, either by submitting a certificate signing request to a certification authority or by making your own self-signed certificate. (It's possible that some installation procedure has generated a self-signed certificate without you noticing, but that's generally done only for internal servers, not for servers exposed to the Internet.)

If you ran a vulnerable server exposed to the Internet, consider it compromised unless your logs show no connection since the announcement on 2014-04-07. (This assumes that the vulnerability wasn't exploited before its announcement.) If your server was only exposed internally, whether you need to change the keys will depend on what other security measures are in place.

Client software is affected only if you used it to connect to a malicious server. So if you connected to your email provider using IMAPS, you don't need to worry (unless the provider was attacked, in which case they should let you know), but if you browsed random websites with a vulnerable browser you may need to worry. So far it seems that the vulnerability wasn't being exploited before it was discovered, so you only need to worry if you connected to malicious servers since 2014-04-08.
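To pull the version and dependency checks above into one place, here is a rough sketch for a Debian-style system; the package names and the changelog command are distribution-specific assumptions, so adapt them to your platform:

    # 1. Which OpenSSL is installed, and when was it built?
    #    A 1.0.1 through 1.0.1f version with a build date before 2014-04-07 is suspect.
    openssl version -a

    # 2. Which non-library packages link against the affected library (same pipeline as above)?
    apt-cache rdepends libssl1.0.0 | tail -n +3 | xargs dpkg -l 2>/dev/null | grep '^ii' | grep -v '^ii lib'

    # 3. If the version string still looks vulnerable, check whether your distribution
    #    backported the fix (Debian/Ubuntu example).
    apt-get changelog libssl1.0.0 | grep -i -m 1 'cve-2014-0160'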
The following programs are unaffected because they don't use OpenSSL to implement SSL:

SSH (the protocol is not SSL)
Chrome/Chromium (uses NSS)
Firefox (uses NSS) (at least with Firefox 27 on Ubuntu 12.04, but not with all builds?)

What is the impact?

The bug allows any client who can connect to your SSL server to retrieve about 64kB of memory from the server at a time. The client doesn't need to be authenticated in any way. By repeating the attack, the client can dump different parts of the memory in successive attempts. This potentially allows the attacker to retrieve any data that has been in the memory of the server process, including keys, passwords, cookies, etc. One of the critical pieces of data that the attacker may be able to retrieve is the server's SSL private key. With this data, the attacker can impersonate your server.

The bug also allows any server to which your SSL client connected to retrieve about 64kB of memory from the client at a time. This is a worry if you used a vulnerable client to manipulate sensitive data and then later connected to an untrusted server with the same client. The attack scenarios on this side are thus significantly less likely than on the server side.

Note that for typical distributions, there is no security impact on package distribution as the integrity of packages relies on GPG signatures, not on SSL transport.

How do I fix the vulnerability?

Remediation of exposed servers

1. Take all affected servers offline. As long as they're running, they're potentially leaking critical data.

2. Upgrade the OpenSSL library package. All distributions should have a fix out by now (either with 1.0.1g, or with a patch that fixes the bug without changing the version number). If you compiled from source, upgrade to 1.0.1g or above. Make sure that all affected servers are restarted. On Linux, you can check if potentially affected processes are still running with

    grep 'libssl.*(deleted)' /proc/*/maps

3. Generate new keys. This is necessary because the bug might have allowed an attacker to obtain the old private key. Follow the same procedure you used initially. If you use certificates signed by a certification authority, submit your new public keys to your CA. When you get the new certificate, install it on your server. If you use self-signed certificates, install it on your server. Either way, move the old keys and certificates out of the way (but don't delete them, just ensure they aren't getting used any more).

4. Now that you have new uncompromised keys, you can bring your server back online.

5. Revoke the old certificates.

6. Damage assessment: any data that has been in the memory of a process serving SSL connections may potentially have been leaked. This can include user passwords and other confidential data. You need to evaluate what this data may have been. If you're running a service that allows password authentication, then the passwords of users who connected since a little before the vulnerability was announced should be considered compromised. Check your logs and change the passwords of any affected user. Also invalidate all session cookies, as they may have been compromised. Client certificates are not compromised. Any data that was exchanged since a little before the vulnerability may have remained in the memory of the server and so may have been leaked to an attacker. If someone has recorded an old SSL connection and retrieved your server's keys, they can now decrypt their transcript. (Unless PFS was ensured; if you don't know, it wasn't.)
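A condensed, hedged version of those server-side steps for a Debian-style system; the package names, the nginx service, and the key paths are placeholders, and your CA's submission and revocation process will differ:

    # Upgrade the library and restart every SSL-using daemon
    apt-get update && apt-get install --only-upgrade openssl libssl1.0.0
    grep -l 'libssl.*(deleted)' /proc/*/maps 2>/dev/null   # any process listed here still needs a restart
    service nginx restart                                  # example service; repeat for each affected daemon

    # Generate a new private key and certificate signing request (paths and names are examples)
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout /etc/ssl/private/example.com.key.new \
        -out /etc/ssl/example.com.csr

    # Then: submit the CSR to your CA (or self-sign), install the new certificate,
    # move the old key and certificate aside, and revoke the old certificate.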
Remediation in other cases

Servers that only listen on localhost or on an intranet are only to be considered exposed if untrusted users can connect to them.

With clients, there are only rare scenarios where the bug can have been exploited: an exploit would require that you used the same client process to manipulate confidential data (e.g. passwords, client certificates, …) and then, in the same process, connected to a malicious server over SSL. So for example an email client that you only use to connect to your (not completely untrusted) mail provider is not a concern (not a malicious server). Running wget to download a file is not a concern (no confidential data to leak). If you did that between 2014-04-07 evening UTC and upgrading your OpenSSL library, consider any data that was in the client's memory to be compromised.

References

The Heartbleed Bug (by one of the two teams who independently discovered the bug)
How exactly does the OpenSSL TLS heartbeat (Heartbleed) exploit work?
Does Heartbleed mean new certificates for every SSL server?
Heartbleed: What is it and what are options to mitigate it?
{ "source": [ "https://unix.stackexchange.com/questions/123711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }