152,299
I am trying to retrieve memory used(RAM) in percentage using Linux commands. My cpanel shows Memory Used which I need to display on a particular webpage. From forums, I found out that correct memory can be found from the following: free -m Result: -/+ buffers/cache: 492 1555 -/+ buffers/cache: contains the correct memory usage. I don't know how to parse this info or if there is any different command to get the memory used in percentage.
Here is sample output from free: % free total used free shared buffers cached Mem: 24683904 20746840 3937064 254920 1072508 13894892 -/+ buffers/cache: 5779440 18904464 Swap: 4194236 136 4194100 The first line of numbers ( Mem: ) lists total memory, used memory, free memory, usage of shared memory, usage of buffers, and usage of filesystem caches ( cached ). In this line, used includes the buffers and cache, and this impacts free. This is not your "true" free memory because the system will dump cache if needed to satisfy allocation requests. The next line ( -/+ buffers/cache: ) gives us the actual used and free memory as if there were no buffers or cache. The final line ( Swap ) gives the usage of swap memory. There is no buffer or cache for swap as it would not make sense to put these things on a physical disk. To output used memory (minus buffers and cache) you can use a command like: % free | awk 'FNR == 3 {print $3/($3+$4)*100}' 23.8521 This grabs the third line and computes used/(used+free) * 100, which on that line is the same as used/total * 100. And for free memory: % free | awk 'FNR == 3 {print $4/($3+$4)*100}' 76.0657
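A possible alternative, as a sketch that bypasses free's layout entirely: on kernels that expose MemAvailable in /proc/meminfo (roughly 3.14 and later), the used-memory percentage can be read straight from the kernel's own counters:
% awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {printf "%.2f\n", (t-a)/t*100}' /proc/meminfo
On older kernels without MemAvailable, the free-based awk above remains the practical option.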
{ "source": [ "https://unix.stackexchange.com/questions/152299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82109/" ] }
152,312
I am trying to remove all instances of a pattern match from a file if it matches a pattern. If there is a match, the (complete) line with the matching pattern and the next line get removed. The next line always appears after the line with the pattern match, but in addition it appears in other areas of the file. I am using grep and it is deleting all occurrences of the next line in the file, as expected. Is there a way I can remove that next line if and only if it is after the line with the pattern match?
You can use sed with the N and d commands and a {} block : sed -e '/pattern here/ { N; d; }' For every line that matches pattern here , the code in the {} gets executed. N takes the next line into the pattern space as well, and then d deletes the whole thing before moving on to the next line. This works in any POSIX-compatible sed .
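A minimal usage sketch (the file name and pattern are placeholders): sed '/ERROR/{N;d;}' input.log > cleaned.log writes a copy of the file with every line containing ERROR removed along with the line immediately after it; once the output looks right, in-place editing is available as a non-POSIX extension ( -i in GNU sed, -i '' in BSD sed).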
{ "source": [ "https://unix.stackexchange.com/questions/152312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82116/" ] }
152,331
I have a Dell XPS 13 ultrabook which has a wifi nic, but no physical ethernet nic (wlan0, but no eth0). I need to create a virtual adapter for using Vagrant with NFS, but am finding that the typical ifup eth0:1... fails with ignoring unknown interface eth0:1=eth0:1 . I also tried creating a virtual interface against wlan0 , but received the same result. How can I create a virtual interface on this machine with no physical interface?
Setting up a dummy interface If you want to create network interfaces, but lack a physical NIC to back them, you can use the dummy link type. You can read more about them here: iproute2 Wikipedia page . Creating eth10 To make this interface you'd first need to make sure that you have the dummy kernel module loaded. You can do this like so: $ sudo lsmod | grep dummy $ sudo modprobe dummy $ sudo lsmod | grep dummy dummy 12960 0 With the driver now loaded you can create whatever dummy network interfaces you like: $ sudo ip link add eth10 type dummy NOTE: In older versions of ip you'd do the above like this; it appears to have changed along the way. Keeping this here for reference purposes, but based on feedback via comments, the above works now. $ sudo ip link set name eth10 dev dummy0 And confirm it: $ ip link show eth10 6: eth10: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether c6:ad:af:42:80:45 brd ff:ff:ff:ff:ff:ff Changing the MAC You can then change the MAC address if you like: $ sudo ifconfig eth10 hw ether 00:22:22:ff:ff:ff $ ip link show eth10 6: eth10: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:22:22:ff:ff:ff brd ff:ff:ff:ff:ff:ff Creating an alias You can then create aliases on top of eth10. $ sudo ip addr add 192.168.100.199/24 brd + dev eth10 label eth10:0 And confirm them like so: $ ifconfig -a eth10 eth10: flags=130<BROADCAST,NOARP> mtu 1500 ether 00:22:22:ff:ff:ff txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 $ ifconfig -a eth10:0 eth10:0: flags=130<BROADCAST,NOARP> mtu 1500 inet 192.168.100.199 netmask 255.255.255.0 broadcast 192.168.100.255 ether 00:22:22:ff:ff:ff txqueuelen 0 (Ethernet) Or using ip : $ ip a | grep -w inet inet 127.0.0.1/8 scope host lo inet 192.168.1.20/24 brd 192.168.1.255 scope global wlp3s0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 inet 192.168.100.199/24 brd 192.168.100.255 scope global eth10:0 Removing all this? If you want to unwind all this you can run these commands to do so: $ sudo ip addr del 192.168.100.199/24 brd + dev eth10 label eth10:0 $ sudo ip link delete eth10 type dummy $ sudo rmmod dummy References MiniTip: Setting IP Aliases under Fedora Linux Networking: Dummy Interfaces and Virtual Bridges ip-link man page iproute2 HOWTO iproute2 cheatsheet
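One follow-up step worth noting, as a sketch building on the eth10 interface above: it is created in state DOWN, so bring it up before expecting anything to use the alias:
$ sudo ip link set eth10 up
$ ip -br addr show eth10
(the -br brief output needs a reasonably recent iproute2; plain ip addr show eth10 works everywhere).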
{ "source": [ "https://unix.stackexchange.com/questions/152331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26343/" ] }
152,346
I was wondering: when installing something, there's an easy way of double clicking an install executable file, and on the other hand, there is a way of building it from source. The latter one, downloading a source bundle, is really cumbersome. But what is the fundamental difference between these two methods?
All software starts out as source code, and source code is distributed as source packages . So all source packages need to be built first, to run on your system. Binary packages are ones that have already been built from source by someone, with general features and parameters provided in the software so that a large number of users can install and use it. Binary packages are easy to install , but may not have all the options from the upstream package. So for installing from source, you need to build the source code yourself. That means you need to take care of the dependencies yourself. Also you need to be aware of all the features of the package so that you can build it accordingly. Advantages of installing from source: You can install the latest version and can always stay updated, whether it be a security patch or a new feature. Allows you to trim down the features while installing so as to suit your needs. Similarly you can add some features which may not be provided in the binary. Install it in a location you wish. In the case of some software you may provide your hardware-specific info for a suitable installation. In short, installing from source gives you heavy customization options, but it takes a lot of effort, while installation from a binary is easier but you may not be able to customize it as you wish. Update : Adding the argument related to security from the comments below. Yes, it is true that when installing from a binary you cannot inspect or verify the source code yourself. But then it depends on where you have got the binary from. There are lots of trusted sources from where you can get the binary of any new project; the only negative is the time . It may take some time for the binary of an update or even a new project to appear in your trusted repositories. And above all things, about software security, I'd like to highlight this hilarious page at bell-labs provided by Joe in the comments below.
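As a generic illustration of the customization point (the exact steps and flag names vary per project and are only placeholders here; some projects use cmake or meson instead of autotools):
$ ./configure --prefix=$HOME/.local --disable-gui
$ make
$ make install
The configure step is where you choose features and the install location; a binary package has already made those choices for you.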
{ "source": [ "https://unix.stackexchange.com/questions/152346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72825/" ] }
152,435
I have a laptop with a built-in screen and an attached monitor. When I start a Google's video Hangout and share my desktop, I would like to be able to share only the attached screen, but I don't know how. Right now I have two monitors: LVDS1 corresponds to my laptop's screen, which is configured as the secondary screen and DP1 which is my primary screen. But the problem still remains if I change my laptop's screen to be the primary screen. $ xrandr Screen 0: minimum 320 x 200, current 3286 x 1468, maximum 8192 x 8192 LVDS1 connected 1366x768+1920+700 (normal left inverted right x axis y axis) 344mm x 194mm 1366x768 60.06*+ 1024x768 60.00 800x600 60.32 56.25 640x480 59.94 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 disconnected (normal left inverted right x axis y axis) DP1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 475mm x 267mm 1920x1080 60.00*+ 1280x1024 75.02 60.02 1152x864 75.00 1024x768 75.08 60.00 800x600 75.00 60.32 640x480 75.00 60.00 720x400 70.08 Whenever I start sharing my desktop in the Hangout, only the built-in (smaller) screen is shared. Best thing would be to be able to chose which one to share, but if not, how could I share only the attached (bigger) screen? I bet Google's Hangout is looking for a configuration file to choose which screen to share, but don't know which file it is. NOTE Using Fedora 20, x86_64, Linux 3.15.10-200, GNOME Shell 3.10.4-8, Firefox 31. NOTE 2 Using Google Chrome makes Google Hangouts share both screens at the same time instead of only the laptop's screen, which I think is even worse. Still trying to find out how could I choose which screen to share.
Problem It turns out there's already an open issue in the Chromium tracker about this annoying inconvenience. Existing options offered by Hangouts have major drawbacks: Share Entire Screen: If you have multiple screens (I have three) and share "Entire Screen", other people in the hangout won't be able to see anything. Share Application: If you only share a specific application, then: You will have to manually switch to other apps while streaming by going back to hangouts and switching Screen Share on/off. In some applications, extra windows (such as dialogs for preferences, menus, popups, etc.) won't be captured as part of the app you're sharing. And most of the time it's these dialogs you want to focus on. Solution / workaround A very good workaround is at Comment 18 of this same discussion, so all the credits should go to the comment's author. I'll summarize the process here, which allows you to Share a Part/Area of your multi-monitor screen in Google Hangouts running in a Linux Machine . Open VLC in "Screen Capture" mode and tell it which part of your X11 screen you want it to capture, using the appropriate Screen Module command-line parameters . You can either do this through GUI configuration OR using the command line: vlc \ --no-video-deco \ --no-embedded-video \ --screen-fps=20 \ --screen-top=32 \ --screen-left=0 \ --screen-width=1920 \ --screen-height=1000 \ screen:// If VLC complains about not being able to open screen:// , please make sure you have the correct module installed. For me, on Ubuntu 19.10, I had to install an additional package, vlc-plugin-access-extra , by invoking apt install vlc-plugin-access-extra . Go back to Google Hangouts and share the newly opened VLC window, which now acts as your "portal" to the interesting part of your screen. Important notes Move the VLC window away from the part of the screen you are capturing to avoid inception effects . Do NOT resize OR minimize the VLC window because it will affect the resolution of your screen share. If you want to get it out of your way while streaming to hangouts, just move it off-screen WITHOUT resizing it, or just pretend it's not there. The mouse pointer is not captured by VLC in Linux. The author of the workaround suggests a solution for this as well: ExtraMaus , a simple C program which creates a "clone" of your mouse that is visible to VLC. [TL;DR] Explaining the values I chose in the example The screen:// parameter indicates we want to enable the Screen Capture module. You'll always use this parameter as is. The flags --no-video-deco and --no-embedded-video hide the window menu and video control toolbar respectively. You don't want to share these through Hangouts, so I suggest you always include these parameters. The --screen-fps=20 does not have to be 20. You can make it 30 or 10, since performance is primarily affected by how Chrome encodes the video stream. The area of the screen you want to capture follows the standard convention [ --screen-top , --screen-left , --screen-width , --screen-height ]. Supposing I had two monitors, each 1920x1080, giving a total 3840x1080 "virtual" screen when placed one next to the other, I could give the following coordinates: [ 0, 0, 1920, 1080] for my entire left screen [ 0, 1920, 1920, 1080] for my entire right screen [32, 0, 1920, 1000] for a part of my left screen which spans across its full width but trims 32 pixels from its top (where I usually have a window's title bar) and 1080-1000-32 = 48 pixels from its bottom (where I have my KDE taskbar).
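A small helper for filling in the --screen-* values: xrandr already prints each connected monitor's geometry as widthxheight+xoffset+yoffset (as in the question's output), so $ xrandr | grep ' connected' gives exactly the numbers to plug into --screen-left / --screen-top / --screen-width / --screen-height .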
{ "source": [ "https://unix.stackexchange.com/questions/152435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66295/" ] }
152,553
Given this piece of bash: PARMS='-rvu' PARMS+=" --delete --exclude='.git'" echo $PARMS rsync ${PARMS} . ${TARGET} The echo shows the PARMS string as expected, no error is displayed, but rsync silently acts as if the options added by the += did not exist. However, this works as expected: PARMS='-rvu' rsync ${PARMS} --delete --exclude='.git' . ${TARGET} I guess I screwed something up with bash quotes (always had problems with those), but not exactly sure what and why are the options ignored even though the string seems to have been built correctly.
There is a difference between: PARMS+="... --exclude='.git'" and ... --exclude='.git' In the first, the single quotes are inside quotes themselves, so they are literally present in the substituted text given to rsync as arguments. rsync gets an argument whose value is --exclude='.git' . In the second, the single quotes are interpreted by the shell at the time they're written, because they aren't inside quotes themselves, and rsync gets to see --exclude=.git . In this case, you don't need the single quotes at all — .git is a perfectly valid shell word on its own, with no special characters, so you can use it literally in the command. Better for this kind of thing, though, is an array : PARMS=(-rvu) PARMS+=(--delete --exclude='.git') rsync "${PARMS[@]}" This builds up your command as separate words, with whatever quoting you want interpreted at the time you write the array line. "${PARMS[@]}" expands to each entry in the array as a separate argument, even if the argument itself has special characters or spaces in it, so rsync sees what you wrote as you meant it.
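Putting the array approach back into the original command, as a sketch ( TARGET stands for whatever destination variable the script already uses):
PARMS=(-rvu --delete --exclude='.git')
rsync "${PARMS[@]}" . "$TARGET"
Quoting $TARGET the same way keeps destinations containing spaces intact.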
{ "source": [ "https://unix.stackexchange.com/questions/152553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82253/" ] }
152,738
I have tried tmux -c "shell command" split-window but it does not seem to work. Using tmux split-window , one can split a new window. UPDATE : Using tmux split-window 'exec ping g.cn' can run the ping command, but when it stops the new window will be closed.
Use: tmux split-window "shell command" The split-window command has the following syntax: split-window [-dhvP] [-c start-directory] [-l size | -p percentage] [-t target-pane] [shell-command] [-F format] (from man tmux , section "Windows and Panes"). Note that the order is important - the command has to come after any of those preceding options that appear, and it has to be a single argument, so you need to quote it if it has spaces. For commands like ping -c that terminate quickly, you can set the remain-on-exit option first: tmux set-option remain-on-exit on tmux split-window 'ping -c 3 127.0.0.1' The pane will remain open after ping finishes, but be marked "dead" until you close it manually. If you don't want to change the overall options, there is another approach. The command is run with sh -c , and you can exploit that to make the window stay alive at the end: tmux split-window 'ping -c 3 127.0.0.1 ; read' Here you use the shell read command to wait for a user-input newline after the main command has finished. In this case, the command output will remain until you press Enter in the pane, and then it will automatically close.
{ "source": [ "https://unix.stackexchange.com/questions/152738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46826/" ] }
153,011
I need to download a large file (1GB). I also have access to multiple computers running Linux, but each is limited to a 50kB/s download speed by an admin policy. How do I distribute downloading this file on several computers and merge them after all segments have been downloaded, so that I can receive it faster?
The common protocols HTTP, FTP and SFTP support range requests , so you can request part of a file. Note that this also requires server support, so it might or might not work in practice. You can use curl and the -r or --range option to specify the range, and eventually just cat the files together. Example: curl -r 0-104857600 -o distro1.iso 'http://files.cdn/distro.iso' curl -r 104857601-209715200 -o distro2.iso 'http://files.cdn/distro.iso' […] And eventually, when you have gathered the individual parts, you concatenate them: cat distro* > distro.iso You can get further information about the file, including its size, with the --head option: curl --head 'http://files.cdn/distro.iso' You can retrieve the last chunk with an open range: curl -r 604887601- -o distro9.iso 'http://files.cdn/distro.iso' Read the curl man page for more options and explanations. You can further leverage ssh and tmux to ease running and keeping track of the downloads on multiple servers.
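For more than a couple of chunks, the ranges can be computed in a loop. A rough sketch, assuming the server reports Content-Length and honours ranges (the URL and part count are placeholders):
url='http://files.cdn/distro.iso'
size=$(curl -sI "$url" | awk 'tolower($1)=="content-length:" {print $2+0}')
n=4; chunk=$(( (size + n - 1) / n ))
for i in $(seq 0 $((n-1))); do
  start=$(( i * chunk )); end=$(( start + chunk - 1 ))
  [ "$end" -ge "$size" ] && end=$(( size - 1 ))
  curl -r "$start-$end" -o "part$i" "$url" &
done
wait
cat part* > distro.iso
With more than ten parts the final cat would need zero-padded names to keep the glob in order.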
{ "source": [ "https://unix.stackexchange.com/questions/153011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
153,091
I have a file that only uses \n for new lines, but I need it to have \r\n for each new line. How can I do this? For example, I solved it in Vim using :%s/\n/\r\n/g , but I would like to use a script or command-line application. Any suggestions? I tried looking this up with sed or grep , but I got immediately confused by the escape sequence workarounds (I am a bit green with these commands). If interested, the application is related to my question/answer here
You can use unix2dos (which is found on Debian): unix2dos file Note that this implementation won't insert a CR before every LF , only before those LF s that are not already preceded by one (and only one) CR and will skip binary files (those that contain byte values in the 0x0 -> 0x1f range other than LF , FF , TAB or CR ). or use sed : CR=$(printf '\r') sed "s/\$/$CR/" file or use awk : awk '{printf "%s\r\n", $0}' file or: awk -v ORS='\r\n' 1 file or use perl : perl -pe 's|\n|\r\n|' file
{ "source": [ "https://unix.stackexchange.com/questions/153091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59802/" ] }
153,352
I've seen discussion before about ELF magic, most recently the comments in this Security stack exchange question . I've seen it mentioned before, and I've seen it in my own boot logs.. But I'm not sure what it is. The man page on elf is a bit over my head, as I don't do C or lower level languages. As someone who uses Linux as an every day operating system, what is ELF?
Right from the man page you reference: elf - format of Executable and Linking Format (ELF) files ELF defines the binary format of executable files used by Linux. When you invoke an executable, the OS must know how to load the executable into memory properly, how to resolve dynamic library dependencies and then where to jump into the loaded executable to start executing it. The ELF format provides this information. ELF magic is used to identify ELF files and is merely the very first few bytes of a file: % od -c -N 16 /bin/ls 0000000 177 E L F 002 001 001 \0 \0 \0 \0 \0 \0 \0 \0 \0 0000020 or % readelf -h /bin/ls | grep Magic Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 These 16 bytes unambiguously identify a file as an ELF executable. Many file formats have "magic" bytes that accomplish the same task -- identifying a type of file.
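The file utility relies on those same magic bytes, which gives an everyday way to check a binary: $ file /bin/ls prints something along the lines of /bin/ls: ELF 64-bit LSB executable, x86-64, dynamically linked ... , i.e. it recognises the 7f 45 4c 46 signature shown above (exact wording varies by version).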
{ "source": [ "https://unix.stackexchange.com/questions/153352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73386/" ] }
153,508
I know cat can do this, but its main purpose is to concatenate rather than just to display the content. I also know about less and more , but I'm looking for something simple ( not a pager ) that just outputs the content of a file to the terminal and it's made specifically for this, if there is such thing.
The most obvious one is cat . But also have a look at head and tail . There are other shell utilities to print a file line by line: sed , awk , grep . But those are meant to manipulate the file content or to search inside the file. I made a few tests to estimate which is the most effective one. I ran them all through strace to see which made the fewest system calls. My file has 1275 lines. awk : 1355 system calls cat : 51 system calls grep : 1337 system calls head : 93 system calls tail : 130 system calls sed : 1378 system calls As you can see, even though cat was designed to concatenate files, it is the fastest and most effective one. sed , awk and grep printed the file line by line, which is why they have more than 1275 system calls.
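To reproduce this kind of comparison, strace's summary mode is handy; a sketch: % strace -c cat file.txt > /dev/null prints (to stderr) a table of each system call and how many times it was made; swap cat for head , sed , awk and so on to compare. Exact counts will differ between systems and utility versions.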
{ "source": [ "https://unix.stackexchange.com/questions/153508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80615/" ] }
153,539
Out of the two options to change permissions: chmod 777 file.txt chmod a+rwx file.txt I am writing a document that details that users need to change the file permissions of a certain file. I want to detail it as the most common way of changing file permissions. Currently it says: - Set permissions on file.txt as per the example below: - chmod 777 /tmp/file.txt This is just an example, and won't change files to have full permissions for everyone.
Google gives: 1,030,000 results for ' chmod 777 ' 371,000 results for ' chmod a+rwx ' chmod 777 is about 3 times more popular. That said, I prefer using long options in documentation and scripts, because they are self-documenting. If you are following up your instructions with "Run ls -l | grep file.txt and verify permissions", you may want to use chmod a+rwx because that's how ls will display the permissions.
{ "source": [ "https://unix.stackexchange.com/questions/153539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20752/" ] }
153,660
I tried 'man echo' in Bash and it told me that 'echo --help' will display help then exit, and similarly, that 'echo --version' will output version and exit. But why doesn't it work? 'echo --help' simply prints '--help' literally.
man echo relates to the echo program . GNU echo supports a --help option, as do some others. When you run echo in Bash you instead get its builtin echo which doesn't. To access the echo program, rather than the builtin, you can either give a path to it: /bin/echo --help or use Bash's enable command to disable the built-in version: $ enable -n echo $ echo --help Bash has built-in versions of a lot of basic commands, because it's a little faster to do that, but you can always bypass them like this when you need to.
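Two related checks, for illustration: $ type -a echo lists the builtin first and then the echo binary found on your PATH, and $ env echo --help runs the external binary for a single invocation without disabling the builtin at all.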
{ "source": [ "https://unix.stackexchange.com/questions/153660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82973/" ] }
153,665
I was checking unshare command and according to it's man page, unshare - run program with some namespaces unshared from parent I also see there is a type of namespace listed as, mount namespace mounting and unmounting filesystems will not affect rest of the system. What exactly is the purpose of this mount namespace ? I am trying to understand this concept with the help of some example.
Running unshare -m gives the calling process a private copy of its mount namespace, and also unshares file system attributes so that it no longer shares its root directory, current directory, or umask attributes with any other process. So what does the above paragraph say? Let us try and understand using a simple example. Terminal 1: I do the below commands in the first terminal. #Creating a new process unshare -m /bin/bash #creating a new mount point secret_dir=`mktemp -d --tmpdir=/tmp` #creating a new mount point for the above created directory. mount -n -o size=1m -t tmpfs tmpfs $secret_dir #checking the available mount points. grep /tmp /proc/mounts The last command gives me the output as, tmpfs /tmp/tmp.7KtrAsd9lx tmpfs rw,relatime,size=1024k 0 0 Now, I did the following commands as well. cd /tmp/tmp.7KtrAsd9lx touch hello touch helloagain ls -lFa The output of the ls command is, ls -lFa total 4 drwxrwxrwt 2 root root 80 Sep 3 22:23 ./ drwxrwxrwt. 16 root root 4096 Sep 3 22:22 ../ -rw-r--r-- 1 root root 0 Sep 3 22:23 hello -rw-r--r-- 1 root root 0 Sep 3 22:23 helloagain So what is the big deal in doing all this? Why should I do it? I open another terminal now ( terminal 2 ) and do the below commands. cd /tmp/tmp.7KtrAsd9lx ls -lFa The output is as below. ls -lFa total 8 drwx------ 2 root root 4096 Sep 3 22:22 ./ drwxrwxrwt. 16 root root 4096 Sep 3 22:22 ../ The files hello and helloagain are not visible and I even logged in as root to check these files. So the advantage is, this feature makes it possible for us to create a private temporary filesystem that even other root-owned processes cannot see or browse through. From the man page of unshare , mount namespace Mounting and unmounting filesystems will not affect the rest of the system (CLONE_NEWNS flag), except for filesystems which are explicitly marked as shared (with mount --make-shared; see /proc/self/mountinfo for the shared flags). It's recommended to use mount --make-rprivate or mount --make-rslave after unshare --mount to make sure that mountpoints in the new namespace are really unshared from the parental namespace. The memory being utilized for the namespace is VFS which is from kernel. And - if we set it up right in the first place - we can create entire virtual environments in which we are the root user without root permissions. References: The example is framed using the details from this blog post . Also, the quotes of this answer are from this wonderful explanation from Mike . Another wonderful read regarding this can be found from the answer from here .
{ "source": [ "https://unix.stackexchange.com/questions/153665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
153,693
I'm trying to change the cpu frequency on my laptop (running Linux), and not having any success. Here are some details: # uname -a Linux yoga 3.12.21-gentoo-r1 #4 SMP Thu Jul 10 17:32:31 HKT 2014 x86_64 Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz GenuineIntel GNU/Linux # cpufreq-info cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: intel_pstate CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 0.97 ms. hardware limits: 800 MHz - 2.60 GHz available cpufreq governors: performance, powersave current policy: frequency should be within 800 MHz and 2.60 GHz. The governor "powersave" may decide which speed to use within this range. current CPU frequency is 2.42 GHz (asserted by call to hardware). (similar information for cpus 1, 2 and 3) # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors performance powersave I initially had the userspace governor built into the kernel, but then I also tried building it as a module (with the same results); it was loaded while running the above commands (and I couldn't find any system messages when loading it): # lsmod Module Size Used by cpufreq_userspace 1525 0 (some other modules) And here are the commands I tried for changing the frequency: # cpufreq-set -f 800MHz Error setting new values. Common errors: - Do you have proper administration rights? (super-user?) - Is the governor you requested available and modprobed? - Trying to set an invalid policy? - Trying to set a specific frequency, but userspace governor is not available, for example because of hardware which cannot be set to a specific frequency or because the userspace governor isn't loaded? # cpufreq-set -g userspace Error setting new values. Common errors: - Do you have proper administration rights? (super-user?) - Is the governor you requested available and modprobed? - Trying to set an invalid policy? - Trying to set a specific frequency, but userspace governor is not available, for example because of hardware which cannot be set to a specific frequency or because the userspace governor isn't loaded? Any ideas?
This is because your system is using the new driver called intel_pstate . There are only two governors available when using this driver: powersave and performance . The userspace governor is only available with the older acpi-cpufreq driver (which will be automatically used if you disable intel_pstate at boot time; you then set the governor/frequency with cpupower ): disable the current driver: add intel_pstate=disable to your kernel boot line boot, then load the userspace module: modprobe cpufreq_userspace set the governor: cpupower frequency-set --governor userspace set the frequency: cpupower --cpu all frequency-set --freq 800MHz
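To confirm which driver is actually in use before and after the change, a quick check: $ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver should read intel_pstate before adding the boot parameter and the fallback driver (typically acpi-cpufreq on this kind of hardware) afterwards.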
{ "source": [ "https://unix.stackexchange.com/questions/153693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34089/" ] }
153,763
I am trying to instruct GNU Make 3.81 to not stop if a command fails (so I prefix the command with - ) but I also want to check the exit status on the next command and print a more informative message. However my Makefile below fails: $ cat Makefile all: -/bin/false ([ $$? -eq 0 ] && echo "success!") || echo "failure!" $ $ make /bin/false make: [all] Error 1 (ignored) ([ $? -eq 0 ] && echo "success!") || echo "failure!" success! Why does the Makefile above echo "success!" instead of "failure!" ? update: Following and expanding on the accepted answer, below is how it should be written: failure: @-/bin/false && ([ $$? -eq 0 ] && echo "success!") || echo "failure!" success: @-/bin/true && ([ $$? -eq 0 ] && echo "success!") || echo "failure!"
Each update command in a Makefile rule is executed in a separate shell. So $? does not contain the exit status of the previous failed command, it contains whatever the default value is for $? in a new shell. That's why your [ $? -eq 0 ] test always succeeds.
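The same behaviour can be demonstrated outside make with a quick sketch:
$ false
$ sh -c 'echo $?'
The second command prints 0: the newly started shell has its own $? , untouched by the failed command, which is exactly the situation each recipe line finds itself in.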
{ "source": [ "https://unix.stackexchange.com/questions/153763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24044/" ] }
153,785
Under consistent network device naming scheme, what does 'eno' stand for in network interface name eno16777736 for CentOS 7 or RHEL 7?
This is Predictable Network Interface Device Names in action. en is for Ethernet o is for on-board The number is a firmware/BIOS provided index. More details in the source of udev-builtin-net_id.c
{ "source": [ "https://unix.stackexchange.com/questions/153785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83074/" ] }
153,808
I often leave tasks running within screen so that I can check on them later, but I sometimes need to check which command is running within the screen. It's usually a php script of sorts, such as screen -d -m nice -n -10 php -f convertThread.php start=10000 I don't record which screens are running which command, but I'd like to be able to get a feel for the progress made by checking what command was run within it, without killing the process. I don't see any option for this within the screen help pages.
I recently had to do this. On Stack Overflow I answered how to find the PID of the process running in screen . Once you have the PID you can use ps to get the command. Here is the contents of that answer with some additional content to address your situation: You can get the PID of the screen sessions here like so: $ screen -ls There are screens on: 1934.foo_Server (01/25/15 15:26:01) (Detached) 1876.foo_Webserver (01/25/15 15:25:37) (Detached) 1814.foo_Monitor (01/25/15 15:25:13) (Detached) 3 Sockets in /var/run/screen/S-ubuntu. Let us suppose that you want the PID of the program running in Bash in the foo_Monitor screen session. Use the PID of the foo_Monitor screen session to get the PID of the bash session running in it by searching PPIDs (Parent PID) for the known PID: $ ps -el | grep 1814 | grep bash F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 1000 1815 1814 0 80 0 - 5520 wait pts/1 00:00:00 bash Now get just the PID of the bash session: $ ps -el | grep 1814 | grep bash | awk '{print $4}' 1815 Now we want the process with that PID. Just nest the commands, and this time use the -v flag on grep bash to get the process that is not bash: $ echo $(ps -el | grep $(ps -el | grep 1814 | grep bash | awk '{print $4}') | grep -v bash | awk '{print $4}') 23869 We can use that PID to find the command (Look at the end of the second line): $ ps u -p 23869 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND dotanco+ 18345 12.1 20.1 5258484 3307860 ? Sl Feb02 1147:09 /usr/lib/foo Put it all together: $ ps u -p $(ps -el | grep $(ps -el | grep SCREEN_SESSION_PID | grep bash | awk '{print $4}') | grep -v bash | awk '{print $4}')
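If pstree is installed, the same nesting can be seen in one shot; for example $ pstree -p 1814 (using the session PID from screen -ls ) prints the SCREEN process, the bash under it, and whatever that bash is currently running, each with its PID.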
{ "source": [ "https://unix.stackexchange.com/questions/153808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33437/" ] }
153,862
I have a directory containing a large number of files. I want to delete all files except for file.txt . How do I do this? There are too many files to remove the unwanted ones individually and their names are too diverse to use * to remove them all except this one file. Someone suggested using rm !(file.txt) But it doesn't work. It returns: Badly placed ()'s My OS is Scientific Linux 6. Any ideas?
POSIXly: find . ! -name 'file.txt' -type f -exec rm -f {} + will remove all regular files (recursively, including hidden ones) except file.txt . To remove directories, change -type f to -type d and add -r option to rm . In bash , to use rm -- !(file.txt) , you must enable extglob : $ shopt -s extglob $ rm -- !(file.txt) (or calling bash -O extglob ) Note that extglob only works in bash and Korn shell family. And using rm -- !(file.txt) can cause an Argument list too long error. In zsh , you can use ^ to negate pattern with extendedglob enabled: $ setopt extendedglob $ rm -- ^file.txt or using the same syntax with ksh and bash with options ksh_glob and no_bare_glob_qual enabled.
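If the files all sit directly in one directory (as in the question) and the recursive descent is unwanted, most find implementations (GNU, BSD) accept -maxdepth : find . -maxdepth 1 ! -name 'file.txt' -type f -exec rm -f {} +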
{ "source": [ "https://unix.stackexchange.com/questions/153862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78535/" ] }
153,980
I have some problems to install nginx pkg (nginx-full) on debian jessie # apt-get install nginx-full Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: nginx-common Suggested packages: fcgiwrap nginx-doc The following NEW packages will be installed: nginx-common nginx-full 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 510 kB of archives. After this operation, 1.271 kB of additional disk space will be used. Do you want to continue? [Y/n] Get:1 http://debian.c3sl.ufpr.br/debian/ jessie/main nginx-common all 1.6.1-1 [83,6 kB] Get:2 http://debian.c3sl.ufpr.br/debian/ jessie/main nginx-full amd64 1.6.1-1+b1 [427 kB] Fetched 510 kB in 1s (266 kB/s) Selecting previously unselected package nginx-common. (Reading database ... 170540 files and directories currently installed.) Preparing to unpack .../nginx-common_1.6.1-1_all.deb ... Unpacking nginx-common (1.6.1-1) ... Selecting previously unselected package nginx-full. Preparing to unpack .../nginx-full_1.6.1-1+b1_amd64.deb ... Unpacking nginx-full (1.6.1-1+b1) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up nginx-common (1.6.1-1) ... Setting up nginx-full (1.6.1-1+b1) ... Job for nginx.service failed. See 'systemctl status nginx.service' and 'journalctl -xn' for details. invoke-rc.d: initscript nginx, action "start" failed. dpkg: error processing package nginx-full (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: nginx-full E: Sub-process /usr/bin/dpkg returned an error code (1) # systemctl status nginx.service nginx.service - A high performance web server and a reverse proxy server Loaded: loaded (/lib/systemd/system/nginx.service; enabled) Active: failed (Result: exit-code) since Sex 2014-09-05 11:39:46 BRT; 1s ago Process: 2972 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE) #journalctl -xn No journal files were found. Someone know how to fix it?
A similar issue was reported on Debian bug #754407 . In the end it was just the port 80 being taken by other process (Apache2). Might this be your case as well?
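A quick way to confirm a port conflict before digging further, as a sketch (tool availability varies by system): # ss -ltnp '( sport = :80 )' or, where only the older tool exists, # netstat -ltnp | grep ':80 ' . If apache2 or anything else shows up as the listener, stop or reconfigure it and re-run the nginx installation.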
{ "source": [ "https://unix.stackexchange.com/questions/153980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83195/" ] }
154,254
Not sure how else to phrase the question, but basically, I often find myself running a command like vagrant to bring up the VM, and then ssh into it like below: vagrant up && vagrant ssh Short of writing my own function or script, is there a way to "reuse" the vagrant portion in the second part of the command?
With (t)csh , bash or zsh history expansion you could write: vagrant up && !#:0 ssh But, seriously, you wouldn't
{ "source": [ "https://unix.stackexchange.com/questions/154254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47355/" ] }
154,395
I cannot copy a file over scp when the remote machine .bashrc file includes source command to update paths and some variables. Why does it happen?
You should make part or all of your .bashrc not run when your shell is non-interactive. (An scp is an example of a non-interactive shell invocation, unless someone has radically altered your systems.) Then put all commands that can possibly generate output in that section of the file. A standard way to do this in an init file is: # Put all the commands here that should run regardless of whether # this is an interactive or non-interactive shell. # Example command: umask 0027 # test if the prompt var is not set and also to prevent failures # when `$PS1` is unset and `set -u` is used if [ -z "${PS1:-}" ]; then # prompt var is not set, so this is *not* an interactive shell return fi # If we reach this line of code, then the prompt var is set, so # this is an interactive shell. # Put all the commands here that should run only if this is an # interactive shell. # Example command: echo "Welcome, ${USER}. This is the ~/.bashrc file." You might also see people use [ -z "${PS1:-}" ] && return instead of my more verbose if statement. If you don't want to rearrange your whole file, you can also just make certain lines run only in interactive context by wrapping them like so: if [ -n "${PS1:-}" ]; then echo "This line only runs in interactive mode." fi If you segregate your .bashrc this way, then your scp commands should no longer have this problem.
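Another widely used guard, equivalent in spirit (it tests the shell's option flags rather than PS1 ): put case $- in *i*) ;; *) return ;; esac near the top of ~/.bashrc ; the i flag is only present in interactive shells, so scp and similar non-interactive invocations hit the return immediately.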
{ "source": [ "https://unix.stackexchange.com/questions/154395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68913/" ] }
154,427
I just wrote a bash script and always getting this EOF-Error. So here is my script (only works on OS X): #!/bin/bash #DEFINITIONS BEGIN en_sq() { echo -e "Enabling smart quotes..." defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool true status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool) if [ "$status" = "1" ] then echo -e "Success! Smart quotes are now enabled." SUCCESS="TRUE" else echo -e "Sorry, an error occured. Try again." fi } di_sq() { echo -e "Disabling smart quotes..." defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool false status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool) if [ "$status" = "0" ] then echo -e "Success! Smart quotes are now disabled." SUCCESS="TRUE" else echo -e "Sorry, an error occured. Try again." fi } en_sd() { echo -e "Enabling smart dashes..." defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool true status=$(defaults read NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool) if [ "$status" = "1" ] then echo -e "Success! Smart dashes are now enabled." SUCCESS="TRUE" else echo -e "Sorry, an error occured. Try again." fi } di_sd() { echo -e "Enabling smart dashes..." defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool false status=$(defaults read NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool) if [ "$status" = "0" ] then echo -e "Success! Smart dashes are now disabled." SUCCESS="TRUE" else echo -e "Sorry, an error occured. Try again." fi } #DEFINITIONS END #--------------- #BEGIN OF CODE with properties #This is only terminated if the user entered properties (eg ./sqd.sh 1 1) if [ "$1" = "1" ] then en_sq elif [ "$1" = "0" ] then di_sq fi if [ "$2" = "1" ] then en_sd #exit 0 if both, $1 and $2 are correct entered and processed. exit 0 elif [ "$1" = "0" ] then di_sd #exit 0 if both, $1 and $2 are correct entered and processed. exit 0 fi #END OF CODE with properties #--------------------------- #BEGIN OF CODE without properties #This is terminated if the user didn't enter two properties echo -e "\n\n\n\n\nINFO: You can use this command as following: $0 x y, while x and y can be either 0 for false or 1 for true." echo -e "x is for the smart quotes, y for the smart dashes." sleep 1 echo -e " \n Reading preferences...\n" status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool) if [ "$status" = "1" ] then echo -e "Smart quotes are enabled." elif [ "$status" = "0" ] then echo -e "Smart quotes are disabled." else echo -e "Sorry, an error occured. You have to run this on OS X"" fi status=$(defaults read NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool) if [ "$status" = "1" ] then echo -e "Smart dashes are enabled." elif [ "$status" = "0" ] then echo -e "Smart dashes are disabled." else echo -e "Sorry, an error occured. You have to run this on OS X!" fi sleep 3 echo -e "\n\n You can now enable or disable smart quotes." until [ "$SUCCESS" = "TRUE" ] do echo -e "Enter e for enable or d for disable:" read sq if [ "$sq" = "e" ] then en_sq elif [ "$sq" = "d" ] then di_sq else echo -e "\n\n ERROR! Please enter e for enable or d for disable!" fi done SUCCESS="FALSE" echo -e "\n\n You can now enable or disable smart dashes." until [ "$SUCCESS" = "TRUE" ] do echo -e "Enter e for enable or d for disable:" read sq if [ "$sd" = "e" ] then en_sd elif [ "$sd" = "d" ] then di_sd else echo -e "\n\n ERROR! Please enter e for enable or d for disable!" 
fi done And here is my error: ./coding.sh: line 144: unexpected EOF while looking for matching `"' ./coding.sh: line 147: syntax error: unexpected end of file
You can see your problem if you just look at your question. Note how the syntax highlighting is messed up after line 95: echo -e "Sorry, an error occurred. You have to run this on OS X"" As the error message tells you, you have an unmatched " . Just remove the extra " from the line above and you should be fine: echo -e "Sorry, an error occurred. You have to run this on OS X"
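A handy way to catch this class of mistake before running anything: bash -n coding.sh performs a parse-only check and reports the same unexpected-EOF error without executing the script.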
{ "source": [ "https://unix.stackexchange.com/questions/154427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83451/" ] }
154,579
I use below command to empty a file > file.txt It works perfectly fine! But there are some files with root user permissions. So I tried sudo > different-file.txt and I got below verbose usage: sudo [-D level] -h | -K | -k | -V usage: sudo -v [-AknS] [-D level] [-g groupname|#gid] [-p prompt] [-u user name|#uid] usage: sudo -l[l] [-AknS] [-D level] [-g groupname|#gid] [-p prompt] [-U user name] [-u user name|#uid] [-g groupname|#gid] [command] usage: sudo [-AbEHknPS] [-C fd] [-D level] [-g groupname|#gid] [-p prompt] [-u user name|#uid] [-g groupname|#gid] [VAR=value] [-i|-s] [<command>] usage: sudo -e [-AknS] [-C fd] [-D level] [-g groupname|#gid] [-p prompt] [-u user name|#uid] file ... This suggests that there is some syntax error in my command. What is wrong?
To empty a file as root, you can also use the truncate command: $ sudo truncate -s0 file.txt The -s0 sets the file's size to 0, effectively emptying it.
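Two other common patterns for the same situation, since the > redirection is performed by your own non-root shell before sudo ever runs (the path is a placeholder): sudo sh -c '> /path/to/file.txt' or : | sudo tee /path/to/file.txt > /dev/null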
{ "source": [ "https://unix.stackexchange.com/questions/154579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24449/" ] }
154,599
It often happens that I want to apply an operation recursively. Some commands, such as grep, use a lowercase r to indicate recursion. For example grep -r foo . Other commands seem to prefer an uppercase R: chmod -R 755 . I am constantly getting these the wrong way around and forgetting which is which. Is there any logic behind the selection of case for these arguments?
Most POSIX commands that have a recursive directory traversal option ( ls , chmod , chgrp , chown , cp , rm ) have -R for that. rm also has -r because that's what it was initially, long before POSIX. Now, the behaviour varies when symlinks are found in walking down the tree. POSIX tried to make things consistent by adding the -L / -H / -P options to give the user a chance to decide what to do with symlinks, leaving the default, when none is provided, unspecified. POSIX grep has no -r or -R . GNU grep initially had neither. -r was added in 1998. That was following symlinks. -R was added as a synonym in 2001 for consistency with the other utilities. That was still following symlinks. In 2012 (grep 2.12), -r was changed so it no longer followed symlinks, possibly because -L , -H were already used for something else. BSD greps were based on GNU grep for a long time. Some of them have rewritten their own and kept more or less compatibility with GNU grep . Apple OS/X addressed the symlink issue differently. -r and -R are the same and don't follow symlinks. There's a -S option however that acts like chmod / cp / find 's -L option to follow symlinks.
{ "source": [ "https://unix.stackexchange.com/questions/154599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83550/" ] }
154,605
How can I open a text file and let it update itself? Similar to the way top works. I want to open a log file and watch it update itself on the fly. I have just tried: $ tail error.log But just realised, that it just shows you the lines in the log file. I am using RHEL 5.10
You're looking for tail -f error.log (from man tail ): -f, --follow[={name|descriptor}] output appended data as the file grows; -f, --follow, and --follow=descriptor are equivalent That will let you watch a file and see any changes made to it.
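Two related variants worth knowing: tail -F error.log (capital F) keeps following even if the log is rotated and recreated under the same name, and less +F error.log gives a follow mode you can leave with Ctrl-C to scroll and search, then resume by pressing F.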
{ "source": [ "https://unix.stackexchange.com/questions/154605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20752/" ] }
154,747
I have 3 directories at current path. $ls a_0db_data a_clean_0db_data a_clean_data $ls a_*_data a_0db_data: a_clean_0db_data: a_clean_data: $ls a_[a-z]*_data a_clean_0db_data: a_clean_data: I expected last ls command to match only a_clean_data . Why did it also match the one containing 0 ? bash --version GNU bash, version 4.2.24(1)-release (i686-pc-linux-gnu)
The [a-z] part isn't what matches the number; it's the * . You may be confusing shell globbing and regular expressions . Tools like grep accept various flavours of regexes ( basic by default, -E for extended, -P for Perl regex ) E.g. ( -v inverts the match) $ ls a_[a-z]*_data | grep -v "[0-9]" a_clean_data If you want to use a bash regex, here is an example on how to test if the variable $ref is an integer: re='^[0-9]+$' if ! [[ $ref =~ $re ]] ; then echo "error" fi
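For completeness, a glob that does exclude the digits is possible with bash's extended globbing; a sketch tailored to this particular layout: shopt -s extglob then ls a_+([a-z_])_data . The +([a-z_]) matches one or more letters or underscores and nothing else, so a_clean_data matches while a_0db_data and a_clean_0db_data do not.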
{ "source": [ "https://unix.stackexchange.com/questions/154747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
154,818
I am searching for files which name which contain AAA within their path using following command: find path_A -name "*AAA*" Given the output showed by the above command, I want to move those files into another path, say path_B . Instead of moving those file one by one, can I optimize the command by moving those files right after the find command?
With GNU mv : find path_A -name '*AAA*' -exec mv -t path_B {} + That will use find's -exec option which replaces the {} with each find result in turn and runs the command you give it. As explained in man find : -exec command ; Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. In this case, we are using the + version of -exec so that we run as few mv operations as possible: -exec command {} + This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invoca‐ tions of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command. The command is executed in the starting directory.
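On systems whose mv lacks the GNU -t option (e.g. BSD or macOS), two equivalent sketches: find path_A -name '*AAA*' -exec mv {} path_B/ \; runs one mv per file, or, batching them: find path_A -name '*AAA*' -exec sh -c 'mv "$@" path_B/' sh {} +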
{ "source": [ "https://unix.stackexchange.com/questions/154818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27321/" ] }
154,955
I'm trying to change the value of $JAVA_HOME & I just can't seem to find in which file is it being set currently. I can't remember where did I set it the last time. Already tried How to determine where an environment variable came from? but I need(ed) a list of files where the variable can be set.
You didn't specify a shell. So, I will assume bash . The next issue is: did you set it for your user only or system-wide? If you set it for your user only, then run: grep JAVA_HOME ~/.bash_profile ~/.bash_login ~/.profile ~/.bashrc If you set it system-wide, then it may vary with distribution but try: grep JAVA_HOME /etc/environment /etc/bash.bashrc /etc/profile.d/* /etc/profile If the above give no answer, you can cast a wider net: grep -r JAVA_HOME /etc grep -r JAVA_HOME ~/ See also the suggestions in How to determine where an environment variable came from .
{ "source": [ "https://unix.stackexchange.com/questions/154955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62605/" ] }
155,017
A fork() system call clones a child process from the running process. The two processes are identical except for their PID. Naturally, if the processes are just reading from their heaps rather than writing to it, copying the heap would be a huge waste of memory. Is the entire process heap copied? Is it optimized in a way that only writing triggers a heap copy?
The entirety of fork() is implemented using mmap / copy on write. This not only affects the heap, but also shared libraries, stack, BSS areas. Which, incidentally, means that fork is an extremely lightweight operation, until the resulting 2 processes (parent and child) actually start writing to memory ranges. This feature is a major contributor to the lethality of fork-bombs - you end up with way too many processes before the kernel gets overloaded with page replication and differentiation. You'll be hard-pressed to find in a modern OS an example of an operation where the kernel performs a hard copy (device drivers being the exception) - it's just far, far easier and more efficient to employ VM functionality. Even execve() is essentially "please mmap the binary / ld.so / whatnot, followed by execute" - and the VM handles the actual loading of the process to RAM and execution. Uninitialized global and static variables end up being mmaped from a 'zero-page' - a special read-only copy-on-write page containing zeroes - while initialized globals end up being mmaped (copy-on-write, again) from the binary file itself, etc.
{ "source": [ "https://unix.stackexchange.com/questions/155017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1079/" ] }
155,046
I have a script which runs rsync with a Git working directory as destination. I want the script to have different behavior depending on if the working directory is clean (no changes to commit), or not. For instance, if the output of git status is as below, I want the script to exit: git status Already up-to-date. # On branch master nothing to commit (working directory clean) Everything up-to-date If the directory is not clean then I would like it to execute some more commands. How can I check for output like the above in a shell script?
Parsing the output of git status is a bad idea because the output is intended to be human readable, not machine-readable. There's no guarantee that the output will remain the same in future versions of Git or in differently configured environments. UVVs comment is on the right track, but unfortunately the return code of git status doesn't change when there are uncommitted changes. It does, however, provide the --porcelain option, which causes the output of git status --porcelain to be formatted in an easy-to-parse format for scripts, and will remain stable across Git versions and regardless of user configuration. We can use empty output of git status --porcelain as an indicator that there are no changes to be committed: if [ -z "$(git status --porcelain)" ]; then # Working directory clean else # Uncommitted changes fi If we do not care about untracked files in the working directory, we can use the --untracked-files=no option to disregard those: if [ -z "$(git status --untracked-files=no --porcelain)" ]; then # Working directory clean excluding untracked files else # Uncommitted changes in tracked files fi To make this more robust against conditions which actually cause git status to fail without output to stdout , we can refine the check to: if output=$(git status --porcelain) && [ -z "$output" ]; then # Working directory clean else # Uncommitted changes fi It's also worth noting that, although git status does not give meaningful exit code when the working directory is unclean, git diff provides the --exit-code option, which makes it behave similar to the diff utility, that is, exiting with status 1 when there were differences and 0 when none were found. Using this, we can check for unstaged changes with: git diff --exit-code and staged, but not committed changes with: git diff --cached --exit-code Although git diff can report on untracked files in submodules via appropriate arguments to --ignore-submodules , unfortunately it seems that there is no way to have it report on untracked files in the actual working directory. If untracked files in the working directory are relevant, git status --porcelain is probably the best bet.
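As a usage sketch, a guard like this at the top of a deployment script refuses to run when any tracked file has uncommitted changes (untracked files would still need the --porcelain check above): if ! git diff --quiet || ! git diff --cached --quiet; then echo 'refusing to deploy: uncommitted changes' >&2; exit 1; fi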
{ "source": [ "https://unix.stackexchange.com/questions/155046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78937/" ] }
155,053
Is there any way to automate Linux server configuration? I'm working on setting up a couple of new build servers, as well as an FTP server, and would like to automate as much of the process as possible. The reason for this is that the setup and configuration of these servers needs to be done in an easily repeatable way. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. Essentially, all the servers need is to install the OS, as well as a handful of packages. There's nothing overly complicated about the setups. So, is there a way to automate this process (or at least some amount of it)? EDIT: Also, say I use Kickstart, is there a way to remove the default Ubuntu repositories, and just install the packages from a collection of .deb files we have locally (preferably through apt, rather than dpkg)?
Yes! This is a big deal, and incredibly common. And there are two basic approaches. One way is simply with scripted installs, as for example used in Fedora, RHEL, or CentOS's kickstart. Check this out in the Fedora install guide: Kickstart Installations . For your simple case, this may be sufficient. (Take this as an example; there are similar systems for other distros, but since I work on Fedora that's what I'm familiar with.) The other approach is to use configuration management . This is a big topic, but look into Puppet, Chef, Ansible, cfengine, Salt, and others. In this case, you might use a very basic generic kickstart to provision a minimal machine, and the config management tool to bring it into its proper role. As your needs and infrastructure grow, this becomes incredibly important. Using config management for all your changes means that you can recreate not just the initial install, but the evolved state of the system as you introduce the inevitable tweaks and fixes caused by interacting with the real world. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. You are absolutely on the right track — this is the bedrock principle of professional systems administration. We even have a meme image for it: It's often moderately harder to set up initially, and there can be a big learning curve for some of the more advanced systems, but it pays for itself forever. Even if you have only a handful of systems, think about how much you want to work at recreating them in the event of catastrophe in the middle of the night, or when you're on vacation.
{ "source": [ "https://unix.stackexchange.com/questions/155053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83835/" ] }
155,064
I'm using cygwin tail to follow busy Java web application logs on a Windows server, generating roughly 16GB worth of logs a day. I'm constrained to 10MB log sizes, so the files roll very often. The command line I'm using is: /usr/bin/tail -n 1000 -F //applicationserver/logs/logs.log It survives 2-4 rolls of the file, about 4-6 minutes, but eventually it usually reports: "File truncated" and then echoes the name of the file every second. The file is busily filling and rotating. Am I exceeding the capability of tail?
{ "source": [ "https://unix.stackexchange.com/questions/155064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83834/" ] }
155,139
In my /etc/passwd file, I can see that the www-data user used by Apache, as well as all sorts of system users, have either /usr/sbin/nologin or /bin/false as their login shell. For example, here is a selection of lines: daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin bin:x:2:2:bin:/bin:/usr/sbin/nologin sys:x:3:3:sys:/dev:/usr/sbin/nologin games:x:5:60:games:/usr/games:/usr/sbin/nologin www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin syslog:x:101:104::/home/syslog:/bin/false whoopsie:x:109:116::/nonexistent:/bin/false mark:x:1000:1000:mark,,,:/home/mark:/bin/bash Consequently, if I try to swap to any of these users (which I'd sometimes like to do to check my understanding of their permissions, and which there are probably other at least halfway sane reasons for), I fail: mark@lunchbox:~$ sudo su www-data This account is currently not available. mark@lunchbox:~$ sudo su syslog mark@lunchbox:~$ Of course, it's not much of an inconvenience, because I can still launch a shell for them via a method like this: mark@lunchbox:~$ sudo -u www-data /bin/bash www-data@lunchbox:~$ But that just leaves me wondering what purpose is served by denying these users a login shell. Looking around the internet for an explanation, many people claim that this has something to do with security, and everybody seems to agree that it would be in some way a bad idea to change the login shells of these users. Here's a collection of quotes: Setting the Apache user's shell to something non-interactive is generally good security practice (really all service users who don't have to log in interactively should have their shell set to something that's non-interactive). -- https://serverfault.com/a/559315/147556 the shell for the user www-data is set to /usr/sbin/nologin, and it's set for a very good reason. -- https://askubuntu.com/a/486661/119754 [system accounts] can be security holes , especially if they have a shell enabled: Bad bin:x:1:1:bin:/bin:/bin/sh Good bin:x:1:1:bin:/bin:/sbin/nologin -- https://unix.stackexchange.com/a/78996/29001 For security reasons I created a user account with no login shell for running the Tomcat server: # groupadd tomcat # useradd -g tomcat -s /usr/sbin/nologin -m -d /home/tomcat tomcat -- http://www.puschitz.com/InstallingTomcat.html While these posts are in unanimous agreement that not giving system users real login shells is good for security, not one of them justifies this claim, and I can't find an explanation of it anywhere. What attack are we trying to protect ourselves against by not giving these users real login shells?
If you take a look at the nologin man page you'll see the following description. excerpt nologin displays a message that an account is not available and exits non-zero. It is intended as a replacement shell field to deny login access to an account. If the file /etc/nologin.txt exists, nologin displays its contents to the user instead of the default message. The exit code returned by nologin is always 1. So the actual intent of nologin is just that when a user attempts to log in with an account that uses it in /etc/passwd, they're presented with a user-friendly message, and any scripts/commands that attempt to make use of this login receive an exit code of 1. Security With respect to security, you'll typically see either /sbin/nologin or sometimes /bin/false, among other things in that field. They both serve the same purpose, but /sbin/nologin is probably the preferred method. In any case they're limiting direct access to a shell as this particular user account. Why is this considered valuable with respect to security? The "why" is hard to fully describe, but the value in limiting a user's account in this manner is that it thwarts direct access via the login application when you attempt to gain access using said user account. Using either nologin or /bin/false accomplishes this. Limiting your system's attack surface is a common technique in the security world, whether disabling services on specific ports or limiting the nature of the logins on one's systems. Still, there are other rationalizations for using nologin. For example, scp will no longer work with a user account that does not designate an actual shell, as described in this ServerFault Q&A titled: What is the difference between /sbin/nologin and /bin/false?
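For illustration, a hedged sketch of creating such a service account (the account name and home directory are made up, and on some distributions nologin lives at /sbin/nologin instead):
  sudo useradd -r -s /usr/sbin/nologin -d /var/lib/myservice myservice
  sudo su - myservice      # refused: "This account is currently not available."
  sudo -u myservice id     # running individual commands as that user is still possible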
{ "source": [ "https://unix.stackexchange.com/questions/155139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29001/" ] }
155,150
NOTE: This is related to my question: " Apache 2.4 won't reload, any problem with my configuration? ". I'm trying to test a local site, locally. As I understand Apache 2 (and perhaps Apache as well) has something called VirtualHost . My little bit of understanding tells me that virtualhosting is a way where one server/IP address can serve multiple domains. https://en.wikipedia.org/wiki/Virtual_hosting . Anyway, I'm getting the following error when running Apache 2's configtest to see where I'm failing. I'm running Apache 2.4.10-1, and it seems there are a lot of changes which have happened between Apache 2.2 and Apache 2.4 which I'm not aware of. $ sudo apache2ctl configtest [sudo] password for shirish: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message Syntax OK This is the /etc/hosts file: $ cat /etc/hosts 127.0.0.1 localhost 127.0.1.1 debian mini I also see an empty /etc/hosts.conf file. Perhaps the data in /etc/hosts needs to be copied to /etc/hosts.conf for the server to take cognizance? My hostname: $ hostname debian This is the configuration file of the site: $ cat /etc/apache2/sites-available/minidebconfindia.conf <VirtualHost mini:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html/in2014.mini/website <Directory /> Options +FollowSymLinks +Includes Require all granted </Directory> <Directory /var/www/html/in2014.mini/website/> Options +Indexes +FollowSymLinks +MultiViews +Includes Require all granted </Directory> </VirtualHost> I also read about binding to addresses and ports , but I haven't understood that well for multiple reasons. It doesn't give/share an example as to in which file those lines need to be put and what will come before and after. An example would have been much better. I did that and restarted the server, but I still get the same error. ~$ sudo apache2ctl configtest AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message Syntax OK It seems there are three configuration files in Debian that I need to know and understand. /etc/apache2$ ls *.conf apache2.conf ports.conf and /etc/apache2/conf.d$ ls *.conf httpd.conf Apparently, apache2.conf IS the global configuration file while the httpd.conf is a user-configuration file. There is also ports.conf. Both apache2.conf and ports.conf are at the defaults except I have changed the loglevel of Apache from warn to debug . I tried one another thing: $ sudo apache2ctl -S AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message VirtualHost configuration: 127.0.1.1:80 debian (/etc/apache2/sites-enabled/minidebconfindia.conf:1) *:80 127.0.1.1 (/etc/apache2/sites-enabled/000-default.conf:1) ServerRoot: "/etc/apache2" Main DocumentRoot: "/var/www/html" Main ErrorLog: "/var/log/apache2/error.log" Mutex watchdog-callback: using_defaults Mutex default: dir="/var/lock/apache2" mechanism=fcntl Mutex mpm-accept: using_defaults PidFile: "/var/run/apache2/apache2.pid" Define: DUMP_VHOSTS Define: DUMP_RUN_CFG User: name="www-data" id=33 Group: name="www-data" id=33 Maybe somebody has more insight.
The file to edit: /etc/apache2/apache2.conf Command to edit file: sudo nano /etc/apache2/apache2.conf For a global servername you can put it at the top of the file (outside of virtual host tags). The first line looks like: ServerName myserver.mydomain.com Then save and test the configuration with the following command: apachectl configtest You should get: Syntax OK Then you can restart the server and check you don't get the error message: sudo service apache2 restart
{ "source": [ "https://unix.stackexchange.com/questions/155150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
155,184
I have a log folder with 7 folders in it. Those seven folders have subfolders in them, and those subfolders have subfolders too. I want to delete all the files older than 15 days in all folders including subfolders, without touching the folder structure; that is, only files. mahesh@inl00720:/var/dtpdev/tmp/ > ls A1 A2 A3 A4 A5 A6 A7 mahesh@inl00720:/var/dtpdev/tmp/A1/ > ls B1 B2 B3 B4 file1.txt file2.csv
You could start by saying find /var/dtpdev/tmp/ -type f -mtime +15 . This will find all files older than 15 days and print their names. Optionally, you can specify -print at the end of the command, but that is the default action. It is advisable to run the above command first, to see what files are selected. After you verify that the find command is listing the files that you want to delete (and no others), you can add an "action" to delete the files. The typical actions to do this are: -exec rm -f {} \; (or, equivalently, -exec rm -f {} ';' ) This will run rm -f on each file; e.g., rm -f /var/dtpdev/tmp/A1/B1; rm -f /var/dtpdev/tmp/A1/B2; rm -f /var/dtpdev/tmp/A1/B3; … -exec rm -f {} + This will run rm -f on many files at once; e.g., rm -f /var/dtpdev/tmp/A1/B1 /var/dtpdev/tmp/A1/B2 /var/dtpdev/tmp/A1/B3 … so it may be slightly faster than option 1.  (It may need to run rm -f a few times if you have thousands of files.) -delete This tells find itself to delete the files, without running rm . This may be infinitesimally faster than the -exec variants, but it will not work on all systems. So, if you use option 2, the whole command would be: find /var/dtpdev/tmp/ -type f -mtime +15 -exec rm -f {} +
{ "source": [ "https://unix.stackexchange.com/questions/155184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83926/" ] }
155,189
I thought about whether this question is suitable for SE or not, I hope you agree it is. Some time ago I asked on SE how to find text in files and leave the file with only the matching lines that contain the text I was searching for. The question is here: How to find text in files and only keep the respective matching lines using the terminal on OS X? While the answer worked perfectly I now wonder, how come sed is so fast? In my use case, I had quite a lot of files which in total were about 30 Gb in size. The sed command ran in about 12 seconds which I never would have believed (working with a normal HDD). Within 12 seconds the command read through 30 Gb of text, truncating each file to only keep the respective lines I was filtering for. How does this work? (or: what is this sorcery?) The actual command was: find . -type f -exec sed -i'' '/\B\/foobar\b/!d' {} \;
{ "source": [ "https://unix.stackexchange.com/questions/155189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62627/" ] }
155,331
In a file containing lines like this one: # lorem ipsum blah variable I would like to remove # (comment) character in the same line that contains a specific string, in place. Is sed good for this? I'm struggling to get this conditional working. I have a "clumsy" way of doing this; I can find the matching line number with awk or sed and then use this number in a separate sed command, but I believe that this can be done in a much better way.
Use the string you are looking for as the selector for the lines to be operated upon: sed '/ipsum/s/#//g' /ipsum/ selects lines containing "ipsum" and only on these lines the command(s) that follow are executed. You can use braces to run more commands /ipsum/{s/#//g;s/@/-at-/g;}
{ "source": [ "https://unix.stackexchange.com/questions/155331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211711/" ] }
155,347
Is there a way to tell rsync to sync only source directories that are missing on a destination (whole directories, not just any missing file)? Considering S as the set of source directories, and D the set of destination directories, I want to: copy the entire directory s in S if s not in D . skip entirely the directory s (indep. of the files it contains) if s in D . Of course it's possible to list the directories on both sides, rsync the list from the destination, do some perl and generate a list of all directories that need to be copied, but it would be better if it were possible with just one invocation of rsync . For example, if source and destination are on the same server, one could do: src=/some/where dst=/else/where cd /tmp (cd $src; find . -type d) | sort > a (cd $dst; find . -type d) | sort > b comm -23 a b | perl -e 'L: while(<>) {chomp; $p=$_; while ($p=~s,/[^/]+$,,) { next L if $n{$p}; } $n{$_}++; s,^./,/,; print "$_/***\n"}' > tocopy rsync -vmazn --include-from tocopy --exclude '*' $src/ $dst/ (without the -n if for real). PS: note that the perl "one-liner" above (more or less) strips subdirs of dirs that we decide to copy (as those subdirs are subsumed in the copy of their parent). That recipe ends up with the minimal set of dirs to copy.
{ "source": [ "https://unix.stackexchange.com/questions/155347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31177/" ] }
155,384
I want to split a file into chunks with 2 words each. $cat tmp word1 word2 word3 word4 word5 word6 word7 $sed -e 's/word. word. /&\n/g' tmp word1 word2 word3 word4 word5 word6 word7 $sed -e 's/word. \{2\}/&\n/g' tmp word1 word2 word3 word4 word5 word6 word7 I expected the last command to give same result as the one before it. What is wrong?
Sorry, it seems I figured it out just after posting. It needs to be sed -e 's/\(word. \)\{2\}/&\n/g' tmp Apparently the parentheses are needed so that sed applies the \{2\} quantifier to the whole group \(word. \) and not just the preceding space.
{ "source": [ "https://unix.stackexchange.com/questions/155384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
155,551
I'm having some problems with some scripts in bash, with errors and unexpected behaviors. I would like to investigate the causes of the problems so I can apply fixes. Is there a way I can turn on some kind of "debug mode" for bash, to get more information?
Start your bash script with bash -x ./script.sh or add in your script set -x to see debug output. Additional with bash 4.1 or later: If you want to write the debug output to a separate file, add this to your script: exec 5> debug_output.txt BASH_XTRACEFD="5" See: https://stackoverflow.com/a/25593226/3776858 If you want to see line numbers add this: PS4='$LINENO: ' If you have access to logger command then you can use this to write debug output via your syslog with timestamp, script name and line number: #!/bin/bash exec 5> >(logger -t $0) BASH_XTRACEFD="5" PS4='$LINENO: ' set -x # Place your code here You can use option -p of logger command to set an individual facility and level to write output via local syslog to its own logfile.
{ "source": [ "https://unix.stackexchange.com/questions/155551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41104/" ] }
155,805
I want to replace only the first k instances of a word. How can I do this? E.g. say file foo.txt contains 100 occurrences of the word 'linux'. I need to replace only the first 50 occurrences.
The first section belows describes using sed to change the first k-occurrences on a line. The second section extends this approach to change only the first k-occurrences in a file, regardless of what line they appear on. Line-oriented solution With standard sed, there is a command to replace the k-th occurrance of a word on a line. If k is 3, for example: sed 's/old/new/3' Or, one can replace all occurrences with: sed 's/old/new/g' Neither of these is what you want. GNU sed offers an extension that will change the k-th occurrance and all after that. If k is 3, for example: sed 's/old/new/g3' These can be combined to do what you want. To change the first 3 occurrences: $ echo old old old old old | sed -E 's/\<old\>/\n/g4; s/\<old\>/new/g; s/\n/old/g' new new new old old where \n is useful here because we can be sure that it never occurs on a line. Explanation: We use three sed substitution commands: s/\<old\>/\n/g4 This the GNU extension to replace the fourth and all subsequent occurrences of old with \n . The extended regex feature \< is used to match the beginning of a word and \> to match the end of a word. This assures that only complete words are matched. Extended regex requires the -E option to sed . s/\<old\>/new/g Only the first three occurrences of old remain and this replaces them all with new . s/\n/old/g The fourth and all remaining occurrences of old were replaced with \n in the first step. This returns them back to their original state. Non-GNU solution If GNU sed is not available and you want to change the first 3 occurrences of old to new , then use three s commands: $ echo old old old old old | sed -E -e 's/\<old\>/new/' -e 's/\<old\>/new/' -e 's/\<old\>/new/' new new new old old This works well when k is a small number but scales poorly to large k . Since some non-GNU seds do not support combining commands with semicolons, each command here is introduced with its own -e option. It may also be necessary to verify that your sed supports the word boundary symbols, \< and \> . File-oriented solution We can tell sed to read the whole file in and then perform the substitutions. For example, to replace the first three occurrences of old using a BSD-style sed: sed -E -e 'H;1h;$!d;x' -e 's/\<old\>/new/' -e 's/\<old\>/new/' -e 's/\<old\>/new/' The sed commands H;1h;$!d;x read the whole file in. Because the above does not use any GNU extension, it should work on BSD (OSX) sed. Note, thought, that this approach requires a sed that can handle long lines. GNU sed should be fine. Those using a non-GNU version of sed should test its ability to handle long lines. With a GNU sed, we can further use the g trick described above, but with \n replaced with \x00 , to replace the first three occurrences: sed -E -e 'H;1h;$!d;x; s/\<old\>/\x00/g4; s/\<old\>/new/g; s/\x00/old/g' This approach scales well as k becomes large. This assumes, though, that \x00 is not in your original string. Since it is impossible to put the character \x00 in a bash string, this is usually a safe assumption.
{ "source": [ "https://unix.stackexchange.com/questions/155805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81580/" ] }
155,806
How can I use the diff command to compare 2 commands' outputs? Does something like this exist? diff ($cat /etc/passwd) ($cut -f2/etc/passwd)
Use process substitution : diff <(cat /etc/passwd) <(cut -f2 /etc/passwd) <(...) is called process substitution. It converts the output of a command into a file-like object that diff can read from. While process substitution is not POSIX, it is supported by bash, ksh, and zsh.
{ "source": [ "https://unix.stackexchange.com/questions/155806", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82729/" ] }
155,829
Executing kill -l on linux gives: 1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP 21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ 26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR 31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3 38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8 43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2 63) SIGRTMAX-1 64) SIGRTMAX What happened to 32 and 33 ? Why is it not listed? They could have started at 1 and ended at 62 instead of skipping 2 in the middle?
It is because of NPTL . Since it is part of the GNU C library, nearly every modern Linux distribution no longer uses the first two real-time signals. NPTL is an implementation of POSIX threads . NPTL makes internal use of the first two real-time signals. This part of the signal manpage is very interesting: The Linux kernel supports a range of 32 different real-time signals, numbered 33 to 64. However, the glibc POSIX threads implementation internally uses two (for NPTL) or three (for LinuxThreads) real-time signals (see pthreads(7)), and adjusts the value of SIGRTMIN suitably (to 34 or 35). Because the range of available real-time signals varies according to the glibc threading implementation (and this variation can occur at run time according to the available kernel and glibc), and indeed the range of real-time signals varies across UNIX systems, programs should never refer to real-time signals using hard-coded numbers, but instead should always refer to real-time signals using the notation SIGRTMIN+n, and include suitable (run-time) checks that SIGRTMIN+n does not exceed SIGRTMAX. I also checked the source code for glibc; see line 22 . __SIGRTMIN is increased by 2, so the first two real-time signals are excluded from the range of real-time signals.
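To see this from a shell (the output shown is what a typical glibc/NPTL system prints; it can differ with other threading implementations):
  kill -l 34    # prints RTMIN: the kernel's signal 34 is what glibc exposes as SIGRTMIN
  kill -l 64    # prints RTMAX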
{ "source": [ "https://unix.stackexchange.com/questions/155829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63606/" ] }
155,838
I'm trying to use the following script to generate a sitemap for my website. When I run it as sh thsitemap.sh I get an error like this and creates an empty sitemap.xml file: thsitemap.sh: 22: thsitemap.sh: [[: not found thsitemap.sh: 42: thsitemap.sh: [[: not found thsitemap.sh: 50: thsitemap.sh: Syntax error: "(" unexpected But as the same user root when I manually copy and paste these lines on the terminal, it works without any error and the sitemap.xml file have all the urls. What's the problem? How can I fix this? #!/bin/bash ############################################## # modified version of original http://media-glass.es/ghost-sitemaps/ # for ghost.centminmod.com # http://ghost.centminmod.com/ghost-sitemap-generator/ ############################################## url="techhamlet.com" webroot='/home/leafh8kfns/techhamlet.com' path="${webroot}/sitemap.xml" user='leafh8kfns' # web server user group='leafh8kfns' # web server group debug='n' # disable debug mode with debug='n' ############################################## date=`date +'%FT%k:%M:%S+00:00'` freq="daily" prio="0.5" reject='.rss, .gif, .png, .jpg, .css, .js, .txt, .ico, .eot, .woff, .ttf, .svg, .txt' ############################################## # create sitemap.xml file if it doesn't exist and give it same permissions # as nginx server user/group if [[ ! -f "$path" ]]; then touch $path chown ${user}:${group} $path fi # check for robots.txt defined Sitemap directive # if doesn't exist add one # https://support.google.com/webmasters/answer/183669 if [ -f "${webroot}/robots.txt" ]; then SITEMAPCHECK=$(grep 'Sitemap:' ${webroot}/robots.txt) if [ -z "$SITEMAPCHECK" ]; then echo "Sitemap: http://${url}/sitemap.xml" >> ${webroot}/robots.txt fi fi ############################################## echo "" > $path # grab list of site urls list=`wget -r --delete-after $url --reject=${reject} 2>&1 |grep "\-\-" |grep http | grep -v 'normalize\.css' | awk '{ print $3 }'` if [[ "$debug" = [yY] ]]; then echo "------------------------------------------------------" echo "Following list of urls will be submitted to Google" echo $list echo "------------------------------------------------------" fi # put list into an array array=($list) echo "------------------------------------------------------" echo ${#array[@]} "pages detected for $url" echo "------------------------------------------------------" # formatted properly according to # https://support.google.com/webmasters/answer/35738 echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <urlset xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">" > $path echo ' ' >> $path; for ((i=0;i<${#array[*]};i++)); do echo "<url> <loc>${array[$i]:0}</loc> <lastmod>$date</lastmod> <changefreq>$freq</changefreq> <priority>$prio</priority> </url>" >> $path done echo "" >> $path echo "</urlset>" >> $path # notify Google # URL encode urls as per https://support.google.com/webmasters/answer/183669 if [[ "$debug" = [nN] ]]; then wget -q --delete-after http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2F${url}%2Fsitemap.xml rm -rf ${url} else echo "wget -q --delete-after http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2F${url}%2Fsitemap.xml" echo "rm -rf ${url}" fi echo "------------------------------------------------------" exit 0
Run the script either as: bash script.sh or just: ./script.sh When bash is run using the name sh , it disables most of its extensions, such as the [[ testing operator. Since you have the #!/bin/bash shebang line, you don't need to specify the shell interpreter explicitly on the command line. Running the script as a command will use that line to find the shell.
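A small way to reproduce the difference (assuming /bin/sh on the system is a POSIX shell such as dash rather than bash; the file name is arbitrary):
  printf '#!/bin/bash\nif [[ -n "$1" ]]; then echo "got: $1"; fi\n' > test.sh
  sh test.sh hello                      # may fail with "[[: not found", since sh treats the shebang as a comment
  bash test.sh hello                    # prints: got: hello
  chmod +x test.sh && ./test.sh hello   # also prints: got: hello, via the #!/bin/bash shebang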
{ "source": [ "https://unix.stackexchange.com/questions/155838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27743/" ] }
155,993
I am trying to change my current password in Red Hat Enterprise Linux Server release 5.10 (Tikanga) but it says my new password is too similar. Is there any way to force the change? $ passwd Changing password for user XY Changing password for XY (current) UNIX password: New UNIX password: BAD PASSWORD: is too similar to the old one New UNIX password:
If you can run the command as root, you can force the change to be accepted. Example: $ sudo passwd myusername Changing password for user myusername. New password: Retype new password: passwd: all authentication tokens updated successfully.
{ "source": [ "https://unix.stackexchange.com/questions/155993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84441/" ] }
156,084
I am trying to understand named pipes in the context of this particular example. I type <(ls -l) in my terminal and get the output as, bash: /dev/fd/63: Permission denied . If I type cat <(ls -l) , I could see the directory contents. If I replace the cat with echo , I think I get the terminal name (or is it?). echo <(ls -l) gives the output as /dev/fd/63 . Also, this example output is unclear to me. ls -l <(echo "Whatever") lr-x------ 1 root root 64 Sep 17 13:18 /dev/fd/63 -> pipe:[48078752] However, if I give, ls -l <() it lists me the directory contents. What is happening in case of the named pipe?
When you do <(some_command) , your shell executes the command inside the parentheses and replaces the whole thing with a file descriptor that is connected to the command's stdout. So /dev/fd/63 is a pipe containing the output of your ls call. When you do <(ls -l) you get a Permission denied error, because the whole line is replaced with the pipe, effectively trying to call /dev/fd/63 as a command, which is not executable. In your second example, cat <(ls -l) becomes cat /dev/fd/63 . As cat reads from the files given as parameters you get the content. echo on the other hand just outputs its parameters "as-is". The last case you have, <() is simply replaced by nothing, as there is no command. But this is not consistent between shells; in zsh you still get a pipe (although empty). Summary: <(command) lets you use the output of a command where you would normally need a file. Edit: as Gilles points out, this is not a named pipe, but an anonymous pipe. The main difference is that it only exists as long as the process is running, while a named pipe (created e.g. with mkfifo ) will stay without processes attached to it.
{ "source": [ "https://unix.stackexchange.com/questions/156084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
156,205
Shells like Bash and Zsh expand a wildcard into arguments, as many arguments as match the pattern: $ echo *.txt 1.txt 2.txt 3.txt But what if I only want the first match to be returned, not all the matches? $ echo *.txt 1.txt I don't mind shell-specific solutions, but I would like a solution that works with whitespace in filenames.
One robust way in bash is to expand into an array, and output the first element only: pattern="*.txt" files=( $pattern ) echo "${files[0]}" # printf is safer! (You can even just echo $files , a missing index is treated as [0].) This safely handles space/tab/newline and other metacharacters when expanding the filenames. Note that locale settings in effect can alter what "first" is. You can also do this interactively with a bash completion function : _echo() { local cur=${COMP_WORDS[COMP_CWORD]} # string to expand if compgen -G "$cur*" > /dev/null; then local files=( ${cur:+$cur*} ) # don't expand empty input as * [ ${#files} -ge 1 ] && COMPREPLY=( "${files[0]}" ) fi } complete -o bashdefault -F _echo echo This binds the _echo function to complete arguments to the echo command (overriding normal completion). An extra "*" is appended in the code above, you can just hit tab on a partial filename and hopefully the right thing will happen. The code is slightly convoluted, rather than set or assume nullglob ( shopt -s nullglob ) we check compgen -G can expand the glob to some matches, then we expand safely into an array, and finally set COMPREPLY so that quoting is robust. You can partly do this (programmatically expand a glob) with bash's compgen -G , but it's not robust as it outputs unquoted to stdout. As usual, completion is rather fraught, this breaks completion of other things, including environment variables (see the _bash_def_completion() function here for the details of emulating the default behaviour). You could also just use compgen outside of a completion function: files=( $(compgen -W "$pattern") ) One point to note is that "~" is not a glob, it's handled by bash in a separate stage of expansion, as are $variables and other expansions. compgen -G just does filename globbing, but compgen -W gives you all of bash's default expansion, though possibly too many expansions (including `` and $() ). Unlike -G , the -W is safely quoted (I can't explain the disparity). Since the purpose of -W is that it expands tokens, this means it will expand "a" to "a" even if no such file exists, so it's perhaps not ideal. This is easier to understand, but may have unwanted side-effects: _echo() { local cur=${COMP_WORDS[COMP_CWORD]} local files=( $(compgen -W "$cur") ) printf -v COMPREPLY %q "${files[0]}" } Then: touch $'curious \n filename' echo curious* tab Note the use of printf %q to safely quote the values. One final option is to use 0-delimited output with GNU utilities (see the bash FAQ ): pattern="*.txt" while IFS= read -r -d $'\0' filename; do printf '%q' "$filename"; break; done < <(find . -maxdepth 1 -name "$pattern" -printf "%f\0" | sort -z ) This option gives you a little more control over the sorting order (the order when expanding a glob will be subject to your locale/ LC_COLLATE and may or may not fold case), but is otherwise a rather large hammer for such a small problem ;-)
{ "source": [ "https://unix.stackexchange.com/questions/156205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9041/" ] }
156,261
I have a file file.gz , when I try to unzip this file by using gunzip file.gz , it unzipped the file but only contains extracted and removes the file.gz file. How can I unzip by keeping both unzipped file and zipped file?
Here are several alternatives: Give gunzip the --keep option (version 1.6 or later) -k --keep Keep (don't delete) input files during compression or decompression. gunzip -k file.gz Pass the file to gunzip as stdin gunzip < file.gz > file Use zcat (or, on older systems, gzcat ) zcat file.gz > file
{ "source": [ "https://unix.stackexchange.com/questions/156261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20346/" ] }
156,534
I am having trouble getting the basics of Bash scripting down. Here's what I have so far: #!/bin/bash FILES="/home/john/my directory/*.txt" for f in "${FILES}" do echo "${f}" done All I want to do is list all the .txt files in a for loop so I can do stuff with them. But the space in the my directory and the asterisk in *.txt just aren't playing nicely. I tried using it with and without double quotes, with and without curly braces on variable names and still can't print all the .txt files. This is a very basic thing, but I'm still struggling because I'm tired and can't think straight. What am I doing wrong? I've been able to successfully apply the script above if my FILES don't have a space or an asterisk... I had to experiment with or without the use of double quotes and braces to get it to work. But the moment I have both spaces and an asterisk, it messes everything up.
Inside quotes, the * will not expand to a list of files. To use such a wildcard successfully, it must be outside of quotes. Even if the wildcard did expand, the expression "${FILES}" would result in a single string, not a list of files. One approach that would work would be: #!/bin/bash DIR="/home/john/my directory/" for f in "$DIR"/*.txt do echo "${f}" done In the above, file names with spaces or other difficult characters will be handled correctly. A more advanced approach could use bash arrays: #!/bin/bash FILES=("/home/john/my directory/"*.txt) for f in "${FILES[@]}" do echo "${f}" done In this case, FILES is an array of file names. The parens surrounding the definition make it an array. Note that the * is outside of quotes. The construct "${FILES[@]}" is a special case: it will expand to a list of strings where each string is one of the file names. File names with spaces or other difficult characters will be handled correctly.
{ "source": [ "https://unix.stackexchange.com/questions/156534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56021/" ] }
156,579
I have done the following at command line: $ text="name with space" $ echo $text name with space I am trying to use tr -d ' ' to remove the spaces and have a result of: namewithspace I've tried a few things like: text=echo $text | tr -d ' ' No luck so far so hopefully you wonderful folk can help!
In Bash, you can use Bash's built in string manipulation. In this case, you can do: > text="some text with spaces" > echo "${text// /}" sometextwithspaces For more on the string manipulation operators, see http://tldp.org/LDP/abs/html/string-manipulation.html However, your original strategy would also work, your syntax is just a bit off: > text2=$(echo $text | tr -d ' ') > echo $text2 sometextwithspaces
{ "source": [ "https://unix.stackexchange.com/questions/156579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82325/" ] }
156,794
There are two server that I can access with 2 different VPN connections. I have managed to have both VPN working on the same time on my machine (a bit of routing rules). I want to do a scp <remote1>:some/file <remote2>:destination/folder from my laptop terminal. But when I try this, the scp command that is invoked on remote1 cannot find remote2 because they are not in the same network. Is it possible to force the scp command to pass through my laptop as a router? If I try with Nautilus (connect to server, both servers, then copy-paste) it works, but I'd like to do it from a terminal.
Newer versions of scp have the option -3 -3 Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly between the two remote hosts
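Applied to the setup in the question, the invocation would look something like this (hostnames and usernames are placeholders):
  scp -3 user1@remote1:some/file user2@remote2:destination/folder/
Both connections are opened from the laptop, so each host only needs to be reachable from there, not from each other.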
{ "source": [ "https://unix.stackexchange.com/questions/156794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59807/" ] }
156,797
How do I find out what commands such as ls do? I have recently been trying to write my first code and have become stuck with command names.
{ "source": [ "https://unix.stackexchange.com/questions/156797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84949/" ] }
157,022
Nginx does not follow symbolic links. I get a 404 error. In my directory, I have this link: lrwxrwxrwx 1 root root 48 Sep 23 08:52 modules -> /path/to/dir/ but the files stored in /path/to/dir aren't found.
I inserted disable_symlinks off; in my nginx.conf and that resolved it; it works fine! http { disable_symlinks off; }
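To apply the change, one would typically verify the configuration and reload nginx (assuming a systemd-based system; the service name may differ):
  sudo nginx -t && sudo systemctl reload nginx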
{ "source": [ "https://unix.stackexchange.com/questions/157022", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64116/" ] }
157,154
In Windows, if you type LIST DISK using DiskPart in a command prompt it lists all physical storage devices, plus their size, format, etc. What is the equivalent of this in Linux?
There are many tools for that, for example fdisk -l or parted -l , but probably the most handy is lsblk (aka list block devices ): Example $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 238.5G 0 disk ├─sda1 8:1 0 200M 0 part /boot/efi ├─sda2 8:2 0 500M 0 part /boot └─sda3 8:3 0 237.8G 0 part ├─fedora-root 253:0 0 50G 0 lvm / ├─fedora-swap 253:1 0 2G 0 lvm [SWAP] └─fedora-home 253:2 0 185.9G 0 lvm It has many additional options, for example to show filesystems, labels, etc. As always man lsblk is your friend.
{ "source": [ "https://unix.stackexchange.com/questions/157154", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85170/" ] }
157,164
I am having this strange problem of my system running out of space constantly. Just yesterday I cleaned up near 15GB of space and today I see it is out of space. I really don't get it. I know some other people also have this problem, but its not .xsession-log . Infact my /var/log is quite small. Here is a screenshot of my disk. I ran it as root. Here is my df [eeuser@roadrunner ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda5 178G 169G 2.2M 100% / udev 7.9G 4.0K 7.9G 1% /dev tmpfs 3.2G 940K 3.2G 1% /run none 5.0M 0 5.0M 0% /run/lock none 7.9G 2.9M 7.9G 1% /run/shm What seems strange is, disk-analyzer shows usage of / as 100% but only 74.4GB. I have the linux partition of 190GB. Where is my 100GB. [eeuser@roadrunner ~]$ sudo fdisk -l /dev/sda Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0dc6f614 Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 921806847 460800000 7 HPFS/NTFS/exFAT /dev/sda3 921806848 1541421152 309807152+ 7 HPFS/NTFS/exFAT /dev/sda4 1541423102 1953523711 206050305 5 Extended /dev/sda5 1541423104 1919993855 189285376 83 Linux /dev/sda6 1919995904 1953523711 16763904 82 Linux swap / Solaris
{ "source": [ "https://unix.stackexchange.com/questions/157164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85181/" ] }
157,167
Classical situation: I ran a bad rm and realized immediately afterwards that I had removed the wrong files. (Nothing critical and I had tolerably recent backups, but still annoying.) Knowing that further disk activity was my enemy if I wanted to recover the files with extundelete or such tools, I immediately powered the machine down physically (i.e., with the power button, not with halt or any such command). This was a laptop with no important tasks running or anything open, so it was an acceptable operation. (By the way, I learned since then that the first thing to do in such a situation would be to estimate first if the missing files may still be opened by a process https://unix.stackexchange.com/a/101247 -- if they are, you should recover them this way rather than power down the machine.) Still, once the machine was powered down I thought for a while and decided the files were not worth the time investment of booting a live system for proper forensics. So I powered the machine back up. And then I discovered that my files were still sitting on disk: the rm hadn't been propagated to disk before I had powered down. I did a little dance and thanked the god of sysadmins for His unexpected forgiveness. My question is now to understand how this was possible, and what is the typical delay before an rm is actually propagated to disk. I know that disk IO isn't flushed immediately but that it sits in memory for some time, but I thought that the disk journal would make sure quickly that pending operations do not get entirely lost. https://unix.stackexchange.com/a/78766 seems to hint at a separate mechanism to flush dirty pages and to flush journal operations but does not give sufficient detail about how the journal would be involved for a rm , and the expected delay before operations are flushed. Some more details: the data was in an ext4 partition inside a LUKS volume, and when booting the machine back up I saw the following in syslog : Sep 24 10:24:58 gamma kernel: [ 11.457007] EXT4-fs (dm-0): 1 orphan inode deleted Sep 24 10:24:58 gamma kernel: [ 11.458393] EXT4-fs (dm-0): recovery complete Sep 24 10:24:58 gamma kernel: [ 11.482475] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) but I am not confident it is related to the rm . Another question would be whether there is a way to tell the kernel to not perform any of the pending disk operations (but rather, say, dump them somewhere), rather than powering the machine down. (Of course, it sounds dangerous to not perform the pending operations, but this is what would happen when powering the machine down anyway, and it some cases it could save you.) This would be "cleaner", of course, and also interesting for e.g. remote servers where physical powerdown is not an easy option.
It sounds like you've got a decent grasp on what happened. Yes, because you hard-powered-off the system before your changes were committed to disk, they were there when you booted back up. The system caches all writes before flushing them out to disk. There are several options which control this behavior, all located at /proc/sys/vm/dirty_* [ kernel doc ] . Unless a flush is explicitly performed by an application via fsync() [ man 2 fsync ] , the data is committed when it is either old enough, or the write cache is filled up. The definition of "data" as used above includes modification to the directory entry to delete the file. Now, as for the journal, that's one of the common misconceptions of what the journal is for. The purpose of a journal is not to ensure changes get replayed, or that data is not lost. The purpose of a journal is to prevent corruption of the filesystem itself, not the files in it. The journal simply contains information about the changes being made, and not (typically) the full data of the change itself. The exact details are dependent upon the filesystem, and journal mode. For ext3/4, see the data mount option in man 8 mount . To answer your supplementary question of whether there's a way to prevent the pending writes without a reboot: From doing a quick read through the kernel source code, it looks like you can use the magic sysrq u command ([ wikipedia ], [ kernel doc ]) to do an emergency remount-read-only operation. It appears this will immediately remount all volumes read-only without a sync operation. To use this, simply press Alt + SysRq + u .
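For illustration, a few related commands (run as root; the sysctl names are the standard vm.dirty_* knobs mentioned above, and /proc/sysrq-trigger is assumed to be available on the kernel in use):
  sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs   # how long dirty data may sit in memory
  sync                          # flush all pending writes now
  echo u > /proc/sysrq-trigger  # emergency remount read-only, same effect as Alt+SysRq+u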
{ "source": [ "https://unix.stackexchange.com/questions/157167", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8446/" ] }
157,176
Installing Google's .deb or .rpm Chrome package adds the Google repository to one's system for automatic updates. Does installing a .deb or .rpm package always add a repository to one's system? If not, how does one verify whether a package will add a repository when installed?
{ "source": [ "https://unix.stackexchange.com/questions/157176", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65222/" ] }
157,285
Trying to count number of files in current directory, I found ls -1 | wc -l , which means: send the list of files (where every filename is printed in a new line) to the input of wc, where -l will count the number of lines on input. This makes sense. I decided to try simply ls | wc -l and was very surprised about it also gives me a correct number of files. I wonder why this happens, because ls command with no options prints the filenames on a single line.
From info ls : '-1' '--format=single-column' List one file per line. This is the default for 'ls' when standard output is not a terminal. When you pipe the output of ls , you get one filename per line. ls only outputs the files in columns when the output is destined for human eyes. Here's where ls decides what to do: switch (ls_mode) { case LS_MULTI_COL: /* This is for the 'dir' program. */ format = many_per_line; set_quoting_style (NULL, escape_quoting_style); break; case LS_LONG_FORMAT: /* This is for the 'vdir' program. */ format = long_format; set_quoting_style (NULL, escape_quoting_style); break; case LS_LS: /* This is for the 'ls' program. */ if (isatty (STDOUT_FILENO)) { format = many_per_line; /* See description of qmark_funny_chars, above. */ qmark_funny_chars = true; } else { format = one_per_line; qmark_funny_chars = false; } break; default: abort (); } source: http://git.savannah.gnu.org/cgit/coreutils.git/tree/src/ls.c
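A quick way to see the behaviour described above, and to override it, from an interactive shell:
  ls            # columns, because stdout is a terminal
  ls | cat      # one name per line, because stdout is a pipe
  ls -C | cat   # force column output even through a pipe
  ls -1         # force one-per-line even on a terminal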
{ "source": [ "https://unix.stackexchange.com/questions/157285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85258/" ] }
157,286
I would like to copy files with multiple extensions to a single destination directory. For example, I can use the following command to copy all .txt files in the working directory to a directory called destination : cp -v *.txt destination/ And I can use the following to copy all .png files in the working directory to destination : cp -v *.png destination/ But it's time-consuming to type these as separate commands (even with the use of command history). So, is there any way that I can tell cp to copy files with either the pattern *.txt or the pattern *.png to destination ? Ideally, I would like to be able to specify more than two patterns -- like instructing cp to copy all *.txt or *.png or *.jpg files to destination , for example. I'm sure that all of this is possible using a shell script -- I'm using bash , for example -- but is there any way to accomplish it more simply, just from the console? Could I somehow use brace expansion to do it? I know that it is possible to copy all files in the working directory except those matching certain specified patterns , but since my working directory contains far more file extensions that I don't want to copy than those I do , that would be a pain. Do you have any thoughts on this?
Brace expansion will get the job done. man bash and search for Brace Expansion . cp *.{txt,jpg,png} destination/ EDIT: In keeping with the OP's request, the command above was missing the verbose option: cp -v *.{txt,jpg,png} destination/
{ "source": [ "https://unix.stackexchange.com/questions/157286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
157,328
I have a file which includes comments: foo bar stuff #Do not show this... morestuff evenmorestuff#Or this I want to print the file without including any of the comments: foo bar stuff morestuff evenmorestuff There are a lot of applications where this would be helpful. What is a good way to do it?
One way to remove all comments is to use grep with -o option: grep -o '^[^#]*' file where -o : prints only matched part of the line first ^ : beginning of the line [^#]* : any character except # repeated zero or more times Note that empty lines will be removed too, but lines with only spaces will stay.
{ "source": [ "https://unix.stackexchange.com/questions/157328", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61235/" ] }
157,329
There is apparently a vulnerability (CVE-2014-6271) in bash: Bash specially crafted environment variables code injection attack I am trying to figure out what is happening, but I'm not entirely sure I understand it. How can the echo be executed as it is in single quotes? $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" vulnerable this is a test EDIT 1 : A patched system looks like this: $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' this is a test EDIT 2 : There is a related vulnerability / patch: CVE-2014-7169 which uses a slightly different test: $ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test" unpatched output : vulnerable bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)' bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable' bash: error importing function definition for `BASH_FUNC_x' test partially (early version) patched output : bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' bash: error importing function definition for `BASH_FUNC_x()' test patched output up to and including CVE-2014-7169: bash: warning: x: ignoring function definition attempt bash: error importing function definition for `BASH_FUNC_x' test EDIT 3 : story continues with: CVE-2014-7186 CVE-2014-7187 CVE-2014-6277
bash stores exported function definitions as environment variables. Exported functions look like this: $ foo() { bar; } $ export -f foo $ env | grep -A1 foo foo=() { bar } That is, the environment variable foo has the literal contents: () { bar } When a new instance of bash launches, it looks for these specially crafted environment variables, and interprets them as function definitions. You can even write one yourself, and see that it still works: $ export foo='() { echo "Inside function"; }' $ bash -c 'foo' Inside function Unfortunately, the parsing of function definitions from strings (the environment variables) can have wider effects than intended. In unpatched versions, it also interprets arbitrary commands that occur after the termination of the function definition. This is due to insufficient constraints in the determination of acceptable function-like strings in the environment. For example: $ export foo='() { echo "Inside function" ; }; echo "Executed echo"' $ bash -c 'foo' Executed echo Inside function Note that the echo outside the function definition has been unexpectedly executed during bash startup. The function definition is just a step to get the evaluation and exploit to happen, the function definition itself and the environment variable used are arbitrary. The shell looks at the environment variables, sees foo , which looks like it meets the constraints it knows about what a function definition looks like, and it evaluates the line, unintentionally also executing the echo (which could be any command, malicious or not). This is considered insecure because variables are not typically allowed or expected, by themselves, to directly cause the invocation of arbitrary code contained in them. Perhaps your program sets environment variables from untrusted user input. It would be highly unexpected that those environment variables could be manipulated in such a way that the user could run arbitrary commands without your explicit intent to do so using that environment variable for such a reason declared in the code. Here is an example of a viable attack. You run a web server that runs a vulnerable shell, somewhere, as part of its lifetime. This web server passes environment variables to a bash script, for example, if you are using CGI, information about the HTTP request is often included as environment variables from the web server. For example, HTTP_USER_AGENT might be set to the contents of your user agent. This means that if you spoof your user agent to be something like '() { :; }; echo foo', when that shell script runs, echo foo will be executed. Again, echo foo could be anything, malicious or not.
{ "source": [ "https://unix.stackexchange.com/questions/157329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16841/" ] }
157,351
I need some assistance grasping what I'm sure is a fundamental concept in Linux: the limit for open files. Specifically, I'm confused about why open sockets can count towards the total number of "open files" on a system. Can someone please elaborate on the reason why? I understand that this probably goes back to the whole "everything is a file" principle in Linux, but any additional detail would be appreciated.
The limit on "open files" is not really just for files. It's a limit on the number of kernel handles a single process can use at one time. Historically, the only thing that programs would typically open a lot of were files, so this became known as a limit on the number of open files. There is a limit to help prevent processes from say, opening a lot of files and accidentally forgetting to close them, which will cause system-wide problems eventually. A socket connection is also a kernel handle. So the same limits apply for the same reasons - it's possible for a process to open network connections and forget to close them. As noted in the comments, kernel handles are traditionally called file descriptors in Unix-like systems.
{ "source": [ "https://unix.stackexchange.com/questions/157351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
157,381
Some context about the bug: CVE-2014-6271 Bash supports exporting not just shell variables, but also shell functions to other bash instances, via the process environment to (indirect) child processes. Current bash versions use an environment variable named by the function name, and a function definition starting with “() {” in the variable value to propagate function definitions through the environment. The vulnerability occurs because bash does not stop after processing the function definition; it continues to parse and execute shell commands following the function definition. For example, an environment variable setting of VAR=() { ignored; }; /bin/id will execute /bin/id when the environment is imported into the bash process. Source: http://seclists.org/oss-sec/2014/q3/650 When was the bug introduced, and what is the patch that fully fixes it? (See CVE-2014-7169 ) What are the vulnerable versions beyond noted in the CVE (initially) (3.{0..2} and 4.{0..3})? Has the buggy source code been reused in other projects? Additional information is desirable. Related: What does env x='() { :;}; command' bash do and why is it insecure?
TL;DR The shellshock vulnerability is fully fixed in On the bash-2.05b branch: 2.05b.10 and above (patch 10 included) On the bash-3.0 branch: 3.0.19 and above (patch 19 included) On the bash-3.1 branch: 3.1.20 and above (patch 20 included) On the bash-3.2 branch: 3.2.54 and above (patch 54 included) On the bash-4.0 branch: 4.0.41 and above (patch 41 included) On the bash-4.1 branch: 4.1.14 and above (patch 14 included) On the bash-4.2 branch: 4.2.50 and above (patch 50 included) On the bash-4.3 branch: 4.3.27 and above (patch 27 included) If your bash shows an older version, your OS vendor may still have patched it by themselves, so best is to check. If: env xx='() { echo vulnerable; }' bash -c xx shows "vulnerable", you're still vulnerable. That is the only test that is relevant (whether the bash parser is still exposed to code in any environment variable). Details. The bug was in the initial implementation of the function exporting/importing introduced on the 5 th of August 1989 by Brian Fox, and first released in bash-1.03 about a month later at a time where bash was not in such widespread use, before security was that much of a concern and HTTP and the web or Linux even existed. From the ChangeLog in 1.05 : Fri Sep 1 18:52:08 1989 Brian Fox (bfox at aurel) * readline.c: rl_insert (). Optimized for large amounts of typeahead. Insert all insertable characters at once. * I update this too irregularly. Released 1.03. [...] Sat Aug 5 08:32:05 1989 Brian Fox (bfox at aurel) * variables.c: make_var_array (), initialize_shell_variables () Added exporting of functions. Some discussions in gnu.bash.bug and comp.unix.questions around that time also mention the feature. It's easy to understand how it got there. bash exports the functions in env vars like foo=() { code } And on import, all it has to do is interpret that with the = replaced with a space... except that it should not blindly interpret it. It's also broken in that in bash (contrary to the Bourne shell), scalar variables and functions have a different name space. Actually if you have foo() { echo bar; }; export -f foo export foo=bar bash will happily put both in the environment (yes entries with same variable name) but many tools (including many shells) won't propagate them. One would also argue that bash should use a BASH_ namespace prefix for that as that's env vars only relevant from bash to bash. rc uses a fn_ prefix for a similar feature. A better way to implement it would have been to put the definition of all exported variables in a variable like: BASH_FUNCDEFS='f1() { echo foo;} f2() { echo bar;}...' That would still need to be sanitized but at least that could not be more exploitable than $BASH_ENV or $SHELLOPTS ... There is a patch that prevents bash from interpreting anything else than the function definition in there ( https://lists.gnu.org/archive/html/bug-bash/2014-09/msg00081.html ), and that's the one that has been applied in all the security updates from the various Linux distributions. However, bash still interprets the code in there and any bug in the interpreter could be exploited. One such bug has already been found (CVE-2014-7169) though its impact is a lot smaller. So there will be another patch coming soon. Until a hardening fix that prevents bash to interpret code in any variable (like using the BASH_FUNCDEFS approach above), we won't know for sure if we're not vulnerable from a bug in the bash parser. And I believe there will be such a hardening fix released sooner or later. 
Edit 2014-09-28 Two additional bugs in the parser have been found (CVE-2014-718{6,7}) (note that most shells are bound to have bugs in their parser for corner cases, that wouldn't have been a concern if that parser hadn't been exposed to untrusted data). While all 3 bugs 7169, 7186 and 7187 have been fixed in following patches, Red Hat pushed for the hardening fix. In their patch, they changed the behaviour so that functions were exported in variables called BASH_FUNC_myfunc() more or less preempting Chet's design decision. Chet later published that fix as an official upstreams bash patch . That hardening patch, or variants of it are now available for most major Linux distribution and eventually made it to Apple OS/X. That now plugs the concern for any arbitrary env var exploiting the parser via that vector including two other vulnerabilities in the parser (CVE-2014-627{7,8}) that were disclosed later by Michał Zalewski (CVE-2014-6278 being almost as bad as CVE-2014-6271) thankfully after most people had had time to install the hardening patch Bugs in the parser will be fixed as well, but they are no longer that much of an issue now that the parser is no longer so easily exposed to untrusted input. Note that while the security vulnerability has been fixed, it's likely that we'll see some changes in that area. The initial fix for CVE-2014-6271 has broken backward compatibility in that it stops importing functions with . or : or / in their name. Those can still be declared by bash though which makes for an inconsistent behaviour. Because functions with . and : in their name are commonly used, it's likely a patch will restore accepting at least those from the environment. Why wasn't it found earlier? That's also something I wondered about. I can offer a few explanations. First, I think that if a security researcher (and I'm not a professional security researcher) had specifically been looking for vulnerabilities in bash, they would have likely found it. For instance, if I were a security researcher, my approaches could be: Look at where bash gets input from and what it does with it. And the environment is an obvious one. Look in what places the bash interpreter is invoked and on what data. Again, it would stand out. The importing of exported functions is one of the features that is disabled when bash is setuid/setgid, which makes it an even more obvious place to look. Now, I suspect nobody thought to consider bash (the interpreter) as a threat, or that the threat could have come that way. The bash interpreter is not meant to process untrusted input. Shell scripts (not the interpreter) are often looked at closely from a security point of view. The shell syntax is so awkward and there are so many caveats with writing reliable scripts (ever seen me or others mentioning the split+glob operator or why you should quote variables for instance?) that it's quite common to find security vulnerabilities in scripts that process untrusted data. That's why you often hear that you shouldn't write CGI shell scripts, or setuid scripts are disabled on most Unices. Or that you should be extra careful when processing files in world-writeable directories (see CVE-2011-0441 for instance). The focus is on that, the shell scripts, not the interpreter. You can expose a shell interpreter to untrusted data (feeding foreign data as shell code to interpret) via eval or . or calling it on user provided files, but then you don't need a vulnerability in bash to exploit it. 
It's quite obvious that if you're passing unsanitized data for a shell to interpret, it will interpret it. So the shell is called in trusted contexts. It's given fixed scripts to interpret and more often than not (because it's so difficult to write reliable scripts) fixed data to process. For instance, in a web context, a shell might be invoked in something like: popen("sendmail -oi -t", "w"); What can possibly go wrong with that? If something wrong is envisaged, that's about the data fed to that sendmail, not how that shell command line itself is parsed or what extra data is fed to that shell. There's no reason you'd want to consider the environment variables that are passed to that shell. And if you do, you realise it's all env vars whose name start with "HTTP_" or are well known CGI env vars like SERVER_PROTOCOL or QUERYSTRING none of which the shell or sendmail have any business to do with. In privilege elevation contexts like when running setuid/setgid or via sudo, the environment is generally considered and there have been plenty of vulnerabilities in the past, again not against the shell itself but against the things that elevate the privileges like sudo (see for instance CVE-2011-3628 ). For instance, bash doesn't trust the environment when setuid or called by a setuid command (think mount for instance that invokes helpers). In particular, it ignores exported functions. sudo does clean the environment: all by default except for a white list, and if configured not to, at least black lists a few that are known to affect a shell or another (like PS4 , BASH_ENV , SHELLOPTS ...). It does also blacklist the environment variables whose content starts with () (which is why CVE-2014-6271 doesn't allow privilege escalation via sudo ). But again, that's for contexts where the environment cannot be trusted: any variable with any name and value can be set by a malicious user in that context. That doesn't apply to web servers/ssh or all the vectors that exploit CVE-2014-6271 where the environment is controlled (at least the name of the environment variables is controlled...) It's important to block a variable like echo="() { evil; }" , but not HTTP_FOO="() { evil; }" , because HTTP_FOO is not going to be called as a command by any shell script or command line. And apache2 is never going to set an echo or BASH_ENV variable. It's quite obvious some environment variables should be black-listed in some contexts based on their name , but nobody thought that they should be black-listed based on their content (except for sudo ). Or in other words, nobody thought that arbitrary env vars could be a vector for code injection. As to whether extensive testing when the feature was added could have caught it, I'd say it's unlikely. When you test for the feature , you test for functionality. The functionality works fine. If you export the function in one bash invocation, it's imported alright in another. A very thorough testing could have spotted issues when both a variable and function with the same name are exported or when the function is imported in a locale different from the one it was exported in. But to be able to spot the vulnerability, it's not a functionality test you would have had to do. The security aspect would have had to be the main focus, and you wouldn't be testing the functionality, but the mechanism and how it could be abused. 
It's not something that developers (especially in 1989) often have at the back of their mind, and a shell developer could be excused to think his software is unlikely to be network exploitable.
{ "source": [ "https://unix.stackexchange.com/questions/157381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24379/" ] }
157,477
Apparently, the shellshock Bash exploit CVE-2014-6271 can be exploited over the network via SSH. I can imagine how the exploit would work via Apache/CGI, but I cannot imagine how that would work over SSH? Can somebody please provide an example how SSH would be exploited, and what harm could be done to the system? CLARIFICATION AFAIU, only an authenticated user can exploit this vulnerability via SSH. What use is this exploit for somebody, who has legitimate access to the system anyway? I mean, this exploit does not have privilege escalation (he cannot become root), so he can do no more than he could have done after simply logging in legitimately via SSH.
One example where this can be exploited is on servers with an authorized_keys forced command. When adding an entry to ~/.ssh/authorized_keys , you can prefix the line with command="foo" to force foo to be run any time that ssh public key is used. With this exploit, if the target user's shell is set to bash , they can take advantage of the exploit to run things other than the command that they are forced to. This would probably make more sense in example, so here is an example: sudo useradd -d /testuser -s /bin/bash testuser sudo mkdir -p /testuser/.ssh sudo sh -c "echo command=\\\"echo starting sleep; sleep 1\\\" $(cat ~/.ssh/id_rsa.pub) > /testuser/.ssh/authorized_keys" sudo chown -R testuser /testuser Here we set up a user testuser , that forces any ssh connections using your ssh key to run echo starting sleep; sleep 1 . We can test this with: $ ssh testuser@localhost echo something else starting sleep Notice how our echo something else doesn't get run, but the starting sleep shows that the forced command did run. Now lets show how this exploit can be used: $ ssh testuser@localhost '() { :;}; echo MALICIOUS CODE' MALICIOUS CODE starting sleep This works because sshd sets the SSH_ORIGINAL_COMMAND environment variable to the command passed. So even though sshd ran sleep , and not the command I told it to, because of the exploit, my code still gets run.
{ "source": [ "https://unix.stackexchange.com/questions/157477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
157,494
I have a process which has been spawned from a shell. It is running as a background process and exporting a DB to a CSV file in /tmp . How can I tell when the background process has completed (finished / quit) or if the CSV file lock has closed? I plan to FTP the file to another host once it's written, but I need the complete file before I start the file transfer.
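A rough sketch of one common approach, not necessarily how the original setup solved it: if the export is started from the same shell, record its PID with $! and let the wait builtin block until it exits (export_to_csv stands in for whatever command actually does the export):

export_to_csv > /tmp/export.csv &
pid=$!
wait "$pid" && echo "export finished, safe to transfer /tmp/export.csv"

If the process was started elsewhere and only its PID is known, you can poll for its disappearance with kill -0 "$pid" (which tests for existence without sending a signal), or check that nothing still has the file open with fuser /tmp/export.csv or lsof /tmp/export.csv before starting the FTP transfer.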
{ "source": [ "https://unix.stackexchange.com/questions/157494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85404/" ] }
157,558
It's not clear to me from the manpage for crontab. Is extra white space allowed between the fields? E.g., if I have this:
1 7 * * * /scripts/foo
5 17 * * 6 /script/bar
31 6 * * 0 /scripts/bofh
is it safe to reformat it nicely like this?
1   7  *  *  *  /scripts/foo
5  17  *  *  6  /script/bar
31  6  *  *  0  /scripts/bofh
Yes extra space is allowed and you can nicely line up your fields for readability. From man 5 crontab Blank lines and leading spaces and tabs are ignored. and An environment setting is of the form, name = value where the spaces around the equal-sign (=) are optional, and any sub‐ sequent non-leading spaces in value will be part of the value assigned to name. For the fields itself the man pages says: The fields may be separated by spaces or tabs. That should be clear: multiple spaces are allowed.
{ "source": [ "https://unix.stackexchange.com/questions/157558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84619/" ] }
157,743
I am running a Fedora desktop behind a corporate proxy that is blocking yum traffic (specifically *.gz and *.bz2 ). I have access to a separate RedHat machine via ssh which can download anything it likes. When I do yum update and other yum commands: Is it possible to route that traffic to the RedHat machine to do the downloads for me? I don't have root access on the RedHat machine but I can login and use wget to download files. If so, how?
My solution was similar to @slm's but I used SOCKS instead because it is simpler and required no proxy installation on the server or client. Run all commands on the computer with restricted acccess. in yum.conf set the proxy as follows proxy=socks5h://localhost:1080 from a terminal type ssh -D 1080 YOUR_USER@YOUR_SERVER_WITH_FULL_WEB_ACCESS press enter and type your password. now, in a separate terminal (not the ssh one) type yum update
{ "source": [ "https://unix.stackexchange.com/questions/157743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83122/" ] }
157,763
cd - can move to the last visited directory. Can we go further back in the directory history than just the last one?
The command you are looking for is pushd and popd . You could view a practical working example of pushd and popd from here . mkdir /tmp/dir1 mkdir /tmp/dir2 mkdir /tmp/dir3 mkdir /tmp/dir4 cd /tmp/dir1 pushd . cd /tmp/dir2 pushd . cd /tmp/dir3 pushd . cd /tmp/dir4 pushd . dirs /tmp/dir4 /tmp/dir4 /tmp/dir3 /tmp/dir2 /tmp/dir1
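To jump around inside that stack rather than only pushing and popping, the entries can be addressed by number:

$ dirs -v        # list the stack one entry per line, numbered from 0
$ pushd +2       # rotate entry number 2 to the top of the stack and cd into it
$ popd +1        # drop entry number 1 from the stack without changing directory

The numbers accepted by pushd +N and popd +N are exactly the ones shown by dirs -v.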
{ "source": [ "https://unix.stackexchange.com/questions/157763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
157,823
I'm looking to list all ports a PID is currently listening on. How would you recommend I get this kind of data about a process?
You can use ss from the iproute2 package (which is similar to netstat ): ss -l -p -n | grep "pid=1234," or (for older iproute2 version): ss -l -p -n | grep ",1234," Replace 1234 with the PID of the program.
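If ss is not available, lsof can report the same thing (1234 again stands for the PID):

$ lsof -Pan -p 1234 -i

Here -P and -n keep ports and addresses numeric, and -a ANDs the -p and -i filters together so that only that process's network sockets are shown; the lines in LISTEN state are the listening ports.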
{ "source": [ "https://unix.stackexchange.com/questions/157823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61349/" ] }
158,194
What if 'kill -9' does not work? or How to kill a script which starts new processes? doesn't help me in anyway. I have a python script which starts automatically with another process id using the same port when killed using sudo kill -9 <pid> . $ lsof -i :3002 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME python 13242 ubuntu 3u IPv4 64592 0t0 TCP localhost:3002 (LISTEN) $ sudo kill -9 13242 $ lsof -i :3002 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME python 16106 ubuntu 3u IPv4 74792 0t0 TCP localhost:3002 (LISTEN) $ sudo kill 16106 $ lsof -i :3002 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME python 16294 ubuntu 3u IPv4 75677 0t0 TCP localhost:3002 (LISTEN) It's not a Zombie process. $ ps -Al 4 S 0 16289 1 0 80 0 - 12901 poll_s ? 00:00:00 sudo 4 S 1000 16293 16289 0 80 0 - 1100 wait ? 00:00:00 sh 0 S 1000 16294 16293 0 80 0 - 34632 poll_s ? 00:00:00 python I have even tried sudo pkill -f <processname> with no luck. It doesn't want to die. Update: It's parent process is sh whose parent is sudo as mentioned in the above table. I am not sure if it is safe to kill these abruptly. Also this is a shared ubuntu server.
Starts automatically with another process ID means that it is a different process. Thus there is a parent process, which monitors its children, and if one dies, it gets respawned by the parent. If you want to stop the service completely, find out how to stop the parent process. Killing it with SIGKILL is of course one of the options, but probably not The Right One TM , since the service monitor might need to do some cleanup to shut down properly. To find the monitor process, you might need to inspect the whole process list, since the actual listeners might dissociate themselves from their parent (usually by the fork() + setsid() combo). In this case, I find the output of ps faux (from procps at least, might vary for other implementations) rather handy - it lists all processes in a hierarchical tree. Unless there has been a PID wrap (see also wikipedia ), the monitor PID should be smaller than PID of any of the listeners (unless of course you hit a PID-wraparound).
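Walking up the chain by hand is straightforward; using the PIDs from the ps -Al output in the question:

$ ps -o ppid= -p 16294                     # parent of the python listener -> 16293, the sh
$ ps -o ppid= -p 16293                     # parent of that sh -> 16289, the sudo
$ ps -o pid,ppid,cmd -p 16289,16293,16294  # the whole chain at a glance

Once the supervising process at the top is identified, stop it through whatever mechanism started it (an init script, systemctl, a process manager's own stop command) rather than repeatedly killing the leaf python process.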
{ "source": [ "https://unix.stackexchange.com/questions/158194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68924/" ] }
158,234
I have a large file composed of text fields separated by semicolons in the form of a large table. It has been sorted. I have a smaller file composed of the same text fields. At some point, someone concatenated this file with others and then did a sort to form the large file described above. I would like to subtract the lines of the small file from the big one (i.e. for each line in the small file, if a matching string exists in the big file, delete that line in the big file). The file looks roughly like this GenericClass1; 1; 2; NA; 3; 4; GenericClass1; 5; 6; NA; 7; 8; GenericClass2; 1; 5; NA; 3; 8; GenericClass2; 2; 6; NA; 4; 1; etc Is there a quick classy way to do this or do I have to use awk?
You can use grep . Give it the small file as input and tell it to find non-matching lines: grep -vxFf file.txt bigfile.txt > newbigfile.txt The options used are: -F, --fixed-strings Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. (-F is specified by POSIX.) -f FILE, --file=FILE Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.) -v, --invert-match Invert the sense of matching, to select non-matching lines. (-v is specified by POSIX.) -x, --line-regexp Select only those matches that exactly match the whole line. (-x is specified by POSIX.)
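Since the question says the big file is already sorted, comm is another option that copes well with very large inputs, provided both files are sorted with the same sort settings (same locale and options):

$ sort small.txt > small.sorted
$ comm -23 bigfile.txt small.sorted > newbigfile.txt

comm -23 suppresses column 2 (lines only in the small file) and column 3 (lines common to both), leaving exactly the big-file lines that have no match in the small file.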
{ "source": [ "https://unix.stackexchange.com/questions/158234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79386/" ] }
158,244
I have a machine with both glibc i686 and x86_64, and a very annoying problem with glibc. Is it normal to have two libraries of the same name installed on one computer? How can I know which library is executed? Until recently, I believed that x86_64 was i686. Well, I must be mistaken but why? [root@machin ~]# yum info glibc Loaded plugins: rhnplugin, security This system is not registered with RHN. RHN support will be disabled. Excluding Packages in global exclude list Finished Installed Packages Name : glibc Arch : i686 Version : 2.5 Release : 42 Size : 12 M Repo : installed Summary : The GNU libc libraries. License : LGPL Description: The glibc package contains standard libraries which are used by : multiple programs on the system. In order to save disk space and : memory, as well as to make upgrading easier, common system code is : kept in one place and shared between programs. This particular package : contains the most important sets of shared libraries: the standard C : library and the standard math library. Without these two libraries, a : Linux system will not function. Name : glibc Arch : x86_64 Version : 2.5 Release : 42 Size : 11 M Repo : installed Summary : The GNU libc libraries. License : LGPL Description: The glibc package contains standard libraries which are used by : multiple programs on the system. In order to save disk space and : memory, as well as to make upgrading easier, common system code is : kept in one place and shared between programs. This particular package : contains the most important sets of shared libraries: the standard C : library and the standard math library. Without these two libraries, a : Linux system will not function. [root@machin ~]# yum info glibc-common Loaded plugins: rhnplugin, security This system is not registered with RHN. RHN support will be disabled. Excluding Packages in global exclude list Finished Installed Packages Name : glibc-common Arch : x86_64 Version : 2.5 Release : 42 Size : 64 M Repo : installed Summary : Common binaries and locale data for glibc License : LGPL Description: The glibc-common package includes common binaries for the GNU libc : libraries, as well as national language (locale) support.
Technically, i686 is actually a 32-bit instruction set (part of the x86 family line), while x86_64 is a 64-bit instruction set (also referred to as amd64). From the sound of it, you have a 64-bit machine that has 32-bit libraries for backwards compatibility. That should be totally fine.
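To see which of the two glibc copies a particular program actually uses, check the binary's architecture and the libraries the loader resolves for it (the paths below are typical for a 64-bit RHEL/CentOS system; adjust as needed):

$ file /bin/ls                   # reports ELF 32-bit or ELF 64-bit
$ ldd /bin/ls | grep libc        # shows the concrete libc.so.6 the dynamic loader picks
$ rpm -qf /lib64/libc.so.6       # maps that copy back to the owning glibc package

A 64-bit executable is wired to the x86_64 glibc (usually under /lib64), a 32-bit one to the i686 glibc (under /lib); the two packages install into different directories and do not conflict.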
{ "source": [ "https://unix.stackexchange.com/questions/158244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50325/" ] }
158,289
I am trying to find the largest file in a directory recursively. If there is a subdirectory inside of that directory the function needs to go inside that directory and check to see if the largest file is there. Once the largest file is found the output is displayed with the relative path name and the name and size of the largest file. EX: dude@shell2 (~...assignment/solutions) % bash maxfile.sh ~/test class/asn dude.h.gch: 9481628 This is what I have: #!/bin/sh clear recursiveS() { for d in *; do if [ -d $d ]; then (cd $d; echo $(pwd)/$line; du -a; recursiveS;) fi done } recursiveS I have been stuck for a while now. I cannot implement this by pipelining a number of existing Unix tools. Any ideas would be nice!
use find (here assuming GNU find ) to output file names with the file size. sort. print out the largest one. find . -type f -printf "%s\t%p\n" | sort -n | tail -1 That assumes file paths don't contain newline characters. Using a loop in bash with the GNU implementation of stat : shopt -s globstar max_s=0 for f in **; do if [[ -f "$f" && ! -L "$f" ]]; then size=$( stat -c %s -- "$f" ) if (( size > max_s )); then max_s=$size max_f=$f fi fi done echo "$max_s $max_f" This will be significantly slower than the find solution. That also assumes that file names don't end in newline characters and will skip hidden files and not descend into hidden directories. If there's a file called - in the current directory, the size of the file open on stdin will be considered. Beware that versions of bash prior to 4.3 followed symbolic links when descending the directory tree.
{ "source": [ "https://unix.stackexchange.com/questions/158289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86067/" ] }
158,400
In the /etc/shadow file there are encrypted passwords. The encrypted password is no longer in crypt(3) or MD5 "type 1" format (according to this previous answer). Now I have a $6$somesalt$someveryverylongencryptedpasswd entry. I can no longer use openssl passwd -1 -salt salt hello-world $1$salt$pJUW3ztI6C1N/anHwD6MB0 to generate the encrypted password. Is there an equivalent, like the (non-existing) openssl passwd -6 -salt salt hello-world ?
Python: python -c 'import crypt; print crypt.crypt("password", "$6$saltsalt$")' (for python 3 and greater it will be print(crypt.crypt(..., ...)) ) Perl: perl -e 'print crypt("password","\$6\$saltsalt\$") . "\n"'
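On Debian-based systems the mkpasswd helper from the whois package does the same job without writing any code (salt and password here are just examples):

$ mkpasswd --method=sha-512 --salt=saltsalt hello-world

Recent OpenSSL releases (1.1.1 and newer) also added a -6 option, so openssl passwd -6 -salt saltsalt hello-world works there; it just did not exist in the older versions the question was written against.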
{ "source": [ "https://unix.stackexchange.com/questions/158400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79818/" ] }
158,494
When attempting to launch system-config-users from command line, I get the following warning, and the tool does not open. I'm using CentOS 7 with Mate 1.8.1. WARNING **: Error enumerating actions: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files Error checking for authorization org.freedesktop.policykit.exec: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any .service files yum list polkit* Installed Packages polkit.x86_64 0.112-5.el7 @anaconda polkit-devel.x86_64 0.112-5.el7 @base polkit-docs.noarch 0.112-5.el7 @base polkit-gnome.x86_64 0.105-6.el7 @epel polkit-pkla-compat.x86_64 0.1-4.el7 @anaconda What is missing from my system to cause this error?
I just had the same return when installing deluged on arch, I typed: systemctl start deluged I tried with sudo and it worked fine. Seems to be a group permissions issue. All I did was enable permissions for my user account and then typed: sudo systemctl start deluged worked like a charm.
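That error text generally means the polkit daemon is not reachable on the system D-Bus, so a reasonable first check (unit and process names as found on CentOS 7) is whether it is actually running:

$ systemctl status polkit
$ pgrep -a polkitd

If it is stopped, systemctl start polkit (and systemctl enable polkit to make it persistent) should make org.freedesktop.PolicyKit1 available again, after which tools such as system-config-users can authenticate through it.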
{ "source": [ "https://unix.stackexchange.com/questions/158494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20107/" ] }
158,564
This is my code #!/bin/bash showword() { echo $1 } echo This is a sample message | xargs -d' ' -t -n1 -P2 showword So I have a function showword which echoes whatever string you pass as a parameter to the function. Then I have xargs trying to call the function and pass one word at a time to the function, and run 2 copies of the function in parallel. The thing that is not working is xargs doesn't recognize the function. How can I achieve what I am trying to do, how can I make xargs work with the function showword ?
Try exporting the function, then calling it in a subshell: showword() { echo $1 } export -f showword echo This is a sample message | xargs -d' ' -t -n1 -P2 bash -c 'showword "$@"' _ This causes xargs to execute bash -c 'showword "$@"' _ This bash -c 'showword "$@"' _ is bash -c 'showword "$@"' _ a ︙ The arguments passed to the bash command are, well, passed into the bash environment, but starting from 0.  So, inside the function, $0 is “ _ ” and $1 is “ This ” $0 is “ _ ” and $1 is “ is ” $0 is “ _ ” and $1 is “ a ” ︙ See Bash -c with positional parameters . Note that export -f works only in Bash, and -P n ( --max-procs= max-procs ) works only in GNU xargs .
{ "source": [ "https://unix.stackexchange.com/questions/158564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84978/" ] }
158,584
A former coworker did something to top so that whenever it runs as root the data is sorted by MEM usage instead of the default CPU usage. According to multiple searches, the man page, and even the options within the top console itself (O), pressing k should sort by CPU, but instead when I hit k it asks me for a PID to kill.
You can change the sort field in the interactive top window with the < and > keys. I'm not sure what operating system you're running but at least on my GNU top, k is supposed to kill, not reset. Presumably, your friend changed the sort field and hit Shift + W to save to ~/.toprc . Just use the keys I mentioned to choose the sort field you want and then, when it's set up as you like it, hit Shift + W again and it should save that state and open that way next time.
{ "source": [ "https://unix.stackexchange.com/questions/158584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27807/" ] }
158,597
What are good program or script solutions for running specific applications depending on whether the laptop is on battery or charging? In order to conserve battery, I want to automatically stop certain applications from running when on battery (Dropbox, backup engine, etc.) and restart them when the laptop is back on AC power.
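A rough sketch of the detection half of this (the device name under /sys/class/power_supply/ varies between machines, so check with ls /sys/class/power_supply/ first; dropbox start/stop stands in for whatever control commands your own services provide):

#!/bin/sh
if grep -q 1 /sys/class/power_supply/AC/online 2>/dev/null; then
    # on mains power: start the heavy services
    dropbox start >/dev/null 2>&1
else
    # on battery: stop them
    dropbox stop >/dev/null 2>&1
fi

Such a script can be triggered from a udev rule on the power_supply subsystem, from a pm-utils /etc/pm/power.d hook, or simply from cron every few minutes, so the services follow the power source automatically.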
{ "source": [ "https://unix.stackexchange.com/questions/158597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28643/" ] }
158,678
I can successfully mount an ext4 partition, the problem is that all the files on the partition are owned by the user with userid 1000. On one machine, my userid is 1000, but on another it's 1010. My username is the same on both machines, but I realise that the filesystem stores userids, not usernames. I could correct the file ownership with something like the following: find /mnt/example -exec chown -h 1010 {} \; But then I would have to correct the file ownerships again back to 1000 when I mount this external drive on another machine. What I would like is to give mount an option saying map userid 1000 to 1010, so that I don't have to actually modify any files. Is there a way to do this?
Take a look at the bindfs package. bindfs is a FUSE filesystem that allows for various manipulations of file permissions, file ownership etc. on top of existing file systems. You are looking specifically for the --map option of bindfs: --map=user1/user2:@group1/@group2:..., -o map=... Given a mapping user1/user2, all files owned by user1 are shown as owned by user2. When user2 creates files, they are chowned to user1 in the underlying directory. When files are chowned to user2, they are chowned to user1 in the underlying directory. Works similarly for groups. A single user or group may appear no more than once on the left and once on the right of a slash in the list of mappings. Currently, the options --force-user, --force-group, --mirror, --create-for-*, --chown-* and --chgrp-* override the corresponding behavior of this option. Requires mounting as root. So to map your files with user id 1001 in /mnt/wrong to /mnt/correct with user id 1234, run this command: sudo bindfs --map=1001/1234 /mnt/wrong /mnt/correct
{ "source": [ "https://unix.stackexchange.com/questions/158678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9041/" ] }
158,727
I have been studying the Linux kernel behaviour for quite some time now, and it's always been clear to me that: When a process dies, all its children are given back to the init process (PID 1) until they eventually die. However, recently, someone with much more experience than me with the kernel told me that: When a process exits, all its children also die (unless you use NOHUP in which case they get back to init ). Now, even though I don't believe this, I still wrote a simple program to make sure of it. I know I should not rely on time ( sleep ) for tests since it all depends on process scheduling, yet for this simple case, I think that's good enough.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void){
    printf("Father process spawned (%d).\n", getpid());
    sleep(5);
    if(fork() == 0){
        printf("Child process spawned (%d => %d).\n", getppid(), getpid());
        sleep(15);
        printf("Child process exiting (%d => %d).\n", getppid(), getpid());
        exit(0);
    }
    sleep(5);
    fprintf(stdout, "Father process exiting (%d).\n", getpid());
    return EXIT_SUCCESS;
}
Here is the program's output, with the associated ps result every time printf talks: $ ./test & Father process spawned (435). $ ps -ef | grep test myuser 435 392 tty1 ./test Child process spawned (435 => 436). $ ps -ef | grep test myuser 435 392 tty1 ./test myuser 436 435 tty1 ./test Father process exiting (435). $ ps -ef | grep test myuser 436 1 tty1 ./test Child process exiting (436). Now, as you can see, this behaves quite as I would have expected it to. The orphan process (436) is given back to init (1) until it dies. However, is there any UNIX-based system on which this behaviour does not apply by default? Is there any system on which the death of a process immediately triggers the death of all its children?
When a process exits, all its children also die (unless you use NOHUP in which case they get back to init). This is wrong. Dead wrong. The person saying that was either mistaken, or confused a particular situation with the the general case. There are two ways in which the death of a process can indirectly cause the death of its children. They are related to what happens when a terminal is closed. When a terminal disappears (historically because the serial line was cut due to a modem hangup, nowadays usually because the user closed the terminal emulator window), a SIGHUP signal is sent to the controlling process running in that terminal — typically, the initial shell started in that terminal. Shells normally react to this by exiting. Before exiting, shells intended for interactive use send HUP to each job that they started. Starting a job from a shell with nohup breaks that second source of HUP signals because the job will then ignore the signal and thus not be told to die when the terminal disappears. Other ways to break the propagation of HUP signals from the shell to the jobs include using the shell's disown builtin if it has one (the job is removed from the shell's list of jobs), and double forking (the shell launches a child which launches a child of its own and exits immediately; the shell has no knowledge of its grandchild). Again, the jobs started in the terminal die not because their parent process (the shell) dies, but because their parent process decides to kill them when it is told to kill them. And the initial shell in the terminal dies not because its parent process dies, but because its terminal disappears (which may or may not coincidentally be because the terminal is provided by a terminal emulator which is the shell's parent process).
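The reparenting (rather than killing) of orphans is easy to observe directly; PIDs below are illustrative:

$ sh -c 'sleep 300 & echo child pid: $!'
child pid: 12345
$ ps -o pid,ppid,comm -p 12345

The intermediate sh has already exited by the time ps runs, yet the sleep is still there; its PPID column now shows 1 (or the PID of a subreaper such as a user systemd instance, on systems that use one), and it keeps running until its 300 seconds are up.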
{ "source": [ "https://unix.stackexchange.com/questions/158727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41892/" ] }
158,839
Is there a way to display the complete command line in htop (e.g. on multiple lines or with a moving banner)? With the default setting where only one line is displayed it isn't possible to distinguish all processes, e.g. different java programs (because the class or jar argument follows a bunch of other arguments) or programs with long absolute binary paths. Omitting the full absolute path in favour of only the binary would be a compromise where distinction would not be optimal, but better in some cases. I checked the settings and the manpage and didn't find a suitable option, as far as I can tell.
As far as I know, the only way to show the full command line is to scroll right with the arrow keys or to use a terminal with a small font. EDIT ( thanks to @LangeHaare ): You can use Ctrl-A and Ctrl-E to jump to the beginning and end of the command line.
{ "source": [ "https://unix.stackexchange.com/questions/158839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63502/" ] }
158,896
To what extent can other POSIX-compatible shells function as reasonable replacements for bash? They don't need to be true "drop-in" replacements, but close enough to work with most scripts and support the rest with some modification. I want to have explicit bash scripts - initscripts, DHCP client scripts, etc. - work with minimal modification I want my own collection of more specialized shell scripts to not need too much modification I want to have features like string manipulation and built-in regex pattern matching The only serious contenders I know of are zsh and mksh. So, for those of you here who are good with either or both of them: What features does bash have that zsh and mksh respectively do not? What features do the shells share with bash, but use incompatible syntax for?
I'll stick to scripting features. Rich interactive features (command line edition, completion, prompts, etc.) tend to be very different, achieving similar effects in wholly incompatible ways. What features are in zsh and missing from bash, or vice versa? gives a few pointers on interactive use. The closest thing to bash would be ATT ksh93 or mksh (the Korn shell and a clone). Zsh also has a subset of features but you would need to run it in ksh emulation mode, not in zsh native mode. I won't list POSIX features (which are available in any modern sh shell), nor relatively obscure features, nor as mentioned above features for interactive use. Observations are valid as of bash 4.2, ksh 93u and mksh 40.9.20120630 as found on Debian wheezy. Shell syntax Quoting $'…' (literal strings with backslash interpolation) is available in ksh93 and mksh. `$"…" (translated strings) is bash-specific. Conditional constructs Mksh and ksh93 have ;& to fall through in a case statement, but not ;;& to test subsequent cases. Mksh has ;| for that, and recent mksh allows ;;& for compatibility. ((…)) arithmetic expressions and [[ … ]] tests are ksh features. Some conditional operators are different, see “conditional expressions” below. Coprocesses Ksh and bash both have coprocesses but they work differently. Functions Mksh and ksh93 support the function name {…} syntax for function definitions in addition to the standard name () {…} , but using function in ksh changes scoping rules, so stick to name () … to maintain compatibility. The rules for allowed characters in function names vary; stick to alphanumerics and _ . Brace expansion Ksh93 and mksh support brace expansion {foo,bar} . Ksh93 supports numeric ranges {1..42} but mksh doesn't. Parameter expansion Ksh93 and mksh support substring extraction with ${VAR:offset} and ${VAR:offset:length} , but not case folding like ${VAR^} , ${VAR,} , etc. You can do case conversion with typeset -l and typeset -u in both bash and ksh. They support replacement with ${VAR/PATTERN/STRING} or ${VAR/PATTERN//STRING} . The quoting rules for STRING are slightly different, so avoid backslashes (and maybe other characters) in STRING (build a variable and use ${VAR/PATTERN/$REPLACEMENT} instead if the replacement contains quoting characters). Array expansion ( ${ARRAY[KEY]} , "${ARRAY[@]}" , ${#ARRAY[@]} , ${!ARRAY[@]} ) work in bash like in ksh. ${!VAR} expanding to ${OTHERVAR} when the value of VAR is OTHERVAR (indirect variable reference) is bash-specific (ksh does something different with ${!VAR} ). To get this double expansion in ksh, you need to use a name reference instead ( typeset -n VAR=OTHERVAR; echo "$VAR" ). ${!PREFIX*} works the same. Process substitution Process substitution <(…) and >(…) is supported in ksh93 but not in mksh. Wildcard patterns The ksh extended glob patterns that need shopt -s extglob to be activated in bash are always available in ksh93 and mksh. Mksh doesn't support character classes like [[:alpha:]] . IO redirection Bash and ksh93 define pseudo-files /dev/tcp/ HOST / PORT and /dev/udp/ HOST / PORT , but mksh doesn't. Expanding wildcards in a redirection in scripts (as in var="*.txt"; echo hello >$a writing to a.txt if that file name is the sole match for the pattern) is a bash-specific feature (other shells never do it in scripts). <<< here-strings work in ksh like in bash. The shortcut >& to redirect syntax errors is also supported by mksh but not by ksh93. 
Conditional expressions [[ … ]] double bracket syntax The double bracket syntax from ksh is supported by both ATT ksh93 and mksh like in bash. File operators Ksh93, mksh and bash support the same extensions to POSIX, including -a as an obsolete synonym of -e , -k (sticky), -G (owned by egid), -O (owner by euid), -ef (same file), -nt (newer than), -ot (older than). -N FILE (modified since last read) isn't supported by mksh. Mksh doesn't have a regexp matching operator =~ . Ksh93 has this operator, and it performs the same matching as in bash, but doesn't have an equivalent of BASH_REMATCH to retrieve matched groups afterwards. String operators Ksh93 and mksh support the same string comparison operators < and > as bash as well as the == synonym of = . Mksh doesn't use locale settings to determine the lexicographic order, it compares strings as byte strings. Other operators -v VAR to test if a variable is defined is bash-specific. In any POSIX shell, you can use [ -z "${VAR+1}" ] . Builtins alias The set of allowed character in alias names isn't the same in all shells. I think it's the same as for functions (see above). builtin Ksh93 has a builtin called builtin , but it doesn't execute a name as a built-in command. Use command to bypass aliases and functions; this will call a builtin if one exists, otherwise an external command (you can avoid this with PATH= command error_out_if_this_is_not_a_builtin ). caller This is bash-specific. You can get a similar effect with .sh.fun , .sh.file and .sh.lineno in ksh93. In mksh there's at last LINENO . declare , local , typeset declare is a bash-specific name for ksh's typeset . Use typeset : it also works in bash. Mksh defines local as an alias for typeset . In ksh93, you need to use typeset (or define an alias). Mksh has no associative arrays (they're slated for an as yet unreleased version). I don't think there's an exact equivalent of bash's typeset -t (trace function) in ksh. cd Ksh93 doesn't have -e . echo Ksh93 and mksh process the -e and -n options like in bash. Mksh also understands -E , ksh93 doesn't treat it as an option. Backslash expansion is off by default in ksh93, on by default in mksh. enable Ksh doesn't provide a way to disable builtin commands. To avoid a builtin, look up the external command's path and invoke it explicitly. exec Ksh93 has -a but not -l . Mksh has neither. export Neither ksh93 nor mksh has export -n . Use typeset +x foo instead, it works in bash and ksh. Ksh doesn't export functions through the environment. let let is the same in bash and ksh. mapfile , readarray This is a bash-specific feature. You can use while read loops or command substitution to read a file and split it into an array of lines. Take care of IFS and globbing. Here's the equivalent of mapfile -t lines </path/to/file : IFS=$'\n'; set -f lines=($(</path/to/file)) unset IFS; set +f printf printf is very similar. I think ksh93 supports all of bash's format directives. mksh doesn't support %q or %(DATE_FORMAT)T ; on some installations, printf isn't an mksh builtin and calls the external command instead. printf -v VAR is bash-specific, ksh always prints to standard output. read Several options are bash-specific, including all the ones about readline. The options -r , -d , -n , -N , -t , -u are identical in bash, ksh93 and mksh. readonly You can declare a variable as read-only in Ksh93 and mksh with the same syntax. If the variable is an array, you need to assign to it first, then make it read-only with readonly VAR . 
Functions can't be made read-only in ksh. set , shopt All the options to set and set -o are POSIX or ksh features. shopt is bash-specific. Many options concern interactive use anyway. For effects on globbing and other features enabled by some options, see the section “Options” below. source This variant of . exists in ksh as well. In bash and mksh, source searches the current directory after PATH , but in ksh93, it's an exact equivalent of . . trap The DEBUG pseudo-signal isn't implemented in mksh. In ksh93, it exists with a different way to report information, see the manual for details. type In ksh, type is an alias for whence -v . In mksh, type -p does not print the path to the executable, but a human-readable message; you need to use whence -p COMMAND instead. Options shopt -s dotglob — don't ignore dot files in globbing To emulate the dotglob option in ksh93, you can set FIGNORE='@(.|..)' . I don't think there's anything like this in mksh. shopt -s extglob — ksh extended glob patterns The extglob option is effectively always on in ksh. shopt -s failglob — error out if a glob pattern matches nothing I don't think this exists in either mksh or ksh93. It does in zsh (default behavior unless null_glob or csh_null_glob are set). shopt -s globstar — **/ recursive globbing Ksh93 has recursive globbing with **/ , enabled with set -G . Mksh doesn't have recursive globbing. shopt -s lastpipe — run the last command of a pipeline in the parent shell Ksh93 always runs the last command of a pipeline in the parent shell, which in bash requires the lastpipe option to be set. Mksh always runs the last command of a pipeline in a subshell. shopt -s nocaseglob , shopt -s nocasematch — case-insensitive patterns Mksh doesn't have case-insensitive pattern matching. Ksh93 supports it on a pattern-by-pattern basis: prefix the pattern with ~(i) . shopt -s nullglob — expand patterns that match no file to an empty list Mksh doesn't have this. Ksh93 supports it on a pattern-by-pattern basis: prefix the pattern with ~(N) . Variables Obviously most of the BASH_xxx variables don't exist in ksh. $BASHPID can be emulated with the costly but portable sh -c 'echo $PPID' , and has been recently added to mksh. BASH_LINE is .sh.lineno in ksh93 and LINENO in mksh. BASH_SUBSHELL is .sh.subshell in ksh93. Mksh and ksh93 both source the file given in ENV when they start up. EUID and UID don't exist in ksh93. Mksh calls them USER_ID and KSH_UID ; it doesn't have GROUPS . FUNCNAME and FUNCNEST don't exist in ksh. Ksh93 has .sh.fun and .sh.level . Functions declared with function foo { …; } (no parentheses!) have their own name in $0 . GLOBIGNORE exists in ksh93 but with a different name and syntax: it's called FIGNORE , and it's a single pattern, not a colon-separated list. Use a @(…|…) pattern. Ksh's FIGNORE subsumes bash's, with a wholly different syntax. Ksh93 and mksh have nothing like HOSTTYPE , MACHTYPE and OSTYPE . Nor SHELLOPTS or TIMEFORMAT . Mksh has PIPESTATUS , but ksh93 doesn't. Mksh and ksh93 have RANDOM .
{ "source": [ "https://unix.stackexchange.com/questions/158896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79424/" ] }
158,933
Is it possible to open an incognito session in bash ? For example, when we need to enter passwords in commands and don't want bash to add them to history.
When you want bash to stop logging your commands, just unset the HISTFILE variable: HISTFILE= All further commands should then no longer be logged to .bash_history . On the other hand, if you are actually supplying passwords as arguments to commands, you're already doing something wrong. .bash_history is not world-readable and therefore not the biggest threat in this situation: ps and /proc are the big problem. All users on the system can see the commands you're currently running with all of their arguments . Passing passwords as command line arguments is therefore inherently insecure . Use environment variables or config files (that you have chmodded 600) to securely supply passwords.
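Another option that keeps history working for everything else: if HISTCONTROL contains ignorespace (or ignoreboth), any command typed with a leading space is simply never recorded:

$ HISTCONTROL=ignoreboth
$  some-sensitive-command        # note the leading space

And if sensitive lines have already been recorded, history -d <number> removes a single entry from the current session, while history -c && history -w empties the in-memory history and truncates $HISTFILE on disk.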
{ "source": [ "https://unix.stackexchange.com/questions/158933", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73778/" ] }
158,960
I'm trying to install a Debian package from source (via git). I downloaded the package, changed to the package’s directory and ran the ./configure command, but it returned bash: ./configure: No such file or directory . What could be the problem? A configure.ac file is located in the program folder. The steps I was trying to follow are:
./configure
make
sudo make install
If the file is called configure.ac, do $> autoconf Depends: M4, Automake If you're not sure what to do, try $> cat readme They must mean that you use "autoconf" to generate an executable "configure" file. So the order is: $> autoconf $> ./configure $> make $> make install
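On projects that ship the full autotools inputs (configure.ac plus Makefile.am), the more robust invocation is autoreconf, which runs aclocal, autoconf, automake and friends in the right order:

$> autoreconf -fi     # -i installs missing helper scripts, -f forces regeneration
$> ./configure
$> make
$> sudo make install

Some projects instead provide their own wrapper ( ./autogen.sh or ./bootstrap ) that does the same thing; it is worth checking the repository for one before running the tools by hand.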
{ "source": [ "https://unix.stackexchange.com/questions/158960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/487383/" ] }
158,997
Alright, when I run certain commands the wrong way (misspelled, etc.), the terminal outputs this: > instead of computername:workingfolder username$ , and when I press enter it goes like this: > > > That would be if I pressed enter 3 times.
> is the default continuation prompt.That is what you will see if what you entered before had unbalanced quote marks. As an example, type a single quote on the command line followed by a few enter keys: $ ' > > > The continuation prompts will occur until you either (a) complete the command with a closing quote mark or (b) type Ctrl + D to finish input, at which point the shell will respond with an error message about the unbalanced quotes, or (c) type Ctrl + C which will abort the command that you were entering. How this is useful Sometime, you may want to enter a string which contains embedded new lines. You can do that as follows: $ paragraph='first line > second line > third line > end' Now, when we display that shell variable, you can see that the prompts have disappeared but the newlines are retained: $ echo "$paragraph" first line second line third line end
{ "source": [ "https://unix.stackexchange.com/questions/158997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
159,010
I have a long running bash instance (inside a screen session) that is executing a complex set of commands inside a loop (with each loop doing pipes, redirects, etc). The long command line was written inside the terminal - it's not inside any script. Now, I know the bash process ID, and I have root access - how can I see the exact command line being executed inside that bash ? Example bash$ echo $$ 1234 bash$ while true ; do \ someThing | somethingElse 2>/foo/bar | \ yetAnother ; sleep 600 ; done And in another shell instance, I want to see the command line executed inside PID 1234: bash$ echo $$ 5678 bash$ su - sh# cd /proc/1234 sh# # Do something here that will display the string \ 'while true ; do someThing | somethingElse 2>/foo/bar | \ yetAnother ; sleep 600 ; done' Is this possible? EDIT #1 Adding counter-examples for some answers I've got. About using the cmdline under /proc/PID : that doesn't work, at least not in my scenario. Here's a simple example: $ echo $$ 8909 $ while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done In another shell: $ cat /proc/8909/cmdline bash Using ps -p PID --noheaders -o cmd is just as useless: $ ps -p 8909 --no-headers -o cmd bash ps -eaf is also not helpful: $ ps -eaf | grep 8909 ttsiod 8909 8905 0 10:09 pts/0 00:00:00 bash ttsiod 30697 8909 0 10:22 pts/0 00:00:00 sleep 30 ttsiod 31292 13928 0 10:23 pts/12 00:00:00 grep --color=auto 8909 That is, there's no output of the ORIGINAL command line, which is what I'm looking for - i.e the while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done .
I knew I was grasping at straws, but UNIX never fails! Here's how I managed it: bash$ gdb --pid 8909 ... Loaded symbols for /lib/i386-linux-gnu/i686/cmov/libnss_files.so.2 0xb76e7424 in __kernel_vsyscall () Then at the (gdb) prompt I ran the command, call write_history("/tmp/foo") which will write this history to the file /tmp/foo . (gdb) call write_history("/tmp/foo") $1 = 0 I then detach from the process. (gdb) detach Detaching from program: /bin/bash, process 8909 And quit gdb . (gdb) q And sure enough... bash$ tail -1 /tmp/foo while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done For easy future re-use, I wrote a bash script , automating the process.
{ "source": [ "https://unix.stackexchange.com/questions/159010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6859/" ] }
159,033
I'm trying to tail a log file on multiple remote machines and forward the output to my local workstation. I want the connections to close when pressing Ctrl - C . At the moment I have the following function that almost works as intended. function dogfight_tail() { logfile=/var/log/server.log pids="" for box in 02 03; do ssh server-$box tail -f $logfile | grep $1 & pids="$pids $!" done trap 'kill -9 $pids' SIGINT trap wait } The connections close and I receive the output from tail . BUT, there is some kind of buffering going on because the output comes in batches. And here's the fun part… I can see the same buffering behaviour when executing the following and appending "test" to the file /var/log/server.log on the remote machines 4-5 times… ssh server-01 "tail -f /var/log/server.log | grep test" …and found two ways of disabling it… Add the -t flag to ssh. ssh -t server-01 "tail -f /var/log/server.log | grep test" Remove the quotes from the remote command. ssh server-01 tail -f /var/log/server.log | grep test However, neither of these approaches works for the function that executes on multiple machines mentioned above. I have tried dsh, which has the same buffering behaviour when executing. dsh -m server-01,server-02 -c "tail -f /var/log/server.log | grep test" Same here: if I remove the quotes, the buffering goes away and everything works fine. dsh -m server-01,server-02 -c tail -f /var/log/server.log | grep test I also tried parallel-ssh, which works exactly the same as dsh . Can somebody explain what's going on here? How do I fix this problem? It would be ideal to go with straight ssh if possible. P.S. I do not want to use multitail or similar, since I want to be able to execute arbitrary commands.
What you see is the effect of the standard stdout buffering in grep, provided by glibc. The best solution is to disable it by using --line-buffered (GNU grep; I'm not sure what other implementations might support it or something similar). As for why this only happens in some cases: ssh server "tail -f /var/log/server.log | grep test" runs the whole command in the quotes on the server - thus grep waits to fill its buffer. ssh server tail -f /var/log/server.log | grep test runs grep on your local machine on the output of tail sent through the ssh channel. The key part here is that grep adjusts its buffering depending on whether its standard output is a terminal or not. When you run ssh -t , the remote command is running with a controlling terminal and thus the remote grep behaves like your local one.
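Assuming GNU grep on the machine where grep actually runs, a minimal adjustment to the examples from the question would therefore be:
$ ssh server-01 'tail -f /var/log/server.log | grep --line-buffered test'
If the filter in use has no such flag, stdbuf from GNU coreutils can often force line buffering instead, e.g. stdbuf -oL somefilter ; this is a general workaround, not something specific to this answer.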
{ "source": [ "https://unix.stackexchange.com/questions/159033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86538/" ] }
159,094
I have a .deb package to install. Should I install it with dpkg -i my.deb , or with apt? Will both handle the dependency problem well? If apt, how can I install the local .deb file with apt?
When you use apt to install a package, under the hood it uses dpkg . When you install a package using apt, it first creates a list of all the dependencies and downloads them from the repository. Once the download is finished it calls dpkg to install all those files, satisfying all the dependencies. So if you have a .deb file, you can install it by: Using: sudo dpkg -i /path/to/deb/file sudo apt-get install -f Using: sudo apt install ./name.deb Or sudo apt install /path/to/package/name.deb With old apt-get versions you must first move your deb file to the /var/cache/apt/archives/ directory. In both cases, apt will automatically download and install the dependencies. First installing gdebi and then opening your .deb file using it ( Right-click -> Open with ). It will install your .deb package with all its dependencies. Note : APT maintains the package index, which is a database ( /var/cache/apt/*.bin ) of the packages available in the repositories defined in the /etc/apt/sources.list file and in the /etc/apt/sources.list.d directory. All these methods will fail to satisfy the software dependencies if the dependencies required by the deb are not present in the package index. Why use sudo apt-get install -f after sudo dpkg -i /path/to/deb/file (as mentioned in method 1)? From man apt-get : -f, --fix-broken Fix; attempt to correct a system with broken dependencies in place. When dpkg installs a package and a package dependency is not satisfied, it leaves the package in an "unconfigured" state and that package is considered broken. The sudo apt-get install -f command tries to fix this broken package by installing the missing dependency.
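A compact recap of the three routes described above, with placeholder file names; installing a local file directly with apt needs a reasonably recent apt, which is also why the argument must look like a path (e.g. ./name.deb):
# method 1: dpkg first, then let apt repair dependencies
sudo dpkg -i ./name.deb
sudo apt-get install -f
# method 2: newer apt can handle the local file directly
sudo apt install ./name.deb
# method 3: gdebi also has a command-line mode
sudo apt-get install gdebi-core
sudo gdebi ./name.deb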
{ "source": [ "https://unix.stackexchange.com/questions/159094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
159,123
I'm currently writing a program that prints to a Zebra printer. Because my office doesn't have a zebra printer, we print to a linux VM running netcat with nc -k -l -p 9100 | tee labels.txt so that we can view the output to the printer and verify correctness. Unfortunately, this file gets pretty big and takes up a lot of space on the VM, especially because no one ever remembers to clear it. Using tee seems to be a good option for writing to a file, but it isn't very featured in the way I'd desire. I'd like for the label.txt to only grow to a certain size (say 20 MB), at which point it begins overwriting itself. Or perhaps renames label.txt to label.txt.1 , allowing label.txt to grow and then overwriting label.txt.1 . Is there any way to do this with netcat / tee ? Or should I be looking at another program?
{ "source": [ "https://unix.stackexchange.com/questions/159123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73386/" ] }
159,191
I am trying to set up KVM on an Ubuntu 14.04 host machine. I use a wireless interface to access the internet on my machine. I have set up the wireless interface in my /etc/network/interfaces as below. auto wlan0 iface wlan0 inet static address 192.168.1.9 netmask 255.255.255.0 gateway 192.168.1.1 wpa-ssid My_SSID wpa-psk SSID_Password dns-nameservers 8.8.8.8 dns-search lan dns-domain lan I checked whether my machine is capable of virtualization, and this command confirms that my hardware supports it. egrep '(vmx|svm)' /proc/cpuinfo I installed the necessary packages for KVM virtualization as below. apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder I also installed the bridge-utils package to configure a bridged network for my KVM. apt-get install bridge-utils I modified my /etc/network/interfaces to allow the bridged network as below. auto br0 iface br0 inet static address 192.168.1.40 network 192.168.1.0 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.1 dns-nameservers 8.8.8.8 dns-search lan dns-domain lan bridge_ports wlan0 bridge_stp off bridge_fd 0 bridge_maxwait 0 wpa-ssid my_ssid wpa-psk ssid_password After the above step, I am able to ping 192.168.1.40 and I can also see br0 and virbr0 listed in the output of the ifconfig -a command. I am also able to access the internet without any problem over my wireless interface. However, if I then try to add another OS using the ubuntu-vm-builder command, I am not able to add a new OS. This is the command I use to add a new OS. sudo ubuntu-vm-builder kvm trusty \ --domain rameshpc \ --dest demo1 \ --hostname demo1 \ --arch amd64 \ --mem 1024 \ --cpus 4 \ --user ladmin \ --pass password \ --bridge br0 \ --ip 192.168.1.40 \ --mask 255.255.255.0 \ --net 192.168.1.0 \ --bcast 192.168.1.255 \ --gw 192.168.1.1 \ --dns 8.8.8.8 \ --components main,universe \ --addpkg acpid \ --addpkg openssh-server \ --addpkg linux-image-generic \ --libvirt qemu:///system I have seen that setting up a bridged network using a wireless interface is quite complicated, as discussed in this question. However, as the answer describes, it is possible using a tunneling device. I have tried the option as suggested in this link, but I couldn't get it to work.
As someone once rightly said, nothing is impossible in Linux™: I could get KVM working on my host with a bridged network over a wireless interface. These are the steps I followed to accomplish it. I installed the virt-manager package to manage the installation more easily. I installed it as below. sudo apt-get install virt-manager Now, create a new sub-network using Virt Manager’s GUI as highlighted below. This is basically a sub-network of the existing host network. After setting up this new sub-network, check that the network is available and ping some sites to check the network connectivity. Also, check the routing information using the route command and make sure wlan0 and virbr2 don't have the same destination. Now, the final step to make it work is to issue the below command. Here 192.168.1.9 is the host machine's address. arp -i wlan0 -Ds 192.168.1.9 wlan0 pub After the above step, I was able to successfully install a Fedora guest OS using virt-manager. References http://specman1.wordpress.com/2014/01/02/wireless-bridging-virtual-machines-kvm/ https://superuser.com/questions/694929/wireless-bridge-on-kvm-virtual-machine
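One caveat about the arp step above: an entry added with arp by hand does not survive a reboot. A possible way to make it persistent (my own untested suggestion, not part of the original steps) is to re-issue it at boot, for example from /etc/rc.local on Ubuntu 14.04:
# /etc/rc.local, anywhere before the final "exit 0" line
arp -i wlan0 -Ds 192.168.1.9 wlan0 pub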
{ "source": [ "https://unix.stackexchange.com/questions/159191", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
159,221
Executing journalctl under a CentOS 7 system just prints messages generated after the last boot. The command # journalctl --boot=-1 prints Failed to look up boot -1: Cannot assign requested address and exits with status 1. Comparing it to a current Fedora system I notice that the CentOS 7 does not have /var/log/journal (and journalctl does not provide --list-boots ). Thus my question how to display log messages which were written before the last boot date. Or, perhaps this functionality has to be enabled on CentOS 7? (The journalctl man page lists 'systemd 208' as version number.)
tl;dr On CentOS 7, you have to enable the persistent storage of log messages: # mkdir /var/log/journal # systemd-tmpfiles --create --prefix /var/log/journal # systemctl restart systemd-journald Otherwise, the journal log messages are not retained between boots. Details Whether journald retains log messages from previous boots is configured via /etc/systemd/journald.conf . The default setting under CentOS 7 is: [Journal] Storage=auto Where the journald.conf man page explains auto as: One of "volatile", "persistent", "auto" and "none". If "volatile", journal log data will be stored only in memory, i.e. below the /run/log/journal hierarchy (which is created if needed). If "persistent", data will be stored preferably on disk, i.e. below the /var/log/journal hierarchy (which is created if needed), with a fallback to /run/log/journal (which is created if needed), during early boot and if the disk is not writable. " auto " is similar to "persistent" but the directory /var/log/journal is not created if needed, so that its existence controls where log data goes . (emphasis mine) The systemd-journald.service man page thus states that: By default, the journal stores log data in /run/log/journal/. Since /run/ is volatile, log data is lost at reboot. To make the data persistent, it is sufficient to create /var/log/journal/ where systemd-journald will then store the data. Apparently, the default was changed in Fedora 19 (to persistent storage) and since CentOS 7 is derived from Fedora 18, it is still non-persistent there by default. Out of the box, persistence is instead provided outside of journald via /var/log/messages and the rotated versions /var/log/messages-YYYYMMDD , which are written by rsyslogd (which runs by default and gets its input from journald). Thus, to enable persistent logging with journald under RHEL/CentOS 7 one has to # mkdir /var/log/journal and then fix permissions and restart journald, e.g. via # systemd-tmpfiles --create --prefix /var/log/journal # systemctl restart systemd-journald
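Equivalently, instead of relying on the existence of the directory, the choice can be made explicit in the configuration file; a short sketch with the same effect:
# /etc/systemd/journald.conf
[Journal]
Storage=persistent
# then restart the daemon so it picks the setting up
systemctl restart systemd-journald
With Storage=persistent, journald creates /var/log/journal itself if it is missing, and once at least one reboot has happened, messages from earlier boots become reachable with journalctl --boot=-1 .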
{ "source": [ "https://unix.stackexchange.com/questions/159221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
159,253
I want to decode URL encoding. Is there any built-in tool for doing this, or could anyone provide me with sed code that will do it? I did search a bit through unix.stackexchange.com and on the internet, but I couldn't find any command-line tool for decoding URL encoding. What I want to do is simply edit a txt file in place so that: %21 becomes ! %23 becomes # %24 becomes $ %26 becomes & %27 becomes ' %28 becomes ( %29 becomes ) And so on.
Found these Python one liners that do what you want: Python2 $ alias urldecode='python -c "import sys, urllib as ul; \ print ul.unquote_plus(sys.argv[1])"' $ alias urlencode='python -c "import sys, urllib as ul; \ print ul.quote_plus(sys.argv[1])"' Python3 $ alias urldecode='python3 -c "import sys, urllib.parse as ul; \ print(ul.unquote_plus(sys.argv[1]))"' $ alias urlencode='python3 -c "import sys, urllib.parse as ul; \ print (ul.quote_plus(sys.argv[1]))"' Example $ urldecode 'q+werty%3D%2F%3B' q werty=/; $ urlencode 'q werty=/;' q+werty%3D%2F%3B References Urlencode and urldecode from a command line
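To apply this to a whole text file in place, as the question asks, one option (my own addition, not part of the original one-liners) is to run the same urllib call over the file's contents; note that unquote_plus also turns + into a space, so use ul.unquote instead if literal plus signs must survive:
$ python3 -c "import sys, urllib.parse as ul; sys.stdout.write(ul.unquote_plus(sys.stdin.read()))" < encoded.txt > decoded.txt && mv decoded.txt encoded.txt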
{ "source": [ "https://unix.stackexchange.com/questions/159253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
159,273
I am seeing hundreds of different connections to the same ip and port scrolling by when running nethogs. Occasionally the foreign IP and port will change (not always 80, but sometimes). I've noticed that my router CPU usage jumps to 100% when these huge bursts of connections happen, so I'm fairly certain that this massive spike keeps overloading the router and essentially making my network useless for up to a full 60 seconds. Things I've tried: sudo netstat -tulpn | grep $whateverip : nothing sudo netstat --inet -ap | grep $whateverip : nothing sudo lsof -i | grep $whateverport : by the time this finishes, the port and IP have changed again This may just be paranoia, but I swear it seems like every time I try to dig into more info on the connection, the port and IP change, so my command gives me nothing. Am I dealing with something evil living inside my server? Or is there some more benign explanation that I'm missing in my limited networking knowledge? Also note that this is an Ubuntu server with no UI, so it's not me chasing around someone just browsing reddit.
{ "source": [ "https://unix.stackexchange.com/questions/159273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86680/" ] }
159,278
In the directory /home/in I have files like this: crust.MC12345.txt crust.etcMC12345.txt crust.MC23456.txt crust.etcMC23456.txt crust.etctcMC23456.txt I only need to move crust.etcMC12345.txt and crust.etcMC23456.txt to another dir, /home/out . What is the pattern I use in the mv command for the above scenario?
If I understand your question correctly, the answer is very simple: mv crust.etcMC* /home/out or, if etc is not a literal string but for example any three characters, then: mv crust.???MC* /home/out
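Before actually moving anything, it can be worth previewing what the pattern matches; with the listing from the question, this should show only the two wanted files:
$ ls crust.etcMC*
$ mv crust.etcMC* /home/out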
{ "source": [ "https://unix.stackexchange.com/questions/159278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86683/" ] }
159,367
I know this question has probably been answered before. I have seen many threads about this in various places, but the answers are usually hard for me to extract. I am looking for help with an example usage of the 'sed' command. Say I wanted to act upon the file "hello.txt" (in the same directory as the prompt). Anywhere it contained the phrase "few", it should be changed to "asd". What would the command look like?
sed is the stream editor , in that you can use | (pipe) to send standard streams (STDIN and STDOUT specifically) through sed and alter them programmatically on the fly, making it a handy tool in the Unix philosophy tradition; but it can edit files directly, too, using the -i parameter mentioned below. Consider the following : sed -i -e 's/few/asd/g' hello.txt s/ is used to substitute the found expression few with asd : The few, the brave. The asd, the brave. /g stands for "global", meaning to do this for the whole line. If you leave off the /g (with s/few/asd/ , there always needs to be three slashes no matter what) and few appears twice on the same line, only the first few is changed to asd : The few men, the few women, the brave. The asd men, the few women, the brave. This is useful in some circumstances, like altering special characters at the beginnings of lines (for instance, replacing the greater-than symbols some people use to quote previous material in email threads with a horizontal tab while leaving a quoted algebraic inequality later in the line untouched), but in your example where you specify that anywhere few occurs it should be replaced, make sure you have that /g . The example uses two options (flags), -i and -e : The -i option is used to edit in place on the file hello.txt . The -e option indicates the expression/command to run, in this case s/ . Note: It's important that you use -i -e to search/replace. If you do -ie , you create a backup of every file with the letter 'e' appended.
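If you do want a backup, GNU sed lets you glue a suffix directly onto -i (a deliberate suffix, unlike the accidental -ie pitfall in the note above):
sed -i.bak -e 's/few/asd/g' hello.txt
hello.txt is edited in place and hello.txt.bak keeps the original content.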
{ "source": [ "https://unix.stackexchange.com/questions/159367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86730/" ] }
159,371
I'm using SteamOS. SteamOS I believe is Debian based. I wiped the laptop and got it installed nicely. When I started moving my music over I got this message: I assume, I need to make some sort of partition larger but I haven't been able to figure out how to do that? As requested: desktop@steamos:~$ sudo fdisk -l WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x116c49cc Device Boot Start End Blocks Id System /dev/sda1 1 1953525167 976762583+ ee GPT Partition 1 does not start on physical sector boundary. desktop@steamos:~$ df -h Filesystem Size Used Avail Use% Mounted on rootfs 9.3G 8.8G 27M 100% / udev 10M 0 10M 0% /dev tmpfs 739M 360K 739M 1% /run /dev/disk/by-uuid/12742cc0-e489-472e-aa10-974d078d98e0 9.3G 8.8G 27M 100% / tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 3.4G 25M 3.4G 1% /run/shm /dev/sda5 889G 119M 843G 1% /boot /dev/sda1 487M 128K 486M 1% /boot/efi /dev/sda3 9.3G 1.5G 7.4G 17% /boot/recovery desktop@steamos:~$
{ "source": [ "https://unix.stackexchange.com/questions/159371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86731/" ] }
159,462
I know that all of them are unit files, but I can't understand the special meaning of them. I think that targets are similar to daemons and sockets are the same as socket (IP + port) but also with inode numbers. Could anyone please explain them in simple words?
Service units: A unit configuration file whose name ends in .service encodes information about a process controlled and supervised by systemd. — systemd.service(5) Systemd service units are the units that actually execute and keep track of programs and daemons, and dependencies are used to make sure that services are started in the right order. They are the most commonly used type of units. Socket units: A unit configuration file whose name ends in ".socket" encodes information about an IPC or network socket or a file system FIFO controlled and supervised by systemd, for socket-based activation. — systemd.socket(5) Socket units on the other hand don't actually start daemons on their own. Instead, they just sit there and listen on an IP address and a port, or a UNIX domain socket, and when something connects to it, the daemon that the socket is for is started and the connection is handed to it. This is useful for making sure that big daemons that take up a lot of resources but are rarely used aren't running and taking up resources all the time, but instead they are only started when needed. Target units: A unit configuration file whose name ends in ".target" encodes information about a target unit of systemd, which is used for grouping units and as well-known synchronization points during start-up. — systemd.target(5) Targets are used for grouping and ordering units. They are somewhat of a rough equivalent to runlevels in that at different targets, different services, sockets, and other units are started. Unlike runlevels, they are much more free-form and you can easily make your own targets for ordering units, and targets have dependencies among themselves. For instance, multi-user.target is what most daemons are grouped under, and it requires basic.target to be activated, which means that all services grouped under basic.target will be started before the ones in multi-user.target .
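As a rough sketch of how a socket unit and a service unit fit together (all the example.* names and the port are made up for illustration, and the daemon is assumed to understand socket activation, i.e. to accept the listening socket that systemd passes to it):
# /etc/systemd/system/example.socket
[Unit]
Description=Example socket
[Socket]
ListenStream=12345
[Install]
WantedBy=sockets.target
# /etc/systemd/system/example.service
[Unit]
Description=Example daemon
[Service]
ExecStart=/usr/local/bin/example-daemon
After systemctl enable example.socket and systemctl start example.socket , the first connection to port 12345 starts example.service. A target, by contrast, usually contains little more than a [Unit] section; other units attach themselves to it through WantedBy= or Requires=.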
{ "source": [ "https://unix.stackexchange.com/questions/159462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72520/" ] }
159,489
When you want to run multiple commands you can use ; , && and | , like this: killall Finder; killall SystemUIServer , cd ~/Desktop/ && rm Caches Or: man grep | man cat for example. But is there a difference between | , ; and && ? If so, what is the difference?
; : commands separated by a ; are executed sequentially. The shell waits for each command to terminate in turn. && : the command after && is executed if, and only if, the command before && returns an exit status of zero. You can think of it as an AND operator. | : a pipe. In the expression command1 | command2 the standard output of command1 is connected via a pipe to the standard input of command2. There are more similar control operators worth mentioning: || : the command after || is executed if, and only if, the command before || returns a non-zero exit status. You can think of it as an OR operator. Please note that | and || are completely different animals. & : the shell executes the command terminated by & in the background, does not wait for the command to finish and immediately returns exit code 0. Once again, & has nothing to do with && . |& : a shorthand for 2>&1 | , i.e. both standard output and standard error of command1 are connected to command2's standard input through the pipe. Additionally, if you use zsh then you can also terminate a command with &| or &! . In this case the job is immediately disowned; after startup it does not have a place in the job table.
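A few combined examples to make the differences concrete:
$ mkdir /tmp/demo && cd /tmp/demo     # cd runs only if mkdir succeeded
$ grep -q root /etc/passwd || echo "no root entry"   # echo runs only if grep failed
$ sleep 30 & echo "not waiting for sleep"            # & puts sleep in the background
$ false; echo "runs regardless of the exit status"   # ; ignores the first command's result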
{ "source": [ "https://unix.stackexchange.com/questions/159489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
159,513
I often see tutorials online that connect various commands with different symbols. For example: command1 | command2 command1 & command2 command1 || command2 command1 && command2 Others seem to be connecting commands to files: command1 > file1 command1 >> file1 What are these things? What are they called? What do they do? Are there more of them? Meta thread about this question.
These are called shell operators and yes, there are more of them. I will give a brief overview of the most common among the two major classes, control operators and redirection operators , and how they work with respect to the bash shell. A. Control operators POSIX definition In the shell command language, a token that performs a control function. It is one of the following symbols: & && ( ) ; ;; <newline> | || And |& in bash. A ! is not a control operator but a Reserved Word . It becomes a logical NOT [negation operator] inside Arithmetic Expressions and inside test constructs (while still requiring a space delimiter). A.1 List terminators ; : Will run one command after another has finished, irrespective of the outcome of the first. command1 ; command2 First command1 is run, in the foreground, and once it has finished, command2 will be run. A newline that isn't in a string literal or after certain keywords is not equivalent to the semicolon operator. A list of ; delimited simple commands is still a list - as in the shell's parser must still continue to read in the simple commands that follow a ; delimited simple command before executing, whereas a newline can delimit an entire command list - or list of lists. The difference is subtle, but complicated: given the shell has no previous imperative for reading in data following a newline, the newline marks a point where the shell can begin to evaluate the simple commands it has already read in, whereas a ; semi-colon does not. & : This will run a command in the background, allowing you to continue working in the same shell. command1 & command2 Here, command1 is launched in the background and command2 starts running in the foreground immediately, without waiting for command1 to exit. A newline after command1 is optional. A.2 Logical operators && : Used to build AND lists, it allows you to run one command only if another exited successfully. command1 && command2 Here, command2 will run after command1 has finished and only if command1 was successful (if its exit code was 0). Both commands are run in the foreground. This command can also be written if command1 then command2 else false fi or simply if command1; then command2; fi if the return status is ignored. || : Used to build OR lists, it allows you to run one command only if another exited unsuccessfully. command1 || command2 Here, command2 will only run if command1 failed (if it returned an exit status other than 0). Both commands are run in the foreground. This command can also be written if command1 then true else command2 fi or in a shorter way if ! command1; then command2; fi . Note that && and || are left-associative; see Precedence of the shell logical operators &&, || for more information. ! : This is a reserved word which acts as the “not” operator (but must have a delimiter), used to negate the return status of a command: return 0 if the command returns a nonzero status, return 1 if it returns the status 0. Also a logical NOT for the test utility. ! command1 [ ! a = a ] And a true NOT operator inside Arithmetic Expressions: $ echo $((!0)) $((!23)) 1 0 A.3 Pipe operator | : The pipe operator, it passes the output of one command as input to another. A command built from the pipe operator is called a pipeline . command1 | command2 Any output printed by command1 is passed as input to command2 . |& : This is a shorthand for 2>&1 | in bash and zsh. It passes both standard output and standard error of one command as input to another.
command1 |& command2 A.4 Other list punctuation ;; is used solely to mark the end of a case statement . Ksh, bash and zsh also support ;& to fall through to the next case and ;;& (not in ATT ksh) to go on and test subsequent cases. ( and ) are used to group commands and launch them in a subshell. { and } also group commands, but do not launch them in a subshell. See this answer for a discussion of the various types of parentheses, brackets and braces in shell syntax. B. Redirection Operators POSIX definition of Redirection Operator In the shell command language, a token that performs a redirection function. It is one of the following symbols: < > >| << >> <& >& <<- <> These allow you to control the input and output of your commands. They can appear anywhere within a simple command or may follow a command. Redirections are processed in the order they appear, from left to right. < : Gives input to a command. command < file.txt The above will execute command on the contents of file.txt . <> : same as above, but the file is open in read+write mode instead of read-only : command <> file.txt If the file doesn't exist, it will be created. That operator is rarely used because commands generally only read from their stdin, though it can come in handy in a number of specific situations . > : Directs the output of a command into a file. command > out.txt The above will save the output of command as out.txt . If the file exists, its contents will be overwritten and if it does not exist it will be created. This operator is also often used to choose whether something should be printed to standard error or standard output : command >out.txt 2>error.txt In the example above, > will redirect standard output and 2> redirects standard error. Output can also be redirected using 1> but, since this is the default, the 1 is usually omitted and it's written simply as > . So, to run command on file.txt and save its output in out.txt and any error messages in error.txt you would run: command < file.txt > out.txt 2> error.txt >| : Does the same as > , but will overwrite the target, even if the shell has been configured to refuse overwriting (with set -C or set -o noclobber ). command >| out.txt If out.txt exists, the output of command will replace its content. If it does not exist it will be created. >> : Does the same as > , except that if the target file exists, the new data are appended. command >> out.txt If out.txt exists, the output of command will be appended to it, after whatever is already in it. If it does not exist it will be created. >& : (per POSIX spec) when surrounded by digits ( 1>&2 ) or - on the right side ( 1>&- ) either redirects only one file descriptor or closes it ( >&- ). A >& followed by a file descriptor number is a portable way to redirect a file descriptor, and >&- is a portable way to close a file descriptor. If the right side of this redirection is a file, please read the next entry. >& , &> , >>& and &>> : (read above also) Redirect both standard error and standard output, replacing or appending, respectively. command &> out.txt Both standard error and standard output of command will be saved in out.txt , overwriting its contents or creating it if it doesn't exist. command &>> out.txt As above, except that if out.txt exists, the output and error of command will be appended to it. The &> variant originates in bash , while the >& variant comes from csh (decades earlier). They both conflict with other POSIX shell operators and should not be used in portable sh scripts. << : A here document.
It is often used to print multi-line strings. command << WORD Text WORD Here, command will take everything until it finds the next occurrence of WORD , Text in the example above, as input . While WORD is often EoF or variations thereof, it can be any alphanumeric (and not only) string you like. When any part of WORD is quoted or escaped, the text in the here document is treated literally and no expansions are performed (on variables for example). If it is unquoted, variables will be expanded. For more details, see the bash manual . If you want to pipe the output of command << WORD ... WORD directly into another command or commands, you have to put the pipe on the same line as << WORD , you can't put it after the terminating WORD or on the line following. For example: command << WORD | command2 | command3... Text WORD <<< : Here strings, similar to here documents, but intended for a single line. These exist only in the Unix port of rc (where it originated), zsh, some implementations of ksh, yash and bash. command <<< WORD Whatever is given as WORD is expanded and its value is passed as input to command . This is often used to pass the content of variables as input to a command. For example: $ foo="bar" $ sed 's/a/A/' <<< "$foo" bAr # as a short-cut for the standard: $ printf '%s\n' "$foo" | sed 's/a/A/' bAr # or sed 's/a/A/' << EOF $foo EOF A few other operators ( >&- , x>&y x<&y ) can be used to close or duplicate file descriptors. For details on them, please see the relevant section of your shell's manual ( here for instance for bash). That only covers the most common operators of Bourne-like shells. Some shells have a few additional redirection operators of their own. Ksh, bash and zsh also have constructs <(…) , >(…) and =(…) (the latter one in zsh only). These are not redirections, but process substitution .
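A short combined illustration of the redirection operators described above, using throwaway file names:
$ grep pattern < input.txt > matches.txt 2> errors.txt   # separate stdout and stderr
$ make > build.log 2>&1                                  # send both to the same file
$ wc -l <<< "a single here string"                       # here string fed to stdin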
{ "source": [ "https://unix.stackexchange.com/questions/159513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }