source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
113,827 | Buildroot is generating images for an embedded device where they should run. This is working very well. In those images, the rootfs is included. Due to some research, I'd like to look into that generated file (e.g. different compression modes set by the Buildroot were applied and now shall be checked if they were correctly done), but I can't find something useful in the Net. As far as I know, the difference between a uImage and zImage is just a small header, so u-boot is able to read that binary file. But I can open neither uImage nor the zImage. Can anyone give me a hint of how to decompress those (u/z)Images on the host? | mkimage -l uImage Will dump the information in the header. tail -c+65 < uImage > out Will get the content. tail -c+65 < uImage | gunzip > out will get it uncompressed if it was gzip-compressed. If that was an initramfs, you can do cpio -t < out or pax < out to list the content. If it's a ramdisk image, you can try and mount it with: mount -ro loop out /mnt file out could tell you more about what it is. | {
"source": [
"https://unix.stackexchange.com/questions/113827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56989/"
]
} |
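A minimal shell sketch that ties the steps from the answer above together (it assumes the image file is named uImage and carries the standard 64-byte u-boot header; adjust names to your actual Buildroot output):
# show the u-boot header (image type, compression, load address)
mkimage -l uImage
# strip the 64-byte header to get the raw payload
tail -c+65 uImage > payload.bin
# let file guess what the payload is (gzip data, cpio archive, ext2 image, ...)
file payload.bin
# if it is gzip-compressed, uncompress it and inspect again
gunzip -c payload.bin > payload.raw 2>/dev/null && file payload.raw
# if the result is an initramfs (cpio archive), list its contents
cpio -t < payload.raw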
113,893 | I tried to run the following: $ vlc -I dummy v4l2:///dev/video0 --video-filter scene --no-audio --scene-path webcam.png --scene-prefix image_prefix --scene-format png vlc://quit --run-time=1
VLC media player 2.0.7 Twoflower (revision 2.0.6-54-g7dd7e4d)
[0x1f4a1c8] dummy interface: using the dummy interface module...
[0x7fc19c001238] v4l2 demux error: VIDIOC_STREAMON failed
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
libv4l2: error setting pixformat: Device or resource busy
[0x7fc19c007f18] v4l2 access error: cannot set input 0: Device or resource busy
[0x7fc19c007f18] v4l2 access error: cannot set input 0: Device or resource busy
[0x7fc1a4000b28] main input error: open of `v4l2:///dev/video0' failed
[0x7fc1a4000b28] main input error: Your input can't be opened
[0x7fc1a4000b28] main input error: VLC is unable to open the MRL 'v4l2:///dev/video0'. Check the log for details.
[0x7fc19c007cc8] idummy demux: command `quit' So I'm assuming that there is a program currently accessing my webcam, which is cumbersome since its light is off and lsof | grep /dev/video returns nothing. Is there another, proper way to check what processes are currently using my webcam? Or is the problem of an entirely different nature? | I was having the same problem and the solution at http://www.theoutpost.org/8-nslu2/open-devvideo0-device-or-resource-busy/ (EDIT: url updated) helped me. $ fuser /dev/video0
/dev/video0: 1871m
$ ps axl | grep 1871
$ kill -9 1871 | {
"source": [
"https://unix.stackexchange.com/questions/113893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4881/"
]
} |
113,895 | I use Red Hat Enterprise at my school and I want to compile some C++11 code, but gcc --version reports 4.4.7, which does not support it.
Is that the latest version in the Red Hat repositories?
is there any compiler that can be installed quickly using yum and that supports c++11? | I was having the same problem and the solution at http://www.theoutpost.org/8-nslu2/open-devvideo0-device-or-resource-busy/ (EDIT: url updated) helped me. $ fuser /dev/video0
/dev/video0: 1871m
$ ps axl | grep 1871
$ kill -9 1871 | {
"source": [
"https://unix.stackexchange.com/questions/113895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54176/"
]
} |
113,898 | I have a file1 like this: 0 AFFX-SNP-000541 NA
0 AFFX-SNP-002255 NA
1 rs12103 0.6401
1 rs12103_1247494 0.696
1 rs12142199 0.7672 And a file2: 0 AFFX-SNP-000541 1
0 AFFX-SNP-002255 1
1 rs12103 0.5596
1 rs12103_1247494 0.5581
1 rs12142199 0.4931 And would like a file3 such that: 0 AFFX-SNP-000541 NA 1
0 AFFX-SNP-002255 NA 1
1 rs12103 0.6401 0.5596
1 rs12103_1247494 0.696 0.5581
1 rs12142199 0.7672 0.4931 That is, append the value column of file2 to file1 as a 4th column, matching lines by the name in the 2nd column. | This should do it: join -j 2 -o 1.1,1.2,1.3,2.3 file1 file2 Important : this assumes your files are sorted (as in your example) according to the SNP name. If they are not, sort them first: join -j 2 -o 1.1,1.2,1.3,2.3 <(sort -k2 file1) <(sort -k2 file2) Output: 0 AFFX-SNP-000541 NA 1
0 AFFX-SNP-002255 NA 1
1 rs12103 0.6401 0.5596
1 rs12103_1247494 0.696 0.5581
1 rs12142199 0.7672 0.4931 Explanation (from info join ): `join' writes to standard output a line for each pair of input lines
that have identical join fields. `-1 FIELD'
Join on field FIELD (a positive integer) of file 1.
`-2 FIELD'
Join on field FIELD (a positive integer) of file 2.
`-j FIELD'
Equivalent to `-1 FIELD -2 FIELD'.
`-o FIELD-LIST'
Otherwise, construct each output line according to the format in
FIELD-LIST. Each element in FIELD-LIST is either the single
character `0' or has the form M.N where the file number, M, is `1'
or `2' and N is a positive field number. So, the command above joins the files on the second field and prints the 1st,2nd and 3rd field of file one, followed by the 3rd field of file2. | {
"source": [
"https://unix.stackexchange.com/questions/113898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59781/"
]
} |
113,981 | Our system crashed and we're trying to recover our data. The disc is fine, but the OS is gone, so I'm trying to get at the actual MySQL database files. Does anybody know where to look for them in a Debian Linux server? | MySQL stores DB files in /var/lib/mysql by default, but you can override this in the configuration file, typically called /etc/my.cnf , although Debian calls it /etc/mysql/my.cnf . | {
"source": [
"https://unix.stackexchange.com/questions/113981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59660/"
]
} |
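A short hedged sketch for the recovery scenario above; the mount point /mnt/olddisk is hypothetical and stands for wherever the old disk has been attached:
# confirm where the old installation kept its data files
grep -R "^datadir" /mnt/olddisk/etc/mysql/ /mnt/olddisk/etc/my.cnf 2>/dev/null
# default location if no override was configured
ls /mnt/olddisk/var/lib/mysql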
114,022 | I am running the latest version of Linux Mint with Cinnamon. I'm trying to map Caps Lock to Ctrl , but I cannot figure out how to do it. All web searches I've done have led me to older versions of Linux Mint (there is no keyboard layout option in my settings). How can I do this? | You can accomplish this with xmodmap . Add the following to ~/.xmodmap : remove Lock = Caps_Lock
keysym Caps_Lock = Control_L
add Control = Control_L and run the command xmodmap ~/.xmodmap . You can put this command in whatever startup files are processed by your WM or DE. This will only work in X but it will work regardless of what graphical environment you use. | {
"source": [
"https://unix.stackexchange.com/questions/114022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45483/"
]
} |
114,041 | I like to use tac to reverse the output of cat . However, it's not available in the Mavericks terminal. I tried to find it on MacPorts and again it's not available. Can anyone please show me how to get tac ? It's very helpful for reading log files. | Yes: Install Homebrew brew install coreutils ln -s /usr/local/bin/gtac /usr/local/bin/tac apparently not needed with latest Homebrew, see comment by Ran Ever-Hadani below or use MacPorts to install coreutils in a similar way. | {
"source": [
"https://unix.stackexchange.com/questions/114041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34121/"
]
} |
114,121 | I have a few functions defined in this fashion: function f {
read and process $1
...
echo $result
} I want to compose them together so that the invocation would look like f | g | h . What idiom should I use to convert a function working on arguments into one reading arguments from stdin? Is it possible to read pairs or tuples of arguments from the stream without needing to escape them (e.g. null-terminating)? | One potential approach would be to put a while...read construct inside your functions which would process any data that came into the function through STDIN, operate on it, and then emit the resulting data back out via STDOUT. function X {
while read data; do
...process...
done
} Care will need to be spent with how you configure your while ..read.. components since they'll be highly dependent on the types of data they'll be able to reliably consume. There may be an optimal configuration that you can come up with. Example $ logF() { while read data; do echo "[F:$(date +"%D %T")] $data"; done; }
$ logG() { while read data; do echo "G:$data"; done; }
$ logH() { while read data; do echo "H:$data"; done; } Here's each function by itself. $ echo "hi" | logF
[F:02/07/14 20:01:11] hi
$ echo "hi" | logG
G:hi
$ echo "hi" | logH
H:hi Here they are when we use them together. $ echo "hi" | logF | logG | logH
H:G:[F:02/07/14 19:58:18] hi
$ echo -e "hi\nbye" | logF | logG | logH
H:G:[F:02/07/14 19:58:22] hi
H:G:[F:02/07/14 19:58:22] bye They can take various styles of input. #-- ex. #1
$ cat <<<"some string of nonsense" | logF | logG | logH
H:G:[F:02/07/14 20:03:47] some string of nonsense
#-- ex. #2
$ (logF | logG | logH) <<<"Here comes another string."
H:G:[F:02/07/14 20:04:46] Here comes another string.
#-- ex. #3
$ (logF | logG | logH)
Look I can even
H:G:[F:02/07/14 20:05:19] Look I can even
type to it
H:G:[F:02/07/14 20:05:23] type to it
live
H:G:[F:02/07/14 20:05:25] live
via STDIN
H:G:[F:02/07/14 20:05:29] via STDIN
..type Ctrl + D to stop..
#-- ex. #4
$ seq 5 | logF | logG | logH
H:G:[F:02/07/14 20:07:40] 1
H:G:[F:02/07/14 20:07:40] 2
H:G:[F:02/07/14 20:07:40] 3
H:G:[F:02/07/14 20:07:40] 4
H:G:[F:02/07/14 20:07:40] 5
#-- ex. #5
$ (logF | logG | logH) < <(seq 2)
H:G:[F:02/07/14 20:15:17] 1
H:G:[F:02/07/14 20:15:17] 2 | {
"source": [
"https://unix.stackexchange.com/questions/114121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57402/"
]
} |
114,140 | I was using the following command curl -silent http://api.openstreetmap.org/api/0.6/relation/2919627 http://api.openstreetmap.org/api/0.6/relation/2919628 | grep node | awk '{print $3}' | uniq when I wondered why uniq wouldn't remove the duplicates. Any idea why ? | You have to sort the output in order for the uniq command to be able to work. See the man page: Filter adjacent matching lines from INPUT (or standard input), writing to OUTPUT (or standard output). So you can pipe the output into sort first and then uniq it. Or you can make use of sort 's ability to perform the sort and unique all together like so: $ ...your command... | sort -u Examples sort | uniq $ cat <(seq 5) <(seq 5) | sort | uniq
1
2
3
4
5 sort -u $ cat <(seq 5) <(seq 5) | sort -u
1
2
3
4
5 Your example $ curl -silent http://api.openstreetmap.org/api/0.6/relation/2919627 http://api.openstreetmap.org/api/0.6/relation/2919628 \
| grep node | awk '{print $3}' | sort -u
ref="1828989762"
ref="1829038636"
ref="1829656128"
ref="1865479751"
ref="451116245"
ref="451237910"
ref="451237911"
ref="451237917"
ref="451237920"
ref="451237925"
ref="451237933"
ref="451237934"
ref="451237941"
ref="451237943"
ref="451237945"
ref="451237947"
ref="451237950"
ref="451237953" | {
"source": [
"https://unix.stackexchange.com/questions/114140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21509/"
]
} |
114,189 | I can't find my sshd logs in the standard places. What I've tried: Not in /var/log/auth.log Not in /var/log/secure Did a system search for 'auth.log' and found nothing I've set /etc/ssh/sshd_config to explicitly use SyslogFacility AUTH and LogLevel INFO and restarted sshd and still can't find them. I'm using OpenSSH 6.5p1-2 on Arch Linux. | Try this command to view the log from systemctl: journalctl -u sshd | tail -n 100 | {
"source": [
"https://unix.stackexchange.com/questions/114189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24697/"
]
} |
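A couple of related journalctl invocations that may help when digging further on systemd-based systems such as Arch (the unit name sshd is taken from the answer above):
# only today's sshd messages
journalctl -u sshd --since today
# follow new log lines as they arrive, similar to tail -f
journalctl -u sshd -f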
114,238 | I want to write some code to allow me switching to some directories that I usually go to. Say this program is mycd , and /a/very/long/path/name is the directory that I want to go to. So I can simply type mycd 2 instead of cd /a/very/long/path/name . Here I assume mycd knows 2 refers to that /a/very/long/path/name . There might also be mycd 1 , mycd 3 , ... etc. The problem is I have to write mycd as a shell script and type . mycd 2 to do the desired thing because otherwise the script just get executed in a child script which doesn't change anything about the parent shell that I actually care about. My question is: can I do it without using source ? because . mycd assumes mycd has to be a shell script and this might also introduce some functions that I don't want. can I implement it in some other programming languages? | make mycd a function so the cd command executes in your current shell. Save it in your ~/.bashrc file. function mycd {
if (( $# == 0 )); then
echo "usage: $FUNCNAME [1|2|3|...]"
return
fi
case $1 in
1) cd /tmp ;;
2) cd /a/very/long/path/name ;;
3) cd /some/where/else ;;
*) echo "unknown parameter" ;;
esac
} | {
"source": [
"https://unix.stackexchange.com/questions/114238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59925/"
]
} |
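A quick usage sketch for the mycd function above (the paths are the hypothetical ones from the answer):
$ source ~/.bashrc          # pick up the newly added function
$ mycd 2
$ pwd
/a/very/long/path/name
$ mycd
usage: mycd [1|2|3|...]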
114,244 | How can I replace all newlines with spaces except the last newline?
I can replace all newlines with spaces using tr, but how can I do it with that exception? | You can use paste -s -d ' ' file.txt : $ cat file.txt
one line
another line
third line
fourth line
$ paste -s -d ' ' file.txt
one line another line third line fourth line | {
"source": [
"https://unix.stackexchange.com/questions/114244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
114,300 | While following an Android Eclipse debug tutorial, I encountered the following commands: cd /path/to/android/root
. build/envsetup.sh
lunch 1
make
emulator My question is: what does the dot before build/envsetup.sh mean? | A dot in that context means to "source" the contents of that file into the current shell; source itself is a shell builtin command, and source and the dot operator are synonyms. Example Say I had the following contents in a sample.sh file. $ cat sample.sh
echo "hi"
echo "bye?" Now when I source it: $ . sample.sh
hi
bye?
$ Files such as this are often used to incorporate setup commands such as adding things to ones environment variables. Examples Say I had these commands in another file, addvars.sh . $ cat addvars.sh
export VAR1="some var1 string"
export VAR2="some var2 string" Notice that I don't have any variables in my current shell's environment. $ env | grep VAR
$ Now when I source this file: $ . addvars.sh
$ OK, doesn't seem like it did anything, but when we check the env variables again: $ env | grep VAR
VAR1=some var1 string
VAR2=some var2 string | {
"source": [
"https://unix.stackexchange.com/questions/114300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2239/"
]
} |
114,359 | Pending an answer to xrandr detects amplifier as monitor a possible workaround is to blacklist devices with specific EDID s. Unfortunately xrandr --verbose prints everything in a format which is cumbersome to parse and doesn't support querying single devices, and get-edid 's output doesn't seem to be easy to map to xrandr 's monitor IDs (for example DVI-1 ). Is there some way to get an easily parseable EDID for a single monitor? | Lead #1: monitor-edid I'm not that up on EDID and monitors but I did find this tool, monitor-edid which might be of use to you here as well. Forgive me if it's off base, I'm trying to also learn more about this space, given the variety of questions you ask on the topic. $ monitor-edid
EISA ID: LEN4036
EDID version: 1.3
EDID extension blocks: 0
Screen size: 30.3 cm x 19.0 cm (14.08 inches, aspect ratio 16/10 = 1.59)
Gamma: 2.2
Digital signal
# Monitor preferred modeline (60.0 Hz vsync, 55.8 kHz hsync, ratio 16/10, 120 dpi)
ModeLine "1440x900" 114.06 1440 1488 1520 2044 900 903 909 930 -hsync -vsync
# Monitor supported modeline (50.0 Hz vsync, 51.8 kHz hsync, ratio 16/10, 120 dpi)
ModeLine "1440x900" 114.06 1440 1488 1520 2204 900 903 909 1035 -hsync -vsync Lead #2: ddccontrol There was another tool that I came across called ddccontrol , which might be helpful in getting the information you're after. Lead #3: /sys Finally in poking through /sys I noticed that there were leaf nodes hanging off of the various video interfaces. $ sudo find /sys |grep -i edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-HDMI-A-1/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-HDMI-A-2/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-HDMI-A-3/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-VGA-1/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-2/edid
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-3/edid
/sys/module/drm/parameters/edid_fixup
/sys/module/drm_kms_helper/parameters/edid_firmware However on my Lenovo laptop these "files" were empty, perhaps they're different on your system. I found this forum thread that showed sample output from the VGA EDID. $ lspci | grep VGA
01:00.0 VGA compatible controller: nVidia Corporation NV17 [GeForce4 440 Go 64M] (rev a3)
$ xxd /sys/devices/pci0000:00/0000:00:0b.0/0000:01:00.0/drm/card0/card0-VGA-1/edid
0000000: 00ff ffff ffff ff00 5a63 0213 0101 0101 ........Zc......
0000010: 2b0a 0103 1c25 1bb0 eb00 b8a0 5749 9b26 +....%......WI.&
0000020: 1048 4cff ff80 8199 8159 714f 6159 4559 .HL......YqOaYEY
0000030: 3159 a94f 0101 863d 00c0 5100 3040 40a0 1Y.O...=..Q.0@@.
0000040: 1300 680e 1100 001e 0000 00ff 0033 3139 ..h..........319
0000050: 3030 3433 3030 3737 330a 0000 00fd 0032 004300773......2
0000060: a01e 6114 000a 2020 2020 2020 0000 00fc ..a... ....
0000070: 0047 3930 6d62 0a20 2020 2020 2020 00ba .G90mb. .. Source: Extract Monitor Serial Number / Manufacture Date Using EDID? . References Monitor-edid The new homepage of read-edid Extended display identification data | {
"source": [
"https://unix.stackexchange.com/questions/114359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
114,447 | ~$ echo $(ls)
arguments.txt cecho.sh Desktop Documents Downloads example.txt Music Pictures Public
~$ echo $(ls "*.txt")
ls: cannot access *.txt: No such file or directory Why does the 2nd subshell command fail? | Remove the quotes around *.txt and it should work. With quotes, the shell will look for the literal filename *.txt . To explore/experiment, try creating a file literally named *.txt with touch '*.txt' and repeat the command. | {
"source": [
"https://unix.stackexchange.com/questions/114447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
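A small demonstration of the difference described above, using the suggested throw-away files:
$ touch a.txt b.txt
$ echo $(ls *.txt)        # unquoted: the shell expands the glob before ls runs
a.txt b.txt
$ echo $(ls "*.txt")      # quoted: ls receives the literal name *.txt
ls: cannot access *.txt: No such file or directory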
114,462 | I was recently browsing my Fedora's /bin folder and noticed a binary named [ .
I did try to search the internet for more information on that, but I couldn't find anything useful. Running it through strace doesn't seem to produce anything useful for closer inspection either. What is that? Should I be alarmed? Could it be the result of a system compromise? Should I run it? Does it belong to any package? | The [ binary residing under the /bin tree in many GNU/Linux distributions is not something to be alarmed about. At least in my Fedora 19 it is part of the coreutils package, as demonstrated below: $ rpm -qf /bin/[
coreutils-8.21-13.fc19.x86_64 and is a synonym for test to allow for expressions like [ expression ] to be written in shell scripts or even interactive usage. | {
"source": [
"https://unix.stackexchange.com/questions/114462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20260/"
]
} |
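A quick illustration that [ behaves like test whether the shell builtin or the on-disk binary is used (the output shown is what bash typically prints; the exact path may differ per distribution):
$ type -a [
[ is a shell builtin
[ is /usr/bin/[
$ [ -d /etc ] && echo "directory exists"
directory exists
$ /usr/bin/[ -d /etc ] && echo "directory exists"
directory exists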
114,505 | Let's say I have several shell "tabs" (or screens? sessions?) named bash1, bash2, etc. open in GNU screen. I want the status bar (i.e., the caption line) to display the names as "bash1 | bash2 | ..", with the currently open tab and the last open tab clearly marked. How do I make this happen with my .screenrc ? | Edit or create (if not present) /etc/screenrc or ( ~/.screenrc ) and add below code autodetach on
startup_message off
hardstatus alwayslastline
shelltitle 'bash'
hardstatus string '%{gk}[%{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= w}%?%+Lw%?%? %{g}][%{d}%l%{g}][ %{= w}%Y/%m/%d %0C:%s%a%{g} ]%{W}' shelltitle 'bash' can be changed once the screen is created. ( Ctrl a + A )
The session name can be changed to SESSIONNAME with :sessionname SESSIONNAME . | {
"source": [
"https://unix.stackexchange.com/questions/114505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60013/"
]
} |
114,645 | FreeRDP 1.0.2 has updated its parameter syntax for " better interoperability with Windows ." I had a problem using the old syntax where the clipboard plugin only worked the first time I pasted, and subsequently stopped: xfreerdp --plugin cliprdr -g 1920x1060 -u Administrator -p xxx n.n.n.n So I decided to try the new syntax, but I can't seem to get it right. The following: xfreerdp +clipboard /size:1920x1060 /u:Administrator /p:xxx /v:n.n.n.n Gives an error: Warning xf_GetWindowProperty (140): Property 385 does not exist
transport_connect: getaddrinfo (Name or service not known)
Error: protocol security negotiation failure Any advice? | The xfreerdp command-line syntax changed in how connections are specified. Try the following example: xfreerdp +clipboard /u:<username> /v:<hostname> /size:<WxH> Also, if it is necessary to connect over a different port, append it to the /v: parameter as /v:<hostname>:<port> (in the new syntax /p: takes the password, not the port). | {
"source": [
"https://unix.stackexchange.com/questions/114645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5655/"
]
} |
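A fuller hedged example based on the question's original values (the host, credentials, and the port 3390 below are placeholders, not values from the source):
xfreerdp +clipboard /size:1920x1060 /u:Administrator /p:xxx /v:n.n.n.n:3390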
114,648 | I am trying to find settings.xml file in my Ubuntu machine. I have no clue where it is, and which directory it is in. I tried using this - ls -R | grep settings.xml But it doesn't show me the full path where it is.. Is there any other command which I need to try that can give me the full path? | For fast search (but not definitive): locate -br '^settings.xml$' From man locate : locate reads one or more databases prepared by updatedb(8) and writes
file names matching at least one of the PATTERNs to standard output,
one per line.
-b, --basename
Match only the base name against the specified patterns. This
is the opposite of --wholename.
-r, --regexp REGEXP
Search for a basic regexp REGEXP. No PATTERNs are allowed if
this option is used, but this option can be specified multiple
times. The ^ and $ ensure that only files whose name is settings.xml and not files whose names contain settings.xml will be printed. You may need for the first time to run: updatedb (as root ) to update/build the database of locate . | {
"source": [
"https://unix.stackexchange.com/questions/114648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48833/"
]
} |
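For a definitive (but slower) search that does not depend on the updatedb database, a find sweep can be used instead; a sketch assuming you may read the whole filesystem:
find / -name settings.xml 2>/dev/null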
114,709 | I'm running a Ubuntu based web server (Apache, MySQL) on a 512MB VPS. This is more than sufficient for the website it is running (small forum). As I wanted to add some protection against viruses I installed ClamAV and use it to scan uploaded files as part of the upload handling script (PHP). I'm running the clamav-daemon service so the definitions don't have to be loaded every time a file is scanned. One downside to this practise seems to be the "huge" amount of memory used by clamav-daemon service: >200 MB. This already resulted in the service being forced to stop and the uploads being rejected. I can simply upgrade the memory of the VPS to 1024MB, but I want to know if there is a way to reduce the memory usage of ClamAV by e.g. not loading unwanted definitions. | ClamAV holds the search strings using the classic string (Boyer Moore) and regular expression (Aho Corasick) algorithms. Being algorithms from the 1970s they are extemely memory efficient. The problem is the huge number of virus signatures. This leads to the algorithms' datastructures growing quite large. You can't send those datastructures to swap, as there are no parts of the algorithms' datastructures accessed less often than other parts. If you do force pages of them to swap disk, then they'll be referenced moments later and just swap straight back in. (Technically we say "the random access of the datastructure forces the entire datastructure to be in the process's working set of memory".) The datastructures are needed if you are scanning from the command line or scanning from a daemon. You can't use just a portion of the virus signatures, as you don't get to choose which viruses you will be sent, and thus can't tell which signatures you will need. Here's the memory used on a 32-bit machine running Debian Wheezy and it's clamd. # ps_mem.py
Private + Shared = RAM used Program
281.7 MiB + 422.5 KiB = 282.1 MiB clamd Edit: I see someone suggests setting the resident set size. If this succeeds then having a resident set size less than the working set size will lead to the process thrashing to and from swap. This will lower the entire system performance substantially. In any case the Linux manual page for setrlimit(RLIMIT_RSS, ...) says that setting the resident set size is no longer supported and never had any effect on processes which chose not to call madvise(MADV_WILLNEED, ...). | {
"source": [
"https://unix.stackexchange.com/questions/114709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60121/"
]
} |
114,878 | I often use the command cat /dev/urandom | strings --bytes 1 | tr -d '\n\t ' | head --bytes 32 to generate pseudo-random passwords. This doesn't work with /dev/random . Specifically cat /dev/urandom | strings --bytes 1 | tr -d '\n\t ' produces output cat /dev/random | strings --bytes 1 produces output cat /dev/random | strings --bytes 1 | tr -d '\n\t ' does not produce output NB: When using /dev/random you may have to wiggle your mouse or press keys (e.g. ctrl, shift, etc.) to generate entropy. Why does the last example not work? Does tr have some kind of large internal buffer that /dev/urandom fills quickly but /dev/random doesn't? P.S. I'm using CentOS 6.5 cat /proc/version
Linux version 2.6.32-431.3.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Jan 3 21:39:27 UTC 2014 | It will eventually. In: cat /dev/random | strings --bytes 1 | tr -d '\n\t ' cat will never buffer, but it's superfluous anyway as there's nothing to concatenate here. < /dev/random strings --bytes 1 | tr -d '\n\t ' strings though, since its output is not longer a terminal will buffer its output by blocks (of something like 4 or 8kB) as opposed to lines when the output goes to a terminal. So it will only start writing to stdout when it has accumulated 4kB worth of characters to output, which on /dev/random is going to take a while. tr output goes to a terminal (if you're running that at a shell prompt in a terminal), so it will buffer its output line-wise. Because you're removing the \n , it will never have a full line to write, so instead, it will write as soon as a full block has been accumulated (like when the output doesn't go to a terminal). So, tr is likely not to write anything until strings has read enough from /dev/random so as to write 8kB (2 blocks possibly much more) of data (since the first block will probably contain some newline or tab or space characters). On this system I'm trying this on, I can get an average of 3 bytes per second from /dev/random (as opposed to 12MiB on /dev/urandom ), so in the best case scenario (the first 4096 bytes from /dev/random are all printable ones), we're talking 22 minutes before tr starts to output anything. But it's more likely going to be hours (in a quick test, I can see strings writing a block every 1 to 2 blocks read, and the output blocks contain about 30% of newline characters, so I'd expect it'd need to read at least 3 blocks before tr has 4096 characters to output). To avoid that, you could do: < /dev/random stdbuf -o0 strings --bytes 1 | stdbuf -o0 tr -d '\n\t ' stdbuf is a GNU command (also found on some BSDs) that alters the stdio buffering of commands via an LD_PRELOAD trick. Note that instead of strings , you can use tr -cd '[:graph:]' which will also exclude tab, newline and space. You may want to fix the locale to C as well to avoid possible future surprises with UTF-8 characters. | {
"source": [
"https://unix.stackexchange.com/questions/114878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18799/"
]
} |
114,908 | I want to convert all *.flac files to *.mp3 in a specific folder. This is what I've tried, but it does not work: # change to the home directory
cd ~/music
# convert all *.flac files
ffmpeg -i *.flac -acodec libmp3lame *.mp3
# (optional: check whether there are any errors printed on the terminal)
sleep 60 How can I achieve this? | Try this: for i in *.flac ; do
ffmpeg -i "$i" -acodec libmp3lame "$(basename "${i/.flac}")".mp3
sleep 60
done | {
"source": [
"https://unix.stackexchange.com/questions/114908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60221/"
]
} |
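A slightly more defensive variant of the loop above, as a sketch: ${i%.flac} strips only the trailing extension, and the sleep from the question is dropped since ffmpeg's exit status already signals errors:
for i in *.flac; do
  ffmpeg -i "$i" -acodec libmp3lame "${i%.flac}.mp3" || echo "failed: $i"
done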
114,943 | Is there an issue with sed and the newline character? I have a file test.txt with the following contents: aaaaa
bbbbb
ccccc
ddddd The following does not work: sed -r -i 's/\n/,/g' test.txt I know that I can use tr for this but my question is why it seems not possible with sed. If this is a side effect of processing the file line by line I would be interested in why this happens. I think grep removes new lines. Does sed do the same? | With GNU sed and provided POSIXLY_CORRECT is not in the environment (for single-line input): sed -i ':a;N;$!ba;s/\n/,/g' test.txt From https://stackoverflow.com/questions/1251999/sed-how-can-i-replace-a-newline-n : create a label via :a append the current and next line to the pattern space via N if we are before the last line, branch to the created label $!ba ( $! means not to do it on the last line (as there should be one final newline)). finally the substitution replaces every newline with a comma on the pattern space (which is the whole file). | {
"source": [
"https://unix.stackexchange.com/questions/114943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
115,245 | I want to install sqldeveloper from Oracle on Arch Linux. The only Linux download option is RPM. I am not interested in using arch repositories to install sqldeveloper. I can only use what the vendor provides. | Jasonwryan (as per usual) was right on the mark with his initial comment . Arch's packages are supposed to be as close to "vanilla" as possible. Now, while you could use rpmextract or alien , there isn't really a good reason to do so. What you should do is create a PKGBUILD that uses the RPM as the source file and then installs everything that's needed where it should be in the package() function. If you are unsure of how to do this, take a look at some packages on the ArchLinux User Repository ; there are plenty that do similar things. Now, since bsdtar (the default extractor used on source files by makepkg ) supports extracting RPMs without issue, there is no reason to use rpmextract —it adds a makedependency without adding any real functionality. Some related reading from the wiki: PKGBUILD s Basic PKGBUILD templates Arch packaging standards | {
"source": [
"https://unix.stackexchange.com/questions/115245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57890/"
]
} |
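A minimal PKGBUILD sketch along the lines described in the answer above; every version, path, and file name below is hypothetical and must be adapted to the actual contents of the vendor RPM:
pkgname=sqldeveloper
pkgver=4.0.0
pkgrel=1
pkgdesc="Oracle SQL Developer, repackaged from the vendor RPM"
arch=('x86_64')
url="https://www.oracle.com/"
license=('custom')
depends=('java-runtime')
source=("sqldeveloper-${pkgver}-1.noarch.rpm")
sha256sums=('SKIP')
package() {
  # makepkg's bsdtar has already unpacked the RPM payload into $srcdir
  mkdir -p "$pkgdir/opt" "$pkgdir/usr/bin"
  cp -r "$srcdir/opt/sqldeveloper" "$pkgdir/opt/"
  ln -s /opt/sqldeveloper/sqldeveloper.sh "$pkgdir/usr/bin/sqldeveloper"
}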
115,276 | I want to write an automated post-installation script in Bash (called post-install.sh , for instance). The script will automatically add and update repositories, install and update packages, edit config files, etc. Now, if I execute this script, for instance with sudo post-install.sh , will I only be prompted for a sudo password once, or will I need to enter the sudo password on each invocation of a command inside the script, that needs sudo permission? In other words, do the commands inside the bash script 'inherit' the execution permissions, so to speak? And, if they indeed do , is there still a possibility that the sudo permissions will time out (if, for instance, a particular command takes long enough to exceed the sudo timeout)? Or will the initial sudo password entrance last for the complete duration of whole script? | Q#1: Will I only be prompted for a sudo password once, or will I need to enter the sudo password on each invocation of a command inside the script, that needs sudo permission? Yes, once, for the duration of the running of your script. NOTE: When you provide credentials to sudo , the authentication is typically good for 5 minutes within the shell where you typed the password. Additionally any child processes that get executed from this shell, or any script that runs in the shell (your case) will also run at the elevated level. Q#2: is there still a possibility that the sudo permissions will time out (if, for instance, a particular command takes long enough to exceed the sudo timeout)? Or will the initial sudo password entrance last for the complete duration of whole script? No they will not timeout within the script. Only if you interactively were typing them within the shell where the credentials were provided. Every time sudo is executed within this shell, the timeout is reset. But in your case they credentials will remain so long as the script is executing and running commands from within it. excerpt from sudo man page This limit is policy-specific; the default password prompt timeout for the sudoers security policy is 5 minutes. | {
"source": [
"https://unix.stackexchange.com/questions/115276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52321/"
]
} |
115,310 | I am looking for a way to select multiple messages in Mutt. For example, selecting the first and the last message would select a whole block of messages. I'd also like to select a subject using a regular expression. Then, I want to run a command on the selected messages, e.g., save them to a file. | You need to run the tag-pattern command. The default for that is T ( Shift + t ). You can then give it a regular expression. By default this will match message subjects. If you need to select a range of messages by number, you can provide the ~m [MIN]-[MAX] pattern to tag-pattern. There are many other options I've found useful over the years, and you can see a complete list in the “Advanced Usage - Patterns” section of the manual . You can also use t to tag or untag the highlighted message, to fine-tune the selection. Then you can run tag-prefix ( ; ) followed by save-message ( s ), and it will prompt you for a mailbox name. This command marks the saved messages to be deleted; there is also the copy-message command ( C , i.e. Shift +c) to copy without marking for deletion. | {
"source": [
"https://unix.stackexchange.com/questions/115310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
115,548 | Are there any (good) terminal-based (i.e. runs in a VT, not in GUI/X) spreadsheets or word processors for Unix/Linux? Can anybody remember the name of such programs which were popular before (e.g. before X became so widespread)? I know the "correct" way of doing word processing in Unix is using a markup language like LaTeX or GROFF together with a simple editor like vi or emacs... But what I'm wondering about is whether there is - or was (anybody remember an older program that did this?) - something like the old MS-DOS (pre-Windows) WordPerfect-like programs for Unix? Where you didn't have true WYSIWYG, but where things like emphasis and underline were marked in the text with colors, reverse video and such. Programs that are more "front ends" for LaTeX or some XML format to create word-processing documents are also of interest, provided they use the terminal and use colors and such to mark things like emphasized text (rather than showing the LaTeX format codes). E.g. you press CTRL-I, the text you write turns reverse video, and is written to file in between format codes for emphasis. | As for command-line spreadsheet programs there are sc and oleo . See: sc: the Venerable Spreadsheet Calculator GNU Oleo | {
"source": [
"https://unix.stackexchange.com/questions/115548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28975/"
]
} |
115,560 | I took a look around at other questions here and at various "scp usage" tutorials on Internet, but I can't sort out what's wrong. I'm using Linux Mint and I'm trying to figure out how scp works. I've a file file.ext (on my computer) in directory /home/name/dir/ I connect to a remote machine using ssh , like: ssh -p 2222 username@domain it asks me the password and the shell displays: username@domain ~ $ now, If I issue the command (before I ran ssh I was in the local directory /home/name/dir ): scp -r -P 2222 file.ext username@domain output is: cp: cannot stat ‘file.ext’: No such file or directory Same result if instead of file.ext I write the complete path scp -r -P 2222 /home/name/dir/file.ext username@domain Also, the server admin told me that I shall upload the file to my remote home directory (instead of root), like: scp -r -P 2222 file.ext username@domain:~/ but when I do it and press "Enter" nothing happens, as If the shell was waiting for further input. Summary of my problems: cp: no such file or directory shell "stuck" on ~/ Any suggestions? | You need to run the scp command from the local machine, not on the remote. You don't need the ssh at all: dragonmnl@local $ scp -P 2222 file.ext username@domain:~/ You also don't need the -r : -r Recursively copy entire directories. If you are already logged into the remote machine and want to copy from your local, you need to make sure that your local machine is accessible via the internet and has ssh set up. I don't think this is what you are after but if it is, just run this from the remote: username@domain $ scp dragonmnl@local:/path/to/file.ext ~/ | {
"source": [
"https://unix.stackexchange.com/questions/115560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60590/"
]
} |
115,591 | I often grep a bunch of files to find a line, and then grep returns one result. Rather than copying and pasting the filename into a new command, I'd like to be able to open that one result with an editor. Something like: grep foo | vim . Is there a way to do that in BASH? | Use grep -l to just get the filename of the matching file and not the matching text, then combine it with vim : vim "$(grep -l some_pattern file_names)" | {
"source": [
"https://unix.stackexchange.com/questions/115591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33341/"
]
} |
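If the pattern matches more than one file, the quoted command substitution above would pass all the names as a single argument; a hedged sketch that opens every matching file in its own tab instead (it relies on the matched file names containing no whitespace):
vim -p $(grep -l some_pattern *.txt)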
115,631 | I am having a problem with permissions on a Linux server. I am used to BSD. When a directory is owned by a group that the user who owns it isn't in, such as www-data, files created in it will be owned by that group. This is important because I want files to be readable by the webserver (which I will not run as root) while a user can still put new files in the directory. I can't put the users in www-data because then they could read every other user's website. I want the webserver to read all websites, and I want users to be able to change their own. The permissions are set like this on the folders at the moment: drwxr-x--- 3 john www-data 4096 Feb 17 21:27 john It is standard behavior on BSD for permissions to work this way. How do I get Linux to do this? | It sounds like you're describing the setgid bit functionality, where a directory that has it set forces any new files created within it to have their group set to the same group that's set on the parent directory. Example $ whoami
saml
$ groups
saml wheel wireshark setup a directory with perms + ownerships $ sudo mkdir --mode=u+rwx,g+rs,g-w,o-rwx somedir
$ sudo chown saml.apache somedir
$ ll -d somedir/
drwxr-s---. 2 saml apache 4096 Feb 17 20:10 somedir/ touch a file as saml in this dir $ whoami
saml
$ touch somedir/afile
$ ll somedir/afile
-rw-rw-r--. 1 saml apache 0 Feb 17 20:11 somedir/afile This will give you approximately what it sounds like you want. If you truly want exactly what you've described though, I think you'll need to resort to Access Control Lists functionality to get that (ACLs). ACLs If you want to get a bit more control over the permissions on the files that get created under the directory, somedir , you can add the following ACL rule to set the default permissions like so. before $ ll -d somedir
drwxr-s---. 2 saml apache 4096 Feb 17 20:46 somedir set permissions $ sudo setfacl -Rdm g:apache:rx somedir
$ ll -d somedir/
drwxr-s---+ 2 saml apache 4096 Feb 17 20:46 somedir/ Notice the + at the end, that means this directory has ACLs applied to it. $ getfacl somedir
# file: somedir
# owner: saml
# group: apache
# flags: -s-
user::rwx
group::r-x
other::---
default:user::rwx
default:group::r-x
default:group:apache:r-x
default:mask::r-x
default:other::--- after $ touch somedir/afile
$ ll somedir/afile
-rw-r-----+ 1 saml apache 0 Feb 17 21:27 somedir/afile
$
$ getfacl somedir/afile
# file: somedir/afile
# owner: saml
# group: apache
user::rw-
group::r-x #effective:r--
group:apache:r-x #effective:r--
mask::r--
other::--- Notice with the default permissions ( setfacl -Rdm ) set so that the permissions are ( r-x ) by default ( g:apache:rx ). This forces any new files to only have their r bit enabled. | {
"source": [
"https://unix.stackexchange.com/questions/115631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21408/"
]
} |
115,734 | I have a file that contains file names. For example: /tmp/list.txt (it is with the spaces at the start of each line): /tmp/file.log
/app/nir/home.txt
/etc/config.cust I want, using one line, to move all the files listed in /tmp/list.txt to /app/dest So it should be something like this: cat /tmp/list.txt | xargs mv /app/dest/ | You are just missing the -t option for mv (assuming GNU mv ): cat /tmp/list.txt | xargs mv -t /app/dest/ or shorter (inspired by X Tian's answer): xargs mv -t /app/dest/ < /tmp/list.txt the leading (and possible trailing) spaces are removed. Spaces within the filenames will lead to problems. If you have spaces or tabs or quotes or backslashes in the filenames, assuming GNU xargs you can use: sed 's/^ *//' < /tmp/list.txt | xargs -d '\n' mv -t /app/dest/ | {
"source": [
"https://unix.stackexchange.com/questions/115734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42081/"
]
} |
115,743 | Just for fun, I thought I would use this command on my Raspberry Pi running Raspbian: sudo rm -f /bin/rm I thought I could just reinstall coreutils : I was wrong! apt-get install --reinstall coreutils gives an error from dpkg , saying it couldn't remove the package. Compiling from source doesn't work because the Makefile uses rm . How can I get a working rm back? | sudo touch /bin/rm && sudo chmod +x /bin/rm
apt-get download coreutils
sudo dpkg --unpack coreutils* And never again. Why didn't you use sudo with apt-get? Because the download command doesn't require it: download download will download the given binary package into the current
directory. So, unless you are in some directory you can't write, you don't need to use sudo , and it could get problematic later on since you will need root permissions to remove/move the package. | {
"source": [
"https://unix.stackexchange.com/questions/115743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60684/"
]
} |
115,825 | When I enter unzip ../founation-latest.zip , it outputs this: warning [../foundation-latest.zip]: 248 extra bytes at beginning or within zipfile (attempting to process anyway) The file is 138KB. It unzips correctly, but why am I getting this error? | I found this thread which had a similar problem. The bug report is titled: unzip fails on 5.4GB ZIP with "extra bytes at beginning or within zipfile" . One of the suggested fixes was to use this command on the .zip file. $ zip -FFv foo.zip --out fixed.zip Example Run $ zip -FFv foo.zip --out fixed.zip
Fix archive (-FF) - salvage what can
Found end record (EOCDR) - says expect single disk archive
Scanning for entries...
Local ( 1 0): copying: d1/f1 (651734 bytes)
Local ( 1 651817): copying: d1/d2/ (0 bytes)
Local ( 1 651905): copying: d1/d2/f3 (80 bytes)
Local ( 1 652083): copying: d1/f23 (891 bytes)
Local ( 1 653021): copying: d1/f27 (8764 bytes)
Local ( 1 661837): copying: d1/f24 (14818 bytes)
Local ( 1 676709): copying: d1/f25 (17295 bytes)
...
Cen ( 1 5488799949): updating: d1/f13
Cen ( 1 5488800052): updating: d1/f14
Zip64 EOCDR found ( 1 5488800155)...
Zip64 EOCDL found ( 1 5488800211)...
EOCDR found ( 1 5488800231)...
$ echo $?
0 zip's -FF switch excerpt from zip man page -FF
--fixfix
Fix the zip archive. The -F option can be used if some
portions of the archive are missing, but requires a reasonably
intact central directory. The input archive is scanned as
usual, but zip will ignore some problems. The resulting
archive should be valid, but any inconsistent entries will be
left out.
When doubled as in -FF, the archive is scanned from the
beginning and zip scans for special signatures to
identify the limits between the archive members. The single
-F is more reliable if the archive is not too much damaged, so
try this option first.
If the archive is too damaged or the end has been truncated,
you must use -FF. This is a change from zip 2.32, where the
-F option is able to read a truncated archive. The -F option
now more reliably fixes archives with minor damage and the -FF
option is needed to fix archives where -F might have been
sufficient before.
... | {
"source": [
"https://unix.stackexchange.com/questions/115825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57520/"
]
} |
115,838 | I have tried to SSH into my AWS Ubuntu server and copy the directory to my local machine. Throughout the process I experience different file permission errors (noted below). Is there one specific file permission needed for the .pem file that allows me to SSH and SCP? Or do I need to change the file permission twice - once for SSH and another for SCP after I login? Here are the commands I'm using: SSH: ssh -i sentiment.pem [email protected] Copy from remote to local computer with: scp [email protected]:/home/ubuntu/sentimentfolder /Users/Toga/Desktop/sentimentlocal I'm on a Mac OS X 10.7.5. Trial and Error: After I initially downloaded the .pem file, its permissions were set to, I THINK: 0644 -rw-r--r--@ 1 Toga staff 1692 Feb 18 21:27 sentiment.pem I then tried to SSH via terminal and received the following: WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0644 for 'sentiment.pem' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: sentiment.pem
Permission denied (publickey). I updated the file permissions to: chmod 660 sentiment.pem After the update, the permissions were set to: -rw-rw----@ 1 Toga staff 1692 Feb 18 21:27 sentiment.pem I then tried to SSH via terminal and received the following: WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0660 for 'sentiment.pem' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: sentiment.pem
Permission denied (publickey). I updated the file permissions to: chmod 600 sentiment.pem After the update, the permissions were set to: -rw-------@ 1 Toga staff 1692 Feb 18 21:27 sentiment.pem I then tried to SSH via terminal and was successful!! Now logged in, I run the a command to copy the remote directory to my local computer with: scp [email protected]:/home/ubuntu/sentimentfolder /Users/Toga/Desktop/sentimentlocal Which returns: Permission denied (publickey). SCP Commands Attempted: added the option -i and referenced the .pem file: scp -i sentiment.pem [email protected]:/home/ubuntu/sentimentfolder /Users/Toga/Desktop/sentimentlocal added the option -i , referenced the .pem file, and changed the user for AWS to ec2-user : scp -i sentiment.pem [email protected]:/home/ubuntu/sentimentfolder /Users/Toga/Desktop/sentimentlocal added the option -i , referenced the .pem file, changed the user for AWS to ec2-user , and added the complete file path for the location of the .pem file: scp -i /Users/Toga/Desktop/rollup/Personal/Serial_Project_Starter/sentiment/sentiment.pem [email protected]:/home/ubuntu/sentiment /Users/Toga/Desktop/sentimentlocal | TL;DR : chmod 400 ~/.ssh/ec2private.pem Visit here How to Connect to Amazon EC2 Remotely Using SSH or refer below. How to Connect to Amazon EC2 Remotely Using SSH: Download the .pem file. In Amazon Dashboard choose "Instances" from the left side bar, and then select the instance you would like to connect to. Click on "Actions", then select "Connect" Click on "Connect with a Standalone SSH Client" Open up a Terminal window Create a directory: # mkdir -p ~/.ssh Move the downloaded .pem file to the .ssh directory we just created: # mv ~/Downloads/ec2private.pem ~/.ssh Change the permissions of the .pem file so only the root user can read it: # chmod 400 ~/.ssh/ec2private.pem Create a config file: # vim ~/.ssh/config Enter the following text into that config file: Host *amazonaws.com
IdentityFile ~/.ssh/ec2private.pem
User ec2-user Save that file. Use the ssh command with your public DNS hostname to connect to your instance. e.g.: # ssh ec2-54-23-23-23-34.example.amazonaws.com | {
"source": [
"https://unix.stackexchange.com/questions/115838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60742/"
]
} |
115,863 | I need to delete all compiled data: directories called build , directories called obj , and *.so files. I wrote a command find \( -name build -o -name obj -o -name *.so \) -exec rm -rf {} \; that goes through all the directories recursively and deletes everything I need. Why do I get the following output at the end?
Maybe I should write a different command. find: `./3/obj': No such file or directory
find: `./3/build': No such file or directory
find: `./1/obj': No such file or directory
find: `./1/build': No such file or directory
find: `./2/obj': No such file or directory
find: `./2/build': No such file or directory | Use -prune on the directories that you're going to delete anyway to tell find not to bother trying to find files in them: find . \( -name build -o -name obj -o -name '*.so' \) -prune -exec rm -rf {} + Also note that *.so needs to be quoted as otherwise it may be expanded by the shell to the list of .so files in the current directory. The equivalent of your GNU -regex -type one would be: find . \( -name build -o -name obj -o -name '*?.so' \) -prune -exec rm -rf {} + Note that if you're going to use GNU specific syntax, you might as well use -delete instead of -exec rm -rf {} + . With -delete , GNU find turns on -depth automatically. It doesn't run external commands so in that way, it's more efficient, and also it's safer as it removes the race condition where someone may be able to make you remove the wrong files by changing a directory to a symlink in-between the time find finds a file and rm removes it (see info -f find -n 'Security Considerations for find' for details). find . -regextype posix-egrep -regex '.*/((obj|build)(/.*)?|.+\.so)' -delete | {
"source": [
"https://unix.stackexchange.com/questions/115863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28860/"
]
} |
115,886 | I often open large files when looking through logs for information. Since all lines have a timestamp in my case, I usually know which part of the file contains the info I am interested in: for example, the bottom half of the file (50% or beyond), about 10% further towards the end of the file, or 20% further down from where I am. So, to navigate quickly in this fashion, i.e. percentage-wise: is there any existing functionality available in vim ? | Sorry for a short answer, but just type 50% *N%*
{count}% Go to {count} percentage in the file, on the first
non-blank in the line |linewise|. To compute the new
line number this formula is used:
({count} * number-of-lines + 99) / 100
See also 'startofline' option. {not in Vi} | {
"source": [
"https://unix.stackexchange.com/questions/115886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
115,897 | I feel confused about ssh port forwarding and the difference between ssh local and remote port forwarding. Could you please explain them in detail and with examples? Thanks! | I have drawn some sketches Introduction local: -L Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. ssh -L sourcePort:forwardToHost:onPort connectToHost means: connect with ssh to connectToHost , and forward all connection attempts to the local sourcePort to port onPort on the machine called forwardToHost , which can be reached from the connectToHost machine. remote: -R Specifies that the given port on the remote (server) host is to be forwarded to the given host and port on the local side. ssh -R sourcePort:forwardToHost:onPort connectToHost means: connect with ssh to connectToHost , and forward all connection attempts to the remote sourcePort to port onPort on the machine called forwardToHost , which can be reached from your local machine. Examples Example for 1 ssh -L 80:localhost:80 SUPERSERVER You specify that a connection made to the local port 80 is to be forwarded to port 80 on SUPERSERVER. That means if someone connects to your computer with a webbrowser, he gets the response of the webserver running on SUPERSERVER. You, on your local machine, have no webserver running. Example for 2 ssh -R 80:localhost:80 tinyserver You specify, that a connection made to the port 80 of tinyserver is to be forwarded to port 80 on your local machine. That means if someone connects to the small and slow server with a webbrowser, he gets the response of the webserver running on your local machine. The tinyserver, which has not enough diskspace for the big website, has no webserver running. But people connecting to tinyserver think so. More examples Other things could be: The powerful machine has five webservers running on five different ports. If a user connects to one of the five tinyservers at port 80 with his webbrowser, the request is redirected to the corresponding webserver running on the powerful machine. That would be ssh -R 80:localhost:30180 tinyserver1
ssh -R 80:localhost:30280 tinyserver2
etc. Or maybe your machine is only the connection between the powerful and the small servers. Then it would be (for one of the tinyservers that play to have their own webservers): ssh -R 80:SUPERSERVER:30180 tinyserver1
ssh -R 80:SUPERSERVER:30280 tinyserver2
etc | {
"source": [
"https://unix.stackexchange.com/questions/115897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60778/"
]
} |
115,917 | If I perform a sequence of commands like: $ ls
$ grep abc file.txt and then use the up arrow key to get the previous one, the terminal will show the last cmd (which is the grep here) But if I do something like this: $ ls
$ grep abc file.txt where grep is preceded by spaces, pressing up gives ls , not grep . Why is this? | echo $HISTCONTROL
ignoreboth If you want to change this behaviour add a new line to your ~/.bashrc file (which will affect every new shell you open): HISTCONTROL=ignoredups (assuming you still want to filter out duplicates) man bash: HISTCONTROL A colon-separated list of values controlling how commands are saved on the history list. If the list of values includes ignorespace , lines which begin with a space character are not saved in the history list. A value of ignoredups causes lines matching the previous history entry to not be saved. A value of ignoreboth is shorthand for ignorespace and ignoredups . | {
"source": [
"https://unix.stackexchange.com/questions/115917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41501/"
]
} |
115,918 | On Red Hat 5/6, when I do mount it says type nfs. I would like to know how to determine the version if it isn't listed in the mount options or fstab. Please don't say remount it with the version option; I want to know how to determine the currently mounted NFS version. I am guessing it will default based on NFS server/client settings, but how do I determine what it is currently? I am pretty sure it's NFS v3 because nfs4_setfacl does not seem to be supported. | Here are 2 ways to do it: mount Using mount's -v switch: $ mount -v | grep /home/sam
mulder:/export/raid1/home/sam on /home/sam type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1) nfsstat Using nfsstat -m : $ nfsstat -m | grep -A 1 /home/sam
/home/sam from mulder:/export/raid1/home/sam
Flags: rw,vers=3,rsize=16384,wsize=16384,hard,intr,proto=tcp,timeo=600,retrans=2,sec=sys,addr=mulder | {
"source": [
"https://unix.stackexchange.com/questions/115918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
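A third way, if the mount -v or nfsstat output above is not available, is to read the kernel's own mount table; the vers= (or nfsvers=) field in the options column shows the NFS version actually in use:
$ grep ' nfs' /proc/mounts
On older kernels an NFSv4 mount also shows up with the filesystem type nfs4 in that table.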
115,980 | Suppose I have a binary called foo . If I want to redirect the output of foo to some other process bar , I could write ./foo | bar . On the other hand, if I wanted to time foo, and redirect the output of time I could write, time (./foo) | bar . My question is, how can I stick the output of time to the end of the output of foo and pipe it through the same pipe ? The following solution is not what I am looking for, because it starts two separate instances of the process bar , while I want a single shared pipe, to a single instance of bar . time (./foo | bar) | bar For anyone who is curious, the reason for not wanting to start two instances of bar is because bar can be a network client and I want the timing information to be sent to the server as part of the same http POST message as the process output. | If I understand what you're asking for I this will do. I'm using the commands ls ~ and tee as stand-ins for ./foo and bar , but the general form of what you want is this: $ ( time ./foo ) |& bar NOTE: The output of time is already being attached at the end of any output from ./foo , it's just being done so on STDERR. To redirect it through the pipe you need to combine STDERR with STDOUT. You can use either |& or 2>&1 to do so. $ ( time ./foo ) |& bar
-or-
$ ( time ./foo ) 2>&1 | bar Example $ ( time ls . ) |& tee cmd.log
cmd.log
file1
file2
file3
file4
file5
real 0m0.005s
user 0m0.000s
sys 0m0.001s And here's the contents of the file cmd.log produced by tee . $ more cmd.log
cmd.log
file1
file2
file3
file4
file5
real 0m0.005s
user 0m0.000s
sys 0m0.001s | {
"source": [
"https://unix.stackexchange.com/questions/115980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34334/"
]
} |
116,070 | How can I grant write permission to one group? I have two users ( alex and ben ). alex is member of group alex and of group consult . ben is member of group ben and of group consult . I want to grant read-write access to both alex and ben on the folder consult_documents . If I make alex the owner of the directory consult_documents and I grant 775 access to the directory consult_documents , ben and alex will be able to access the folder, I think. But will this allow ben access to alex 's other folders as well?
If a user is in two groups, does that mean that all the members from both groups get the same permissions on all folders? | Granting 775 permissions on a directory doesn't automatically mean that all users in a certain group will gain rwx access to it. They need to either be the owner of the directory or to belong to the directory's group: $ ls -ld some_dir
drwxrwxr-x 2 alex consult 4096 Feb 20 10:10 some_dir/
^ ^
| |_____ directory's group
|___________ directory's owner So, in order to allow both alex and ben to have write access to some_dir , the some_dir directory itself must belong to the consult group. If that's not the case, the directory's owner (alex in your example), should issue the following command: $ chgrp consult some_dir/ or to change group ownership of everything inside the directory: $ chgrp -R consult some_dir/ This will only work if alex is a member of the consult group, which seems to be the case in your example. This will not allow ben to access all of alex's directories for two reasons: Not all of alex's directories will belong to the consult group Some of alex's directories may belong to the consult group but alex may not have chosen to allow rwx group access to them. In short, the answer depends both on group ownership and on the group permission bits set for the directory. All of this is provided you don't use any additional mandatory access control measures on your system. | {
"source": [
"https://unix.stackexchange.com/questions/116070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60871/"
]
} |
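One extra step that often helps with a shared directory like this: setting the setgid bit on the directory makes new files created inside it inherit the consult group automatically (a sketch, assuming some_dir already belongs to group consult):
$ chmod 2775 some_dir/        # or equivalently: chmod g+s some_dir/
$ ls -ld some_dir/
drwxrwsr-x 2 alex consult 4096 Feb 20 10:10 some_dir/
The lowercase s in the group execute position shows that the setgid bit is set.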
116,074 | I'm searching for a simple shell script to mv all folders in /var/www/uploads/ that contain Math or Physics in the name to /mnt/Backup/ . | Granting 775 permissions on a directory doesn't automatically mean that all users in a certain group will gain rwx access to it. They need to either be the owner of the directory or to belong to the directory's group: $ ls -ld some_dir
drwxrwxr-x 2 alex consult 4096 Feb 20 10:10 some_dir/
^ ^
| |_____ directory's group
|___________ directory's owner So, in order to allow both alex and ben to have write access to some_dir , the some_dir directory itself must belong to the consult group. If that's not the case, the directory's owner (alex in your example), should issue the following command: $ chgrp consult some_dir/ or to change group ownership of everything inside the directory: $ chgrp -R consult some_dir/ This will only work if alex is a member of the consult group, which seems to be the case in your example. This will not allow ben to access all of alex's directories for two reasons: Not all of alex's directories will belong to the consult group Some of alex's directories may belong to the consult group but alex may not have chosen to allow rwx group access to them. In short, the answer depends both on group ownership and on the group permission bits set for the directory. All of this is provided you don't use any additional mandatory access control measures on your system. | {
"source": [
"https://unix.stackexchange.com/questions/116074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/51005/"
]
} |
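Since the question above asks about moving directories whose names contain Math or Physics, a minimal sketch (paths taken from the question; put echo in front of mv first to preview what would be moved):
$ cd /var/www/uploads/
$ for d in *Math* *Physics*; do [ -d "$d" ] && mv -- "$d" /mnt/Backup/; done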
116,136 | I followed this link to change log-rotate configuration for RHEL 6 After I made the change to config file, what should I do to let this take effect? | logrotate uses crontab to work. It's scheduled work, not a daemon, so no need to reload its configuration. When the crontab executes logrotate , it will use your new config file automatically. If you need to test your config you can also execute logrotate on your own with the command: logrotate /etc/logrotate.d/your-logrotate-config Or as mentioned in comments, identify the logrotate line in the output of the command crontab -l and execute the command line refer to slm's answer to have a precise cron.daily explanation | {
"source": [
"https://unix.stackexchange.com/questions/116136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54709/"
]
} |
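When testing a changed configuration it can also help to do a dry run first; -d prints what logrotate would do without rotating anything, and -f forces a rotation immediately (note that running a file from /etc/logrotate.d on its own skips the global options set in /etc/logrotate.conf):
$ logrotate -d /etc/logrotate.d/your-logrotate-config
$ logrotate -f /etc/logrotate.d/your-logrotate-config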
116,228 | I have a JSON file members.json as below. {
"took": 670,
"timed_out": false,
"_shards": {
"total": 8,
"successful": 8,
"failed": 0
},
"hits": {
"total": 74,
"max_score": 1,
"hits": [
{
"_index": "2000_270_0",
"_type": "Medical",
"_id": "02:17447847049147026174478:174159",
"_score": 1,
"_source": {
"memberId": "0x7b93910446f91928e23e1043dfdf5bcf",
"memberFirstName": "Uri",
"memberMiddleName": "Prayag",
"memberLastName": "Dubofsky"
}
},
{
"_index": "2000_270_0",
"_type": "Medical",
"_id": "02:17447847049147026174478:174159",
"_score": 1,
"_source": {
"memberId": "0x7b93910446f91928e23e1043dfdf5bcG",
"memberFirstName": "Uri",
"memberMiddleName": "Prayag",
"memberLastName": "Dubofsky"
}
}
]
}
} I want to parse it using bash script get only the list of field memberId . The expected output is: memberIds
-----------
0x7b93910446f91928e23e1043dfdf5bcf
0x7b93910446f91928e23e1043dfdf5bcG I tried adding following bash+python code to .bashrc : function getJsonVal() {
if [ \( $# -ne 1 \) -o \( -t 0 \) ]; then
echo "Usage: getJsonVal 'key' < /tmp/file";
echo " -- or -- ";
echo " cat /tmp/input | getJsonVal 'key'";
return;
fi;
cat | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["'$1'"]';
} And then called: $ cat members.json | getJsonVal "memberId" But it throws: Traceback (most recent call last):
File "<string>", line 1, in <module>
KeyError: 'memberId' Reference https://stackoverflow.com/a/21595107/432903 | If you would use: $ cat members.json | \
python -c 'import json,sys;obj=json.load(sys.stdin);print obj;' you can inspect the structure of the nested dictionary obj and see that your original line should read: $ cat members.json | \
python -c 'import json,sys;obj=json.load(sys.stdin);print obj["hits"]["hits"][0]["_source"]["'$1'"]'; to get to that "memberId" element. This way you can keep the Python as a one-liner. If there are multiple elements in the nested "hits" element, then you can do something like: $ cat members.json | \
python -c '
import json, sys
obj=json.load(sys.stdin)
for y in [x["_source"]["'$1'"] for x in obj["hits"]["hits"]]:
print y
' Chris Down's solution is better for finding a single value for (unique) keys at any level. With my second example that prints out multiple values, you are hitting the limits of what you should try with a one-liner; at that point I see little reason to do half of the processing in bash, and would move to a complete Python solution. | {
"source": [
"https://unix.stackexchange.com/questions/116228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17781/"
]
} |
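If installing one more tool is acceptable, jq does the same extraction without any Python; a sketch against the structure shown in the question:
$ jq -r '.hits.hits[]._source.memberId' members.json
0x7b93910446f91928e23e1043dfdf5bcf
0x7b93910446f91928e23e1043dfdf5bcG
The -r flag prints raw strings instead of JSON-quoted ones.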
116,243 | I have come across bash sequences such as \033[999D and \033[2K\r which are used to do some manipulation on a printout on a terminal. But what do these sequences mean? Where can I find a list/summary on the web to help me find out the meaning of these sequences? | See this link http://www.termsys.demon.co.uk/vtansi.htm . As Anthon says, \033 is the C-style octal code for an escape character. The [999D moves the cursor back 999 columns, presumably a brute force way of getting to the start of the line. [2K erases the current line. \r is a carriage return which will move the cursor back to the start of the current line and is a C-style escape sequence rather than a terminal control sequence. Update As other people have pointed out, these control sequences are nothing to do bash itself but rather the terminal device/emulator the text appears on. Once upon a time it was common for these sequences to be interpreted by a completely different piece of hardware. Originally, each one would respond to completely different sets of codes. To deal with this the termcap and terminfo libraries where used to write code compatible with multiple terminals. The tput command is an interface to the terminfo library ( termcap support can also be compiled in) and is a more robust way to create compatible sequences. That said, there is also the ANSI X3.64 or ECMA-48 standard. Any modern terminal implementation will use this. terminfo and termcap are still relevant as the implementation may be incomplete or include non standard extensions, however for most purposes it is safe to assume that common ANSI sequences will work. The xterm FAQ provides some interesting information on differences between modern terminal emulators (many just try to emulate xterm itself) and how xterm sequences relate to the VT100 terminals mentioned in the above link. It also provides a definitive list of xterm control sequences . Also commonly used of course is the Linux console, a definitive list of control sequences for it can be found in man console_codes , along with a comparison to xterm . | {
"source": [
"https://unix.stackexchange.com/questions/116243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31853/"
]
} |
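These sequences are easy to experiment with directly from a shell prompt; for example, printing some text, erasing the line with \033[2K, returning to column 0 with \r and printing something else leaves only the second string visible:
$ printf 'this will be erased\033[2K\rhello\n'
hello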
116,303 | I have generated and downloaded a private .pem key from AWS. However, to use Putty in order to connect to the virtual machine, I must have that key in .ppk format. The process of conversion is detailed in roughly 20 lines here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html#prepare-for-putty I am using Linux Mint (an Ubuntu distro) and I know I can use puttygen in the terminal. However, I have no idea how to use this tool, nor how to configure the needed parameters. When I type puttygen --help I get PuTTYgen unidentified build, Aug 7 2013 12:24:58
Usage: puttygen ( keyfile | -t type [ -b bits ] )
[ -C comment ] [ -P ] [ -q ]
[ -o output-keyfile ] [ -O type | -l | -L | -p ]
-t specify key type when generating (rsa, dsa, rsa1)
-b specify number of bits when generating key
-C change or specify key comment
-P change key passphrase
-q quiet: do not display progress bar
-O specify output type:
private output PuTTY private key format
private-openssh export OpenSSH private key
private-sshcom export ssh.com private key
public standard / ssh.com public key
public-openssh OpenSSH public key
fingerprint output the key fingerprint
-o specify output file
-l equivalent to `-O fingerprint'
-L equivalent to `-O public-openssh'
-p equivalent to `-O public' But I have no idea whatsoever on how to do what the website tells me to do and all my tentatives failed so far. How do I do what the website tells me to do, using puttygen on the terminal? | Using the GUI See this SO Q&A on how to do exactly what you want, titled: Convert PEM to PPK file format . excerpt Download your .pem from AWS Open PuTTYgen, select Type of key to generate as: SSH-2 RSA Click "Load" on the right side about 3/4 down Set the file type to *.* Browse to, and Open your .pem file PuTTY will auto-detect everything it needs, and you just need to click "Save private key" and you can save your ppk key for use with PuTTY Using the command line If on the other hand you'd like to convert a .pem to .ppk file via the command line tool puttygen , I did come across this solution on SO in this Q&A titled: How to convert SSH keypairs generated using PuttyGen(Windows) into key-pairs used by ssh-agent and KeyChain(Linux) . excerpt $ puttygen keyfile.pem -O private -o avdev.ppk For the public key: $ puttygen keyfile.pem -L References Converting Your Private Key (Putty) | {
"source": [
"https://unix.stackexchange.com/questions/116303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36203/"
]
} |
116,313 | I am evaluating the expression 6^6^6 using python and bc separately. The content of the python file is print 6**6**6 . When I execute time python test.py , I get the output as real 0m0.067s
user 0m0.050s
sys 0m0.011s And then, I ran the command time echo 6^6^6 | bc which gave me the following output real 0m0.205s
user 0m0.197s
sys 0m0.005s From these results it is clear that the sys time taken by python and bc was 11ms and 5ms respectively. The bc command outperformed python at sys time level but when it comes to user and real time python was almost 4 times faster than bc . What might have gone there. I haven't given any priority to the processes as such. I am trying to understand this situation. | Python imports a large number of files at startup: % python -c 'import sys; print len(sys.modules)'
39 Each of these requires an even greater number of attempts at opening a Python file, because there are many ways to define a module: % python -vv -c 'pass'
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# trying site.so
# trying sitemodule.so
# trying site.py
# trying site.pyc
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sitemodule.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py
import site # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc
# trying os.so
# trying osmodule.so
# trying os.py
# trying os.pyc
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/osmodule.so
# trying /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py
# /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc matches /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py
import os # precompiled from /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.pyc
... Each "trying", except for those which are builtin, requires an os-level/system calls, and each "import" seems to trigger about 8 "trying" messages. (There was ways to reduce this using zipimport, and each path in your PYTHONPATH may require another call.) This means there are almost 200 stat system calls before Python starts on my machine, and "time" assigns that to "sys" rather than "user", because the user program is waiting on the system to do things. By comparison, and like terdon said, "bc" doesn't have that high of a startup cost. Looking at the dtruss output (I have a Mac; "strace" for a Linux-based OS), I see that bc doesn't make any open() or stat() system calls of its own, except for loading a few shared libraries are the start, which of course Python does as well. In addition, Python has more files to read, before it's ready to process anything. Waiting for disk is slow. You can get a sense for Python's startup cost by doing: time python -c pass It's 0.032s on my machine, while 'print 6**6**6' is 0.072s, so startup cost is 1/2rd of the overall time and the calculation + conversion to decimal is the other half. While: time echo 1 | bc takes 0.005s, and "6^6^6" takes 0.184s so bc's exponentiation is over 4x slower than Python's even though it's 7x faster to get started. | {
"source": [
"https://unix.stackexchange.com/questions/116313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17625/"
]
} |
116,365 | I recently upgraded from OS X 10.6 (I think) to 10.9. Since then it seems, while editing in vim , the arrow keys will "spontaneously" stop working. At one point, in frustration I "mashed" one of the arrow keys and was eventually shown a "E388 Couldn't find a definition" error. All other times I've experienced it, the arrows, having worked for awhile, suddenly start dinging at me! Quitting and reopening solves the problem temporarily . But, I'd like to prevent it! Anyone know what this might be? And how to fix it? It looks like my default vimrc was modified during the update. If my memory is correct, it was a pretty big file previously. Now, it just contains this: " Configuration file for vim
set modelines=0 " CVE-2007-2438
" Normally we use vim-extensions. If you want true vi-compatibility
" remove change the following statements
set nocompatible " Use Vim defaults instead of 100% vi compatibility
set backspace=2 " more powerful backspacing
" Don't write backup file if vim is being called by "crontab -e"
au BufWrite /private/tmp/crontab.* set nowritebackup
" Don't write backup file if vim is being called by "chpass"
au BufWrite /private/etc/pw.* set nowritebackup I have no idea what these options do yet. I'll look into it -- but, hopefully someone here knows more quickly than I can google and read ... | Found in James Hodgkinson's blog , the following command works for me. Note it will refresh the vim screen. :!reset | {
"source": [
"https://unix.stackexchange.com/questions/116365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61037/"
]
} |
116,369 | I recently discovered terminal's feature, you can set the keys emacs or vi style I prefer the second. so if you do set -o vi You can use k j l h keys to navigate on the command line. And you can switch between 'Normal' and 'Insert' modes like in vim . However there's no way to visually distinguish one mode from another, even cursor doesn't change. Which makes vi-style pretty much useless. Is there a way to make it truly vim -like? | Found in James Hodgkinson's blog , the following command works for me. Note it will refresh the vim screen. :!reset | {
"source": [
"https://unix.stackexchange.com/questions/116369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43520/"
]
} |
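For the original question about telling the vi modes apart on the command line: since bash's vi editing mode is provided by readline, newer versions (readline 6.3 / bash 4.3 and later, if memory serves) can display the current mode in the prompt via ~/.inputrc:
set editing-mode vi
set show-mode-in-prompt on
Depending on the readline version the indicator is either a single character (+ for insert, : for command) or a configurable string such as (ins)/(cmd) via vi-ins-mode-string and vi-cmd-mode-string.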
116,389 | I want to delete all *.o files in a directory and its sub-directories. However, I get an error: sashoalm@aspire:~/.Workspace.OLD$ rm -r *.o
rm: cannot remove `*.o': No such file or directory On the other hand, rm *.o works, but it's not recursive. | That is evil: rm -r is not for deleting files but for deleting directories. Luckily there are probably no directories matching *.o . What you want is possible with zsh but not with sh or bash (new versions of bash can do this, but only if you enable the shell option globstar with shopt -s globstar ). The globbing pattern is **/*.o but that would not be limited to files, too (maybe zsh has tricks for the exclusion of non-files, too). But this is rather for find : find . -type f -name '*.o' -delete or (as I am not sure whether -delete is POSIX) find . -type f -name '*.o' -exec rm {} + | {
"source": [
"https://unix.stackexchange.com/questions/116389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9108/"
]
} |
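For completeness, the bash globstar route mentioned above looks like this (bash 4 or later; echo first to preview, and note that ** can also match directories whose names happen to end in .o, which is why find -type f stays the safer option):
$ shopt -s globstar
$ echo rm -- **/*.o
$ rm -- **/*.o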
116,395 | Trying to find out how to use case-insensitive searches in less I found this on serverfault . That seems to perfectly answer my question. The problem is: It doesn't work like that here (OpenSUSE 13.1; less 458). I had aliased less to less -WiNS but I changed that. But even calling it as command less file does not change anything. I have checked with ps that there is no -i option in the command line any more. As the answer says the less help (pressing h ) states that I can use -i within less , too. If I use that once then less tells me it had changed to case-insensitive search (that is kind of correct: nothing changes). If I use it twice then less tells me it had turned to case-sensitive search. And right, then it works as it should from the start. Giving -i twice on the command line does not work, though. What's up here? | I'm not sure how to enable this from the command line but when you're inside of less you can toggle the behavior you want by giving the -i command to less . toggling -i searching for /blah and /BLAH searching for /Blah Apparently you can also summon this mode on demand by suffixing your searches with a -i . Example less prompt> /search string/-i References How do you do a case insensitive search using a pattern modifier using less? | {
"source": [
"https://unix.stackexchange.com/questions/116395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32191/"
]
} |
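Related: besides the lowercase -i described above (which ignores case only while the search pattern contains no uppercase letters), less also has a capital -I that ignores case unconditionally; it can be given on the command line or toggled inside less just like -i:
$ less -I file.txt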
116,424 | I am trying to write a regex that will display all words that are 10 characters long, and none of the letters are repeating. So far, I have got grep --colour -Eow '(\w{10})' Which is the very first part of the question. How would I go about checking for the "uniqueness"? I really don't have a clue, apart from that I need to use back references. | grep -Eow '\w{10}' | grep -v '\(.\).*\1' excludes words that have two identical characters. grep -Eow '\w{10}' | grep -v '\(.\)\1' excludes only the ones that have a character repeated consecutively. POSIXly: tr -cs '[:alnum:]_' '[\n*]' |
grep -xE '.{10}' |
grep -v '\(.\).*\1' tr puts words on their own line by converting any sequence of non-word-characters (complement of alpha-numeric and underscore) to a newline character. Or with one grep : tr -cs '[:alnum:]_' '[\n*]' |
grep -ve '^.\{0,9\}$' -e '.\{11\}' -e '\(.\).*\1' (exclude lines of less than 10 and more than 10 characters and those with a character appearing at least twice). With one grep only (GNU grep with PCRE support or pcregrep ): grep -Po '\b(?:(\w)(?!\w*\1)){10}\b' That is, a word boundary ( \b ) followed by a sequence of 10 word characters (provided that each is not followed by a sequence of word characters and themselves, using the negative look-ahead PCRE operator (?!...) ). We're lucky that it works here, as not many regexp engines work with backreferences inside repeating parts. Note that (with my version of GNU grep at least) grep -Pow '(?:(\w)(?!\w*\1)){10}' Doesn't work, but grep -Pow '(?:(\w)(?!\w*\2)){10}' does (as echo aa | grep -Pw '(.)\2' ) which sounds like a bug. You may want: grep -Po '(*UCP)\b(?:(\w)(?!\w*\1)){10}\b' if you want \w or \b to consider any letter as a word component and not just the ASCII ones in non-ASCII locales. Another alternative: grep -Po '\b(?!\w*(\w)\w*\1)\w{10}\b' That is a word boundary (one that is not followed by a sequence of word characters one of which repeats) followed by 10 word characters. Things to possibly have at the back of one's mind: Comparison is case sensitive, so Babylonish for instance would be matched, since all the characters are different even though there are two B s, one lower and one upper case (use -i to change that). for -w , \w and \b , a word is a letter (ASCII ones only for GNU grep for now , the [:alpha:] character class in your locale if using -P and (*UCP) ), decimal digits or underscore . that means that c'est (two words as per the French definition of a word) or it's (one word according to some English definitions of a word) or rendez-vous (one word as per the French definition of a word) are not considered one word. Even with (*UCP) , Unicode combining characters are not considered as word components, so téléphone ( $'t\u00e9le\u0301phone' ) is considered as 10 characters, one of which non-alpha. défavorisé ( $'d\u00e9favorise\u0301' ) would be matched even though it's got two é because that's 10 all different alpha characters followed by a combining acute accent (non-alpha, so there's a word boundary between the e and its accent). | {
"source": [
"https://unix.stackexchange.com/questions/116424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61067/"
]
} |
116,439 | Recently had a LVM'd CentOS 6.5 install get accidentally cold-shutdown. On bootup, it says that the home partition need fscking: /dev/mapper/vg_myserver-lv_home: Block bitmap for group 3072 is not in group. (block 3335668205)
/dev/mapper/vg_myserver-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. ...but I guess the root partition is OK, since it gives me a shell there. So we run e2fsck -b 32768 /dev/mapper/vg_myserver/lv_home and after saying Yes to various fixes, on Pass 5 it just prints endless numbers to the screen, very fast. Once in a while it will print them in neat columns, and if these are block numbers, after a couple hours we are still nowhere near the first 2% being done of our 1.2 TB LV. I read that you can use cleap_mmp with tune2fs , but upon trying that, it doesn't accept cleap_mmp nor list it among valid options. My question is, how does everyone deal with a corrupt ext4 fs without weeks of downtime? Does everyone have this dilemma, or weeks of downtime vs rebuilding your server / lost data? If so why does anyone use or recommend the use of ext4? Is there some trick I'm missing that would let me target the specific block/group it's complaining about , so we can get on with it and mount the home fs again? | grep -Eow '\w{10}' | grep -v '\(.\).*\1' excludes words that have two identical characters. grep -Eow '\w{10}' | grep -v '\(.\)\1' excludes the ones that have repeating characters. POSIXly: tr -cs '[:alnum:]_' '[\n*]' |
grep -xE '.{10}' |
grep -v '\(.\).*\1' tr puts words on their own line by converting any sequence of non-word-characters (complement of alpha-numeric and underscore) to a newline character. Or with one grep : tr -cs '[:alnum:]_' '[\n*]' |
grep -ve '^.\{0,9\}$' -e '.\{11\}' -e '\(.\).*\1' (exclude lines of less than 10 and more than 10 characters and those with a character appearing at least twice). With one grep only (GNU grep with PCRE support or pcregrep ): grep -Po '\b(?:(\w)(?!\w*\1)){10}\b' That is, a word boundary ( \b ) followed by a sequence of 10 word characters (provided that each is not followed by a sequence of word characters and themselves, using the negative look-ahead PCRE operator (?!...) ). We're lucky that it works here, as not many regexp engines work with backreferences inside repeating parts. Note that (with my version of GNU grep at least) grep -Pow '(?:(\w)(?!\w*\1)){10}' Doesn't work, but grep -Pow '(?:(\w)(?!\w*\2)){10}' does (as echo aa | grep -Pw '(.)\2' ) which sounds like a bug. You may want: grep -Po '(*UCP)\b(?:(\w)(?!\w*\1)){10}\b' if you want \w or \b to consider any letter as a word component and not just the ASCII ones in non-ASCII locales. Another alternative: grep -Po '\b(?!\w*(\w)\w*\1)\w{10}\b' That is a word boundary (one that is not followed by a sequence of word characters one of which repeats) followed by 10 word characters. Things to possibly have at the back of one's mind: Comparison is case sensitive, so Babylonish for instance would be matched, since all the characters are different even though there are two B s, one lower and one upper case (use -i to change that). for -w , \w and \b , a word is a letter (ASCII ones only for GNU grep for now , the [:alpha:] character class in your locale if using -P and (*UCP) ), decimal digits or underscore . that means that c'est (two words as per the French definition of a word) or it's (one word according to some English definitions of a word) or rendez-vous (one word as per the French definition of a word) are not considered one word. Even with (*UCP) , Unicode combining characters are not considered as word components, so téléphone ( $'t\u00e9le\u0301phone' ) is considered as 10 characters, one of which non-alpha. défavorisé ( $'d\u00e9favorise\u0301' ) would be matched even though it's got two é because that's 10 all different alpha characters followed by a combining acute accent (non-alpha, so there's a word boundary between the e and its accent). | {
"source": [
"https://unix.stackexchange.com/questions/116439",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34450/"
]
} |
116,508 | Our sys admin installed a software application (Maven) on the server and told everyone to add the /usr/local/maven/bin/ folder to their path. I think it could be more convenient to just link the few programs in that folder from the /bin folder (or other folder that everyone has in their path) like this: ln -s /usr/local/maven/bin/* /bin Is this correct? Are there some hidden side effects to my suggestion? | On linking You generally do not link /usr/local/* with /bin , but this is more of a historical practice. In general, there are a few "technical" reason why you cannot do what you're suggesting. Making links to executables in /bin can cause problems: Probably the biggest caveat would be if you're system is having packages managed by some sort of package manager such as RPM, dpkg, APT, YUM, pacman, pkg_add, etc. In these cases, you'll generally want to let the package manager do its job and manage directories such as /sbin , /bin , /lib , and /usr . One exception would be /usr/local which is typically a safe place to do as you see fit on the box, without having to worry about a package manager interfering with your files. Often times executables built for /usr/local will have this PATH hard-coded into their executables. There may also be configuration files that are included in /usr/local as part of the installation of these applications. So linking to just the executable could cause issues with these apps finding the .cfg files later one. Here's an example of such a case: $ strings /usr/local/bin/wit | grep '/usr/local'
/usr/local/share/wit
/usr/local/share/wit/ The same issue that applies to finding .cfg files can also occur with "helper" executables that the primary app needs to run. These too would also need to be linked into /usr/bin , knowing this might be problematic and only show up when you actually attempted to execute the linked app. NOTE: in general it's best to avoid the temptation to link to one off apps in /usr/bin . /etc/profile.d Rather then have all the users provide this management, the admin could very easily add this to everyone's $PATH on the box by adding a corresponding file in the /etc/profile.d directory. A file such as this, /etc/profile.d/maven.sh : PATH=$PATH:/usr/local/maven/bin You generally do this as an admin instead of polluting all the users' setups with this. Using alternatives Most distros now provide another tool called alternatives (Fedora/CentOS) or update-alternatives (Debian/Ubuntu) which you can also use to loop into the $PATH tools which might be outside the /bin . Using tools such as these is preferable since these are adhering more to what most admins would consider "standard practice" and so makes the systems easier to hand off from one admin to another. This tool does a similar thing in making links in /bin ; but it manages the creation and destruction of these links, so it's easier to understand a system's intended setup when done through a tool vs. done directly as you're suggesting. Here I'm using that system to manage Oracle's Java on a box: $ ls -l /etc/alternatives/ | grep " java"
lrwxrwxrwx. 1 root root 73 Feb 5 13:15 java -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/jre/bin/java
lrwxrwxrwx. 1 root root 77 Feb 5 13:15 java.1.gz -> /usr/share/man/man1/java-java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64.1.gz
lrwxrwxrwx. 1 root root 70 Feb 5 13:19 javac -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/bin/javac
lrwxrwxrwx. 1 root root 78 Feb 5 13:19 javac.1.gz -> /usr/share/man/man1/javac-java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64.1.gz
lrwxrwxrwx. 1 root root 72 Feb 5 13:19 javadoc -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/bin/javadoc
lrwxrwxrwx. 1 root root 80 Feb 5 13:19 javadoc.1.gz -> /usr/share/man/man1/javadoc-java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64.1.gz You can see the effects of this: $ type java
java is /usr/bin/java
$ readlink -f /usr/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.4.1.fc19.x86_64/jre/bin/java My $0.02 Making links in /bin , though plausible, would likely be highly discouraged by most sysadmins: Would be frowned upon because it's viewed as custom and can lead to confusion if another admin is required to pick up the box Can lead to a system becoming broken at a future state as a result of this "fragile" customization. | {
"source": [
"https://unix.stackexchange.com/questions/116508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16569/"
]
} |
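To actually register something like maven through that mechanism instead of hand-made links, the general shape is link, name, path, priority (a sketch; the command is alternatives on Fedora/CentOS and update-alternatives on Debian/Ubuntu):
$ sudo alternatives --install /usr/local/bin/mvn mvn /usr/local/maven/bin/mvn 100
$ alternatives --display mvn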
116,529 | How is it possible that I cannot log in as root by su root or su (I get incorrect password error), but I can log in by ssh root@localhost or ssh root@my_local_IP with the same password? I'm using CentOS 6.4. Update1 : cat /etc/pam.d/su gives: #%PAM-1.0
auth sufficient pam_rootok.so
# Uncomment the following line to implicitly trust users in the "wheel" group.
#auth sufficient pam_wheel.so trust use_uid
# Uncomment the following line to require a user to be in the "wheel" group.
#auth required pam_wheel.so use_uid
auth include system-auth
account sufficient pam_succeed_if.so uid = 0 use_uid quiet
account include system-auth
password include system-auth
session include system-auth
session optional pam_xauth.so Update2 : $ sudo grep su /var/log/secure | grep -v sudo gives : Feb 23 13:12:17 fallah su: pam_unix(su:auth): authentication failure;
logname=fallah uid=501 euid=501 tty=pts/0 ruser=fallah rhost= user=root repeated about 20 times. | In your comment, you said that /bin/su has the following mode/owner: -rwxrwxrwx. 1 root root 30092 Jun 22 2012 /bin/su There are two problems here. it needs to have the set-uid bit turned on, so that it always runs with root permissions, otherwise when an ordinary (non-root) user runs it, it will not have access to the password info in /etc/shadow nor the ability to set the userid to the desired new user. it ought to have the group and other write bits turned off, so that other users cannot alter it. To fix this, login as root - you said you can do this with ssh - and type chmod 4755 /bin/su or, alternatively, chmod u+s,g-w,o-w /bin/su (The standards document for chmod goes into more detail about what kinds of arguments it takes.)
This will restore the mode bits to the way they were when the operating system was first installed. When you list this file, it ought to look like this: -rwsr-xr-x. 1 root root 30092 Jun 22 2012 /bin/su As @G-Man noted, files that are mode 777 could be overwritten by untrusted users, and if that's the case, you may want to reinstall them from the distribution medium or backups. | {
"source": [
"https://unix.stackexchange.com/questions/116529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46819/"
]
} |
116,563 | "Everything is a file" in the UNIX World. Above sentence is famous. When I run echo "hello programmer" >> /dev/tty1 , I can watch the given string on TeleType 1 , .... What and where is file per each socket ? Suppose my friend connects to my PC, and its IP is h.h.h.h , how can I access the respective file? Is it possible? | A socket is a file. But not all files have names. Here are a few examples of files that don't have names: Any file that used to have a name, and is now deleted, but is still opened by a program. An unnamed pipe , such as one created by the | shell operator. Most sockets : any Internet socket , or a Unix socket which is not in the filesystem namespace (it can be in the abstract namespace or unnamed). Files such as unnamed pipes or sockets are created by a process and can only be accessed in that process or in subsequently-created child processes. (This is not completely true: a process that has a pipe or socket (or any other file) open can transmit it to other processes via a Unix socket; this is known as file descriptor passing .) Sockets that have a name (whether in the filesystem or abstract) can be opened using that name. Network sockets can be opened (or more precisely connected to) remotely from any machine that has appropriate connectivity. | {
"source": [
"https://unix.stackexchange.com/questions/116563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21911/"
]
} |
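You can see these nameless files hanging off a running process by listing its file descriptors; sockets show up as socket:[inode] entries, and ss can map them back to addresses and owning processes (pick the PID of anything with network connections, e.g. a browser — <pid> below is a placeholder for that process ID):
$ ls -l /proc/<pid>/fd | grep socket
$ ss -tp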
116,623 | One of my favorite Unix tricks is ^x^y , which will take the last command and replace the first instance of "x" with "y". However, I'm wondering if a similar trick works to replace all instances of "x" with "y" in the last command? | You can use the !!:gs/search/replace/ notation to do what you want. This utilizes the global search & replace ( :gs ): before $ echo "harm warm swarm barm"
harm warm swarm barm after $ !!:gs/arm/orn/
echo "horn worn sworn born"
horn worn sworn born References The Definitive Guide to Bash Command Line History Caret search and replace in Bash shell | {
"source": [
"https://unix.stackexchange.com/questions/116623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16259/"
]
} |
116,629 | Suppose I press the A key in a text editor and this inserts the character a in the document and displays it on the screen. I know the editor application isn't directly communicating with the hardware (there's a kernel and stuff in between), so what is going on inside my computer? | There are several different scenarios; I'll describe the most common ones. The successive macroscopic events are: Input: the key press event is transmitted from the keyboard hardware to the application. Processing: the application decides that because the key A was pressed, it must display the character a . Output: the application gives the order to display a on the screen. GUI applications The de facto standard graphical user interface of unix systems is the X Window System , often called X11 because it stabilized in the 11th version of its core protocol between applications and the display server. A program called the X server sits between the operating system kernel and the applications; it provides services including displaying windows on the screen and transmitting key presses to the window that has the focus. Input +----------+ +-------------+ +-----+
| keyboard |------------->| motherboard |-------->| CPU |
+----------+ +-------------+ +-----+
USB, PS/2, … PCI, …
key down/up First, information about the key press and key release is transmitted from the keyboard to the computer and inside the computer. The details depend on the type of hardware. I won't dwell more on this part because the information remains the same throughout this part of the chain: a certain key was pressed or released. +--------+ +----------+ +-------------+
-------->| kernel |------->| X server |--------->| application |
+--------+ +----------+ +-------------+
interrupt scancode keysym
=keycode +modifiers When a hardware event happens, the CPU triggers an interrupt , which causes some code in the kernel to execute. This code detects that the hardware event is a key press or key release coming from a keyboard and records the scan code which identifies the key. The X server reads input events through a device file , for example /dev/input/eventNNN on Linux (where NNN is a number). Whenever there is an event, the kernel signals that there is data to read from that device. The device file transmits key up/down events with a scan code, which may or may not be identical to the value transmitted by the hardware (the kernel may translate the scan code from a keyboard-dependent value to a common value, and Linux doesn't retransmit the scan codes that it doesn't know ). X calls the scan code that it reads a keycode . The X server maintains a table that translates key codes into keysyms (short for “key symbol”). Keycodes are numeric, whereas keysyms are names such as A , aacute , F1 , KP_Add , Control_L , … The keysym may differ depending on which modifier keys are pressed ( Shift , Ctrl , …). There are two mechanisms to configure the mapping from keycodes to keysyms: xmodmap is the traditional mechanism. It is a simple table mapping keycodes to a list of keysyms (unmodified, shifted, …). XKB is a more powerful, but more complex mechanism with better support for more modifiers, in particular for dual-language configuration, among others. Applications connect to the X server and receive a notification when a key is pressed while a window of that application has the focus. The notification indicates that a certain keysym was pressed or released as well as what modifiers are currently pressed. You can see keysyms by running the program xev from a terminal. What the application does with the information is up to it; some applications have configurable key bindings. In a typical configuration, when you press the key labeled A with no modifiers, this sends the keysym a to the application; if the application is in a mode where you're typing text, this inserts the character a . Relationship of keyboard layout and xmodmap goes into more detail on keyboard input. How do mouse events work in linux? gives an overview of mouse input at the lower levels. Output +-------------+ +----------+ +-----+ +---------+
| application |------->| X server |---····-->| GPU |-------->| monitor |
+-------------+ +----------+ +-----+ +---------+
text or varies VGA, DVI,
image HDMI, … There are two ways to display a character. Server-side rendering : the application tells the X server “ draw this string in this font at this position ”. The font resides on the X server. Client-side rendering : the application builds an image that represents the character in a font that it chooses, then tells the X server to display that image . See What are the purposes of the different types of XWindows fonts? for a discussion of client-side and server-side text rendering under X11. What happens between the X server and the Graphics Processing Unit (the processor on the video card) is very hardware-dependent. Simple systems have the X server draw in a memory region called a framebuffer , which the GPU picks up for display. Advanced systems such as found on any 21st century PC or smartphone allow the GPU to perform some operations directly for better performance. Ultimately, the GPU transmits the screen content pixel by pixel every fraction of a second to the monitor. Text mode application, running in a terminal If your text editor is a text mode application running in a terminal, then it is the terminal which is the application for the purpose of the section above. In this section, I explain the interface between the text mode application and the terminal. First I describe the case of a terminal emulator running under X11. What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? may be useful background here. After reading this, you may want to read the far more detailed What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)? Input +-------------------+ +-------------+
----->| terminal emulator |-------------->| application |
+-------------------+ +-------------+
keysym character or
escape sequence The terminal emulator receives events like “ Left was pressed while Shift was down”. The interface between the terminal emulator and the text mode application is a pseudo-terminal (pty) , a character device which transmits bytes. When the terminal emulator receives a key press event, it transforms this into one or more bytes which the application gets to read from the pty device. Printable characters outside the ASCII range are transmitted as one or more byte depending on the character and encoding . For example, in the UTF-8 encoding of the Unicode character set, characters in the ASCII range are encoded as a single bytes, while characters outside that range are encoded as multiple bytes. Key presses that correspond to a function key or a printable character with modifiers such as Ctrl or Alt are sent as an escape sequence . Escape sequences typically consist of the character escape (byte value 27 = 0x1B = \033 , sometimes represented as ^[ or \e ) followed by one or more printable characters. A few keys or key combination have a control character corresponding to them in ASCII-based encodings (which is pretty much all of them in use today, including Unicode): Ctrl + letter yields a character value in the range 1–26, Esc is the escape character seen above and is also the same as Ctrl + [ , Tab is the same as Ctrl + I , Return is the same as Ctrl + M , etc. Different terminals send different escape sequences for a given key or key combination. Fortunately, the converse is not true: given a sequence, there is in practice at most one key combination that it encodes. The one exception is the character 127 = 0x7f = \0177 which is often Backspace but sometimes Delete . In a terminal, if you type Ctrl + V followed by a key combination, this inserts the first byte of the escape sequence from the key combination literally. Since escape sequences normally consist only of printable characters after the first one, this inserts the whole escape sequence literally. See key bindings table? for a discussion of zsh in this context. The terminal may transmit the same escape sequence for some modifier combinations (e.g. many terminals transmit a space character for both Space and Shift + Space ; xterm has a mode to distinguish modifier combinations but terminals based on the popular vte library don't ). A few keys are not transmitted at all, for example modifier keys or keys that trigger a binding of the terminal emulator (e.g. a copy or paste command). It is up to the application to translate escape sequences into symbolic key names if it so desires. Output +-------------+ +-------------------+
| application |-------------->| terminal emulator |--->
+-------------+ +-------------------+
character or
escape sequence Output is rather simpler than input. If the application outputs a character to the pty device file, the terminal emulator displays it at the current cursor position. (The terminal emulator maintains a cursor position, and scrolls if the cursor would fall under the bottom of the screen.) The application can also output escape sequences (mostly beginning with ^[ or ^] ) to tell the terminal to perform actions such as moving the cursor, changing the text attributes (color, bold, …), or erasing part of the screen. Escape sequences supported by the terminal emulator are described in the termcap or terminfo database. Most terminal emulator nowadays are fairly closely aligned with xterm . See Documentation on LESS_TERMCAP_* variables? for a longer discussion of terminal capability information databases, and How to stop cursor from blinking and Can I set my local machine's terminal colors to use those of the machine I ssh into? for some usage examples. Application running in a text console If the application is running directly in a text console, i.e. a terminal provided by the kernel rather than by a terminal emulator application, the same principles apply. The interface between the terminal and the application is still a byte stream which transmits characters, with special keys and commands encoded as escape sequences. Remote application, accessed over the network Remote text application If you run a program on a remote machine, e.g. over SSH , then the network communication protocol relays data at the pty level. +-------------+ +------+ +-----+ +----------+
| application |<--------->| sshd |<--------->| ssh |<--------->| terminal |
+-------------+ +------+ +-----+ +----------+
byte stream byte stream byte stream
(char/seq) over TCP/… (char/seq) This is mostly transparent, except that sometimes the remote terminal database may not know all the capabilities of the local terminal. Remote X11 application The communication protocol between applications and the server is itself a byte stream that can be sent over a network protocol such as SSH. +-------------+ +------+ +-----+ +----------+
| application |<---------->| sshd |<------>| ssh |<---------->| X server |
+-------------+ +------+ +-----+ +----------+
X11 protocol X11 over X11 protocol
TCP/… This is mostly transparent, except that some acceleration features such as movie decoding and 3D rendering that require direct communication between the application and the display are not available. | {
"source": [
"https://unix.stackexchange.com/questions/116629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
116,770 | I have a directory which contains image files with names like image1.jpg
image2.jpg
image3.jpg
... Unfortunately, the image names must be zero based, so image1.jpg should be image0.jpg , image2.jpg should be image1.jpg and so on. I can write a script to generate mv commands like these, put them in a shell script, and then execute them - mv image1.jpg image0.jpg
mv image2.jpg image1.jpg
mv image3.jpg image2.jpg
... But I suppose there is a neater way to do it in Unix. So what is it? | The good old perl rename: rename 's/(\d+)(\.jpg)/($1-1).$2/e' * [Remarks] Image numbers should be greater than 0. In case image numbers are greater than 9 and have no leading 0s,
use $(ls -v1 *) to avoid clobbering. Proposed by @arielf and noticed by @Graeme. When in doubt use also -v for verbose and -n for no-action. | {
"source": [
"https://unix.stackexchange.com/questions/116770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59877/"
]
} |
116,774 | I have lost all files under /etc/yum.repos.d/ and need a source or quick method for recovery. kernel version : 2.6.43.8-1.fc15.i686 / Fedora 15 Lovelock yum version : yum-3.2.29-9.fc15.noarch | The good old perl rename: rename 's/(\d+)(\.jpg)/($1-1).$2/e' * [Remarks] Image numbers should be greater than 0. In case images are greater than 9 and have not leading 0s,
use $(ls -v1 *) to avoid clobbering. Proposed by @arielf and noticed by @Graeme. When in doubt use also -v for verbose and -n for no-action. | {
"source": [
"https://unix.stackexchange.com/questions/116774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61252/"
]
} |
116,775 | I need to "install" a bunch of files to another directory keeping the directory structure of the source files intact. For example, if I have ./foo/bar/baz.txt going to /var/www/localhost/webroot/ I want the result to be /var/www/localhost/webroot/foo/bar/baz.txt . rsync has this capability in --relative , but when I did this I discovered it wasn't friendly to symlinks: $ ls -ald /var/www/localhost/webroot/ | grep ^l
lrwxrwxrwx 1 www-data www-data 15 2014-01-03 13:45 media -> ../static/media
lrwxrwxrwx 1 root root 13 2014-02-24 13:47 var -> ../static/var
$ rsync -qrR . /var/www/localhost/webroot/
$ ls -ald /var/www/localhost/webroot/ | grep var
drwxr-xr-x 3 root root 4096 2014-02-24 13:52 /var/www/localhost/webroot/var So you see the symlink is no longer a symlink – the files were copied to the wrong place! rsync also has the --no-implied-dirs option, that superficially seems to do what I want, but it only works as I intend when not doing a recursive rsync, so I have to: find . -type f -print0 | xargs -0I{} rsync -R --no-implied-dirs {} /var/www/localhost/webroot/ Is there any more direct way to accomplish this mirroring of files without wiping out intermediate symlink directories (with or without rsync)? | Use rsync 's option -K ( --keep-dirlinks ). From the manpage: -K, --keep-dirlinks
This option causes the receiving side to treat a
symlink to a directory as though it were a real
directory, but only if it matches a real directory from
the sender. Without this option, the receiver’s
symlink would be deleted and replaced with a real
directory.
For example, suppose you transfer a directory foo that
contains a file file, but foo is a symlink to directory
bar on the receiver. Without --keep-dirlinks, the
receiver deletes symlink foo, recreates it as a
directory, and receives the file into the new
directory. With --keep-dirlinks, the receiver keeps
the symlink and file ends up in bar.
One note of caution: if you use --keep-dirlinks, you
must trust all the symlinks in the copy! If it is
possible for an untrusted user to create their own
symlink to any directory, the user could then (on a
subsequent copy) replace the symlink with a real
directory and affect the content of whatever directory
the symlink references. For backup copies, you are
better off using something like a bind mount instead of
a symlink to modify your receiving hierarchy.
See also --copy-dirlinks for an analogous option for
the sending side. | {
"source": [
"https://unix.stackexchange.com/questions/116775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5377/"
]
} |
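Put together with the --relative copy from the question, that gives (a sketch; add -n and -v first for a dry run):
$ rsync -qrRK . /var/www/localhost/webroot/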
116,779 | After freshly installing Debian with Gnome 3, I cannot enable desktop icons. I tried to use dconf-editor , but all values were locked. I tried sudo dconf-editor : The option was changeable but had no effect... I also tried gnome-tweak-tool and to no avail. What am I doing wrong? | Use rsync 's option -K ( --keep-dirlinks ). From the manpage: -K, --keep-dirlinks
This option causes the receiving side to treat a
symlink to a directory as though it were a real
directory, but only if it matches a real directory from
the sender. Without this option, the receiver’s
symlink would be deleted and replaced with a real
directory.
For example, suppose you transfer a directory foo that
contains a file file, but foo is a symlink to directory
bar on the receiver. Without --keep-dirlinks, the
receiver deletes symlink foo, recreates it as a
directory, and receives the file into the new
directory. With --keep-dirlinks, the receiver keeps
the symlink and file ends up in bar.
One note of caution: if you use --keep-dirlinks, you
must trust all the symlinks in the copy! If it is
possible for an untrusted user to create their own
symlink to any directory, the user could then (on a
subsequent copy) replace the symlink with a real
directory and affect the content of whatever directory
the symlink references. For backup copies, you are
better off using something like a bind mount instead of
a symlink to modify your receiving hierarchy.
See also --copy-dirlinks for an analogous option for
the sending side. | {
"source": [
"https://unix.stackexchange.com/questions/116779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59939/"
]
} |
116,849 | I'm getting the following output from top : Cpu(s): 43.8%us, 32.5%sy, 4.8%ni, 2.0%id, 15.6%wa, 0.2%hi, 1.2%si, 0.0%st
Mem: 16331504k total, 15759412k used, 572092k free, 4575980k buffers
Swap: 4194296k total, 260644k used, 3933652k free, 1588044k cached the output from iostat -xk 6 shows the following: Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 360.20 86.20 153.40 1133.60 2054.40 26.61 1.51 6.27 0.77 18.38
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 22.60 198.80 17.40 31.60 265.60 921.60 48.46 0.18 3.70 1.67 8.20
sdc 16.80 218.20 22.20 23.40 261.60 966.40 53.86 0.21 4.56 1.49 6.78 Based on the above it looks like something must be overloaded. But what? Questions If its not the harddisk or the CPU then what? It seems as though 15.6% of the CPU's time is spent waiting. What exactly could it be waiting for? | As a clarification point, load is not directly tied to CPU. This is one of the most common misconceptions about load. The fact that you mention disk seems to acknowledge that you're aware of this, but I just wanted to mention it as I see comments that indicate some believe otherwise. Load is defined as the number of processes on the kernel run queue (meaning: waiting on system resources). This generally means processes waiting on CPU, disk, or network, but can be anything hardware really. A "process" is not necessarily a full process either. A thread is defined as a "lightweight process", and each thread that is waiting increases the load count. To figure out which processes are a problem: Run top -H (the -H enables showing threads) The keyboard shortcuts vary by version. With newer top (3.3 and after): Press f to bring up the field options. Use the arrow keys to go to S = Process Status and press s . Press q to go back to the main page. Press Shift + R to reverse the sorting. With older top (before 3.3): Press Shift + o to bring up the sort options. Then w to sort by process status. Then Enter to go back to the main page. Then Shift + R to reverse the sorting. Then in the S column, look for processes which have D or R (they should now be at the top). These will be processes contributing to system load. If the process shows a D , that means "uninterruptable sleep". Usually this is caused when the process is waiting on I/O (disk, network, etc). If the process shows a R , that means it's just doing normal computation. To find more about what those processes are doing: With newer top (3.3 and after): Press f to bring up the field options. Use the arrow keys to go to WCHAN = Sleeping in Function and press d to enable it. Then q to get back to the main page. With older top (before 3.3): Press f then y to enable the WCHAN field. If your system has the necessary kernel options, and the wchan file is present on your system (I forget where it is and what it's called) , the WCHAN field should show you what kernel function the process is currently running (if the field just shows a - or a ? on everything, you don't have support). A bit of google here and you should be on your way. If you don't have wchan support, you can always try an strace on the processes to find out what they're doing, but that's the difficult way. | {
"source": [
"https://unix.stackexchange.com/questions/116849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29357/"
]
} |
116,919 | How can I redirect the microphone of one computer to listen to it on another computer via ssh? Which is the right device or which is the right command line? Some years ago it was easy and fun to redirect sound from a remote microphone to a local computer or vice versa – it was an easy telephone. There are some instructions for it, but none of them seem to work on newer computers/linux distros. I don’t even have a /dev/audio on my computer (Fedora 17). I think that it may have something to do with pulse audio. Or don’t I need pulse audio for this simple telephone? Which is the right device? I can see all my sound devices when I start alsamixer and press the F6 key. But I don’t know which are the devices in my /dev tree. | OK, I've just found it, and it still works! Really funny. You don’t need any fancy applications, instant messengers or the like. With this command you send your audio to the remote host. arecord -f cd -t raw | oggenc - -r | ssh <user>@<remotehost> mplayer - Or if you like ffmpeg better ffmpeg -f alsa -ac 1 -i hw:3 -f ogg - \
| ssh <user>@<remotehost> mplayer - -idle -demuxer ogg Source: http://shmerl.blogspot.de/2011/06/some-fun-with-audio-forwarding.html If you want a real telephone: The command above was only for one direction. For the other direction you have to start another ssh session. So, to receive what the other user says to you, use ssh <user>@<remotehost> 'arecord -f cd -t raw | oggenc - -r' | mplayer - Or if you like ffmpeg better ssh <user>@<remotehost> ffmpeg -f alsa -ac 1 -i hw:3 -f ogg - \
| mplayer - -idle -demuxer ogg where hw:3 is the alsadevice you want to record (find it with arecord -l ; you can also use a device name, find this with arecord -L ; in many cases you can just use the device listed with the following command: arecord -L | grep sysdefault ). Update In 2018 on my Fedora Linux systems ffmpeg does not have alsa support included (it seems to be the same on RaspberryPi systems with Raspbian). But there is a simple solution without recompiling. Just pipe the output of arecord (the alsarecorder) to ffmpeg: ssh <user>@<remotehost> 'arecord -f cd -D plughw:2 | ffmpeg -ac 1 -i - -f ogg -' \
| mplayer - -idle -demuxer ogg You get the input device plughw:2 by finding your device in the output of the following command: arecord -l In my case I see card0 and card2 (my webcam which has a microphone). So I wrote plughw:2 for card2 . Update 2 (without mplayer) If you don’t have or like mplayer but ffplay (which is part of ffmpeg ) you can use: uncompressed wave-audio (high bandwidth, very low cpu usage on the recording side) ssh <user>@<remotehost> "arecord -f cd -D plughw:2" | ffplay -nodisp - compressed with flac (low bandwidth, low cpu usage on the recording side) ssh <user>@<remotehost> "arecord -f cd -D plughw:2 | flac - -o -" | ffplay -nodisp - compressed with ogg (very low bandwidth, high cpu usage on the recording side) ssh <user>@<remotehost> "arecord -f cd -D plughw:2 | oggenc -" | ffplay -nodisp - | {
"source": [
"https://unix.stackexchange.com/questions/116919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31673/"
]
} |
116,955 | In a bash sub shell I get the following error when running cd sudo: cd: command not found This is expected because I don't have a path. Usually to work around this I just provide the full path like so: (/usr/local/bin/foo) Much to my surprise, cd does not appear to be in any of the normal places. which cd
whereis cd
ls /bin | grep cd By comparison, ls is right where I would expect. which ls
/bin/ls Where is the cd command located? And why is different from all the other commands? Update Another interesting tidbit, cd does not show up in hash hash
0 /bin/ls
2 /usr/bin/find
2 /sbin/ip
1 /usr/bin/updatedb
1 /usr/bin/apt-get | What cd am I using? If you're in Bash cd is a builtin. The type command even bears this out: $ type -a cd
cd is a shell builtin
cd is /usr/bin/cd
cd is /bin/cd The system will use the first thing in this list, so the builtin will be the preferred option, and the only one that works (see the section below on What is /bin/cd ). What's a builtin? I like to think of builtins as functions that Bash knows how to do itself. Basically anything that you use a lot has been moved into the Bash "kernel" so that it doesn't have to go executing a process for each time. You can always explicitly tell Bash that you want a builtin by using the builtin command like so: $ builtin cd See the help about builtin : $ help builtin Why isn't cd in hash? The hash is meant only to "hash" (aka. "save" in a key/value pair) the locations of files, not for builtins or keywords. The primary task for hash is in saving on having to go through the $PATH each time looking for frequently used executables. Keywords? These are typically the commands that are part of Bash's programming language features. $ type while
while is a shell keyword
$ type for
for is a shell keyword
$ type !
! is a shell keyword Some things are implemented in multiple ways, such as [ : $ type -a [
[ is a shell builtin
[ is /usr/bin/[
[ is /bin/[ ...and cd as you've discovered. What is /bin/cd? On my Fedora 19 system /bin/cd is actually a shell script: $ more /bin/cd
#!/bin/sh
builtin cd "$@" But it doesn't do what you think. See these other U&L Q&A's for more details: What is the point of the `cd` external command? " Why can't I redirect a path name output from one command to "cd"? Bottom line: POSIX's requires that it's there and in this implementation, it acts as a test, confirming that you're able to change directories to X, but then returning a return code confirming or denying that this is possible. | {
"source": [
"https://unix.stackexchange.com/questions/116955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
116,959 | I get the message There are stopped jobs. when I try to exit a bash shell sometimes. Here is a reproducible scenario in python 2.x: ctrl + c is handled by the interpreter as an exception. ctrl + z 'stops' the process. ctrl + d exits python for reals. Here is some real-world terminal output: example_user@example_server:~$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> ctrl+z [1]+ Stopped python
example_user@example_server:~$ exit
logout
There are stopped jobs. Bash did not exit, I must exit again to exit the bash shell. Q: What is a 'stopped job', or what does this mean? Q: Can a stopped process be resumed? Q: Does the first exit kill the stopped jobs? Q: Is there a way to exit the shell the first time? (without entering exit twice) | A stopped job is one that has been temporarily put into the background and is no longer running, but is still using resources (i.e. system memory). Because that job is not attached to the current terminal, it cannot produce output and is not receiving input from the user. You can see jobs you have running using the jobs builtin command in bash, probably other shells as well. Example: user@mysystem:~$ jobs
[1] + Stopped python
user@mysystem:~$ You can resume a stopped job by using the fg (foreground) bash built-in command. If you have multiple commands that have been stopped you must specify which one to resume by passing jobspec number on the command line with fg . If only one program is stopped, you may use fg alone: user@mysystem:~$ fg 1
python At this point you are back in the python interpreter and may exit by using control-D. Conversely, you may kill the command with either its jobspec or PID. For instance: user@mysystem:~$ ps
PID TTY TIME CMD
16174 pts/3 00:00:00 bash
17781 pts/3 00:00:00 python
18276 pts/3 00:00:00 ps
user@mysystem:~$ kill 17781
[1]+ Killed python
user@mysystem:~$ To use the jobspec, precede the number with the percent (%) key: user@mysystem:~$ kill %1
[1]+ Terminated python If you issue an exit command with stopped jobs, the warning you saw will be given. The jobs will be left running for safety. That's to make sure you are aware you are attempting to kill jobs you might have forgotten you stopped. The second time you use the exit command the jobs are terminated and the shell exits. This may cause problems for some programs that aren't intended to be killed in this fashion. In bash it seems you can use the logout command which will kill stopped processes and exit. This may cause unwanted results. Also note that some programs may not exit when terminated in this way, and your system could end up with a lot of orphaned processes using up resources if you make a habit of doing that. Note that you can create background processes that will stop if they require user input: user@mysystem:~$ python &
[1] 19028
user@mysystem:~$ jobs
[1]+ Stopped python You can resume and kill these jobs in the same way you did jobs that you stopped with the Ctrl-z interrupt. | {
"source": [
"https://unix.stackexchange.com/questions/116959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
116,971 | I have NFSv4 Server (on RHELv6.4) and NFS Clients on (CentOSv6.4). Let's say in /etc/exports : /shares/website1 <ip-client-1>(rw,sync,no_subtree_check,no_root_squash)
/shares/website2 <ip-client-2>(rw,sync,no_subtree_check,no_root_squash) Then whenever i made some changes in that (let's say the changes ONLY for client-2 ), e.g: /shares/website1 <ip-client-1>(rw,sync,no_subtree_check,no_root_squash)
/shares/xxxxxxxx <ip-client-2>(rw,sync,no_subtree_check,no_root_squash) Then i always service nfs restart . And then eventually .. the mount-point on client-1 got unresponsive (Can't open its files, etc) . (Why? Because of RESTART?) But as described, i only modified the line for client-2 only. Everything for the client-1 are still untouched. So my questions here are: Whenever i modify the /etc/exports , should i restart the service or what? If i service nfs restart , why the Mount-Point on other Clients are eventually affected? (For those Client Machines with NO changes made in /etc/exports for them.) That means, whenever i make the changes in /etc/exports and restart the service, i will need to go RE-MOUNT the directories on EVERY CLIENTS in the export list, in order to have the mount-points working again. Any idea, please? | You shouldn't need to restart NFS every time you make a change to /etc/exports . All that's required is to issue the appropriate command after editing the /etc/exports file: $ exportfs -ra Excerpt from the official Red Hat documentation titled: 21.7. The /etc/exports Configuration File . excerpt When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the /usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately. Also read the exportfs man page for more details, specifically the "DESCRIPTION" section which explains all this and more. DESCRIPTION
An NFS server maintains a table of local physical file systems that are
accessible to NFS clients. Each file system in this table is referred
to as an exported file system, or export, for short. The exportfs command maintains the current table of exports for the NFS
server. The master export table is kept in a file named
/var/lib/nfs/etab. This file is read by rpc.mountd when a client sends
an NFS MOUNT request.
Normally the master export table is initialized with the contents
of /etc/exports and files under /etc/exports.d by invoking exportfs -a.
However, a system administrator can choose to add or delete exports
without modifying /etc/exports or files under /etc/exports.d by
using the exportfs command. Also take note of the options we're using, -ra : -a Export or unexport all directories.
-r Reexport all directories, synchronizing /var/lib/nfs/etab with
/etc/exports and files under /etc/exports.d. This option
removes entries in /var/lib/nfs/etab which have been deleted
from /etc/exports or files under /etc/exports.d, and removes
any entries from the kernel export table which are no longer
valid.
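So in this setup the whole routine after editing /etc/exports on the server comes down to two commands (the second is optional and just lets you verify the resulting export list):
exportfs -ra
exportfs -v
No restart, and the clients whose entries were not touched keep their mounts undisturbed. | {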
"source": [
"https://unix.stackexchange.com/questions/116971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34722/"
]
} |
116,987 | I am trying to download all links from aligajani.com. There are 7 of them, excluding the domain facebook.com–which I want to ignore. I don't want to download from links that start with facebook.com domain. Also, I want them saved in a .txt file, line by line. So there would be 7 lines. Here's what I've tried so far. This just downloads everything. Don't want that. wget -r -l 1 http://aligajani.com | wget does not offer such an option. Please read its man page. You could use lynx for this: lynx -dump -listonly http://aligajani.com | grep -v facebook.com > file.txt From its man page: -listonly
for -dump, show only the list of links.
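If you also want to strip the numbering that lynx prints in front of each link, a small filter like this should do it (a sketch; the exact layout of lynx's list output can vary a little between versions):
lynx -dump -listonly http://aligajani.com | awk '/^ *[0-9]+\. http/ {print $2}' | grep -v '://[^/]*facebook\.com' > file.txt
That leaves one URL per line in file.txt, with the facebook.com links already filtered out. | {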
"source": [
"https://unix.stackexchange.com/questions/116987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61368/"
]
} |
117,037 | On Linux, I want to send a command string (i.e. some data) to a serial port (containing control characters), and listen to the response (which also usually might contain control characters). How can I do this as simply as possible on Linux? An example is appreciated! | All devices on Unix are mapped to a device file, the serial ports would be /dev/ttyS0 /dev/ttyS1 ... . First have a look at the permissions on that file, let's assume you are using /dev/ttyS1 . ls -l /dev/ttyS1 You will want read/write access; if this is a shared system then you should consider the security consequences of opening it up for everyone. chmod o+rw /dev/ttyS1 A very simple, crude method to write to the file would use the simple echo command. echo -ne '\033[2J' > /dev/ttyS1 and to read cat -v < /dev/ttyS1 You can have cat running in one terminal, and echo in a 2nd. If everything is gibberish, then baud rate and bit settings might need setting before you start sending. stty will do that. !! NOTE stty will use stdin as default file descriptor to affect. Equivalent commands. stty 19200 < /dev/ttyS1
stty 19200 -F /dev/ttyS1 This might be enough for you to script something and log? Not sure what you are trying to achieve. For a more interactive approach that remembers your default settings, use minicom , which is just a program that does everything I've mentioned so far (similar to HyperTerminal on Windows, which you might be familiar with). An intermediate solution would be to use a terminal program like screen , which will work on a serial device. screen /dev/ttyS1 man screen man minicom man stty for more information | {
"source": [
"https://unix.stackexchange.com/questions/117037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31853/"
]
} |
117,093 | So I received a warning from our monitoring system on one of our boxes that the number of free inodes on a filesystem was getting low. df -i output shows this: Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 524288 422613 101675 81% / As you can see, the root partition has 81% of its inodes used. I suspect they're all being used in a single directory. But how can I find where that is at? | I saw this question over on stackoverflow, but I didn't like any of the answers, and it really is a question that should be here on U&L anyway. Basically an inode is used for each file on the filesystem. So running out of inodes generally means you've got a lot of small files laying around. So the question really becomes, "what directory has a large number of files in it?" In this case, the filesystem we care about is the root filesystem / , so we can use the following command: { find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n; } 2>/dev/null This will dump a list of every directory on the filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom. In my case, this turns up the following: 1202 /usr/share/man/man1
2714 /usr/share/man/man3
2826 /var/lib/dpkg/info
306588 /var/spool/postfix/maildrop So basically /var/spool/postfix/maildrop is consuming all the inodes. *Note, this answer does have three caveats that I can think of. It does not properly handle anything with newlines in the path. I know my filesystem has no files with newlines, and since this is only being used for human consumption, the potential issue isn't worth solving and one can always replace the \n with \0 and use -z options for the sort and uniq commands above as following: { find / -xdev -printf '%h\0' |sort -z |uniq -zc |sort -zk1rn; } 2>/dev/null Optionally you can add head -zn10 to the command to get top 10 most used inodes. It also does not handle if the files are spread out among a large number of directories. This isn't likely though, so I consider the risk acceptable. It will also count hard links to a same file (so using only one inode) several times. Again, unlikely to give false positives* The key reason I didn't like any of the answers on the stackoverflow answer is they all cross filesystem boundaries. Since my issue was on the root filesystem, this means it would traverse every single mounted filesystem. Throwing -xdev on the find commands wouldn't even work properly. For example, the most upvoted answer is this one: for i in `find . -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n If we change this instead to for i in `find . -xdev -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n even though /mnt/foo is a mount, it is also a directory on the root filesystem, so it'll turn up in find . -xdev -type d , and then it'll get passed to the ls -a $i , which will dive into the mount. The find in my answer instead lists the directory of every single file on the mount. So basically with a file structure such as: /foo/bar
/foo/baz
/pop/tart we end up with /foo
/foo
/pop So we just have to count the number of duplicate lines.
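With the toy paths above, that counting step (the sort | uniq -c part of the big command) would come out as:
$ printf '%s\n' /foo /foo /pop | sort | uniq -c | sort -k1 -n
      1 /pop
      2 /foo
i.e. two files live directly under /foo and one under /pop, which is exactly the per-directory file count we were after. | {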
"source": [
"https://unix.stackexchange.com/questions/117093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4358/"
]
} |
117,122 | Currently I am having lots of fun with apt-get - and the bad thing is, it was my own fault. I had enabled the testing packages in /etc/apt/sources.list to install a certain package. And I told my system to do apt-get dist-upgrade . Everything worked fine, but now I am trying to get back to the stable release - and I fail... When trying to do the apt-get dist-upgrade , I get the following information: The following packages will be REMOVED:
linux-image-3.10-3-amd64
The following NEW packages will be installed:
libcgi-fast-perl libfcgi-perl libyaml-syck-perl
The following packages will be DOWNGRADED:
initramfs-tools libdate-manip-perl munin munin-common Well, that's okay, but when I try to do this, I get a warning in bold friendly red letters: You are running a kernel (version 3.10-3-amd64) and attempting to remove the same version.
...
It is highly recommended to abort the kernel removal unless you are prepared to fix the system after removal. Well, I like to follow the recommendmend. The correct kernel version for the stable release would be linux-image-3.2.0-4-amd64 and it is already installed. Probably the downgrade would be no problem if I was working under the older kernel? Actually, I have no clue how to enable the kernel 3.2.0 instead of 3.10 . | Look at this, it seems to indicate that downgrade is possible using apt-get: http://ispire.me/downgrade-from-debian-sid-to-stable-from-jessie-to-wheezy/ Essentials (3-step): (If much of your system is of a higher version, you'll want to be careful downgrading. See especially format changes (data and personal config files) Remove all references to sid or unstable in your /etc/apt/sources.list by deleting, replacing, or commenting out. Ensure sources.list has what you do want (I recommend security and stable deb sources). For example: deb http://security.debian.org/ wheezy/updates main deb-src http://security.debian.org/ wheezy/updates main deb http://cdn.debian.net/debian/ wheezy main contrib non-free deb-src http://cdn.debian.net/debian/ wheezy main contrib non-free Pin the release you want in /etc/apt/preferences (this will cause the already downloaded but now unwanted package information to be ignored as desired). Package: * Pin: release a=stable Pin-Priority: 1001 Finally we have to run the apt update and upgrade process for downgrading all packages. * apt will ask for confirmation # apt-get update # apt-get upgrade # apt-get dist-upgrade If you can't explain what each of these commands does independently, read your man pages! :) And do the same for at least the options you use in other utilities. (If you have issues downgrading a package) Purge it, then reinstall # apt-get purge [your_failing_package] # apt-get install [your_failing_package] | {
"source": [
"https://unix.stackexchange.com/questions/117122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61441/"
]
} |
117,150 | On shared unix hosting, if I have a file sensitive-data.txt and I issue: chmod 600 sensitive-data.txt Can the root user still read my file? Specifically I'm wondering if it's safe to store my password in a mercurial hgrc file. UPDATE Decided to use the mercurial keyring extension as it was super easy to set up: pip install mercurial_keyring and then add to hgrc: [extensions]
mercurial_keyring = However I'm still interested in the answer to this question. | Yes, root can: $ echo Hello you\! > file
$ chmod 600 file
$ ls -l file
-rw------- 1 terdon terdon 11 Feb 27 02:14 file
$ sudo -i
# cat file
Hello you! In any case, even if root couldn't read your files as root, they can always log in as you without a password: $ whoami
terdon
$ sudo -i
[sudo] password for terdon:
# whoami
root
# su - terdon
$ whoami
terdon So, root can change to any other username using su (or sudo -iu username ) and will then be able to do anything at all as though they were you. | {
"source": [
"https://unix.stackexchange.com/questions/117150",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6250/"
]
} |
117,244 | I have created a script to install two scripts on to the crontab. #!/bin/bash
sudo crontab -l > mycron
#echo new cron into cron file
echo "*/05 * * * * bash /mnt/md0/capture/delete_old_pcap.sh" >> mycron #schedule the delete script
echo "*/12 * * * * bash /mnt/md0/capture/merge_pcap.sh" >> mycron #schedule the merge script
#install new cron file
crontab mycron
rm mycron The script runs, and adds the two lines to the crontab. But if I run the script again, it adds those lines again, thus I will have four lines saying the same stuff. I want the install script to run such that the lines inserted into the crontab do not repeat. How can I do that? | I would recommend using /etc/cron.d over crontab . You can place files in /etc/cron.d which behave like crontab entries. Though the format is slightly different. For example /etc/cron.d/pcap : */05 * * * * root bash /mnt/md0/capture/delete_old_pcap.sh
*/12 * * * * root bash /mnt/md0/capture/merge_pcap.sh The difference in the format is adding the user to run the job as after the time specification. Now you can simply check if the file exists, and if you overwrite it, it doesn't matter. Note that it's possible your cron daemon might not have /etc/cron.d . I do not know which cron daemons have it, but vixie cron is the standard cron daemon on linux, and it does.
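With that approach the install script from the question shrinks to something like this (a sketch; it simply rewrites the same file on every run, so running it twice changes nothing):
#!/bin/bash
sudo tee /etc/cron.d/pcap > /dev/null <<'EOF'
*/05 * * * * root bash /mnt/md0/capture/delete_old_pcap.sh
*/12 * * * * root bash /mnt/md0/capture/merge_pcap.sh
EOF
Because the whole file is replaced each time, the duplicate-entry problem disappears. | {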
"source": [
"https://unix.stackexchange.com/questions/117244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58138/"
]
} |
117,250 | I need to increase screen resolution to 1024x800 for my XEN console. I tried to place vga=791 at the end of kernel line the file /boot/grub/grub.conf but it seems that most of the boot arguments are ignored during startup. Probably inside XenSever /boot is not really used in order to launch CentOS. I even tried to add boot option in the boot option tab (VM -> Property -> Boot Option) but doesn't work. | I would recommend using /etc/cron.d over crontab . You can place files in /etc/cron.d which behave like crontab entries. Though the format is slightly different. For example /etc/cron.d/pcap : */05 * * * * root bash /mnt/md0/capture/delete_old_pcap.sh
*/12 * * * * root bash /mnt/md0/capture/merge_pcap.sh The difference in the format is adding the user to run the job as after the time specification. Now you can simply check if the file exists, and if you overwrite it, it doesn't matter. Note that it's possible your cron daemon might not have /etc/cron.d . I do not know which cron daemons have it, but vixie cron is the the standard cron daemon on linux, and it does. | {
"source": [
"https://unix.stackexchange.com/questions/117250",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61510/"
]
} |
117,325 | Where are filenames stored on a filesystem? It's not in inode or with the actual file content since we have hard link that two filenames can point to the same inode. | I wasn't finding a suitable duplicate so here's an answer to your question. File names & directories excerpt File names and directory implications: inodes do not contain file names, only other file metadata. Unix directories are lists of association structures, each of which contains one filename and one inode number. The file system driver must search a directory looking for a particular filename and then convert the filename to the correct corresponding inode number. Source: Wikipedia page on Inode So the name of the file is stored within the directories' information structure. For example: Directory's structure excerpt In the EXT2 file system, directories are special files that are used to create and hold access paths to the files in the file system. Figure 9.3 shows the layout of a directory entry in memory. A directory file is a list of directory entries, each one containing the following information: inode - The inode for this directory entry. This is an index into the array of inodes held in the Inode Table of the Block Group. In figure 9.3, the directory entry for the file called file has a reference to inode number i1, name length - The length of this directory entry in bytes, name - The name of this directory entry. The first two entries for every directory are always the standard . and .. entries meaning "this directory" and "the parent directory" respectively. Here's the Figure 9.3 references above: Source: The Linux Documentation Project: Filesystem References Chapter 3: File System Basics How are directory structures stored in UNIX filesystem? | {
"source": [
"https://unix.stackexchange.com/questions/117325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60014/"
]
} |
117,336 | First off, I've installed arch before, but managed to not encounter any of the problems I'm having at the moment (not sure how). But I'm well and truly stuck. First, my network interface is now called enp3s0 rather than eth0, so every time I start arch, I need to run ip link set enp3s0 and then dhcpcd enp3s0 how do I configure this so it happens automatically? My second issue seems more peculiar; after booting into arch, I installed the enlightenment WM with pacman, and tried to run it, but apparently I did not have a couple of xorg packages, namely xorg-xinit, and another which I forget. After installing these however, editing the .xinitrc file, and running startx, I just got 3 white bash boxes on a black screen. Though if I run enlightenment_start in one of those boxes, enlightenment starts fine (albeit with 3 terminal boxes open, 2 I can close fine, but if the third is closed, enlightenment exits). I am certain this is not normal behaviour, and any help as to what I'm doing wrong here would be much appreciated. | I wasn't finding a suitable duplicate so here's an answer to your question. File names & directories excerpt File names and directory implications: inodes do not contain file names, only other file metadata. Unix directories are lists of association structures, each of which contains one filename and one inode number. The file system driver must search a directory looking for a particular filename and then convert the filename to the correct corresponding inode number. Source: Wikipedia page on Inode So the name of the file is stored within the directories' information structure. For example: Directory's structure excerpt In the EXT2 file system, directories are special files that are used to create and hold access paths to the files in the file system. Figure 9.3 shows the layout of a directory entry in memory. A directory file is a list of directory entries, each one containing the following information: inode - The inode for this directory entry. This is an index into the array of inodes held in the Inode Table of the Block Group. In figure 9.3, the directory entry for the file called file has a reference to inode number i1, name length - The length of this directory entry in bytes, name - The name of this directory entry. The first two entries for every directory are always the standard . and .. entries meaning "this directory" and "the parent directory" respectively. Here's the Figure 9.3 references above: Source: The Linux Documentation Project: Filesystem References Chapter 3: File System Basics How are directory structures stored in UNIX filesystem? | {
"source": [
"https://unix.stackexchange.com/questions/117336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61555/"
]
} |
117,438 | As a primarily Java programmer, I find the bash if - then construct quite confusing, especially regarding whitespace. Can anyone explain why the first one works, but not the second or third? #works
if [ -n $1 ]; then echo "parameterized"; fi
#output: ./links: Zeile 1: [-n: Kommando nicht gefunden.
if [-n $1 ]; then echo "parameterized"; fi
#output: ./links: Zeile 1: [: Fehlende `]'
if [ -n $1]; then echo "parameterized"; fi | The inconsistency is largely due to historical reasons. The use of brackets as the conditional command came after the use of brackets in wildcard patterns. So at the time [ -n foo ] came onto the scene, [foo] already meant “a file whose name is f or o ”. While few uses of the brackets operator in practice would have conflicted with realistic uses of the brackets in wildcards, the authors chose not to risk breaking existing scripts by changing the syntax. This design choice also made implementation easier: initially [ was implemented as an external command, which couldn't have been done if the space after [ had been optional. (Modern systems still have [ as an external command but almost all modern shells also have it built in.) For similar reasons, your “working” code is actually incorrect in most circumstances. $1 does not mean “the value of parameter 1”: it means “take the value of parameter 1, split it into separate words at whitespace (or whatever the value of IFS is), and interpret each word as a glob pattern”. The way to write “the value of parameter 1” requires double quotes: "$1" . See When is double-quoting necessary? for the nitty-gritty. You don't need to understand this; all you need to remember is: always put double quotes around variable substitutions "$foo" and command substitutions "$(foo)" . Thus: if [ -n "$1" ]; then … In bash, ksh and zsh, but not in plain sh, you can also use double brackets. Single brackets [ … ] are parsed like an ordinary command (and indeed [ is an ordinary command, albeit usually built in). Double brackets are a separate syntactic construct and you can omit the double quotes most of the time. if [[ -n $1 ]]; then … However, there is an exception: [[ "$foo" = "$bar" ]] to test if the two variables have the same value requires double quotes around $bar , otherwise $bar is interpreted as a wildcard pattern. Again, rather than remember the details, you might as well use double quotes all the time. | {
"source": [
"https://unix.stackexchange.com/questions/117438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22513/"
]
} |
117,467 | My variables are LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
ORACLE_HOME=/usr/lib/oracle/11.2/client64 How do I save these variables permanently? | You can add them to the file .profile or your login shell profile file (located in your home directory). To change an environment variable "permanently" you'll need to consider at least these situations: Login/Non-login shell Interactive/Non-interactive shell bash Bash as a login shell will load /etc/profile , ~/.bash_profile , ~/.bash_login , ~/.profile in that order Bash as a non-login interactive shell will load ~/.bashrc Bash as a non-login non-interactive shell will load the configuration specified in the environment variable $BASH_ENV $EDITOR ~/.profile
#add lines at the bottom of the file:
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
export ORACLE_HOME=/usr/lib/oracle/11.2/client64 zsh $EDITOR ~/.zprofile
#add lines at the bottom of the file:
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
export ORACLE_HOME=/usr/lib/oracle/11.2/client64 fish set -Ux LD_LIBRARY_PATH /usr/lib/oracle/11.2/client64/lib
set -Ux ORACLE_HOME /usr/lib/oracle/11.2/client64 ksh $EDITOR ~/.profile
#add lines at the bottom of the file:
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
export ORACLE_HOME=/usr/lib/oracle/11.2/client64 bourne $EDITOR ~/.profile
#add lines at the bottom of the file:
LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
ORACLE_HOME=/usr/lib/oracle/11.2/client64
export LD_LIBRARY_PATH ORACLE_HOME csh or tcsh $EDITOR ~/.login
#add lines at the bottom of the file:
setenv LD_LIBRARY_PATH /usr/lib/oracle/11.2/client64/lib
setenv ORACLE_HOME /usr/lib/oracle/11.2/client64 If you want to make it permanent for all users, you can edit the corresponding files under /etc/ , i.e. /etc/profile for Bourne-like shells, /etc/csh.login for (t)csh, and /etc/zsh/zprofile and /etc/zsh/zshrc for zsh. Another option is to use /etc/environment , which on Linux systems is read by the PAM module pam_env and supports only simple assignments, not shell-style expansions. (See Debian's guide on this.) These files are likely to already contain some assignments, so follow the syntax you see already present in your file. Make sure to restart the shell and relogin the user to apply the changes. If you need to add a system-wide environment variable, there's now an /etc/profile.d folder that contains sh scripts used to initialize variables. You could place an sh script with all your exported variables there. Be careful though: this should not be used as the standard way of adding variables to the environment on Debian.
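As an illustration, such a drop-in file for the variables from the question could look like this (the file name is arbitrary as long as it ends in .sh):
# /etc/profile.d/oracle.sh
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
export ORACLE_HOME=/usr/lib/oracle/11.2/client64
Bourne-compatible login shells will source it automatically on the next login. | {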
"source": [
"https://unix.stackexchange.com/questions/117467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53846/"
]
} |
117,501 | In a bash script, I would like to capture the standard output of a long command line by line, so that the lines can be analysed and reported while the initial command is still running. This is the complicated way I can imagine of doing it: # Start long command in a separated process and redirect stdout to temp file
longcommand > /tmp/tmp$$.out &
#loop until process completes
ps cax | grep longcommand > /dev/null
while [ $? -eq 0 ]
do
#capture the last lines in temp file and determine if there is new content to analyse
tail /tmp/tmp$$.out
# ...
sleep 1 s # sleep in order not to clog cpu
ps cax | grep longcommand > /dev/null
done I would like to know if there is a simpler way of doing so. EDIT: In order to clarify my question, I will add this. The longcommand displays its status line by line once per second. I would like to catch the output before the longcommand completes. This way, I can potentially kill the longcommand if it does not provide the results I expect. I have tried: longcommand |
while IFS= read -r line
do
whatever "$line"
done But whatever (e.g. echo ) only executes after longcommand completes. | Just pipe the command into a while loop. There are a number of nuances to this, but basically (in bash or any POSIX shell): longcommand |
while IFS= read -r line
do
whatever "$line"
done The other main gotcha with this (other than the IFS stuff below) is when you try to use variables from inside the loop once it has finished. This is because the loop is actually executed in a sub-shell (just another shell process) which you can't access variables from (also it finishes when the loop does, at which point the variables are completely gone). To get around this, you can do: longcommand | {
while IFS= read -r line
do
whatever "$line"
lastline="$line"
done
# This won't work without the braces.
echo "The last line was: $lastline"
} Hauke's example of setting lastpipe in bash is another solution. Update To make sure you are processing the output of the command 'as it happens', you can use stdbuf to set the process' stdout to be line buffered. stdbuf -oL longcommand |
while IFS= read -r line
do
whatever "$line"
done This will configure the process to write one line at a time into the pipe instead of internally buffering its output into blocks. Beware that the program can change this setting itself internally. A similar effect can be achieved with unbuffer (part of expect ) or script . stdbuf is available on GNU and FreeBSD systems; it only affects the stdio buffering and only works for non-setuid, non-setgid applications that are dynamically linked (as it uses a LD_PRELOAD trick).
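Combining this with the "kill it if the output looks wrong" idea from the question, a sketch could look like the following ( longcommand and the ERROR pattern are placeholders for whatever you are actually running and watching for):
stdbuf -oL longcommand |
while IFS= read -r line
do
    printf '%s\n' "$line"
    case $line in
        *ERROR*) pkill -f longcommand; break ;;
    esac
done
The break leaves the loop as soon as the unwanted line shows up, and pkill (or a kill on a PID you recorded yourself) stops the producer. | {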
"source": [
"https://unix.stackexchange.com/questions/117501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61353/"
]
} |
117,521 | What are the differences in dependencies between select and depends on in the kernel's Kconfig files? config FB_CIRRUS
tristate "Cirrus Logic support"
depends on FB && (ZORRO || PCI)
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
---help---
This enables support for Cirrus Logic GD542x/543x based boards on
Amiga: SD64, Piccolo, Picasso II/II+, Picasso IV, or EGS Spectrum. In the example above, how is FB_CIRRUS diffrently related to FB && (ZORRO || PCI) than it is to FB_CFB_FILLRECT , FB_CFB_COPYAREA and FB_CFB_IMAGEBLIT ? Update I've noticed that depend on doesn't really do much in terms of compilation order. For example. A successful build of AppB depends on a statically linked LibB to be built first. Setting depends on LibB in Kconfig for AppB will not force the LibB to be built first. Setting select LibB will. | depends on A indicates the symbol(s) A must already be positively selected ( =y ) in order for this option to be configured. For example, depends on FB && (ZORRO || PCI) means FB must have been selected, and (&&) either ZORRO or (||) PCI . For things like make menuconfig , this determines whether or not an option will be presented. select positively sets a symbol. For example, select FB_CFB_FILLRECT will mean FB_CFB_FILLRECT=y . This fulfills a potential dependency of some other config option(s). Note that the kernel docs discourage the use of this for "visible" symbols (which can be selected/deselected by the user) or for symbols that themselves have dependencies, since those will not be checked. Reference: https://www.kernel.org/doc/Documentation/kbuild/kconfig-language.txt | {
"source": [
"https://unix.stackexchange.com/questions/117521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
117,529 | I've got root access on a SLES 9 with a Remote Supervisor Adapter II Refresh 1 card :) The password for the user for the RSA card is incorrect. My question: How can I reset the RSA user's password without rebooting the machine? | depends on A indicates the symbol(s) A must already be positively selected ( =y ) in order for this option to be configured. For example, depends on FB && (ZORRO || PCI) means FB must have been selected, and (&&) either ZORRO or (||) PCI . For things like make menuconfig , this determines whether or not an option will be presented. select positively sets a symbol. For example, select FB_CFB_FILLRECT will mean FB_CFB_FILLRECT=y . This fulfills a potential dependency of some other config option(s). Note that the kernel docs discourage the use of this for "visible" symbols (which can be selected/deselected by the user) or for symbols that themselves have dependencies, since those will not be checked. Reference: https://www.kernel.org/doc/Documentation/kbuild/kconfig-language.txt | {
"source": [
"https://unix.stackexchange.com/questions/117529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61246/"
]
} |
117,568 | How can I add a column of values to a file which has a certain number of rows?
I have an input file like this: Input file: SPATA17 1 217947738
LYPLAL1 1 219383905
FAM47E 4 77192838
SHROOM3 4 77660162
SHROOM3 4 77660731
SHROOM3 4 77662248 Output file: SPATA17 1 217947738 file1
LYPLAL1 1 219383905 file1
FAM47E 4 77192838 file1
SHROOM3 4 77660162 file1
SHROOM3 4 77660731 file1
SHROOM3 4 77662248 file1 In this case, I want to add a column of values, up to the number of rows in the file. The value remains consistent, such as "file1". The reason is I have 100 of those files. I don't want to open each file and paste a column.
Also, is there any way to automate this by going into a directory and adding a column of values?
The value comes from the filename, which has to be added in each row of the file in the last/first column. | You can use a one-liner loop like this: for f in file1 file2 file3; do sed -i "s/$/\t$f/" $f; done For each file in the list, this will use sed to append a tab and the filename to the end of each line. Explanation: the -i flag tells sed to perform the replacement in-place, overwriting the file; the substitution s/PATTERN/REPLACEMENT/ uses $ (the end of the line) as PATTERN and \t$f (a TAB plus the filename from the loop variable) as REPLACEMENT; the s/// command is within double-quotes so that the shell can expand variables.
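Since there are a hundred of these files, you probably don't want to list them by hand; a sketch that walks a whole directory instead (the path is a placeholder and GNU sed's -i is assumed) would be:
for f in /path/to/dir/*; do
    sed -i "s/$/\t$(basename "$f")/" "$f"
done
Each file then gets its own name appended as the last column of every row. | {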
"source": [
"https://unix.stackexchange.com/questions/117568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60411/"
]
} |
117,680 | Is there a method of slowing down the copy process on Linux? I have a big file, say 10GB, and I'd like to copy it to another directory, but I don't want to copy it with full speed. Let's say I'd like to copy it with the speed of 1mb/s, not faster. I'd like to use a standard Linux cp command. Is this possible? (If yes, how?) Edit : so, I'll add more context to what I'm trying to achieve. I have a problem on the ArchLinux system when copying large files over USB (to a pendrive, usb disk, etc). After filling up the usb buffer cache, my system stops responding (even the mouse stops; it moves only sporadically). The copy operation is still ongoing, but it takes 100% resources of the box. When the copy operation finishes, everything goes back to normal -- everything is perfectly responsive again. Maybe it's a hardware error, I don't know, but I do know I have two machines with this problem (both are on ArchLinux, one is a desktop box, second is a laptop). Easiest and fastest "solution" to this (I agree it's not the 'real' solution, just an ugly 'hack') would be to prevent this buffer from filling up by copying the file with an average write speed of the USB drive, for me that would be enough. | You can throttle a pipe with pv -qL (or cstream -t provides similar functionality) tar -cf - . | pv -q -L 8192 | tar -C /your/usb -xvf - -q removes stderr progress reporting. The -L limit is in bytes. More about the --rate-limit/-L flag from the man pv : -L RATE, --rate-limit RATE
Limit the transfer to a maximum of RATE bytes per second.
A suffix of "k", "m", "g", or "t" can be added to denote
kilobytes (*1024), megabytes, and so on. This answer originally pointed to throttle but that project is no longer available so has slipped out of some package systems.
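For the single-file case in the question you do not need tar at all; something like this should hold the copy to roughly 1 MB/s (both paths are placeholders):
pv -L 1m /path/to/bigfile > /mnt/usb/bigfile
As a bonus, pv prints a progress bar on stderr, which is convenient for a 10GB copy; suppress it with -q as above if you do not want it. | {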
"source": [
"https://unix.stackexchange.com/questions/117680",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61702/"
]
} |
117,981 | I am trying to figure out how a tty works 1 (the workflow and responsibilities of each element). I have read several interesting articles about it, but there are still some blurry areas. This is what I understand so far: The emulated terminal makes different system calls to /dev/ptmx , the master part of the pseudo terminal. The master part of the pseudo terminal allocates a file in /dev/pts/[0-N] , corresponding to the obsolete serial port, and "attaches" a slave pseudo terminal to it. The slave pseudo terminal keeps information such as session ID, foreground job, screen size. Here are my questions: Has ptmx any purpose besides allocating the slave part? Does it provide some kind of "intelligence" , or does the emulated terminal
(xterm for instance) have all the intelligence of behaving like a
terminal? Why does xterm have to interact with the master part, as it only forwards the stdout and stdin of the slave part? Why can't it directly write and read from the pts file ? Is a session ID always attached to one pts file and vice versa?
Could I execute ps and find two session IDs for the same
/dev/pts/X ? What other information does the pts store? Does xterm update all
fields by itself, or does the ptm add some "intelligence" to it? 1. I base my understanding on the TTY demystified by Linus Åkesson , and the Linux Kernel by Andries Brouwer posts, as on several other questions on these sites | Terminal emulators The master side replaces the line (the pair of TX/RX wires) that goes to the terminal. The terminal displays the characters that it receives on one of the wires (some of those are control characters and make it do things like move the cursor, change colour...) and sends on another wire the characters corresponding to the keys you type. Terminal emulators like xterm are not different except that instead of sending and receiving characters on wires, they read and write characters on their file descriptor to the master side. Once they've spawned the slave terminal, and started your shell on that, they no longer touch that. In addition to emulating the pair of wire, xterm can also change some of the line discipline properties via that file descriptor to the master side. For instance, they can update the size attributes so a SIGWINCH is sent to the applications that interact with the slave pty to notify them of a changed size. Other than that, there is little intelligence in the terminal/terminal emulator. What you write to a terminal device (like the pty slave) is what you mean to be displayed there, what you read from it is what you have typed there, so it does not make sense for the terminal emulator to read or write to that. They are the ones at the other end. The tty line discipline A lot of the intelligence is in the tty line discipline . The line discipline is a software module (residing in the driver, in the kernel) pushed on top of a serial/pty device that sits between that device and the line/wire (the master side for a pty). A serial line can have a terminal at the other end, but also a mouse or another computer for networking. You can attach a SLIP line discipline for instance to get a network interface on top of a serial device (or pty device), or you can have a tty line discipline. The tty line discipline is the default line discipline at least on Linux for serial and pty devices. On Linux, you can change the line discipline with ldattach . You can see the effect of disabling the tty line discipline by issuing stty raw -echo (note that the bash prompt or other interactive applications like vi set the terminal in the exact mode they need, so you want to use a dumb application like cat to experiment with that).
Then, everything that is written to the slave terminal device makes it immediately to the master side for xterm to read, and every character written by xterm to the master side is immediately available for reading from the slave device. The line discipline is where the terminal device internal line editor is implemented. For instance with stty icanon echo (as is the default), when you type a , xterm writes a to the master, then the line discipline echoes it back (makes a a available for reading by xterm for display), but does not make anything available for reading on the slave side. Then if you type backspace, xterm sends a ^? or ^H character, the line discipline (as that ^? or ^H corresponds to the erase line discipline setting) sends back on the master a ^H , space and ^H for xterm to erase the a you've just typed on its screen and still doesn't send anything to the application reading from the slave side, it just updates its internal line editor buffer to remove that a you've typed before. Then when you press Enter, xterm sends ^M (CR), which the line discipline converts on input to a ^J (LF), and sends what you've entered so far for reading on the slave side (an application reading on /dev/pts/x will receive what you've typed including the LF, but not the a since you've deleted it), while on the master side, it sends a CR and LF to move the cursor to the next line and the start of the screen. The line discipline is also responsible for sending the SIGINT signal to the foreground process group of the terminal when it receives a ^C character on the master side etc. Many interactive terminal applications disable most of the features of that line discipline to implement it themselves. But in any case, beware that the terminal ( xterm ) has little involvement in that (except displaying what it's told to display). And there can be only one session per process and per terminal device. A session can have a controlling terminal attached to it but does not have to (all sessions start without a terminal until they open one). xterm , in the process that it forks to execute your shell will typically create a new session (and therefore detach from the terminal where you launched xterm from if any), open the new /dev/pts/x it has spawned, by that attaching that terminal device to the new session. It will then execute your shell in that process, so your shell will become the session leader. Your shell or any interactive shell in that session will typically juggle with process groups and tcsetpgrp() , to set the foreground and background jobs for that terminal. As to what information is stored by a terminal device with a tty discipline (serial or pty) , that's typically what the stty command displays and modifies. All the discipline configuration: terminal screen size, local, input output flags, settings for special characters (like ^C, ^Z...), input and output speed (not relevant for ptys). That corresponds to the tcgetattr() / tcsetattr() functions which on Linux map to the TCGETS / TCSETS ioctls, and TIOCGWINSZ / TIOCSWINSZ for the screen size. You may argue that the current foreground process group is another information stored in the terminal device ( tcsetpgrp() / tcgetpgrp() , TIOC{G,S}PGRP ioctls), or the current input or output buffer. Note that that screen size information stored in the terminal device may not reflect reality. 
The terminal emulator will typically set it (via the same ioctl on the master side) when its window is resized, but it can get out of sync if an application calls the ioctl on the slave side or when the resize is not transmitted (in case of an ssh connection which implies another pty spawned by sshd if ssh ignores the SIGWINCH for instance). Some terminals can also be queried for their size via escape sequences, so an application can query it that way, and update the line discipline with that information. For more details, you can have a look at the termios and tty_ioctl man pages on Debian for instance. To play with other line disciplines: Emulate a mouse with a pseudo-terminal: socat pty,link=mouse fifo:fifo
sudo inputattach -msc mouse # sets the MOUSE line discipline and specifies protocol
xinput list # see the new mouse there
exec 3<> fifo
printf '\207\12\0' >&3 # moves the cursor 10 pixels to the right Above, the master side of the pty is terminated by socat onto a named pipe ( fifo ). We connect that fifo to a process (the shell) that writes 0x87 0x0a 0x00 which in the mouse systems protocol means no button pressed, delta(x,y) = (10,0) . Here, we (the shell) are not emulating a terminal, but a mouse, the 3 bytes we send are not to be read (potentially transformed) by an application from the terminal device ( mouse above which is a symlink made by socat to some /dev/pts/x device), but are to be interpreted as a mouse input event. Create a SLIP interface: # on hostA
socat tcp-listen:12345,reuseaddr pty,link=interface
# after connection from hostB:
sudo ldattach SLIP interface
ifconfig -a # see the new interface there
sudo ifconfig sl0 192.168.123.1/24
# on hostB
socat -v -x pty,link=interface tcp:hostA:12345
sudo ldattach SLIP interface
sudo ifconfig sl0 192.168.123.2/24
ping 192.168.123.1 # see the packets on socat output Above, the serial wire is emulated by socat as a TCP socket in-between hostA and hostB. The SLIP line discipline interprets those bytes exchanged over that virtual line as SLIP encapsulated IP packets for delivery on the sl0 interface. | {
"source": [
"https://unix.stackexchange.com/questions/117981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61862/"
]
} |
117,988 | I need to download a file using wget, however I don't know exactly what the file name will be. https://foo/bar.1234.tar.gz According to the man page , wget lets you turn off and on globbing when dealing with a ftp site, however I have a http url. How can I use a wildcard while using a wget? I'm using gnu wget. Things I've tried. /usr/local/bin/wget -r "https://foo/bar.*.tar.gz" -P /tmp Update Using the -A causes all files ending in .tar.gz on the server to be downloaded. /usr/local/bin/wget -r "https://foo/" -P /tmp -A "bar.*.tar.gz" Update From the answers, this is the syntax which eventually worked. /usr/local/bin/wget -r -l1 -np "https://foo" -P /tmp -A "bar*.tar.gz" | I think these switches will do what you want with wget : -A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to
accept or reject. Note that if any of the wildcard characters, *, ?,
[ or ], appear in an element of acclist or rejlist, it will be
treated as a pattern, rather than a suffix.
--accept-regex urlregex
--reject-regex urlregex
Specify a regular expression to accept or reject the complete URL. Example $ wget -r --no-parent -A 'bar.*.tar.gz' http://url/dir/ | {
"source": [
"https://unix.stackexchange.com/questions/117988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
118,116 | Under certain conditions, the Linux kernel may become tainted . For example, loading a proprietary video driver into the kernel taints the kernel. This condition may be visible in system logs, kernel error messages (oops and panics), and through tools such as lsmod , and remains until the system is rebooted. What does this mean? Does it affect my ability to use the system, and how might it affect my support options? | A tainted kernel is one that is in an unsupported state because it cannot be guaranteed to function correctly . Most kernel developers will ignore bug reports involving tainted kernels, and community members may ask that you correct the tainting condition before they can proceed with diagnosing problems related to the kernel. In addition, some debugging functionality and API calls may be disabled when the kernel is tainted. The taint state is indicated by a series of flags which represent the various reasons a kernel cannot be trusted to work properly. The most common reason for the kernel to become tainted is loading a proprietary graphics driver from NVIDIA or AMD, in which case it is generally safe to ignore the condition. However, some scenarios that cause the kernel to become tainted may be indicative of more serious problems such as failing hardware. It is a good idea to examine system logs and the specific taint flags set to determine the underlying cause of the issue. This feature is intended to identify conditions which may make it difficult to properly troubleshoot a kernel problem. For example, a proprietary driver can cause problems that cannot be debugged reliably because its source code is not available and its effects cannot be determined. Likewise, if a serious kernel or hardware error had previously occurred, the integrity of the kernel space may have been compromised, meaning that any subsequent debug messages generated by the kernel may not be reliable. Note that correcting the tainting condition alone does not remove the taint state because doing so does not change the fact that the kernel can no longer be relied on to work correctly or produce accurate debugging information. The system must be restarted to clear the taint flags. More information is available in the Linux kernel documentation , including what each taint flag means and how to troubleshoot a tainted kernel prior to reporting bugs . A partial list of conditions that can result in the kernel being tainted follows, each with their own flags. Note that some Linux vendors, such as SUSE, add additional taint flags to indicate conditions such as loading a module that is supported by a third party rather than directly by the vendor. Loading a proprietary (or non-GPL-compatible) kernel module. As noted above, this is the most common reason for the kernel to become tainted. The use of staging drivers, which are part of the kernel source code but are experimental and not fully tested. The use of out-of-tree modules that are not included with the Linux kernel source code. Forcibly loading or unloading modules. This can happen if one is trying to use a module that is not built for the current version of the kernel. (The Linux kernel module ABI is not stable across versions, or even differently-configured builds of the same version.) Running a kernel on certain hardware configurations that are specifically not supported, such as an SMP (multiprocessor) kernel on early AMD Athlon processors not supporting SMP operation. Overriding the ACPI DSDT in the kernel. 
This is sometimes needed to correct for firmware power-management bugs; see this Arch Linux wiki article for details. Certain critical error conditions, such as machine check exceptions and kernel oopses . Certain serious bugs in the BIOS, UEFI, or other system firmware which the kernel must work around. | {
"source": [
"https://unix.stackexchange.com/questions/118116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24131/"
]
} |
118,124 | I have a 1 core CPU installed on my PC. Sometimes, uptime shows load >1. How is this possible and what does this mean? EDIT: The values go up to 2.4 | Load is not equal to CPU usage. It is basically an indicator of how many processes are waiting to run (and, on Linux, also of processes in uninterruptible sleep, typically waiting on I/O). Some helpful links: https://superuser.com/questions/23498/what-does-load-average-mean-in-unix-linux http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages | {
"source": [
"https://unix.stackexchange.com/questions/118124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42740/"
]
} |
118,217 | Is it possible to execute chmod and ignore any error from the command? Example (note: file.txt does not exist, to show the case). When I type chmod 777 file.txt I get an error on the output: chmod: cannot access file.txt : no such file or directory So I add the -f flag to the command as follows ( file.txt still does not exist, to show the case ): chmod -f 777 file.txt
echo $?
1 But as the example shows, chmod still returns 1. Please advise how to force the chmod command to give exit code 0 in spite of the error. | Please advise how to force the chmod command to give exit code 0 in spite of the error
chmod -f 777 file.txt || :
This would execute : , i.e. the null command, if chmod fails. Since the null command does nothing but always succeeds, you would see an exit code of 0. | {
"source": [
"https://unix.stackexchange.com/questions/118217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
118,244 | I've got 10k+ files totaling over 20GB that I need to concatenate into one file. Is there a faster way than cat input_file* >> out ? The preferred way would be a bash command, Python is acceptable too if not considerably slower. | Nope, cat is surely the best way to do this. Why use python when there is a program already written in C for this purpose? However, you might want to consider using xargs in case the command line length exceeds ARG_MAX and you need more than one cat . Using GNU tools, this is equivalent to what you already have: find . -maxdepth 1 -type f -name 'input_file*' -print0 |
sort -z |
xargs -0 cat -- >>out | {
"source": [
"https://unix.stackexchange.com/questions/118244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61602/"
]
} |
118,247 | I'm trying to connect my Raspberry Pi to some display using the I2C bus.
To get started I wanted to manually write stuff, bytes in particular, to a file. How do you write specific bytes to a file? I already read that one and I figured my problem should be solved by something like this: echo -n -e \x66\x6f\x6f > byteFileForNow However, when I open up that file with nano, instead of foo I see: x66x6fx6f So the backslashes were stripped, but the escapes were not turned into bytes. I also tried the same without the -e , so I would've expected to see \x66\x6f\x6f , but got the same as before. So the backslashes are being eaten, with and without -e , regardless of whether they are supposed to be. Any idea how to fix this? According to the man page that should've done what I'm looking for. | You have to put your escape codes in quotes: echo -n -e '\x66\x6f\x6f' > byteFileForNow because otherwise the shell replaces \x with x before the string ever reaches echo -e . P.S. Double escaping will also work: echo -n -e \\x66\\x6f\\x6f > byteFileForNow | {
"source": [
"https://unix.stackexchange.com/questions/118247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61486/"
]
} |
118,333 | I have a file in the format as follows: $ cat file.txt
27.33.65.2
27.33.65.2
58.161.137.7
121.50.198.5
184.173.187.1
184.173.187.1
184.173.187.1 What's the best way to parse the file file.txt into a format like: 27.33.65.2: 2
58.161.137.7: 1
121.50.198.5: 1
184.173.187.1: 3 In other words, I want to loop through the file and count the number of times each IP address appears. I've already run it through sort so all the IP addresses are in order and directly after each other. | You're looking for uniq -c If the output of that is not to your liking, it can be parsed and reformatted readily. For example: $ uniq -c logfile.txt | awk '{print $2": "$1}'
27.33.65.2: 2
58.161.137.7: 1
121.50.198.5: 1
184.173.187.1: 3 | {
"source": [
"https://unix.stackexchange.com/questions/118333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62064/"
]
} |
118,433 | In my Bash environment I use variables containing spaces, and I use these variables within command substitution. What is the correct way to quote my variables? And how should I do it if these are nested? DIRNAME=$(dirname "$FILE") or do I quote outside the substitution? DIRNAME="$(dirname $FILE)" or both? DIRNAME="$(dirname "$FILE")" or do I use back-ticks? DIRNAME=`dirname "$FILE"` What is the right way to do this? And how can I easily check if the quotes are set right? | In order from worst to best: DIRNAME="$(dirname $FILE)" will not do what you want if $FILE contains whitespace (or whatever characters $IFS currently contains) or globbing characters \[?* . DIRNAME=`dirname "$FILE"` is technically correct, but backticks are not recommended for command expansion because of the extra complexity when nesting them and the extra backslash processing that happens within them. DIRNAME=$(dirname "$FILE") is correct, but only because this is an assignment to a scalar (not array) variable . If you use the command substitution in any other context, such as export DIRNAME=$(dirname "$FILE") or du $(dirname -- "$FILE") , the lack of quotes will cause trouble if the result of the expansion contain whitespace or globbing characters. DIRNAME="$(dirname "$FILE")" (except for the missing -- , see below) is the recommended way. You can replace DIRNAME= with a command and a space without changing anything else, and dirname receives the correct string. To improve even further: DIRNAME="$(dirname -- "$FILE")" works if $FILE starts with a dash. DIRNAME="$(dirname -- "$FILE" && printf x)" && DIRNAME="${DIRNAME%?x}" || exit works even if $FILE 's dirname ends with a newline, since $() chops off newlines at the end of output, both the one added by dirname and the ones that may be part of the actual data. You can nest command expansions as much as you like. With $() you always create a new quoting context, so you can do things like this: foo "$(bar "$(baz "$(ban "bla")")")" You do not want to try that with backticks. | {
"source": [
"https://unix.stackexchange.com/questions/118433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58223/"
]
} |
118,462 | Is there a way for a bash script to know if it is running in the foreground or in the background, so that it can behave slightly differently in each case? | Quoting man ps : PROCESS STATE CODES Here are the different values that the s, stat and state output
specifiers (header "STAT" or "S") will display to describe the state of
a process.
...
+ is in the foreground process group
So you could perform a simple check:
case $(ps -o stat= -p $$) in
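# the stat field contains a + when the process is in the foreground process group of its terminal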
*+*) echo "Running in foreground" ;;
*) echo "Running in background" ;;
esac | {
"source": [
"https://unix.stackexchange.com/questions/118462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6973/"
]
} |
118,471 | CentOS 5.x My understanding is that the contents of /tmp/ in CentOS 5.x can/are automatically purged by the OS via one of two methods: A daily cron task running tmpwatch If /tmp/ is mounted on a tmpfs (RAM), a system reboot/power cycle will clear everything. Is that correct? If so, how can I confirm if /tmp is mounted on tmpfs? I checked /etc/fstab and saw this: LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda2 swap swap defaults 0 0 | You can resolve which filesystem a directory or file is on with the command df , and if you include the -T option, the output will include the filesystem type. $ df -T /tmp
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda3 ext4 38715020 5073600 31674780 14% / In the above example, the /tmp directory is on an ext4 filesystem, not tmpfs. | {
"source": [
"https://unix.stackexchange.com/questions/118471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
118,472 | Here are my spec: first search for "#firstpattern(" copy the content inside the ( ) to the 3rd field inside the ( ) of all lines where the "secondpattern" found. remove the entire line with the "#firstpattern" at I'd like to process all files "files.txt" in the same folder. no need to save backups. A sed command but others should be fine. datain.txt : ...some lines...
#firstpattern(stringtobecopied)
...some lines...
secondpattern something here (something, some text here, someting else).
secondpattern something here (something, some text here, someting else). dataout.txt : ...some lines...
secondpattern something here (something, some text here, stringtobecopied, someting else).
secondpattern something here (something, some text here, stringtobecopied, someting else). | You can resolve which filesystem a directory or file is on with the command df , and if you include the -T option, the output will include the filesystem type. $ df -T /tmp
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda3 ext4 38715020 5073600 31674780 14% / In the above example, the /tmp directory is on an ext4 filesystem, not tmpfs. | {
"source": [
"https://unix.stackexchange.com/questions/118472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61434/"
]
} |
118,550 | I wanted to learn about autotools, so I just started watching some tutorials on YouTube. I made a folder named hello and then made a configure.ac file: AC_INIT([hello],[.01])
AC_OUTPUT I saved it and then ran autoreconf -i . Obviously, this didn't work the first time because autoconf was not installed. Then I installed autoconf with the command sudo apt-get install autoconf2.13 . Now after this I ran autoreconf -i again, but now I am getting the error shown below: Can't exec "libtoolize": No such file or directory at /usr/bin/autoreconf2.50 line 196.
Use of uninitialized value in pattern match (m//) at /usr/bin/autoreconf2.50 line 196. | You should do sudo apt-get install build-essential libtool | {
"source": [
"https://unix.stackexchange.com/questions/118550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62096/"
]
} |
118,568 | I have some doubts regarding *nix. I don't know what type of executable file ls is, whether it is a .sh or .ksh script or some other kind of system executable, and if so, what that is. When I tried to see what the source code of the ls command looks like, it showed something unreadable. What method does *nix use to create these kinds of unreadable files, and can I make my own files similar to them (unreadable, like ls )? | You can determine the nature of an executable in Unix using the file command and the type command. type You use type to determine an executable's location on disk like so:
$ type -a ls
ls is /usr/bin/ls
ls is /bin/ls So I now know that ls is located here on my system in 2 locations: /usr/bin/ls & /bin/ls . Looking at those executables I can see they're identical: $ ls -l /usr/bin/ls /bin/ls
-rwxr-xr-x. 1 root root 120232 Jan 20 05:11 /bin/ls
-rwxr-xr-x. 1 root root 120232 Jan 20 05:11 /usr/bin/ls NOTE: You can confirm they're identical beyond their sizes by using cmp or diff . with diff $ diff -s /usr/bin/ls /bin/ls
Files /usr/bin/ls and /bin/ls are identical with cmp $ cmp /usr/bin/ls /bin/ls
$ Using file If I query them using the file command: $ file /usr/bin/ls /bin/ls
/usr/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x303f40e1c9349c4ec83e1f99c511640d48e3670f, stripped
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x303f40e1c9349c4ec83e1f99c511640d48e3670f, stripped So these would be actual physical programs that have been compiled from C/C++. If they were shell scripts they'd typically present like this to file : $ file somescript.bash
somescript.bash: POSIX shell script, ASCII text executable What's ELF? ELF is a file format , it is the output of a compiler such as gcc , which is used to compile C/C++ programs such as ls . In computing, the Executable and Linkable Format (ELF, formerly called Extensible Linking Format) is a common standard file format for executables, object code, shared libraries, and core dumps. It typically will have one of the following extensions in the filename: none, .o, .so, .elf, .prx, .puff, .bin | {
"source": [
"https://unix.stackexchange.com/questions/118568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62181/"
]
} |
118,577 | I'm using mergecap to create a merged pcap file from 15 files. For the merged file, I have changed the name to that of the first of the 15 files. But I would also like to change the merged file's attributes like "Date Created" and "Last Modified" to those of the first one. Is there any way to do this?
FILES_dcn=($(find $dir_dcn -maxdepth 1 -type f -name "*.pcap" -print0 | xargs -0 ls -lt | tail -15 | awk '{print $9}'))
TAG1_dcn=$(basename "${FILES_dcn[14]}" | sed 's/.pcap//')
mergecap -w "${dir_dcn}"/merge_dcn.pcap "${FILES_dcn[@]}"
mv "${dir_dcn}"/merge_dcn.pcap "${dir_dcn}"/"${TAG1_dcn}".pcap I try to access the merged files over a samba server (Ubuntu). So that an extractor function can access auto extract the files to D folder. But as the created date will be changed for the merged file the extraction fails. Is there anyway to fix this? | You can use the touch command along with the -r switch to apply another file's attributes to a file. NOTE: There is no such thing as creation date in Unix, there are only access, modify, and change. See this U&L Q&A titled: get age of given file for further details. $ touch -r goldenfile newfile Example For example purposes here's a goldenfile that was created with some arbitrary timestamp. $ touch -d 20120101 goldenfile
$ ls -l goldenfile
-rw-rw-r--. 1 saml saml 0 Jan 1 2012 goldenfile Now I make some new file: $ touch newfile
$ ls -l newfile
-rw-rw-r--. 1 saml saml 0 Mar 7 09:06 newfile Now apply goldenfile 's attributes to newfile . $ touch -r goldenfile newfile
$ ls -l goldenfile newfile
-rw-rw-r--. 1 saml saml 0 Jan 1 2012 newfile
-rw-rw-r--. 1 saml saml 0 Jan 1 2012 goldenfile Now newfile has the same attributes. Modify via Samba I just confirmed that I'm able to do this using my Fedora 19 laptop which includes version 1.16.3-2 connected to a Thecus N12000 NAS (uses a modified version of CentOS 5.x). I was able to touch a file as I mentioned above and it worked as I described. Your issue is likely a problem with the either the mounting options being used, which may be omitting the tracking of certain time attributes, or perhaps it's related to one of these bugs: Bug 461505 - can't set timestamp on samba shares Bug 693491 - Unable to set attributes/timestamps on CIFS/Samba share | {
"source": [
"https://unix.stackexchange.com/questions/118577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58138/"
]
} |
118,712 | In Ubuntu, when I am up in "/" folder I type: sudo find . -name .erlang.cookie and the result is: ./var/lib/rabbitmq/.erlang.cookie Then, when I am on the folder /var/lib/rabbitmq and I type ls , I see one file named mnesia. When I type the find command again, I see ./.erlang.cookie -- what does that mean? | In Unix, a filename beginning with a dot, like .erlang.cookie , is considered a hidden file and is not shown by bare ls . Type ls -a to also show hidden files. From man ls : -a, --all
do not ignore entries starting with . However, you can show a hidden file with ls if you specify the name: $ ls .erlang.cookie
.erlang.cookie | {
"source": [
"https://unix.stackexchange.com/questions/118712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62248/"
]
} |
118,806 | Introduction My question arises from the necessity of understanding why I have now (after multiple trials) Terminal and tmux supporting 256 colours and tput colors telling me there are only 8 of them. Background Let's start from the beginning. I'm using a Ubuntu box, Guake , tmux , Vim and I love the theme Solarized . They were looking pretty awful, so I decided to enable the 256 colours support and play a bit around. Let's see what happens for my Terminal . tput colors says there are 8 colours. I personally set them to purple, on the left, and of course on the right we have 2 different shades of blue. $TERM says xterm . (To have the coloured ls I eval this in my .bashrc .) Vim also looks fine, despite the fact that I call it with the 256 flag in an environment where 256 colours are not supported. set t_Co=256
let g:solarized_termcolors=256
colorscheme solarized The only guy who complains about the reduced colour space is tmux . Calling tmux provides the "wrong" expected results. But calling tmux with the -2 flag makes everything work fine, magically . Now the only thing that I understand is that -2 is equivalent of export TERM=screen-256color ( source ). Guake behaves analogously to Terminal and both of them answer xterm to the question echo $TERM . Question Basically, does anyone understand why everything works even if it shouldn't? Am I sadistic that I complaining why things work? Maybe. Is there a better reason? Sure: I'd like to fix the appearance of other Ubuntu boxes in my office, and I'd like to understand why things work or don't work. Additional experiments Running this script (slightly modified) on my xterm provides the following result: 256 colours, but only 16 are displayed correctly. Then, changing terminal's profile, also these 16 colours change. More tests are following. From left to right, top to bottom, we have Solarized colour theme, dircolor ansi-dark and 256dark , then default ( Tango ) colour scheme, dircolor ansi-dark and 256dark . Observation : in theory the dircolor ansi-dark on Solarized colour scheme should have match closely the dircolor 256dark . This is not clearly happening for the specific listed files. Instead, this quite happens when in the working directory there are folders , text files and symbolic links . Conclusion : no much attention as been paid while encoding the 256dark colours. Preliminary conclusions xterm supports 256 colours, despite what tput colors says. Programs can refer to the ansi palette (customisable by the user) or define their colours, picking from a total of 256 colours. | There is some information on 256-color support in the tmux FAQ . Detecting the number of colors that the terminal supports is unfortunately not straightforward, for historical reasons. See Checking how many colors my terminal emulator supports for an explanation. This means that tmux cannot reliably determine whether the terminal supports more than 8 colors; tmux cannot reliably communicate to the application that it supports more than 8 colors. When you're in tmux, the terminal you're interacting with is tmux. It doesn't support all of xterm's control sequences. In particular, it doesn't support the OSC 4 ; … control sequence to query or set color values. You need to use that while directly running in xterm, outside tmux. If you run tmux -2 , then tmux starts with 256-color support, even if it doesn't think that your terminal supports 256 colors (which is pretty common). By default, tmux advertises itself as screen without 256-color support. You can change the value of TERM in .tmux.conf to indicate 256-color support: set -g default-terminal "screen-256color" You can use TERM=xterm-256color or TERM=screen-256color on Ubuntu. These values will only cause trouble if you log in to a remote machine that doesn't have a termcap/terminfo entry for these names. You can copy the entries to your home directory on the remote machine; this works with most modern terminfo implementations. # From the Ubuntu machine to a machine that doesn't have *-256color terminfo entries
ssh somewhere.example.com mkdir -p .terminfo/s .terminfo/x
scp -p /lib/terminfo/s/screen-256color somewhere.example.com:.terminfo/s/
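# paths vary by distribution; the compiled terminfo entries may live under /usr/share/terminfo instead of /lib/terminfo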
scp -p /lib/terminfo/x/xterm-256color somewhere.example.com:.terminfo/x/ | {
"source": [
"https://unix.stackexchange.com/questions/118806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38879/"
]
} |