source_id | question | response | metadata
---|---|---|---|
61,142 | I keep receiving this error: Warning!! Unsupported GPT (GUID Partition Table) detected. Use GNU Parted I want to go back to the normal MBR. I found some advice here and did: parted /dev/sda
mklabel msdos
quit But when I get to the mklabel option it spits out a warning that I will lose all data on /dev/sda . Is there a way to get the normal MBR back without formatting the disk? | That link you posted looks like a very ugly hack type solution. However, according to the man page, gdisk , which is used to convert MBR -> GPT, also has an option in the "recovery & transformation" menu (press r to get that) to convert GPT -> MBR; the g key will: Convert GPT into MBR and exit. This option converts as many partitions
as possible into MBR form, destroys the GPT data structures,
saves the new MBR, and exits. Use this option if you've tried GPT and
find that MBR works better for you. Note that this function generates
up to four primary MBR partitions or three primary partitions and as
many logical partitions as can be generated. Each logical
partition requires at least one unallocated block immediately
before its first block. I'd try that first. | {
"source": [
"https://unix.stackexchange.com/questions/61142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11267/"
]
} |
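As an illustration of the gdisk route recommended above, here is a rough, untested sketch of the interactive session, based only on the man page excerpt quoted in that answer. Back up the disk before attempting an in-place GPT-to-MBR conversion, and double-check the device name (/dev/sda is just an example).
gdisk /dev/sda       # open the disk in gdisk
# at the "Command (? for help):" prompt, press:
#   r   -> enter the recovery & transformation menu
#   g   -> convert GPT to MBR, save the new MBR and exit
fdisk -l /dev/sda    # afterwards, verify the new MBR partition table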
61,197 | I'm trying to burn a DVD from Windows but it fails because the full path name length exceeds the limit of something like 255 characters. Our files are stored in Debian Linux (accessed by Windows using samba), so to avoid running some dodgy Windows app to find long path names I'd prefer to find them using a Linux command. What command could I run to output a list of the relative path and file names for a given folder, sorted by the length of each (in descending order)? The output should look something like this: 92 ./site/testapidocs/wjhk/jupload2/policies/class-use/DefaultUploadPolicy_WithoutAlertBox.ht
83 ./site/testapidocs/wjhk/jupload2/upload/class-use/PacketConstructionThreadTest.html
76 ./site/apidocs/wjhk/jupload2/upload/helper/class-use/ProgressBarManager.html
52 ./site/xref/wjhk/jupload2/gui/JUploadFileFilter.html
31 ./site/samples.java/applet.jnlp
17 ./site/index.html | With GNU find (on Linux or Cygwin), you can look for files whose relative path is more than 255 characters long: find -regextype posix-extended -regex '.{257,}' (257 = 255 + 2, accounting for the leading ./ that find prepends to each path). A pipeline that also prints the sorted lengths, in the format the question shows, is sketched after this entry. | {
"source": [
"https://unix.stackexchange.com/questions/61197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11898/"
]
} |
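To produce the exact listing the question above asks for (length, then path, sorted in descending order), the find output can be post-processed. A small sketch, assuming file names contain no newlines; like the sample output, the length counts the leading ./ :
find . -type f | awk '{ print length, $0 }' | sort -rn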
61,209 | Is it possible to create and format an exFAT partition from Linux? | Yes, there is a project implementing exfat and the related utilities at relan/exfat . To format a partition, use mkexfatfs / mkfs.exfat like with most filesystems, e.g.: mkfs.exfat /dev/sdX1 As for creating the partition in the first place, this is the same as for any other filesystem. Create a partition in your favourite partition manager. If you have an MBR partition table, set the partition type to NTFS (that is, code 7 ). Note that some distributions only package the fuse module, so you may have to build it yourself. A full create-and-format sequence is sketched after this entry. | {
"source": [
"https://unix.stackexchange.com/questions/61209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11544/"
]
} |
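Building on the exFAT answer above, an illustrative end-to-end sequence using parted non-interactively. The device name /dev/sdX is a placeholder; confirm it with lsblk first, since these commands are destructive:
parted /dev/sdX mklabel msdos                  # create an MBR partition table
parted /dev/sdX mkpart primary ntfs 1MiB 100%  # NTFS type yields MBR code 7
mkfs.exfat -n LABEL /dev/sdX1                  # format the new partition as exFAT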
61,210 | root@host [/etc]# iostat -xk
Linux 2.6.32-279.19.1.el6.x86_64 (host.superhostsite.com) 01/13/2013 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
12.53 0.19 3.72 0.18 0.00 83.38
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.24 252.39 13.95 4.24 381.61 1026.56 154.88 1.35 73.99 1.67 3.04
sdb 0.00 12.88 62.55 134.82 755.65 1146.14 19.27 0.82 4.17 0.10 1.92
sdc 0.01 129.31 28.19 298.49 451.10 1711.38 13.24 0.21 0.63 0.05 1.75
root@host [/etc]# mount
/dev/sda3 on / type ext4 (rw,noatime,noload,data=ordered,commit=10)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sdb1 on /home2 type ext2 (rw,noatime)
/dev/sdc1 on /home3 type ext4 (rw,noatime,noload,data=ordered,commit=10)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
root@host [/etc]# 1026 wKB/s and 73 seconds wait. What is being written there? It makes the whole server slow. SDA is the only drive that is not SSD. This could be the bottle neck. One approach I am thinking is to do solution in How to know recently updated files However, sda, being the root, is mounted at / there are files in sdb and sdc that are written but I don't care about them. iotop -o -a yields Total DISK READ: 1314.41 K/s | Total DISK WRITE: 3.58 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
13266 be/4 root 0.00 B 2.22 M 0.00 % 0.67 % [flush-8:16]
880 be/3 root 0.00 B 144.00 K 0.00 % 0.61 % [jbd2/sda3-8]
1778 be/4 root 0.00 B 0.00 B 0.00 % 0.08 % [kjournald]
940 be/4 root 0.00 B 1024.00 K 0.00 % 0.04 % [flush-8:32]
26823 be/4 nudenude 24.00 K 300.00 K 0.00 % 0.01 % php /home2/nudenude/public_html/hello/index.php
1775 be/0 root 0.00 B 56.00 K 0.00 % 0.00 % [loop0]
27273 be/4 nudenude 8.00 K 360.00 K 0.00 % 0.01 % [php]
128 be/4 root 0.00 B 4.00 K 0.00 % 0.00 % [sync_supers]
8414 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
24938 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
24997 be/4 nobody 0.00 B 16.00 K 0.00 % 0.00 % httpd -k start -DSSL
25068 be/4 nobody 0.00 B 12.00 K 0.00 % 0.00 % httpd -k start -DSSL
25070 be/4 nobody 0.00 B 12.00 K 0.00 % 0.00 % httpd -k start -DSSL
25074 be/4 nobody 0.00 B 16.00 K 0.00 % 0.00 % httpd -k start -DSSL
25075 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
25076 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
4215 be/4 root 0.00 B 4.00 K 0.00 % 0.00 % whostmgrd - serving 139.193. --llu=1357836602 --listen=3,4,5,6,7,8
4117 be/4 nobody 0.00 B 12.00 K 0.00 % 0.00 % httpd -k start -DSSL
21264 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
25398 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
17226 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
21331 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
21332 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
17290 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
17296 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
938 be/4 root 0.00 B 24.00 K 0.00 % 0.00 % [flush-8:0]
17358 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
21467 be/4 nobody 0.00 B 20.00 K 0.00 % 0.00 % httpd -k start -DSSL
17372 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
21470 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
21471 be/4 nobody 0.00 B 16.00 K 0.00 % 0.00 % httpd -k start -DSSL
17377 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
17381 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL
17465 be/4 nobody 0.00 B 12.00 K 0.00 % 0.00 % httpd -k start -DSSL
17467 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
17483 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
17492 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
17501 be/4 nobody 0.00 B 12.00 K 0.00 % 0.00 % httpd -k start -DSSL
17507 be/4 nobody 0.00 B 4.00 K 0.00 % 0.00 % httpd -k start -DSSL
17509 be/4 nobody 0.00 B 8.00 K 0.00 % 0.00 % httpd -k start -DSSL | | {
"source": [
"https://unix.stackexchange.com/questions/61210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29357/"
]
} |
61,283 | I am a non-admin user on a large computer system. I need some up to date packages that are not installed on the system. I want to use yum to install them. As a user without sudo, admin, or root access, can I use package management to install packages in my home directory? I can always use make from the sources, but being able to use yum will make life easier. | Rather than use yum , find the rpms you want and download them. You still can't install them directly without being root, but RPM packages are actually fancy .cpio files, and you can unpack their contents. The easiest way to do this is probably via the mc ("midnight commander") file browser (one of the greatest pieces of software ever), which allows you to browse the contents of an .rpm and copy files straight out of it. Sans that, you can use rpm2cpio to convert it to .cpio, then cpio to extract the files inside and put them in the right places. Both of these will already be installed on a redhat or fedora system. Here's an example installing "xsnow" (you probably want to do this in an empty directory): »rpm2cpio xsnow-1.42-17.fc17.x86_64.rpm > xsnow.cpio Notice I found an .rpm appropriate to my system, fc17 x86_64. This is important because these are precompiled binaries that are linked against other components. Now extract the .cpio: »cpio -idv < xsnow.cpio
./usr/bin/xsnow
./usr/share/doc/xsnow-1.42
./usr/share/doc/xsnow-1.42/README
./usr/share/man/man6/xsnow.6.gz
212 blocks
Press any key to continue... If I browse through this directory tree, everything I need is there, except some of the meta-information that might help me resolve dependencies. This can be found using rpm -q -p [package] --[query] : »rpm -q -p xsnow-1.42-17.fc17.x86_64.rpm --requires
warning: xsnow-1.42-17.fc17.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID d2382b83: NOKEY
libX11.so.6()(64bit)
libXext.so.6()(64bit)
libXpm.so.4()(64bit)
libc.so.6()(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.3.4)(64bit)
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rtld(GNU_HASH)
rpmlib(PayloadIsXz) <= 5.2-1 Pretty sure I already have all this stuff. So now all I have to do is put the xsnow executable in my $PATH, which already includes a bin in my home directory: »cp ./usr/bin/xsnow ~/bin Voila! Now I can type xsnow and watch nothing, since as it turns out xsnow does not play well with KDE :( but hopefully the gist of the process is clear. I did not have to do anything outside my home directory. If you need to install libraries you will need to create a directory in home for them too and add to ~/.bashrc : export LD_LIBRARY_PATH=/home/you/lib | {
"source": [
"https://unix.stackexchange.com/questions/61283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24637/"
]
} |
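A variant of the recipe above that keeps everything unpacked under a single prefix in $HOME. The paths (and the usr/lib64 library directory) are assumptions; adjust them to whatever the package actually contains:
mkdir -p ~/local && cd ~/local
rpm2cpio /path/to/package.rpm | cpio -idv
# wire the prefix into the environment via ~/.bashrc:
echo 'export PATH="$HOME/local/usr/bin:$PATH"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="$HOME/local/usr/lib64:$LD_LIBRARY_PATH"' >> ~/.bashrc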
61,287 | I am trying to install Sublime Text 2 on Linux Mint (Mate) from this tutorial and I'm stuck on: Next, to create a menu icon press Alt+F2 and type:
gksu gedit /usr/share/applications/sublime.desktop When I press Alt + F2 nothing happens; is there another way I can run this command? | | {
"source": [
"https://unix.stackexchange.com/questions/61287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30290/"
]
} |
61,386 | I find myself often doing the same thing with tmux : cd to a given directory. tmux Rename window to what I'm doing. Split the window vertically 50%. Start one process in the left window. Start another process in the right window. Profit. Is there a way for me to automate launching all of this so that I can run a single command and get the window I'm looking for? | Archwiki saves the day! Session Initialization on the tmux page gives an example. That said, instead of starting tmux as tmux , tmux new -s name will name the session when it starts instead of giving it a number. Session initialization You can have tmux open a session with preloaded windows by including those details in your ~/.tmux.conf: new -n WindowName Command
neww -n WindowName Command
neww -n WindowName Command To start a session with split windows (multiple panes), include the splitw command below the neww you would like to split; thus: new -s SessionName -n WindowName Command
neww -n foo/bar foo
splitw -v -p 50 -t 0 bar
selectw -t 1
selectp -t 0 would open 2 windows, the second of which would be named foo/bar and would be split vertically in half (50%) with foo running above bar. Focus would be in window 2 (foo/bar), top pane (foo). Note: Numbering for sessions, windows and panes starts at zero, unless
you have specified a base-index of 1 in your .conf To manage multiple sessions, source separate session files from your conf file: # initialize sessions
bind F source-file ~/.tmux/foo
bind B source-file ~/.tmux/bar | {
"source": [
"https://unix.stackexchange.com/questions/61386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
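The layout described in the tmux answer above can also be built imperatively in a small script instead of .tmux.conf. A sketch; the session, window and command names are illustrative, and pane addressing assumes the default base-index of 0:
#!/bin/sh
cd ~/project || exit 1
tmux new-session -d -s work -n foo/bar  # detached session with a named window
tmux send-keys -t work:0.0 'foo' C-m    # start the first process in the top pane
tmux split-window -v -p 50 -t work:0    # split the window in half
tmux send-keys -t work:0.1 'bar' C-m    # start the second process in the new pane
tmux select-pane -t work:0.0
tmux attach -t work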
61,390 | I need to run systemd-tmpfiles --create during the boot process with a systemd distro. So I need to create a systemd .service file doing this job. In this question you can read all the details about what I need and why: How does systemd-tmpfiles work? I have read some docs about it and I am writing the following test: [Unit]
Description=Execute tmpfiles to disable usb-wakeup # see details in the link above
Requires=multi-user.target # see details in the link above
After=multi-user.target # see details in the link above
[Service]
Type=oneshot
ExecStart=/usr/bin/systemd-tmpfiles --create
[Install]
WantedBy=multi-user.target But I'm not sure, because systemd-tmpfiles is not a simple program but a piece of systemd itself. I wouldn't like to break my system. Any tips about a correct .service file? | [This does not directly address the issue of systemd-tmpfiles but I think you have already recognized that in this particular case you are better off just using echo.] First up, "multi-user.target" may or may not be what you want to use. If you are familiar with the concept of runlevels from SysV style init stuff, multi-user is the systemd equivalent of runlevel 3, which is a multi-user system that boots to a console, not a GUI. The equivalent of runlevel 5, which boots to X, is graphical.target . The default is determined by a symlink in /etc/systemd/system (and/or /lib/systemd/system ; the one in /etc will overrule the one in /lib ) called default.target , use ls to find where it points: »ls -l /etc/systemd/system/default.target
default.target -> /usr/lib/systemd/system/multi-user.target systemctl get-default will tell you "multi-user.target" in this case.
For normal linux desktops it will be graphical.target . This is actually not important if you want the boot service you are creating to start regardless of what the default runlevel/target is -- in that case, we can just use default.target, and not worry what it is an alias for. If you use multi-user, however, and your default is graphical, your service won't happen. Depending on the service, there may be more appropriate and specific targets or services that you want to start this one in relation to. Based on your other question, default.target is probably fine. As a note, the difference between a "target" and a "service" is that a service contains a [Service] section which actually runs a process; a target is just a way of grouping services together via the various "depends" and "requires" directives; it doesn't do anything of its own beyond triggering other targets or services. When a service starts is determined by what other services explicitly depend on it. In the case of a simple, stand-alone event like this that we want run late in the boot process, we can use this combination of directives: [Unit]
Requires=local-fs.target
After=local-fs.target
[Install]
WantedBy=default.target The "Install" section is used when the service is installed. "WantedBy=" specifies a target we want this service to be included with, meaning it will run if that target does. If you don't have specific dependencies, getting the unit to run later rather than sooner may be a matter of looking at what's going on normally and picking something to use as a dependency or an optional prerequisite. To distinguish: By dependency I mean something which your unit requires to also be activated, and by optional prerequisite I mean something that should run before your unit if it is being used, but it is not required. Those terms are mine, but this is an important distinction used in the systemd documentation, particularly in the sense that a required dependency is guaranteed to be started if your unit is , but this requirement does not influence the order in which they are started , meaning, something that is just a dependency may actually be started afterwards (and yes, since that means your unit may be started first, the dependency is not guaranteed to succeed). Above, Requires on local-fs.target may be a bit pointless unless you think your unit is going to be used on a system where it might not be included otherwise, but combining it with After means your unit is guaranteed to be started after it is -- so you could do without the Requires (you can set a unit to start after a unit that it doesn't depend on, hence "After"
without "Requires" = an optional prerequisite). The example here is just to introduce the concepts and the distinction between dependency and order of execution: One does not determine the other. Note that "started after" still doesn't mean the prereq will have reached any particular point it its own execution. Eg., if it is about mounting remote filesystems and you this is important to your unit, you will want to use Requires and probably After the service that establishes that but you still need the actual process you are executing to do proper error handling in case the remote filesystems are not yet accessible (eg., by sleeping in a loop until they are). For the example, I'll just echo "hello world" to the console. The service itself is described in the [Service] section: [Service]
Type=simple
ExecStart=/usr/local/bin/helloworld The command needs a full path. The reason I did not just use /usr/bin/echo "hello world" is that it won't work (the output goes to /dev/null, I think), and while a service that does an echo "hello world" > /dev/console will, experimentation demonstrates that using shell redirection in an ExecStart directive won't, because the ExecStart command isn't run by a shell . But you can make it so: /usr/local/bin/helloworld is a shell script with that one line, echo "hello world" > /dev/console . 1 Note the Type=simple . This is fine for what helloworld does, and a great many other things.
If your service is long running (beyond a few seconds), systemd will fork it to the background when using simple , which is what you want (the other option is to have it killed for remaining in the foreground too long). However, if the program does this fork itself (as servers and daemons often do) you should use Type=forking . Under simple , it will be killed as a stray orphan process. The "Type" param is covered in detail in man systemd.service and you should read that part regardless of what you are trying to do. 2 Our complete, minimal service file is just those three sections ( [Unit] , [Service] , and [Install] ). To install, place the file or a symlink to it in either /etc/systemd/system or /usr/lib/systemd/system, and: systemctl --system enable helloworld It should print ln -s ... . This does not run the service, it just configures it to run at boot as discussed above. That's it in a nutshell. man systemd.unit and man systemd.service have more details (BTW, there's an index for all these things in man systemd.directives ). You can redirect output using StandardOutput and StandardError parameters in the [Service] block, see man systemd.exec . There is an index of all service file directives in man systemd.directives , indicating which man page they are documented in, e.g.:
systemd.unit(5)
Alias=
systemd.unit(5) | {
"source": [
"https://unix.stackexchange.com/questions/61390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28800/"
]
} |
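For reference, the fragments discussed in the answer above assemble into one complete unit file, e.g. /etc/systemd/system/helloworld.service (the Description line is an addition):
[Unit]
Description=Echo hello world to the console at boot
Requires=local-fs.target
After=local-fs.target

[Service]
Type=simple
ExecStart=/usr/local/bin/helloworld

[Install]
WantedBy=default.target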
61,461 | How can we extract specific files from a large tar.gz file? I found the process of extracting files from a tar in this question but, when I tried the mentioned command there, I got the error: $ tar --extract --file={test.tar.gz} {extract11}
tar: {test.tar.gz}: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now How do I then extract a file from tar.gz ? | (The braces in the command you copied are placeholder markup, not shell syntax; tar was looking for a file literally named {test.tar.gz} , which is why it could not be opened.) You can also use tar -zxvf <tar filename> <file you want to extract> You must write the file name exactly as tar ztf test.tar.gz shows it. If it says e.g. ./extract11 , or some/bunch/of/dirs/extract11 , that's what you have to give (and the file will show up under exactly that name, needed directories are created automatically). -x : instructs tar to extract files. -f : specifies filename / tarball name. -v : Verbose (show progress while extracting files). -z : filter archive through gzip, use to decompress .gz files. -t : List the contents of an archive | {
"source": [
"https://unix.stackexchange.com/questions/61461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29327/"
]
} |
61,567 | I've got a question that I've not been able to find an answer for. I have two computers, both of which run Ubuntu Linux 12.04. I have set up my first computer ("home") to be able to SSH into my second computer ("remote") using public/private RSA key authentication. This is not the first SSH connection that I have set up using key authentication on my home computer, so my home computer has several id_rsa private keyfiles (each of which is for a different computer to SSH into). Thus, I am able to successfully SSH only when I specify a keyfile (in ssh , the -i option), using ssh username@ipaddress -i path/to/keyfile/id_rsa.2 . That works great. However, I would also like to use sshfs , which mounts the remote filesystem. While ssh seems to play nice with multiple keys, I can't find a way to get sshfs to use the correct private key ("id_rsa.2"). Is there a way to get sshfs to do this? | Here's what works for me: sshfs user@host:/remote/path /local/path/ -o IdentityFile=/path/to/key You can figure this out via man sshfs : -o SSHOPT=VAL ssh options (see man ssh_config) man ssh_config IdentityFile Specifies a file from which the user's DSA, ECDSA or RSA authentication identity is read. | {
"source": [
"https://unix.stackexchange.com/questions/61567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30419/"
]
} |
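As an alternative to passing -o IdentityFile on every invocation, the key can be declared once in ~/.ssh/config, which both ssh and sshfs honour. The host alias and paths below are illustrative:
# in ~/.ssh/config:
#   Host myhost
#       HostName example.com
#       User me
#       IdentityFile ~/.ssh/keyfile/id_rsa.2
sshfs myhost:/remote/path /local/path/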
61,580 | I was able to do sftp yesterday to a RHEL 5.4 box (RedHat) and today I can't. The message is "Received message too long 778199411" , and after some investigation, it was due to my RHEL box's .bashrc having a line echo "running .bashrc" -- or echoing anything at all, I think. So why would printing out a line affect sftp ? It felt a bit like a design issue as printing out a line in .bashrc works in other situations such as log in or ssh and it is kind of hard to track down when sftp fails for such a weird reason. So the question is, why printing out a line cause such error and what if we still like to print out something in .bashrc ? (mainly to see when this file gets sourced/executed). | This is a longstanding problem. I found it ten years ago when I first had to mix commercial SSH at work and open-SSH at home. I ran into it again today and found this post. If I had searched for "sftp/scp fails but ssh is OK" I would have been reminded of the solution sooner! Put simply, .bashrc , .bash_profile , .cshrc , .profile , etc.,
have to be silent for non-interactive sessions
or they interfere with the sftp / scp connection protocol. This output confuses the sftp/scp client.
You can verify if your shell is doing this by executing: ssh yourhost /usr/bin/true If the above command produces any output,
then you need to modify your shell initialization. From the open-SSH FAQ: 2.9 - sftp/scp fails at connection, but ssh is OK. | {
"source": [
"https://unix.stackexchange.com/questions/61580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19342/"
]
} |
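Regarding the closing question above (how to keep printing something from .bashrc): the usual idiom is to guard the output so that only interactive shells produce it, leaving sftp/scp sessions silent. A minimal bash example:
# in ~/.bashrc
if [[ $- == *i* ]]; then    # $- contains "i" only in interactive shells
    echo "running .bashrc"
fi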
61,584 | Sometimes, a terminal screen is messed up, and when we use man ls to read the manpages, or press the UP arrow to go to previous commands in history, the screen will show characters not as the right place. (for example, treat the end of screen as some where in the middle of the screen). The command reset is tried and it wouldn't work. One way that works is to log out or close the window, and resize the window first, and then do ssh (or close that tab, and resize the window, and then open a new tab to get a new shell). But this way, we will lose anything that we previously did, such as starting a virtual machine console, etc. So if we don't close the shell, is there a way to fix this problem? (this happened before right inside Fedora, and also for a Macbook ssh into a RHEL 5.4 box). Update: I remember now how it happened in Fedora: I opened up a Terminal, and did a FreeVM to use a console of a Virtual Machine (a shell). I think it was size 80 x 25 and then after a while, I resized the Terminal to 130 x 50 approximately, and then the "inner shell" (of the VM) started to behave weird). | If you are using bash, check if "checkwinsize" option is activated in your session using shopt | grep checkwinsize If you don't get checkwinsize on then activate it with shopt -s checkwinsize Bash documentation says for "checkwinsize" attribute : "If set, Bash checks the window size after each command and, if
necessary, updates the values of LINES and COLUMNS." If you like the setting, you could activate checkwinsize in your ~/.bashrc . To activate: shopt -s checkwinsize To deactivate: shopt -u checkwinsize | {
"source": [
"https://unix.stackexchange.com/questions/61584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19342/"
]
} |
61,586 | Related to this question Short description of the problem: When source tree has a mounted point inside it, then time stamps on files inside that mounted point when copied to target tree are not preserved even when using -a option Detailed description: Assume this is the source tree: /home/ /home/
| |
me/ BACKUP/
| |
+----+----------+ +----+-------+
| | | | | |
data/ foo.txt boo.txt data/ foo.txt boo.txt
| |
a.txt a.txt where data/ above is mounted external USB disk. Everything is ext4 file system. Everything in source is owned my me . BACKUP also happened to be a mount point, the backup USB disk. After issuing this command rsync -av --delete /home/me/ /home/BACKUP/ , I found that /home/BACKUP/data/ and everything below it has the current time stamp, as if these files were created now, and not the time stamp on the files in /home/me/data/ . Other files and folders outside data did have the time stamp preserved OK. Question is: How to use rsync in the above setting to tell it to preserve time stamps on all files and folders even on files and folders on a mounted point? I am using: >uname -a
Linux 3.5.0-17-generic #28-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux
>rsync -v
rsync version 3.0.9 protocol version 30 | from man rsync : -t, --times preserve modification times Since you are copying files from one filesystem to another and wanting to preserve c-time . Most people understand c-time to mean "create time" which is incorrect on most UNIX/Linux systems (Windows filesystems track "creation" or "birth" times). For the most part, in UNIX and Linux, c-time is the timestamp used to record the last inode ' C 'hange. An inode changes if any of its attributes are updated: creation (OP's case) mode (permissions) owner/group hard link count etc. (stat() system call) OP cannot preserve the c-time of their file's when they are brought onto a new filesystem. The creation of these files in the new filesystems is one of the conditions listed above (creation of inode/file). | {
"source": [
"https://unix.stackexchange.com/questions/61586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30274/"
]
} |
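The mtime/ctime distinction drawn in the answer above is easy to observe on a GNU system (paths taken from the question):
rsync -av --delete /home/me/ /home/BACKUP/
stat -c 'mtime=%y ctime=%z %n' /home/me/data/a.txt /home/BACKUP/data/a.txt
# %y (mtime) should match on source and backup; %z (ctime) on the backup is
# the moment the file was created there, and nothing can preserve that.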
61,655 | Say I want to configure my ssh options for 30 servers with the same setup in my .ssh config file: host XXX
HostName XXX.YYY.com
User my_username
Compression yes
Ciphers arcfour,blowfish-cbc
Protocol 2
ControlMaster auto
ControlPath ~/.ssh/%r@%h:%p
IdentityFile ~/.ssh/YYY/id_rsa where the only thing that changes between these 30 machines is XXX . Instead than repeating the above structure 30 times in my config file, is there another way to define a range of machines? | From the ssh_config(5) man page: Host Restricts the following declarations (up to the next Host key‐
word) to be only for those hosts that match one of the patterns
given after the keyword. If more than one pattern is provided,
they should be separated by whitespace. ... HostName
Specifies the real host name to log into. This can be used to
specify nicknames or abbreviations for hosts. If the hostname
contains the character sequence ‘%h’, then this will be replaced
with the host name specified on the commandline (this is useful
for manipulating unqualified names). So: Host XXX1 XXX2 XXX3
HostName %h.YYY.com | {
"source": [
"https://unix.stackexchange.com/questions/61655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
61,774 | I need to backup a fairly large directory, but I am limited by the size of individual files. I'd like to essentially create a tar.(gz|bz2) archive which is split into 200MB maximum archives. Clonezilla does something similar to this by splitting image backups named like so: sda1.backup.tar.gz.aa
sda1.backup.tar.gz.ab
sda1.backup.tar.gz.ac Is there a way I can do this in one command? I understand how to use the split command, but I'd like to not have to create one giant archive, then split it into smaller archives, as this would double the disk space I'd need in order to initially create the archive. | You can pipe tar to the split command: tar cvzf - dir/ | split --bytes=200MB - sda1.backup.tar.gz. On some *nix systems (like OS X) you may get the following error: split: illegal option -- - In that case try this (note the -b 200m ): tar cvzf - dir/ | split -b 200m - sda1.backup.tar.gz. If you happen to be trying to split the file to fit on a FAT32 formatted drive,
use a byte limit of 4294967295. For example: tar cvzf - /Applications/Install\ macOS\ Sierra.app/ | \
split -b 4294967295 - /Volumes/UNTITLED/install_macos_sierra.tgz. When you want to extract the files use the following command (as of @Naftuli Kay commented): cat sda1.backup.tar.gz.* | tar xzvf - | {
"source": [
"https://unix.stackexchange.com/questions/61774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
61,876 | In my new Gentoo installation, su doesn't work as my non-root user: After entering the correct password I get the message "su: Permission denied". What could be causing this? I have already tried reinstalling the package containng /bin/su . EDIT: sudo works. | You have to add your user to the wheel group : gpasswd -a youruser wheel Alternatively, you can disable the group membership check for su in pam by editing /etc/pam.d/su and commenting out this line: auth required pam_wheel.so use_uid It requires users to be in the wheel group to be able to switch user.
User switching as non-root works again when this pam module is disabled for su . | {
"source": [
"https://unix.stackexchange.com/questions/61876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18110/"
]
} |
61,885 | I'm using a rescue-live-system (similar to a live-cd) to fix some issues with my Debian server, like that: # mkdir -p /mnt/rescue
# mount /dev/md2 /mnt/rescue
# mount --bind /dev /mnt/rescue/dev/
# mount --bind /proc /mnt/rescue/proc/
# mount --bind /sys /mnt/rescue/sys/ Now I can chroot to /mnt/rescue - but after I'm done, how to unmount the filesystem again? umount: /mnt/rescue: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1)) I guess it's because dev , proc and sys are bound to the mounted file system. But it's not possible to unmount them either... | You have to first exit the chroot session, usually a simple exit will do: exit Then umount ALL binded directories: umount /mnt/rescue/dev/
umount /mnt/rescue/proc/
umount /mnt/rescue/sys/ Then: umount /mnt/rescue In case you were worried that sync isn't used here, note that it has no influence on whether unmounting is possible. Unmounting flushes pending writes anyway (it has to, because there'd be nowhere for them to go after the unmounting). The presence of a chrooted process is irrelevant (except in that it prevents unmounting). In normal system operation, sync has no observable effect. sync only makes a difference if a device is physically disconnected without having been unmounted or if the system crashes while the device is mounted. | {
"source": [
"https://unix.stackexchange.com/questions/61885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19062/"
]
} |
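On systems with a reasonably recent util-linux, the individual umounts above can be collapsed into one recursive call (a convenience, assuming the rescue environment's umount supports -R):
exit                   # leave the chroot first
umount -R /mnt/rescue  # recursively unmounts dev/, proc/, sys/, then the fs itself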
61,907 | Suppose I have a directory structure like this: $ [~/practice] ls
a/ b/ c/ d/ Now I want to create a directory tmp1 in all sub directories of practice and I do this: $ [~/practice] mkdir */tmp1
mkdir: cannot create directory `*/tmp1': No such file or directory Then I try the -p switch and I endup with a directory named * with a sub directory tmp1 $ [~/practice] mkdir -p */tmp1
$ [~/practice] ls
*/ a/ b/ c/ d/ I know the use of -p switch is to create multiple nonexistent directories. I just thought it might help. How do I create tmp1 in all subdirectories at once? If this can be done, how do I extend it to create \tmp1 , \tmp2 , \tmp3 in \a , \b and \c at once? Edit: I missed mentioning that the directories don't have to be simple and in order, like a , b , c etc., and the directory to be created is not necessarily like tmp1 , tmp2 . $ [~/practice] ls
dog/ cat/ rat/ In them, I would like to have something like $ [~/practice] ls *
dog:
red/ blue/
cat:
red/ blue/
rat:
red/ blue/ | With globs : for dir in */; do mkdir -- "$dir/tmp1"; done NOTE I treat only dirs (including symlinks to dirs) with the little hack of using */ as a glob If you want to create multiple subdirs at once : for dir in */; do mkdir -- "$dir"/{tmp1,foo,bar,qux}; done | {
"source": [
"https://unix.stackexchange.com/questions/61907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4605/"
]
} |
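When the directory names are known in advance, as in the dog/cat/rat example from the question above, brace expansion avoids the loop entirely (with -p the parent directories are created too if missing):
mkdir -p {dog,cat,rat}/{red,blue}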
61,931 | I have a bash file that I need to redirect all output to one file, debug log as well as to the terminal. I need to redirect both stdout and stderr to the debug and log it for all commands in the script. I do not want to add 2>&1 | tee -a $DEBUG for every single command in the file. I could live with | tee -a $DEBUG . I remember there was a way to do it with something like exec 2>&1 . Currently I'm using something like the following: #!/bin/bash
DEBUGLOG=/tmp/debug
exec 2>&1
somecommand | tee -a $DEBUGLOG
somecommand2 | tee -a $DEBUGLOG
somecommand3 | tee -a $DEBUGLOG but it does not work. Does anyone have a solution, or can explain the cause? | As for a solution to redirect lots of commands at once: #!/bin/bash
{
somecommand
somecommand2
somecommand3
} 2>&1 | tee -a $DEBUGLOG Why your original solution does not work: exec 2>&1 will redirect the standard error output to the standard output of your shell, which, if you run your script from the console, will be your console. the pipe redirection on commands will only redirect the standard output of the command. On the point of view of somecommand , its standard output goes into a pipe connected to tee and the standard error goes into the same file/pseudofile as the standard error of the shell, which you redirect to the standard output of the shell, which will be the console if you run your program from the console. The one true way to explain it is to see what really happens: Your shell's original environment might look like this if you run it from the terminal: stdin -> /dev/pts/42
stdout -> /dev/pts/42
stderr -> /dev/pts/42 After you redirect standard error into standard output ( exec 2>&1 ), you ... basically change nothing. But if you redirect the script's standard output to a file, you would end up with an environment like this: stdin -> /dev/pts/42
stdout -> /your/file
stderr -> /dev/pts/42 Then redirecting the shell standard error into standard output would end up like this : stdin -> /dev/pts/42
stdout -> /your/file
stderr -> /your/file Running a command will inherit this environment. If you run a command and pipe it to tee, the command's environment would be : stdin -> /dev/pts/42
stdout -> pipe:[4242]
stderr -> /your/file So your command's standard error still goes into what the shell uses as its standard error. You can actually see the environment of a command by looking in /proc/[pid]/fd : use ls -l to also list the symbolic link's content. The 0 file here is standard input, 1 is standard output and 2 is standard error. If the command opens more files (and most programs do), you will also see them. A program can also choose to redirect or close its standard input/output and reuse 0 , 1 and 2 . | {
"source": [
"https://unix.stackexchange.com/questions/61931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30613/"
]
} |
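For completeness, the exec-based variant the question above was half-remembering can be written with bash process substitution; everything after the exec line goes to both the terminal and the log. A bash-specific sketch using the question's variable names:
#!/bin/bash
DEBUGLOG=/tmp/debug
exec > >(tee -a "$DEBUGLOG") 2>&1
somecommand
somecommand2
somecommand3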
62,015 | Sometimes I define a function that shadows an executable and tweaks its arguments or output. So the function has the same name as the executable, and I need a way how to run the executable from the function without calling the function recursively. For example, to automatically run the output of fossil diff through colordiff and less -R I use: function fossil () {
local EX=$(which fossil)
if [ -z "$EX" ] ; then
echo "Unable to find 'fossil' executable." >&2
return 1
fi
if [ -t 1 ] && [ "$1" == "diff" ] ; then
"$EX" "$@" | colordiff | less -R
return
fi
"$EX" "$@"
} If I were sure about the location of the executable, I could simply type /usr/bin/fossil . Bash recognizes from the / that the command is an executable, not a function. But since I don't know the exact location, I have to resort to calling which and checking the result. Is there a simpler way? | Use the command shell builtin: bash-4.2$ function date() { echo 'at the end of days...'; }
bash-4.2$ date
at the end of days...
bash-4.2$ command date
Mon Jan 21 16:24:33 EET 2013
bash-4.2$ help command
command: command [-pVv] command [arg ...]
Execute a simple command or display information about commands.
Runs COMMAND with ARGS suppressing shell function lookup, or display
information about the specified COMMANDs. Can be used to invoke commands
on disk when a function with the same name exists. | {
"source": [
"https://unix.stackexchange.com/questions/62015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22339/"
]
} |
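Applied to the fossil wrapper from the question above, command makes the which lookup unnecessary. A sketch preserving the original behaviour:
function fossil () {
    if [ -t 1 ] && [ "$1" = "diff" ]; then
        command fossil "$@" | colordiff | less -R
    else
        command fossil "$@"
    fi
}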
62,032 | I am writing a csh alias so that I can use the following bash function in my csh : function up( )
{
LIMIT=$1
P=$PWD
for ((i=1; i <= LIMIT; i++))
do
P=$P/..
done
cd $P
export MPWD=$P
} (I stole the above bash function from here ) I have written this: alias up 'set LIMIT=$1; set P=$PWD; set counter = LIMIT; while[counter!=0] set counter = counter-1; P=$P/.. ; end cd $P; setenv MPWD=$P' However, I am getting the following error: while[counter!=0]: No match.
P=/net/devstorage/home/rghosh/..: Command not found.
end: Too many arguments. and my script is not working as intended. I have been reading up on csh from here . | | {
"source": [
"https://unix.stackexchange.com/questions/62032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13871/"
]
} |
62,049 | For some reason, when I make a text file on OS X, it's always at least 4kB, unless it's blank. Why is this? Could there be 4,000 bytes of metadata about 1 byte of plain text? | The block size of the file system must be 4 kB. When data is written to a file that is contained in a file system, the operating system must allocate blocks of storage to contain the data that will be written to the file. Typically, when a file system is created, the storage contained in that file system is segmented into blocks of a fixed size. This Wikipedia article briefly explains this process. The file system holding this file evidently uses a 4 kB block size, so the file occupies one 4 kB block even though only one byte within that block contains actual data. | {
"source": [
"https://unix.stackexchange.com/questions/62049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19064/"
]
} |
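The effect described above is easy to observe by comparing a file's logical size with the space actually allocated for it (works on both OS X and Linux):
printf x > one-byte.txt
ls -l one-byte.txt   # logical size: 1 byte
du -h one-byte.txt   # allocated space: one full block, e.g. 4.0K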
62,154 | To know when was a process started, my first guess was to check the time when /proc/<pid>/cmdline was written/modified the last time. ps also shows a START field. I thought both of these sources would be the same. Sometimes they are not the same. How could that be? | On Linux at least, you can also do: ps -o lstart= -p the-pid to have a more useful start time. Note however that it's the time the process was started, not necessarily the time the command that it is currently executing was invoked. Processes can (and generally do) run more than one command in their lifetime. And commands sometimes spawn other processes. The mtimes of the files in /proc on Linux (at least) are generally the date when those files were instantiated, which would be the first time something tried to access them or list the directory content. For instance: $ sh -c 'date +%T.%N; sleep 3; echo /proc/"$$"/xx*; sleep 3; stat -c %y "/proc/$$/cmdline"'
13:39:14.791809617
/proc/31407/xx*
2013-01-22 13:39:17.790278538 +0000 Expanding /proc/$$/xx* caused the shell to read the content of /proc/$$ which caused the cmdline file to be instantiated. See also: Timestamp of socket in /proc//fd | {
"source": [
"https://unix.stackexchange.com/questions/62154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23930/"
]
} |
62,176 | What is the difference between ps and top command ? I see that both can display information about running processes . Which one should be used when ? | top is mostly used interactively (try reading man page or pressing "h" while top is running) and ps is designed for non-interactive use (scripts, extracting some information with shell pipelines etc.) | {
"source": [
"https://unix.stackexchange.com/questions/62176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
62,182 | A part of the output from the ps -ef command is given below : UID PID PPID C STIME TTY TIME CMD
root 1 0 0 2012 ? 00:00:01 init [3]
root 2 1 0 2012 ? 00:00:01 [migration/0]
root 3 1 0 2012 ? 00:00:00 [ksoftirqd/0]
root 4 1 0 2012 ? 00:00:00 [watchdog/0]
root 5 1 0 2012 ? 00:00:00 [events/0]
root 6 1 0 2012 ? 00:00:00 [khelper]
root 7 1 0 2012 ? 00:00:00 [kthread]
root 9 7 0 2012 ? 00:00:00 [xenwatch]
root 10 7 0 2012 ? 00:00:00 [xenbus]
root 18 7 0 2012 ? 00:00:01 [migration/1]
root 19 7 0 2012 ? 00:00:00 [ksoftirqd/1] What does the "?" for all the rows in the TTY column mean? Also, what do the C and CMD columns stand for? | You can check the manpage using man ps to find out what the columns mean. The Linux ps manpage, for example, gives: c C integer value of the processor utilisation percentage.
(see %cpu)
tname TTY controlling tty (terminal). (alias tt, tty).
args COMMAND command with all its arguments as a string. May chop as
desired. Modifications to the arguments are not shown.
The output in this column may contain spaces.
(alias cmd, command)
cmd CMD see args. (alias args, command) If the TTY is ? that means that the process is not associated with any user terminal. | {
"source": [
"https://unix.stackexchange.com/questions/62182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
62,202 | Why does RHEL (and its derivatives) use such an old kernel? It uses 2.6.32-xxx, which seems old to me. How do they support newer hardware with that kernel? As far as I know these kinds of distributions do run on fairly modern hardware. | Because Red Hat Enterprise Linux is foremost about stability , and is a long-lived distribution (some 10 years guaranteed). RHEL users don't want anything to change unless absolutely necessary. But note that only the base version of the kernel is old: RHEL's kernel contains lots of backported code and bug fixes (including support for newer hardware), so it isn't really old. | {
"source": [
"https://unix.stackexchange.com/questions/62202",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30184/"
]
} |
62,231 | I'm reading O'Reilly's Bash pocket guide.
It said: The process ID of the current Bash process.
In some cases, this can differ from $$. That explanation describes the $BASHPID variable. My question: in which cases do they differ? | An example is provided in the BASHPID description of the bash manpage: BASHPID
Expands to the process id of the current bash process. This
differs from $$ under certain circumstances, such as subshells
that do not require bash to be re-initialized. Here is an example of a subshell outputting the contents of the variable, along with $$ and the contents of BASHPID outside of the subshell. $ echo $(echo $BASHPID $$) $$ $BASHPID
25680 16920 16920 16920
# | | | |
# | | | -- $BASHPID outside of the subshell
# | | -- $$ outside of the subshell
# | -- $$ inside of the subshell
# -- $BASHPID inside of the subshell | {
"source": [
"https://unix.stackexchange.com/questions/62231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21911/"
]
} |
62,247 | I am trying to run weblogic server on my linux machine and I am getting the following error : ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197) I think that the error means that the debugger port, which by default is 8453, is already held by some other service. How can I find out what service is running on a particular port number? P.S: I used the netstat command but that shows all the services occupying all ports; here I am interested in a particular port only. | Two ways: (1) lsof -i :port -S and (2) netstat -a | grep port . You can do man lsof or man netstat for the needed info.
Replace port by the port number you want to search for. | {
"source": [
"https://unix.stackexchange.com/questions/62247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
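Plugging the debugger port from the question above (8453) into those commands, and narrowing netstat so it does not list every port:
lsof -i :8453
netstat -tlnp | grep :8453   # Linux net-tools; -p needs root to show the owning PID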
62,256 | I am using Huawei 3G USB Modem in Ubuntu 12.04 LTS on DELL Inspiron NS5520. I tried every possible solution but had no luck. Whenever my USB Modem disconnects, I can not re-connect it and then I have to restart laptop. On restart, sometimes it detects the USB Modem and sometimes not. In Ubuntu 11.x It was working fine, but now I am using 12.04. This is the lsusb ouput: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 003 Device 003: ID 12d1:1001 Huawei Technologies Co., Ltd. E169/E620/E800 HSDPA Modem
Bus 001 Device 003: ID 0bda:0129 Realtek Semiconductor Corp.
Bus 001 Device 004: ID 0c45:648d Microdia
Bus 002 Device 003: ID 8087:07da Intel Corp Any help will be greatly appreciated. | | {
"source": [
"https://unix.stackexchange.com/questions/62256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30990/"
]
} |
62,316 | I used to use the somewhat whimsical en_DK.UTF-8 locale when installing a new system because that would produce (roughly) the locale results I wanted, even though I am not in Denmark. Measurements metric Sensible date and time formats, but day and month names in English 24-hour time format Work week starts on Monday Numeric date in (something at least resembling) ISO format, yyyy-mm-dd Informal date is dd/mm, not the other way around A4 paper size Euro currency System messages in English Alas, Ubuntu and Debian no longer seem to support the en_DK locale. I have been thinking there should be something like en_EU for "Euro English". Every place I have worked has had this sort of requirement -- the official language of the organization is English, but we want continental European defaults for everything else. I am imagining I am not the first person to think that a "location agnostic" English locale would benefit both me personally and the organizations I work for. So why does it not exist, and where do I look for further discussions and rationale? ... Or should I go ahead and propose it? To whom? | en_IE.UTF-8 English (Ireland) locale has all the things you're asking for: Measurements metric — yes 24-hour time format — yes Work week starts on Monday — yes Numeric date in (something at least resembling) ISO format, yyyy-mm-dd
— no , in this locale it's dd/mm/yy . But that seems close enough to what you're used to Informal date is dd/mm, not the other way around — yes A4 paper size — yes Euro currency — yes System messages in English — yes I'm actually using this locale, even though I'm in Amsterdam, as there is no English (Paneuropean) locale that I know of. BTW, don't make the mistake of selecting the ga_IE.UTF-8 Irish (Ireland) locale, as that is the Irish Gaelic language. | {
"source": [
"https://unix.stackexchange.com/questions/62316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19240/"
]
} |
62,322 | A hacker has dropped a file in my tmp dir that is causing issues. Nothing malicious except creating GB's of error_log entries because their script is failing. However, the file they are using to execute has no permissions and even as ROOT I can't delete or rename this file. ---------- 1 wwwusr wwwusr 1561 Jan 19 02:31 zzzzx.php
root@servername [/home/wwwusr/public_html/tmp]# rm zzzzx.php
rm: remove write-protected regular file './zzzzx.php'? y
rm: cannot remove './zzzzx.php': Operation not permitted I have also tried removing by inode root@servername [/home/wwwusr/public_html/tmp]# ls -il
...
1969900 ---------- 1 wwwusr wwwusr 1561 Jan 19 02:31 zzzzx.php
root@servername [/home/wwwusr/public_html/tmp]# find . -inum 1969900 -exec rm -i {} \;
rm: remove write-protected regular file './zzzzx.php'? y
rm: cannot remove './zzzzx.php': Operation not permitted How do I delete this file? | The file has probably been locked using file attributes . As root, do lsattr zzzzx.php Attributes a (append mode) or i (immutable) present would prevent your rm . If they're there, then chattr -ai zzzzx.php
rm zzzzx.php should delete your file. | {
"source": [
"https://unix.stackexchange.com/questions/62322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31023/"
]
} |
62,333 | I'm really fond of "null coalescing", where you can set a variable to the first "non-null" value in a list of things. Many languages support this, for example: C#: String myStr = string1 ?? string2 ?? "default"; JavaScript: var myStr = string1 || string2 || "default"; ...etc. I'm just curious if this can be done in Bash to set a variable? pseudo: MY_STR=$ENV{VAR_NAME}??$ANOTHER_VAR??"default"; | The POSIX shell (so includes bash ) equivalent would be: ${FOO:-${BAR:-default}} See also the: ${FOO-${BAR-default}} variant which checks whether the variable is set or not instead of whether it resolves to the empty string or not (which makes a difference in the cases where a variable is set but empty). | {
"source": [
"https://unix.stackexchange.com/questions/62333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28615/"
]
} |
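Rewriting the pseudo-code from the question above with that operator (quoting added for safety):
MY_STR="${VAR_NAME:-${ANOTHER_VAR:-default}}"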
62,355 | I am currently looking for a website or a tool that would allow me to compare the package state of a particular software in different Linux distributions. For instance, which version of gimp is provided by Mint, Ubuntu, Debian Sid and Fedora 18? An immediate interest would be to be able to avoid reinventing the wheel when packaging software (for instance re-use patches from other distros). | whohas package (link) may help you. Example % whohas pidgin|grep "pidgin "
MacPorts pidgin 2.10.6 https://trac.macports.org/browser/trunk/dports/net/pidgin/Portfile
Slackware pidgin 2.7.11-i486-3sl slacky.eu
Slackware pidgin 2.7.0-i486-1 salixos.org
Slackware pidgin 2.7.0-i486-1 slackware.com
OpenBSD pidgin 2.9.0-gtkspell 8.3M
OpenBSD pidgin 2.9.0 8.3M 16-Aug-201
Mandriva pidgin 2.10.6-0.1.i586 http://sophie.zarb.org/rpms/a6ec6cd30f5fa024d14549eea375dba4
Fink pidgin 2.10.6-1 http://pdb.finkproject.org/pdb/package.php/pidgin
FreeBSD pidgin 2.10.6 net-im http://www.freebsd.org/cgi/pds.cgi?ports/net-im/pidgin
FreeBSD e17-module-everything-pidgin 20111128 x11-wm http://www.freebsd.org/cgi/pds.cgi?ports/x11-wm/e17-module-everything-pidgin
NetBSD pidgin 2.10.6nb5 10M 2012-12-15 chat http://pkgsrc.se/chat/pidgin
Ubuntu pidgin 1:2.10.0-0ubuntu2. 695K oneiric http://packages.ubuntu.com/oneiric/pidgin
Ubuntu indicator-status-provider-pidgin 0.5.0-0ubuntu1 7K oneiric http://packages.ubuntu.com/oneiric/indicator-status-provider-pidgin
Debian pidgin 2.7.3-1+squeeze3 706K stable http://packages.debian.org/squeeze/pidgin
Debian pidgin 2.10.6-2 591K testing http://packages.debian.org/wheezy/pidgin
Debian indicator-status-provider-pidgin 0.6.0-1 33K testing http://packages.debian.org/wheezy/indicator-status-provider-pidgin
Source Mage funpidgin 2.5.0 test
Source Mage funpidgin 2.5.0 stable
Source Mage pidgin 2.10.6 test
Source Mage pidgin 2.10.5 stable
Gentoo pidgin 2.10.6 http://gentoo-portage.com/net-im/pidgin
Gentoo pidgin 2.10.4 http://gentoo-portage.com/net-im/pidgin | {
"source": [
"https://unix.stackexchange.com/questions/62355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22904/"
]
} |
62,579 | Is there an easy way in zsh to add a directory to my PATH only if it's not already present? (or, more generally, any environment variable). I've tried: PATH+=/my/directory ... but if that's executed twice, it gets added twice. | In zsh $PATH is tied (see typeset -T ) to the $path array. You can force that array to have unique values with: typeset -U path PATH (here with the U nique attribute also added to $PATH , so deduplication also happens when assigning to $PATH instead of $path ) And then, add the path with: path+=(~/foo) Without having to worry if it was there already. To add it at the front, do: path=(~/foo "$path[@]") or: path[1,0]=~/foo if ~/foo was already in $path that will move it to the front. | {
"source": [
"https://unix.stackexchange.com/questions/62579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18985/"
]
} |
62,660 | I know using the command ls will list all the directories. But what does the ls * command do ? I used it and it just lists the directories. Does the star in front of ls mean how deep it will list the directories? | ls lists the files and content of directories it is being passed as arguments, and if no argument is given, it lists the current directory. It can also be passed a number of options that affect its behaviour (see man ls for details). If ls is being passed an argument called * , it will look for a file or directory called * in the current directory and list it just like any other. ls doesn't treat the * character in any other way than any other one. However if ls * is a shell command line, that is code in the language of a Unix shell , then the shell will expand that * according to its globbing (also referred to as Filename Generation or Filename/Pathname Expansion ) rules. While different shells support different globbing operators, most of them agree on the simplest one * . * as a pattern means any number of characters, so * as a glob will expand to the list of files in the current directories that match that pattern. There's an exception however that a leading dot ( . ) character in a file name has to be matched explicitly, so * actually expands to the list of files and directories not starting with . (in lexical order). For instance, if the current directory contains the files called . , .. , .foo , -l and foo bar , * will be expanded by the shell to two arguments to pass to ls : -l and foo bar , so it will be as if you had typed: ls -l "foo bar" or 'ls' "-l" foo\ bar Which are three ways to run exactly the same command. In all 3 cases, the ls command (which will probably be executed from /bin/ls from a lookup of directories mentioned in $PATH ) will be passed those 3 arguments: "ls", "-l" and "foo bar". Incidentally, in this case, ls will treat the first (strictly speaking second ) one as an option. Now, as I said, different shells have different globbing operators. A few decades ago, zsh introduced the **/ operator¹ which means to match any level of subdirectories, short for (*/)# and ***/ which is the same except that it follows symlinks while descending the directories. A few years ago (July 2003, ksh93o+ ), ksh93 decided to copy that behaviour but decided to make it optional, and only covered the ** case (not *** ). Also, while ** alone was not special in zsh ² (just meant the same as * like in other traditional shells since ** means any number of character followed by any number of characters), in ksh93, ** meant the same as **/* (so any file or directory below the current one (excluding hidden files)³. bash copied ksh93 a few years later (February 2009, bash 4.0), with the same syntax but an unfortunate difference: bash's ** was like zsh 's *** , that is it was following symlinks when recursing into sub-directories which is generally not what you want it do and can have nasty side effects. It was partly fixed in bash-4.3 in that symlinks were still followed, but recursion stopped there. It was fully fixed in 5.0. yash added ** in version 2.0 in 2008, enabled with the extended-glob option. Its implementation is closer to zsh 's in that ** alone is not special. 
In version 2.15 (2009), it added *** like in zsh and two of its own extensions: .** and .*** to include hidden dirs when recursing (in zsh , the D glob qualifier (as in **/*(D) ) will consider hidden files and directories, but if you only want to traverse hidden dirs but not expand hidden files, you need ((*|.*)/)#* or **/[^.]*(D) ). fish also supports ** . Like earlier version of bash , it follows symlinks when descending the directory tree. In that shell however **/* is not the same as ** . ** is more an extension of * that can span several directories. In fish , **/*.c will match a/b/c.c but not a.c , while a**.c will match a.c and ab/c/d.c and zsh 's **/.* for instance has to be written {,**/}.* . There, *** is understood as ** followed by * so the same as ** . tcsh also added a globstar option in V6.17.01 (May 2010) and supports both ** and *** à la zsh . So in tcsh , bash and ksh93 , (when the corresponding option is enabled ( globstar )) or fish , ** expands all the files and directories below the current one, and *** is the same as ** for fish , a symlink traversing ** for tcsh with globstar , and the same as * in bash and ksh93 (though it's not impossible that future versions of those shells will also traverse symlinks). Above, you'll have noticed the need to make sure none of the expansions is interpreted as an options. For that, you'd do: ls -- * Or: ls ./* There are some commands (it doesn't matter for ls ) where the second is preferable since even with the -- some filenames may be treated specially. It's the case of - for most text utilities, cd and pushd and filenames that contain the = character for awk for instance. Prepending ./ to all the arguments removes their special meaning (at least for the cases mentioned above). It should also be noted that most shells have a number of options that affect the globbing behaviour (like whether dot files are ignored or not, the sorting order, what to do if there's no match...), see also the $FIGNORE parameter in ksh Also, in every shell but csh , tcsh , fish and zsh , if the globbing pattern doesn't match any file, the pattern is passed as an unexpanded argument which causes confusion and possibly bugs. For instance, if there's no non-hidden file in the current directory ls * Will actually call ls with the two arguments ls and * . And as there's no file at all, so none called * either, you'll see an error message from ls (not the shell) like: ls: cannot access *: No such file or directory , which has been known to make people think that it was ls that was actually expanding the globs. The problem is even worse in cases like: rm -- *.[ab] If there's no *.a nor *.b file in the current directory, then you might end up deleting a file called *.[ab] by mistake ( csh , tcsh , and zsh would report a no match error and wouldn't call rm (and fish doesn't support the [...] wildcards)). If you do want to pass a literal * to ls , you have to quote that * character in some way as in ls \* or ls '*' or ls "*" . In POSIX-like shells, globbing can be disabled altogether using set -o noglob or set -f (the latter not working in zsh unless in sh / ksh emulation). 
¹ While (*/)# was always supported, it was first short-handed as ..../ in zsh-2.0 (and potentially before), then ****/ in 2.1 before getting its definitive form **/ in 2.2 (early 1992). ² The globstarshort option has since been added (in 2015) to allow ** and *** to be used instead of **/* and ***/* respectively. ³ See also these few more oddities with the ksh93 globstar design , some of which were copied by bash. | {
"source": [
"https://unix.stackexchange.com/questions/62660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31206/"
]
} |
62,677 | I have two Raspberry Pis running Debian Wheezy and I would like to mount a folder from computer A on computer B. What is the best (as in most efficient) way to do this? I can do it via SMB, but that is for Windows; I think there must be a better way to share across Linux. | You can use plenty of things, among which, popular options are: NFS Samba / CIFS SSHFS By ease-of-setup I think they would have to be put in this order (top: easiest) SSHFS Through FUSE, you can mount remote filesystems via ssh. I won't cover how, as Cristopher has already very well explained that. Just note that, in order to mount the file automatically it will need a bit more work. Samba It will allow you to use Windows and Unix machines to access the remote folder. If it's not a big deal for you, then you probably won't benefit from it. However, it's easy to automount it on init (just input the appropriate values at /etc/fstab , including username=<your-samba-username>,password=<your-samba-password> in the options column). NFS It will let you authenticate just via IP (no usernames thing = faster, only of use inside your non-hostile LAN) or via Kerberos Tickets (too painful for just two Raspberries; but useful in corporate environments). As it has kernel mode support, it will run faster than sshfs. Besides, as there's no encryption performed it will have a better throughput, and in the case of the tiny Raspberry ARM, it may make a difference. Besides, it's not so painful to set up as long as you trust your network. You have automount support in /etc/fstab too, and you don't have to put sensitive data (such as usernames or passwords) there, and if you have your usernames synchronized (same /etc/passwd and /etc/group files) you can use the usual POSIX permissions toolset ( chown , chgrp and chmod ).
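To make that concrete, a couple of hedged examples (host names and paths are invented): a one-off SSHFS mount of computer A's folder onto computer B is just
sshfs pi@computerA:/home/pi/shared /mnt/shared
(undo it with fusermount -u /mnt/shared ), while an NFS export can be mounted at every boot with an /etc/fstab line along the lines of
computerA:/home/pi/shared /mnt/shared nfs defaults 0 0
assuming the folder is already exported in computerA's /etc/exports . | {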
"source": [
"https://unix.stackexchange.com/questions/62677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31234/"
]
} |
62,818 | I use my laptop with an external monitor which has speakers. When the monitor is attached through HDMI I can switch (using the GUI: Sound Setting --> Hardware) between the normal laptop audio output and the monitor output. I repeat this procedure a lot of time and I started to wonder if I can automate it or, anyway, execute it in a faster way using the shell. My distro is Ubuntu 12.04 with gnome 3. EDIT: I tried using pacmd, but list-sinks gives me only the device I'm currently using: pacmd list-sinks | grep name:
name: <alsa_output.pci-0000_00_1b.0.hdmi-stereo> After a switch from GUI: pacmd list-sinks | grep name:
name: <alsa_output.pci-0000_00_1b.0.analog-stereo> And if I try to change it I get: pacmd set-default-sink alsa_output.pci-0000_00_1b.0.hdmi-stereo
Welcome to PulseAudio! Use "help" for usage information.
Sink alsa_output.pci-0000_00_1b.0.hdmi-stereo does not exist. | In this case the card is always the same. What is changing between a switch and another is the "card-profile". So the solution which actually worked is: pacmd set-card-profile <cardindex> <profilename> In my case I found all the card profiles with: pacmd list-cards And after I can switch between monitor and laptop speakers with: pacmd set-card-profile 0 output:hdmi-stereo And: pacmd set-card-profile 0 output:analog-stereo+input:analog-stereo Where 0 is the index of the card: pacmd list-cards
Welcome to PulseAudio! Use "help" for usage information.
>>> 1 card(s) available.
index: 0
name: <alsa_card.pci-0000_00_1b.0> And finally, in order to make the switch faster, I set up two aliases in my .bashrc file: alias audio-hdmi='pacmd set-card-profile 0 output:hdmi-stereo+input:analog-stereo'
alias audio-laptop='pacmd set-card-profile 0 output:analog-stereo+input:analog-stereo' This way I can switch between audio from the monitor or from the laptop (headphones) typing in the shell: audio-hdmi or audio-laptop | {
"source": [
"https://unix.stackexchange.com/questions/62818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23269/"
]
} |
62,880 | Is there a way to force the find command to stop right after finding the first match? | With GNU or FreeBSD find , you can use the -quit predicate: find . ... -print -quit The NetBSD find equivalent: find . ... -print -exit If all you do is printing the name, and assuming the filenames don't contain newline characters, you could do: find . ... -print | head -n 1 That will not stop find after the first match, but possibly, depending on timing and buffering upon the second match or (much) later. Basically, find will be terminated with a SIGPIPE when it tries to output something while head is already gone because it has already read and displayed the first line of input. Note that not all shells will wait for that find command after head has returned. The Bourne shell and AT&T implementations of ksh (when non-interactive) and yash (only if that pipeline is the last command in a script) would not, leaving it running in background. If you'd rather see that behaviour in any shell, you could always change the above to: (find . ... -print &) | head -n 1 If you're doing more than printing the paths of the found files, you could try this approach: find . ... -exec sh -c 'printf "%s\n" "$1"; kill -s PIPE "$PPID"' sh {} \; (replace printf with whatever you would be doing with that file). That has the side effect of find returning an exit status reflecting the fact that it was killed though. We're sending the SIGPIPE signal instead of the default SIGTERM to avoid the message that some shells display when parts of a pipe line are killed with a signal. They generally don't do it for deaths by SIGPIPE, as those are naturally happening (like in find | head above...). | {
"source": [
"https://unix.stackexchange.com/questions/62880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28650/"
]
} |
63,098 | mkdir -p will create a directory; it will also make parent directories as needed. Does a similar command exist for files, that will create a file and parent directories as needed? | The install utility will do this, if given the source file /dev/null . The -D argument says to create all the parent directories: anthony@Zia:~$ install -D /dev/null /tmp/a/b/c
anthony@Zia:~$ ls -l /tmp/a/b/c
-rwxr-xr-x 1 anthony anthony 0 Jan 30 10:31 /tmp/a/b/c Not sure if that's a bug or not—its behavior with device files isn't mentioned in the manpage. You could also just give it a blank file (newly created with mktemp , for example) as the source.
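If you'd rather avoid install 's default executable mode (0755, visible in the listing above), the plain-shell equivalent is to create the parent directories and then the file yourself:
mkdir -p "$(dirname /tmp/a/b/c)" && touch /tmp/a/b/c
which creates an ordinary empty file subject to your normal umask. | {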
"source": [
"https://unix.stackexchange.com/questions/63098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
63,166 | I have a bash script that sets -e so the script will exit on any exit status != 0. I'm trying to do some basic shell arithmetic assigned to variables and sometimes the expression equals 0 which causes the exit status of the let or expr command to be "1". Here's an example: #!/bin/bash -ex
echo "Test 1"
Z=`expr 1 - 1` || true
echo "Z will print"
let "A=4 - 4"
echo "A WILL NEVER PRINT $A"
Y=`expr 1 - 1`
echo "Y WILL NEVER PRINT $Y"
X=$(expr 2 - 2)
echo "X WILL NEVER PRINT $X" The output is: $ ./test_error.sh
+ echo 'Test 1'
Test 1
++ expr 1 - 1
+ Z=0
+ true
+ echo 'Z will print'
Z will print
+ let 'A=4 - 4' My question is what's the idiomatic bash scripting way to allow the script to fail on real exit errors and not on basic arithmetic equaling 0. I could suffix all those expressions with: A=`expr $C - $D` || true But that seems hacky. | Don't use expr for arithmetic. It has long been obsolete: shells now have arithmetic built in, with the $((…)) construct (POSIX), or with let builtin (ksh/bash/zsh) or the ((…)) construct (ksh/bash/zsh). let and ((…)) return 1 (a failure status code) if the last evaluated expression is 0. To avoid this causing your script to exit under set -e , arrange for the last expression not to return 0, for example: let "a = 2 - 2" 1
((a = 2 - 2, 1)) Alternatively, use the || true idiom: ((a = 2 - 2)) || true Alternatively, do your arithmetic inside $((…)) and your assignments outside. An assignment returns the status of the last command substitution in the value, or 0 if there is no command substitution, so you're safe. This has the added benefit of working in any POSIX shell (such as dash). a=$((2 - 2)) | {
"source": [
"https://unix.stackexchange.com/questions/63166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31476/"
]
} |
63,358 | I know how to list available packages from the repos and so on, but how can I find a list that matches up equivalent meta-packages, such as the build-essential . Is there such a thing, and if not, what would be a sensible approach to find such close/similar matches? | The equivalent command is yum groupinstall 'Development Tools'
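To browse which other groups exist (the yum counterpart of Debian's meta-packages), list them first with:
yum grouplist
and yum groupinfo 'Development Tools' shows exactly which packages the group pulls in. | {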
"source": [
"https://unix.stackexchange.com/questions/63358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
63,389 | I noticed that some applications use the Adwaita Dark theme, while others use the light one. Is there a way to make all applications use the Dark one? | Replacing the main theme with the dark one in /usr/share/themes is not an ideal solution as each time gnome-themes is updated your theme will revert to default. It's preferable to properly configure your user account to use the dark theme; that way your settings will be preserved between updates. You can do that: Manually: create (open if already present) the following file: ~/.config/gtk-3.0/settings.ini and edit it like this:
[Settings]
gtk-application-prefer-dark-theme=1
Or via gnome-tweak-tool (which essentially does the same thing as above, writing the same file). GTK2 Problem This answer is indeed the ideal and "safe" (i.e. no need to replace/mod any system files) method. However, currently neither this method nor Jeff's answer works for all apps, because only GNOME3/GTK3 has support for the dark theme mode. As of now, there are still many apps (for example those written in GNOME2/GTK2, Qt, wxWidgets, Java, etc) which don't use GTK3 and remain on the old GTK2 theming. To make those apps dark, replacing/modding the GTK2 part ( $ACTIVE_THEME/gtk-2.0/gtkrc ) of the currently active theme works. To know more, see the answer to "Firefox not affected by gtk theme" . As GTK2 is deprecated and superseded by GTK3, all actively developed GTK2 apps are getting ported to or replaced by GTK3. Sooner or later, other toolkits like Qt, wxWidgets, etc may also implement support for GTK3. By that time, this method may become valid for all (or almost all) apps. | {
"source": [
"https://unix.stackexchange.com/questions/63389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31288/"
]
} |
63,408 | What happens when a hard drive gets full with Linux running? Does it lock the system? Or something else, or nothing happens? I am using Ubuntu Linux. | Nothing dramatic happens at the instant the disk fills up: the kernel keeps running, but every attempt to write to the full filesystem fails with ENOSPC ("No space left on device"). What you actually notice depends on which programs need to write: logging, package installs, downloads and anything using /tmp or your home directory will start reporting errors, and some desktop applications may misbehave or fail to save their state. The system does not normally lock up, though it can feel broken while / or /home is full. Note that ext3/ext4 reserve a percentage of blocks (5% by default) for root, so system processes running as root can often still write after ordinary users can't. Once you free some space (find the culprits with df -h and du -sh * ), everything goes back to normal. | {
"source": [
"https://unix.stackexchange.com/questions/63408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31175/"
]
} |
63,425 | I have read about diff and patch but I can't figure out how to apply what I need.
I guess it's pretty simple, so to show my problem, take these two files: a.xml <resources>
<color name="same_in_b">#AAABBB</color>
<color name="not_in_b">#AAAAAA</color>
<color name="in_b_but_different_val">#AAAAAA</color>
<color name="not_in_b_too">#AAAAAA</color>
</resources> b.xml <resources>
<color name="same_in_b">#AAABBB</color>
<color name="in_b_but_different_val">#BBBBBB</color>
<color name="not_in_a">#AAAAAA</color>
</resources> I want to have an output, which looks like this (order doesn't matter): <resources>
<color name="same_in_b">#AAABBB</color>
<color name="not_in_b">#AAAAAA</color>
<color name="in_b_but_different_val">#BBBBBB</color>
<color name="not_in_b_too">#AAAAAA</color>
<color name="not_in_a">#AAAAAA</color>
</resources> The merge should contain all lines along these simple rules: any line which is only in one of the files; if a line has the same name tag but a different value, take the value from the second. I want to apply this task inside a bash script, so it must not necessarily be done with diff and patch, if another program is a better fit | You don't need patch for this; it's for extracting changes and sending them on without the unchanged part of the file. The tool for merging two versions of a file is merge , but as @vonbrand wrote, you need the "base" file from which your two versions diverged. To do a merge without it, use diff like this: diff -DVERSION1 file1.xml file2.xml > merged.xml It will enclose each set of changes in C-style #ifdef / #ifndef "preprocessor" commands, like this: #ifdef VERSION1
<stuff added to file1.xml>
#endif
...
#ifndef VERSION1
<stuff added to file2.xml>
#endif If a line or region differs between the two files, you'll get a "conflict", which looks like this: #ifndef VERSION1
<version 1>
#else /* VERSION1 */
<version 2>
#endif /* VERSION1 */ So save the output in a file, and open it in an editor. Search for any places where #else comes up, and resolve them manually. Then save the file and run it through grep -v to get rid of the remaining #if(n)def and #endif lines: grep -v '^#if' merged.xml | grep -v '^#endif' > clean.xml In the future, save the original version of the file. merge can give you much better results with the help of the extra information. (But be careful: merge edits one of the files in-place, unless you use -p . Read the manual). | {
"source": [
"https://unix.stackexchange.com/questions/63425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12447/"
]
} |
63,480 | I apologize in advance if this is a duplicate question. I did make an effort to search/check before asking here. I'm comfortable with writing one-liners like this: foocommand && foocommand2 && foocommand3 The idea being that I only want subsequent commands to run if the previous one was "successful". I'm writing a somewhat lengthy script and this one-liner isn't feasible because it looks like a huge block of confusing code to everyone else. I want to space out the commands and write comments in between them in the script. How can I do this and still have the equivalent of && in there? | You can do it like this: #!/bin/sh
ls -lh &&
# This is a comment
echo 'Wicked, it works!' I hope I understood what you asked correctly.
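The same shape scales to longer scripts; a sketch with made-up command names:
#!/bin/sh
make_backup &&
# runs only if the backup succeeded
run_migration &&
# and this only if the migration succeeded
send_report
A trailing && continues the command on the next line, and full-line comments are allowed in between. | {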
"source": [
"https://unix.stackexchange.com/questions/63480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
63,509 | Is there a user-friendly command I can use to list users in a console on an Ubuntu system? When I cat /etc/passwd I get a hard-to-read list of users. It would be nice to see an alphabetized list, with entries arranged into columns, and with group names in parentheses beside the group IDs. | Good way for nice output of /etc/passwd file: $ column -nts: /etc/passwd Now you can sort it: $ column -nts: /etc/passwd | sort With groups names in last column (no parenthesis): $ paste -d: /etc/passwd <(groups $(cut -d: -f1 /etc/passwd) | sed 's/.*: //') | column -nts: | sort | {
"source": [
"https://unix.stackexchange.com/questions/63509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6495/"
]
} |
63,512 | I have a task requiring some awk to verify a few records quickly. Let's say: A1,A2
B1,B2
C1,C2
C2,C1
A1,C1
A1,B1
B1,A1 Which would be the best way to check for reciprocity only between the A#, the B# and the C#, and output only the non-reciprocal ones? For instance, the above should output A2 -> A1
B2 -> B1 A# belong to one group, B# to another and so on. There's no interest in finding any sort of connection between A# and C# or B#. Instead it is required to keep the search within the group of As, Bs, Cs and so on. | One way is to remember every same-group pair and tick it off when its reverse shows up; whatever remains at the end is non-reciprocal. A sketch (it assumes the leading letter identifies the group, as in your sample):
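awk -F, 'substr($1,1,1) == substr($2,1,1) {
    if (($2 FS $1) in seen)
        delete miss[$2 FS $1]
    else {
        seen[$1 FS $2] = 1
        miss[$1 FS $2] = 1
    }
}
END {
    for (p in miss) {
        split(p, a, FS)
        print a[2], "->", a[1]
    }
}' file
Cross-group lines such as A1,C1 fail the leading-letter test and are skipped; on your sample this prints A2 -> A1 and B2 -> B1 (in no particular order, as for (p in miss) is unordered). | {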
"source": [
"https://unix.stackexchange.com/questions/63512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30422/"
]
} |
63,580 | Say I do the following: cd /some/path
ln -s /target/path symbolic_name If I then do: cd /some/path
cd symbolic_name
pwd I get: /some/path/symbolic_name and not: /target/path Is there a way to have the shell "fully resolve" a symbolic link (i.e. updating CWD, etc.), as if I had directly done: cd /target/path ? I need to run some programs that seem to be "aware" or "sensitive" about how I get to my target path, and I would like them to think that I arrived at the target path as if I had done cd /target/path directly. | Your shell has a builtin pwd , which tries to be "smart". After you cd to a symlink, the internal pwd fakes the output as if you had moved to a real directory. Pass the -P option to pwd , i.e. run pwd -P . The -P option (for “physical”) tells pwd not to do any symbolic link tracking and display the “real” path to the directory. Alternatively, there should also be a real binary pwd , which does not do (and is even not able to do) this kind of magic. Just use that binary explicitly: $ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd
$ mkdir a
$ ln -s a b
$ cd b
$ pwd
/home/michas/b
$ /bin/pwd
/home/michas/a | {
"source": [
"https://unix.stackexchange.com/questions/63580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
63,616 | I would like to see the output in a logfile grepped by only one domain but also the following two lines. Example: tail -f /var/log/apache2/modsec_audit.log |grep mydomain.de This shows all lines that contain "mydomain.de", but the important information is in the line below the line where the domain is included. | grep has extra options to define how many lines before and after the result: -A (after) -B (before) -C (context [before + after]) So in your case you need -A : YOUR_COMMAND |grep -A NUMBER YOURDOMAIN The above command prints NUMBER lines after each match of YOURDOMAIN.
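Concretely, for the command in the question and the two lines that follow each hit:
tail -f /var/log/apache2/modsec_audit.log | grep -A 2 mydomain.de
which prints every matching line plus the two lines after it. | {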
"source": [
"https://unix.stackexchange.com/questions/63616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
63,651 | What is the -w (deadline) flag in ping for? I cannot find a description of it in the ping man page; only for -W , which takes seconds as a parameter. What is the difference between them, and how can I set a ping timeout (if host is not responding) to 200ms? | From man ping : -w deadline Specify a timeout, in seconds, before ping exits regardless of how many packets have been sent or received. In this case ping does
not stop after count packet are sent, it waits either for deadline
expire or until count probes are answered or for some error
notification from network. -W timeout Time to wait for a response, in seconds. The option affects only timeout in absense of any responses, otherwise ping waits for two
RTTs. That is, -w sets the timeout for the entire program session . If you set -w 30 , ping (the program) will exit after 30 seconds. -W on the other hand sets the timeout for a single ping . If you set -W 1 , that particular ping attempt will time out. As for how to set an individual ping timeout of 200ms, I don't believe this can be done with iputils ' version of ping . You might want to try directly programming with an ICMP library. | {
"source": [
"https://unix.stackexchange.com/questions/63651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12447/"
]
} |
63,658 | I have a file named my_file.txt whose content is just the string Hello . How could I redirect its content to the command echo ? I know I have the commands less , cat , more ... but I need to do it with echo . I tried this: $ cat my_file.txt | echo and also this: $ echo < my_file.txt But in both cases it appears only a blank in the stdout, not the content of my_file.txt. How could I do that? | You can redirect all you want to echo but it won't do anything with it. echo doesn't read its standard input. All it does is write to standard output its arguments separated by a space character and terminated by a newline character (and with some echo implementations with some escape sequences in them expanded and/or arguments starting with - possibly treated as options). If you want echo to display the content of a file, you have to pass that content as an argument to echo . Something like: echo "$(cat my_file.txt)" Note that $(...) strips the trailing newline character s from the output of that cat command, and echo adds one back. Also note that except with zsh , you can't pass NUL characters in the arguments of a command, so that above will typically not work with binary files. yash will also remove bytes that don't form part of valid characters. If the reason for wanting to do that is because you want echo to expand the \n , \b , \0351 ... escape sequences in the file (as UNIX conformant echo implementations do, but not all ), then you'd rather use printf instead : printf '%b\n' "$(cat my_file.txt)" Contrary to echo , that one is portable and won't have problems if the content of the file starts with - . As an alternative to $(cat file) , with ksh , zsh and bash , one can also do: $(<file) . That's a special operator whereby the shell as opposed to cat reads the content of the file to make up the expansion. It still strips the trailing newlines and chokes on NUL bytes except in zsh . In bash , that still forks an extra process. Also note that one difference is that you won't get any error if trying to read a file of type directory that way. Also, while $(< file) is special, $(< file; other command) is not (in zsh , when not emulating other shell, that would still expand the content of the file , by running the implicit $READNULLCMD command (typically a pager)). | {
"source": [
"https://unix.stackexchange.com/questions/63658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31748/"
]
} |
63,690 | In bash, suppose that I have a string strname : strname="ph7go04325r" I would like to extract the characters between the first "3" character and the last "r" character in strname , saving the result in a string strresult . In the example above, the resulting strresult would be: strresult="25" The first "3" character is not necessarily at string position 8 in strname ; likewise, the last "r" is not necessarily at string position 11. Thus, all of the following strings strname should yield strresult="25" : strname="ph11go04325raa"
strname="325r"
strname="rgo04325raa" Also, strname=ph12go04330raa" should yield strresult="30" . I am new to bash scripting, and I don't know where to begin to do string pattern matching like this. Do you have any suggestions? | You can use a regex in bash (3.0 or above) to accomplish this: if [[ $strname =~ 3(.+)r ]]; then
strresult=${BASH_REMATCH[1]}
else
echo "unable to parse string $strname"
fi In bash, capture groups from a regex are placed in the special array BASH_REMATCH . Element 0 contains the entire match, and 1 contains the match for the first capture group.
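A quick test against the sample strings from the question:
for strname in ph7go04325r ph11go04325raa 325r rgo04325raa ph12go04330raa; do
    [[ $strname =~ 3(.+)r ]] && echo "$strname -> ${BASH_REMATCH[1]}"
done
prints 25 for the first four and 30 for the last, because .+ is greedy and therefore spans from the first 3 to the last r . | {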
"source": [
"https://unix.stackexchange.com/questions/63690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
63,769 | I use gframecatcher to generate thumbnail video galleries, i.e. something like this: However this is a GUI tool and I want to recursively create a gallery for every video in a directory structure, so I am looking for a fast command line tool to do this. | Pull out the image captures (these are 120 pixels tall, and keep the aspect ratio); the rate ( -r ) is per-second, so this yields one frame every ~5 minutes, and the %002d pattern numbers the output images sequentially. ffmpeg -i MOVIE.mp4 -r 0.0033 -vf scale=-1:120 -vcodec png capture-%002d.png Then use ImageMagick to build your gallery image: montage -title "Movie Name\nSubtitle" -geometry +4+4 capture*.png output.png
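To run this recursively over a directory tree, you could wrap both steps in a loop; a sketch (paths invented, extension pattern to be adjusted to your files):
find /path/to/videos -type f -name '*.mp4' | while IFS= read -r f; do
    d=$(mktemp -d)   # work in a scratch directory per video
    ffmpeg -nostdin -i "$f" -r 0.0033 -vf scale=-1:120 -vcodec png "$d/capture-%002d.png"
    montage -title "$(basename "$f")" -geometry +4+4 "$d"/capture-*.png "${f%.*}-gallery.png"
    rm -r "$d"
done
The -nostdin keeps ffmpeg from swallowing the file list being read by the loop, and each video gets a gallery PNG written next to it. | {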
"source": [
"https://unix.stackexchange.com/questions/63769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
63,845 | htop allows me to nicely see trees of processes within the shell. I can kill processes by pressing F9 (KILL) and then selecting which signal (e.g. 15 SIGTERM ) I want to send to a job to kill. However, this only allows me to kill one process at a time. Is there a way to kill a full tree of processes using htop ? | From man htop : INTERACTIVE COMMANDS Space Tag or untag a process. Commands that can operate on multiple processes, like "kill", will then apply over the list of
tagged processes , instead of the currently highlighted one. U Untag all processes (remove all tags added with the Space key). F9, k "Kill" process: sends a signal which is selected in a menu, to one or a group of processes. If processes were tagged, sends
the signal to all tagged processes. If none is tagged, sends to the currently selected process. Not quite the answer you were looking for, but close. You can also eliminate process groups or children with kill, see: https://stackoverflow.com/questions/392022/best-way-to-kill-all-child-processes
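For example, from a shell you can signal an entire process group at once (the group ID is usually the PID of the tree's top-level process; 12345 here is made up):
kill -TERM -- -12345
or kill just the direct children of a given parent with pkill -P 12345 . | {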
"source": [
"https://unix.stackexchange.com/questions/63845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
63,876 | I know how to change the timestamp of a regular file: touch -t 201301291810 myfile.txt I was not able to do the same with a symlink. Is it possible? Distro: RHEL 5.8 | add switch -h touch -h -t 201301291810 myfile.txt
Mandatory arguments to long options are mandatory for short options too.
-a change only the access time
-c, --no-create do not create any files
-d, --date=STRING parse STRING and use it instead of current time
-f (ignored)
-h, --no-dereference affect each symbolic link instead of any referenced
file (useful only on systems that can change the
timestamps of a symlink)
-m change only the modification time
-r, --reference=FILE use this file's times instead of current time
-t STAMP use [[CC]YY]MMDDhhmm[.ss] instead of current time | {
"source": [
"https://unix.stackexchange.com/questions/63876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23944/"
]
} |
63,891 | I was reading this awk script awk -F"=" '{OFS="=";gsub(",",";",$2)}1' I want to know what the function of the 1 at the end of it is. | An awk program is a series of condition-action pairs, conditions being outside of curly braces and actions being enclosed in them. A condition is considered false if it evaluates to zero or the empty string, anything else is true (uninitialized variables are zero or empty string, depending on context, so they are false). Either a condition or an action can be implied; braces without a condition (as yours begins) are considered to have a true condition and are always executed if they are hit, and any condition without an action will print the line if and only if the condition is met. The 1 at the end of your script is a condition (always true) with no action, so it executes the default action for every line, printing the line (which may have been modified by the previous action in braces).
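So the script behaves as if the default action were spelled out: '{OFS="=";gsub(",",";",$2)} {print}' . A quick demonstration:
$ echo 'a=1,2,3' | awk -F"=" '{OFS="=";gsub(",",";",$2)}1'
a=1;2;3
Assigning to $2 rebuilds the record with OFS , and the trailing 1 prints the result. | {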
"source": [
"https://unix.stackexchange.com/questions/63891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6797/"
]
} |
63,923 | I often want to feed relatively short string data (could be several lines though) to commandline programs which accept only input from files (e.g. wdiff) in a repeated fashion. Sure I can create one or more temporary files, save the string there and run the command with the file name as parameter. But it looks to me as if this procedure would be highly inefficient if data is actually written to the disk and also it could harm the disk more than necessary if I repeat this procedure many times, e.g. if I want to feed single lines of long text files to wdiff. Is there a recommended way to circumvent this, say by using pseudo files such as pipes to store the data temporarily without actually writing it to the disk (or writing it only if it exceeds a critical length). Note that wdiff takes two arguments and, as far as I understand it will not be possible to feed the data doing something like wdiff <"text" . | In Bash, you can use the command1 <( command0 ) redirection syntax, which redirects command0 's stdout and passes it to a command1 that takes a filename as a command-line argument. This is called process substitution . Some programs that take filename command-line arguments actually need a real random-access file, so this technique won't work for those. However, it works fine with wdiff : user@host:/path$ wdiff <( echo hello; echo hello1 ) <( echo hello; echo hello2 )
hello
[-hello1-]
{+hello2+} In the background, this creates a FIFO, pipes the command inside the <( ) to the FIFO, and passes the FIFO's file descriptor as an argument. To see what's going on, try using it with echo to print the argument without doing anything with it: user@host:/path$ echo <( echo hello )
/dev/fd/63 Creating a named pipe is more flexible (if you want to write complicated redirection logic using multiple processes), but for many purposes this is enough, and is obviously easier to use. There's also the >( ) syntax for when you want to use it as output, e.g. $ someprogram --logfile >( gzip > out.log.gz ) See also the bash man page "process substitution" section and the Bash redirections cheat sheet for related techniques. | {
"source": [
"https://unix.stackexchange.com/questions/63923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18047/"
]
} |
63,928 | I have a single disk that I want to create a mirror of; let's call this disk sda . I have just bought another identically-sized disk, which we can call sdb . sda and sdb have one partition called sda1 and sdb1 respectively. When creating a raid, I don't want to wipe my sda clean and start again, I just want it to start mirroring with sdb . My train of thought was to do: mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=1 /dev/sda1 ... to create the array without the sdb disk, then run something like (I'm thinking the following command out loud, because I am not sure how to achieve this step) mdadm /dev/md0 --add /dev/sdb1 Note: sdb1 is assumed to be formatted similarly to sda1 Is this possible? | The simple answer to the question in the title is "Yes". But what you really want to do is the next step, which is getting the existing data mirrored. It's possible to convert the existing disk, but it's risky, as mentioned, due to the metadata location. Much better to create an empty (broken) mirror with the new disk and copy the existing data onto it. Then, if it doesn't work, you just boot back to the un-mirrored original. First, initialize /dev/sdb1 as the new /dev/md0 with a missing drive and initialize the filesystem (I'm assuming ext3, but the choice is yours) mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 missing
mkfs -text3 /dev/md0 Now, /dev/sda1 is most likely your root file system ( / ) so for safety you should do the next step from a live CD, rescue disk or other bootable system which can access both /dev/sda1 and /dev/md0 although I have successfully done this by dropping to single user mode. Copy the entire contents of the filesystem on /dev/sda1 to /dev/md0 . For example: mount /dev/sda1 /mnt/a # only do this if /dev/sda1 isn't mounted as root
mount /dev/md0 /mnt/b
cd /mnt/a # or "cd /" if it's the root filesystem
cp -dpRxv . /mnt/b Edit /etc/fstab or otherwise ensure that on the next boot, /dev/md0 is mounted instead of /dev/sda1 . Your system is probably set to boot from /dev/sda1 and the boot parameters probably specify this as the root device, so when rebooting you should manually change this so that the root is /dev/md0 (assuming /dev/sda1 was root). After reboot, check that /dev/md0 is now mounted ( df ) and that it is running as a degraded mirror ( cat /proc/mdstat ). Add /dev/sda1 to the array: mdadm /dev/md0 --add /dev/sda1 Since the rebuild will overwrite /dev/sda1 , which metadata version you use is irrelevant. As always when making major changes, take a full backup (if possible) or at least ensure that anything which can't be recreated is safe. You will need to regenerate your boot config to use /dev/md0 as root (if /dev/sda1 was root) and probably need to regenerate mdadm.conf to ensure /dev/md0 is always started. | {
"source": [
"https://unix.stackexchange.com/questions/63928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3368/"
]
} |
63,979 | I've got a simple script: #!/usr/bin/env ruby --verbose
# script.rb
puts "hi" On my OSX box, it runs fine: osx% ./script.rb
hi However, on my linux box, it throws an error linux% ./script.rb
/usr/bin/env: ruby --verbose: No such file or directory If I run the shebang line manually, it works fine linux% /usr/bin/env ruby --verbose ./script.rb
hi But I can replicate the error if I pack ruby --verbose into a single argument to env linux% /usr/bin/env "ruby --verbose" ./script.rb
/usr/bin/env: ruby --verbose: No such file or directory So I think this is an issue with how env is interpreting the rest of the shebang line. I'm using GNU coreutils 8.4 env : linux% /usr/bin/env --version
env (GNU coreutils) 8.4
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Richard Mlynarik and David MacKenzie. This seems really odd. Is this a common issue with this version of env , or is there something else going on here that I don't know? | Looks like this is because Linux (unlike BSD) only passes a single argument to the shebang command (in this case env), so env receives the one argument "ruby --verbose" and looks for a program with exactly that name. This has been extensively discussed on StackOverflow .
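If your env is new enough (the -S option appeared in GNU coreutils 8.30), it can split the rest of the shebang line back into separate arguments:
#!/usr/bin/env -S ruby --verbose
With the coreutils 8.4 shown above, the usual workaround is to drop the option from the shebang line or point the shebang at the interpreter directly. | {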
"source": [
"https://unix.stackexchange.com/questions/63979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6231/"
]
} |
64,126 | To set the sticky bit on a directory, why do the commands chmod 1777 and chmod 3777 both work? | 1 1 1 1 1 1 1 1 1 1 1 1
___________ __________ __________ ___ ___ ___ ___ ___ ___ ___ ___ ___
setUID bit setGID bit sticky bit user group others Each number (also referred to as an octal because it is base8) in that grouping represents 3 bits. If you turn it into binary it makes it a lot easier. 1 = 0 0 1 3 = 0 1 1 5 = 1 0 1 7 = 1 1 1 So if you did 1777, 3777, 5777, or 7777 you would set the sticky bit because the third column would be a 1. However, with 3777, 5777, and 7777 you are additionally setting other bits (SUID for the first column, and SGID for the second column). Conversely, any other number in that spot (up to the maximum of 7) would not set the sticky bit because the last column wouldn't be a 1 or "on." 2 = 0 1 0 4 = 1 0 0 6 = 1 1 0 | {
"source": [
"https://unix.stackexchange.com/questions/64126",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31995/"
]
} |
64,148 | What commands do I need for Linux's ls to show the file size in MB? | ls -l --block-size=M will give you a long format listing (needed to actually see the file size) and round file sizes up to the nearest MiB. If you want MB (10^6 bytes) rather than MiB (2^20 bytes) units, use --block-size=MB instead. If you don't want the M suffix attached to the file size, you can use something like --block-size=1M . Thanks Stéphane Chazelas for suggesting this. If you simply want file sizes in "reasonable" units, rather than specifically megabytes , then you can use -lh to get a long format listing and human readable file size presentation. This will use units of file size to keep file sizes presented with about 1-3 digits (so you'll see file sizes like 6.1K , 151K , 7.1M , 15M , 1.5G and so on. The --block-size parameter is described in the man page for ls; man ls and search for SIZE . It allows for units other than MB/MiB as well, and from the looks of it (I didn't try that) arbitrary block sizes as well (so you could see the file size as a number of 429-byte blocks if you want to). Note that both --block-size and -h are GNU extensions on top of the Open Group's ls , so this may not work if you don't have a GNU userland (which most Linux installations do). The ls from GNU Coreutils 8.5 does support --block-size and -h as described above. Thanks to kojiro for pointing this out. | {
"source": [
"https://unix.stackexchange.com/questions/64148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30901/"
]
} |
64,155 | This should be a simple question: Host OS: Arch Linux Guest OS: Arch Linux (GNOME) How can I send Ctrl + Alt + F1 to my Guest Linux OS? | Press Host + F1 ; the default Host key (in VirtualBox) is Right Ctrl , so that is Right Ctrl + F1 . | {
"source": [
"https://unix.stackexchange.com/questions/64155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10993/"
]
} |
64,258 | I am reading about basic shell scripting from Linux Command Line and Shell Scripting Bible . It says that the /etc/profile file sets the environment variables at startup of the Bash shell. The /etc/profile.d directory contains other scripts that contain application-specific startup files, which are also executed at startup time by the shell. Why are these files not a part of /etc/profile if they are also critical to Bash startup ? If these files are application-specific startup files not critical to Bash startup, then why are they part of the startup process ? Why are they not run only when the specific applications, for which they contain settings, are executed ? | Why are these files not a part of /etc/profile if they are also critical to Bash startup ? If you mean, "Why are they not just combined into one giant script?", the answer is: Because that would be a maintenance nightmare for the people who are responsible for the scripts. Because having the scripts loaded as independent modules makes the whole system more dynamically adjustable -- individual scripts can be added and removed without affecting the others. Etc. Because they are loaded via /etc/profile which makes them a part of the bash "profile" in the same way anyway. If these files are application-specific startup files not critical to Bash startup, then why are they part of the startup process ? Why
are they not run only when the specific applications, for which they
contain settings, are executed ? That seems to me like a broader design philosophy question that I'll split into two. The first question is about the value and appropriateness of using the shell environment. Does it have positive value? Yes, it is useful. Is it the best solution to all configuration issues? No, but it is very efficient for managing simple parameters, and also widely recognized and understood. Contrast that to say, deciding to configure such things heterogeneously, perhaps $PATH could be managed by a separate independent tool, preferred tools such as $EDITOR could be in an sqlite file somewhere, $LC lang stuff could be in a text file with a custom format somewhere else, etc -- doesn't just using env variables and /etc/profile.d suddenly seem simpler? You probably already know what an env variable is, how they work and how to use them, vs. learning 5 completely different mechanisms for 5 different ubiquitous aspects of what is appropriately named "the environment". The second question is, "Is startup the appropriate time for this?", which begs the objection that it is not very efficient (all that data which may or may not get used, etc). But: Realistically, it is not all that much data, partially because no one in their right mind would use it for more than a few simple parameters (since there are other means of configuring an application). If it is used wisely, with regard to things that are commonly invoked, then setting, eg, default $CFLAGS from a file somewhere every time you invoke gcc would be less efficient. Keep in mind that the amount of memory involved is, again, infinitesimal. It can involve systemic things which more than one application may be involved with, and the shell is a common ground . More could be added to that list, but hopefully this gives you some idea about the pros and cons of the issue -- the major 'pro' and the major 'con' being that it is a global namespace. | {
"source": [
"https://unix.stackexchange.com/questions/64258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29295/"
]
} |
64,280 | I just want to know the difference between reboot , init 6 and shutdown -r now , and which is the safest and the best? | There is no difference between them. Internally they do exactly the same thing: reboot uses the shutdown command (with the -r switch). The shutdown command is used to kill all the running processes, unmount all the file systems and finally tell the kernel to issue the ACPI power command. The source can be found here .
In older distros the reboot command was forcing the processes to exit by issuing the SIGKILL signal (still found in sources, can be invoked with the -f option); in most recent distros it defaults to the more graceful and init-friendly init 1 -> shutdown -r . This ensures that daemons clean up after themselves before shutdown. init 6 tells the init process to shut down all of the spawned processes/daemons as written in the init files (in the inverse order they started) and lastly invoke the shutdown -r now command to reboot the machine. Today there is not much difference, as both commands do exactly the same thing, and they respect the init scripts used to start services/daemons by invoking the shutdown scripts for them. The exception is reboot -f -r now , as stated below. Here is a small explanation taken from the manpages of why reboot -f is not safe: -f, --force
Force immediate halt, power-off, reboot. Don't contact the init system. Edit: Forgot to mention, in upcoming RHEL distributions you should use the new systemctl command to issue poweroff/reboot. As stated in the manpages of reboot and shutdown they are "a legacy command available for compatibility only." and the systemctl method will be the only one safe. | {
"source": [
"https://unix.stackexchange.com/questions/64280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28434/"
]
} |
64,374 | RHEL 6 Is there a difference between the >> and >\> operators? I read the following blurb in a RHEL training book: "You can add standard output to the end of an existing file with a
double redirection arrow with a command such as ls >\> filelist I'm more accustomed to the >> operator and when I try both, I get different results. Using >> seems to append output to the file that follows it (as
expected). Using >\> seems to append output to a file literally called > . Is this an error in the book I'm reading? Or am I missing the author's point? | To append text to a file you use >> . To overwrite the data currently in that file, you use > . In general, in bash and other shells, you escape special characters using \ . So, when you use echo foo >\> what you are saying is "redirect to a file called > ", but that is because you are escaping the second > . It is equivalent to using echo foo > \> which is the same as echo foo > '>' . So, yes, as Sirex said, that is likely a typo in your book.
"source": [
"https://unix.stackexchange.com/questions/64374",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
64,414 | I get the error while running a script which tries to send emails. send-mail: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
send-mail: warning: inet_protocols: configuring for IPv4 support only
postdrop: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
postdrop: warning: inet_protocols: configuring for IPv4 support only Could anyone say what the issue is? Do I require some permission? | To disable the message, go to /etc/postfix/main.cf and change from: inet_protocols = all to: inet_protocols = ipv4 This will only use ipv4 and the warning message will go away. You will have to issue a stop and start for postfix to register the change.
A simple reload will yield: mail postfix/master[8330]: reload -- version 2.9.6, configuration /etc/postfix
mail postfix/master[8330]: warning: ignoring inet_protocols parameter value change
mail postfix/master[8330]: warning: old value: "all", new value: "ipv4"
mail postfix/master[8330]: warning: to change inet_protocols, stop and start Postfix | {
"source": [
"https://unix.stackexchange.com/questions/64414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28428/"
]
} |
64,432 | url=http://www.foo.bar/file.ext; echo ${url##/*} I expected this code to print file.ext , but it prints the whole URL. Why? How can I extract the file name? | Because word has to match the part of the string to be trimmed: ${url##/*} tries to strip a leading pattern /* (a slash followed by anything), and your value starts with h , so nothing matches and nothing is removed. It should look like: $ url="http://www.foo.bar/file.ext"; echo "${url##*/}"
file.ext Thanks derobert, you steered me in the right direction. Further, as @frank-zdarsky mentioned, basename is in the GNU coreutils and should be available on most platforms as well. $ basename "http://www.foo.bar/file.ext"
file.ext | {
"source": [
"https://unix.stackexchange.com/questions/64432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27330/"
]
} |
64,623 | I usually run a few Java applications, one for a server running locally and another for an IDE like NetBeans. From time to time, after lots of redeployments, my server gets stuck on an OutOfMemoryException, so I need to kill the Java process in order to reboot it. So I do pkill -9 java but this also kills my running IDE, which I don't want. So how do I kill only the application linked to the running server and not the other ones? I assume that they all are running under the same process name, but there has to be some way to distinguish them. | To kill one of several similar processes, kill it by its process ID. To get the process IDs of the Java processes, run ps -A | grep java ; the output will give the list of Java processes running on your system. Note down the Process ID (PID) of the process you want to kill and run kill -9 PID
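Since ps -A shows only the bare command name ( java for all of them), matching on the full command line is what actually tells the server apart from the IDE; for example (the jar name is invented):
pgrep -af java
lists each PID with its full arguments (main class, jar, and so on) on systems whose pgrep supports -a ; otherwise ps -ef | grep java gives the same information. You can then kill by pattern with pkill -f 'my-server.jar' so that only the matching process is signalled. | {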
"source": [
"https://unix.stackexchange.com/questions/64623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16647/"
]
} |
64,628 | I am trying to see the content in a boot.img file from an Android image. I googled and found this article to extract system.img , but it doesn't work for boot.img . When trying to do this for boot.img , it is showing the following: Invalid sparse file format at header magi
Failed to read sparse file Is simg2img used only for extracting system.img ? If so, Is there any other method to extract boot.img ? If not, what is the problem for not extracting boot.img ? | boot.img is a small(ish) file that contain two main parts. * kernel(important for android)
* ramdisk (a core set of instructions & binaries) Unpacking boot.img involves the following steps: Download the tool using wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/android-serialport-api/android_bootimg_tools.tar.gz Extract the file using tar xvzf android_bootimg_tools.tar.gz . It contains two binaries: * unpackbootimg
* mkbootimg Then execute ./unpackbootimg -i <filename.img> -o <output_directory> The output_directory will contain: boot.img-zImage ----> kernel boot.img-ramdisk.gz ----> ramdisk We can also extract the ramdisk, using the following command: gunzip -c boot.img-ramdisk.gz | cpio -i After changing the files, we can pack them back into a boot.img using mkbootimg Have fun!
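A sketch of that repacking step — the exact mkbootimg flags vary by device (many need --base or --cmdline as well), so treat this as a starting point rather than a recipe:
find . | cpio -o -H newc | gzip > ../new-ramdisk.gz   # run inside the extracted ramdisk tree
mkbootimg --kernel boot.img-zImage --ramdisk new-ramdisk.gz -o new-boot.img
Only flash the result if you have a working recovery to fall back on. | {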
"source": [
"https://unix.stackexchange.com/questions/64628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31002/"
]
} |
64,657 | I want to grok how fast a particular file is growing. I could do watch ls -l file and deduce this information from the rate of change. Is there something similar that would directly output the rate of growth of the file over time? | tail -f file | pv > /dev/null But beware that it involves actually reading the file, so it might consume a bit more resources than something that watches just the file size.
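If you only care about the growth rate and not the contents, a cheaper sketch is to poll the size once a second (the -c %s format assumes GNU stat ):
watch -n1 'stat -c %s file'
Successive readings one second apart give you bytes per second directly. | {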
"source": [
"https://unix.stackexchange.com/questions/64657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1594/"
]
} |
64,672 | I know how to select a field from a line using the cut command. For instance, given the following data: a,b,c,d,e
f,g,h,i,j
k,l,m,n,o This command: cut -d, -f2 # returns the second field of the input line Returns: b
g
l My question: How can I select the second field counting from the end? In the previous example, the result would be: d
i
n | Reverse the input before and after cut with rev : <infile rev | cut -d, -f2 | rev Output: d
i
n
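An alternative that skips the double reversal: let awk count fields from the end ( NF is the number of fields on the line):
<infile awk -F, '{print $(NF-1)}'
This prints the same d , i , n and generalizes to any position counted from the end. | {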
"source": [
"https://unix.stackexchange.com/questions/64672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32300/"
]
} |
64,736 | Is it possible to combine output from these two commands? node ~/projects/trunk/index.js
python ~/projects/trunk/run.py run Neither command exits so I'm not sure how to do this. | You can combine two commands by grouping them with { } : { command1 & command2; } You can then redirect the group to a file (the last ; before } is mandatory, and so is the space after the opening brace): { command1 & command2; } > new_file If you want to separate STDOUT and STDERR into two files: { command1 & command2; } > STDOUT_file 2> STDERR_file If you don't want to run the first command in the background, use this form: { command1; command2; } or { command1 && command2; } to run the second command only if the first is a success
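Applied to the two commands from the question, that might look like this (the log file name is just an example):
{ node ~/projects/trunk/index.js & python ~/projects/trunk/run.py run; } > combined.log 2>&1
Since neither command exits, the group keeps running and both outputs end up interleaved in one file. | {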
"source": [
"https://unix.stackexchange.com/questions/64736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28012/"
]
} |
64,762 | I want to convert my shell scripts into binary executables so that nobody else can edit or read them. Is there a way to do that? | shc is what you're looking for. Get it here: shc . Extract it, cd into the directory, run make , and then ./shc -f SCRIPT . Done. Everything you need to do this you can find here: SHC Howto | {
"source": [
"https://unix.stackexchange.com/questions/64762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
64,861 | I have to iterate over numbers in various orders. I am able to display them in increasing order, even with steps, like: $ seq --separator="," 1 10
1,2,3,4,5,6,7,8,9,10
$ seq --separator="," 1 2 10
1,3,5,7,9 However, I am not able to display them in reverse order, neither continuously nor step-wise: $ seq --separator="," 10 1
$ seq --separator="," 10 2 1 No output for above commands. My shell details: $ bash --version
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc. How would I be able to display the numbers in descending order? | Use a negative increment: seq -s, 10 -2 1
10,8,6,4,2 | {
"source": [
"https://unix.stackexchange.com/questions/64861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
64,895 | I want to create a tar archive in a different directory rather than the current directory. I tried this command: tar czf file.tar.gz file1 -C /var/www/ but it creates the archive in the current directory. Why? | The easy way, if you don't particularly need to use -C to tell tar to change to some other directory, is to simply specify the full path to the archive on the command line. Then you can be in whatever directory you prefer to create the directory structure that you want inside the archive. The following will create the archive /var/www/file.tar.gz and put file1 from the current directory (whatever that happens to be) in it, with no in-archive path information. tar czf /var/www/file.tar.gz file1 The path (to either the archive, the constituent files, or both) can of course also be relative. If file1 is in /tmp , you are in /var/spool and want to create the archive in /var/www , you could use something like: tar czf ../www/file1.tar.gz /tmp/file1 There are a million variations on the theme, but this should get you started. Add the v flag if you want to see what tar actually does.
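As for why the original command misbehaved: with GNU tar, -C only affects the file-name arguments that come after it, and the archive named with f is opened relative to the starting directory — so in tar czf file.tar.gz file1 -C /var/www/ the -C comes too late to change anything. If you do want -C , it would look something like:
tar czf /var/www/file.tar.gz -C /tmp file1
which writes the archive to /var/www and stores /tmp/file1 in it under the plain name file1 . | {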
"source": [
"https://unix.stackexchange.com/questions/64895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20188/"
]
} |
64,927 | In order to get coloured output from all git commands, I set the following: git config --global color.ui true However, this produces an output like this for git diff , git log whereas commands like git status display fine Why is it not recognizing the escaped color codes in only some of the commands and how can I fix it? I'm using iTerm 2 (terminal type xterm-256color ) on OS X 10.8.2 and zsh as my shell zsh --version
zsh 5.0.0 (x86_64-apple-darwin12.0.0)
git --version
git version 1.7.9.6 (Apple Git-31.1) | You're seeing the escape sequences that tell the terminal to change colors displayed with the escape character shown as ESC , whereas the desired behavior would be that the escape sequences have their intended effect. Commands such as git diff and git log pipe their output into a pager , less by default. Git tries to tell less to allow control characters to have their control effect, but this isn't working for you. If less is your pager but you have the environment variable LESS set to a value that doesn't include -r or -R , git is unable to tell less to display colors. It normally passes LESS=-FRSX , but not if LESS is already set in the environment. A fix is to explicitly pass the -R option to tell less to display colors when invoked by git: git config --global core.pager 'less -R' If less isn't your pager, either switch to less or figure out how to make your pager display colors. If you don't want git to display colors when it's invoking a pager, set color.ui to auto instead of true .
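Since the root cause is a pre-set LESS variable lacking -R , an equivalent fix is to repair the variable itself in your shell startup file ( ~/.zshrc here, since the asker uses zsh):
export LESS="-R"
With that in place, git's default pager invocation shows colors without touching core.pager . | {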
"source": [
"https://unix.stackexchange.com/questions/64927",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
64,998 | My understanding is that hard links include a copy of the original file, and that I could delete a hard-linked file in one location, and it would still exist in the other location. If that's the case, why would I want to use hard links at all? Why not just have two separate files? | If you copy a file, it will duplicate the content. So if you modify the content of a single file, that has no effect on the other one. If you make a hardlink, that will create a file pointing to the same content. So if you change the content of either of the files, the change will be seen on both.
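A quick demonstration, run in a scratch directory:
echo hello > file
ln file hardlink        # create a second name for the same inode
echo world >> file
cat hardlink            # prints both lines — the two names share one content
ls -li file hardlink    # same inode number, link count 2
Deleting either name leaves the data reachable through the other; the blocks are freed only when the last link goes. | {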
"source": [
"https://unix.stackexchange.com/questions/64998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
65,013 | This is the contents of the file /etc/aliases on my Debian (Wheezy) server, as it is: # /etc/aliases
mailer-daemon: postmaster
postmaster: root
nobody: root
hostmaster: root
usenet: root
news: root
webmaster: root
www: root
ftp: root
abuse: root
noc: root
security: root
root: t I noticed that, by default, my server sends emails from what looks like [email protected] . So, which one of the rules above governs this? postmaster: root ? So, the rules in /etc/aliases are used to assign users to specific departments? For example, all emails to be sent/received for abuse will be delivered from/to [email protected] (which would be the default email for root, unless there's an alias). Correct? Can someone please explain what each of these is really meant for? -- mailer-daemon , postmaster , nobody , hostmaster , usenet , news , webmaster , www , ftp , abuse , noc , security , root . I mean, a description like " mailer-daemon is for sending email delivery errors, but not really meant for receiving emails. security for where people should contact you about security issues" , or something like that. | The /etc/aliases file is part of sendmail . It specifies which account mail sent to an alias should really be delivered to. For example, mail to the ftp account would be sent to root's mailbox in the configuration you show. Multiple recipients can be specified as comma-separated lists, too. Redirecting mail to users isn't all that can be done. Mail can be piped to programs, too, or simply directed into a file of your choice. The following would "bit-bucket" all mail for the user somebody : somebody: /dev/null Modifications to the /etc/aliases file are not complete until the newaliases command is run to build /etc/aliases.db . It is this latter form that sendmail actually uses.
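For example, a pipe alias looks like this (the program path is hypothetical — substitute your own handler):
support: "|/usr/local/bin/handle-support-mail"
As with any edit to the file, it takes effect only after newaliases has rebuilt the database. | {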
"source": [
"https://unix.stackexchange.com/questions/65013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9610/"
]
} |
65,068 | I have noticed that some Linux servers on the network take a long time to connect to using ssh. There are two situations I have faced: on some servers it sometimes takes a long time to ask for the password, but on other servers, when I enter the password it doesn't respond, and after some 20 or 30 seconds it just says Connection Closed . Details for the first case: debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure. Minor code may provide more information
Cannot determine realm for numeric host address
debug1: Unspecified GSS failure. Minor code may provide more information
Cannot determine realm for numeric host address
debug1: Unspecified GSS failure. Minor code may provide more information
debug1: Unspecified GSS failure. Minor code may provide more information
Cannot determine realm for numeric host address
debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Trying private key: /home/umairmustafa/.ssh/id_rsa
debug1: Trying private key: /home/umairmustafa/.ssh/id_dsa
debug1: Trying private key: /home/umairmustafa/.ssh/id_ecdsa
debug2: we did not send a packet, disable method
debug1: Next authentication method: password | I had this same problem just this morning... Edit your /etc/ssh/sshd_config to set GSSAPIAuthentication no
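You can confirm GSSAPI is the culprit from the client side first, without touching the server, by disabling it for a single connection:
ssh -o GSSAPIAuthentication=no user@host
If that logs in quickly, make it permanent in your ~/.ssh/config (or server-wide in sshd_config as above). | {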
"source": [
"https://unix.stackexchange.com/questions/65068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19946/"
]
} |
65,075 | When accidentally pasting a file into the shell it puts a ton of ugly nonsense entries in the bash history. Is there a clean way to remove those entries? Obviously I could close the shell and edit the .bash_history file manually but maybe there's some kind of API available to modify the history of the current shell? | As of bash-5.0-alpha , the history command now takes a range for the delete ( -d ) option. See rastafile's answer . For older versions, workaround below. You can use history -d offset builtin to delete a specific line from the current shell's history, or history -c to clear the whole history. It's not really practical if you want to remove a range of lines, since it only takes one offset as an argument, but you could wrap it in a function with a loop. rmhist() {
start=$1
end=$2
count=$(( end - start ))
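# deleting at $start the whole time works because each removal shifts the later entries down by one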
while [ $count -ge 0 ] ; do
history -d $start
((count--))
done
} Call it with rmhist first_line_to_delete last_line_to_delete . (Line numbers according to the output of history .) (Use history -w to force a write to the history file.) | {
"source": [
"https://unix.stackexchange.com/questions/65075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2505/"
]
} |
65,077 | I'm having problems when copying large files using nautilus (it gets stuck). I need to copy using cp . I would like to know if there are any parameters that show the % copied and also the transfer speed. | rsync version 3.0.9+ has a --progress flag, which shows progress per file: rsync --progress SOURCE DEST rsync 3.1.0+ has a --info=progress2 flag, which shows the overall percentage: rsync --info=progress2 SOURCE DEST
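As a drop-in replacement for the cp call, that might look like the following ( -a preserves attributes, -h prints human-readable sizes):
rsync -ah --info=progress2 /path/to/source /path/to/destination
The status line shows the overall percentage and the transfer rate as it copies. | {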
"source": [
"https://unix.stackexchange.com/questions/65077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27807/"
]
} |
65,181 | I would like to recursively list the files that contain the given word, each file only once. Example: checking for the word 'check', what I normally do is a grep: $ grep check * -R But as there are many occurrences of this word, I get a lot of output. So I just need to list the filenames that contain the given search word. I guess some trick with find and xargs would suffice here, but I'm not sure. Any ideas? | Use the -l or --files-with-matches option, which is documented as follows: Suppress normal output; instead print the name of each input file
from which output would normally have been printed. The scanning
will stop on the first match. ( -l is specified by POSIX.) So, for your example you can use the following: $ grep check * -lR | {
"source": [
"https://unix.stackexchange.com/questions/65181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
65,235 | Node.js is very popular these days and I've been writing some scripts on it. Unfortunately, compatibility is a problem. Officially, the Node.js interpreter is supposed to be called node , but Debian and Ubuntu ship an executable called nodejs instead. I want portable scripts that Node.js can work with in as many situations as possible. Assuming the filename is foo.js , I really want the script to run in two ways: ./foo.js runs the script if either node or nodejs is in $PATH . node foo.js also runs the script (assuming the interpreter is called node ) Note: The answers by xavierm02 and myself are two variations of a polyglot script. I'm still interested in a pure shebang solution, if such exists. | The best I have come up with is this "two-line shebang" that really is a polyglot (Bourne shell / Node.js) script: #!/bin/sh
':' //; exec "$(command -v nodejs || command -v node)" "$0" "$@"
console.log('Hello world!'); The first line is, obviously, a Bourne shell shebang. Node.js bypasses any shebang that it finds, so this is a valid javascript file as far as Node.js is concerned. The second line calls the shell no-op : with the argument // and then executes nodejs or node with the name of this file as parameter. command -v is used instead of which for portability. The command substitution syntax $(...) isn't strictly Bourne, so opt for backticks if you run this in the 1980s. Node.js just evaluates the string ':' , which is like a no-op, and the rest of the line is parsed as a comment. The rest of the file is just plain old javascript. The subshell quits after the exec on second line is completed, so the rest of the file is never read by the shell. Thanks to xavierm02 for inspiration and all the commenters for additional information! | {
"source": [
"https://unix.stackexchange.com/questions/65235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6775/"
]
} |
65,246 | I have a set of nice wireless headphones which I use from time to time, in addition to my speakers and normal microphone. I'd like to write a script to switch between one input and output source and another, essentially a switch between my headphones and my speakers+microphone. I'd like to change between this: ...and this: Is there a way for me script a transfer between the two inputs and outputs? Essentially I'm looking for something like this: CURRENT_INPUT="$(get-current-input-name)"
CURRENT_OUTPUT="$(get-current-output-name)"
if [ "$CURRENT_INPUT" == "Vengeance 2000" ]; then
set-current-input "HD Pro Webcam C920"
else
set-current-input "Vengeance 2000"
fi
if ["$CURRENT_OUTPUT" == "Vengeance 2000" ]; then
set-current-output "Built-in Audio"
else
set-current-output "Vengeance 2000"
fi Is there a way to script this? | As @Teresa-e-Junior pointed out pactl is the tool to use: First of all we might want to get the IDs of our PA sinks. On my system this is what I get: $ pactl list short sinks
0 alsa_output.pci-0000_01_00.1.hdmi-surround module-alsa-card.c s16le 6ch 44100Hz SUSPENDED
1 alsa_output.pci-0000_00_1b.0.analog-stereo module-alsa-card.c s16le 2ch 44100Hz RUNNING Sink 1 is currently my default sink. But now I want all my current and future streams to be played via HDMI (i.e. sink 0). There is a command to set the default sink for PulseAudio, but it doesn't seem to have any effect on my PC: $ pacmd set-default-sink 0 #doesn't work on my PC :( Instead, new streams seem to be connected to the sink that had a stream moved to it most recently. So let's tell pactl to move all currently playing streams to sink 0 .
We'll first need to list them: $ pactl list short sink-inputs
290 1 176 protocol-native.c float32le 2ch 44100Hz
295 1 195 protocol-native.c float32le 2ch 44100Hz Ok, we've got two streams (IDs 290 and 295) that are both attached to sink 1 . Let's move them to sink 0 : $ pactl move-sink-input 290 0
$ pactl move-sink-input 295 0 So, that should be it. Now we just have to make a script that does the work for us: #!/bin/bash
if [ -z "$1" ]; then
echo "Usage: $0 <sinkId/sinkName>" >&2
echo "Valid sinks:" >&2
pactl list short sinks >&2
exit 1
fi
newSink="$1"
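# move every stream that is currently playing; per the note above, freshly started streams will then follow to this sink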
pactl list short sink-inputs|while read stream; do
streamId=$(echo $stream|cut '-d ' -f1)
echo "moving stream $streamId"
pactl move-sink-input "$streamId" "$newSink"
done You can call it with either a sink ID or a sink name as parameter (i.e. either 0 or something like alsa_output.pci-0000_01_00.1.hdmi-surround ). Now you could attach this script to a udev event or key shortcut. | {
"source": [
"https://unix.stackexchange.com/questions/65246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
65,280 | I have a context where I need to convert binary to hexadecimal and decimal and viceversa in a shell script. Can someone suggest me a tool for this? | It's fairly straightforward to do the conversion from binary in pure bash ( echo and printf are builtins): Binary to decimal $ echo "$((2#101010101))"
341 Binary to hexadecimal $ printf '%x\n' "$((2#101010101))"
155 Going back to binary using bash alone is somewhat more complex, so I suggest you see the other answers for solutions to that.
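For completeness, one simple route back to binary that doesn't need bash at all, assuming bc is installed:
$ echo 'obase=2; 341' | bc
101010101 | {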
"source": [
"https://unix.stackexchange.com/questions/65280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32658/"
]
} |
65,315 | I know of course that cat logfile.txt | wc -l
120 will tell me the number of lines in a file. Whereas tail -f logfile.txt will show me the new lines that another program writes to logfile.txt . Is it possible to combine both so that I get a continuously updating line count of logfile.txt with standard text utilities? I do know about watch wc -l logfile.txt but I do not want to re-count the whole file each time; that seems wasteful. One would need an append-only count every second or so, and probably a \r instead of a \n at the end of the line. | Maybe: tail -n +1 -f file | awk '{printf "\r%lu", NR}' Beware that it would output a number for every line of input (though overwriting the previous value if sent to a terminal). Or you can implement the tail -f by hand in shell: n=0
while :; do
n=$(($n + $(wc -l)))
printf '\r%s' "$n"
sleep 1
done < file (note that it runs up to one wc and one sleep command per second which not all shells have built in. With ksh93 while sleep is builtin, to get a built in wc (at least on Debian), you need to add /opt/ast/bin at the front of $PATH (regardless of whether that directory exists or not) or use command /opt/ast/bin/wc (don't ask...)). You could use pv , as in: tail -n +1 -f file | pv -bl > /dev/null But beware that it adds k , M ... suffixes when the number is over 1000 (and there doesn't seem to be a way around that ). | {
"source": [
"https://unix.stackexchange.com/questions/65315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5590/"
]
} |
65,349 | The man pages for badblocks do not seem to mention what the three numbers in the output mean in particular: Pass completed, 7 bad blocks found (7/0/0 errors) Pass completed, 120 bad blocks found (0/0/120 errors) I'm guessing it's "Errors while reading/writing/comparing". Can someone enlighten me? | Your guess is correct. The source code looks like this: if (v_flag)
fprintf(stderr,
_("Pass completed, %u bad blocks found. (%d/%d/%d errors)\n"),
bb_count, num_read_errors, num_write_errors, num_corruption_errors); So it's read/write/corruption errors. And corruption means a mismatch when comparing with previously written data: if (t_flag) {
/* test the comparison between all the
blocks successfully read */
int i;
for (i = 0; i < got; ++i)
if (memcmp (blkbuf+i*block_size,
blkbuf+blocks_at_once*block_size,
block_size))
bb_count += bb_output(currently_testing + i, CORRUPTION_ERROR);
} | {
"source": [
"https://unix.stackexchange.com/questions/65349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2579/"
]
} |
65,475 | I suddenly came across the term "ephemeral port" in a Linux article that I was reading, but the author did not mention what it is. What is an ephemeral port in UNIX? | In essence an ephemeral port is a random high port used to communicate with a known server port. For example, if I ssh from my machine to a server the connection would look like: 192.168.1.102:37852 ---> 192.168.1.105:22 22 is the standard SSH port I'm connecting to on the remote machine; 37852 is the ephemeral port used on my local machine.
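On Linux you can see (and tune) the pool these are drawn from via procfs; the exact bounds vary by kernel and distribution:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768   60999 | {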
"source": [
"https://unix.stackexchange.com/questions/65475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23755/"
]
} |
65,507 | Is it possible to swap the Left Shift and the Left CTRL keys using setxkbmap instead of xmodmap ? EDIT I have switched to Fcitx , which works way much better with my keyboard layout and customized keymap than IBus in every respect. I highly recommend it. | xmodmap is obsolete; so indeed it should be done with the xkb tools. The swap you want seems not to be included by default with X11 files; so you have to write it yourself. The page https://web.archive.org/web/20170825051821/http://madduck.net/docs/extending-xkb/ helped me to understand and find a way to do it. Create a file ~/.xkb/keymap/mykbd where you put the output of setxkbmap , it will be your base keyboard definition; eg: setxkbmap -print > ~/.xkb/keymap/mykbd then, create a symbols file to define your key swapping, put it for example in ~/.xkb/symbols/myswap there, put the following lines: partial modifier_keys
xkb_symbols "swap_l_shift_ctrl" {
replace key <LCTL> { [ Shift_L ] };
replace key <LFSH> { [ Control_L ] };
}; then, edit the ~/.xkb/keymap/mykbd file, and change the xkb_symbols line to add +myswap(swap_l_shift_ctrl) finally, you can load it with xkbcomp -I$HOME/.xkb ~/.xkb/keymap/mykbd $DISPLAY (you cannot use "~" for the -I parameter). It will probably spit a lot of warnings about undefined symbols for some rare keys, but you can ignore them (e.g., redirect stderr to /dev/null: 2> /dev/null ). If you want to be able to easily switch between a normal layout and your inverted ctrl/shift one, just create another file under ~/.xkb/keymap/ , without the extra "myswap" option, and load it with xkbcomp . You can make two small scripts to load them. | {
"source": [
"https://unix.stackexchange.com/questions/65507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32799/"
]
} |
65,510 | I have a directory full of text files. My goal is to append text to the beginning and end of all of them. The text that goes at the beginning and end is the same for each file. Based on code I got from the web, this is the code for appending to the beginning of the file: echo -e 'var language = {\n$(cat $BASEDIR/Translations/Javascript/*.txt)' > $BASEDIR/Translations/Javascript/*.txt This is the code for appending to the end of the file. The goal is to add the text }; at the end of each file: echo "};" >> $BASEDIR/Translations/Javascript/*.txt The examples I drew from were for acting on individual files. I thought I'd try acting on multiple files using the wildcard, *.txt . I might be making other mistakes as well. In any case, how do I append text to the beginning and end of multiple files? | To prepend text to a file you can use (with the GNU implementation of sed ): sed -i '1i some string' file Appending text is as simple as echo 'Some other string' >> file The last thing to do is to put that into a loop which iterates over all the
files you intend to edit: for file in *.txt; do
sed -i '1i Some string' "$file" &&
echo 'Some other string' >> "$file"
done
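With GNU sed both edits can even be folded into one command — $a appends after the last line, the counterpart of 1i :
sed -i -e '1i Some string' -e '$a Some other string' "$file"
That's one pass per file instead of two. | {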
"source": [
"https://unix.stackexchange.com/questions/65510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17279/"
]
} |
65,532 | I've run across some scripting like this recently: ( set -e ; do-stuff; do-more-stuff; ) || echo failed This looks fine to me, but it does not work! The set -e does not apply, when you add the || . Without that, it works fine: $ ( set -e; false; echo passed; ); echo $?
1 However, if I add the || , the set -e is ignored: $ ( set -e; false; echo passed; ) || echo failed
passed Using a real, separate shell works as expected: $ sh -c 'set -e; false; echo passed;' || echo failed
failed I've tried this in multiple different shells (bash, dash, ksh93) and all behave the same way, so it's not a bug. Can someone explain this? | According to this thread , it's the behavior POSIX specifies for using " set -e " in a subshell. (I was surprised as well.) First, the behavior: The -e setting shall be ignored when executing the compound
list following the while, until, if, or elif reserved word,
a pipeline beginning with the ! reserved word, or any
command of an AND-OR list other than the last. The second post notes, In summary, shouldn't set -e in (subshell code) operate independently
of the surrounding context? No. The POSIX description is clear that surrounding context affects
whether set -e is ignored in a subshell. There's a little more in the fourth post, also by Eric Blake, Point 3 is not requiring subshells to override the contexts where set
-e is ignored. That is, once you are in a context where -e is ignored,
there is nothing you can do to get -e obeyed again, not even a subshell. $ bash -c 'set -e; if (set -e; false; echo hi); then :; fi; echo $?'
hi
0 Even though we called set -e twice (both in the parent and in the
subshell), the fact that the subshell exists in a context where -e is
ignored (the condition of an if statement), there is nothing we can do
in the subshell to re-enable -e . This behavior is definitely surprising. It is counter-intuitive: one would expect the re-enabling of set -e to have an effect, and that the surrounding context would not take precedent; further, the wording of the POSIX standard does not make this particularly clear. If you read it in the context where the command is failing, the rule does not apply: it only applies in the surrounding context, however, it applies to it completely. | {
"source": [
"https://unix.stackexchange.com/questions/65532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16460/"
]
} |
65,595 | I want to know whether a disk is a solid-state drive or hard disk. lshw is not installed. I do yum install lshw and it says there is no package named lshw. I do not know which version of http://pkgs.repoforge.org/lshw/ is suitable for my CentOS. I searched the net and found nothing that explains how to know whether a drive is SSD or HDD. Should I just format them first? Result of fdisk -l : Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00074f7d
Device Boot Start End Blocks Id System
/dev/sda1 * 1 14 103424 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 14 536 4194304 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 536 14594 112921600 83 Linux
Disk /dev/sdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000 | Linux automatically detects SSDs; since kernel version 2.6.29, you can verify sda with: cat /sys/block/sda/queue/rotational You should get 1 for hard disks and 0 for an SSD. It will probably not work if your disk is a logical device emulated by hardware (like a RAID controller). See this answer for more information about SSD partitioning, filesystems...
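To survey all disks at once — if util-linux's lsblk is available on your CentOS — the ROTA column exposes the same flag:
lsblk -d -o NAME,ROTA
1 means rotational (HDD), 0 means non-rotational (SSD). | {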
"source": [
"https://unix.stackexchange.com/questions/65595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29357/"
]
} |
65,624 | I've read that you need double quotes for expanding variables, e.g. if [ -n "$test" ]; then echo '$test ok'; else echo '$test null'; fi will work as expected, while if [ -n $test ]; then echo '$test ok'; else echo '$test null'; fi will always say $test ok even if $test is null. but then why don't we need quotes in echo $test ? | You always need quotes around variables in all list contexts, that is everywhere the variable may be expanded to multiple values unless you do want the 3 side effects of leaving a variable unquoted. list contexts include arguments to simple commands like [ or echo , the for i in <here> , assignments to arrays... There are other contexts where variables also need to be quoted. Best is to always quote variables unless you've got a very good reason not to. Think of the absence of quotes (in list contexts) as the split+glob operator. As if echo $test was echo glob(split("$test")) . The shell behaviour is confusing to most people because in most other languages, you put quotes around fixed strings, like puts("foo") , and not around variables (like puts(var) ) while in shell it's the other way round: everything is string in shell, so putting quotes around everything would be cumbersome, you echo test , you don't need to "echo" "test" . In shell, quotes are used for something else: prevent some special meaning of some characters and/or affect the behaviour of some expansions. In [ -n $test ] or echo $test , the shell will split $test (on blanks by default), and then perform filename generation (expand all the * , '?'... patterns to the list of matching files), and then pass that list of arguments to the [ or echo commands. Again, think of it as "[" "-n" glob(split("$test")) "]" . If $test is empty or contains only blanks (spc, tab, nl), then the split+glob operator will return an empty list, so the [ -n $test ] will be "[" "-n" "]" , which is a test to check wheter "-n" is the empty string or not. But imagine what would have happened if $test was "*" or "= foo"... In [ -n "$test" ] , [ is passed the four arguments "[" , "-n" , "" and "]" (without the quotes), which is what we want. Whether it's echo or [ makes no difference, it's just that echo outputs the same thing whether it's passed an empty argument or no argument at all. See also this answer to a similar question for more details on the [ command and the [[...]] construct. | {
"source": [
"https://unix.stackexchange.com/questions/65624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17175/"
]
} |
65,700 | I've seen people mention in other answers that it's a bad idea to include the current working directory (' . ') in your $PATH environment variable, but haven't been able to find a question specifically addressing the issue. So, why shouldn't I add . to my path? And if despite all warnings I do it anyway, what do I have to watch out for? Is it safer to add it to the end than to the start? | If you're the only user on the machine it's okay, as long as you know what you're doing. The general concern is that by having your current directory in PATH , you cannot see commands as a constant list. If you need to run a script/program from your current directory, you can always explicitly run it by prepending ./ to its name (you're telling the system "I want to run this file from my current directory"). Say, now you have all these little scripts all over your filesystem; one day you'll run the wrong one for sure. So, having your PATH as a predefined list of static paths is all about order and saving oneself from a potential problem. However, if you're going to add . to your PATH , I suggest appending it to the end of the list ( export PATH=$PATH:. ). At least you won't override system-wide binaries this way. If you're root on a system exposed to other users' accounts, having . in PATH is a huge security risk: you can cd to some user's directory, and unintentionally run a malicious script there only because you mistyped a thing or the script has the same name as a system-wide binary. | {
"source": [
"https://unix.stackexchange.com/questions/65700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3629/"
]
} |
65,803 | I have heard that printf is better than echo . I can recall only one instance from my experience where I had to use printf because echo didn't work for feeding some text into some program on RHEL 5.8 but printf did. But apparently, there are other differences, and I would like to inquire what they are as well as if there are specific cases when to use one vs the other. | Basically, it's a portability (and reliability) issue. Initially, echo didn't accept any option and didn't expand anything. All it was doing was outputting its arguments separated by a space character and terminated by a newline character. Now, someone thought it would be nice if we could do things like echo "\n\t" to output newline or tab characters, or have an option not to output the trailing newline character. They then thought harder but instead of adding that functionality to the shell (like perl where inside double quotes, \t actually means a tab character), they added it to echo . David Korn realized the mistake and introduced a new form of shell quotes: $'...' which was later copied by bash and zsh but it was far too late by that time. Now when a standard UNIX echo receives an argument which contains the two characters \ and t , instead of outputting them, it outputs a tab character. And as soon as it sees \c in an argument, it stops outputting (so the trailing newline is not output either). Other shells/Unix vendors/versions chose to do it differently: they added a -e option to expand escape sequences, and a -n option to not output the trailing newline. Some have a -E to disable escape sequences, some have -n but not -e , and the list of escape sequences supported by one echo implementation is not necessarily the same as that supported by another. Sven Mascheck has a nice page that shows the extent of the problem . On those echo implementations that support options, there's generally no support of a -- to mark the end of options (the echo builtin of some non-Bourne-like shells do, and zsh supports - for that though), so for instance, it's difficult to output "-n" with echo in many shells. On some shells like bash ¹ or ksh93 ² or yash ( $ECHO_STYLE variable), the behaviour even depends on how the shell was compiled or the environment (GNU echo 's behaviour will also change if $POSIXLY_CORRECT is in the environment and with the version 4 , zsh 's with its bsd_echo option, some pdksh-based with their posix option or whether they're called as sh or not). So two bash echo s, even from the same version of bash are not guaranteed to behave the same. POSIX says: if the first argument is -n or any argument contains backslashes, then the behaviour is unspecified . bash echo in that regard is not POSIX in that for instance echo -e is not outputting -e<newline> as POSIX requires. The UNIX specification is stricter, it prohibits -n and requires the expansion of some escape sequences including the \c one to stop outputting. Those specifications don't really come to the rescue here given that many implementations are not compliant. Even some certified systems like macOS 5 are not compliant. To really represent the current reality, POSIX should actually say : if the first argument matches the ^-([eEn]*|-help|-version)$ extended regexp or any argument contains backslashes (or characters whose encoding contains the encoding of the backslash character like α in locales using the BIG5 charset), then the behaviour is unspecified. 
All in all, you don't know what echo "$var" will output unless you can make sure that $var doesn't contain backslash characters and doesn't start with - . The POSIX specification actually does tell us to use printf instead in that case. So what that means is that you can't use echo to display uncontrolled data. In other words, if you're writing a script and it is taking external input (from the user as arguments, or file names from the file system...), you can't use echo to display it. This is OK: echo >&2 Invalid file. This is not: echo >&2 "Invalid file: $file" (Though it will work OK with some (non-UNIX compliant) echo implementations like bash 's when the xpg_echo option has not been enabled in one way or another like at compilation time or via the environment). file=$(echo "$var" | tr ' ' _) is not OK in most implementations (exceptions being yash with ECHO_STYLE=raw (with the caveat that yash 's variables can't hold arbitrary sequences of bytes so not arbitrary file names) and zsh 's echo -E - "$var" 6 ). printf , on the other hand, is more reliable, at least when it's limited to the basic usage of echo . printf '%s\n' "$var" Will output the content of $var followed by a newline character regardless of what character it may contain. printf '%s' "$var" Will output it without the trailing newline character. Now, there also are differences between printf implementations. There's a core of features that is specified by POSIX, but then there are a lot of extensions. For instance, some support a %q to quote the arguments but how it's done varies from shell to shell, some support \uxxxx for Unicode characters. The behaviour varies for printf '%10s\n' "$var" in multi-byte locales, there are at least three different outcomes for printf %b '\123' But in the end, if you stick to the POSIX feature set of printf and don't try doing anything too fancy with it, you're out of trouble. But remember the first argument is the format, so shouldn't contain variable/uncontrolled data. A more reliable echo can be implemented using printf , like: echo() ( # subshell for local scope for $IFS
IFS=" " # needed for "$*"
printf '%s\n' "$*"
)
echo_n() (
IFS=" "
printf %s "$*"
)
echo_e() (
IFS=" "
printf '%b\n' "$*"
) The subshell (which implies spawning an extra process in most shell implementations) can be avoided using local IFS with many shells, or by writing it like this: echo() {
if [ "$#" -gt 0 ]; then
printf %s "$1"
shift
if [ "$#" -gt 0 ]; then
printf ' %s' "$@"
fi
fi
printf '\n'
} In ksh88 and pdksh and some of its derivatives, printf is not built-in. There, you may prefer using print -r -- (for echo ) and print -rn -- (for echo -n / \c ) which print their arguments space-separated (and followed by a newline without -n ) without alteration (also works in zsh ). Notes 1. how bash 's echo behaviour can be altered. With bash , at run time, there are two things that control the behaviour of echo (beside enable -n echo or redefining echo as a function or alias):
the xpg_echo bash option and whether bash is in posix mode. posix mode can be enabled if bash is called as sh or if POSIXLY_CORRECT is in the environment or with the posix option: The default behaviour on most systems: $ bash -c 'echo -n "\0101"'
\0101% # the % here denotes the absence of newline character xpg_echo expands sequences as UNIX requires: $ BASHOPTS=xpg_echo bash -c 'echo "\0101"'
A It still honours -n and -e (and -E ): $ BASHOPTS=xpg_echo bash -c 'echo -n "\0101"'
A% With xpg_echo and POSIX mode: $ env BASHOPTS=xpg_echo POSIXLY_CORRECT=1 bash -c 'echo -n "\0101"'
-n A
$ env BASHOPTS=xpg_echo sh -c 'echo -n "\0101"' # (where sh is a symlink to bash)
-n A
$ env BASHOPTS=xpg_echo SHELLOPTS=posix bash -c 'echo -n "\0101"'
-n A This time, bash is both POSIX and UNIX conformant. Note that in POSIX mode, bash is still not POSIX conformant as it doesn't output -e in: $ env SHELLOPTS=posix bash -c 'echo -e'
$ The default values for xpg_echo and posix can be defined at compilation time with the --enable-xpg-echo-default and --enable-strict-posix-default options to the configure script. That's typically what recent versions of OS/X do to build their /bin/sh . No Unix/Linux implementation/distribution in their right mind would typically do that for /bin/bash though . Actually, that's not true, the /bin/bash that Oracle ships with Solaris 11 (in an optional package) seems to be built with --enable-xpg-echo-default (that was not the case in Solaris 10). 2. How ksh93 's echo behaviour can be altered In ksh93 , whether echo expands escape sequences or not and recognises options depends on the content of the $PATH and/or $_AST_FEATURES environment variables. If $PATH contains a component that contains /5bin or /xpg before the /bin or /usr/bin component then it behave the SysV/UNIX way (expands sequences, doesn't accept options). If it finds /ucb or /bsd first or if $_AST_FEATURES 7 contains UNIVERSE = ucb , then it behaves the BSD 3 way ( -e to enable expansion, recognises -n ). The default is system dependent, BSD on Debian (see the output of builtin getconf; getconf UNIVERSE in recent versions of ksh93): $ ksh93 -c 'echo -n' # default -> BSD (on Debian)
$ PATH=/foo/xpgbar:$PATH ksh93 -c 'echo -n' # /xpg before /bin or /usr/bin -> XPG
-n
$ PATH=/5binary:$PATH ksh93 -c 'echo -n' # /5bin before /bin or /usr/bin -> XPG
-n
$ PATH=/5binary:$PATH _AST_FEATURES='UNIVERSE = ucb' ksh93 -c 'echo -n' # -> BSD
$ PATH=/ucb:/foo/xpgbar:$PATH ksh93 -c 'echo -n' # /ucb first -> BSD
$ PATH=/bin:/foo/xpgbar:$PATH ksh93 -c 'echo -n' # /bin before /xpg -> default -> BSD 3. BSD for echo -e? The reference to BSD for the handling of the -e option is a bit misleading here. Most of those different and incompatible echo behaviours were all introduced at AT&T: \n , \0ooo , \c in Programmer's Work Bench UNIX (based on Unix V6), and the rest ( \b , \r ...) in Unix System III Ref . -n in Unix V7 (by Dennis Ritchie Ref ) -e in Unix V8 (by Dennis Ritchie Ref ) -E itself possibly initially came from bash (CWRU/CWRU.chlog in version 1.13.5 mentions Brian Fox adding it on 1992-10-18, GNU echo copying it shortly after in sh-utils-1.8 released 10 days later) While the echo builtin of the sh of BSD has supported -e since the day they started using the Almquist shell for it in the early 90s, the standalone echo utility to this day doesn't support it there ( FreeBSD echo still doesn't support -e , though it does support -n like Unix V7 (and also \c but only at the end of the last argument)). The handling of -e was added to ksh93 's echo when in the BSD universe in the ksh93r version released in 2006 and can be disabled at compilation time. 4. GNU echo change of behaviour in 8.31 Since coreutils 8.31 (and this commit ), GNU echo now expands escape sequences by default when POSIXLY_CORRECT is in the environment, to match the behaviour of bash -o posix -O xpg_echo 's echo builtin (see bug report ). 5. macOS echo Most versions of macOS have received UNIX certification from the OpenGroup . Their sh builtin echo is compliant as it's bash (a very old version) built with xpg_echo enabled by default, but their stand-alone echo utility is not. env echo -n outputs nothing instead of -n<newline> , env echo '\n' outputs \n<newline> instead of <newline><newline> . That /bin/echo is the one from FreeBSD which suppresses newline output if the first argument is -n or (since 1995) if the last argument ends in \c , but doesn't support any other backslash sequences required by UNIX, not even \\ . 6. echo implementations that can output arbitrary data verbatim Strictly speaking, you could also count that FreeBSD/macOS /bin/echo above (not their shell's echo builtin) where zsh 's echo -E - "$var" or yash 's ECHO_STYLE=raw echo "$var" ( printf '%s\n' "$var" ) could be written: /bin/echo "$var
\c" And zsh 's echo -nE - "$var" ( printf %s "$var" ) could be written /bin/echo "$var\c" Implementations that support -E and -n (or can be configured to) can also do: echo -nE "$var
" For the equivalent of printf '%s\n' "$var" . 7. _AST_FEATURES and the AST UNIVERSE The _AST_FEATURES is not meant to be manipulated directly, it is used to propagate AST configuration settings across command execution. The configuration is meant to be done via the (undocumented) astgetconf() API. Inside ksh93 , the getconf builtin (enabled with builtin getconf or by invoking command /opt/ast/bin/getconf ) is the interface to astgetconf() For instance, you'd do builtin getconf; getconf UNIVERSE = att to change the UNIVERSE setting to att (causing echo to behave the SysV way among other things). After doing that, you'll notice the $_AST_FEATURES environment variable contains UNIVERSE = att . | {
"source": [
"https://unix.stackexchange.com/questions/65803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23944/"
]
} |
65,885 | How can I interactively execute a command in Linux (zsh, if it matters) with a different umask from the default, for one command only? Perhaps a combination of commands combined in a single line? The new umask should apply only to that command and return to its default value for the next command entered. Ideally the command would be agnostic to the default umask in force before its entered (in other words, the default umask doesn't have to specified). | Start a subshell : (umask 22 && cmd) Then the new umask will only alter that subshell. Note that zsh executes the last command of the subshell in the subshell process instead of forking another one, which means that if that command is external, you're not even wasting a process, it's just that the fork is done earlier (so the umask is done in the child process that will later execute your command). In shells such as bash , that don't do that optimisation already, you can use exec to explicitly request that no child process be spawned. Compare: $ zsh -c '(umask 22 && ps -H); exit'
PID TTY TIME CMD
3806 pts/0 00:00:00 zsh
3868 pts/0 00:00:00 zsh
3869 pts/0 00:00:00 ps $ bash -c '(umask 22 && ps -H); exit'
PID TTY TIME CMD
3806 pts/0 00:00:00 zsh
3870 pts/0 00:00:00 bash
3871 pts/0 00:00:00 bash
3872 pts/0 00:00:00 ps
$ bash -c '(umask 22 && exec ps -H); exit'
PID TTY TIME CMD
3806 pts/0 00:00:00 zsh
3884 pts/0 00:00:00 bash
3885 pts/0 00:00:00 ps In bash, contrary to zsh, do not use exec for builtins or functions as that would not run the builtin / functions but try to run an external command by that name instead. $ bash -c '(printf "%(%F)T\n" -1)'
2022-02-27
$ bash -c '(exec printf "%(%F)T\n" -1)'
printf: %(: invalid conversion specification The latter called /usr/bin/printf which doesn't support %T ( bash extension inspired by ksh93 's). Use type cmd to check whether cmd is builtin or not in your particular version of bash . | {
"source": [
"https://unix.stackexchange.com/questions/65885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18985/"
]
} |
65,889 | I started using dwm today and am trying to wrap my head around it, as OpenBox is my only other exposure to window managers. As suggested in the official tutorial, I first opened couple of terminals and they all got tiled, with the first terminal being pushed to left, which I understand is the master. I played with the default keybindings and opened and closed many windows and programs. I spent quite a bit of time trying to get what tags are and how to use them. After a while came back to tag 1 and see that the windows, though in tiled mode, somehow changed to a horizontal split like this: Any and all new windows are added horizontally. I don't see any specific keybinding for changing layout of existing windows (like in tmux ). So, how can I get back the default tiling mode where master is on left and stacks are on right? | You have (inadvertently) incremented the windows in master, the default keybind for which is Mod i , so that all of your clients in that selected tag are in master. You can decrement the number of clients in master with Mod d . Each press will decrement the clients in master by 1. It may also be worth pointing out that dwm doesn't use the "desktop" paradigm; whatever layout is applied to the currently visible tag(s) is applied to all tags—hence the "dynamic" in d wm. This is a powerful concept as it allows you to tag multiple clients, and manipulate those tags (and the associated views) on the fly. Combined with some rules in your config.h , it provides for an incredibly versatile model for managing clients. See this archived post for an explanation of dwm's tagging/client model. | {
"source": [
"https://unix.stackexchange.com/questions/65889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4605/"
]
} |
65,891 | I want to execute a script when I plug in a device in my Linux machine. For example, run xinput on mouse or a backupscript on a certain drive. I have seen a lot of articles on this, most recently here and here . But I just can't get it to work. Here's some simple examples trying to get at least some kind of response. /etc/udev/rules.d/test.rules #KERNEL=="sd*", ATTRS{vendor}=="*", ATTRS{model}=="*", ATTRS{serial}=="*", RUN+="/usr/local/bin/test.sh"
#KERNEL=="sd*", ACTION=="add", "SUBSYSTEM=="usb", ATTRS{model}=="My Book 1140 ", ATTRS{serial}=="0841752394756103457194857249", RUN+="/usr/local/bin/test.sh"
#ACTION=="add", "SUBSYSTEM=="usb", RUN+="/usr/local/bin/test.sh"
#KERNEL=="sd*", ACTION=={add}, RUN+="/usr/local/bin/test.sh"
KERNEL=="sd*", RUN+="/usr/local/bin/test.sh"
KERNEL=="*", RUN+="/usr/local/bin/test.sh" /usr/local/bin/test.sh #!/usr/bin/env bash
echo touched >> /var/log/test.log
if [ "${ACTION}" = "add" ] && [ -f "${DEVICE}" ]
then
echo ${DEVICE} >> /var/log/test.log
fi The rules folder is watched by inotify and should be active immediately. I keep replugging my keyboard, mouse, tablet, memory stick and USB drive, but nothing. No log file touched. Now, what would be the simplest way to at least know something is working? It's easier to work from something that's working than from something that's not. | If you want to run the script on a specific device, you can use the vendor and product IDs. In /etc/udev/rules.d/test.rules : ATTRS{idVendor}=="152d", ATTRS{idProduct}=="2329", RUN+="/tmp/test.sh" in test.sh : #! /bin/sh
env >>/tmp/test.log
file "/sys${DEVPATH}" >>/tmp/test.log
if [ "${ACTION}" = add -a -d "/sys${DEVPATH}" ]; then
echo "add ${DEVPATH}" >>/tmp/test.log
fi With env , you can see what environment is set from udev, and with file , you will discover the file type. The concrete attributes for your device can be discovered with lsusb . lsusb gives ... Bus 001 Device 016: ID 152d:2329 JMicron Technology Corp. / JMicron USA Technology Corp. JM20329 SATA Bridge ...
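To watch events live while replugging — handy for checking whether a rule fires at all — udevadm can print them as they arrive (option names per current udevadm; very old udev releases spell --property as --env ):
udevadm monitor --udev --property
Unplug and replug the device and compare the printed properties with what your rule matches on. | {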
"source": [
"https://unix.stackexchange.com/questions/65891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12991/"
]
} |
65,902 | Here is the behaviour I want to understand: $ ps
PID TTY TIME CMD
392 ttys000 0:00.20 -bash
4268 ttys000 0:00.00 xargs
$ kill 4268
$ ps
PID TTY TIME CMD
392 ttys000 0:00.20 -bash
[1]+ Terminated: 15 xargs
$ ps
PID TTY TIME CMD
392 ttys000 0:00.21 -bash Why does it show the [1]+ Terminated: 15 xargs after I kill a process, instead of just not showing it as it was just killed? I'm using bash on Mac OS X 10.7.5. | Short answer In bash (and dash ) the various "job status" messages are not displayed from signal handlers, but require an explicit check. This check is performed only before a new prompt is provided, probably not to disturb the user while he/she is typing a new command. The message is not shown just before the prompt after the kill is displayed probably because the process is not dead yet - this is particularly probable condition since kill is an internal command of the shell, so it's very fast to execute and doesn't need forking. Doing the same experiment with killall , instead, usually yields the "killed" message immediately, sign that the time/context switches/whatever required to execute an external command cause a delay long enough for the process to be killed before the control returns to the shell. matteo@teokubuntu:~$ dash
$ sleep 60 &
$ ps
PID TTY TIME CMD
4540 pts/3 00:00:00 bash
4811 pts/3 00:00:00 sh
4812 pts/3 00:00:00 sleep
4813 pts/3 00:00:00 ps
$ kill -9 4812
$
[1] + Killed sleep 60
$ sleep 60 &
$ killall sleep
[1] + Terminated sleep 60
$ Long answer dash First of all, I had a look at the dash sources, since dash exhibits the same behavior and the code is surely simpler than bash . As said above, the point seems to be that job status messages are not emitted from a signal handler (which can interrupt the "normal" shell control flow), but they are the consequence of an explicit check (a showjobs(out2, SHOW_CHANGED) call in dash ) that is performed only before requesting new input from the user, in the REPL loop. Thus, if the shell is blocked waiting for user input no such message is emitted. Now, why doesn't the check performed just after the kill show that the process was actually terminated? As explained above, probably because it's too fast. kill is an internal command of the shell, so it's very fast to execute and doesn't need forking, thus, when immediately after the kill the check is performed, the process is still alive (or, at least, is still being killed). bash As expected, bash , being a much more complex shell, was trickier and required some gdb -fu. The backtrace for when that message is emitted is something like (gdb) bt
#0 pretty_print_job (job_index=job_index@entry=0, format=format@entry=0, stream=0x7ffff7bd01a0 <_IO_2_1_stderr_>) at jobs.c:1630
#1 0x000000000044030a in notify_of_job_status () at jobs.c:3561
#2 notify_of_job_status () at jobs.c:3461
#3 0x0000000000441e97 in notify_and_cleanup () at jobs.c:2664
#4 0x00000000004205e1 in shell_getc (remove_quoted_newline=1) at /Users/chet/src/bash/src/parse.y:2213
#5 shell_getc (remove_quoted_newline=1) at /Users/chet/src/bash/src/parse.y:2159
#6 0x0000000000423316 in read_token (command=<optimized out>) at /Users/chet/src/bash/src/parse.y:2908
#7 read_token (command=0) at /Users/chet/src/bash/src/parse.y:2859
#8 0x00000000004268e4 in yylex () at /Users/chet/src/bash/src/parse.y:2517
#9 yyparse () at y.tab.c:2014
#10 0x000000000041df6a in parse_command () at eval.c:228
#11 0x000000000041e036 in read_command () at eval.c:272
#12 0x000000000041e27f in reader_loop () at eval.c:137
#13 0x000000000041c6fd in main (argc=1, argv=0x7fffffffdf48, env=0x7fffffffdf58) at shell.c:749 The call that checks for dead jobs & co. is notify_of_job_status (it's more or less the equivalent of showjobs(..., SHOW_CHANGED) in dash ); #0-#1 are related to its inner working; 6-8 is the yacc-generated parser code; 10-12 is the REPL loop. The interesting place here is #4, i.e. from where the notify_and_cleanup call comes. It seems that bash , unlike dash , may check for terminated jobs at each character read from the command line, but here's what I found: /* If the shell is interatctive, but not currently printing a prompt
(interactive_shell && interactive == 0), we don't want to print
notifies or cleanup the jobs -- we want to defer it until we do
print the next prompt. */
if (interactive_shell == 0 || SHOULD_PROMPT())
{
#if defined (JOB_CONTROL)
/* This can cause a problem when reading a command as the result
of a trap, when the trap is called from flush_child. This call
had better not cause jobs to disappear from the job table in
that case, or we will have big trouble. */
notify_and_cleanup ();
#else /* !JOB_CONTROL */
cleanup_dead_jobs ();
#endif /* !JOB_CONTROL */
} So, in interactive mode it's intentional to delay the check until a new prompt is provided, probably not to disturb the user entering commands. As for why the check doesn't spot the dead process when displaying the new prompt immediately after the kill , the previous explanation holds (the process is not dead yet). | {
"source": [
"https://unix.stackexchange.com/questions/65902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
65,932 | When I echo * I get the following output: file1 file2 file3 ... What I want is to pick out the first word. How can I proceed? | You can pipe it through awk and make it echo the first word echo * | head -n1 | awk '{print $1;}' or you cut the string up and select the first word: echo * | head -n1 | cut -d " " -f1 or you pipe it thorugh sed and have it remove everything but the first word echo * | head -n1 | sed -e 's/\s.*$//' Added the | head -n1 to satisfy nitpickers. In case your string contains newlines | head -n1 will select the first line first before the important commands select the first word from the string passed to it. | {
"source": [
"https://unix.stackexchange.com/questions/65932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32990/"
]
} |