The behavior can be reproduced with:
export HISTSIZE=V=
For some reason I had that in my .bashrc file. Most likely I didn't have focus in the window I thought I had, ended up adding V= without noticing it, and was careless saving the file; maybe it was open in the editor. Those lines are there to enable unlimited history, but zsh uses the HISTSIZE parameter too. So bash didn't complain about the error, but zsh did... A quick look at the output of env for anything unusual has to be on the debug checklist.
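A small sketch of that env check, using this bug's variable as the example (the controlled `HISTSIZE='V='` prefix below is just to demonstrate what the stray value looks like):

```shell
# A value that itself contains '=' (like HISTSIZE=V=) is the kind of
# oddity to scan for; demonstrated here with a controlled environment.
HISTSIZE='V=' env | grep '^HISTSIZE='
# prints: HISTSIZE=V=
```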
|
I'm using Linux arch 3.15.2-1-ARCH x86_64. I don't use zsh (zsh-5.0.5-1) every day, otherwise I would have noticed earlier that it segfaults when I launch it from the bash CLI:
arch kernel: zsh[2187]: segfault at 8 ip 00007f3b49853083 sp 00007fff2ad39198 error 4 in libc-2.19.so[7f3b497d2000+1a4000]
arch systemd-coredump[2188]: Process 2187 (zsh) dumped core.
Now I remember running it at some point and going through the configuration script (zsh /usr/share/zsh/functions/Newuser/zsh-newuser-install -f segfaults too now). So I have done the following:
Reboot
The behavior is the same in X or the console
Remove zsh with its configuration and clear the package cache, then reinstall: pacman -Rns zsh; paccache -r; paccache -ruk0; pacman -S zsh.
Try to compile using the recipe for 5.0.5-1 and 5.0.4-1: it compiles, then goes on to check the binary by executing it, which segfaults in both cases, so this aborts.
Made sure to have no .zsh* file in my home directory; also tried having only the basic .zsh* files there.
Renamed /etc/profile
Renamed /etc/zsh/zprofile
Tried with zsh-completions both installed and not installed.
Ironically, zsh --version and zsh --help work and echo the relevant info.
So what could be the issue with my setup/environment here?
Note: I ran a quick gdb session to see if it brings any insight into what is happening:
(gdb) run
Starting program: /usr/bin/zsh
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff713e083 in __strchr_sse2 () from /usr/lib/libc.so.6
(gdb) bt
#0 0x00007ffff713e083 in __strchr_sse2 () from /usr/lib/libc.so.6
#1 0x000000000044f2e3 in ?? ()
#2 0x000000000044f7d4 in op ()
#3 0x000000000045003b in ?? ()
#4 0x00000000004505cd in ?? ()
#5 0x00000000004507dc in matheval ()
#6 0x0000000000450829 in mathevali ()
#7 0x00000000004602b0 in ?? ()
#8 0x00000000004624ab in assignsparam ()
#9 0x0000000000463636 in createparamtable ()
#10 0x000000000043f2b5 in setupvals ()
#11 0x0000000000440e14 in zsh_main ()
#12 0x00007ffff70dd000 in __libc_start_main () from /usr/lib/libc.so.6
#13 0x000000000040f7be in _start () | Issue with zsh segfaulting: how to further assess the issue? |
arch/x86/kernel/idt.c:152 - page_fault is used in the IDT
arch/x86/entry/entry_64.S:1143 - page_fault is defined as a wrapper function for do_page_fault(), implemented using the macro idtentry
arch/x86/entry/entry_64.S:847 - idtentry macro
arch/x86/mm/fault.c:1562 - do_page_fault()
Once you reach do_page_fault(), you should see clickable links to navigate the rest of the code. You can't do that for these first four steps, because Elixir doesn't understand the macro magic. It also doesn't understand assembly.
If you need to look at any other traps, some of the other handler functions (do_*) are in turn defined by another macro, x86/kernel/traps.c:281 DO_ERROR().
The function that logs the segfault message is also in fault.c: show_signal_msg(). A little freebie for you. Elixir doesn't allow searching for strings in general, only identifiers. GitHub also shut down their code search. In any case it's hard to search for this message format string without actually downloading the source code, because "%s%s[%d]: segfault at %lx ip %px sp %px error %lx" does not contain a lot of specific words or phrases :-).
The above links are to specific line numbers of the v5.0 source code. Using Elixir, which I really like :-).
|
What are the steps that the Linux kernel performs whenever the hardware raises a segfault? Right now I know that, through the IDT, the fault handler handles it, and somewhere along the road a message about the fault appears in kern.log (dmesg).
I am asking this question because I am developing a hypervisor, and whenever there is a segfault in user space (which shouldn't crash the system), the system crashes (the crash happens only after the message in kern.log). So if I could retrace what the kernel does when it encounters a segfault, it would help me a lot.
| What happens whenever there is a segfault in linux |
You cannot.
Once systemd has reached this state, there is no way out. It is an infinite loop in the systemd program.
You will have to wait for the actual bugs (one in Oracle's barmy VirtualBox post-installation procedures, q.v., and one in systemd when daemon-reexec is called often) to be fixed.
Oracle's barmy post-installation procedure is not only calling daemon-reexec instead of daemon-reload several times in quick succession, it is duplicating with its own mechanisms, written in shell script, work that systemd-sysv-generator already does. (So, too, does the Debian replacement mechanism, sad to say.) As seems to be so often the case, one major cause of problems is Oracle's shell script layers on top of stuff.
Further reading
Jonathan de Boyne Pollard (2015). The systemd House of Horror. Frequently Given Answers.
https://unix.stackexchange.com/a/233581/5132 |
How to restart systemd after it crashes? systemd crashed during the VirtualBox installation. The problem is already tracked in this issue: #10716. I'm using Ubuntu 18.10.
sudo dpkg -i virtualbox-5.2_5.2.20-125813_Ubuntu_bionic_amd64.deb
Setting up virtualbox-5.2 (5.2.20-125813~Ubuntu~bionic) ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
Failed to enable unit: Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
sudo journalctl --no-pager -b -0 -p 0..4
dec 08 20:20:55 machine kernel: systemd[1]: segfault at 40000 ip 00007fe8fdcb9116 sp 00007ffd6f134918 error 4 in libc-2.28.so[7fe8fdc2e000+171000]
dec 08 20:20:55 machine systemd[1]: Caught <SEGV>, dumped core as pid 6345.
dec 08 20:20:55 machine systemd[1]: Freezing execution.
I tried to execute the following commands, but without success. The only solution that I came up with is to hard-restart my machine.
sudo systemctl restart org.freedesktop.systemd1
Failed to restart org.freedesktop.systemd1.service: Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
# This finishes without errors, but "list-units" still doesn't show anything
sudo systemctl daemon-reexec
sudo systemctl list-units --no-pager
Failed to list units: Connection timed out
sudo systemctl daemon-reload
Failed to reload daemon: Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms) | restart systemd after it crash |
Type the following command in a terminal:
echo $XDG_SESSION_TYPE
If it returns wayland, type:
xhost si:localuser:root
to allow root to start graphical apps under Wayland sessions.
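A small sketch putting those two steps together (hedged: `sudo gufw` is just one way of launching it as root; adapt to how you actually start GUFW):

```shell
# Grant root's clients access to the display only when the session is
# Wayland; X11 sessions don't hit this particular failure mode.
if [ "$XDG_SESSION_TYPE" = "wayland" ]; then
    xhost si:localuser:root   # server-interpreted grant for local user "root"
fi
sudo gufw                     # hypothetical launch command; adapt as needed
```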
|
When I run GUFW as root, it returns:
No protocol specified
Unable to init server: Could not connect: Connection refused
No protocol specified
Unable to init server: Could not connect: Connection refused

(gufw.py:5272): Gdk-CRITICAL **: gdk_keymap_get_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
(gufw.py:5272): Gdk-CRITICAL **: gdk_keymap_get_modifier_mask: assertion 'GDK_IS_KEYMAP (keymap)' failed
(gufw.py:5272): Gdk-CRITICAL **: gdk_keymap_get_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
(gufw.py:5272): Gtk-CRITICAL **: _gtk_replace_virtual_modifiers: assertion 'GDK_IS_KEYMAP (keymap)' failed
[...the same Gdk-/Gtk-CRITICAL assertion failures repeat many more times...]
/usr/bin/gufw-pkexec: line 13: 5272 Segmentation fault (core dumped) python3 ${LOCATIONS[${i}]} $1
It crashes due to a segmentation fault, even though this is a fresh install of a stable GUFW build.
How do I fix the problem?
| GUFW returns a segmentation fault in line 13 |
First of all, you need to check the core file size limit using the ulimit command (in bash or in zsh).
# ulimit -c
0
If it's zero, you need to increase it. For instance, to increase it to unlimited:
# ulimit -c unlimited
# ulimit -c
unlimited
Secondly, you need to check where the coredump is created. On older distros, the default was usually a file called "core" in the current working directory. That was probably the default when the book was written.
# /sbin/sysctl kernel.core_pattern
kernel.core_pattern = core
However, today that isn't true anymore on most distros: the core is usually handled by systemd-coredump(8).
# /sbin/sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e
From man systemd-coredump(8):
By default, systemd-coredump will log the core dump to the journal,
including a backtrace if possible, and store the core dump (an image
of the memory contents of the process) itself in an external file in
/var/lib/systemd/coredump.
You can either find the core dump in that directory, or you can use coredumpctl to list those (you might need sudo or to run it as root):
# coredumpctl list
TIME PID UID GID SIG PRESENT EXE
Wed 2022-01-26 12:53:06 IST 10347 111 222 11 * /tmp/a.out
The * under "PRESENT" indicates that a core dump file was created.
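Putting the steps together, a hedged end-to-end sketch on a systemd-coredump system (./a.out stands for the crashing program from the question):

```shell
# 1. allow core dumps in this shell
ulimit -c unlimited
# 2. confirm cores are piped to systemd-coredump
cat /proc/sys/kernel/core_pattern
# 3. run the crashing program
./a.out
# 4. list recorded dumps (newest at the bottom)
coredumpctl list
# 5. extract the newest core into ./core, then inspect it with symbols
coredumpctl -o core dump
gdb -q ./a.out core
```

Note that without a PID argument, coredumpctl dump operates on the most recent matching entry.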
You could see the location of the (compressed) coredump by using coredumpctl info <pid>:
# coredumpctl info 10347 |grep Coredump
Coredump: /var/lib/systemd/coredump/core.a\x2eout.111.1bd8e22a25e844f1b03a87d378b4ed9b.10347.1643194386000000.xz
You can decompress this file using the xz command:
# xz --decompress --stdout '/var/lib/systemd/coredump/core.a\x2eout.111.1bd8e22a25e844f1b03a87d378b4ed9b.10347.1643194386000000.xz' > core
Or use the following command to dump the core to some destination:
# coredumpctl -o core dump 10347
Either of those commands will create the file:
# gdb -q -c core
[New LWP 10347]
Core was generated by `/tmp/a.out'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000000af7a56725 in ?? () |
I'm now reading The Shellcoder's Handbook: Discovering and Exploiting Security Holes, 2nd Edition.
In the second chapter considered the simple buffer overflow problem like this (C code):
int main () {
    int array[5];
    int i;
    for (i = 0; i <= 255; i++ ) {
        array[i] = 10;
    }
}
The author compiled the code with cc and executed it:
shellcoders@debian:~/chapter_2$ cc buffer2.c
shellcoders@debian:~/chapter_2$ ./a.out
Segmentation fault (core dumped)
then had a peek at the written core dump with gdb:
shellcoders@debian:~/chapter_2$ gdb -q -c core
Program terminated with signal 11, Segmentation fault.
#0 0x0000000a in ?? ()
(gdb)
The problem is that the core dump hasn't been written in my case. I only get the message: zsh: segmentation fault ./a.out.
I use Kali 2021.4a in VirtualBox. I tried to change the default shell with chsh -s /bin/bash, but it changed nothing and the terminal still opens as zsh.
How do I make a core dump get written on a fault? It looks like it should be a file written in the same directory as the executable.
| Core dump not written on segmentation fault |
You most likely have corrupted binaries and/or a corrupted filesystem/SD card.
SD cards are not meant for heavy I/O use and degrade over time; Raspberries are also known to occasionally corrupt data on SD cards when turned off, due to characteristics of their design (electronics is not my area, so I won't get into details).
You might very well have corruption in the mysql binary or its associated libraries (actually, the gdb failure at do-rel.h suggests the latter).
I would reinstall the mysql client and associated libraries, as a command similar to this one (your mileage may vary):
sudo apt-get install --reinstall default-mysql-client default-mysql-client-core
I would use this command to see which package gives you the mysql binary, and would reinstall it:
dpkg -S /usr/bin/mysql
Then I would also see what libraries mysql is using, if that does not fix the problem:
ldd /usr/bin/mysql
linux-vdso.so.1 (0x00007ffc8903c000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5989c75000)
libreadline.so.5 => /lib/x86_64-linux-gnu/libreadline.so.5 (0x00007f5989a33000)
libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5 (0x00007f5989810000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f59895e6000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f59893cc000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f59891c8000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5988e46000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5988b42000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f59887a3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f598a4bc000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f598858c000)
You might, as a last resort, have to reinstall the package behind each of these libraries until the error is corrected. Some of them are: libaio1, libjemalloc1, libreadline5. There are more.
sudo apt-get install --reinstall libaio1 libjemalloc1 libreadline5
Nevertheless, there are no guarantees that other bits of your filesystem aren't corrupted. I would back up the DB and reinstall the OS/MySQL from scratch.
The good news: since, as you mention, other ways of accessing the DB work well, the corruption is most likely limited to the mysql client binary.
Nonetheless, I would probably reevaluate running Linux from an SD card in the future, especially if using MySQL.
PS. As @cas rightly points out, "if you have dlocate or debsums installed, you can run dlocate --md5check PKGNAME or debsums PKGNAME to verify the package's installed files against its md5sum file"
See Raspberry: booting from a USB pen instead of an SD card
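A hedged sketch of that integrity check (the package name found depends on your distro; no specific owner package is assumed here):

```shell
# Find the package that owns the mysql binary, then verify its files
# against the md5sums recorded by the package.
pkg=$(dpkg -S /usr/bin/mysql | cut -d: -f1)
echo "owner: $pkg"
debsums "$pkg"            # or: dlocate --md5check "$pkg"
```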
|
I have a Raspberry Pi server with Raspbian OS:
Kernel: Linux 4.9.35+ #1014 Fri Jun 30 14:34:49 BST 2017 armv6l GNU/Linux
Description: Raspbian GNU/Linux 8.0 (jessie)
Release: 8.0
Codename: jessie
Today I noticed that attempts to use mysql end in a segmentation fault.
user@host~ $ mysql -u root -p
Enter password:
Segmentation fault
This happens with both the wrong and the right password, or even with a made-up username. Actually, it turns out that even running the mysql command without any arguments has the same effect.
The Mysql server can still be accessed via Python (pymysql) and Perl. I have scripts that write and read various DBs, they are all working without problems.
Shell scripts that use the mysql command, they all fail. For example:
/home/user/example.sh: line 2: 27974 Segmentation fault /usr/bin/mysql -u dbuser -p$dbpass dbname --execute="select * from example;"The segmentation faults started appearing today and I cannot figure what is causing them now. The server has not been booted in a few weeks. It has been more than a week since it was last updated.
I can't find any errors that might look like relevant to this situation from the Mysql logs or syslog.
I have tried:Restarting Mysql
Upgrading the system and rebooting
Checking disk on reboot, no errors found
As these procedures didn't help, I tried using gdb as suggested here:
Running application ends with "Segmentation Fault"
This is what I get when debugging the command mysql without any parameters:
gdb mysql
run
Starting program: /usr/bin/mysql
Program received signal SIGSEGV, Segmentation fault.
elf_dynamic_do_Rel (skip_ifunc=<optimized out>, lazy=0, nrelative=<optimized out>, relsize=<optimized out>,
reladdr=<optimized out>, map=0xb6fff968) at do-rel.h:112
112 do-rel.h: No such file or directory.
I wonder what I could do to fix this problem? (Other than filing a bug report about it.)
| Mysql segfaulting when used from the shell |
It turned out this was a numbskull error; it looks like my copy of badblocks may simply have had a bug.
I ran yum update and after that, badblocks no longer segfaults.
|
I am trying to check a mounted partition to see if the drive has errors:
[root@virtuality ~]# /sbin/badblocks -v /dev/sdb1
Segmentation fault
Uh oh. What does this mean? Why is badblocks segfaulting? Can I fix it?
(System is CentOS release 4.6, drive is an SATA drive)
EDIT: Using strace:
[root@virtuality ~]# strace /sbin/badblocks -v /dev/sdb1
...[snip]...
open("/dev/sdb1", O_RDONLY) = 3
ioctl(3, BLKGETSIZE, 0x7fbffff878) = 0
close(3) = 0
open("/dev/sdb1", O_RDONLY) = 3
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++ | Why is badblocks segfaulting? |
This issue was fixed on the TI forum; here is the link for those interested.
https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1158936/am3359-caught-segv-when-distro-s-systemd-starting
In summary, the issue was that my VDD_MPU (1.1 V) was driven by my PMIC, and the PMIC driver was not properly initialized. So when the system raised the CPU frequency to 800 MHz, the 1.1 V was too low, and the MPU would brown out and get corrupted. The solution was either to manually raise VDD_MPU to something higher (under 1.3 V) or to configure the PMIC to automatically manage the voltage based on the CPU frequency.
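As a hedged stop-gap while debugging (the sysfs paths assume the standard Linux cpufreq interface; 600000 kHz is just an illustrative OPP below the problematic 800 MHz), one can cap the CPU frequency until the PMIC driver manages the voltage:

```shell
# Cap the MPU frequency below 800 MHz so a fixed 1.1 V VDD_MPU remains
# sufficient until the PMIC driver adjusts the voltage automatically.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
echo 600000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
```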
|
I tried asking my question on TI's forum, but I am not getting much feedback, so I thought I'd try my luck here.
You can see my ongoing discussion with TI here: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1158936/am3359-caught-segv-when-distro-s-systemd-starting
We have been working with the TI AM335x-ICEV2 board for some time now to develop an embedded Linux application, and we have received our own custom board design. After a lot of debugging and reading, I am able to make U-Boot load properly, and I then went on to load the kernel with a very basic FDT (run-time configuration of U-Boot via a flattened devicetree), which seems to work; but as soon as I reach the distribution booting stage, I get segmentation fault after segmentation fault and/or frozen execution.
At first we suspected hardware issue with the DDR3 mapping, but the design seems to respect all of TI's requirements. Also the freeze/segv always happens during distribution boot. I have never seen a single crash/freeze/segv during U-Boot or Linux Kernel bootings.
The only successful configuration I could get to boot is the tiny filesystem from the TI Linux SDK, which uses SysVinit and loads no modules at all. Every systemd OS has failed so far (Debian and Arago), and I tried replacing systemd with SysV in an existing Debian 10 image, but that failed too.
Yet the same U-Boot + kernel runs perfectly on a TI ICEv2 dev board.
I am far from being a Linux Bootloader/Kernel pro and I am running out of theories on what could cause this issue or even what tests to run. If someone is willing to answer a few questions, I am more than willing to share some data.
Here is my console output when booting:
I removed this one for an updated one beneath, due to the limit of characters.
And here is my FDT File:
/dts-v1/;

#include "am33xx.dtsi"

/ {
	model = "AM335x HELLO";
	compatible = "ti,am335x-hello", "ti,am33xx";

	chosen {
		stdout-path = &uart3;
		tick-timer = &timer2;
	};

	memory {
		device_type = "memory";
		reg = <0x80000000 0x10000000>; /* 256 MB */
	};

	vbat: fixedregulator@0 {
		compatible = "regulator-fixed";
		regulator-name = "vbat";
		regulator-min-microvolt = <5000000>;
		regulator-max-microvolt = <5000000>;
		regulator-boot-on;
	};

	vmmc: fixedregulator@1 {
		compatible = "regulator-fixed";
		regulator-name = "vmmc";
		regulator-min-microvolt = <3300000>;
		regulator-max-microvolt = <3300000>;
		regulator-always-on;
		regulator-boot-on;
	};
};

&am33xx_pinmux {
	mmc0_pins_default: mmc0_pins_default {
		pinctrl-single,pins = <
			AM33XX_PADCONF(AM335X_PIN_MMC0_DAT3, PIN_INPUT_PULLUP, MUX_MODE0)
			AM33XX_PADCONF(AM335X_PIN_MMC0_DAT2, PIN_INPUT_PULLUP, MUX_MODE0)
			AM33XX_PADCONF(AM335X_PIN_MMC0_DAT1, PIN_INPUT_PULLUP, MUX_MODE0)
			AM33XX_PADCONF(AM335X_PIN_MMC0_DAT0, PIN_INPUT_PULLUP, MUX_MODE0)
			AM33XX_PADCONF(AM335X_PIN_MMC0_CLK, PIN_INPUT_PULLUP, MUX_MODE0)
			AM33XX_PADCONF(AM335X_PIN_MMC0_CMD, PIN_INPUT_PULLUP, MUX_MODE0)
		>;
	};

	uart1_pins: uart1_pins {
		pinctrl-single,pins = <
			0x180 (PIN_INPUT_PULLUP | MUX_MODE0)    /* uart1_rxd.uart1_rxd */
			0x184 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart1_txd.uart1_txd */
		>;
	};
};

&uart1 {
	pinctrl-names = "default";
	pinctrl-0 = <&uart1_pins>;
	status = "okay";
};

&mmc1 {
	status = "okay";
	vmmc-supply = <&vmmc>;
	bus-width = <4>;
	pinctrl-names = "default";
	pinctrl-0 = <&mmc0_pins_default>;
};

EDIT #1
Initialisation logs do not always crash/freeze at the same place. Here is another one, with U-Boot optargs printk.devkmsg=on systemd.log_level=debug debug:
[ 2.175611] Run /sbin/init as init process
[ 2.863478] systemd[1]: System time before build time, advancing clock.
[ 2.996170] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN
2 +IDN -PCRE2 default-hierarchy=hybrid)
[ 3.018180] systemd[1]: No virtualization found in DMI
[ 3.023569] systemd[1]: No virtualization found in CPUID
[ 3.028969] systemd[1]: Virtualization XEN not found, /proc/xen does not exist
[ 3.036559] systemd[1]: No virtualization found in /proc/device-tree/*
[ 3.043420] systemd[1]: UML virtualization not found in /proc/cpuinfo.
[ 3.050020] systemd[1]: This platform does not support /proc/sysinfo
[ 3.056508] systemd[1]: Found VM virtualization none
[ 3.061529] systemd[1]: Detected architecture arm.
[ 3.067154] systemd[1]: Mounting cgroup to /sys/fs/cgroup/perf_event of type cgroup with options perf_event.
[ 3.078818] systemd[1]: Mounting cgroup to /sys/fs/cgroup/blkio of type cgroup with options blkio.
[ 3.089463] systemd[1]: Mounting cgroup to /sys/fs/cgroup/cpu,cpuacct of type cgroup with options cpu,cpuacct.
[ 3.101298] systemd[1]: Mounting cgroup to /sys/fs/cgroup/freezer of type cgroup with options freezer.
[ 3.112236] systemd[1]: Mounting cgroup to /sys/fs/cgroup/memory of type cgroup with options memory.
[ 3.123482] systemd[1]: Mounting cgroup to /sys/fs/cgroup/devices of type cgroup with options devices.
[ 3.134395] systemd[1]: Mounting cgroup to /sys/fs/cgroup/pids of type cgroup with options pids.
[ 3.144778] systemd[1]: Mounting cgroup to /sys/fs/cgroup/net_cls of type cgroup with options net_cls.

Welcome to Debian GNU/Linux 10 (buster)!

[ 3.184740] systemd[1]: Set hostname to <arm>.
[ 3.197755] systemd[1]: Successfully added address 127.0.0.1 to loopback interface
[ 3.205882] systemd[1]: Successfully added address ::1 to loopback interface
[ 3.213414] systemd[1]: Successfully brought loopback interface up
[ 3.220179] systemd[1]: Setting 'fs/file-max' to '2147483647'.
[ 3.230141] systemd[1]: Found cgroup2 on /sys/fs/cgroup/unified, unified hierarchy for systemd controller
[ 3.240134] systemd[1]: Unified cgroup hierarchy is located at /sys/fs/cgroup/unified. Controllers are on legacy hierarchies.
[ 3.258918] systemd[1]: Can't allocate BPF LPM TRIE map, BPF firewalling is not supported: Function not implemented
[ 3.269658] systemd[1]: Can't load kernel CGROUP DEVICE BPF program, BPF device control is not supported: Function not implemented
[ 3.281537] systemd[1]: Controller 'cpu' supported: yes
[ 3.286948] systemd[1]: Controller 'cpuacct' supported: yes
[ 3.292562] systemd[1]: Controller 'io' supported: no
[ 3.297676] systemd[1]: Controller 'blkio' supported: yes
[ 3.303132] systemd[1]: Controller 'memory' supported: yes
[ 3.308654] systemd[1]: Controller 'devices' supported: yes
[ 3.314306] systemd[1]: Controller 'pids' supported: yes
[ 3.319657] systemd[1]: Controller 'bpf-firewall' supported: no
[ 3.325634] systemd[1]: Controller 'bpf-devices' supported: no
[ 3.331602] systemd[1]: Set up TFD_TIMER_CANCEL_ON_SET timerfd.
[ 3.339066] systemd[1]: Enabling showing of status.
[ 3.345654] systemd[1]: Successfully forked off '(sd-executor)' as PID 55.
[ 3.358258] systemd[55]: Successfully forked off '(direxec)' as PID 56.
[ 3.373243] systemd[55]: Successfully forked off '(direxec)' as PID 57.
[ 3.381122] systemd[55]: Successfully forked off '(direxec)' as PID 58.
[ 3.434440] systemd[55]: Successfully forked off '(direxec)' as PID 59.
[ 3.442219] systemd[55]: Successfully forked off '(direxec)' as PID 60.
[ 3.504363] systemd[55]: Successfully forked off '(direxec)' as PID 61.
[ 3.512081] systemd[55]: Successfully forked off '(direxec)' as PID 62.
[ 3.634380] systemd[55]: Successfully forked off '(direxec)' as PID 63.
[ 3.642125] systemd[55]: Successfully forked off '(direxec)' as PID 64.
[ 3.744461] systemd[55]: Successfully forked off '(direxec)' as PID 65.
[ 3.752179] systemd[55]: Successfully forked off '(direxec)' as PID 66.
[ 3.866714] systemd[55]: Successfully forked off '(direxec)' as PID 67.
[ 3.923497] systemd[55]: /lib/systemd/system-generators/systemd-rc-local-generator terminated by signal SEGV.
[ 3.971004] systemd-hibernate-resume-generator[62]: Not running in an initrd, quitting.
[ 4.046186] systemd[55]: /lib/systemd/system-generators/systemd-hibernate-resume-generator succeeded.
[ 4.080695] systemd[55]: /lib/systemd/system-generators/systemd-gpt-auto-generator terminated by signal SEGV.
[ 4.110089] systemd[55]: /lib/systemd/system-generators/systemd-getty-generator failed with exit status 127.
[ 4.121572] systemd[55]: /lib/systemd/system-generators/systemd-cryptsetup-generator succeeded.
[ 4.130888] systemd-sysv-generator[66]: Native unit for sendsigs.service already exists, skipping.
[ 4.142139] systemd-sysv-generator[66]: Cannot find unit udhcpd.service.
[ 4.149412] systemd-sysv-generator[66]: Native unit for bootlogs.service already exists, skipping.
[ 4.159691] systemd-sysv-generator[66]: Cannot find unit cpufrequtils.service.
[ 4.170591] systemd-sysv-generator[66]: Native unit for procps.service already exists, skipping.
[ 4.179985] systemd-sysv-generator[66]: Native unit for checkroot.service already exists, skipping.
[ 4.189433] systemd-sysv-generator[66]: Native unit for urandom.service already exists, skipping.
[ 4.200115] systemd-sysv-generator[66]: Native unit for rcS.service already exists, skipping.
[ 4.211224] systemd-sysv-generator[66]: Native unit for kmod.service already exists, skipping.
[ 4.220302] systemd-sysv-generator[66]: Native unit for checkfs.service already exists, skipping.
[ 4.229691] systemd-sysv-generator[66]: Cannot find unit loadcpufreq.service.
[ 4.237351] systemd-sysv-generator[66]: Native unit for rc.local.service already exists, skipping.
[ 4.248773] systemd-sysv-generator[66]: Native unit for udev.service already exists, skipping.
[ 4.257901] systemd-sysv-generator[66]: Native unit for bluetooth.service already exists, skipping.
[ 4.268275] systemd-sysv-generator[66]: Native unit for rsyslog.service already exists, skipping.
[ 4.277771] systemd-sysv-generator[66]: Cannot find unit exim4.service.
[ 4.284927] systemd-sysv-generator[66]: Native unit for umountroot.service already exists, skipping.
[ 4.294421] systemd-sysv-generator[66]: Native unit for halt.service already exists, skipping.
[ 4.303444] systemd-sysv-generator[66]: Native unit for mountnfs-bootclean.service already exists, skipping.
[ 4.313619] systemd-sysv-generator[66]: Native unit for hostname.service already exists, skipping.
[ 4.324009] systemd-sysv-generator[66]: Native unit for avahi-daemon.service already exists, skipping.
[ 4.333755] systemd-sysv-generator[66]: Native unit for mountall.service already exists, skipping.
[ 4.343999] systemd-sysv-generator[66]: Native unit for ofono.service already exists, skipping.
[ 4.354101] systemd-sysv-generator[66]: Native unit for connman.service already exists, skipping.
[ 4.363490] systemd-sysv-generator[66]: Native unit for mountkernfs.service already exists, skipping.
[ 4.372965] systemd-sysv-generator[66]: Native unit for reboot.service already exists, skipping.
[ 4.382251] systemd-sysv-generator[66]: Native unit for hostapd.service already exists, skipping.
[ 4.391633] systemd-sysv-generator[66]: Native unit for hwclock.service already exists, skipping.
[ 4.400903] systemd-sysv-generator[66]: Native unit for rmnologin.service already exists, skipping.
[ 4.411226] systemd-sysv-generator[66]: Native unit for dnsmasq.service already exists, skipping.
[ 4.420535] systemd-sysv-generator[66]: Native unit for mountdevsubfs.service already exists, skipping.
[ 4.431225] systemd-sysv-generator[66]: Native unit for dbus.service already exists, skipping.
[ 4.440317] systemd-sysv-generator[66]: Native unit for umountfs.service already exists, skipping.
[ 4.450598] systemd-sysv-generator[66]: Native unit for cron.service already exists, skipping.
[ 4.459795] systemd-sysv-generator[66]: Native unit for sudo.service already exists, skipping.
[ 4.468808] systemd-sysv-generator[66]: Native unit for mountall-bootclean.service already exists, skipping.
[ 4.479086] systemd-sysv-generator[66]: Native unit for mountnfs.service already exists, skipping.
[ 4.488424] systemd-sysv-generator[66]: Native unit for brightness.service already exists, skipping.
[ 4.498809] systemd-sysv-generator[66]: Native unit for dundee.service already exists, skipping.
[ 4.507999] systemd-sysv-generator[66]: Native unit for umountnfs.service already exists, skipping.
[ 4.517566] systemd-sysv-generator[66]: Native unit for rc.service already exists, skipping.
[ 4.527371] systemd-sysv-generator[66]: Native unit for ssh.service already exists, skipping.
[ 4.536340] systemd-sysv-generator[66]: Native unit for checkroot-bootclean.service already exists, skipping.
[ 4.546706] systemd-sysv-generator[66]: Native unit for apache-htcacheclean.service already exists, skipping.
[ 4.557047] systemd-sysv-generator[66]: Native unit for single.service already exists, skipping.
[ 4.567119] systemd-sysv-generator[66]: Native unit for rsync.service already exists, skipping.
[ 4.576220] systemd-sysv-generator[66]: Native unit for killprocs.service already exists, skipping.
[ 4.586582] systemd-sysv-generator[66]: Native unit for networking.service already exists, skipping.
[ 4.596250] systemd-sysv-generator[66]: Native unit for apache2.service already exists, skipping.
[ 4.605557] systemd-sysv-generator[66]: Native unit for bootmisc.service already exists, skipping.
[ 4.616340] systemd-sysv-generator[66]: Ignoring S02single symlink in rc1.d, not generating single.service.
[ 4.626331] systemd-sysv-generator[66]: Ignoring S01killprocs symlink in rc1.d, not generating killprocs.service.
[ 4.636740] systemd-sysv-generator[66]: Ignoring S01bootlogs symlink in rc1.d, not generating bootlogs.service.
[ 4.648331] systemd-sysv-generator[66]: Ignoring S01rsyslog symlink in rc2.d, not generating rsyslog.service.
[ 4.658459] systemd-sysv-generator[66]: Ignoring S04ofono symlink in rc2.d, not generating ofono.service.
[ 4.668189] systemd-sysv-generator[66]: Ignoring S03ssh symlink in rc2.d, not generating ssh.service.
[ 4.677527] systemd-sysv-generator[66]: Ignoring S01rsync symlink in rc2.d, not generating rsync.service.
[ 4.687190] systemd-sysv-generator[66]: Ignoring S04dundee symlink in rc2.d, not generating dundee.service.
[ 4.697047] systemd-sysv-generator[66]: Ignoring S01hostapd symlink in rc2.d, not generating hostapd.service.
[ 4.707054] systemd-sysv-generator[66]: Ignoring S03rsync symlink in rc2.d, not generating rsync.service.
[ 4.716720] systemd-sysv-generator[66]: Ignoring S03rmnologin symlink in rc2.d, not generating rmnologin.service.
[ 4.727116] systemd-sysv-generator[66]: Ignoring S01cron symlink in rc2.d, not generating cron.service.
[ 4.736608] systemd-sysv-generator[66]: Ignoring S01apache2 symlink in rc2.d, not generating apache2.service.
[ 4.746613] systemd-sysv-generator[66]: Ignoring S01ssh symlink in rc2.d, not generating ssh.service.
[ 4.755960] systemd-sysv-generator[66]: Ignoring S01sudo symlink in rc2.d, not generating sudo.service.
[ 4.765522] systemd-sysv-generator[66]: Ignoring S01bluetooth symlink in rc2.d, not generating bluetooth.service.
[ 4.775884] systemd-sysv-generator[66]: Ignoring S01ofono symlink in rc2.d, not generating ofono.service.
[ 4.785583] systemd-sysv-generator[66]: Ignoring S04bluetooth symlink in rc2.d, not generating bluetooth.service.
[ 4.795942] systemd-sysv-generator[66]: Ignoring S02apache2 symlink in rc2.d, not generating apache2.service.
[ 4.805969] systemd-sysv-generator[66]: Ignoring S01connman symlink in rc2.d, not generating connman.service.
[ 4.815976] systemd-sysv-generator[66]: Ignoring S04connman symlink in rc2.d, not generating connman.service.
[ 4.826024] systemd-sysv-generator[66]: Ignoring S05rc.local symlink in rc2.d, not generating rc.local.service.
[ 4.836208] systemd-sysv-generator[66]: Ignoring S01dundee symlink in rc2.d, not generating dundee.service.
[ 4.846062] systemd-sysv-generator[66]: Ignoring S01bootlogs symlink in rc2.d, not generating bootlogs.service.
[ 4.856258] systemd-sysv-generator[66]: Ignoring S01avahi-daemon symlink in rc2.d, not generating avahi-daemon.service.
[ 4.867164] systemd-sysv-generator[66]: Ignoring S04avahi-daemon symlink in rc2.d, not generating avahi-daemon.service.
[ 4.878058] systemd-sysv-generator[66]: Ignoring S01dbus symlink in rc2.d, not generating dbus.service.
[ 4.887561] systemd-sysv-generator[66]: Ignoring S03dbus symlink in rc2.d, not generating dbus.service.
[ 4.897057] systemd-sysv-generator[66]: Ignoring S03cron symlink in rc2.d, not generating cron.service.
[ 4.907950] systemd-sysv-generator[66]: Ignoring S01rsyslog symlink in rc3.d, not generating rsyslog.service.
[ 4.918100] systemd-sysv-generator[66]: Ignoring S04ofono symlink in rc3.d, not generating ofono.service.
[ 4.927809] systemd-sysv-generator[66]: Ignoring S03ssh symlink in rc3.d, not generating ssh.service.
[ 4.937169] systemd-sysv-generator[66]: Ignoring S01rsync symlink in rc3.d, not generating rsync.service.
[ 4.946831] systemd-sysv-generator[66]: Ignoring S04dundee symlink in rc3.d, not generating dundee.service.
[ 4.956668] systemd-sysv-generator[66]: Ignoring S01hostapd symlink in rc3.d, not generating hostapd.service.
[ 4.966695] systemd-sysv-generator[66]: Ignoring S03rsync symlink in rc3.d, not generating rsync.service.
[ 4.976358] systemd-sysv-generator[66]: Ignoring S03rmnologin symlink in rc3.d, not generating rmnologin.service.
[ 4.986727] systemd-sysv-generator[66]: Ignoring S01cron symlink in rc3.d, not generating cron.service.
[ 4.996231] systemd-sysv-generator[66]: Ignoring S01apache2 symlink in rc3.d, not generating apache2.service.
[ 5.006237] systemd-sysv-generator[66]: Ignoring S01ssh symlink in rc3.d, not generating ssh.service.
[ 5.015561] systemd-sysv-generator[66]: Ignoring S01sudo symlink in rc3.d, not generating sudo.service.
[ 5.025108] systemd-sysv-generator[66]: Ignoring S01bluetooth symlink in rc3.d, not generating bluetooth.service.
[ 5.035470] systemd-sysv-generator[66]: Ignoring S01ofono symlink in rc3.d, not generating ofono.service.
[ 5.045144] systemd-sysv-generator[66]: Ignoring S04bluetooth symlink in rc3.d, not generating bluetooth.service.
[ 5.055522] systemd-sysv-generator[66]: Ignoring S02apache2 symlink in rc3.d, not generating apache2.service.
[ 5.065531] systemd-sysv-generator[66]: Ignoring S01connman symlink in rc3.d, not generating connman.service.
[ 5.075557] systemd-sysv-generator[66]: Ignoring S04connman symlink in rc3.d, not generating connman.service.
[ 5.085585] systemd-sysv-generator[66]: Ignoring S05rc.local symlink in rc3.d, not generating rc.local.service.
[ 5.095786] systemd-sysv-generator[66]: Ignoring S01dundee symlink in rc3.d, not generating dundee.service.
[ 5.105621] systemd-sysv-generator[66]: Ignoring S01bootlogs symlink in rc3.d, not generating bootlogs.service.
[ 5.115840] systemd-sysv-generator[66]: Ignoring S01avahi-daemon symlink in rc3.d, not generating avahi-daemon.service.
[ 5.126722] systemd-sysv-generator[66]: Ignoring S04avahi-daemon symlink in rc3.d, not generating avahi-daemon.service.
[ 5.137635] systemd-sysv-generator[66]: Ignoring S01dbus symlink in rc3.d, not generating dbus.service.
[ 5.147118] systemd-sysv-generator[66]: Ignoring S03dbus symlink in rc3.d, not generating dbus.service.
[ 5.156631] systemd-sysv-generator[66]: Ignoring S03cron symlink in rc3.d, not generating cron.service.
[ 5.167496] systemd-sysv-generator[66]: Ignoring S01rsyslog symlink in rc4.d, not generating rsyslog.service.
[ 5.177641] systemd-sysv-generator[66]: Ignoring S04ofono symlink in rc4.d, not generating ofono.service.
[ 5.187344] systemd-sysv-generator[66]: Ignoring S03ssh symlink in rc4.d, not generating ssh.service.
[ 5.196702] systemd-sysv-generator[66]: Ignoring S01rsync symlink in rc4.d, not generating rsync.service.
[ 5.206365] systemd-sysv-generator[66]: Ignoring S04dundee symlink in rc4.d, not generating dundee.service.
[ 5.216198] systemd-sysv-generator[66]: Ignoring S01hostapd symlink in rc4.d, not generating hostapd.service.
[ 5.226224] systemd-sysv-generator[66]: Ignoring S03rsync symlink in rc4.d, not generating rsync.service.
[ 5.235886] systemd-sysv-generator[66]: Ignoring S03rmnologin symlink in rc4.d, not generating rmnologin.service.
[ 5.246259] systemd-sysv-generator[66]: Ignoring S01cron symlink in rc4.d, not generating cron.service.
[ 5.255763] systemd-sysv-generator[66]: Ignoring S01apache2 symlink in rc4.d, not generating apache2.service.
[ 5.265771] systemd-sysv-generator[66]: Ignoring S01ssh symlink in rc4.d, not generating ssh.service.
[ 5.275106] systemd-sysv-generator[66]: Ignoring S01sudo symlink in rc4.d, not generating sudo.service.
[ 5.284660] systemd-sysv-generator[66]: Ignoring S01bluetooth symlink in rc4.d, not generating bluetooth.service.
[ 5.295023] systemd-sysv-generator[66]: Ignoring S01ofono symlink in rc4.d, not generating ofono.service.
[ 5.304697] systemd-sysv-generator[66]: Ignoring S04bluetooth symlink in rc4.d, not generating bluetooth.service.
[ 5.315074] systemd-sysv-generator[66]: Ignoring S02apache2 symlink in rc4.d, not generating apache2.service.
[ 5.325082] systemd-sysv-generator[66]: Ignoring S01connman symlink in rc4.d, not generating connman.service.
[ 5.335108] systemd-sysv-generator[66]: Ignoring S04connman symlink in rc4.d, not generating connman.service.
[ 5.345135] systemd-sysv-generator[66]: Ignoring S05rc.local symlink in rc4.d, not generating rc.local.service.
[ 5.355336] systemd-sysv-generator[66]: Ignoring S01dundee symlink in rc4.d, not generating dundee.service.
[ 5.365172] systemd-sysv-generator[66]: Ignoring S01bootlogs symlink in rc4.d, not generating bootlogs.service.
[ 5.375388] systemd-sysv-generator[66]: Ignoring S01avahi-daemon symlink in rc4.d, not generating avahi-daemon.service.
[ 5.386269] systemd-sysv-generator[66]: Ignoring S04avahi-daemon symlink in rc4.d, not generating avahi-daemon.service.
[ 5.397180] systemd-sysv-generator[66]: Ignoring S01dbus symlink in rc4.d, not generating dbus.service.
[ 5.406664] systemd-sysv-generator[66]: Ignoring S03dbus symlink in rc4.d, not generating dbus.service.
[ 5.416178] systemd-sysv-generator[66]: Ignoring S03cron symlink in rc4.d, not generating cron.service.
[ 5.426959] systemd-sysv-generator[66]: Ignoring S01rsyslog symlink in rc5.d, not generating rsyslog.service.
[ 5.437110] systemd-sysv-generator[66]: Ignoring S04ofono symlink in rc5.d, not generating ofono.service.
[ 5.446818] systemd-sysv-generator[66]: Ignoring S03ssh symlink in rc5.d, not generating ssh.service.
[ 5.456174] systemd-sysv-generator[66]: Ignoring S01rsync symlink in rc5.d, not generating rsync.service.
[ 5.465836] systemd-sysv-generator[66]: Ignoring S04dundee symlink in rc5.d, not generating dundee.service.
[ 5.475671] systemd-sysv-generator[66]: Ignoring S01hostapd symlink in rc5.d, not generating hostapd.service.
[ 5.485698] systemd-sysv-generator[66]: Ignoring S03rsync symlink in rc5.d, not generating rsync.service.
[ 5.495360] systemd-sysv-generator[66]: Ignoring S03rmnologin symlink in rc5.d, not generating rmnologin.service.
[ 5.505731] systemd-sysv-generator[66]: Ignoring S01cron symlink in rc5.d, not generating cron.service.
[ 5.515235] systemd-sysv-generator[66]: Ignoring S01apache2 symlink in rc5.d, not generating apache2.service.
[ 5.525241] systemd-sysv-generator[66]: Ignoring S01ssh symlink in rc5.d, not generating ssh.service.
[ 5.534565] systemd-sysv-generator[66]: Ignoring S01sudo symlink in rc5.d, not generating sudo.service.
[ 5.544117] systemd-sysv-generator[66]: Ignoring S01bluetooth symlink in rc5.d, not generating bluetooth.service.
[ 5.554480] systemd-sysv-generator[66]: Ignoring S01ofono symlink in rc5.d, not generating ofono.service.
[ 5.564153] systemd-sysv-generator[66]: Ignoring S04bluetooth symlink in rc5.d, not generating bluetooth.service.
[ 5.574532] systemd-sysv-generator[66]: Ignoring S02apache2 symlink in rc5.d, not generating apache2.service.
[ 5.584540] systemd-sysv-generator[66]: Ignoring S01connman symlink in rc5.d, not generating connman.service.
[ 5.594567] systemd-sysv-generator[66]: Ignoring S04connman symlink in rc5.d, not generating connman.service.
[ 5.604592] systemd-sysv-generator[66]: Ignoring S05rc.local symlink in rc5.d, not generating rc.local.service.
[ 5.614792] systemd-sysv-generator[66]: Ignoring S01dundee symlink in rc5.d, not generating dundee.service.
[ 5.624629] systemd-sysv-generator[66]: Ignoring S01bootlogs symlink in rc5.d, not generating bootlogs.service.
[ 5.634845] systemd-sysv-generator[66]: Ignoring S01avahi-daemon symlink in rc5.d, not generating avahi-daemon.service.
[ 5.645726] systemd-sysv-generator[66]: Ignoring S04avahi-daemon symlink in rc5.d, not generating avahi-daemon.service.
[ 5.656638] systemd-sysv-generator[66]: Ignoring S01dbus symlink in rc5.d, not generating dbus.service.
[ 5.666122] systemd-sysv-generator[66]: Ignoring S03dbus symlink in rc5.d, not generating dbus.service.
[ 5.675635] systemd-sysv-generator[66]: Ignoring S03cron symlink in rc5.d, not generating cron.service.
[ 5.685319] systemd-sysv-generator[66]: Loading SysV script /etc/init.d/udhcpd
[ 5.694263] systemd-sysv-generator[66]: Loading SysV script /etc/init.d/exim4
[ 5.705397] systemd-sysv-generator[66]: Loading SysV script /etc/init.d/loadcpufreq
[ 5.716624] systemd-sysv-generator[66]: Loading SysV script /etc/init.d/cpufrequtils
[ 5.731392] systemd[55]: /lib/systemd/system-generators/systemd-sysv-generator succeeded.
[ 5.739965] systemd[55]: /lib/systemd/system-generators/systemd-debug-generator succeeded.
[ 5.748639] systemd[55]: /lib/systemd/system-generators/systemd-run-generator succeeded.
[ 5.757026] systemd[55]: /lib/systemd/system-generators/systemd-veritysetup-generator terminated by signal SEGV.
[ 5.767390] systemd[55]: /lib/systemd/system-generators/systemd-bless-boot-generator terminated by signal SEGV.
[ 5.777681] systemd[55]: /lib/systemd/system-generators/systemd-system-update-generator succeeded.
[ 5.786799] systemd[55]: /lib/systemd/system-generators/systemd-fstab-generator terminated by signal SEGV.
[ 5.797673] systemd[1]: (sd-executor) succeeded.
[ 5.802788] systemd[1]: Looking for unit files in (higher priority first):
[ 5.809936] systemd[1]: /etc/systemd/system.control
[ 5.815015] systemd[1]: /run/systemd/system.control
[ 5.820012] systemd[1]: /run/systemd/transient
[ 5.824629] systemd[1]: /etc/systemd/system
[    5.828928] systemd[1]:   /run/systemd/system

I can't find anything consistent between boots, except that it crashes...
EDIT 2:
Expected Dev board output is:
Welcome to Debian GNU/Linux 10 (buster)!
[ 6.828019] systemd[1]: Set hostname to <arm>.
[    7.793038] systemd[1]: File /lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ 7.810794] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ 8.292129] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
...
(and so on)

Note that in a SysVinit environment with no modules I can boot to the shell, but then some commands cause a freeze.
| AM335x - Custom Board <SEGV> when running Systemd |
The "Segmentation Fault" messages are not printed by the faulting program, but by the shell.
The *** stack smashing detected *** & backtrace + memmap messages (at least on my system) are printed by the stack protector handler directly to the controlling terminal (_PATH_TTY, i.e. /dev/tty, is opened directly, with no regard to stdout or stderr, and the messages are written there -- see fortify_fail.c and libc_fatal.c in glibc).
If you want to catch the whole thing, run your program with script(1) (e.g. script -c './rpneval ...') or something similar.
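As a quick illustration (the echoed text below merely simulates the message the stack-protector handler would print; the file names are arbitrary), script's pseudo-terminal captures even output written straight to /dev/tty:

```shell
# script(1) runs the command on a pseudo-terminal and logs everything
# that appears on it -- including writes made directly to /dev/tty,
# which ordinary > and 2> redirections cannot catch.
script -q -c 'echo "*** stack smashing detected *** (simulated)" > /dev/tty' capture.txt

# The message is now in the typescript file and can be grepped:
grep "stack smashing" capture.txt
```

Running the real faulting program under script the same way puts the crash messages into capture.txt alongside its normal output.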
|
I can redirect stdout and stderr of a program using ./a.out > output.txt 2> error.txt
But these don't redirect messages like segmentation faults.
So I found
{ ./a.out < $TEST_DIR'test'$i'.in' > $OUTPUT_DIR/output$i.txt ; } 2> $OUTPUT_DIR/error$i.txt

Yet the core dump and stack smash messages are still not redirected.
How do I redirect them?
| How to redirect Core dump and stack smash messages |
I've had the same problem a number of times (I don't remember how many, but at least two), and I eventually came to consider this a fatal situation. The only thing that worked for me was a completely new installation. What I blame for this is the SD card. That's only my guess, I don't have any hard arguments. Well, I might have one: this always happened after my Pi had spent a long time idle in a drawer. So my pessimistic advice is to back up whatever might still be of value and start anew. But if I were you, I'd wait for other answers before proceeding, as I'm not that experienced or knowledgeable.
|
I'm having a strange occurrence on my Raspbian Raspberry Pi, as most programs are now segfaulting before they even start:
user@raspberrypi:~ $ sudo -s
Segmentation fault
user@raspberrypi:~ $ ssh -vvv localhost
Segmentation fault
user@raspberrypi:~ $ sudo reboot
Segmentation fault
user@raspberrypi:~ $ sudo apt update
Segmentation fault
user@raspberrypi:~ $ htop
htop 2.0.2 aborting. Please report bug at http://hisham.hm/htop
(...)

Some simple things still work:
user@raspberrypi:~ $ touch abc
user@raspberrypi:~ $ ls
abc

My uname -a is Linux raspberrypi 4.9.41-v7+ #1023 SMP Tue Aug 8 16:00:15 BST 2017 armv7l GNU/Linux, and free returns
total used free shared buff/cache available
Mem: 927M 40M 31M 9.1M 855M 816M
Swap:          99M      13M      86M

I can't reboot gracefully since sudo and su both die. This session was open before this started, but I can't open new ssh sessions into the box either.
I'll try to pull the plug and turn it on again, but what could be causing this? I did an apt upgrade earlier, but it seems like this started later; I have no idea what other culprits there may be.
Thanks!
| Most programs suddenly segfault |
Following meuh's suggestion, I ran Python with strace and looked at the differences between interactive and non-interactive Python.
Interactive Python read my ~/.inputrc as it uses readline, and this was the file that was causing the Segmentation fault (core dumped).
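A quick way to see the asymmetry this relies on (assuming a python3 binary is available): in a non-interactive run the interpreter never imports readline, so ~/.inputrc is never parsed, which is why scripts and piped input kept working:

```shell
# Non-interactive runs (-c, scripts, pipes) never auto-import readline;
# only the interactive REPL does, and readline is what reads ~/.inputrc.
python3 -c 'import sys; print("readline" in sys.modules)'
# prints: False
```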
I had an ~/.inputrc which came from another machine (Ubuntu) and inside of it I had blindly copied the contents of (Ubuntu) /usr/share/doc/bash/inputrc.arrows.
The content of /usr/share/doc/bash/inputrc.arrows is:
# This file controls the behaviour of line input editing for
# programs that use the Gnu Readline library.
#
# Arrow keys in keypad mode
#
"\C-[OD" backward-char
"\C-[OC" forward-char
"\C-[OA" previous-history
"\C-[OB" next-history
#
# Arrow keys in ANSI mode
#
"\C-[[D" backward-char
"\C-[[C" forward-char
"\C-[[A" previous-history
"\C-[[B" next-history
#
# Arrow keys in 8 bit keypad mode
#
"\C-M-OD" backward-char
"\C-M-OC" forward-char
"\C-M-OA" previous-history
"\C-M-OB" next-history
#
# Arrow keys in 8 bit ANSI mode
#
"\C-M-[D" backward-char
"\C-M-[C" forward-char
"\C-M-[A" previous-history
"\C-M-[B" next-history

The 8 bit keypad mode and 8 bit ANSI mode sections were the specific cause of the problem, so after removing them everything works fine.
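For reference, the trimmed ~/.inputrc that remains after deleting the two 8 bit sections (just the surviving portion of the Ubuntu example above):

```
# Arrow keys in keypad mode
"\C-[OD" backward-char
"\C-[OC" forward-char
"\C-[OA" previous-history
"\C-[OB" next-history
# Arrow keys in ANSI mode
"\C-[[D" backward-char
"\C-[[C" forward-char
"\C-[[A" previous-history
"\C-[[B" next-history
```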
Thanks to thrig for pointing me to gdb and being patient enough as I had never used that tool before. The same with meuh who suggested using strace which was also new to me. I had no idea where to start debugging as I am just a casual user who enjoys learning new things. Great community!
|
Using Arch Linux everytime I try to use Python interactive mode no matter what I type I get Segmentation fault (core dumped) and the Python interpreter exits.
I do not have any problem running Python scripts or doing something like:
$ echo "print(1+1)" | python

But when I enter interactive mode, whether with python or python2, as soon as I type any command and press Enter the interpreter halts, and if I press Enter again (or any other key) I get the message Segmentation fault (core dumped) and the interpreter exits.
I tried installing bpython and had no problems or errors with that interface to the Python interpreter.
I tried gdb, and when I typed run at the gdb prompt I had to press Enter twice (after one Enter it just halted); then I got:
Starting program: /usr/bin/python
Segmentation fault (core dumped)

and gdb exits.
Maybe this information is useful:
$ which python
/usr/bin/python

$ which python2
/usr/bin/python2

$ python --version
Python 3.6.1

$ python2 --version
Python 2.7.13

$ uname -a
Linux archimiro 4.11.6-3-ARCH #1 SMP PREEMPT Thu Jun 22 12:21:46 CEST 2017 x86_64 GNU/Linux
There are two common sources for this kind of problem (i.e. one affecting multiple unrelated programs):

1. Faulty memory. Use memtester or memtest86 to test your memory. Replace any bad DIMMs. If your motherboard supports it, buy ECC RAM - it's usually only 10-30% more expensive. Note that some distros (e.g. Debian) are conveniently configured to add a grub entry to run memtest86 when you install the memtest86 package; memtester can be run without having to reboot.

2. A bad library that's common to all affected programs. Have you upgraded recently? At a guess, I'd start looking suspiciously at the gnome/gtk libraries, as all the programs you mentioned either rely on them or can be compiled to use them. Other potential suspects include libc6. You can use ldd to find out exactly which libraries each program uses and compare the lists to find common ones.
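For the ldd comparison, a sketch along these lines works (the two binaries below are placeholders; substitute the programs that actually crash for you):

```shell
# List each program's shared libraries, then intersect the two lists.
ldd /bin/ls  | awk '{print $1}' | sort -u > a.libs
ldd /bin/cat | awk '{print $1}' | sort -u > b.libs

# Libraries appearing in both lists are the shared suspects
# (typically libc, the dynamic loader, and any common GUI toolkits).
comm -12 a.libs b.libs
```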
On my build of Arch Linux I've recently noticed a weird problem. After launching certain programs from the shell it will echo "Segmentation Fault (core dumped)".
Some examples of this are when I close shutter, launch chrome, launch sublime text, or close emacs.
As far as I can tell the segfaults aren't affecting the programs, but they show up consistently and it's starting to get kind of annoying.
I have no idea what's causing them, and couldn't find any info on it after searching around for a while.
My shell is bash and my terminal is urxvt.
| After some commands, bash prints "Segmentation Fault (core dumped)" for unknown reason |
Check what CPU architecture you have on your system. You're likely mixing 32-bit and 64-bit binaries and libraries together, which is probably what's giving you this error message.
You can find out with this command:
# 64-bit system
$ getconf LONG_BIT
64

# 32-bit system
$ getconf LONG_BIT
32

If you have 32, you should uninstall the 64-bit package of iptables. If you have 64, uninstall the 32-bit one.
How to uninstall?
On the surface it would appear that the packages have the same name but I assure you they're different. You can get their actual names using this command:
$ rpm -aq iptables*
iptables-services-1.4.18-1.fc19.x86_64
iptables-1.4.18-1.fc19.x86_64
iptables-1.4.18-1.fc19.i686

So to get rid of the 32-bit version you can use this command:
$ yum remove iptables-1.4.18-1.fc19.i686Obviously substitute your result in for the example above.
References

32-bit, 64-bit CPU op-mode on Linux
I'm currently trying to set up a MySQL replication and I don't have a /etc/sysconfig/iptables.
So I tried to install it with yum install. It showed the following output.
Installed Packages
Name : iptables
Arch : x86_64
Version : 1.4.7
Release : 11.el6
Size : 836 k
Repo : installed
From repo : anaconda-RedHatEnterpriseLinux-201311111358.x86_64
Summary : Tools for managing Linux kernel packet filtering capabilities
URL : http://www.netfilter.org/
License : GPLv2
Description : The iptables utility controls the network packet filtering code in
: the Linux kernel. If you need to set up firewalls and/or IP
: masquerading, you should install this package.Available Packages
Name : iptables
Arch : i686
Version : 1.4.7
Release : 11.el6
Size : 247 k
Repo : rhel-x86_64-server-6
Summary : Tools for managing Linux kernel packet filtering capabilities
License : GPLv2
Description : The iptables utility controls the network packet filtering code in
: the Linux kernel. If you need to set up firewalls and/or IP
            : masquerading, you should install this package.

So far so good, but when I try to run iptables I get a segmentation fault.
Do I need better hardware?
[root@wltwd1 sysconfig]# iptables
Segmentation fault (core dumped)

This is my lscpu output:
[root@wltwd1 sysconfig]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 1
On-line CPU(s) list: 0
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Stepping: 2
CPU MHz: 1995.034
BogoMIPS: 3990.06
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 18432K
NUMA node0 CPU(s): 0 | Segmentation fault (core dumped) when try to run iptables rhel6 |
I simply added this patch to my layer and now it works.
|
I plan to run an Electron app on my UDOO Neo i.MX6 board with the official st1232 7'' touchscreen kit for the UDOO Neo. I created a custom Linux distro with Poky (thud branch) to be able to build a "ready to boot" image. I enabled the "x11-base" image feature to get all the Xorg packages, and I use the meta-freescale layer, which provides patches to make the distro fully compatible with the hardware (Vivante).
However, at the end of the boot process, I get a seg fault error message from xinit.
Here is my /etc/X11/xorg.conf file:
Section "Device"
Identifier "i.MX Accelerated Framebuffer Device"
Driver "vivante"
Option "fbdev" "/dev/fb0"
Option "vivante_fbdev" "/dev/fb0"
Option "HWcursor" "false"
Option "DisplayEngine" "pxp"
EndSectionSection "ServerFlags"
Option "BlankTime" "0"
Option "StandbyTime" "0"
Option "SuspendTime" "0"
Option "OffTime" "0"
EndSectionSection "InputClass"
Identifier "Touchscreen"
MatchProduct "st1232-touchscreen"
Driver "evdev"
Option "Calibration" "3 794 476 0"
EndSectionIt's added by the meta-freescale layer and I've appended the st1232 touchscreen InputClass manually.
The output of the /var/log/Xorg.0.log file:
X.Org X Server 1.20.1
X Protocol Version 11, Revision 0
[ 63.243] Build Operating System: Linux 4.15.0-72-generic x86_64
[ 63.244] Current Operating System: Linux 4.1.15+2.0.x-udoo+g34f88fa2766c #1 SMP PREEMPT Mon Jan 6 14:51:20 UTC 2020 armv7l
[ 63.244] Kernel command line: console=ttymxc0,115200,115200 root=/dev/mmcblk0p1 rootwait rw rootfstype=ext4 uart_from_osc clk_ignore_unused cpuidle.off=1 consoleblank=0
[ 63.257] Build Date: 07 January 2020 10:34:32AM
[ 63.257]
[ 63.258] Current version of pixman: 0.34.0
[ 63.258] Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
[ 63.258] Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 63.260] (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 7 11:51:16 2020
[ 63.288] (==) Using config file: "/etc/X11/xorg.conf"
[ 63.288] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[ 63.297] (==) No Layout section. Using the first Screen section.
[ 63.297] (==) No screen section available. Using defaults.
[ 63.297] (**) |-->Screen "Default Screen Section" (0)
[ 63.297] (**) | |-->Monitor "<default monitor>"
[ 63.298] (==) No device specified for screen "Default Screen Section".
Using the first device section listed.
[ 63.298] (**) | |-->Device "i.MX Accelerated Framebuffer Device"
[ 63.298] (==) No monitor specified for screen "Default Screen Section".
Using a default monitor configuration.
[ 63.298] (**) Option "BlankTime" "0"
[ 63.298] (**) Option "StandbyTime" "0"
[ 63.298] (**) Option "SuspendTime" "0"
[ 63.298] (**) Option "OffTime" "0"
[ 63.298] (==) Automatically adding devices
[ 63.298] (==) Automatically enabling devices
[ 63.298] (==) Automatically adding GPU devices
[ 63.298] (==) Max clients allowed: 256, resource mask: 0x1fffff
[ 63.299] (WW) The directory "/usr/share/fonts/X11/misc/" does not exist.
[ 63.299] Entry deleted from font path.
[ 63.299] (WW) The directory "/usr/share/fonts/X11/TTF/" does not exist.
[ 63.299] Entry deleted from font path.
[ 63.299] (WW) The directory "/usr/share/fonts/X11/OTF/" does not exist.
[ 63.299] Entry deleted from font path.
[ 63.299] (WW) The directory "/usr/share/fonts/X11/Type1/" does not exist.
[ 63.299] Entry deleted from font path.
[ 63.299] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist.
[ 63.299] Entry deleted from font path.
[ 63.299] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist.
[ 63.299] Entry deleted from font path.
[    63.299] (==) FontPath set to:
[    63.299] (==) ModulePath set to "/usr/lib/xorg/modules"
[ 63.299] (II) The server relies on udev to provide the list of input devices.
If no devices become available, reconfigure udev or disable AutoAddDevices.
[ 63.299] (II) Loader magic: 0x54cd6d78
[ 63.300] (II) Module ABI versions:
[ 63.300] X.Org ANSI C Emulation: 0.4
[ 63.300] X.Org Video Driver: 24.0
[ 63.300] X.Org XInput driver : 24.1
[ 63.300] X.Org Server Extension : 10.0
[ 63.301] (II) xfree86: Adding drm device (/dev/dri/card0)
[ 63.301] (II) no primary bus or device found
[ 63.301] falling back to /sys/devices/platform/Vivante GCCore/drm/card0
[ 63.301] (II) LoadModule: "glx"
[ 63.326] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[ 63.463] (II) Module glx: vendor="X.Org Foundation"
[ 63.463] compiled for 1.20.1, module version = 1.0.0
[ 63.463] ABI class: X.Org Server Extension, version 10.0
[ 63.463] (II) LoadModule: "vivante"
[ 63.467] (II) Loading /usr/lib/xorg/modules/drivers/vivante_drv.so
[ 63.478] (II) Module vivante: vendor="X.Org Foundation"
[ 63.478] compiled for 1.20.1, module version = 1.0.0
[ 63.478] ABI class: X.Org Video Driver, version 24.0
[ 63.478] (II) VIVANTE: fb driver for vivante: VivanteGC500, VivanteGC2100,
VivanteGCCORE
[    63.479] (--) using VT number 1
[    63.482] (WW) Falling back to old probe method for vivante
[ 63.482] (II) Loading sub module "fbdevhw"
[ 63.482] (II) LoadModule: "fbdevhw"
[ 63.482] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
[ 63.484] (II) Module fbdevhw: vendor="X.Org Foundation"
[ 63.484] compiled for 1.20.1, module version = 0.0.2
[ 63.484] ABI class: X.Org Video Driver, version 24.0
[ 63.484] (II) VIVANTE(0): using default device
[ 63.484] (WW) VGA arbiter: cannot open kernel arbiter, no multi-card support
[ 63.485] (WW) VIVANTE(0): Cannot get device preferred mode '/sys/class/graphics/fb0/mode (Not a directory)'
[ 63.485] (II) VIVANTE(0): Creating default Display subsection in Screen section
"Default Screen Section" for depth/fbbpp 16/16
[ 63.485] (==) VIVANTE(0): Depth 16, (==) framebuffer bpp 16
[ 63.485] (==) VIVANTE(0): RGB weight 565
[ 63.485] (==) VIVANTE(0): Default visual is TrueColor
[ 63.485] (==) VIVANTE(0): Using gamma correction (1.0, 1.0, 1.0)
[ 63.485] (DB) xf86MergeOutputClassOptions unsupported bus type 0
[ 63.485] (**) VIVANTE(0): mExaHwType:1
[ 63.485] (II) VIVANTE(0): checking modes against framebuffer device...
[ 63.485] (II) VIVANTE(0): checking modes against monitor...
[ 63.485] (II) VIVANTE(0): Virtual size is 0x0 (pitch 0)
[ 63.485] (==) VIVANTE(0): DPI set to (96, 96)
[ 63.485] (II) Loading sub module "fb"
[ 63.485] (II) LoadModule: "fb"
[ 63.485] (II) Loading /usr/lib/xorg/modules/libfb.so
[ 63.493] (II) Module fb: vendor="X.Org Foundation"
[ 63.493] compiled for 1.20.1, module version = 1.0.0
[ 63.493] ABI class: X.Org ANSI C Emulation, version 0.4
[ 63.493] (II) Loading sub module "exa"
[ 63.493] (II) LoadModule: "exa"
[ 63.495] (II) Loading /usr/lib/xorg/modules/libexa.so
[ 63.499] (II) Module exa: vendor="X.Org Foundation"
[ 63.499] compiled for 1.20.1, module version = 2.6.0
[ 63.499] ABI class: X.Org Video Driver, version 24.0
[ 63.500] (II) VIVANTE(0): printing discovered frame buffer 'fb0' supported modes:
[ 63.501] (II) VIVANTE(0): Modeline "U:800x480p-60"x0.0 33.66 800 850 1000 1056 480 500 502 525 -hsync -vsync -csync (31.9 kHz e)
[ 63.501] (II) VIVANTE(0): Output mxs-lcdif1 has no monitor section
[ 63.501] (II) VIVANTE(0): Printing probed modes for output mxs-lcdif1
[ 63.501] (II) VIVANTE(0): Modeline "U:800x480p-60"x60.7 33.66 800 850 1000 1056 480 500 502 525 -hsync -vsync -csync (31.9 kHz e)
[ 63.501] (II) VIVANTE(0): Output mxs-lcdif1 connected
[ 63.502] (II) VIVANTE(0): Using sloppy heuristic for initial modes
[ 63.502] (II) VIVANTE(0): Output mxs-lcdif1 using initial mode U:800x480p-60 +0+0
[ 63.502] (II) VIVANTE(0): imxDisplayPreInit: virtual set 800 x 480, display width 0
[ 63.502] (II) VIVANTE(0): VivPreInit: adjust display width 800
[ 63.502] (II) VIVANTE(0): reserve 4177920 bytes for on screen frame buffer; total fb memory size 33554432 bytes; offset of shadow buffer 4177920
[ 63.504] (II) VIVANTE(0): hardware: mxs-lcdif1 (video memory: 32768kB)
[ 63.512] (II) VIVANTE(0): FB Start = 0x7458d000 FB Base = 0x7458d000 FB Offset = (nil)
[ 63.513] (II) VIVANTE(0): test Initializing EXA
[ 63.817] (II) EXA(0): Driver allocated offscreen pixmaps
[ 63.817] (II) EXA(0): Driver registered support for the following operations:
[ 63.817] (II) Solid
[ 63.817] (II) Copy
[ 63.817] (II) Composite (RENDER acceleration)
[ 63.817] (II) UploadToScreen
[ 63.817] (==) VIVANTE(0): Backing store enabled
[ 63.818] (==) VIVANTE(0): DPMS enabled
[ 63.820] drmOpenDevice: node name is /dev/dri/card0
[ 63.821] drmOpenDevice: open result is 11, (OK)
[ 63.821] drmOpenDevice: node name is /dev/dri/card0
[ 63.821] drmOpenDevice: open result is 11, (OK)
[ 63.821] drmOpenByBusid: Searching for BusID platform:Vivante GCCore:00
[ 63.821] drmOpenDevice: node name is /dev/dri/card0
[ 63.821] drmOpenDevice: open result is 11, (OK)
[ 63.821] drmOpenByBusid: drmOpenMinor returns 11
[ 63.821] drmOpenByBusid: drmGetBusid reports platform:Vivante GCCore:00
[ 63.821] (II) [drm] DRM interface version 1.4
[ 63.821] (II) [drm] DRM open master succeeded.
[ 63.821] (II) VIVANTE(0): [drm] Using the DRM lock SAREA also for drawables.
[ 63.821] (II) VIVANTE(0): [drm] framebuffer handle = 0xac100000
[ 63.821] (II) VIVANTE(0): [drm] added 1 reserved context for kernel
[ 63.821] (II) VIVANTE(0): X context handle = 0x1
[ 63.821] (EE) VIVANTE(0): [drm] failed to setup DRM signal handler
[ 63.821] (EE) VIVANTE(0): [dri] DRIScreenInit failed. Disabling DRI.
[ 63.822] (II) Initializing extension Generic Event Extension
[ 63.822] (II) Initializing extension SHAPE
[ 63.822] (II) Initializing extension MIT-SHM
[ 63.822] (II) Initializing extension XInputExtension
[ 63.825] (II) Initializing extension XTEST
[ 63.825] (II) Initializing extension BIG-REQUESTS
[ 63.825] (II) Initializing extension SYNC
[ 63.825] (II) Initializing extension XKEYBOARD
[ 63.825] (II) Initializing extension XC-MISC
[ 63.825] (II) Initializing extension XFIXES
[ 63.825] (II) Initializing extension RENDER
[ 63.825] (II) Initializing extension RANDR
[ 63.825] (II) Initializing extension COMPOSITE
[ 63.825] (II) Initializing extension DAMAGE
[ 63.825] (II) Initializing extension MIT-SCREEN-SAVER
[ 63.825] (II) Initializing extension DOUBLE-BUFFER
[ 63.825] (II) Initializing extension DPMS
[ 63.825] (II) Initializing extension Present
[ 63.825] (II) Initializing extension DRI3
[ 63.825] (II) Initializing extension X-Resource
[ 63.826] (II) Initializing extension XVideo
[ 63.826] (II) Initializing extension XVideo-MotionCompensation
[ 63.826] (II) Initializing extension GLX
[ 63.826] (II) AIGLX: Screen 0 is not DRI2 capable
[ 63.974] (II) IGLX: Loaded and initialized swrast
[ 63.974] (II) GLX: Initialized DRISWRAST GL provider for screen 0
[ 63.974] (II) Initializing extension XFree86-VidModeExtension
[ 63.974] (II) Initializing extension XFree86-DGA
[ 63.974] (II) Initializing extension XFree86-DRI
[ 63.974] (II) Initializing extension DRI2
[ 63.974] (II) Initializing extension vivext
[ 63.974] (II) VIVANTE(0): Setting screen physical size to 211 x 127
[ 64.295] (II) config/udev: Adding input device 20cc000.snvs:snvs-powerkey (/dev/input/event0)
[ 64.295] (**) 20cc000.snvs:snvs-powerkey: Applying InputClass "evdev keyboard catchall"
[ 64.296] (**) 20cc000.snvs:snvs-powerkey: Applying InputClass "libinput keyboard catchall"
[ 64.296] (II) LoadModule: "libinput"
[ 64.297] (II) Loading /usr/lib/xorg/modules/input/libinput_drv.so
[ 64.323] (II) Module libinput: vendor="X.Org Foundation"
[ 64.323] compiled for 1.20.1, module version = 0.28.0
[ 64.323] Module class: X.Org XInput Driver
[ 64.323] ABI class: X.Org XInput driver, version 24.1
[ 64.323] (II) Using input driver 'libinput' for '20cc000.snvs:snvs-powerkey'
[ 64.323] (**) 20cc000.snvs:snvs-powerkey: always reports core events
[ 64.323] (**) Option "Device" "/dev/input/event0"
[ 64.323] (**) Option "_source" "server/udev"
[ 64.324] (II) event0 - 20cc000.snvs:snvs-powerkey: is tagged by udev as: Keyboard
[ 64.324] (II) event0 - 20cc000.snvs:snvs-powerkey: device is a keyboard
[ 64.325] (II) event0 - 20cc000.snvs:snvs-powerkey: device removed
[ 64.360] (**) Option "config_info" "udev:/sys/devices/soc0/soc/2000000.aips-bus/20cc000.snvs/20cc000.snvs:snvs-powerkey/input/input0/event0"
[ 64.360] (II) XINPUT: Adding extended input device "20cc000.snvs:snvs-powerkey" (type: KEYBOARD, id 6)
[ 64.362] (II) event0 - 20cc000.snvs:snvs-powerkey: is tagged by udev as: Keyboard
[ 64.362] (II) event0 - 20cc000.snvs:snvs-powerkey: device is a keyboard
[ 64.365] (II) config/udev: Adding input device st1232-touchscreen (/dev/input/event1)
[ 64.365] (**) st1232-touchscreen: Applying InputClass "Touchscreen"
[ 64.365] (II) LoadModule: "evdev"
[ 64.366] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
[ 64.372] (II) Module evdev: vendor="X.Org Foundation"
[ 64.372] compiled for 1.20.1, module version = 2.10.6
[ 64.372] Module class: X.Org XInput Driver
[ 64.372] ABI class: X.Org XInput driver, version 24.1
[ 64.372] (II) Using input driver 'evdev' for 'st1232-touchscreen'
[ 64.372] (**) st1232-touchscreen: always reports core events
[ 64.372] (**) evdev: st1232-touchscreen: Device: "/dev/input/event1"
[ 64.373] (II) evdev: st1232-touchscreen: Using mtdev for this device
[ 64.373] (--) evdev: st1232-touchscreen: Vendor 0 Product 0
[ 64.378] (--) evdev: st1232-touchscreen: Found absolute axes
[ 64.378] (--) evdev: st1232-touchscreen: Found absolute multitouch axes
[ 64.378] (II) evdev: st1232-touchscreen: No buttons found, faking one.
[ 64.378] (II) evdev: st1232-touchscreen: Forcing relative x/y axes to exist.
[ 64.379] (II) evdev: st1232-touchscreen: Configuring as mouse
[ 64.379] (**) evdev: st1232-touchscreen: YAxisMapping: buttons 4 and 5
[ 64.379] (**) evdev: st1232-touchscreen: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200
[ 64.379] (**) Option "config_info" "udev:/sys/devices/soc0/soc/2100000.aips-bus/21a0000.i2c/i2c-0/0-0055/input/input1/event1"
[ 64.379] (II) XINPUT: Adding extended input device "st1232-touchscreen" (type: MOUSE, id 7)
[ 64.379] (II) evdev: st1232-touchscreen: initialized for relative axes.
[ 64.379] (WW) evdev: st1232-touchscreen: ignoring absolute axes.
[ 64.381] (**) st1232-touchscreen: (accel) keeping acceleration scheme 1
[ 64.381] (**) st1232-touchscreen: (accel) acceleration profile 0
[ 64.381] (**) st1232-touchscreen: (accel) acceleration factor: 2.000
[ 64.381] (**) st1232-touchscreen: (accel) acceleration threshold: 4
[ 64.384] (II) config/udev: Adding input device FreescaleGyroscope (/dev/input/event2)
[ 64.384] (II) No input driver specified, ignoring this device.
[ 64.384] (II) This device may have been added with another device file.
[ 64.385] (II) config/udev: Adding input device FreescaleAccelerometer (/dev/input/event3)
[ 64.385] (II) No input driver specified, ignoring this device.
[ 64.385] (II) This device may have been added with another device file.
[ 64.386] (II) config/udev: Adding input device FreescaleMagnetometer (/dev/input/event4)
[ 64.386] (II) No input driver specified, ignoring this device.
[ 64.386] (II) This device may have been added with another device file.
[ 64.431] (EE)
[ 64.432] (EE) Backtrace:
[ 64.432] (EE)
[ 64.433] (EE) Segmentation fault at address 0x4
[ 64.434] (EE)
Fatal server error:
[ 64.440] (EE) Caught signal 11 (Segmentation fault). Server aborting
[ 64.440] (EE)
[ 64.441] (EE)
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
[ 64.442] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
[ 64.442] (EE)
[    64.447] (EE) Server terminated with error (1). Closing log file.

Is my Xorg configuration right? What's important in order to make Xorg work? I've heard about export DISPLAY=:0 a lot, but running it doesn't solve my problem.
| How to configure X11 for embedded i.MX6 board with touchscreen? |
I am unable to debug the problem with gdb (bt only gives me hexadecimal numbers, not function names).
Given the severity of this bug, everyone using the current version should have been affected, and indeed quite a number of people report the same thing on the bug tracker, on both Chrome AND Chromium. So I assume this is a Chromium bug and/or something on the computer is triggering it.
I'm going to file the bug on the Chromium bug tracker; I bet they're going to fix it. :]
Update: This is the bug report I posted.
Update #2: It seems like this is a GNOME-specific issue, and it works fine on KDE Plasma, so that's what I'll be using from now on.
|
Today I installed Google Chrome 55 (i.e. the latest stable version) on 64-bit SL7.0 using a 64-bit RPM. The problem is that google chrome starts fine (except for an SELinux-related issue which I fixed), but it closes without warning after about a minute. No indication of a crash is shown except for "Google Chrome crashed. Would you like to restore your tabs?" the next time I start it (only to happen again, after a minute), and the following output from the terminal when I start chrome there:
$ google-chrome
...
Failed to get crash dump id.
Report Id:
Segmentation fault (core dumped)
$

Reinstalling Chrome did not fix the problem. Neither did deleting my profile folder so Chrome could use the defaults.
This is the output when I ran systemctl status abrtd -l:
abrtd.service - ABRT Automated Bug Reporting Tool
Loaded: loaded (/usr/lib/systemd/system/abrtd.service; enabled)
Active: active (running) since Sat 2016-12-10 08:22:53 EAT; 7h ago
Main PID: 698 (abrtd)
CGroup: /system.slice/abrtd.service
└─698 /usr/sbin/abrtd -d -s

Dec 10 15:26:10 localhost.localdomain abrt-server[14419]: Package 'google-chrome-stable' isn't signed with proper key
Dec 10 15:26:10 localhost.localdomain abrt-server[14419]: 'post-create' on '/var/tmp/abrt/ccpp-2016-12-10-15:26:04-13832' exited with 1
Dec 10 15:26:10 localhost.localdomain abrt-server[14419]: Deleting problem directory '/var/tmp/abrt/ccpp-2016-12-10-15:26:04-13832'
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

In particular, Package 'google-chrome-stable' isn't signed with proper key: Is this a problem? I downloaded it from Google's official site. What should I do now?
| Google Chrome segmentation fault within a minute |
I was unable to open the user settings as root. (Really, on a fresh install.) All I had to do was type useradd tempuser1 and then reopen the user settings! I guess my segfault occurred because there were "no users" (because root doesn't count).
I'm on the latest version of Kali.
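If you want to confirm you're in the same no-regular-users state before adding one, listing non-system accounts is enough. A sketch, assuming Kali's Debian defaults (regular users get UID 1000 and up; 65534 is the nobody account):

```shell
# list regular (non-system) accounts; empty output means only root and
# system users exist, which is the state that broke the Users panel for me
awk -F: '$3 >= 1000 && $3 != 65534 {print $1}' /etc/passwd
```

If it prints nothing, create a user as above and reopen Settings.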
|
I am having a problematic issue that I can't seem to figure out. I am able to open the settings as root user just fine. When I click on Users, nothing happens. I am running on Linux kali 4.7.0-kali1-amd64 #1 SMP Debian 4.7.6-1kali1 (2016-10-17) x86_64 GNU/Linux. This is what I get in /var/log/syslog
Oct 27 21:00:09 kali kernel: [ 695.533180] gnome-control-c[1944]: segfault at 0 ip 00000000004c9a5d sp 00007fffae418480 error 4 in gnome-control-center[400000+394000]
I have tried running the command:
addr2line -e /usr/bin/gnome-control-center -fCi 0xC9A5D where 0xC9A5d is the offset into the object that was causing the problem and I get nothing.
I then ran:
addr2line -e /usr/bin/gnome-control-center -fCi 0x00000000004c9a5d and I get:
cc_universal_access_get_resource
??:?
Interesting...
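(For completeness, the bare offset I tried first is just the instruction pointer minus the module base shown in the segfault line; a quick sketch of the arithmetic:)

```shell
# from the syslog line: ip 00000000004c9a5d ... in gnome-control-center[400000+394000]
ip=0x00000000004c9a5d; base=0x400000
printf 'offset into the object: 0x%x\n' $((ip - base))   # → offset into the object: 0xc9a5d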
I understand that the cause of this was that a user-mode instruction resulted in a page fault. I have a couple of gnome-control-center files of interest listed here:
/usr/share/bash-completion/completions/gnome-control-center
/usr/bin/gnome-control-center
I doubt the problem is in the first file listed, as it is a shell script for tab completion. So I tried to run gdb on the binary, but it seems it wasn't compiled with debugging information, since there aren't any debugging symbols to be found. The whole point of this was trying to create a new user so I can install Steam on Kali to play some games in my spare time, aside from messing with this darn system lol.
When I look at GitHub for gnome-control-center, I find the cc_ua_panel.c file. It has a function in it where I have found the call to cc_universal_access_get_resource, as follows.
static void
cc_ua_panel_init (CcUaPanel *self)
{
  CcUaPanelPrivate *priv;
  GtkWidget *panel;
  GtkWidget *content;

  priv = self->priv = G_TYPE_INSTANCE_GET_PRIVATE (self,
                                                   CC_TYPE_UA_PANEL,
                                                   CcUaPanelPrivate);

  g_resources_register (cc_universal_access_get_resource ());

  priv->interface_settings = g_settings_new (INTERFACE_SETTINGS);
  priv->a11y_settings = g_settings_new (A11Y_SETTINGS);
  priv->wm_settings = g_settings_new (WM_SETTINGS);
  priv->kb_settings = g_settings_new (KEYBOARD_SETTINGS);
  priv->kb_desktop_settings = g_settings_new (KEYBOARD_DESKTOP_SETTINGS);
  priv->mouse_settings = g_settings_new (MOUSE_SETTINGS);
  priv->gsd_mouse_settings = g_settings_new (GSD_MOUSE_SETTINGS);
  priv->application_settings = g_settings_new (APPLICATION_SETTINGS);

  priv->builder = gtk_builder_new ();
  gtk_builder_add_from_resource (priv->builder,
                                 "/org/gnome/control-center/universal-access/uap.ui",
                                 NULL);

  cc_ua_panel_init_status (self);
  cc_ua_panel_init_seeing (self);
  cc_ua_panel_init_hearing (self);
  cc_ua_panel_init_keyboard (self);
  cc_ua_panel_init_mouse (self);

  panel = WID ("universal_access_panel");
  content = WID ("universal_access_content");

  gtk_scrolled_window_set_min_content_height (GTK_SCROLLED_WINDOW (panel), SCROLL_HEIGHT);

  priv->focus_adjustment = gtk_scrolled_window_get_vadjustment (GTK_SCROLLED_WINDOW (panel));
  gtk_container_set_focus_vadjustment (GTK_CONTAINER (content), priv->focus_adjustment);

  gtk_container_add (GTK_CONTAINER (self), panel);
}

I have no clue what the problem here could be if this is indeed the problem and do not know how to proceed. I wonder what resource it is looking for and why it is not there. Where could I find this and how can I fix this problem?
I've actually obtained a backtrace
Thread 1 "gnome-control-c" received signal SIGSEGV, Segmentation fault.
0x00000000004c9a5d in ?? ()
(gdb) bt full
#0 0x00000000004c9a5d in ?? ()
No symbol table info available.
#1 0x00007ffff0b74f75 in g_closure_invoke ()
from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
No symbol table info available.
#2 0x00007ffff0b86f82 in ?? ()
from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
No symbol table info available.
#3 0x00007ffff0b8fbcc in g_signal_emit_valist ()
from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
No symbol table info available.
#4 0x00007ffff0b8ffaf in g_signal_emit ()
from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
No symbol table info available.
#5 0x00007ffff0b793a4 in ?? ()
from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
No symbol table info available.
#6 0x00007ffff0b7b861 in g_object_notify ()
from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
No symbol table info available.
#7 0x00007ffff58587e2 in ?? () from /usr/lib/libaccountsservice.so.0
No symbol table info available.
#8 0x00007ffff0e424e3 in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
---Type <return> to continue, or q <return> to quit---return
No symbol table info available.
#9 0x00007ffff0e42b96 in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#10 0x00007ffff0e80a5b in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#11 0x00007ffff0e424e3 in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#12 0x00007ffff0e42b96 in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#13 0x00007ffff0e7568a in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#14 0x00007ffff0e424e3 in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#15 0x00007ffff0e42519 in ?? () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#16 0x00007ffff089b68a in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
No symbol table info available.
#17 0x00007ffff089ba40 in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
No symbol table info available.
#18 0x00007ffff089baec in g_main_context_iteration () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
No symbol table info available.
#19 0x00007ffff0e5770d in g_application_run () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0
No symbol table info available.
#20 0x000000000044cff7 in main ()
No symbol table info available. | Segfault trying to access Users in Settings |
The value to revert is org.cinnamon.desktop.interface.gtk-theme. It can be set back to Adwaita, the default value of Debian's Cinnamon installation, from the terminal with

gsettings set org.cinnamon.desktop.interface gtk-theme Adwaita

or graphically with dconf Editor (dconf-editor package), setting the value Adwaita under
org > cinnamon > desktop > interface > gtk-theme |
Installed mint-x-icons and mint-x-theme on a fresh Debian 8 testing running Cinnamon (v2.6.13). All theme options could be set to Mint-X, except for Controls.

As soon as it was selected, the Themes window crashed and couldn't be opened again, nor could cinnamon-settings in general. Nemo wouldn't start any longer either, and there were graphical oddities in many places. The terminal returned segmentation fault for both applications.
Selecting Troubleshoot... > Restore all settings to default in the panel menu doesn't fix it.
Since the default config editor is gone, how do I revert the changes?
| cinnamon-settings and Nemo crash with certain Cinnamon themes' contols on Debian 8 |
Sounds like a hardware fault. Segfaults across the whole system can be caused by bad sectors on the disk or bad memory.
Run a memory test and see if there's some faulty memory that is causing your issue.
After that, if the memory test does not find an issue, I would run a badblocks scan on your hard drive.
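For concreteness, the two checks could look roughly like this; the device name and sizes below are placeholders to adapt, not taken from your system:

```shell
# RAM: reboot into memtest86+ from the boot menu for a thorough test, or run a
# userspace pass with the memtester package (needs root to lock memory)
memtester 1024M 1

# Disk: read-only surface scan with badblocks (non-destructive, but can take hours)
badblocks -sv /dev/sda
```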
|
Lately my PC keeps getting segmentation faults everywhere, in almost every app (thunderbird, firefox, chromium, google-chrome, wine, byobu).
Here are the last few lines of dmesg output:
[ 1474.815026] traps: chrome[2134] general protection ip:7f38bb407aef sp:7fff59358350 error:0 in chrome[7f38b9b18000+51d2000]
[ 3867.645750] traps: chrome[10568] general protection ip:7fb4864ed072 sp:7fffceacd7a0 error:0 in chrome[7fb4848e0000+51d2000]
[ 5075.793435] traps: chrome[5951] general protection ip:7fb486530bbe sp:7fffceaceaa0 error:0 in chrome[7fb4848e0000+51d2000]
[ 6232.893991] traps: chrome[6031] general protection ip:7fb486530bbe sp:7fffceacd4f0 error:0 in chrome[7fb4848e0000+51d2000]
[ 6270.321944] traps: chrome[15562] general protection ip:7fb4864f0150 sp:7fffceacedb8 error:0 in chrome[7fb4848e0000+51d2000]
[ 6284.271553] traps: HTMLParserThrea[15673] general protection ip:7fb4869ed85b sp:7fb4702b6910 error:0 in chrome[7fb4848e0000+51d2000]
[ 6390.844552] traps: chromium[15700] general protection ip:7f63e065b76e sp:7fff367a4ba0 error:0 in chromium[7f63debd0000+5864000]
[ 7645.971546] traps: chromium[18004] general protection ip:7f63e0653e64 sp:7fff36799df0 error:0 in chromium[7f63debd0000+5864000]
[ 7927.389333] traps: chrome[18743] general protection ip:7f3acfd00150 sp:7fff42406438 error:0 in chrome[7f3ace0f0000+51d2000]
[ 8075.937762] traps: chrome[18674] general protection ip:7f3acfd00150 sp:7fff42406438 error:0 in chrome[7f3ace0f0000+51d2000]
[ 8159.218088] chromium[2966]: segfault at 180816000388 ip 00007f63e06a62c5 sp 00007fff367a47a0 error 4 in chromium[7f63debd0000+5864000]
[ 8515.463384] traps: chromium[20359] general protection ip:7f63e06bfcdf sp:7fff36798100 error:0 in chromium[7f63debd0000+5864000]
[ 8540.132912] traps: komodo[14107] trap stack segment ip:7f81c87b158a sp:7ffff8021890 error:0
[ 8745.165817] Key type dns_resolver registered
[ 8745.174187] FS-Cache: Netfs 'cifs' registered for caching
[ 8745.174286] Key type cifs.spnego registered
[ 8745.174293] Key type cifs.idmap registered
[ 8771.698855] traps: chrome[21109] general protection ip:7f3acfd038ce sp:7fff42406390 error:0 in chrome[7f3ace0f0000+51d2000]
[ 8956.317097] Chrome_IOThread[2374]: segfault at 39 ip 00007f18935a8d00 sp 00007f18775279a8 error 6 in chromium[7f1892658000+5864000]
[ 8959.347113] traps: chromium[22247] general protection ip:7f72f97c6f0a sp:7fffeb74ac90 error:0 in chromium[7f72f7db8000+5864000]
[ 8979.590397] traps: chromium[22514] general protection ip:7f72f983931e sp:7fffeb753070 error:0 in chromium[7f72f7db8000+5864000]
[ 8982.106736] chromium[22627]: segfault at 96dfb2e4e0 ip 00007f72f9842bd8 sp 00007fffeb754670 error 4 in chromium[7f72f7db8000+5864000]
[ 9031.249176] traps: chromium[22333] general protection ip:7f72f97068d0 sp:7fffeb7546f8 error:0 in chromium[7f72f7db8000+5864000]And first few second during execution of firefox:
(process:23183): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed
Segmentation fault

When using gdb firefox:
[New Thread 0x7fffc33ff700 (LWP 22838)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffda3ff700 (LWP 22692)]
0x00007ffff386e905 in ?? () from /usr/lib/firefox/libxul.so

On chromium:
[2333:2374:0731/105842:ERROR:raw_channel_posix.cc(139)] recvmsg: Connection reset by peer
[2333:2374:0731/105842:ERROR:channel.cc(297)] RawChannel fatal error (type 1)

*snip*

[2333:2374:0731/105903:ERROR:raw_channel_posix.cc(139)] recvmsg: Connection reset by peer
[2333:2374:0731/105903:ERROR:channel.cc(297)] RawChannel fatal error (type 1)
../../third_party/tcmalloc/chromium/src/free_list.h:118] Memory corruption detected.
Segmentation fault

When using google-chrome-stable:
[18495:18528:0731/105816:ERROR:raw_channel_posix.cc(139)] recvmsg: Connection reset by peer
[18495:18528:0731/105816:ERROR:channel.cc(297)] RawChannel fatal error (type 1)
--2014-07-31 10:58:16-- https://clients2.google.com/cr/report
Resolving clients2.google.com (clients2.google.com)... 111.94.248.52, 111.94.248.38, 111.94.248.18, ...
Connecting to clients2.google.com (clients2.google.com)|111.94.248.52|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/dev/fd/3’ [<=> ] 0 --.-K/s
Crash dump id: 8d43b07b2158483d

When using thunderbird:
[Exception... "'Method not implemented' when calling method:
[imIAccount::loadBuddy]" nsresult: "0x80004001 (NS_ERROR_NOT_IMPLEMENTED)" location: "JS frame :: resource://gre/components/imContacts.js :: <TOP_LEVEL> :: line 1237" data: no]

*snip*

[Exception... "'Method not implemented' when calling method: [imIAccount::loadBuddy]" nsresult: "0x80004001 (NS_ERROR_NOT_IMPLEMENTED)" location: "JS frame :: resource://gre/components/imContacts.js :: <TOP_LEVEL> :: line 1237" data: no]
Segmentation fault

And many more. Currently I'm using Manjaro Linux:
Linux archpc 3.14.13-1-MANJARO #1 SMP PREEMPT Fri Jul 18 09:02:40 UTC 2014 x86_64 GNU/Linux

Previously I was using the latest Arch Linux, and it also happened there.
I seriously don't know what to do. It has only been happening for the past 3 weeks; before that, Chrome and Chromium only crashed with "Aw, Snap!" once in a while.
Here's the list of installed programs on my computer: http://pastie.org/9433158
Could anyone help me find the cause of this problem?
| Segfault everywhere even when reinstalling the OS |
> I get the impression that if the function at the top of the list references "glib" or "gobject", you have Bad Issues(TM) with libraries that usually shouldn't go wrong.

You get the wrong impression, if you mean this indicates the flaw is probably in those libraries. It doesn't mean that; it more likely means that's where an earlier mistake finally blew up. By nature C doesn't have a lot of runtime safeguards in it, so you can easily pass arguments that will compile but aren't validated any further (unless you do it yourself). Simple example:
#include <stdio.h>
#include <string.h>

int main (void) {
    char whoops[3] = { 'a', 'b', 'c' };   /* no terminating '\0' */
    if (strcmp(whoops, "abcdef")) puts(whoops);
}

Passes an unterminated string to several different string functions. This will compile no problem, and most likely run okay because the memory violation will be very slight, but it could seg fault in strcmp() or puts(). That doesn't mean the strcmp() implementation is buggy; the mistake is clearly right there in main().
Functions like those can't logically determine if an argument passed is properly terminated (this is what I meant WRT runtime checks and C "by nature" lacking them). There's not much point in stipulating the compiler should check, because most of the time the data won't be hard coded like that.
The stuff in the middle of a backtrace doesn't necessarily play a role either, although it could. Generally the place to start looking is the last entry; that's where the problem has been traced back to.
But the bug could always be anywhere. Often comparing a backtrace to errors reported by a mem checker like valgrind can help narrow things down. WRT your examples there may be a lot to sift through though; last I checked valgrind and gtk were not happy playmates.

> I was thinking of compiling a new version of glib2 (and co.), then statically linking these programs against it.

You could, although I don't see any reason to believe anything will work any better because of it. It's grasping at straws. You can't actually debug the problem yourself, which is understandable, so you consider what you could try out of desperation.
Most likely you will just be wasting a lot of time and frustrating yourself.

> I'm 99.99% confident the issues I'm looking at are some kind of glitch-out with glib2.

I'm 99% confident you are overconfident there.
While again the bug could be anywhere, as a rule of thumb, consider the most widely tested parts the least likely culprits. In this case, glib is pretty ubiquitous, whereas irssi and NetSurf are relatively obscure.
The best thing for you to do is probably file a bug report. Backtraces are usually much appreciated there. Start with irssi and NetSurf; if you go straight to glib they will, reasonably enough, just say there's no reason for them to believe it's their problem unless you can demonstrate it (which all this doesn't). If on the other hand the irssi people determine it is in glib, they'll probably want to pursue that themselves.
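If it helps, a backtrace suitable for attaching to the report can be captured non-interactively; a sketch, assuming your own build (ideally compiled with -g so the trace has symbols):

```shell
# run the program under gdb, and on the crash dump a full backtrace to a file
gdb -batch -ex run -ex 'bt full' ./irssi 2>&1 | tee irssi-backtrace.txt
```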
|
I don't yet fully understand how segfaults and backtraces work, but I get the impression that if the function at the top of the list references "glib" or "gobject", you have Bad Issues(TM) with libraries that usually shouldn't go wrong.
Well, that's what I'm getting here, from two completely different programs.
The first is the latest build of irssi, compiled (cleanly, without any glitches or errors) directly from github.com.
Program received signal SIGSEGV, Segmentation fault.
0xb7cf77ea in g_ascii_strcasecmp () from /usr/lib/libglib-2.0.so.0
(gdb) bt
#0 0xb7cf77ea in g_ascii_strcasecmp () from /usr/lib/libglib-2.0.so.0
#1 0x08103455 in config_node_section_index ()
#2 0x081036b0 in config_node_traverse ()
#3 0x080fb674 in settings_get_bool ()
#4 0x08090bce in command_history_init ()
#5 0x08093d81 in fe_common_core_init ()
#6  0x0805a60d in main ()

The second program I'm having issues with is the NetSurf web browser (which also compiles 100% cleanly) when built against GTK (when not built to use GTK it runs fine):
Program received signal SIGSEGV, Segmentation fault.
0xb7c1bace in g_type_check_instance_cast () from /usr/lib/libgobject-2.0.so.0
(gdb) bt
#0 0xb7c1bace in g_type_check_instance_cast () from /usr/lib/libgobject-2.0.so.0
#1 0x080cd31c in nsgtk_scaffolding_set_websearch ()
#2 0x080d05da in nsgtk_new_scaffolding ()
#3 0x080dafd8 in gui_create_browser_window ()
#4 0x0809e806 in browser_window_create ()
#5 0x080c2fa9 in ?? ()
#6  0x0807c09d in main ()

I'm 99.99% confident the issues I'm looking at are some kind of glitch-out with glib2. The rest of my system works 100% fine, just these two programs are doing weird things.
I'm similarly confident that if I tried to build other programs that used these libraries, they would quite likely fail too.
Obviously, poking glib and friends - and making even one tiny little mistake - is an instant recipe to make practically every single program in the system catastrophically break horribly (and I speak from experience with another system, long ago :P).
Given I have absolutely no idea what I'm doing with this kind of thing and I know it, I am loathe to go there; I'd like to keep my current system configuration functional :)
I was thinking of compiling a new version of glib2 (and co.), then statically linking these programs against it. I just have no idea how to do this - what steps do I need to perform?
An alternative idea I had was to ./configure --prefix=/usr; make; make install exactly the same version of glib I have right now "back into" my system, to reinstall it. I see that the associated core libraries all end with "0.3200.4":
-rwxr-xr-x 1 root root 1.4M Aug 9 2012 /usr/lib/libgio-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 1.2M Aug 9 2012 /usr/lib/libglib-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 11K Aug 9 2012 /usr/lib/libgmodule-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 308K Aug 9 2012 /usr/lib/libgobject-2.0.so.0.3200.4
-rwxr-xr-x 1 root root 3.7K Aug 9 2012 /usr/lib/libgthread-2.0.so.0.3200.4

Would that possibly work, or break things horribly? :S
If it would possibly work, what version does "0.3200.4" translate to?
What other ideas can I try?
I'm not necessarily looking for fixes for glib itself that correct whatever fundamental error is going on - it isn't affecting me that badly. I just want to get irssi and NetSurf to run correctly.
| Getting segmentation faults from inside glib and gobject - I THINK I want to build/statically link against an independant version of glib2 |
Yeah, it's definitely a bug, but don't worry: LVM is smart enough to handle this stuff. I once had the power go out in the middle of a pvmove, and all I had to do was get the server turned on again, "cancel" the old pvmove, and start it over.
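In commands, that recovery amounted to something like this (the device names are examples, not from your setup):

```shell
pvmove --abort               # cancel the interrupted move; half-copied extents are simply dropped
pvmove /dev/sdb1 /dev/sdc1   # then start the move over from the still-intact source
```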
First off, it's important to know that the tools you use are just a user-space interface to kernel processes. LVM lives inside the kernel, so unless your kernel panicked, you're alright. The user-space tools like pvmove or lvchange just interface with LVM for us and then sit back and basically ask the kernel "Hey, you done with that yet? How'd it turn out?" or "Hey, how far along are we with this?" (Your specific issue, lvchange segfaulting after the operation successfully completes, sounds like a recently fixed bug, so you may want to make sure you have all your system updates.)
As a general point, you also shouldn't be so skittish or paranoid about whether you're in trouble with LVM; it's designed to handle unexpected errors like this well (even when they impact it directly and not just the tool you're using), and that guarantee is part of the point of using a volume manager over traditional partitions. You're only in trouble if something really bad happens or you do something without thinking it through. LVM operates by extents (instead of blocks), and it doesn't make the copied extent active and the original inactive until the copy operation has successfully completed. By doing so, half-copied extents stay marked as unallocated and any subsequent tools will just write over them. This is the case with my pvmove and with your lvchange.
EDIT:
Looking at this mailing list announcement, we can get a more detailed description of how your merge actually works "under the hood":

While the merging is active, any accesses to the origin device are
[directed to] the snapshot that is being merged. When the merging finishes,
the origin target is seamlessly reloaded and the merging snapshot is
dropped. The [non-snapshot] filesystem can stay mounted during this time.

Figured it might be interesting to know.
|
I was just experimenting with snapshots in LVM on Ubuntu 12.10. I created a snapshot logical volume of 6.5 GiB, and after making some changes to the origin decided to merge the snapshot back in to undo them. All seemed to be going well, but I noticed several LVM-related segfault messages in syslog.
Commands entered:
sudo lvcreate -L6.5G -n backup_snapshot -s /dev/mapper/vg0-backup
# made some miscellaneous writes
sudo lvconvert --merge /dev/vg0/backup_snapshot
sudo umount /snapshot/backup
sudo umount /backup
sudo lvchange -an /dev/vg0/backup
sudo lvchange -ay /dev/vg0/backup
sudo mount /backup

From syslog:
Apr 12 04:57:10 bournemouth kernel: [ 5260.813253] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Apr 12 05:00:11 bournemouth kernel: [ 5441.841401] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Apr 12 05:02:00 bournemouth kernel: [ 5551.438487] show_signal_msg: 48 callbacks suppressed
Apr 12 05:02:00 bournemouth kernel: [ 5551.438495] lvm[5813]: segfault at 28 ip 000000000047f319 sp 00007fff60873de0 error 4 in lvm[400000+d9000]
Apr 12 05:02:01 bournemouth kernel: [ 5552.458797] lvchange[6449]: segfault at 28 ip 000000000047f319 sp 00007fff935f4380 error 4 in lvm[400000+d9000]

I then unmounted the origin LV, made sure the snapshot no longer existed, and ran fsck.ext4 -f on it; it checked out OK that way. But I'm still worried about the segfaults. Is it possible my data got messed up in some way that fsck wouldn't catch? The volume I was experimenting with was a backup one, and all the filesystems I have backed up on it are still in working order, so I could just start over and back them up again. But on the other hand, I'd like to keep my incremental backup history. I'd just like reassurance that I can trust these backups.
| Should I be worried? Segfaults reported in syslog when merging LVM snapshot (reverting the original back to the snapshot) |
OP and I worked through this; see comments & chat for details. First, to find the problem process and location, this line in /etc/init/mountall-shell.conf

/sbin/sulogin

was changed to
/usr/bin/ltrace -S -f -o /root/sulogin-ltrace.log /bin/suloginExcerpt from log:
837 crypt("password", "x") = nil
837 strcmp(nil, "x" <no return ...>
837 --- SIGSEGV (Segmentation fault)

The log indicates that the segfault occurs in the following code in sulogin, where crypt is returning NULL.
if ((p = getpasswd(pwd->pw_passwd)) == NULL) break;
if (pwd->pw_passwd[0] == 0 ||
strcmp(crypt(p, pwd->pw_passwd), pwd->pw_passwd) == 0)
sushell(pwd);

Next question is, what's causing crypt to return NULL?
OP confirmed that the encrypted password really was x; the shadow entry for root was root:x:16273:0:99999:7:::. In a stock Ubuntu 14.04, root's encrypted password is !; OP had changed it to x a while ago, and this is the first time since then that he's had to use single-user mode.
sulogin has its own interpretation of special encrypted passwords. If it sees * or !, it lets the user in with no password. Anything else, it does some validity checking, but x sails right through, yet crypt doesn't like it (salt not long enough?) and returns NULL.
OP is going to file a bug report for sysvinit-utils; sulogin ought to handle a NULL return from crypt more gracefully.
|
I'm running Ubuntu 14.04 Trusty Tahr in a Microsoft Hyper-V virtual machine on Windows Server 2012 R2. I've stopped the VM, replaced an EXT4-formatted virtual disk volume (/dev/sdb) with a new (unformatted) disk volume and restarted the VM. I see the following messages:
Filesystem check or mount failed.
A maintenance shell will now be started.
CONTROL-D will terminate this shell and continue booting after re-trying
filesystems. Any further errors will be ignored
Give root password for maintenance
(or type Control-D to continue):
Upon typing the root password, I see:
Segmentation Fault
I would like to determine which process has caused the segmentation fault and why. I am reproducing an issue that was brought to my attention and would like to provide an explanation for the segmentation fault. Is it a bug in Ubuntu 14.04? If so, is there a work-around? If there is a work-around, I would like to see it documented here.
| What Causes Maintenance Shell Segmentation Fault? |
Note: I've written the individual fstab lines with 2 spaces between each and then spaced it out as best I could in the completed fstab at the bottom. Feel free to alter the mountpoint to match whatever location you're actually using
Verification

Install: sudo pacman -S ntfs-3g fuse
Test Mount: sudo mkdir -p /mnt/ntfs && sudo mount -t ntfs /dev/sdX /mnt/ntfs - Use parted to determine what X is for the test only. We'll add the UUID shortly.
Test that the mount in Step 2 mounted properly: mount | grep ntfs

Assemble fstab

Take note of the UUID: sudo blkid /dev/sdX - See Step 2 Above.
Add to fstab: UUID=XXXX-XXXX /mnt/ntfs ntfs user,ro,umask=0222,defaults 0 0 - This "master mount" is handled by the kernel's filesystem handlers.

This should mount the entire drive read-only without needing root, and match the Linux access mask. We can now add/mount directories read/write like so in fstab (taking advantage of the FUSE filesystem): /mnt/ntfs/path/to/directory /some/new/path ntfs-3g rbind,user,umask=0222,defaults 0 0 - This "sub mount" is handled by the userspace tools.
Note 2: rbind is used here to keep the changes preserved recursively, because we are mounting part of the entire read-only filesystem as another mount that is read/write. Both mounts must be kept in sync to ensure changes are saved to disk correctly. See: Understanding Bind Mounts, which includes an explanation of Recursive Bindings.
The Completed fstab

# Device #Mount Point #FS #Mount Options #Fsck
UUID=XXXX-XXXX /mnt/ntfs ntfs user,ro,umask=0222,defaults 0 0
/mnt/ntfs/specific/dir /some/new/path ntfs-3g rbind,user,umask=0222,defaults 0 0 |
When I try to run any executable from my second (NTFS) drive, I get a segmentation fault. If I run the exact same executable from, for example, my home folder, it works just fine.
For example:
I compile the following C program using gcc a.c:
#include <stdio.h>

int main() {
puts("Hello");
return 0;
}

Now I run ./a.out from my second drive:
$ ./a.out
zsh: segmentation fault ./a.out

(Also no core dump is generated, even though they are enabled and work for other things.)
If I copy the exact same file, without any modifications, to e.g. /home/username/ (which is on my main/OS drive):
$ ./a.out
Hello

Everything works perfectly fine there.
On the second drive however, GDB just fails during startup:
(gdb) starti
Starting program: /path/to/a.out
During startup program terminated with signal SIGSEGV, Segmentation fault.

When I use strace, it says execve failed:
$ strace ./a.out
execve("./a.out", ["./a.out"], 0x7ffd0aa31070 /* 83 vars */) = -1 EOPNOTSUPP (Operation not supported)
+++ killed by SIGSEGV +++
zsh: segmentation fault (core dumped)  strace ./a.out

Also ldd just says not a dynamic executable on the second drive. readelf -d and objdump -p work just fine.
My drive and one of its subfolders is mounted like this in /etc/fstab:
UUID=drive-uuid-123 /path/to/drive ntfs3 defaults 0 2
/path/to/drive/some/path /my/new/path none defaults,bind 0 2

The same issue occurs when I run the binary from yet another different NTFS drive.
System information:
$ uname -a
Linux thomas-manjaro 6.6.25-1-MANJARO #1 SMP PREEMPT_DYNAMIC Thu Apr 4 20:32:38 UTC 2024 x86_64 GNU/Linux

This is a pretty fresh install of Manjaro and all packages are up-to-date.
Does anyone know what the problem could be? Do I need to mount my drive in a different way? Do I need to set some kind of system variable?
| Segmentation fault when running binaries from second drive |
The problem was actually an outdated BIOS; upgrading it solved almost everything. I also noticed that there was something wrong with libxul.so; everything was fine after I deleted and reinstalled it.
|
I have a Dell laptop running Debian stable with the GNOME environment. For several weeks Firefox has crashed more and more often: the mouse slows down for a few seconds, then everything freezes and the laptop heats up a lot, forcing me to hard reboot. I noticed that websites such as WhatsApp Web or YouTube were specifically involved in the "crash situations".
For a week now, things have gotten much worse: any request on Google, with one window and one tab, crashes. I noticed that setting javascript.enabled=false in the settings of Firefox prevents the crashes.
Also, signal-desktop segfaults when launched.
I have issues finding enlightening logs, but can try.
I also tried to uninstall and reinstall all the java packages but didn't have any effect.
I ran journalctl -f, then waited for firefox to crash and I have uploaded the resulting log here: https://pastebin.com/4hFgtyAj.
And, probably more readable, the logs corresponding to the segfault of signal-desktop: https://pastebin.com/9b4zd2Cm.
I am also running memtest and I already have more than 800 errors.
| Seems that any JavaScript makes my debian laptop crash |
I reverted back to NetworkManager 1.4.4-r1 and the problem appears to go away. I will file a bug report with NetworkManager.
|
I am trying to debug an issue that happens all the time. NetworkManager is running and upon connecting to a network, I am using a dispatcher script to setup my firewall rules (shorewall, and it is set to run asynchronously). As soon as shorewall sets up the rules, NetworkManager crashes:
NetworkManager segfault at 8 ip 00007fa89e102e16 sp 00007fff51f34be8 error 4 in libc-2.23.so[7fa89e084000+18e000]

I don't understand why shorewall / iptables would have a direct impact here. I tried cutting back my ruleset from shorewall and it still crashes. Lastly, I disabled shorewall altogether and went with a very plain iptables script, and that worked without issue. Shorewall is also configuring QOS, so there are other things going on, but I still am having a tough time believing that there could be a direct link.
Also, if I revert back to an older version of NetworkManager 1.4.4-r1, I don't have any issues. Lastly, I also manually reverted back to the prior working version of shorewall while keeping NetworkManager at 1.10.2 and that had no impact. So, I don't believe it is an issue with shorewall, but instead something NetworkManager is doing differently.
I tried to use strace here, but I'm not making heads or tails out of the output.
What else can I do to sort this out?
| NetworkManager 1.10.2 segfaults when shorewall starts |
You could try
sudo dpkg-reconfigure transmission-gtkAnd, if that doesn't help, completely remove it and reinstall:
sudo apt-get purge transmission-gtk && sudo apt-get install transmission-gtk

This could also be caused by something in your user's settings. Try renaming transmission's configuration directory:
mv ~/.config/transmission/ ~/.config/transmission.old |
When I run transmission-gtk on my Linux Mint machine, the window is shown as usual but then it suddenly crashes. I tried running transmission-gtk in a terminal and the output was Segmentation fault. I didn't update or upgrade the system before this happened, but I did run sudo apt-get --purge autoremove once.
How can I fix this?
| segmentation fault : transmission-gtk & transmission-qt in linux mint |
I am using a Haswell CPU and thus had to install the updated microcode, by installing the intel-ucode package.
|
A lot of the programs I use on my machine exit with segfaults. Nearly all programs function normally until being closed, at which point they segfault. So far the only two programs that have not worked because of this issue are VLC and Cinnamon, while many other programs like firefox and chromium are affected, but only segfault on what would have been a normal exit.
This does not seem to be a problem with my RAM. I removed all RAM from my machine and replaced it with one known-good stick, but the problem persisted.
On running the affected programs with gdb, all seem to return the same trace.
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff79af080 in __lll_unlock_elision () from /usr/lib/libpthread.so.0

So I am led to believe that the root of the problem is in libpthread. I am currently running glibc version 2.21. Please ask anything more you wish to know, and help if you can. Thank you.
| Antergos libpthread causes segmentation faults |
When a program aborts abnormally the exit code (as seen by the shell) typically has the high bit set so the value is 128 or higher. So a simple solution might be
dodgy_command

while [ $? -ge 128 ]
do
process_data
dodgy_command
done

If you specifically only want a segfault and not any other type of error, the while line becomes $? -eq 139 (because SEGV is signal 11; 128+11=139).
If you don't get a high-valued exit code on failure then it probably means the application is trapping the error itself and forcing a different exit code.
|
I have a program that throws a segmentation fault on certain circumstances. I want to execute a command when the segmentation fault occurs to process the data, then execute the command again, and keep doing so until the segmentation fault stops.
As a rough attempt at pseudo code,
dodgy_command

while SegFault
dataProcessing
dodgy_command
end

I think I need to be using a Trap command, but I don't understand the syntax for this command.
| Construct a while loop around a command throwing a segmentation fault |
At first it looks like a socket issue, but socket.gethostbyname(host_name) works.
After some digging, I found that the issue is isolated to mysql-connector-python.
If you change that to mysql-connector-python-rf in your tox.ini, the error is gone. That means that somewhere in mysql-connector-python it is unable to resolve DNS names to IP addresses.
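In other words, the relevant part of the tox.ini from the question would become (only the package name changes):

```ini
[testenv]
deps =
    mysql-connector-python-rf
commands = python3.6 setup.py test
```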
|
Segfault but why? It only happens with debian:stretch+mysql.connector+tox and Python3.x. Reproducible from just a few lines:
FROM debian:stretch
RUN apt update -y && apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev gcc wget tox vim python-pip python3-pip
RUN wget https://www.python.org/ftp/python/3.6.9/Python-3.6.9.tgz
RUN tar xvf Python-3.6.9.tgz && cd Python-3.6.9 &&./configure --enable-optimizations --enable-shared --with-ssl --with-ensurepip=install && make -j8 && make altinstall
RUN mkdir /tox-test && echo "[tox]" >> /tox-test/tox.ini && echo "envlist = py36" >> /tox-test/tox.ini && echo "[testenv]" >> /tox-test/tox.ini && echo "deps=" >> /tox-test/tox.ini
RUN echo " mysql-connector-python" >> /tox-test/tox.ini && echo "commands=python3.6 setup.py test" >> /tox-test/tox.ini
RUN mkdir /tox-test/tests && touch /tox-test/tests/__init__.py && echo "import faulthandler\nfaulthandler.enable()\nimport mysql.connector as mysql" >> /tox-test/tests/test_segfault.py
RUN echo "mysql.connect(host='localhost', user='joe', password='bloggs')" >> /tox-test/tests/test_segfault.py
RUN mkdir /tox-test/foo && echo "print('foo')" >> /tox-test/foo/foo.py
RUN echo "from setuptools import setup" >> /tox-test/setup.py && echo "setup( name='foo',version='1.0',description='A module',author='Niklas R.',author_email='[emailprotected]',packages=['foo'],test_suite='tests',)" >> /tox-test/setup.py
RUN cd /tox-test && export LD_LIBRARY_PATH=/Python-3.6.9 && toxThe above generates a segfault. With Ubuntu and Debian Jessie it works or can be worked around. I could not understand why it happens with Stretch and I could not fix it. The segfault seems to be related to networking because if I write "127.0.0.1" instead of "localhost" then it doesn't crash. Please help me understand. My speculation is that the imports are shadowing or using some own versions of ssl or similar. It is quite far-fetched. It is reproducible on Ubuntu as well and even weirder story is that on Ubuntu the segfault is fixed if I add to the python from werkzeug.exceptions import BadRequestKeyError without even using the import, only importing it without using it. If I replace the mysql-connector-python with PyMySQL then it works. So it must be something from the mysql-connector-python. There are similar bug reports about that connector: https://bugs.mysql.com/bug.php?id=97220
If I change one string "localhost" and write instead "127.0.0.1" then there is no segfault anymore:
FROM debian:stretch
RUN apt update -y && apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev gcc wget tox vim python-pip python3-pip
RUN wget https://www.python.org/ftp/python/3.6.9/Python-3.6.9.tgz
RUN tar xvf Python-3.6.9.tgz && cd Python-3.6.9 &&./configure --enable-optimizations --enable-shared --with-ssl --with-ensurepip=install && make -j8 && make altinstall
RUN mkdir /tox-test && echo "[tox]" >> /tox-test/tox.ini && echo "envlist = py36" >> /tox-test/tox.ini && echo "[testenv]" >> /tox-test/tox.ini && echo "deps=" >> /tox-test/tox.ini
RUN echo " mysql-connector-python" >> /tox-test/tox.ini && echo "commands=python3.6 setup.py test" >> /tox-test/tox.ini
RUN mkdir /tox-test/tests && touch /tox-test/tests/__init__.py && echo "import faulthandler\nfaulthandler.enable()\nimport mysql.connector as mysql" >> /tox-test/tests/test_segfault.py
RUN echo "mysql.connect(host='127.0.0.1', user='joe', password='bloggs')" >> /tox-test/tests/test_segfault.py
RUN mkdir /tox-test/foo && echo "print('foo')" >> /tox-test/foo/foo.py
RUN echo "from setuptools import setup" >> /tox-test/setup.py && echo "setup( name='foo',version='1.0',description='A module',author='Niklas R.',author_email='[emailprotected]',packages=['foo'],test_suite='tests',)" >> /tox-test/setup.py
RUN cd /tox-test && export LD_LIBRARY_PATH=/Python-3.6.9 && tox | Segfault with Debian Stretch |
This is because the readline library will actively modify that string using strtok, and the string you passed is a constant. Trying to write to it will result in a segmentation fault.
Try:
char *copy = strdup("\"C-b\":history-search-backward");
rl_parse_and_bind(copy);
// free(copy); copy = NULL; // This to tidy up things

The copy, being writable, will work.
|
I am using Ubuntu 18.04.5. This very simple program segfaults on the invocation of rl_parse_and_bind. Can anyone help me?
// Build with cc read.c -o read -lreadline

#include <readline/readline.h> // apt install libreadline-dev

int main() {
rl_parse_and_bind("\"C-b\":history-search-backward");
char *input = readline("Input: ");
}

| readline's rl_parse_and_bind causes crash |
At the beginning you are recursively sourcing ~/.bashrc; you probably wanted to include /etc/bashrc instead. As a result, bash terminates with a stack overflow during parsing.
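Assuming the intent really was the system-wide file, the first block should read something like this (note that on Debian the file is actually named /etc/bash.bashrc):

```sh
# Source the system-wide file, NOT ~/.bashrc itself: a file sourcing
# itself recurses until bash overflows its stack and segfaults before
# the rest of the file is ever parsed.
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
```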
|
After editing .bashrc I had "Segmentation fault" as an error.
This is what I added to bashrc:
if [ -f ~/.bashrc ];
then
source ~/.bashrc
fi

# If not running interactively, don't do anything
[ -z "$PS1" ] && return

# some more ls aliases
alias ll='ls -l'

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi

PS1="\[\e[1;37m\](\#) \[\e[1;33m\]\D{%H:%M:%S} \[\e[0;32m\]\[\e[1;31m\]\u\[\e[1;36m\]@\h:\[\e[1;32m\]\w\[\e[1;35m\]#\[\e[m\] "

export PATH=$PATH:/usr/local/sbin:/usr/sbin

Then I executed bash to reload the profile:
~$ bash
Segmentation faultFor information, I am using :
~$ cat /etc/debian_version
7.7

Another thing, my bashrc is not loaded. Example:
~$ ll /etc/
-bash: ll: command not found

| Debian Segmentation Fault |
nouveau is known for being quite unstable and crash-prone, so I'd recommend installing NVIDIA proprietary drivers instead.
The error you're getting indicates exactly that.
Alternatively try installing a fresh kernel, 4.19 is quite dated and may not contain all the fixes the nouveau driver has seen.
|
Since I use Microsoft Teams on my Debian Buster machine, the GUI sometimes freezes: the mouse pointer can still be moved on the screen, but there is no visible feedback on clicks or key presses, and no switching to a console with Ctrl+Alt+F1.
I could not help myself other than sshing to the machine to restart Xorg.
The dmesg output shows me fingerprints of Teams, but I guess the deeper problem must be in the nouveau GPU driver?
[ 4918.083079] show_signal_msg: 7 callbacks suppressed
[ 4918.083082] GpuWatchdog[2056]: segfault at 0 ip 000055dcd609b006 sp 00007f5a8f043490 error 6 in teams[55dcd2705000+5fbe000]
[ 4918.083087] Code: 89 de e8 4d 0e 71 ff 80 7d cf 00 79 09 48 8b 7d b8 e8 1e 45 ce fe 41 8b 84 24 e0 00 00 00 89 45 b8 48 8d 7d b8 e8 ea f0 66 fc <c7> 04 25 00 00 00 00 37 13 00 00 48 83 c4 38 5b 41 5c 41 5d 41 5e
[ 5006.078739] traps: Watchdog[2423] trap invalid opcode ip:555c23f287de sp:7f77fb7fd6f0 error:0 in teams[555c23dba000+5fbe000]For now I deactivated GPU acceleration in MS teams, but would you recommend switching to NVIDIA driver instead of nouveau in this case?
| MS teams makes the whole GUI stall: GpuWatchdog segfault |
Currently, operations on a filesystem are un-interruptible - except for network filesystems.
See TASK_KILLABLE [LWN.net, 2008].
For traditional block-based filesystems, you might predict your guarantee will be met. I don't believe TASK_KILLABLE has been adopted widely outside of network filesystems. However I would not want to assume this will always be the case, without a good reason.
If there is a possibility the application could be run on a network filesystem, it is hard to say there are strong guarantees. (And in general, e.g note NFS3 does not follow all the expectations for a POSIX filesystem).
Storage technology is still evolving. E.g. if you assumed that filesystems will work a certain way based on looking at the architecture of the Linux block layer, you might be surprised in future when your application is run on a filesystem based on byte-addressable memory.
|
To the best of my knowledge, when a process writes to a file it starts a system call. Among the required information, it expects a pointer to a buffer in the user space, filled with the data to write.
Consider a scenario where there is a process that spawns two threads. One thread executes a system call to write 10MB. The other thread performs invalid memory access that triggers a segmentation fault, while the Operating System is serving the IO request.
What happens to the write request in this scenario? In particular, do I have the guarantee that either the write operation does not happen at all or that it completes before the process memory is deallocated? Do the answers change if the IO request is just a 64-bit integer?
| An IO write operation can outlive a process? |
You're confusing address spaces here; that number is a virtual address in the process address space of the udisks process. What you reserved were physical address ranges.
A segfault happens when a process tries to access a virtual memory address that is not mapped to any physical page, or that it's not allowed to access.
Physical and virtual addresses have nothing to do with each other; keeping a table that maps virtual addresses to physical addresses is exactly why your processor has a memory management unit. So the problem here is software accessing the wrong memory address – a bug.
Of course, that bug might not be a software bug, but caused by damaged RAM that you didn't reserve; nobody can know that! There's no guarantee that yesterday night's memtest still is relevant today, especially if there's problems on more than one physical address range. Honestly, what you're doing is quite hazardous – you know you have memory that might randomly corrupt data, you hope for the best that you caught all the offending memory and blocked it from usage. If the things you do with your computer matter, I wouldn't do that. Since you say you're planning to replace that memory, remove the whole RAM module now, and resume working if possible, or hurry getting replacement RAM.
|
I've got a device with bad RAM. Running memtest overnight shows all faulting addresses to be in the 0x7d0000000 - 0x7f0000000 range. I plan to replace the RAM, but until then, I've disabled a 2GB chunk around it with memmap=:
# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.5.0-25-generic root=UUID=5277c53f-b2cd-4301-8fdf-0b2119430870 ro memmap=2G$0x0000000790000000 quiet splash vt.handoff=7Those cmdline options do seem to be acknowledged by the kernel:
[ 0.000000] user-defined physical RAM map:
[ 0.000000] user: [mem 0x0000000000000000-0x000000000009efff] usable
[ 0.000000] user: [mem 0x000000000009f000-0x00000000000fffff] reserved
[ 0.000000] user: [mem 0x0000000000100000-0x0000000019e6a017] usable
[ 0.000000] user: [mem 0x0000000019e6a018-0x0000000019e7ae57] usable
[ 0.000000] user: [mem 0x0000000019e7ae58-0x000000002cb82fff] usable
[ 0.000000] user: [mem 0x000000002cb83000-0x000000002ed2ffff] reserved
[ 0.000000] user: [mem 0x000000002ed30000-0x000000002edacfff] ACPI data
[ 0.000000] user: [mem 0x000000002edad000-0x000000002f29bfff] ACPI NVS
[ 0.000000] user: [mem 0x000000002f29c000-0x000000002fd0efff] reserved
[ 0.000000] user: [mem 0x000000002fd0f000-0x000000002fd0ffff] usable
[ 0.000000] user: [mem 0x000000002fd10000-0x000000003cffffff] reserved
[ 0.000000] user: [mem 0x00000000e0000000-0x00000000efffffff] reserved
[ 0.000000] user: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
[ 0.000000] user: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] user: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[ 0.000000] user: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] user: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[ 0.000000] user: [mem 0x0000000100000000-0x000000078fffffff] usable
[ 0.000000] user: [mem 0x0000000790000000-0x000000080fffffff] reserved
[ 0.000000] user: [mem 0x0000000810000000-0x00000008beffffff] usableHowever, I still get segfaults, ostensibly in the reserved address range:
Mar 09 20:47:40 srv0 kernel: udisksd[656]: segfault at 7fe974786218 ip 00007fe974786218 sp 00007ffcd10d1848 error 7 in libbd_swap.so.3.0.0[7fe974785000+2000] likely on CPU 7 (core 3, socket 0)According to this page, I should interpret that as udiskd trying to write to the reserved address 0x7fe974786218 (error 7). At first glance, the 0x7f address seems to match up with what memtest found to be bad RAM, but is off by orders of magnitude, since it points to a value of 140 TB. My machine has 32 GB.
What, if not a memory address, does the segfault at X value represent?
| What does the "segfault at X" kernel log message mean if X is very large? |
I'm not sure of the precise cause, but the root of my problems ended up being that I hadn't booted with the resume and resume_offset kernel parameters. I had thought that these were only required on the resuming boot, not the boot that hibernates, but that seems not to be the case.
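For reference, the parameters look like this on the kernel command line (placeholder values: the UUID is that of the filesystem holding the swap file, and the offset for a swap file is its first physical block as reported by filefrag -v /swapfile):

```
resume=UUID=XXXX-XXXX resume_offset=NNNN
```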
|
After resuming from a hybrid-sleep, I can log in (swaylock) and initially it seems ok - pwd, journalctl -xe run as expected in the shell still open from when I put it to sleep.
After a short while though, tens of seconds, when I'd exited journalctl (I just wanted to confirm it had actually been asleep) CPU load increases, I hear the fan spin up, and anything I try to run in the same shell (pwd again, say) results in a SIGSEGV - address boundary error.
Consequently, I can't even issue a shutdown command, so I have to force it off with the power button. Once rebooted, journalctl --boot=-1 has no entries from after it went to sleep, like it never woke. I assume when I saw them they were stored only in RAM, and when I shut down it was unable to write them to disk with the same segfault.
The behaviour is quite erratic - after drafting the above I tested again and was able to 'log in' (bypass swaylock) by entering a single key, not my full password, but any command I tried to run in the open (resumed) shells crashed the terminal emulator, and as before I couldn't re-open any more (the command run by my keybinding for that presumably segfaulted too).
Any ideas what the cause could be? Or even how I can debug this without access to the logs when the system is stable?Some possibly relevant info, I'll edit in more if anyone can suggest what might be relevant/suspect:
# /etc/systemd/system/swapfile.swap
[Unit]
Description=providing a swapfile

[Swap]
What=/swapfile
Priority=20

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/swapfile-creation.service
[Unit]
Description=creating a swap file at /swapfile
ConditionPathExists=!/swapfile
Before=swapfile.swap

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'dd if=/dev/zero of=/swapfile bs=1M count="$(expr "$(cat /sys/power/image_size)" / 1024 / 1024)" status=progress'
ExecStart=/usr/bin/chmod 600 /swapfile
ExecStart=/usr/bin/mkswap /swapfile

[Install]
RequiredBy=swapfile.swap

I was going to include the systemd-boot entry (or the script I used to create it) but actually realised I haven't tested from power loss - this is occurring when resuming from RAM. I will double check that the (unused) suspend to disk isn't somehow the culprit, i.e. whether it also fails when resuming from a plain systemctl suspend.
| SIGSEGV address boundary error shortly after waking from sleep |
I've solved this issue and managed to map PCI memory to userspace via the driver.
I changed the pfn input of the remap_pfn_range function I was using in my custom .mmap.
The original was:

io_remap_pfn_range(vma, vma->vm_start, pfn, vma->vm_end - vma->vm_start, vma->vm_page_prot)

where the pfn was derived from the buffer pointer returned by ioremap().
I changed the pfn to:
pfn = pci_resource_start(pdev, BAR) >> PAGE_SHIFT

That basically points to the actual starting address pointed to by the BAR.
My working remap_pfn_range function is now:
io_remap_pfn_range(vma, vma->vm_start, pci_resource_start(pdev, BAR) >> PAGE_SHIFT, vma->vm_end - vma->vm_start, vma->vm_page_prot)

I confirmed that it works by doing some dummy writes to the buffer pointer in my driver, then picking up the reads and doing some writes in my userspace app.
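Put together, a sketch of the working .mmap handler looks roughly like this (an illustration only, assuming pdev and BAR are reachable from the driver's private data, with error handling elided):

```c
/* Sketch of the fixed file_operations .mmap: map the physical pages
 * behind the PCI BAR itself, not the kernel virtual address that
 * ioremap() returned (that address is not a valid pfn source). */
static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
        unsigned long pfn = pci_resource_start(pdev, BAR) >> PAGE_SHIFT;

        /* BAR memory is device memory: disable caching for the mapping */
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        return io_remap_pfn_range(vma, vma->vm_start, pfn,
                                  vma->vm_end - vma->vm_start,
                                  vma->vm_page_prot);
}
```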
|
I am trying to implement a PCI device driver for a virtual PCI device on QEMU. The device defines a BAR region as RAM, and the driver can ioremap() this region and access it without any issues. The next step is to assign this region (or a fraction of it) to a user application. To do this, I have also implemented an .mmap function as part of my driver file operations. This mmap simply uses remap_pfn_range, passing the pfn of the memory pointer returned by the earlier ioremap().
However, upon running the user space application, the mmap is successful, but when the app tries to access the memory, it is killed and I get the following dmesg errors.
[ 1502.402970] a.out: Corrupted page table at address 7f911b79f000
[ 1502.404085] PGD 13926d067 P4D 13926d067 PUD 1317aa067 PMD 1326d9067 PTE 800026d901000227
[ 1502.404085] Bad pagetable: 000f [#1] SMP NOPTI
[ 1502.404085] Modules linked in: edu_driver(OE) ppdev kvm_amd kvm irqbypass input_leds parport_pc serio_raw parport mac_hid qemu_fw_cfg sch_fq_codel ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear psmouse pata_acpi floppy e1000 i2c_piix4
[ 1502.404085] CPU: 0 PID: 1988 Comm: a.out Tainted: G OE 4.15.0-55-generic #60-Ubuntu
[ 1502.404085] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1502.404085] RIP: 0033:0x55d687642811
[ 1502.404085] RSP: 002b:00007ffe16c38da0 EFLAGS: 00000213
[ 1502.404085] RAX: 00007f911b79f000 RBX: 0000000000000000 RCX: 00007f911b2a1813
[ 1502.404085] RDX: 0000000000000003 RSI: 0000000000001000 RDI: 0000000000000000
[ 1502.404085] RBP: 00007ffe16c38dc0 R08: 0000000000000003 R09: 0000000000000000
[ 1502.404085] R10: 0000000000008001 R11: 0000000000000246 R12: 000055d687642660
[ 1502.404085] R13: 00007ffe16c38ea0 R14: 0000000000000000 R15: 0000000000000000
[ 1502.404085] FS: 00007f911b7984c0(0000) GS:ffff97237fc00000(0000) knlGS:0000000000000000
[ 1502.404085] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1502.404085] CR2: 00007f911b79f000 CR3: 0000000132cd8000 CR4: 00000000000006f0
[ 1502.404085] RIP: 0x55d687642811 RSP: 00007ffe16c38da0
[ 1502.404085] ---[ end trace 6b088b58eb816baf ]---

Does anyone know what I have done wrong? Did I miss a step? Or could it be an error specific to QEMU? I am running x86_softmmu as my QEMU configuration and my kernel is 4.14.
| How can my PCI device driver remap PCI memory to userspace? |
Update:
The solution is odd: you need to go into Ubuntu Software, select Chromium, then Permissions. All permissions will be on by default. Oddly enough, if you toggle the file access permissions off and then back on, it will magically work. If you get issues with your Chromium after this, toggle them off and on again, and it will work.
Don't know why, but it just does... This seems to be widespread enough; there are complaints about a new security "feature" causing this behaviour.
When selecting an image for upload, I am able to see the preview in the browser. Once the actual upload process begins, Chromium segfaults.
Seems like it's some sort of odd permission issue? The file I'm trying to upload, is an image I had previously downloaded from another page using "Save As...". The picture is stored in /home/<username>/Pictures. If I attempt to upload the image right away: segfault. However, if I open the image with Gimp, and simply select "Overwrite *.jpg" and exit. Then when I go back to the browser the image uploads just fine. Somehow overwriting the image with Gimp makes it so that it can be later uploaded.
When I run ls -la on the folder before and after that, the files are always owned by the same user have the same file access permissions, so I am unsure what is going on.
Running the snap installed version: 86.0.4240.183 (Official Build) snap (64-bit) | Why does Chromium segfault when uploading images? |
I believe it was a permissions issue on a hardened box. I was extracting some configuration files via tar and overwrote the permissions on some directories hence creating the problem.
|
I am trying to compile dev-lang/yasm inside a funtoo chroot and consistently get segfaults. Any ideas what I should try:
[Fri Jan 1 14:54:33 2016] re2c[14786]: segfault at 0 ip 00007f5ad4d4dd5b sp 00007ffe0c08b8c0 error 4 in libc-2.20.so[7f5ad4ce4000+193000]

I am compiling on Fedora 23 as the libraries there seem to be a little more stable than the unstable versions on my other funtoo installation. I checked my mount points and permissions and everything is congruent with the Gentoo and Funtoo installation documents.
Thanks,
Walter
| libc segfaults while compiling dev-lang/yasm |
It turned out I had interrupted an upgrade process earlier. I manually reinstalled the network-manager package.
|
This problem occurs on Debian Jessie x86 with systemd. It leads to an incomplete boot sequence at runlevel 2 because network-manager won't start, leaving the whole system unusable.
NetworkManager[785]: segfault at e7394845 ip b74ab7a1 sp b7548810 error 7 in libgnutls-deb0.so.28.41.0[b746f000+13a000] | segfault in libgnutls - Debian won't complete boot |
This is a combination of two things:

You have not told the program where your X server is.
M. Unangst's program does no error checking and handling at all.

The program needs to inherit a DISPLAY environment variable, specified in your crontab or in a wrapper script, to tell it where the X server display that you want to adjust is. The segmentation fault that you are seeing is its failure mode if it is not told that.
You might like to report this as a bug.
You happen to have a DISPLAY variable in the environment of the shell that you are using, probably because you are using a GUI terminal emulator. If you had logged on in a non-GUI environment, such as a kernel/user virtual terminal, a real terminal, or an SSH session without X11 forwarding, you would have seen this same behaviour when you invoked the program interactively, too.

% DISPLAY= sct
zsh: segmentation fault DISPLAY= sct
%
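A simple fix along these lines is to hand the cron job a DISPLAY value, either in the crontab entry or inside the wrapper script (the :0 display here is an assumption; check what echo $DISPLAY prints in your GUI session):

```
# crontab entry, with DISPLAY supplied inline:
* * * * * DISPLAY=:0 /home/user/folder/colrefr

# ...or exported at the top of the colrefr wrapper script instead:
export DISPLAY=:0
```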
Further reading

https://sources.debian.org/src/setcolortemperature/1.3-1/sct.c/#L50
https://unix.stackexchange.com/a/355177/5132
https://unix.stackexchange.com/a/19238/5132
https://unix.stackexchange.com/a/154453/5132
https://unix.stackexchange.com/a/215151/5132
https://bugs.debian.org/cgi-bin/pkgreport.cgi?archive=both;src=setcolortemperature |
Debian stretch user here. I wanted a screen flash every ten minutes. After trying a couple of alternatives (including xrefresh), I decided to use sct. It works in the shell but does not work with cron.
This works:
sct 2000
The script: (named colrefr)
#!/bin/bash
PATH=/usr/bin
sct 2000; sleep 3; sct

Cron: (pgrep cron shows cron is running)
* * * * * /home/user/folder/colrefr

(once every minute until debugging is successful)
I have mitigated the usual gotchas - newline after last command, setting PATH in the script, no dots in file name, etcetera.
$ which sct
/usr/bin/sct

$ which sleep
/bin/sleep
$ sudo tail -f /var/log/syslog
Oct 16 16:00:01 user CRON[29060]: (user) CMD (/home/user/folder/colrefr )
Oct 16 16:00:01 user kernel: [229206.201351] sct[29062]: segfault at e0 ip 000055dca79aa8cd sp 00007ffd9dfc6220 error 4 in sct[55dca79aa000+2000]
Oct 16 16:00:01 user kernel: [229206.201366] Code: 17 20 00 66 90 ff 25 4a 17 20 00 66 90 41 57 41 56 41 55 41 54 55 53 89 fb 31 ff 48 89 f5 48 83 ec 38 e8 ae ff ff ff 49 89 c4 <48> 63 80 e0 00 00 00 4c 89 e7 48 c1 e0 07 49 03 84 24 e8 00 00 00
Oct 16 16:00:01 user kernel: [229206.209280] sct[29064]: segfault at e0 ip 000055dcdd3268cd sp 00007ffdf60c9e40 error 4 in sct[55dcdd326000+2000]
Oct 16 16:00:01 user kernel: [229206.209295] Code: 17 20 00 66 90 ff 25 4a 17 20 00 66 90 41 57 41 56 41 55 41 54 55 53 89 fb 31 ff 48 89 f5 48 83 ec 38 e8 ae ff ff ff 49 89 c4 <48> 63 80 e0 00 00 00 4c 89 e7 48 c1 e0 07 49 03 84 24 e8 00 00 00

I have three other cronjobs and they all work.
It runs without a hitch in shell.
| sct (setcolortemperature) segfaults with cron |
The operating system will not use more memory than it can handle in its allocation table.
Since the maximum number of bytes that can be addressed with 32 bits is 4 294 967 296, that limits the memory to 4 GB. On 64-bit systems, the theoretical maximum is 18 446 744 073 709 551 616 bytes (16 777 216 TB), which will obviously not be an issue for decades. The memory limitations on 64-bit systems depend more on how much memory the hardware can actually handle.
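The 32-bit ceiling is just arithmetic; a quick check with POSIX shell arithmetic (on a shell with 64-bit integers) confirms the numbers above:

```shell
# 2^32 bytes is the 32-bit addressing limit
echo "$(( 1 << 32 )) bytes"                 # 4294967296
# divide by 2^30 to express it in GiB
echo "$(( (1 << 32) / (1 << 30) )) GiB"     # 4
```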
Note that maximum file size often suffers from the same limitation but some systems implemented ways to overcome it.
|
I'm reading about the x86, where they mention that the segment size can go up to 4 GB on 32-bit architectures.
Has anybody seen or experienced a segment size going beyond this limit?
Or
up to what size do segments go in practical life?
and
If it goes beyond the maximum limit, is it broken into segments of different sizes? If so, how is the switching between the segments managed when RAM is too small to accommodate more than one segment?
| What is maximum size of any memory segment goes in real life coding? |
Objective Criteria/Requirements:
In determining whether to use an absolute or logical (/usr/bin/env) path to an interpreter in a shebang, there are two key considerations:
a) The interpreter can be found on the target system
b) The correct version of the interpreter can be found on the target system
If we AGREE that "b)" is desirable, we also agree that:
c) It's preferable our scripts fail rather than execute using an incorrect interpreter version and potentially achieve inconsistent results.
If we DON'T AGREE that "b)" matters, then any interpreter found will suffice.
Testing:
Since using a logical path — /usr/bin/env to the interpreter in the shebang — is the most extensible solution, allowing the same script to execute successfully on target hosts with different paths to the same interpreter, we'll test it — using Python, due to its popularity — to determine whether it meets our criteria.

Does /usr/bin/env live in a predictable, consistent location on POPULAR (not "every") operating systems? Yes:

RHEL 7.5
Ubuntu 18.04
Raspbian 10 ("Buster")
OSX 10.15.02

The below Python script was executed both inside and outside of virtual environments (Pipenv used) during tests:
#!/usr/bin/env pythonX.x
import sys
print(sys.version)
print('Hello, world!')

The shebang in the script was varied by the Python version number desired (all installed on the same host):

#!/usr/bin/env python2
#!/usr/bin/env python2.7
#!/usr/bin/env python3
#!/usr/bin/env python3.5
#!/usr/bin/env python3.6
#!/usr/bin/env python3.7

Expected results: that print(sys.version) = env pythonX.x.
Each time ./test1.py was executed using a different installed Python version, the correct version specified in the shebang was printed.

Testing Notes:

Tests were exclusively limited to Python
Perl, like Python,
MUST live in /usr/bin according to the FHS
I've not tested every possible combination on every Linuxy/Unixy operating system and version of each operating system.

Conclusion:
Although it's TRUE that #!/usr/bin/env python will use the first version of Python it matches in the user's PATH, we can enforce an express preference by specifying a version number such as #!/usr/bin/env pythonX.x. Indeed, developers don't care which interpreter is found "first"; all they care about is that their code is executed using the specified interpreter they know to be compatible with their code, to ensure consistent results — wherever that interpreter may live in the filesystem...
In terms of portability/flexibility, using a logical — /usr/bin/env — rather than an absolute path not only meets requirements a), b) & c) from my testing with different versions of Python, but also has the benefit of finding the same version of the interpreter even if it lives at different paths on different operating systems. And although MOST distros respect the FHS, not all do. So, where a script will FAIL if the binary lives at a different absolute path than the one specified in the shebang, the same script using a logical path SUCCEEDS, as the lookup keeps going until it finds a match, thereby offering greater reliability & extensibility across platforms.
|
I notice that some scripts which I have acquired from others have the shebang #!/path/to/NAME while others (using the same tool, NAME) have the shebang #!/usr/bin/env NAME.
Both seem to work properly. In tutorials (on Python, for example), there seems to be a suggestion that the latter shebang is better. But, I don't quite understand why this is so.
I realize that, in order to use the latter shebang, NAME must be in the PATH whereas the first shebang does not have this restriction.
Also, it appears (to me) that the first would be the better shebang, since it specifies precisely where NAME is located. So, in this case, if there are multiple versions of NAME (e.g., /usr/bin/NAME, /usr/local/bin/NAME), the first case specifies which to use.
My question is why is the first shebang preferred to the second one?
| Why is it better to use "#!/usr/bin/env NAME" instead of "#!/path/to/NAME" as my shebang? |
The shebang #! is a human-readable instance of a magic number consisting of the byte string 0x23 0x21, which is used by the exec() family of functions to determine whether the file to be executed is a script or a binary. When the shebang is present, exec() will run the executable specified after the shebang instead.
Note that this means that if you invoke a script by specifying the interpreter on the command line, as is done in both cases given in the question, exec() will execute the interpreter specified on the command line, it won't even look at the script.
So, as others have noted, if you want exec() to invoke the interpreter specified on the shebang line, the script must have the executable bit set and be invoked as ./my_shell_script.sh.
The behaviour is easy to demonstrate with the following script:
#!/bin/ksh
readlink /proc/$$/exe

Explanation:

#!/bin/ksh defines ksh to be the interpreter.
$$ holds the PID of the current process.
/proc/pid/exe is a symlink to the executable of the process (at least on Linux; on AIX, /proc/$$/object/a.out is a link to the executable).
readlink will output the value of the symbolic link.

Example:
Note: I'm demonstrating this on Ubuntu, where the default shell /bin/sh is a symlink to dash i.e. /bin/dash and /bin/ksh is a symlink to /etc/alternatives/ksh, which in turn is a symlink to /bin/pdksh.
$ chmod +x getshell.sh
$ ./getshell.sh
/bin/pdksh
$ bash getshell.sh
/bin/bash
$ sh getshell.sh
/bin/dash |
This may be a silly question, but I'll ask it anyway. If I have declared the shebang
#!/bin/bash at the beginning of my_shell_script.sh, do I always have to invoke this script using bash
[my@comp]$bash my_shell_script.sh

or can I use e.g.
[my@comp]$sh my_shell_script.sh

and my script determines the running shell using the shebang? Is the same true for the ksh shell? I'm using AIX.
| Does the shebang determine the shell which runs the script? |
This kind of message is usually due to a buggy shebang line, either an extra carriage return at the end of the first line or a BOM at the beginning of it.
Run:
$ head -1 yourscript | od -c

and see how it ends.
This is wrong:
0000000 # ! / b i n / b a s h \r \n

This is wrong too:
0000000 357 273 277 # ! / b i n / b a s h \n

This is correct:
0000000 # ! / b i n / b a s h \n

Use dos2unix (or sed, tr, awk, perl, python…) to fix your script if this is the issue.
Here is one that will remove both a BOM and trailing CRs:

sed -i '1s/^.*#//;s/\r$//' brokenScript

Note that the shell you are using to run the script will slightly affect the error messages that are displayed.
Here are three scripts just showing their name (echo $0) and having the following respective shebang lines:
correctScript:
0000000 # ! / b i n / b a s h \n

scriptWithBom:
0000000 357 273 277 # ! / b i n / b a s h \n

scriptWithCRLF:
0000000 # ! / b i n / b a s h \r \n

Under bash, running them will show these messages:
$ ./correctScript
./correctScript
$ ./scriptWithCRLF
bash: ./scriptWithCRLF: /bin/bash^M: bad interpreter: No such file or directory
$ ./scriptWithBom
./scriptWithBom: line 1: #!/bin/bash: No such file or directory
./scriptWithBom

Running the buggy ones by explicitly calling the interpreter allows the CRLF script to run without any issue:
$ bash ./scriptWithCRLF
./scriptWithCRLF
$ bash ./scriptWithBom
./scriptWithBom: line 1: #!/bin/bash: No such file or directory
./scriptWithBom

Here is the behavior observed under ksh:
$ ./scriptWithCRLF
ksh: ./scriptWithCRLF: not found [No such file or directory]
$ ./scriptWithBom
./scriptWithBom[1]: #!/bin/bash: not found [No such file or directory]
./scriptWithBom

and under dash:
$ ./scriptWithCRLF
dash: 2: ./scriptWithCRLF: not found
$ ./scriptWithBom
./scriptWithBom: 1: ./scriptWithBom: #!/bin/bash: not found
./scriptWithBom |
I've created a bash script but when I try to execute it, I get
#!/bin/bash no such file or directory

I need to run the command: bash script.sh for it to work.
How can I fix this?
| #!/bin/bash - no such file or directory |
There is no general solution, at least not if you need to support Linux, because the Linux kernel treats everything following the first “word” in the shebang line as a single argument.
I’m not sure what NixOS’s constraints are, but typically I would just write your shebang as
#!/bin/bash --posixor, where possible, set options in the script:
set -o posixAlternatively, you can have the script restart itself with the appropriate shell invocation:
#!/bin/sh -
if [ "$1" != "--really" ]; then exec bash --posix -- "$0" --really "$@"; fi
shift
# Processing continues
GNU coreutils’ env provides a workaround since version 8.30, see unode’s answer for details. (This is available in Debian 10 and later, RHEL 8 and later, Ubuntu 19.04 and later, etc.)
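A sketch of that env -S workaround (the /tmp path and script are illustrative, and this needs GNU coreutils 8.30 or newer):

```shell
# env -S splits "bash --posix" into separate words before execution,
# so bash receives --posix as its own argument
cat > /tmp/posix-demo.sh <<'EOF'
#!/usr/bin/env -S bash --posix
case :$SHELLOPTS: in
  *:posix:*) echo "posix mode is on" ;;
  *)         echo "posix mode is off" ;;
esac
EOF
chmod +x /tmp/posix-demo.sh
/tmp/posix-demo.sh
```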
|
I am wondering whether there is a general way of passing multiple options to an executable via the shebang line (#!).
I use NixOS, and the first part of the shebang in any script I write is usually /usr/bin/env. The problem I encounter then is that everything that comes after is interpreted as a single file or directory by the system.
Suppose, for example, that I want to write a script to be executed by bash in posix mode. The naive way of writing the shebang would be:
#!/usr/bin/env bash --posix

but trying to execute the resulting script produces the following error:
/usr/bin/env: ‘bash --posix’: No such file or directory

I am aware of this post, but I was wondering whether there was a more general and cleaner solution.

EDIT: I know that for Guile scripts, there is a way to achieve what I want, documented in Section 4.3.4 of the manual:
#!/usr/bin/env sh
exec guile -l fact -e '(@ (fac) main)' -s "$0" "$@"
!#

The trick here is that the second line (starting with exec) is interpreted as code by sh but, being inside the #! ... !# block, is treated as a comment and thus ignored by the Guile interpreter.
Would it not be possible to generalize this method to any interpreter?

Second EDIT: After playing around a little bit, it seems that, for interpreters that can read their input from stdin, the following method would work:
#!/usr/bin/env sh
sed '1,2d' "$0" | bash --verbose --posix /dev/stdin; exit;

It's probably not optimal, though, as the sh process lives until the interpreter has finished its job. Any feedback or suggestions would be appreciated.
| Multiple arguments in shebang |
The kernel interprets the line starting with #! and uses it to run the script, passing in the script's name; so this ends up running
/bin/rm scriptname

which deletes the script. (As Stéphane Chazelas points out, scriptname here is sufficient to find the script — if you specified a relative or absolute path, that's passed in as-is, otherwise whatever path was found in PATH is prepended, including possibly the empty string if your PATH contains that and the script is in the current directory. You can play around with an echo script — #!/bin/echo — to see how this works.)
As hobbs pointed out, this means your script is actually an rm script, not a bash script — the latter would start with #!/bin/bash.
See How programs get run for details of how this works in Linux; the comments on that article give details for other platforms. #! is called a shebang, you'll find lots of information by searching for that term (thanks to Aaron for the suggestion). As jlp pointed out, you'll also find it referred to as "pound bang" or "hash bang" (# is commonly known as "pound" — in countries that don't use £ — or "hash", and ! as "bang"). Wikipedia has more info.
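The echo experiment mentioned above looks like this in practice (the /tmp path is chosen just for the example):

```shell
# The kernel runs /bin/echo with the script's path (plus any arguments) appended
cat > /tmp/echoscript <<'EOF'
#!/bin/echo
EOF
chmod +x /tmp/echoscript
/tmp/echoscript hello    # prints: /tmp/echoscript hello
```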
|
If you create an executable file with the following contents, and run it, it will delete itself.
How does this work?
#!/bin/rm | Why does the following script delete itself? |
It isn't a shebang, it is just a script that gets run by the default shell. The shell executes the first line
//usr/bin/env go run $0 $@ ; exit which causes go to be invoked with the name of this file, so the result is that this file is run as a go script and then the shell exits without looking at the rest of the file.
But why start with // instead of just / or a proper shebang #! ?
This is because the file needs to be a valid go script, or go will complain. In go, the characters // denote a comment, so go sees the first line as a comment and does not attempt to interpret it. The character #, however, does not denote a comment, so a normal shebang would result in an error when go interprets the file.
The reason for the syntax is just to build a file that is both a shell script and a go script without one stepping on the other.
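The shell side is unaffected by the doubled slash: on Linux a leading // resolves to the same root as /, so the first line is simultaneously a runnable command for the shell and a comment for go. A quick check:

```shell
# On Linux, //usr/bin/env names the same file as /usr/bin/env
//usr/bin/env echo "double slash still resolves"
```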
|
I'm confused about the following script (hello.go).
//usr/bin/env go run $0 $@ ; exit

package main
import "fmt"
func main() {
fmt.Printf("hello, world\n")
}

It can execute. (on MacOS X 10.9.5)
$ chmod +x hello.go
$ ./hello.go
hello, world

I haven't heard about a shebang starting with //. And it still works when I insert a blank line at the top of the script. Why does this script work?
| Shebang starting with `//`? |
There are systems not shipping bash by default (e.g. FreeBSD).
Even if bash is installed, it might not be located in /bin.
Most simple scripts don't require bash.
Using the POSIX shell is more portable and the scripts will run on a greater variety of systems. |
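As an illustration of the last two points, [[ … ]] is a bash feature that the POSIX shell lacks (on systems where /bin/sh is dash, the second command fails):

```shell
# [[ ... ]] is a bashism, not part of the POSIX shell language
bash -c '[[ -n hello ]] && echo "ok in bash"'
# Under a strict POSIX sh such as dash, the same test is an error:
sh -c '[[ -n hello ]] && echo "ok in sh"' 2>/dev/null || echo "not supported by this sh"
```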
In most shell scripts I've seen (besides ones I haven't written myself), I noticed that the shebang is set to #!/bin/sh. This doesn't really surprise me on older scripts, but it's there on fairly new scripts, too.
Is there any reason for preferring /bin/sh over /bin/bash, since bash is pretty much ubiquitous, and often default, on many Linux and BSD machines going back well over a decade?
| Is there any reason to have a shebang pointing at /bin/sh rather than /bin/bash? |
does using the .bash extension actually invoke bash or does it depend
on system config / 1st shebang line.

If you do not use an interpreter explicitly, then the interpreter being invoked is determined by the "shebang" used in the script (the #!-line, which must be the first line of the script file).
On the other hand, if you use an interpreter explicitly, then the interpreter doesn't care what extension you gave your script. However, the extension exists to make it very obvious for others what kind of script it is.
[sreeraj@server ~]$ cat ./ext.py
#!/bin/bash
echo "Hi. I am a Bash script"See, the .py extension to the Bash script does not make it a Python script.
[sreeraj@server ~]$ python ./ext.py
File "./ext.py", line 2
echo "Hi. I am a Bash script"
^
SyntaxError: invalid syntax

It's always a Bash script.
[sreeraj@server ~]$ ./ext.py
Hi. I am a Bash script |
(See Use #!/bin/sh or #!/bin/bash for Ubuntu-OSX compatibility and ease of use & POSIX)
If I want my scripts to use the Bash shell, does using the .bash extension actually invoke Bash or does it depend on system config or the first shebang line. If both were in effect but different, which would have precedence?
I'm not sure whether to end my scripts with .sh to just indicate "shell script" and then have the first line select the Bash shell (e.g. #!/usr/bin/env bash) or whether to just end them with .bash (as well as the setting in the first line).
I want Bash to be invoked.
| Use .sh or .bash extension for Bash scripts? |
The definitive answer to "how programs get run" on Linux is the pair of articles on LWN.net titled, surprisingly enough, How programs get run and How programs get run: ELF binaries. The first article addresses scripts briefly. (Strictly speaking the definitive answer is in the source code, but these articles are easier to read and provide links to the source code.)
A little experimentation shows that you pretty much got it right, and that the execution of a file containing a simple list of commands, without a shebang, needs to be handled by the shell. The execve(2) manpage contains source code for a test program, execve; we'll use that to see what happens without a shell. First, write a test script, testscr1, containing
#!/bin/sh

pstree

and another one, testscr2, containing only
pstree

Make them both executable, and verify that they both run from a shell:
chmod u+x testscr[12]
./testscr1 | less
./testscr2 | less

Now try again, using execve (assuming you built it in the current directory):
./execve ./testscr1
./execve ./testscr2

testscr1 still runs, but testscr2 produces
execve: Exec format error

This shows that the shell handles testscr2 differently. It doesn't process the script itself though, it still uses /bin/sh to do that; this can be verified by piping testscr2 to less:
./testscr2 | less -ppstree

On my system, I get
|-gnome-terminal--+-4*[zsh]
| |-zsh-+-less
| | `-sh---pstree

As you can see, there's the shell I was using, zsh, which started less, and a second shell, plain sh (dash on my system), to run the script, which ran pstree. In zsh this is handled by zexecve in Src/exec.c: the shell uses execve(2) to try to run the command, and if that fails, it reads the file to see if it has a shebang, processing it accordingly (which the kernel will also have done), and if that fails it tries to run the file with sh, as long as it didn't read any zero byte from the file:
for (t0 = 0; t0 != ct; t0++)
if (!execvebuf[t0])
break;
if (t0 == ct) {
argv[-1] = "sh";
winch_unblock();
execve("/bin/sh", argv - 1, newenvp);
}

bash has the same behaviour, implemented in execute_cmd.c with a helpful comment (as pointed out by taliezin):

Execute a simple command that is hopefully defined in a disk file
somewhere.

fork ()
connect pipes
look up the command
do redirections
execve ()
If the execve failed, see if the file has executable mode set.
If so, and it isn't a directory, then execute its contents as
a shell script.

POSIX defines a set of functions, known as the exec(3) functions, which wrap execve(2) and provide this functionality too; see muru's answer for details. On Linux at least these functions are implemented by the C library, not by the kernel.
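The fallback that both shells implement is easy to observe from an interactive shell with a shebang-less file (the /tmp path is illustrative):

```shell
# execve() on this file fails with ENOEXEC (no shebang, not a binary),
# so the invoking shell falls back to running it with sh
printf 'echo ran without a shebang\n' > /tmp/noshebang
chmod +x /tmp/noshebang
/tmp/noshebang    # prints: ran without a shebang
```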
|
So, I thought I had a good understanding of this, but just ran a test (in response to a conversation where I disagreed with someone) and found that my understanding is flawed...
In as much detail as possible what exactly happens when I execute a file in my shell? What I mean is, if I type: ./somefile some arguments into my shell and press return (and somefile exists in the cwd, and I have read+execute permissions on somefile) then what happens under the hood?
I thought the answer was:

The shell makes a syscall to exec, passing the path to somefile
The kernel examines somefile and looks at the magic number of the file to determine if it is a format the processor can handle
If the magic number indicates that the file is in a format the processor can execute, then a new process is created (with an entry in the process table)
somefile is read/mapped to memory. A stack is created and execution jumps to the entry point of the code of somefile, with ARGV initialized to an array of the parameters (a char**, ["some","arguments"])

If the magic number is a shebang then exec() spawns a new process as above, but the executable used is the interpreter referenced by the shebang (e.g. /bin/bash or /bin/perl) and somefile is passed to STDIN
If the file doesn't have a valid magic number, then an error like "invalid file (bad magic number): Exec format error" occurs

However someone told me that if the file is plain text, then the shell tries to execute the commands (as if I had typed bash somefile). I didn't believe this, but I just tried it, and it was correct. So I clearly have some misconceptions about what actually happens here, and would like to understand the mechanics.
What exactly happens when I execute a file in my shell? (in as much detail as is reasonable...)
| What exactly happens when I execute a file in my shell? |
For portability, no. While zsh can be compiled on any Unix or Unix-like and even Windows at least via Cygwin, and is packaged for most Open Source Unix-likes and several commercial ones, it is generally not included in the default install.
bash on the other hand is installed on GNU systems (as bash is the shell of the GNU project) like the great majority of non-embedded Linux based systems and sometimes on non-GNU systems like Apple OS/X. In the commercial Unix side, the Korn shell (the AT&T variant, though more the ksh88 one) is the norm and both bash and zsh are in optional packages. On the BSDs, the preferred interactive shell is often tcsh while sh is based on either the Almquist shell or pdksh and bash or zsh need to be installed as optional packages as well.
zsh is installed by default on Apple OS/X. It even used to be the /bin/sh there. It can be found by default in a few Linux distributions like SysRescCD, Grml, Gobolinux and probably others, but I don't think any of the major ones.
Like for bash, there's the question of the installed version and as a consequence the features available. For instance, it's not uncommon to find systems with bash3 or zsh3. Also, there's no guarantee that the script that you write now for zsh5 will work with zsh6 though like for bash they do try to maintain backward compatibility.
For scripts, my view is: use the POSIX shell syntax as all Unices have at least one shell called sh (not necessarily in /bin) that is able to interpret that syntax. Then you don't have to worry so much about portability. And if that syntax is not enough for your need, then probably you need more than a shell.
Then, your options are:

Perl which is ubiquitous (though again you may have to limit yourself to the feature set of old versions, and can't make assumptions on the Perl modules installed by default)
Specify the interpreter and its version (python 2.6 or above, zsh 4 or above, bash 4.2 or above...), as a dependency for your script, either by building a package for every targeted system which specifies the dependency or by stipulating it in a README file shipped alongside your script or embedded as comments at the top of your script, or by adding a few lines in Bourne syntax at the beginning of your script that checks for the availability of the requested interpreter and bails out with an explicit error when it's not, like this script needs zsh 4.0 or above.
Ship the interpreter alongside your script (beware of licensing implications) which means you also need one package for every targeted OS. Some interpreters make it easier by providing a way to pack the script and its interpreter in a single executable.
Write it in a compiled language. Again, one package per targeted system.
|
Can I assume that enough people have zsh installed to run scripts with a
#!/usr/bin/env zsh

as shebang?
Or will this make my scripts un-runnable on too many systems?
Clarification: I’m interested in programs/scripts an end user might want to run (like on Ubuntu, Debian, SUSE, Arch &c.)
| Is it recommended to use zsh instead of bash scripts? [closed] |
The shebang line you've seen may work on some unix variants, but not on Linux. Linux's shebang lines are limited: you can only have one option. The whole string -d -m -S screenName /bin/bash is passed as a single option to screen, instead of being passed as different words.
If you want to run a script inside screen and not mess around with multiple files or quoting, you can make the script a shell script which invokes screen if not already inside screen.
#!/bin/sh
if [ -z "$STY" ]; then exec screen -dm -S screenName /bin/bash "$0"; fi
do_stuff
more_stuff |
I want to run a bash script in a detached screen. The script calls a program a few times, each of which takes too long to wait. My first thought was to simply open a screen and then call the script, but it appears that I can't detach (by ctrl-a d) while the script is running. So I did some research and found this instruction to replace the shebang with following:
#!/usr/bin/screen -d -m -S screenName /bin/bashBut that doesn't work, either (the options are not recognized). Any suggestions?
PS It occurs to me just now that screen -dmS name ./script.sh would probably work for my purposes, but I'm still curious about how to incorporate this into the script. Thank you.
| Run script in a screen |
PATH lookup is a feature of the standard C library in userspace, as are environment variables in general. The kernel doesn't see environment variables except when it passes over an environment from the caller of execve to the new process.
The kernel does not perform any interpretation on the path in execve (it's up to wrapper functions such as execvp to perform PATH lookup) or in a shebang (which more or less re-routes the execve call internally). So you need to put the absolute path in the shebang¹. The original shebang implementation was just a few lines of code, and it hasn't been significantly expanded since.
In the first versions of Unix, the shell did the work of invoking itself when it noticed you were invoking a script. Shebang was added in the kernel for several reasons (summarizing the rationale by Dennis Ritchie):

The caller doesn't have to worry whether a program to execute is a shell script or a native binary.
The script itself specifies what interpreter to use, instead of the caller.
The kernel uses the script name in logs.

Pathless shebangs would require either augmenting the kernel to access environment variables and process PATH, or having the kernel execute a userspace program that performs the PATH lookup. The first method requires adding a disproportionate amount of complexity to the kernel. The second method is already possible with a #!/usr/bin/env shebang.
¹ If you put a relative path, it's interpreted relatively to the current directory of the process (not the directory containing the script), which is hardly useful in a shebang.
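The userspace PATH lookup that a #!/usr/bin/env shebang delegates to env can be seen directly (a minimal sketch; the /tmp path and file name are arbitrary):

```shell
# A script whose shebang asks env to locate "sh" through PATH.
printf '#!/usr/bin/env sh\necho found-via-env\n' > /tmp/demo-env.sh
chmod +x /tmp/demo-env.sh

# The kernel execs /usr/bin/env with "sh" as its argument; env then
# searches PATH for sh and execs it on the script.
/tmp/demo-env.sh
```

Changing PATH before the call changes which sh env finds, which is exactly the lookup the kernel itself refuses to do.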
|
Is it possible to have a shebang that, instead of specifying a path to an interpreter, it has the name of the interpreter, and lets the shell find it through $PATH?
If not, is there a reason why?
| Why not use pathless shebangs? |
.bashrc and .bash_profile are NOT scripts. They're configuration files which get sourced every time bash is executed in one of two ways:

- interactive
- login

The INVOCATION section of the bash man page is what's relevant.

A login shell is one whose first character of argument zero is a -, or
one started with the --login option.
An interactive shell is one started without non-option arguments and
without the -c option whose standard input and error are both
connected to terminals (as determined by isatty(3)), or one started
with the -i option. PS1 is set and $- includes i if bash is
interactive, allowing a shell script or a startup file to test this
state.
The following paragraphs describe how bash executes its startup
files. If any of the files exist but cannot be read, bash reports an
error. Tildes are expanded in file names as described below under
Tilde Expansion in the EXPANSION section.
When bash is invoked as an interactive login shell, or as a
non-interactive shell with the --login option, it first reads and
executes commands from the file /etc/profile, if that file
exists. After reading that file, it looks for ~/.bash_profile,
~/.bash_login, and ~/.profile, in that order, and reads and executes
commands from the first one that exists and is readable. The
--noprofile option may be used when the shell is started to inhibit this behavior.
When a login shell exits, bash reads and executes commands from the
file ~/.bash_logout, if it exists.
When an interactive shell that is not a login shell is started, bash
reads and executes commands from ~/.bashrc, if that file exists.
This may be inhibited by using the --norc option. The --rcfile file
option will force bash to read and execute commands from file instead
of ~/.bashrc.You can control when they get loaded through the command line switches, --norc and --noprofile. You can also override the location of where they get loaded from using the --rcfile switch.
As others have mentioned, you can mimic how these files get loaded through the use of the source <file> command or the . <file> command.
It's best to think of this functionality as follows:

- bash starts up with a bare environment
- bash then opens one of these files (depending on how it was invoked, as interactive or login), and then...
- ...line by line executes each of the commands within the file...
- when complete, gives control back to you in the form of a prompt, waiting for input

Methods for invoking
This topic seems to come up every once in a while, so here's a little cheatsheet of the various ways to invoke bash and what they result in. NOTE: To help I've added the messages "sourced $HOME/.bashrc" and "sourced $HOME/.bash_profile" to their respective files.
basic calls

bash -i
$ bash -i
sourced /home/saml/.bashrc

bash -l
$ bash -l
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile

bash -il -or- bash -li
$ bash -il
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile

bash -c "..cmd.."
$ bash -c 'echo hi'
hi

NOTE: Notice that the -c switch didn't source either file!

disabling config files from being read

bash --norc
$ bash --norc
bash-4.1$

bash --noprofile
$ bash --noprofile
sourced /home/saml/.bashrc

bash --norc -i
$ bash --norc -i
bash-4.1$

bash --norc -l
$ bash --norc -l
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile

bash --noprofile -i
$ bash --noprofile -i
sourced /home/saml/.bashrc

bash --noprofile -l
$ bash --noprofile -l
bash-4.1$

bash --norc -c "..cmd.."
$ bash --norc -c 'echo hi'
hi

More esoteric ways to call bash

bash --rcfile $HOME/.bashrc
$ bash -rcfile ~/.bashrc
sourced /home/saml/.bashrc

bash --norc --rcfile $HOME/.bashrc
$ bash --norc -rcfile ~/.bashrc
bash-4.1$

These failed

bash -i -rcfile ~/.bashrc
$ bash -i -rcfile ~/.bashrc
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile
bash: /home/saml/.bashrc: restricted: cannot specify `/' in command names

bash -i -rcfile .bashrc
$ bash -i -rcfile .bashrc
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile
bash: .bashrc: command not found

There are probably more but you get the point, hopefully....
What else?
Lastly if you're so enthralled with this topic that you'd like to read/explore more on it, I highly suggest taking a look at the Bash Beginners Guide, specifically section: 1.2. Advantages of the Bourne Again SHell. The various subsections under that one, "1.2.2.1. Invocation" through "1.2.2.3.3. Interactive shell behavior" explain the low level differences between the various ways you can invoke bash.
|
Simple inquiry: I have just realized that I have never seen a shebang on top of a .bashrc script, which leads me to think the system uses the default shell to source it upon login (${SHELL}). I am pondering over reasons why that is the case, i.e. is it considered a bad habit to use something other than the default shell to run the login script.
| Why no shebang in .bashrc/.bash_profile? |
Because the script does not begin with a #! shebang line indicating which interpreter to use, POSIX says that:

If the execl() function fails due to an error equivalent to the [ENOEXEC] error defined in the System Interfaces volume of POSIX.1-2008, the shell shall execute a command equivalent to having a shell invoked with the pathname resulting from the search as its first operand, with any remaining arguments passed to the new shell, except that the value of "$0" in the new shell may be set to the command name. If the executable file is not a text file, the shell may bypass this command execution. In this case, it shall write an error message, and shall return an exit status of 126.

That phrasing is a little ambiguous, and different shells have different interpretations.
In this case, Bash will run the script using itself. On the other hand, if you ran it from zsh instead, zsh would use sh (whatever that is on your system) instead.
You can verify that behaviour for this case by adding these lines to the script:
echo $BASH_VERSION
echo $ZSH_VERSION

You'll note that, from Bash, the first line outputs your version, while the second never says anything, no matter which shell you use.

- If your /bin/sh is, say, dash, then neither line will output anything when the script is executed from zsh or dash.
- If your /bin/sh is a link to Bash, you'll see the first line output in all cases.
- If /bin/sh is a different version of Bash than you were using directly, you'll see different output when you run the script from bash directly and from zsh.

The ps -p $$ command from rools's answer will also show useful information about the command the shell used to execute the script.
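This behaviour can be checked end to end (a sketch; the /tmp file name is arbitrary):

```shell
# A script with no shebang; BASH_VERSION is only set when bash runs it.
printf 'echo "${BASH_VERSION:-not-bash}"\n' > /tmp/noshebang.sh
chmod +x /tmp/noshebang.sh

# Launched from bash, the kernel's execve fails with ENOEXEC and bash
# falls back to running the file itself, so a bash version is printed.
bash -c '/tmp/noshebang.sh'
```

Running the same file from zsh or dash instead would show which fallback interpreter that shell picks.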
|
Suppose the default shell for my account is zsh but I opened the terminal and fired up bash and executed a script named prac002.sh, which shell interpreter would be used to execute the script, zsh or bash? Consider the following example:
papagolf@Sierra ~/My Files/My Programs/Learning/Shell % sudo cat /etc/passwd | grep papagolf
[sudo] password for papagolf:
papagolf:x:1000:1001:Rex,,,:/home/papagolf:/usr/bin/zsh
# papagolf's default shell is zsh

papagolf@Sierra ~/My Files/My Programs/Learning/Shell % bash
# I fired up bash. (See that '%' prompt in zsh changes to '$' prompt, indicating bash.)

papagolf@Sierra:~/My Files/My Programs/Learning/Shell$ ./prac002.sh
Enter username : Rex
Rex
# Which interpreter did it just use?

EDIT: Here's the content of the script
papagolf@Sierra ~/My Files/My Programs/Learning/Shell % cat ./prac002.sh
read -p "Enter username : " uname
echo $uname | Which shell interpreter runs a script with no shebang? |
Another interesting name derivation from here.Among UNIX shell (user interface) users, a shebang is a term for the
"#!" characters that must begin the first line of a script. In musical
notation, a "#" is called a sharp and an exclamation point - "!" - is
sometimes referred to as a bang. Thus, shebang becomes a shortening of
sharp-bang |
Does "shebang" mean "bang she"?
Why not "hebang" as "bang he"?
| Why is "shebang" called "shebang"? |
This looks like a placeholder in a GNU Automake template which is going to be filled in by a configure script. So it's neither a Perl thing nor a Unix kernel thing, but a GNU autotools thing.
It is probably from a file in a source distribution, not from a file that was installed on the system through make install or a package manager.
Alternatively, it's from a broken build with GNU autotools that never defined perlbin properly.
That the file has a .in suffix confirms that it is supposed to be processed by configure.
No, you can not execute this file as it is. The placeholder will be replaced with the proper path to the perl executable when you run configure.
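In miniature, that configure step is just a textual substitution over the @placeholder@. This sketch uses sed and sh standing in for autoconf and perl so it runs anywhere; the file names are made up:

```shell
# A .in template with an @placeholder@ shebang, like apxs.in uses @perlbin@.
printf '#!@shbin@\necho substituted\n' > /tmp/tool.in

# What configure does, in miniature: discover the interpreter, fill it in.
shbin=$(command -v sh)
sed "s|@shbin@|$shbin|" /tmp/tool.in > /tmp/tool
chmod +x /tmp/tool

/tmp/tool   # now has a real shebang and can be executed directly
```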
|
In the Apache httpd project's "support/apxs.in" script, a text surrounded with @ signs comes after #!. That is, the first line of the script is:
#!@perlbin@ -w

Is this a Perl thing or a UNIX kernel thing? In other words, is it possible to execute this script using path/to/script/script_name.in?
If not, then what is the reason to start the script with a #!?
| At sign after shebang? |
No, that won't work. The two characters #! absolutely need to be the first two characters in the file (how would you specify what interprets the if-statement, anyway?). This constitutes the "magic number" that the exec() family of functions detects when they determine whether a file that they are about to execute is a script (which needs an interpreter) or a binary file (which doesn't).
The format of the shebang line is quite strict. It needs to have an absolute path to an interpreter and at most one argument to it.
What you can do is to use env:
#!/usr/bin/env interpreter

Now, the path to env is usually /usr/bin/env, but technically that's no guarantee.
This allows you to adjust the PATH environment variable on each system so that interpreter (be it bash, python or perl or whatever you have) is found.
A downside with this approach is that it will be impossible to portably pass an argument to the interpreter.
This means that

#!/usr/bin/env awk -f

and

#!/usr/bin/env sed -f

are unlikely to work on some systems.
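On Linux you can watch this fail, because the kernel hands everything after the interpreter path to env as one single argument (the file name below is arbitrary):

```shell
# The shebang line asks env for a program literally named "awk -f".
printf '#!/usr/bin/env awk -f\nBEGIN { print "hi" }\n' > /tmp/multiarg
chmod +x /tmp/multiarg

# On Linux, env fails because no program called "awk -f" exists; on BSDs,
# which split shebang arguments, this would print "hi" instead.
/tmp/multiarg 2>/dev/null || echo "failed as expected on Linux"
```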
Another obvious approach is to use GNU autotools (or some simpler templating system) to find the interpreter and place the correct path into the file in a ./configure step, which would be run upon installing the script on each system.
One could also resort to running the script with an explicit interpreter, but that's obviously what you're trying to avoid:
$ sed -f script.sed |
Is there any way to dynamically choose the interpreter that's executing a script? I have a script that I'm running on two different systems, and the interpreter I want to use is located in different locations on the two systems. What I end up having to to is change the hashbang line every time I switch over. I would like to do something that is the logical equivalent of this (I realize that this exact construct is impossible):
if running on system A:
#!/path/to/python/on/systemA
elif running on system B:
#!/path/on/systemB#Rest of script goes hereOr even better would be this, so that it tries to use the first interpreter, and if it doesn't find it uses the second:
try:
#!/path/to/python/on/systemA
except:
#!path/on/systemB#Rest of script goes hereObviously, I can instead execute it as
/path/to/python/on/systemA myscript.py
or
/path/on/systemB myscript.py
depending on where I am, but I actually have a wrapper script that launches myscript.py, so I would like to specify the path to the python interpreter programmatically rather than by hand.
| Choose interpreter after script start e.g. if/else inside hashbang |
Setting to #!/bin/sh will go directly to that file /bin/sh.
Setting to #!/usr/bin/env sh will execute /usr/bin/env with an argument of sh. This will cause the script to be executed by sh in your PATH variable rather than explicitly with /bin/sh.
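The difference is easy to observe from the command line (illustrative commands only):

```shell
# env resolves "sh" through PATH at run time:
command -v sh                                # the sh that env will find first
/usr/bin/env sh -c 'echo started-via-env'

# whereas a #!/bin/sh shebang always execs exactly /bin/sh,
# regardless of what PATH contains.
```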
|
I recently noticed that many scripts are using /usr/bin/env in their shebang. I have seen that mainly using Bash and Python, but thus far never in conjunction with POSIX sh (ash, dash,...).
I wonder why, and if my meant-to-be highly portable POSIX shell scripts might benefit from the env approach?

Is there a general consensus on whether to use:

standard:
#!/bin/sh

environment:
#!/usr/bin/env sh

Let me stress this enough:
I never have seen this with sh.
| POSIX shell scripts shebang #!/bin/sh vs #!/usr/bin/env sh, any difference? |
ls -lL /usr/bin/env shows that the symbolic link is broken. That explains why the shebang line isn't working: the kernel is trying, and obviously failing, to execute a dangling symbolic link.
/usr/bin/env -> ../../bin/env is correct if /usr and /usr/bin are both actual directories (not symlinks). Evidently this isn't the case on your machine. Maybe /usr is a symbolic link? (Evidently it isn't a symbolic link to /, otherwise /usr/bin/env would be the same file as /bin/env, not a symbolic link).
You need to fix that symbolic link. You can make it an absolute link:
sudo ln -snf /bin/env /usr/bin/env

You can make it a relative link, but if you do, make sure it's correct. Switch to /usr/bin and run ls -l relative/path/to/bin/env to confirm that you've got it right before creating the symlink.
This isn't a default RHEL setup, so you must have modified something locally. Try to find out what you did and whether that could have caused other similar problems.
|
I ran into some issues when running some installation scripts where they complained of bad interpreter.
So I made a trivial example but I can't figure out what the problem is, see below.
#!/usr/bin/env bash
echo "hello"

Executing the script above results in the following error
[root@ech-10-24-130-154 dc-user]# ./junk.sh
bash: ./junk.sh: /usr/bin/env: bad interpreter: No such file or directory

The /usr/bin/env file exists, see below:
[root@ech-10-24-130-154 dc-user]# ls -l /usr/bin/env
lrwxrwxrwx 1 root root 13 Jan 27 04:14 /usr/bin/env -> ../../bin/env
[root@ech-10-24-130-154 dc-user]# ls -l /bin/env
-rwxr-xr-x 1 root root 23832 Jul 16 2014 /bin/env
[root@ech-10-24-130-154 dc-user]#

If I alter the script to use the regular shebang #!/bin/bash it works no problem. #!/bin/env bash works as well.
What is missing from the environment to allow the portable shebang to work?
ls -lL /usr/bin/env returns ls: cannot access /usr/bin/env: No such file or directory so I guess I need to alter the symbolic link? Can I point it to /bin/env?
env --version is 8.4 and the OS is Red Hat Enterprise Linux Server release 6.6.
| Why is #!/usr/bin/env bash not working on my system? |
They're the same options as -e and -u to the set builtin. They can be given on the shell command line too, and they get given as command line arguments from the hashbang line too. (But note e.g. issues with Multiple arguments in shebang)
The online manual says, under "Invoking Bash", that

All of the single-character options used with the set builtin can be used as options when the shell is invoked.

The single character options are also explicitly listed in the invocation synopsis in the online manual (bash [long-opt] [-abefhkmnptuvxdBCDHP] [-o option] ...), though not in the manpage.
set -u tells the shell to treat expanding an unset parameter as an error, which helps to catch e.g. typos in variable names.
set -e tells the shell to exit if a command exits with an error (except if the exit value is tested in some other way). That can be used in some cases to abort the script on error, without explicitly testing the status of each and every command.
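Both effects are quick to demonstrate from the command line:

```shell
# -u: expanding an unset variable is a fatal error
bash -eu -c 'echo "$no_such_variable"' 2>/dev/null \
    || echo "stopped on unset variable"

# -e: the shell exits as soon as a command fails (false here),
# so the echo after it never runs
bash -eu -c 'false; echo "never printed"' 2>/dev/null \
    || echo "stopped on failing command"
```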
|
What does the eu mean after #!/bin/bash -eu at the top of a bash script?
Normally I begin my bash scripts with this hashbang/shebang:
#!/bin/bash

but I just came across one with

#!/bin/bash -eu

and I have no idea why there is a -eu there. Reading the man bash pages doesn't seem to help me, but maybe I'm overlooking something.

Not a duplicate:
This is not a duplicate of Correct behavior of EXIT and ERR traps when using `set -eu`.
Quoting @ilkkachu in the comments below this question, directly addressing this:...how -e and -u work with regard to traps or anything else is completely unrelated to how they and other single-character options can be given on the command line.I agree with that. These are separate questions and answers with differing motivations behind them. That question is so different I would never even think to click on it by looking at its title OR its description when trying to understand the answer to my own question here, and the answers are vastly different too.
| What does the `-eu` mean in `#!/bin/bash -eu` at the top of a bash script? (or any of `-abefhkmnptuvxBCHP`) |
I think primarily because:

the behaviour varies greatly between implementations. See https://www.in-ulm.de/~mascheck/various/shebang/ for all the details.
It could however now specify a minimum subset of most Unix-like implementations: like #! *[^ ]+( +[^ ]+)?\n (with only characters from the portable filename character set in those one or two words) where the first word is an absolute path to a native executable, the whole thing is not too long, behaviour is unspecified if the executable is setuid/setgid, and it is implementation-defined whether the interpreter path or the script path is passed as argv[0] to the interpreter.

POSIX doesn't specify the path of executables anyway. Several systems have pre-POSIX utilities in /bin or /usr/bin and have the POSIX utilities somewhere else (like on Solaris 10, where /bin/sh is a Bourne shell and the POSIX one is in /usr/xpg4/bin; Solaris 11 replaced it with ksh93, which is more POSIX-compliant, but most of the other tools in /bin are still ancient non-POSIX ones). Some systems are not POSIX ones but have a POSIX mode/emulation. All POSIX requires is that there be a documented environment in which a system behaves POSIXly.
See Windows+Cygwin for instance. Actually, with Windows+Cygwin, the she-bang is honoured when a script is invoked by a cygwin application, but not by a native Windows application.
So even if POSIX specified the shebang mechanism, it could not be used to write POSIX sh/sed/awk... scripts (also note that the shebang mechanism cannot be used to write reliable sed/awk scripts as it doesn't allow passing an end-of-option marker).

Now the fact that it's unspecified doesn't mean you can't use it (well, it says you shouldn't have the first line start with #! if you expect it to be only a regular comment and not a she-bang), but POSIX gives you no guarantee if you do.
In my experience, using shebangs gives you more guarantee of portability than using POSIX's way of writing shell scripts: leave off the she-bang, write the script in POSIX sh syntax and hope that whatever invokes the script invokes a POSIX compliant sh on it, which is fine if you know the script will be invoked in the right environment by the right tool but not otherwise.
You may have to do things like:
#! /bin/sh -
if : ^ false; then : fine, POSIX system by default
else
# cover Solaris 10 or older. ": ^ false" returns false
# in the Bourne shell as ^ is an alias for | there for
# compatibility with the Thompson shell.
PATH=`getconf PATH`:$PATH; export PATH
exec /usr/xpg4/bin/sh - "$0" ${1+"$@"}
fi
# rest of scriptIf you want to be portable to Windows+Cygwin, you may have to name your file with a .bat or .ps1 extension and use some similar trick for cmd.exe or powershell.exe to invoke the cygwin sh on the same file.
|
From the Shell Command Language page of the POSIX specification:If the first line of a file of shell commands starts with the characters "#!", the results are unspecified.Why is the behavior of #! unspecified by POSIX? I find it baffling that something so portable and widely used would have an unspecified behavior.
| Why is the behavior of the `#!` syntax unspecified by POSIX? |
According to Sven Mascheck (who's generally reliable and well-informed):interpreter itself as #! script
or: can you nest #!?
(…)
Linux since 2.6.27.9 and Minix accept this.
(…)
see the kernel patch
(patch to be applied to 2.6.27.9) and especially see binfmt_script.c which contains the important parts.
Linux allows at most BINPRM_MAX_RECURSION, that is 4, levels of nesting.Note that this recursion concerns both indirect execution mechanisms that Linux implements: #! scripts, and executable formats registered through binfmt_misc. So for example you can have a script with a #! line that points to an interpreter written in bytecode which gets dispatched to a foreign-architecture binary which gets dispatched via Qemu, and that counts for 3 levels of nesting.
Sven Mascheck also notes that no BSD supports nested shebang, but that some shells will take over if the kernel returns an error.
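On Linux, one level of this nesting is simple to demonstrate (a sketch; the /tmp paths are arbitrary, and it will fail on the BSDs mentioned above):

```shell
# An "interpreter" that is itself a #! script.
cat > /tmp/interp <<'EOF'
#!/bin/sh
echo "interp invoked on: $1"
EOF
chmod +x /tmp/interp

# A script whose shebang points at that script interpreter.
cat > /tmp/nested <<'EOF'
#!/tmp/interp
EOF
chmod +x /tmp/nested

# The kernel resolves two shebang levels: /tmp/interp, then /bin/sh.
/tmp/nested
```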
|
I've always heard that the target of a shebang line (e.g. #!/bin/bash) must be a binary executable, not a script. And this is still true for many OSes (e.g. MacOS). But I was surprised to see that this is not true on Linux, where up to 4 levels of scripts can be used, where the fourth script references a binary executable in its shebang line. However, if 5 levels of scripts are used, then the program will fail with the error Too many levels of symbolic links.
See the LWN article "How programs get run" and the following code which was not shown in that article.
$ cat wrapper2
#!./wrapper

When did this change occur (assuming that at some point it was not allowed)?
| Shebang can reference a script in Linux |
For portability, you can safely assume that #!/bin/sh will find a mostly POSIX-compliant shell on any standard Unix or Linux system, but that's really about it.
In FreeBSD, OpenBSD and NetBSD (along with DragonFly, PC-BSD and some other derivatives), bash is located at /usr/local/bin/bash (if it is installed), so the /usr/bin/env approach provides portability between Linux and BSD.
Android is not a standard Unix or Linux system. On my not-rooted Android phone, none of /usr/bin/env, /bin/bash or even /bin/sh exist, and the system shell is /system/bin/sh.
A shell script that is missing the #! (shebang) will attempt to run in the shell that called it on some systems, or may use a different default interpreter (/bin/bash for example), on other systems. And while this may work in Android, it isn't guaranteed to work in other operating systems, where users may elect to use an interactive shell that is not bash. (I use tcsh in FreeBSD, where it is the default shell, and shebang-less script are interpreted by the calling shell.)
So from where I sit, it looks like it is not possible to create a shell script that is portable between Android and non-Android (Linux or Unix) systems, because Android does things differently.
| Which one is better:

#!/usr/bin/env sh
#!/bin/sh
empty/no header

I used to think the 1st one is the best; anyway, I've found on some Linux-based systems (like Android) that the pathname is missing, so now I'm thinking the only way to have "portable" shell scripts is to not include any header...
| shell script header for best compatibility [duplicate] |
Shebang wasn't meant to be that flexible. There may be some cases where having a second parameter works, I think FreeBSD is one of them.
gawk and most utilities that come with the OS are expected to be in /usr/bin/.
In the older UNIX days, it was common to have /usr/ mounted over NFS or some less expensive media to save local disk space and cost per workstation. /bin/ was supposed to have everything needed to boot in single user mode. Since /usr/ wasn't mounted on a reliable media, /bin/ included enough utilities to make it friendly enough for general administration and troubleshooting.
This was inherited in Linux initially, but as disk space is no longer an issue and in most cases /usr/ is in the root filesystem, the current trend is to move everything in /usr/bin (at least in the Linux world). So most utilities installed by a distro are expected to be found there. Even the most basic utilities, like cp, rm, ls etc (well, not yet).
Regarding the shebang choice: traditionally, this is something the admins or users have to edit according to their environment. For all a developer knows, on other people's systems the interpreter could be anywhere in the filesystem (e.g. /usr/local/bin, /opt/gawk-4.0.1/bin). Properly packaged scripts (rpm, deb, etc.) come with either a dependency on a distro package (i.e. the interpreter has a known location) or a config script that sets up the proper hashbang during installation.
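A quick check of the actual layout on a given machine (purely illustrative; on many modern distros /bin is a symlink to /usr/bin, so both paths resolve to the same file):

```shell
# Where does awk (or gawk) resolve from PATH?
command -v awk

# Which of the two historical locations actually exist here?
for d in /bin /usr/bin; do
    [ -x "$d/awk" ] && echo "present: $d/awk"
done
true   # a missing path shouldn't decide the script's exit status
```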
|
Is gawk in /bin or /usr/bin usually? I would go with #!/usr/bin/env gawk but then I can't use arguments. Right now I'm using #!/bin/gawk -f. The script is very long and contains a lot of single quotes and works with stdin.
The GNU Awk manual has section 1.1.4 Executable awk Programs where it uses #!/bin/awk in its example but goes on to say: Note that on many systems awk may be found in /usr/bin instead of
in /bin. Caveat Emptor.

What do most people do? I've read sed is supposedly standardized in /bin whereas perl is supposedly standardized in /usr/bin (same page as sed link but they won't let me make a third link for this post). What about awk/gawk? Does anyone know which is more common or popular?
| Distributing a script: Should I use /bin/gawk or /usr/bin/gawk for shebang? |
When the current interactive shell is bash, and you run a script with no #!-line, then bash will run the script. The process will show up in the ps ax output as just bash.
$ cat foo.sh
# foo.sh

echo "$BASHPID"
while true; do sleep 1; done

$ ./foo.sh
55411In another terminal:
$ ps -p 55411
PID TT STAT TIME COMMAND
55411 p2 SN+ 0:00.07 bashRelated:Which shell interpreter runs a script with no shebang?The relevant sections form the bash manual:If this execution fails because the file is not in executable format,
and the file is not a directory, it is assumed to be a shell script, a
file containing shell commands. A subshell is spawned to execute it.
This subshell reinitializes itself, so that the effect is as if a new
shell had been invoked to handle the script, with the exception that
the locations of commands remembered by the parent (see hash below
under SHELL BUILTIN COMMANDS) are retained by the child.
If the program is a file beginning with #!, the remainder of the first
line specifies an interpreter for the program. The shell executes the
specified interpreter on operating systems that do not handle this
executable format themselves. [...]This means that running ./foo.sh on the command line, when foo.sh does not have a #!-line, is the same as running the commands in the file in a subshell, i.e. as
$ ( echo "$BASHPID"; while true; do sleep 1; done )

With a proper #!-line pointing to e.g. /bin/bash, it is as doing
$ /bin/bash foo.sh |
When I run this script, intended to run until killed...
# foo.sh
while true; do sleep 1; done

...I'm not able to find it using ps ax:
>./foo.sh

// In a separate shell:
>ps ax | grep foo.sh
21110 pts/3 S+ 0:00 grep --color=auto foo.sh

...but if I just add the common "#!" header to the script...
#! /usr/bin/bash
# foo.sh
while true; do sleep 1; done

...then the script becomes findable by the same ps command...
>./foo.sh

// In a separate shell:
>ps ax | grep foo.sh
21319 pts/43 S+ 0:00 /usr/bin/bash ./foo.sh
21324 pts/3 S+ 0:00 grep --color=auto foo.sh

Why is this so?
This may be a related question: I thought "#" was just a comment prefix, and if so "#! /usr/bin/bash" is itself nothing more than a comment. But does "#!" carry some significance greater than as just a comment?
| Why doesn't "ps ax" find a running bash script without the "#!" header? |
Even though your project may now be solely consisting of 50 Bash scripts, it will sooner or later start accumulating scripts written in other languages such as Perl or Python (for the benefits that these scripting languages have that Bash does not have).
Without a proper #!-line in each script, it would be extremely difficult to use the various scripts without also knowing what interpreter to use. It doesn't matter if every single script is executed from other scripts, this only moves the difficulty from the end users to the developers. Neither of these two groups of people should need to know what language a script was written in to be able to use it.
Shell scripts executed without a #!-line and without an explicit interpreter are executed in different ways depending on what shell invokes them (see e.g. the question Which shell interpreter runs a script with no shebang? and especially Stéphane's answer), which is not what you want in a production environment (you want consistent behaviour, and possibly even portability).
Scripts executed with an explicit interpreter will be run by that interpreter regardless of what the #!-line says. This will cause problems further down the line if you decide to re-implement, say, a Bash script in Python or any other language.
You should spend those extra keystrokes and always add a #!-line to each and every script.

In some environments, there are multi-paragraph boilerplate legal texts in each script in each project. Be very happy that it's only a #!-line that feels "redundant" in your project.
|
I have a project comprised of about 20 small .sh files. I call them "small" because generally, no file has more than 20 lines of code. I took a modular approach because it keeps me loyal to the Unix philosophy and makes it easier for me to maintain the project.
In the start of each .sh file, I put #!/bin/bash.
Simply put, I understand script declarations have two purposes:

They help the user recall what shell is needed to execute the file (say, after some years without using the file).
They ensure that the script runs only with a certain shell (Bash in that case) to prevent unexpected behavior in case another shell was used.

When a project starts to grow from say 5 files to 20 files, or from 20 files to 50 files (not this case, but just to demonstrate), we have 20 or 50 lines of script declarations. I admit, even though it might be funny to some, it feels a bit redundant to me to use 20 or 50 instead of say just 1 per project (maybe in the main file of the project).
Is there a way to avoid this alleged redundancy of 20 or 50 or a much greater number of lines of script declarations by using some "global" script declaration, in some main file?
| Too many shebang (script declaration) lines --- any way to reduce their amount? |
From AskUbuntu, answer by Gilles:

If you see the error ": No such file or directory" (with nothing before the colon), it means that your shebang line has a carriage return at the end, presumably because it was edited under Windows (which uses CR,LF as a line separator). The CR character causes the cursor to move back to the beginning of the line after the shell prints the beginning of the message, and so you only get to see the part after CR, which ends the interpreter string that's part of the error message.
Remove the CR: the shebang line needs to have a Unix line ending (linefeed only). Python itself allows CRLF line endings, so the CR characters on other lines don't hurt. Shell scripts on the other hand must be free of CR characters.
To remove the Windows line endings, you can use dos2unix:
sudo dos2unix /usr/local/bin/casperjs
or sed:
sudo sed -i -e 's/\r$//' /usr/local/bin/casperjs
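If you want to confirm the stray carriage return is actually there before the fix, and gone after it, od -c makes it visible (a quick sketch using a throwaway file; the path is arbitrary):

```shell
# Create a demo file with a Windows-style (CRLF) shebang line.
printf '#!/usr/bin/env python\r\n' > /tmp/crlf_demo

# Before the fix: od -c shows a \r right before the \n.
head -n 1 /tmp/crlf_demo | od -c

# Apply the same sed fix as above, then look again: the \r is gone.
sed -i -e 's/\r$//' /tmp/crlf_demo
head -n 1 /tmp/crlf_demo | od -c
```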
If you must edit scripts under Windows, use an editor that copes with Unix line endings (i.e. something less brain-dead than Notepad) and make sure that it's configured to write Unix line endings (i.e. LF only) when editing a Unix file. |
I'm trying to run a python script, on a headless Raspberry PI using winSCP and get the following error message:
Command '"./areadetect_movie_21.py"'
failed with return code 127 and error message
/usr/bin/env: python
: No such file or directory.

When I try and run from terminal, I get:

: No such file or directory.

I try a similar python script, in the same directory, with the same python shebang, the same permissions and using the same user pi, and it works.
I also do a ls and I can see the file, so I don't know why it will not run.
| No such file or directory but I can see it! |
You have a space instead of a forward slash here:

#! /bin bash

Should be:
#! /bin/bashor simply
#!/bin/bash(the first space is optional).
The shebang (#!) should be followed by the path to an executable, which may be followed by one argument, e.g.,
#!/usr/bin/env sh

In this case /usr/bin/env is the executable; see man env for details.
Just /bin refers to a directory.
|
I'm on a kali linux 64 bit.
I have created a python script which takes 2 arguments to start. I don't want to type out every time the exact same paths or search in the history of the commands I used in terminal. So I decided to create a simple script which calls the python script with its arguments.
#! /bin bash

python CreateDB.py ./WtfPath ./NoWtfPath/NewSystem/

It is the exact same command I would use in terminal. However, I get an error message when I try to execute the script file.

bash: ./wtf.sh: /bin: bad interpreter: Permission denied

wtf.sh has executable rights.
What is wrong?
| Bash Script Permission denied & Bad Interpreter |
No. By the time a shebang comes into play, you have already lost. A shebang is applied when a process is exec()'d and typically that happens after forking, so you're already in a separate process. It's not the shell that reads the shebang, it's the kernel.
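You can see the fork/exec boundary for yourself. In this sketch (the file name is made up), a variable assigned by an executed script never reaches the calling shell, while sourcing the very same file changes it:

```shell
# A tiny script that only assigns a variable.
cat > /tmp/setvar.sh <<'EOF'
#!/bin/bash
MYVAR=from_script
EOF
chmod +x /tmp/setvar.sh

MYVAR=original
/tmp/setvar.sh                  # executed: the shebang runs it in a child process
echo "after executing: $MYVAR"  # still "original"

. /tmp/setvar.sh                # sourced: read by the current shell; the #! line is a mere comment
echo "after sourcing: $MYVAR"   # now "from_script"
```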
|
I have a growing collection of scripts which should be sourced, not run. At the moment they have the shebang
#! /bin/cat

but I would prefer to have them be sourced into bash when run, in the same way as if I had done

$ . /path/to/script.sh

or

$ source /path/to/script.sh

But . and source are bash builtins, so is an alternative shebang line for such scripts possible?
| Can I use a shebang to have a file source itself into current bash environment? |
Typically, shebang refers to just the #! (! is typically called "bang", and it looks like "she" is a corruption of either "SHArp" or "haSH" for #) -- the whole line is called a shebang line.
It does intentionally start with a comment character for backwards-compatibility with things that don't know how to handle it; the ! is presumably just to distinguish it from a random comment starting the file, so that a file that begins with # this is my script! doesn't try to run the this is my script! interpreter.
|
Why does the "she-bang" begin with a #!, like #!/bin/bash? I have always accepted that this how it is done, but is there a reason behind it?
Why start with #; isn't that usually a comment? Or is it the point that it should be comment?
| Why does the "she-bang" begin with a "#!"? |
Some Unices, most notably macOS (and, up until 2005, FreeBSD), will allow for this, while Linux will not, but...
If one uses the env utility from a recent release of the GNU coreutils package (8.30+), it has a non-standard -S option that allows for supplying multiple arguments in #! lines.
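A minimal sketch of what -S buys you (requires GNU coreutils env 8.30 or newer; the file name is arbitrary):

```shell
cat > /tmp/multiarg <<'EOF'
#!/usr/bin/env -S bash -u
echo "env split the shebang into two interpreter arguments"
EOF
chmod +x /tmp/multiarg

# Without -S, env would look for a program literally named "bash -u" and fail;
# with -S it splits the string and effectively runs: bash -u /tmp/multiarg
/tmp/multiarg
```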
The opposite question: Shebang line with `#!/usr/bin/env command --argument` fails on Linux
|
I read in another answer that I'm not able to pass arguments to the interpreter that I'm giving to /usr/bin/env:

Another potential problem is that the #!/usr/bin/env trick doesn't let you pass arguments to the interpreter (other than the name of the script, which is passed implicitly).

However, it looks like I am able to, because awk is breaking when I don't give it the -f flag and it's fixed when I do give it the -f flag, while using /usr/bin/env:
First, without the -f flag:
$ cat wrap_in_quotes
#!/usr/bin/env awk
# wrap each line in quotes
# usage: wrap_in_quotes [ file ... ]
{ print "\""$0"\"" }
$ echo foobar | ./wrap_in_quotes
awk: syntax error at source line 1
context is
>>> . <<< /wrap_in_quotes
awk: bailing out at source line 1

Second, with the -f flag:
$ vim wrap_in_quotes
$ cat wrap_in_quotes
#!/usr/bin/env awk -f
# wrap each line in quotes
# usage: wrap_in_quotes [ file ... ]
{ print "\""$0"\"" }
$ echo foobar | ./wrap_in_quotes
"foobar"

So, if according to the linked answer I'm not able to pass flags to the interpreter, why am I able to pass the -f flag to awk?

I'm running macOS:
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.12.1
BuildVersion: 16B2657 | Why am I able to pass arguments to /usr/bin/env in this case? |
[The following assumes that your unspecified "logging issue" was related to missing environment setup, normally inherited from your profile.]
The -l option tells bash to act as a login shell and read all the various "profile" scripts, from /etc and from your home directory. Bash normally only does this for login sessions (in which bash is invoked as a login shell, e.g. by login or ssh).
Normal scripts have no business reading the profile; they're supposed to run in the environment they were given. That said, you might want to do this for personal scripts, maybe, if they're tightly bound to your environment and you plan to run them outside of a normal session.
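If you want to see exactly what -l adds, you can point HOME at a scratch directory with its own profile (an illustrative sketch; the directory and variable names are made up):

```shell
mkdir -p /tmp/lhome
echo 'FROM_PROFILE=yes' > /tmp/lhome/.bash_profile

# A plain non-interactive bash does not read the profile...
HOME=/tmp/lhome bash -c 'echo "plain: $FROM_PROFILE"'

# ...but with -l it does, so the variable appears.
HOME=/tmp/lhome bash -l -c 'echo "login: $FROM_PROFILE"'
```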
A crontab is one example of running a script outside your session, so yes, go do it!
If the script is purely for the use of the crontab then adding -l to the shebang is fine. If you might use the script other ways then consider fixing the environment problem in the crontab itself:
0 * * * * bash -l hourly.sh |
I recently came up to an easy fix for a crontab logging issue and I am wondering what are the pro's and con's of using this specific fix (running a script with a "login shell flag"), as:
#!/bin/bash -l | What are the pro's and con's in using the "-l" in a script shebang |
The shebang is working and cron has nothing to do with that. When a file is executed, if that file's content begins with #!, the kernel executes the file specified on the #! line and passes it the original file as an argument.
Your problem is that you seem to believe that SHELL in a shell script reflects the shell that is executing the script. This is not the case. In fact, in most contexts, SHELL means the user's preferred interactive shell; it is meant for applications such as terminal emulators to decide which shell to execute. In cron, SHELL is the variable that tells cron what program to use to run the crontab entries (the part of the lines after the time indications).
Shells do not set the SHELL variable unless it is not set when they start.
The fact that SHELL is /bin/sh is very probably irrelevant. Your script has a #!/bin/bash line, so it's executed by bash. If you want to convince yourself, add ps $$ in the script to make ps show information about the shell executing the script.
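A quick way to convince yourself without involving cron (sketch; the file name is made up): SHELL is just inherited from the environment, while BASH_VERSION is only set when bash itself is doing the interpreting.

```shell
cat > /tmp/whichshell.sh <<'EOF'
#!/bin/bash
echo "SHELL=$SHELL"                               # merely inherited, says nothing about the interpreter
echo "interpreted by bash: ${BASH_VERSION:+yes}"  # set only if bash is actually running this
EOF
chmod +x /tmp/whichshell.sh

# Pass a misleading SHELL, the way cron does:
SHELL=/bin/sh /tmp/whichshell.sh
```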
|
I have a script containing:
#!/bin/bash
printenv

When I run it from the command line:
env testscript.sh
bash testscript.sh
sh testscript.sh

every time, it outputs SHELL=/bin/bash. However, when it is run from the cron, it always outputs SHELL=/bin/sh. Why is this? How can I make cron apply the shebang?
I already checked the cron PATH; it does include /bin.
| Shebang does not set SHELL in cron |
The Linux kernel treats everything following the first “word” in the shebang line as a single argument. One solution is to set -o later in the script:
#!/bin/bash

set -o pipefail

echo "Running test"
git diff HEAD^ HEAD -M --summary | grep delete | cut --delimiter=' ' -f 5
|
I have a file named test.sh:
#!/bin/bash -o pipefail
echo "Running test"
git diff HEAD^ HEAD -M --summary |
grep delete |
cut --delimiter=' ' -f 5

When I try to run this script as:

./test.sh

I get:

/bin/bash: line 0: /bin/bash: ./test: invalid option name

I ran cat -v test.sh to check if there are carriage returns or anything, but that doesn't seem to be the case. I can run the script if I just run it as bash test.sh. Grateful for any help, and lmk if I can provide more info!
| Invalid option name error with shebang "#!/bin/bash -o pipefail" in script [duplicate] |
You can use perl:
perl hello

From perl docs:

If the #! line does not contain the word "perl" nor the word "indir", the program named after the #! is executed instead of the Perl interpreter. This is slightly bizarre, but it helps people on machines that don't do #!, because they can tell a program that their SHELL is /usr/bin/perl, and Perl will then dispatch the program to the correct interpreter for them.

(via)
|
Say I have a file hello:
#!/bin/sh

echo "Hello World!"

Provided the executable bit is set on that file, I can execute it by entering its path on the prompt:
$ ./hello
Hello World!

Is there a more explicit equivalent to the above? Something akin to:

$ execute hello

I know I can pass hello as an argument to /bin/sh, but I'm looking for a solution that automatically uses the interpreter specified in the shebang line.
My use case for this is to execute script files that do not have the executable flag set. These files are stored in a git repository, so I would like to avoid setting their executable flag or having to copy them to another location first.
| Equivalent of executing a file (with shebang line) by entering its path? |
If these lines are not the beginning of included shell scripts to be built, i.e. inside a scheme of the form:
cat <<end_of_shell_script >dynamically_built_shell
#!/bin/bash
[...]
end_of_shell_script

Then the repeated construct you found is the result of many copy-pastes of full shell scripts, but without enough care for, and understanding of, the use of this very special comment on line 1 of scripts, starting with #!.
Be careful before using such a shell script (no sudo, no su :) ).
|
I have a script from another person which looks like this (note: it's a single file):
#!/bin/bash

some commands
some commands

#!/bin/bash

some commands
some commands

#!/bin/bash

some commands
some commands

I am wondering: what is the purpose of the second and third shebangs? Are they there by mistake or on purpose?
| Multiple shebangs in a single bash file |
There are a few options:

- Use #!/usr/bin/gsed -f (assuming it is in /usr/bin) as the shebang everywhere, and make sure that your Linux environments symlink this properly;
- Remove the GNUisms;
- Symlink sed to /usr/bin/gsed from a directory that is earlier than /usr/bin in the user's $PATH (possibly dangerous);
- Make a wrapper script that looks something like this:

#!/bin/sh
script=/foo
type gsed >/dev/null 2>&1 && exec gsed -f "$script"
exec sed -f "$script"

Ultimately there either need to be changes to at least one of the environments, or changes to the script itself.
|
I have a GNU sed script I use on Linux; it is installed at /bin/sed and it seems it contains GNUisms. I have collaborators using Mac OS X. They have installed (non-GNU) sed, located at /usr/bin/sed, and using Homebrew (http://mxcl.github.io/homebrew/) can install GNU sed as gsed with the coreutils package, located at /usr/local/bin/gsed.
Currently, the script starts with #!/bin/sed -f. How do I modify it so that it can be run on Mac OS X, when GNU sed is installed as gsed?
Another option would be to remove the GNUisms, but this may be a bit hard, as I do not have a Mac OS X install at hand and cannot ask my collaborators to test intermediate versions.
| How to share a GNU sed script between Linux and Mac OS X |
It seems you have a badly-written shebang line. From the error you're getting:
-bash: /usr/bin/pyAES.py: /usr/bin/python2: bad interpreter: No such file or directory

I'd say you should set the first line of /usr/bin/pyAES.py to

#!/correct/path/to/python

where the /correct/path/to/python can be found from the output of:

type -P python

It's /usr/bin/python (not /usr/bin/python2) on my system.
|
I have downloaded this script named, pyAES.py and put it in a folder name codes, inside a Desktop directory of my Linux,
According to this example,
http://brandon.sternefamily.net/2007/06/aes-tutorial-python-implementation/
When I type,
./pyAES.py -e testfile.txt -o testfile_encrypted.txt

the file pyAES.py should be executed.
but I am getting this error,
pi@raspberrypi ~/Desktop/Codes $ pyAES.py
-bash: pyAES.py: command not found

the output of ls -l command is,
pi@raspberrypi ~/Desktop/Codes $ ls -l
total 16
-rw-r--r-- 1 pi pi 14536 Oct 8 10:44 pyAES.py

Here is the output after chmod +x

pi@raspberrypi ~/Desktop/Codes $ chmod +x pyAES.py
pi@raspberrypi ~/Desktop/Codes $
pi@raspberrypi ~/Desktop/Codes $ pyAES.py
-bash: pyAES.py: command not found
pi@raspberrypi ~/Desktop/Codes $

and the command, chmod +x pyAES.py && ./pyAES.py gives the following error,

-bash: ./pyAES.py: /usr/bin/python2: bad interpreter: No such file or directory

I have also tried moving the file in /usr/bin directory and then executing it,
pi@raspberrypi /usr/bin $ pyAES.py
-bash: /usr/bin/pyAES.py: /usr/bin/python2: bad interpreter: No such file or directory
pi@raspberrypi /usr/bin $

I can see the file is present in /usr/bin directory but it is still giving an error that No such file or directory.
I want to know why the Linux terminal is not executing the python script ?
| Running python script from Linux Terminal |
Since shebang is a Linux kernel feature -- maybe it sets some indicator that this mechanism has been used?

Yes, it does. Linux sets the AT_EXECFN auxiliary vector entry to the path of the original executable. In C, you can do it with char *at_execfn = (char*)getauxval(AT_EXECFN), followed by stat(at_execfn), etc.
Getting it from bash is tricky, though. You can try unpacking the /proc/self/auxv and then looking through /proc/self/mem. Good luck with that ;-)
|
In the Pyenv project, we've had a peculiar problem.
We are substituting python (and python*) with our Bash scripts ("shims") that select a Python executable to run at runtime.
Now, some users wish to use a special selection logic when a Python script is run as path/to/script.py. The problem is, this logic should NOT apply if the script is instead run as <python> path/to/script.py!
Is there a way to reliably distinguish these two cases?
I wasn't able to find anything: depending on the way command line arguments are formulated in the 2nd case, the exact same command line could be executed in both cases:
(the Bash script given is not a real shim, just a demonstration example to showcase what our logic sees and does)
$ cat python3
#!/bin/bash
echo "'$0'"
for a in "$@"; do
echo "'$a'"
done
# need to do the detection here
exec python3 "$@"

$ cat t.py
#!/home/vmuser/python3
import sys
print(sys.argv)

$ $PWD/t.py
'/home/vmuser/python3'
'/home/vmuser/t.py'
['/home/vmuser/t.py']

$ $PWD/python3 $PWD/t.py
'/home/vmuser/python3'
'/home/vmuser/t.py'
['/home/vmuser/t.py']

Since shebang is a Linux kernel feature -- maybe it sets some indicator that this mechanism has been used?

We've considered requiring users to use a special shebang in their Python scripts that they wish to apply the special logic to, but that idea proved unpopular because it makes those scripts unportable.
| Detect if a script is being run via shebang or was specified as a command line argument |
Your kernel was compiled without CONFIG_BINFMT_SCRIPT=y. This setting controls shebang support.
From make menuconfig:
Symbol: BINFMT_SCRIPT [=y]
Type : tristate
Prompt: Kernel support for scripts starting with #!
Location:
(1) -> Executable file formats / Emulations
Defined at fs/Kconfig.binfmt:68

Reconfigure and recompile your kernel. (Technically, it can also be built as a module, but there's no point in doing that for something as fundamental as #! support.)
|
All my Python and Perl scripts are simply NOT interpreted via shebang. Never. But they work as expected when I explicitly call the binary.
I double checked my Perl and Python installations; it is just too strange: the shebang-way execution works very well in the target system chroot on a sane host, but not in the actual running system.
I work on a homemade Linux system which worked just great before this problem appeared. See for yourself:
A test on the 'xscreensaver-text' Perl program, once via shebang then with the interpreter on the CLI:
$ LC_ALL=C LANG=C /usr/bin/xscreensaver-text
/usr/bin/xscreensaver-text: line 23: require: command not found
/usr/bin/xscreensaver-text: line 25: use: command not found
/usr/bin/xscreensaver-text: line 29: BEGIN: command not found
/usr/bin/xscreensaver-text: line 31: use: command not found
/usr/bin/xscreensaver-text: line 32: syntax error near unexpected token `('
/usr/bin/xscreensaver-text: line 32: `use POSIX qw(strftime);'$ LC_ALL=C LANG=C perl /usr/bin/xscreensaver-text
poopy
Linux 3.11.1
Sat Oct 5 23:07:33 2013
up 11:35, 2 users
load average: 0.09, 0.08, 0.06

So this happens for Perl programs, but the same happens with Python scripts. We've been messing with encodings and terminfos and different kernels, still no success. I've even rebuilt my entire system. It works just great in a chroot env, but once I boot it I have this problem.
Here's an strace output:
$ LC_ALL=C LANG=C strace /usr/bin/xscreensaver-text
execve("/usr/bin/xscreensaver-text", ["/usr/bin/xscreensaver-text"], [/* 50 vars */]) = -1 ENOEXEC (Exec format error)
write(2, "strace: exec: Exec format error\n", 32strace: exec: Exec format error
) = 32
exit_group(1) = ?
+++ exited with 1 +++$ LC_ALL=C LANG=C strace perl /usr/bin/xscreensaver-text
execve("/usr/bin/perl", ["perl", "/usr/bin/xscreensaver-text"], [/* 50 vars */]) = 0
brk(0) = 0x601000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff12e312000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=240674, ...}) = 0
mmap(NULL, 240674, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff12e2d7000
close(3) = 0
open("/usr/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \37\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1868472, ...}) = 0
mmap(NULL, 3981888, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff12dd24000
mprotect(0x7ff12dee6000, 2097152, PROT_NONE) = 0
mmap(0x7ff12e0e6000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1c2000) = 0x7ff12e0e6000
mmap(0x7ff12e0ec000, 16960, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff12e0ec000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff12e2d6000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff12e2d4000
arch_prctl(ARCH_SET_FS, 0x7ff12e2d4740) = 0
mprotect(0x7ff12e0e6000, 16384, PROT_READ) = 0
mprotect(0x7ff12e313000, 4096, PROT_READ) = 0
munmap(0x7ff12e2d7000, 240674) = 0
brk(0) = 0x601000
brk(0x622000) = 0x622000
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7ff12e2d4a10) = 5680
wait4(5680, poopy
Linux 3.11.1
Sat Oct 5 23:11:49 2013
up 11:39, 2 users
load average: 0.08, 0.12, 0.08[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 5680
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5680, si_status=0, si_utime=2, si_stime=0} ---
exit_group(0) = ?
+++ exited with 0 +++

Contents (only the beginning) of the script:
$ cat /usr/bin/xscreensaver-text
#!/usr/bin/perl -w
# Copyright © 2005-2013 Jamie Zawinski
#
#
# Created: 19-Mar-2005.

require 5;
#use diagnostics; # Fails on some MacOS 10.5 systems
use strict;

# Some Linux systems don't install LWP by default!
# Only error out if we're actually loading a URL instead of local data.
BEGIN { eval 'use LWP::UserAgent;' }
--*snip*-- | Perl and Python wrongly interpreted via shebang on Linux |
Nothing is wrong with your system, you're just using the wrong path to env. On Linux systems, at least, the env binary is normally in /usr/bin and not /bin:
$ type env
env is /usr/bin/env

So, your script is telling your system to use /bin/env, which doesn't exist, and that's why you're getting that error. Simply change to the right shebang and you should be fine:
#!/usr/bin/env bash |
I am on Linux Mint 19.03.
I have a setup shell script file, setup.sh. When I run ./setup.sh
muyustan@mint:~/Downloads/quartusExtracted$ ./setup.sh
bash: ./setup.sh: /bin/env: bad interpreter: No such file or directory

The shebang in setup.sh:

#!/bin/env bash

My understanding of these things is very narrow, since I am pretty new to the Linux world.
I knew that using /bin/env bash instead of giving the exact bash path was something like "search in the environment variables and try to find bash". When I look in the /bin directory for env, I see that there is no such file:
muyustan@mint:/usr/bin$ ll /bin | grep "env"
lrwxrwxrwx 1 root root 6 Mar 21 14:35 open -> openvt*
-rwxr-xr-x 1 root root 18872 Jan 22 2018 openvt*

Also,
muyustan@mint:~/Downloads/quartusExtracted$ which bash
/bin/bash

So, I assume that changing the shebang in setup.sh to #!/bin/bash will solve the problem (I haven't tried); however, this does not seem very intuitive, because if so then I ask myself,
" Did the developers of this application(Quartus 13.1) make a mistake? ", which leads me to think that something is wrong with my system.
So, the question is: why is this the situation?
Thanks.
| /bin/env : bad interpreter |
It should be
#!/usr/local/bin/osh

if your shell is in /usr/local/bin. If /usr/local/bin is on your PATH then

#!/usr/bin/env osh

should work too... (In fact that's the only point of env here — it will find osh wherever it's installed, as long as it's on the PATH, so it doesn't matter if it's in /usr/local/bin, /usr/bin etc. The first one found wins if multiple copies are installed.)
If you use the env variant, just make sure there isn't another osh binary on your PATH: as schily points out, the Thompson shell could be installed as osh (although that strikes me as rather unlikely, but since you're researching shells it might happen), and there's also the OCaml build system's debugging shell (typically in an omake package).
|
I'm writing a script to test a shell project to see that my custom shell has correct output.
str="HELLO"
echo $str
echo "*** YOU SHOULD SEE HELLO ABOVE ***"
ls *
echo "*** YOU SHOULD SEE THE OUTPUT FROM ls * ABOVE ***"
who|awk '{print $1}'
echo "*** YOU SHOULD SEE THE OUTPUT FROM who ABOVE ***"
echo $((1+2*3-4/5+6*7-8/9)))
echo "*** YOU SHOULD SEE THE NUMBER 49 ABOVE ***"This is the output from the script
$ ./shell < ../tst_exp.sh
'PATH' is set to /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin.
HELLO
*** YOU SHOULD SEE HELLO ABOVE ***
CMakeCache.txt hello_2.7-0ubuntu1_amd64.changes hello_2.7-0ubuntu1.diff.gz hello_2.7.orig.tar.gz jeff not script.sh testresult.txt
cmake_install.cmake hello_2.7-0ubuntu1_amd64.deb hello_2.7-0ubuntu1.dsc hello-2.7.tar.gz Makefile osh shell

build-area:
hello_2.7-0ubuntu1_amd64.build hello_2.7-0ubuntu1_amd64.deb hello_2.7-0ubuntu1.dsc
hello_2.7-0ubuntu1_amd64.changes hello_2.7-0ubuntu1.diff.gz hello_2.7.orig.tar.gz

.bzr:
branch branch-format branch-lock checkout README repository

CMakeFiles:
3.5.1 CMakeDirectoryInformation.cmake CMakeRuleHashes.txt feature_tests.bin feature_tests.cxx Makefile.cmake shell.dir TargetDirectories.txt
cmake.check_cache CMakeOutput.log CMakeTmp feature_tests.c Makefile2 progress.marks shellparser.dir

hello:
ABOUT-NLS AUTHORS ChangeLog config.in configure.ac COPYING doc INSTALL Makefile.in NEWS README tests TODO
aclocal.m4 build-aux ChangeLog.O configure contrib debian gnulib Makefile.am man po src THANKS

hello-2.7:
ABOUT-NLS AUTHORS ChangeLog config.h config.log configure contrib doc INSTALL Makefile.am man osh README stamp-h1 THANKS
aclocal.m4 build-aux ChangeLog.O config.in config.status configure.ac COPYING gnulib Makefile Makefile.in NEWS po src tests TODO
*** YOU SHOULD SEE THE OUTPUT FROM ls * ABOVE ***
[21420]
dac
dac
*** YOU SHOULD SEE THE OUTPUT FROM who ABOVE ***
49
*** YOU SHOULD SEE THE NUMBER 49 ABOVE ***

The output is expected, but if I add a shebang, which is recommended, then strange things happen. This is my own shell that I'm testing, so what should the shebang be for scripts that should be run with my shell, which I named osh and can put in /usr/local/bin or install like another shell?
I suppose the shebang should not be #!/usr/bin/env bash which is what my editor (CLion) recommends, and it can't be #!/usr/bin/env osh because that is not installed and gives strange results.
How should I handle the shebang when writing my own shell?
| How should I handle the shebang when writing my own shell? |
The #!/usr/bin/env bash results in the script using whatever bash is found first in $PATH.
While it is common for bash to be located at /bin/bash. There are cases where it is not (different operating systems). Another potential use is when there are multiple bash shells installed (newer version at an alternate location like /usr/local/bin/bash).
Doing #!/usr/bin/env bash just takes advantage of a behavior of the env utility.
The env utility is normally used for manipulating the environment when calling a program (for example, env -i someprog to wipe the environment clean). However, by providing no arguments other than the program to execute, it results in executing the specified program as found in $PATH.

Note that there are both advantages and disadvantages to doing this.
The advantages are as mentioned earlier, in that it makes the script portable if bash is installed in a different location, or if /bin/bash is too old to support things the script is trying to do.
The disadvantage is that you can get unpredictable behavior. Since you're at the mercy of the user's $PATH, it can result in the script being run with a version of bash that has different behavior than what the script expects.
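That last point is easy to demonstrate with an impostor bash placed early in $PATH (purely illustrative; only do this in a scratch directory):

```shell
mkdir -p /tmp/fakebin
cat > /tmp/fakebin/bash <<'EOF'
#!/bin/sh
echo "this is not the real bash"
EOF
chmod +x /tmp/fakebin/bash

# env resolves "bash" through $PATH, so the impostor wins:
PATH=/tmp/fakebin:$PATH /usr/bin/env bash -c 'echo from the real bash'
```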
|
I notice that with bash scripts, some people use a different
shebang to the one that I'm used to putting at the top of my own.
Can someone simplify the difference between these two? I use the #!/bin/bash one all the time.
#!/bin/bash
#!/usr/bin/env bash | What is the difference in these two bash environments? |
Or, you can have sh take care of it for you:
#!/bin/sh
exec perl -x "$0" "$@"

#!/usr/bin/perl
...

Yes, that's sh and Perl all in one file.
From man perlrun:-x
tells Perl that the program is embedded in a larger chunk of
unrelated text, such as in a mail message. Leading garbage will
be discarded until the first line that starts with "#!" and
contains the string "perl". Any meaningful switches on that line
will be applied.

This approach only assumes the path of sh (which should be the same on any POSIX-compliant OS) and that a non-interactive instance of sh has perl somewhere in its PATH.
As for ensuring the script has the executable bit set, you can always distribute it as a tarball and have your users "right click, extract here" from the GUI. If the tarball contained the script with the executable bit set, the extracted script should have the executable bit set.
|
I'm developing a perl script which is expected to be downloaded by Mac users with very little knowledge of shell, Linux etc., let's say office managers and accountants.
After the downloading the script should be executed just by double-clicking via GUI.
My goal is to make this as painless as possible to non-tech-savvy user.
My doubts are:

- after the downloading, the script won't have the executable bit
- if the perl executable is not at the default location then I should write something instead of #!/usr/bin/perl. What should I write there? Is there any other way except opening a console and typing perl ./script.pl? | run perl script with unknown perl location
#!/usr/bin/make -f is a valid shebang to allow execution of a Makefile. The problem with your Makefile isn’t its shebang, it’s that it uses Windows line-endings; if you fix that, e.g. with
sed -i $'s/\r$//' Makefileyour Makefile will run correctly.
The difference between using make to run such a Makefile, and running it directly, is that in the latter case, because of the Windows line endings, make is invoked as
make -f $'\r'Makefile

This produces the “No such file or directory” error, since there is no file with a name consisting of a single carriage return. When Make is asked to process a file as a Makefile, it tries to produce it or update it if necessary; since the file that Make is looking for here is missing, it tries to create it. This invokes Make’s built-in rules, which is where the C compiler invocation comes from.
|
I have a Makefile, and I want make to run automatically when I double-click it (from the Ubuntu file manager). So, I made this Makefile executable, and added at its top the following shebang line:
#!/usr/bin/make -fWhen I run /usr/bin/make -f Makefile, I get the desired result.
However, when I double-click the Makefile, or just run ./Makefile, I get an error:
: No such file or directory
clang-9 -o .o
clang: error: no input files
make: *** [<builtin>: .o] Error 1What is the correct way to make my Makefile executable?
Here are the entire contents of my makefile:
#!/usr/bin/make -f

# A makefile for building pdf files from the text (odt files) and slides (odp files).
# Author: Erel Segal-Halevi
# Since: 2019-02

SOURCES_ODP=$(shell find . -name '*.odp')
TARGETS_ODP=$(subst .odp,.pdf,$(SOURCES_ODP))
SOURCES_ODT=$(shell find . -name '*.odt')
TARGETS_ODT=$(subst .odt,.pdf,$(SOURCES_ODT))
SOURCES_DOC=$(shell find . -name '*.doc*')
TARGETS_DOC=$(subst .doc,.pdf,$(subst .docx,.pdf,$(SOURCES_DOC)))
SOURCES_ODS=$(shell find . -name '*.ods')
TARGETS_XSLX=$(subst .ods,.xlsx,$(SOURCES_ODS))

all: $(TARGETS_ODP) $(TARGETS_ODT) $(TARGETS_DOC) $(TARGETS_XSLX)
#
-git commit -am "update pdf files"
-git push
echo Done!
sleep 86400

%.pdf: %.odt
#
libreoffice --headless --convert-to pdf $< --outdir $(@D)
-git add $@
-git add $<
%.pdf: %.doc*
#
libreoffice --headless --convert-to pdf $< --outdir $(@D)
-git add $@
-git add $<

%.pdf: %.odp
#
libreoffice --headless --convert-to pdf $< --outdir $(@D)
-git add $@
-git add $<

%.xlsx: %.ods
#
libreoffice --headless --convert-to xlsx $< --outdir $(@D)
-git add $@
-git add $<

clean:
rm -f *.pdf | How to make a Makefile executable? |
I would say yes, it's intrinsically better to use a shebang.
Pro:
If you put all your scripts in your $PATH (maybe /usr/local/bin or ~/bin) and mark them as executable, then you can execute them by name without thinking about which interpreter you need to invoke (bash, Python, Ruby, Perl, etc.).
If you place an executable file named foo with a shebang anywhere in your $PATH, you can simply type foo to execute it.
Con:
You have to type #!/bin/bash at the top and chmod +x the file.
This is near-zero cost for a very convenient return.
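For example (a sketch using a temporary directory in place of ~/bin; names are made up):

```shell
mkdir -p /tmp/mybin
printf '#!/bin/bash\necho "hello from foo"\n' > /tmp/mybin/foo
chmod +x /tmp/mybin/foo

# Put the directory on PATH (normally you'd do this once in ~/.profile):
PATH=/tmp/mybin:$PATH
export PATH

# Now the script runs by name, with no ./ and no interpreter spelled out:
foo
```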
|
I was looking up shebang and wondering why I would use it.
I could execute a bash script using:
bash foo.sh

or

./foo.sh

(with a shebang line in foo.sh)
What are the pros and cons of each and which one should I use by default?
| Is it better to use a shebang line to execute a script? |
If you have GNU grep
grep -rIzl '^#![[:blank:]]*/bin/sh' ./ |
I want to find out all scripts with a specific shebang line. Specifically, I want all files that match the following criteria:

- It's mostly a plain text file (stuff created by gzexe doesn't look very friendly)
- The 1st line contains solely #!/bin/sh or #! /bin/sh (with a space)

I would like to do this with find, sed and grep (file is available).
File names are useless, because some scripts don't have extensions or even have wrong extensions. Also a something.sh may have a shebang line of #!/bin/bash which is also not what I wanted.
Besides, sometimes I would come across a file like this:

#!/bin/sh
blah.blah.blah...
The 1st line is empty and the shebang is located at the 2nd line, which is not what I wanted.
I am able to find shebang lines with find|grep but I don't know how to restrict the match specifically to the 1st line of a file.
Thanks for any help in advance.
| Find all scripts with a given shebang line with find & sed |
Shebang lines are parsed slightly differently on different Unix kernels. On Linux and on modern BSD, the command is followed by a single argument (or none), which can contain spaces. On macOS, the command is followed by zero or more arguments separated by spaces.
So on Linux, when you run ./test-shebang.mjs with the argument foo, this is what happens:

1. The kernel runs /usr/bin/env with the arguments -S zsh -c 'source ~/.zshrc; zx --install $@' -- (note that all of that is the first argument), ./test-shebang.mjs, foo.
2. env runs zsh with the arguments -c, source ~/.zshrc; zx --install $@, --, ./test-shebang.mjs, foo.
3. Zsh runs the script source ~/.zshrc; zx --install $@ with the script name -- and the two positional parameters ./test-shebang.mjs, foo.
4. After .zshrc returns, zsh runs zx with the arguments --install, ./test-shebang.mjs, foo.

On macOS the first step is different (inherited from old versions of FreeBSD), which results in unintended data that is detected in the third step:

1. The kernel runs /usr/bin/env with the arguments -S, zsh, -c, 'source, ~/.zshrc;, zx, --install, $@', --, ./test-shebang.mjs, foo.
2. env runs zsh with the arguments -c, 'source, ~/.zshrc;, zx, --install, $@', --, ./test-shebang.mjs, foo.
3. Zsh runs the script 'source with the script name ~/.zshrc; and the positional parameters zx, --install, $@', --, ./test-shebang.mjs, foo.
4. Zsh complains that 'source is not a syntactically correct script.

Unfortunately, env -S on Linux and FreeBSD compensates for their shebang parsing weirdness, but macOS mixes FreeBSD's env utility with a different shebang parsing, so env -S doesn't work as intended (or in any particularly useful way) on macOS.
The portable solution for complex shebangs is to write a polyglot with a #!/bin/sh shebang, and a second line that contains shell code written specially to be a no-op in whatever language the script is in. For example, here's an sh+JavaScript polyglot (assuming a JS variant that treats shebang lines as comments) that does the same thing as your shebang line.
#!/bin/sh
eval : '; exec zsh -c "source ~/.zshrc; zx --install \$@" -- "$0" "$@"';
console.log("Javascript code starts here");

In sh, the second line is the built-in command eval with the two arguments : and ; exec zsh -c "source ~/.zshrc; zx --install \$@" -- "$0" "$@". This causes sh to run : ; exec zsh -c "source ~/.zshrc; zx --install \$@" -- "$0" "$@". : is a no-op command. The exec built-in causes sh to replace itself by zsh, so sh stops parsing the file after the second line.
In JavaScript, the second line is a string literal (which has no effect), with the label eval (which is useless but harmless).

Note that what you've put in the script is really weird and unlikely to be helpful. .zshrc is intended for interactive customizations and is likely to cause weird effects when used in a non-interactive shell. .zshrc should not set environment variables since these variables would not be available in non-terminal programs and would be reset if you run a nested zsh; you should set environment variables in .zprofile instead.
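To see the sh side of such a polyglot in action without zsh or zx installed, one can substitute a plain echo for the exec'd command (the file name and the echoed text are made up for this demo):

```shell
cat > /tmp/poly.demo <<'EOF'
#!/bin/sh
eval : '; exec echo "shell branch ran with: $0"';
console.log("Javascript code starts here");
EOF
sh /tmp/poly.demo
# prints: shell branch ran with: /tmp/poly.demo
```

sh never reaches the JavaScript line, because exec on line two replaces the shell.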
|
I have the following script in an executable file test-shebang.mjs and I wanted to use zx to run my script but have my ~/.zshrc be sourced before that:
#!/usr/bin/env -S zsh -c 'source ~/.zshrc; zx --install $@' --

console.log("work pls")

./test-shebang.mjs works fine in Ubuntu:
❯ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

❯ zsh --version
zsh 5.8.1 (x86_64-ubuntu-linux-gnu)But when I copy over the same script in macOS, I get this error:
❯ ./test-shebang.mjs
zsh:1: unmatched '

❯ zsh --version
zsh 5.9 (x86_64-apple-darwin23.0)

❯ sw_vers
ProductName: macOS
ProductVersion: 14.4
BuildVersion: 23E214Why does this happen?
I tried with bash as well but ran into errors there too. FWIW, I'm doing this roundabout way of doing things because I've installed zx via pnpm, which itself is installed via brew, and I would prefer not to set PATH, which would just make the script longer.
| Why does env -S with quoted strings in the shebang line work fine in Ubuntu but not in macOS? |
A path-less shebang assumes that the command in the shebang is in the current directory, in the general case. More generically, a non-absolute shebang is interpreted relative to the current directory of the process executing the script.
Path-less shebangs where the command isn’t in the current directory work only when the script is started from certain shells (Zsh at least, but not Bash), and it works because the shell helps out. When the script is run, execution fails, but the rule then is for the shell to try to run the script if it thinks it is a script; Zsh looks up shebang commands in its path, but that’s not standard.
Scripts with path-less shebang commands won’t work in any other context.
The idiomatic way to write a PATH-based shebang is to use /usr/bin/env, as you mention:
#! /usr/bin/env python3

or, with some versions of env,

#! /usr/bin/env -S python3 --

to avoid problems with paths to the script starting with dashes.
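The -S splitting behaviour can be checked directly from the command line with GNU env (coreutils ≥ 8.30) or FreeBSD env — a sketch:

```shell
# env receives a single string; -S makes it split on blanks before executing,
# so this runs: printf '%s\n' hello
env -S 'printf %s\\n hello'
# prints: hello
```

Without -S, env would try to execute a program literally named "printf %s\n hello" and fail.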
|
#!python3
print("Hello")

I find that this code works fine in my terminal. But everyone does #!/path/to/file or #!/usr/bin/env command.
Is there any reason to avoid using #!command in shebang lines?
| Why don't we use #!command for the shebang line? |
If the user has no read permission on an executable script, then trying to run it will fail, unless she has the CAP_DAC_OVERRIDE capability (eg. she's root):
$ cat > yup; chmod 100 yup
#! /bin/sh
echo yup
^D
$ ./yup
/bin/sh: 0: Can't open ./yupThe interpreter (whether failing or successful) will always run as the current user, ignoring any setuid bits or setcap extended attributes of the script.
Executable scripts are different from binaries in the fact that the interpreter should be able to open and read in order to run them. However, notice that they're simply passed as an argument to the interpreter, which may not try to read them at all, but do something completely different:
$ cat > interp; chmod 755 interp
#! /bin/sh
printf 'you said %s\n' "$1"
^D
$ cat > script; chmod 100 script
#! ./interp
nothing to see here
^D
$ ./script
you said ./scriptOf course, the interpreter itself may be a setuid or cap_dac_override=ep-setcap binary (or pass down the script's path as an argument to such a binary), in which case it will run with elevated privileges and could ignore any file permissions.
Unreadable setuid scripts on Linux via binfmt_misc
On Linux you can bypass all the restrictions on executable scripts (and wreck your system ;-)) by using the binfmt_misc module:
As root:
# echo ':interp-test:M::#! ./interp::./interp:C' \
> /proc/sys/fs/binfmt_misc/register

# cat > /tmp/script <<'EOT'; chmod 4001 /tmp/script # just exec + setuid
#! ./interp
id -u
EOT

As an ordinary user:
$ echo 'int main(void){ dup2(getauxval(AT_EXECFD), 0); execl("/bin/sh", "sh", "-p", (void*)0); }' |
cc -include sys/auxv.h -include unistd.h -x c - -o ./interp
$ /tmp/script
0

Yuppie!
More information in Documentation/admin-guide/binfmt-misc.rst in the kernel source.
The -p option may cause an error with some shells (where it could be simply dropped), but is needed with newer versions of dash and bash in order to prevent them from dropping privileges even if not asked for.
|
If the current user only has execute (--x) permissions on a file, under which user does the interpreter (specified by #!/path/to/interpreter at the beginning of the file) run?
It couldn't be the current user, because he doesn't have permission to read the file. It couldn't be root, because then arbitrary code included in the interpreter would gain root access.
As which user, then, does the interpreter process run?
Edit: I think my question assumes that the file has already been read enough to know which interpreter it specifies, when in reality it wouldn't get that far. The current shell (usually b/a/sh) interpreting the command to execute the target file would attempt to read it, and fail.
| Who runs the interpreter for files that are execute-only? |
To find out, I created two shell files. Each starts with a shebang line and ends with the sole command date. long.sh has 10,000 comment lines while short.sh has none. Here are the results:
$ time short.sh
Wed Nov 12 18:06:02 PST 2014

real 0m0.007s
user 0m0.000s
sys 0m0.004s

$ time long.sh
Wed Nov 12 18:06:05 PST 2014

real 0m0.013s
user 0m0.004s
sys 0m0.004s
Let's get more extreme. I created very_long.sh with 1 million comment lines:
$ time very_long.sh
Wed Nov 12 18:14:45 PST 2014

real 0m1.019s
user 0m0.928s
sys 0m0.088sThis has a noticeable delay.
Conclusion
10,000 comment lines has a small effect. A million comment lines cause a significant delay.
How to create long.sh and very_long.sh
To create the script long.sh, I used the following awk command:
echo "date" | awk 'BEGIN{print "#!/bin/bash"} {for (i=1;i<=10000;i++) print "#",i} 1' >long.shTo create very_long.sh, I only needed to modify the above code slightly:
echo "date" | awk 'BEGIN{print "#!/bin/bash"} {for (i=1;i<=1000000;i++) print "#",i} 1' >very_long.sh |
A shebang (#!/bin/sh) is placed on the first line of a bash script, and it's usually followed on the second line by a comment describing what action the script performs. What if, for no particular reason, you decided to place the first command far beneath the shebang and the comment by, say, 10000 lines. Would that slow the execution of the script?
| Distance of a command from a shebang? |
#!/bin/sh
my_script(){
{ cat; cat <&3; }>"$0"
} <<SHEBANG 3<<\SCRIPT
#!${SHELL}
SHEBANG
#now all the rest of your script
#goes in here
SCRIPT

my_script

most shells will put all of the contents of here-documents in secure temp files automatically. those that don't use pipes, and those buffers are usually more than enough to accommodate shell-script writes, but they're no sure thing, of course.
and functions are literal strings stored into the shell's memory. doing the above should only be required the one time, and afterwards your script will be interpreted by whatever was in $SHELL at the time you ran it.
|
I'm looking for what to put on my_zsh_script.sh's "shebang line" that would have the same effect, portably, as
$SHELL my_zsh_script.sh

IOW, I'm looking for the valid equivalent of
#!$SHELLor
#!/usr/bin/env $SHELL

(In some systems, my value for $SHELL is a version of zsh that, under some circumstances, differs from what #!/usr/bin/env zsh resolves to.)
I suppose that I can always arrange to have my_zsh_script.sh custom-built, with the right shebang line hard-coded in, for each host I may want to run it on. I'm hoping to avoid this scenario.
| Shebang line for "run with $SHELL" |
Just realized that the following environment variable does it all: $_
When launched using <myscript>, its value is './<myscript>'
When launched using <mypgm> <myscript> its value is the full path to <mypgm>.
That simple, in my case:
#!/bin/bash

how_called=$_

if [[ "X$how_called" == X$0 || "X$how_called" == X$BASH ]]; then
# ^in this case, if the login shell is not bash
shebang=0
else
shebang=1
fi

bn=$(basename $0)

A bit later (for my purpose):
if (( shebang == 1 )) || [[ ! -z $1 && "X$1" != X-* && "X$1" == X*\.${bn:0:3} && -x $1 ]]; then
# ^ shebang: first argument is the script file
# ^ or not shebang: first argument **may** be a script file name
# ^ ensure that this is a script by script extension
# (otherwise just use the more verbose but standard --script=...)

shebang_fn="$1"
shift 1
set -- --script="$shebang_fn" "$@" # fall back on standard way.
fi(I know that I'm flipping the table a bit here, and that we still have to ensure that this is a portable solution).
|
I want to use a program in the shebang, so I create a script named <myscript> with:
#!<mypgm>

I also want to be able to run <mypgm> directly from the command prompt.
<mypgm> args...

So far, no issue.
I want to be able to run <myscript> from the command prompt with arguments.
<myscript> blabla

In turn, the shebang makes <mypgm> be called with the following arguments:
<mypgm> <myscript> blabla

Now, I need to know when <mypgm> <myscript> blabla is called using the shebang, or not:
myscript blabla # uses the shebang
-or-
<mypgm> myscript blabla # directly in the command prompt.I looked at the environment variables (edit: <=== wrong assertion (¬,¬”) ), at the process table (parent process too) but didn't find any way to make a difference.
The only thing I found so far is:
grep nonvoluntary_ctxt_switches /proc/$$/statusWhen this line is just after the shebang, the value is often 2 (sometimes 3) when called through the shebang, and 1 (sometimes 2) with the direct call. Being unstable and dependent on process scheduling (the number of times the process was taken off from its CPUs), I am wondering if anybody here might have a better solution.
| shebang or not shebang |
You could use the generic binfmt-misc kernel module that handles which interpreter is used when an executable file is run. It is typically used to allow you to run foreign architecture files without needing to prefix them with qemu or wine, but can be used to recognise any magic characters sequence in a file header, or even a given filename extension, like *.xslt. See the kernel documentation.
As an example, if you have a file demo.xslt that starts with the characters
<xsl:stylesheet version=...

you can ask the module to recognise the string <xsl:stylesheet at offset 0 in the file and run /usr/bin/xsltproc by doing as root
colon=$(printf '\\x%02x' \':) # \x3a
echo ":myxsltscript:M::<xsl${colon}stylesheet::/usr/bin/xsltproc:" >/etc/binfmt.d/myxslt.conf
cat /etc/binfmt.d/myxslt.conf >/proc/sys/fs/binfmt_misc/registerYou don't need to go via the /etc file unless you want the setting to be preserved over a reboot. If you don't have the /proc file, you will need to mount it first:
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_miscNow, if you chmod +x demo.xslt you can run demo.xslt with any args and it will run xsltproc with the filename demo.xslt provided as an extra first argument.
To undo the setup, use
echo -1 >/proc/sys/fs/binfmt_misc/myxsltscript |
I have (foolishly?) written a couple of moderately general-purpose xslt
scripts.
I'd quite like to turn these into executables that read an xml document from standard in or similar.
The way you do this with other languages is to use a shebang.
Is there an easy / standard way to do this with xsltproc and friends? Sure I could hack up a wrapper around xsltproc that pulls off the first comment line... but if there is something approximating a standard this would be nicer to use.
| xslt shbang: Using xslt from the command line |
Linux (like many other Unix variants) only supports passing a single argument to the interpreter of a script. (The interpreter is the program on the shebang line.) A script starting with #!/usr/bin/sudo -u bob -- /bin/bash is executed by calling /usr/bin/sudo with the arguments -u bob -- /bin/bash and /home/alice/script.sh.
One solution is to use a wrapper script: make /home/alice/script.sh contain
#!/bin/sh
exec sudo -u bob /home/alice/script.realand put the code in /home/alice.script.real starting with #!/bin/bash and make the sudo rule refer to /home/alice.script.real.
Another solution is to make the script reexecute itself. You need to be careful to detect the desirable condition properly, otherwise you risk creating an infinite loop.
#!/bin/bash
if ((EUID != 123)); then
exec sudo -u \#123 /home/alice/script.sh
fi(where 123 is the user ID of bob)
A simple solution is to tell people to run sudo -u bob /home/alice/script.sh instead of running the script directly. You can provide shell aliases, .desktop files, etc.
|
I'd like to be able to run a script as another user, and only as that user.
The way I currently have this set up is to have
alice ALL = (bob) NOPASSWD: /home/alice/script.shin the sudoers file and
alice@foo:~$ ls script.sh
-rwxr-xr-x 1 root root ..... script.sh

alice@foo:~$ lsattr script.sh
----i----------- script.sh

alice@foo:~$ head -1 script.sh
#!/bin/bash

alice@foo:~$ sudo -u bob ./script.sh
okIs there a way to have the shebang line be something like
#!/usr/bin/sudo -u bob -- /bin/bash

so that alice could just run
alice@foo:~$ ./script.sh
ok

?
If I try this I simply get the error message
sudo: unknown user: blog -- /bin/bash
sudo: unable to initialize policy plugin | `sudo -u user` in shebang line |
Most systems accept only up to one argument after the path of the interpreter in the shebang line. If you provide more than one, the behaviour depends on the system.
On Linux, all of what's after the interpreter (without leading and trailing space or tab characters) is passed as a single argument. So with a she-bang of:
#! /usr/bin/env FOO=bar bashThe system will call /usr/bin/env with FOO=bar bash and /path/to/the/script as arguments. Then env will run the script again with FOO=bar bash in its environment, and not bash with /path/to/the/script as argument (and FOO=bar in the environment), causing an infinite loop.
Some implementations of env, including FreeBSD's and recent versions of GNU env, can be told to do the splitting themselves with -S to work around that limitation.
In
#! /usr/bin/env -S FOO=bar bashA "-S FOO=bar bash" single argument will still be passed to env, but env will split the "FOO=bar bash" argument of that -S option and will behave as if called with FOO=bar and bash as separate arguments.
See the GNU env manual or FreeBSD env manual for details as to how the splitting is done.
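For completeness, here is the working -S variant of the script as a runnable sketch (assumes GNU coreutils ≥ 8.30 or FreeBSD env; the /tmp path is made up for the demo):

```shell
cat > /tmp/foo-s.sh <<'EOF'
#!/usr/bin/env -S FOO=bar bash
if [ -z "$FOO" ]
then
    echo "No FOO"
else
    echo "$FOO"
fi
EOF
chmod +x /tmp/foo-s.sh
/tmp/foo-s.sh
# prints: bar
```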
|
I'm trying to figure out exactly what the semantics of the shebang are.
I can write a script like this:
#!/usr/bin/env bash

if [ -z "$FOO" ]
then
echo "No FOO"
else
echo "$FOO"
fi

without $FOO in my environment, and run it like ./foo.sh, bash foo.sh, env bash foo.sh, etc. and it will print "No FOO" as expected.
I can of course run it like FOO=bar ./foo.sh and it will print bar.
The env man page gives the invocation as:
env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]

And I can use it as such:
$ env FOO=bar bash foo.sh
barHowever, if I try to use that syntax in a shebang:
#!/usr/bin/env FOO=bar bash

if [ -z "$FOO" ]
then
echo "No FOO"
else
echo "$FOO"
fi

then ./foo.sh hangs indefinitely and does not execute.
Can someone explain this to me? I assumed that when a shebang is encountered, it just copies the command, appends the path to the script to the end of the argument list, and executes it, but this behavior suggests otherwise.
| #!/usr/bin/env hangs with NAME=VALUE |
For the sake of the challenge, it could be done with the FreeBSD env or with GNU env >= 8.30 (already assumed by the OP) in a shebang:
#! /usr/bin/env -S sh -c 'exec awk -f "$0" -- "$@"'
BEGIN { for(i = 1; i < ARGC; i++) print ARGV[i] }

./myscript -h 1 2 3
-h
1
2
3It doesn't mean that it's a good idea, though.
You could try this, instead:
#! /bin/sh
BEGIN { 2>"/dev/null"
exec awk -f "$0" "--" "$@"
}
BEGIN { for(i = 1; i < ARGC; i++) print ARGV[i] }This assumes that you don't have a BEGIN command in your PATH.
|
Can I get arguments that happen to be AWK options passed directly to a pure AWK script?
Example script:
#!/usr/bin/env -S awk -f
BEGIN { if (ARGV[1] == "-h") print "whoop" }I want ./myscript -h to print whoop. But AWK gets the -h first and prints its usage instead.
Running ./myscript -- -h works but I can't get -- working in the shebang because of the -f.
I know I could use a shell script with AWK in instead.
| Pass options to AWK script bypassing AWK |
On Linux, you could use pgrep to get PIDs of likely suspects, and inspect the first argument of those PIDs (available in /proc/$PID/cmdline). /proc/$PID/cmdline has all the arguments (including argument 0) of the process, separated ASCII NUL (\0).
Something like:
pgrep bash | xargs -I {} sed -nz '2{p; q}' /proc/{}/cmdline | grep -Fqzx "$0"

This assumes your sed and grep support null-separated lines. The sed prints the second line of the respective cmdline files (argument 1), which would be the script name. grep then looks for exact matches of your script name. You could do the entire match in sed, but I don't feel like juggling quotes.
This will fail if you call your script by different paths:
/home/user/myscript
cd /home; usr/myscript
myscriptHowever, it should be safe against whitespace in script names.
|
I have tried migrating the shebang for my bash scripts from #!/bin/bash to #!/usr/bin/env bash, and some of them were broken because they relied on this code that checks for existing instances of themselves runnning, and which works only with #!/bin/bash:
$ pidof -x myscript -o %PPID

What I would like to know is how I can reliably check if a script called by env is running, since most solutions I have tried involve some dirty regex, and still end up being unreliable:
$ pgrep -f '(^|/)myscript(\s|$)'

Considering the path to my script is /home/user/myscript, the above code may return the PID for the following undesired commands:
editor /home/user/myscript
bash /tmp/myscript
bash /home/user/myscript with spaces | See if a script is running when using #!/usr/bin/env |
MacOS still retains the old FreeBSD behaviour from before 2005.
In 2005, there was a major change in the way that the FreeBSD kernel handled #! at the start of an executable file passed to execve(), to bring it more into line with some other operating system kernels, including Linux and the NetBSD kernel.
Commentary in the NetBSD kernel source code tries to paint this as a universal:
* collect the shell argument. everything after the shell name
* is passed as ONE argument; that's the correct (historical)
* behaviour.— kern/exec-script.c. NetBSD. lines 189 et seq..
It actually is not.
Sven Mascheck did some testing about a decade ago and there are four basic behaviours, the AT&T Unix System 5 one having as much claim to being "correct historical" behaviour as the 4.2BSD one has:Ignore the characters (before 4.2BSD and AT&T Unix System 5).
Pass the whole string in a single argument (4.2BSD, NetBSD, Linux and FreeBSD from 2005 onwards).
Split the string up by whitespace and pass it as multiple arguments (FreeBSD before 2005 and MacOS).
Split the string up by whitespace and pass just the first argument (AT&T Unix System 5 and Solaris)

I've only included the operating systems relevant to this answer in parentheses.
M. Mascheck checked a lot more, as did Ahmon Dancy in discussion of FreeBSD Problem Report 16393.
See the further reading for the full lists.
What brought things to a head in FreeBSD in 2005 was that, ironically, FreeBSD wasn't quite as simple as that.
It had had a change introduced that was intended to make things written in popular books about Perl actually work: arguments were skipped after a comment character.
The books had recommended things like:

#!/bin/sh -- # -*- perl -*-

— Larry Wall, Tom Christiansen, Jon Orwant (2000). Programming Perl: 3rd Edition. O'Reilly Media. ISBN 9780596000271. p. 488.
PR 16393 in 2000 was a way of making the kernel handle executable Perl scripts, written in the way that Larry Wall no less had said would work.
However, it broke other stuff and didn't completely work.
There was some back and forth on this.
Finally, in 2005 the mechanism to make Larry Wall et al.'s idea work was moved out of the kernel, which was made to behave compatibly with Linux, NetBSD, and 4.2BSD (rather than Solaris and AT&T Unix System 5) and made the responsibility of sh.
The behaviour since 2005 has thus been that the shell gets three arguments, the second argument being the entire tail of the #! line, and invoking your script directly with execve() is effectively the same as invoking:

sh '-eufo pipefail' ./script
It should be fairly obvious why the Almquist shell (which is what sh is on FreeBSD) is thinking that ./script is the option argument for the -o option, and that it is treating the pipefail part as further single-letter options collected behind - (which it hasn't got around to processing yet).
An also obvious alternative is to have set -o pipefail as the first command in the script, as pointed out at https://unix.stackexchange.com/a/533418/5132 for the Bourne Again shell.
This was only added to the FreeBSD Almquist shell in 2019, however, and thus is only available in very recent versions of FreeBSD.
(The Debian Almquist shell has not yet had it added, as of 2020.)
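What -o pipefail actually changes can be demonstrated with bash (used here only for availability; a recent FreeBSD sh behaves the same way):

```shell
# Without pipefail, a pipeline's status is that of its last command
bash -c 'false | true'; echo "without pipefail: exit=$?"
# With pipefail, any failing stage makes the whole pipeline fail
bash -c 'set -o pipefail; false | true'; echo "with pipefail: exit=$?"
# prints:
# without pipefail: exit=0
# with pipefail: exit=1
```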
Further readingPointers to some of the older history: https://unix.stackexchange.com/a/489688/5132
Sven Mascheck. "Test results from various systems". The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours. www.in-ulm.de/~mascheck.
Garance A Drosihn (2005-02-23). Bug in #! processing — One More Time. freebsd-arch mailing list.
ryand (2000-01-27). /bin/sh doesn't strip comments on shebang line. FreeBSD Problem Report 16393.
Garance A Drosihn (2005). Updating note for May 28, 2005: Change in handling of shell-script options. people.freebsd.org/~gad.
sh. BSD General Commands Manual. 2019-02-24. freebsd.org.
Wolfram Schneider (2017-12-12). Get exit status of process that's piped to another: set -o pipefail is missing for /bin/sh. FreeBSD Problem Report 224270.
Ibrahim Ghazal (2020-06-30). Status of set -o pipefail. Debian Almquist shell mailing list. |
I would like to put the shebang #!/bin/sh -eufo pipefail in my script. But there are several strange things:

- The script would fail with that shebang on FreeBSD but not when run on MacOS
- On FreeBSD, the same shebang works when directly executed from the command line (also /bin/sh).

>>> sh -eufo pipefail -c 'echo hi' # this works
hi

>>> cat <<EOF > script
#!/bin/sh -eufo pipefail
echo hi
EOF

>>> chmod +x ./script
>>> ./script # this doesn't work on FreeBSD but works on MacOS
Illegal option -o ./script

>>> cat ./script
#!/bin/sh -eufo pipefail
echo hi

>>> uname -a
FreeBSD 11.3-RELEASE-p7 | FreeBSD shebang error
You can use find and awk to check the first line (your awk has to support FNR, like POSIX awks do) and add some search patterns to find (like excluding hidden folders):
echo "check bash files ..."
find . -not -path '*/\.*' \
-type f -exec awk 'FNR == 1 && /^#!.*sh/{print FILENAME}' {} + | \
while IFS= read -r f; do
echo "checking $f ..."
bash -n "$f"
doneNote: use IFS= to avoid splitting filenames containing one of the $IFS, for example: 'a b'
|
I have a project with some lua and some bash files. I want to loop over all files and depending on the shebang I want to execute a validity check.
| validity check all files in a folder depending on the shebang [closed] |