Taking cues from the comment above, I managed to find a solution.
udevadm info /dev/ttyUSB0 | grep "ID_PATH="
The above lists the sysfs path for the port that /dev/ttyUSB0 is connected to. Use this value to create rules for as many devices (ttyUSB1, ttyUSB2, ...) as you want in a rules file, say /etc/udev/rules.d/101-usb-serial.rules, as follows:
SUBSYSTEM=="tty",ENV{ID_PATH}=="pci-0000:00:14.0-usb-0:10.1:1.0",SYMLINK+="ttyUSB001"
SUBSYSTEM=="tty",ENV{ID_PATH}=="pci-0000:00:14.0-usb-0:10.2:1.0",SYMLINK+="ttyUSB002"
SUBSYSTEM=="tty",ENV{ID_PATH}=="pci-0000:00:14.0-usb-0:10.3:1.0",SYMLINK+="ttyUSB003"
Once you are done making changes or creating the file, run the following:
sudo udevadm control --reload-rules
sudo /etc/init.d/udev restart
P.S.: The above example scenario (the one that I am using) uses a 4-port Belkin USB hub. Device 1 is connected to port 1 of the hub, device 2 to port 2, and so on.
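To pull just the ID_PATH value out of udevadm's output (handy when writing the rules above), a small sed pipeline works. A sketch: the sample output below is hard-coded for illustration; on a real system, pipe udevadm info /dev/ttyUSB0 directly instead of the variable.

```shell
# Hypothetical udevadm output; replace the variable with the real command.
sample='E: ID_PATH=pci-0000:00:14.0-usb-0:10.1:1.0
E: ID_PATH_TAG=pci-0000_00_14_0-usb-0_10_1_1_0'
# Print only the ID_PATH value; the anchored pattern does not match ID_PATH_TAG.
id_path=$(printf '%s\n' "$sample" | sed -n 's/^E: ID_PATH=//p')
echo "$id_path"
```

The extracted value is exactly what goes into the ENV{ID_PATH}=="..." match of the rule.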
|
I have more than 2 serial devices enumerated by the FTDI driver as /dev/ttyUSB0, /dev/ttyUSB1, /dev/ttyUSB2, etc. On reboot, these may get jumbled up in a different order. Also, I may physically swap these devices among themselves or replace one with a similar device.
Now, I want persistent enumeration for these. I want the device names to be assigned according to the physical USB port (I may connect to the USB ports on the motherboard of the PC directly or use a USB hub) that the device is connected to – say, if the devices are connected to the USB hub, port 1 should always be named ttyUSB0, port 2 ttyUSB1, and so on.
After some basic reading, I figured (as mentioned here) that /dev/serial/by-path/ lists the devices as sort of a symlink. So, I created a file /etc/udev/rules.d/101-usb-serial.rules with the following:
KERNEL=="ttyUSB[0-9]*", SUBSYSTEM=="tty", DRIVERS=="ftdi_sio", PATH=="pci-0000:00:14.0-usb-0:10.1:1.0", SYMLINK+="ttyUSB000"
KERNEL=="ttyUSB[0-9]*", SUBSYSTEM=="tty", DRIVERS=="ftdi_sio", PATH=="pci-0000:00:14.0-usb-0:10.2:1.0", SYMLINK+="ttyUSB001"
KERNEL=="ttyUSB[0-9]*", SUBSYSTEM=="tty", DRIVERS=="ftdi_sio", PATH=="pci-0000:00:14.0-usb-0:10.3:1.0", SYMLINK+="ttyUSB002"
But this doesn't work. On doing ls /dev/ttyUSB* I'm unable to see the new symlinks I have created. What could possibly be going wrong?
| udev rules for USB serial 'by path' not working |
"If I'm not mistaken, I don't use RNDIS right now."
RNDIS is a Windows-specific network interface driver API. It has nothing to do with what you're doing here.
"What might be the reason that ppp0 does not have a MAC address? How is it even possible?"
A MAC address is an Ethernet concept, and PPP is not Ethernet :)
PPP frames do contain an Address field – but it's just a single byte, always set to 0xFF: PPP is a point-to-point protocol, where you don't need more addresses (you know who you're talking to – the other end).
"I don't think the reason is blockage by the ISP, because my other devices are ping-able on the network."
Good debugging. Note, however, that mobile network operators (MNOs) routinely employ carrier-grade NAT to hide many users behind one public IPv4 address. There are only 2³² possible IPv4 addresses (fewer still, once you exclude the "special" ranges), and roughly as many phones as there are humans, so there are IPv4 addresses for only around half of the phones. The message here is that if you want your mobile device to be world-reachable, there's going to be some extra infrastructure involved (like a VPN server), or you need to use IPv6. I'd recommend going for IPv6 – due to market forces, for most MNOs it's cheaper to carry IPv6 traffic to and from the internet, so they might prioritize it.
Anyway, your question is why it doesn't work for you – here's the thing: This is carrier-grade NAT (your IPv4 address on that interface is a private one!), there's neither a guarantee nor too much sense in guaranteeing that different subscribers can contact each other directly. It's cool that it works for you on your other devices, though.
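A quick local check for this: RFC 1918 private space and the RFC 6598 carrier-grade-NAT range (100.64.0.0/10) are easy to spot by address prefix. A rough sketch, using prefix matching only rather than a full CIDR calculation:

```shell
# Returns "yes" if the address is in RFC 1918 private or RFC 6598 CGNAT space.
is_natted() {
  case "$1" in
    10.*|192.168.*)                         echo yes ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  echo yes ;;
    100.6[4-9].*|100.[7-9][0-9].*|100.1[01][0-9].*|100.12[0-7].*) echo yes ;;
    *)                                      echo no ;;
  esac
}
is_natted 10.250.0.112   # the ppp0 address from the question -> yes
is_natted 8.8.8.8        # a public address -> no
```

If the address on ppp0 comes back "yes", incoming connections from the internet cannot reach you directly, no matter what the local configuration is.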
What's a bit worrying is that you're only assigned a single IPv4 address, and not an IPv6 address (or a whole IPv6 subnet). That might mean that your Linux machine's pppd isn't configured to accept IPv6, or some other misconfiguration.
But more realistically: PPP is an … old protocol that's got nothing to do with how mobile network infrastructure operates, at all. 2.5G/GPRS/EDGE, 3G/UMTS, 4G/LTE, 5G… etc are packet networks – you get an interface that transports IP packets, directly; you don't get a serial line over which you need to talk PPP to establish a packet tunnel.
So, what must be happening is that your USB modem connects to the mobile network, gets an IP address, and puts the IP packets it gets through a PPP tunnel, which it seems to then put over an emulated serial link to your computer? That's an interesting way to do it, to say the least – could as well just have been a USB network card, and worked out of the box. In fact, that's how USB tethering on my phone works, and I had built-in modem cards for my laptops in the past, which did the same.
Maybe your USB modem has different modes of operating, and one that looks like a 1990's dial-up modem that emulates connecting to an Internet Service Provider that offers PPP, and a different mode where it's just an IP router? If that's the case, use the latter mode, and set it to forward packets to your computer.
So, either way, your modem is involved in mangling IP packets; so quite likely that's where your incoming packets get lost. But, maybe your other devices are also using the same APN nominally, but if they're using a different way of connecting to the mobile network, their packets go a different route in the core network, and don't end up in the same private network as your modem; there's much that could differ here, that you have zero visibility of. Generally, your MNO doesn't operate an internal network for you (unless you pay them to – for example using a special M2M subscription), but an internet access; communicate through the internet, if you can't communicate locally. For that you need a global IP address that you can connect to.
If you ask me, if your devices need to communicate with each other:
Use IPv6. That's your best chance to actually get a global address. Fall back to IPv4 only if you positively must.
Get some server that has a static public address. Your user equipment gets assigned a new IP address at will of your MNO.
Establish a VPN connection to that public address from each of your devices – wireguard is an excellent, relatively low-power method for that. This might already solve all your problems.
If you want to be Smart (with a capital S) about latencies, you can add additional direct network paths between devices as soon as they detect they can communicate directly – which requires some software running at the endpoints, delivering address information about the different network interfaces offered by your devices, and a daemon to make sense of it.
|
I'm connecting a Linux machine to the internet using a nowadays popular Huawei Brovi E3372-325 LTE USB stick. The special requirement is that incoming ssh/ping/NTP/... connections must reach my Linux OS.
The current state: using usb_modeswitch -X and the option driver, I can bring up the 3 ttyUSB interfaces and connect successfully using wvdial. But for some reason ifconfig does not list a HW/MAC address for the ppp0 interface, and devices on the same APN network can't ping my IP address. I don't think the reason is blockage by the ISP, because my other devices are ping-able on the network.
Output of ip addr
19: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 3
link/ppp
inet 10.250.0.112 peer 10.64.64.64/24 scope global ppp0
valid_lft forever preferred_lft forever
If I'm not mistaken, I don't use RNDIS right now. Am I right that, in general, the popular RNDIS protocol does not suit my use case, because it creates an additional local network, making it trickier to forward incoming connections to the OS? Pinging might work from outside because that's answered by the USB modem itself, but incoming ssh would fail.
What might be the reason that ppp0 does not have a MAC address? How is it even possible? Should I assign one? Is it perhaps the reason that other devices can't ping its IP? How do I solve this situation?
| USB LTE modem without MAC address? |
I looked into this, and it does seem like remove_id was never implemented for usb-serial. It should be possible to take the work in drivers/usb/core/driver.c and implement remove_id in drivers/usb/serial/bus.c.
Sorry for not having an easy answer.
|
I previously followed the answer in this question: Attaching USB-Serial device with custom PID to ttyUSB0 on embedded
Now, I need to revert that step so that the device id I echoed to new_id doesn't map to ttyUSB0 every time I connect it. The file, new_id, seems to have '0403 e0d0' permanently written to it now. I've tried to use the unbind file, with no luck. There's also no "remove_id" file – only bind, new_id, uevent, and unbind.
How can I revert this?
| How to remove device id from manually entered usb-serial driver |
The Belkin F5U109 seems to be a device of fairly old design, so perhaps the F5U409, with the same USB vendor:device id, is similar. In that case the Linux driver chosen because of the id is mct_u232.c. In the driver's .h file we can read, regarding flow control:
"no flow control specific requests have been realized
apart from DTR/RTS settings. Both signals are dropped for no flow control
but asserted for hardware or software flow control."
So it seems that XON/XOFF software flow control is not implemented in this driver, which was derived by sniffing the USB commands issued under Windows 98. Perhaps the hardware itself does not provide this feature.
You could try implementing the flow control at the user level, but it is unlikely to be adequate as there will probably be a fifo on input and output so that when the XOFF arrives at the user level, there may still be too many characters already in the fifo that cannot be cancelled. Perhaps the PX-8 provides other protocols that could be used to packetize the data?
You might still be able to use hardware flow control, by connecting the extra modem lines RTS and CTS (pins 7 and 8 on a 9-pin DB9, 4 and 5 on a DB25). You may need to swap these if your PX-8 is wired as a computer rather than a terminal. You would need stty crtscts too, and perhaps -clocal.
Alternatively, there are other serial-usb devices that Linux supports better, due to adequate documentation by the manufacturer, such as the popular FTDI series. The FTDI driver seems to have code to set the XON and XOFF characters in the device, which would allow for a rapid response by the hardware to the reception of an XOFF character, without needing to wait for the character to arrive at the kernel to be recognised. There are illegal copies of the FTDI chip, so try to buy a reputable make to ensure full compatibility.
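As a side note, the XON and XOFF characters involved here are plain ASCII control bytes – DC1 (Ctrl-Q) and DC3 (Ctrl-S), the same characters stty reports as start = ^Q and stop = ^S. A quick shell check of their byte values:

```shell
# XON is DC1 (octal 021, Ctrl-Q); XOFF is DC3 (octal 023, Ctrl-S).
xon=$(printf '\021')
xoff=$(printf '\023')
# The leading-quote form of printf's numeric arguments yields a character's code.
printf 'XON=0x%02x XOFF=0x%02x\n' "'$xon" "'$xoff"
```

These are the bytes a device with hardware XON/XOFF support (like the FTDI parts mentioned above) reacts to directly, without waiting for the kernel.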
|
This is related to a previous thread I created about a month ago and which was answered.
Today I am attempting to set up a serial console login prompt on a laptop running Ubuntu 20 with a Belkin F5U409 USB serial adapter. I have run into the same issue, where larger text output will eventually fall apart into scrambled text. However, this time setting stty ixon does not resolve the behavior. See below for sample output of the issue.
For context, the computer I am using to connect to the Ubuntu laptop over RS232 is an EPSON PX-8. On the PX-8 I am using terminal emulation software called TEL.COM. See below for terminal parameters I have configured on the PX-8.
I am using systemd to enable a console on ttyUSB0 with systemctl start serial-getty@ttyUSB0.service. Do I need to configure flow control with systemd? Is there some place other than stty where I need to configure parameters for ttyUSB0?
I have attempted to set this up on another laptop running Debian 10 but get the same behavior.
TEL.COM settings on the PX-8:
Baud: 9600, Char Bits: 8, Parity: NONE, Stop Bits: 2, RTS: ON, Flow Control: ON
Example of the issue when I attempt to output command history:
albert@t450:/$ history
1 sudo rasp-config
2 sudo raspi-config
3 sudo nano /boot/cmdline.txt
4 tail /boot/cmdline.txt
5 sudo shutdown -r now
6 sudo vim ~/boot/cmdline.txt
7 cd /./boot
8 dir
9 sudo vim cmdline.txt
10 sudo vim config.txt
11 sudo shutdown -r now
12 dfgdf
13 vim
14 sudo vim cmdline.txt
15 cd /./boot
16 sudo vim cmdline.txt
17 sudo shutdown -r now
18 cd /./boot
19 sudo vim cmdline.txt
20 sudo shutdown -r now
21 ping 8.8.8.8
2 xprt TEM=Vvj9s9ds9j3oin so nat1 machine
x Rom =vos cngas-2goses9g3
-xtiet n n5
-s oiy
y
stty configuration on the Ubuntu machine:
albert@t450:/$ stty -a
speed 9600 baud; rows 40; columns 80; line = 0; intr = ^C; quit = ^\; erase = ^?;
kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q;
stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O; min = 1;
time = 0; -parenb -parodd -cmspar cs8 -hupcl cstopb cread -clocal -crtscts -ignbrk
-brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon ixoff -iuclc -ixany
-imaxbel iutf8 opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0
bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop
-echoprt echoctl echoke -flusho -extproc
Note that all of these parameters are set in stty:
ixon
ixoff
stop = ^S
start = ^Q;
cs8
cstopb
-parenb
| Serial flow control issue with ttyUSB0 |
A one-liner (edited):
ps | grep picocom | awk '{print $1}' | tr -s '\n' ',' | xargs lsof -p | grep ttyUSB
This searches the running processes for picocom, captures the PIDs, and lists the open files of those processes, filtering them on the string ttyUSB.
The last column of output should show all your /dev/ttyUSB devices.
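The PID-collection part of the pipeline can be tried out on canned ps output. A sketch: the sample lines below are made up, and grep -v grep drops the grep process itself, which the original one-liner would also match:

```shell
# Hypothetical `ps` output; on a real system, use `ps ax` in place of the variable.
ps_out='  101 pts/0  S  0:00 picocom /dev/ttyUSB0
  102 pts/0  S  0:00 picocom /dev/ttyUSB5
  103 pts/0  S  0:00 grep picocom'
# Collect the picocom PIDs as a comma-separated list, as `lsof -p` expects.
pids=$(printf '%s\n' "$ps_out" | grep picocom | grep -v grep \
       | awk '{print $1}' | paste -sd, -)
echo "$pids"
```

paste -sd, avoids the trailing comma that tr -s '\n' ',' leaves behind, which some lsof versions reject.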
|
I have a bunch (20+) of serial ports connected to my linux machine. ttyUSB0 through ttyUSB27 as of now. I use picocom to connect/monitor these ports, but not all of them are connected.
If I want to connect picocom to a new port, I have to either:
go through all the port numbers until I find the ones that are not connected yet,
or try to see what I have connected in order to find the ones that are not. This process is cumbersome with such a large number of ports.
Is there a way to get a list of the connected (or disconnected) ports from picocom?
| picocom: list all connected ports |
Yes, you can use a USB-to-serial cable to connect a terminal, using lsusb and modprobe usbserial.
The external conversion cable implements the hardware support internally, so whether or not the original board has hardware support, the operating system can handle it.
jay_k@jay_k ~ $ lsusb
Bus 001 Device 002: ID 8087:8000 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 005: ID 2232:5005 Silicon Motion
Bus 002 Device 004: ID 8087:07dc Intel Corp.
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
......
jay_k@jay_k ~ $ sudo modprobe usbserial vendor=VENDOR product=PRODUCT
VENDOR and PRODUCT come from the XXXX:XXXX ID field in the lsusb output, i.e. VENDOR:PRODUCT.
After executing modprobe, you can find a ttyXXX with dmesg, like:
jay_k@jay_k ~ $ dmesg | grep tty
It will show up as /dev/ttyUSB....
You can even create a USB-to-USB connection with a data communication converter cable, but these products seem to be available only on the Korean market (external link - Auction Korea).
Also, you can communicate through two serial converters back to back (though this adds overhead):
Laptop ---> USB to Serial ---> Serial to USB ---> Target
And you can redirect your shell I/O to /dev/ttyUSB.... (see "redirect output from interface?").
You should use the serial device much like a normal file. The only difference is that it needs some ioctl()s for the speed and control-line setup.
So don't use os.system("echo ..., but f = open('/dev/ttyUSB3', 'r+') and then f.write() and f.read().
In theory you could use ioctl() to set the speed and so on, but at that stage it's simply easier to use pySerial than to do all of the parameter marshalling yourself. ser = serial.Serial(port='/dev/ttyUSB3', baudrate=9600, timeout=1, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS) with ser.write() and ser.read().
Note that you should use udev to set a unique name for the serial port, rather than hard-coding /dev/ttyUSB3. Here's how to do that for a single USB/RS-232 adapter and here's how to do that for a multiport USB/RS-232 adapter. |
It is possible to connect my laptop to network equipment like a managed switch or firewall via the mini-USB port on it, then use screen to connect to its TTY.
Is it actually possible to connect to another computer, say a laptop or server, by an ad hoc USB-to-USB connection? Like:
Terminal Client Linux Box - USB PORT ===> Target Machine - USB PORT
Assume the operating system is Linux/Unix/BSD and the hardware itself doesn't support terminal emulation over the USB port.
This question is purely out of curiosity. For example, I might want TTY access to my RPi without connecting a monitor.
| Can I connect to any computer's physical terminal without monitor |
If you look at man udev: KERNELS searches the device path, while KERNEL matches the device itself, and SUBSYSTEM represents the part of the kernel generating the event. When your USB dongle is plugged in, several udev events are created as parts of the kernel discover the device and react accordingly.
You want your rule to trigger on the event for the device itself (SUBSYSTEM=="tty", because you want a link for /dev/ttyUSB0), but with SUBSYSTEMS=="usb" it triggers when the USB device itself is discovered, not when the driver for the USB device is started. That's why you get a link to the USB device as seen from the USB subsystem, bus/usb/001/009.
So what you need is
KERNELS=="1-1.5.6", SUBSYSTEM=="tty", SYMLINK+="rs485"
(note the S in KERNELS, and the tty).
|
I'm creating udev rules to map my USB devices (ttyUSB*) to the USB ports where they are connected. The usual way to do so is looking at the output of:
udevadm info --name=/dev/ttyUSB0 --attribute-walk
Here is my output (I removed the ATTRS lines that are not meaningful):
looking at device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5/1-1.5.6/1-1.5.6:1.0/ttyUSB0/tty/ttyUSB0':
KERNEL=="ttyUSB0"
SUBSYSTEM=="tty"
DRIVER==""

looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5/1-1.5.6/1-1.5.6:1.0/ttyUSB0':
KERNELS=="ttyUSB0"
SUBSYSTEMS=="usb-serial"
DRIVERS=="ftdi_sio" looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5/1-1.5.6/1-1.5.6:1.0':
KERNELS=="1-1.5.6:1.0"
SUBSYSTEMS=="usb"
DRIVERS=="ftdi_sio"
ATTRS{interface}=="USB-RS485 Cable"
ATTRS{supports_autosuspend}=="1"

looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5/1-1.5.6':
KERNELS=="1-1.5.6"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{idProduct}=="6001"
ATTRS{idVendor}=="0403"
ATTRS{manufacturer}=="FTDI"
ATTRS{product}=="USB-RS485 Cable"
ATTRS{serial}=="FTY48GF2"

looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5':
KERNELS=="1-1.5"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{product}=="USB 2.0 Hub [MTT]"

looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1':
KERNELS=="1-1"
SUBSYSTEMS=="usb"
DRIVERS=="usb"

looking at parent device '/devices/platform/soc/3f980000.usb/usb1':
KERNELS=="usb1"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{manufacturer}=="Linux 4.9.41-v7+ dwc_otg_hcd"
ATTRS{product}=="DWC OTG Controller"

looking at parent device '/devices/platform/soc/3f980000.usb':
KERNELS=="3f980000.usb"
SUBSYSTEMS=="platform"
DRIVERS=="dwc_otg"

looking at parent device '/devices/platform/soc':
KERNELS=="soc"
SUBSYSTEMS=="platform"
DRIVERS==""

looking at parent device '/devices/platform':
KERNELS=="platform"
SUBSYSTEMS==""
DRIVERS==""Here the connection: Raspberry Pi -> USB HUB -> FTDI dongle.
My rule is the following:
$ cat /etc/udev/rules.d/99-usb.rules
KERNEL=="1-1.5.6", SUBSYSTEM=="usb", SYMLINK+="rs485"
but:
# ls -l /dev/rs485
lrwxrwxrwx 1 root root 15 Oct 4 07:04 /dev/rs485 -> bus/usb/001/009I was expecting the symlink should be created to /dev/ttyUSB0.
Now, I understand my dongle is at that usb position from:
$ lsusb
Bus 001 Device 006: ID 046d:c062 Logitech, Inc. M-UAS144 [LS1 Laser Mouse]
Bus 001 Device 009: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC
...but of course it IS NOT the serial port (i.e. I cannot echo to it).
Trying to use 1-1.5.6:1.0 as KERNEL key doesn't work - no symlink created.
What value should I use?
| KERNELS path for USB device connected to HUB |
It turned out that I had to change two things. First, I had to increase the number at the start of my rules filename (I changed it from 52 to 70). Second, I noticed that ATTR{idVendor}=="10c4", ATTR{idProduct}=="ea70" didn't work, as those attributes are not available for matching in the tty subsystem. I changed the match to the corresponding environment variables, and ended up with this rule:
SUBSYSTEM=="tty", \
ENV{ID_MODEL}=="CP2105_Dual_USB_to_UART_Bridge_Controller", \
ENV{ID_VENDOR_ID}=="10c4", ENV{ID_MODEL_ID}=="ea70", \
SYMLINK+="ttyCP2105-$env{ID_USB_INTERFACE_NUM}-$env{ID_SERIAL_SHORT}"
|
I have a board with a CP2105. It is a USB-to-serial bridge with two UART interfaces on one USB port. I've read some guides and the udev documentation, but I'm stuck on creating symlinks for them. I want the names to include the serial ID. The problem is that it does not work, even with simple rules.
My rule, that should do what I want:
ACTION=="add", SUBSYSTEM=="tty", \
ATTR{idVendor}=="10c4", ATTR{idProduct}=="ea70",
ENV{ID_MODEL}=="CP2105_Dual_USB_to_UART_Bridge_Controller", \
SYMLINK+="CP2105$env{ID_USB_INTERFACE_NUM}-$env{ID_SERIAL_SHORT}"
A simpler attempt, which should add two symlinks with different numbers at the end:
ACTION=="add", SUBSYSTEM=="tty", ATTR{idVendor}=="10c4", ATTR{idProduct}=="ea70", SYMLINK+="CP2105%n"
I also have one rule that works, but does something else (it prevents another driver from writing to this device):
SUBSYSTEM=="usb", ATTR{idVendor}=="10c4", ATTR{idProduct}=="ea70", ENV{ID_MM_DEVICE_IGNORE}="1"
My udevadm info output for the first device:
udevadm info -q all /dev/ttyUSB0
P: /devices/pci0000:00/0000:00:14.0/usb3/3-5/3-5.3/3-5.3:1.0/ttyUSB0/tty/ttyUSB0
N: ttyUSB0
S: serial/by-id/usb-Silicon_Labs_CP2105_Dual_USB_to_UART_Bridge_Controller_0087144E-if00-port0
S: serial/by-path/pci-0000:00:14.0-usb-0:5.3:1.0-port0
E: DEVLINKS=/dev/serial/by-path/pci-0000:00:14.0-usb-0:5.3:1.0-port0 /dev/serial/by-id/usb-Silicon_Labs_CP2105_Dual_USB_to_UART_Bridge_Controller_0087144E-if00-port0
E: DEVNAME=/dev/ttyUSB0
E: DEVPATH=/devices/pci0000:00/0000:00:14.0/usb3/3-5/3-5.3/3-5.3:1.0/ttyUSB0/tty/ttyUSB0
E: ID_BUS=usb
E: ID_MM_CANDIDATE=1
E: ID_MODEL=CP2105_Dual_USB_to_UART_Bridge_Controller
E: ID_MODEL_ENC=CP2105\x20Dual\x20USB\x20to\x20UART\x20Bridge\x20Controller
E: ID_MODEL_FROM_DATABASE=CP210x UART Bridge
E: ID_MODEL_ID=ea70
E: ID_PATH=pci-0000:00:14.0-usb-0:5.3:1.0
E: ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_3_1_0
E: ID_PCI_CLASS_FROM_DATABASE=Serial bus controller
E: ID_PCI_INTERFACE_FROM_DATABASE=XHCI
E: ID_PCI_SUBCLASS_FROM_DATABASE=USB controller
E: ID_REVISION=0100
E: ID_SERIAL=Silicon_Labs_CP2105_Dual_USB_to_UART_Bridge_Controller_0087144E
E: ID_SERIAL_SHORT=0087144E
E: ID_TYPE=generic
E: ID_USB_DRIVER=cp210x
E: ID_USB_INTERFACES=:ff0000:
E: ID_USB_INTERFACE_NUM=00
E: ID_VENDOR=Silicon_Labs
E: ID_VENDOR_ENC=Silicon\x20Labs
E: ID_VENDOR_FROM_DATABASE=Cygnal Integrated Products, Inc.
E: ID_VENDOR_ID=10c4
E: MAJOR=188
E: MINOR=0
E: SUBSYSTEM=tty
E: TAGS=:systemd:
E: USEC_INITIALIZED=5571337818
And the second one:
udevadm info -q all /dev/ttyUSB1
P: /devices/pci0000:00/0000:00:14.0/usb3/3-5/3-5.3/3-5.3:1.1/ttyUSB1/tty/ttyUSB1
N: ttyUSB1
S: serial/by-id/usb-Silicon_Labs_CP2105_Dual_USB_to_UART_Bridge_Controller_0087144E-if01-port0
S: serial/by-path/pci-0000:00:14.0-usb-0:5.3:1.1-port0
E: DEVLINKS=/dev/serial/by-path/pci-0000:00:14.0-usb-0:5.3:1.1-port0 /dev/serial/by-id/usb-Silicon_Labs_CP2105_Dual_USB_to_UART_Bridge_Controller_0087144E-if01-port0
E: DEVNAME=/dev/ttyUSB1
E: DEVPATH=/devices/pci0000:00/0000:00:14.0/usb3/3-5/3-5.3/3-5.3:1.1/ttyUSB1/tty/ttyUSB1
E: ID_BUS=usb
E: ID_MM_CANDIDATE=1
E: ID_MODEL=CP2105_Dual_USB_to_UART_Bridge_Controller
E: ID_MODEL_ENC=CP2105\x20Dual\x20USB\x20to\x20UART\x20Bridge\x20Controller
E: ID_MODEL_FROM_DATABASE=CP210x UART Bridge
E: ID_MODEL_ID=ea70
E: ID_PATH=pci-0000:00:14.0-usb-0:5.3:1.1
E: ID_PATH_TAG=pci-0000_00_14_0-usb-0_5_3_1_1
E: ID_PCI_CLASS_FROM_DATABASE=Serial bus controller
E: ID_PCI_INTERFACE_FROM_DATABASE=XHCI
E: ID_PCI_SUBCLASS_FROM_DATABASE=USB controller
E: ID_REVISION=0100
E: ID_SERIAL=Silicon_Labs_CP2105_Dual_USB_to_UART_Bridge_Controller_0087144E
E: ID_SERIAL_SHORT=0087144E
E: ID_TYPE=generic
E: ID_USB_DRIVER=cp210x
E: ID_USB_INTERFACES=:ff0000:
E: ID_USB_INTERFACE_NUM=01
E: ID_VENDOR=Silicon_Labs
E: ID_VENDOR_ENC=Silicon\x20Labs
E: ID_VENDOR_FROM_DATABASE=Cygnal Integrated Products, Inc.
E: ID_VENDOR_ID=10c4
E: MAJOR=188
E: MINOR=1
E: SUBSYSTEM=tty
E: TAGS=:systemd:
E: USEC_INITIALIZED=5570324399Can anybody help?
| How to assign symlinks to serial devices from usb-to-serial device CP2105? |
Just type. Anything you type (that is not a screen escape key combination) will be sent to the device.
The problem might be that the device is not currently echoing back the characters you type, so it may look like you're not sending anything.
Using screen in the way you describe is pretty much a set of "straight pipes": from your keyboard to the device, and from the device to your screen. That's all. If you need to adjust the parameters of the serial communication, you should probably use minicom or some other program that is actually designed to work with hardware serial ports.
If the device cannot be configured to echo the input characters back at you (to conform to common Unix serial port behavior), then you would need a communication program that has an option to activate "local echo". Minicom can do that with Ctrl-A E.
|
I use the screen /dev/ttyUSB0 command to connect to a virtual serial port. It prints out the incoming communication, but I don't know how to send a message back to the connected device.
Thanks for answers
EDIT:
Turns out I do send out characters as I type, but I cannot see them. So is there any way I could see what I am typing before sending? Could it work like a normal terminal?
| How to enter message in serial communication using screen |
As it turns out, PPP is affecting its own serial port, and since that's the one that's used to configure the GPS, that's what's causing the problem.
By comparing the results of stty -F /dev/ttyUSB3 before and after running PPP, it became apparent that PPP was configuring the serial port in raw mode, which meant I couldn't use it to configure the GPS port. What's interesting is that these settings persisted even after the ttyUSBx device nodes were removed and recreated due to the modem being reset.
Simply running stty sane -F /dev/ttyUSB3 to revert back to default settings allowed me to configure the GPS port without issue.
|
I've got a Buildroot-based embedded system that uses a 3G modem (Cinterion PH8-P) and PPP to connect to the internet. The 3G modem is a USB device that provides 4 ttyUSB ports. One of these is used for PPP, whereas another is used for GPS.
Occasionally, the 3G modem will stop working and will need to be restarted. I do this by first stopping the PPP and GPSd daemons, then restarting the modem, and then restarting the daemons again. Unfortunately, it seems that if PPP is run beforehand, it seems to affect the serial ports in some way so that other programs can no longer use them.
For example, if I run the following on a freshly booted system where PPP has not been run yet:
cat /dev/ttyUSB3&
echo "AT" > /dev/ttyUSB3
I get the expected OK AT response back. If I then run PPP for a bit (by calling pon), then stop it (by calling poff), restart the modem and try to send the same AT command again, the terminal just seems to echo back exactly what I sent to the modem and I don't get the OK response. As a result, the GPS won't work, since I stop receiving NMEA messages from the GPS tty port. It's almost like PPP is configuring all the serial ports to redirect their outputs somewhere else. Despite this, PPP has no problem at all starting up again after the modem has rebooted - according to the logs, the chat scripts happily send their AT commands and get the expected responses back.
What could be causing this issue?
| ppp affecting serial ports so that they cannot be used if modem is reset |
So apparently it was all about VirtualBox. Now that I am connected to an "actual" Ubuntu PC, I can access both at the same time.
|
I am struggling to access my BeagleBone Black via FTDI using picocom. It has been working without any trouble in the past few weeks. I entered the following command and it would work properly.
sudo picocom -b 115200 /dev/ttyUSB0
Anyhow, I have been working with Ethernet-over-USB lately, which wasn't a problem either. I am pretty sure I connected via picocom as well a couple of times just to check something, so it has been working in the last few weeks. Obviously I didn't change any settings to get Ethernet working that might cause the picocom trouble.
Anyhow, I figured you cannot do Ethernet-over-USB and the serial FTDI connection at the same time. So I unplugged the mini-USB when trying to connect via picocom. And it came up with an error:
picocom v1.7
port is : /dev/ttyUSB0
flowcontrol : none
baudrate is : 115220
parity is : none
databits are : 8
escape is : C-a
local echo is : no
noinit is : no
noreset is : no
nolock is : no
send_cmd is : sz -vv
receive_cmd is : rz -vv
imap is :
omap is :
emap is : crcrlf,delbs,
FATAL: failed to add device /dev/ttyUSB0: Invalid baud rate
Then I changed the baud rate to 9600 just to make sure. Now, instead of the error, it says Terminal ready. And then it just stops and doesn't do anything anymore. I press ENTER: it still doesn't do anything.
So my questions
1. Why can't I do Ethernet-over-USB and the FTDI connection at the same time?
2. What's picocom up to with its baud rate? How do I fix this?
My environment
Beaglebone Black Rev C running Debian Wheezy (3.8.13)
VirtualBox running Ubuntu 14.04
I am not sure whether you need this information:
my ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:89:55:d3
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe89:55d3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1386 errors:0 dropped:0 overruns:0 frame:0
          TX packets:982 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:962896 (962.8 KB)  TX bytes:95644 (95.6 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:395 errors:0 dropped:0 overruns:0 frame:0
          TX packets:395 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:33859 (33.8 KB)  TX bytes:33859 (33.8 KB)

my /etc/network/interfaces:
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
| Sudden picocom error: invalid baud rate |
A USB bus must have a tree structure, with exactly one USB Host Adapter at the root of the tree. The Host Adapter will be responsible for overall control of the entire bus.
Connecting two basic USB Host Adapters together with just a simple cable with USB-A connectors on both ends will never work and is explicitly against USB specifications. That is a non-starter.
The "virtual ethernet bridge" you've seen is not just a cable, but it includes some electronics (probably inside one of the connectors) to make the cable work like two USB Ethernet adapters connected together with a short Ethernet cable.
You could in theory do something similar by connecting two USB-to-serial converter chips back-to-back, but that would be just throwing away most of the speed of the USB bus, without getting the kind of minimal latency the handshaking signals of a RS-232 UART wired directly into an IRQ signal can offer. There would be no advantage in doing that.
A RS-232 serial port has fully independent TX and RX lines; USB has a single differential pair of data wires that are both used for receive and for transmit, and most USB host adapters have dedicated hardware circuits for differential signaling, and they cannot be easily bypassed. The two other lines are dedicated power lines, and might not be controllable at all.
A USB On-The-Go (OTG) cable (as mentioned in the question comments by Giacomo Catenazzi) cannot have a regular USB-A connector on both ends, because it needs either the 5th pin of the Mini-AB or Micro-AB connectors, or the equivalent systems of USB Type-C, and a dual-mode USB controller that can work both as a USB Host Adapter or as a USB Device. The extra 5th pin of the Mini-AB or Micro-AB connectors is used to select the initial mode of the dual-mode USB controller it's plugged into. There is also a specific protocol for flipping the host/device roles around if necessary.
One end of an OTG cable could be regular USB-A, but the other must be Mini-AB, Micro-AB or Type-C. And as noted above, the system at that end would need to have a dual-mode capable USB controller.
In Type-C, the host/device role is just one of the several optional negotiable things on the connection, like various alternate modes and power delivery.
|
I have seen that it is possible to connect two computers by USB cable and make a virtual ethernet bridge or something like that.
My question is, can we do something similar but configure the connection as a SERIAL interface?
To be exact:
Is it possible to connect two computers with a USB cable (I do not know what the correct name is, I mean a cable that has USB-A connector on both ends) without any Serial adapter involved and then configure the usb port as a serial.
| Virtual COM port using only USB cable |
If Thunar is behaving like udisksctl power-off, then it is using usb_remove_store().
That means Thunar is being misfeature-compatible with Microsoft Windows. You can just use eject /dev/sdX from the command line instead to allow hardware to be safely removed. The only difference is that the LED light won't turn off. To un-eject, use eject -t /dev/sdX.
Here's a quote from Alan Stern (who actually wrote the Linux kernel code that performs the "remove" option):

In fact, the "remove" attribute works for any USB device, since all it
does is disable the upstream port. But normally it's intended only for
mass-storage devices. I was going to say that it's needed only for
mass-storage devices, but that's not correct -- it isn't needed at
all. Its main purpose is to make people who have been conditioned by
Windows feel more comfortable, by turning off an LED on the device to
indicate that removal is now safe. |
When I "safely remove" an external hard-drive from my file-manager (Thunar), the whole hard-drive is powered off and disappears from /dev. Therefore, I guess that under the hood, this is done by calling udisksctl power-off -b /dev/sdX which has the same effect.
I thought it should somehow be possible to bring the device up again. After having read https://stackoverflow.com/a/12675749, I thought that powering off is maybe done by writing to /sys/bus/usb/devices/usbX/power/control, but the sysfs seems to remain untouched.
So, how is it possible to power-on an external device again after powering it off with udisksctl? To me, it is annoying that I can not re-mount a partition after unmounting it from the file manager.
| How to power on an external hard-drive after powering it off? |
External media/drive mounting is handled by udisks2 on most modern distros. I don't think there's any trivial way to change the default mount options, as they are hard-coded (see FSMountOptions in udiskslinuxfilesystem.c); that is, they're not configurable (at least not yet1). Your options are quite limited: unmount the partition and remount it with different mount options (unless you're willing to patch the source code or write your own automount tool).
As to your other question:I think one may be able to disallow mounting by type, though, by the
looks of the rules?! When I insert a USB (3.0) thumbdrive or HDD all
ext[34] partitions get mounted (I wish they weren't) and the user gets
a graphical prompt for any LUKS partition to unlock. I wish to disable
both. A user may have FAT drives but others may only be mounted by
root.You could use a udev rule to ignore all USB thumbdrive partitions except vfat ones. Create a new rule file e.g. /usr/lib/udev/rules.d/90-ignore-nonvfat.rules with the following content:
SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_BUS}=="usb", ENV{ID_FS_TYPE}!="vfat", ENV{UDISKS_IGNORE}="1"

(Replace UDISKS_IGNORE with UDISKS_PRESENTATION_HIDE if your distro uses udisks1.)
1: see FreeDesktop ML for a proposed patch (and a long discussion).
|
When I plug in a USB drive it is automatically mounted on /run/media/user/fslabel. This is done, I guess by some udev/dbus/thunar/nautilus/gvfs or other system, and I find this convenient and do not want to revert to manual mounting by root. However, I have a problem with the default mount options: vfat drives are mounted such that the executable flag ist set on all regular files. This is a nuissance and a security problem and I wish to disable it.
How do I set system-wide options for mounting, like using the noexec flag for all vfat partitions and disabling mounting of ext4 partitions by user-space programs/daemons?
A few years ago I tried something very time-consuming on a different system, like editing some udev or dbus rules (quite apparently not files designed to be edited by hand), which was a great effort due to lack of proper documentation and great variation between distros. Is this the intended and only way? If so, could someone please tell me what to change where?
I am using Arch Linux, CentOS and openSUSE with the XFCE Desktop. Automount may be performed by one of nautilus, thunar or dolphin, running in the background (or possibly, a service started by these?!). I am not sure because it happens in the background.
| How do I change automatic mounts of removable vfat / fat32 drives/partitions to use "noexec"? |
No. And even solutions that apparently do it without root privileges actually do have root privileges. This is just a basic requirement for mounting or accessing raw disk data. If you could do those without root privileges, you could read files you have no permission to read (by reading and searching the raw data), and if you could mount, you could mess up the VFS filesystem tree, possibly in creative ways that let you obtain permissions you're not supposed to have.
What you could do, if you already had read permission to the raw encrypted data, is implement everything needed to access it and extract files from it in software that runs without root permissions. So basically you'd be treating a LUKS-encrypted image file as you would a GPG-encrypted tar. If this is what you wanted, and for some reason absolutely had no root or sudo available, you'd usually use the tar in the first place since that's what already exists as a ready-to-use solution.
To provide a more practical approach to your problem: keyfiles are passphrases and passphrases are keyfiles, really. Apart from some minor details (e.g. how it treats newlines), LUKS does not really make a distinction here. Keyfile just means it reads the passphrase from a file.
So you could just use keyfiles that are printable ASCII and don't have newlines in them.
That is, if udisksctl really doesn't support keyfiles. Kind of hard to understand why.
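Following that advice, a keyfile that doubles as a typable passphrase can be generated offline. A minimal sketch, assuming GNU coreutils; luks.key is just an example filename:

```shell
# 32 random bytes -> 44 printable base64 characters, no trailing newline.
# Such a file works as a LUKS keyfile and can also be typed as a passphrase.
head -c 32 /dev/urandom | base64 | tr -d '\n' > luks.key
chmod 600 luks.key      # keep the key readable only by its owner
wc -c < luks.key        # prints 44
```

Since base64 output contains no newlines after the tr, the keyfile bytes and the typed passphrase are identical.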
|
Is there any way to unlock LUKS partition using keyfile while not having root priviliges? (using sudo is not an option)
I know that udisksctl can open LUKS partition, however it can do it only with a passphrase.
| Unlock LUKS partition using keyfile without root access? |
I had a detailed look into the udisks2 source code and found the solution there.
The devices correctly mounted under user permissions were formatted with old filesystems, like fat. These accept uid= and gid= mount options to set the owner. Udisks automatically sets these options to user and group id of the user that issued the mount request.
Modern filesystems, like the ext series, do not have such options but instead remember the owner and mode of the root node. So chown auser /run/media/auser/[some id] indeed works persistently. An alternative is passing -E root_owner to mkfs.ext4, which initializes the uid and gid of the newly created filesystem's root directory to those of its creator.
|
Loop devices, i.e. for mounting raw disk images, can be managed without root privileges using udisks.
For testing purposes, an image can be created and formatted like so:
dd if=/dev/urandom of=img.img bs=1M count=16
mkfs.ext4 img.img

And then set it up using udisks:
udisksctl loop-setup -f img.img

This creates a loop device for the image and mounts it to a new directory under /run/media/$USER, just like any local hard drive managed by udisks. Only the permissions are not what I expected.
# ls -l /run/media/$USER/
drwxr-xr-x 3 root root 1024 Apr 10 11:19 [some id]
drwx------ 1 auser auser 12288 Oct 30  2012 [a device label]

The first one listed is the loop device, owned by root and not writable by anybody else. The second one is a local hard drive or a USB pen drive mounted for comparison, belonging to the user who mounted it. I know that I could fix this with a simple chmod executed as root.
But why does udisks assign different permissions and owners? Can it be configured to do otherwise?
| Mount image user-readable with udisks2 |
The file managers uses UDisks2 to mount the external drives without admin rights. GNOME, KDE, XFCE and various other desktop environments uses UDisks2 to allow normal users to mount removable media devices.
UDisks2 project provides a system daemon called udisksd, and a command-line tool called udisksctl.
The udisksd daemon runs in the background and implements well-defined D-Bus interfaces that can be used to query and manipulate storage devices. udisksd starts automatically at system boot and runs as root all the time. You can verify this using the command:
sudo systemctl status udisks2

Below are the steps to mount a USB disk without sudo!
1. Find what the drive is called
You'll need to know what the drive is called to mount it. To do that enter the command below
lsblk

You're looking for a partition that should look something like /dev/sda1 or /dev/sdb1. The more disks you have, the higher the letter is likely to be. Anyway, find it and remember what it's called.
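If you'd rather script this step, lsblk's raw output is easy to filter with awk. A hedged sketch, run here against canned sample rows since the real output depends on what drives are attached; on a live system you would pipe `lsblk -rno NAME,TYPE` into the same filter:

```shell
# Keep only rows whose TYPE column says "part" (a partition)
# and print them as device paths.
printf '%s\n' \
  'sda disk' \
  'sda1 part' \
  'sdb disk' \
  'sdb1 part' |
awk '$2 == "part" {print "/dev/" $1}'
# prints:
# /dev/sda1
# /dev/sdb1
```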
2. Mount using udisksctl
udisksctl mount -b /dev/sda1

Sample output:

Mounted /dev/sda1 at /media/myusername/usb_stick_name.

3. Unmount the disk
Similarly, you can unmount the USB drive using command:
udisksctl unmount -b /dev/sda1 |
In the terminal I have to use sudo mount, otherwise it says operation not permitted.
But in the file explorer (started normally, without sudo) I can mount by pressing the icon next to the external disk (or right click -> Mount) and it works. How do I use the same technique in bash to mount a usb drive without sudo?
| mount usb drive without sudo |
For udisks version 2.6.4 and later
Note: I haven't tested this. I will once I get udisks 2.6.4 (whenever https://github.com/NixOS/nixpkgs/pull/41723 is backported to NixOS stable).
Update: I have udisks 2.8.0 now, so I can test my solution. The only thing I missed was removing the trailing newline from the output of pass (...) | head (...). To trim that, either use the -n flag with echo, or append | tr -d '\n' to the head output. I've reflected this in my two solutions below.
Generic (unsecure) solution
Use the --key-file flag and substitute the password string in place of a keyfile. To unlock /dev/sdb with the password hunter2:
udisksctl unlock --block-device /dev/sdb --key-file <(echo -n "hunter2")

Passing sensitive data directly through the command line is unsafe, so this method should be avoided.
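The newline trimming that makes this work can be checked offline by standing printf in for the password source (hunter2 is just the placeholder from above):

```shell
# Take only the first line and strip the trailing newline, so the
# process-substitution "keyfile" holds exactly the passphrase bytes.
printf 'hunter2\nsome-extra-metadata\n' | head -n 1 | tr -d '\n' | wc -c
# prints 7: the seven bytes of "hunter2", with no newline appended
```

Without the tr, the keyfile would contain an eighth byte (the newline), and LUKS would treat that as part of the passphrase.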
pass implementation
Instead, retrieve the password string with pass thumbdrive-password | head -n 1, trim the trailing newline, and substitute it in place of a keyfile:
udisksctl unlock \
--block-device /dev/sdb \
--key-file <(pass thumbdrive-password | head -n 1 | tr -d '\n') |
Currently, I do this to mount my encrypted thumbdrive:
# Works!
pass thumbdrive-password | # get device password entry from password manager
head -n 1 | # get the device password itself
sudo cryptsetup luksOpen /dev/sdb thumbdrive # unlock device
udisksctl mount -b /dev/mapper/thumbdrive    # mount device

I'd like to do something like this instead:
# Does not work!
pass thumbdrive-password |
head -n 1 |
udisksctl unlock -b /dev/sdb # unlock device
udisksctl mount -b /dev/mapper/luks-foobar   # mount device with uuid "foobar"

This would allow semi-privileged users (with permission to org.freedesktop.udisks2.filesystem-mount in polkit) to mount encrypted filesystems without using sudo. Udisks will not accept this piping method, because it uses an interactive password prompt. How can I provide my device password to udisksctl unlock without typing it in manually?
| Provide password to udisks to unlock LUKS-encrypted device |
I had the same problem a while ago.
Solution:

1. Fixing your configuration: create the file /etc/polkit-1/localauthority/50-local.d/50-mount-as-pi.pkla with the following contents:
[Media mounting by pi]
Identity=unix-user:pi
Action=org.freedesktop.udisks.filesystem-mount
ResultAny=yes

2. Fixing your init script: add a variable containing the user you would like to run udisks-glue as:
NAME=udisks-glue
PIDFILE=/var/run/udisks.pid
DAEMON="/usr/bin/udisks-glue"
DAEMONUSER=pi    # <-- add this line

Then modify the start-stop-daemon invocations to use the $DAEMONUSER variable:
start)
log_daemon_msg "Starting Automounter" "$NAME"
--> start-stop-daemon --start --exec $DAEMON --chuid $DAEMONUSER
log_end_msg $?
;;
stop)
log_daemon_msg "Stopping Automounter" "$NAME"
--> start-stop-daemon --stop --exec $DAEMON --user $DAEMONUSER
log_end_msg $?
rm -f $PIDFILE
;;

(NOTE: I removed the -- -p $PIDFILE part from the first invocation. Your regular user account probably won't have write permissions for /var/run, so you can either do what I did above or change the $PIDFILE variable to a path writable by your regular user.)

Comments on the steps you've taken:

1. This couldn't have worked. The $DAEMON variable is used as an argument for --exec in a start-stop-daemon invocation. That argument should be an executable, while exec is a shell builtin.
2. Doing that broke your init script. While starting udisks-glue that way worked, stopping it wouldn't, as start-stop-daemon would try to stop /path/to/your/helper/script.sh instead of the actual daemon (/usr/bin/udisks-glue). Putting that aside, when you start udisks-glue in daemon mode, it doesn't generate debug messages. If you ran the following command in an interactive shell:
# su pi -c "/usr/bin/udisks-glue -f"

you'd probably see something like:
Device file /dev/sdb1 inserted
Trying to automount /dev/sdb1...
Failed to automount /dev/sdb1: Not Authorized
Device file /dev/sdb inserted

which would've explained why your drives aren't mounted.
3. This was effectively the same as 2. One extra remark: the ampersand (&) at the end was redundant, as udisks-glue daemonizes by default.
4. Again, running udisks-glue in the foreground would've explained the problem for non-FAT filesystems:
Device file /dev/sdb1 inserted
Trying to automount /dev/sdb1...
Failed to automount /dev/sdb1: Mount option dmask=0 is not allowed
Device file /dev/sdb inserted

Also note that if you would like to change the owner of an ext4 mountpoint, you need to chown it after mounting.
I'm trying to make udisks-glue work on my Raspbian Raspberry Pi. This works fine if I manually start udisks-glue via ssh. However, I wish to start it automatically on startup.
Hence, a script at /etc/init.d/udisks-glue launches the daemon for me (as per instructions here). This works fine, but disks are mounted with root permissions (drwx------). Is it possible to make this script start the daemon as user pi, not root?
What I've tried
1) Modifying the script above, replacing

DAEMON="/usr/bin/udisks-glue"

with

DAEMON="exec su - pi -c /usr/bin/udisks-glue"

This failed to execute.
2) Replacing this line with a reference to a custom script, which then calls exec su - pi -c /usr/bin/udisks-glue. When I connect hard drives, they aren't mounted. However, there is the appearance of correctly running processes. Looking at ps aux | grep [u]disks, I can see udisks-glue running as pi (and two udisks-daemons running as root); I get the same ps output if I manually start udisks-glue, as above.
3) I tried editing /etc/rc.local, adding the line
su pi -c "/usr/bin/udisks-glue &"

This had the same result as in (2), with udisks-glue running as pi, but not functional.
4) As per this page, running udisks-glue as root, but giving permissions of mounts to all. This works for FAT filesystems, but fails to even mount ext4. (I'd prefer mounts to be owned by user pi anyway.)
| How can I make udisks-glue run at startup and mount drives as particular user? |
Do you have a block device/partition/LV that is not listed in /etc/fstab (and so looks like a candidate for udisksd management) but is encrypted or otherwise not easily identifiable?
Or maybe a CD drive or a memory card reader with a slow or non-existent detection of a "no media in drive" error? In the case of a memory card reader, a symptom of this situation might be I/O timeout errors for that device in dmesg output.
In these cases, you might want to exclude these problem devices from udisksd management, by creating an udev rule matching the problem device and adding the ENV{UDISKS_IGNORE}="1" attribute to it.
The rule could be as simple as
KERNEL=="sda6", ENV{UDISKS_IGNORE}="1"   # encrypted disk

or by device serial number:
SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="S467NX0KB24459Y", ENV{UDISKS_IGNORE}="1"

or by any valid combination of udev attributes.
You might want to read [/usr]/lib/udev/rules.d/*-udisks2.rules for examples and informative comments, but you should add your own rules to /etc/udev/rules.d/*.rules so it won't be overwritten by system updates. udev will read both directories, and if there is a file with the exact same name in both, the file in /etc/udev/rules.d will override the corresponding system file. In this case, you probably won't need to override the system default rules files; just add your own rule file with a non-overlapping name.
You can use any filename as long as it is in the correct directory and has the .rules suffix, but remember that the rules are executed in the default US-ASCII sorting order of the filenames, so there is a convention to add a number prefix to the filename to make the rule ordering explicit.
|
I'm analyzing systemd because I want to improve my system's boot speed. The number one service in the systemd-analyze blame list, by a clear gap, is udisks2.service with almost 10 seconds (those numbers might be misleading because of dependencies, but udisks doesn't have any). It's not a good solution to disable it since it's needed by another service:
$ cat /lib/systemd/system/udisks2.service
...
[Install]
WantedBy=graphical.target

I also tried disabling it in a test instance of Ubuntu in VirtualBox. It booted completely and without problems, but once the dbus-daemon got initialized, it automatically started udisks2 again.
From man udisksd :Users or administrators should never need to start
this daemon as it will be automatically started by dbus-daemon(1) or
systemd(1) whenever an application tries to access its D-Bus
interfaces

In the `udisks2.conf` manpage it is stated that you can set the `modules_load_preference` option to `ondemand`, which is the default. It seems that it's currently already in its most optimal form.
So the question is: "Is it possible to safely speed up the execution of /usr/lib/udisks2/udisksd?"
Any suggestion would be appreciated.
| Speeding up udisks2.service in linux |
Yes. Either

su -c 'udisksctl mount -b /dev/sdd --no-user-interaction' - thb

or

su - thb
udisksctl mount -b /dev/sdd --no-user-interaction
exit

will mount /dev/sdd on e.g. /media/thb/mydevice
without causing an unwanted GUI authentication dialog to pop up.
|
Logged in as the root user, udisksctl mount mounts my device at /media/root/mydevice. Alternately, logged in as another user, udisksctl mount mounts my device at /media/anotheruser/mydevice.
So far, so good. However, I would like to mix the two. Logged in as the root user, I would like udisksctl mount to mount my device at /media/anotheruser/mydevice. Reason: I want another user to be able to access my device.
In other words, logged in as root, I think that I want to do this: udisksctl --user=anotheruser mount. Unfortunately, udisksctl does not seem to have a --user option.
This does not work, either: USER=anotheruser udisksctl mount.
What should I do?
ADDITIONAL INFORMATION
Logged in as the root user, the exact command I am issuing is USER=thb udisksctl mount -b /dev/sda11.
I have thought of making a setuid wrapper, but this would not help, would it? The point of issuing the command as root is to skip the GUI authentication dialog udisksctl otherwise pops up.
Is there perhaps some D-Bus technique that would help? I have not yet learned D-Bus well. At some stage in the control flow, whether at the Udisks stage, the D-Bus stage, or some other stage, I need to persuade the system to act for another user without causing an unwanted GUI authentication dialog to pop up.
This should be possible for the root user to do, shouldn't it?
My platform is Debian 8 jessie.
| How to cause udisksctl to act for another user? |
I feel your pain... I also love the sudo-less power of udisksctl, I use UDisks2 in several snippets and projects, and I also hate how "machine-unfriendly" its output is.
That said, one approach I'm leaning towards is not to parse the output of udisksctl mount, loop-*, etc. at all: use those for actions only, and leave parsing to udisksctl info, udevadm or even lsblk -lnpb (which can even have JSON output if you're willing to use jq!).
If you do parse udisksctl output, at least prepend the command with LC_ALL=C to avoid localized messages by using the fixed C "virtual" locale; that way you guarantee that the strings you match won't change with the user's environment.
Examples:

Listing the (writable) partitions with a known filesystem on a drive device (if any):

lsblk "$device" -lnpb --output NAME,RO,FSTYPE | awk '$2 == 0 && $3 {print $1}'

Getting the mountpoint of one such partition after mounting:

LC_ALL=C udisksctl info -b "$partition" | grep -Po '^ *MountPoints: *\K.*'

No more grepping human-intended messages!
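The extraction pattern can be exercised offline against canned text shaped like the real output (the sample mount point below is made up):

```shell
# Simulated line from `udisksctl info -b <partition>`:
sample='    MountPoints:        /run/media/user/STICK'
printf '%s\n' "$sample" | LC_ALL=C grep -Po '^ *MountPoints: *\K.*'
# prints /run/media/user/STICK
```

This assumes GNU grep built with PCRE support (-P), which is the norm on Linux distros.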
(or, better yet, make your voice heard in this still open 2017 request for a script-friendly interface)
|
udisksctl is my tool of choice when dealing with file system images (recent example, but I've been doing this all over the place).
The dance typically looks like
fallocate -l ${img_size} filesystem.img
mkfs.${fs} filesystem.img

# Set up loop device as regular user
loopback=$(udisksctl loop-setup -b "${img_file}" | sed 's/.* \([^ ]*\)\.$/\1/')

# Mount as regular user
mounted=$(udisksctl mount -b "${loopback}" | sed 's/^.* \([^ ]*\)$/\1/')

# Do the testing/benchmarking/file management
# e.g.:
cp -- "${files[@]}" "${mounted}"

Quite frankly, I have a bad feeling about the way I parse the output of udisksctl; these are clearly human-aimed strings:

Mapped file filesystem.img as /dev/loop0.
Mounted /dev/loop1 at /run/media/marcus/E954-81FB

And I don't think anyone considers their actual format "API". So, my scripts might break in the future! (Not to mention the nasal demons I invite if the image file name contains line breaks.)
udisksctl doesn't seem to have a "porcelain" output option or similar. Is there an existing method that does udisksctl's job of loopback mounting with user privilege through udisks2, with a proper, unambiguous output?
| udisksctl: get loop device and mount point without resorting to parsing localized output? |
Unfortunately I wasn't able to find the reason why udev and udisks2 didn't work together. But I found a solution for my problem here. Below is a simple example of how to implement automounting of an NTFS USB HDD. First is a script, mount.sh, to mount the drive:
#!/bin/bash
mkdir -p /media/usbhdd
mount -t ntfs-3g -o locale=en_IE.UTF-8,fmask=0113,dmask=0002,uid=storage-user,gid=storage-group /dev/mx1 /media/usbhdd

Then we create a systemd unit in /etc/systemd/system/mount-hdd.service:
[Unit]
Description=mount usb hdd
[Service]
Type=forking
ExecStart=/usr/local/scripts/storage/mount.sh
[Install]
WantedBy=multi-user.target

And finally the udev rules:
ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd*", ATTRS{serial}=="<serial_number>", SYMLINK+="mx%n"
ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd*1", ATTRS{serial}=="<serial_number>", RUN+="/bin/systemctl start mount-hdd"
ACTION=="remove", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", RUN+="/bin/umount /media/usbhdd", RUN+="/bin/rmdir /media/usbhdd" |
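One thing the rule and unit above assume is that /dev/mx1 already exists when mount-hdd.service fires. A hedged sketch of a guard that mount.sh could grow (mount_if_present is my own helper name; the actual mount line is left commented because it needs root and the real device):

```shell
#!/bin/bash
# Bail out early if the udev symlink has not appeared yet.
mount_if_present() {
    dev=$1 mnt=$2
    if [ ! -b "$dev" ]; then
        echo "device $dev not present, nothing to mount" >&2
        return 1
    fi
    mkdir -p "$mnt"
    # mount -t ntfs-3g -o locale=en_IE.UTF-8,fmask=0113,dmask=0002 "$dev" "$mnt"
}

mount_if_present /dev/mx1 /tmp/usbhdd || echo "mount skipped"
```

Returning nonzero lets systemd record the unit as failed instead of silently doing nothing.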
In Debian Wheezy I had a special rule for my ntfs usb hdd. When it is inserted it is mounted in /media under a specific sub-folder.
ACTION=="add", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", SYMLINK+="mx%n"
ACTION=="add", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", RUN+="/bin/mount <options>", OPTIONS="last_rule"
ACTION=="remove", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", RUN+="/bin/umount <options>"

After I updated to Jessie it stopped working. I found out that after the changes in udev you cannot use mount directly, and it is recommended to use either udisks2 or a self-written systemd unit. I chose udisks2 and rewrote the rule as follows:
ACTION=="add", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", SYMLINK+="mx%n"
ACTION=="add", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", RUN+="/bin/su storage_user -c '/usr/bin/udisksctl mount --block-device /dev/mx1 --filesystem-type ntfs --options locale=en_IE.UTF-8,fmask=0113,dmask=0002 --no-user-interaction'", OPTIONS="last_rule"
ACTION=="remove", SUBSYSTEMS=="usb", ATTRS{serial}=="<serial_number>", RUN+="/usr/bin/udisksctl unmount --block-device /dev/mx1 --no-user-interaction"

It doesn't work. In syslog I see:
Error looking up object for device /dev/mx1

But if I run this command from the CLI, it works fine. I believe that because of the asynchronous nature of systemd services, /dev/mx1 is not ready when udisks2 is trying to mount the USB HDD. What rule should I write instead?
Is there any good guide on the Internet on how to write custom automounting rules, especially for NTFS filesystems? | udev+udisks2: udisksctl gives 'Error looking up object for device'
When you create a filesystem that supports file ownership, its root directory starts owned by root (with all the mkfs that I remember seeing). The ownership of the mount point and the user who did the mounting are irrelevant to the ownership of the root directory (or any other file) on that filesystem. It would be problematic after all if mounting a filesystem in a different place changed the privileges required to access each file on it.
If you want to create files as a non-root user, you'll have to give that user write permission to some directory on that filesystem.
|
When I mount blank btrfs partition in Dolphin, I get "Permission denied" on write.
You can see that it's mounted in the /run/media/%username% dir, which is correct, but the owner is root.
[doctor@doctoror doctor]$ pwd
/run/media/doctor
[doctor@doctoror doctor]$ ls -l
total 4
dr-xr-xr-x 1 root root 0 січ 1 1970 Home
[doctor@doctoror doctor]$ mkdir Home/tmp
mkdir: cannot create directory ‘Home/tmp’: Permission denied | udisks2: permission denied |
As @don_crissti suggests, I was indeed using udisks 1. I should've done
udisksctl mount -b /dev/sda7 |
I'm running Linux Mint 17.1, based on Ubuntu Trusty.
If I run
udisks --mount /dev/sda7

then the partition is mounted in /media and not in /media/$USER as it should be. What am I doing wrong?
| Mount a partition in Terminal with udisks |
First read the man page (man udisksctl)! It says you should use --options.
If this is not working, then subvol is considered an unsafe option (IMHO, it should). If so, you can use the mount command or you can set the options directly in /etc/fstab for a given mount point.
|
I have a btrfs partition with several subvolumes, one of which is /. I have another subvolume with some other stuff that I also want to mount, but trying to mount it with udisksctl:
udisksctl mount -b /dev/mapper/container -o subvol=other

gives me this error:

Error mounting /dev/dm-0: GDBus.Error:org.freedesktop.UDisks2.Error.AlreadyMounted: Device /dev/dm-0 is already mounted at `/'.

It mounts fine with mount /dev/mapper/container -o subvol=other /mnt. Is there anything I can do to get this to work with udisksctl?
| udisksctl mount a different subvolume of an already mounted btrfs partition |
The maintainer of udiskie pointed out that stale mount points not being cleaned up is a bug in udisks2. In fact, after more tests, I can confirm that sometimes the mount point is deleted.
|
udisks2 with udiskie are set up to automount USB storage devices. Connecting a pen drive labeled MYDRIVE, it is mounted to:
/media/MYDRIVE

When pulling the drive without prior unmounting, the above directory persists.
Is it possible to get stale mount points be deleted right away?
I've actually seen that happening in an several years old installation of Ubuntu. So it is possible, though perhaps not with udisks2: I don't know what software for managing removable media is part of that installation.
Update: Raised the issue on GitHub.
| Make udisks2 clean up stale mount points? |
I assume you're referring to a btrfs raid1 filesystem created on top of two block devices created with something like mkfs.btrfs -L Raid1 -d raid1 /dev/sd* /dev/sd*
Reproduced this setup locally (based on Funtoo instructions from here):
$ dd if=/dev/zero of=/tmp/btrfs-vol0.img bs=1G count=1
$ dd if=/dev/zero of=/tmp/btrfs-vol1.img bs=1G count=1
$ sudo losetup /dev/loop0 /tmp/btrfs-vol0.img
$ sudo losetup /dev/loop1 /tmp/btrfs-vol1.img

Created the fs:
$ sudo mkfs.btrfs -L Raid1 -d raid1 /dev/loop0 /dev/loop1

Both loop0 and loop1 do appear in nautilus and unity (using Ubuntu 14.10 here). This is not really related to btrfs itself though, but rather due to the way udisks and udev work.
There are two ways to hide the devices from GUI tools, as mentioned below. Solution 1 (preferred) will only hide the ghost device, solution 2 will hide both devices from GUI tools.
1. Create a udev rule to ignore the device(s)
Create a file in /etc/udev/rules.d (e.g. /etc/udev/rules.d/99-local-udisks-btrfs.rules), and add a rule like this one:

KERNEL=="sdh1", ENV{UDISKS_IGNORE}:="1"

Then run sudo udevadm trigger to trigger the rule.
For more info, see the following links:
https://wiki.archlinux.org/index.php/udev
https://askubuntu.com/questions/124094/how-to-hide-an-ntfs-partition-from-ubuntu
2. Add it to /etc/fstab
e.g.

LABEL=rootfs /          btrfs defaults,subvol=@,autodefrag     0 0
LABEL=rootfs /home      btrfs defaults,subvol=@home,autodefrag 0 0
LABEL=Raid1  /tmp/raid1 btrfs defaults                         0 0

Use the filesystem LABEL= or UUID=, which you can retrieve from:

$ sudo btrfs filesystem show [<path>|<uuid>|<device>|label]

Label: 'Raid1' uuid: 98780c23-5330-4357-8fb8-ef3307fdabc3
Total devices 2 FS bytes used 112.00KiB
devid 1 size 1.00GiB used 231.75MiB path /dev/loop0
devid 2 size 1014.19MiB used 211.75MiB path /dev/loop1
Btrfs v3.14.1

Both volumes will disappear from unity and nautilus immediately after saving changes to /etc/fstab. This will not, however, work if your mount point is under /media.
|
I mounted 2 drives as a RAID1 btrfs array (btrfs v3.12, Ubuntu 14.04). Everything's working fine except nautilus and other GUI-based apps see two disks, both labeled "Raid1". One is mounted (the working btrfs disk), the other is unmounted.
Does anyone know why this "ghost" volume exists or how to get rid of it?
Edit - Adding additional details:
The result of "sudo btrfs filesystem show":
$ sudo btrfs filesystem show
Label: Raid1 uuid: 3d12bc7b-61b1-4dea-b78b-ef9a44a6b698
Total devices 2 FS bytes used 2.39TiB
devid 1 size 3.64TiB used 2.43TiB path /dev/sdg1
devid 2 size 3.64TiB used 2.43TiB path /dev/sdh1
Btrfs v3.12

My fstab:
UUID=3d12bc7b-61b1-4dea-b78b-ef9a44a6b698 /media/btr0 btrfs defaults,noauto 0 0

All fstab seems to do is mount the device as /media/btr0. If I comment out the fstab entry it automatically gets mounted as /media/fred/Raid1.
| btrfs RAID1 array shows as two disks |
Debian uses tasksel to install groups of software for specific system roles. The command gives you some information:
> tasksel --list-tasks
i desktop Graphical desktop environment
u web-server Web server
u print-server Print server
u dns-server DNS server
u file-server File server
u mail-server Mail server
u database-server SQL database
u ssh-server SSH server
u laptop Laptop
u manual manual package selection

The command above lists all tasks known to tasksel. The desktop line should have an i in front, indicating it is installed. If that is the case, you can have a look at all the packages this task usually installs:
> tasksel --task-packages desktop
twm
eject
openoffice.org
xserver-xorg-video-all
cups-client
…

On my system the command outputs 36 packages. You can uninstall them with the following command:
> apt-get purge $(tasksel --task-packages desktop)

This takes the list of packages (the output of tasksel) and feeds it into apt-get's purge command. apt-get then tells you what it wants to uninstall from the system. If you confirm, everything will be purged from your system.
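As a quick self-contained illustration of what the command substitution does (list_packages here is a made-up stand-in for tasksel --task-packages desktop):

```shell
# Stand-in for `tasksel --task-packages desktop`.
list_packages() { printf '%s\n' twm eject cups-client; }

pkgs=$(list_packages)
# Unquoted expansion is intentional: `apt-get purge $pkgs` would receive
# each package name as a separate argument.
set -- $pkgs
echo "$# packages would be purged"
```

Before committing, you can also preview the whole thing with apt-get's simulate flag, i.e. apt-get -s purge $(tasksel --task-packages desktop).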
|
I just did my first install of any Linux OS, and I accidentally selected "Desktop GUI" during the install, but I want to build everything myself. Is there any way I can remove the GUI environment without re-installing the OS?
| Can I Remove GUI From Debian? |
Personally, I don't like yum plugins because they don't work a lot of the time, in my experience.
You can use the yum history command to view your yum history.
[root@testbox ~]# yum history
Loaded plugins: product-id, rhnplugin, search-disabled-repos, subscription-manager, verify, versionlock
ID | Login user | Date and time | Action(s) | Altered
----------------------------------------------------------------------------------
19 | Jason <jason> | 2016-06-28 09:16 | Install | 10

You can find info about the transaction by doing yum history info <transaction id>. So:
yum history info 19 would tell you all the packages that were installed with transaction 19 and the command line that was used to install the packages. If you want to undo transaction 19, you would run yum history undo 19.
Alternatively, if you just wanted to undo the last transaction you did (you installed a software package and didn't like it), you could just do yum history undo last
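If you want to script the lookup of the most recent transaction ID (say, to log it before an undo), it can be parsed out of the yum history table; a hedged sketch using a captured sample instead of running yum:

```shell
# Grab the first (most recent) numeric transaction ID from the table.
sample='ID     | Login user       | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
    19 | Jason <jason>    | 2016-06-28 09:16 | Install        |   10'
last_id=$(printf '%s\n' "$sample" \
  | awk -F'|' '$1 ~ /^[[:space:]]*[0-9]+[[:space:]]*$/ {gsub(/[[:space:]]/, "", $1); print $1; exit}')
echo "most recent transaction: $last_id"
```

On a live system, pipe `yum history` into the awk instead of the sample (column spacing can vary between yum versions).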
|
I am using CentOS 7. I installed okular, which is a PDF viewer, with the command:
sudo yum install okular

As you can see in the picture below, it installed 37 dependent packages to install okular.

But I wasn't satisfied with the features of the application and I decided to remove it. The problem is that if I remove it with the command:
sudo yum autoremove okular

It only removes four dependent packages.

And if I remove it with the command:
sudo yum remove okular

It removes only one package, which is okular.x86_64.
Now, my question is: is there a way to remove all 37 installed packages with a single command, or do I have to remove them one by one?
| How to remove all installed dependent packages while removing a package in CentOS 7? |
Depending on the configuration of the repository you wish to remove, apt list --installed might provide enough information to identify packages you need to uninstall or downgrade. Another option, if the repository defines a unique “Origin”, is to use aptitude search '~i ~Oorigin' (replacing origin as appropriate).
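If you're unsure which origin a given package came from, apt-cache policy <package> prints a "release" line with an o= field; a sketch of extracting it, parsing sample text here (the exact field layout can vary between apt versions, and "ExampleOrigin" is made up):

```shell
sample=' *** 1.2-1 500
        500 http://deb.example.com/debian stable/main amd64 Packages
            release o=ExampleOrigin,a=stable,n=stable,c=main'
origin=$(printf '%s\n' "$sample" | sed -n 's/.*release .*o=\([^,]*\).*/\1/p')
echo "origin: $origin"
```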
(This is a generic answer; if you edit your question to specify exactly which source you want to remove, I can add a specific answer.)
|
I previously added some external sources to /etc/apt/sources.list.d but I now want to remove one of them. I also want to:

remove all packages solely from that source
revert all packages to versions in my original source(s)
alternatively, make a list of all packages from this source so I can perform this procedure manually

How can I do this?
| How can I uninstall all packages from one Debian source? |
If you installed them today, they’ll all be listed in /var/log/apt/history.log. Look through that, identify the packages you don’t want, and remove them.
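To turn that log into a package list you can feed back to apt-get, the Install: lines can be split apart; a hedged sketch run on a sample line rather than the real /var/log/apt/history.log:

```shell
line='Install: libx11-dev:amd64 (2:1.6.3-1), x11proto-core-dev:amd64 (7.0.31-1, automatic)'
# Strip the prefix, split on commas, keep the package name before ":arch".
pkgs=$(printf '%s\n' "$line" \
  | sed 's/^Install: //' \
  | tr ',' '\n' \
  | awk '/:/ {sub(/:.*/, "", $1); print $1}')
printf '%s\n' "$pkgs"
```

On a real system you would grep the Install: lines out of the log first; the sample version string is made up, and the log format may differ slightly across apt releases.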
|
I installed the development packages for X11 today and now want to remove them. I do not remember the exact name of the package that I installed. I installed by running apt-get install ... and now want to remove the development package using apt-get purge --auto-remove <name of package>. Any suggestions?
| How to uninstall or remove recently-installed packages |
There is a similar post on Ask Ubuntu:

snapd is seeded in the default install so as to enable snaps to be
installed without further work. However, no part of the base install
is a snap (you can verify via snap list, it should return no snaps).
Because of this, snapd can be safely removed with no ill side effects:
sudo apt purge snapd

It will probably leave some dependencies lying around. If you want to
remove them as well:
sudo apt autoremove

The answer is that it should be safe to remove, if you don't intend to use snap.
I don't, and removed it, and nothing bad happened.
|
I don't use snap and never install snap packages.
On a new Ubuntu Server 18.04, snap list shows:

No snaps are installed yet.

Is it safe to remove it?
I'm not sure what weird dependencies are going on in the background - so I don't want to accidentally break the system now or in the future. (I want to be sure, because on ubuntu desktop, even though I don't use snap, the OS itself does.)
| Safe to remove snap on Ubuntu Server? |
To uninstall part of a package, your approach is correct: if you know you don’t (and won’t) need a file shipped in a package, you can delete it (after all, it’s your system). However if you leave it at that, the next time the package is upgraded, the deleted files will be restored (unless they are configuration files in /etc). To avoid that, you should tell dpkg that you don’t want the files you removed: add a configuration file in /etc/dpkg/dpkg.cfg.d, with lines of the form
path-exclude=/path/to/foo

for every file you deleted.
As Marcus says, this isn’t usually a great idea, and the dpkg man page warns against it too. But there are circumstances where it is appropriate; one common setup is to remove documentation shipped with packages, or man pages in languages which no one using your computer needs or wants. I have a /etc/dpkg/dpkg.cfg.d/locales file containing
# Drop locales except English and French
path-exclude=/usr/share/locale/*
path-include=/usr/share/locale/en/*
path-include=/usr/share/locale/fr/*
path-include=/usr/share/locale/locale.alias

# Drop translated manpages except English and French
path-exclude=/usr/share/man/*
path-include=/usr/share/man/man[1-9]/*
path-include=/usr/share/man/en*/*
path-include=/usr/share/man/fr*/*

to avoid installing locale files and man pages in languages other than English or French.
Aggregate packages like bsdgames are another situation where file removals can be useful; the disk space savings are probably not worth it, but removing candidates from your path can be worthwhile (assuming you are the only user of your system).
|
I have installed the BSD games package. Many of the games are awful or broken, so I wish to uninstall some of them without uninstalling others. Is there an easy way to do that? Currently, I'm sudo rming them from /usr/share/applications/bsdgames and /usr/games/.
| How can I uninstall part of a package? |
I solved it by making my own install-info command and putting it before /usr/bin in $PATH. The script was
#!/bin/sh
/usr/bin/install-info "$@" || true |
I'm running sid, and in the course of trying to cross-grade my system from i386 to amd64 I came across some ancient packages that I couldn't remove. Some background: I've had this system since potato, or maybe earlier.
There are about a hundred packages like this, so I'd like a generic or scriptable answer. Here's one example:
bminton:/var/cache/apt/archives# dpkg --purge libstdc++2.10-dev
(Reading database ... 1352516 files and directories currently installed.)
Removing libstdc++2.10-dev (1:2.95.4-27) ...
install-info: No dir file specified; try --help for more information.
dpkg: error processing package libstdc++2.10-dev (--purge):
subprocess installed pre-removal script returned error exit status 1
Errors were encountered while processing:
libstdc++2.10-dev

The prerm script /var/lib/dpkg/info/libstdc++2.10-dev.prerm contains the following:
#! /bin/sh -e

install-info --quiet --remove iostream-2.95

Manually running install-info --quiet --remove iostream-2.95 gives the following error:
install-info: No dir file specified; try --help for more information.

| How can I remove a bunch of ancient packages on Debian?
You've fallen into the "default" trap; yum list will (from man yum, under "List Options"):

List all available and installed packages.

My emphasis on available. If you only want to see the packages that you have currently installed, use:
yum list installedAdditionally, be careful with constructs like:
sudo yum list | grep devtoolset-7*

Your shell will attempt to expand devtoolset-7* as a wildcard, and it could match one or more filenames in your current directory, confusing your results. Instead, yum can take a wildcard to search for:
sudo yum list installed 'devtoolset-7*'

(Notice the single quotes protecting the wildcard from the shell.)
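The quoting point is easy to demonstrate without yum at all; whenever a matching file exists in the current directory, the shell's expansion happens before the command ever sees the pattern:

```shell
dir=$(mktemp -d)
cd "$dir"
touch devtoolset-7-notes.txt
unquoted=$(printf '%s\n' devtoolset-7*)    # the shell expands the glob first
quoted=$(printf '%s\n' 'devtoolset-7*')    # the quotes keep it literal
echo "unquoted: $unquoted"
echo "quoted:   $quoted"
cd / && rm -rf "$dir"
```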
|
I am trying to remove devtoolset-7 from my CentOS system. For this I am running the following commands:
sudo yum remove devtoolset-7
sudo yum remove devtoolset-7-libatomic-devel
sudo yum remove devtoolset-7-libatomic-devel

After running these commands, I list the devtoolset packages with:
sudo yum list | grep devtoolset-7*

devtoolset-7 is still present in there. Here is the list I got:
devtoolset-7.x86_64 7.1-4.el7 centos-sclo-rh
devtoolset-7-all.x86_64 7.0-5.el7 centos-sclo-rh
devtoolset-7-binutils.x86_64 2.28-11.el7 centos-sclo-rh
devtoolset-7-binutils-devel.x86_64 2.28-11.el7 centos-sclo-rh
devtoolset-7-build.x86_64 7.1-4.el7 centos-sclo-rh
devtoolset-7-dockerfiles.x86_64 7.1-4.el7 centos-sclo-rh
devtoolset-7-dwz.x86_64 0.12-1.1.el7 centos-sclo-rh
devtoolset-7-dyninst.x86_64 9.3.2-3.el7 centos-sclo-rh
devtoolset-7-dyninst-devel.x86_64 9.3.2-3.el7 centos-sclo-rh
devtoolset-7-dyninst-doc.x86_64 9.3.2-3.el7 centos-sclo-rh
devtoolset-7-dyninst-static.x86_64 9.3.2-3.el7 centos-sclo-rh
devtoolset-7-dyninst-testsuite.x86_64 9.3.2-3.el7 centos-sclo-rh
devtoolset-7-elfutils.x86_64 0.170-5.el7 centos-sclo-rh
devtoolset-7-elfutils-devel.x86_64 0.170-5.el7 centos-sclo-rh
devtoolset-7-elfutils-libelf.x86_64 0.170-5.el7 centos-sclo-rh
devtoolset-7-elfutils-libelf-devel.x86_64 0.170-5.el7 centos-sclo-rh
devtoolset-7-elfutils-libs.x86_64 0.170-5.el7 centos-sclo-rh
devtoolset-7-gcc.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-gcc-c++.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-gcc-gdb-plugin.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-gcc-gfortran.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-gcc-plugin-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-gdb.x86_64 8.0.1-36.el7 centos-sclo-rh
devtoolset-7-gdb-doc.noarch 8.0.1-36.el7 centos-sclo-rh
devtoolset-7-gdb-gdbserver.x86_64 8.0.1-36.el7 centos-sclo-rh
devtoolset-7-go.x86_64 7.0-5.el7 centos-sclo-rh
devtoolset-7-libasan-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libatomic-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libcilkrts-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libgccjit.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libgccjit-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libgccjit-docs.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libitm-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-liblsan-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libmpx-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libquadmath-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libstdc++-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libstdc++-docs.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libtsan-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-libubsan-devel.x86_64 7.3.1-5.15.el7 centos-sclo-rh
devtoolset-7-llvm.x86_64 7.0-5.el7 centos-sclo-rh
devtoolset-7-ltrace.x86_64 0.7.91-2.el7 centos-sclo-rh
devtoolset-7-make.x86_64 1:4.2.1-3.el7 centos-sclo-rh
devtoolset-7-memstomp.x86_64 0.1.5-5.1.el7 centos-sclo-rh
devtoolset-7-oprofile.x86_64 1.2.0-2.el7.1 centos-sclo-rh
devtoolset-7-oprofile-devel.x86_64 1.2.0-2.el7.1 centos-sclo-rh
devtoolset-7-oprofile-jit.x86_64 1.2.0-2.el7.1 centos-sclo-rh
devtoolset-7-perftools.x86_64 7.1-4.el7 centos-sclo-rh
devtoolset-7-runtime.x86_64 7.1-4.el7 centos-sclo-rh
devtoolset-7-rust.x86_64 7.0-5.el7 centos-sclo-rh
devtoolset-7-strace.x86_64 4.17-7.el7 centos-sclo-rh
devtoolset-7-systemtap.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-client.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-devel.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-initscript.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-runtime.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-sdt-devel.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-server.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-systemtap-testsuite.x86_64 3.1-4s.el7 centos-sclo-rh
devtoolset-7-toolchain.x86_64 7.1-4.el7 centos-sclo-rh
devtoolset-7-valgrind.x86_64 1:3.13.0-11.el7 centos-sclo-rh

Please tell me the right way to remove devtoolset-7.
| Unable to remove devtoolset-7 from my CentOS System |
Use an interactive tool that lets you easily get information about a package (its description, its dependencies, what depends on it, …). You can use aptitude in a text terminal. There are also GUI programs for that.
Beware that it's difficult to know whether a package is necessary. Sometimes a package may be used in a way that isn't obvious to the uninitiated. With Linux kernels between 2.6.30 and 3.19, file access times are not saved accurately by default. Even on systems that are set up to save file access times, the information may not be complete, e.g. for files that are accessed during early boot before the root partition is mounted read-write (for example, based on access times alone, you'd end up reporting the kernel as unused).
Programs that are installed but not running only hurt if you're short on disk space. Disk space was mildly expensive 20 years ago, but today, installed programs take up a negligible amount in most scenarios, and this does not justify a hunt for unused programs. If you are short on disk space (e.g. on a cheap VPS), you can use the following command to list packages by size:
dpkg-query -W -f='${Installed-Size;8} ${Package}\n' | sort -n

Programs that are installed and running but not actually used can hurt because they use memory or are a security risk. However, there's no way to determine that automatically; you really have to understand what the program is doing.
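Returning to the size listing: a self-contained variation (sample data standing in for the dpkg-query output, with made-up sizes) that shows the biggest package first:

```shell
sample='   12345 linux-firmware
     220 tiny-package
   98000 libreoffice-core'
biggest=$(printf '%s\n' "$sample" | sort -rn | head -n 1 | awk '{print $2}')
echo "largest installed package: $biggest"
```

On a real system, replace the printf with the dpkg-query pipeline above.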
|
I've done a bit of searching, and have come to no perfect answer, so I'm wondering, is there a good way to uninstall (and purge dependencies of) unused applications/programs in my Ubuntu Server install?
When I first installed 16.04, there were a ton of programs that were pre-installed, and I know they're not all useless, but how do I get rid of the ones that I'm never going to use (programs that haven't been used or run for since install)? Because when I use
apt list --installed

there are so many programs that I can't even scroll back far enough to see the first ones.
Any suggestions?
| Removing unused applications/programs |
Pacman saves important configuration files when removing certain applications and names them with the extension: .pacsave. To prevent the creation of these backup files use the -n option:
# pacman -Rn package_name

Note: Pacman will not remove configurations that the application itself creates (for example "dotfiles" in the home folder).
Basically, pacman will not remove configuration files in your home folder; you will have to remove them manually. The files created by the application have no impact on the system after you remove the package, so they do no harm. That is just how it works.
Usually all the configuration files created by apps are in the ~/.config folder. You can just type rm -r in a terminal, drag & drop the folders into the terminal, and press Enter, or do whatever works best for you.
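To check for leftovers non-destructively first, a listing loop like this can help (sketch only: it probes a throwaway directory tree standing in for $HOME, and "atom" is just the example name):

```shell
app=atom
home=$(mktemp -d)                 # stand-in for "$HOME" in this demo
mkdir -p "$home/.config/$app"
found=""
for d in ".$app" ".config/$app" ".cache/$app"; do
  if [ -d "$home/$d" ]; then
    printf 'leftover: ~/%s\n' "$d"
    found="$found $d"
  fi
done
rm -rf "$home"
```

Against your real home directory, drop the mktemp line and use "$HOME" directly; review the listed paths before deleting anything.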
|
I had atom installed on the system and tried to remove it with:
pacman -Rs atom

I still have a .atom folder in /home/user and another Atom folder in /home/user/.config; I'm not sure whether there's any more left anywhere else!
How do I remove programs completely, together with all the folders and files they created automatically over time?
| How to remove programs completely on Arch Linux, including files that it created in home directories? |
checkinstall is the Debian way to make the Debian package manager aware of packages that you build through configure and make && make install. Apparently something went wrong during installation via checkinstall, and your build is not properly registered as an installed package.
You may uninstall the software in several ways. Some packages provide an uninstall target (i.e. make uninstall). If not, you have to remove the corresponding files by hand. Using find and searching for a suitable -mtime or -mmin might be most promising. If not, you may install vim into a temporary base directory and use the result as a pattern to search for files to delete.
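The timestamp idea can also be approximated with find -newer against a reference file; a toy demonstration in a temporary directory (the filenames are made up):

```shell
work=$(mktemp -d)
touch "$work/reference"          # e.g. created just before `make install`
sleep 1
touch "$work/installed-later"    # stands in for a freshly installed file
newer=$(find "$work" -newer "$work/reference" -type f -exec basename {} \;)
echo "files newer than reference: $newer"
rm -rf "$work"
```

In practice you would touch the reference file immediately before installing, then run find over /usr/local; review the list carefully before removing anything.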
|
I downloaded this Vim release and extracted the tarball on Ubuntu 16.04. Then I switched to the vim directory and ran sudo checkinstall; the procedure ended with:
Makefile:2412: recipe for target 'installpack' failed
make[1]: *** [installpack] Error 1
make[1]: Leaving directory '/home/mudde/Downloads/vim-8.0.0326/src'
Makefile:36: recipe for target 'install' failed
make: *** [install] Error 2

**** Installation failed. Aborting package creation.

Restoring overwritten files from backup...OK

Cleaning up...OK

Bye.

It mentioned there were errors, but I can start /usr/local/bin/vim.
vim --version
VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Feb 12 2017 19:18:51)
Included patches: 1-326

Now I want to remove it, but using the package manager does not work:
sudo apt-get remove vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'vim' is not installed, so not removed
The following packages were automatically installed and are no longer required:
feh libexif12 libjpeg-progs libjpeg9 linux-headers-4.4.0-57 linux-headers-4.4.0-57-generic
linux-image-4.4.0-57-generic linux-image-extra-4.4.0-57-generic lua-lgi menu rlwrap
vim-gui-common vim-runtime
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.

How do I remove vim manually? Can I delete all these files and directories, or should I exclude some? Or, on the other hand, did I miss something?
$ sudo find /etc /usr /lib /var -name "*vim*" -prune
/etc/vim
/usr/share/man/man5/apparmor.vim.5.gz
/usr/share/man/man5/vim-registry.5.gz
/usr/share/man/it/man1/vimdiff.1.gz
/usr/share/man/it/man1/vim.1.gz
/usr/share/man/it/man1/rvim.1.gz
/usr/share/man/it/man1/gvim.1.gz
/usr/share/man/it/man1/gvimtutor.1.gz
/usr/share/man/it/man1/evim.1.gz
/usr/share/man/it/man1/gvimdiff.1.gz
/usr/share/man/it/man1/rgvim.1.gz
/usr/share/man/it/man1/vimtutor.1.gz
/usr/share/man/fr/man1/vimdiff.1.gz
/usr/share/man/fr/man1/vim.1.gz
/usr/share/man/fr/man1/rvim.1.gz
/usr/share/man/fr/man1/gvim.1.gz
/usr/share/man/fr/man1/gvimtutor.1.gz
/usr/share/man/fr/man1/evim.1.gz
/usr/share/man/fr/man1/gvimdiff.1.gz
/usr/share/man/fr/man1/rgvim.1.gz
/usr/share/man/fr/man1/vimtutor.1.gz
/usr/share/man/pl/man1/vimdiff.1.gz
/usr/share/man/pl/man1/vim.1.gz
/usr/share/man/pl/man1/rvim.1.gz
/usr/share/man/pl/man1/gvim.1.gz
/usr/share/man/pl/man1/gvimtutor.1.gz
/usr/share/man/pl/man1/evim.1.gz
/usr/share/man/pl/man1/gvimdiff.1.gz
/usr/share/man/pl/man1/rgvim.1.gz
/usr/share/man/pl/man1/vimtutor.1.gz
/usr/share/man/ru/man1/vimdiff.1.gz
/usr/share/man/ru/man1/vim.1.gz
/usr/share/man/ru/man1/rvim.1.gz
/usr/share/man/ru/man1/gvim.1.gz
/usr/share/man/ru/man1/gvimtutor.1.gz
/usr/share/man/ru/man1/evim.1.gz
/usr/share/man/ru/man1/gvimdiff.1.gz
/usr/share/man/ru/man1/rgvim.1.gz
/usr/share/man/ru/man1/vimtutor.1.gz
/usr/share/man/ja/man1/vimdiff.1.gz
/usr/share/man/ja/man1/vim.1.gz
/usr/share/man/ja/man1/rvim.1.gz
/usr/share/man/ja/man1/gvim.1.gz
/usr/share/man/ja/man1/gvimtutor.1.gz
/usr/share/man/ja/man1/evim.1.gz
/usr/share/man/ja/man1/gvimdiff.1.gz
/usr/share/man/ja/man1/rgvim.1.gz
/usr/share/man/ja/man1/vimtutor.1.gz
/usr/share/man/man1/vimdiff.1.gz
/usr/share/man/man1/vim.1.gz
/usr/share/man/man1/rvim.1.gz
/usr/share/man/man1/vim-addon-manager.1.gz
/usr/share/man/man1/vim-addons.1.gz
/usr/share/man/man1/vimdot.1.gz
/usr/share/man/man1/gvim.1.gz
/usr/share/man/man1/gvimtutor.1.gz
/usr/share/man/man1/evim.1.gz
/usr/share/man/man1/gvimdiff.1.gz
/usr/share/man/man1/rgvim.1.gz
/usr/share/man/man1/vimtutor.1.gz
/usr/share/lintian/overrides/vim-common
/usr/share/lintian/overrides/vim-gui-common
/usr/share/doc/vim-common
/usr/share/doc/vim-addon-manager
/usr/share/doc/vim-gui-common
/usr/share/doc/vim-runtime
/usr/share/doc/mercurial-common/examples/vim
/usr/share/bash-completion/completions/vim-addon-manager
/usr/share/pixmaps/vim-32.xpm
/usr/share/pixmaps/vim-16.xpm
/usr/share/pixmaps/vim-48.xpm
/usr/share/pixmaps/gvim.svg
/usr/share/applications/vim.desktop
/usr/share/applications/gvim.desktop
/usr/share/icons/hicolor/48x48/apps/gvim.png
/usr/share/icons/hicolor/scalable/apps/gvim.svg
/usr/share/icons/locolor/32x32/apps/gvim.png
/usr/share/icons/locolor/16x16/apps/gvim.png
/usr/share/vim
/usr/share/rubygems-integration/all/specifications/vim-addon-manager-0.5.3.gemspec
/usr/share/cmake-3.5/editors/vim
/usr/share/gettext/styles/po-vim.css
/usr/bin/vim-addon-manager
/usr/bin/vim-addons
/usr/bin/vimdot
/usr/bin/gvimtutor
/usr/bin/vimtutor
/usr/lib/mime/packages/vim-common
/usr/lib/mime/packages/vim-gui-common
/usr/lib/ruby/vendor_ruby/vim
/usr/lib/git-core/mergetools/vimdiff2
/usr/lib/git-core/mergetools/vimdiff
/usr/lib/git-core/mergetools/vimdiff3
/usr/lib/git-core/mergetools/gvimdiff
/usr/lib/git-core/mergetools/gvimdiff2
/usr/lib/git-core/mergetools/gvimdiff3
/usr/src/linux-headers-4.4.0-57-generic/include/config/video/vim2m.h
/usr/src/linux-headers-4.4.0-59-generic/include/config/video/vim2m.h
/usr/src/linux-headers-4.4.0-62-generic/include/config/video/vim2m.h
/usr/local/share/awesome/lib/awful/hotkeys_popup/keys/vim.lua
/usr/local/share/man/man1/vim.1
/usr/local/share/man/man1/vimtutor.1
/usr/local/share/man/man1/vimdiff.1
/usr/local/share/man/man1/evim.1
/usr/local/share/doc/awesome/doc/libraries/awful.hotkeys_popup.keys.vim.html
/usr/local/share/vim
/usr/local/bin/vim
/usr/local/bin/vim.rm
/usr/local/bin/vimtutor
/lib/modules/4.4.0-57-generic/kernel/drivers/media/platform/vim2m.ko
/lib/modules/4.4.0-59-generic/kernel/drivers/media/platform/vim2m.ko
/lib/modules/4.4.0-62-generic/kernel/drivers/media/platform/vim2m.ko
/var/lib/dpkg/info/vim-common.list
/var/lib/dpkg/info/vim-common.conffiles
/var/lib/dpkg/info/vim-common.preinst
/var/lib/dpkg/info/vim-common.md5sums
/var/lib/dpkg/info/vim-addon-manager.list
/var/lib/dpkg/info/vim-addon-manager.prerm
/var/lib/dpkg/info/vim-addon-manager.md5sums
/var/lib/dpkg/info/vim-addon-manager.preinst
/var/lib/dpkg/info/vim-addon-manager.postinst
/var/lib/dpkg/info/vim-addon-manager.postrm
/var/lib/dpkg/info/vim-latexsuite.postrm
/var/lib/dpkg/info/vim-latexsuite.list
/var/lib/dpkg/info/vim-gui-common.list
/var/lib/dpkg/info/vim-gui-common.conffiles
/var/lib/dpkg/info/vim-gui-common.md5sums
/var/lib/dpkg/info/vim-runtime.list
/var/lib/dpkg/info/vim-runtime.postrm
/var/lib/dpkg/info/vim-runtime.postinst
/var/lib/dpkg/info/vim-runtime.preinst
/var/lib/dpkg/info/vim-runtime.md5sums
/var/lib/dpkg/info/vim-tiny.list
/var/lib/vim
/var/cache/apt/archives/vim-runtime_2%3a7.4.1689-3ubuntu1.2_all.deb
/var/cache/apt/archives/vim_2%3a7.4.1689-3ubuntu1.2_amd64.deb
/var/cache/apt/archives/vim-addon-manager_0.5.5_all.deb
/var/cache/apt/archives/vim-latexsuite_20141116.812-2_all.deb
/var/cache/apt/archives/vim-gui-common_2%3a7.4.1689-3ubuntu1.2_all.deb
/var/cache/apt/archives/vim-gtk_2%3a7.4.1689-3ubuntu1.2_amd64.deb

| Remove vim manually after checkinstall [duplicate]
I was able to copy the directory from a Live DVD, and now I can work on my computer.
The reason why I couldn't do this before was because of my CD ROM. Apparently, it was damaged, and that's what all the "Kernel panic" was all about.
I chose to do this instead of reinstalling the whole operating system because I didn't really have many programs installed beyond those provided by default with this distribution of Linux.
As @Gilles suggested, I'm now making backups to prevent future incidents. And as he said, if some weird problem comes, I know what's to blame. But still, I'd like to try to solve them before formatting.
I'm yet not sure why I couldn't reinstall the packages I found in /var/backups/dpkg.status.0, so if anyone knows why, it'd be great if you could tell us.
Thanks to all of you who gave your help.
|
Well, I'm pretty scared now.
I was trying to delete a folder by
sudo rm /var/lib/texmf -r

but instead, wrote
sudo rm /var/lib -r

I read some documentation about it, and found this thread.
I tried to follow all the steps from the last comment (2nd page), but I didn't know how to do some of the things listed there.
So this is all I did:
First, I created these folders:
mkdir /var/lib/dpkg/alternatives/ /var/lib/dpkg/info/ /var/lib/dpkg/methods/ /var/lib/dpkg/parts/ /var/lib/dpkg/triggers/ /var/lib/dpkg/updates/ /var/lib/apt/ /var/lib/aptitude /var/lib/binfmts/ /var/lib/misc/

I had to do it with sudo, and had to create the parent folders first, in order to create all the folders.
Then I did:
aptitude update && aptitude upgrade

Here is where the confusion started.
First of all, the warnings listed by user @marco.org didn't appear to me (I'm not even sure in what language they're written), but instead another error appeard, saying that it couln't create some-file because some-folder (a subfolder in /var/lib) didn't exist.
I created the folder (using sudo mkdir), the process finished (with some errors I don't remember).
Then I copied the file dpkg.status.0, mentioned in the link above, with:
sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status

(Again, I had to create the folder. I'm not really sure why I did this, I just read about it somewhere.)
So apparently the first problem was solved. I then tried installing some program, using
sudo apt-get install some_program

and even though it showed some warnings (all about files missing from subdirectories of /var/lib), the program was installed, and I could use it with no problem.
A few minutes later, I wanted to relax a little bit and watch some video on youtube (I had installed the latest version of Google Chrome), but instead of relaxing, I got yet another reason to panic:
There was no sound output for the video, or for any other application on Chrome (other programs did have sound).
So I panicked and shutdown my computer.
Then, when I turned it on, and after Linux started, I didn't see my desktop, but a console screen asking me for a login and password.
I gave the information, and some message appeared, saying it couldn't creat a file, because a folder was missing. I created it with:
sudo mkdir /var/lib/ubuntu-release-upgraderThen another message said that it couldn't move the file, so I moved it manually with:
sudo mv /usr/lib/ubuntu-release-upgrader/release-upgrader-motd -t /var/lib/ubuntu-release-upgrader/Then I restarted the system, hoping to see my desktop. But instead, the same console screen showed, saying, after giving my login and password:
Last login: Tue Apr 8 13:19:16 CST 2014 on tty1
Welcome to Linux Mint 15 Olivia (GNU/LINUX 3.8.0-26-generic i686)Welcome to Linux Mint
* Documentation: http://www.linuxmint.comNow I can't get back to GUI mode, or use any of my programs.
I guess there should be a way of getting the folder I deleted from the installation dvd, but I'm kind of new with this, and am not sure of how to do it.
Could anybody please help me? I would really appreciate it. P.S: Also, after deleting /var/lib, whenever I used sudo, a line appeared saying something like:
couldn't mkdir /var/lib/sudo, file not existing

but it got solved after the
aptitude update && aptitude upgrade

| Removed /var/lib. Can't open Desktop
The mongodb-org-server package appears to be broken.
The prerm script mongodb-org-server.prerm is trying to run the script
/etc/init.d/mongod as part of invoke-rc.d. As the name suggests, the prerm script is run by dpkg prior to removal of the mongodb-org-server package.
The poster said the server was not running, so the prerm script is a no-op.
So, the obvious thing to do is to comment out the relevant part of mongodb-org-server.prerm, namely:
if [ -e "/etc/init/mongod.conf" ]; then
invoke-rc.d mongod stop || exit $?
fi

and then run the removal again. Though I would recommend
apt-get purge mongodb-org-server

and report a bug against this package if possible.
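The commenting-out step can be scripted; here is a hedged sketch that operates on a reconstructed copy of the prerm (not the real file under /var/lib/dpkg/info) and syntax-checks the result:

```shell
prerm=$(mktemp)
cat > "$prerm" <<'EOF'
#!/bin/sh
set -e
if [ -e "/etc/init/mongod.conf" ]; then
    invoke-rc.d mongod stop || exit $?
fi
EOF
# Comment out the whole if-block: an empty `then` branch would itself
# be a shell syntax error, so all three lines must be commented.
sed -i '/if \[ -e "\/etc\/init\/mongod.conf" \]/,/^fi/ s/^/#/' "$prerm"
commented=$(grep -c '^#' "$prerm")   # shebang plus 3 commented lines
sh -n "$prerm" && ok=yes             # syntax-check the edited script
echo "commented lines (incl. shebang): $commented, syntax ok: $ok"
rm -f "$prerm"
```

If you do edit the real prerm, keep a backup copy first.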
|
So, it would appear that following the official MongoDB installation instructions when installing on Debian means you're heading for a world of pain. Firstly, it didn't install correctly, so now I'm trying to remove all installed MongoDB packages so that I can start from scratch.
Frustratingly, because it didn't install cleanly (presumably), it won't uninstall.
Originally, I installed using the instructions here:
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/
Currently, I've managed to remove every package apart from mongodb-org-server, which just won't go.
An attempted removal results in the following:
user@host:/$ sudo apt-get remove mongodb-org-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
mongodb-org-server
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 23.9 MB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 31030 files and directories currently installed.)
Removing mongodb-org-server ...
invoke-rc.d: unknown initscript, /etc/init.d/mongod not found.
dpkg: error processing mongodb-org-server (--remove):
subprocess installed pre-removal script returned error exit status 100
invoke-rc.d: unknown initscript, /etc/init.d/mongod not found.
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 100
Errors were encountered while processing:
mongodb-org-server
E: Sub-process /usr/bin/dpkg returned an error code (1)This is causing me untold problems, any ideas how I can properly and cleanly get rid of MongoDB now?
Contents of /var/lib/dpkg/info/mongodb-org-server.prerm:
#!/bin/sh
set -e
# Automatically added by dh_installinit
if [ -e "/etc/init/mongod.conf" ]; then
invoke-rc.d mongod stop || exit $?
fi
# End automatically added section

Jordan
| MongoDB will not uninstall |
It's not particularly useful here (where you can just fix your escaping as commented) but in the case where you want to search the whole dpkg -l line, you can run it through something like awk and then into apt-get purge with minimal conditioning:
sudo apt-get purge $(dpkg -l | awk '$2~/nvidia/ {print $2}')That should prompt you before it does anything but just in case, you could test it with:
apt-get -s purge $(dpkg -l | awk '$2~/nvidia/ {print $2}')
|
Background: I bought an NVIDIA graphics card and tried to install its driver. Somewhere along the way I messed up and now I'm running my computer on Cinnamon backup mode (I have Ubuntu but I removed Unity and replaced it with Cinnamon). I want to start back from scratch (prior to this, I was using a core i3 and no graphics card).
Problem:
When I enter sudo dpkg -l | grep -i nvidia I get a list of results. But when I enter sudo apt-get remove --purge nvidia-* it says I have no matches found.
I've tried a couple of other different ways with similar results. Again, I want to start fresh by removing all unnecessary files.
How do I remove all unnecessary nvidia files?
| Remove All NVIDIA Files |
You could make a backup of the configuration and then:
yum remove openldap
rpm -e openldap.package_name
yum install openldap
Then copy your configuration files back.
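For the backup step, here is a small sketch. It assumes the configuration lives under /etc/openldap (the usual location on Red Hat systems); both paths are assumptions to adjust for your setup:

```shell
# Save a config directory to a tarball before removing the package.
backup_config() {
    src=$1    # directory to preserve
    dest=$2   # tarball to create
    tar czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# /etc/openldap and the destination path are assumptions -- adjust as needed.
if [ -d /etc/openldap ]; then
    backup_config /etc/openldap /root/openldap-config.tar.gz
fi

# After reinstalling, restore with:
#   tar xzf /root/openldap-config.tar.gz -C /etc
```

The -C flag makes tar store paths relative to the parent directory, so the restore lands back in place regardless of where you run it from.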
|
I have installed OpenLDAP with yum, but I have accidentally deleted some of the config files. I am not able to recover them. I want to uninstall it. I tried the following command but it ends with an error:
--> Processing Dependency: PackageKit-glib = 0.5.8-20.el6 for package: PackageKit-gtk-module-0.5.8-20.el6.x86_64
--> Running transaction check
---> Package PackageKit-device-rebind.x86_64 0:0.5.8-20.el6 will be erased
---> Package PackageKit-gstreamer-plugin.x86_64 0:0.5.8-20.el6 will be erased
---> Package PackageKit-gtk-module.x86_64 0:0.5.8-20.el6 will be erased
--> Finished Dependency Resolution
Error: Trying to remove "yum", which is protected
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Can someone please tell me how to uninstall it properly so I can install it again and make the config changes?
| How to uninstall OpenLDAP in RedHat? |
Caveat: I don't have a Fedora system at-hand, so this is untested!
I would suggest removing the file /etc/modules-load.d/virtualbox.conf; it may be owned by a package, so check: dnf provides /etc/modules-load.d/virtualbox.conf and if needed, remove that package with: dnf remove (that package name).
As per this Fedora Forum post, you may need to rebuild your initramfs so that it's built without the vbox driver(s). Use:
dracut -f /boot/initramfs-"$(uname -r)".img "$(uname -r)"
Specific solution from the OP:
I found virtualbox.conf located in /lib/modules-load.d/ and provided by VirtualBox-server. After removing VirtualBox-server, then removing vboxpci, vboxnetadp, vboxnetflt, and vboxdrv via rmmod, and finally rebuilding the initramfs as written above, the problem is solved.
|
I previously was using VirtualBox on Fedora 30, but recently moved to using Boxes (review) and am quite happy with the switch. I've removed VirtualBox, but upon booting up my system, I still get a tainted kernel message:
vboxdrv: module verification failed: signature and/or required key missing - tainting kernel
I've tried:
sudo rmmod vboxpci vboxnetadp vboxnetflt vboxdrv
(the other modules were being used by vboxdrv)
This appeared to remove the module, but after a reboot, the modules were back.
Since I am no longer using VirtualBox, how can I remove this vboxdrv kernel module?
Thank you for any help!
| How to remove VirtualBox vboxdrv kernel module? |
You can test where /usr/bin/vi leads:
update-alternatives --query vi
Usually there is a link to /usr/bin/vim.tiny.
To find the package name you can try
dpkg -S /usr/bin/vim.tiny
On my system I received
vim-tiny: /usr/bin/vim.tiny
So there is an additional package, vim-tiny.
|
I want to remove the vi text editor from Linux but it does not show up as a package in aptitude. Is it possible to remove this?
I have already removed vim by running
sudo apt-get remove vim
I am using Linux Mint.
| Is it possible to remove vi? |
Brute force: remove the Oracle software. Find the Oracle home directory by looking around /u01/app/Oracle or browsing /etc/oratab, then rm -rf starting there.
Besides removing the software, if you created any databases, you will need to remove the data files that hold the database data. If you followed recommended naming conventions, look at directories /u## (where ## is 01 .. 99) for files named *.dbf and *.ctl and remove those too. |
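To find those leftover files before deleting anything, something like this may help. The /u## directories are the conventional mount points mentioned above; the function parameter makes the path easy to change:

```shell
# List candidate Oracle data and control files under a directory tree.
# Review the list before removing anything!
find_oracle_files() {
    find "$1" -type f \( -name '*.dbf' -o -name '*.ctl' \)
}

for d in /u01 /u02 /u03; do            # extend to /u04 .. /u99 as applicable
    if [ -d "$d" ]; then
        find_oracle_files "$d"
    fi
done
```

This only prints candidates; pipe the output somewhere and inspect it rather than feeding it straight to rm.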
I installed oracle 11g xe to test a software and have not been using it for a while. How to cleanly uninstall?
| how to uninstall oracle 11g xe from centos 7? |
In Debian Policy-compliant packages, yes, systemd services are supposed to be removed along with the package.
However, this isn’t a systemd service, it’s an init script. Since these live in /etc, they are only removed if the package is purged, not just removed:
sudo apt purge webmin
In addition to that, webmin might not be a Policy-compliant package (it hasn’t been available in Debian for nearly 15 years).
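As an aside, packages that were removed but not purged show up in dpkg -l with status rc (removed, conffiles remaining). A filter like the following lists them; it is shown here against a hard-coded sample line, since real output depends on the system:

```shell
# On a real system you would pipe dpkg -l straight in:
#   dpkg -l | awk '$1 == "rc" { print $2 }'
sample='ii  bash    5.1-2  amd64  GNU Bourne Again SHell
rc  webmin  1.990  all    web-based administration interface'
printf '%s\n' "$sample" | awk '$1 == "rc" { print $2 }'   # prints: webmin
```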
|
I installed and later uninstalled webmin with Apper years ago.
Since I switched to Wayland on Debian 11/KDE, sessions have been breaking all the time when starting the computer from standby (currently once every second day on average).
After pressing ctrl+alt+F2, logging the user out with pkill -KILL -u {username}, and pressing ctrl+alt+F(8 for example) to show to login screen it shortly shows this error on a black background: Failed to start LSB: web-based administration interface for Unix systems.
Researching this error on the Web, it appears to be related to Webmin.
I checked that it's not installed anymore with sudo apt-get remove webmin (it shows Package 'webmin' is not installed, so not removed).
Running sudo systemctl status webmin shows:
● webmin.service - LSB: web-based administration interface for Unix systems
Loaded: loaded (/etc/init.d/webmin; generated)
Active: failed (Result: exit-code) since Sat 2022-01-08 23:56:28 CET; 1 day 15h ago
Docs: man:systemd-sysv-generator(8)
Process: 2061 ExecStart=/etc/init.d/webmin start (code=exited, status=127)
CPU: 1ms
hostname systemd[1]: Starting LSB: web-based administration interface for Unix systems...
hostname systemd[1]: webmin.service: Control process exited, code=exited, status=127/n/a
hostname systemd[1]: webmin.service: Failed with result 'exit-code'.
hostname systemd[1]: Failed to start LSB: web-based administration interface for Unix systems.
Are systemd services and init scripts like Webmin's not removed after removing packages?
| Are init scripts not removed after uninstalling packages? |
My doubt was justified because of the lack of docs about avfs. Anyway, it was enough to umount the virtual fs
$ umountavfs
and uninstall the package the usual way.
|
I installed avfs to check how it works. It works, but after a short test drive I decided to uninstall the package, basically because I can change files within a zip but it's not possible to write them back.
Now my question is, at this point is it safe to completely uninstall avfs?
$ sudo apt-get purge --auto-remove avfs
I ask because I see that a new "virtual" directory ~/.avfs has been created with a mirror of my disk. For this reason I wonder what the right procedure is to uninstall without compromising any data.
| Safely uninstall avfs |
If you want to use apt-get remove for a file contained in a specific package you can do:
apt-get remove $(dpkg -S /usr/bin/mysql | cut -d ':' -f 1)(replace /usr/bin/mysql, with whatever file you were looking for to remove)
Using this, apt-get will still ask if you really want to remove the package (that dpkg found); sometimes you realise you did not want that after you see the package name.
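To make the pipeline less magical: dpkg -S prints lines of the form package: /path/to/file, and cut -d ':' -f 1 keeps only the part before the first colon. A stand-alone demonstration on a sample line (the package name here is made up):

```shell
# Sample of what `dpkg -S /usr/bin/mysql` might print on some system:
line='mysql-client-core-5.5: /usr/bin/mysql'

# Keep only the field before the first ':' -- the package name.
pkg=$(printf '%s\n' "$line" | cut -d ':' -f 1)
printf '%s\n' "$pkg"    # prints: mysql-client-core-5.5
```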
|
Recently I asked my hosting provider to reload the OS to Ubuntu 12.04 64-bit minimal, assuming minimal would have only the minimum required packages installed, but I realized that mysql was installed. As I don't need it, I want to uninstall all packages related to it.
What I did was:
$ sudo apt-get --purge remove mysql-client
$ sudo apt-get --purge remove mysql-server
However, I'm still finding mysql binaries and files:
$ whereis mysql
mysql: /usr/bin/mysql /etc/mysql /usr/bin/X11/mysql /usr/share/mysql /usr/share/man/man1/mysql.1.gz
I'm thinking of something like
$ dpkg -s mysql*
But this didn't help.
Any advice?
| Shell: How to uninstall all related packages to a specific one? / Ubuntu |
The best I can think of is this:
DEBIAN_FRONTEND=noninteractive apt remove --purge -yq mariadb\*
rm -rf /var/lib/mysql
WARNING: This could be dangerous.
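For what it's worth, the DEBIAN_FRONTEND=noninteractive prefix works because a VAR=value assignment written before a command applies only to that command's environment, not to the rest of the shell session — illustrated here with a stand-in variable name:

```shell
# Visible inside the prefixed command...
DEMO_FRONTEND=noninteractive sh -c 'printf "%s\n" "$DEMO_FRONTEND"'   # prints: noninteractive

# ...but not in the surrounding shell afterwards:
printf '%s\n' "${DEMO_FRONTEND:-unset}"                               # prints: unset
```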
|
When I want to remove MariaDB from the system, I run # apt remove --purge mariadb*, but then I get a prompt like this one:
Is there a way for me to skip this prompt by specifying a value for yes or no? I tried # yes | apt remove --purge mariadb*, but it just managed to freeze the installer.
Any idea? Thanks!
| Remove MariaDB in non-interactive mode |
Found a hint here. I tried to apply the command with -n and it works!!
The command should finally be pkgrm -n -a /export/home/admin mypackage
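For reference, a fully unattended admin file might look roughly like this. The field names below are taken from the Solaris admin(4) man page — verify them against your release; note rdepend for remove-time dependency checks, and that the questioner's bsedir line looks like a typo for basedir:

```
# Sketch of an admin(4) file for unattended pkgadd/pkgrm.
mail=
instance=overwrite
partial=nocheck
runlevel=nocheck
idepend=nocheck
rdepend=nocheck
space=nocheck
setuid=nocheck
conflict=nocheck
action=nocheck
basedir=default
```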
|
I'm trying to do a pkgadd on Solaris non-interactively. Somehow pkgadd -d /home/mypackage -n doesn't work. While reading the man page, I found out that I can disable interaction by using an admin file. So I followed the guideline here. When I tried to run
pkgadd -d /home/mypackage -a /home/admin
it still prompts for user input.
* I created the admin file at /home/
This is the display:
The following packages are available:
1 mypackage mypackage
(all) 4.4.0
Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?,??,q]:
Google then led me to this site. Improving on it a bit, I managed to make it run with the command pkgadd -d /home/mypackage -a /home/admin 'all'.
Since pkgadd can be done, I assume pkgrm should be the same as well. So I tried pkgrm -a /home/admin mypackage.
Then a prompt appeared:
The following package is currently installed:
mypackage mypackage
(all) 4.4.0
Do you want to remove this package? [y,n,?,q]
Then I thought maybe it's just the same problem as pkgadd. So I tried pkgrm -a /home/admin 'y' mypackage. Instead it gave me an error:
pkgrm: ERROR: no package associated with <y>
What exactly should I pass so that I can run pkgrm non-interactively? Should I add another parameter inside the admin file? If so, what is the parameter? These are the parameters I have tried using:
remove=nocheck
removal=nocheck
confirm=nocheck
All these attempts caused a WARNING: unknown admin parameter.
This is the admin file that I use:
mail=
instance=nocheck
partial=nocheck
runlevel=nocheck
idepend=nocheck
space=nocheck
setuid=nocheck
conflict=nocheck
authentication=nocheck
action=nocheck
rscriptalt=root
bsedir=default
I am using Solaris 10 i386.
| Add parameter to admin file for pkgrm |
The timestamp of the files should give you a hint if things were created during your installation.
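One way to follow up on that timestamp hint is to list entries modified within the last few days, so recently created files stand out. A sketch (run it on /root as root; the day count is arbitrary):

```shell
# List entries directly under directory $1 modified less than $2 days ago.
recent_entries() {
    find "$1" -mindepth 1 -maxdepth 1 -mtime -"$2"
}

# Example (as root):  recent_entries /root 7
```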
The /root directory is only the home directory of the root user. Nothing important lives there unless you explicitly place it there. Therefore removing everything should be fine. (Maybe you want to keep .bashrc and .profile.)
Hint: You can compile and install most software as a normal user. No need to use root for that.
|
I wanted to install a piece of software, and its default directory to install to was /root. But the software could not get installed. Now I want to do it again from scratch. But there are some files in the /root directory, and I am not sure whether they belong to the software I installed or are system files. In short, I wanted to ask: is the /root directory empty when we install Ubuntu, or does it have any system files?
Here is the output of the ls -la command:
root@rnt-U410:~# ls -la
total 84
drwx------ 13 root root 4096 Jan 4 23:24 .
drwxr-xr-x 25 root root 4096 Dec 23 21:16 ..
drwxr-xr-x 4 root root 4096 Dec 23 19:44 .android
drwxr-xr-x 3 root root 4096 Dec 23 19:05 Android
drwxr-xr-x 4 root root 4096 Dec 23 19:01 .AndroidStudio
-rw------- 1 root root 6955 Jan 3 23:30 .bash_history
-rw-r--r-- 1 root root 3133 Jan 1 16:53 .bashrc
-rw-r--r-- 1 root root 3106 Feb 20 2014 .bashrc~
drwx------ 3 root root 4096 Dec 23 21:28 .cache
drwxr-xr-x 5 root root 4096 Dec 23 21:28 .config
drwx------ 3 root root 4096 Dec 23 21:28 .dbus
-rw-r--r-- 1 root root 68 Dec 23 22:24 .gitconfig
drwx------ 2 root root 4096 Dec 23 21:28 .gvfs
drwxr-xr-x 3 root root 4096 Dec 23 19:01 .java
drwxr-xr-x 3 root root 4096 Dec 23 21:28 .local
-rw-r--r-- 1 root root 256 Dec 26 20:47 .profile
-rw-r--r-- 1 root root 194 Dec 26 20:46 .profile~
drwxr-xr-x 3 root root 4096 Dec 23 22:21 .repoconfig
-rw-r--r-- 1 root root 116 Dec 23 22:24 .repopickle_.gitconfig
drwxr-xr-x 3 root root 4096 Dec 25 04:26 .swt
root@rnt-U410:~# | Remove all the contents from /root ( /~ )directory |
dnf list
lists available packages; it isn’t limited to installed packages.
To list installed packages matching wine, run
dnf list --installed '*wine*'
If any are shown by the latter command, you should be able to uninstall them. If no packages match, you’ll see
Error: No matching Packages to list
instead.
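A side note on the quotes around '*wine*': they stop the shell from expanding the pattern against local filenames before dnf ever sees it. A quick demonstration in a scratch directory:

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch my-wine-notes

printf '%s\n' *wine*      # unquoted: the shell expands it -- prints: my-wine-notes
printf '%s\n' '*wine*'    # quoted: the literal pattern is passed through -- prints: *wine*
```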
|
I'm currently trying to uninstall Wine from my Fedora system using dnf, and while I seem to have been mostly successful, I'm still noticing a few packages that I can't seem to remove.
Running dnf list | grep wine shows about a dozen helper utilities for Wine and their versions:
wine.i686 7.12-2.fc35 updates
wine.x86_64 7.12-2.fc35 updates
wine-alsa.i686 7.12-2.fc35 updates
wine-alsa.x86_64 7.12-2.fc35 updates
wine-arial-fonts.noarch 7.12-2.fc35 updates
wine-capi.i686 6.16-1.fc35 fedora
wine-capi.x86_64 6.16-1.fc35 fedora
...and so on
Yet when I try to remove any of these packages (say, wine.i686) with dnf remove, I get a "no match for argument" error.
Additionally, I can still see some of these Wine packages in my Applications display (using GNOME, and clicking the 9 dots on my dock, I see a folder that appears to have several apps with the default icon, but is actually empty once opened. This folder used to contain 9 or 10 Wine utilities.)
I've already deleted the .wine folder in my home dir and run dnf remove wine successfully.
How can I get rid of these remaining Wine packages and the ghost apps in that folder?
Thanks in advance,
-Robbie
| 'dnf list' shows Wine packages that can't be removed |
apt keeps track, for each package, whether it was installed because it was explicitly requested or because it was automatically pulled in as a dependency. Packages which are automatically installed become candidates for auto-removal when all the packages which need them are themselves removed.
Determining why packages become auto-removable in a specific context requires knowing the history of the system; there isn’t enough information here to say.
However, there’s no cause for alarm: apt is telling you that these packages are candidates for auto-removal, not that it is going to remove them. The packages will only be removed if you ask apt to remove them (apt autoremove for example). As it is, if you confirm the command in your question, only letsencrypt will be removed.
You can avoid this in future by marking them as manually installed, for example with
sudo apt-mark manual bsdmainutils
|
I've gone from a single server to multi server setup in the past few weeks. I am now ready to uninstall Let's Encrypt / Certbot from the original server. (I've setup SSL termination with HA Proxy.)
I've tried apt remove --purge letsencrypt, but this is showing packages that I still require:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
bsdmainutils cpp-8 dh-python libapache2-mod-php7.3 libasan5 libbind9-161 libbison-dev libboost-iostreams1.67.0 libboost-system1.67.0 libc-dev-bin libcwidget3v5 libdns1104 libdns1110 libevent-2.1-6 libf2fs-format4 libf2fs5 libgfortran5
libicu63 libip6tc0 libiptc0 libirs161 libisc1100 libisc1105 libisccc161 libisccfg163 libisl19 liblinear3 libllvm7 liblua5.2-0 liblwres161 libmemcachedutil2 libmpdec2 libperl5.28 libprocps7 libpython3.7 libpython3.7-minimal
libpython3.7-stdlib linux-libc-dev ncal perl-modules-5.28 php-symfony-debug php7.3 php7.3-bcmath php7.3-fpm php7.3-mysql php7.3-pgsql php7.3-soap php8.0-memcached python3-asn1crypto python3-future python3-mock python3-pbr python3.7
python3.7-minimal usb.ids
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
letsencrypt*
0 upgraded, 0 newly installed, 1 to remove and 5 not upgraded.
After this operation, 30.7 kB disk space will be freed.
Do you want to continue? [Y/n]
I am at the tail end of PHP 7.3, with just one script remaining that uses it. I am not fluent in Python. Would someone please explain the logic that has determined the list of packages the computer doesn't see as being required any more?
| Uninstalling Showing Unexpected Packages |
To remove rpm packages you can use rpm's -e flag. First, find the name of the rpm you have installed:
rpm -qa | egrep -i "webmin|virtualmin"
Then remove the package using the name you see above:
rpm -e $packagename
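The pattern matches either name anywhere in the line, case-insensitively. Here it is run against sample rpm -qa-style output (the package strings are made up — real names depend on what's installed):

```shell
sample='webmin-1.990-1.noarch
bash-5.1.8-4.el9.x86_64
wbm-Virtualmin-htpasswd-2.9-1.noarch'

# grep -E is the modern spelling of egrep; -i ignores case.
printf '%s\n' "$sample" | grep -E -i 'webmin|virtualmin'
```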
|
Webmin and Virtualmin were recently installed using rpm packages on a remote CentOS 7 server. How do I uninstall them?
I have googled this, but all the results are either so old as to be obsolete or do not pertain to rpm files, or both.
| removing virtualmin and webmin from remote CentOS 7 server |
As per this posting in the Bodhi Linux community, the commands to remove Bodhi's version of Chromium are:
sudo apt purge bodhi-chromium && sudo apt autoremove
|
I am using Bodhi Linux how to uninstall the Chromium Browser?
| How to uninstall Chromium browser |
If you still have your full build tree, perhaps in /home/vlastimil/Downloads/seahorse/seahorse-3.31.91/build, then either
cd /home/vlastimil/Downloads/seahorse/seahorse-3.31.91/build
sudo ninja uninstall
or
cd /home/vlastimil/Downloads/seahorse/seahorse-3.31.91/build
sudo make uninstall
should uninstall the /usr/local version of Seahorse.
If the build tree isn’t available, you’ll have to re-do the build step, ideally with the same parameters you used in 2019:
cd /home/vlastimil/Downloads/seahorse/seahorse-3.31.91
meson build
cd build && sudo ninja uninstall
|
Passwords and Keys (alias seahorse) won't run when clicked on.
When launched from terminal, I get this error:
seahorse: error while loading shared libraries: libldap_r-2.4.so.2: cannot open shared object file: No such file or directory
When trying to find such a package and possibly install it, I get:
$ apt-cache policy 'libldap*'
libldap2:
Installed: (none)
Candidate: (none)
Version table:
libldap-common:
Installed: 2.5.13+dfsg-0ubuntu0.22.04.1
Candidate: 2.5.13+dfsg-0ubuntu0.22.04.1
Version table:
*** 2.5.13+dfsg-0ubuntu0.22.04.1 500
500 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu jammy-updates/main i386 Packages
100 /var/lib/dpkg/status
2.5.11+dfsg-1~exp1ubuntu3.1 500
500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
500 http://security.ubuntu.com/ubuntu jammy-security/main i386 Packages
2.5.11+dfsg-1~exp1ubuntu3 500
500 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu jammy/main i386 Packages
libldap-ocaml-dev:
Installed: (none)
Candidate: 2.4.2-1build3
Version table:
2.4.2-1build3 500
500 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages
libldap-dev:
Installed: 2.5.13+dfsg-0ubuntu0.22.04.1
Candidate: 2.5.13+dfsg-0ubuntu0.22.04.1
Version table:
*** 2.5.13+dfsg-0ubuntu0.22.04.1 500
500 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
100 /var/lib/dpkg/status
2.5.11+dfsg-1~exp1ubuntu3.1 500
500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
2.5.11+dfsg-1~exp1ubuntu3 500
500 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages
libldap-ocaml-dev-vpsg7:
Installed: (none)
Candidate: (none)
Version table:
libldap-2.3-0:
Installed: (none)
Candidate: (none)
Version table:
libldap-2.4-2:
Installed: (none)
Candidate: (none)
Version table:
libldap-2.5-0:
Installed: 2.5.13+dfsg-0ubuntu0.22.04.1
Candidate: 2.5.13+dfsg-0ubuntu0.22.04.1
Version table:
*** 2.5.13+dfsg-0ubuntu0.22.04.1 500
500 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
100 /var/lib/dpkg/status
2.5.11+dfsg-1~exp1ubuntu3.1 500
500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
2.5.11+dfsg-1~exp1ubuntu3 500
500 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages
libldap-java:
Installed: (none)
Candidate: 5.0.0+dfsg1-1
Version table:
5.0.0+dfsg1-1 500
500 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages
500 http://archive.ubuntu.com/ubuntu jammy/universe i386 Packages
libldap2-dev:
Installed: 2.5.13+dfsg-0ubuntu0.22.04.1
Candidate: 2.5.13+dfsg-0ubuntu0.22.04.1
Version table:
*** 2.5.13+dfsg-0ubuntu0.22.04.1 500
500 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu jammy-updates/main i386 Packages
100 /var/lib/dpkg/status
2.5.11+dfsg-1~exp1ubuntu3.1 500
500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
500 http://security.ubuntu.com/ubuntu jammy-security/main i386 Packages
2.5.11+dfsg-1~exp1ubuntu3 500
500 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages
500 http://archive.ubuntu.com/ubuntu jammy/main i386 Packages
$ sudo apt-get --simulate install libldap-2.4-2
[sudo] password for vlastimil:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package libldap-2.4-2 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
libldap-common
E: Package 'libldap-2.4-2' has no installation candidate
$ sudo apt-get --simulate install libldap-2.4
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'libldap-2.4-2' for regex 'libldap-2.4'
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
$ which seahorse
/usr/local/bin/seahorse
$ whereis seahorse
seahorse: /usr/bin/seahorse /usr/local/bin/seahorse /usr/libexec/seahorse /usr/share/seahorse /usr/share/man/man1/seahorse.1.gz
so naturally I tried /usr/bin/seahorse and got it up and running.
So I dug in my Downloads directory and found:
/home/vlastimil/Downloads/seahorse/seahorse-3.31.91
with timestamp 2019-Feb-23. Built by meson, but how to remove it?
| seahorse won't run, how to remove old version properly? |
I am also very confused by seeing dirmngr as the pattern you searched for.
1. However, in case you ever have a .deb file that you installed but don't know what actual package it corresponds to, you can run dpkg-deb -W.
Just to illustrate, using epson-printer-utility_1.0.2.deb as an example:
$ ls *.deb
epson-printer-utility_1.0.2.deb
$ sudo dpkg-deb -W epson-printer-utility_1.0.2.deb
epson-printer-utility 1.0.2-1lsb3.2
2. What is interesting is that trying to figure out what package got installed from the original .deb filename is never a good idea, because you can just make a copy and give it a different name:
$ cp epson-printer-utility_1.0.2.deb abcde.deb
$ sudo dpkg-deb -W abcde.deb
epson-printer-utility 1.0.2-1lsb3.2
3. And finally, if you ever want to find all the info about a given .deb, and especially whether you have that particular package already installed, you can run sudo dpkg-query -s [package name given by dpkg-deb] or, in one step (replace "abcde.deb" with your .deb file, quote it if it has spaces):
sudo dpkg-query -s $(dpkg-deb -f abcde.deb | grep "Package" | cut -d: -f2)
Package: epson-printer-utility
Status: install ok installed
Priority: extra
Section: alien
Installed-Size: 10652
Maintainer: Seiko Epson Corporation <[emailprotected]>
Architecture: amd64
Version: 1.0.2-1lsb3.2
Depends: lsb (>= 3.2)
Description: Epson Printer Utility for Linux
Update
For the final command one can "usually" also use a slightly shortened version:
dpkg-query -s $(dpkg-deb -W abcde.deb | cut -f1)
as long as someone doesn't go crazy and create a package with a [TAB] inserted in the name, because with Ctrl+V Tab a filename can actually be made to look like this:
$ cp abcde.deb "abc de.deb"
$ ls
'abc'$'\t''de.deb' abcde.debYet it will still work just as intended:
$ dpkg-deb -W 'abc'$'\t''de.deb'
epson-printer-utility 1.0.2-1lsb3.2
|
I installed Visual Studio Code on Ubuntu using:
sudo apt install ./code_1.37.1-1565886362_amd64.deb
I then found these commands to try to find out more information about the package:
dpkg -l dirmngr
systemctl --user status dirmngr
apt-cache search code
I still see it in my Ubuntu application GUI but I can't find it in the uninstall options. I also tried erasing the .deb file. What tools can I use to see these package details?
How do I fully remove this installed program?
Is there a folder that .deb files install into? Will it show up in /bin?
| how to uninstall vscode with apt |
FreeBSD 10 comes with pkg utility that allows you to do exactly that:
pkg autoremove
See pkg help for the full list of pkg commands.
You will probably need to clean the port after the failed build as well.
You can do it this way:
cd /usr/ports/x11/gnome2
make cleanAbout your second question: yes, there is a way. You should delete packages that require these dependencies and then execute pkg autoremove, it will do the rest.
|
I've been trying to compile x11/gnome2 under FreeBSD 10.0-REL, but have been running into all sorts of issues. Eventually I found things indicating that gnome2 is no longer really supported, and that I should use something else (MATE, Xfce, KDE, whatever) instead.
But gnome2 installs a trillion other packages, none of which I actually want if I'm not going to be using Gnome. So "make install" of gnome2 has failed, but not before installing a few billion packages that I don't want. I'd like to get rid of them before starting an install of Xfce or whatever.
How can I easily delete those that aren't needed by anything that doesn't ultimately go back go the gnome2 package? So, in a perfect world I'd like a command that says:
"Figure out all packages that are supposed to be installed via gnome2 (including recursively). For each such package, if it is installed, uninstall it unless there is some installed package that needs it and that is not among those installed via gnome2 (including recursively)."
Is there an easy way to do this?
Thanks in advance.
| FreeBSD - delete a partially installed gnome2? |
Update (short summary): my problem was fixed by advice from someone on the EOS forums... then I had another problem, but thankfully that was fixed as well.
(more details) I asked someone from EOS forums whether it's safe to delete uEFI boot loader partitions (along with the system partitions) using GParted. They said that it is safe. They also helped me to determine which of my partitions were Windows-related (in my case it's /sda1-2-3-4) and which were not (/sda5-6-7-8-9). I deleted all non-Windows ones. Then I had another problem. For whatever reason, EOS installer did not create a new uEFI partition for my new EOS when I was installing it. I don't know why that was happening, my guess is that something went wrong when I first installed EOS on a separate partition and then also installed another EOS alongside the first one (on that same partition), then deleted the first one. I guess maybe the uEFI configuration got messed up or something... I don't know. In the end, some smart users from EOS forums were able to make sense of it, they advised me to manually create an EFI partition (with some important criteria - if you wonder which ones, check comments #97-121 of this thread) so that the EOS installer would recognize it and use it. It worked. Now I have a proper EOS install.
but still, thanks a lot to everyone here for taking the time and effort to help me!
|
(disclaimer: I am… a certified noob)
I think I made a real mess while installing EndeavourOS (gonna call it EOS for short) and then trying to kind-of reinstall it… so I might need an advice of someone experienced. I’ll begin from the start
(oh and yeah, I did post this on EOS forums, but... nobody seems to have solved my problem, that's why I went here)
I have an asus x550ik laptop with a hard drive of around 930 GB (if this is important, I also have one of these modern BIOS, which supports mouse and all that, I think they have another name of their own as well). Originally, I just had Windows 10 on it. I decided to pick a distro for myself to switch to (which was Fedora originally but not anymore), but I decided to keep Windows 10 just in case I would ever need it for a task that my new system would not be able to fulfill (I know not all Windows functions are perfectly recreated). So… I tried to make a new partition for Fedora (I encrypted it too), everything went alright.
after using it for some time, I decided that I want to switch to EOS instead, but I wasn’t sure whether to keep Fedora or not, and my 950 GB space allowed me to consider keeping it, so I decided I’ll keep it for now. I installed EOS, and it went fine too, but during the installation, there was a known bug (which the devs are already working on) - disk encryption is only available on the “install alongside” option (someone advised to try a manual partition too but it was full of options that I felt really unsure about). I have cleared up some space in the storage and made it unallocated beforehand because that was a problem I faced while installing Fedora. That’s why “install alongside” didn’t work for me and I went with another option, without disk encryption.
after some time, I felt like I wanted to reinstall my EOS, partially because I felt like I already got it full of unneded software, partially because I do want disk encryption after all, partially because I wanted to try out a different desktop environment. So I booted the live USB and I chose “install alongside” (since, again, only that option has disk encryption). My plan was: install EOS 2 alongside, then erase the EOS 1 through GParted and just keep using EOS 2. So that’s exactly what I did - I erased EOS 1 through GParted while still being in the live USB system. The erased partition started showing as “unallocated space” in GParted. I booted into the second EOS, and generally everything was (and still is) alright, it functions normally to this moment, but I have these problems now:my boot loader menu for some reason still has the old EOS versions, so now each kernel has 2 versions of booting, one of which never works ( due to me having cleared it’s partition, obviously)
as shown on the screenshot below, when I open my file manager, on the left, for some reason, I see 2 systems named “endeavouros” and both of them have the same size (even though I vividly remember shrinking the EOS 1 to around 40-50 GB before erasing it, and the new one should be around 430 GB, but they both are 430 GB)this confuses me and I am unsure what to do.
I asked someone in the community what’s the proper way to uninstall a system and they said “through GParted”, but that’s exactly what I did and… here we are.
so… what I’m asking for is this:I would like to be advised on how to PROPERLY erase all my current EOS, in such a way that they would not show up anywhere anymore, not in boot loader, not in file manager, etc.; only after that I wanna reinstall a clean EOS again, just to make sure to get rid of all the issues.
I would also like to know which specific partitions of my hard drive I can erase (I only want to erase all EOS versions, not Fedora or Windows)
to make it easier to tell which one is which, I would like to remind that only Windows and EOS 1 are non-encrypted, while Fedora and EOS 2 are encrypted (and again, EOS 1 was shrinked to around 40 or 50 GB and then erased through GParted)
I would like to be explained, what are the small partitions from my hard drive. I know the big ones are Windows 10, Fedora and EOS, but the small ones (I highlighted them in red on the GParted screenshot) I have no idea about. What is each one of them? Should I remove any?
apparently, I also have an error saying "Partition 9 does not start on physical sector boundary" (as seen on one of the screenshots below). I am hoping that this is not going to matter since I am planning to install a clean EOS all over again eventually?..I attached some captioned screenshots in hopes that they would make sense to someone experienced in this, because to me they sure make little sense :')
please let me know if any additional info is needed.
thanks a lot in advance to whoever is willing to try to help me out. | I made a mess while reinstalling a distro, how can I FULLY and properly remove a distro from my computer? |
I still don't really understand what the files are, but sudo snap install pycharm-community seems to be working. I had some issues with opening a project which was linked to my GitHub:

Graphics Device initialization failed for :  es2, sw
Error initializing QuantumRenderer: no suitable pipeline found
java.lang.RuntimeException: java.lang.RuntimeException: Error initializing QuantumRenderer: no suitable pipeline found
at com.sun.javafx.tk.quantum.QuantumRenderer.getInstance(QuantumRenderer.java:280)
at com.sun.javafx.tk.quantum.QuantumToolkit.init(QuantumToolkit.java:221)
at com.sun.javafx.tk.Toolkit.getToolkit(Toolkit.java:205)
at com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:209)
at org.intellij.plugins.markdown.ui.preview.javafx.JavaFxHtmlPanel.lambda$null$4(JavaFxHtmlPanel.java:100)
at sun.awt.SunToolkit.unsafeNonblockingExecute(SunToolkit.java:644)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.ide.IdeEventQueue.unsafeNonblockingExecute(IdeEventQueue.java:1397)
at org.intellij.plugins.markdown.ui.preview.javafx.JavaFxHtmlPanel.runFX(JavaFxHtmlPanel.java:134)
at org.intellij.plugins.markdown.ui.preview.javafx.JavaFxHtmlPanel.lambda$new$5(JavaFxHtmlPanel.java:100)
at com.intellij.openapi.application.TransactionGuardImpl$2.run(TransactionGuardImpl.java:315)
at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.doRun(LaterInvocator.java:447)
at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.runNextEvent(LaterInvocator.java:431)
at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.run(LaterInvocator.java:415)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:311)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:762)
at java.awt.EventQueue.access$500(EventQueue.java:98)
at java.awt.EventQueue$3.run(EventQueue.java:715)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:732)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:779)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:720)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:395)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: java.lang.RuntimeException: Error initializing QuantumRenderer: no suitable pipeline found
at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.init(QuantumRenderer.java:94)
at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.run(QuantumRenderer.java:124)
        at java.lang.Thread.run(Thread.java:745)

But on the advice from https://stackoverflow.com/questions/21185156/javafx-on-linux-is-showing-a-graphics-device-initialization-failed-for-es2-s#21203726 I ran sudo apt-get install libgtk2.0-bin libxtst6 libxslt1.1 and everything seems to be running smoothly now.
|
I recently had some issues with running pycharm-community where it completely failed to load, so I decided to completely remove the software and try again with a fresh install. Unlike most software I currently use, pycharm-community was installed by downloading a tarball of the software and running an install.sh script, so I couldn't uninstall with apt.
I decided to try removing all the files associated with pycharm-community in the hopes of removing it completely. Here is the list for all the files in my system that had "pycharm" in the name:
/var/snap/pycharm-community
/var/lib/lxcfs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-58.mount
/var/lib/lxcfs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-58.mount
/var/lib/lxcfs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-58.mount
/var/lib/lxcfs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-58.mount
/var/lib/lxcfs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-58.mount
/var/lib/lxcfs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/name=systemd/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/name=systemd/system.slice/snap-pycharm\x2dcommunity-58.mount
/var/lib/lxcfs/cgroup/name=systemd/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/snapd/seccomp/bpf/snap.pycharm-community.pycharm-community.bin
/var/lib/snapd/seccomp/bpf/snap.pycharm-community.pycharm-community.src
/var/lib/snapd/snaps/pycharm-community_56.snap
/var/lib/snapd/snaps/pycharm-community_58.snap
/var/lib/snapd/snaps/pycharm-community_51.snap
/var/lib/snapd/apparmor/profiles/snap-update-ns.pycharm-community
/var/lib/snapd/apparmor/profiles/snap.pycharm-community.pycharm-community
/var/lib/snapd/desktop/applications/pycharm-community_pycharm-community.desktop
/var/lib/snapd/cookie/snap.pycharm-community
/var/cache/apparmor/snap-update-ns.pycharm-community
/var/cache/apparmor/snap.pycharm-community.pycharm-community
/snap/pycharm-community
/snap/pycharm-community/51/bin/pycharm.png
/snap/pycharm-community/51/bin/pycharm.sh
/snap/pycharm-community/51/bin/pycharm.vmoptions
/snap/pycharm-community/51/bin/pycharm64.vmoptions
/snap/pycharm-community/51/command-pycharm-community.wrapper
/snap/pycharm-community/51/helpers/pycharm
/snap/pycharm-community/51/helpers/pycharm/pycharm_commands
/snap/pycharm-community/51/helpers/pycharm/pycharm_commands/pycharm_test.py
/snap/pycharm-community/51/helpers/pycharm/pycharm_load_entry_point.py
/snap/pycharm-community/51/helpers/pycharm/pycharm_run_utils.py
/snap/pycharm-community/51/helpers/pycharm/pycharm_setup_runner.py
/snap/pycharm-community/51/helpers/pycharm_generator_utils
/snap/pycharm-community/51/helpers/pycharm_matplotlib_backend
/snap/pycharm-community/51/helpers/pydev/merge_pydev_pycharm.txt
/snap/pycharm-community/51/helpers/pydev/pycharm-readme.txt
/snap/pycharm-community/51/lib/pycharm-pydev.jar
/snap/pycharm-community/51/lib/pycharm.jar
/snap/pycharm-community/51/lib/src/pycharm-openapi-src.zip
/snap/pycharm-community/51/lib/src/pycharm-pydev-src.zip
/snap/pycharm-community/51/meta/gui/pycharm-community.desktop
/snap/pycharm-community/51/snap/gui/pycharm-community.desktop
/snap/pycharm-community/58/bin/pycharm.png
/snap/pycharm-community/58/bin/pycharm.sh
/snap/pycharm-community/58/bin/pycharm.vmoptions
/snap/pycharm-community/58/bin/pycharm64.vmoptions
/snap/pycharm-community/58/command-pycharm-community.wrapper
/snap/pycharm-community/58/helpers/pycharm
/snap/pycharm-community/58/helpers/pycharm/pycharm_commands
/snap/pycharm-community/58/helpers/pycharm/pycharm_commands/pycharm_test.py
/snap/pycharm-community/58/helpers/pycharm/pycharm_load_entry_point.py
/snap/pycharm-community/58/helpers/pycharm/pycharm_run_utils.py
/snap/pycharm-community/58/helpers/pycharm/pycharm_setup_runner.py
/snap/pycharm-community/58/helpers/pycharm_generator_utils
/snap/pycharm-community/58/helpers/pycharm_matplotlib_backend
/snap/pycharm-community/58/helpers/pydev/merge_pydev_pycharm.txt
/snap/pycharm-community/58/helpers/pydev/pycharm-readme.txt
/snap/pycharm-community/58/lib/pycharm-pydev.jar
/snap/pycharm-community/58/lib/pycharm.jar
/snap/pycharm-community/58/lib/src/pycharm-openapi-src.zip
/snap/pycharm-community/58/lib/src/pycharm-pydev-src.zip
/snap/pycharm-community/58/meta/gui/pycharm-community.desktop
/snap/pycharm-community/58/snap/gui/pycharm-community.desktop
/snap/pycharm-community/56/bin/pycharm.png
/snap/pycharm-community/56/bin/pycharm.sh
/snap/pycharm-community/56/bin/pycharm.vmoptions
/snap/pycharm-community/56/bin/pycharm64.vmoptions
/snap/pycharm-community/56/command-pycharm-community.wrapper
/snap/pycharm-community/56/helpers/pycharm
/snap/pycharm-community/56/helpers/pycharm/pycharm_commands
/snap/pycharm-community/56/helpers/pycharm/pycharm_commands/pycharm_test.py
/snap/pycharm-community/56/helpers/pycharm/pycharm_load_entry_point.py
/snap/pycharm-community/56/helpers/pycharm/pycharm_run_utils.py
/snap/pycharm-community/56/helpers/pycharm/pycharm_setup_runner.py
/snap/pycharm-community/56/helpers/pycharm_generator_utils
/snap/pycharm-community/56/helpers/pycharm_matplotlib_backend
/snap/pycharm-community/56/helpers/pydev/merge_pydev_pycharm.txt
/snap/pycharm-community/56/helpers/pydev/pycharm-readme.txt
/snap/pycharm-community/56/lib/pycharm-pydev.jar
/snap/pycharm-community/56/lib/pycharm.jar
/snap/pycharm-community/56/lib/src/pycharm-openapi-src.zip
/snap/pycharm-community/56/lib/src/pycharm-pydev-src.zip
/snap/pycharm-community/56/meta/gui/pycharm-community.desktop
/snap/pycharm-community/56/snap/gui/pycharm-community.desktop
/snap/bin/pycharm-community
/sys/fs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-58.mount
/sys/fs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-58.mount
/sys/fs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-58.mount
/sys/fs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-58.mount
/sys/fs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-58.mount
/sys/fs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/systemd/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/systemd/system.slice/snap-pycharm\x2dcommunity-58.mount
/sys/fs/cgroup/systemd/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/kernel/security/apparmor/policy/profiles/snap-update-ns.pycharm-community.26
/sys/kernel/security/apparmor/policy/profiles/snap.pycharm-community.pycharm-community.17
/etc/systemd/system/snap-pycharm\x2dcommunity-58.mount
/etc/systemd/system/snap-pycharm\x2dcommunity-51.mount
/etc/systemd/system/multi-user.target.wants/snap-pycharm\x2dcommunity-58.mount
/etc/systemd/system/multi-user.target.wants/snap-pycharm\x2dcommunity-51.mount
/etc/systemd/system/multi-user.target.wants/snap-pycharm\x2dcommunity-56.mount
/etc/systemd/system/snap-pycharm\x2dcommunity-56.mount

I had a look into what the snap directory was and discovered I could uninstall pycharm-community with sudo snap remove pycharm-community. Now when I run find / -iname "*pycharm*" the following files are still present:
/var/lib/lxcfs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-56.mount
/var/lib/lxcfs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-51.mount
/var/lib/lxcfs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/devices/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/pids/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/blkio/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/memory/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/fs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-51.mount
/sys/fs/cgroup/cpu,cpuacct/system.slice/snap-pycharm\x2dcommunity-56.mount
/sys/kernel/security/apparmor/policy/profiles/snap-update-ns.pycharm-community.26
/sys/kernel/security/apparmor/policy/profiles/snap.pycharm-community.pycharm-community.17

I was wondering what these files are; if I need to remove them to start with a completely fresh install; and if so, how do I remove them?
| cgroup files for uninstalled software |
Those Microsoft applications can be found as the package "ms-office-online". One can simply open the default package manager, search for this name, and untick the box to remove it.
|
Manjaro with Gnome comes with some preloaded applications. One of them is a suite of Microsoft Word Online applications (Word, Outlook, etc). And I would like to uninstall them, since I'll be using either Libre Office or Google Docs.
| How can I uninstall Microsoft Word Online on Manjaro (Gnome ver.)? |
I suggest marking all such packages as automatically installed:
apt-show-versions | awk '/No available version in archive/ { print $1 }' | xargs sudo apt-mark auto

Then apt autoremove will weed out those which can be removed; you’ll need to check the list and un-mark any packages you want to keep (apt-mark manual).
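To sanity-check the awk filter before piping it into apt-mark, you can run it on a captured sample line first. The line below is invented for illustration; real apt-show-versions output will differ slightly, but the matching logic is the same:

```shell
# only lines containing the marker phrase are matched; awk prints the
# first whitespace-separated field, which is the package name
printf 'libgdbm3:amd64 1.8.3-14 installed: No available version in archive\n' \
  | awk '/No available version in archive/ { print $1 }'
# prints: libgdbm3:amd64
```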
Packages can lose their “automatic” marker in a number of ways; one in particular is if apt install is ever used on them, e.g. to upgrade them or to try to figure out why they’re held back. Packages without the “automatic” marker won’t be candidates for removal unless another package conflicts with them. Even automatically-installed packages can stick around longer than intended: by default, if any other installed package merely suggests them, they won’t be removed, even though Suggests is a weaker relationship than any that would have caused a package to be installed in the first place.
|
To find packages which have no installation candidate in my current repositories I ran:
apt-show-versions | grep "No available version in archive" (as recommended here)
I think it would be best if one was notified by the package manager that there are packages that were installed from a repository but are not in any of the current repositories anymore. That would be a (separate) issue with the package-manager though.
The command returned packages such as libgnome-keyring-common:all, bum and libgdbm3:amd64.
Now I'd like to find out which of these I can safely remove. I know that I installed some of those by installing .deb files, and those should not be removed.
I already tried running sudo apt-cache show libgdbm3, sudo apt show libgdbm3 and sudo dpkg -p libgdbm3, but they only show (output of the latter command):
Package: libgdbm3
Priority: important
Section: libs
Installed-Size: 68
Maintainer: Debian QA Group <[emailprotected]>
Architecture: amd64
Multi-Arch: same
Source: gdbm
Version: 1.8.3-14
Depends: libc6 (>= 2.14), dpkg (>= 1.15.4) | install-info
Filename: pool/main/g/gdbm/libgdbm3_1.8.3-14_amd64.deb
Size: 30042
MD5sum: 4bd924fc8be5471a12d1e0204c74d6c3
Description: GNU dbm database routines (runtime version)
Description-md5: 900375b4641d82391c1c951c3b8647f6
Homepage: http://directory.fsf.org/project/gdbm/
Tag: role::shared-lib
SHA256: fbce0e2500aa970ed03665d15822265ff8d31c81927b987ae34e206b9b5ab0b6

and not how this package was installed. When I run sudo apt-get remove gdbm I get E: Unable to locate package gdbm.
How to properly clean out packages with no install candidate? And why are they not automatically removed or prompted to be removed when they got installed via a repo but are not in the current repos anymore? (Seems like this should be done every time repos are changed or an upgrade was done.)
System is Debian10/KDE.
| How to find out which packages that have no install candidate can be removed? (How to properly clean out such packages?) |
The unwanted versions can be removed by name:
sudo apt remove openjdk-8-jdk
sudo apt remove openjdk-11-jdk

They are installed as separate packages.
You can see all the openjdk packages available with this command:
sudo apt update
apt-cache search openjdk

You can see all openjdk packages in your system with:
dpkg -l | grep openjdk

Note: I am a RedHat user, not a Mint expert, but this is pretty much the same on every distro. I just checked the package names on Mint for those commands to work.
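For illustration, here is that grep applied to two invented dpkg -l style lines (real versions and descriptions will differ). The leading ii status means the package is installed; only the openjdk line survives the filter:

```shell
# sample dpkg -l output piped through the same grep as above
printf 'ii  openjdk-11-jdk  11.0.16  amd64  OpenJDK Development Kit\nii  vim  2:8.1  amd64  editor\n' \
  | grep openjdk
# prints only the openjdk-11-jdk line
```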
|
I am using Linux Mint XFCE 20.
Recently I installed Java by running sudo apt-get install openjdk and it automatically installed versions 8 and 11 of both JRE and JDK, including JRE headless.
It turns out that I needed only openjdk 16 to run what I wanted, so having the other versions of Java is kinda pointless. My main issue is that programs default to version 8 and I can't seem to fix it.
So how can I uninstall both versions 11 and 8 and just keep 16?
| How to uninstall versions of Java openJDK |
One of two things is going on:
The package that is installed isn't vim and is actually vim-tiny, vim-athena, vim-gtk, vim-gtk3, or something else. To find out if this is the case, use the following command:
dpkg-query -l | grep vim

It could also be that the vim on your system has been compiled from source and wouldn't be found by apt or dpkg. You can verify this with:
whereis vim

That will show any vim binaries located anywhere on the system, including any not located in /usr/bin that may have been compiled in different locations such as /opt or /usr/local.
You can also just use a wildcard:
If you are using apt 1.9 or newer:
apt remove '~nvim.*'

If you are using apt 1.8 or older:
apt remove vim* |
uname -a
Linux MiWiFi-R3-srv 4.19.0-0.bpo.9-amd64 #1 SMP Debian 4.19.118-2~bpo9+1 (2020-05-20) x86_64 GNU/Linux

sudo dpkg -l vim
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=====================================-=======================-=======================-===============================================================================
un  vim  <none>  <none>  (no description available)

Try to remove it:
sudo apt remove vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'vim' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.

Yet when I type vim in the console, it runs. Checking which package the binary belongs to:

sudo dpkg -S $(readlink -f $(which vim))
dpkg-query: no path found matching pattern /usr/local/bin/vim
ls -l /usr/local/bin/vim
-rwxr-xr-x 1 root staff 2946336 Jul 17 20:34 /usr/local/bin/vim | Why can't I remove vim?
This is adapted from another answer I wrote.
This will list packages from unstable:
apt list --installed | grep /unstable

You can then either downgrade them manually, or force their downgrade by adding a pin priority. To do the latter, add this to /etc/apt/preferences (creating it if necessary):
Package: *
Pin: release a=buster
Pin-Priority: 1001(I’m assuming you’ve installed Debian 10; change buster as appropriate if that’s not the case.)
Then run apt full-upgrade, which will try to downgrade all your packages to their Debian 10 version.
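To see what the grep in the first step matches, here it is applied to two invented apt list --installed style lines; only the package pulled from unstable passes the filter:

```shell
# sample apt list output: one package from unstable, one from stable
printf 'vlc/unstable,now 3.0.12-1 amd64 [installed]\nbash/stable,now 5.0-4 amd64 [installed]\n' \
  | grep /unstable
# prints only the vlc/unstable line
```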
|
While setting up Debian, I allowed sid repositories in order to install some packages, but forgot to disable them afterwards.
So later I ran apt update and apt upgrade.
That pulled in more than 500 MB of packages from the unstable repos.
How can I remove just these additions without touching what was there before?
Preferably from the terminal.
| How to remove sid (unstable) repositories
To determine how to uninstall a piece of software installed in this fashion, you need to read through the script used to perform the installation and determine what it did, then undo that. Using curl | sh-style patterns means that the installation script isn’t preserved, so we need to download the current version and hope that its behaviour hasn’t changed since the installation.
In this particular case, uninstallation is straightforward, assuming defaults:

delete ~/.mos/bin/mos — that’s the only file added during installation;
if ~/.mos/bin is empty, delete it, and then if ~/.mos is empty, delete it;
edit ~/.bashrc or ~/.profile to remove the line adding ~/.mos/bin to the PATH.

If you specified a different DESTDIR when installing, then you’ll need to remove mos from that directory instead.
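The file-removal part of those steps can be sketched as a small shell function (the function name uninstall_mos and its prefix argument are my own invention; editing ~/.bashrc or ~/.profile to drop the PATH line still has to be done by hand):

```shell
uninstall_mos() {
  # default to the standard install location, but allow an override
  # (e.g. the DESTDIR used at install time)
  prefix="${1:-$HOME/.mos}"
  rm -f "$prefix/bin/mos"                            # the only installed file
  rmdir "$prefix/bin" "$prefix" 2>/dev/null || true  # removed only if empty
}
```

Call it as plain uninstall_mos for a default install, or pass the directory you installed into.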
|
I installed Mongoose OS using the following command on Debian 9.5.0:
curl -fsSL https://mongoose-os.com/downloads/mos/install.sh | /bin/bashHow do I safely uninstall this software?
| How to uninstall Mongoose OS from Debian? |
In theory, you should be able to "undo" this installation by removing kchmviewer and all automatically installed packages:
apt-get remove kchmviewer && apt-get autoremove

but pay attention to the packages removed by the second command. Auto-removal may not produce the results you're after though: by default, if an auto-installed package is recommended by any other installed package, this command won't remove it. (See Why did 'apt-get autoremove' not work properly? for details.)
If you want to process the log instead, you can use this sed command to turn the "Install:" line into a list of packages you can use with apt-get remove (along with apt-get remove kchmviewer):
sed 's/([^)]*)//g;s/Install: //;s/ ,//g'

Taking your log in a file named log (with only the lines given in your question):
apt-get remove kchmviewer $(grep Install: log | sed 's/([^)]*)//g;s/Install: //;s/ ,//g') |
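To see what the sed expression actually does, here it is on a single shortened Install: line (invented for brevity): it strips the parenthesised version info, the Install: prefix, and the leftover comma separators, leaving bare package names:

```shell
# a shortened sample of the log's Install: line
printf 'Install: libchm1:amd64 (0.40a-3+b1, automatic), kchmviewer:amd64 (6.0-1)\n' \
  | sed 's/([^)]*)//g;s/Install: //;s/ ,//g'
# leaves just: libchm1:amd64 kchmviewer:amd64
```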
Suppose I have the history looks like this:
Start-Date: 2016-09-20 15:49:21
Commandline: apt-get install kchmviewer
Install: libkde3support4:amd64 (4.14.2-5+deb8u1, automatic), libresid-builder0c2a:amd64 (2.1.1-14, automatic), ntrack-module-libnl-0:amd64 (016-1.3, automatic), libmpeg2-4:amd64 (0.5.1-7, automatic), libwinpr-thread0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libkrosscore4:amd64 (4.14.2-5+deb8u1, automatic), libgpgme++2:amd64 (4.14.2-2+b1, automatic), oxygen-icon-theme:amd64 (4.14.0-1, automatic), libktexteditor4:amd64 (4.14.2-5+deb8u1, automatic), kdelibs5-data:amd64 (4.14.2-5+deb8u1, automatic), libchm1:amd64 (0.40a-3+b1, automatic), kchmviewer:amd64 (6.0-1), libcrystalhd3:amd64 (0.0~git20110715.fdd2f19-11, automatic), libdc1394-22:amd64 (2.2.3-1, automatic), libkdeui5:amd64 (4.14.2-5+deb8u1, automatic), libkdeclarative5:amd64 (4.14.2-5+deb8u1, automatic), libfam0:amd64 (2.7.0-17.1, automatic), libthreadweaver4:amd64 (4.14.2-5+deb8u1, automatic), libfreerdp-client1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libwinpr-utils0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libqt4-sql-mysql:amd64 (4.8.6+git64-g5dc8b2b+dfsg-3+deb8u1, automatic), libfreerdp-core1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), kde-runtime:amd64 (4.14.2-2, automatic), libchromaprint0:amd64 (1.2-1, automatic), libwinpr-synch0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libkparts4:amd64 (4.14.2-5+deb8u1, automatic), nepomuk-core-data:amd64 (4.14.0-1, automatic), libqt4-sql:amd64 (4.8.6+git64-g5dc8b2b+dfsg-3+deb8u1, automatic), libexiv2-13:amd64 (0.24-4.1, automatic), libqca2:amd64 (2.0.3-6, automatic), libntrack0:amd64 (016-1.3, automatic), upower:amd64 (0.99.1-3.2, automatic), kde-runtime-data:amd64 (4.14.2-2, automatic), libphonon4:amd64 (4.8.0-4, automatic), libwinpr-pool0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libkemoticons4:amd64 (4.14.2-5+deb8u1, automatic), vlc-plugin-notify:amd64 (2.2.4-1~deb8u1, automatic), libnepomukquery4a:amd64 (4.14.2-5+deb8u1, automatic), libkmediaplayer4:amd64 
(4.14.2-5+deb8u1, automatic), libwinpr-handle0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libwinpr-crt0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libvlccore8:amd64 (2.2.4-1~deb8u1, automatic), libqt4-qt3support:amd64 (4.8.6+git64-g5dc8b2b+dfsg-3+deb8u1, automatic), libdvbpsi9:amd64 (1.2.0-1, automatic), libfreerdp-cache1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libwinpr-interlocked0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), katepart:amd64 (4.14.2-2, automatic), libproxy-tools:amd64 (0.4.11-4+b2, automatic), vlc-nox:amd64 (2.2.4-1~deb8u1, automatic), libkdnssd4:amd64 (4.14.2-5+deb8u1, automatic), soprano-daemon:amd64 (2.9.4+dfsg-1.1, automatic), libupnp6:amd64 (1.6.19+git20141001-1, automatic), phonon:amd64 (4.8.0-4, automatic), libsoprano4:amd64 (2.9.4+dfsg-1.1, automatic), libdbusmenu-qt2:amd64 (0.9.2-1, automatic), libkatepartinterfaces4:amd64 (4.14.2-2, automatic), vlc-plugin-samba:amd64 (2.2.4-1~deb8u1, automatic), libusageenvironment1:amd64 (2014.01.13-1, automatic), kdelibs5-plugins:amd64 (4.14.2-5+deb8u1, automatic), libqt4-script:amd64 (4.8.6+git64-g5dc8b2b+dfsg-3+deb8u1, automatic), libebml4:amd64 (1.3.0-2+deb8u1, automatic), libxcb-xv0:amd64 (1.10-3+b1, automatic), libkjsapi4:amd64 (4.14.2-5+deb8u1, automatic), libcddb2:amd64 (1.3.2-5, automatic), libbasicusageenvironment0:amd64 (2014.01.13-1, automatic), libkactivities6:amd64 (4.13.3-1, automatic), libfreerdp-codec1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libwinpr-input0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libgroupsock1:amd64 (2014.01.13-1, automatic), libiso9660-8:amd64 (0.83-4.2, automatic), libfreerdp-gdi1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libwinpr-heap0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libwinpr-rpc0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), vlc-data:amd64 (2.2.4-1~deb8u1, automatic), libwinpr-library0.1:amd64 
(1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libknewstuff3-4:amd64 (4.14.2-5+deb8u1, automatic), libnepomukutils4:amd64 (4.14.2-5+deb8u1, automatic), libfreerdp-locale1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libfreerdp-primitives1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), kdoctools:amd64 (4.14.2-5+deb8u1, automatic), libgles2-mesa:amd64 (10.3.2-1+deb8u1, automatic), libkxmlrpcclient4:amd64 (4.14.2-2+b1, automatic), libkpty4:amd64 (4.14.2-5+deb8u1, automatic), libkjsembed4:amd64 (4.14.2-5+deb8u1, automatic), libwinpr-registry0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libqt4-designer:amd64 (4.8.6+git64-g5dc8b2b+dfsg-3+deb8u1, automatic), libsolid4:amd64 (4.14.2-5+deb8u1, automatic), libkhtml5:amd64 (4.14.2-5+deb8u1, automatic), libssh-gcrypt-4:amd64 (0.6.3-4+deb8u2, automatic), libntrack-qt4-1:amd64 (016-1.3, automatic), libtwolame0:amd64 (0.3.13-1.1, automatic), libkfile4:amd64 (4.14.2-5+deb8u1, automatic), libwinpr-sspi0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), phonon-backend-vlc:amd64 (0.8.0-2, automatic), libzvbi0:amd64 (0.2.35-3, automatic), mysql-common:amd64 (5.5.52-0+deb8u1, automatic), libgles1-mesa:amd64 (10.3.2-1+deb8u1, automatic), libattica0.4:amd64 (0.4.2-1, automatic), libkdesu5:amd64 (4.14.2-5+deb8u1, automatic), libmysqlclient18:amd64 (5.5.52-0+deb8u1, automatic), libdlrestrictions1:amd64 (0.15.15, automatic), libstreams0:amd64 (0.7.8-1.2+b2, automatic), libknotifyconfig4:amd64 (4.14.2-5+deb8u1, automatic), libfreerdp-rail1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libxcb-composite0:amd64 (1.10-3+b1, automatic), libfreerdp-common1.1.0:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libkdecore5:amd64 (4.14.2-5+deb8u1, automatic), kdelibs-bin:amd64 (4.14.2-5+deb8u1, automatic), libfreerdp-crypto1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libkactivities-bin:amd64 (4.13.3-1, automatic), libva-x11-1:amd64 (1.4.1-1, automatic), 
plasma-scriptengine-javascript:amd64 (4.14.2-2, automatic), libnepomukcore4:amd64 (4.14.0-1+b2, automatic), libshine3:amd64 (3.1.0-2.1, automatic), libnepomuk4:amd64 (4.14.2-5+deb8u1, automatic), libwinpr-path0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libzvbi-common:amd64 (0.2.35-3, automatic), libwinpr-dsparse0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), vlc:amd64 (2.2.4-1~deb8u1, automatic), libstreamanalyzer0:amd64 (0.7.8-1.2+b2, automatic), libvlc5:amd64 (2.2.4-1~deb8u1, automatic), libmodplug1:amd64 (0.8.8.4-4.1+b1, automatic), docbook-xsl:amd64 (1.78.1+dfsg-1, automatic), libmatroska6:amd64 (1.4.1-2+deb8u1, automatic), libkntlm4:amd64 (4.14.2-5+deb8u1, automatic), libkdewebkit5:amd64 (4.14.2-5+deb8u1, automatic), liblivemedia23:amd64 (2014.01.13-1, automatic), libvcdinfo0:amd64 (0.7.24+dfsg-0.2, automatic), libiodbc2:amd64 (3.52.9-2, automatic), libwinpr-sysinfo0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libkcmutils4:amd64 (4.14.2-5+deb8u1, automatic), libpolkit-qt-1-1:amd64 (0.103.0-1, automatic), libsidplay2:amd64 (2.1.1-14, automatic), libwinpr-environment0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libva-drm1:amd64 (1.4.1-1, automatic), libkactivities-models1:amd64 (4.13.3-1, automatic), libnl-route-3-200:amd64 (3.2.24-2, automatic), libkate1:amd64 (0.4.1-4, automatic), libkio5:amd64 (4.14.2-5+deb8u1, automatic), libxml2-utils:amd64 (2.9.1+dfsg1-5+deb8u3, automatic), kate-data:amd64 (4.14.2-2, automatic), libfreerdp-utils1.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libvncclient0:amd64 (0.9.9+dfsg2-6.1+deb8u1, automatic), vlc-plugin-pulse:amd64 (2.2.4-1~deb8u1, automatic), libplasma3:amd64 (4.14.2-5+deb8u1, automatic), libupower-glib3:amd64 (0.99.1-3.2, automatic), libwinpr-file0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, automatic), libqt4-declarative:amd64 (4.8.6+git64-g5dc8b2b+dfsg-3+deb8u1, automatic), libwinpr-crypto0.1:amd64 (1.1.0~git20140921.1.440916e+dfsg1-4, 
automatic), libmpcdec6:amd64 (0.1~r459-4.1, automatic)
End-Date: 2016-09-20 15:50:34There are a lot of packages listed, but the package names are not libkde3support4:amd64 (4.14.2-5+deb8u1, automatic) for example, instead it is libkde3support4. So how can I easily remove all these packages in one command?
I am using LXDE and Debian.
Using apt-get remove kchmviewer && apt-get autoremove only removes two packages:
Start-Date: 2016-09-20 15:51:15
Commandline: apt-get remove kchmviewer
Remove: kchmviewer:amd64 (6.0-1)
End-Date: 2016-09-20 15:51:18Start-Date: 2016-09-20 15:51:32
Commandline: apt-get autoremove
Remove: libchm1:amd64 (0.40a-3+b1)
End-Date: 2016-09-20 15:51:33 | how to easisly uninstall the packages listed in the history? |
It seems that the package either isn't installed, or its name is different than you expect. You can use the --info option on the .deb file to check the proper name:
dpkg --info unity3d.deb

If you're concerned about the contents and their locations, OSS or not, typically you can check them with the --contents option:
dpkg --contents unity3d.deb

The Unity package doesn't need its source code to work, so there's no reason to hide anything.
Last thing: your which isn't working because it can't find the exact filename you provide in your PATH; it isn't a tool like locate, which does partial matches. As for locate itself: it has a database for lookups, and this database requires updates. Chances are you didn't force an update, and one didn't happen spontaneously after your installation. You can run it with sudo updatedb, or sudo -b updatedb if you prefer running it in the background.
|
I downloaded the Linux beta .deb package of Unity3D from the official place with the official links. (I'm too lazy to recover the link and include it; it doesn't really matter anyways.)
I ran sudo dpkg -i unity3d.deb, and the package installed.
I don't want Unity, and now neither which nor sudo locate unity3d return anything.
I'm aware U3D is closed source; are its installation files masked or hidden? I can't find anything about this.

$ sudo dpkg --remove unity3d
dpkg: warning: ignoring request to remove unity3d which isn't installed | Where did Unity3D just install? |
Source packages which use autotools -- ./configure; make; make install -- usually have a make uninstall target as well. However, that target doesn't exist until you run ./configure (because there's actually no makefile), so if you get the error:
make: *** No rule to make target 'uninstall'. Stop.

That is likely the problem. This can be confirmed by trying just make; if you get make: *** No targets specified and no makefile found. Stop. then there is no makefile because ./configure has not been successfully run.
If you are using a fresh extraction of the source package to do the uninstall, it probably is not incredibly important if your options to ./configure are not exactly the same as the original build (with the exception of the target directories, which obviously must be the same), but it would be good to try and get close if you can remember them.

I also think that installing the program using checkinstall and then uninstalling it using synaptic or apt-get or any package manager would be suitable, right?

I haven't used checkinstall myself, but it certainly looks like a good idea and does appear to be explicitly useful in uninstalling things if you have used it in the first place. As far as I can tell it is only current for Debian-derived distros (such as Ubuntu).
|
A few days ago I compiled GTK+ 3.12 on my Ubuntu 14.04 and Linux Mint 17 (with Cinnamon) distros. It messed up the appearance. How can I remove it totally and safely? I didn't change the default installation location when compiling.
I also have versions 3.10 and 2.24 (installed by default.)
| How to uninstall compiled GTK+ |
sudo apt autoremove will not break anything or remove dependencies that are required by other packages.
Running it with the extra argument for mousepad,
sudo apt autoremove mousepad will only add mousepad and its unique dependencies to the list of orphans to be removed.
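To see the exact list before committing, apt-get has a dry-run mode. A sketch: the -s (--simulate) flag prints what would happen without changing anything, and the simulator prefixes removals with Remv.

```shell
# Dry run: print only the package names that `autoremove mousepad` would remove.
# -s simulates, so nothing is actually uninstalled (works without root).
apt-get -s autoremove mousepad 2>/dev/null | awk '/^Remv/ { print $2 }'
```

If the list looks sane, rerun without -s (and with sudo) to do the real removal.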
|
Note: Originally posted question on Ask Ubuntu and they kindly directed to this StackExchange as more appropriate for a non-official Ubuntu derivative.

Background information.
Currently Running POP_OS 20.04
Installed mousepad text editor when OS version was at 18.10 and used it as default text editor throughout upgrades to successive OS releases. I am now running Pop_OS 20.04 LTS and find that gedit works fine for basic text editing. Now ready to remove mousepad (along with any unnecessary dependencies if safe and possible).
Have set gedit as default text editor.
First attempt to uninstall mousepad 0.4.2 (deb version) via POP Shop gives the following error,
Failed to uninstall “Mousepad”
This may have been caused by external or manually compiled software.
The following packages have unmet dependencies:
gir1.2-gtksource-3.0: Depends: libgtksourceview-3.0-1 (>= 3.23.90) but it is not going to be installed

Looking at removal of mousepad (only) using the command line gives the following:
sudo apt remove mousepad
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 1,612 kB disk space will be freed. (aborted for now)
Then looked at removal of mousepad and dependencies using command,
username@computer:~$ sudo apt autoremove mousepad
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
diffstat engrampa engrampa-common exfalso fonts-font-awesome fonts-lato
fuseiso gir1.2-gst-plugins-base-1.0 gir1.2-gtksource-3.0
gir1.2-javascriptcoregtk-4.0 gir1.2-keybinder-3.0 gir1.2-webkit2-4.0
gnome-shell-extension-pop-battery-icon-fix gnustep-base-common
gnustep-base-runtime gnustep-common i965-va-driver:i386 icoutils
intel-media-va-driver:i386 javascript-common libaom0:i386 libappindicator1
libappstreamqt2 libapt-pkg-perl libaribb24-0:i386 libasn1-8-heimdal:i386
libasound2:i386 libasound2-plugins:i386 libasync-mergepoint-perl
libasyncns0:i386 libavahi-client3:i386 libavahi-common-data:i386
libavahi-common3:i386 libavcodec-extra58:i386 libavutil56:i386
libb-hooks-endofscope-perl libb-hooks-op-check-perl libbrotli1:i386
libcaja-extension1 libcapi20-3 libcapi20-3:i386 libcapture-tiny-perl
libclass-method-modifiers-perl libclass-xsaccessor-perl libclone-perl
libcodec2-0.9:i386 libcpanel-json-xs-perl libcups2:i386 libcurl3-gnutls:i386
libdatrie1:i386 libdbus-1-3:i386 libdbusmenu-gtk4 libdevel-callchecker-perl
libdevel-size-perl libdigest-bubblebabble-perl libdrm-amdgpu1:i386
libdrm-intel1:i386 libdrm-nouveau2:i386 libdrm-radeon1:i386 libdrm2:i386
libdynaloader-functions-perl libelf1:i386 libemail-valid-perl libexif12:i386
libexporter-tiny-perl libfaudio0 libfaudio0:i386 libfile-find-rule-perl
libflac8:i386 libfm-data libfm-extra4 libfm-gtk-data libfm-gtk4
libfm-modules libfm4 libfont-ttf-perl libfox-1.6-0 libfribidi0:i386
libfuture-perl libgc1c2 libgd3:i386 libgdbm-compat4:i386 libgdbm6:i386
libgdk-pixbuf2.0-0:i386 libgl1:i386 libgl1-mesa-dri:i386
libgl1-mesa-glx:i386 libglapi-mesa:i386 libglu1-mesa:i386 libglvnd0:i386
libglx-mesa0:i386 libglx0:i386 libgmp10:i386 libgnustep-base1.26
libgnutls30:i386 libgomp1:i386 libgphoto2-6:i386 libgphoto2-port12:i386
libgraphite2-3:i386 libgsettings-qt1 libgsm1:i386 libgssapi-krb5-2:i386
libgssapi3-heimdal:i386 libgtk2-perl libgtksourceview-3.0-1
libgtksourceview-3.0-common libharfbuzz0b:i386 libhcrypto4-heimdal:i386
libheimbase1-heimdal:i386 libheimntlm0-heimdal:i386 libhogweed5:i386
libhx509-5-heimdal:i386 libicu66:i386 libieee1284-3:i386 libigdgmm11:i386
libimport-into-perl libio-async-loop-epoll-perl libio-async-perl
libio-pty-perl libio-string-perl libipc-run-perl libjack-jackd2-0:i386
libjbig0:i386 libjpeg-turbo8:i386 libjpeg8:i386 libjs-jquery libjs-modernizr
libjs-sphinxdoc libjs-underscore libjson-maybexs-perl libk5crypto3:i386
libkeybinder-3.0-0 libkeyutils1:i386 libkf5itemmodels5
libkrb5-26-heimdal:i386 libkrb5-3:i386 libkrb5support0:i386 liblcms2-2:i386
libldap-2.4-2:i386 liblinux-epoll-perl liblist-compare-perl
liblist-moreutils-perl libltdl7:i386 libmarkdown2 libmenu-cache-bin
libmenu-cache3 libmodule-implementation-perl libmodule-runtime-perl
libmoo-perl libmoox-aliases-perl libmp3lame0:i386 libmpg123-0:i386
libmysqlclient21:i386 libnamespace-clean-perl libnet-dns-perl
libnet-dns-sec-perl libnet-domain-tld-perl libnet-ip-perl libnettle7:i386
libnghttp2-14:i386 libnotify-bin libnuma1:i386 libnumber-compare-perl
libobjc4 libodbc1:i386 libopenal1:i386 libopenjp2-7:i386 libosmesa6
libosmesa6:i386 libp11-kit0:i386 libpackage-stash-perl
libpackage-stash-xs-perl libpackagekitqt5-1 libpango-1.0-0:i386
libpango-perl libpangocairo-1.0-0:i386 libpangoft2-1.0-0:i386
libparams-classify-perl libpath-tiny-perl libpcap0.8:i386 libpci3:i386
libpciaccess0:i386 libpeony2 libperl5.30:i386 libperlio-gzip-perl
libpsl5:i386 libpulse0:i386 libqhttpengine0 libreadonly-perl
libref-util-perl libref-util-xs-perl libroken18-heimdal:i386
librole-tiny-perl librsvg2-2:i386 librsvg2-common:i386 librtmp1:i386
libsamplerate0:i386 libsane:i386 libsasl2-2:i386 libsasl2-modules:i386
libsasl2-modules-db:i386 libsdl2-2.0-0:i386 libsensors5:i386
libsereal-decoder-perl libsereal-encoder-perl libsereal-perl libshine3:i386
libsnapd-qt1 libsnappy1v5:i386 libsndfile1:i386 libsndio7.0:i386
libsnmp35:i386 libsoxr0:i386 libspeex1:i386 libsqlite3-0:i386 libssh-4:i386
libssl1.1:i386 libstb0 libstb0:i386 libstrictures-perl libstruct-dumb-perl
libsub-exporter-progressive-perl libsub-identify-perl libsub-quote-perl
libswresample3:i386 libsystemd0:i386 libtasn1-6:i386 libtest-fatal-perl
libtest-refcount-perl libtext-glob-perl libtext-levenshtein-perl
libthai0:i386 libtiff5:i386 libtwolame0:i386 libtype-tiny-perl
libtype-tiny-xs-perl libudev1:i386 libunicode-utf8-perl libusb-1.0-0:i386
libv4l-0:i386 libv4lconvert0:i386 libva-drm2:i386 libva-x11-2:i386
libva2:i386 libvariable-magic-perl libvdpau1:i386 libvkd3d1 libvkd3d1:i386
libvo-amrwbenc0:i386 libvpx6:i386 libvulkan1:i386 libwavpack1:i386
libwayland-client0:i386 libwayland-cursor0:i386 libwayland-egl1:i386
libwebp6:i386 libwebpmux3:i386 libwind0-heimdal:i386 libwrap0:i386
libx11-xcb1:i386 libx265-179:i386 libxcb-dri2-0:i386 libxcb-dri3-0:i386
libxcb-glx0:i386 libxcb-present0:i386 libxcb-randr0:i386 libxcb-sync1:i386
libxcb-xfixes0:i386 libxcomposite1:i386 libxcursor1:i386 libxdamage1:i386
libxfce4util-bin libxfce4util-common libxfce4util7 libxfconf-0-3
libxfixes3:i386 libxi6:i386 libxinerama1:i386 libxkbcommon0:i386
libxml-writer-perl libxml2:i386 libxpm4:i386 libxrandr2:i386
libxshmfence1:i386 libxslt1.1:i386 libxss1:i386 libxvidcore4:i386
libxxf86vm1:i386 libyaml-libyaml-perl libzvbi0:i386 lintian lxmenu-data
mate-desktop-common mate-terminal-common mesa-va-drivers:i386
mesa-vdpau-drivers:i386 mesa-vulkan-drivers:i386 mousepad
ocl-icd-libopencl1:i386 p7zip p7zip-full parchives patchutils
python3-dbus.mainloop.pyqt5 python3-feedparser python3-musicbrainzngs
python3-mutagen python3-pyflatpak python3-pyinotify
qml-module-org-kde-kcoreaddons qml-module-org-kde-kquickcontrols
qml-module-org-kde-qqc2desktopstyle qml-module-qtquick-controls
qml-module-qtquick-dialogs qml-module-qtquick-layouts
qml-module-qtquick-privatewidgets qt5-gtk2-platformtheme
sphinx-rtd-theme-common t1utils unar va-driver-all:i386
vdpau-driver-all:i386 xarchiver xfconf
0 upgraded, 0 newly installed, 324 to remove and 0 not upgraded.
After this operation, 787 MB disk space will be freed.
Do you want to continue? [Y/n] (aborted for now)

It would probably be fine to remove only mousepad (1,612 kB), but I would certainly like to remove the 787 MB worth of 324 apparently no-longer-needed dependencies. However, given the amount of information coming back, I don't have enough knowledge and experience to tell the terminal "yes - please remove these".
The Question: Would it really be basically safe to remove these dependencies without causing catastrophic issues? If not, I'm curious to know what is happening here...
Have so far understood that autoremove command would only remove dependencies that are no longer needed (safe to uninstall) but possibly this assumption is not correct.
| Safe to remove Mousepad dependencies in POP_OS? |
Compatibility with other platforms, and compatibility with older implementations, to avoid overruns when using snprintf() and strncpy().
Michael Kerrisk explains in his book on page 1165 (Chapter 57, Sockets: Unix domain):

SUSv3 doesn't specify the size of the sun_path field. Early BSD implementations used 108 and 104 bytes, and one contemporary implementation (HP-UX 11) uses 92 bytes. Portable applications should code to this lower value, and use snprintf() or strncpy() to avoid buffer overruns when writing into this field.

The Docker guys even made fun of it, because some sockets were 110 characters long:

lol 108 chars ETOOMANY

This is why Linux uses a 108-char socket path. Could this be changed? Of course. As for why this limitation was created on older operating systems in the first place, see: Why is the maximal path length allowed for unix-sockets on linux 108?

Quoting the answer: It was to match the space available in a handy kernel data structure.
Quoting "The Design and Implementation of the 4.4BSD Operating System"
by McKusick et al. (page 369):

The memory management facilities revolve around a data structure
called an mbuf. Mbufs, or memory buffers, are 128 bytes long, with 100
or 108 bytes of this space reserved for data storage.

Other OSs (unix domain sockets):

OpenBSD: 104 characters
FreeBSD: 104 characters
Mac OS X 10.9: 104 characters |
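If you script around this, a candidate path can be checked against the portable floor before you try to bind it. A minimal sketch, using the 92-byte HP-UX figure quoted above (91 characters plus the terminating NUL):

```shell
# Return 1 (and complain) when a socket path exceeds the portable sun_path floor.
check_sock_path() {
    max=91   # HP-UX's 92-byte sun_path minus the terminating NUL byte
    if [ "${#1}" -gt "$max" ]; then
        echo "socket path too long: ${#1} > $max characters" >&2
        return 1
    fi
}

check_sock_path /tmp/myapp.sock && echo "path is portable"
```

On Linux alone you could raise max to 107, but coding to the lowest common value is what the Kerrisk quote recommends.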
On Unix systems, path names usually have virtually no length limitation (well, 4096 characters on Linux)... except for socket file paths, which are limited to around 100 characters (107 characters on Linux).

First question: why such a low limitation?

I've checked that it seems possible to work around this limitation by changing the current working directory and creating, in various directories, several socket files all using the same path ./myfile.sock: the client applications seem to correctly connect to the expected server processes, even though lsof shows all of them listening on the same socket file path.

Is this workaround reliable or was I just lucky?
Is this behavior specific to Linux or may this workaround be applicable to other Unixes as well? | Why is socket path length limited to a hundred chars? |
Ancillary data is received as if it were queued along with the first normal data octet in the segment (if any).
-- POSIX.1-2017

For the rest of your question, things get a bit hairy...

For the purposes of this section, a datagram is considered to be a data segment that terminates a record, and that includes a source address as a special type of ancillary data.
Data segments are placed into the queue as data is delivered to the socket by the protocol. Normal data segments are placed at the end of the queue as they are delivered. If a new segment contains the same type of data as the preceding segment and includes no ancillary data, and if the preceding segment does not terminate a record, the segments are logically merged into a single segment...
A receive operation shall never return data or ancillary data from more than one segment.

So modern BSD sockets exactly match this extract. This is not surprising :-).
Remember the POSIX standard was written after UNIX, and after splits like BSD vs. System V. One of the main goals was to help understand the existing range of behaviour, and prevent even more splits in existing features.
Linux was implemented without reference to BSD code. It appears to behave differently here.

If I read you correctly, it sounds like Linux is additionally merging "segments" when a new segment does include ancillary data, but the previous segment does not.
Your point that "Linux will append portions of ancillary-bearing messages to the end of other messages as long as no prior ancillary payload needed to be delivered during this call to recvmsg", does not seem entirely explained by the standard. One possible explanation would involve a race condition. If you read part of a "segment", you will receive the ancillary data. Perhaps Linux interpreted this as meaning the remainder of the segment no longer counts as including ancillary data! So when a new segment is received, it is merged - either as per the standard, or as per difference 1 above.

If you want to write a maximally portable program, you should avoid this area altogether. When using ancillary data, it is much more common to use datagram sockets. If you want to work on all the strange platforms that technically aspire to provide something mostly like POSIX, your question seems to be venturing into a dark and untested corner.

You could argue Linux still follows several significant principles:
Ancillary data is never "concatenated", as you put it.However, I am not convinced the Linux behaviour is particularly useful, when you compare it to the BSD behaviour. It seems like the program you describe would need to add a Linux-specific workaround. And I don't know a justification for why Linux would expect you to do that.
It might have looked sensible when writing the Linux kernel code, but without ever having been tested or exercised by any program.
Or it might be exercised by some program code which mostly works under this subset, but in principle could have edge-case "bugs" or race conditions.
If you cannot make sense of the Linux behaviour and its intended usage, I think that argues for treating this as a "dark, untested corner" on Linux.
|
So I've read lots of information on unix-stream ancillary data, but one thing missing from all the documentation is what is supposed to happen when there is a partial read?
Suppose I'm receiving the following messages into a 24 byte buffer
msg1 [20 bytes] (no ancillary data)
msg2 [7 bytes] (2 file descriptors)
msg3 [7 bytes] (1 file descriptor)
msg4 [10 bytes] (no ancillary data)
msg5 [7 bytes] (5 file descriptors)

On the first call to recvmsg, I get all of msg1 (and part of msg2? Will the OS ever do that?) If I get part of msg2, do I get the ancillary data right away, and need to save it for the next read when I know what the message was actually telling me to do with the data? If I free up the 20 bytes from msg1 and then call recvmsg again, will it ever deliver msg3 and msg4 at the same time? Does the ancillary data from msg3 and msg4 get concatenated in the control message struct?
While I could write test programs to experimentally find this out, I'm looking for documentation about how ancillary data behaves in a streaming context. It seems odd that I can't find anything official on it.

I'm going to add my experimental findings here, which I got from this test program:
https://github.com/nrdvana/daemonproxy/blob/master/src/ancillary_test.c
Linux 3.2.59, 3.17.6
It appears that Linux will append portions of ancillary-bearing messages to the end of other messages as long as no prior ancillary payload needed to be delivered during this call to recvmsg. Once one message's ancillary data is being delivered, it will return a short read rather than starting the next ancillary-data message. So, in the example above, the reads I get are:
recv1: [24 bytes] (msg1 + partial msg2 with msg2's 2 file descriptors)
recv2: [10 bytes] (remainder of msg2 + msg3 with msg3's 1 file descriptor)
recv3: [17 bytes] (msg4 + msg5 with msg5's 5 file descriptors)
recv4: [0 bytes]

BSD 4.4, 10.0
BSD provides more alignment than Linux, and gives a short read immediately before the start of a message with ancillary data. But, it will happily append a non-ancillary-bearing message to the end of an ancillary-bearing message. So for BSD, it looks like if your buffer is larger than the ancillary-bearing message, you get almost packet-like behavior. The reads I get are:
recv1: [20 bytes] (msg1)
recv2: [7 bytes] (msg2, with msg2's 2 file descriptors)
recv3: [17 bytes] (msg3, and msg4, with msg3's 1 file descriptor)
recv4: [7 bytes] (msg5 with 5 file descriptors)
recv5: [0 bytes]

TODO:
Would still like to know how it happens on older Linux, iOS, Solaris, etc, and how it could be expected to happen in the future.
| What happens with unix stream ancillary data on partial reads? |
I've found how to fix it situationally.
The following command gave me a hint:
$ lsof -U +c 15 | cut -f1 -d' ' | sort | uniq -c | sort -rn | head -3
382 zenity
256 dbus-daemon
 212 chrome

I took the top process name and read a little about it. From Wikipedia:

Zenity is free software and a cross-platform program that allows the execution of GTK+ dialog boxes in command-line and shell scripts.

Then I listed the processes and filtered them by the zenity keyword:
$ ps axwwu | grep -i zenity
tom 762 0.0 0.2 390752 27476 ? Sl Jun06 0:01 /usr/bin/zenity --notification --window-icon /usr/share/icons/gnome/32x32/status/mail-unread.png --text You have new mail
tom 1239 0.0 0.2 390756 27700 ? Sl Jun06 0:01 /usr/bin/zenity --notification --window-icon /usr/share/icons/gnome/32x32/status/mail-unread.png --text You have new mail
tom 1249 0.0 0.2 390760 27752 ? Sl Jun02 0:01 /usr/bin/zenity --notification --window-icon /usr/share/icons/gnome/32x32/status/mail-unread.png --text You have new mail
...

Aha! This is related to the mail notification toasts:
$ pidof zenity | wc -w
186

$ killall zenity

$ pidof zenity | wc -w
0

$ lsof -U +c 15 | cut -f1 -d' ' | sort | uniq -c | sort -rn | head -3
140 chrome
61 dbus-daemon
 37 skypeforlinux

# lsof -p `pidof X` | tail -n 10
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
Xorg 1672 root 58u unix 0xffff8801c05221c0 0t0 9714900 @/tmp/.X11-unix/X0
Xorg 1672 root 59u unix 0xffff8801c0527440 0t0 9715809 @/tmp/.X11-unix/X0
Xorg 1672 root 62u CHR 13,79 0t0 9540231 /dev/input/event15
Xorg 1672 root 69u unix 0xffff88031c155280 0t0 175280 @/tmp/.X11-unix/X0
Xorg 1672 root 79u unix 0xffff880063b103c0 0t0 9243076 @/tmp/.X11-unix/X0
Xorg 1672 root 90u unix 0xffff880111b22940 0t0 2858278 @/tmp/.X11-unix/X0
Xorg 1672 root 96u unix 0xffff88000aeb2d00 0t0 9301134 @/tmp/.X11-unix/X0
Xorg 1672 root 113u unix 0xffff880063b14b00 0t0 939782 @/tmp/.X11-unix/X0
Xorg 1672 root 153u unix 0xffff880111a47080 0t0 1819503 @/tmp/.X11-unix/X0
Xorg 1672 root 256u REG 0,16 4096 22306 /sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-LVDS-1/intel_backlight/brightness

# lsof -p `pidof X` | wc -l
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
524

Voilà! I can start other apps now (until zenity or another buggy app eats all the available connections).
NOTE: I still have to work out how to prevent zenity from keeping these connections open.
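Until the root cause is fixed, the stale popups can be reaped periodically, e.g. from cron. A workaround sketch: the pattern assumes the zenity command line shown in the ps output, and the [n] bracket trick stops pgrep -f from matching its own invoking shell.

```shell
# Kill every process whose full command line matches the pattern.
# `xargs -r` (and `|| true`) make this a safe no-op when nothing matches.
reap() {
    pgrep -f "$1" | xargs -r kill || true
}

reap 'zenity --notificatio[n]'
```

Putting `reap 'zenity --notificatio[n]'` in a daily crontab entry would keep the socket count down without touching the notification mechanism itself.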
|
After some period of time, I'm experiencing problems with starting applications, for example, Viber.
$ /opt/viber/Viber
QSqlDatabasePrivate::removeDatabase: connection 'ConfigureDBConnection' is still in use, all queries will cease to work.
Maximum number of clients reached
(Viber:1279): Gtk-WARNING **: cannot open display: :0

Skype
$ skype
Maximum number of clients reached

Gnote
$ gnote
Maximum number of clients reached
** (gnote:21284): WARNING **: Could not open X display
Maximum number of clients reached
(gnote:21284): Gtk-WARNING **: cannot open display: :0

xrestop
$ xrestop
Maximum number of clients reached

xrestop: Unable to open display!

After some research, I've found that it is related to some limit on unix sockets.
$ lsof -U +c 15 | wc -l
1011

$ lsof -U +c 15 | cut -f1 -d' ' | sort | uniq -c | sort -rn | head -3
382 zenity
256 dbus-daemon
 212 chrome

In some places people talk about a 256 max X client limit. It looks like this is confirmed by the following command output:
# lsof -p `pidof X` | tail -n 50
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
Xorg 1672 root 207u unix 0xffff880141189e00 0t0 3759941 @/tmp/.X11-unix/X0
Xorg 1672 root 208u unix 0xffff88001a50a940 0t0 3759944 @/tmp/.X11-unix/X0
Xorg 1672 root 209u unix 0xffff88003a0430c0 0t0 3802123 @/tmp/.X11-unix/X0
Xorg 1672 root 210u unix 0xffff8801117c3c00 0t0 3817272 @/tmp/.X11-unix/X0
Xorg 1672 root 211u unix 0xffff8801225e2580 0t0 4395710 @/tmp/.X11-unix/X0
Xorg 1672 root 212u unix 0xffff88015ed3a580 0t0 4425629 @/tmp/.X11-unix/X0
Xorg 1672 root 213u unix 0xffff88013095f800 0t0 4427059 @/tmp/.X11-unix/X0
Xorg 1672 root 214u unix 0xffff8802e75d9a40 0t0 4427075 @/tmp/.X11-unix/X0
Xorg 1672 root 215u unix 0xffff8801225e12c0 0t0 4608310 @/tmp/.X11-unix/X0
Xorg 1672 root 216u unix 0xffff88031bc8fbc0 0t0 4608314 @/tmp/.X11-unix/X0
Xorg 1672 root 217u unix 0xffff8801309a5dc0 0t0 4608318 @/tmp/.X11-unix/X0
Xorg 1672 root 218u unix 0xffff8801309a2940 0t0 4607747 @/tmp/.X11-unix/X0
Xorg 1672 root 219u unix 0xffff880130958b40 0t0 4786413 @/tmp/.X11-unix/X0
Xorg 1672 root 220u unix 0xffff8800b1382d00 0t0 4787103 @/tmp/.X11-unix/X0
Xorg 1672 root 221u unix 0xffff88011f350000 0t0 5001136 @/tmp/.X11-unix/X0
Xorg 1672 root 222u unix 0xffff88011f352d00 0t0 5144089 @/tmp/.X11-unix/X0
Xorg 1672 root 223u unix 0xffff88011f351a40 0t0 5144417 @/tmp/.X11-unix/X0
Xorg 1672 root 224u unix 0xffff88011f357bc0 0t0 5145648 @/tmp/.X11-unix/X0
Xorg 1672 root 225u unix 0xffff88014108a940 0t0 5145652 @/tmp/.X11-unix/X0
Xorg 1672 root 226u unix 0xffff88001a50c740 0t0 5145655 @/tmp/.X11-unix/X0
Xorg 1672 root 227u unix 0xffff88006c7b6cc0 0t0 5161703 @/tmp/.X11-unix/X0
Xorg 1672 root 228u unix 0xffff8802e75dddc0 0t0 5225428 @/tmp/.X11-unix/X0
Xorg 1672 root 229u unix 0xffff88015ed3cb00 0t0 5228455 @/tmp/.X11-unix/X0
Xorg 1672 root 230u unix 0xffff880111b203c0 0t0 5235401 @/tmp/.X11-unix/X0
Xorg 1672 root 231u unix 0xffff88013089bfc0 0t0 5259828 @/tmp/.X11-unix/X0
Xorg 1672 root 232u unix 0xffff8800b10030c0 0t0 5310616 @/tmp/.X11-unix/X0
Xorg 1672 root 233u unix 0xffff88010d461e00 0t0 5349971 @/tmp/.X11-unix/X0
Xorg 1672 root 234u unix 0xffff88001a50ddc0 0t0 5530781 @/tmp/.X11-unix/X0
Xorg 1672 root 235u unix 0xffff880142e703c0 0t0 5529146 @/tmp/.X11-unix/X0
Xorg 1672 root 236u unix 0xffff880142e73c00 0t0 5654363 @/tmp/.X11-unix/X0
Xorg 1672 root 237u unix 0xffff88025087f800 0t0 5260838 @/tmp/.X11-unix/X0
Xorg 1672 root 238u unix 0xffff880142e712c0 0t0 5814164 @/tmp/.X11-unix/X0
Xorg 1672 root 239u unix 0xffff8802508a21c0 0t0 5917312 @/tmp/.X11-unix/X0
Xorg 1672 root 240u unix 0xffff8800b1387080 0t0 5851281 @/tmp/.X11-unix/X0
Xorg 1672 root 241u unix 0xffff8802e6854380 0t0 5851284 @/tmp/.X11-unix/X0
Xorg 1672 root 242u unix 0xffff88011f3503c0 0t0 5851295 @/tmp/.X11-unix/X0
Xorg 1672 root 243u unix 0xffff8801041d8f00 0t0 5917315 @/tmp/.X11-unix/X0
Xorg 1672 root 244u unix 0xffff8801041d83c0 0t0 5917322 @/tmp/.X11-unix/X0
Xorg 1672 root 245u unix 0xffff88000aeb4ec0 0t0 5917325 @/tmp/.X11-unix/X0
Xorg 1672 root 246u unix 0xffff880111b21e00 0t0 5993474 @/tmp/.X11-unix/X0
Xorg 1672 root 247u unix 0xffff880143546180 0t0 6115119 @/tmp/.X11-unix/X0
Xorg 1672 root 248u unix 0xffff88000aeb30c0 0t0 6120777 @/tmp/.X11-unix/X0
Xorg 1672 root 249u unix 0xffff88013089da00 0t0 6119223 @/tmp/.X11-unix/X0
Xorg 1672 root 250u unix 0xffff8801309a5280 0t0 6121614 @/tmp/.X11-unix/X0
Xorg 1672 root 251u unix 0xffff88000aeb6cc0 0t0 6139354 @/tmp/.X11-unix/X0
Xorg 1672 root 252u unix 0xffff88010d460000 0t0 6635385 @/tmp/.X11-unix/X0
Xorg 1672 root 253u unix 0xffff88013095b840 0t0 6659213 @/tmp/.X11-unix/X0
Xorg 1672 root 254u unix 0xffff88005c96b480 0t0 6661835 @/tmp/.X11-unix/X0
Xorg 1672 root 255u unix 0xffff88011f350f00 0t0 6710815 @/tmp/.X11-unix/X0
Xorg 1672 root 256u REG 0,16 4096 22306 /sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-LVDS-1/intel_backlight/brightness

If I close some application, for example Chrome, then Viber can be started.
I'm wondering if this is normal to have 200+ connections for the top three apps? Or just suggest how to solve the problem, please.
Note, I can use my system for months without rebooting (suspend/resume).
Linux Mint 17.3 64-bit Cinnamon
| Can't start applications due to "Maximum number of clients reached" error |
Yes, Linux automatically "cleans up" abstract sockets to the extent that cleaning up even makes sense. Here's a minimal working example with which you can verify this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(int argc, char **argv)
{
    int s;
    struct sockaddr_un sun;

    if (argc != 2 || strlen(argv[1]) + 1 > sizeof(sun.sun_path)) {
        fprintf(stderr, "usage: %s abstract-path\n", argv[0]);
        exit(1);
    }

    s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket");
        exit(1);
    }

    memset(&sun, 0, sizeof(sun));
    sun.sun_family = AF_UNIX;
    strcpy(sun.sun_path + 1, argv[1]);  /* leading NUL byte makes the name abstract */
    if (bind(s, (struct sockaddr *) &sun, sizeof(sun))) {
        perror("bind");
        exit(1);
    }
    pause();
}

Run this program as ./a.out /test-socket &, then run ss -ax | grep test-socket, and you will see the socket in use. Then kill %./a.out, and ss -ax will show the socket is gone.
However, the reason you can't find this clean-up in any documentation is that it isn't really cleaning up in the same sense that non-abstract unix-domain sockets need cleaning up. A non-abstract socket actually allocates an inode and creates an entry in a directory, which needs to be cleaned up in the underlying file system. By contrast, think of an abstract socket more like a TCP or UDP port number. Sure, if you bind a TCP port and then exit, that TCP port will be free again. But whatever 16-bit number you used still exists abstractly and always did. The namespace of port numbers is 1-65535 and never changes or needs cleaning.
So just think of the abstract socket name like a TCP or UDP port number, just picked from a much larger set of possible port numbers that happen to look like pathnames but are not. You can't bind the same port number twice (barring SO_REUSEADDR or SO_REUSEPORT). But closing the socket (explicitly or implicitly by terminating) frees the port, with nothing left to clean up.
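You can also watch this from the shell without writing C. A sketch: ss prints abstract names with a leading @, and /test-socket matches the example program's argument.

```shell
# Does any process currently hold the abstract name "/test-socket"?
# (stderr is discarded so a missing `ss` simply reports "free")
if ss -ax 2>/dev/null | grep -qF '@/test-socket'; then
    echo "abstract socket @/test-socket is in use"
else
    echo "abstract socket @/test-socket is free"
fi
```

Running this before and after killing the example program shows the name flipping from "in use" back to "free" with no cleanup step in between.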
|
There's a great answer on StackOverflow about providing a better lock for daemons (synthesized from Eduardo Fleury) that doesn't depend on the common PID file lock mechanism for daemons. There are lots of good comments there about why PID lock files can sometimes cause problems, so I won't rehash them here.
In short, the solution relies on Linux abstract namespace domain sockets, which keeps track of the sockets by name for you, rather than relying on files, which can stick around after the daemon is SIGKILL'd. The example shows that Linux seems to free the socket once the process is dead.
But I can't find definitive documentation in Linux that says what exactly Linux does with the abstract socket when the bound process is SIGKILL'd. Does anyone know?
Put another way, when precisely is the abstract socket freed to be used again?
I don't want to replace the PID file mechanism with abstract sockets unless it definitively solves the problem.
| Does Linux automatically clean up abstract domain sockets? |
The code that generates this file is in the unix_seq_show() function in net/unix/af_unix.c in the kernel source. Looking at include/net/af_unix.h is also helpful, to see the data structures in use.
The socket path is always the last column in the output, and the Android kernel source matches the stock kernel in this respect. So unless I'm mistaken, that number that looks like a column isn't actually a separate column.
You can name UNIX domain sockets practically anything you want, as long as the total path length is less than 108 bytes. So you can't make any assumptions as to what these paths will look like. It's possible that the userspace code that's choosing those names is using a tab character followed by a number, or even padding the name out to a certain length with spaces. To test my theory, you might try looking at the socket files in /dev/socket/qmux_radio/.
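For a quick overview of the table from the shell, the Path column can be tallied. A sketch: field 8 is the Path on rows that have one, and NR > 1 skips the header line.

```shell
# Count /proc/net/unix entries per socket path, most common first.
# Paths beginning with '@' are abstract sockets; pathless rows are skipped.
tally_unix_sockets() {
    awk 'NR > 1 && NF >= 8 { n[$8]++ } END { for (p in n) print n[p], p }' "$1" | sort -rn
}

# on a live system:
[ -r /proc/net/unix ] && tally_unix_sockets /proc/net/unix | head
```

On the Android device in question, this would make any tab- or space-padded names in the last column stand out immediately.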
|
On my Android device there is a file called /proc/net/unix whose content does not conform to that of any standard Linux distribution (which shows the unix domain sockets.) First few lines:
Num RefCount Protocol Flags Type St Inode Path
00000000: 00000002 00000000 00000000 0002 01 5287581 /data/misc/wifi/sockets/wpa_ctrl_789-3189
00000000: 00000003 00000000 00000000 0001 03 6402 /dev/socket/qmux_radio/qmux_client_socket 297
00000000: 00000002 00000000 00010000 0001 01 7180 /dev/.secure_storage/ssd_socket
00000000: 00000002 00000000 00010000 0001 01 6424 /dev/socket/cnd
00000000: 00000002 00000000 00010000 0001 01 6400 @QMulticlient
...(1) What does these different columns stand for?EDIT: Ok I've found this:Here 'Num' is the kernel table slot number, 'RefCount' is the number of users of the socket, 'Protocol' is currently always 0, 'Flags' represent the internal kernel flags holding the status of the socket. Currently, type is always '1' (Unix domain data-gram sockets are not yet supported in the kernel). 'St' is the internal state of the socket and Path is the bound path (if any) of the socket.However, that is not up-to-date as we have a type and not clarifying what "internal state" means.
(2) Also at the end of the path, there is sometimes an additional number without its own column name. What is that?
In addition, where in the kernel source code could I expect to find where this is created?
EDIT: 2016-04-27 (Resolved)
Thanks to the answer below, I've confirmed through lsof | grep qmux that the number in the last column for qmux_client_socket items is the PID of the process using it.
| What is the meaning of the contents of /proc/net/unix? |
You should be able to do this using socat and the ProxyCommand option for ssh. ProxyCommand configures the ssh client to use a proxy process for communicating with your server. socat establishes two-way communication between STDIN/STDOUT (the ssh client's side) and your UNIX socket.
ssh -o "ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock" foo
|
Short question:
How do I connect to a local unix socket (~/test.sock) via ssh? This socket forwards to an actual ssh server. The obvious does not work and I can't find any documentation:
public> ssh /home/username/test.sock
"ssh: Could not resolve hostname: /home/username/test.sock: Name or service not known"

Long Question:
The problem I am trying to solve is to connect from my (public) university server to my (local) PC, which is behind NAT and not publicly visible.
The canonical solution is to create a ssh proxy/tunnel to local on public:
local> ssh -NR 2222:localhost:22 public

But this is not possible, as the administration prohibits opening ports.
So I have thought about using UNIX socket instead, which works:
local> ssh -NR /home/username/test.sock:localhost:22 public

But now, how can I connect to it with ssh?
| SSH connect to a UNIX socket instead of hostname |
Connecting to a DBus daemon listening on an abstract Unix socket in a different network namespace is not possible. Such addresses can be identified in ss -x via an address that contains a @:
u_str ESTAB 0 0 @/tmp/dbus-t00hzZWBDm 11204746 * 11210618

As a workaround, you can create a non-abstract Unix or IP socket which proxies to the abstract Unix socket. This is to be done outside the network namespace. From within the network namespace, you can then connect to that address. E.g. assuming the above abstract socket address, run this outside the namespace:
socat UNIX-LISTEN:/tmp/whatever,fork ABSTRACT-CONNECT:/tmp/dbus-t00hzZWBDm

Then from within the namespace you can connect by setting this environment variable:
DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/whatever
|
I am using network namespaces such that I can capture network traffic of a single process. The namespace is connected through the "host" via a veth pair and has network connectivity through NAT. So far this works for IP traffic and named Unix domain sockets.
A problem arises when a program needs to communicate with the D-Bus session bus. The D-Bus daemon listens on an abstract socket as specified with this environment variable:
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-jIB6oAy5ea,guid=04506c9a7f54e75c0b617a6c54e9b63a

It appears that the abstract Unix domain socket namespace is different inside the network namespace. Is there a way to get access to this D-Bus session from the network namespace?
| Connect with D-Bus in a network namespace |
The default is not configurable, but it differs between 32-bit and 64-bit Linux. The value appears to be written so as to allow 256 packets of 256 bytes each, accounting for the different per-packet overhead (structs with 32-bit vs. 64-bit pointers or integers).
On 64-bit Linux 4.14.18: 212992 bytes
On 32-bit Linux 4.4.92: 163840 bytes
The default buffer sizes are the same for both the read and write buffers. The per-packet overhead is a combination of struct sk_buff and struct skb_shared_info, so it depends on the exact size of these structures (rounded up slightly for alignment). E.g. in the 64-bit kernel above, the overhead is 576 bytes per packet.
http://elixir.free-electrons.com/linux/v4.5/source/net/core/sock.c#L265
/* Take into consideration the size of the struct sk_buff overhead in the
* determination of these values, since that is non-constant across
* platforms. This makes socket queueing behavior and performance
* not depend upon such differences.
*/
#define _SK_MEM_PACKETS 256
#define _SK_MEM_OVERHEAD SKB_TRUESIZE(256)
#define SK_WMEM_MAX (_SK_MEM_OVERHEAD * _SK_MEM_PACKETS)
#define SK_RMEM_MAX (_SK_MEM_OVERHEAD * _SK_MEM_PACKETS)

/* Run time adjustable parameters. */
__u32 sysctl_wmem_max __read_mostly = SK_WMEM_MAX;
EXPORT_SYMBOL(sysctl_wmem_max);
__u32 sysctl_rmem_max __read_mostly = SK_RMEM_MAX;
EXPORT_SYMBOL(sysctl_rmem_max);
__u32 sysctl_wmem_default __read_mostly = SK_WMEM_MAX;
__u32 sysctl_rmem_default __read_mostly = SK_RMEM_MAX;

Interestingly, if you set a non-default socket buffer size, Linux doubles it to provide for the overheads. This means that if you send smaller packets (e.g. less than the 576 bytes above), you won't be able to fit as many bytes of user data in the buffer as you had specified for its size.
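The doubling is easy to observe from userspace; a minimal sketch on a Unix socket (the 4096-byte request is an arbitrary example):

```python
import socket

# Ask the kernel for a 4096-byte send buffer, then read the value back:
# Linux stores double the requested size to account for bookkeeping overhead.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)  # 8192 on Linux
```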
|
Linux documents the default buffer size for tcp, but not for AF_UNIX ("local") sockets. The value can be read (or written) at runtime.
cat /proc/sys/net/core/[rw]mem_default

Is this value always set the same across different Linux kernels, or is there a range of possible values it could be?
| What values may Linux use for the default unix socket buffer size? |
Short answer, you can control this with a command line flag: -o 'StreamLocalBindUnlink=yes'
Long answer: See ssh_config(5):
StreamLocalBindUnlink
Specifies whether to remove an existing Unix-domain socket file for local or
remote port forwarding before creating a new one. If the socket file already
exists and StreamLocalBindUnlink is not enabled, ssh will be unable to forward
the port to the Unix-domain socket file. This option is only used for port for‐
warding to a Unix-domain socket file. The argument must be yes or no (the default). |
I have a local unix socket tunneled to another unix socket on a remote instance over SSH:
ssh -N -L $HOME/my.sock:/var/run/another.sock

However, when I terminate ssh gracefully (i.e. Ctrl+C or SIGTERM), the $HOME/my.sock remains. It looks like this is not cleaned up properly. Is there an option/flag for this?
This is problematic because if I run the command a second time, it fails due to the existing socket file. (I can't see a "reuse" flag/option either that would overwrite the existing socket file.) And I'd much rather not add an rm -f $HOME/my.sock.
| OpenSSH not cleaning up the domain socket upon termination |
That is a very short test line. Try something larger than the buffer size used by either netcat or socat, and send that string multiple times from the multiple test instances; here's a sender program that does that:
#!/usr/bin/env expect

package require Tcl 8.5

set socket [lindex $argv 0]
set character [string index [lindex $argv 1] 0]
set length [lindex $argv 2]
set repeat [lindex $argv 3]

set fh [open "| socat - UNIX-CONNECT:$socket" w]
# avoid TCL buffering screwing with our results
chan configure $fh -buffering none

set teststr [string repeat $character $length]

while {$repeat > 0} {
puts -nonewline $fh $teststr
incr repeat -1
}

And then a launcher to call that a bunch of times (25) using different test characters of great length (9999) a bunch of times (100) to hopefully blow well past any buffer boundary:
#!/bin/sh

# NOTE this is a very bad idea on a shared system
SOCKET=/tmp/blabla

for char in a b c d e f g h i j k l m n o p q r s t u v w x y; do
./sender -- "$SOCKET" "$char" 9999 100 &
done

wait

Hmm, I don't have a netcat; hopefully nc on CentOS 7 will suffice:
$ nc -klU /tmp/blabla > /tmp/out

And then elsewhere we feed data to that:
$ ./launcher

Now our /tmp/out will be awkward, as there are no newlines (some things buffer based on newline, so newlines can influence test results if that is the case; see setbuf(3) for the potential for line-based buffering), so we need code that looks for a change of character and counts how long the previous sequence of identical characters was.
#include <stdio.h>

int main(int argc, char *argv[])
{
int current, previous;
unsigned long count = 1;

previous = getchar();
if (previous == EOF) return 1;

while ((current = getchar()) != EOF) {
if (current != previous) {
printf("%lu %c\n", count, previous);
count = 0;
previous = current;
}
count++;
}
printf("%lu %c\n", count, previous);
return 0;
}

Oh boy C! Let's compile and parse our output...
$ make parse
cc parse.c -o parse
$ ./parse < /tmp/out | head
49152 b
475136 a
57344 b
106496 a
49152 b
49152 a
38189 r
57344 b
57344 a
49152 b
$

Uh-oh. That don't look right. 9999 * 100 should be 999,900 of a single letter in a row, and instead we got...not that. a and b got started early, but it looks like r somehow got some early shots in. That's job scheduling for you. In other words, the output is corrupt. How about near the end of the file?
$ ./parse < /tmp/out | tail
8192 l
8192 v
476 d
476 g
8192 l
8192 v
8192 l
8192 v
476 l
16860 v
$ echo $((9999 * 100 / 8192))
122
$ echo $((9999 * 100 - 8192 * 122))
476
$

Looks like 8192 is the buffer size on this system. Anyways! Your test input was too short to run past buffer lengths, and gives a false impression that multiple client writes are okay. Increase the amount of data from clients and you will see mixed and therefore corrupt output.
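For reference, the same run-length logic as the C parser above, sketched in Python:

```python
# Collapse a stream into (count, character) runs, mirroring the C parser:
# corrupt interleaving then shows up as runs shorter than one full payload.
def runs(stream):
    out = []
    prev, count = None, 0
    for ch in stream:
        if prev is not None and ch != prev:
            out.append((count, prev))
            count = 0
        prev = ch
        count += 1
    if prev is not None:
        out.append((count, prev))
    return out

print(runs("aaabba"))
```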
|
Is it OK for two or more processes to concurrently read/write to the same unix socket?
I've done some testing.
Here's my sock_test.sh, which spawns 50 clients each of which concurrently write 5K messages:
#! /bin/bash --

SOC='/tmp/tst.socket'

test_fn() {
soc=$1
txt=$2
for x in {1..5000}; do
echo "${txt}" | socat - UNIX-CONNECT:"${soc}"
done
}

for x in {01..50}; do
test_fn "${SOC}" "Test_${x}" &
done

I then create a unix socket and capture all traffic to the file sock_test.txt:
# netcat -klU /tmp/tst.socket | tee ./sock_test.txt

Finally I run my test script (sock_test.sh) and monitor on the screen all 50 workers doing their job. At the end I check whether all messages have reached their destination:
# ./sock_test.sh
# sort ./sock_test.txt | uniq -c

To my surprise there were no errors and all 50 workers had successfully sent all 5K messages.
I suppose I must conclude that simultaneous writing to unix sockets is OK?
Was my concurrency level too low to see collisions?
Is there something wrong with my test method? How then do I test it properly?
EDIT
Following the excellent answer to this question, for those more familiar with Python, here's my test bench:
#! /usr/bin/python3 -u
# coding: utf-8

import socket
from concurrent import futures

pow_of_two = ['B', 'KB', 'MB', 'GB', 'TB']
bytes_dict = {x: 1024**pow_of_two.index(x) for x in pow_of_two}

SOC = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
SOC.connect('/tmp/tst.socket')

def write_buffer(
        char: 'default is a' = 'a',
        sock: 'default is /tmp/tst.socket' = SOC,
        step: 'default is 8KB' = 8 * bytes_dict['KB'],
        last: 'default is 2MB' = 2 * bytes_dict['MB']):

    print('## Dumping to the socket: {0}'.format(sock))
    while True:
        in_memory = bytearray([ord(char) for x in range(step)])
        msg = 'Dumping {0} bytes of {1}'
        print(msg.format(step, char))
        sock.sendall(bytes(str(step), 'utf8') + in_memory)
        step += step
        if last % step >= last:
            break

def workers(concurrency=5):
    chars = concurrency * ['a', 'b', 'c', 'd']
    with futures.ThreadPoolExecutor() as executor:
        for c in chars:
            executor.submit(write_buffer, c)

def parser(chars, file='./sock_test.txt'):
    with open(file=file, mode='rt', buffering=8192) as f:
        digits = set(str(d) for d in range(0, 10))
        def is_digit(d):
            return d in digits
        def printer(char, size, found, junk):
            msg = 'Checking {}, Expected {:8s}, Found {:8s}, Junk {:8s}, Does Match: {}'
            print(msg.format(char, size, str(found), str(junk), size == str(found)))
        char, size, found, junk = '', '', 0, 0
        prev = None
        for x in f.read():
            if is_digit(x):
                if not is_digit(prev) and prev is not None:
                    printer(char, size, found, junk)
                    size = x
                else:
                    size += x
            else:
                if is_digit(prev):
                    char, found, junk = x, 1, 0
                else:
                    if x == char:
                        found += 1
                    else:
                        junk += 1
            prev = x
        else:
            printer(char, size, found, junk)

if __name__ == "__main__":
    workers()
    parser(['a', 'b', 'c', 'd'])

Then in the output you may observe lines like the following:
Checking b, Expected 131072 , Found 131072 , Junk 0 , Does Match: True
Checking d, Expected 262144 , Found 262144 , Junk 0 , Does Match: True
Checking b, Expected 524288 , Found 219258 , Junk 0 , Does Match: False
Checking d, Expected 524288 , Found 219258 , Junk 0 , Does Match: False
Checking c, Expected 8192 , Found 8192 , Junk 0 , Does Match: True
Checking c, Expected 16384 , Found 16384 , Junk 0 , Does Match: True
Checking c, Expected 32768 , Found 32768 , Junk 610060 , Does Match: True
Checking c, Expected 524288 , Found 524288 , Junk 0 , Does Match: True
Checking b, Expected 262144 , Found 262144 , Junk 0 , Does Match: TrueYou can see that payload in some cases (b, d) is incomplete, however missing fragments are received later (c). Simple math proves it:
# Expected
b + d = 524288 + 524288 = 1048576
# Found b,d + extra fragment on the other check on c
b + d + c = 219258 + 219258 + 610060 = 1048576

Therefore simultaneous writing to unix sockets is NOT OK.
| Concurrently reading/writing to the same unix socket? |
Yes
It is easy to serve it.
No
But harder to get the client to use it.
An alternative
However because you told me why you are doing it, I have another solution.
You want several web-servers to serve to only the local machine, but not have conflicts of port. It may also be nice if they all used the same port number.
Loopback addresses are 127.0.0.0/8. That is 127.x.x.x, not just 127.0.0.1.
Therefore use a different IP address for each server. E.g. 127.0.0.2, 127.0.0.3 ...
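A minimal sketch showing that two servers can share a port number on different loopback addresses (the port here is whatever the kernel hands out for the first socket):

```python
import socket

# The whole 127.0.0.0/8 block is local on Linux, so two listeners can
# share one port number by binding different loopback addresses.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(("127.0.0.2", 0))        # kernel picks a free port
port = s2.getsockname()[1]

s3 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s3.bind(("127.0.0.3", port))     # same port, different loopback address
s2.listen(1)
s3.listen(1)
print(s2.getsockname(), s3.getsockname())
```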
|
Is there a way to serve a webpage from a locally running tcp server listening on a unix domain socket instead of localhost:<port>?
something like:
file:///tmp/webpage.sock

My only real motivation is to avoid port conflicts in the 2000-5000 range.
| Display webpage with unix domain socket |
Unix sockets are reliable. If the reader doesn't read, the writer blocks. If the socket is a datagram socket, each write is paired with a read. If the socket is a stream socket, the kernel may buffer some bytes between the writer and the reader, but when the buffer is full, the writer will block. Data is never discarded, except for buffered data if the reader closes the connection before reading the buffer.
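A small sketch of the datagram case: with a non-blocking sender, the moment the queue is full the kernel reports EAGAIN instead of discarding anything (in blocking mode the same send would simply block):

```python
import socket

# Fill a datagram socket's queue with no reader draining it. The full
# queue surfaces as BlockingIOError (EAGAIN); nothing is ever discarded.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
a.setblocking(False)
sent = 0
try:
    while True:
        a.send(b"x" * 1024)
        sent += 1
except BlockingIOError:
    pass
print("queue filled after", sent, "datagrams")
```

Reading from the other end then drains the queue datagram by datagram, with every byte intact.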
|
When you create a UNIX socket using socat and send data to it, but do not have another socat instance connecting to that socket, what will happen then?
What happens if you write massive amounts of data to a UNIX socket and never read it? Is there a buffer that overflows? Is it ring-buffered?
| Do UNIX Domain Sockets Overflow? |
There's nothing wrong with creating the socket in a dotfile or dotdir in the home directory of the user, if the user is not some kind of special, system user. The only problem would be with the home directory shared between multiple machines over NFS, but that could be easily worked around by including the hostname in the name of the socket. On Linux/Ubuntu you could also use "abstract" Unix domain sockets, which don't use any path or inode in the filesystem. Abstract Unix sockets are those whose address/path starts with a NUL byte:

abstract: an abstract socket address is distinguished (from a pathname socket) by the fact that sun_path[0] is a null byte ('\0'). The socket's address in this namespace is given by the additional bytes in sun_path that are covered by the specified length of the address structure. (Null bytes in the name have no special significance.) The name has no connection with filesystem pathnames. When the address of an abstract socket is returned, the returned addrlen is greater than sizeof(sa_family_t) (i.e., greater than 2), and the name of the socket is contained in the first (addrlen - sizeof(sa_family_t)) bytes of sun_path.

When displayed for or entered by the user, the NUL bytes in an abstract Unix socket address are usually replaced with @s. Many programs get that horribly wrong, as they don't escape regular @s in any way and/or assume that only the first byte could be NUL.
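A quick sketch of the abstract namespace in action (the name below is arbitrary; the leading NUL byte is what makes the address abstract, so no filesystem entry is created and no unlink is needed):

```python
import socket

# Bind and connect through the abstract namespace: the address is a byte
# string starting with NUL, and never touches the filesystem.
name = b"\0demo-abstract-sock"
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(name)           # no socket file appears anywhere
srv.listen(1)

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(name)        # anybody who knows the name can connect
conn, _ = srv.accept()
cli.sendall(b"hi")
data = conn.recv(2)
print(data)
```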
Unlike regular Unix socket paths, abstract Unix socket names have different semantics, as anybody can bind to them (if the name is not already taken), and anybody can connect to them.
Instead of relying on file/directory permission to restrict who can connect to your socket, and assuming that eg. only root could create sockets inside some directory, you should check the peer's credential with getsockopt(SO_PEERCRED) (to get the uid/pid of who connected or bound the peer), or the SCM_CREDENTIALS ancillary message (the get the uid/pid of who sent a message over the socket).
This (replacing the usual file permission checks) is also the only sane use of SO_PEERCRED/SCM_CREDENTIALS IMHO.
|
I work on an application that uses a Unix domain socket for IPC. The common way, as far as I know, is to place the socket file inside /var/run. I work with Ubuntu 18.04 and I see that /var/run is a symlink to /run. Unfortunately the folder is writable by root only:
ls -Al /
drwxr-xr-x 27 root root 800 Apr 12 17:39 run

So only root has write access to this folder, which makes it impossible to place Unix domain sockets there as a regular user.
First of all, I can't understand why. And how should Unix domain sockets be used by non-root users? I can use the home folder of course, but I would prefer some correct and common method.
| Unix domain sockets for non-root user |
In 2013, trqauthd stopped using IP sockets and switched to a Unix domain socket in the server's home directory.
Later that same year, trqauthd switched from the home directory to /tmp.
As you can see, the only option that Adaptive Computing has given to you for altering /tmp/trqauthd-unix is to re-compile the programs from source, changing the --with-trqauthd-sock-dir build configuration option to denote somewhere other than /tmp. (/run/trqauthd perhaps?)
|
We have a webserver that performs scientific calculations submitted by users. The calculations can be long-running, so we use The Torque resource manager (aka pbs_server) to distribute/schedule them on a handful of compute nodes. Torque makes use of a unix domain socket in the /tmp directory for communication but the http server (and processes forked from it) can't access the true /tmp directory, so to those processes, the socket appears to be missing, resulting in an error.
The Details:

The webserver is running Apache, which runs as a service with the systemd property PrivateTmp=true set. This causes the service to have its own /tmp directory unrelated to the "true" root /tmp.
The jobs are actually submitted from PHP (running in the Apache process). PHP makes a system call to qsub, which is a Torque command to submit a job. Because qsub is called from PHP, it inherits the "fake" /tmp directory from Apache.
qsub internally attempts to connect to the unix socket located at /tmp/trqauthd-unix. But since it doesn't see the real /tmp directory, it fails with the following error:

Error in connection to trqauthd (15137)-[could not connect to unix socket /tmp/trqauthd-unix: 2]

The only solution I could achieve was to edit the httpd.service file under systemd and change PrivateTmp to false. This DID fix the problem. But I'd rather not do this because (I assume) PrivateTmp was set to true for good reason.
What I want to know is whether there is any way to have the socket created in a different location or to somehow make a link to the socket that could be used from within Apache (and its forked processes).
Creating a link to the socket is trivial, but it doesn't solve the problem because I don't know of any way to configure qsub to look for the socket in a different location.
Note that the socket is created by the trqauthd service (a Torque program that performs user authorization for running jobs). The documentation for trqauthd mentions (in an obscure note) that the location of the socket can be configured, but there is no indication in any of the documentation about how that can be achieved (and more importantly, how to let qsub and other commands know about the new location).
Thanks for any suggestions that might help me find a way to submit jobs to Torque from PHP without disabling PrivateTmp for Apache.
| How can a service with PrivateTmp=true access a unix socket in the /tmp directory (e.g. to submit Torque jobs from PHP running in Apache) |
In short, have your library loaded by LD_PRELOAD override syslog(3) rather than connect(3).
The /dev/log Unix socket is used by the syslog(3) glibc function, which connects to it and writes to it. Overriding connect(3) probably doesn't work because the syslog(3) implementation inside glibc will execute the connect(2) system call rather than the library function, so an LD_PRELOAD hook will not trap the call from within syslog(3).
There's a disconnect between strace, which shows you syscalls, and LD_PRELOAD, which can override library functions (in this case, functions from glibc.) The fact that there's a connect(3) glibc function and also a connect(2) system call also helps with this confusion. (It's possible that using ltrace would have helped here, showing calls to syslog(3) instead.)
You can probably confirm that overriding connect(3) in LD_PRELOAD as you're doing won't work with syslog(3) by having your test program call syslog(3) directly rather than explicitly connecting to /dev/log, which I suspect is how the .NET Core application is behaving.
Hooking into syslog(3) is also potentially more useful, because being at a higher level in the stack, you can use that hook to make decisions such as selectively forwarding some of the messages to syslog. (You can load the syslog function from glibc with dlsym(RTLD_NEXT, "syslog"), and then you can use that function pointer to call syslog(3) for the messages you do want to forward from your hook.)
The approach of replacing /dev/log with a symlink to /dev/null is flawed in that /dev/null will not accept a connect(2) operation (only file operations such as open(2)), so syslog(3) will try to connect and get an error and somehow try to handle it (or maybe return it to the caller), in any case, this might have side effects.
Hopefully using an LD_PRELOAD override of syslog(3) is all you need here.
|
I am using a third party .NET Core application (a binary distribution used by a VS Code extension) that unfortunately has diagnostic logging enabled with no apparent way to disable it (I did already report this to the authors). The ideal solution (beside being able to disable it), would be if I could specify to systemd that it should not log anything for that particular program, but I have been unable to find any way to do so. Here is everything I tried so far:
The first thing I tried was to redirect stdout and stderr to /dev/null: dotnet-app > /dev/null 2>&1. This indeed disabled any of the normal output, but the diagnostic logging was still being written to the systemd journal.
I hoped that the application had a command line argument that allowed me to disable the diagnostic logging. It did have a verbosity argument, but after experimenting with, it only seemed to have effect on the normal output, not the diagnostic logging.
By using strace and looking for calls to connect, I found out that the application instead wrote the diagnostic logging directly to /dev/log.
The path /dev/log is a symlink to /run/systemd/journal/dev-log, so to verify my finding, I changed the symlink to point to /dev/null instead. This indeed stopped the diagnostic logging from showing up in the systemd journal.
I was told about LD_PRELOAD and made a library that replaced the standard connect with my own version that returned an error in the case it tried to connect to /dev/log. This worked correctly in my test program, but failed with the .NET Core application, failing with connect ENOENT /tmp/CoreFxPipe_1ddf2df2725f40a68990c92cb4d1ff1e. I experimented with my library, but even if all I did was directly pass the arguments to the standard connect function, it would still fail with the same error.
I then tried using Linux namespaces to make it so that /dev/log would point to /dev/null only for the .NET Core application: unshare --map-root-user --mount sh -c "mount --bind /dev/null /dev/log; dotnet-app $@". This too failed with the same error, even though it again worked for my test program. Even just using unshare --map-root-user --mount dotnet-app "$@" would fail with the error.
Next I tried using gdb to close the file descriptor to /dev/log while the application was running. This worked, but it reopens it after some time has passed. I also tried changing the file descriptor to point to /dev/null, which also worked, but it too was reset to /dev/log after some time.
My last attempt was to write my own UNIX socket that would filter out all written to it by the .NET Core application. That actually worked, but I learned that the PID is send along with what is written to UNIX sockets, so everything passed along to the systemd journal would report coming from the PID of the program backing my UNIX socket.
For now this is solution is acceptable for me, because on my system almost nothing uses /dev/log, but I would welcome a better solution. For example, I read that it was possible to spoof certain things as root for UNIX sockets, but I was unable to find out more about it.
Or if someone might have any insights on why both LD_PRELOAD and unshare might fail for the .NET Core application, while they work fine for a simple C test program that writes to /dev/log?
| How to prevent a process from writing to the systemd journal? |
I figured out a way. On Linux the ss program is kind of like netstat on steroids - it provides much more information, including the amount of data pending in receive buffers for AF_UNIX sockets. I like ss -ax for my purposes. Man page: http://man7.org/linux/man-pages/man8/ss.8.html
See also my answer here:
How to see the amount of pending data on a unix domain socket?
Also: Intro to SS (some details not found on man page - particularly filters)
|
netstat -a reports Recv-Q (amount of unread data pending for a reading application) for AF_INET sockets, but not AF_UNIX sockets (at least not for SOCK_DGRAM).
Does anybody know a way to obtain this information for AF_UNIX sockets from outside of the process itself?
Barring reporting the amount, is there a way to tell if there is any unread data pending for the application?
| How to report receive queue size for AF_UNIX sockets |
The man page for ssh offers two complementary options: -R for remote forwarding to local, and -L for local forwarding to remote.
In your case just use -L instead of -R.
|
I am trying to have a local Unix domain socket, say, ~/docker.sock. I want it to proxy everything to a remote Unix domain socket running elsewhere over SSH. (You can find a diagram of what I’m trying to do below).
OpenSSH supports this (an example here). For instance, this command will proxy MySQL client connections on a remote server to my local instance:
ssh -R/var/run/mysql.sock:/var/run/mysql.sock -R127.0.0.1:3306:/var/run/mysql.sock somehostBut this is not how I want it to be like. It forwards the traffic that comes to the remote socket to my local socket (I want it the other way). | Proxying traffic on local unix to a remote unix socket over SSH |
tl;dr; As of 2020, you cannot do that (or anything similar) if /proc/<pid>/fd/<fd> is a socket.
The stdin, stdout, stderr of a process may be any kind of file, not necessarily pipes, regular files, etc. They can also be sockets.
On Linux, the /proc/<pid>/fd/<fd> are a special kind of symbolic links which allow you to open from the scratch the actual file a file descriptor refers to, and do it even if the file has been removed, or it didn't ever have any presence in any file system at all (e.g. a file created with memfd_create(2)).
But sockets are a notable exception, and they cannot be opened that way (nor is it obvious at all how that could be implemented: would an open() on /proc/<pid>/fd/<fd> create another connection to the server if that fd is a connected socket? what if the socket is explicitly bound to a local port?).
Recent versions of Linux kernels have introduced a new system call, pidfd_getfd(2), which allows you to "steal" a file descriptor from another process, in the same way you were able to pass it via Unix sockets, but without the collaboration of the victim process. But that hasn't yet made its way in most Linux distros.
|
I'm trying to access a process' stdio streams from outside its parent process. I've found the /proc/[pid]/fd directory, but when I try
$ cat /proc/[pid]/fd/1

I get a No such file or device error. I know for certain that it exists, as Dolphin (file explorer) shows it.
I also happened to notice the file explorer lists it as a socket and trying to read from it as suggested here produces a similar error. This appeared odd to me as stdio streams are typically pipes, rather than sockets, so I'm not sure what's up here.
I'd like to point out also that the processes are started by the same user and attempting to access it with sudo didn't work either. I apologise if this question appears noobish, but I'd sincerely appreciate some guidance - perhaps there's a better way of accessing the stdio pipes?
| /proc/[pid]/fd/[0, 1, 2]: No such file or device - even though file exists |
To find the file associated with the UNIX socket, you can use the +E flag for lsof to show the endpoint of the socket. From the man pages:

+|-E    +E specifies that Linux pipe, Linux UNIX socket and Linux pseudoterminal files should be displayed with endpoint information and the files of the endpoints should also be displayed.
# lsof -d 6 -U -a +E -p $(pgrep top)
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dbus-daem 874 messagebus 12u unix 0xffff9545f6fee400 0t0 366381191 /var/run/dbus/system_bus_socket type=STREAM ->INO=366379599 25127,top,6u
top 25127 root 6u unix 0xffff9545f6fefc00 0t0 366379599 type=STREAM ->INO=366381191 874,dbus-daem,12uThe -U flag for lsof shows only Unix socket files.
Notice that you will only see the name of the socket file for the listening processes. The other process will not show the name of the unix socket file, but with +E lsof will show the inode of the listening socket file, and will also add a line for the process listening to this socket (along with the socket file name).
In this example notice that we only asked lsof to show the file descriptors of top command, but lsof added another line for dbus-daem - which is the listening process, and the socket file it listens to is /var/run/dbus/system_bus_socket.Pid 25127 (inode 366379599) interacts with inode 366381191
(type=STREAM ->INO=366381191 874,dbus-daem,12u)
Inode 366381191 belong to pid 874, and you can see this process has the fd that is the listening side for the second process (/var/run/dbus/system_bus_socket type=STREAM ->INO=366379599 25127,top,6u), and there you can see that the socket file name is /var/run/dbus/system_bus_socket.Also, how can I interact with it?Now that you have the filename of the UNIX socket, you can interact with it in various ways, such as:
socat - UNIX-CONNECT:/run/dbus/system_bus_socket
nc -U /run/dbus/system_bus_socketFor additional information:
How can I communicate with a Unix domain socket via the shell on Debian Squeeze?
|
I found a Unix socket being used in the output of the lsof command:
COMMAND PID TID TASKCMD USER FD TYPE DEVICE SIZE/OFF NODE NAME
screen 110970 username 4u unix 0xffff91fe3134c400 0t0 19075659 socketThe "DEVICE" column holds what looks like a memory address.
According to the lsof man page:
DEVICE contains the device numbers, separated by commas, for a character special, block special, regular, directory or NFS file; or ``memory'' for a memory file system node under Tru64 UNIX; or the address of the private data area of a Solaris socket stream; or a kernel reference address that identifies the file (The kernel reference address may be used for FIFO's, for example.); or the base address or device name of a Linux AX.25 socket device. Usually only the lower thirty two bits of Tru64 UNIX kernel addresses are displayed.My question is, which of these am I looking at with the value 0xffff91fe3134c400?
Also, how can I interact with it? I know I can use netcat to connect to a Unix domain socket, but from reading examples online it looks like you have to specify a file.
| Interacting with Unix Socket found in lsof |
Question #1
Q1: From the ss man page I can't find out what e.g. * 8567674 without a file path means.
The docs explain the Address:Port column like so:
excerpt
The format and semantics of ADDRESS_PATTERN depends on address family.
inet - ADDRESS_PATTERN consists of IP prefix, optionally followed by colon and port. If prefix or port part is absent or replaced with *, this means wildcard match.
inet6 - The same as inet, only prefix refers to an IPv6 address. Unlike inet colon becomes ambiguous, so that ss allows to use scheme, like used in URLs, where address is suppounded with [ ... ].
unix - ADDRESS_PATTERN is shell-style wildcard.
packet - format looks like inet, only interface index stays instead of port and link layer protocol id instead of address.
netlink - format looks like inet, only socket pid stays instead of port and netlink channel instead of address.PORT is syntactically ADDRESS_PATTERN with wildcard address part. Certainly, it is undefined for UNIX sockets.The last sentence is your answer.
Question #2
Q2: Why is there no file path to the unix socket in some cases?
See this SO Q&A titled: How to use unix domain socket without creating a socket file.
excerpt
You can create a unix domain socket with an "abstract socket address". Simply make the first character of the sun_path string in the sockaddr_un you pass to bind be '\0'. After this initial NUL, write a string to the remainder of sun_path and pad it out to UNIX_PATH_MAX with NULs (or anything else).
Sockets created this way will not have any filesystem entry, ....
Question #3
Q3: How can I sniff a unix DGRAM socket through socat without having a file path?
Again, more googling once you know what things are called: the socat docs.
excerpt
ABSTRACT-LISTEN:
ABSTRACT-SENDTO:
ABSTRACT-RECVFROM:
ABSTRACT-RECV:
ABSTRACT-CLIENT:
The ABSTRACT addresses are almost identical to the related UNIX addresses except that they do not address file system based sockets
but an alternate UNIX domain address space. To archive this the socket
address strings are prefixed with "\0" internally. This feature is
available (only?) on Linux. Option groups are the same as with the
related UNIX addresses, except that the ABSTRACT addresses are not
member of the NAMED group. |
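For completeness, here is a minimal Python sketch of such a pathless socket. The name demo-abstract-sniff is made up for this example; because it begins with a NUL byte, no file ever appears in the file system, which is exactly why ss prints only * and an inode number for it:

```python
# Sketch: a DGRAM socket in the abstract namespace (Linux only).
# The leading NUL byte puts the name in the abstract namespace, so no
# file system entry is ever created.
import socket

name = b"\0demo-abstract-sniff"      # made-up abstract name
rx = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
rx.bind(name)                        # no file is created anywhere

tx = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
tx.sendto(b"ping", name)             # socat side would use ABSTRACT-SENDTO:demo-abstract-sniff
msg = rx.recv(16)
print(msg.decode())                  # -> ping
rx.close()
tx.close()                           # the address vanishes with the socket
```

Note that the address disappears as soon as the last socket bound to it is closed; there is nothing to unlink().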
From that article, I realized that:
a UNIX domain socket is bound to a file path.
So, I need to sniff a DGRAM Unix socket through socat as mentioned here. But when I try to retrieve the path for this purpose, I find that the target application uses a socket without a file path.
The ss -apex command shows results both with and without file paths, e.g.:
u_dgr UNCONN 0 0 /var/lib/samba/private/msg.sock/32222 1345285 * 0 users:(("nmbd",pid=32222,fd=7))
u_dgr UNCONN 0 0 * 8567674 * 0 users:(("gnome-shell",pid=16368,fd=23))
From the ss man page I can't find out what e.g. * 8567674 without a file path means.
So, two questions:
Why is there no file path to the unix socket in some cases?
How can I sniff a unix DGRAM socket through socat without having a file path? | How can I sniff unix dgram socket without having file path?
It is not difficult to do the tasks you ask about using the Python libtmux library.
E.g. if you start a new server with session name foo
tmux new-session -s foo
you can attach to it via libtmux (assuming the python library is installed) from ipython via
import libtmux
server = libtmux.Server()
session = server.find_where({ "session_name": "foo" })
Then you can watch the effect of commands in your tmux window, e.g.
session.cmd("send-keys","x")
will send a keystroke "x". The pane list you asked for can be queried via
session.cmd("list-panes").stdout
and you can switch to a specific window (say nr. 1) with
session.cmd("select-window","-t","1").stdout
You do not have to read the source code of tmux to learn this. All these commands are documented in the tmux man page. If this is not sufficient for you, you need to be more specific about what you mean by libtmux being "lacking in some way".
|
Is there any way I can control a tmux server and send commands to it like switching to a specific window in a session, or make some queries about the panes through the socket it creates?
I've looked into libtmux for python and it appears to be lacking in some ways. Is there an official reference for the tmux api where I could look? The official tmux package on my distro contains only a single tmux binary.
Is there any way other than reading the source to find out how one can control tmux through its socket?
Are there any other terminal multiplexers which make it easy/ are intended to make it easy?
| tmux socket api |
Abstract sockets
Their name starts with a NUL byte ('\0'), so they have a path length of 0 and are not represented in the file system. The remaining 107 bytes of sun_path can hold a unique identifier, which other programs can use to connect.
Most Unix systems come with the lsof (list open files) command. If not, you can easily add it.
lsof -U
upowerd 1604 root 5u unix 0xffff88005af5f400 0t0 18631 type=STREAM
colord 1614 colord 10u unix 0xffff880034d3f400 0t0 18170 type=STREAM
systemd 2009 root 13u unix 0xffff88005a293000 0t0 21213 /run/user/0/systemd/notify type=DGRAM
systemd 2009 root 14u unix 0xffff88005a293c00 0t0 21214 /run/user/0/systemd/private type=STREAM
On Linux, when showing abstract namespace paths, null bytes are converted to @. Older tool versions may not handle zero bytes properly:
upstart 1525 lightdm 7u unix 0xffff880034b99800 0t0 17301 @/com/ubuntu/upstart-session/111/1525 type=STREAM
You'll be able to list all the unix domain sockets on your system.
The ss command can also show sockets, including abstract sockets; again, abstract sockets will be prefixed with @.
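The same listing can be scripted: ss and netstat ultimately read /proc/net/unix, where abstract names appear with a leading @ (the rendered NUL byte). A small Python sketch, binding a made-up abstract name demo-listing and then pulling all abstract names out of that file (Linux only):

```python
# Sketch: bind one abstract socket, then list all abstract socket names
# straight from /proc/net/unix. Abstract names are shown with a leading '@'.
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(b"\0demo-listing")            # abstract name, no file created

abstract = []
with open("/proc/net/unix") as f:
    next(f)                          # skip the header line
    for line in f:
        fields = line.split()
        # the Path column is the 8th field, present only for named sockets
        if len(fields) > 7 and fields[7].startswith("@"):
            abstract.append(fields[7])

print("@demo-listing" in abstract)   # -> True
s.close()
```

This avoids depending on a particular lsof or ss version handling the NUL byte correctly.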
Good Luck!
|
Is there a command or system call for listing all the abstract unix sockets currently open?
Update: It was suggested that I use netstat -x, which theoretically works, but does not list the names of the abstract sockets, only those with paths.
bash-5.0$ netstat -xeW
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ] STREAM CONNECTED 3959158
unix 2 [ ] STREAM CONNECTED 3961068
unix 3 [ ] STREAM CONNECTED 3965008
unix 3 [ ] STREAM CONNECTED 3967192 /run/spire/writable/agent.sock | Is there a command to list all abstract unix sockets currently open? |
In this context, it's important to understand the rationale for why the kernel has the TIME_WAIT state for TCP connections. This state is intended to allow any packets associated with the connection (that may have taken longer routes or otherwise been delayed) to drain from the network before a new connection on the same port can be established. That way, you ensure that a new connection doesn't receive any packets associated with the old connection. The SO_REUSEADDR option lets the developer communicate "don't perform that wait".
Unix domain sockets don't have that concern; SO_REUSEADDR doesn't really make sense in that context.
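Both behaviors the question asks about can be observed directly. A minimal Python sketch (socket path and names are made up): close() alone leaves the socket file behind, so a second bind() to the same path fails with EADDRINUSE, while after unlink() the same path binds cleanly again:

```python
# Sketch: close() does not remove a UNIX socket file; unlink() does,
# and only then can the same path be bound again.
import errno
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

s1 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s1.bind(path)
s1.close()                      # socket gone, but the file at `path` remains

s2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
got_eaddrinuse = False
try:
    s2.bind(path)               # fails: the stale file still occupies the path
except OSError as e:
    got_eaddrinuse = (e.errno == errno.EADDRINUSE)
print(got_eaddrinuse)           # -> True

os.unlink(path)                 # remove the stale socket file
s2.bind(path)                   # now binding the same path succeeds
s2.close()
os.unlink(path)                 # clean up after ourselves again
```

This is why servers using path-based UNIX sockets typically unlink() the path right before bind(), or unlink() it on shutdown.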
|
If I bind() an AF_INET socket (for a TCP connection), then later close() it, next time I run my program, I may have issues, since despite the close(), the kernel can still have the resources associated with the open socket.
I am not very clear on this issue with the Unix Domain Sockets, though.
So far I have seen
I need a unique path to use it with bind(). The path must not
exist at the time bind() is called, and the file will be created
by bind(). (However, it may or may not be visible in the filesystem.
The file will not appear in the filesystem, if the path starts with the
special char \0.)
If the file is not unlink()-ed, even after close, the kernel keeps
the associated resources, and the socket is fully functional.
QUESTION:
Since neither close() nor unlink() alone can make a Unix Domain Socket disappear, will both of them do the trick reliably / trigger the kernel to give up all the resources associated with the socket?
Is it possible that I will ever run into a reuseaddr error, if both close() and unlink() were called?
EDIT (after the comments and answer):
So, a binded AF_LOCAL socket looks something like this:
unix_domain_socket_inode
-> binded to a socket
-> associated with a file (path)
The unix_domain_socket_inode will live as long as:
something keeps it open (the socket is not closed), or
it has the associated path
If only 1. is true, we have an open socket and an inode, and everything works.
If only 2. is true, since the inode has a path associated with it, the kernel cannot clean it up, but it does not work either, because it lacks the socket resources that handle incoming connections. It won't even be a normal file, just a dead husk of the past glory of a busy, working socket.
In case of the AF_INET connection, the address reuse issue was a design choice for better usability.
In case of the AF_LOCAL, the leftover file is an artifact from previous design choices, which prevent the kernel itself from automatically cleaning up the file it created in 1 go, when close() is called. There is no hidden mechanism associated, because of which the kernel would want to keep this resource after a close() is called.
| Unix Domain Socket bind, reuse address |
At the core, the problem is that hostnamectl is a systemd utility, which acts on the systemd-hostnamed.service. WSL doesn't currently provide support for systemd.
Also, WSL sets the hostname to the name of the Windows computer hosting the instance. You can change it by changing the Windows hostname (Control Panel -> System -> See the name of this computer -> Change Settings), but you can't change the WSL hostname independently of it.
What's your ultimate goal with changing the hostname, other than just following the tutorial you linked? Perhaps there is a better solution (e.g. changing the prompt).
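If the goal is simply a different name inside the instance, recent WSL builds can read a hostname from /etc/wsl.conf. This is a sketch under the assumption that your WSL version supports the network.hostname key (run wsl --shutdown from Windows afterwards so the instance restarts with the new name):

```ini
# /etc/wsl.conf - hostname override; supported only on newer WSL builds
[network]
hostname = myserver
```
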
|
I'm following this guide and I'm running into issues.
https://www.tecmint.com/initial-ubuntu-server-setup-guide/
I am trying to set up a Linux machine with Ubuntu on WSL 2 and then rename it using hostnamectl, but I get the error
Failed to create bus connection: No such file or directoryI have tried to follow these solutions.
This solution suggested installing a package which I did.
How do I fix my problem with hostnamectl command. It cannot connect to dbus
xuhu55@LAPTOP-DUPSMABG:/usr/share$ sudo dpkg -l | grep dbus
[sudo] password for xuhu55:
ii at-spi2-core 2.28.0-1 amd64 Assistive Technology Service Provider Interface (dbus core)
ii dbus 1.12.2-1ubuntu1.2 amd64 simple interprocess messaging system (daemon and utilities)
ii libdbus-1-3:amd64 1.12.2-1ubuntu1.2 amd64 simple interprocess messaging system (library)
ii python-dbus 1.2.6-1 amd64 simple interprocess messaging system (Python interface)
ii python3-dbus 1.2.6-1 amd64 simple interprocess messaging system (Python 3 interface)
xuhu55@LAPTOP-DUPSMABG:/usr/share$ sudo apt-get install dbus
Reading package lists... Done
Building dependency tree
Reading state information... Done
dbus is already the newest version (1.12.2-1ubuntu1.2).
0 upgraded, 0 newly installed, 0 to remove and 381 not upgraded
This other solution involved using strace, which unfortunately showed that my problem was not a symlink problem that the other solution could have solved.
hostnamectl shows error: "Failed to create bus connection: No such file or directory"
xuhu55@LAPTOP-DUPSMABG:/usr/share$ strace hostnamectl
execve("/usr/bin/hostnamectl", ["hostnamectl"], 0x7ffff119f3f0 /* 20 vars */) = 0
brk(NULL) = 0x562bdd747000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/systemd/tls/x86_64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib/systemd/tls/x86_64/x86_64", 0x7fffde186d70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/systemd/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)...
stat("/lib/systemd", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=39569, ...}) = 0
mmap(NULL, 39569, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb215297000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\34\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2030544, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb215295000
mmap(NULL, 4131552, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb214c89000
mprotect(0x7fb214e70000, 2097152, PROT_NONE) = 0
mmap(0x7fb215070000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e7000) = 0x7fb215070000
mmap(0x7fb215076000, 15072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb215076000
close(3) = 0
openat(AT_FDCWD, "/lib/systemd/libsystemd-shared-237.so", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@|\3\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=2355440, ...}) = 0
mmap(NULL, 4457440, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb214848000
mprotect(0x7fb2149fd000, 2093056, PROT_NONE) = 0
mmap(0x7fb214bfc000, 569344, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x7fb214bfc000
mmap(0x7fb214c87000, 5088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb214c87000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/librt.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\"\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=31680, ...}) = 0
mmap(NULL, 2128864, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb214640000
mprotect(0x7fb214647000, 2093056, PROT_NONE) = 0
mmap(0x7fb214846000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7fb214846000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\30\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=22768, ...}) = 0
mmap(NULL, 2117976, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb21443a000
mprotect(0x7fb21443e000, 2097152, PROT_NONE) = 0
mmap(0x7fb21463e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7fb21463e000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\33\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=31232, ...}) = 0
mmap(NULL, 2126336, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb214232000
mprotect(0x7fb214239000, 2093056, PROT_NONE) = 0
mmap(0x7fb214438000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7fb214438000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcryptsetup.so.12", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0``\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=310040, ...}) = 0
mmap(NULL, 2405352, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb213fe6000
mprotect(0x7fb21402f000, 2097152, PROT_NONE) = 0
mmap(0x7fb21422f000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x49000) = 0x7fb21422f000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libgcrypt.so.20", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\274\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=1155768, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb215293000
mmap(NULL, 3252232, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb213ccb000
mprotect(0x7fb213ddf000, 2093056, PROT_NONE) = 0
mmap(0x7fb213fde000, 28672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x113000) = 0x7fb213fde000
mmap(0x7fb213fe5000, 8, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb213fe5000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/libip4tc.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\25\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=27088, ...}) = 0
mmap(NULL, 2122304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb213ac4000
mprotect(0x7fb213aca000, 2093056, PROT_NONE) = 0
mmap(0x7fb213cc9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7fb213cc9000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libseccomp.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200L\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=309456, ...}) = 0
mmap(NULL, 2404592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb213878000
mprotect(0x7fb2138ab000, 2093056, PROT_NONE) = 0
mmap(0x7fb213aaa000, 106496, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x32000) = 0x7fb213aaa000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20b\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=154832, ...}) = 0
mmap(NULL, 2259152, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb213650000
mprotect(0x7fb213675000, 2093056, PROT_NONE) = 0
mmap(0x7fb213874000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x24000) = 0x7fb213874000
mmap(0x7fb213876000, 6352, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb213876000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libidn.so.11", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0+\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=206872, ...}) = 0
mmap(NULL, 2302000, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb21341d000
mprotect(0x7fb21344f000, 2093056, PROT_NONE) = 0
mmap(0x7fb21364e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x31000) = 0x7fb21364e000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/liblzma.so.5", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340(\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=153984, ...}) = 0
mmap(NULL, 2248968, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb2131f7000
mprotect(0x7fb21321b000, 2097152, PROT_NONE) = 0
mmap(0x7fb21341b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x24000) = 0x7fb21341b000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/liblz4.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\35\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=112672, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb215291000
mmap(NULL, 2207840, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb212fdb000
mprotect(0x7fb212ff6000, 2093056, PROT_NONE) = 0
mmap(0x7fb2131f5000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1a000) = 0x7fb2131f5000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libblkid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\230\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=311720, ...}) = 0
mmap(NULL, 2411776, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb212d8e000
mprotect(0x7fb212dd5000, 2097152, PROT_NONE) = 0
mmap(0x7fb212fd5000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x47000) = 0x7fb212fd5000
mmap(0x7fb212fda000, 3328, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb212fda000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000b\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=144976, ...}) = 0
mmap(NULL, 2221184, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb212b6f000
mprotect(0x7fb212b89000, 2093056, PROT_NONE) = 0
mmap(0x7fb212d88000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19000) = 0x7fb212d88000
mmap(0x7fb212d8a000, 13440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb212d8a000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\20\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=18680, ...}) = 0
mmap(NULL, 2113752, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb21296a000
mprotect(0x7fb21296e000, 2093056, PROT_NONE) = 0
mmap(0x7fb212b6d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7fb212b6d000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libuuid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@\26\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=27112, ...}) = 0
mmap(NULL, 2122112, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb212763000
mprotect(0x7fb212769000, 2093056, PROT_NONE) = 0
mmap(0x7fb212968000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7fb212968000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libdevmapper.so.1.02.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\266\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=432640, ...}) = 0
mmap(NULL, 2532048, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb2124f8000
mprotect(0x7fb21255e000, 2093056, PROT_NONE) = 0
mmap(0x7fb21275d000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x65000) = 0x7fb21275d000
mmap(0x7fb212762000, 720, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb212762000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/libargon2.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\r\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=34872, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb21528f000
mmap(NULL, 2130080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb2122ef000
mprotect(0x7fb2122f7000, 2093056, PROT_NONE) = 0
mmap(0x7fb2124f6000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7fb2124f6000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libjson-c.so.3", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P'\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=43304, ...}) = 0
mmap(NULL, 2138456, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb2120e4000
mprotect(0x7fb2120ee000, 2093056, PROT_NONE) = 0
mmap(0x7fb2122ed000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x7fb2122ed000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libgpg-error.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340+\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=84032, ...}) = 0
mmap(NULL, 2179304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb211ecf000
mprotect(0x7fb211ee3000, 2093056, PROT_NONE) = 0
mmap(0x7fb2120e2000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13000) = 0x7fb2120e2000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpcre.so.3", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \25\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=464824, ...}) = 0
mmap(NULL, 2560264, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb211c5d000
mprotect(0x7fb211ccd000, 2097152, PROT_NONE) = 0
mmap(0x7fb211ecd000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x70000) = 0x7fb211ecd000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\16\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=14560, ...}) = 0
mmap(NULL, 2109712, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb211a59000
mprotect(0x7fb211a5c000, 2093056, PROT_NONE) = 0
mmap(0x7fb211c5b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7fb211c5b000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libudev.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3008\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=121016, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb21528d000
mmap(NULL, 2218280, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb21183b000
mprotect(0x7fb211858000, 2093056, PROT_NONE) = 0
mmap(0x7fb211a57000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1c000) = 0x7fb211a57000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libm.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\272\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=1700792, ...}) = 0
mmap(NULL, 3789144, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb21149d000
mprotect(0x7fb21163a000, 2093056, PROT_NONE) = 0
mmap(0x7fb211839000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19c000) = 0x7fb211839000
close(3) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb21528b000
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb215288000
arch_prctl(ARCH_SET_FS, 0x7fb215288940) = 0
mprotect(0x7fb215070000, 16384, PROT_READ) = 0
mprotect(0x7fb211839000, 4096, PROT_READ) = 0
mprotect(0x7fb212d88000, 4096, PROT_READ) = 0
mprotect(0x7fb214846000, 4096, PROT_READ) = 0
mprotect(0x7fb211a57000, 4096, PROT_READ) = 0
mprotect(0x7fb211c5b000, 4096, PROT_READ) = 0
mprotect(0x7fb211ecd000, 4096, PROT_READ) = 0
mprotect(0x7fb2120e2000, 4096, PROT_READ) = 0
mprotect(0x7fb2122ed000, 4096, PROT_READ) = 0
mprotect(0x7fb2124f6000, 4096, PROT_READ) = 0
mprotect(0x7fb213874000, 4096, PROT_READ) = 0
mprotect(0x7fb21275d000, 4096, PROT_READ) = 0
mprotect(0x7fb212968000, 4096, PROT_READ) = 0
mprotect(0x7fb212b6d000, 4096, PROT_READ) = 0
mprotect(0x7fb212fd5000, 16384, PROT_READ) = 0
mprotect(0x7fb2131f5000, 4096, PROT_READ) = 0
mprotect(0x7fb21341b000, 4096, PROT_READ) = 0
mprotect(0x7fb21364e000, 4096, PROT_READ) = 0
mprotect(0x7fb213aaa000, 102400, PROT_READ) = 0
mprotect(0x7fb213cc9000, 4096, PROT_READ) = 0
mprotect(0x7fb213fde000, 8192, PROT_READ) = 0
mprotect(0x7fb21422f000, 4096, PROT_READ) = 0
mprotect(0x7fb214438000, 4096, PROT_READ) = 0
mprotect(0x7fb21463e000, 4096, PROT_READ) = 0
mprotect(0x7fb214bfc000, 565248, PROT_READ) = 0
mprotect(0x562bdd4dd000, 4096, PROT_READ) = 0
mprotect(0x7fb2152a1000, 4096, PROT_READ) = 0
munmap(0x7fb215297000, 39569) = 0
set_tid_address(0x7fb215288c10) = 1612
set_robust_list(0x7fb215288c20, 24) = 0
rt_sigaction(SIGRTMIN, {sa_handler=0x7fb212b74cb0, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO, sa_restorer=0x7fb212b81890}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {sa_handler=0x7fb212b74d50, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART|SA_SIGINFO, sa_restorer=0x7fb212b81890}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
brk(NULL) = 0x562bdd747000
brk(0x562bdd768000) = 0x562bdd768000
statfs("/sys/fs/selinux", 0x7fffde187670) = -1 ENOENT (No such file or directory)
statfs("/selinux", 0x7fffde187670) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/proc/filesystems", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
read(3, "nodev\tsysfs\nnodev\trootfs\nnodev\tt"..., 1024) = 474
read(3, "", 1024) = 0
close(3) = 0
access("/etc/selinux/config", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1683056, ...}) = 0
mmap(NULL, 1683056, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150ed000
close(3) = 0
openat(AT_FDCWD, "/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2995, ...}) = 0
read(3, "# Locale name alias data base.\n#"..., 4096) = 2995
read(3, "", 4096) = 0
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_IDENTIFICATION", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=252, ...}) = 0
mmap(NULL, 252, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2152a0000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=26376, ...}) = 0
mmap(NULL, 26376, PROT_READ, MAP_SHARED, 3, 0) = 0x7fb215299000
close(3) = 0
futex(0x7fb215075a08, FUTEX_WAKE_PRIVATE, 2147483647) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_MEASUREMENT", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=23, ...}) = 0
mmap(NULL, 23, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb215298000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_TELEPHONE", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=47, ...}) = 0
mmap(NULL, 47, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb215297000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_ADDRESS", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=131, ...}) = 0
mmap(NULL, 131, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150ec000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_NAME", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=62, ...}) = 0
mmap(NULL, 62, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150eb000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_PAPER", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=34, ...}) = 0
mmap(NULL, 34, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150ea000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_MESSAGES", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_MESSAGES/SYS_LC_MESSAGES", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=48, ...}) = 0
mmap(NULL, 48, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150e9000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_MONETARY", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=270, ...}) = 0
mmap(NULL, 270, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150e8000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_COLLATE", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1516558, ...}) = 0
mmap(NULL, 1516558, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb21132a000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_TIME", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=3360, ...}) = 0
mmap(NULL, 3360, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150e7000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_NUMERIC", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=50, ...}) = 0
mmap(NULL, 50, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150e6000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/locale/C.UTF-8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=199772, ...}) = 0
mmap(NULL, 199772, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb2150b5000
close(3) = 0
openat(AT_FDCWD, "/proc/self/stat", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
read(3, "1612 (hostnamectl) R 1610 1610 1"..., 1024) = 312
close(3) = 0
getpid() = 1612
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
getsockopt(3, SOL_SOCKET, SO_RCVBUF, [212992], [4]) = 0
setsockopt(3, SOL_SOCKET, SO_RCVBUFFORCE, [8388608], 4) = -1 EPERM (Operation not permitted)
setsockopt(3, SOL_SOCKET, SO_RCVBUF, [8388608], 4) = 0
getsockopt(3, SOL_SOCKET, SO_SNDBUF, [212992], [4]) = 0
setsockopt(3, SOL_SOCKET, SO_SNDBUFFORCE, [8388608], 4) = -1 EPERM (Operation not permitted)
setsockopt(3, SOL_SOCKET, SO_SNDBUF, [8388608], 4) = 0
connect(3, {sa_family=AF_UNIX, sun_path="/run/dbus/system_bus_socket"}, 29) = -1 ENOENT (No such file or directory)
close(3) = 0
openat(AT_FDCWD, "/usr/share/locale/C.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/C.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/C/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/C.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/C.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/C/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
writev(2, [{iov_base="Failed to create bus connection:"..., iov_len=58}, {iov_base="\n", iov_len=1}], 2Failed to create bus connection: No such file or directory
) = 59
exit_group(1) = ?
+++ exited with 1 +++ | hostnamectl command causes Failed to create bus connection: No such file or directory |
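The trace shows where things actually fail: the connect() to /run/dbus/system_bus_socket returns ENOENT, which usually means the D-Bus daemon is not running, so hostnamectl (which talks over the system bus) cannot connect. A small illustrative Python sketch to check for the socket; the path is taken from the trace, and the helper name is made up:

```python
import os
import stat

def dbus_socket_status(path="/run/dbus/system_bus_socket"):
    """Report whether the D-Bus system bus socket exists and is a socket."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return "missing"          # matches the ENOENT seen in the strace above
    return "ok" if stat.S_ISSOCK(st.st_mode) else "not-a-socket"

print(dbus_socket_status())
```

If this reports "missing", starting the dbus service (or systemd-logind, depending on the distribution) is the usual fix.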
Two processes cannot bind (and listen) to the same unix socket. A process which tries to bind to an already existing unix socket will get an EADDRINUSE error.

"Concretely, I can start two gunicorn processes with the same --bind unix:/ and no obvious error occurs"

It's probable that your gunicorn process is actually removing the socket file before binding to it, and so it ends up binding to a different unix socket.
Keep in mind that the actual address of a unix socket is the device_id:inode tuple, not the path through which it was accessed. If you remove a unix socket, a program which binds to the same path will end up creating a different socket file, with a different inode.
Note: all this applies to "normal", filesystem-resident Unix sockets. Linux also has abstract unix sockets, where the name of the socket is its actual address, and which do not use any kind of filesystem object. For these too, you won't be able to bind two sockets to the same address.
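Both behaviours can be demonstrated in a short Python sketch (the socket path is a throwaway temp file, purely illustrative): the second bind() fails with EADDRINUSE while the file exists, and succeeds again once the path is unlinked, at the cost of creating a different socket.

```python
import errno
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

s1 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s1.bind(path)
s1.listen(1)

# A second bind() to the same existing path fails with EADDRINUSE.
s2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
raised = None
try:
    s2.bind(path)
except OSError as e:
    raised = e.errno
print(raised == errno.EADDRINUSE)  # True

# After unlinking the file, the same path can be bound again, but this
# creates a brand-new socket file (new inode): clients using the path now
# reach s2, while s1 keeps listening on a socket nobody can name.
os.unlink(path)
s2.bind(path)
```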
|
What happens when I set up two processes to listen to the same Berkeley socket?
Do messages get routed to both? Neither? One of the two? If so, how?
Concretely, I can start two gunicorn processes with the same path for --bind unix: and no obvious error occurs:
gunicorn --bind=unix:/path/to/some/socket

This seems like a very simple question, although I have not been able to find a clear-cut answer on SE or elsewhere.
| What happens when two processes listen on the same Berkeley/Unix [file] socket? |
You do it with the -k option to nc.

-k      Forces nc to stay listening for another connection after its
        current connection is completed.  It is an error to use this
        option without the -l option.  When used together with the -u
        option, the server socket is not connected and it can receive
        UDP datagrams from multiple hosts.

Example:
$ rm -f /tmp/socket # unlink the socket if it already exists
$ nc -vklU /tmp/socket # the server
Connection from mack received!
yes
Connection from mack received!
yes
...

It's recommended to unlink() the socket after use -- but, in fact, most programs check if it exists and remove it before calling bind() on it; if the socket path exists in the filesystem and you try to bind() to it, you will get an EADDRINUSE error even when no program is using it in any way.

One way to avoid this whole mess on Linux is to use "abstract" unix sockets, but they don't seem to be supported by netcat.
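As an illustration of that alternative, here is a minimal Python sketch of an abstract unix socket (Linux only; the socket name below is made up): a leading NUL byte puts the name in the abstract namespace, so no filesystem entry is created and there is nothing to unlink.

```python
import socket

# Names starting with a NUL byte live in the abstract namespace (Linux).
name = b"\0demo-abstract-socket"

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(name)          # no file appears anywhere in the filesystem
srv.listen(1)

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(name)
conn, _ = srv.accept()

cli.sendall(b"hello")
data = conn.recv(5)
print(data)  # b'hello'
```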
|
Consider /var/run/acpid.socket. At any point I can connect to it and disconnect from it. Compare that with nc:
$ nc -l -U ./myunixsocket.sock
Ncat: bind to ./myunixsocket.sock: Address already in use. QUITTING.

nc apparently allows only single-use sockets. The question, then, is: how do I create a socket analogous to /var/run/acpid.socket, for multiple use and reuse?
| How to create a public unix domain socket? |
SCM in this context stands for “socket-level control message” (see also the processing implementation).
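To make the SCM_RIGHTS case concrete, here is a small Python sketch (single process over a socketpair, purely for demonstration) that passes a pipe's read end as a socket-level control message:

```python
import array
import os
import socket

# A connected pair of unix sockets (one process here, for demonstration).
left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Something to pass: the read end of a pipe with a few bytes in it.
r, w = os.pipe()
os.write(w, b"hello via SCM_RIGHTS")
os.close(w)

# Send the fd as an SCM_RIGHTS control message alongside 1 byte of data.
fds = array.array("i", [r])
left.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

# Receive it: the kernel installs a fresh descriptor, as if dup(2)-ed.
msg, ancdata, flags, addr = right.recvmsg(1, socket.CMSG_SPACE(fds.itemsize))
level, ctype, cdata = ancdata[0]
assert (level, ctype) == (socket.SOL_SOCKET, socket.SCM_RIGHTS)
new_fd = array.array("i", cdata)[0]

payload = os.read(new_fd, 64)
print(payload)  # b'hello via SCM_RIGHTS'
```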
|
From man 7 unix:

SCM_RIGHTS
Send or receive a set of open file descriptors from another
process. The data portion contains an integer array of the file
descriptors. The passed file descriptors behave as though they
have been created with dup(2).

There are also other concepts with SCM in them; what does SCM mean here? I didn't manage to find it.
| What does SCM mean in unix sockets context (SCM_RIGHTS etc.)? |
OpenSSH of a sufficient version (OpenSSH 6.7/6.7p1, released 2014-10-06, or higher) can do this. If the SSH connection is initiated from the client to the server system, one could write something like

ssh -L /path/to/client.sock:/path/to/server.sock serverhost

and then the client would connect to /path/to/client.sock and the server would listen at /path/to/server.sock. You probably will also need to set -o StreamLocalBindUnlink=yes; see ssh_config(5).
(And please don't use /tmp; improper use of /tmp can lead to local security exploits or denials of service or…)
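If OpenSSH is not available, the same idea can be sketched in plain Python: a small relay that listens on a local unix socket and forwards each connection to a TCP endpoint. This is an illustrative sketch, not part of the answer above; the function and parameter names are made up.

```python
import os
import socket
import threading

def pump(src, dst):
    # Relay bytes one direction until EOF, then half-close the peer.
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(chunk)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_bridge(unix_path, tcp_host, tcp_port):
    # Unix-socket front end that relays each connection to a TCP backend.
    if os.path.exists(unix_path):
        os.unlink(unix_path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(unix_path)
    srv.listen(8)
    while True:
        conn, _ = srv.accept()
        remote = socket.create_connection((tcp_host, tcp_port))
        threading.Thread(target=pump, args=(conn, remote), daemon=True).start()
        threading.Thread(target=pump, args=(remote, conn), daemon=True).start()
```

The mirror-image relay (TCP listener forwarding to a unix socket) is what the socat command in the question already provides on the server side.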
|
I have two processes (a client and a server) that communicate with each other using a Unix socket, /tmp/tm.ipc. Neither process supports TCP.
Client -> /tmp/tm.ipc -> Server
Now, I want to separate both processes to run on two different machines that run in the same subnet. Therefore, I want to build sort of a TCP bridge in between.
Client -> /tmp/tm-machine1.ipc -> TCP port 15432 -> /tmp/tm-machine2.ipc -> Server
I was thinking of using socat, but it looks like this only covers the server's listening part:

socat -d -d TCP4-LISTEN:15432,fork UNIX-CONNECT:/tmp/tm.ipc

Now I want to connect the client's Unix socket to that port. How can I do that?
| Building a Unix socket bridge via TCP |
A raw socket is a network socket (AF_INET or AF_INET6 usually). It can be used to create raw IP packets, which is useful for troubleshooting or to implement your own TCP implementation without using SOCK_STREAM:

"Raw sockets allow new IPv4 protocols to be implemented in user space. A raw socket receives or sends the raw datagram not including link level headers." [raw(7)]

Tools like nmap use raw sockets in order to stop the TCP handshake after the initial SYN, SYN-ACK, as the TCP connection is never completely established. As a network socket, it uses sockaddr_in for addresses.

However, the creation of raw sockets is usually restricted: only privileged processes can create them.

A unix socket, on the other hand, is not a network socket (AF_UNIX). It's a local socket:

"The AF_UNIX (also known as AF_LOCAL) socket family is used to communicate between processes on the same machine efficiently." [unix(7)]

It uses another address structure (sockaddr_un). It's a common way to implement two-way communication on a single system for inter-process communication without going through the network layer.

And packet sockets are raw packets at the driver level:

"Packet sockets are used to receive or send raw packets at the device driver (OSI Layer 2) level. They allow the user to implement protocol modules in user space on top of the physical layer." [packet(7)]

The other sockets act on the network layer (OSI Layer 3) or higher; with packet sockets, you're talking directly to your network interface's driver.
For more information see socket(2), ip(7), packet(7), raw(7), socket(7) and unix(7).
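The privilege difference is easy to observe from an unprivileged process. A small illustrative Python sketch (not from the ss documentation): unix sockets work without privileges, while creating a raw socket normally fails unless the process has root or CAP_NET_RAW.

```python
import socket

# Unix domain sockets need no privileges: plain local IPC.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
a.sendall(b"local")
echoed = b.recv(5)
print(echoed)  # b'local'

# Raw sockets normally require root / CAP_NET_RAW.
try:
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    raw.close()
    raw_allowed = True
except PermissionError:
    raw_allowed = False
print("raw socket allowed:", raw_allowed)
```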
|
The ss command (from the iproute2 set of tools, a newer alternative to netstat) lists the following options in its --help:
-0, --packet display PACKET sockets
-t, --tcp display only TCP sockets
-S, --sctp display only SCTP sockets
-u, --udp display only UDP sockets
-d, --dccp display only DCCP sockets
-w, --raw display only RAW sockets
-x, --unix display only Unix domain sockets

What exactly is the distinction made here between RAW and UNIX domain sockets?
And what actually are the PACKET sockets?
| ss command: difference between raw and unix sockets |