The short answer is no, these are not equivalent. Automatic testing is just data collection; the short test is an actual test.
This is discussed (at length) in the smartctl manpage, in the section describing the --offlineauto settings:

The second category of testing is called "offline" testing. This type of test can, in principle, degrade the device performance. The -o on option causes this offline testing to be carried out, automatically, on a regular scheduled basis. Normally, the disk will suspend offline testing while disk accesses are taking place, and then automatically resume it when the disk would otherwise be idle, so in practice it has little effect. Note that a one-time offline test can also be carried out immediately upon receipt of a user command. See the -t offline option below, which causes a one-time offline test to be carried out immediately.

and

The third category of testing (and the only category for which the word 'testing' is really an appropriate choice) is "self" testing. This third type of test is only performed (immediately) when a command to run it is issued. The -t and -X options can be used to carry out and abort such self-tests; please see below for further details.

So -t offline is equivalent to the automatic testing enabled with -o on, but that's not testing, it's just data collection (it updates the "offline" attributes). The short test, scheduled manually (or using smartd), is an actual test, as is the long test; smartd comes with example settings enabling nightly short tests and weekly long tests.
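For reference, a minimal smartd.conf sketch along the lines of the example settings shipped with smartd (the schedule below, a short test daily at 02:00 and a long test Saturdays at 03:00, mirrors the shipped example; treat it as illustrative and check your distribution's /etc/smartd.conf comments for the exact syntax):

# Monitor all attributes (-a); -s REGEXP schedules self-tests:
# S = short test every day at 02:00, L = long test every Saturday at 03:00
/dev/sda -a -s (S/../.././02|L/../../6/03)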
|
We're busy setting up smartmontools on our various Linux-based servers, and although it works, we want to streamline the process a bit. As I understand it, we can enable automatic testing, which should perform a test every 4 hours, but nothing indicates exactly what test is performed.
Is this test the same as running a short test? We're currently doing short tests manually every day, so if the automatic test done every 4 hours is the same, I would rather just rely on the automatic testing.
|
smartmontools - Is the automatic test the same as running a short test?
|
There is no precise time; it could be a minute from now or a month from now. But if failure is bound to happen, it will, and you can't really prolong the disk's life, so make sure you have backed everything up and stop relying on the disk.
|
How much time can I expect this disk (1 TB) to function? I have already made a backup of my important data. But I will be unable to buy a new HDD until February. Is there any way I can extend its life? Perhaps by formatting the disk or something else?
|
Disk is likely to fail soon [closed]
|
With a hardware RAID controller like that, smartctl is querying the RAID controller, and the controller firmware is making the actual SMART queries to the physical disks.
The message:
SMART Status not supported: ATA return descriptor not supported by controller firmware
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

means smartctl is trying to say roughly: "The RAID controller is telling me a disk responded with something the controller did not fully understand and could not pass to me. Based on the parts of the response we both understood, the disk(s) seem to be OK... but the part that was not understood might have been important."
You might want to check the RAID controller's vendor support to see if there is a firmware update for the controller, possibly with updated SMART support.
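If you want to see the attribute data that the PASSED verdict above was based on, you can ask for the attributes directly; the -d megaraid,24 addressing below is copied from the question and will differ on other setups:

sudo smartctl -A -d megaraid,24 /dev/sdb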
|
I'm confused about this smartctl output. It says SMART status is not supported, but then it says it PASSED.
# sudo smartctl -H -d megaraid,24 /dev/sdb
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-1160.59.1.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Status not supported: ATA return descriptor not supported by controller firmware
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

# echo $?
4

According to the man page, status code 4 means a prefail Attribute is less than the danger threshold.
EXIT STATUS
...
...
Bit 4: We found prefail Attributes <= threshold.

So I'm confused, is SMART data available on this disk or not?
|
Does my disk support SMART?
|
The smartmontools drive database can be seen here. Its purpose is to provide additional command-line flags to both smartctl and smartd for drives where the default settings (which are themselves defined in the database) aren’t sufficient, and/or to provide warnings about the drive.
Many drives have specific counters, or counters that need to be interpreted in specific ways; see for example the first non-default entry in the database.
Some drives have important misfeatures which users should be told about; see for example m4 SSDs with their counter bug.
USB drives need an access method to be specified, see the entries starting here.
The database isn’t needed, since all the settings it defines can be specified using command-line parameters; having it saves each user from having to determine those command-line parameters themselves.
|
Does smartd need the database, or does smartctl need the database?
I saw that the smartmontools GitHub keeps updating the database:
https://github.com/smartmontools/smartmontools/labels/drivedb
In my understanding, smartd will scan all disks, so why does it need a database? What is the function/purpose of the database in smartd/smartctl?
|
Why does smartd need a database?
|
If S.M.A.R.T. cannot be enabled, not even in rescue mode, this means that the hard disk isn't working correctly anymore and should be replaced.
|
On my Debian Wheezy server I use a software RAID 1 with two hard disk partitions, /dev/sda3 and /dev/sdb3, combined into /dev/md2:
mdadm --detail /dev/md2
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3

The RAID seems to be fine, but on one of the disks SMART is not running:
smartctl --all /dev/sda
says:
SMART support is: Available - device has SMART capability.
SMART support is: Disabled

While /dev/sdb gives a lot of SMART information.
I tried to start it with
smartctl -s on /dev/sda -T verypermissive
but that is not working; it doesn't start:
Error SMART Enable failed: scsi error aborted command
Smartctl: SMART Enable Failed.

How can I get it running? Or does it mean the disk has a problem?
|
Cannot get smartctl working
|
sshpass itself was working fine, but the alpine container python:3.6-alpine doesn't have openssh installed, so there was no ssh for sshpass to run.
The error message is confusing because it doesn't mention that it's the ssh component that is missing.
This can be fixed by running apk add --update openssh.
In this case it was resolved by changing the line in the Dockerfile from RUN apk add --update --no-cache sshpass to RUN apk add --update --no-cache openssh sshpass.
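A minimal Dockerfile sketch of the fix (the image tag is taken from the question; the rest is illustrative):

FROM python:3.6-alpine
# sshpass only wraps ssh, so the openssh client must be installed alongside it
RUN apk add --update --no-cache openssh sshpass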
|
When I install sshpass on alpine linux it will install and the doc will show up if you run it without arguments, but using any argument (valid or invalid) returns sshpass: Failed to run command: No such file or directory.
It's in $PATH, and even when using an absolute path it behaves the same. I want to use this with ansible, but it won't even work directly.
I can't seem to find any information online about this functioning or not functioning for other people, but I used other people's containers and my own and I couldn't get it to function on either.
https://pkgs.alpinelinux.org/package/v3.3/main/x86/sshpass
$ docker run -it --rm williamyeh/ansible:alpine3 ash
/ # sshpass
Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters
-f filename Take password to use from file
-d number Use number as file descriptor for getting password
-p password Provide password as argument (security unwise)
-e Password is passed as env-var "SSHPASS"
   With no parameters - password will be taken from stdin
   -P prompt   Which string should sshpass search for to detect a password prompt
-v Be verbose about what you're doing
-h Show help (this screen)
-V Print version information
At most one of -f, -d, -p or -e should be used
/ # sshpass hi
sshpass: Failed to run command: No such file or directory
/ # which sshpass
/usr/bin/sshpass
/ # /usr/bin/sshpass
Usage: sshpass [-f|-d|-p|-e] [-hV] command parameters
-f filename Take password to use from file
-d number Use number as file descriptor for getting password
-p password Provide password as argument (security unwise)
-e Password is passed as env-var "SSHPASS"
   With no parameters - password will be taken from stdin
   -P prompt   Which string should sshpass search for to detect a password prompt
-v Be verbose about what you're doing
-h Show help (this screen)
-V Print version information
At most one of -f, -d, -p or -e should be used
/ # /usr/bin/sshpass anyinput
sshpass: Failed to run command: No such file or directory

It's worth mentioning that the underlying ssh executable works and I can connect to the host that way.
|
sshpass not functioning in alpine linux
|
ssh prompts for and reads password (or passphrase) using the terminal (/dev/tty), not its stdin. This way you can pipe/redirect data to/from ssh and still be able to provide a password when asked. But to provide a password not via the terminal, one needs to present a "fake" terminal to ssh. This is what sshpass does.
When you run sshpass … ssh …, sshpass runs ssh in a dedicated emulated terminal. This means ssh does not read directly from your terminal, sshpass does. And ssh does not print directly to your terminal, sshpass does. Eventually sshpass will act as a relay, so it will be as if ssh used your terminal. But before this happens, sshpass intercepts what ssh prints; it also injects the string you specify after -p, and then ssh "sees" the string as coming from the terminal ssh is using (which is not your terminal). This way ssh can be fooled into believing you typed the password, when it's sshpass who "typed" it.
By default sshpass waits for assword: (or just assword, see footnote 1) to appear as a part of the password prompt. E.g. if you didn't use a key and you didn't use sshpass, ssh would print:
[emailprotected]'s password:
and it would wait for you to type your password. If you used sshpass to provide your password, then sshpass would intercept this message and "type" the password for you. By waiting for the right prompt, sshpass knows when ssh expects a password; only then does it pass your password.
In your case the prompt was different. ssh did not ask for the password, it asked for the passphrase using a different prompt. The prompt from ssh was exactly Enter passphrase for key '/home/user1/.ssh/id_rsa':, there was nothing matching assword:, so sshpass kept waiting for the default prompt that never came.
Use -P to override the default:

-P Set the password prompt. Sshpass searched for this prompt in the program's output to the TTY as an indication when to send the password. By default sshpass looks for the string assword: (which matches both Password: and password:). If your client's prompt does not fall under either of these, you can override the default with this option.

(source: man 1 sshpass)
In your case it may be:
sshpass -P assphrase -p "pass" ssh [emailprotected]

Now if sshpass intercepts Enter passphrase … coming from ssh, it will respond with whatever you specified after -p. Next it will sit as a relay between your terminal and the one ssh is using; it will become transparent.
In general sshpass can be used to provide a password (a string in general) to any tool that normally uses the terminal (as opposed to stdin+stdout+stderr) to prompt for and read the password. -P allows you to adjust the command to the prompt the tool uses.

1 The manual says assword:, but the output from your sshpass -v says using match "assword". One way or another you need -P to properly pass a passphrase.
|
I configured key pairs for SSH connection.
It works but of course asks for the passphrase.
ssh [emailprotected]

So now I try to log in with sshpass, which I have installed. I tried with the -p option but also with the -f option, and nothing works; it just hangs.
Verbose mode gives this info on the client side:
sshpass -v -p "pass" ssh [emailprotected]
SSHPASS searching for password prompt using match "assword"
SSHPASS read:
SSHPASS read: Enter passphrase for key '/home/user1/.ssh/id_rsa':

On the server side I can see this information in the log:
Accepted key RSA SHA256:V/V29pA2Ps5k/lBgz2R5XFP6vaaaOUN5hj0hca+j8TI found at __PROGRAMDATA__/ssh/administrators_authorized_keys:1
debug3: mm_answer_keyallowed: publickey authentication test: RSA key is allowed
debug3: mm_request_send: entering, type 23
debug3: send packet: type 60 [preauth]
debug2: userauth_pubkey: authenticated 0 pkalg rsa-sha2-256 [preauth]
debug3: user_specific_delay: user specific delay 0.000ms [preauth]
debug3: ensure_minimum_time_since: elapsed 19.018ms, delaying 1.849ms (requested 5.217ms) [preauth]
Postponed publickey for user1 from 10.7.141.243 port 44750 ssh2 [preauth]

Thanks a lot for your kind assistance!
|
"ssh" works but "sshpass" doesn't - how is this possible?
|
@KamilMaciorowski's comment led me in the right direction. According to this useful answer at Server Fault, Windows uses its own OpenSSH implementation. To solve this, openssh has to be installed in Cygwin so that Cygwin's version is used instead.
This solves the errors and now I can do this:
$ gpg -d -q myappserver23.sshpasswd.gpg > pass_file && sshpass -fpass_file ssh [emailprotected] > test.txt
$ cat test.txt
/home/myuser
uid=1001(myuser) gid=1001(mygroup) groups=1001(mygroup)

P.S. This is not insecure because we're using gpg here. But if in doubt, try this.
|
According to this RedHat SSH password automation guide, I'm following Example 4 (the GPG one), and following the steps in that guide I create my pass_file using my own passphrase. Then, I got this:
gpg -d -q myappserver23.sshpasswd.gpg > pass_file && sshpass -fpass_file ssh [emailprotected]

Note the lack of a space between the -f option and pass_file, per the sshpass man page.
When I run the command above, I'm asked for my passphrase, I type it correctly and then I'm asked for the server's password as if sshpass wasn't even used.
In short, this works but I still got a password prompt...
I'm aware of the -q option, and I've also added -vvv to both sshpass and ssh; it seems this is related to ssh and not to sshpass, I believe.
I'll just share here the debug messages I got after the ssh banner message
(...)
debug3: input_userauth_banner
----------------------------------------------------------------------------------------
Here goes ssh banner message
----------------------------------------------------------------------------------------debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,password
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering public key: C:\\Users\\myuser/.ssh/id_rsa RSA SHA256:7PAz6lsENYkfYGwFZWNf0OJ88Z9mFDMSBc+P9t+4H1k
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: C:\\Users\\myuser/.ssh/id_dsa
debug3: no such identity: C:\\Users\\myuser/.ssh/id_dsa: No such file or directory
debug1: Trying private key: C:\\Users\\myuser/.ssh/id_ecdsa
debug3: no such identity: C:\\Users\\myuser/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: C:\\Users\\myuser/.ssh/id_ed25519
debug3: no such identity: C:\\Users\\myuser/.ssh/id_ed25519: No such file or directory
debug1: Trying private key: C:\\Users\\myuser/.ssh/id_xmss
debug3: no such identity: C:\\Users\\myuser/.ssh/id_xmss: No such file or directory
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
debug3: failed to open file:C:/dev/tty error:3
debug1: read_passphrase: can't open /dev/tty: No such file or directory
[emailprotected]'s password:

The last line of this output is the password prompt. Obviously, if I type my password here, I'll be able to log in to the remote server, but then what's the point of using sshpass? I'd like to be able to just log in without having to type any password.
Please don't advise the use of private keys; I know about the subject but it's not applicable to my real-world scenario.
Any help will be appreciated.
|
Using sshpass in Cygwin, ssh still prompts for password
|
Given that the application I'm using is a CLI program that searches for ssh in $PATH, it is possible to modify PATH to include a special ssh command that is actually an sshpass wrapper.
Wrap sshpass into an executable called ssh:
/path/to/bin/ssh:
#!/usr/bin/env bash
sshpass -f /path/to/passwordfile ssh "$@"

Then prepend the parent dir of the custom ssh to PATH when running myapp by adding the following alias in .zshrc or .bashrc:
alias myapp="PATH=/path/to/bin:$PATH myapp"
|
I am using an application that uses /usr/bin/ssh and requires passwordless authentication. Meanwhile I want to use this with a server that requires both publickey and password authentication at the same time. Just using sshpass is a good solution in most circumstances (given that my secret password is safe and that I am also using safe public key authentication at the same time), but sshpass wraps around ssh and not the other way around.
How can I use sshpass when /usr/bin/ssh x is called? My current workaround is the config below. It works, but it hops from the target server to itself. I would rather not have this overhead.
Host x
HostName xxx.xxx.xx
ProxyCommand sshpass -f passwordfile ssh -W %h:%p xxx.xxx.xx

Is there a way to get the same result without hopping to the server itself?
(Otherwise - for future people with the same problem - the above config is a workable solution.)
|
sshpass through regular ssh client
|
It's a bit difficult to understand what you're really trying to do.
If you want to concatenate the content of $sourcelist and $tools/$2 and execute it in Bash, you can use cat with those two files and pipe to ssh like this:
cat "$sourcelist" "$tools/$2" | sshpass -e ssh $usrconn@$ipconn
|
I have a bash script that will execute a script of my choosing against a server over ssh. My problem is that I also want to use an input file with common variables so I don't have to change them in each script. So far my attempts at getting it to source the two files have resulted in it trying to find one of them on the remote machine.
input
JBLGSMR002,IP.IP.IP.IP,root,pers,pers

sourcelist
#!/bin/bash
var1="Some stuff"
var2="Some stuff 2"

Script
#!/bin/bash
#
#set -x
input="/home/jbutryn/Documents/scripts/shell/input/nodelist.csv"
sourcelist="/home/jbutryn/Documents/scripts/shell/Tools/slist"
tools="/home/jbutryn/Documents/scripts/shell/Tools"
#
is.there () {
if grep -wF $1 $2 > /dev/null 2>&1 ; then
echo "true"
else
echo "false"
fi
}
#
nodethere=$(is.there $1 $input)
#
if [[ $nodethere = "true" ]]; then
ipconn=$(awk -F ',' '/'"$1"'/ {print $2}' $input)
usrconn=$(awk -F ',' '/'"$1"'/ {print $3}' $input)
elif [[ $nodethere = "false" ]]; then
echo "Couldn't find $1 in database"
exit 1
fi
#
if [[ -f $tools/$2 ]]; then
echo "Please enter your password for $1: "
read -s SSHPASS
eval "export SSHPASS='""$SSHPASS""'"
sshpass -e ssh $usrconn@$ipconn < "$tools/$2"
elif [[ ! -f $tools/$2 ]]; then
echo "Couldn't find $2 script in the Tools"
exit 1
fi

I have this test script to see if it's passing the variables to the remote machine:
Test Script
#!/bin/bash
#
touch testlog
echo $var1 >> ./testlog
echo $var2 >> ./testlog

And this is what I've tried so far to get the sourcelist to pass through:
if [[ -f $tools/$2 ]]; then
echo "Please enter your password for $1: "
read -s SSHPASS
eval "export SSHPASS='""$SSHPASS""'"
sshpass -e ssh $usrconn@$ipconn < "$sourcelist"; "$tools/$2"

This one will create a blank testlog file on the local machine.
if [[ -f $tools/$2 ]]; then
echo "Please enter your password for $1: "
read -s SSHPASS
eval "export SSHPASS='""$SSHPASS""'"
sshpass -e ssh $usrconn@$ipconn <'EOF'
source $sourcelist
bash "$tools/$2"
logout
EOF

This one will create a blank "testlog" file on the local machine.
I've also tried using source, bash, and . to call the files, but I still can't seem to get both local files passed to the remote machine. Does anyone know how this can be done?
|
How can I pass two files through ssh?
|
sshpass is used in a different way:
./sshpass -p password ssh root@xxxx

as explained in the manual page synopsis:
sshpass [-ffilename|-dnum|-ppassword|-e] [options] command arguments
|
I'm trying to use sshpass to login automatically, however, it seems to have problem with /dev/tty
echo password | ./sshpass ssh root@xxxx
...
debug1: read_passphrase: can't open /dev/tty: No such device or address
...
Permission denied (publickey,gssapi-with-mic,password).

Any ideas? I can log in directly without sshpass, so it's a tty problem.
|
sshpass no longer works?
|
I suggest some improvements for your remote traffic capturing script:

Preventing CTRL+C from killing your script
This can be done using a trap to capture SIGINT. You usually place it at the beginning of your script. See this example:
trap "pkill ssh" INT
Obviously, the filter to get only the ssh process you want can be improved.

Filtering out the communication with the remote machine from that tcpdump
You tell us your system is not reacting well to the capture; a recursive data capture might be overloading the system. Depending on the interface you are listening to, you might be creating a recursive situation where the controlling connection's data is added to the capture, which in turn generates more packets to feed tcpdump.
So your tcpdump might need to filter out the communication between the local and remote machine. Use something similar to:
/usr/bin/tcpdump -i eth5.1 -s 0 -n -v -U -w - "not port 22"
or:
/usr/bin/tcpdump -i eth5.1 -s 0 -n -v -U -w - "not host my_local_host"
This might alleviate the slowness and instability, so the script will behave better. See also, later on, the point about using nc.
Limiting the number of packets in tcpdump to avoid having to use CTRL+C all the time
If you only want to capture a small set of traffic, you should limit the number of packets captured by tcpdump.
For instance, to capture 100 packets and return:
tcpdump -c 100 -w -

Limiting tcpdump in time to avoid using CTRL+C all the time
If you want to capture 5 minutes (300 seconds) of traffic, use the timeout command on the remote side. As in:
ssh "timeout 300 tcpdump -w -"

Dropping that shell
You are using the shell because you need $PATH to invoke tcpdump. Invoke tcpdump by its full path instead. As in:
sshpass -p $password ssh -T $username@$ip_address -p 30007 "/usr/bin/tcpdump -i eth5.1 -s 0 -n -v -U -w -"

Unbuffering tcpdump
tcpdump output to stdout is buffered. To unbuffer it, to get data faster from the remote host and lose less data when interrupting it, use -l:
/usr/bin/tcpdump -i eth5.1 -s 0 -n -v -U -l -w -
This might or might not be useful in your case; make some tests. It will surely leave you more data on your side when you interrupt it with CTRL+C.
Not using SSH for getting the traffic dumps; use netcat instead
If you do not have strict security requirements, e.g. a capture inside your own network, and don't have firewall(s) blocking ports in the way between the two hosts, drop SSH for getting the tcpdump output back. It is a computationally heavy protocol and will get in the way if you need to capture a larger volume of remote traffic. Use nc instead to get the tcpdump data.
As in, on the local machine:
nc -l -p 20000 > capture.dump &
Remote machine:
ssh remote "/usr/bin/tcpdump -w - | nc IP_local_machine 20000" > /dev/null

Dropping that sshpass
Use ssh keys to authenticate instead of a password.

Lastly, if doing this more professionally, consider the possibility of using an agent + a professional service. You can do it using cshark.

Capture traffic and upload it directly to CloudShark for analysis. Use the CloudShark service at https://www.cloudshark.org or your own CloudShark appliance.
|
I have setup a simple script like the below:
sshpass -p $password ssh -T $username@$ip_address -p 30007 <<- EOF > $save_file.pcap
sh
tcpdump -i eth5.1 -s 0 -n -v -U -w -
EOF
sed -i '1d' $save_file.pcap

The purpose of this script is so that I can run a tcpdump on a remote device, yet save the output into a file on my local machine (the remote device has very limited storage capacity, so this would allow me to obtain much larger captures, as well as, of course, allowing me to set up captures on demand much more quickly).
The reason for the sh and the heredoc is that, by default, I am not dropped into the appropriate shell of this remote device. Issuing sh on the remote device gets me to the proper shell to be able to run my tcpdump, and this heredoc is the only method I've found to accomplish this task and still port the information back into my local file.
The issue that I'm running into is that once the script gets to the tcpdump section of this script, my terminal is given output like the below, and like I would expect to see when running a tcpdump into a file:
drew@drew-Ubuntu-18:~/Desktop$ ./Script.sh
tcpdump: listening on eth5.1, link-type EN10MB (Ethernet), capture size 65535 bytes
Got 665

And of course that "Got" counter increases as more packets are captured and piped into my local file. Unfortunately, the only method I have found thus far to stop this and return my terminal is to initiate a CTRL+C.
The issue here is that this doesn't only stop the tcpdump on the remote machine, but it ends the script that is running on my local machine.
This of course means that nothing further in my script is run, and there are many tasks that I need to perform with this data past just the sed that I included here.
I've instead tried to set things up as follows:
tcpdump -i eth5.1 -s 0 -n -v -U -w - &
read -n 1 -s; kill $!The thought process here is that my raw tcpdump information would still be posting to stdout, and therefor still be populating in my local capture file. However, it seems like when I tried to run a capture in this manner, with the &, it didn't actually let me post anything else into the terminal (not sure if just too much junk flying at all times or what). I even tried this locally and it seems like trying to run a raw tcpdump posting to stdout doesn't let anything else happen.
Based on this information, the only thing I can think of at this point is if there is some manner in which I can use the CTRL+C in order to close out of the tcpdump on the remote machine, but keep my script still running. Any suggestions I can try? Or other methods of going about this that would be far more logical?
|
Stop CTRL+C Exiting Local Script Which is Running tcpdump in Remote Machine
|
You can use script(1) as a mini-expect, provided that you can cope with adjusting arbitrary timeouts, which is of course quite kludgy:
{ sleep 1; echo PASSWD; } | script -q /dev/null -c 'ssh user@host CMD'or with the syntax of BSD's script(1):
{ sleep 1; echo PASSWD; } | script -q /dev/null ssh user@host CMD

The sleep is necessary because ssh will drain the tty's input buffer (and discard whatever was already written to it) before reading the password. If the remote server is sometimes slow to respond, using a "large-enough" timeout may be impractical.
sshpass, expect etc handle that by waiting to ssh to write the ... password: prompt before they write the password to the master end of the pty. Doing that from a standard shell is neither simple nor very robust. Here is a kludge using a named pipe:
passwdcmd(){
t=$(mktemp -u); mkfifo "$t" || return
script /dev/null -qc "$2" <>"$t" | { dd count=1 2>/dev/null; echo "$1" >"$t"; rm "$t"; cat; }
}

passwdcmd PASSWD 'ssh user@host CMD'

Of course, this is not very secure, especially since echo may not be a/the shell built-in. For any non-interactive use of ssh, use public key authentication.
|
These are standard sshpass commands to read password from file or as argument.
user@linux:~$ sshpass -f pwd.txt ssh admin@server
admin@server:~$

user@linux:~$ sshpass -p P@55 ssh admin@server
admin@server:~$

Is it possible to write the same program/script in shell if expect is not available?
Public/private key is not the option in this case.
If there's available code to provide the same functionality as sshpass out there, please let me know.
The simplest the code, the better.
|
sshpass alternative in linux shell/bash code
|
If StrictHostKeyChecking is set to accept-new or no, new hosts are automatically added to the ~/.ssh/known_hosts file. The default is to ask.
If a host key changes, you will get a warning message that begins like this
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.

and the connection is permitted if set to no. With accept-new, changed host keys are not permitted and you would have to remove the offending lines in known_hosts first.
sshpass -p <password> rsync -avzP -e 'ssh -o StrictHostKeyChecking=accept-new' <file> <user>@<IP address>:<folder>

If you want to override the default setting for all hosts, you could add an entry in your ~/.ssh/config:
Host *
StrictHostKeyChecking accept-new
|
I'm using rsync to copy a file to a device:
sshpass -p <password> rsync -avzP <file> <user>@<IP address>:<folder>

This has worked fine in the past but I was trying to copy to a new device and got:
Host key verification failed.
rsync error: unexplained error (code 255) at rsync.c(703) [sender=3.2.3]

I removed sshpass and just tried rsync -avzP <file> <user>@<IP address>:<folder> and got:
The authenticity of host '<IP address>' can't be established.
<Name> key fingerprint is SHA256:<hex>.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?

Entering yes fixes this; then I can run the original command. But how can I do this in a single command with sshpass?
|
Using sshpass with rsync to accept a fingerprint
|
ssh will inherit the standard input stream from the loop and read as much as possible from it, meaning it will read the remaining lines from your address-list file. Since ssh is reading the remaining lines from the file, the loop will only ever do a single iteration. Technically, the script does not terminate prematurely or fail in any way; it just does not do what you want it to do.
To avoid this, use ssh -n to prevent ssh from reading from standard input:
while read address; do
echo -n "$address "
sshpass -p password ssh -n -o StrictHostKeyChecking=no "user@$address" 'ls /path/to/some/dir'
done < address-list

ssh behaves in this way so that you would be able to pass data into some program started on the remote host, but as you have discovered, in your case this will prevent your loop from functioning correctly.
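If you prefer, an equivalent fix is to redirect ssh's standard input from /dev/null explicitly, which is essentially what -n does (a sketch, assuming the same file layout as in the question):

while read address; do
  echo -n "$address "
  sshpass -p password ssh -o StrictHostKeyChecking=no "user@$address" 'ls /path/to/some/dir' < /dev/null
done < address-list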
|
I need to perform an operation on a list of server, through SSH.
I am using sshpass, and that operation might fail, however it's supposed to happen a few time, and the script should still continue.
#!/bin/bash

while read address; do
echo -n "$address "
sshpass -p password ssh -o StrictHostKeyChecking=no user@$address 'ls /path/to/some/dir'
done < address-list

However, the command run through ssh fails on the first host, and the script immediately exits.
How can I have the loop continue executing, no matter what? Adding an exit after my command doesn't fix this, and unsetting exit-on-error with set +e doesn't work either.
|
Bash script exit on command failure (sshpass)
|
To selectively display an individual command as well as its output, you can use something like
sh -vc 'echo \"Some text\"'
although the nested quoting can start getting on your nerves pretty quickly.
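For illustration, since -v makes the shell echo each input line as it reads it, I would expect a run to look roughly like this (not captured output, just the expected shape):

$ sh -vc 'echo "Some text"'
echo "Some text"
Some text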
|
The following is a part of my script where I want to echo some text to the local terminal if the condition fails in ssh.
/usr/bin/sshpass -p $PASSWORD /usr/bin/ssh -t -o "StrictHostKeyChecking no" root@$IP -p $PORT '
cd $PATH;
[ ! -d temp ] && mkdir temp;
for new_file in '${NEW_FILE[@]}'
do
[ -f $new_file ] && mv -f $new_file temp/$new_file-'$DATE'
DOWNLOAD=$(wget --no-check-certificate '$URL'/$new_file > /dev/null 2>&1)
if [ '$?' -ne '0' ]; then
mv temp/$new_file-'$DATE' '$PATH'/$new_file
echo "$new_file download failed! please check and re-run the script"
else
chmod +x $new_file
fi
done;'

Except for the echo, the remaining functionality works well.
Let me know if it is possible to echo from ssh to the local terminal.
|
SSH remote - display echo on local terminal
|
Old versions of EPEL are still available so you can try to use the 8.5 archived version:
https://dl.fedoraproject.org/pub/archive/epel/8.5.2022-05-10/Everything/x86_64/
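In a Kickstart file that could look something like the following repo line (the repo name is arbitrary; the baseurl is the archive URL above):

repo --name=epel-archive --baseurl=https://dl.fedoraproject.org/pub/archive/epel/8.5.2022-05-10/Everything/x86_64/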
|
BACKGROUND:
I am developing a Kickstart file to install Rocky (8.5), and I have included EPEL as a repo in order to install Ansible. Yesterday, when trying to install from my Kickstart file, I received a message that "nothing provides sshpass needed by ansible-2.9.27-1.el8.noarch"
A quick search yielded this page: https://bugzilla.redhat.com/show_bug.cgi?id=2020679
Since Red Hat just released RHEL 8.6 a few days ago, it seems that sshpass was removed from EPEL. Since Rocky will naturally be a little behind RHEL, I am expecting that I will not be able to get this package from Rocky's repos until they release 8.6 in a week or two or whenever.
QUESTION:
Until Rocky 8.6 is released, what repo should I add (temporarily) to my Kickstart file to get past this dependency issue?
|
Rocky 8.5 - Alternate repo for sshpass, removed from EPEL
|
You can take inspiration from commands with similar syntax, such as sudo which also takes options followed by a command name and its options. See also Dynamic zsh autocomplete for custom commands for an introduction on writing zsh completions.
Put the following code in a file called _sshpass on your $fpath. (See How to properly make custom zsh completions "just work"? for more details on how zsh picks up completion functions.)
#compdef sshpass

_sshpass () {
  local context state line
  typeset -A opt_args

  _arguments \
    '(-d -e -f -p)-d+[read password from file descriptor]:file descriptor:_file_descriptors' \
    '(-d -e -f -p)-e[take password from $SSHPASS]' \
    '(-d -e -f -p)-f+[read password from file]:file:_files' \
    '(-d -e -f -p)-p+[actual password]::' \
    '(-)1:command:->command' \
    '*::arguments: _normal' \
    && return

  case $state in
    (command)
      if ((CURRENT == 2)); then
        # Insist that the first argument must be an option
        compadd -- -d -e -f -p
      else
        compadd ssh ||
        _command_names -e
      fi
      ;;
  esac
}

_sshpass "$@"

Brief explanation:
_arguments arranges completion for a command with options.
The options -d, -e, -f and -p are mutually exclusive. (-d -e -f -p)-d… specifies completions for -d and states that this should not be offered if one of the options -d, -e, -f or -p is already present.
-d+ means that a space is optional between -d and its argument.
->command for the first parameter causes _arguments to set $state to command and the execution continues below.
_normal for arguments after the first non-option argument causes them to be completed as if the command line started with the first non-option argument. The :: after * instructs zsh to strip options up to the first non-option argument.
For the command name, try completing the name passed to compadd (you can add more names if you want). If this fails, complete any command name instead.
|
I am using sshpass command like this:
sshpass -p 'my_password' ssh user@server

The full syntax, according to the man page, is
sshpass [-ffilename|-dnum|-ppassword|-e] [options] command arguments

At the moment, zsh does not have any completion rules for sshpass. How can I create simple completion rules, so that as soon as I reach the "command" and "arguments", ssh is completed for the command and user@host is completed the same as it would be for standard ssh?
|
zsh completion for sshpass
|
The problem with sshpass + ssh is that ssh first authenticates the user, forks a child to handle the connection and then exits. But sshpass will pull the rug from under the child as soon as the parent ssh has exited, before the child has had any chance to detach itself from the terminal (the pseudo-tty created by sshpass), and consequently it will be killed by a SIGHUP signal.
Therefore this would work:
sshpass -p '1234567*' sh -c 'ssh -L 1080:192.168.0.1:2222 [emailprotected] -p 4422 -f -C -N && sleep .1'
|
sshpass -p '1234567*' ssh -L 1080:192.168.0.1:2222 [emailprotected] -p 4422 -f -C -N
The above command works on macOS (creating a tunnel on local port 1080 to 192.168.0.1:2222 via gateway.com:4422, with username admin and password 1234567*).
It doesn't work on Linux - the process seems to run and terminate immediately.
|
How to keep sshpass process in the background?
|
When you're using a here-document, the text is written to the standard input (STDIN) of the process, in this case to the STDIN of your ssh process. Since you're redirecting the here-document to the STDIN of your ssh, it cannot read your keyboard.
Instead of providing those commands in the STDIN of your ssh, assign it to a variable, and then use this variable as the command for ssh to run on the remote host.
# Read the here-document into the $COMMAND variable
read -r -d '' COMMAND << 'MYSCR'
while :
do
clear
...
MYSCR

# Run ssh with $COMMAND
sshpass -p $passd ssh -qt -o StrictHostKeyChecking=no -o ConnectTimeout=4 $user@$1 "$COMMAND"
|
I am trying to run the script below on a remote machine, and I am facing two issues here:

The while loop defined in the heredoc runs endlessly.
read (for the Enter key) does not work on the remote host.

Without the sshpass command the script works fine on the local host.
Can someone look into this and help?
#!/bin/bash
#read -p "Enter the user name to connect to server: " user
#read -s -p "Enter the password : " passd
#sshpass -p $passd ssh -qt -o StrictHostKeyChecking=no -o ConnectTimeout=4 $user@$1 << 'MYSCR'
##############################################################################################################################################
while :
do
clear
echo "Server Name - $(hostname)"
echo "1. Check Server Information"
echo "2. Check Network Information"
echo "q. to Exit"
read -p "Enter your choice [ 1 - 3 ]" choice
case $choice in
1)
echo "Server Date : `date`"
echo "Server Uptime :`uptime`"
read -p "Press [Enter] key to continue..."
readEnterKey
;;
2)
echo "File system information"
df -hP | column -t
read -p "Press [Enter] key to continue..."
readEnterKey
;;
q)
echo "Bye!"
exit 0
;;
*)
echo "Error: Invalid option..."
read -p "Press [Enter] key to continue..."
readEnterKey
;;
esac
done
#MYSCR
|
Running a while loop on remote server using heredoc
|
ssh -tt user@server 'screen -ls 2>/dev/null | grep -i detached && screen -r || echo "No detached screen sessions found"'

This would work provided you have exactly one detached screen session.
|
I would like to connect to a server using ssh and after logging in I would like to automatically execute
screen -R

My script looks as follows:
sshpass -p password ssh -t [emailprotected] 'screen -R; bash -l'

It is important that I would like to be able to control from the outside which commands are executed after login. Otherwise I could most likely just add stuff to ".bashrc" or some similar file.
|
Login to ssh session and afterwards automatically look for and access existing screen session?
|
for i in {1..253}
do
ip=192.168.1.${i}
echo "Enter password for: $ip"
read pswd
case "$pswd" in
*) password=$pswd;;
esac
sshpass -p "$password" ssh -o StrictHostKeyChecking=no username@$ip 'hostname
echo "Checking if foo.log exists: `ls -lh /var/log/foo.log | wc -l`"
echo "Checking if bar.log is present: `ls -lh /var/log/bar.log | wc -l`"
' 2>/dev/null
done

That should work. Remember, Ctrl+C will kill this loop if you get tired of it running during testing, or just use a smaller range to debug it, like 1 to 5.
|
I have a simple for loop one liner I use to check for things across a number of servers that have the same password set. I want to develop this one liner into a script that logs into a cluster of servers via IP address, prompts for a password and performs a command. Such as restarting a service. This is what I use:
for i in {1..253}
do sshpass -p PASSWORDHERE ssh -o StrictHostKeyChecking=no [emailprotected].${i} 'hostname
echo "Checking if foo.log exists: `ls -lh /var/log/foo.log | wc -l`"
echo "Checking if bar.log is present: `ls -lh /var/log/bar.log | wc -l`"
' 2>/dev/null; echo ""; done

My script-fu is weak and I really don't have much of a clue where to start. Incidentally, I want to achieve this with a basic set of tools; I'm not able to install anything third-party.
Any help appreciated.
|
Bash For Loop - prompt for IP range and password
|
You need to escape the $ before the $HOSTNAME variable (or $(hostname) command), so that it is expanded/run on the remote machine rather than the local machine:
#!/bin/bash
while read PASSWORD SERVER;do
sshpass -p "$PASSWORD" ssh -t -p 1234 $SERVER << EOF
wget -N https://example.com/file.conf 2>&1 | grep -i "failed\|error\|saved"
sed -i "s/variabletoreplace/\$HOSTNAME/" file.conf
EOF
done

As mentioned in the comments, it would be much better to use ssh keys rather than sshpass, and if all these commands you want to run were in a script on the remote host it would be a lot simpler. Alternatively, using a tool like Ansible or Puppet may be more appropriate.
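To see the escaping difference in isolation, here is a minimal sketch (remote.example.com is a placeholder host): inside an unquoted heredoc, an escaped $ survives until the remote shell, while quoting the delimiter ('EOF') instead would make every line expand remotely without any escaping:

ssh remote.example.com << EOF
echo $HOSTNAME     # unescaped: expanded locally, prints the local hostname
echo \$HOSTNAME    # escaped: expanded on the remote host
EOF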
|
I have created a shell script to connect to a list of servers using sshpass and run a list of about 30 commands. An extract of the script is below.
The script will wget a new config file, but then I am stuck trying to replace a variable in the config file with the server hostname.
I've had no luck getting the remote server hostname, only the local hostname (in this case MacBookPro!) will output.
#!/bin/bash
while read PASSWORD SERVER
do
sshpass -p "$PASSWORD" ssh -t -p 1234 $SERVER << !
echo "Server: $SERVER"
wget -N https://example.com/file.conf 2>&1 | grep -i "failed\|error\|saved"

Attempt 1:
replace "variabletoreplace" "$HOSTNAME" -- file.conf

Attempt 2:
sed -i "s/variabletoreplace/$(<file.conf)/" /proc/sys/kernel/hostname

End of the script:
!
done <./server_list.txt

I've also attempted assigning variables like host=$(hostname -f) or $HOST, but nothing will display the remote host.
I understand in normal ssh commands this type of issue can be down to use of double-quotes instead of single-quotes, but I'm not sure how to amend the script I have whilst keeping it as a list of commands. Any help appreciated.
|
Output remote hostname in sshpass session
|
The issue was the pipe between the ssh command and tail.

// old code
value=`sshpass -p $PASSWORD ssh $USERNAME@$REMOTE_IP_ADDR | tail -F /tmp/file.txt | awk '{ print $16 }'`

// edited code (working)
value=`sshpass -p $PASSWORD ssh $USERNAME@$REMOTE_IP_ADDR tail -F /tmp/file.txt | awk '{ print $16 }'`

Thanks for the help ctac_
|
In my script I need to access a remote server using ssh. On the remote machine I want to gather some data from log files. I have my script set up and the code works, but my problem is that when I run the script it will get to the ssh and log onto the remote server, but it won't run the next command. It waits for keyboard input, but I need the command to run without any input from the keyboard.
This is what I have:
value=`sshpass -p $PASSWORD ssh $USERNAME@$REMOTE_IP_ADDR | tail -F /tmp/file.txt | awk '{ print $16 }'`

// I have tested this line of code and it works how I need it to:
tail -F /tmp/file.txt | awk '{ print $16 }'
|
run command in script after an ssh
|
Thanks to @UlrichSchwarz for pointing me in the right direction. After learning that without quotes around the initiating EOF the heredoc body is processed locally, I added the quotes and the script runs as expected.
The correction:
sshpass -p $password ssh -o StrictHostKeyChecking=no $server /bin/bash << "EOF"
|
Try as I might, I am unable to execute the lpstat and lp commands of CUPS remotely using sshpass in my script. I am able to execute all of the lines involving these commands when using an interactive terminal on the server running CUPS. Any help would be greatly appreciated.
Input:
./cups_print_job.sh printer ./file.pdf 1

cups_print_job.sh:
#!/bin/bash

#about
#script to print a file using CUPS via the command line. more options at https://www.cups.org/doc/options.html

#directions
#following the script name, arg1 = ~/.ssh/config host running CUPS; arg2 = path to file; arg3 = page range (e.g. 1-4,7,9-12)

#definitions
server=$1
path=$2
filename=$(basename $path)
range="page-ranges=$3"

#file transfer and printing

read -s -p "Password for CUPS: " password

sshpass -p $password scp $path $server:/tmp
sshpass -p $password ssh -o StrictHostKeyChecking=no $server /bin/bash << EOF

#definitions
printer_name=$(/usr/bin/lpstat -p -d | head -n1 | awk -F ' ' '{print $2}')
options="-o fit-to-page -o media=Letter -o $range"

#print job and cleanup
/usr/bin/lp -d $printer_name $options /tmp/$filename
echo $password | sudo -S rm -r /tmp/$filename
EOF

echo
echo 'Print job sent'
echo

Output:
Password for CUPS: ./cups_print_job.sh: line 20: /usr/bin/lpstat: No such file or directory
/usr/bin/lp: No such file or directory
[sudo] password for print:
Print job sent

~/.ssh/config contents:
Host printer
User print
Hostname 192.168.0.16
Port 22
|
Executables specified with absolute path not found when using sshpass
|
As for the why:
ss, part of the iproute2 utility collection in the Linux kernel, uses an ioctl() request to get the current width of the terminal.
However, the entire width is used for the «other» fields and the process field gets squeezed onto the next line.
You can view this by, for example (with a limited terminal width):
script ss.txt
ss -nlup4
exit
Then widen your terminal window and cat ss.txt.
The reason why
ss -nulp4 | cat -A
«works» is because the utility recognizes whether it writes to a tty or not:
if (isatty(STDOUT_FILENO)) {}

As you can see from the prior line in the source code, the default width is set to 80. Thus if your terminal is at, say, 130 columns and you do:
ss -nulp4 | cat
it recognizes the output is not going to a tty (but to a pipe) and the other fields are crammed into 80 columns, whilst the process field is written after these 80 columns. But as your terminal is wider than 80 columns and has room for the process entry, it is displayed on one line.
The same goes for, for example:
ss -nulp4 > ss.txt

As for how to «achieve my preferred formatting», one likely unsuitable way is to do something in the direction of (depending on the terminal):
stty cols 100
ss -nlup4
|
When using ss with the -p option, the user/pid/fd column jumps underneath the particular line. For instance, this is what I'm actually seeing:
# ss -nulp4
State Recv-Q Send-Q Local Address:Port Peer Address:Port
UNCONN 0 0 *:20000 *:*
users:(("perl",pid=9316,fd=6))
UNCONN 0 0 *:10000 *:*
users:(("perl",pid=9277,fd=6))
UNCONN 0 0 192.168.100.10:53 *:*
users:(("named",pid=95,fd=517),("named",pid=95,fd=516))
UNCONN 0 0 127.0.0.1:53 *:*
users:(("named",pid=95,fd=515),("named",pid=95,fd=514))Preferred output formatting:
# ss -nulp4
State Recv-Q Send-Q Local Address:Port Peer Address:Port
UNCONN 0 0 *:20000 *:* users:(("perl",pid=9316,fd=6))
UNCONN 0 0 *:10000 *:* users:(("perl",pid=9277,fd=6))
UNCONN 0 0 192.168.100.10:53 *:* users:(("named",pid=95,fd=517),("named",pid=95,fd=516))
UNCONN 0 0 127.0.0.1:53 *:* users:(("named",pid=95,fd=515),("named",pid=95,fd=514))To confirm that there are no line breaks I've tried this:
# ss -nulp4 | cat -A
State Recv-Q Send-Q Local Address:Port Peer Address:Port $
UNCONN 0 0 *:20000 *:* users:(("perl",pid=9316,fd=6))$
UNCONN 0 0 *:10000 *:* users:(("perl",pid=9277,fd=6))$
UNCONN 0 0 192.168.100.10:53 *:* users:(("named",pid=95,fd=517),("named",pid=95,fd=516))$
UNCONN 0 0 127.0.0.1:53 *:* users:(("named",pid=95,fd=515),("named",pid=95,fd=514))$

And indeed you can see that there were none, but now, strangely enough, the output format is the way I wanted it to be. Could someone explain what's going on here? How can I achieve my preferred formatting?
This is the only thing stopping me from migrating from netstat to ss.
|
ss - linux socket statistics utility output format
|
You can get some information by trying to connect, pass nothing and accept nothing before disconnecting.
socat -u OPEN:/dev/null UNIX-CONNECT:/run/php/php7.0-fpm.sock

There are at least four possible outcomes:

If the socket does not exist then the error will be No such file or directory and the exit status will be 1.
If you have no write access to the socket then the error will be Permission denied and the exit status will be 1. In this case you cannot tell if there's a process listening.
If you have write access to the socket and there is no listening process then the error will be Connection refused and the exit status will be 1.
If you have write access to the socket and there is a process listening then the connection will be established. The command will send nothing (like cat /dev/null), and it will not try to receive anything (because of -u), so it will exit almost immediately. The exit status will be 0.

The connection gets established, briefly, but still. The listening process may be configured to accept just one connection, serve it and exit; or to accept one connection at a time. In such a case the probing connection will saturate the limit; this is undesirable. However, in practice I expect the vast majority of listening processes to be able to serve multiple connections and to deal gracefully with clients who disconnect ruthlessly.

Notes:
You need to parse stderr to tell apart the cases that generate exit status 1; a sketch of this follows below.
The procedure tells nothing about what process is listening.
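A minimal shell sketch of that stderr parsing; the exact error strings are assumptions based on typical socat messages and may vary between versions and locales:

probe_socket() {
    # prints a classification of the unix socket at $1; returns 0 only if something is listening
    err=$(socat -u OPEN:/dev/null UNIX-CONNECT:"$1" 2>&1) && { echo listening; return 0; }
    case $err in
        (*"No such file or directory"*) echo "socket missing" ;;
        (*"Permission denied"*)         echo "no write access (listener unknown)" ;;
        (*"Connection refused"*)        echo "no listener" ;;
        (*)                             echo "other error: $err" ;;
    esac
    return 1
}

probe_socket /run/php/php7.0-fpm.sock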
|
I want to observe a socket status periodically, so I need to check the socket status by command.
Currently I list all listening sockets by ss and filter them by grep.
ss -l | grep -q /run/php/php7.0-fpm.sock

Is there a better way to check a socket's status?
|
How to check whether a socket is listening or not?
|
The meaning of some of these fields can be deduced from the source code of ss and the Linux kernel. The information you see is printed by the tcp_show_info() function in iproute2/misc/ss.c.
advmss:
In ss.c:
s.advmss = info->tcpi_advmss;
(...)
if (s->advmss)
out(" advmss:%d", s->advmss);In linux/include/linux/tcp.h:
u16 advmss; /* Advertised MSS */app_limited:
In ss.c:
s.app_limited = info->tcpi_delivery_rate_app_limited;
(..)
if (s->app_limited)
out(" app_limited");That one is not documented in linux/include/uapi/linux/tcp.h in
Linux:
struct tcp_info {
(...)
__u8 tcpi_delivery_rate_app_limited:1;but surprisingly we can find some information in the commit that
introduced it:
commit eb8329e0a04db0061f714f033b4454326ba147f4
Author: Yuchung Cheng <[emailprotected]>
Date: Mon Sep 19 23:39:16 2016 -0400

tcp: export data delivery rate

This commit export two new fields in struct tcp_info:

tcpi_delivery_rate: The most recent goodput, as measured by tcp_rate_gen(). If the socket is limited by the sending application (e.g., no data to send), it reports the highest measurement instead of the most recent. The unit is bytes per second (like other rate fields in tcp_info).

tcpi_delivery_rate_app_limited: A boolean indicating if the goodput was measured when the socket's throughput was limited by the sending application.

This delivery rate information can be useful for applications that want to know the current throughput the TCP connection is seeing, e.g. adaptive bitrate video streaming. It can also be very useful for debugging or troubleshooting.

A quick git blame in ss.c confirms that app_limited was added after tcpi_delivery_rate_app_limited was added to the kernel.
busy:
In ss.c:
s.busy_time = info->tcpi_busy_time;
(..)
if (s->busy_time) {
out(" busy:%llums", s->busy_time / 1000);And in include/uapi/linux/tcp.h in Linux it says:
struct tcp_info {
(...)
__u64 tcpi_busy_time; /* Time (usec) busy sending data */

retrans:
In ss.c:
s.retrans = info->tcpi_retrans;
s.retrans_total = info->tcpi_total_retrans;
(...)
if (s->retrans || s->retrans_total)
out(" retrans:%u/%u", s->retrans, s->retrans_total);tcpi_total_retrans is not described in linux/include/uapi/linux/tcp.h:
struct tcp_info {
(...)
__u32 tcpi_total_retrans;

but it's used in tcp_get_info():
void tcp_get_info(struct sock *sk, struct tcp_info *info)
{
const struct tcp_sock *tp = tcp_sk(sk); /* iff sk_type == SOCK_STREAM */
(...)
info->tcpi_total_retrans = tp->total_retrans;

And in linux/include/linux/tcp.h it says:
struct tcp_sock {
(...)
u32 total_retrans; /* Total retransmits for entire connection */

tcpi_retrans is also not described, but reading tcp_get_info() again we see:
info->tcpi_retrans = tp->retrans_out;

And in linux/include/linux/tcp.h:
struct tcp_sock {
(...)
u32 retrans_out; /* Retransmitted packets out */

dsack_dups:
In ss.c:
s.dsack_dups = info->tcpi_dsack_dups;
(...)
if (s->dsack_dups)
out(" dsack_dups:%u", s->dsack_dups);In include/uapi/linux/tcp.h in Linux:
struct tcp_info {
(...)
__u32 tcpi_dsack_dups; /* RFC4898 tcpEStatsStackDSACKDups */

And in https://www.ietf.org/rfc/rfc4898.txt:

The number of duplicate segments reported to the local host by D-SACK blocks.
|
I would like to know the meaning of some items in the ss command output. E.g.:
# sudo ss -iepn '( dport = :3443 )'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.43.39:45486 190.0.2.1:443 users:(("rocketchat-desk",pid=28697,fd=80)) timer:(keepalive,11sec,0) uid:1000 ino:210510085 sk:16f1 <->
ts sack cubic wscale:7,7 rto:573 rtt:126.827/104.434 ato:40 mss:1388 pmtu:1500 rcvmss:1388 advmss:1448 cwnd:10 bytes_sent:12904 bytes_retrans:385 bytes_acked:12520 bytes_received:13322 segs_out:433 segs_in:444 data_segs_out:215 data_segs_in:253 send 875.5Kbps lastsnd:18722 lastrcv:18723 lastack:18662 pacing_rate 1.8Mbps delivery_rate 298.1Kbps delivered:216 busy:16182ms retrans:0/10 dsack_dups:10 rcv_rtt:305 rcv_space:14480 rcv_ssthresh:6
CLOSE-WAIT 1 0 [2800:810:54a:7f0::1000]:37844 [2800:3f0:4002:803::200a]:443 users:(("plasma-browser-",pid=16020,fd=175)) uid:1000 ino:90761 sk:1d -->
ts sack cubic wscale:8,7 rto:222 rtt:21.504/5.045 ato:40 mss:1348 pmtu:1500 rcvmss:1208 advmss:1428 cwnd:10 bytes_sent:1470 bytes_acked:1471 bytes_received:11214 segs_out:20 segs_in:20 data_segs_out:8 data_segs_in:13 send 5014881bps lastsnd:96094169 lastrcv:96137280 lastack:96094142 pacing_rate 10029464bps delivery_rate 1363968bps delivered:9 app_limited busy:91ms rcv_space:14280 rcv_ssthresh:64108 minrtt:17.458

These are mainly items missing from the ss man page. I made some guesses, please correct me if I'm wrong:

rcvmss: is this the MSS received?
advmss: ?
app_limited: ?
busy: ?
retrans: ?
dsack_dups: Duplicated segments?
minrtt: Minimum RTT achieved in the socket?
|
Detailed output of ss command
|
There are two standard library calls; getservbyname(3) and getservbyport(3). These allow programs to convert a name (e.g. telnet) to a port (23), or from a port back to a name.
The typical implementation uses /etc/services as the authoritative source, but this can be changed by the services entry in nsswitch.conf.
The command getent can be used to do some lookups, including service entries.
e.g. to convert from port to name
% getent services 80
http 80/tcp www www-http

This tells me that port 80/tcp is mapped to "http" as the service name, but also has the aliases "www" and "www-http".
Similarly to go from name to port
% getent services https
https 443/tcp

We can see https is on port 443/tcp.
These service names are just the registered values for these ports. They don't correspond to processes (it could be apache, nginx, a python script...) and it doesn't guarantee the process uses these (e.g. apache could listen on port 12345 if configured to do so).
As far as I know, the standard netstat and ss commands will provide either names or ports, not both. But you could write a simple program that takes the output from ss and adds the port->name mapping to each line; a sketch follows.
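A minimal shell sketch of that idea, assuming glibc's getent accepts a port/protocol key such as 80/tcp (the column layout follows the ss -tln header):

# print each listening TCP socket together with its registered service name, if any
ss -tln | tail -n +2 | while read -r state recvq sendq local peer _; do
    port=${local##*:}
    name=$(getent services "$port/tcp" | awk '{print $1}')
    printf '%s\t(%s)\n' "$local" "${name:-unregistered}"
done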
|
Hi I feel like this is an obvious question but I haven't been able to get a good answer so far. Given the name of the service (which I know running on localhost) is there any networking command line tool like (netstat/ss) which will tell me what port that service is running at? Ideally something like:
$ some-program --service-name='mysql' localhost
'mysql' is running at localhost:3306

I feel like there are solutions out there but none of them address it adequately. For example, I have considered the following two ss commands:

ss -tuln with output:

Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:21119 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:37766 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:54399 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:*
udp UNCONN 0 0 [::]:51755 [::]:*
udp UNCONN 0 0 [::]:5353 [::]:*
udp UNCONN 0 0 *:1716 *:*
tcp LISTEN 0 100 127.0.0.1:25 0.0.0.0:*
tcp LISTEN 0 70 127.0.0.1:33060 0.0.0.0:*
tcp LISTEN 0 64 0.0.0.0:59687 0.0.0.0:*
tcp LISTEN 0 151 127.0.0.1:3306 0.0.0.0:*

and ss -tul with output:

Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:36308 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:36570 0.0.0.0:*
udp UNCONN 0 0 127.0.0.53%lo:domain 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:41124 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:21119 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:37766 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:54399 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:mdns 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:54522 0.0.0.0:*
udp UNCONN 0 0 [::]:51755 [::]:*
udp UNCONN 0 0 [::]:mdns [::]:*
udp UNCONN 0 0 *:1716 *:*
tcp LISTEN 0 100 127.0.0.1:smtp 0.0.0.0:*
tcp LISTEN 0 70 127.0.0.1:33060 0.0.0.0:*
tcp LISTEN 0 64 0.0.0.0:59687 0.0.0.0:*
tcp LISTEN 0 151 127.0.0.1:mysql 0.0.0.0:*

The first command's output lists the port numbers that are listening, while the second command's output resolves them to the services running at the ports. But I can't somehow "combine" the two outputs so that I have the port number mapped to the service running, side by side. For example the rows:
tcp LISTEN 0 151 127.0.0.1:mysql 0.0.0.0:*

and

tcp LISTEN 0 151 127.0.0.1:3306 0.0.0.0:*

would be "combined" to give "127.0.0.1:3306 (mysql)" or something to that effect. I only know the above mapping because I googled what the default MySQL port is.
Is there a way to do this? It must be said that I am only learning to use these networking tools so any guidance is much appreciated.
|
Given a service name, get its port number?
|
Question #1

Q1: From the ss man page I can't find out what it means, e.g. * 8567674 without a file path.

The docs explain the Address:Port column like so:

excerpt

The format and semantics of ADDRESS_PATTERN depends on address family.

inet - ADDRESS_PATTERN consists of IP prefix, optionally followed by colon and port. If prefix or port part is absent or replaced with *, this means wildcard match.

inet6 - The same as inet, only prefix refers to an IPv6 address. Unlike inet colon becomes ambiguous, so that ss allows to use scheme, like used in URLs, where address is surrounded with [ ... ].

unix - ADDRESS_PATTERN is shell-style wildcard.

packet - format looks like inet, only interface index stays instead of port and link layer protocol id instead of address.

netlink - format looks like inet, only socket pid stays instead of port and netlink channel instead of address.

PORT is syntactically ADDRESS_PATTERN with wildcard address part. Certainly, it is undefined for UNIX sockets.

The last sentence is your answer.
Question #2

Q2: Why is there no file path to the unix socket in some cases?

See this SO Q&A titled: How to use unix domain socket without creating a socket file.

excerpt

You can create a unix domain socket with an "abstract socket address". Simply make the first character of the sun_path string in the sockaddr_un you pass to bind be '\0'. After this initial NUL, write a string to the remainder of sun_path and pad it out to UNIX_PATH_MAX with NULs (or anything else).
Sockets created this way will not have any filesystem entry, ....

Question #3

Q3: How can I sniff a unix DGRAM socket through socat without having a file path?

Again more googling once you know what things are called: socat docs.
excerpt

ABSTRACT-LISTEN:
ABSTRACT-SENDTO:
ABSTRACT-RECVFROM:
ABSTRACT-RECV:
ABSTRACT-CLIENT:

The ABSTRACT addresses are almost identical to the related UNIX addresses except that they do not address file system based sockets but an alternate UNIX domain address space. To achieve this the socket address strings are prefixed with "\0" internally. This feature is available (only?) on Linux. Option groups are the same as with the related UNIX addresses, except that the ABSTRACT addresses are not member of the NAMED group.
|
From that article, I realized that:

a UNIX domain socket is bound to a file path.

So, I need to sniff a DGRAM Unix socket through socat as mentioned here. But when I try to retrieve the path for this purpose, I find that the target application uses a socket without a file path.
The ss -apex command shows results both with and without file paths, e.g.:
u_dgr UNCONN 0 0 /var/lib/samba/private/msg.sock/32222 1345285 * 0 users:(("nmbd",pid=32222,fd=7))
u_dgr UNCONN 0 0 * 8567674 * 0 users:(("gnome-shell",pid=16368,fd=23))

From the ss man page I can't find out what it means, e.g. * 8567674 without a file path.
So, two questions:Why there is no file path to unix socket for some cases?
How can I sniff unix DGRAM socket through socat without having file path?
|
How can I sniff unix dgram socket without having file path?
|
You have your redirection at the wrong point in the pipeline. Presumably the error comes from the ss command, so that is where you should hide the error output. Or you can group the output and redirect from the command as a whole.
Here are some possible solutions to suppress any errors:
Redirect the standard error of the command producing the messages:
ss -tulpnoea 2> /dev/null | grep -i water | grep -v 127

Run the commands in a subshell and redirect the standard error of the subshell:

(ss -tulpnoea | grep -i water | grep -v 127) 2> /dev/null

Group the commands and redirect the standard error of the group:

{ ss -tulpnoea | grep -i water | grep -v 127 ; } 2> /dev/null

Or, if you specifically want to suppress that error and not others (subject to shell support):

ss -tulpnoea 2> >(grep -Fxv 'Failed to find cgroup2 mount' >&2) | grep -i water | grep -v 127

(ss -tulpnoea | grep -i water | grep -v 127) 2> >(grep -Fxv 'Failed to find cgroup2 mount' >&2)

{ ss -tulpnoea | grep -i water | grep -v 127 ; } 2> >(grep -Fxv 'Failed to find cgroup2 mount' >&2)
|
I run this command
ss -tulpnoea|grep -i water|grep -v 127
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
.....

I tried with 2> /dev/null...
ss -tulpnoea|grep -i water|grep -v 127 2> /dev/null
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
Failed to find cgroup2 mount
.....

How can I avoid the annoying message about the cgroup2 mount?
Distro is Slackware 15.0
|
How can I remove this annoying message: "Failed to find cgroup2 mount"?
|
224.0.0.251 is Multicast DNS, and it uses port 5353 (as you noticed). Many operating systems use it to discover new devices/printers/routers with zero or nearly zero configuration. E.g. if you want to send a page to be printed to your printer with, say, the address my-printer.local, your operating system uses this port to look for the device named my-printer.
|
I read some tutorials; they say that netstat is deprecated. I tried the ss command. This is the output:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
icmp6 UNCONN 0 0 *:58 *:*
udp UNCONN 0 0 224.0.0.251:5353 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:5353 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:5353 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:5353 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:40886 0.0.0.0:*
udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:631 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:17500 0.0.0.0:*
udp UNCONN 0 0 [::]:5353 [::]:*
udp UNCONN 0 0 [::]:55606 [::]:*
udp UNCONN 0 0 *:41016 *:*
udp UNCONN 0 0 *:1716 *:*

ss filtered on port 5353 shows:
ss -aut '( dport = :5353 or sport = :5353 )'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 224.0.0.251:mdns 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:mdns 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:mdns 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:mdns 0.0.0.0:*
udp UNCONN 0 0 224.0.0.251:mdns 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:mdns 0.0.0.0:*
udp UNCONN 0 0 [::]:mdns [::]:*

Can someone explain how to interpret it?
|
What does address 224.0.0.251:5353 represent?
|
The ss command (any version) will display an uid only if it's not zero. Here's the source for the Debian 8 jessie version:
if (show_details) {
if (s.uid)
printf(" uid:%u", (unsigned)s.uid);
printf(" ino:%u", s.ino);
printf(" sk:%llx", s.sk);
if (opt[0])
printf(" opt:\"%s\"", opt);
}The test if (s.uid) makes it not display the information each time the socket owner's uid value is 0.
Additional tests (but using kernel 5.6 and a matching ss from iproute2 5.6.0) show that a normal user can indeed see the owner's uid of another socket, even one not owned by that user. So the absence of the uid: entry can mean only two things: the socket is owned by the root user, or owned by the kernel (which is different, but can't be distinguished here).
In your case, as the sshd daemon is running as root, all TCP sockets related to it will be owned by root, even if sshd grants a non-root login. The user login granted by sshd is not connected to any TCP socket: it's connected to the sshd daemon (or a separate instance of it) through master/slave PTYs or else with pipes. So ss can't be used to match a TCP connection to a user locally logged in through incoming ssh.
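To see the contrast, open a socket as an unprivileged user and the uid: field appears (an illustrative sketch; nc flags and the exact output vary by nc flavor and iproute2 version, and the ino/sk values here are made up):

$ nc -l 12345 &                    # run as an unprivileged user, e.g. uid 1000
$ ss -ntle 'sport = :12345'
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       1       0.0.0.0:12345        0.0.0.0:*           uid:1000 ino:29109 sk:5e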
|
I have a debian VM, here the info:
las@Client:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"When I type ss -nte I expect to see the uid of the process that uses the socket because of the -e option, but it doesn't appear and I can't figure it out why. Here's the output:
las@Client:~$ ss -nte
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 176 192.168.56.201:22 192.168.56.1:50468 timer:(on,204ms,0) ino:12286 sk:ffff88000e7f3140 <->

How can I solve this?
|
Why ss -e doesn't show uid?
|
A raw socket is a network socket (AF_INET or AF_INET6 usually). It can be used to create raw IP packets, which is useful for troubleshooting or to implement your own TCP implementation without using SOCK_STREAM:

Raw sockets allow new IPv4 protocols to be implemented in user space. A raw socket receives or sends the raw datagram not including link level headers. [raw(7)]

Tools like nmap use raw sockets in order to stop the TCP handshake after the initial SYN, SYN-ACK, as the TCP connection is never completely established. As a network socket, it uses sockaddr_in for addresses. However, the creation of raw sockets is usually restricted: only privileged processes can create them.

A unix socket, on the other hand, is not a network socket (AF_UNIX). It's a local socket:

The AF_UNIX (also known as AF_LOCAL) socket family is used to communicate between processes on the same machine efficiently. [unix(7)]

It uses another address structure (sockaddr_un). It's a common way to implement two-way communication on a single system for inter-process communication without going through the network layer.

And packet sockets are raw packets at the driver level:

Packet sockets are used to receive or send raw packets at the device driver (OSI Layer 2) level. They allow the user to implement protocol modules in user space on top of the physical layer. [packet(7)]

The other sockets act on the network layer (OSI Layer 3) or higher. With a packet socket, you're talking directly to your network interface's driver.
For more information see socket(2), ip(7), packet(7), raw(7), socket(7) and unix(7).
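A quick way to see one of each with ss (a sketch under some assumptions: the first two typically need root, and modern ping may use an ICMP datagram socket instead of a raw one, depending on net.ipv4.ping_group_range):

sudo tcpdump -i lo -c 100 >/dev/null 2>&1 &    # tcpdump opens a packet socket
sudo ss -0 -p                                  # packet sockets (p_raw / p_dgr)

ping -c 100 127.0.0.1 >/dev/null &             # often a raw ICMP socket
sudo ss -w -p                                  # raw sockets

socat UNIX-LISTEN:/tmp/demo.sock - &           # a unix domain socket
ss -x -p | grep demo.sock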
|
The ss command (from the iproute2 set of tools which comes as a newer alternative to netstat) has in its --help the following options
-0, --packet display PACKET sockets
-t, --tcp display only TCP sockets
-S, --sctp display only SCTP sockets
-u, --udp display only UDP sockets
-d, --dccp display only DCCP sockets
-w, --raw display only RAW sockets
-x, --unix     display only Unix domain sockets

What exactly is the distinction made here between RAW and UNIX domain sockets?
And what actually are the PACKET sockets?
|
ss command: difference between raw and unix sockets
|
Each TCP communication is identified, at each end, by these four values:
origin IP - origin port - destination IP - destination port

In your case, you have a connection from the host to itself, so you get to see the two endpoints, with their origin and destination ports swapped in the second one.
You can find more information in the Wikipedia article: Transmission Control Protocol. Specifically related to your question: the section on protocol operation, that lists the states (connected, listen, ...) and the section on TCP ports.
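You can reproduce this yourself: a loopback connection shows up twice, once from each endpoint's point of view, with the ports swapped (a sketch using nc; flags vary by nc flavor):

nc -l 3031 &               # the "server" endpoint
nc 127.0.0.1 3031 &        # the "client" endpoint
ss -tn '( sport = :3031 or dport = :3031 )'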
|
I am struggling with Node.js app deployment because of some CORS and port issues.
ss -t output:

ESTAB 0 0 [::1]:33366 [::1]:3031
ESTAB 0 0 [::1]:3031  [::1]:33366

What are Local Address:Port and Peer Address:Port?
How come port 3031 appears in both roles?

sudo ss -ltp

LISTEN 0 128 *:3031 *:* users:(("docker-proxy",pid=16141,fd=4))

Where can I read more about this?
|
How to decipher ss -t command output?
|
Hacky attempt to bring in perl to help. See how it replaces the uid:1001 with user:bob.
# ss -ntel
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 0.0.0.0:12345 0.0.0.0:* uid:1001 ino:29109 sk:5e <->
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* ino:18771 sk:2 <->
LISTEN 0 128 0.0.0.0:111 0.0.0.0:* ino:16606 sk:3 <->
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* ino:20128 sk:4 <->
LISTEN 0 10 [::]:12345 [::]:* uid:1001 ino:29108 sk:61 v6only:1 <->
LISTEN 0 128 [::]:111 [::]:* ino:16609 sk:5 v6only:1 <->
LISTEN 0 128 *:80 *:* ino:18314 sk:6 v6only:0 <->
LISTEN 0 128 [::]:22 [::]:* ino:20130 sk:7 v6only:1 <->
# ss -ntel|perl -pne 'if(/uid:(\d+)/){@a=getpwuid($1);s/uid:(\d+)/user:$a[0]/}'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 0.0.0.0:12345 0.0.0.0:* user:bob ino:29109 sk:5e <->
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* ino:18771 sk:2 <->
LISTEN 0 128 0.0.0.0:111 0.0.0.0:* ino:16606 sk:3 <->
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* ino:20128 sk:4 <->
LISTEN 0 10 [::]:12345 [::]:* user:bob ino:29108 sk:61 v6only:1 <->
LISTEN 0 128 [::]:111 [::]:* ino:16609 sk:5 v6only:1 <->
LISTEN 0 128 *:80 *:* ino:18314 sk:6 v6only:0 <->
LISTEN 0 128 [::]:22 [::]:* ino:20130 sk:7 v6only:1 <->
#Note : I've checked the source and there doesn't appear to be anything native within ss to achieve this.
|
I want to know which user launched the process that is using a TCP socket.
I tried with ss -nte, but it only shows the uid, not the user's name. Is there a way to make ss show the user's name?
Here the output of ss -nte
lucio@debian:~$ ss -nte
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.56.1:45414 192.168.56.201:22 timer:(keepalive,46min,0) uid:1000 ino:711653 sk:4 <->
ESTAB 0 0 192.168.56.1:35544 192.168.56.203:22 timer:(keepalive,46min,0) uid:1000 ino:713505 sk:12 <->
ESTAB 0 0 192.168.178.70:55342 151.101.193.69:443 uid:1000 ino:758973 sk:3a <->
ESTAB 0 0 192.168.178.70:41212 198.252.206.25:443 timer:(keepalive,7min4sec,0) uid:1000 ino:756177 sk:3b <->
ESTAB 0 0 192.168.56.1:45542 192.168.56.202:22 timer:(keepalive,46min,0) uid:1000 ino:715991 sk:1e <->
ESTAB 0 0 192.168.178.70:41196 198.252.206.25:443 timer:(keepalive,6min19sec,0) uid:1000 ino:756063 sk:3c <->
ESTAB 0 0 192.168.178.70:43372 216.58.205.78:443 uid:1000 ino:759631 sk:3d <->
|
How to make ss show the user who is using the socket?
|
The raw socket is opened only to read information like statistics from the netatop kernel module, using getsockopt() (eww). There is no code to read or write raw packets with this socket.
https://github.com/Atoptool/atop/blob/v2.3.0/netatopif.c
|
I install atop on Debian 9. It runs as a monitoring daemon.
Why is it listening on a raw socket? Raw sockets are used to generate arbitrary IPv4 packets, or to read all packets for a given IP sub-protocol! But I don't think my atop and its logs show any information from reading packets. I don't even use netatop - and that would require a kernel module, which is not included in Debian. And I would be extremely surprised if any of the atop features involve sending raw IP packets.
$ sudo netstat -l --raw -ep
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
raw        0      0 0.0.0.0:255             0.0.0.0:*               7          root       2427667    7353/atop

$ sudo ss -l --raw -p
State Recv-Q Send-Q Local Address:Port Peer Address:Port
UNCONN 0 0 *:ipproto-255 *:* users:(("atop",pid=7353,fd=4))
|
Why does atop open a raw socket?
|
For a normal user that's normal behavior. To be able to associate the socket to a process, at some point, /proc/<pid>/fd/ must be read by ss. Only the same user or a privileged process (including running as root) has access to this.
Here's an strace excerpt about what is happening outside of Docker.
# runuser -u test -- sh -c 'echo $$; exec socat tcp4-listen:5555,reuseaddr -'
445406

and beside:
user@host$ strace ss -tlnp sport == 5555 2>&1 |egrep -w '445406|^LISTEN'
openat(AT_FDCWD, "/proc/445406/attr/current", O_RDONLY|O_CLOEXEC) = 4
openat(AT_FDCWD, "/proc/445406/fd/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = -1 EACCES (Permission denied)
LISTEN 0  5  0.0.0.0:5555  0.0.0.0:*

The unconstrained root user wouldn't get EACCES, would have access to the needed information, and would be able to display the PID in the end.
But Docker doesn't run as a normal root user: some capabilities (a capability is a piece of root "powers"; root by default has all of them) were removed. And because of this, root in the container gets the same error as a normal user and doesn't have access to the information needed to associate the socket with a process.
root@1589d8b38814:/# apt install libcap2-bin
[...]
root@1589d8b38814:/# cat /proc/$$/status|grep ^Cap
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
root@1589d8b38814:/# capsh --decode=00000000a80425fb
0x00000000a80425fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,
cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,
cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap

Whereas for the actual root user, or when running the Docker container in privileged mode (--privileged):
root@cce7fc1de1c3:/# cat /proc/$$/status |grep ^Cap
CapInh: 0000003fffffffff
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
root@cce7fc1de1c3:/# capsh --decode=0000003fffffffff
0x0000003fffffffff=cap_chown,cap_dac_override,cap_dac_read_search,
cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,
cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,
cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,
cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,
cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,
cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,
cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,
cap_block_suspend,cap_audit_read

Much more.
Here dropping cap_sys_ptrace (which affects access in /proc) is enough to derail it. Note that an unprivileged Docker container doesn't give cap_sys_ptrace to its root user.
With socat running as nobody with pid 392 and a privileged docker root user beside:
root@df29c4a57b3f:/# capsh --inh= --caps= -- -c 'ss -tlnp sport == 5555'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 5 0.0.0.0:5555 0.0.0.0:* users:(("socat",pid=392,fd=5))
root@df29c4a57b3f:/# capsh --drop=cap_sys_ptrace --inh= --caps= -- -c 'ss -tlnp sport == 5555'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 5 0.0.0.0:5555 0.0.0.0:*
|
Is it normal behavior, or a bug, that when I run ss -nltp, I only see the process/pid information if the user I am running ss -nltp as is the same user as the listening process?
$ docker run -it --rm tianon/network-toolbox
root@bc058746626a:/# apt update
...
root@bc058746626a:/# apt install gosu
...
root@bc058746626a:/# nc -l 4444
[... check ss in another terminal]
^C
root@bc058746626a:/# gosu nobody nc -l 4444
....

$ docker exec -it admiring_keller bash
$ # both running as root
root@bc058746626a:/# ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 1 0.0.0.0:4444 0.0.0.0:* users:(("nc",pid=325,fd=3))
$ # process running as nobody, ss as root
root@bc058746626a:/# ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 1 0.0.0.0:4444 0.0.0.0:*
$ # process still as nobody , ss as nobody
root@bc058746626a:/# gosu nobody ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 1 0.0.0.0:4444 0.0.0.0:* users:(("nc",pid=343,fd=3))
root@bc058746626a:/# exit
$ # process still as nobody , ss as nobody
$ docker exec -it --user nobody admiring_keller bash
nobody@bc058746626a:/$ ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 1 0.0.0.0:4444 0.0.0.0:* users:(("nc",pid=343,fd=3))
nobody@bc058746626a:/$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4764 4124 pts/0 SNs 16:42 0:00 bash --login -i
nobody 343 0.0 0.0 3204 864 pts/0 SN+ 16:45 0:00 nc -l 4444
nobody 369 0.0 0.0 3872 3152 pts/1 SNs 16:50 0:00 bash
nobody     377  0.0  0.0   7644  2708 pts/1   RN+  16:54   0:00 ps aux

Related: https://stackoverflow.com/questions/68085747/why-would-a-process-not-be-associated-with-a-port-when-using-gosu
|
iproute2 ss - not showing process/pid information if user is not the same user as listening process?
|
Use the lsof command. Usage:
sudo lsof -ni tcp | grep <port>
And the 2nd column is PID.
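For your specific case, you can also filter directly on the remote endpoint, either with lsof or with ss's own filter language (a sketch using the addresses from your question):

sudo lsof -n -iTCP@192.168.122.50:56666
# or, with ss:
sudo ss -tnp dst 192.168.122.50:56666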
|
My local server IP is 192.168.122.100, and the remote server IP is 192.168.122.50. I need to kill all processes which connect to 192.168.122.50:56666. By executing the ss command, I found three TCP connections have been established, but I don't know which processes are using these sockets. How can I find out the PIDs of these sockets?
Thank you in advance!
|
How can I confirm which process is connect to remote port?
|
That would be for sockets that are bound to a particular interface in addition to that IPv4 wildcard address using the SO_BINDTODEVICE socket option. From socket(7):SO_BINDTODEVICE
Bind this socket to a particular device like “eth0”, as
specified in the passed interface name. If the name is an
empty string or the option length is zero, the socket
device binding is removed. The passed option is a
variable-length null-terminated interface name string with
the maximum size of IFNAMSIZ. If a socket is bound to an
interface, only packets received from that particular
interface are processed by the socket. Note that this
works only for some socket types, particularly AF_INET
sockets. It is not supported for packet sockets (use
normal bind(2) there).
Before Linux 3.8, this socket option could be set, but
could not retrieved with getsockopt(2). Since Linux 3.8,
it is readable. The optlen argument should contain the
buffer size available to receive the device name and is
recommended to be IFNAMSIZ bytes. The real device name
length is reported back in the optlen argument.

(emphasis mine)
Normally, instead, you'd bind your socket to the IP address(es) of the interface instead. SO_BINDTODEVICE is slightly different though. Except if blocked by rp_filter or firewall, a system will accept a connection to a given address as long as the destination address of the incoming packet matches the address the socket is bound to (even if the packet doesn't come from that interface¹). With SO_BINDTODEVICE, as explained above, only packets coming from that very interface are considered.
As an example:
$ sudo socat udp-listen:1234,so-bindtodevice=guest-bridge - &
$ ss -Hau 'sport = 1234'
UNCONN 0 0 0.0.0.0%guest-bridge:1234 0.0.0.0:*

In your case, it's probably a DHCP server UDP socket bound by a dnsmasq instance started by libvirt. And you can see that software does set that option when asked to bind to interface(s), or exclude some interface(s).
For a DHCP server, you don't want to bind to the address of the interface, as a DHCP server is meant to accept broadcast "discover" packets with 255.255.255.255 as destination. Here using the wildcard address and SO_BINDTODEVICE means you can have that dnsmasq serving DHCP requests from that virbr0 bridge interface and a separate dnsmasq (or other software) serving them on the same wildcard address on a different interface.

¹ which is a perfectly normal and common thing. Think for instance of a router which will typically accept incoming connections from any of its interfaces to any of its inbound addresses, or the local loopback connections you can normally do to the local addresses of any of your network interfaces.
|
I get the following line from ss -lun:
udp UNCONN 0 0 0.0.0.0%virbr0:67 0.0.0.0:* users:(("dnsmasq",pid=950,fd=3))

I wonder what 0.0.0.0%virbr0 means here. There is no trace of it in the man page, and it's hard to find in search engines.
|
What is 0.0.0.0%virbr0?
|
expire_time is the time left until the timer expires. The TCP stack in the Linux kernel supports a number of timers, and they all have an expiration time.
retrans is the number of retransmissions which have occurred. TCP implementations retransmit packets they believe have been lost; they count these retransmissions so that they know when to give up. You shouldn’t see this too often; one way to force it is to try to open a connection on a port which isn’t rejected immediately, e.g. (based on an example in one of your previous questions):
curl http://www.google.com:9000

If you run that, you’ll see curl sitting there for a while, and ss -o will show a SYN-SENT entry with an increasing retransmission count. You’ll also see the back-off applied in such circumstances: the initial expiration time will increase every time the packet is retransmitted.
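While that curl is stuck, something like this in another terminal shows the timer counting down and the retransmission count climbing (a sketch, matching the port above):

watch -n1 "ss -tno state syn-sent '( dport = :9000 )'"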
|
ss -o shows TCP timer in the following format:
timer:(<timer_name>,<expire_time>,<retrans>)

What do <expire_time> and <retrans> mean?
I found <expire_time> counts down to zero and then restart counting from some number again. Its starting value differs from TCP socket to TCP socket.
<retrans> seems always zero for all the TCP sockets.
|
What do `<expire_time>` and `<retrans>` mean in the output of `ss -o`?
|
ss has a built-in filter language; alas, it's almost undocumented in the man page. The last remnants of this documentation are in iproute2 version v4.13.0 and were removed with this commit:

doc: remove outdated ss documentation
The current version is
well documented on man page. The latex documentation is very old and
was never updated.

The file ss.sgml might appear in other formats once compiled/distributed.
Here's an example:

<item>A. Address/port match, where address is checked against mask
and port is either wildcard or exact. It is one of:
<tscreen><verb>
dst prefix:port
src prefix:port
src unix:STRING
src link:protocol:ifindex
src nl:channel:pid
</verb></tscreen> Both prefix and port may be absent or replaced with <tt/*/,
which means wildcard. UNIX socket use more powerful scheme
matching to socket names by shell wildcards. Also, prefixes
unix: and link: may be omitted, if address family is evident
from context (with option <tt/-x/ or with <tt/-f unix/
or with <tt/unix/ keyword) <p>So if you want to exclude any loopback addresses, you can do:
ss -nlptu '! src 127.0.0.0/8 and ! src [::1]'

Quotes are actually required only because of the IPv6 address format, but once using quotes, it's better to quote everything.
|
I often take a look at listening sockets with ss -nlptu; this list often includes a few to a dozen sockets bound to either an IPv4 or IPv6 loopback address. As we know, there are multiple valid IPv4 loopback addresses to bind to, so I would rather not use something like grep to exclude the most common ones if I can avoid it.
Is there a way to have ss exclude local/loopback listening ports from its listing?
|
iproute2 ss - exclude sockets bound to loopback addresses
|
255 is the value of IPPROTO_RAW. It means this socket allows sending all types of IPv4 packets. (It cannot receive packets). The program has to provide a full IPv4 header.
For comparison, the raw socket with *:icmp allows sending and receiving IPv4 packets which use the ICMP protocol.
These details are specific to Linux. The exact behaviour of raw sockets varies between different Unix variants and versions.
http://man7.org/linux/man-pages/man7/raw.7.html
The IPv4 protocol field is 8 bits wide, so 255 is its highest possible value.

https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers

That said, I found this particular IPPROTO_RAW socket was not being used to send packets:
Why does atop open a raw socket?
|
What does ss mean by *:ipproto-255, in the local address/port column?
$ sudo ss -ap | grep -vE "^(nl |u_)"
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
p_raw UNCONN 0 0 *:eth0 * users:(("lldpd",pid=742,fd=11))
raw UNCONN 0 0 *:icmp *:* users:(("ping",pid=9077,fd=3))
raw UNCONN 0 0 *:ipproto-255 *:* users:(("atop",pid=7353,fd=4))
raw UNCONN 0 0 :::ipv6-icmp :::* users:(("ping",pid=9077,fd=4))
udp UNCONN 0 0 *:syslog *:* users:(("rsyslogd",pid=495,fd=5))
...If you want to know what it looks like in netstat, it shows up as 0.0.0.0:255.
$ sudo netstat -l --raw -ep
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
raw 0 0 0.0.0.0:255 0.0.0.0:* 7 root 2427667 7353/atop
|
ss shows a raw socket. What does it mean that it is listening on "*:ipproto-255"?
|
You have to switch to the correct network namespace first, because socket state is per namespace (namely per network namespace). For example by using nsenter. sudo has to be moved first, because nsenter also requires privileges. In one line (and using ss's own filtering features) this becomes:
sudo nsenter -t $(docker inspect --format='{{.State.Pid}}' python-app) --net -- \
ss -a -np sport == 9001
|
I run Python code inside a docker container performing the following calls:

import socket as s, subprocess as sp
s1 = s.socket(s.AF_INET, s.SOCK_STREAM)
s1.setsockopt(s.SOL_SOCKET, s.SO_REUSEADDR, 1)
s1.bind(("0.0.0.0", 9001))
s1.listen(1)
c, a = s1.accept()

I'm trying to get info using ss and see the open sockets, but can't get them:

> docker run --rm --publish 9001:9001 -it --name python-app sample-python-app reverseshell.py

> docker inspect --format='{{.State.Pid}}' python-app
1160502

> sudo ss -a -np | grep 9001
tcp LISTEN 0 4096 0.0.0.0:9001 0.0.0.0:* users:(("docker-proxy",pid=1160459,fd=4))
tcp LISTEN 0 4096 [::]:9001 [::]:* users:(("docker-proxy",pid=1160467,fd=4))

however lsof gives me more info:
> sudo lsof -p 1160502
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 1160502 dmitry cwd DIR 0,1364 108 19497 /workspace
python 1160502 dmitry rtd DIR 0,1364 188 256 /
python 1160502 dmitry txt REG 0,1364 6120 6529 /layers/paketo-buildpacks_cpython/cpython/bin/python3.10
python 1160502 dmitry mem REG 0,30 6529 /layers/paketo-buildpacks_cpython/cpython/bin/python3.10 (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 9492 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/_posixsubprocess.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 9518 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/fcntl.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 9514 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/array.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 9527 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/select.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 9520 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/math.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 9499 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/_socket.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 634 /lib/x86_64-linux-gnu/libm-2.27.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 692 /lib/x86_64-linux-gnu/libutil-2.27.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 619 /lib/x86_64-linux-gnu/libdl-2.27.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 670 /lib/x86_64-linux-gnu/libpthread-2.27.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 609 /lib/x86_64-linux-gnu/libc-2.27.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 6705 /layers/paketo-buildpacks_cpython/cpython/lib/libpython3.10.so.1.0 (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 591 /lib/x86_64-linux-gnu/ld-2.27.so (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 3735 /usr/lib/locale/locale-archive (path dev=0,32, inode=1544914)
python 1160502 dmitry mem REG 0,30 1365 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache (stat: No such file or directory)
python 1160502 dmitry mem REG 0,30 1091 /usr/lib/locale/C.UTF-8/LC_CTYPE (stat: No such file or directory)
python 1160502 dmitry 0u CHR 136,0 0t0 3 /dev/pts/0
python 1160502 dmitry 1u CHR 136,0 0t0 3 /dev/pts/0
python 1160502 dmitry 2u CHR 136,0 0t0 3 /dev/pts/0
python  1160502 dmitry    3u  sock  0,8  0t0  75159952 protocol: TCP

At least I have this line showing that fd=3 opens socket [75159952], but without the actual port number:

python  1160502 dmitry    3u  sock  0,8  0t0  75159952 protocol: TCP

So how can I use ss to find information about the socket open on port 9001 that is not docker-proxy?
|
ss doesn't display socket info related to the process opening SOL_SOCKET
|
I think it might be a bug in your application; maybe you can keep it under control by restarting the application before running out of the maximum number of open files/sockets, or by increasing any artificial limits set by ulimit.
Try looking for a bug report, for example:

https://issues.apache.org/jira/browse/YARN-9336
https://issues.apache.org/jira/browse/YARN-4754
https://issues.apache.org/jira/browse/YARN-10207

Or report one yourself (if this is your application). For a discussion of this type of issue, see: https://stackoverflow.com/questions/15912370/how-do-i-remove-a-close-wait-socket-connection
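In the meantime, you can at least watch how fast the leak grows (a sketch, using port 8088 from your question; -H suppresses the header so wc -l counts sockets only):

watch "ss -Htn state close-wait '( sport = :8088 )' | wc -l"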
|
When we run the following command on our RHEL machine, we get more than 600 CLOSE_WAIT lines:
lsof -i tcp:8088 | grep CLOSE_WAIT
java 31100 yarn 385u IPv4 208022048 0t0 TCP master02.hgti.com:radan-http->master02.hgti.com:56504 (CLOSE_WAIT)
java 31100 yarn 407u IPv4 208210692 0t0 TCP master02.hgti.com:radan-http->master02.hgti.com:58918 (CLOSE_WAIT)
java 31100 yarn 408u IPv4 206182798 0t0 TCP master02.hgti.com:radan-http->master02.hgti.com:36538 (CLOSE_WAIT)
java 31100 yarn 410u IPv4 208447279 0t0 TCP master02.hgti.com:radan-http->master02.hgti.com:60972 (CLOSE_WAIT)
java 31100 yarn 412u IPv4 208287324 0t0 TCP master02.hgti.com:radan-http->master02.hgti.com:59820 (CLOSE_WAIT)
java 31100 yarn 413u IPv4 206107964 0t0 TCP master02.hgti.com:radan-http->master02.hgti.com:35704 (CLOSE_WAIT)
.
.
.
.
.

As I understand it, CLOSE_WAIT occurs when the remote end has closed the connection but the local application has not yet closed its own socket.
So is there any chance to tune some settings on the Linux side in order to minimize the CLOSE_WAIT sessions?
Or should the solution come only from the application side?
reference - https://www.programmersought.com/article/74221875444/
|
rhel + any best practice to minimize the CLOSE_WAIT sessions from linux side
|
From TCP Variables:

The tcp_reordering variable tells the kernel how much a TCP packet may be reordered in a stream without assuming that the packet was lost somewhere on the way.

tcp_reordering may be changed via the net.ipv4.tcp_reordering sysctl variable. By default this value is 3.
If you change the net.ipv4.tcp_reordering variable, then ss --info will print the value for every connection whose value differs from 3. Fragment of the iproute2 source:
. . .
if (s->reordering != 3)
printf(" reordering:%d", s->reordering);
. . .
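You can check and change the default with sysctl (illustrative; after raising it, new connections will show a reordering: field, since their per-connection value no longer equals 3):

$ sysctl net.ipv4.tcp_reordering
net.ipv4.tcp_reordering = 3
$ sudo sysctl -w net.ipv4.tcp_reordering=4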
|
ss --info returns information about tcp connections. It produces a line similar to the following (some fields removed for formatting):
tcp ESTAB 0 0 192.168.1.177:60236 54.70.141.88:https
cubic wscale:7,7 rto:204 rtt:0.918/0.419 reordering:59
What exactly does the reordering number mean in this example?
|
What does the reordering field of ss --info mean?
|
It turned out it was, after all, a problem on my machine. I had another instance of WSL running side by side that I had forgotten about, and that one had a Postgres server running and listening on that port. I wrongly assumed they were running in isolation from each other, while in fact they are not. Uninstalling Postgres from that WSL instance fixed my issue.
|
When I use ss (socket statistics) to show the usages of port 5432 I get:
$ sudo ss -ln | grep -E 'State|5432'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
u_str LISTEN 0 244 /var/run/postgresql/.s.PGSQL.5432 54481 * 0
tcp   LISTEN 0      244    127.0.0.1:5432    0.0.0.0:*

When using lsof (list of open files) instead I get no result:
$ sudo lsof -i tcp:5432

Why is that?
Related to:

Why do nmap, ss (netscan?) and lsof give different results?
Difference between lsof -i : & socket statistics ss -lp | grep ?

Edit with answers from comments:

sudo ss -lnp does not show the pid of the process(es) that have that listening socket
the 127.0.0.1:5432 0.0.0.0:* on the last line was a copy-paste error, sorry about that, I have removed it
I am running those commands in a WSL terminal; Postgres is not running anywhere

Edit with new findings:
I have found out this is happening only when Docker Desktop is running (even though there is no container running): ss doesn't output anything once I quit Docker Desktop. It looks like this might be an issue somehow related with Docker Desktop: I have reported it in this GitHub issue.
|
Why ss show a port is in use but lsof doesn't?
|
After a lot of searching around, I finally came to a conclusion.
My understanding of how to calculate TCP memory usage is correct.
For every socket add socket_memory = rmem_alloc + wmem_alloc + fwd_alloc + wmem_queued + opt_mem + back_log (the r,t, f, w, bl, o fields in skmem)
The reason that my total captured sockets memory above does not add up, is that a lot of the connections are running inside docker containers which are not displayed in the main system ss output, however they are displayed in kernel output of /proc/net/sockstat.
More info in this helpful stackoverflow question: https://stackoverflow.com/questions/37171909/when-using-docker-established-connections-dont-appear-in-netstat
That explains the difference. If all processes were running directly on the host, the summed socket memory would match.
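For the host-only case, here is a rough sketch of that summation (my own illustration; it assumes GNU grep for -oP, and sums the r/t/f/w/o/bl skmem fields of every TCP socket, in bytes):

ss -tamn | grep -oP 'skmem:\(\K[^)]*' | awk -F, '
{
    for (i = 1; i <= NF; i++) {
        f = $i
        # keep r, t, f, w, o and bl; skip rb, tb (limits) and d (drop count)
        if (f ~ /^(r|t|f|w|o|bl)[0-9]/) {
            gsub(/[a-z]/, "", f)
            total += f
        }
    }
}
END { printf "%d bytes (%.1f KiB)\n", total, total / 1024 }'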
|
I am running a Debian GNU/Linux 9.5 (stretch) with kernel: 4.9.0-7-amd64.
I have found the culprit of a memory consumption problem I am facing to be an in-app mechanism for sending logs to FluentD daemon, so I am trying to figure out the TCP memory usage.
According to the following outputs
/proc/net/sockstat:
sockets: used 779
TCP: inuse 23 orphan 0 tw 145 alloc 177 mem 4451
UDP: inuse 5 mem 2
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

The mem metric is the number of pages (4K) used by TCP memory. So TCP memory usage equals 4451 * 4 = 17804k.
ss -atmp:
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:ssh *:* users:(("sshd",pid=559,fd=3))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d712)
LISTEN 0 4096 127.0.0.1:8125 *:* users:(("netdata",pid=21419,fd=33))
skmem:(r0,rb33554432,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 4096 *:19999 *:* users:(("netdata",pid=21419,fd=4))
skmem:(r0,rb33554432,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 *:3999 *:* users:(("protokube",pid=3504,fd=9))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 127.0.0.1:19365 *:* users:(("kubelet",pid=2607,fd=10))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 127.0.0.1:10248 *:* users:(("kubelet",pid=2607,fd=29))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 127.0.0.1:10249 *:* users:(("kube-proxy",pid=3250,fd=10))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 128 *:sunrpc *:* users:(("rpcbind",pid=232,fd=8))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
ESTAB 0 0 172.18.25.47:ssh 46.198.221.224:35084 users:(("sshd",pid=20049,fd=3),("sshd",pid=20042,fd=3))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
TIME-WAIT 0 0 100.96.18.1:48226 100.96.18.110:3006
ESTAB 0 0 172.18.25.47:62641 172.18.18.165:3999 users:(("protokube",pid=3504,fd=11))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d15390)
ESTAB 0 0 172.18.25.47:3999 172.18.63.198:46453 users:(("protokube",pid=3504,fd=17))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
SYN-SENT 0 1 172.18.25.47:28870 172.18.23.194:4000 users:(("protokube",pid=3504,fd=3))
skmem:(r0,rb12582912,t1280,tb12582912,f2816,w1280,o0,bl0,d0)
TIME-WAIT 0 0 100.96.18.1:34744 100.96.18.108:3008
ESTAB 0 0 172.18.25.47:3999 172.18.18.165:23733 users:(("protokube",pid=3504,fd=8))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
TIME-WAIT 0 0 100.96.18.1:12992 100.96.18.105:3007
TIME-WAIT 0 0 100.96.18.1:48198 100.96.18.110:3006
TIME-WAIT 0 0 100.96.18.1:63502 100.96.18.102:8001
ESTAB 0 0 127.0.0.1:10249 127.0.0.1:53868 users:(("kube-proxy",pid=3250,fd=5))
skmem:(r0,rb12582912,t0,tb12582912,f4096,w0,o0,bl0,d0)
TIME-WAIT 0 0 100.96.18.1:58032 100.96.18.101:3000
TIME-WAIT 0 0 100.96.18.1:17158 100.96.18.104:8000
ESTAB 0 0 172.18.25.47:38474 172.18.18.165:https users:(("kubelet",pid=2607,fd=38))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d112)
TIME-WAIT 0 0 100.96.18.1:17308 100.96.18.104:8000
ESTAB 0 0 127.0.0.1:32888 127.0.0.1:10255 users:(("go.d.plugin",pid=21570,fd=8))
skmem:(r0,rb12582912,t0,tb12582912,f20480,w0,o0,bl0,d3)
TIME-WAIT 0 0 100.96.18.1:57738 100.96.18.101:3000
TIME-WAIT 0 0 100.96.18.1:23650 100.96.18.97:3004
TIME-WAIT 0 0 100.96.18.1:34518 100.96.18.103:3001
ESTAB 0 0 127.0.0.1:53868 127.0.0.1:10249 users:(("go.d.plugin",pid=21570,fd=6))
skmem:(r0,rb12582912,t0,tb12582912,f8192,w0,o0,bl0,d1)
TIME-WAIT 0 0 100.96.18.1:23000 100.96.18.98:3002
ESTAB 0 0 172.18.25.47:38498 172.18.18.165:https users:(("kube-proxy",pid=3250,fd=7))
skmem:(r0,rb12582912,t0,tb12582912,f8192,w0,o0,bl0,d0)
TIME-WAIT 0 0 100.96.18.1:26430 100.96.18.100:3005
TIME-WAIT 0 0 100.96.18.1:34882 100.96.18.103:3001
ESTAB 0 0 172.18.25.47:3999 172.18.44.34:57033 users:(("protokube",pid=3504,fd=14))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
ESTAB 0 0 172.18.25.47:3999 172.18.25.148:60423 users:(("protokube",pid=3504,fd=18))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
ESTAB 0 0 172.18.25.47:61568 35.196.244.138:https users:(("netdata",pid=21419,fd=70))
skmem:(r0,rb12582912,t0,tb262176,f0,w0,o0,bl0,d0)
TIME-WAIT 0 0 100.96.18.1:13154 100.96.18.105:3007
ESTAB 0 0 172.18.25.47:54289 172.18.30.39:3999 users:(("protokube",pid=3504,fd=12))
skmem:(r0,rb12582912,t0,tb12582912,f4096,w0,o0,bl0,d15392)
TIME-WAIT 0 0 100.96.18.1:34718 100.96.18.108:3008
TIME-WAIT 0 0 100.96.18.1:24078 100.96.18.97:3004
LISTEN 0 128 :::ssh :::* users:(("sshd",pid=559,fd=4))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 4096 :::19999 :::* users:(("netdata",pid=21419,fd=5))
skmem:(r0,rb33554432,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::4000 :::* users:(("protokube",pid=3504,fd=5))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::32003 :::* users:(("kube-proxy",pid=3250,fd=13))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::31719 :::* users:(("kube-proxy",pid=3250,fd=12))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::10250 :::* users:(("kubelet",pid=2607,fd=24))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d23)
LISTEN 0 32768 :::9100 :::* users:(("node_exporter",pid=11027,fd=3))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::31532 :::* users:(("kube-proxy",pid=3250,fd=11))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::30892 :::* users:(("kube-proxy",pid=3250,fd=9))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::10255 :::* users:(("kubelet",pid=2607,fd=26))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 128 :::sunrpc :::* users:(("rpcbind",pid=232,fd=11))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
LISTEN 0 32768 :::10256 :::* users:(("kube-proxy",pid=3250,fd=8))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d0)
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13492
ESTAB 0 0 ::ffff:172.18.25.47:10250 ::ffff:172.18.25.148:55670 users:(("kubelet",pid=2607,fd=40))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d15400)
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13096
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13384
ESTAB 0 0 ::ffff:172.18.25.47:10250 ::ffff:172.18.44.34:49454 users:(("kubelet",pid=2607,fd=59))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d7698)
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13200
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13502
TIME-WAIT 0 0 ::ffff:172.18.25.47:4000 ::ffff:172.18.63.198:25438
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13586
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13298
ESTAB 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.148:45776 users:(("node_exporter",pid=11027,fd=7))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d15419)
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13292
ESTAB 0 0 ::ffff:127.0.0.1:10255 ::ffff:127.0.0.1:32888 users:(("kubelet",pid=2607,fd=5))
skmem:(r0,rb12582912,t0,tb12582912,f4096,w0,o0,bl0,d0)
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13206
ESTAB 0 0 ::ffff:172.18.25.47:10250 ::ffff:172.18.18.165:33482 users:(("kubelet",pid=2607,fd=32))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d7707)
TIME-WAIT 0 0 ::ffff:172.18.25.47:4000 ::ffff:172.18.30.39:45200
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13594
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13390
TIME-WAIT 0 0 ::ffff:172.18.25.47:9100 ::ffff:172.18.25.47:13090
ESTAB 0 0 ::ffff:172.18.25.47:10250 ::ffff:172.18.25.148:55590 users:(("kubelet",pid=2607,fd=41))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d15418)
ESTAB 0 0 ::ffff:172.18.25.47:10250 ::ffff:172.18.25.148:55536 users:(("kubelet",pid=2607,fd=11))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d15401)
ESTAB 0 0 ::ffff:172.18.25.47:10250 ::ffff:172.18.25.148:55762 users:(("kubelet",pid=2607,fd=43))
skmem:(r0,rb12582912,t0,tb12582912,f0,w0,o0,bl0,d15407)According to ss manual:
skmem:(r<rmem_alloc>,rb<rcv_buf>,t<wmem_alloc>,tb<snd_buf>,
       f<fwd_alloc>,w<wmem_queued>,o<opt_mem>,
       bl<back_log>,d<sock_drop>)

<rmem_alloc>   the memory allocated for receiving packet

<rcv_buf>      the total memory can be allocated for receiving packet

<wmem_alloc>   the memory used for sending packet (which has been sent to layer 3)

<snd_buf>      the total memory can be allocated for sending packet

<fwd_alloc>    the memory allocated by the socket as cache, but not used for
               receiving/sending packet yet. If need memory to send/receive
               packet, the memory in this cache will be used before allocate
               additional memory.

<wmem_queued>  the memory allocated for sending packet (which has not been
               sent to layer 3)

<opt_mem>      the memory used for storing socket option, e.g., the key for
               TCP MD5 signature

<back_log>     the memory used for the sk backlog queue. On a process context,
               if the process is receiving packet, and a new packet is
               received, it will be put into the sk backlog queue, so it can
               be received by the process immediately

<sock_drop>    the number of packets dropped before they are de-multiplexed
               into the socket

Adding all the skmem values except for rb and tb (because they are the maximum amounts that can be allocated) and d (dropped packets), I should get a value pretty close to the /proc/net/sockstat value. However, the value I get is 53k, which is nowhere near the 17804k.
Is my logic correct? So what am I missing here?
|
Calculate TCP memory usage (does not add up)
|
Solution found: this command works perfectly:

watch "ss -o state syn-sent '( dport = :https or sport = :https )'"

This command also works fine:
while true;do sleep 2s && netstat -napotep|grep SYN_SENT; done
|
I want to see the "syn_sent" socket state in real time during the connection process, using ss or netstat or any other command.
I have tried these commands, but they all fail:
watch netstat -tnaop|grep -i syn
ss -4 state syn
|
How to show the "syn_sent" socket state on Linux in realtime?
|
The listening socket isn't the one transporting data! The moment a listening socket gets a connection request, the accept() system call can create a new connected socket. The listening socket doesn't transport any data; it just waits for connection requests. The listening socket and the data-transporting sockets are two separate sockets.
Therefore, ss doesn't have much to show.
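So to get the counters you're after, look at the established sockets whose local port is the listener's port instead (a sketch; sip resolves to port 5060 here):

ss -tin state established '( sport = :sip )'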
|
With ss -tuiOp we can view extended stats for an outbound process, e.g.:
tcp ESTAB 0 0 192.168.68.108:32862 52.86.220.33:https
users:(("chrome",pid=13907,fd=44)) cubic wscale:12,7 rto:292 rtt:91.131/1.147 ato:40 mss:1288 pmtu:1500 rcvmss:1288 advmss:1448 cwnd:10 bytes_sent:25761 bytes_retrans:108 bytes_acked:25654 bytes_received:136601 segs_out:1010 segs_in:630 data_segs_out:407 data_segs_in:522 send 1.13Mbps lastsnd:2184 lastrcv:2092 lastack:2092 pacing_rate 2.26Mbps delivery_rate 339kbps delivered:408 app_limited busy:36036ms retrans:0/2 dsack_dups:2 rcv_rtt:33522.9 rcv_space:67624 rcv_ssthresh:225644 minrtt:82.525However, this isn't viewable for listening ports using ss -tuiOpl:
tcp LISTEN 0 64 *:sip *:* users:(("linphone",pid=13355,fd=39)) cubic cwnd:10

Is there a way to get similar stats for listening ports? I'm particularly interested in bytes_sent, bytes_received, lastrcv.
|
View extended stats for listening ports (using ss?)
|
"the mapping of $DISPLAY on C to $DISPLAY on B" what does that mean?
Clearly you grep out of something on C, so you only see sockets on C which involves "port num=6010". Other connection or listening socket on C are grep out.
You didn't see any connection before because there hasn't been any X client running and connected to sshd(port number=6010), and more info after because you now run an X client, which has connected to your sshd(port num=6010).
You have to know the network topology when using SSH tunnel. SSH server on C opens a new socket which listen on port 6010 because it was asked to by the SSH client on B. The ssh tunnel is still established between SSH client on B and SSH server on C(port num=22, if sshd not specially configured), you don't see this tunnel connection since you grep it out. X clients on C connects to sshd(port number=6010), then sshd multiplex these connections using the ssh tunnel and forward these connections to the X server on B.
"Connection between $DISPLAY on C and $DISPLAY on B" doesn't really exist, the ssh tunnel is created between C:22 and address_of_the_SSH_client_on_B. And since it's a connection, it's not possible in the LISTENING state.
use netstat -ap without grep to see more information.
All the connection we mentioned in this answer means real TCP connection, from the kernel's view, not "connections" from end-users' view.
|
On machine B, I remote access machine C
$ ssh -X t@C
$ echo $DISPLAY
localhost:10.0

How can I find/verify the mapping of $DISPLAY on C to $DISPLAY on B? Can it be done by the following command on C?
$ netstat -a | grep 6010
tcp 0 0 localhost:6010 0.0.0.0:* LISTEN
tcp6       0      0 ip6-localhost:6010      [::]:*     LISTEN

Why is the connection between $DISPLAY on C and $DISPLAY on B LISTEN, not ESTABLISHED, given that the X forwarding channel has been created?
When I run a X client on C, how can I verify that it is connected to the X server on B (the local machine)? Why do I get more information about port 6010 in the following than before running the X client?
$ eog &
[1] 1129
$ netstat -a | grep 6010
tcp 0 0 localhost:6010 0.0.0.0:* LISTEN
tcp 0 0 localhost:59782 localhost:6010 TIME_WAIT
tcp 0 0 localhost:59780 localhost:6010 ESTABLISHED
tcp 0 0 localhost:59778 localhost:6010 TIME_WAIT
tcp 0 0 localhost:6010 localhost:59780 ESTABLISHED
tcp6       0      0 ip6-localhost:6010      [::]:*     LISTEN

Thanks.
|
How can I find the mapping on `$DISPLAY` after `ssh -X`?
|
macOS appears to support the netstat command. netstat has long been deprecated on Linux due to the interface it used; ss is a syntactically similar command for Linux.
|
In Linux we can run ss -x or lsof -U +E and see what type a unix socket has. But on macOS there is no ss; we can run lsof -U, which only shows TYPE - unix, but I would like to know, with some utility, exactly what so_type an existing unix socket has.
|
How can I find out what so_type an existing unix socket has in macOS?
|
Trick: such things can often be figured out with a run of strace -o logfile.txt programname; in your case, you'd find out that ss creates an AF_NETLINK socket, and sends and receives messages via that.
Netlink is a logical interface of the kernel designed to give access to internals of the networking stack.
Using ldd $(which ss) you can find out that ss seems to use the libmnl library – which makes generating, sending, receiving and parsing these messages feasible. You could theoretically craft such messages by hand, but you'd just be reimplementing libmnl in parts (and quite likely worse), so you'd want to use that - it's by the same folks who invented the kernel side of this, so it kind of makes sense to use it.
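For example (illustrative; the exact lines vary with strace and iproute2 versions):

$ strace -e trace=%network -o logfile.txt ss -tn
$ grep -m1 NETLINK logfile.txt
socket(AF_NETLINK, SOCK_RAW|SOCK_CLOEXEC, NETLINK_SOCK_DIAG) = 3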
|
I know lsof and ss provide metadata about connections. Where do they get it from?
For example, this represents a connection:
ls -al /proc/102922/fd/98
lrwx------ 1 me me 64 dic 21 06:06 /proc/102922/fd/74 -> 'socket:[3803248]'With ss I can see more info:
tcp ESTAB 0 0 192.168.68.108:33966 198.252.206.25:https users:(("chrome",pid=102922,fd=98)) cubic wscale:9,7 rto:296 rtt:92.785/24.455 ato:40 mss:1448 pmtu:1500 rcvmss:536 advmss:1448 cwnd:10 bytes_sent:1463 bytes_acked:1464 bytes_received:336 segs_out:11 segs_in:7 data_segs_out:6 data_segs_in:2 send 1.25Mbps lastsnd:71284 lastrcv:71292 lastack:26068 pacing_rate 2.5Mbps delivery_rate 271kbps delivered:7 app_limited busy:308ms rcv_space:14480 rcv_ssthresh:64088 minrtt:86.996

But, assume the system my app is running on does not have ss for some reason. How can I go from socket:[3803248] to the tcp stats that ss provides? I don't intend to fully rewrite ss :) but I'm curious about what exists in the filesystem.
|
Where in the filesystem can I get metadata about a socket?
|
I would do it the other way round.
I assume:

you can connect to the remote hosts,
and the remote hosts are unix.

Just run

ss -tanp | awk '$5 == "18.23.292.9:8088"'

on the remote hosts (assuming also that no NAT is set up).
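If you have SSH access to the peers, you can loop over them from the local machine (a sketch with the hosts from your question):

for h in 118.2.291.2 110.6.52.2; do
    echo "== $h =="
    ssh "$h" "ss -tanp | awk '\$5 == \"18.23.292.9:8088\"'"
done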
|
With the following command I want to get the IPs that are connected to my machine on port 8088.
18.23.292.9 is the machine that the resource manager service is running on, with port 8088:
ss -tanp | grep 8088 | grep ESTAB
ESTAB 0 0 18.23.292.9:8088 118.2.291.2:52874 users:(("java",pid=13970,fd=829))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:56379 users:(("java",pid=13970,fd=668))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:52337 users:(("java",pid=13970,fd=666))
ESTAB 0 0 18.23.292.9:8088 118.2.280:34088 users:(("java",pid=13970,fd=790))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:59794 users:(("java",pid=13970,fd=660))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:59415 users:(("java",pid=13970,fd=665))
ESTAB 0 0 18.23.292.9:8088 118.2.279:53610 users:(("java",pid=13970,fd=750))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:63875 users:(("java",pid=13970,fd=661))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:50267 users:(("java",pid=13970,fd=667))now I want to know which are the application/services on remote machines are actually connected to port 8088
the reason is that we saw many connection to port 8088 and we want to know which are the process that try to connect
the machines are as below example 118.2.291.2 , 110.6.52.2 , etc
meanwhile I create without success the following script , that capture the IP and port of the machines that are connected
#!/bin/bash

port=` netstat -anp | grep :8088 | grep ESTAB | head -1 | awk '{print $5}' | sed s'/:/ /g' | awk '{print $2}' `
IP=` netstat -nape | grep $port | awk '{print $5}' | sed s'/:/ /g' | awk '{print $1}' `
export PORT=` netstat -nape | grep $port | awk '{print $5}' | sed s'/:/ /g' | awk '{print $2}' `

echo $IP
echo $PORT

Maybe another good example:
Here is a good example of how to find out which process is currently using a certain port in Linux, and we also get the list of machines that are connected (on the right side):
lsof -i tcp:8088
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 13970 yarn 396u IPv4 1052681821 0t0 TCP *:radan-http (LISTEN)
java 13970 yarn 559u IPv4 1201044836 0t0 TCP master02.bigdata130.cgnt:radan-http->worker01.TATA130.cgnt:47506 (ESTABLISHED)
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED)
java 13970 yarn 621u IPv4 1200925788 0t0 TCP master02.TATA130.com:radan-http->master01.TATA130.com:37762 (ESTABLISHED)
java 13970 yarn 631u IPv4 1201038517 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56258 (ESTABLISHED)
java 13970 yarn 634u IPv4 1201046323 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56272 (ESTABLISHED)
java 13970 yarn 635u IPv4 1201038518 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56270 (ESTABLISHED)
java 13970 yarn 664u IPv4 1201049689 0t0 TCP master02.TATA130.com:radan-http->kafka03.TATA130.com:39486 (ESTABLISHED)
java 13970 yarn 693u IPv4 1201050710 0t0 TCP master02.TATA130.com:radan-http->worker02.TATA130.com:39090 (ESTABLISHED)
java 18394 ambari 1511u IPv4 1201046322 0t0 TCP master02.TATA130.com:56258->master02.TATA130.com:radan-http (ESTABLISHED)
java 18394 ambari 1515u IPv4 1201049634 0t0 TCP master02.TATA130.com:56270->master02.TATA130.com:radan-http (ESTABLISHED)
java 18394 ambari 1516u IPv4 1201008383 0t0 TCP master02.TATA130.com:41112->master01.TATA130.com:radan-http (ESTABLISHED)
java 18394 ambari 1517u IPv4 1201038519 0t0 TCP master02.TATA130.com:56272->master02.TATA130.com:radan-http (ESTABLISHED)

It would also be very useful to know the user of the PID that uses the port on the target machines,
for example:
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED) PID=32424 user=root
java 13970 yarn 621u IPv4 1200925788 0t0 TCP master02.TATA130.com:radan-http->master01.TATA130.com:37762 (ESTABLISHED) PID=324424 user=yarn
java 13970 yarn 631u IPv4 1201038517 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56258 (ESTABLISHED) PID=324224 user=yarn

Or maybe explained this way: let's take the line

java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED)

So on the master03 machine the port is 33736,
so if we log in to the master03 machine and do

netstat -nlp | grep :33736

tcp 0 0 0.0.0.0:33736 0.0.0.0:* LISTEN 13970/java

and

ps -ef | grep 13970 | grep -v grep | awk '{print $1}'
yarn

So my question is: can we use the command lsof -i tcp:8088, piped to other commands, to give us the expected results? Or maybe another idea, as a script?
Expected results
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED) PID=32424 user=root
java 13970 yarn 621u IPv4 1200925788 0t0 TCP master02.TATA130.com:radan-http->master01.TATA130.com:37762 (ESTABLISHED) PID=324424 user=yarn
java 13970 yarn 631u IPv4 1201038517 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56258 (ESTABLISHED) PID=324224 user=yarn
|
How to know which processes connected to my machine via a specific port
|
The file /etc/services only provides number-to-name mapping for ports, and has nothing to do with what ports ss lists. Additionally, the names in this file are common uses for the ports, not what the port is actually used for on your system, as any program (with sufficient permissions) can open any port for any purpose. Basically /etc/services is a subset of the IANA's assigned numbers registry.
What ss lists is what ports on your system are in use. If it is listed in ss, it is in use. If you run ss -p as root, it should list what process specifically has the port open.
You can't directly modify what ports are in use. What you need to do is use sudo ss -p to find out what program is using the port and kill it. Or, conversely, if a port is not listed in ss and you expect it to be, start the service that uses the port.
It is a common misguided perception that ports can be created and deleted. Ports are just a number, they always exist and can't be deleted or created. They are either in use or not in use.
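As a small illustration of both points (the port in the lookup is just an example, and assumes your getent supports port keys):

sudo ss -tulnp          # ports actually in use, with the owning process
getent services 22/tcp  # the conventional /etc/services name for a port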
|
When trying to see port clashes within my system, many websites online recommend using /etc/services or ss -tunl to see port info
I am noticing that /etc/services provides different information from ss on most occasions.
Output comparison examples
sudo cat /etc/services

ftp 21/udp
ftp 21/sctp
ssh 22/tcp
ssh 22/udp
ssh 22/sctp
telnet 23/tcp
telnet 23/udp
smtp 25/tcp

versus

ss -tunl

Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:46670 0.0.0.0:*
udp UNCONN 0 0 [::]:5353 [::]:*
udp UNCONN 0 0 [::]:38838 [::]:*

Is /etc/services a static data file that should only be used as a guide, not a true reflection of the real port configuration of the system?
Where does the ss program gather this port data, and how can I modify/delete some of the ports, either through ss or another program?
|
Where does the ss command gather its data for ports etc.?
|
The grammar for the Defaults is this (see man sudoers):
Default_Type ::= 'Defaults' |
'Defaults' '@' Host_List |
'Defaults' ':' User_List |
'Defaults' '!' Cmnd_List |
'Defaults' '>' Runas_List

User_List ::= User |
              User ',' User_List

So on the line

Defaults: myUser !requiretty

remove the space between Defaults: and myUser.
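With the space removed, the line reads:

Defaults:myUser !requiretty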
|
I need to allow to a user to run passwordless sudo without tty.
I have a file under /etc/sudoers.d/ with the special commands and settings I need, since I don't fancy editing directly the sudoers file. In that file I have the following:
# My list of commands that the user can run passwordless
myUser ALL=(ALL) NOPASSWD:SETENV: /foo/bar /foo/zaz
# My new defaults.
Defaults exempt_group = myUser
Defaults !env_reset,env_delete-=PATH
Defaults: myUser !requiretty

However, when I su to the user and run sudo -l I get this in the defaults:

Matching Defaults entries for myUser on this host:
requiretty, !visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION
LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin, exempt_group=myUser,
!env_reset, env_delete-=PATH, !requiretty

where I can see that it has requiretty first and my !requiretty at the end, which does not work.
I assume this is happening because the normal sudoers file is parsed first, then my custom file under /etc/sudoers.d/.
Is there a way of making this work without editing the original /etc/sudoers?
|
sudoers and defaults
|
Because visudo checks the syntax and makes sure the result is a valid configuration file; otherwise you might edit the file, make an error, and sudo would no longer be usable, just because of a syntax error.
|
Why is it recommended to edit the /etc/sudoers file with the visudo command?
Here is a sample of the file:
## Sudoers allows particular users to run various commands as
## the root user, without needing the root password.
##
## Examples are provided at the bottom of the file for collections
## of related commands, which can then be delegated out to particular
## users or groups.
##
## This file must be edited with the 'visudo' command.

Are there any special reasons for it?
|
This file must be edited with the 'visudo' command.? [duplicate]
|
The point of sudoedit is to allow users to edit files they wouldn’t otherwise be allowed to, while running an unprivileged editor. To make this happen, sudoedit copies the file to be edited to a temporary location, makes it writable by the requesting user, and opens it in the configured editor. That’s why the editor shows an unrelated filename in a temporary directory. When the editor exits, sudoedit checks whether any changes were really made, and copies the changed temporary file back to its original location if necessary.
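A rough sketch of that flow in shell terms (illustration only, not sudoedit's actual implementation; the file name is a placeholder):

tmp=$(mktemp /var/tmp/exampleXXXXXX)        # temporary, user-editable copy
cp /etc/example.conf "$tmp"
chown "$SUDO_USER" "$tmp"                   # owned by the invoking user
"${SUDO_EDITOR:-vi}" "$tmp"                 # the unprivileged editor runs here
cmp -s "$tmp" /etc/example.conf || cp "$tmp" /etc/example.conf   # copy back only if changed
rm -f "$tmp"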
|
I used sudoedit to create a file:
$ sudoedit /etc/systemd/system/apache2.servicebut when I went to save the file, it wrote it in a temporary directory (/var/temp/blahblah). What is going on? Why is it not saving it to the system directory?
|
Why is sudoedit writing to a temporary directory?
|
The manpage says

Files located in a directory that is writable by the invoking user may not be edited unless that user is root (version 1.8.16 and higher).

If you can write to the directory containing the file, then you can edit it in practice without needing sudoedit (although you may not be able to read its current contents): you can move it out of the way and create a new file with the same name. In your particular case, you can read the file, and you should find that at least some editors will allow you to edit it (at least those which save files by writing a temporary file and renaming it into place).
The reasoning behind this feature is given in sudo bug 707: basically, allowing users to edit files in directories they can write to with sudoedit can allow them to circumvent the restrictions set up in sudoedit’s configuration (and effectively edit any file on the system).
|
Why can't I edit files owned by root that are e.g. somewhere deep in my personal directory? It says:

sudoedit: existingFile: editing files in a writable directory is not permitted

while I have the following function defined:
function sunano {
export SUDO_EDITOR='/usr/local/bin/nano'
sudoedit "$@"
}And I edit like this:
sunano existingFile

where the file is indeed owned by root:

ls -l existingFile

proves that:

-rwxr-xr-x 1 root root 40 Jun 15 2015 existingFile
|
sudoedit root owned file in a non-root directory
|
The big difference is who is editing what file.
With sudo vim (assuming successful authentication), the root user invokes vim and edits the file in place (with root's environment and vim swap files parallel to the file being edited).
With sudo -e or sudoedit, the user who invoked sudo edits a temporary copy of the file, owned by themselves, with their own environment (including things like ~/.vimrc). Once the user saves the output, the content of the temporary file is copied back into the original file that the user didn't have the permissions to edit. This method also has a couple of checks that prevent editing under a few circumstances:

the user is trying to edit a symbolic link
the user is trying to edit a file using a path containing a symbolic link
the user has write permissions on the directory containing the fileWhy those specific rules are strictly enforced, I do not know (some sort of security issues I'd assume).
P.S. Users are also disallowed with sudo's edit mode from editing files that are device special files (block devices, serial devices, etc.).
EDIT: Another consequence of not running vim as root, is that the user cannot use vim's shell capabilities this way to run arbitrary commands as root. This allows giving the user access to edit certain files via sudoers rules, while not handing over the keys to the kingdom.
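As an illustration of that last point (the username and file are placeholders), a sudoers rule can grant edit access to a single file without handing out an interactive root editor:

alice ALL = sudoedit /etc/hosts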
|
Is there a key difference between sudo -e and sudo vim? I have set up the sudoers file so that vim is my default editor.
Plus, should I switch from vim to rvim? I tried it but I had some problems with my config file.
|
Difference between sudo -e and sudo vim?
|
You shouldn’t run an editor as root unless absolutely necessary; you should use sudoedit or sudo -e or sudo --edit, or your desktop environment’s administrative features.
sudoedit
Once sudoedit is set up appropriately, you can do
SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit yourfile

sudoedit will check that you're allowed to do this, make a copy of the file that you can edit without changing ownership manually, start your editor, and then, when the editor exits, copy the file back if it has been changed.
I’d suggest a function rather than an alias:
function susubl {
export SUDO_EDITOR="/opt/sublime_text/sublime_text -w"
sudoedit "$@"
}although as Jeff Schaller pointed out, you can use env to put this in an alias and avoid changing your shell’s environment:
alias susubl='env SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit'

Take note that you don't need to use the $SUDO_EDITOR environment variable if $VISUAL or $EDITOR are already good enough.
The -w option ensures that the Sublime Text invocation waits until the files are closed before returning and letting sudoedit copy the files back.
Desktop environments (GNOME)
In GNOME (and perhaps other desktop environments), you can use any GIO/GVFS-capable editor, with the admin:// scheme; for example
gedit admin:///etc/shellsThis will prompt for the appropriate authentication using PolKit, and then open the file for editing if the authentication was successful.
|
System: Linux Mint 18.1 64-bit Cinnamon.

Objective: To define Bash aliases to launch various CLI and GUI text editors while opening a file in root mode from the gnome-terminal emulator.

Progress
For example, the following aliases seem to work as expected:
For CLI, in this example I used Nano (official website):
alias sunano='sudo nano'

For GUI, in this example I used Xed (Wikipedia article):

alias suxed='sudo xed'

They both open a file as root.

Problem
I have an issue with gksudo in conjunction with sublime-text:
alias susubl='gksudo /opt/sublime_text/sublime_text'

Sometimes it works. It just does not do anything most of the time.
How do I debug such a thing with inconsistent behavior? It does not output anything. No error message or similar.

Question
gksudo has been deprecated in Debian and also no longer included in Ubuntu 18.04 Bionic, so let me re-formulate this question to a still valid one:
How to properly edit system files (as root) in GUI (and CLI) in Linux?
Properly in this context I define as safely in case, for instance, a power loss occurs during the file edit, another example could be lost SSH connection, etc.
|
How to properly edit system files (as root) in GUI (and CLI) in Gnu/Linux?
|
sudoedit allows you to edit a file with an editor running on your own user id. It copies the file to a temporary file which your editor can then write into. As soon as the editor is closed, the edited file is copied back.
There is no built-in possibility to automatically write changes back while the editor is still running.
So you need to either:

run the editor as the other user id (e.g. sudo vi /file/to/edit)
copy the file manually back in a (separate) shell (sudo cp /tmp/... /file/to/edit) or from inside vim :!sudo cp % /file/to/edit. From vim you can also start a shell with :sh or put vim in the background with Ctrl+Z and restore it with fg.
use https://stackoverflow.com/questions/2600783/how-does-the-vim-write-with-sudo-trick-work
create your own version of sudoedit which writes changes back as soon as the temporary file is changed; see the sketch after this list. This should be easily doable with some scripting. Inotify can help you to detect changes (see for example Can a bash script be hooked to a file? )
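A minimal sketch of that last idea, assuming inotify-tools is installed; both paths are placeholders supplied by the caller:

#!/bin/bash
# copy the sudoedit temporary file back to its target on every save
tmp="$1"      # the temporary copy sudoedit opened
target="$2"   # e.g. /file/to/edit
while inotifywait -e close_write "$tmp"; do
    sudo cp "$tmp" "$target"
done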
|
I'm doing some scripting with Vim and I've just started using sudoedit.
Problem is, when I :w it writes to the temp file, so any testing of the script can't happen unless I quit the editor.
How can I force an update of the original, or am I missing the point of sudoedit?
|
Sudoedit Vim force write (update) without quit
|
Assuming that you want to be editing root's crontab, sudo must give you root authority. After it does so, crontab will invoke ${VISUAL:-${EDITOR:-vi}} (it'll use $VISUAL unless it doesn't exist; in that case it'll use $EDITOR unless it doesn't exist; in that case it'll use vi).
You have a few possible solutions. They all subvert the security provided by sudo, but you must already be aware of those issues (and be willing to protect your .vimrc) or you wouldn't be using sudoedit in the first place.
The best is probably to add an assignment to the HOME variable on the sudo command line so crontab thinks the HOME directory is different:
sudo HOME=$HOME crontab -e(That command won't work if there's whitespace in your home directory path!)
|
I've started using sudoedit <file> instead of sudo vim <file>. One of the advantages is that it uses my local ~/.vimrc. However, when using sudo crontab -e, it uses /root/.vimrc instead. Is there a way to make sudo crontab -e use my local ~/.vimrc?
Here is a related question, about using sudoedit with vimdiff. However, substituting crontab -e for vimdiff doesn't work.
|
How can I make `sudo crontab -e` use my `sudoedit` environment?
|
visudo is a command provided for editing the sudoers file in a safe way. To quote its manual page:visudo edits the sudoers file in a safe fashion, analogous to vipw(8).
visudo locks the sudoers file against multiple simultaneous edits,
provides basic sanity checks, and checks for parse errors.

The /etc/sudoers.tmp file is a lock file used by visudo. Your changes are written to this temporary file so that visudo can carry out its checks. If everything checks out okay, the main /etc/sudoers file will be modified accordingly.
So when you run sudo visudo, a command line editor pops up so that you can edit the file. In your case, this editor appears to be GNU nano. In nano, you can navigate to the bottom of the file using arrow keys (or the Page Down key), and then paste the lines you want to include. Once your changes are done, you can exit the editor with Ctrl + X and choose the 'Y' option to save the file (you'll be asked to confirm the filename - just hit Enter).
Your sudoers file should now be updated. You can use a pager like less to read the file and confirm that for yourself (the command to do that is sudo less /etc/sudoers).
|
I have read this answer but don't know how to add the following line into my sudoers file.
matthew ALL=(ALL) NOPASSWD: /usr/sbin/service fancontrol startI ran "sudo visudo", and a "/etc/sudoers.tmp" window popped up. Is "/etc/sudoers.tmp" the correct file into which the line should be added? If so, under which line should I add the lines? How can I save it? I cannot find a "Save" option there.
I aim to run "sudo service fancontrol start" without a password.
GNU nano 2.9.3 /etc/sudoers.tmp
#
# This file MUST be edited with the 'visudo' command as root.
|
How can I add lines into my sudoers file?
|
If you put the command in your ~/.profile, it will run every time you launch a login shell. Some terminal emulators allow you to use a login shell for each terminal window. Do you want your command running that often?
If you want to be allowed to use sudo for that command without entering a password, use the visudo command with sudo visudo (or, to use your favourite editor, use sudo -E visudo).
DO NOT EDIT /etc/sudoers DIRECTLY.
Add a line like this:
tim ALL=(ALL) NOPASSWD: /path/to/my/command

The order is important in the sudoers file, so add it below this line: root ALL=(ALL:ALL) ALL
However, if you only want it to run when your system starts up, add it to /etc/rc.local and you don't have to worry about sudo.
|
Is it a bad practice to run a command which requires sudo in ~/.profile?
If I really want to do that, how can I make the command run when Ubuntu reboots?

make the command run with sudo under my user account without requiring a password, by editing /etc/sudoers?
provide my password to sudo in ~/.profile, via echo <passwd> | sudo -S <mycommand>?

I haven't verified whether the first way works, because I am still learning how to do it.
The second way seems to raise serious security concerns, and is probably the last way I want to go.
Thanks.
|
How do you run a command with sudo in `~/.profile`?
|
You need to tell the editor to wait:
SUDO_EDITOR="/snap/bin/code --wait" sudoedit /etc/fstabWithout that option, VS Code forks, or notifies an already-running instance, and control immediately returns to sudoedit. The latter sees that nothing has changed and deletes the temporary copy that is used for editing purposes. (Snap might contribute to the effect, but VS Code on its own requires this.)
See also How to properly edit system files (as root) in GUI (and CLI) in Gnu/Linux?
|
I am using Kubuntu 20.04.
When I run sudoedit /etc/fstab, VS Code opens to a blank document and the CLI immediately returns (see details below).
If I run export SUDO_EDITOR=nano, the document opens in the nano editor with the contents of /etc/fstab as expected.
If I run export SUDO_EDITOR=/snap/bin/code, it once again opens VS Code with a blank document.
What am I doing wrong? Or is this a bug?

kevin@kevcoder00 ~ $ echo $VISUAL
kevin@kevcoder00 ~ $ echo $SUDO_EDITOR
kevin@kevcoder00 ~ $ echo $EDITOR
/snap/bin/code
kevin@kevcoder00 ~ $ sudoedit /etc/fstab
[sudo] password for kevin:
sudoedit: /etc/fstab unchanged
|
Using Visual Studio Code as EDITOR e.g. with sudoedit
|
This may be useful for that sudoedit error message:

sudoedit: ... editing files in a writable directory is not permitted

Try a modification to the sudoers file using sudo visudo; add the line:

Defaults !sudoedit_checkdir
|
When I want to vimdiff root files, I use the following alias, as per this suggestion.
alias sudovimdiff='SUDO_EDITOR=vimdiff sudoedit'I can then use the following command.
$ sudovimdiff /root/a /root/bHowever, if one of the files is writable by my user, the command fails.
$ sudovimdiff /root/a /tmp/b
sudoedit: /tmp/b: editing files in a writable directory is not permitted

Is there a way to vimdiff one root and one non-root file, using my user's environment settings (i.e. sudoedit)?
|
Can I sudoedit a file in a writable directory when using vimdiff?
|
gvim returns almost immediately. When sudoedit notices that the editor has returned, it will finish, reporting no changes. To get sudoedit to work correctly you need to get it to wait until you are finished editing. I normally use the -f switch to do this. I have not tried it, but the manual seems to support the use of --remote-wait or --remote-wait-silent.
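For example (the target file here is arbitrary), forcing gvim to stay in the foreground:

SUDO_EDITOR='gvim -f' sudoedit /etc/hosts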
|
Is it possible to use gvim --remote-silent and similar as an editor for visudo and sudoedit? Actually, I don't think this is related to the --remote option. Even if I set Defaults editor = "/usr/bin/gvim", the tmpfile gvim loads is blank and editing it has no effect.
|
visudo/sudoedit and gvim --remote-silent
|
The following command will remove write permission from group on file /var/db/sudo/ts
sudo chmod g-w /var/db/sudo/ts
|
I'm currently running Mac OS Sierra. I don't know exactly how, but somehow I've altered some permissions and think it'd be a good idea to reset them, but I do not know how. Each time I execute sudo, I am met with this warning: sudo: /var/db/sudo/ts is group writable
The command executes fine, but it seems to be a good idea to fix that. Please advise.
results:
0 dr-x------ 4 root wheel 136 Mar 28 11:34 .
0 dr-x-w---- 5 root wheel 170 Mar 28 10:38 ..
8 -rw--w---- 1 root wheel 80 Jan 27 00:51 zacadmin
8 -rw------- 1 root wheel 80 Mar 28 12:00 zbrown
|
Accidentally set write permission on sudoers?
|
Based on the tips above about using ed (and this example), I came up with the following
ED="/bin/ed"
CONTENT_TO_APPEND="Yay, config!"

##### Set editor #####
OLD_EDITOR=$EDITOR
export EDITOR=$ED

##### Append using ed #####
echo "a
$CONTENT_TO_APPEND
.
w
q" | sudoedit -u bob /foo/bar.conf

##### Clean up #####
export EDITOR=$OLD_EDITOR
|
I'm looking to modify a file from a script. I can sudoedit the file as the bob user by doing
sudoedit -u bob /foo/bar.conf

but I don't have rights to do anything else as bob.
I came across http://shadow-file.blogspot.com.au/2009/01/how-to-sudoedit-non-interactively.html which might work, but seems complicated.
Is there some trivial way to do this that I'm missing?
(In case you're wondering, I'm trying to edit the inputs.conf file for a Splunk Universal Forwarder install on a RHEL host with very restrictive permissions)
|
Using sudoedit in a script (non-interactively)
|
sudoers file
You should be able to do any of these. Such as this:

john ALL=(ALL) NOPASSWD: sudoedit

or this:

john ALL=(ALL) NOPASSWD: sudoedit /path/to/file

Lastly, you could do it like this too:

Cmnd_Alias SOMECMD = sudoedit /path/to/file
john ALL=(ALL) NOPASSWD: SOMECMD

Once you have one of these definitions in place, you invoke it like so:

sudoedit /path/to/file

Details

You don't need to invoke it with a sudo command prefix like this:

sudo sudoedit /path/to/file

It takes care of the sudo automatically. It's equivalent to sudo -e /path/to/file, which will invoke an editor with elevated privileges.

excerpt from the sudo/sudoedit man page

-e  The -e (edit) option indicates that, instead of running a command,
    the user wishes to edit one or more files. In lieu of a command, the
    string "sudoedit" is used when consulting the sudoers file. If the
    user is authorized by sudoers the following steps are taken:

    1. Temporary copies are made of the files to be edited with the
       owner set to the invoking user.

    2. The editor specified by the SUDO_EDITOR, VISUAL or EDITOR
       environment variables is run to edit the temporary files.
       If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first
       program listed in the editor sudoers variable is used.

    3. If they have been modified, the temporary files are copied
       back to their original location and the temporary versions
       are removed.

You can override the editor by setting one of the environment variables mentioned above with the name of an editor to use, such as vim or gedit, for example.
|
What is the syntax for using NOPASSWD and sudoedit at the same time in /etc/sudoers? I tried this:
john ALL=(ALL) NOPASSWD: sudoedit /path/to/file

but I still get prompted for a password.
|
sudoers - How to use NOPASSWD and sudoedit at the same time?
|
I have worked a lot on this, and after many tries and much searching I got it working.
Type the command below and press Enter to safely open the /etc/sudoers file for editing:

$ sudo visudo

On a new line, insert the text below:

%domain\ admins ALL=(ALL) ALL

Since my domain group name consists of two words, I had to escape the space: domain\ admins

domain admins

This was the exact group name I had. The % specifies a group; without %, it would be taken as a username.
There was NO NEED TO SPECIFY THE DOMAIN, which is what I was trying before, i.e.:

%DOM.DOMAINNAME.COM\\domain\ admins ALL=(ALL) ALL
|
I want to give the domain admins group sudoers access.
I have come across many commands but none of them works for me.

fname.lname ALL=(ALL) ALL

With this command I can give access to a particular user, but I want to give access to all the members of the domain admins group.
%DOMAINNAME\\domain\ admins ALL=(ALL) ALL
%DOMAINNAME\\domain\ admins ALL=(ALL) ALL
%DOM.DOMAINNAME.COM\\domain\ admins ALL=(ALL) ALL
DOMAINNAME\\domain\ admins ALL=(ALL) ALL
DOMAINNAME\\domain\ admins ALL=(ALL) ALL
DOM.DOMAINNAME.COM\\domain\ admins ALL=(ALL) ALL
%DOMAINNAME\\domain_admins ALL=(ALL) ALL

I have tried many commands like these, but none of them works.
My domain group name has two words, i.e.
domain admins
Complete domain name is like:
DOM.DOMAINNAME.COM
And short name is
DOMAINNAME
Tell me how I can give sudoers access to a group.
I am working on Python scripting where the script asks to run with sudo, but the user doesn't have sudo access, and domain users shouldn't have to enter credentials every time.
|
Add "domain admins" group in sudo users [duplicate]
|
The problem is not on line 121 but on the next line, with mailfrom "[emailprotected]: you forgot the closing double quote. The correct version would be:

mailfrom "[emailprotected]"

The reason it was reported as a syntax error near line 121, and not 122, is that sudo's syntax analyser and your text editor count lines a little differently.
|
I'm Using Centos version:
[sysadmin@backup-srv ~]$ cat /etc/redhat-release
CentOS release 6.4 (Final)I'm using this sudo version:
[sysadmin@backup-srv ~]$ sudo -V
Sudo version 1.8.6p3
Sudoers policy plugin version 1.8.6p3
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p3

I tried to set up notification mail for sudo and added this entry at the bottom of the sudoers file using the visudo command, but I am getting the following syntax error:
Defaults mailto "[emailprotected]"
Defaults mailfrom "[emailprotected]
Defaults mail_always on
Defaults mailsub “*** Command run via sudo on %h ***”
Defaults mail_badpass on
Defaults badpass_message "Please Provide Correct Password"
Defaults !lecture,tty_tickets,!fqdn,!syslog
Defaults logfile=/var/log/sudo.log

This is the error I get while saving the sudo config file:
121 Defaults mailto "[emailprotected]"
122 Defaults mailfrom "[emailprotected]
123 Defaults mail_always on
124 Defaults mailsub “*** Command run via sudo on %h ***”
125 Defaults mail_badpass on
126 Defaults badpass_message "Please Provide Correct Password"
127 Defaults !lecture,tty_tickets,!fqdn,!syslog
128 Defaults logfile=/var/log/sudo.logThe error is:
visudo: >>> /etc/sudoers: syntax error near line 121 <<<
visudo: >>> /etc/sudoers: syntax error near line 121 <<<What to do now? How to setup the mail notification for Sudo version 1.8.6p3?
|
Sudo email notification setup error
|
I just needed to provide the full path for the two additional commands.
Cmnd_Alias WWWCMDS = /home/xxx/shop/update.sh,/usr/local/bin/geoipupdate,/usr/sbin/service memcached
www-data ALL=(ALL) NOPASSWD: WWWCMDS

and it worked fine.
|
Here is my original /etc/sudoers.d/www file:
Cmnd_Alias WWWCMDS = /home/xxx/shop/update.sh
www-data ALL=(ALL) NOPASSWD: WWWCMDS

I simply want to add two additional commands, so I RTFM and saw that you just have to comma-separate them.
Cmnd_Alias WWWCMDS = /home/xxx/shop/update.sh,geoipupdate,service memcached restart
www-data ALL=(ALL) NOPASSWD: WWWCMDS

But it triggered a syntax error. What am I doing wrong?
|
sudoers syntax error after adding a simple comma [duplicate]
|
The syntax you need is
Defaults:www-data !requiretty
www-data ALL=(postgres) NOPASSWD: /usr/bin/osm2pgsql

where /usr/bin/osm2pgsql should be replaced by the actual path of osm2pgsql. The !requiretty line allows the sudo from a script without a terminal. This allows the command sudo -u postgres -H osm2pgsql with any options to work.
|
The goal is to let the www-data user execute sudo -u postgres -H osm2pgsql [some options here] without being asked for a password (as this will be part of a script wich runs automatically).
I thought I could do it with the following line in sudoers file
www-data ALL=NOPASSWD: /usr/bin/sudo -u postgres -H osm2pgsql *

But this does not seem to work.
What am I doing wrong?
|
Enable user to execute one command as another user
|
The cd should not be required. The following line should do the same.
/WebSphere/was85/mycel/mynode/AppServer/java_1.7_64/bin/java -cp \
/usr/my.jar com/com.my_comapny_witt_entire_name/myMain

I broke the command line into two lines by using backslash continuation. There must be no characters between the backslash and the newline for continuation to work.
An alternative is to add your Java bin directory to the PATH variable ahead of any directories that contain java. The command which java should tell you which java will be used.
PATH=/WebSphere/was85/mycel/mynode/AppServer/java_1.7_64/bin:$PATH
java -cp /usr/my.jar com/com.my_comapny_witt_entire_name/myMain
|
I am using IBM z/OS390 and I am using the OMVS shell.
I don't have "vi" installed at all so I use "oedit".
If I try to initialize the app.jar straight from the OMVS shell it works perfectly. I mean, if I reach the java directory and start the app from there with the two commands below, it works 100%.
cd /WebSphere/was85/mycel/mynode/AppServer/java_1.7_64/bin
./java -cp /usr/my.jar com/com.my_comapny_witt_entire_name/myMain

I created a script file via OEDIT and added both lines. The second was broken into two lines. Unfortunately I am getting this error, obviously because while running, the second command line has been treated as two different lines.
$ ./script_boot_app
Error: Could not find or load main class com.my_comapny_witt_entire_name.
/myMain: ./script_boot_app 3: FSUM7351 not found

Probably if the path and java package names were small enough to fit on one line, it wouldn't happen. So my question is: how can I use OEDIT and guarantee that multiple lines will be treated as just one line, exactly as when executing the command straight from the shell? When typing straight in the OMVS shell and the end of line is reached, naturally I continue on the line below and it is executed as a single command line. How can I do the same inside a script created in the OEDIT tool?
|
How to make a long command line broken into two lines execute as a single line inside a script
|
You probably can't do that.
Emacs is a Lisp interpreter that runs an editor (which is written in Emacs Lisp),
so every time you use emacs, you have access to the Lisp interpreter itself.
That interpreter can do all kinds of things: create/remove files or directories, change access rights etc... Basically, once you're inside emacs, you do not need to have access to a shell to do considerable damage - emacs itself is powerful enough.
Your best bet is probably to find a small emacs clone (for example here: http://www.emacswiki.org/emacs/EmacsImplementations ) that supports restricted editing.
|
I'm administrating an Arch Linux server.
How can I securely add emacs to my /etc/sudoers editor variable? Currently I have "emacs" but that allows M-x shell. Basically, I want something like rvim, but for emacs.
|
How can I run emacs with no shell?
|
To determine what editor to run, sudo checks three environment variables (in order): SUDO_EDITOR, VISUAL, and EDITOR, and uses the first editor it finds. (If it doesn't find one, it falls back to a default.)
So you can make it run vimdiff instead of vim as follows:
$ VISUAL=vimdiff sudoedit file1 file2If your sudoers policy only lets you edit certain files, this might fail, in which case you can add a parameter:
$ VISUAL='vimdiff file1' sudoedit file2In that case, I'm assuming you can read file1 as a normal user, but need root access to read file2.
(I'm using VISUAL because that's what I'm used to; feel free to use SUDO_EDITOR instead.)
|
I'm trying to get into the habit of editing root-owned files with sudoedit, instead of sudo vim. This has a few advantages, one of which is that it uses my user's ~/.vimrc.
Is there an equivalent, instead of using sudo vimdiff?
What I've tried

Instead of using vimdiff directly, one can open two files in vertical splits, then run :diffthis in both. However, if I open up one file with sudoedit, then I'd have to open the second file directly, instead of sudoedit automatically creating a copy of this file in /var/tmp.
One can also open files directly in splits using vim -O file1 file2. However, unsurprisingly, sudoedit -O fails.
|
Is there a sudoedit equivalent for vimdiff?
|
The issue here is that sudoedit copies the file to a temporary file before opening it in the editor. When the file has an extension, the temporary file is created with the same extension, and filename-based syntax highlighting modes are selected appropriately (e.g. for C files). When the file doesn’t have an extension, as is the case with nanorc, it is created with a random extension; this confuses filename-based syntax highlighting mode selection, and nano ends up treating the file as a standard text file.
If you can reconfigure nano to treat any nanorc* file as a configuration file, you’ll be able to restore the behaviour you’re after. Otherwise I’m not sure there’s a way to handle this automatically.
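One possible workaround, using nano's -Y/--syntax option to select a highlighting syntax by name instead of by filename (the syntax name nanorc is an assumption; it must match the name used in your colour definitions):

SUDO_EDITOR='/usr/local/bin/nano -Y nanorc' sudoedit /etc/nanorc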
|
Here are my personal aliases for editing root owned files:
# CLI superuser nano; compiled; version 2.8.0
function sunano {
    export SUDO_EDITOR='/usr/local/bin/nano'
    sudoedit "$@"
}

# GUI superuser xed; packaged; version 1.2.2
function suxed {
    export SUDO_EDITOR='/usr/bin/xed'
    sudoedit "$@"
}

# GUI superuser sublime-text; packaged; version 3126
function susubl {
    export SUDO_EDITOR='/opt/sublime_text/sublime_text -w'
    sudoedit "$@"
}

Let me take it from the end:

Sublime Text works great now thanks to Stephen Kitt's advice.
Xed seems to work well too; it shows that the privileges are elevated, which I personally don't like to be reminded of, but there seems to be no problem with it: colors are there and it didn't even need a wait switch like Sublime.
The problem I have is with Nano as follows:
If I invoke it as I was used to, e.g.:
sudo nano /etc/nanorc

The colors are there.
But if I call it with the new alias:

sunano /etc/nanorc

There are no colors whatsoever.
The configuration seems to have been read though, because it looks the same as I've configured it.

EDIT1: Apparently this issue affects at minimum the config file:

-rw-r--r-- 1 root root 8.6K Apr 8 02:30 /etc/nanorc

Other files, e.g. Bash or C++, are colored; I'm confused.
|
Nano through Sudoedit = No colors
|
Your file is likely missing its end-of-file newline. sudo expects that, and visudo will fail to validate a file missing it. Opening a file in Vi and saving it will add a newline at the end if necessary, fixing the file from sudo’s perspective.
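A quick shell check for this condition (a sketch; the file name comes from the question):

f=/etc/sudoers.d/foo_bar
if [ -n "$(tail -c1 "$f")" ]; then   # non-empty last byte means no final newline
    echo >> "$f"
fi
visudo -cf "$f"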
|
Alright, I know the question title sucks, but it's the same with the situation itself.
What I am trying to do is this:Create a file with sudoers configuration locally
Use Ansible to upload that file with the template module
Use the validate feature of the template module to make sure the configuration worksSo far, so good. Now comes the weird part: The validation (validate: 'visudo -cf %s') of that file throws an error. When I comment out the validation line the files gets uploaded, but a manual validation (visudo -cf /etc/sudoers.d/foo_bar) fails also. Opening the file using vi, saving it (:wq) without making any changes and running the validation again succeeds.
My current working thesis: WTF?!
But it's late and I am tired. If anyone has suggestions please let me know. I will update this question as soon as I have new information and I will clean it up once I zero in on a solution.
|
Uploading sudoers.d file through ansible gives syntax error but opening and saving in vi fixes it
|
in sudoers man page (man 5 sudoers) has been mentionedWhen multiple entries match for a user, they are applied in order.
Where there are multiple matches, the last match is used (which is not
necessarily the most specific match).So no matter if your config is specific.
Also consider if you have sudoers group ( like wheel in Red Hat based distros) the line
%wheel ALL=(ALL) ALL should be before NOPASSWD
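For example, an ordering along these lines (names are placeholders) lets the NOPASSWD rule take effect, because the last matching entry wins:

%wheel      ALL=(ALL) ALL
my_username ALL=(ALL) NOPASSWD: /sbin/shutdown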
|
I'm in Ubuntu 18.04LTS and I want to change the sudoers file to execute sudo shutdown -h now without the need of password (for my_username). The steps I take were:
With my user my_username open terminal:
sudo visudo
The line I added:
my_username ALL=(ALL) NOPASSWD: /sbin/shutdown
Where there is only one tab in the first part (between user and ALL) and the rest are spaces.
The user is the one it appears after id in terminal.
After that, just in case, I restarted the system and typed sudo shutdown -h now, but it keeps asking for a password.
What am I doing wrong?
-----EDIT------
Ok, I didn't know that the order in which you add the lines was important, so as asked I added my full file (it's a very simple sudoers config).
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:
my_username ALL=(ALL) NOPASSWD: /sbin/shutdown
#includedir /etc/sudoers.d

This way it works perfectly for me. The problem was that I had added the line right after the root line.
|
sudoers file change not working? [duplicate]
|
Per man visudo, section "Diagnostics":

/etc/sudoers.tmp: Permission denied
    You didn't run visudo as root.

I see nothing in your post to indicate that you did run it as root.
Try sudo visudo.
Also it looks like you may be getting errors related to sudo itself. Can you sudo ls ~root successfully?

You may also want to review the man page, as:

There is a hard-coded list of one or more editors that visudo will use
set at compile-time that may be overridden via the editor sudoers Default
variable. This list defaults to /usr/local/bin/vi. Normally, visudo
does not honor the VISUAL or EDITOR environment variables unless they
contain an editor in the aforementioned editors list.The man page proceeds to describe ways to work around this, but you should be aware of the security implications of doing so. I would advice you to just learn vi since it is both ubiquitous and extremely powerful. (Start by running vimtutor; set aside half an hour for this.)
|
RHEL 5.10
When I do visudo I get this error:
chuck 75->visudo
visudo: /etc/sudoers: Permission denied
visudo: /etc/sudoers: Permission deniedListing of sudo exe file and /etc/sudoers:
chuck 76->ls -l /etc/sudo*
-r--r----- 1 root root 3540 May 9 11:44 /etc/sudoers
-r--r----- 1 root root 3401 Aug 12 2014 /etc/sudoers.20140812

chuck 273->ls -l `which sudo`
-rwsr-xr-x 2 root root 182040 Mar 4 2014 /usr/bin/sudo

chuck 275->ls -l `which visudo`
-rwxr-xr-x 1 root root 98576 Mar 4 2014 /usr/sbin/visudoAs you can see my /etc/sudoers.20140812 backup file has the same permissions as the actual /etc/sudoers file. So I don't know what's up. Just before this happened I changed my environment var in my .cshrc VISUAL to: setenv VISUAL /usr/bin/nano.
I've tried in the shell unset VISUAL but I still get "permission denied" error.
I've tried in the shell setenv VISUAL but that didn't work. When I did visudo I still get "permission denied on /etc/sudoers".
I also logged that shell window off and logged into a new one and still get "permission denied" when I do visudo.
I tried googling the answer and tried a few things but that didn't work.
Searching stackexchange didn't show any past questions either.

TRIED: Another thing I tried, and the error message:
chuck 59->sudo chmod 0440 /etc/sudoers
sudo: /etc/sudoers is mode 0640, should be 0440
sudo: no valid sudoers sources found, quitting

TRIED: Making an alias I called editsudo:

alias editsudo 'sudo chmod 770 /etc/sudoers; sudo nedit /etc/sudoers; sudo chmod 0440 /etc/sudoers'
chuck 62->editsudo
sudo: /etc/sudoers is mode 0640, should be 0440
sudo: no valid sudoers sources found, quitting
sudo: /etc/sudoers is mode 0640, should be 0440
sudo: no valid sudoers sources found, quitting
sudo: /etc/sudoers is mode 0640, should be 0440
sudo: no valid sudoers sources found, quitting
|
Now visudo won't work at all
|
Setting my comment as an answer. Add this line as the first executable statement in your script
[[ $UID -ne 0 ]] && exec sudo "$0" "$@"

This checks whether you're running as root and, if not, restarts the script under sudo with the same arguments. Normal precautions and warnings apply in configuring sudo and when running things as root.
|
Using sudo visudo I add the line username ALL=(ALL) NOPASSWD: /home/user/script.sh in sudoers but the script.sh does not run on double click.
If I add the line username ALL=(ALL) NOPASSWD:ALL in sudoers, then script.sh runs and works when double clicked. How can I do it?
Thanks.
|
How to run a bash script by double clicking by entering the path in sudoers?
|
Use sudo visudo and comment out the offending line. Just make very sure that there is some way to get root privileges on this machine before you save the changes.
|
My user (tom) is mapped to user_u user , user_r role and user_t domain via semanage
[tom@localhost ~]$ id -Z
user_u:user_r:user_t:s0
[tom@localhost ~]$ because I have made the "default" as "user_u"
[tom@localhos ~]$ sudo semanage login -l

Login Name     SELinux User    MLS/MCS Range     Service

__default__    user_u          s0                *
root unconfined_u s0-s0:c0.c1023 *
system_u system_u s0-s0:c0.c1023 *
[tom@localhos ~]$ but it still can perform sudo
[tom@localhost ~]$ sudo -l
Matching Defaults entries for tom on localhost:

User tom may run the following commands on localhost:
(ALL) NOPASSWD: ALL
[tom@localhost ~]$It seems, this is because of "%<user tom's group> ALL = (ALL) NOPASSWD:ALL" in the sudoers
[tom@localhost ~]$ sudo cat /etc/sudoers
root ALL = (ALL) NOPASSWD:ALL
%<user tom's group> ALL = (ALL) NOPASSWD:ALL
admin ALL = (ALL) NOPASSWD:ALL
[tom@localhost ~]$Please help me fix my issue
|
My user (tom) has user_u, user_r and user_t via semanage but can still perform sudo
|
Sure. You can just drop files into /etc/sudoers.d instead of editing the sudoers file itself:
cat > /etc/sudoers.d/apache <<EOF
User_Alias APACHE = www-data
Cmnd_Alias FIREWALL = /sbin/iptables, /sbin/ifconfig, /sbin/route

APACHE ALL = (ALL) NOPASSWD: FIREWALL
EOF
chmod 440 /etc/sudoers.d/apacheAnd I guess if you were stuck with a really old sudo without support for the sudoers.d directory you could just concatenate that to /etc/sudoers:
if ! grep -q APACHE /etc/sudoers; then
cat >> /etc/sudoers <<EOF
User_Alias APACHE = www-data
Cmnd_Alias FIREWALL = /sbin/iptables, /sbin/ifconfig, /sbin/route

APACHE ALL = (ALL) NOPASSWD: FIREWALL
EOF
fi(The if ! grep -q ... is there to prevent concatenating this multiple times if the script is run more than once)
|
I have visudo-edited /etc/sudoers this way:
User_Alias APACHE = www-data
Cmnd_Alias FIREWALL = /sbin/iptables, /sbin/ifconfig, /sbin/route

APACHE ALL = (ALL) NOPASSWD: FIREWALL

(To allow PHP to run the iptables command.)
Is there a way to achieve the same purely from the terminal/tty/command line and/or a root shell script?
|
Sudo Apache, with command line ? (not visudo)
|
Reading through man 5 systemd.unit and man 5 systemd.target tells us that unit files are used to define targets as well as everything else in systemd. There is no documentation specifically on how to create a target, so it's hard to determine how it should be done, but it is not too different from creating a service.
When you create your target, you will need to make symlinks to the target.wants directory from the systemd services directory. Then you can set/boot your target. Here's how it might look given your example.
/etc/systemd/system/foo.target
This is the target's unit file. If graphical.target is taken as an example, we can create our own target using it as a base.
[Unit]
Description=Foobar boot target
Requires=multi-user.target
Wants=foobar.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target
AllowIsolate=yes

To explain the options, taken from the systemd manpages:

Description -- Describes the target. Self-explanatory.
Requires -- Hard dependencies of the target. You should let the basic system start before you start your own service(s)
Wants -- Soft dependencies. The target does not require these to start.
Conflicts -- If a unit has a Conflicts setting on another unit, starting the former will stop the latter and vice versa.
After -- Boots after these services
AllowIsolate -- Really up to you and your environment. Details are available in the manpage systemd.unit(5)

/etc/systemd/system/foo.target.wants/
This is the directory where you will link the services you create/require for your target. It is equivalent to the Wants= option in the unit file. Create this directory and then create symlinks like so: ln -s /usr/lib/systemd/system/bar.service /etc/systemd/system/foo.target.wants/bar.service. This creates a symlink from bar.service in the system directory to your foo.target.wants directory.
Creating a unit file for a service is somewhat out of the scope of this answer, and that topic is much better documented, so I'll leave it out for now. When you create your unit file, just symlink it into the target.wants directory or add it to the Wants= directive.
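Once the target and its wants directory are in place, you can select it in one of the standard ways (pick whichever fits your workflow):

systemctl set-default foo.target    # make it the default boot target
systemctl isolate foo.target        # switch a running system over to it
# or boot it once via the kernel command line: systemd.unit=foo.target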
|
After searching plenty through plenty a post, Youtube video, and "documentation" on the matter of systemd, I'm still at a loss.
The link (https://wiki.archlinux.org/index.php/systemd#Create_custom_target) seemed promising, but was a bit vague (to me).
Question
How would one go about creating a custom systemd target (IE: foo.target ) so that one may boot with select .service units?
ExampleSystem boots default.target (symlink of "foo.target")
"foo.target" only starts a barebones X server and GUI program, say "gvim".Reason
I'm simply looking to create a custom target for quickly launching one X program.
It'd be nice to exclude all the services I don't need.
Thanks in advance!
|
How to create a systemd target?
|
It is not possible. Systemd is noninteractive.
In 2023, Fedora 38 shortened the default timeout duration to 45s.
|
Is it possible to interactively skip the 90s timeout in systemd? For example, when it is waiting for a disk to become available or for a user to log out? I know it will fail eventually, so can I just make it fail now? I hate just staring at the screen helplessly.
|
How do I skip the 90s timeout in systemd
|
Using the nofail mount option will ignore missing drives during boot. See man pages fstab(5) and mount(8).nofail Do not report errors for this device if it does not exist.So your fstab line should instead look like:
UUID=6826692e-79f4-4423-8467-cef4d5e840c5 /backup/external ext4 defaults,nofail 0 0
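On systems with a reasonably recent util-linux, you can sanity-check the edited fstab before rebooting:

sudo findmnt --verify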
|
Using Fedora 24, I had configured in /etc/fstab an external usb drive:
UUID=6826692e-79f4-4423-8467-cef4d5e840c5 /backup/external ext4 defaults 0 0

When I unplugged the usb disk and rebooted, it did not boot.
That is the error message:
[ TIME ] Timed out waiting for device dev-disk-by\x2duuid-6826692e\x2d79f4\x2d4423\x2d8467\x2dcef4d5e840c5.device.
[DEPEND] Dependency failed for /backup/external.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.Why does not boot? is it a bug? a feature? of systemd?
I know that was a mistake of me, I had to set options to "noauto", but anyway Why booting process stops because a non-critical directory of FHS is missing?
|
Cannot boot because missing external disk
|
Install rEFInd
After some further research, I've found this reddit thread from someone with an identical problem. Multiple posters in this and other threads recommended installing rEFInd instead.
rEFInd was straightforward to install and immediately detected my Windows partition.
I followed these YouTube tutorials, which I recommend:

https://www.youtube.com/watch?v=bg0BV5ZJCZU
https://www.youtube.com/watch?v=g2YYC1f3mnw
|
I've followed the classic procedure to install Windows and Linux in dual boot. First I installed Windows in UEFI mode, then I use a bootable PopOS key to resize the main Windows partition; I created a Linux partition as well as a 500MB /boot/efi partition in the remaining space.
My problem is, systemd-boot can't seem to detect the Windows bootloader.
When I display the systemd-boot menu, it only lists PopOS as a possible boot option, even though I can launch Windows from my BIOS menu with no problem.
When I run bootctl, I get the following output:
System:
Firmware: UEFI 2.70 (American Megatrends 5.14)
Secure Boot: disabled
Setup Mode: setupCurrent Boot Loader:
Product: systemd-boot 245.4-4ubuntu3.1pop0~1590695674~20.04~eaac747
Features: ✓ Boot counting
✓ Menu timeout control
✓ One-shot menu timeout control
✓ Default entry control
✓ One-shot entry control
✓ Support for XBOOTLDR partition
✓ Support for passing random seed to OS
✓ Boot loader sets ESP partition information
ESP: /dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515
File: └─/EFI/SYSTEMD/SYSTEMD-BOOTX64.EFIRandom Seed:
Passed to OS: yes
System Token: set
Exists: yesAvailable Boot Loaders on ESP:
ESP: /boot/efi (/dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515)
File: └─/EFI/systemd/systemd-bootx64.efi (systemd-boot 245.4-4ubuntu3.1pop0~1590695>
File: └─/EFI/BOOT/BOOTX64.EFI (systemd-boot 245.4-4ubuntu3.1pop0~1590695674~20.04~e>

Boot Loaders Listed in EFI Variables:
Title: Linux Boot Manager
ID: 0x0003
Status: active, boot-order
Partition: /dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515
File: └─/EFI/SYSTEMD/SYSTEMD-BOOTX64.EFI

Title: Windows Boot Manager
ID: 0x0000
Status: active, boot-order
Partition: /dev/disk/by-partuuid/42f0d8f0-13e0-41cf-bc36-ac80dccc54fd
File: └─/EFI/MICROSOFT/BOOT/BOOTMGFW.EFI

Title: UEFI OS
ID: 0x0009
Status: active, boot-order
Partition: /dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515
File: └─/EFI/BOOT/BOOTX64.EFI

Boot Loader Entries:
$BOOT: /boot/efi (/dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515)

Default Boot Loader Entry:
title: Pop!_OS
id: Pop_OS-current.conf
source: /boot/efi/loader/entries/Pop_OS-current.conf
linux: /EFI/Pop_OS-3ce60b75-530a-4cad-9e80-5156a8e6bb56/vmlinuz.efi
initrd: /EFI/Pop_OS-3ce60b75-530a-4cad-9e80-5156a8e6bb56/initrd.img
options: root=UUID=3ce60b75-530a-4cad-9e80-5156a8e6bb56 ro quiet loglevel=0 systemd.sh>

Notice the Windows Boot Manager entry under Boot Loaders Listed in EFI Variables. It seems systemd-boot is somewhat aware that my Windows partition exists; it just won't detect it as something that can be booted from.
(running bootctl install doesn't seem to change anything)
My /boot/efi/ directories look like this:
/boot/efi/EFI
├── BOOT
│ └── BOOTX64.EFI
├── Linux
├── Pop_OS-3ce60b75-530a-4cad-9e80-5156a8e6bb56
│ ├── cmdline
│ ├── initrd.img
│ └── vmlinuz.efi
└── systemd
    └── systemd-bootx64.efi

/boot/efi/loader/entries/
└── Pop_OS-current.confSo the directories that should have been populated with the Windows Bootloader somehow aren't.
How can I diagnose this problem, and add Windows as a startup option to systemd-boot?
|
Pop OS: systemd-boot can't detect Windows
|
After reading comment #6 on "systemd-boot, no timeout, no select menu - LoaderEntryDefault" and looking at "systemd-boot sets efivar LoaderEntryDefault, which overrides default in /boot/loader/loader.conf" nixpkgs issue on GitHub, I've figured out that the issue was probably caused by EFI variables which somehow got set and were overriding the settings from /loader/loader.conf.
Indeed, there were these two variables set that were causing trouble:
§ cat /sys/firmware/efi/efivars/LoaderConfigTimeout-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f
0and
§ cat /sys/firmware/efi/efivars/LoaderEntryDefault-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f
nixos-generation-374(I've made up the value '374' here: it only matters that it was different from the one in /loader/loader.conf at the time when I inspected it.)
The list of EFI variable used by systemd-boot can be found at the end of "systemd-boot UEFI Boot Manager" page on Freedesktop Wiki:
LoaderEntryDefault entry identifier to select as default at bootup non-volatile
LoaderConfigTimeout timeout in seconds to show the menu non-volatile
LoaderEntryOneShot entry identifier to select at the next and only the next bootup non-volatile
LoaderDeviceIdentifier list of identifiers of the volume the loader was started from volatile
LoaderDevicePartUUID partition GPT UUID of the ESP systemd-boot was executed from volatileTo remove LoaderEntryDefault-[...] variable it was enough to press d key twice in the boot menu: to set and unset a new value.
To remove LoaderConfigTimeout-[...] variable it turned out enough to press Shift+t enough times to set the timeout to 0, plus one more time.
This resolved my problem. Here is a related question I asked on Superuser.SE about safely modifying EFI variables in general.
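If you want to check for these override variables directly, a simple listing works (this assumes efivarfs is mounted at the standard location):

ls /sys/firmware/efi/efivars/ | grep '^Loader'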
|
Recently, after I've done something to my multiboot system, when I boot NixOS with systemd-boot, boot menu does not show up anymore even though the timeout is still set to 2 seconds in /loader/loader.conf (on the ESP):
# /loader/loader.conf on the ESP
timeout 2
default nixos-generation-380Here is what I have in my /etc/nixos/configuration.nix:
{ # ...
boot.loader = {
efi.canTouchEfiVariables = true;
systemd-boot.enable = true;
timeout = 2;
};
}It turned out that to see the boot menu, I had to press down some key during start-up, as if the timeout had been set to 0 (instead of 2) seconds.
I tried removing systemd-bootx64.efi from the ESP and re-installing NixOS with nixos-install from a USB flash drive. This restored systemd-bootx64.efi but did not bring back the boot menu.
It seems that this problem is not completely uncommon:

Reddit: systemd-boot menu suddenly disappeared?
Arch Linux Forums: systemd-boot, no timeout, no select menu - LoaderEntryDefault

Both issues are reported to be solved. However, I did not understand the first solution:

Edit 3: SOLVED! Reinstalling the UEFI did the trick.

What does it mean to "reinstall the UEFI"?
As to the second, it suggests using the t and Shift+t keys in the boot menu (which shows up if some key is pressed during start-up) to set a different timeout; but I do not want just a different timeout, I want systemd-boot to respect the settings in /loader/loader.conf.
So, my question was: how to make systemd-boot use again the settings from /loader/loader.conf?
I am editing this question after I've found the solution, and I am going to post my answer now.
|
systemd-boot skips boot menu and ignores settings in /loader/loader.conf
|
In man systemd.directives, you can search for "output" and find that StandardOutput= is documented in man systemd.exec. There you can find options including journal+console, which sends output to the systemd journal and the system console. You might also try kmsg+console; according to the docs, kmsg "connects standard output with the kernel log buffer which is accessible via dmesg(1)".
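A minimal sketch of where this would go in the unit from the question (StandardError= accepts the same values, per man systemd.exec):

[Service]
Type=oneshot
ExecStart=/usr/bin/provision-operator
# Send output both to the journal and to the system console:
StandardOutput=journal+console
StandardError=journal+console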
|
I have a custom systemd service that runs during the first boot.
If the user has no bootsplash I would like to write to the console and give some info on what's going on. Is there a way to do that from my service?
Here's my systemd service:
[Unit]
Description=Prepare operator after installation
[emailprotected] [emailprotected] [emailprotected] [emailprotected] [emailprotected] [emailprotected]
Wants=network-online.target
After=network.target network-online.target
OnFailure=emergency.target
OnFailureJobMode=replace-irreversibly

[Service]
Type=oneshot
ExecStart=/usr/bin/provision-operator

[Install]
WantedBy=multi-user.target
|
How do I write output to screen from systemd service during boot?
|
Boot from your bootable Arch Linux USB, mount all your partitions and chroot into the system.

As mentioned by jasonwryan:

You need to mount your ESP to /boot

First, create the efi folder:
mkdir /boot/efimount the esp partition
mount /dev/sda1 /boot/efi

Verify your /etc/fstab: the ESP mount point needs to be added to fstab, as in the sketch below.
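A hedged example of such an fstab line (the UUID is a placeholder; use the one blkid reports for your /dev/sda1):

# /etc/fstab entry mounting the ESP at /boot/efi (UUID is illustrative)
UUID=XXXX-XXXX  /boot/efi  vfat  defaults  0  2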
Create a new sub-directory /boot/efi/EFI/arch/
mkdir -p /boot/efi/EFI/arch/

Copy /boot/vmlinuz-linux, initramfs-linux.img and initramfs-linux-fallback.img:
cp /boot/vmlinuz-linux /boot/efi/EFI/arch/vmlinuz-linux.efi
cp /boot/initramfs-linux.img /boot/initramfs-linux-fallback.img /boot/efi/EFI/arch

Run mkinitcpio -p linux, then update GRUB:

grub-mkconfig -o /boot/grub/grub.cfg
|
I have installed Arch Linux for the first time. I attempted to set up my UEFI boot process but must have failed somewhere: on bootup I do see the boot menu with the Arch Linux option, but when I select it, I get the message /vmlinuz-linux: Not Found, i.e. it can't find the kernel to boot. I've followed the instructions at https://wiki.archlinux.org/index.php/Installation_guide but must have messed up somewhere.
How can I fix this?
partition layout:
/dev/sda1 EFI System (512M)
/dev/sda2 Linux fs (244M)
/dev/sda3 Linux fs (1M)
/dev/sda4 Linux fs (465G)

/etc/fstab:
#/dev/sda4
UUID=41d8483f-0d29-4234-bf1e-3c55346b5667 / ext4 rw,realtime,data=unordered 0 1

The ESP was set up in /boot/.
edit 1
Oh yeah, I can boot anytime from my USB thumb drive for troubleshooting...
edit 2
I see, my /boot/loader/entries/arch.conf looks like:
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options root=PARTUUID=41d8483f-0d29-4234-bf1e-3c55346b5667 rw

but there are no files in my / at all, only the directories. Might that be the problem?
|
installed Arch Linux but cannot boot
|
If you're already using the new LUKS2 format, you can set a label:
For new LUKS2 containers:
# cryptsetup luksFormat --type=luks2 --label=foobar foobar.img
# blkid /dev/loop0
/dev/loop0: UUID="fda16145-822e-405c-9fe8-fe7e7f0ddb5e" LABEL="foobar" TYPE="crypto_LUKS"For existing LUKS2 containers:
# cryptsetup config --label=barfoo /dev/loop0
# blkid /dev/loop0
/dev/loop0: UUID="fda16145-822e-405c-9fe8-fe7e7f0ddb5e" LABEL="barfoo" TYPE="crypto_LUKS"However, it's not possible to set a label for the more common LUKS1 header.With LUKS1, you can only set a label on a higher layer. For example, if you are using GPT partitions, you can set a PARTLABEL.
# parted /dev/loop0
(parted) name 1 foobar
(parted) print
Model: Loopback device (loopback)
Disk /dev/loop0: 105MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name    Flags
 1      1049kB  104MB  103MB               foobar

This sets the partition label of partition 1 to "foobar".
You can identify it with PARTLABEL=foobar or find it in /dev/disk/by-partlabel/
# ls -l /dev/disk/by-partlabel/foobar
lrwxrwxrwx 1 root root 13 Oct 10 20:10 /dev/disk/by-partlabel/foobar -> ../../loop0p1

Similarly, if you use LUKS on top of LVM, you could go with VG/LV names.

As always with labels, take extra care to make sure each label doesn't exist more than once. There's a reason why UUIDs are meant to be "universally unique". You get a lot of problems when trying to use the wrong device; it can even cause data loss (e.g. if cryptswap formats the wrong device on boot).
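Tying this back to the question's cryptdevice= line: I'm not certain there is a dedicated PARTLABEL= tag syntax for the encrypt hook, but since udev creates the /dev/disk/by-partlabel/ symlinks, passing that path as the device should work. A sketch, assuming a partition labeled foobar (the mapper name ABC is taken from the question):

options rw cryptdevice=/dev/disk/by-partlabel/foobar:ABC root=/dev/mapper/ABC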
|
I'm running Arch Linux with systemd boot. In /boot/loader/entries/arch.conf I currently specify the luks crypto device with a line like this:
options rw cryptdevice=/dev/sda1:ABC root=/dev/mapper/ABCI know I can also use UUID instead of /dev/sda1. In that case the kernel options line would look like this:
options rw cryptdevice=UUID=1f5cce52-8299-9221-b2fc-19cebc959f51:ABC root=/dev/mapper/ABCHowever, can I instead use either a partition label or a volume label or any other kind of label? If so, what is the syntax?
|
How to specify cryptdevice by label using systemd boot?
|
On giving this another look, it seems you have a mismatch between PARTUUID=... and UUID=... and that's what's causing this problem.
You mentioned the bootloader is configured with:

options root=PARTUUID=2251a5a4-6c18-425c-9264-df971d297b09 rw

But when you manage to boot it, you actually find this UUID under /dev/disk/by-uuid:

kodi@BB-8:~$ ls -al /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 15 Jul 26 11:10 2251a5a4-6c18-425c-9264-df971d297b09 -> ../../mmcblk0p2

Furthermore, that's even listed in /proc/cmdline of the successful boot (I'm assuming it's the one with the SuperGrub USB):

BOOT_IMAGE=/boot/vmlinuz-4.19.0-5-amd64 root=UUID=2251a5a4-6c18-425c-9264-df971d297b09 ro

PARTUUID= and UUID= refer to two different UUIDs: PARTUUID= is the UUID found in the GPT partition table (that's why it's called "PART": it's a property of the partition, recorded in the partition table), while UUID= is a UUID recorded in the filesystem (ext4, xfs, or whichever filesystem you formatted the partition with), and Linux is able to read those while scanning the disks.
So, it looks like you need to fix your boot options to use UUID= instead of PARTUUID=, since the UUID you have is a filesystem UUID and not a partition UUID.
Edit the file /boot/efi/loader/entries/debian.conf and replace the last line with:

options root=UUID=2251a5a4-6c18-425c-9264-df971d297b09 rw

That should fix your issue!
Make sure your /etc/fstab in the system also matches the correct tag.
You can also use the blkid command to inspect the UUIDs present in your partitions and filesystems. This might help you confirm that you have the correct kind of UUID.
For instance, using blkid -o export should display something like:
$ sudo blkid -o export
DEVNAME=/dev/mmcblk0p1
SEC_TYPE=msdos
LABEL=boot
UUID=9A8B-7C6D
TYPE=vfat
PARTUUID=abcd1234-01

DEVNAME=/dev/mmcblk0p2
UUID=2251a5a4-6c18-425c-9264-df971d297b09
TYPE=ext4
PARTUUID=abcd1234-01

...

That should help you see all UUIDs with a tag that is recognized by Linux.
|
I'm running Debian Testing (Buster) and I'm swapping from Grub2 to systemd-boot, as I couldn't get Grub2 to work and someone suggested I try systemd-boot instead.
The boot/root drive is on an eMMC drive on the motherboard, whilst the data drive is on an mSATA SSD.
I have systemd-boot half working and it crashes on boot with this message
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
done.
Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! PARTUUID=2251a5a4-6c18-425c-9264-df971d297b09 does not exist. Dropping to shell!

I've rebooted and managed to log in using a SuperGrub USB disk, and I can see that the UUID does match my root partition, so I don't know why it cannot find it.
/proc/cmdline output
BOOT_IMAGE=/boot/vmlinuz-4.19.0-5-amd64 root=UUID=2251a5a4-6c18-425c-9264-df971d297b09 ro

/boot and /boot/efi listings
kodi@BB-8:~$ ls /boot
config-4.15.0-3-amd64 grub System.map-4.15.0-3-amd64 vmlinuz-4.19.0-5-amd64
config-4.19.0-5-amd64 initrd.img-4.15.0-3-amd64 System.map-4.19.0-5-amd64 vmlinuz-4.9.45-ubilinux+
config-4.9.45-ubilinux+ initrd.img-4.19.0-5-amd64 System.map-4.9.45-ubilinux+
efi                      initrd.img-4.9.45-ubilinux+  vmlinuz-4.15.0-3-amd64

kodi@BB-8:~$ sudo ls /boot/efi
debian  EFI  loader

kodi@BB-8:~$ sudo ls /boot/efi/debian
drwx------ 2 root root 4096 Jul 16 22:31 .
drwx------ 5 root root 4096 Jan 1 1970 ..
-rwx------ 1 root root 31595838 Jul 26 11:09 initrd.img-4.15.0-3-amd64
-rwx------ 1 root root 33228805 Jul 26 11:09 initrd.img-amd64
-rwx------ 1 root root 4933392 Jul 26 11:09 vmlinuz-4.15.0-3-amd64
-rwx------ 1 root root  5217520 Jul 26 11:09 vmlinuz-amd64

UUID of drives and a df command
kodi@BB-8:~$ ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 220 Jul 26 08:59 .
drwxr-xr-x 8 root root 160 Jul 26 08:58 ..
lrwxrwxrwx 1 root root 15 Jul 26 11:10 2251a5a4-6c18-425c-9264-df971d297b09 -> ../../mmcblk0p2
lrwxrwxrwx 1 root root 10 Jul 26 11:10 42a36b04-83f8-4105-aef4-7f24b9ffff66 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jul 26 11:10 8280cf20-b70e-44f4-b092-6d3f92d54eab -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jul 26 11:10 8A84-E6C0 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jul 26 11:10 8B91-8099 -> ../../sdb4
lrwxrwxrwx 1 root root 15 Jul 26 11:10 A9CE-4035 -> ../../mmcblk0p1
lrwxrwxrwx 1 root root 10 Jul 26 11:10 b2a67d10-07da-4eb4-bc16-b768084db045 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 26 11:10 b618f5b0-2b8a-4e33-b288-407fd4355f83 -> ../../sdb5
lrwxrwxrwx 1 root root 10 Jul 26 11:10 f9a00ae7-07d2-4726-947b-03a4074049dd -> ../../sda2

kodi@BB-8:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 783M 78M 705M 10% /run
/dev/mmcblk0p2 57G 14G 41G 26% /
tmpfs 3.9G 39M 3.8G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 820G 555G 224G 72% /home
/dev/mmcblk0p1 511M 82M 429M 17% /boot/efi
/dev/sda2 96G 67G 24G 74% /home/hts
10.1.1.1:/media/backup 2.7T 2.3T 178G 93% /media/backup
tmpfs 783M 44K 783M 1% /run/user/1001
/dev/dm-0 7.8G 36M 7.3G 1% /media/kodi/8280cf20-b70e-44f4-b092-6d3f92d54eab
/dev/sdb4 7.9G 4.8G 3.2G 61% /media/kodi/DATA
/dev/sdb3 14G 12G 1.7G 88% /media/kodi/boot

Update: add modules
Add to /etc/initramfs-tools/modules
mmc_core
mmc_block
sdhci
sdhci-pci

Then, for the 4.19.0-5 kernel, type:
sudo update-initramfs -u -k allRebooted and I get the same message.
UPDATE: my systemd loader.conf
Here is my /boot/efi/loader/entries/debian.conf
title Debian
linux /debian/vmlinuz-amd64
initrd /debian/initrd.img-amd64
options root=PARTUUID=2251a5a4-6c18-425c-9264-df971d297b09 rw

This is the UUID for my /dev/mmcblk0p2 eMMC device, which is mounted as root.
|
systemd-boot cannot find my root
|
After some days of researching, I have three approaches to the problem of creating custom GRUB entries for running a systemd Debian without a graphical desktop. I think the best approach is 1.
1. Creating a new /etc/grub.d/* configuration file
To do that, i copied /etc/grub.d/10_linux file as a template:
sudo cp /etc/grub.d/10_linux /etc/grub.d/11_multiuserThe original file creates the root entry for the latest kernel and also the "Advanced options" submenu. So, I edited my 11_multiuser file a little bit, just to create a new submenu for the multiuser options, and create inside a new option for each kernel, for the multiuser mode. Here I'll add a patch with the modified lines:
--- /etc/grub.d/10_linux
+++ /etc/grub.d/11_multiuser
@@ -118,6 +118,8 @@
case $type in
recovery)
title="$(gettext_printf "%s, with Linux %s (%s)" "${os}" "${version}" "$(gettext "${GRUB_RECOVERY_TITLE}")")" ;;
+ multiuser)
+ title="$(gettext_printf "%s, with Linux %s (multiuser)" "${os}" "${version}")" ;;
init-*)
title="$(gettext_printf "%s, with Linux %s (%s)" "${os}" "${version}" "${type#init-}")" ;;
*)
@@ -227,57 +229,18 @@
boot_device_id=
title_correction_code=

-cat << 'EOF'
-function gfxmode {
- set gfxpayload="${1}"
-EOF
-if [ "$vt_handoff" = 1 ]; then
- cat << 'EOF'
- if [ "${1}" = "keep" ]; then
- set vt_handoff=vt.handoff=7
- else
- set vt_handoff=
- fi
-EOF
-fi
-cat << EOF
-}
-EOF
-
-# Use ELILO's generic "efifb" when it's known to be available.
-# FIXME: We need an interface to select vesafb in case efifb can't be used.
-if [ "x$GRUB_GFXPAYLOAD_LINUX" != x ] || [ "$gfxpayload_dynamic" = 0 ]; then
- echo "set linux_gfx_mode=$GRUB_GFXPAYLOAD_LINUX"
-else
- cat << EOF
-if [ "\${recordfail}" != 1 ]; then
- if [ -e \${prefix}/gfxblacklist.txt ]; then
- if hwmatch \${prefix}/gfxblacklist.txt 3; then
- if [ \${match} = 0 ]; then
- set linux_gfx_mode=keep
- else
- set linux_gfx_mode=text
- fi
- else
- set linux_gfx_mode=text
- fi
- else
- set linux_gfx_mode=keep
- fi
-else
- set linux_gfx_mode=text
-fi
-EOF
-fi
-cat << EOF
-export linux_gfx_mode
-EOF
-
# Extra indentation to add to menu entries in a submenu. We're not in a submenu
# yet, so it's empty. In a submenu it will be equal to '\t' (one tab).
submenu_indentation=""-is_top_level=true
+# for the multiuser menu
+submenu_indentation="$grub_tab"
+if [ -z "$boot_device_id" ]; then
+ boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+fi
+gettext_printf "Agregando entradas multiuser...\n" >&2
+echo "submenu '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote) (MultiUser)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"
+is_top_level=false
while [ "x$list" != "x" ] ; do
linux=`version_find_latest $list`
case $linux in
@@ -331,34 +294,9 @@
linux_root_device_thisversion=${GRUB_DEVICE}
fi

- if [ "x$is_top_level" = xtrue ] && [ "x${GRUB_DISABLE_SUBMENU}" != xy ]; then
- linux_entry "${OS}" "${version}" simple \
- "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}"
-
- submenu_indentation="$grub_tab"
-
- if [ -z "$boot_device_id" ]; then
- boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
- fi
- # TRANSLATORS: %s is replaced with an OS name
- echo "submenu '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"
- is_top_level=false
- fi
-
- linux_entry "${OS}" "${version}" advanced \
- "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}"
-
- for supported_init in ${SUPPORTED_INITS}; do
- init_path="${supported_init#*:}"
- if [ -x "${init_path}" ] && [ "$(readlink -f /sbin/init)" != "${init_path}" ]; then
- linux_entry "${OS}" "${version}" "init-${supported_init%%:*}" \
- "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT} init=${init_path}"
- fi
- done
- if [ "x${GRUB_DISABLE_RECOVERY}" != "xtrue" ]; then
- linux_entry "${OS}" "${version}" recovery \
- "${GRUB_CMDLINE_LINUX_RECOVERY} ${GRUB_CMDLINE_LINUX}"
- fi
+ linux_entry "${OS}" "${version}" multiuser \
+ "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT} systemd.unit=multi-user.target"
+ list=`echo $list | tr ' ' '\n' | fgrep -vx "$linux" | tr '\n' ' '`
done

With this solution, if I add/remove kernels, or perform some action that involves any reconfiguration of the GRUB menu, my desired multiuser entries will be automatically added for each kernel. Also, I think (but am not completely sure) that if I update GRUB, my new configuration file 11_multiuser won't be removed, given that it's not part of GRUB's predefined configuration files.
2. Modifying /etc/grub.d/10_linux file
This is another approach, but I think it is worse than the first one. This way, you are modifying the official file, so you could break GRUB's configuration and the whole system startup. Also, if an update replaces the file, you could lose your configuration. There is only one advantage to doing this: you could insert your multiuser entries in the "Advanced options" submenu. The patch added for the first approach is partially valid for this. Anyway, I totally advise against this approach.
3. Modifying /etc/grub.d/40_custom file
This file is intended for inserting specific entries. You could copy the entry from /boot/grub/grub.cfg and paste it into this file, adding the systemd.unit=multi-user.target option (see the sketch below). It is perfectly OK, but the problem is that you must do it for every kernel you want. Also, when removing/adding kernels, you must maintain this file manually. Plus, these entries appear at the end of the GRUB menu, and if you have other operating systems like Windows, your custom entries will be separated from the first Linux entries.
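As a hedged illustration of approach 3, such a 40_custom entry might look like this (built from the entry quoted in the question; adjust UUIDs, kernel version and root device to your system):

menuentry 'Debian GNU/Linux (multi-user.target)' {
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos5'
        search --no-floppy --fs-uuid --set=root 3202c741-ef05-40e4-9368-8617e7b1fb3c
        # The only real change: append systemd.unit=multi-user.target
        linux /vmlinuz-4.8.0-2-amd64 root=UUID=17f74892-fe09-46ec-91ca-2dca457565a1 ro quiet systemd.unit=multi-user.target
        initrd /initrd.img-4.8.0-2-amd64
}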
|
I have Debian Stretch and I would like to have a custom GRUB entry in order to run the system without a graphical desktop. I thought that would be as easy as running a different runlevel, but reading about it, I became aware that in systemd everything is different.
After reading this question about Red Hat and also this other for Debian Jessie, I learnt about systemd targets, and I think that what I want to do is running in multi-user.target.
I've found this fedora link, this archlinux kernel link and this other one. All of them explain that there is an option, "systemd.unit", that can be appended to the "linux" line in the GRUB menu entry. So, I searched for links explaining how to create a custom menu entry: this one. But, looking at my own automatic GRUB entries with the 'e' key on the GRUB screen, they are more complex than the one in the link. The problem is that I don't know if I must copy all that stuff into the custom menu entry.
setparams 'Debian gnu/linux, con linux 4.8.0-2-amd64'
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part-msdos
insmod ext2
set root='hd0,msdos5'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 3202c741-ef05-40e4-9368-8617e7b1fb3c
else
search --no-floppy --fs-uuid --set=root 3202c741-ef05-40e4-9368-8617e7b1fb3c
fi
echo 'Cargando Linux 4.8...'
linux /vmlinuz-4.8.0-2-amd64 root=UUID=17f74892-fe09-46ec-91ca-2dca457565a1 ro quiet
echo 'Cargando imagen de memoria inicial...'
initrd /initrd.img-4.8.0-2-amd64

This is my automatically created entry for my latest kernel. Can I simply copy all this into a custom menu entry and change only the

linux /vmlinuz-4.8.0-2-amd64 root=UUID=17f74892-fe09-46ec-91ca-2dca457565a1 ro quiet

line to be

linux /vmlinuz-4.8.0-2-amd64 root=UUID=17f74892-fe09-46ec-91ca-2dca457565a1 ro quiet systemd.unit=multi-user.target

?
|
Create debian grub custom entry for running systemd multiuser.target
|
This is a kernel build option, so you cannot "add" it at runtime.
Either build your own kernel, or ask your maintainer to build the kernel with this option which they may or may not do because some kernel options depend on others and those dependencies might be very undesirable, e.g. they could slow down the kernel considerably.
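As a quick sanity check (my addition, not part of the original answer), you can see whether your current kernel was built with this option before considering a rebuild:

# Most distributions install the build config next to the kernel:
grep CONFIG_TASK_DELAY_ACCT /boot/config-"$(uname -r)"

# Kernels built with CONFIG_IKCONFIG_PROC expose it at runtime instead:
zgrep CONFIG_TASK_DELAY_ACCT /proc/config.gz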
|
Looking about, I see that the standard fix is to add this to the kernel boot parameters.
Using systemd-boot, my arch.conf looks like this:
title Arch Linux
linux /vmlinuz-linux
initrd /intel-ucode.img
initrd /initramfs-linux.img
options root=PARTUUID="98b3b4f7-e7f9-6f49-be81-a2ee709c7a3e" rwHow do I add CONFIG_TASK_DELAY_ACCT to the options entry?
Another line?
Or by using some delimiter, add it to the existing line?
What value should I be setting it to?
|
IOTOP complains: CONFIG_TASK_DELAY_ACCT not enabled in kernel
|
From this answer on askUbuntu, I used TestDisk to recover the data in the deleted EFI partition.
I took the Microsoft folder from /boot/efi/EFI in the recovered (deleted) EFI partition and copied it to the same destination in the new EFI partition. And voilà! Windows Boot Manager showed up in the systemd-boot menu.
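A hedged sketch of that copy step, assuming TestDisk extracted the recovered files to ~/recovered and the new ESP is mounted at /boot/efi (both paths are assumptions, not from the original steps):

# Copy the recovered Windows boot files into the new ESP
sudo cp -r ~/recovered/EFI/Microsoft /boot/efi/EFI/

systemd-boot looks for EFI/Microsoft/Boot/bootmgfw.efi on the ESP it was started from, which is why the entry then appears without any further configuration.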
|
I installed Pop-OS in a dual-boot system.
Previously, my EFI partition was around 250 MB. The Pop installer told me that it was too small. So instead of resizing and moving it (due to the possibility of data loss and Windows not booting), I deleted the old EFI partition and created a new one for the install.
Output of efibootmgr:
BootCurrent: 0006
Timeout: 1 seconds
BootOrder: 0006,0007,0002
Boot0002* Windows Boot Manager
Boot0006* Pop!_OS 20.04 LTS
Boot0007* UEFI OS

Output of bootctl:
System:
Firmware: UEFI 2.70 (American Megatrends 5.13)
Secure Boot: disabled
Setup Mode: user

Current Boot Loader:
Product: systemd-boot 245.4-4ubuntu3.6pop0~1617377648~20.04~eafddeb
Features: ✓ Boot counting
✓ Menu timeout control
✓ One-shot menu timeout control
✓ Default entry control
✓ One-shot entry control
✓ Support for XBOOTLDR partition
✓ Support for passing random seed to OS
✓ Boot loader sets ESP partition information
ESP: /dev/disk/by-partuuid/06919b6c-bed1-461e-9b6d-04dc9597fd38
File: └─/EFI/SYSTEMD/SYSTEMD-BOOTX64.EFI

Random Seed:
Passed to OS: yes
System Token: set
Exists: yes

Available Boot Loaders on ESP:
ESP: /boot/efi (/dev/disk/by-partuuid/06919b6c-bed1-461e-9b6d-04dc9597fd38)
File: └─/EFI/systemd/systemd-bootx64.efi (systemd-boot 245.4-4ubuntu3.6pop0~1617377648~20.04~eafddeb)
File: └─/EFI/BOOT/BOOTX64.EFI (systemd-boot 245.4-4ubuntu3.6pop0~1617377648~20.04~eafddeb)

Boot Loaders Listed in EFI Variables:
Title: Pop!_OS 20.04 LTS
ID: 0x0006
Status: active, boot-order
Partition: /dev/disk/by-partuuid/06919b6c-bed1-461e-9b6d-04dc9597fd38
File: └─/EFI/SYSTEMD/SYSTEMD-BOOTX64.EFI

Title: UEFI OS
ID: 0x0007
Status: active, boot-order
Partition: /dev/disk/by-partuuid/06919b6c-bed1-461e-9b6d-04dc9597fd38
File: └─/EFI/BOOT/BOOTX64.EFI

Boot Loader Entries:
$BOOT: /boot/efi (/dev/disk/by-partuuid/06919b6c-bed1-461e-9b6d-04dc9597fd38)

Default Boot Loader Entry:
title: Pop!_OS
id: Pop_OS-current.conf
source: /boot/efi/loader/entries/Pop_OS-current.conf
linux: /EFI/Pop_OS-39f0e06d-54c4-4fd3-af74-605fcd37bc55/vmlinuz.efi
initrd: /EFI/Pop_OS-39f0e06d-54c4-4fd3-af74-605fcd37bc55/initrd.img
options: root=UUID=39f0e06d-54c4-4fd3-af74-605fcd37bc55 ro quiet loglevel=0 systemd.show_status=false splash

There is no Windows in EFI variables.
I increased the timeout of systemd-boot to 5 seconds, and now I see Pop OS and Boot to System Firmware.
This answer required the Windows EFI partition, which I deleted. Is there a way to get Windows Entry in systemd-boot?
GParted partition layout screenshot (if it matters) omitted here.
|
Deleted the Windows EFI partition, What to do?
|
RequiredBy= does not imply that one service should start after another.
In man systemd.unit, the docs for RequiredBy= say:

The primary result is that the current unit will be started when the listed unit is started.

In other words, they could end up started in parallel. I think you want a Before= directive in the [Unit] section (ordering directives live there, not in [Install]). The docs in man systemd.unit have this to say about Before=:

If a unit foo.service contains a setting Before=bar.service and both units are being started, bar.service's start-up is delayed until foo.service is started up. Note that this setting is independent of and orthogonal to the requirement dependencies as configured by Requires=.
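A minimal sketch of the adjusted unit (only the Before= line is new; everything else is taken from the question below):

[Unit]
Description=Blabla service
Requires=network-online.target nfs-common.service
After=network-online.target nfs-common.service
# Make the web services wait until this unit has finished starting:
Before=php5-fpm.service apache2.service nginx.service

[Service]
Type=oneshot
ExecStart=/path/to/script
RemainAfterExit=no

[Install]
RequiredBy=php5-fpm.service apache2.service nginx.service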
|
Here's my service file:
[Unit]
Description=Blabla service
Requires=network-online.target nfs-common.service
After=network-online.target nfs-common.service

[Service]
Type=oneshot
ExecStart=/path/to/script
RemainAfterExit=no

[Install]
RequiredBy=php5-fpm.service apache2.service nginx.serviceWhen enabling it looks promising:
# systemctl enable blabla.service
Created symlink from /etc/systemd/system/php5-fpm.service.requires/blabla.service to /etc/systemd/system/blabla.service.
Created symlink from /etc/systemd/system/apache2.service.requires/blabla.service to /etc/systemd/system/blabla.service.
Created symlink from /etc/systemd/system/nginx.service.requires/blabla.service to /etc/systemd/system/blabla.service.

Then after the restart systemd-analyze gives me the following:
# systemd-analyze blame
18.434s blabla.service
5.942s cloud-init.service
2.766s networking.service
1.671s apache2.service
1.398s cloud-init-local.service
1.276s newrelic-sysmond.service
856ms php5-fpm.service
586ms nginx.service
.....

According to the docs for Type=oneshot:

Behavior of oneshot is similar to simple; however, it is expected that the process has to exit before systemd starts follow-up units.

Any ideas?
|
Systemd RequiredBy directive is ignored
|
It looks like the lvm2 hook, which runs after the encrypt hook during Arch Linux's initial RAM filesystem phase, is not able to activate thinly provisioned logical volumes.
With the same storage configuration as depicted in my question, except with normal logical volumes instead of thinly provisioned ones, the volume group containing them can be activated without any problems. With this change Arch Linux boots successfully.
So instead of creating thin logical volumes:
$ lvcreate --type thin-pool --name pool --size 75G system
$ lvcreate --type thin --name swap --virtualsize 4G --thinpool system/pool
$ mkswap -L swap /dev/system/swap
$ lvcreate --type thin --name arch.root --virtualsize 20G --thinpool system/pool
$ mkfs -t ext4 -L arch.root /dev/system/arch.rootOne must create normal logical volumes:
$ lvcreate --name swap --size 4G system
$ mkswap -L swap /dev/system/swap
$ lvcreate --name arch.root --size 20G system
$ mkfs -t ext4 -L arch.root /dev/system/arch.root
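As a quick check (my own addition, not part of the original fix), you can confirm the volumes are plain linear LVs before rebooting; in lvs output the Attr column starts with "-wi" for linear volumes, versus "twi"/"Vwi" for thin pools and thin volumes:

$ lvs -o lv_name,vg_name,lv_attr,lv_size system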
|
I'm not able to boot a freshly installed Arch Linux system with its root file system residing on a LVM thinly provisioned logical volume on a LUKS encrypted partition on a GUID partitioned device.
When Systemd's boot loader systemd-boot loads my boot entry Arch Linux it asks me for the passphrase of the LUKS encrypted partition but then, after 10 seconds, fails to activate the LVM volume group on which the root file system of Arch Linux resides. Eventually it drops me into the rescue shell rootfs.
Any ideas if what I'm trying to achieve is possible with Arch Linux?
N.B. I'm new to Arch Linux and thus carefully read upfront all of the Arch Wiki articles relevant for installing an Arch Linux system on that particular storage configuration.
The boot entry Arch Linux (/boot/loader/entries/arch.conf) I specifically configured looks as follows:
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=PARTUUID=<of /dev/sda3>:system root=UUID=<of /dev/system/arch.root> rootfstype=ext4 add_efi_memmapFor completeness what follows is a gist of how I configured my storage devices:
$ parted --align optimal
(parted) unit MiB
(parted) select /dev/sda
(parted) mklabel gpt
(parted) mkpart primary 1 2
(parted) name 1 bios
(parted) set 1 bios_grub on
(parted) mkpart primary 2 1026
(parted) name 2 uefi
(parted) set 2 boot on
(parted) mkpart primary 1026 103426
(parted) name 3 system
(parted) quit
$ mkfs -t vfat -n UEFI -F 32 /dev/sda2
$ cryptsetup luksFormat --hash sha512 --cipher aes-xts-plain64 --key-size 512 /dev/sda3
$ cryptsetup open /dev/sda3 system
$ pvcreate /dev/mapper/system
$ vgcreate system /dev/mapper/system
$ lvcreate --type thin-pool --name pool --size 75G system
$ lvcreate --type thin --name swap --virtualsize 4G --thinpool system/pool
$ mkswap -L swap /dev/system/swap
$ lvcreate --type thin --name arch.root --virtualsize 20G --thinpool system/pool
$ mkfs -t ext4 -L arch.root /dev/system/arch.root
|
How to install Arch Linux root file system on LVM thin LVs on LUKS on GUID partitioned device?
|
It seems this did the job: apt-get purge grub-common (purge also removes configuration files, which remove would leave behind).
And to remove unused dependencies (at least in my case):
apt-get purge libfreetype6 libfuse2 libpng16-16 mokutil shim-helpers-amd64-signed shim-signed-common shim-unsigned
System booted without problems.
Hope the next upgrade will run without problems.
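To double-check nothing GRUB-related is left behind (my addition, assuming a Debian/apt system):

# Any remaining grub packages? ("rc" means removed, but config files kept)
dpkg -l | grep -i grub

# Confirm systemd-boot is still the registered boot loader:
bootctl status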
|
The first thing I do with a new Linux box is to install systemd-boot. GRUB, one would think this abomination was made by the hand of MS! Okay, back to the subject:
I just ran an upgrade on my new Debian Buster box and a new kernel update was available. The update package seems to look for GRUB, which made me think I'd better remove it; not that I expect the upgrade to run much smoother, but it seems to be the right thing to do, and I had never thought of it before.
As I said, GRUB is disabled in favour of systemd-boot, which works perfectly on the box.
It seems I have these GRUB-related packages installed (list omitted here). Should I just uninstall them all? In any particular order? Any other steps to take? E.g. is it safe to delete the folder /boot/grub/?
Or am I better off just leaving it?
|
Uninstall grub from Debian (I'm using systemd-boot!)
|