source_id | question | response | metadata
---|---|---|---|
561,600 | I have around 50 very large csv files, they have thousands of lines. And I only want to keep the first 200 lines for each of them - I'm okay if the generated files to overwrite the original ones. What command should I use to do this? | Assuming that the current directory contains all CSV files and that they all have a .csv filename suffix: for file in ./*.csv; do
head -n 200 "$file" >"$file.200"
done This outputs the first 200 lines of each CSV file to a new file using head and a redirection. The new file's name is the same as the old but with .200 appended to the end of the name. There is no check to see if the new filename already exists or not. If you want to replace the originals: for file in ./*.csv; do
head -n 200 "$file" >"$file.200" &&
mv "$file.200" "$file"
done The && at the end of the head command makes it so that the mv won't be run if there was some issue with running head . If your CSV files are scattered in subdirectories under the current directory, then use shopt -s globstar and then replace the pattern ./*.csv in the loop with ./**/*.csv . This will locate any CSV file in or below the current directory and perform the operation on each. The ** globbing pattern matches "recursively" down into subdirectories, but only if the globstar shell option is set. For CSV files containing data with embedded newlines, the above will not work properly as you may possibly truncate a record.
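As a concrete illustration of that recursive variant (a sketch only, assuming bash, since globstar is a bash option): shopt -s globstar
for file in ./**/*.csv; do
head -n 200 "$file" >"$file.200" &&
mv "$file.200" "$file"
done
Again, none of these head-based loops are safe for CSV data containing embedded newlines.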
Instead, you would have to use some CSV-aware tool to do the job for you. The following uses CSVkit, a set of command-line tools for parsing and in general working with CSV files, together with jq , a tool for working with JSON files. There is no tool in CSV kit that can truncate a CSV file at a particular point, but we can convert the CSV files to JSON and use jq to only output the first 200 records: for file in ./*.csv; do
csvjson -H "$file" | jq -r '.[:200][] | map(values) | @csv' >"$file.200" &&
mv "$file.200" "$file"
done Given some CSV file like the below short example, a,b,c
1,2,3
"hello, world",2 3,4
"hello
there","my good
man",nice weather for ducks the csvjson command would produce [
{
"a": "a",
"b": "b",
"c": "c"
},
{
"a": "1",
"b": "2",
"c": "3"
},
{
"a": "hello, world",
"b": "2 3",
"c": "4"
},
{
"a": "hello\nthere",
"b": "my good\nman",
"c": "nice weather for ducks"
}
] The jq tool would then take this, and for each object in the array (restricted to the first 200 objects), extract the values as an array and format it as CSV. It's probably possible to do this transformation directly with csvpy , another tool in CSVkit, but as my Python skills are non-existent, I will not attempt to come up with a solution that does that. | {
"source": [
"https://unix.stackexchange.com/questions/561600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
562,463 | Given a string composed of 0 s and 1 s, my goal is to replace 0 by 1 and vice-versa. Example: Input 111111100000000000000 Intended output 000000011111111111111 I tried, unsuccessfully, the following sed command echo '111111100000000000000' | sed -e 's/0/1/g ; s/1/0/g'
000000000000000000000 What am I missing? | You can use tr for this, its main purpose is character translation: echo 111111100000000000000 | tr 01 10 Your sed command replaces all 0s with 1s, resulting in a string containing only 1s (the original 1s and all the replaced 0s), and then replaces all 1s with 0s, resulting in a string containing only 0s. On long streams, tr is faster than sed ; for a 100MiB file: $ time tr 10 01 < bigfileof01s > /dev/null
tr 10 01 < bigfileof01s > /dev/null 0.07s user 0.03s system 98% cpu 0.100 total
$ time sed y/10/01/ < bigfileof01s > /dev/null
sed y/10/01/ < bigfileof01s > /dev/null 3.91s user 0.11s system 99% cpu 4.036 total | {
"source": [
"https://unix.stackexchange.com/questions/562463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40484/"
]
} |
562,870 | #!/bin/sh
echo "Noise $1"
echo "Enhancement $2"
for snr in 0 5 10 15 20 25
do
python evaluate.py --noise $1 --snr 25 --iterations 1250 --enhancement $2
done If $2 is not specified, I don't want to pass the --enhancement $2 argument to my python script. How would I do that? | Modifying your original script: #!/bin/sh
echo "Noise $1"
echo "Enhancement $2"
for snr in 0 5 10 15 20 25
do
python evaluate.py --noise "$1" --snr "$snr" --iterations 1250 ${2:+--enhancement "$2"}
done The standard parameter expansion ${var:+word} will expand to word if the variable var is set and not empty. In the code above, we use it to add --enhancement "$2" to the command if $2 is available and not empty. I've also taken the liberty to assume that what you are giving to --snr as an option-argument should be the loop variable's value. My personal touch on the code (mostly just using printf rather than echo , avoiding long lines, and giving the code a bit more air): #!/bin/sh
printf 'Noise %s\n' "$1"
printf 'Enhancement %s\n' "$2"
for snr in 0 5 10 15 20 25; do
python evaluate.py \
--noise "$1" \
--snr "$snr" \
--iterations 1250 \
${2:+--enhancement "$2"}
done As mosvy points out in comments below: If your /bin/sh happens to be the dash shell, or some other shell that does not properly reset IFS as a new shell session starts (this is required by POSIX), and if you have, for one reason or other, exported IFS and given it a non-default value, then you may want to use unset IFS at the top of the above script. Do that whenever you have fixed all other issues that exporting IFS doubtlessly have raised (don't export IFS ). | {
"source": [
"https://unix.stackexchange.com/questions/562870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386283/"
]
} |
562,919 | I packed and compressed a folder to a .tar.gz archive.
After unpacking it was nearly twice as big. du -sh /path/to/old/folder = 263M
du -sh /path/to/extracted/folder = 420M I searched a lot and found out that tar is actually causing this issue by adding metadata or doing other weird stuff with it. I made a diff on 2 files inside the folder, as well as a md5sum. There is absolutely no diff and the checksum is the exact same value. Yet, one file is as twice as big as the original one. root@server:~# du -sh /path/to/old/folder/subfolder/file.mcapm /path/to/extracted/folder/subfolder/file.mcapm
1.1M /path/to/old/folder/subfolder/file.mcapm
2.4M /path/to/extracted/folder/subfolder/file.mcapm
root@server:~# diff /path/to/old/folder/subfolder/file.mcapm /path/to/extracted/folder/subfolder/file.mcapm
root@server:~#
root@server:~# md5sum /path/to/old/folder/subfolder/file.mcapm
root@server:~# f11787a7dd9dcaa510bb63eeaad3f2ad
root@server:~# md5sum /path/to/extracted/folder/subfolder/file.mcapm
root@server:~# f11787a7dd9dcaa510bb63eeaad3f2ad I am not searching for different methods, but for a way to reduce the size of those files again to their original size. How can I achieve that? | [this answer is assuming GNU tar and GNU cp] There is absolutely no diff and the checksum is the exact same value. Yet, one file is as twice as big as the original one. 1.1M /path/to/old/folder/subfolder/file.mcapm
2.4M /path/to/extracted/folder/subfolder/file.mcapm That .mcapm file is probably sparse . Use the -S ( --sparse ) tar option when creating the archive. Example: $ dd if=/dev/null seek=100 of=dummy
...
$ mkdir extracted
$ tar -zcf dummy.tgz dummy
$ tar -C extracted -zxf dummy.tgz
$ du -sh dummy extracted/dummy
0 dummy
52K extracted/dummy
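$ du -sh --apparent-size dummy extracted/dummy   # aside (GNU du): the apparent sizes match even though the allocated blocks differ, a quick way to spot files that have lost their sparseness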
$ tar -S -zcf dummy.tgz dummy
$ tar -C extracted -zxf dummy.tgz
$ du -sh dummy extracted/dummy
0 dummy
0 extracted/dummy You can also "re-sparse" a file afterwards with cp --sparse=always : $ dd if=/dev/zero of=junk count=100
...
$ du -sh junk
52K junk
$ cp --sparse=always junk junk.sparse && mv junk.sparse junk
$ du -sh junk
0 junk | {
"source": [
"https://unix.stackexchange.com/questions/562919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/391216/"
]
} |
562,932 | My home desktop system is Ubuntu 18.04.1 with kernel regularly updated, currently 5.3.0. From time to time, mostly when browsing but not necessarily, the system becomes slow on IO:
- hdd LED constantly on
- system slow on all disk request. E.g. console login or ls ~/ takes minutes
- system fast on other things (mouse moves, virtual console switching)
- iotop shows multiple apps 99% waiting for IO
- iostat shows high wrqm, low wrkb/s after a few minutes the system goes into a complete freeze, I only can make a hard reboot What can I do to investigate the problem better?
What scheduler would you recommend?
If it's a single app killing my hdd, is there a way to disallow it to do so? Update :
The disk is HDD, i.e. a spinning disk. The apps showing IO waits are just all doing IO really. No swapping, there is enough memory. No relevant lines in syslog, I'll see /var/log/messages on the next occurrence | | {
"source": [
"https://unix.stackexchange.com/questions/562932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170566/"
]
} |
563,203 | I want to write a CGI, which must read a specified number of bytes from STDIN. My idea is to do it this way: dd bs=$CONTENT_LENGTH count=1 But I was wondering, if the block size is limited by anything else but the RAM. $ dd bs=1000000000000
dd: memory exhausted by input buffer of size 1000000000000 bytes (931 GiB) The manual page of GNU's coreutils does not specify any limit. | The POSIX specifications for dd don't specify a maximum explicitly, but there are some limits: the datatype used to store the value given can be expected to be size_t , since that's the type of the number of bytes to read given to the read function ; read is also specified to have a limit of SSIZE_MAX ; under Linux, read only transfers up to 2,147,479,552 bytes anyway. On a 64-bit platform, size_t is 64 bits in length; in addition, it's unsigned, so dd will fail when given values greater than 2^64 - 1: $ dd if=/dev/zero of=/dev/null bs=18446744073709551616
dd: invalid number: '18446744073709551616'
dd: invalid number: '9223372036854775808': Value too large for defined data type
$ dd if=/dev/zero of=/dev/null bs=9223372036854775807
dd: memory exhausted by input buffer of size 9223372036854775807 bytes (8.0 EiB) Once you find a value which is accepted, the next limit is the amount of memory which can be allocated, since dd needs to allocate a buffer before it can read into it. Once you find a value which can be allocated, you'll hit the read limit (on Linux and other systems with similar limits), unless you use GNU dd and specify iflag=fullblock : $ dd if=/dev/zero of=ddtest bs=4294967296 count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 38.3037 s, 56.1 MB/s ( dd copied just under 2^31 bytes, i.e. the Linux limit mentioned above, not even half of what I asked for). As explained in the Q&A linked above, you'll need fullblock to reliably copy all the input data in any case, for any value of bs greater than 1. | {
"source": [
"https://unix.stackexchange.com/questions/563203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7167/"
]
} |
563,287 | I'm trying to dump the env from a systemd service unit and systemctl show-environment doesn't do what I want. Is there any way to systemctl to show me what the environment looks like inside my service? | If your service is running, you can use systemctl status <name>.service to identify the PID(s) of the service process(es), and then use sudo strings /proc/<PID>/environ to look at the actual environment of the process. | {
"source": [
"https://unix.stackexchange.com/questions/563287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147473/"
]
} |
564,998 | $ printf "hi"
hi$ printf "hi\n"
hi
$ printf "hi\\n"
hi Why doesn't the last line print hi\n ? | This is nothing to do with printf , and everything to do with the argument that you have given to printf . In a double-quoted string, the shell turns \\ into \ . So the argument that you have given to printf is actually hi\n , which of course printf then performs its own escape sequence processing on. In a double-quoted string, the escaping done through \ by the shell is specifically limited to affecting the โ, \ , ` , $ , and " characters. You will find that \n gets passed to printf as-is. So the argument that you have given to printf is actually hi\n again . Be careful about putting escape sequences into the format string for printf . Only some have defined meanings in the Single Unix Specification . \n is defined, but \c is actually not, for example. Further reading https://unix.stackexchange.com/a/359510/5132 POSIX Shell: inside of double-quotes, are there cases where `\` fails to escape `$`, ```, `"`, `\` or `<newline>`? Why is a single backslash shown when using quotes Echo new line and string beginning \t Why does dash expand \\\\ differently to bash? https://unix.stackexchange.com/a/558665/5132 | {
"source": [
"https://unix.stackexchange.com/questions/564998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151111/"
]
} |
565,905 | I'm using macOS 10.15.2 with iTerm2, zsh 5.7.1 and oh-my-zsh (theme robbyrussell). I noticed that the prompt print is slightly slow respect to the bash one. For example, if I press enter , cursor initially goes at the beginning of the next line then, after a little while, the shell prompt comes in and the cursor is moved to its natural position. For example, if โ ~ is the prompt when I'm in my home folder, and [] is my cursor, when I press enter I see: 0 - Idle status โ ~ [] 1 - Immediately after pressing enter [] 2 - Back to idle status โ ~ [] This slowness is particularly evident when I quickly press enter multiple times. In this case, I see some blank lines. This is what I see โ ~
โ ~
โ ~
โ ~
โ ~
โ ~
โ ~
โ ~
โ ~ [] I come from bash shell and when I use bash, there is not such a slowness. I'm not sure this is an issue of oh-my-zsh or its natural behavior. I'd like to know more about this and, eventually, how to fix it. Thanks. PS : the problem comes from oh-my-zsh and it persists even if I disable all the plugins. PPS : I previously posted this question on SO. Thanks to user1934428 for his help and for suggesting me to move this question here. | I don't know what oh-my-zsh puts in the prompt by default. Maybe it tries to identify the version control status, that's a very popular prompt component which might be time-consuming. To see what's going on, turn on command traces with set -x . โ ~
โ ~ set -x trace of the commands that are executed to calculate the prompt โ ~ trace of the commands that are executed to calculate the prompt โ ~ set +x +zsh:3> set +x
โ ~
โ ~ If the trace is so long that it scrolls off the screen, redirect it to a file with exec 2>zsh.err This directs all error messages to the file, not just the trace. To get traces and errors back on the terminal, run exec 2>/dev/tty You can customize the trace format through PS4 . This is a format string which can contain prompt escapes . For example, to add precise timing information: PS4='%D{%s.%9.}+%N:%i> ' | {
"source": [
"https://unix.stackexchange.com/questions/565905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180227/"
]
} |
565,949 | I have two packages which are in conflict after installing a new one with pacman on arch. How can I list all installed packages that are depending on the ones in conflict? Or more general: How can I list all installed packages that are depending on a certain other package | To list the dependencies use pacman -Si (i.e., pacman --sync --info )
or pacman -Qi (i.e., pacman --query --info ). To list the reverse dependencies: pacman -Sii (i.e., pacman --sync --info --info ; yes two infos). Arch Linux: Querying package dependencies | {
"source": [
"https://unix.stackexchange.com/questions/565949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258564/"
]
} |
566,981 | When I try to implement the C string library myself, I found that glibc and the Linux kernel have a different way to implement some functions. For instance, glibc memchr and glibc strchr use some trick to speed up the function but the kernel memchr and the kernel strchr don't. Why aren't the Linux kernel functions optimized like glibc? | The kernel does have optimised versions of some of these functions, in the arch-specific directories; see for example the x86 implementation of memchr (see all the memchr definitions , and all the strchr definitions ). The versions you found are the fallback generic versions; you can spot these by looking for the protective check, #ifndef __HAVE_ARCH_MEMCHR for memchr and #ifndef __HAVE_ARCH_STRCHR for strchr . The C libraryโs optimised versions do tend to used more sophisticated code, so the above doesnโt explain why the kernel doesnโt go to such great lengths to go fast. If you can find scenarios where the kernel would benefit from a more optimised version of one of these functions, I imagine a patch would be welcome (with appropriate supporting evidence, and as long as the optimised function is still understandable โ see this old discussion regarding memcpy ). But I suspect the kernelโs uses of these functions often wonโt make it worth it; for example memcpy and related functions tend to be used on small buffers in the kernel. And never discount the speed gains from a short function which fits in cache or can be inlined... In addition, as mentioned by Iwillnotexist Idonotexist , MMX and SSE canโt easily be used in the kernel , and many optimised versions of memory-searching or copying functions rely on those. In many cases, the version used ends up being the compilerโs built-in version anyway, and these are heavily optimised, much more so than even the C libraryโs can be (for example, memcpy will often be converted to a register load and store, or even a constant store). | {
"source": [
"https://unix.stackexchange.com/questions/566981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259329/"
]
} |
567,053 | I'd like to copy a public ssh key from the ~/.ssh/id_rsa.pub file on my local machine to the ~/.ssh/authorized_keys file on a remote host that is two ssh hops away. In other words, localhost only has ssh access to host1 , but host1 has ssh access to host2 . I want to copy my public ssh key from localhost to host2 . To copy a an ssh key to a remote host one hop away, the ssh documentation gives the command: ssh-copy-id -i ~/.ssh/mykey user@host Is there a way to copy the key to a machine that is two hops away in a single command? | You can pass any ssh option to ssh-copy-id with the -o option. By using the ProxyJump option you can use ssh-copy-id to copy your key to a host via jump host. Here's an example where I copy my ssh key to leia.spack.org via the jump host jump.spack.org: $ ssh-copy-id -o ProxyJump=jump.spack.org leia.spack.org
[email protected]'s password:
Number of key(s) added: 1 And then test it with: $ ssh -J jump.spack.org leia.spack.org
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-42-generic x86_64) | {
"source": [
"https://unix.stackexchange.com/questions/567053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311384/"
]
} |
567,338 | I have a number of files (Jupyter notebooks, .ipynb ) which are text files. All of these contain some LaTeX markup. But when I run file , I get: $ file nb_*
nb_1.ipynb: ASCII text
nb_2.ipynb: ASCII text
nb_3.ipynb: ASCII text, with very long lines
nb_4.ipynb: LaTeX document, ASCII text, with very long lines
nb_5.ipynb: text, with very long lines How does file distinguish these? I would like all files to have the same type. (Why should the files have the same type? I am uploading them to an online system for sharing. The system classifies them somehow and treats them differently, with no possibility for me to change this. I suspect the platform uses file or maybe libmagic internally and would like to work around this.) | The file type recognition is driven by so-called magic patterns. The magic file for analyzing TeX family source code contains a number of macro names that cause
a file to be classified as LaTeX . Each match is assigned a strength , e. g. 15 in case of \begin and 18 for \chapter . This makes the heuristic more robust against
false positives like misclassification of Plain TeX or ConTeXt
documents that happen to define their own macros with those names. | {
"source": [
"https://unix.stackexchange.com/questions/567338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/395210/"
]
} |
567,531 | I always do this to append text to a file echo "text text text ..." >> file
# or
printf "%s\n" "text text text ..." >> file I wonder if there are more ways to achieve the same, more elegant or unusual way. | I quite like this one, where I can set up a log file at the top of a script and write to it throughout without needing either a global variable or to remember to change all occurrences of a filename: exec 3>> /tmp/somefile.log
...
echo "This is a log message" >&3
echo "This goes to stdout"
echo "This is written to stderr" >&2 The exec 3>dest construct opens the file dest for writing (use >> for appending, < for reading - just as usual) and attaches it to file descriptor #3. You then get descriptor #1 for stdout , #2 for stderr , and this new #3 for the file dest . You can join stderr to stdout for the duration of a script with a construct such as exec 2>&1 - there are lots of powerful possibilities. The documentation ( man bash ) has this to say about it: exec [-cl] [-a name] [command [arguments]] If command is specified, it replaces the shell. [...] If command is not specified, any redirections take effect in the current shell [...]. | {
"source": [
"https://unix.stackexchange.com/questions/567531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245871/"
]
} |
568,385 | If I run ls -a user@users-MacBook-Pro:~$ ls -a
. .. I get . and .. (current directory and parent directory?) Is there a reason why they show up after ls -a , do they do anything interesting? | Because -a means show all files. It is useful when combined with -l . As to why show those useless files when not using -l , because of consistency, and because Unix does not try to double guess what is good for you. There is an option -A (for at least GNU ls ) that excludes these two ( .. , and . ). Interestingly the idea of hidden files in Unix came about by a bug in ls where it was trying to hide these two files. To make the code simple the original implementation only checked the first character. People used this to hide files, it then became a feature, and the -a option was added to show the hidden files. Later someone was wondering, the same as you, why . and .. are shown, we know they are there. The -A option was born. Note: Unix has a much looser meaning of file than you may have. FILE โ {normal-file, directory, named-pipe, unix-socket, symbolic-link, devices}. | {
"source": [
"https://unix.stackexchange.com/questions/568385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/391527/"
]
} |
568,634 | I'm basically trying to figure out how one would go about making a GUI from absolute scratch with nothing but the linux kernel and programming in C. I am not looking to create a GUI desktop environment from scratch, but I would like to create some desktop applications and in my search for knowledge, all the information I have been able to find is on GUI APIs and toolkits. I would like to know, at the very least for my understanding of the fundamentals of how linux GUI is made, how one would go about making a GUI environment or a GUI appllication without using any APIs or toolkits. I am wondering if for example: existing APIs and toolkits work via system calls to the kernel (and the kernel is responsible at the lowest level for constructing a GUI image in pixels or something) these toolkits perform syscalls which simply pass information to screen drivers (is there a standard format for sending this information that all screen drivers abide by or do GUI APIs need to be able to output this information in multiple formats depending on the specific screen/driver?) and also if this is roughly true, does the the raw linux kernel usually just send information to the screen in the form of 8-bit characters? I just really want to understand what happens between the linux kernel, and what I see on my screen (control/information flow through both software and hardware if you know, what format the information takes, etc). I would so greatly appreciate a detailed explanation, I understand this might be a dousie to explain in sufficient detail, but I think such an explanation would be a great resource for others who are curious and learning. For context I'm a 3rd year comp sci student who recently started programming in C for my Systems Programming course and I have an intermediate(or so I would describe it) understanding of linux and programming. Again Thank you to anyone who helps me out!!! | How it works (Gnu/Linux + X11) Overview It looks something like this (not draws to scale) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ User โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ โ Application โ
โ โ โโโโโโโโโโโโฌโโโโโโฌโโโโโโฌโโโโโโค
โ โ โ ... โ SDL โ GTK โ QT โ
โ โ โโโโโโโโโโโโดโโโโโโดโโโโโโดโโโโโโค
โ โ โ xLib โ
โ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โโโโโโโดโโโโฌโโโโโโโโโดโโโ X11 โ
โ Gnu โ Libraries โ Server โ
โ Tools โ โ โ
โโโโโโโโโโโ โ โ
โโโโโโโโโโโโโโโโโโโโโโโค โ
โ Linux (kernel) โ โ
โโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ Hardware โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ We see from the diagram that X11 talks mostly with the hardware. However it needs to talk via the kernel, to initially get access to this hardware. I am a bit hazy on the detail (and I think it changed since I last looked into it). There is a device /dev/mem that gives access to the whole of memory (I think physical memory), as most of the graphics hardware is memory mapped, this file (see everything is a file) can be used to access it. X11 would open the file (kernel uses file permissions to see if it can do this), then X11 uses mmap to map the file into virtual memory (make it look like memory), now the memory looks like memory. After mmap , the kernel is not involved. X11 needs to know about the various graphics hardware, as it accesses it directly, via memory. (this may have changes, specifically the security model, may no longer give access to ALL of the memory.) Linux At the bottom is Linux (the kernel): a small part of the system. It provides access to hardware, and implements security. Gnu Then Gnu (Libraries; bash; tools:ls, etc; C compiler, etc). Most of the operating system. X11 server (e.g. x.org) Then X11 (Or Wayland, or ...), the base GUI subsystem. This runs in user-land (outside of the kernel): it is just another process, with some privileges.
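A rough way to observe this from a shell (an illustrative sketch, assuming the running server is Xorg and you have root; modern servers typically map GPU/DRM device files rather than raw /dev/mem): sudo grep /dev/ /proc/"$(pidof Xorg)"/maps   # shows which device files the X server has mapped into its address space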
The kernel does not get involved, except to give access to the hardware. And providing inter-process communication, so that other processes can talk with the X11 server. X11 library A simple abstraction to allow you to write code for X11. GUI libraries Libraries such as qt, gtk, sdl, are next โ they make it easier to use X11, and work on other systems such as wayland, Microsoft's Windows, or MacOS. Applications Applications sit on top of the libraries. Some low-level entry points, for programming xlib Using xlib, is a good way to learn about X11. However do some reading about X11 first. SDL SDL will give you low level access, direct to bit-planes for you to directly draw to. Going lower If you want to go lower, then I am not sure what good current options are, but here are some ideas. Get an old Amiga, or simulator. And some good documentation. e.g. https://archive.org/details/Amiga_System_Programmers_Guide_1988_Abacus/mode/2up (I had 2 books, this one and similar). Look at what can be done on a raspberry pi. I have not looked into this. Links X11 https://en.wikipedia.org/wiki/X_Window_System Modern ways Writing this got my interest, so I had a look at what the modern fast way to do it is. Here are some links: https://blogs.igalia.com/itoral/2014/07/29/a-brief-introduction-to-the-linux-graphics-stack/ | {
"source": [
"https://unix.stackexchange.com/questions/568634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/396359/"
]
} |
568,639 | I have pipe delimited text file named data.txt like ... Kalpesh|100|1
Kalpesh|500|1
Ramesh|500|1
Ramesh|500|1
Ramesh|500|1
Naresh|500|1
Ganesh|500|1
Ganesh|500|1
Ganesh|500|1
Ganesh|500|1 I am using an awk script as follows: awk -F"|" 'BEGIN { ln=0;slno=0;pg=0; }
{
name=$1;
{
if (name !=x||ln > 50) #if same name repeates more than 50times then new page
{
tot=0;
pg++;
printf("\f");
print "PERSONS HAVING OUTSTANDING ADVANCE SALARY"
print "+==============================+"
print "|Sr.| name |Amount Rs.|Nos |"
print "+==============================+"
ln=0;
}
if (name!=x)
slno=1;tot+=$2;
{
printf ("|%3s|%10s|%10.2f|%4d|\n",slno,$1,$2,$3,tot,$4);
ln++;
slno++;
x=name;
}
}
} END {
print "================================"
print "Total for",$1,slno,tot
print "================================"
print "\f" }' data.txt This is giving result like PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Kalpesh| 100.00| 1|
| 2| Kalpesh| 500.00| 1|
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Ramesh| 500.00| 1|
| 2| Ramesh| 500.00| 1|
| 3| Ramesh| 500.00| 1|
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Naresh| 500.00| 1|
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Ganesh| 500.00| 1|
| 2| Ganesh| 500.00| 1|
| 3| Ganesh| 500.00| 1|
| 4| Ganesh| 500.00| 1|
================================
Total for Ganesh 5 2000
================================ My desired output is like PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Kalpesh| 100.00| 1|
| 2| Kalpesh| 500.00| 1|
================================
Total for Kalpesh 2 600
================================
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Ramesh| 500.00| 1|
| 2| Ramesh| 500.00| 1|
| 3| Ramesh| 500.00| 1|
================================
Total for Ramesh 3 1500
================================
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Naresh| 500.00| 1|
================================
Total for Naresh 1 500
================================
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
| 1| Ganesh| 500.00| 1|
| 2| Ganesh| 500.00| 1|
| 3| Ganesh| 500.00| 1|
| 4| Ganesh| 500.00| 1|
================================
Total for Ganesh 5 2000
================================ | | {
"source": [
"https://unix.stackexchange.com/questions/568639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343138/"
]
} |
568,666 | From my understanding, $1 is the first field. But strangely enough, awk '$1=$1' omits extra spaces. $ echo "$string"
foo foo bar bar
$ echo "$string" | awk '$1=$1'
foo foo bar bar Why is this happening? | When we assign a value to a field variable ie. value of $1 is assigned to field $1, awk actually rebuilds its $0 by concatenating them with default field delimiter(or OFS) space. we can get the same case in the following scenarios as well... echo -e "foo foo\tbar\t\tbar" | awk '$1=$1'
foo foo bar bar
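echo "0 foo" | awk '$1=$1'   # caution (illustrative extra case): prints nothing, because the assigned value 0 is false and the default print action is skipped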
echo -e "foo foo\tbar\t\tbar" | awk -v OFS=',' '$1=$1'
foo,foo,bar,bar
echo -e "foo foo\tbar\t\tbar" | awk '$3=1'
foo foo 1 bar For GNU AWK this behavior is documented here: https://www.gnu.org/software/gawk/manual/html_node/Changing-Fields.html $1 = $1 # force record to be reconstituted | {
"source": [
"https://unix.stackexchange.com/questions/568666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245871/"
]
} |
568,671 | I have a question regarding changing the home folder for a user on the system.
I was thinking I could do something like: new_folder_name="$2"
user_name="$3"
mkdir /home/$new_folder_name
usermod -d -m /home/$new_folder_name/$user_name This unfortunately did not work and now I feel kinda lost. Anyone have some advice on how to do this? I used mkdir /home/$2
chown $3:$3 /home/$2
chmod 700 /home/$2
usermod --home /home/$2 $3 instead, which works, but it prints chown: invalid group:username:username afterwards, why is that? | | {
"source": [
"https://unix.stackexchange.com/questions/568671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/395310/"
]
} |
569,435 | I have large text file which contains data, formatted like this: 1
2
3
4
5
6
7
8
9
10 I am trying to convert it to this: 1 2 3
4 5 6
7 8 9
10 I tried awk : '{ if (NR%2) {printf "%40s\n", $0} else {printf "%80s\n", $0} }' file.txt | A solution with paste seq 10 | paste - - -
1 2 3
4 5 6
7 8 9
10 paste is a Unix standard tool, and the standard guarantees that this works for at least 12 columns. | {
"source": [
"https://unix.stackexchange.com/questions/569435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343138/"
]
} |
569,570 | I am trying to echo the content of key and certificate files encoded with base64 so that I can then copy the output into other places. I found this thread: Redirecting the content of a file to the command echo? which shows how to echo the file content and also found ways to keep the newline characters for encoding. However when I add the | base64 this breaks the output into multiple lines, and trying to add a second echo just replaces the newlines with white spaces. $ echo "$(cat test.key)" | base64
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRZ0lCQURBTkJna3Foa2lHOXcwQkFRRUZB
QVNDQ1N3d2dna29BZ0VBQW9JQ0FRRFF4Tkh0aHZvcEp1Z0EKOHBsSUNUUU1pOGMwMzRERlR6Z1E5
ME5tcE5zN2hRczNQZ0QwU2JuSFcyVGxqTS9oM1F1QVE0Q1dqaHRiV1ZUbgpSREcveGxWRFBESVVV
MzB1UHJnK0N6dlhOUkhzQkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==
$ echo $(echo "$(cat test.key)" | base64)
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRZ0lCQURBTkJna3Foa2lHOXcwQkFRRUZB QVNDQ1N3d2dna29BZ0VBQW9JQ0FRRFF4Tkh0aHZvcEp1Z0EKOHBsSUNUUU1pOGMwMzRERlR6Z1E5 ME5tcE5zN2hRczNQZ0QwU2JuSFcyVGxqTS9oM1F1QVE0Q1dqaHRiV1ZUbgpSREcveGxWRFBESVVV MzB1UHJnK0N6dlhOUkhzQkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg== The desired output would be: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRZ0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQ1N3d2dna29BZ0VBQW9JQ0FRRFF4Tkh0aHZvcEp1Z0EKOHBsSUNUUU1pOGMwMzRERlR6Z1E5ME5tcE5zN2hRczNQZ0QwU2JuSFcyVGxqTS9oM1F1QVE0Q1dqaHRiV1ZUbgpSREcveGxWRFBESVVVMzB1UHJnK0N6dlhOUkhzQkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg== How can I achieve this output? | Use the -w option (line wrapping) of base64 like this: ... | base64 -w 0 A value of 0 will disable line wrapping. | {
"source": [
"https://unix.stackexchange.com/questions/569570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/397172/"
]
} |
569,941 | I was wondering if there was a quick way to save a command in Ubuntu's terminal. The scenario is: Problem: Typed out a [long command] Forgot I needed to run [another command] before I could run the [long command] I want to be able to save the command for later use in an easy way that's not just putting a # before it and putting it in the up and down key history. Optimally saving it directly to a register or the clipboard. I forgot to mention that I didn't want to echo, either. | This is not your terminal , this is your shell . The name for the shell mechanism that you are looking for is a kill buffer . People forget that shell command line editors have these. ZLE in the Z shell has them, as have GNU Readline in the Bourne Again shell, libedit in the (FreeBSD) Almquist shell, the Korn shell's line editor, and the TENEX C shell's line editor. In all of these shells in emacs mode, simply go to the end of the line to be saved, kill it to the head kill buffer with โย Control + U , type and run the intermediate command, and then yank the kill buffer contents with โย Control + Y . Ensure that you do not do anything with the kill buffer when entering the intermediate command. In the Z shell in vi mode, you have the vi prefix sequences for specifying a named vi -style buffer to kill the line into. You can use one of the other buffers instead of the default buffer. Simply use something like " a d d (in vicmd mode) to delete the whole line into buffer "a", type and run the intermediate command, and then put that buffer's contents with " a p . In their vi modes, the Korn shell, GNU Readline in the Bourne Again shell, and libedit in the (FreeBSD) Almquist shell do not have named vi -style buffers, only the one cut buffer. d d to delete the line into that buffer, followed by putting the buffer contents with p , will work. But it uses the same vi -style buffer that killing and yanking will while entering the intermediate command. | {
"source": [
"https://unix.stackexchange.com/questions/569941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/397494/"
]
} |
570,477 | I have a few thousand files that are individually GZip compressed (passing of course the -n flag so the output is deterministic). They then go into a Git repository. I just discovered that for 3 of these files, Gzip doesn't produce the same output on macOS vs Linux. Here's an example: macOS $ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
6e145c6239e64b7e28f61cbab49caacbe0dae846ce33d539bf5c7f2761053712 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
3562fd9f1d18d52e500619b4a5d5dfa709f5da8601b9dd64088fb5da8de7b281 -
$ gzip --version
Apple gzip 272.250.1 Linux $ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
10ac8b80af8d734ad3688aa6c7d9b582ab62cf7eda6bc1a0f08d6159cad96ddc -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
cbf249e3a35f62a4f3b13e2c91fe0161af5d96a58727d17cf7a62e0ac3806393 -
$ gzip --version
gzip 1.6
Copyright (C) 2007, 2010, 2011 Free Software Foundation, Inc.
Copyright (C) 1993 Jean-loup Gailly.
This is free software. You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.
Written by Jean-loup Gailly. How is this possible? I thought the GZip implementation was completely standard? UPDATE: Just to confirm that macOS and Linux versions do produce the same output most of the time, both OSes output the same hash for: $ echo "Vive la France" | gzip --fast -n | shasum -a 256
af842c0cb2dbf94ae19f31c55e05fa0e403b249c8faead413ac2fa5e9b854768 - | Note that the compression algorithm (Deflate) in GZip is not strictly bijective. To elaborate: For some data, there's more than one possible compressed output depending on the algorithmic implementation and used parameters. So there's no guarantee at all that Apple GZip and gzip 1.6 will return the same compressed output. These outputs are all valid GZip streams, the standard just guarantees that every of these possible outputs will be decompressed to the same original data. | {
"source": [
"https://unix.stackexchange.com/questions/570477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/397994/"
]
} |
570,486 | I am trying to bind Ctrl+LeftArrow to backward-word in the terminal (no X Window System). But I observe that Ctrl+LeftArrow and LeftArrow generate identical escape sequences in the terminal: I press Ctrl+V I press LeftArrow I received ^[[D I press Ctrl+V I press Ctrl+LeftArrow I received ^[[D Same problem with Ctrl+RightArrow.
How I can fix it? (Debian: Linux v4.19.0-8-amd64) | | {
"source": [
"https://unix.stackexchange.com/questions/570486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398001/"
]
} |
570,494 | Edit:
Is there a way to tell if a linux iso will provide the "try X without installing" option? From https://ubuntu.com/download/desktop for example or https://lubuntu.net/downloads/ it is not clear which will give that option.
(My expectation was that all could "try without installing" but recently I tried some which excluded that option from the boot menu) Original, confused questions:
If I create a bootable usb (Ubuntu for example if it matters), does that mean it is a "live usb", where I can run the OS off the usb, or not necessarily, only that I could install the OS off the usb. Are there certain ISO's to choose to get live usb's or all ISO's for certain distros? Are there other names for 'live usb'? | | {
"source": [
"https://unix.stackexchange.com/questions/570494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398009/"
]
} |
570,530 | For example, if I run sudo apt-get -y upgrade if there is a package that requires a restart to upgrade, will the yes flag cause the system to reboot after the command finishes upgrading everything? Or, will it still require a manual reboot? OS and Software: Debian Buster 10 -> kernel version 4.19 on a Raspbian HW apt 1.8.2 ( armhf ) | No, apt on its own wonโt reboot. You can check whether the file /var/run/reboot-required exists after running apt to see if a reboot is required. If you use unattended-upgrades , you can configure that to reboot for you. | {
"source": [
"https://unix.stackexchange.com/questions/570530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398034/"
]
} |
570,729 | There are some utilities that accept a -- (double dash) as the signal for "end of options", required when a file name starts with a dash: $ echo "Hello World!" >-file
$ cat -- -file
Hello World!
$ cat -file # cat - -file fails in the same way.
cat: invalid option -- 'f'
Try 'cat --help' for more information. But some of those utilities don't show such an option in the manual page. The man page for cat doesn't document the use (or validity) of a -- argument in any of the OS'es. This is not meant to be a Unix - Linux flame war , it is a valid, and, I believe, useful concern. Neither cat , mv , ed (and I am sure many others) document such an option in their manual page that I can find. Note that ./-file is a more portable workaround to the use of -- .
For example, the source (dot) command (and written as . ) doesn't (generally) work well with an -- argument: $ echo 'echo "Hello World!"' >-file
$ . ./-file
Hello World!
$ . -file
ksh: .: -f: unknown option
ksh: .: -i: unknown option
ksh: .: -l: unknown option
ksh: .: -e: unknown option
Usage: . [ options ] name [arg ...]
$ . -- -file # works in bash. Not in dash, ksh, zsh.
ksh: .: -file: cannot open [No such file or directory] | This is a POSIX requirement for all utilities, see POSIX chapter 12.02 ,
Guideline 10 for more information: The first -- argument that is not an option-argument should be accepted as a delimiter indicating the end of options. Any following arguments should be treated as operands, even if they begin with the '-' character. POSIX recommends all utilities to follow these guidelines. There are a few exceptions like echo (read at OPTIONS) . And special builtins that do not follow the guidelines (like break , dot , exec , etc.) : Some of the special built-ins are described as conforming to XBD Utility Syntax Guidelines. For those that are not, the requirement in Utility Description Defaults that "--" be recognized as a first argument to be discarded does not apply and a conforming application shall not use that argument. The intent is to document all commands that do not follow the guidelines in their POSIX man page, from POSIX chapter 12.02 third paragraph: Some of the standard utilities do not conform to all of these guidelines; in those cases, the OPTIONS sections describe the deviations. As the cat POSIX man page documents no deviations in the OPTIONS section, it is expected that it accept -- as a valid argument. There still may be (faulty) implementations that fail to follow the guideline. In particular, most GNU core utilities follow this guideline | {
"source": [
"https://unix.stackexchange.com/questions/570729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
570,741 | Example contents of the playlist file: 1. The fire is on - 03:50
2. Abc dge khji kkt mmy kdj - 09:20
3. Blowing in the winds - 14:16
4. By the rivers of Babylon - 15:46
5. Waka waka it's time for africa - 20:30
6. DGF djf Kmf pffg jdkf dhf - 28:25
7. Fdsa djf | kf |- 34:25
8. Despacito despatico - 41:33
...
...
... The command - ffmpeg -i "a" -ss "b" -to "c" "output" Now from the list, the contents from the beginning (i.e from the serial no.) till the end of the text line (which may include pipes as well) should be the last argument of the command (in the position of 'output'), the timestamp at the end should be the argument for parameter -ss and the timestamp in the next line should be the argument of parameter -to This is quite similiar to this question but i am quite not sure how to modify the awk command to suit this particular case. | | {
"source": [
"https://unix.stackexchange.com/questions/570741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/332496/"
]
} |
572,294 | cp is a massively popular Linux tool maintained by the coreutils team of the GNU foundation. By default, files with the same name will be overwritten, if the user wants to change this behaviour they can add --no-clobber to their copy command: -n, --no-clobber
do not overwrite an existing file (overrides a previous -i option) Why not something like --no-overwrite ? | โ Clobber โ in the context of data manipulation means destroying data by overwriting it. In the context of files in a Unix environment, the word was used at least as far back as the early 1980s, possibly earlier. Csh had set noclobber to configure > to refuse to overwrite an existing file (later set -o noclobber in ksh93 and other sh-style shells). When GNU coreutils added --no-clobber (in 2009), they used the same vocabulary that shells were using. | {
"source": [
"https://unix.stackexchange.com/questions/572294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123737/"
]
} |
572,424 | I want a script to curl to a file and to put the status code into a variable (or, at least enable me to test the status code) I can see I can do it in two calls with e.g. url=https://www.gitignore.io/api/nonexistentlanguage
x=$(curl -sI $url | grep HTTP | grep -oe '\d\d\d')
if [[ $x != 200 ]] ; then
echo "$url SAID $x" ; return
fi
curl $url # etc ... but presumably there's a way to avoid the redundant extra call? $? doesn't help: status code 404 still gets an return code of 0 | #!/bin/bash
URL="https://www.gitignore.io/api/nonexistentlanguage"
response=$(curl -s -w "%{http_code}" $URL)
http_code=$(tail -n1 <<< "$response") # get the last line
content=$(sed '$ d' <<< "$response") # get all but the last line which contains the status code
echo "$http_code"
echo "$content" (There are other ways like --write-out to a temporary file. But my example does not need to touch the disk to write any temporary file and remembering to delete it; everything is done in RAM) | {
"source": [
"https://unix.stackexchange.com/questions/572424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181263/"
]
} |
572,427 | I've done tons of research on Computrace from Absolute Software and I haven't found a solid answer to: Does it work on Linux? I've read the following research papers and they don't refer to Linux or *NIX like systems once: Deactivate the Rootkit Absolute Backdoor Revisited They're all focused on reverse engineering the agents/binaries that are dropped onto Windows machines. I've looked at the running processes on multiple Linux systems with Computrace enabled and there isn't any sign of it. So I guess I've answered my own question, but for some reason I don't feel assured that it is 100% NOT working on Linux. If anyone here has experience with Computrace or has also tested themselves, please let me know! | | {
"source": [
"https://unix.stackexchange.com/questions/572427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
572,616 | I have two scripts that use GPU and train ML models. I want to start them before I go to sleep so they work at the night and I expect to see some results in the morning. But because of the GPU memory is limited, I want to run them in serial instead of parallel. I can do it with python train_v1.py && python train_v2.py ; but let's say I started to train the train_v1 . In the mean time, because the training takes long time, I started and finished the implementation of the second script, train_v2.py , and I want to run it automatically when python train_v1.py is finished. How can I achieve that? Thank you. | Here's an approach that doesn't involve looping and checking if the other process is still alive, or calling train_v1.py in a manner different from what you'd normally do: $ python train_v1.py
^Z
[1]+ Stopped python train_v1.py
$ % && python train_v2.py The ^Z is me pressing Ctrl + Z while the process is running to sleep train_v1.py through sending it a SIGTSTP signal. Then, I tell the shell to wake it with % , using it as a command to which I can add the && python train_v2.py at the end. This makes it behave just as if you'd done python train_v1.py && python train_v2.py from the very beginning. Instead of % , you can also use fg . It's the same thing. If you want to learn more about these types of features of the shell, you can read about them in the "JOB CONTROL" section of bash's manpage . EDIT: How to keep adding to the queue As pointed out by jamesdlin in a comment, if you try to continue the pattern to add train_v3.py for example before v2 starts, you'll find that you can't: $ % && python train_v2.py
^Z
[1]+ Stopped python train_v1.py Only train_v1.py gets stopped because train_v2.py hasn't started, and you can't stop/suspend/sleep something that hasn't even started. $ % && python train_v3.py would result in the same as python train_v1.py && python train_v3.py because % corresponds to the last suspended process. Instead of trying to add v3 like that, one should instead use history: $ !! && python train_v3.py
% && python train_v2.py && python train_v3.py One can do history expansion like above, or recall the last command with a keybinding (like up) and add v3 to the end. $ % && python train_v2.py && python train_v3.py That's something that can be repeated to add more to the pipeline. $ !! && python train_v3.py
% && python train_v2.py && python train_v3.py
^Z
[1]+ Stopped python train_v1.py
$ !! && python train_v4.py
% && python train_v2.py && python train_v3.py && python train_v4.py | {
"source": [
"https://unix.stackexchange.com/questions/572616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398373/"
]
} |
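If the long job had been started in the background of the same shell to begin with, a wait-based variant avoids the suspend/resume dance entirely; the script names are the question's, and this only works from the shell that owns the job:
python train_v1.py &             # already running in the background
v1_pid=$!
# ...later, once train_v2.py is ready to go:
wait "$v1_pid" && python train_v2.py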
572,628 | I just upgraded an Ubuntu Server from 18.10 to 19.04 and then to 19.10. I think that this upgrade also upgraded tmux to a newer version. Since then my tmux scripts, which build some dashboards, are no longer working. When issuing a command like tmux send-keys "echo 'test'" C-m; I get a lost server message. This happens when nothing has attached to the session which contains the pane which is being targeted. When I start a session and attach to it, then send-keys does work. The syslog contains the following entry Mar 12 23:27:33 machine kernel: [ 27.074805] tmux: server[2657]:
segfault at 751 ip 000056042469f029 sp 00007ffe602aa6f0 error 4 in
tmux[560424675000+62000] This is what my creation script looks like, it is invoked in crontab as @reboot , but the problem also exists when manually executing it. SESSION=stuff
tmux new-session -d -s $SESSION -n 'homepage'
tmux split-window -h -p 50
tmux select-pane -t 1; tmux send-keys "./lhp.sh" C-m;
tmux select-pane -t 2; tmux send-keys "./lnginx.sh" C-m;
tmux split-window -v -p 50
tmux select-pane -t 3; tmux send-keys "./lsmr.sh" C-m;
tmux new-window -t $SESSION -n 'shells'
tmux split-window -h -p 50
tmux select-window -t :1; And at some later point in time (hours or days) I invoke tmux attach-session -t stuff to view the content. Does anyone know I can continue using it as I used to? | Here's an approach that doesn't involve looping and checking if the other process is still alive, or calling train_v1.py in a manner different from what you'd normally do: $ python train_v1.py
^Z
[1]+ Stopped python train_v1.py
$ % && python train_v2.py The ^Z is me pressing Ctrl + Z while the process is running to sleep train_v1.py through sending it a SIGTSTP signal. Then, I tell the shell to wake it with % , using it as a command to which I can add the && python train_v2.py at the end. This makes it behave just as if you'd done python train_v1.py && python train_v2.py from the very beginning. Instead of % , you can also use fg . It's the same thing. If you want to learn more about these types of features of the shell, you can read about them in the "JOB CONTROL" section of bash's manpage . EDIT: How to keep adding to the queue As pointed out by jamesdlin in a comment, if you try to continue the pattern to add train_v3.py for example before v2 starts, you'll find that you can't: $ % && python train_v2.py
^Z
[1]+ Stopped python train_v1.py Only train_v1.py gets stopped because train_v2.py hasn't started, and you can't stop/suspend/sleep something that hasn't even started. $ % && python train_v3.py would result in the same as python train_v1.py && python train_v3.py because % corresponds to the last suspended process. Instead of trying to add v3 like that, one should instead use history: $ !! && python train_v3.py
% && python train_v2.py && python train_v3.py One can do history expansion like above, or recall the last command with a keybinding (like up) and add v3 to the end. $ % && python train_v2.py && python train_v3.py That's something that can be repeated to add more to the pipeline. $ !! && python train_v3.py
% && python train_v2.py && python train_v3.py
^Z
[1]+ Stopped python train_v1.py
$ !! && python train_v4.py
% && python train_v2.py && python train_v3.py && python train_v4.py | {
"source": [
"https://unix.stackexchange.com/questions/572628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61956/"
]
} |
573,377 | Just using kubectl as an example, I note that kubectl run --image nginx ... and kubectl run --image=nginx ... both work. For command-line programs in general, is there a rule about whether an equals sign is allowed/required between the option name and the value? | In general, the implementation of how command-line arguments are interpreted is left completely at the discretion of the programmer. That said, in many cases , the value of a "long" option (such as is introduced with --option_name ) is specified with an = between the option name and the value (i.e. --option_name=value ), whereas for single-letter options it is more customary to separate the flag and value with a space, such as -o value , or use no separation at all (as in -oValue ). An example from the man-page of the GNU date utility: -d, --date=STRING
display time described by STRING, not 'now' -f, --file=DATEFILE
like --date; once for each line of DATEFILE As you can see, the value would be separated by a space from the option switch when using the "short" form (i.e. -d ), but by an = when using the "long" form (i.e. --date ). Edit As pointed out by Stephen Kitt, the GNU coding standard recommends the use of getopt and getopt_long to parse command-line options. The man-page of getopt_long states: A long option may take a parameter, of the form --arg=param or --arg param . So, a program using that function will accept both forms. | {
"source": [
"https://unix.stackexchange.com/questions/573377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146345/"
]
} |
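Both spellings are easy to check against a getopt_long-based GNU tool such as date, using the options quoted in the answer; the date string and TZ=UTC are arbitrary choices so all three commands print the same epoch value:
$ TZ=UTC date --date="2021-01-01 12:00" '+%s'    # long option with '='
1609502400
$ TZ=UTC date --date "2021-01-01 12:00" '+%s'    # long option with a space-separated value
1609502400
$ TZ=UTC date -d "2021-01-01 12:00" '+%s'        # short option form
1609502400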
574,257 | I'm new to X11 and want to understand if it is really as dangerous as they say on the Internet. I will explain how I understand this. Any application launched from under the current user has access to the keyboard, mouse, display (e.g. taking a screenshot), and this is not good. But, if we install programs from the official repository (for example, for Debian) , which are unlikely to contain keyloggers, etc., then the danger seems exaggerated. Am I wrong? Yes, you can open applications on separate servers (for example, Xephyr) , but this is inconvenient, since there is no shared clipboard. Creating a clipboard based on tmp files is also inconvenient. | Any application launched from under the current user has access to the keyboard, mouse, display (e.g. taking a screenshot), and this is not good. All the X11 clients on a desktop can access each other in depth, including getting the content of any window, changing it, closing any window, faking key and mouse events to any other client, grabbing any input device, etc. The X11 protocol design is based on the idea that the clients are all TRUSTED and will collaborate, not step on each other's toes (the latter completely broken by modern apps like Firefox, Chrome or Java). BUT, if we install programs from the official repository (for example, for Debian), which are unlikely to contain keyloggers, etc., then the danger problem is clearly exaggerated. Am I wrong? Programs have bugs, which may be exploited. The X11 server and libraries may not be up-to-date. For instance, any X11 client can crash the X server in the current version of Debian (Buster 10) via innocuous Xkb requests. (That was fixed in the upstream sources, but didn't make it yet in Debian). If it's able to crash it, then there's some probability that it's also able to execute code with the privileges of the X11 server (access to hardware, etc). For the problems with the lax authentication in Xwayland (and the regular Xorg Xserver in Debian), see the notes of the end of this answer . Yes, you can open applications on separate servers (for example, Xephyr), but this is inconvenient, since there is no shared clipboard. Creating a clipboard based on tmp files is also inconvenient. Notice that unless you take extra steps, Xephyr allows any local user to connect to it by default. See this for a discussion about it. Creating a shared clipboard between multiple X11 servers is an interesting problem, which deserves its own Q&A, rather than mixed with this. | {
"source": [
"https://unix.stackexchange.com/questions/574257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/401458/"
]
} |
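As a rough sketch of the separate-server idea mentioned above (not a full sandbox, and Xephyr's lax default access control still applies), an untrusted client can be pointed at a nested server; the display number :2, the window size and xterm are arbitrary and assume the xserver-xephyr and xterm packages are installed:
Xephyr :2 -screen 1280x800 &     # nested X server, shown as a window on the real display
sleep 1                          # give it a moment to create its socket
DISPLAY=:2 xterm &               # this client can only snoop on other clients of :2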
574,961 | I just installed cmake but I'm getting compiler not found error. In trying to build https://gitlab.com/interception/linux/tools on a new Kubuntu installation, running cmake .. from the tools/build directory returns the error: CMake Error at CMakeLists.txt:3 (project):
No CMAKE_CXX_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
to the compiler, or to the compiler name if it is in the PATH. What's wrong? I assumed cmake would be equipped with its compiler, but maybe it needs to be configured before it can be used??? | The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential". Thus sudo apt-get install build-essential solves the problem (and sudo apt-get install g++ should also work), allowing cmake .. to work with no configuration necessary. | {
"source": [
"https://unix.stackexchange.com/questions/574961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115900/"
]
} |
574,965 | How do I close a port listening on a local host in CentOS7?
So far I have used the below command to find the process id sudo netstat -tlpn | grep 5601 Then, used the below command to kill the process but it starts up with new process id. sudo kill -SIGTERM 29565 Please help. | The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential". Thus sudo apt-get install build-essential solves the problem (and sudo apt-get install g++ should also work), allowing cmake .. to work with no configuration necessary. | {
"source": [
"https://unix.stackexchange.com/questions/574965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/401824/"
]
} |
574,966 | I have a space separated file like this:(it has 1775 lines) head output.fam
0 ALIKE_g_1LTX827_BI_SNP_F01_33250.CEL 0 0 0 -9
0 BURRY_g_3KYJ479_BI_SNP_A12_40182.CEL 0 0 0 -9
0 ABAFT_g_4RWG569_BI_SNP_E12_35136.CEL 0 0 0 -9
0 MILLE_g_5AVC089_BI_SNP_F02_35746.CEL 0 0 0 -9
0 PEDAL_g_8WWR250_BI_SNP_B06_37732.CEL 0 0 0 -9
... and a comma separated file phg000008.individualinfo (that has 1838 lines): #Phen_Sample_ID - individual sample name associated with phenotypes
#Geno_Sample_ID - sample name associates with genotypes
#Ind_id - unique individual name which can be used to match duplicates (in this case same as Phen_Sample_ID)
#Ped_id - Pedigree ID
#Fa_id - Father individual ID
#Ma_id - Mother individual ID
#Sex - coded 1 for Male, 2 for Female
#Ind_QC_flag - value "ALL" indicates released in both Quality Filtered and Complete set
#Genotyping_Plate
#Sample_plate_well_string - This string corresponds to the file within the CEL files distribution
#Genotype_Clustering_Set
#Study-id - dbGaP assigned study id
#Phen_ID,Geno_Sample_ID,Ind_id,Ped_id,Fa_id,Ma_id,Sex,Ind_QC_flag,Genotyping_Plate,Sample_plate_well_string,Genotyping_Clustering_Set,Study_id
G1000,G1000,G1000,fam1000-,0,0,2,ALL,7FDZ321,POSED_g_7FDZ321_BI_SNP_B02_36506,set05,phs000018
G1001,G1001,G1001,fam1001-,G4243,G4205,1,ALL,3KYJ479,BURRY_g_3KYJ479_BI_SNP_H04_40068,set02,phs000018
G2208,G2208,G2208,fam2208-,G3119,G3120,2,ALL,1LTX827,ALIKE_g_1LTX827_BI_SNP_F01_33250,set01,phs000018
G1676,G1676,G1676,fam1676-,G1675,G1674,1,ALL,3KYJ479,BURRY_g_3KYJ479_BI_SNP_A12_40182,set02,phs000018
... I would like to change my output.fam by looking if I could find value from the 2nd column in output.fam, say ALIKE_g_1LTX827_BI_SNP_F01_33250.CEL in phg000008.individualinfo (disregarding .CEL suffix) and is there is a row with that entry replace that entry in output.fam with the value in the first column of phg000008.individualinfo and also for the same line replace the value of the first column of output.fam with the value in the 4th column of phg000008.individualinfo (excluding - suffix) So for example for two lines, output.fam would look like this: fam2208 G2208 0 0 0 -9
fam1676 G1676 0 0 0 -9 | The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential". Thus sudo apt-get install build-essential solves the problem (and sudo apt-get install g++ should also work), allowing cmake .. to work with no configuration necessary. | {
"source": [
"https://unix.stackexchange.com/questions/574966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/401868/"
]
} |
576,701 | I like to disable all locale specific differences in shell scripts. What is the preferred way to do it? LANG=C or LC_ALL=C | LANG sets the default locale, i.e. the locale used when no more specific setting ( LC_COLLATE , LC_NUMERIC , LC_TIME etc.) is provided; it doesnโt override any setting, it provides the base value. LC_ALL on the other hand overrides all locale settings. Thus to override scriptsโ settings, you should set LC_ALL . You can check the effects of your settings by running locale . It shows the calculated values, in quotes, for all locale categories which arenโt explicitly set; in your example, LANG isnโt overriding LC_NUMERIC , itโs providing the default value. If LC_ALL and LC_NUMERIC arenโt set in the environment, the locale is taken from LANG , and locale shows that value for LC_NUMERIC , as indicated by the quotes. See the locales manpage and the POSIX definitions of environment variables for details. See also How does the "locale" program work? | {
"source": [
"https://unix.stackexchange.com/questions/576701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7167/"
]
} |
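The precedence described above can be seen directly with a locale-sensitive command; de_DE.UTF-8 is only an example and has to be generated on the system for the first line to differ:
$ LANG=de_DE.UTF-8 date -d 2020-01-01 +%A            # LANG supplies the default: Mittwoch
$ LANG=de_DE.UTF-8 LC_TIME=C date -d 2020-01-01 +%A  # a specific category beats LANG: Wednesday
$ LC_ALL=C LANG=de_DE.UTF-8 date -d 2020-01-01 +%A   # LC_ALL overrides everything: Wednesday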
577,603 | I would like to do something like this where on Friday, the output is for both conditions that match: #!/bin/bash
#!/bin/bash
NOW=$(date +"%a")
case $NOW in
Mon)
echo "Mon";;
Tue|Wed|Thu|Fri)
echo "Tue|Wed|Thu|Fri";;
Fri|Sat|Sun)
echo "Fri|Sat|Sun";;
*) ;;
esac As the code above is written, the only output on Friday would be: Tue|Wed|Thu|Fri Desired output on Friday: Tue|Wed|Thu|Fri Fri|Sat|Sun I understand that normally, only the commands corresponding to the first pattern that matches the expression are executed. Is there a way to execute commands for additional matched patterns? EDIT: I am not looking for fall-through behavior , but that's also a nice thing to know about. Thanks steeldriver. | You can use the ;;& conjunction. From man bash : Using ;;& in place of ;; causes the shell to test
the next pattern list in the statement, if any, and execute any
associated list on a successful match. Ex. given $ cat myscript
#!/bin/bash
NOW=$(date -d "$1" +"%a")
case $NOW in
Mon)
echo "Mon";;
Tue|Wed|Thu|Fri)
echo "Tue|Wed|Thu|Fri";;&
Fri|Sat|Sun)
echo "Fri|Sat|Sun";;
*) ;;
esac then $ ./myscript thursday
Tue|Wed|Thu|Fri
$ ./myscript friday
Tue|Wed|Thu|Fri
Fri|Sat|Sun
$ ./myscript saturday
Fri|Sat|Sun For more information (including equivalents in other shells) see Can bash case statements cascade? | {
"source": [
"https://unix.stackexchange.com/questions/577603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
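Since the question also asks about fall-through as a side note, a tiny sketch contrasting the two operators (bash 4 or later; the patterns are made up):
case fri in
    fri) echo "matched fri" ;&       # ';&' falls through: the next body runs without being tested
    sat) echo "ran the sat body" ;;& # ';;&' resumes testing the remaining patterns
    f*)  echo "also matches f*" ;;
esac                                 # prints all three lines for the word 'fri'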
577,862 | How can I recursively clean up all empty files and directories in a parent directory? Let's say I have this directory structure: Parent/
|____Child1/
|______ file11.txt (empty)
|______ Dir1/ (empty)
|____Child2/
|_______ file21.txt
|_______ file22.txt (empty)
|____ file1.txt I should end up with this: Parent/
|____Child2/
|_______ file21.txt
|____ file1.txt | This is a really simple one liner: find Parent -empty -delete It's fairly self explanatory. Although when I checked I was surprised that it successfully deletes Parent/Child1. Usually you would expect it to process the parent before the child unless you specify -depth . This works because -delete implies -depth . See the GNU find manual : -delete Delete files; true if removal succeeded. If the removal failed, an error message is issued. If -delete fails, find's exit status will be nonzero (when it eventually exits). Use of -delete automatically turns on the -depth option. Note these features are not part of the Posix Standard , but most likely will be there under many Linux Distribution. You may have a specific problem with smaller ones such as Alpine Linux as they are based on Busybox which doesn't support -empty . Other systems that do include non-standard -empty and -delete include BSD and OSX but apparently not AIX . | {
"source": [
"https://unix.stackexchange.com/questions/577862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/369707/"
]
} |
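For systems where find lacks the non-POSIX -empty/-delete (the Busybox case mentioned above), a rough two-pass fallback; 'Parent' is the example directory from the question and, like the one-liner, this removes Parent itself if it ends up empty:
find Parent -type f -size 0 -exec rm -- {} +                 # zero-byte files first
find Parent -depth -type d -exec rmdir -- {} + 2>/dev/null   # rmdir only succeeds on empty directories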
578,536 | E: You don't have enough free space in /var/cache/apt/archives/.
root@kali:~# df -H
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 406M 7.0M 399M 2% /run
/dev/sda6 12G 11G 480M 96% /
tmpfs 2.1G 78M 2.0G 4% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 2.1G 0 2.1G 0% /sys/fs/cgroup
/dev/sda8 58G 114M 55G 1% /home
tmpfs 406M 37k 406M 1% /run/user/0 | If you're getting this error in a Docker container - it helped me to do a docker system prune | {
"source": [
"https://unix.stackexchange.com/questions/578536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/404715/"
]
} |
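Outside of a container, the same error usually just means the APT cache and leftover packages are filling the root filesystem, so a typical cleanup (before resizing anything) looks like:
sudo apt-get clean                   # empty /var/cache/apt/archives
sudo apt-get autoremove --purge      # drop packages nothing depends on any more
df -H /                              # re-check the root filesystem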
579,068 | As far as I can tell from the documentation of systemd , Wants= and WantedBy= perform the same function, except that the former is put in the dependent unit file and vice-versa. (That, and WantedBy= creates the unit.type.wants directory and populates it with symlinks.) From DigitalOcean: Understanding Systemd Units and Unit Files : The WantedBy= directive... allows you to specify a dependency relationship in a similar way to the Wants= directive does in the [Unit] section. The difference is that this directive is included in the ancillary unit allowing the primary unit listed to remain relatively clean. Is it really just about keeping a unit file "clean"? What is the best practice for using these two directives? That is, if service alpha "wants" service beta, when should I use Wants=beta.service in alpha.service and when should I prefer WantedBy=alpha.service in the beta.service ? | Functionally Wants is in the Unit section and WantedBy is in the Install . The init process systemd does not process/use the Install section at all. Instead, a symlink must be created in multi-user.target.wants . Usually, that's done by the utility systemctl which does read the Install section. In summary, WantedBy is affected by systemctl enable / systemctl disable . Logically Consider which of the services should "know" or be "aware" of the other. For example, a common use of WantedBy : [Install]
WantedBy=multi-user.target Alternatively, that could be in multi-user.target: [Unit]
Wants=nginx.service But that second way doesn't make sense. Logically, nginx.service knows about the system-defined multi-user.target, not the other way around. So in your example, if alpha's author is aware of beta, then alpha Wants beta. If beta's author is aware of alpha then beta is WantedBy alpha. To help you decide, you may consider which service can be installed (say, from a package manager) without the other being present. Config directories As another tool in your box, know that systemd files can also be extended with config directories: /etc/systemd/system/myservice.service.d/extension.conf This allows you to add dependencies where neither service is originally authored to know about the other. I often use this with mounts, where (for example) neither nginx nor the mount need explicit knowledge of the other, but I as the system adminstrator understand the dependency. So I create nginx.service.d/mymount.conf with Wants=mnt-my.mount . | {
"source": [
"https://unix.stackexchange.com/questions/579068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130124/"
]
} |
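Spelled out as commands, the drop-in described at the end of the answer could be created like this; the unit names are the answer's examples, and the After= line is an extra ordering hint rather than something the answer requires:
sudo mkdir -p /etc/systemd/system/nginx.service.d
printf '[Unit]\nWants=mnt-my.mount\nAfter=mnt-my.mount\n' \
    | sudo tee /etc/systemd/system/nginx.service.d/mymount.conf
sudo systemctl daemon-reload         # make systemd re-read the drop-in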
581,058 | I'm running BOINC on my old netbook, which only has 2ย GB of RAM onboard, which isn't enough for some tasks to run. As in, they refuse to, seeing how low on RAM the device is. I have zRAM with backing_dev and zstd algorithm enabled, so in reality, lack of memory is never an issue, and in especially tough cases I can always just use systemd-run --scope -p (I have successfully ran programs that demanded +16ย GB of RAM using this) How can I make BOINC think that my laptop has more than 2ย GB of RAM installed, so that I could run those demanding tasks? | Create a fake meminfo and mount it over an original /proc/meminfo : $ mkdir fake-meminfo && cd fake-meminfo
$ cp /proc/meminfo .
$ chmod +w meminfo
$ sed -Ei 's,^MemTotal: [0-9]+ kB,MemTotal: 8839012 kB,' meminfo # replace 8839012 with an amount of RAM you want to pretend you have
$ free -m # check how much RAM you have now
total used free shared buff/cache available
Mem: 7655 1586 3770 200 2298 5373
$ sudo mount --bind meminfo /proc/meminfo
$ free -m # check how much RAM you pretend to have after replacing /proc/meminfo
total used free shared buff/cache available
Mem: 8631 2531 3800 201 2299 5403
$ sudo umount /proc/meminfo # restore an original /proc/meminfo
$ free -m
total used free shared buff/cache available
Mem: 7655 1549 3806 200 2299 5410 You can also run the above commands in a mount namespace isolated from
the rest of the system. References: Recover from faking /proc/meminfo | {
"source": [
"https://unix.stackexchange.com/questions/581058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/405442/"
]
} |
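A minimal sketch of the mount-namespace variant mentioned at the end of the answer, so the faked /proc/meminfo is visible only to the command run inside the namespace; the path to the edited meminfo copy is a placeholder for wherever you created it:
sudo unshare -m sh -c '
  mount --bind /path/to/fake-meminfo/meminfo /proc/meminfo   # only this namespace sees the bind mount
  free -m
'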
581,801 | I am currently running a statistical modelling script that performs a phylogenetic ANOVA. The script runs fine when I analyse the full dataset. But when I take a subset it starts analysing but quickly terminates with segmentation fault. I cannot really figure out by googling if this could be due to a problem from my side (e.g. sample dataset to small for the analysis) and/or bug in the script or if this has something to do with my linux system. I read it has to do with writing data to the memory, but than why is everything fine with a larger dataset? I tried to find more information using google, but this made it more complicated. Thanks for clarifying in advance! | (tl;dr: It's almost certainly a bug in your program or a library it uses.) A segmentation fault indicates that a memory access was not legal. That is, based on the issued request, the CPU issues a page fault because the page requested either isn't resident or has permissions that are incongruous with the request. After that, the kernel checks to see whether it simply doesn't know anything about this page, whether it's just not in memory yet and it should put it there, or whether it needs to perform some special handling (for example, copy-on-write pages are read-only, and this valid page fault may indicate we should copy it and update the permissions). See Wikipedia for minor vs. major (e.g. demand paging ) vs. invalid page faults. Getting a segmentation fault indicates the invalid case: the page is not only not in memory, but the kernel also doesn't have any remediative actions to perform because the process doesn't logically have that page of its virtual address space mapped. As such, this almost certainly indicates a bug in either the program or one of its underlying libraries -- for example, attempting to read or write into memory which is not valid for the process. If the address had happened to be valid, it could have caused stack corruption or scribbled over other data, but reading or writing an un mapped page is caught by hardware. The reason why it works with your larger dataset and not your smaller dataset is entirely specific to that program: it's probably a bug in that program's logic, which is only tripped for the smaller dataset for some reason (for example, your dataset may have a field representing the total number of entries, and if it's not updated, your program may blindly read into unallocated memory if it doesn't do other sanity checks). It's several orders of magnitude less likely than simply being a software bug, but a segmentation fault may also be an indicator of hardware issues, like faulty memory, a faulty CPU, or your hardware tripping over errata (as an example, see here ). Getting segfaults due to failing hardware often results in sometimes-works behaviour, although a bad bit in physical RAM might get mapped the same way in repeated runs of a program if you don't run anything else in between. You can mostly rule out this possibility by booting memtest86+ to check for failing RAM, and using software like Prime95 to stress-test your CPU (including the FP math FMA execution units). You can run the program in a debugger like gdb and get the backtrace at the time of the segmentation fault, which will likely indicate the culprit: % gdb --args ./foo --bar --baz
(gdb) r # run the program
[...wait for segfault...]
(gdb) bt # get the backtrace for the current thread | {
"source": [
"https://unix.stackexchange.com/questions/581801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/403558/"
]
} |
585,162 | Is it possible to use UUIDs to mount drives, rather than using these values in fstab? I have a script which mounts devices, however there is no way to guarantee that the drive labels such as /dev/sda2 will always be the same. I'm aware I can mount the drive at boot time using this method with fstab , however in the case of external disks, they may not always be present at boot time. | Yes it's possible, you just use the UUID option: lsblk -o NAME,UUID
NAME UUID
sdc
├─sdc1 A190-92D5
└─sdc2 A198-A7BC
sudo mount -U A198-A7BC /mnt Or sudo mount UUID=A198-A7BC /mnt Or sudo mount --uuid A198-A7BC /mnt The mount --help : Source:
-L, --label <label>     synonym for LABEL=<label>
-U, --uuid <uuid>       synonym for UUID=<uuid>
LABEL=<label>           specifies device by filesystem label
UUID=<uuid>             specifies device by filesystem UUID
PARTLABEL=<label>       specifies device by partition label
PARTUUID=<uuid>         specifies device by partition UUID
<device>                specifies device by path
<directory>             mountpoint for bind mounts (see --bind/rbind)
<file>                  regular file for loopdev setup | {
"source": [
"https://unix.stackexchange.com/questions/585162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32864/"
]
} |
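To make the mapping permanent, the same UUID can go into /etc/fstab instead of a device name; the mount point, filesystem type and options below are assumptions for the A198-A7BC partition from the answer's lsblk output:
sudo mkdir -p /mnt/backup
echo 'UUID=A198-A7BC /mnt/backup vfat defaults,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount /mnt/backup               # fstab now identifies the device by UUID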
585,170 | I want to add one blank space after any occurrence of: <span class="negrita">ANYTHING</span> So, with this SED instruction: sed -E "s/(<span class=\"negrita\">.*?<\/span>)/\1 /g" <<< 'In <span class="negrita">1959</span> economic policy was reoriented in order to undertake <span class="negrita">the country modernization</span>. More text' I get this output: In <span class="negrita">1959</span> economic policy was reoriented in order to undertake <span class="negrita">the country modernization</span> . More text So, as you can see, it is adding the blank space after the last occurrence, but not after the first one. Isn't the "/g" option meant to indicate that it should replace all occurrences? Thanks in advance. | Yes it's possible, you just use the UUID option: lsblk -o NAME,UUID
NAME UUID
sdc
├─sdc1 A190-92D5
└─sdc2 A198-A7BC
sudo mount -U A198-A7BC /mnt Or sudo mount UUID=A198-A7BC /mnt Or sudo mount --uuid A198-A7BC /mnt The mount --help : Source:
-L, --label <label>     synonym for LABEL=<label>
-U, --uuid <uuid>       synonym for UUID=<uuid>
LABEL=<label>           specifies device by filesystem label
UUID=<uuid>             specifies device by filesystem UUID
PARTLABEL=<label>       specifies device by partition label
PARTUUID=<uuid>         specifies device by partition UUID
<device>                specifies device by path
<directory>             mountpoint for bind mounts (see --bind/rbind)
<file>                  regular file for loopdev setup | {
"source": [
"https://unix.stackexchange.com/questions/585170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/411171/"
]
} |
585,843 | I'm developing homepages to myself. The pages looks good on my laptop but I would like to see if it looks good also in my mobile. Can I test how the sites looks in mobile without publishing the site first in the Internet? My laptop has Ubuntu 20.04. | Firefox and Chromium have Responsive Design Mode : Press Ctrl + Shift + M (For Chromium accessible only in Developer Tools, in Firefox globally) | {
"source": [
"https://unix.stackexchange.com/questions/585843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/411784/"
]
} |
588,102 | I have a file as below. "ID" "1" "2"
"00000687" 0 1
"00000421" 1 0 I want to make it as below. 00000687 0 1
00000421 1 0 I want to remove the first line and remove double quotes from fields on any other lines.
FWIW, double quotes appear only in the first column. I think cut -c would work, but cannot make it.
What should I do? | tail + tr : tail -n +2 file | tr -d \" tail -n+2 prints the file starting from line two to the end. tr -d \" deletes all double quotes. | {
"source": [
"https://unix.stackexchange.com/questions/588102",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328248/"
]
} |
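An equivalent single-process sketch with awk, in case spawning two commands is undesirable; like tr -d it strips every double quote, which is fine here since they only occur in the first column:
awk 'NR > 1 { gsub(/"/, ""); print }' file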
588,629 | Some applications, like ssh have a unit file that ends with @, like ssh.service and [email protected] . They contain different contents, but I cannot understand what exactly is the difference in functionality or purpose. Is it some naming convention I'm not aware of? | As others have mentioned, it's a service template. In the specific case of [email protected] , it's for invoking sshd only on-demand, in the style of classic inetd services. If you expect SSH connections to be rarely used, and want to absolutely minimize sshd 's system resource usage (e.g. in an embedded system), you could disable the regular ssh.service and instead enable ssh.socket . The socket will then automatically start up an instance of [email protected] (which runs sshd -i ) whenever an incoming connection to TCP port 22 (the standard SSH port) is detected. This will slow down the SSH login process, but will remove the need to run sshd when there are no inbound SSH connections. | {
"source": [
"https://unix.stackexchange.com/questions/588629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244418/"
]
} |
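On a Debian-style system the switch described above boils down to something like the following; unit names vary by distribution (sshd.service/sshd.socket elsewhere), so treat this as an example rather than a recipe:
sudo systemctl disable --now ssh.service   # stop the permanently running daemon
sudo systemctl enable --now ssh.socket     # systemd now listens on port 22 itself
systemctl list-units 'ssh*'                # per-connection ssh@... instances appear on demand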
589,710 | I created a Bash script which echoes "Hello World" . I also created a test user, bob , using adduser . Nobody has permission to execute that file as denoted by ls : $ ls -l hello.sh
-rw-r--r-- 1 george george 19 Mai 29 13:06 hello.sh As we can see from the above the file's owner is george where he has only read and write access but no execute access. But logged in as george I am able to execute the script directly: $ . hello.sh
Hello World To make matters worse, I log in as bob , where I have only read permission, but I am still able to execute the file: $ su bob
Password:
$ . /home/george/testdir/hello.sh
Hello World What's going on? | In your examples, you are not executing the files, but sourcing them. Executing would be via $ ./hello.sh and for that, execution permission is necessary. In this case a sub-shell is opened in which the commands of the script file are executed. Sourcing , i.e. $ . hello.sh (with a space in between) only reads the file, and the shell from which you have called the . hello.sh command then executes the commands directly as read, i.e. without opening a sub-shell. As the file is only read, the read permission is sufficient for the operation. ( Also note that stating the script filename like that invokes a PATH search , so if there is another hello.sh in your PATH that will be sourced! Use explicit paths, as in . ./hello.sh to ensure you source "the right one". ) If you want to prevent that from happening, you have to remove the read permission, too, for any user who is not supposed to be using the script. This is reasonable anyway if you are really concerned about unauthorized use of the script, since non-authorizeded users could easily bypass the missing execution permission by simply copy-and-pasting the script content into a new file to which they could give themselves execute permissions, and as noted by Kusalananda, otherwise an unauthorized user could still comfortably use the script by calling it via sh ./hello.sh instead of ./hello.sh because this also only requires read permissions on the script file (see this answer e.g.). As a general note , keep in mind that there are subtle differences between sourcing and executing a script (see this question e.g.). | {
"source": [
"https://unix.stackexchange.com/questions/589710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322933/"
]
} |
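The difference is easy to reproduce in a throwaway directory; the script name and message are arbitrary:
$ printf 'echo hello from the script\n' > hello.sh
$ chmod 644 hello.sh          # readable but not executable
$ ./hello.sh                  # fails with 'Permission denied'
$ . ./hello.sh                # sourcing only needs read permission
hello from the script
$ sh ./hello.sh               # handing the file to an interpreter works too
hello from the script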
590,108 | Since the majority of the Linux kernel is written in the C language, so when the kernel gets loaded in Main memory, does the standard C library also get loaded along the Linux kernel? If that's the reason the programs written in C consume less memory than other program as the standard C library is already loaded and as a result are faster also (less page faults) compared to program written in other languages when run on a Linux machine? | The kernel is written in C, but it doesnโt use the C library (as dave_thompson_085 points out, itโs โ freestanding โ). Even if it did, a C library loaded along with the kernel for the kernelโs use would only be available to the kernel (unless the kernel made it explicitly accessible to user space, in some way or other), so it wouldnโt help reduce the memory requirements for programs. That said, in most cases, the earliest programs run after the kernel starts (programs in the initramfs, although theyโll use their own copy of the C library; and ultimately, init ), use the C library, so it ends up being mapped early on, and itโs highly likely that the portions of the library that are widely used will always remain in physical memory. The kernel contains implementations of many of the C libraryโs functions , or variants (for example, printk instead of printf ); but they donโt all follow the standard exactly. In some circumstances, the implementations of C library functions in the compiler are used instead. (Note that the vast majority of programs written in languages other than C ultimately use the C library.) | {
"source": [
"https://unix.stackexchange.com/questions/590108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288789/"
]
} |
592,657 | I've just switched to bullseye (see sources below) deb http://deb.debian.org/debian/ testing main contrib non-free
deb-src http://deb.debian.org/debian/ testing main contrib non-free
deb http://deb.debian.org/debian/ testing-updates main contrib non-free
deb-src http://deb.debian.org/debian/ testing-updates main contrib non-free
deb http://deb.debian.org/debian-security testing-security main
deb-src http://deb.debian.org/debian-security testing-security main
deb http://security.debian.org testing-security main contrib non-free
deb-src http://security.debian.org testing-security main contrib non-free The update and upgrade went fine, but full-upgrade fails due to the following error message: The following packages have unmet dependencies:
libc6-dev : Breaks: libgcc-8-dev (< 8.4.0-2~) but 8.3.0-6 is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. From what I see on the packages.debian.org, Debian testing should have libgcc-8-dev: 8.4.0-4 , so I don't see why an older version is to be installed. How can I fix this, to finalize the bullseye full-upgrade? | Installing gcc-8-base ( sudo apt install gcc-8-base ) appeared to do the trick for me and fix the problem for me. | {
"source": [
"https://unix.stackexchange.com/questions/592657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137596/"
]
} |
592,692 | I'd like to rename the recon.text file with its directory name. I have 1000 directories. Some help, please? 7_S4_R1_001_tri10_sha/recon.text
8_S1_R1_001_tri15_sha/recon.text
9_S8_R1_001_tri20_sha/recon.text
10_S5_R1_001_tri25_sha/recon.text
11_S3_R1_001_tri30_sha/recon.text | Installing gcc-8-base ( sudo apt install gcc-8-base ) appeared to do the trick for me and fix the problem for me. | {
"source": [
"https://unix.stackexchange.com/questions/592692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311001/"
]
} |
592,694 | I have a Medion Akoya P6687 notebook, and I have started to use GNU/Linux in it a year and a half. I have always had problems with linux kernels, in fact only 4.19 version worked well for me. I have used other 4.x versions but they didn't work, but I'm not sure if it was because of ACPI errors. I'm stuck with linux-4.19 kernel version because other recent versions (all of the 5.x kernel versions I have tested) give me the same ACPI errors when booting. This is specifically taken from Debian and 5.6.0-2-amd64 version but Arch gives the same results. [ 30.441861] APCI Error: Aborting method \_SB.PCI0.LPCB.H_EC.ECMD due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
[ 30.441872] APCI Error: Aborting method \_TZ.FNCL due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
[ 30.441879] APCI Error: Aborting method \_TZ.FN00._OFF due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
[ 30.441886] APCI Error: Aborting method \_SB.PCI0.LPCB.H_EC._REG due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
[ 31.696214] thermal thermal_zone1: critical temperature reached (128 C), shut
[ 31.948073] thermal thermal_zone1: critical temperature reached (128 C), shut
[ 61.971231] APCI Error: Aborting method \_SB.PCI0.LPCB.H_EC.ECMD due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
[ 61.971395] APCI Error: Aborting method \_TZ.FNCL due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
[ 61.971509] APCI Error: Aborting method \_TZ.FN00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)
... (similar messages appear every 30 seconds) (posted and screenshot here ) I have tested several distros (Arch, Debian and Void Linux) and the situation is the same: kernel 4.19 works (I currently use debian with 4.19 and I tried to boot an old arch .iso with that kernel and it boots with no problems), but recent kernel versions (5.x) don't, they have the problems above with the ACPI. I can also add that, if I use the acpi=off flag, the notebook boots, but the battery and the touchpad are not detected, and in the most recent arch .iso the keyboard is not detected also. I also have updated the BIOS to the last version but the errors persist, and I don't know what can I do to fix it. If anyone can help me to find a solution I will be very grateful. Thanks. And sorry if my english is not very good. | Installing gcc-8-base ( sudo apt install gcc-8-base ) appeared to do the trick for me and fix the problem for me. | {
"source": [
"https://unix.stackexchange.com/questions/592694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/417776/"
]
} |
593,212 | Can't figure out how to escape everything while using awk. I need to enclose each input string with single quotes, e.g. input
string1
string2
string3
output
'string1'
'string2'
'string3' Been fighting with escaping ' " $0 and everything else and I just cannot make it work. Either $0 is passed to bash directly, or something else happens. | Here are a couple of ways: use octal escape sequences. On ASCII-based systems where ' is encoded as byte 39 (octal 047), that would be: awk '{print "\047" $0 "\047"}' input
'string1'
'string2'
'string3' pass the quote as a variable $ awk -v q="'" '{print q $0 q}' input
'string1'
'string2'
'string3' | {
"source": [
"https://unix.stackexchange.com/questions/593212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260833/"
]
} |
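For completeness, a sed sketch that does the same wrapping; the double-quoted script keeps the single quotes literal, and & stands for the whole matched line:
sed "s/.*/'&'/" input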
594,470 | I have the source code of a hello world kernel module that works in Ubuntu 20 in a laptop.
Now I am trying to compile the same code in Ubuntu 20 but inside WSL2. For that I am using this: make -C /sys/modules/$(shell uname -r)/build M=$(PWD) modules The problem is that /lib/modules is empty. It seems that WSL2 does not bring anything in /lib/modules/4.19.104-microsoft-standard/build I tried getting the headers using: sudo apt search linux-headers-`uname -r`
Sorting... Done
Full Text Search... Done But nothing get's populated in the modules folder Is there anything I need to do in order that folder contains all required modules? [EDIT] Getting closer thanks to @HannahJ. I am doing: > sudo make -C /home/<user>/WSL2-Linux-Kernel M=$(pwd) modules
SL2-Linux-Kernel M=$(pwd) modules
make: Entering directory '/home/<user>/WSL2-Linux-Kernel'
CC [M] /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.o
Building modules, stage 2.
MODPOST 1 modules
CC /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.mod.o
LD [M] /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.ko
make: Leaving directory '/home/<user>/WSL2-Linux-Kernel' At the end, I get the lkm_example.ko file created. After that: > sudo insmod lkm_example.ko
insmod: ERROR: could not insert module lkm_example.ko: Invalid module format
> dmesg
[200617.480635] lkm_example: no symbol version for module_layout
[200617.480656] lkm_example: loading out-of-tree module taints kernel.
[200617.481542] module: x86/modules: Skipping invalid relocation target, existing value is nonzero for type 1, loc 0000000074f1d70f, val ffffffffc0000158
> sudo modinfo lkm_example.ko
filename: /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.ko
version: 0.01
description: A simple example Linux module.
author: Carlos Garcia
license: GPL
srcversion: F8B272146BAA2381B6332DE
depends:
retpoline: Y
name: lkm_example
vermagic: 4.19.84-microsoft-standard+ SMP mod_unload modversions This is my Makefile obj-m += lkm_example.o
all:
make -C /home/<usr>/WSL2-Linux-Kernel M=$(PWD) modules
clean:
make -C /home/<usr>/WSL2-Linux-Kernel M=$(PWD) clean
test:
# We put a - in front of the rmmod command to tell make to ignore
# an error in case the module isn't loaded.
-sudo rmmod lkm_example
# Clear the kernel log without echo
sudo dmesg -C
# Insert the module
sudo insmod lkm_example.ko
# Display the kernel log
dmesg
unload:
sudo rm /dev/lkm_example
sudo rmmod lkm_example [Edit2]
This is my kernel module: #include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <asm/uaccess.h>
#include <linux/init_task.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Carlos Garcia");
MODULE_DESCRIPTION("A simple example Linux module.");
MODULE_VERSION("0.01");
/* Prototypes for device functions */
static int device_open(struct inode *, struct file *);
static int device_release(struct inode *, struct file *);
static ssize_t device_read(struct file *, char *, size_t, loff_t *);
static ssize_t device_write(struct file *, const char *, size_t, loff_t *);
static int major_num;
static int device_open_count = 0;
static char msg_buffer[MSG_BUFFER_LEN];
static char *msg_ptr;
/* This structure points to all of the device functions */
static struct file_operations file_ops = {
.read = device_read,
.write = device_write,
.open = device_open,
.release = device_release
};
/* When a process reads from our device, this gets called. */
static ssize_t device_read(struct file *flip, char *buffer, size_t len, loff_t *offset)
{
...
}
/* Called when a process tries to write to our device */
static ssize_t device_write(struct file *flip, const char *buffer, size_t len, loff_t *offset)
{
...
}
/* Called when a process opens our device */
static int device_open(struct inode *inode, struct file *file)
{
...
try_module_get(THIS_MODULE);
}
/* Called when a process closes our device */
static int device_release(struct inode *inode, struct file *file)
{
...
module_put(THIS_MODULE);
}
static int __init lkm_example_init(void)
{
...
major_num = register_chrdev(0, "lkm_example", &file_ops);
if (major_num < 0)
{
printk(KERN_ALERT "Could not register device: % d\n", major_num);
return major_num;
}
else
{
printk(KERN_INFO "lkm_example module loaded with device major number % d\n", major_num);
return 0;
}
}
static void __exit lkm_example_exit(void)
{
/* Remember - we have to clean up after ourselves. Unregister the character device. */
unregister_chrdev(major_num, DEVICE_NAME);
printk(KERN_INFO "Goodbye, World !\n");
}
/* Register module functions */
module_init(lkm_example_init);
module_exit(lkm_example_exit); | I had to do this for an assignment, so I figure I'll share my solution here. The base WSL2 kernel does not allow modules to be loaded. You have to compile and use your own kernel build. How to compile and use a kernel in WSL2 In Ubuntu/WSL: sudo apt install build-essential flex bison libssl-dev libelf-dev git dwarves
git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
cd WSL2-Linux-Kernel
cp Microsoft/config-wsl .config
make -j $(expr $(nproc) - 1) From Windows, copy \\wsl$\<DISTRO>\home\<USER>\WSL2-Linux-Kernel\arch\x86\boot\bzimage to your Windows profile ( %userprofile% , like C:\Users\<Windows_user> ) Create the file %userprofile%\.wslconfig that contains: [wsl2]
kernel=C:\\Users\\WIN10_USER\\bzimage Note: The double backslashes ( \\ ) are required. Also, to avoid a potential old bug, make sure not to leave any trailing whitespace on either line. In PowerShell, run wsl --shutdown Reopen your flavor of WSL2 How to compile the module Note: You'll want to do these from /home/$USER/ or adjust the Makefile to match your location. Create a Makefile that contains: obj-m:=lkm_example.o
all:
make -C $(shell pwd)/WSL2-Linux-Kernel M=$(shell pwd) modules
clean:
make -C $(shell pwd)/WSL2-Linux-Kernel M=$(shell pwd) clean Run make Source for the .wslconfig file steps here . | {
"source": [
"https://unix.stackexchange.com/questions/594470",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/401549/"
]
} |
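Once the module builds, a quick load test looks roughly like this; the key point for the 'Invalid module format' error in the question is that the running kernel (uname -r) must be the same build the module's vermagic reports:
uname -r                              # kernel actually running
modinfo -F vermagic lkm_example.ko    # kernel the module was built against
sudo insmod lkm_example.ko
lsmod | grep lkm_example              # confirm it is loaded
dmesg | tail -n 3                     # the init printk should show the device major number
sudo rmmod lkm_example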
594,471 | I have screen tearing issues. When I set Tearing prevention ("vsync") in Compositor to something else and then back to Automatic the screen tearing is gone. I would like to know what configuration files Tearing prevention ("vsync") changes to troubleshoot this problem and find a permanent fix. I test for screen tearing with this video . I also have screen tearing with the latest live iso with both free and non-free drivers. Operating System: Manjaro Linux
KDE Plasma Version: 5.18.5
KDE Frameworks Version: 5.70.0
Qt Version: 5.15.0
Kernel Version: 5.6.16-1-MANJARO
OS Type: 64-bit
Processors: 8 ร Intelยฎ Coreโข i7-6700HQ CPU @ 2.60GHz
Memory: 15,5 GiB of RAM
GPU: Nvidia GeForce 940M | I had to do this for an assignment, so I figure I'll share my solution here. The base WSL2 kernel does not allow modules to be loaded. You have to compile and use your own kernel build. How to compile and use a kernel in WSL2 In Ubuntu/WSL: sudo apt install build-essential flex bison libssl-dev libelf-dev git dwarves
git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
cd WSL2-Linux-Kernel
cp Microsoft/config-wsl .config
make -j $(expr $(nproc) - 1) From Windows, copy \\wsl$\<DISTRO>\home\<USER>\WSL2-Linux-Kernel\arch\x86\boot\bzimage to your Windows profile ( %userprofile% , like C:\Users\<Windows_user> ) Create the file %userprofile%\.wslconfig that contains: [wsl2]
kernel=C:\\Users\\WIN10_USER\\bzimage Note: The double backslashes ( \\ ) are required. Also, to avoid a potential old bug, make sure not to leave any trailing whitespace on either line. In PowerShell, run wsl --shutdown Reopen your flavor of WSL2 How to compile the module Note: You'll want to do these from /home/$USER/ or adjust the Makefile to match your location. Create a Makefile that contains: obj-m:=lkm_example.o
all:
make -C $(shell pwd)/WSL2-Linux-Kernel M=$(shell pwd) modules
clean:
make -C $(shell pwd)/WSL2-Linux-Kernel M=$(shell pwd) clean Run make Source for the .wslconfig file steps here . | {
"source": [
"https://unix.stackexchange.com/questions/594471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240617/"
]
} |
594,841 | I'm looking for the equivalent to this JS assignment: FOO = FOO || "I must have been falsey!"; | Either of these expansions might be what you're looking for, depending on when exactly you want to do the assignment: Omitting the colon results in a test only for a parameter that is unset. [...] ${parameter:-word} If parameter is unset or null, the expansion of word is substituted.
Otherwise, the value of parameter is substituted. ${parameter:=word} If parameter is unset or null, the expansion of word is assigned to
parameter. The value of parameter is then substituted. Positional
parameters and special parameters may not be assigned to in this way. If you just want to set a default on first use, then: some-command "${FOO:=default value}"
other-command "$FOO" # both use "default value" if FOO was null/unset If you want to be explicit about it: FOO="${FOO:-default value}"
some-command "${FOO}" | {
"source": [
"https://unix.stackexchange.com/questions/594841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/347216/"
]
} |
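A short interactive check of the two expansions, reusing the fallback string from the question (minus the exclamation mark, which would trigger history expansion in an interactive shell):
unset FOO
: "${FOO:=I must have been falsey}"   # ':' is a no-op; ':=' assigns only if FOO is unset or empty
echo "$FOO"                           # prints the fallback
FOO=explicit
echo "${FOO:-other}"                  # prints 'explicit'; ':-' substitutes without assigning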
594,903 | In a common Linux distribution, do utilities like rm , mv , ls , grep , wc , etc. run in parallel on their arguments? In other words, if I grep a huge file on a 32-threaded CPU, will it go faster than on dual-core CPU? | You can get a first impression by checking whether the utility is linked with the pthread library. Any dynamically linked program that uses OS threads should use the pthread library. ldd /bin/grep | grep -F libpthread.so So for example on Ubuntu: for x in $(dpkg -L coreutils grep findutils util-linux | grep /bin/); do if ldd $x | grep -q -F libpthread.so; then echo $x; fi; done However, this produces a lot of false positives due to programs that are linked with a library that itself is linked with pthread. For example, /bin/mkdir on my system is linked with PCRE (I don't know whyโฆ) which itself is linked with pthread. But mkdir is not parallelized in any way. In practice, checking whether the executable contains libpthread gives more reliable results. It could miss executables whose parallel behavior is entirely contained in a library, but basic utility typically aren't designed that way. dpkg -L coreutils grep findutils util-linux | grep /bin/ | xargs grep pthread
Binary file /usr/bin/timeout matches
Binary file /usr/bin/sort matches So the only tool that actually has a chance of being parallelized is sort . ( timeout only links to libpthread because it links to librt.) GNU sort does work in parallel: the number of threads can be configured with the --parallel option , and by default it uses one thread per processor up to 8. ( Using more processors gives less and less benefit as the number of processors increases , tapering off at a rate that depends on how parallelizable the task is.) grep isn't parallelized at all. The PCRE library actually links to the pthread library only because it provides thread-safe functions that use locks and the lock manipulation functions are in the pthread library. The typical simple approach to benefit from parallelization when processing a large amount of data is to split this data into pieces, and process the pieces in parallel. In the case of grep, keep file sizes manageable (for example, if they're log files, rotate them often enough) and call separate instances of grep on each file (for example with GNU Parallel ). Note that grepping is usually IO-bound (it's only CPU-bound if you have a very complicated regex, or if you hit some Unicode corner cases of GNU grep where it has bad performance), so you're unlikely to get much benefit from having many threads. | {
"source": [
"https://unix.stackexchange.com/questions/594903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154237/"
]
} |
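GNU sort is the one tool the answer singles out, and its threading is easy to benchmark; the file name, data size and thread counts below are arbitrary:
seq 20000000 | shuf > big.txt                       # roughly 150 MB of shuffled numbers
time sort --parallel=1 -S 1G -o /dev/null big.txt   # force a single thread
time sort --parallel=8 -S 1G -o /dev/null big.txt   # let sort use up to 8 threads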
594,914 | My problem is the following. (kubuntu 14.04 64bits, kernel 4.40) I have a remote computer (on another place, I can't go on site) that have two network cards. On the second card ( eth1 ), I have a dhcp client which serve the IP 192.168.0.189/24 . Through this IP, I can connect with Teamviewer or anydesk. On the first card ( eth0 ), the IPย is set to 192.168.2.10/24 . All works well. But I have a device IP that IP is 192.168.0.100/24 and must be connected on eth0 (note that 192.168.0.100/24 is free on eth1 ). So I add the IP 192.168.0.110/24 to eth0 to access this new device. The problem is, in that case, we cannot initiate new connection on Teamviewer or anydesk. So, I'm looking to explain my system that it must use eth0 to access 192.168.0.100 eth1 to all other 192.168.0.x I think that route could be what I want, but I don't want to test it right now, because on error, things will be terrible to debug. My question is: Will the command route add 192.168.0.100/24 eth0 be enough? Should I generate some script for the other 192.168.0.x addresses? #ip a before ip addr add 192.168.0.110/24 dev eth0
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.2.10/24 brd 192.168.2.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether yy:yy:yy:yy:yy:yy brd ff:ff:ff:ff:ff:ff
inet 192.168.0.189/24 brd 192.168.0.255 scope global noprefixroute eth1
valid_lft 401100sec preferred_lft forever
#ip a after ip addr add 192.168.0.110/24 dev eth0
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.2.10/24 brd 192.168.2.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 192.168.0.110/24 scope global secondary enp0s8
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether yy:yy:yy:yy:yy:yy brd ff:ff:ff:ff:ff:ff
inet 192.168.0.189/24 brd 192.168.0.255 scope global noprefixroute eth1
valid_lft 401100sec preferred_lft forever | | {
"source": [
"https://unix.stackexchange.com/questions/594914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15601/"
]
} |
594,928 | OK, this is weird. I have been battling this all day & have been unsuccessful so far. I am working on a project that is Python based. The project is started via systemd scripts. The weird thing is that vlc/cvlc works to an extent, but there is no dbus control. If I run the Python app from the command line, everything works perfectly. Running the app from systemd is where the wonkiness appears. For instance, when it is run with the following code & service script, I can't control vlc with dbus. If I run the Python script outside of systemd, I can access the dbus. There is another weird issue that is a side effect of whatever is causing this problem. It will play 1080p video just fine but not 4K. Try it out with the following & let me know if you can figure it out. I greatly appreciate any & all help. Thanks! PYTHON CODE (testvlc): #!/usr/bin/env python
from subprocess import Popen, PIPE
import time
vid = 'somevideo.mp4'
cmd = 'DISPLAY=:0 cvlc -f --no-osd %s -L' % vid
Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
while True:
print("Hello!")
time.sleep(5) SYSTEMD SCRIPT (testvlc.service): [Unit]
Description=Test VLC From Python Script
[Service]
User=user
ExecStart=/usr/bin/screen -D -S testvlc -m /home/user/testvlc
[Install]
WantedBy=multi-user.target | | {
"source": [
"https://unix.stackexchange.com/questions/594928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137026/"
]
} |
595,094 | I don't want to convert tabular data to nice columns like a standard awk recipe would produce. I want some text that's very long to be formatted into columns like a newspaper column. For example turn Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris tempus orci ut odio tincidunt, vel hendrerit ante viverra. Aenean mollis ex erat, ac commodo lectus scelerisque eget. Aenean sit amet purus felis. Aenean sit amet erat eget velit lobortis fermentum eget eget odio. Donec tincidunt rutrum varius. Nunc viverra ac erat id bibendum. Aenean sit amet venenatis arcu. Morbi enim enim, pulvinar sed velit in, sollicitudin tristique urna. In auctor ex vel diam sagittis, at placerat lacus sollicitudin. Sed a arcu dignissim, sodales odio ac, congue ante. Mauris posuere lorem varius tempor tincidunt. Etiam non metus ac nibh vulputate semper. Proin dapibus ullamcorper tortor, sed ultricies est euismod vel. Aliquam erat volutpat.
Phasellus at sem ornare, suscipit leo in, bibendum nulla. Sed fermentum enim id est feugiat, in commodo lectus fermentum. Sed quis volutpat felis. Donec turpis felis, dignissim vel mollis nec, pellentesque non odio. Aenean vitae sagittis libero, vel egestas diam. Nullam ornare purus quis eros euismod, viverra pretium turpis rhoncus. Etiam sagittis lorem non nisi molestie, ut dictum risus rhoncus. into Lorem ipsum varius. Nunc non metus ac vel mollis nec,
dolor sit amet, viverra ac erat id nibh vulputate pellentesque
consectetur bibendum. Aenean semper. Proin non odio. Aenean
adipiscing sit amet venenatis dapibus ullamcorper vitae sagittis
elit. Mauris arcu. Morbi enim tortor, sed libero, vel egestas
tempus orci ut enim, pulvinar ultricies diam. Nullam ornare
odio tincidunt, sed velit in, est euismod purus quis eros
vel hendrerit ante sollicitudin vel. Aliquam erat euismod, viverra
viverra. Aenean tristique urna. In volutpat. pretium turpis
mollis ex erat, auctor ex vel rhoncus. Etiam
ac commodo lectus diam sagittis, Phasellus at sagittis lorem non
scelerisque at placerat lacus sem ornare, nisi molestie,
eget. Aenean sollicitudin. Sed suscipit leo in, ut dictum risus
sit amet purus a arcu dignissim, bibendum nulla. Sed rhoncus.
felis. Aenean sit sodales odio ac, fermentum enim
amet erat eget congue ante. Mauris id est feugiat,
velit lobortis posuere lorem in commodo lectus
fermentum eget varius tempor fermentum. Sed
eget odio. Donec tincidunt. Etiam quis volutpat
tincidunt rutrum felis. Donec turpis
felis, dignissim It would need to be "paginated" too, by a double \n after the width is full. | You can use fold to break the text up and then feed it to pr . Both are most likely already available on your system. If this is the file lorem.txt : Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Integer
malesuada nunc vel risus commodo viverra maecenas accumsan lacus. Nec
feugiat nisl pretium fusce id velit ut tortor pretium. Lacus sed
turpis tincidunt id. Nibh sit amet commodo nulla facilisi. In metus
vulputate eu scelerisque felis. Id nibh tortor id aliquet. $ fold -w 20 -s lorem.txt | pr -3
2020-06-25 16:41 Page 1
Lorem ipsum dolor Integer malesuada turpis tincidunt
sit amet, nunc vel risus id. Nibh sit amet
consectetur commodo viverra commodo nulla
adipiscing elit, maecenas accumsan facilisi. In metus
sed do eiusmod lacus. Nec feugiat vulputate eu
tempor incididunt nisl pretium fusce scelerisque felis.
ut labore et dolore id velit ut tortor Id nibh tortor id
magna aliqua. pretium. Lacus sed aliquet. Check the pr and fold man pages for other options. | {
"source": [
"https://unix.stackexchange.com/questions/595094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22881/"
]
} |
596,450 | I want to grep all lines with only one "#" in a line. Example: xxx#aaa#iiiii
xxxxxxxxx#aaa
#xxx#bbb#111#yy
xxxxxxxxxxxxxxxx#
xxx#x
#x#v#e# Should give this output xxxxxxxxx#aaa
xxxxxxxxxxxxxxxx#
xxx#x | try grep '^[^#]*#[^#]*$' file where ^ ; begin of line
[^#]* ; any number of characters ≠ #
# ; #
[^#]* ; any number of characters ≠ #
$ ; end of line As suggested, you can also grep the whole line, with grep -x '[^#]*#[^#]*' , using the same pattern without the begin-of-line/end-of-line anchors. -x makes grep match the whole line; see man grep -x, --line-regexp
Select only those matches that exactly match the whole line. For a regular
expression pattern, this is like parenthesizing the pattern and then surrounding it
with ^ and $. | {
"source": [
"https://unix.stackexchange.com/questions/596450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/420898/"
]
} |
596,887 | While using Xorg X11, on KDE/Gnome/XFCE, how can we scale the display/resolution for the whole desktop and/or per application (when this is not available in the settings GUI)? The purpose is to keep the screen resolution unchanged (at max) while scaling the size (bigger/smaller) of the desktop/applications. | Linux display This is detailed in depth in the how does Linux's display works? QA. On most desktop systems (like KDE or Gnome) there are settings available in their respective settings panels; this guide is for additional/manual settings that can be applied to scale an application or the whole desktop. This reference article has a lot of valuable information on the matter. Scaling applications Scaling applications can be done mainly via DPI , a specific environment variable (explained below), an application's own settings or some specific desktop setting (out of scope of this QA). Qt applications can be scaled with the following environment variables; note that many applications hard-code sizes and fonts, so the result on such apps may not be as expected. export QT_AUTO_SCREEN_SET_FACTOR=0
export QT_SCALE_FACTOR=2
export QT_FONT_DPI=96 Gnome/GTK applications can be scaled with the following environment variables export GDK_SCALE=2
export GDK_DPI_SCALE=0.5 Gnome/GTK can also be scaled globally with this Gnome setting gsettings set org.gnome.desktop.interface text-scaling-factor 2.0 Chromium can be scaled with the following command chromium --high-dpi-support=1 --force-device-scale-factor=1.5 Xpra (python) can be used along with Run scaled to achieve per-app scaling. Environment variable modifications can be placed in ~/.profile so that they are applied globally and automatically after login. Scaling the desktop with Xorg X11 Xorg 's extension RandR has a scaling feature and can be configured with xrandr . This can be used to scale the desktop to display a bigger environment, which can be useful for HiDPI (High Dots Per Inch) displays. RandR can also be used the other way around, for example making a screen with a 1366x768 max resolution support a greater resolution like 1920x1080. This is achieved by simulating the new greater resolution while rendering it at the supported max resolution, similar to when we watch a Full-HD video on a screen that is not Full-HD. Scaling the desktop without changing the resolution Getting the screen name: xrandr | grep connected | grep -v disconnected | awk '{print $1}' Reduce the screen size by 20% (zoom-in) xrandr --output screen-name --scale 0.8x0.8 Increase the screen size by 20% (zoom-out) xrandr --output screen-name --scale 1.2x1.2 Reset xrandr changes xrandr --output screen-name --scale 1x1 Scaling the desktop and simulating/rendering a new resolution When using xrandr to "zoom-in" with the previous method, the desktop remains full screen, but when we "zoom-out" with for instance xrandr --output screen-name --scale 1.2x1.2 (to get an unsupported resolution) the desktop is not displayed in full screen, because this requires updating the resolution (probably to a higher resolution not supported by the screen); we can use a combination of --mode , --panning and --scale , xrandr's parameters, to achieve a full screen "zoom-out" scaling (simulating a new resolution), example: Get the current setup xdpyinfo | grep -B 2 resolution
# or
xdpyinfo Configuration example Scaling at: 120%
Used/max screen resolution: 1366 x 768
Resolution at 120% (res x 1.2): 1640 x 922 (round)
Scaling factor (new res / res): 1.20058565 x 1.20208604 The idea here is to increase the screen resolution virtually (because we are limited to 1366x768 physically) the command would be (replace screen-name ): xrandr --output screen-name --mode 1366x768 --panning 1640x922 --scale 1.20058565x1.20208604 Reset the changes with xrandr --output screen-name --mode 1366x768 --panning 1366x768 --scale 1x1
# restarting the desktop may be required example with KDE
# kquitapp5 plasmashell
# plasmashell & Making xrandr changes persistent There is a multitude of methods to make xrandr changes persistent, this and this QA have many examples. Experiment notes As a side note and as the result of experiments while using SDDM + KDE, after many tests to achieve a persistent config, I ended up loading a script with ~/.config/autostart ( systemsettings5 > Startup... > Autostart), and naming my script 00-scriptname to make it run first. # 00-scriptname
# Applying the main xrandr suited changes (scaling at x1.15)
xrandr --output eDP1 --mode 1366x768 --panning 1574x886 --scale 1.15226939x1.15364583
# This is where it gets odd/complicated; sometimes the screen resolution is not applied correctly or not applied at all...
# Note that "xrandr --fb" can be used alone to change the screen resolution in a normal situation...
# Here we take advantage of xrandr's "--fb" feature to make applying the config stable so it works every time.
# The odd thing here is that while re-applying the new resolution 1574x886 with "--fb" nothing happens, but
# if we use an unsupported resolution like 1574x884 (vs 1574x886) then xrandr forces the resolution
# to "reset itself" to the configured resolution (1574x886)...
# In short just re-apply the setting with "--fb" and an unsupported resolution to force a reset.
# ("--fb" can be used alone here without re-applying everything)
#xrandr --fb 1574x884
xrandr --fb 1574x884 --output eDP1 --mode 1366x768 --panning 1574x886 --scale 1.15226939x1.15364583 References Some KDE's gui tools: systemsettings5 > display, kcmshell5 xserver and kinfocenter . Links and sources: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 and 12 . | {
"source": [
"https://unix.stackexchange.com/questions/596887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120919/"
]
} |
596,889 | I have a problem with my CentOS. When I tried to install alien (the el7 package, because el8 isn't available), I got this error (my CentOS is set to Polish): [mlodybukk@localhost Pobrane]$ sudo yum install alien
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Ostatnio sprawdzono ważność metadanych: 0:01:14 temu w dniu nie, 5 lip 2020, 22:45:38.
Błąd:
Problem: conflicting requests
- nothing provides perl(:MODULE_COMPAT_5.16.3) needed by alien-8.90-3.el7.nux.noarch How can I repair this? | | {
"source": [
"https://unix.stackexchange.com/questions/596889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/421247/"
]
} |
596,894 | The Linux display system uses multiple technologies, protocols, extensions, applications, servers (daemons), drivers and concepts to achieve the windowing system, for instance: Xorg, Wayland, X11, OpenGL, RandR, XrandR, Screen Resolution, DPI, Display server, etc. That multitude can be overwhelming or confusing when we don't have the full picture. There is plenty of documentation for each side of the Linux display system, but globally how does it work exactly? | Linux display The Linux display system uses multiple technologies, protocols, extensions, applications, servers (daemons), drivers and concepts to achieve the windowing system, for instance: Xorg, Wayland, X11, OpenGL, RandR, XrandR, Screen Resolution, DPI, Display server, etc. This can be overwhelming to understand fully, but each side of it is meant for a specific purpose and they are not all used together at the same time. X protocol The X Window System, X11 (X version 11), is a windowing system for bitmap displays, common on Unix-like operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface; this is handled by individual programs. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces. X originated at Project Athena at the Massachusetts Institute of Technology (MIT) in 1984. The X protocol has been at version 11 (hence "X11") since September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org Server, available as free and open source software under the MIT License and similar permissive licenses. X implementation Most Linux distributions use X.Org Server, which is the free and open-source implementation of the display server for the X Window System (X11) stewarded by the X.Org Foundation. Xorg/X alone doesn't provide several of the features mentioned, like scaling or rendering; for that Xorg uses extensions such as XFixes , RandR (RandR is managed by xrandr ; it can for instance set up panning, resolution or scaling), GLX (OpenGL extension), Render or Composite , which causes an entire sub-tree of the window hierarchy to be rendered to an off-screen buffer; applications can then take the contents of that buffer and do whatever they like. The off-screen buffer can be automatically merged into the parent window or merged by external programs, called compositing managers; some window managers do compositing on their own, e.g. Compiz, Enlightenment, KWin, Marco, Metacity, Muffin, Mutter and Xfwm. For other " non-compositing " window managers, a standalone composite manager can be used, for example: Picom , Xcompmgr or Unagi . Xorg supported extensions can be listed with: xdpyinfo -display :0 -queryExtensions | awk '/^number of extensions:/,/^default screen number/' . On the other hand, Wayland is intended as a simpler replacement for Xorg/X11, easier to develop and maintain, but as of 2020 desktop support for Wayland is not yet fully ready outside of Gnome (e.g. KDE Kwin and Wayland support ); on the distribution side, Fedora does use Wayland by default . Note that Wayland and Xorg can work simultaneously; this can be the case depending on the configuration used. XWayland is a series of patches over the X.Org server codebase that implement an X server running upon the Wayland protocol.
The patches are developed and maintained by the Wayland developers for compatibility with X11 applications during the transition to Wayland, and were mainlined in version 1.16 of the X.Org Server in 2014. When a user runs an X application from within Weston, it calls upon XWayland to service the request. The whole scope A display server or window server is a program (like Xorg or Wayland) whose primary task is to coordinate the input and output of its clients to and from the rest of the operating system, the hardware, and each other. The display server communicates with its clients over the display server protocol, a communications protocol, which can be network-transparent or simply network-capable. For instance, X11 and Wayland are display server communications protocols. As shown on the diagram, a window manager is another important element of the desktop environment; it is system software that controls the placement and appearance of windows within a windowing system in a graphical user interface. Most window managers are designed to help provide a desktop environment. They work in conjunction with the underlying graphical system that provides required functionality support for graphics hardware, pointing devices, and a keyboard, and are often written and created using a widget toolkit. KDE uses KWin as a window manager (it has limited support for Wayland as of 2020); similarly, Gnome 2 uses Metacity and Gnome 3 uses Mutter as a window manager. Another important aspect of a window manager is the compositor or compositing window manager , which is a window manager that provides applications with an off-screen buffer for each window. The window manager composites the window buffers into an image representing the screen and writes the result into the display memory. Compositing window managers may perform additional processing on buffered windows, applying 2D and 3D animated effects such as blending, fading, scaling, rotation, duplication, bending and contortion, shuffling, blurring, redirecting applications, and translating windows into one of a number of displays and virtual desktops. Computer graphics technology allows for visual effects to be rendered in real time such as drop shadows, live previews, and complex animation. Since the screen is double-buffered , it does not flicker during updates. The most commonly used compositing window managers include, on Linux, BSD, Hurd and OpenSolaris: Compiz, KWin, Xfwm, Enlightenment and Mutter. Each one has its own implementation; for instance KDE's KWin compositor has many features/settings like animation speed, tearing prevention (vsync), window thumbnails, scaling method and can use OpenGLv2/OpenGLv3 or XRender as a rendering backend along with Xorg. ( XRender/Render is not to be confused with XRandR/RandR ). OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering. OpenGL is a rendering library that can be used with Xorg, Wayland or any application that implements it. OpenGL installation can be checked with glxinfo | grep OpenGL . The display resolution or display modes of a computer monitor or display device is the number of distinct pixels in each dimension that can be displayed. It is usually quoted as width×height, with the units in pixels: for example, 1024×768 means the width is 1024 pixels and the height is 768 pixels.
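For a quick check of the values discussed here (a purely illustrative sketch, assuming an Xorg session where the xrandr and xdpyinfo tools are available; the exact output format can vary between versions): xrandr --query | grep '\*'
# the mode marked with an asterisk is the one currently in use
xdpyinfo | grep -B 2 resolution
# reports the screen dimensions and the DPI currently in effect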
xrandr can be used to add or render/simulate a new display resolution. DPI stands for dots per inch and is a measure of spatial printing/display density, in particular the number of individual dots that can be placed in a line within the span of 1 inch (2.54 cm). Computer screens do not have dots, but they do have pixels; the closely related concept is pixels per inch, or PPI, and thus DPI is implemented with the PPI concept. The default 96 DPI measure means 96x96 vertically and horizontally. Additionally, the Is X DPI (dot per inch) setting just meant for text scaling? QA is very informative. Notes Some KDE gui tools: systemsettings5 > display, kcmshell5 xserver and kinfocenter . References Links and sources: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 and 12 . | {
"source": [
"https://unix.stackexchange.com/questions/596894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120919/"
]
} |
597,259 | Is there a standard dummy executable file that does nothing in Linux? I have a shell command that always opens $EDITOR before the build process to input arguments manually. In my case, my arguments are always already set (this is an automated script) so I never need it, but it still pops up and awaits user input. To solve this, I created an empty executable file that does nothing, so I can set EDITOR=dummy and the build script calls it, it exits and the build process can start. My question is, is there an existing official file in Linux that when executed does nothing, a sort of placeholder that I could use for this purpose? | There's the standard utilities true and false . The first does nothing but return an exit status of 0 for successful execution , the second does nothing but return a non-zero value indicating a non-successful result (*) . You probably want the first one. Though some systems that really want you to enter some text (commit messages, etc.) will check if the "edited" file was actually modified, and just running true wouldn't fly in that case. Instead, touch might work; it updates the timestamps of any files it gets as arguments . However, if the editor gets any other arguments than the filename touch would create those as files. Many editors support an argument like + NNN to tell the initial line to put the cursor in, and so the editor may be called as $EDITOR +123 filename.txt . (E.g. less does this, git doesn't seem to.) Note that you'll want to use true , not e.g. /bin/true . First, if there's a shell involved, specifying the command without a path will allow the shell to use a builtin implementation, and if a shell is not used, the binary file will be found in PATH anyway. Second, not all systems have /bin/true ; e.g. on macOS, it's /usr/bin/true . (Thanks @jpaugh.) (* or as the GNU man page puts it , false "[does] nothing, unsuccessfully". Thanks @8bittree.) | {
"source": [
"https://unix.stackexchange.com/questions/597259",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/421601/"
]
} |
600,496 | I have a workspace with many folders, all of which have long path names. For example: |- workspace
|-- this.is.very.long.name.context
|-- this.is.another.long.path.authors
|-- // 20 more folders
|-- this.is.folder.with.too.many.characters.folder They all start with the same phrase ( this.is ), which in my real case is 20 characters long, and they mostly differ in the last segment. Is there any way to quickly navigate through them using the cd command? Any wildcard characters like ? ? | I can't speak for others (e.g., zsh ) but if you are using bash ,
wildcards do work to an extent.
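As a side note, applied to the example directory names from this question (purely an illustrative sketch, assuming the shell is already inside the workspace directory): cd this.is.*context
cd *authors
The shell expands the pattern before cd ever runs, so this only works cleanly when exactly one directory matches; the general behaviour is shown in the example that follows.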
Example: ~ $ ls
Documents
Desktop
Downloads If you use an asterisk ( * ), you get: ~ $ cd *ments
~/Documents $ That's because bash can do the substitutions before the command ever gets to cd . In the case of cd , if multiple matches work, you would expect the behaviour to be undefined: ~ $ cd *s
bash: cd: too many arguments bash expands this to cd Documents Downloads ,
which doesn't make sense to cd . You can also rely on bash 's autocomplete. In your example, you can simply type cd t ; then hitting Tab will auto-complete to: cd this.is. or whatever the next ambiguous character is. Hit Tab a second time to see all options in this filtered set. You can repeat by entering another character to narrow it down, Tab to autocomplete to the next ambiguous character,
and then Tab to see all options. Going further, bash can handle wildcards in autocomplete. In the first case above, you can type cd D*s then hit Tab to get suggestions of what could match the pattern: ~ $ cd D*s
Documents/ Downloads/
~ $ cd D*s If only one match exists, it'll get completed for you. ~ $ cd *loads
~ $ cd Downloads/ You could also use ls if you don't mind being in the directory in question. The -d tells ls to list directories themselves instead of their contents. $ ls -d *long*
this.is.very.long.name.context
this.is.another.long.path.authors or you could use find if you want to look recursively: $ find workspace -type d -name '*long*'
workspace/this.is.very.long.name.context
workspace/this.is.another.long.path.authors | {
"source": [
"https://unix.stackexchange.com/questions/600496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/424574/"
]
} |
602,518 | I have been using ssh to access remote servers for many months, but recently I haven't been able to establish a reliable connection. Sometimes I cannot log in and get the message "Connection reset by port 22"; when I can log in, I get the error message "client_loop: send disconnect: Broken pipe" within a few minutes (even if the terminal is not idle). My ~/.ssh/config file has: Host *
ServerAliveInterval 300
ServerAliveCountMax 2
TCPKeepAlive yes My /etc/ssh/sshd_config file has: #ClientAliveInterval 300
#ClientAliveCountMax 3 I recently upgraded my xfinity plan to a faster speed and the problem started happening then. But xfinity insists the issue is on my end. Note that my roommate also has the same issue with ssh... Is there something that I'm missing on my end? Any help would be greatly appreciated!
(I'm running on a Mac) | I solved the same problem by editing the file ~/.ssh/config to have: Host *
ServerAliveInterval 20
TCPKeepAlive no Motivation: TCPKeepAlive no means "do not send keepalive messages to the server". When the opposite, TCPKeepAlive yes, is set, then the client sends keepalive messages to the server and requires a response in order to maintain its end of the connection . This will detect if the server goes down, reboots, etc. The trouble with this is that if the connection between the client and server is broken for a brief period of time (due to a flaky network connection), this will cause the keepalive messages to fail, and the client will end the connection with "broken pipe". Setting TCPKeepAlive no tells the client to just assume the connection is still good until proven otherwise by a user request, meaning that temporary connection breakages while your ssh term is sitting idle in the background won't kill the connection. | {
"source": [
"https://unix.stackexchange.com/questions/602518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/426520/"
]
} |
602,587 | I was making some routine checks just now and realized this: Raspberry Pi OS (previously called Raspbian) Source: Raspberry Pi OS I found no mention of this in their blog, nor on the Wikipedia page. Why change such a good name as "Raspbian" into the cumbersome and problematic "Raspberry Pi OS"? Now I have to rename a bunch of established code and stuff... | First some background: The original Pi fell uncomfortably between two stools hardware wise. A debian "armel" userland (with a Pi specific kernel) could run on the Pi but was far from taking advantage of it. Debian "armhf" wouldn't run because it's minimum CPU requirements were too high. To get around this Mike and I formed the Raspbian project and set about re-building all of Debian and I have been maintaining Raspbian since. While we did produce one or two complete OS images in the early days, the Raspbian project has mostly focused on maintaining a repository of Packages and left the building of OS images up to other people. Some time later the Raspberry Pi foundation started building their own Raspbian images. Over the years the delta between plain Raspbian and the Raspberry Pi foundation Raspbian images has grown as Raspberry Pi have developed their own desktop environment and have backported a substantial number of graphics related packages in support of their migration from their Pi-specific graphics stack to a Mesa based graphics stack. I have not been particularly happy with the lack of distinction between plain Raspbian and the Raspberry Pi foundation Raspbian images but I also didn't feel like pressing the issue too hard. Separately the Pi lineup has been evolving. The original Pi was using ARMv6 CPU, the Pi 2 was using ARMv7. It could run a Debian "armhf" userland and after a while Debian also added support for the Pi 2 in their kernel, though being an "upstream" kernel some things that are supported in the downstream raspberry pi kernels are not supported. The Pi 3 added 64-bit cores, which (after a bit of kernel development) meant Debian "arm64" could now run on the Pi. Then the Pi 4 came along offering up to 4GB of RAM. Through most of this the Raspberry Pi foundation decided to stick with a single OS image based on Raspbian as their official main OS. They decided that the benefits from multiple OS images did not justify the extra work. So that brings us forward to April 2020. The 8GB Pi 4 was in alpha testing and Raspberry Pi decided it was finally time to start producing a 64-bit OS image. I got an e-mail from Eben asking my opinion on naming. I expressed that I would not be happy about the name Raspbian being used for an image that did not actually use anything from the Raspbian project. The name Debian wasn't exactly great either because Debian were building their own images for the Pi. So Raspberry Pi decided to use the term "Raspberry Pi OS" for all their OS images (32-bit for Pi, 64-bit for Pi and 32-bit for PC) based on Debian or Raspbian. | {
"source": [
"https://unix.stackexchange.com/questions/602587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/426579/"
]
} |
603,236 | In the cd, bash help page: The variable CDPATH defines the search path for the directory containing
DIR. Alternative directory names in CDPATH are separated by a colon (:).
A null directory name is the same as the current directory. If DIR begins
with a slash (/), then CDPATH is not used. But I don't understand the concept of "Alternative directory", and can't find an example that illustrates the use of the colon ( : ) with the cd command. | The variable is not set by default (at least in the systems I am familiar with) but can be set to use a different directory to search for the target dir you gave cd . This is probably easier to illustrate with an example: $ echo $CDPATH ## CDPATH is not set
$ cd etc ## fails: there is no "etc" directory here
bash: cd: etc: No such file or directory
$ CDPATH="/" ##CDPATH is now set to /
$ cd etc ## This now moves us to /etc
/etc In other words, the default behavior for cd foo is "move into the directory named 'foo' which is a subdirectory of the current directory or of any other directory that is given in CDPATH". When CDPATH is not set, cd will only look in the current directory but, when it is set, it will also look for a match in any of the directories you set it to. The colon is not used with cd , it is used to separate the directories you want to set in CDPATH : CDPATH="/path/to/dir1:/path/to/dir2:/path/to/dirN" | {
"source": [
"https://unix.stackexchange.com/questions/603236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/427202/"
]
} |
604,123 | In his autobiography, Just for Fun , Linus mentions the "page-to-disk" feature that was pivotal in making Linux a worthy competitor to Minix and other UNIX clones of the day: I remember that, in December, there was this guy in Germany who only had 2 megabytes of RAM, and he was trying to compile the kernel and he couldn't run GCC because GCC at the time needed more than a megabyte. He asked me if Linux could be compiled with a smaller compiler that wouldn't need as much memory. So I decided that even though I didn't need the particular feature, I would make it happen for him. It's called page-to-disk, and it means that even though someone only has 2 mgs of RAM, he can make it appear to be more using the disk for memory. This was around Christmas 1991. Page-to-disk was a fairly big thing because it was something Minix had never done. It was included in version 0.12, which was released in the first week of January 1992. Immediately, people started to compare Linux not only to Minix but to Coherent, which was a small Unix clone developed by Mark Williams Company. From the beginning, the act of adding page-to-disk caused Linux to rise above the competition. That's when Linux took off. Suddenly there were people switching from Minix to Linux. Is he essentially talking about swapping here? People with some historical perspective on Linux would probably know. | Yes, this is effectively swapping. Quoting the release notes for 0.12 : Virtual memory. In addition to the "mkfs" program, there is now a "mkswap" program on
the root disk. The syntax is identical: "mkswap -c /dev/hdX nnn", and
again: this writes over the partition, so be careful. Swapping can then
be enabled by changing the word at offset 506 in the bootimage to the
desired device. Use the same program as for setting the root file
system (but change the 508 offset to 506 of course). NOTE! This has been tested by Robert Blum, who has a 2M machine, and it
allows you to run gcc without much memory. HOWEVER, I had to stop using
it, as my diskspace was eaten up by the beta-gcc-2.0, so I'd like to
hear that it still works: I've been totally unable to make a
swap-partition for even rudimentary testing since about christmastime.
Thus the new changes could possibly just have backfired on the VM, but I
doubt it. In 0.12, paging is used for a number of features, not just swapping to a device: demand-loading (only loading pages from binaries as they're used), sharing (sharing common pages between processes). | {
"source": [
"https://unix.stackexchange.com/questions/604123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/399259/"
]
} |
604,258 | dbus is supposed to provide "a simple way for applications to talk to one another". But I am still not sure what it is useful for, practically. I have never seen a situation where dbus is useful, I only see warnings that some dbus component has experienced errors, such as when I start terminator from commandline (so that I can see errors): Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files I got rid of the above error by adding NO_AT_BRIDGE=1 to /etc/environment . I have no idea what that does. Almost all gui applications seem to be linked with dbus . Some allow to be started without dbus , ie: terminator --no-dbus I see no difference in behavior. What is supposed to stop working, when terminator is started without dbus ? Also, I have tried disabling various dbus components to see what stops working: I have deleted /etc/X11/Xsession.d/95dbus_update-activation-env just to see what happens. It contained the following code: if [ -n "$DBUS_SESSION_BUS_ADDRESS" ] && [ -x "/usr/bin/dbus-update-activation-environment" ]; then
# subshell so we can unset environment variables
(
# unset login-session-specifics
unset XDG_SEAT
unset XDG_SESSION_ID
unset XDG_VTNR
# tell dbus-daemon --session to put the Xsession's environment in activated services' environments
dbus-update-activation-environment --verbose --all
)
fi Everything works the same, as far as I can tell. What was the purpose of the above script? In what situation would it be useful for my applications to talk to each other via dbus ? Are there applications that don't work without dbus ? My system is Debian Buster, and I am using plain openbox environment (without any desktop environment such as Gnome or KDE) | dbus does exactly what you said: it allows two-way communication between applications. For your specific example you mentioned terminator . From terminator's man page , we see: --new-tab
If this is specified and Terminator is already running, DBus
will be used to spawn a new tab in the first Terminator window. So if you do this from another terminal (konsole, xterm, gnome-terminal): $ terminator &
$ terminator --new-tab & You'll see that the first command opens a new window. The second command opens a new tab in the first window. That's done by the second process using dbus to find the first process, asking it to open a new tab, then quitting. If you do this from another terminal: $ terminator --no-dbus &
$ terminator --new-tab & You'll see that the first command opens a new window. The second command fails to find the first window's dbus, so it launches a new window. I installed terminator to test this, and it's true. In addition, I suspect polkit would be affected. Polkit uses dbus to elevate privileges for GUI applications. It's like the sudo of the GUI world. If you are in gnome, and see the whole screen get covered while you are asked for the administrator's password, that's polkit in action. I suspect you won't get that prompt in any GUI application you start from terminator if you have --no-dbus . It'll either fail to authenticate, or fallback to some terminal authentication. From terminator try pkexec ls . That will run ls with elevated privileges. See if it's different with and without the --no-dbus option. I don't have a polkit agent in my window manager (i3) so I can't test this one out. I mostly know about dbus in terms of systemd, so that's where the rest of my answer will come from. Are there applications that don't work without dbus ? Yes. Take systemctl . systemctl status will issue a query to "org.freedesktop.systemd1" , and will present that to you. systemctl start will call a dbus method and pass the unit as an argument to that method. systemd recieves the call and performs the action. If you want to take action in response to a systemd unit (i.e. foo.service) changing states, you can get a file descriptor for interface org.freedesktop.DBus.Properties with path /org/freedesktop/systemd1/unit/foo_2eservice and member PropertiesChanged . Setup an inotify on that FD and you suddenly have a way to react to a service starting, stopping, failing, etc. If you want to take a look at what's available on the systemd dbus for a specific unit (i.e. ssh.service ) try this command: busctl introspect \
org.freedesktop.systemd1 \
/org/freedesktop/systemd1/unit/ssh_2eservice
NAME TYPE SIGNATURE RESULT/VALUE FLAGS
org.freedesktop.DBus.Introspectable interface - - -
.Introspect method - s -
org.freedesktop.DBus.Peer interface - - -
.GetMachineId method - s -
.Ping method - - -
org.freedesktop.DBus.Properties interface - - -
.Get method ss v -
.GetAll method s a{sv} -
.Set method ssv - -
.PropertiesChanged signal sa{sv}as - -
org.freedesktop.systemd1.Service interface - - -
.AttachProcesses method sau - -
.GetProcesses method - a(sus) -
.AllowedCPUs property ay 0 -
.AllowedMemoryNodes property ay 0 -
.AmbientCapabilities property t 0 const
.AppArmorProfile property (bs) false "" const
.BindPaths property a(ssbt) 0 const
.BindReadOnlyPaths property a(ssbt) 0 const
.BlockIOAccounting property b false -
.BlockIODeviceWeight property a(st) 0 -
.BlockIOReadBandwidth property a(st) 0 -
.BlockIOWeight property t 18446744073709551615 -
.BlockIOWriteBandwidth property a(st) 0 -
.BusName property s "" const
.CPUAccounting property b false -
.CPUAffinity property ay 0 const
.CPUAffinityFromNUMA property b false const
.CPUQuotaPerSecUSec property t 18446744073709551615 -
.CPUQuotaPeriodUSec property t 18446744073709551615 -
.CPUSchedulingPolicy property i 0 const
.CPUSchedulingPriority property i 0 const
.CPUSchedulingResetOnFork property b false const
.CPUShares property t 18446744073709551615 -
.CPUUsageNSec property t 18446744073709551615 -
.CPUWeight property t 18446744073709551615 -
.CacheDirectory property as 0 const
.CacheDirectoryMode property u 493 const
.CapabilityBoundingSet property t 18446744073709551615 const
.CleanResult property s "success" emits-change
.ConfigurationDirectory property as 0 const
.ConfigurationDirectoryMode property u 493 const
.ControlGroup property s "/system.slice/ssh.service" -
.ControlPID property u 0 emits-change
.CoredumpFilter property t 51 const
.DefaultMemoryLow property t 0 -
.DefaultMemoryMin property t 0 -
.Delegate property b false -
.DelegateControllers property as 0 -
.DeviceAllow property a(ss) 0 -
.DevicePolicy property s "auto" -
.DisableControllers property as 0 -
.DynamicUser property b false const
.EffectiveCPUs property ay 0 -
.EffectiveMemoryNodes property ay 0 -
.Environment property as 0 const
.EnvironmentFiles property a(sb) 1 "/etc/default/ssh" true const
.ExecCondition property a(sasbttttuii) 0 emits-invalidation
.ExecConditionEx property a(sasasttttuii) 0 emits-invalidation
.ExecMainCode property i 0 emits-change
.ExecMainExitTimestamp property t 0 emits-change
.ExecMainExitTimestampMonotonic property t 0 emits-change
.ExecMainPID property u 835 emits-change
.ExecMainStartTimestamp property t 1597235861087584 emits-change
.ExecMainStartTimestampMonotonic property t 5386565 emits-change
.ExecMainStatus property i 0 emits-change
.ExecReload property a(sasbttttuii) 2 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "โฆ emits-invalidation
.ExecReloadEx property a(sasasttttuii) 2 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "โฆ emits-invalidation
.ExecStart property a(sasbttttuii) 1 "/usr/sbin/sshd" 3 "/usr/sbin/sshd" "โฆ emits-invalidation
.ExecStartEx property a(sasasttttuii) 1 "/usr/sbin/sshd" 3 "/usr/sbin/sshd" "โฆ emits-invalidation
.ExecStartPost property a(sasbttttuii) 0 emits-invalidation
.ExecStartPostEx property a(sasasttttuii) 0 emits-invalidation
.ExecStartPre property a(sasbttttuii) 1 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "โฆ emits-invalidation
.ExecStartPreEx property a(sasasttttuii) 1 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "โฆ emits-invalidation
.ExecStop property a(sasbttttuii) 0 emits-invalidation
.ExecStopEx property a(sasasttttuii) 0 emits-invalidation
.ExecStopPost property a(sasbttttuii) 0 emits-invalidation
.ExecStopPostEx property a(sasasttttuii) 0 emits-invalidation
.FileDescriptorStoreMax property u 0 const
.FinalKillSignal property i 9 const
.GID property u 4294967295 emits-change
.Group property s "" const
.GuessMainPID property b true const
.IOAccounting property b false -
.IODeviceLatencyTargetUSec property a(st) 0 -
.IODeviceWeight property a(st) 0 -
.IOReadBandwidthMax property a(st) 0 -
.IOReadBytes property t 18446744073709551615 -
.IOReadIOPSMax property a(st) 0 -
.IOReadOperations property t 18446744073709551615 -
.IOSchedulingClass property i 0 const
.IOSchedulingPriority property i 0 const
.IOWeight property t 18446744073709551615 -
.IOWriteBandwidthMax property a(st) 0 -
.IOWriteBytes property t 18446744073709551615 -
.IOWriteIOPSMax property a(st) 0 -
.IOWriteOperations property t 18446744073709551615 -
.IPAccounting property b false -
.IPAddressAllow property a(iayu) 0 -
.IPAddressDeny property a(iayu) 0 -
.IPEgressBytes property t 18446744073709551615 -
.IPEgressFilterPath property as 0 -
.IPEgressPackets property t 18446744073709551615 -
.IPIngressBytes property t 18446744073709551615 -
.IPIngressFilterPath property as 0 -
.IPIngressPackets property t 18446744073709551615 -
.IgnoreSIGPIPE property b true const
.InaccessiblePaths property as 0 const
...skipping...
.CollectMode property s "inactive" const
.ConditionResult property b true emits-change
.ConditionTimestamp property t 1597235861034899 emits-change
.ConditionTimestampMonotonic property t 5333881 emits-change
.Conditions property a(sbbsi) 1 "ConditionPathExists" false true "/etโฆ emits-invalidation
.ConflictedBy property as 0 const
.Conflicts property as 1 "shutdown.target" const
.ConsistsOf property as 0 const
.DefaultDependencies property b true const
.Description property s "OpenBSD Secure Shell server" const
.Documentation property as 2 "man:sshd(8)" "man:sshd_config(5)" const
.DropInPaths property as 0 const
.FailureAction property s "none" const
.FailureActionExitStatus property i -1 const
.Following property s "" -
.FragmentPath property s "/lib/systemd/system/ssh.service" const
.FreezerState property s "running" emits-change
.Id property s "ssh.service" const
.IgnoreOnIsolate property b false const
.InactiveEnterTimestamp property t 0 emits-change
.InactiveEnterTimestampMonotonic property t 0 emits-change
.InactiveExitTimestamp property t 1597235861039525 emits-change
.InactiveExitTimestampMonotonic property t 5338505 emits-change
.InvocationID property ay 16 90 215 118 165 228 162 72 57 179 144โฆ emits-change
.Job property (uo) 0 "/" emits-change
.JobRunningTimeoutUSec property t 18446744073709551615 const
.JobTimeoutAction property s "none" const
.JobTimeoutRebootArgument property s "" const
.JobTimeoutUSec property t 18446744073709551615 const
.JoinsNamespaceOf property as 0 const
.LoadError property (ss) "" "" const
.LoadState property s "loaded" const
.Names property as 2 "ssh.service" "sshd.service" const
.NeedDaemonReload property b false const
.OnFailure property as 0 const
.OnFailureJobMode property s "replace" const
.PartOf property as 0 const
.Perpetual property b false const
.PropagatesReloadTo property as 0 const
.RebootArgument property s "" const
.Refs property as 0 -
.RefuseManualStart property b false const
.RefuseManualStop property b false const
.ReloadPropagatedFrom property as 0 const
.RequiredBy property as 0 const
.Requires property as 3 "system.slice" "-.mount" "sysinit.tarโฆ const
.RequiresMountsFor property as 1 "/run/sshd" const
.Requisite property as 0 const
.RequisiteOf property as 0 const
.SourcePath property s "" const
.StartLimitAction property s "none" const
.StartLimitBurst property u 5 const
.StartLimitIntervalUSec property t 10000000 const
.StateChangeTimestamp property t 1597235861208937 emits-change
.StateChangeTimestampMonotonic property t 5507917 emits-change
.StopWhenUnneeded property b false const
.SubState property s "running" emits-change
.SuccessAction property s "none" const
.SuccessActionExitStatus property i -1 const
.Transient property b false const
.TriggeredBy property as 0 const
.Triggers property as 0 const
.UnitFilePreset property s "enabled" -
.UnitFileState property s "enabled" -
.WantedBy property as 1 "multi-user.target" const
.Wants property as 0 const You can see from this that the dbus interface is pretty powerful. You might ask: Why don't these applications just communicate via sockets or files? DBus provides a common interface. You don't need different logic to call methods or check properties based on the application you are talking to. You just need to know the name of the path. I've used systemd as an example because that's what I understand best, but there are tons of uses of dbus on most desktops. Everything from authentication to display settings is available on dbus. | {
"source": [
"https://unix.stackexchange.com/questions/604258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
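For example, the same unit can be inspected or acted on from a shell with busctl; a small sketch (the object path below is just systemd's escaping of "ssh.service"; substitute your own unit):

# read one property of the unit over D-Bus
busctl get-property org.freedesktop.systemd1 \
    /org/freedesktop/systemd1/unit/ssh_2eservice \
    org.freedesktop.systemd1.Unit ActiveState

# call a method on the manager object (may need root or polkit approval)
busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
    org.freedesktop.systemd1.Manager RestartUnit ss ssh.service replace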
604,616 | I want to write my own systemd unit files to manage really long running commands 1 (in the order of hours). While looking at the ArchWiki article on systemd , it says the following regarding choosing a start up type: Type=simple (default): systemd considers the service to be started up immediately. The process must not fork . Do not use this type if other services need to be ordered on this service, unless it is socket activated. Why must the process not fork at all? Is it referring to forking in the style of the daemon summoning process (parent forks, then exits), or any kind of forking? 1 I don't want tmux/screen because I want a more elegant way of checking status and restarting the service without resorting to tmux send-keys . | The service is allowed to call the fork system call. Systemd won't prevent it, or even notice if it does. This sentence is referring specifically to the practice of forking at the beginning of a daemon to isolate the daemon from its parent process. "The process must not fork [and exit the parent while running the service in a child process]". The man page explains this more verbosely, and with a wording that doesn't lead to this particular confusion. Many programs that are meant to be used as daemons have a mode (often the default mode) where when they start, they isolate themselves from their parent. The daemon starts, calls fork() , and the parent exits. The child process calls setsid() so that it runs in its own process group and session, and runs the service. The purpose is that if the daemon is invoked from a shell command line, the daemon won't receive any signal from the kernel or from the shell even if something happens to the terminal such as the terminal closing (in which case the shell sends SIGHUP to all the process groups it knows of). This also causes the servicing process to be adopted by init, which will reap it when it exits, avoiding a zombie if the daemon was started by something that wouldn't wait() for it (this wouldn't happen if the daemon was started by a shell). When a daemon is started by a monitoring process such as systemd, forking is counterproductive. The monitoring process is supposed to restart the service if it crashes, so it needs to know if the service exits, and that's difficult if the service isn't a direct child of the monitoring process. The monitoring process is not supposed to ever die and does not have a controlling terminal, so there are no concerns around unwanted signals or reaping. Thus there's no reason for the service process not to be a child of the monitor, and there's a good reason for it to be. | {
"source": [
"https://unix.stackexchange.com/questions/604616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/428422/"
]
} |
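A minimal unit sketch contrasting the two modes discussed in the answer; the unit name and command path are hypothetical:

# /etc/systemd/system/longjob.service (hypothetical)
[Unit]
Description=Long running job

[Service]
# Type=simple: the ExecStart process itself is the service, so it must
# stay in the foreground and must not daemonize (fork and exit the parent).
Type=simple
ExecStart=/usr/local/bin/longjob --foreground
Restart=on-failure

# If the program insists on daemonizing, use Type=forking instead,
# ideally together with a PID file:
#Type=forking
#PIDFile=/run/longjob.pid

[Install]
WantedBy=multi-user.target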
605,969 | Sometimes I need to add more disk to a database; for that, I need to list the disks to see what disks already exist. The problem is that the output is always sorted as 1,10,11,12...2,20,21...3 etc. How can I sort this output the way I want it? A simple sort does not work; I've also tried using sort -t.. -k.. -n . Example of what I need to sort: [root@server1 ~]# oracleasm listdisks
DATA1
DATA10
DATA11
DATA12
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
FRA1
FRA10
FRA11
FRA2
FRA3
..
OCR1
OCR2
OCR3
.... How I'd like to see the output: DATA1
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
DATA10
DATA11
DATA12
FRA1
FRA2
FRA3
..
..
FRA10
FRA11
..
OCR1
OCR2
OCR3
.... | Your best bet is piping to GNU sort , with GNU sort 's --version-sort option enabled so that would be oracleasm listdisks | sort --version-sort From the info page --version-sort
Sort by version name and number. It behaves like a standard sort,
except that each sequence of decimal digits is treated numerically
as an index/version number. (*Note Details about version sort::.) On your input it gives me DATA1
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
DATA10
DATA11
DATA12
FRA1
FRA2
FRA3
FRA10
FRA11
OCR1
OCR2
OCR3 | {
"source": [
"https://unix.stackexchange.com/questions/605969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373074/"
]
} |
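On GNU coreutils the same thing can be written with the short option; a one-line sketch:

oracleasm listdisks | sort -V    # -V is the short form of --version-sort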
607,524 | Softlinks are easily traceable to the original file with readlink etc... but I am having a hard time tracing hardlinks to the original file. $ ll -i /usr/bin/bash /bin/bash
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash*
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash*
^ above is as expected - cool --> both files point to same inode 1310813 (but the number of links, indicated by ^ , shows to be 1. From Gilles answer the reason for this can be understood) $ find / -samefile /bin/bash 2>/dev/null
/usr/bin/bash above is as expected - so no problems. $ find / -samefile /usr/bin/bash 2>/dev/null
/usr/bin/bash above is NOT cool. How do I trace the original file or every hardlink using the /usr/bin/bash file as reference? Strange - below did not help either. $ find / -inum 1310813 2>/dev/null
/usr/bin/bash | First, there is no original file in the case of hard links; all hard links are equal. However, hard links arenโt involved here, as indicated by the link count of 1 in ls -l โs output: $ ll -i /usr/bin/bash /bin/bash
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash*
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash* Your problem arises because of a symlink, the bin symlink which points to usr/bin . To find all the paths in which bash is available , you need to tell find to follow symlinks, using the -L option: $ find -L / -xdev -samefile /usr/bin/bash 2>/dev/null
/usr/bin/rbash
/usr/bin/bash
/bin/rbash
/bin/bash Iโm using -xdev here because I know your system is installed on a single file system; this avoids descending into /dev , /proc , /run , /sys etc. | {
"source": [
"https://unix.stackexchange.com/questions/607524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243342/"
]
} |
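To see at a glance whether two paths are hard links or simply reach one file through a symlinked directory, the link count and inode number reported by stat help; a sketch using GNU stat (format flags differ on BSD/macOS):

stat -c '%h link(s), inode %i: %n' /bin/bash /usr/bin/bash
# identical inode numbers with a link count of 1 mean a single file
# reached via a directory symlink such as /bin -> usr/bin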
608,116 | I recently installed ubuntu 20.04 and bluetooth seemed to work out-of-the-box. Yesterday, it stopped working with no known reason. I can turn it ON but the settings still show it to be OFF. I tried the following: $ sudo -i
$ rfkill list
0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: no
3: hci0: Bluetooth
Soft blocked: no
Hard blocked: no and on running bluetoothctl , Agent registered
[bluetooth]# power off
No default controller available
[bluetooth]# power on
No default controller available
[bluetooth]# exit What could be the problem and how to tackle it ? | I tried various hacks (all at once) and did a restart but I am not sure which led to the bluetooth working right. I ran sudo apt-get update
sudo apt upgrade
sudo systemctl start bluetooth
sudo rfkill unblock bluetooth # rfkill also requires sudo And after the restart, it worked :? | {
"source": [
"https://unix.stackexchange.com/questions/608116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/396778/"
]
} |
608,207 | I use tar -cJvf resultfile.tar.xz files_to_compress to create tar.xz and tar -xzvf resultfile.tar.xz to extract the archive in current directory. How to use multi threading in both cases? I don't want to install any utilities. | tar -c -I 'xz -9 -T0' -f archive.tar.xz [list of files and folders] This compresses a list of files and directories into an .tar.xz archive. It does so by specifying the arguments to be passed to the xz subprocess, which compresses the tar archive. This is done using the -I argument to tar, which tells tar what program to use to compress the tar archive, and what arguments to pass to it. The -9 tells xz to use maximum compression. The -T0 tells xz to use as many threads as you have CPUs. | {
"source": [
"https://unix.stackexchange.com/questions/608207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431721/"
]
} |
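For the extraction half of the question, xz can also be given -T0; note this is only a sketch, and truly parallel decompression needs a recent xz and an archive that was compressed in blocks (which the -T0 compression above produces):

XZ_OPT='-T0' tar -xvJf archive.tar.xz
# or name the decompressor explicitly; GNU tar adds -d when extracting
tar -x -I 'xz -T0' -f archive.tar.xz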
608,215 | I want to switch to the KDE desktop environment. But my distro uses gnome by default. Now if i run the apt full-upgrade command, the default DE that comes with it is Gnome and not KDE. Wouldn't that also put back Gnome and i would have to uninstall Gnome manually again, because the repository contains Gnome? 1.) How do i stop Gnome from installing when i do a apt full-upgrade? (Since i don't want Gnome) 2.) How do i go about managing my KDE package (i.e updating it). Do i also do a apt-mark hold on the KDE package just to "prevent any potential tamper" whenever i do apt full-upgrade? and then just apt-get update && apt-get upgrade KDE commands to update it? | tar -c -I 'xz -9 -T0' -f archive.tar.xz [list of files and folders] This compresses a list of files and directories into an .tar.xz archive. It does so by specifying the arguments to be passed to the xz subprocess, which compresses the tar archive. This is done using the -I argument to tar, which tells tar what program to use to compress the tar archive, and what arguments to pass to it. The -9 tells xz to use maximum compression. The -T0 tells xz to use as many threads as you have CPUs. | {
"source": [
"https://unix.stackexchange.com/questions/608215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431688/"
]
} |
608,842 | When I put export GPG_TTY=$(tty) in my .zshrc and restart terminal window and execute echo $GPG_TTY it says not a tty . When I source .zshrc by source ~/.zshrc && echo $GPG_TTY it correctly reports /dev/pts/1 . What could be that my .zshrc fails to find tty when its documentation says that .zshrc is used for interactive shell initialisation? Here is my .zshrc contents: # Enable Powerlevel10k instant prompt. Should stay close to the top of ~/.zshrc.
if [[ -r "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh" ]]; then
source "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh"
fi
export ZSH="/home/ashar/.oh-my-zsh"
export EDITOR=nvim
export GPG_TTY=$(tty)
ZSH_THEME="powerlevel10k/powerlevel10k"
plugins=(git zsh-autosuggestions)
source $ZSH/oh-my-zsh.sh
# To customize prompt, run `p10k configure` or edit ~/.p10k.zsh.
[[ ! -f ~/.p10k.zsh ]] || source ~/.p10k.zsh | tty command requires that stdin is attached to a terminal. When using Powerlevel10k , stdin is redirected from /dev/null when Instant Prompt is activated and until Zsh is fully initialized. This is explained in more detail in Powerlevel10k FAQ . To solve this problem you can either move export GPG_TTY=$(tty) to the top of ~/.zshrc so that it executes before Instant Prompt is activated, or (better!) use export GPG_TTY=$TTY . The latter version will work anywhere and it's over 1000 times faster. TTY is a special parameter set by Zsh very early during initialization. It gives you access to the terminal even when stdin might be redirected. | {
"source": [
"https://unix.stackexchange.com/questions/608842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432286/"
]
} |
608,847 | I use Linux Mint, running Cinnamon as my DE. I'm used to switching keyboard layouts using LAlt+LShift.
I'm also used to switching windows between workspaces using LCtrl+LAlt+LShift+<direction>. I used to have a configuration that allowed me to do both of those seamlessly - I do not remember having any issue with layouts changing without my will or with workspace hotkeys not working. Unfortunately, a data loss incident has forced me to lose some configs - including this one. Enabling the layout switching hotkeys in Keyboard Settings now makes me lose functionality I have with Ctrl+Alt+Shift. How did I manage to set this up? I would like to do it again. | tty command requires that stdin is attached to a terminal. When using Powerlevel10k , stdin is redirected from /dev/null when Instant Prompt is activated and until Zsh is fully initialized. This is explained in more detail in Powerlevel10k FAQ . To solve this problem you can either move export GPG_TTY=$(tty) to the top of ~/.zshrc so that it executes before Instant Prompt is activated, or (better!) use export GPG_TTY=$TTY . The latter version will work anywhere and it's over 1000 times faster. TTY is a special parameter set by Zsh very early during initialization. It gives you access to the terminal even when stdin might be redirected. | {
"source": [
"https://unix.stackexchange.com/questions/608847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432292/"
]
} |
610,494 | How do I remove the first 300 million lines from a 700 GB text file
on a system with 1 TB disk space total, with 300 GB available?
(My system has 2 GB of memory.)
The answers I found use sed, tail, head: How do I delete the first n lines of a text file using shell commands? Remove the first n lines of a large text file But I think (please correct me) I cannot use them due to the disk space being limited to 1 TB and they produce a new file and/or have a tmp file during processing. The file contains database records in JSON format. | If you have enough space to compress the file, which should free a significant amount of space allowing you to do other operations, you can try this: gzip file && zcat file.gz | tail -n +300000001 | gzip > newFile.gz That will first gzip the original input file ( file ) to create file.gz . Then, you zcat the newly created file.gz , pipe it through tail -n +300000001 to remove the first 300 million lines, compress the result to save disk space and save it as newFile.gz . The && ensures that you only continue if the gzip operation was successful (it will fail if you run out of space). Note that text files are very compressible. For example, I created a test file using seq 400000000 > file , which prints the numbers from 1 to 400,000,000 and this resulted in a 3.7G file. When I compressed it using the commands above, the compressed file was only 849M and the newFile.gz I created only 213M. | {
"source": [
"https://unix.stackexchange.com/questions/610494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160012/"
]
} |
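A slightly fuller sketch of the same idea, adding a spot-check before the compressed original is discarded (file names as in the answer):

gzip file &&
zcat file.gz | tail -n +300000001 | gzip > newFile.gz &&
zcat newFile.gz | head -n 1    # eyeball the first surviving record
# only after you are satisfied:
rm file.gz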
611,713 | I often need to pop the last positional argument of a bash function or script. By "pop" I mean: "remove it from the list of positional arguments, and (optionally) assign it to a variable." Given how frequently I need this operation, I am a bit surprised that best I have found is what is illustrated by the example below: foo () {
local argv=( "$@" )
local last=${argv[$(( ${#argv[@]} - 1 ))]}
argv=( ${argv[@]:0:$(( ${#argv[@]} - 1 ))} )
echo "last: $last"
echo "rest: ${argv[@]}"
} In other words, an epic production featuring a cast of thousands... Is there anything simpler, easier to read? | You can access the last element with ${argv[-1]} (bash 4.2 or above) and remove it from the array with the unset builtin (bash 4.3 or above): last=${argv[-1]}
unset 'argv[-1]' The quotes around argv[-1] are required as [...] is a glob operator, so argv[-1] unquoted could expand to argv- and/or argv1 if those files existed in the current directory (or to nothing or cause an error if they didn't with nullglob / failglob enabled). | {
"source": [
"https://unix.stackexchange.com/questions/611713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
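Put back into the shape of the function from the question, a sketch (needs bash 4.3 or newer for unset 'argv[-1]'):

foo () {
    local argv=( "$@" )
    local last=${argv[-1]}
    unset 'argv[-1]'
    echo "last: $last"
    echo "rest: ${argv[@]}"
}
foo a b c    # prints: last: c / rest: a b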
612,322 | Since being corrected many years ago, I switched from backticks to $() for command expansion. But I still prefer the backticks. It is fewer keystrokes and does not involve the Shift key. I understand that the parentheses are preferable because it is less prone to the errors that backticks is prone to, but what is the reason for the rule to never use backticks? | The Bash FAQ gives a number of reasons to prefer parentheses to backticks, but there isnโt a universal rule that you shouldnโt ever use backticks. The main reason to prefer parentheses in my view is that parsing inside $() is consistent with parsing performed outside, which isnโt the case with backticks. This means that you can take a shell command and wrap it with "$()" without much thought; thatโs not true if you use backticks instead. This cascades, so wrapping a command which itself contains a substitution is easily done with "$()" , not so with backticks. Ultimately I think itโs a question of habit. If you choose to use backticks for simple cases, parentheses for others, youโll have to make that choice every time you want to substitute a command. If you choose to always use parentheses, you never have to think about it again. The latter can explain the presence of a โdonโt use backticksโ rule in certain coding guides: it simplifies development, and removes a source of errors for developers and reviewers. It also explains why using parentheses can be recommended even for one-liners: itโs hard to ingrain a habit for script-writing when itโs not applied everywhere. (As far as keying goes, that depends on the keyboard layout; on my AZERTY keyboard, $() doesnโt involve any shifting, whereas backticks are quite painful to write.) | {
"source": [
"https://unix.stackexchange.com/questions/612322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251630/"
]
} |
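The nesting point is easiest to see side by side; both lines below compute the same thing, but the backtick form needs escaping that has to be re-derived at every level (the path is hypothetical):

parent=$(basename "$(dirname "/some/path/to/file")")
parent=`basename "\`dirname \"/some/path/to/file\"\`"`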
612,416 | I tried to check what my DNS resolver is and I noticed this: user@ubuntu:~$ cat /etc/resolv.conf
nameserver 127.0.0.53
options edns0 I was expecting 192.168.1.1 , which is my default gateway, my router. I don't understand why it points at 127.0.0.53 . When I hit that ip, apache2 serves me its contents. Could someone clear this up for me? Shouldn't the file point directly at my default gateway which acts as a DNS resolver - or even better directly at my preferred DNS which is 1.1.1.1 ? P.S: When I capture DNS packets with wireshark on port 53 all I see is 192.168.1.1 and not 127.0.0.53 , as it should be. | You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved 's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf , which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. It's likely that you have such a symbolic link.
In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved , which is forwarding query traffic to it as you have observed.
Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved . Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved , named nss-resolve , that is a plug-in for your C libraries.
Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf , applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all.
Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved , which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool.
It's not even IP traffic, let alone IP traffic over a loopback network interface. as the Desktop Bus is reached via an AF_LOCAL socket. If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client. The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer.
Read the resolved.conf (5) manual page for that. | {
"source": [
"https://unix.stackexchange.com/questions/612416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433400/"
]
} |
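For the second of the three choices (configuring systemd-resolved itself), a minimal sketch; the upstream resolver address is only an example:

# /etc/systemd/resolved.conf, or a drop-in under /etc/systemd/resolved.conf.d/
[Resolve]
DNS=1.1.1.1

# apply and inspect what the daemon is actually using
sudo systemctl restart systemd-resolved
resolvectl status    # 'systemd-resolve --status' on older releases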
612,420 | I have a Table in unix with below format and covert as output. +--------------------------+-------------------------+-+
| col_name | type |
+--------------------------+-------------------------+-+
| Name | String |
| Date | Fri 29 13:17:2020 |
+--------------------------+-------------------------+-+ Output: "col_name","type"
"Name","String"
"Date","Fri 29 13:17:2020" Any help would be appreciated. | You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved 's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf , which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. It's likely that you have such a symbolic link.
In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved , which is forwarding query traffic to it as you have observed.
Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved . Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved , named nss-resolve , that is a plug-in for your C libraries.
Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf , applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all.
Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved , which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool.
It's not even IP traffic, let alone IP traffic over a loopback network interface. as the Desktop Bus is reached via an AF_LOCAL socket. If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client. The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer.
Read the resolved.conf (5) manual page for that. | {
"source": [
"https://unix.stackexchange.com/questions/612420",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435422/"
]
} |
612,443 | Because of a few different applications I need to use, I need to be able to bypass Google's 2 Factor Authentication pam.d module when an SSH connection is coming from the same network. There is very little information about this online, but there are a few questions on the Stack Network, but none of the solutions worked for me. I am not sure if it is because the solutions are specifically for Linux, or I am just missing something. I am using macOS in all instances here. I am not very familiar with these settings. I do want to require a password, key, & 2FA if I am not on the same local network, but skip the 2FA if I am on the same local network Current Setup: SSH requires a valid key, password, & 2 Factor Auth File Contents Of: /etc/pam.d/sshd auth optional pam_krb5.so use_kcminit
auth optional pam_ntlm.so try_first_pass
auth optional pam_mount.so try_first_pass
auth required pam_opendirectory.so try_first_pass
auth required pam_google_authenticator.so nullok
account required pam_nologin.so
account required pam_sacl.so sacl_service=ssh
account required pam_opendirectory.so
password required pam_opendirectory.so
session required pam_launchd.so
session optional pam_mount.so /etc/ssh/ssh_config # Host *
# ForwardAgent no
# ForwardX11 no
# PasswordAuthentication yes
# HostbasedAuthentication no
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
# BatchMode no
# CheckHostIP yes
# AddressFamily any
# ConnectTimeout 0
# StrictHostKeyChecking ask
# IdentityFile ~/.ssh/id_rsa
# IdentityFile ~/.ssh/id_dsa
# IdentityFile ~/.ssh/id_ecdsa
# IdentityFile ~/.ssh/id_ed25519
# Port 22
# Protocol 2
# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
# MACs hmac-md5,hmac-sha1,[email protected]
# EscapeChar ~
# Tunnel no
# TunnelDevice any:any
# PermitLocalCommand no
# VisualHostKey no
# ProxyCommand ssh -q -W %h:%p gateway.example.com
# RekeyLimit 1G 1h
Host *
SendEnv LANG LC_* /etc/ssh/sshd_config #Protocol Version
Protocol 2
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
#PermitRootLogin prohibit-password
#StrictModes yes
MaxAuthTries 3
#MaxSessions 10
PubkeyAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
PermitEmptyPasswords no
# Change to no to disable s/key passwords
ChallengeResponseAuthentication yes
# Kerberos options
KerberosAuthentication yes
KerberosOrLocalPasswd yes
KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
#X11Forwarding no
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
ClientAliveInterval 360
ClientAliveCountMax 0
#UseDNS no
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# pass locale information
AcceptEnv LANG LC_*
# no default banner path
Banner /etc/ssh/banner
# override default of no subsystems
Subsystem sftp /usr/libexec/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server EDIT: I attempted a few different combinations of the listed solutions to the Stack posts at the links listed below but I could not get the provided solutions to work. I do not know if I am missing something in my configuration, or if it has to do with I'm using macOS, or if maybe the order of what's listed in my sshd file in pam.d is incorrect. SSH - Only require google-authenticator from outside local network https://serverfault.com/questions/799657/ssh-google-authenticator-ignore-whitelist-ips I attempted to add this to the sshd file in pam.d: auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access.conf
auth sufficient pam_google_authenticator.so And adding an access.conf file to /etc/security/access.conf: + : ALL : 10.0.1.0/24
+ : ALL : LOCAL
+ : ALL : 10.0.1.4
+ : ALL : 10.0.1.6
+ : ALL : 10.0.1.16
+ : ALL : 10.0.1.20
- : ALL : ALL | You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved 's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf , which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. It's likely that you have such a symbolic link.
In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved , which is forwarding query traffic to it as you have observed.
Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved . Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved , named nss-resolve , that is a plug-in for your C libraries.
Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf , applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all.
Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved , which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool.
It's not even IP traffic, let alone IP traffic over a loopback network interface. as the Desktop Bus is reached via an AF_LOCAL socket. If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client. The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer.
Read the resolved.conf (5) manual page for that. | {
"source": [
"https://unix.stackexchange.com/questions/612443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321965/"
]
} |
612,611 | Is it possible to make a function like function doStuffAt {
cd $1
# do stuff
} but make it so invoking that function doesn't actually change my pwd, it just changes it for duration of the function? I know I can save the pwd and set it back at the end, but I'm hoping there's a way to just make it happen locally and not have to worry about that. | Yes. Just make the function run its commands in a ( ) subshell instead of a { } group command: doStuffAt() (
cd -- "$1" || exit # the subshell if cd failed.
# do stuff
) The parentheses ( ( ) ) open a new subshell that will inherit the environment of its parent. The subshell will exit as soon as the commands running it it are done, returning you to the parent shell and the cd will only affect the subshell, therefore your PWD will remain unchanged. Note that the subshell will also copy all shell variables, so you cannot pass information back from the subshell function to the main script via global variables. For more on subshells, have a look at man bash : (list) list is executed in a subshell environment (see COMMAND
EXECUTION ENVIRONMENT below). Variable assignments and builtin
commands that affect
the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. Compare to: { list; } list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command.
The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell
metacharacter. | {
"source": [
"https://unix.stackexchange.com/questions/612611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433825/"
]
} |
612,905 | To backup a snapshot of my work, I run a command like tar -czf work.tgz work to create a gzipped tar file, which I can then drop in cloud storage. However, I have just noticed that gzip has a 4 GB size limit, and my work.tgz file is more than 4 GB. Despite that, if I create a gzip tar file on my current computer (running Mac OS X 10.15.4, gzip version is called Apple gzip 287.100.2) I can successfully retrieve it. So gunzip works on a >4GB in my particular case. But I want to be able to create and read these large gzip files on either Mac OS X or Linux, and possibly other systems in the future. My question is: will I be able to untar/gunzip large files anywhere? In other words, how portable is a gzip file which is more than 4 GB in size? Does it matter if I create it on Mac OS, Linux, or something else? A bit of online reading suggests gzip will successfully gzip/gunzip a larger file, but will not correctly record the uncompressed size, because the size is stored as a 32 bit integer. Is that all the limit is? | I have just noticed that gzip has a 4 GB size limit More accurately, the gzip format canโt correctly store uncompressed file sizes over 4GiB; it stores the lower 32 bits of the uncompressed size, and gzip -l misleadingly presents that as the size of the original data. The result is that, up to gzip 1.11 included, gzip -l wonโt show the right size for any compressed file whose original size is over 4GiB. Apart from that, there is no limit due to gzip itself, and gzip ped files over 4GiB are portable. The format is specified by RFC 1952 and support for it is widely available. The confusion over the information presented by gzip -l has been fixed in gzip 1.12 ; gzip -l now decompresses the data to determine the real size of the original data, instead of showing the stored size. Will I be able to untar/gunzip large files anywhere? Anywhere that can handle large files, and where spec-compliant implementations of tar and gunzip are available. In other words, how portable is a gzip file which is more than 4 GB in size? The gzip format itself is portable, and gzip files are also portable, regardless of the size of the data they contain. Does it matter if I create it on Mac OS, Linux, or something else? No, a gzip file created on any platform can be uncompressed on any other platform with the required capabilities (in particular, the ability to store large files, in the context of this question). See also Compression Utility Max Files Size Limit | Unix/Linux . | {
"source": [
"https://unix.stackexchange.com/questions/612905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435807/"
]
} |
613,231 | Heyo! I'm currently working on a non-lfs system from scratch with busybox as the star. Now, my login says: (none) login: Hence, my hostname is broken. hostname brings me (none) too. The guide I was following told me to throw the hostname to /etc/HOSTNAME . I've also tried /etc/hostname . No matter what I do, hostname returns (none) - unless I run hostname <thename> or hostname -F /etc/hostname . Now obviously, I don't want this to be done every time somebody freshly installed the distro -- so what is the real default file, if not /etc/hostname ? Thanks in advance! | The hostname commands in common toolsets, including BusyBox, do not fall back to files when querying the hostname.
They report solely what the kernel returns to them as the hostname from a system call, which the kernel initializes to a string such as "(none)", changeable by reconfiguring and rebuilding the kernel.
(In systemd terminology this is the dynamic hostname , a.k.a. transient hostname ; the one that is actually reported by Linux, the kernel.)
There is no "default file". There's usually a single-shot service that runs at system startup, fairly early on, that goes looking in these various files, pulls out the hostname, and initializes the kernel hostname with it.
(In systemd terminology this configuration string is the static hostname .)
For example: In my toolset I provide an "early" hostname service that runs the toolset's set-dynamic-hostname command after local filesystem mounts and before user login services. The work is divided into stuff that is done (only) when one makes a configuration change, and stuff that is done at (every) system bootstrap: The external configuration import mechanism reads /etc/hostname and /etc/HOSTNAME , amongst other sources (since different operating systems configure this in different ways), and makes an amalgamated rc.conf . The external configuration import mechanism uses the amalgamated rc.conf to configure this service's hostname environment variable. When the service runs, set-dynamic-hostname doesn't need to care about all of the configuration source possibilities and simply takes the environment variable, from the environment configured for the service, and sets the dynamic hostname from it. In systemd this is an initialization action that is hardwired into the code of systemd itself, that runs before service management is even started up. The systemd program itself goes and reads /etc/hostname (and also /proc/cmdline , but not /etc/HOSTNAME nor /etc/default/hostname nor /etc/sysconfig/network ) and passes that to the kernel. In Void Linux there is a startup shell script that reads the static hostname from (only) /etc/hostname , with a fallback to the shell variable read from rc.conf , and sets the dynamic hostname from its value. If you are building a system "from scratch", then you'll have to make a service that does the equivalent.
The BusyBox and ToyBox tools for setting the hostname from a file are hostname -F "${filename}" , so you'll have to make a service that runs that command against /etc/hostname or some such file. BusyBox comes with runit's service management toolset, and a simple runit service would be something along the lines of: #!/bin/sh -e
exec 2>&1
exec hostname -F /etc/hostname Further reading Lennart Poettering et al. (2016). hostnamectl . systemd manual pages. Freedesktop.org. Jonathan de Boyne Pollard (2017). " set-dynamic-hostname ". User commands manual . nosh toolset. Softwares. Jonathan de Boyne Pollard (2017). " rc.conf amalgamation ". nosh Guide . Softwares. Jonathan de Boyne Pollard (2015). " external formats ". nosh Guide . Softwares. Rob Landley. hostname . Toybox command list . landley.net. https://unix.stackexchange.com/a/12832/5132 | {
"source": [
"https://unix.stackexchange.com/questions/613231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382366/"
]
} |
613,843 | I was trying to compute sha256 for a simple string, namely "abc". I found out that using sha256sum utility like this: sha256sum file_with_string gives results identical to: sha256sum # enter, to read input from stdin
abc
^D namely: edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb Note, that before the end-of-input signal another newline was fed to stdin. What bugged me at first was that when I decided to verify it with an online checksum calculator, the result was different: ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad I figured it might have had something to do with the second newline I fed to stdin, so I tried inserting ^D twice this time (instead of using newline) with the following result: abcba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad Now, this is of course poorly formatted (due to the lack of a newline character), but that aside, it matches the one above. After that, I realized I clearly fail to understand something about input parsing in the shell. I double-checked and there's no redundant newline in the file I specified initially, so why am I experiencing this behavior? | The difference is the newline. First, let's just collect the sha256sums of abc and abc\n : $ printf 'abc\n' | sha256sum
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb -
$ printf 'abc' | sha256sum
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad - So, the ba...ad sum is for the string abc , while the ed..cb one is for abc\n . Now, if your file is giving you the ed..cb output, that means your file has a newline. And, given that "text files" require a trailing newline, most editors will add one for you if you create a new file. To get a file without a newline, use the printf approach above. Note how file will warn you if your file has no newline: $ printf 'abc' > file
$ file file
file: ASCII text, with no line terminators And $ printf 'abc\n' > file2
$ file file2
file2: ASCII text And now: $ sha256sum file file2
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad file
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb file2 | {
"source": [
"https://unix.stackexchange.com/questions/613843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398918/"
]
} |
615,012 | I recently read that it's a good idea to disable root login, e.g. by setting the root user's shell to /sbin/nologin instead of /bin/bash, and to use a non-root user with sudo rights. I did this now on a server of mine where logs were showing a large amount of login attempts. So instead of root, I now login as a non-root user, and use sudo whenever I need to. How is this safer? In both cases, if anyone cracks the password, they will be able to execute any command. | sudo improves safety/security by providing accountability , and privilege separation . Imagine a system that has more than one person performing administrative tasks. If a root login account is enabled, the system will have no record/log of which person performed a particular action. This is because the logs will only show root was responsible, and now we may not know exactly who root was at that time. OTOH, if all persons must login as a regular user, and then sudo for privilege elevation, the system will have a record of which user account performed an action. In addition, privileges for that particular user account may be managed and allocated in the sudoers file. To answer your question now, a hacker that compromises one user account will get only those privileges assigned to that account. Further, the system logs will (hopefully) have a record showing which user account was compromised. OTOH, if it's a simple, single-user system where the privileges in the sudoers file are set to ALL (e.g. %sudo ALL=(ALL:ALL) ALL ), then the advantages of accountability , and privilege separation are effectively neutered. Finally, in regard to the advantages of sudo , the likelihood is that a knowledgeable hacker may also be able to cover his tracks by erasing log files, etc; sudo is most certainly not a panacea. At the end of the day, I feel that like many other safeguards we put in place, sudo helps keep honest people honest - it's less effective at keeping dishonest people at bay. | {
"source": [
"https://unix.stackexchange.com/questions/615012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/437784/"
]
} |
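The privilege-separation point can be seen in a sudoers fragment: instead of the catch-all %sudo rule quoted in the answer, an account can be limited to named commands. A hypothetical sketch (user and commands are examples; always edit with visudo):

# /etc/sudoers.d/webadmin  (edit with: visudo -f /etc/sudoers.d/webadmin)
alice ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/journalctl -u nginx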
615,419 | After some googling, I found a way to compile BASH scripts to binary executables (using shc ). I know that shell is an interpreted language, but what does this compiler do? Will it improve the performance of my script in any way? | To answer the question in your title, compiled shell scripts could be better for performance โ if the result of the compilation represented the result of the interpretation, without having to re-interpret the commands in the script over and over. See for instance ksh93 's shcomp or zsh 's zcompile . However, shc doesnโt compile scripts in this way. Itโs not really a compiler, itโs a script โencryptionโ tool with various protection techniques of dubious effectiveness. When you compile a script with shc , the result is a binary whose contents arenโt immediately readable; when it runs, it decrypts its contents, and runs the tool the script was intended for with the decrypted script, making the original script easy to retrieve (itโs passed in its entirety on the interpreterโs command line, with extra spacing in an attempt to make it harder to find). So the overall performance will always be worse: on top of the time taken to run the original script, thereโs the time taken to set the environment up and decrypt the script. | {
"source": [
"https://unix.stackexchange.com/questions/615419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/420387/"
]
} |
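The genuine script compilers mentioned at the start of the answer are used roughly like this (file names are examples):

# ksh93: compile to shcomp byte code, then run it with ksh
shcomp myscript.sh myscript.shbin
ksh myscript.shbin

# zsh: pre-parse into a .zwc word-code file that zsh prefers when sourcing
zcompile myscript.zsh    # produces myscript.zsh.zwc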
615,438 | Context I'm trying to import a dump that have some long lines (8k+ character) with SQL*Plus, so I face the error SP2-0027: Input is too long (> 2499 characters) . This is a hard-coded limit and cannot be overcome. Expected solution I would like to stream my input in bash and to split lines longer than the expected width on the last , (comma) character. So I should have something like cat my_dump.sql | *magic_command* | sqlplus system/oracle@xe Details I know that newer version can accept lines up to 4999 characters but I still have lines longer ( grep '.\{5000\}' my_dump.sql | wc -l ) It is not really feasible to update the dump by hand I did try to use tr but this split every line wich I do not want I did try to use fmt and fold but it does not seems to be possible to use a custom delimiter I am currently looking on sed but I cannot seem to figure out a regexp that would "find the last match of , in the first 2500 characters if there is more than 2500 characters" | To answer the question in your title, compiled shell scripts could be better for performance โ if the result of the compilation represented the result of the interpretation, without having to re-interpret the commands in the script over and over. See for instance ksh93 's shcomp or zsh 's zcompile . However, shc doesnโt compile scripts in this way. Itโs not really a compiler, itโs a script โencryptionโ tool with various protection techniques of dubious effectiveness. When you compile a script with shc , the result is a binary whose contents arenโt immediately readable; when it runs, it decrypts its contents, and runs the tool the script was intended for with the decrypted script, making the original script easy to retrieve (itโs passed in its entirety on the interpreterโs command line, with extra spacing in an attempt to make it harder to find). So the overall performance will always be worse: on top of the time taken to run the original script, thereโs the time taken to set the environment up and decrypt the script. | {
"source": [
"https://unix.stackexchange.com/questions/615438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/438174/"
]
} |
615,485 | Pipes and redirection are two of the most powerful functions in Linux, and I love them. However, I'm stuck with a situation where I need to write a fixed piece of text to a file without using a pipe, redirection or a function. I'm using Bash in case that makes a difference. First: Why? I'll explain why, in case there's a simpler solution. I have a background yad notification with some menu entries. In some of the menu entries, I want the notification to write a fixed piece of text to a file. Here's an example of what I mean. yad --notification --command=quit --menu='Example!echo sample >text.txt' The problem is that yad doesn't accept redirection, so it literally prints the string sample >text.txt instead of redirecting. Likewise, the pipe symbol ( | ) is a separator in yad; but if you change that, yad takes it as a literal character. For example: yad --notification --command=quit --separator='#' --menu='Example!echo sample | tee text.txt' This literally prints the string sample | tee text.txt instead of piping. There's also no point in writing a function for yad to call, because yad runs in its own space and doesn't recognise the function. Hence my question Thus, I want a command like echo , cat or printf that takes an output file as an argument rather than a redirect. I have searched for such a command but cannot find it. I can, of course, write my own and put it in the default path: FILENAME="${1}"
shift
printf '%s\n' "${*}" >"${FILENAME}" and then yad --notification --command=quit --menu='Example!myscript text.txt sample' But, I'll be surprised indeed if Linux doesn't already have something like this! Thank you | This is a bit of an XY problem but fortunately you've explained your real problem so it's possible to give a meaningful answer. Sure, there are commands that can write text to a file without relying on their environment to open the file. For example, sh can do that: pass it the arguments -c and echo text >filename . Note that this does meet the requirement of โwithout redirectionโ here, since the output of sh is not redirected anywhere. There's a redirection inside the program that sh runs, but that's ok, it's just an internal detail of how sh works. But does this solve your actual problem? Your actual problem is to write text to a file from a yad action. In order to resolve that, you need to determine what a yad action is. Unfortunately, the manual does not document this. All it says is menu:STRING Set popup menu for notification icon. STRING must be in form name1[!action1[!icon1]]|name2[!action2[!icon2]]... . Empty name add separator to menu. Separator character for values (e.g. | ) sets with --separator argument. Separator character for menu items (e.g. ! ) sets with --item-separator argument. The action is a string, but a Unix command is a list of strings: a command name and its arguments. There are several plausible ways to turn a string into a command name and its arguments, including: Treating the string as a command name and calling it with no arguments. Since echo foo prints foo , rather than attempting to execute the program echo foo , this is not what yad does. Passing the string to a shell. Since echo >filename prints >filename , rather than writing to filename , this is not what yad does. Some custom string splitting. At this point, this is presumably what yad does, but depending on exactly how it does it, the solution to your problem can be different. Looking at the source code , the action is passed to popup_menu_item_activate_cb which calls the Glib function g_spawn_command_line_async . This function splits the given string using g_shell_parse_argv which has the following behavior, which is almost never what is desirable but can be worked around: Parses a command line into an argument vector, in much the same way the shell would, but without many of the expansions the shell would perform (variable expansion, globs, operators, filename expansion, etc. are not supported). The results are defined to be the same as those you would get from a UNIX98 /bin/sh, as long as the input contains none of the unsupported shell expansions. If the input does contain such expansions, they are passed through literally. So you can run a shell command by prefixing it with sh -c ' and terminating with ' . If you need a single quote inside the shell command, write '\'' . Alternatively, you can run a shell command by prefixing it with sh -c " , terminating with " , and adding a backslash before any of the characters "$\` that appear in the command. Take care of the nested quoting since the action is itself quoted in the shell script that calls yad. yad --notification \
--menu='Simple example!sh -c "echo sample text >text.txt"' \
--menu='Single-double-single quotes!sh -c "echo '\''Here you can put everything except single quotes literally: two spaces, a $dollar and a single'\''\'\'''\''quote.'\'' >text.txt"' \
--menu="Double-single-double quotes!sh -c 'echo \"Here punctuation is a bit tricky: two spaces, a \\\$dollar and a single'\\''quote.\"' >text.txt'" | {
"source": [
"https://unix.stackexchange.com/questions/615485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41226/"
]
} |
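A quick way to sanity-check the quoting technique from the yad answer above, outside of yad itself. This is an illustrative sketch, not part of the original answer; the file name text.txt and the sample text are arbitrary, and running the action string through sh is only an approximation of GLib's g_shell_parse_argv splitting (close enough for strings that contain no $, globs or other expansions).
# The action string exactly as it would appear inside --menu='Name!<action>'
action='sh -c "echo sample text >text.txt"'
# Let a shell split and run it, roughly as g_spawn_command_line_async would.
sh -c "$action"
# Verify the result.
cat text.txt    # prints: sample text
If the inner command needs single quotes, the '\'' dance described in the answer applies in the same way.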
616,330 | As far as I know, changing even a single bit of a file will change the whole checksum result, but when I change a file's name this does not affect its checksum (I've tried SHA-1, SHA-256 and MD5). Why? Is the file name not part of the file's data? Does it depend on the file system? | The name of a file is a string in a directory entry, while a number of other pieces of metadata (file type, permissions, ownership, timestamps etc.) are stored in the inode. The filename is therefore not part of what constitutes the actual data of the file. In fact, a single file may have any number of names (hard links) in the filesystem, and may additionally be accessible through any number of arbitrarily named symbolic links. Since the filename is not part of the file's data, it will not be included automatically when you calculate e.g. the MD5 checksum with md5 or md5sum or some similar utility. Changing the file's name (or ownership, timestamps, permissions etc.), or accessing it via one of its other names or symbolic links, if it has any, will therefore not have any effect on the file's MD5 checksum. | {
"source": [
"https://unix.stackexchange.com/questions/616330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378964/"
]
} |
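A short demonstration of the point made in the answer above (a sketch; the file names are made up): renaming a file changes only the directory entry, so the checksum of its data stays the same, while changing even one byte of content changes it completely.
printf 'some data\n' > original.txt   # create a file with known content
md5sum original.txt                   # note the hash value
mv original.txt renamed.txt           # rename: only the directory entry changes
md5sum renamed.txt                    # same hash (only the printed file name differs)
printf 'x' >> renamed.txt             # append a single byte of content
md5sum renamed.txt                    # the hash is now completely different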
618,665 | When a process is interrupted ("breaks"), as far as I know no further output is returned. But after interrupting the ping command we always get the statistics of the run, and as far as I know that is part of its output. amirreza@time:~$ ping 4.2.2.4
PING 4.2.2.4 (4.2.2.4) 56(84) bytes of data.
64 bytes from 4.2.2.4: icmp_seq=1 ttl=51 time=95.8 ms
64 bytes from 4.2.2.4: icmp_seq=2 ttl=51 time=92.3 ms
^C
--- 4.2.2.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 92.321/94.052/95.783/1.731 ms
amirreza@time:~$ How does it work? | Ctrl + C makes the terminal send SIGINT to the foreground process group. A process that receives SIGINT can do anything; it can even ignore the signal. A common reaction to SIGINT is to exit gracefully, i.e. after cleaning up etc. Your ping is simply designed to print statistics upon SIGINT and then to exit. Other tools may not exit upon SIGINT at all. E.g. a usual behavior of an interactive shell (while not running a command) is to clear its command line and redraw the prompt. SIGINT is not the only signal designed to terminate commands. See the manual ( man 7 signal ); there are many signals whose default action is to terminate the process. kill sends SIGTERM by default. SIGTERM is not SIGINT. Both can be ignored. SIGKILL cannot be caught, blocked, or ignored, but it should be your last choice. | {
"source": [
"https://unix.stackexchange.com/questions/618665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378964/"
]
} |
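The mechanism described in the answer above can be reproduced in a few lines of shell. This is a hedged sketch (the loop body and messages are invented for illustration): a trap on SIGINT lets the script print a summary before exiting, just as ping prints its statistics when you press Ctrl+C.
#!/bin/sh
# Count loop iterations and report them when the user presses Ctrl+C (SIGINT).
count=0
trap 'printf "\n--- summary ---\n%s iterations completed\n" "$count"; exit 0' INT
while :; do
    count=$((count + 1))
    sleep 1
done
Replacing INT with TERM would catch kill's default signal instead; SIGKILL, as the answer notes, cannot be trapped at all.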
618,683 | bug: resolv.conf auto-populates search and nameserver
seeking: a permanent fix, or a temporary one (run each time the system boots). Recommended solution: the resolvconf package solves the auto-population issue
(not to be confused with resolv.conf) -https://www.youtube.com/watch?v=NEyXDdBrw2c
-https://unix.stackexchange.com/q/209760/441088
-https://unix.stackexchange.com/q/362587/441088 My question is identical to the last (441088), except that I need resolv.conf to no longer update (auto-populate) search and nameservers. #sudo vi resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 84.200.70.40
nameserver 84.200.69.80
nameserver 192.168.4.1
nameserver 192.168.4.1
nameserver 192.168.1.1
nameserver 1.1.1.1
search autopopulated-isp-router 1.1.1.1 Apparently it just adds additional auto-populated nameservers below the already existing ones. (It is a little sneaky, so you must keep checking resolv.conf to catch the auto-population of nameservers and the search server, which are auto-appended to the resolvconf settings.) How can I change resolv.conf to prevent auto-populating of nameserver and search with ISP IP addresses? Tried with: # service networking stop && service network-manager start
# service networking start && service network-manager stop Network managers: Wicd with both networking and network-manager stopped; then no Wicd, just nmtui, with networking start, then with network-manager start. Replicable on Debian 10.1 and Kali 2020 (any version - tried them all). Replicable with DHCP or static configuration (yes, able to ping the local gateway/router and other IPs on the network). # /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd: files systemd
group: files systemd
shadow: files
gshadow: files
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname mymachines
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
netgroup: nis | | {
"source": [
"https://unix.stackexchange.com/questions/618683",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/441088/"
]
} |
619,625 | I asked about Linux's 255-byte file name limitation yesterday, and the answer was that it is a limitation that cannot/will not be easily changed. But I remembered that most Linux systems support NTFS, whose maximum file name length is 255 UTF-16 characters. So I created an NTFS partition and tried to name a file with a 160-character Japanese string, which is 480 bytes in UTF-8. I expected that it would not work, but it did, as shown below. How come it works when the file name was 480 bytes? Is the 255-byte limitation only for certain file systems, and can Linux itself handle file names longer than 255 bytes? ----PS----- The string is the beginning part of a famous old Japanese essay titled "方丈記" (Hōjōki). Here is the string.
ゆく河の流れは絶えずして、しかももとの水にあらず。よどみに浮かぶうたかたは、かつ消えかつ結びて、久しくとどまりたるためしなし。世の中にある人とすみかと、またかくのごとし。たましきの都のうちに、棟を並べ、甍を争へる、高き、卑しき、人の住まひは、世々を経て尽きせぬものなれど、これをまことかと尋ぬれば、昔ありし家はまれなり。
I used this web application to count the UTF-8 bytes. | The answer, as often, is "it depends". Looking at the NTFS implementation in particular, it reports a maximum file name length of 255 to statvfs callers, so callers which interpret that as a 255-byte limit might pre-emptively avoid file names which would be valid on NTFS. However, most programs don't check this (or even NAME_MAX ) ahead of time, and rely on ENAMETOOLONG errors to catch problems. In most cases, the important limit is PATH_MAX , not NAME_MAX ; that's what's typically used to allocate buffers when manipulating file names (for programs that don't allocate path buffers dynamically, as expected by OSes like the Hurd which doesn't have arbitrary limits). The NTFS implementation itself doesn't check file name lengths in bytes, but always as 2-byte characters; file names which can't be represented in an array of 255 2-byte elements will cause an ENAMETOOLONG error. Note that NTFS is generally handled by a FUSE driver on Linux. The kernel driver currently only supports UCS-2 characters, but the FUSE driver supports UTF-16 surrogate pairs (with the corresponding reduction in character length). | {
"source": [
"https://unix.stackexchange.com/questions/619625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379327/"
]
} |
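An illustrative way to poke at the limits discussed in the answer above. The mount point /mnt/ntfs (an NTFS volume) and $HOME (assumed to sit on a typical Linux filesystem such as ext4) are assumptions for this sketch, not taken from the original question.
getconf NAME_MAX "$HOME"        # typically 255, enforced as bytes on ext4
getconf NAME_MAX /mnt/ntfs      # also reported as 255, but enforced as UTF-16 units by NTFS
# 160 three-byte UTF-8 characters = 480 bytes, but only 160 UTF-16 code units.
name=$(printf 'あ%.0s' $(seq 1 160))
touch "/mnt/ntfs/$name"         # succeeds: 160 <= 255 UTF-16 code units
touch "$HOME/$name"             # fails with "File name too long" (ENAMETOOLONG): 480 > 255 bytes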