Just:

paste -d '\n' -- - - "$file2" < "$file1"

(provided $file2 is not -). Or with GNU sed, provided $file2 (the variable content, the file name) doesn't contain newline characters and doesn't start with a space or tab character:

sed "2~2R$file2" "$file1" < "$file2"

With awk (provided $file1 doesn't contain = characters, or at least that if it does, the part before it is not an acceptable awk variable name):

export file2
awk '{print}
     NR % 2 == 0 {if ((getline l < ENVIRON["file2"]) > 0) print l}
    ' "$file1"
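A quick sanity check of the paste variant, writing the question's sample data to two scratch files (the file names here are just illustrative):

```shell
# The two "-" arguments both refer to stdin (file1), so paste consumes
# two lines of file1 plus one line of file2 per output group, joining
# them with newlines -- which intersperses every third line.
cd "$(mktemp -d)"
printf '%s\n' 'bob 1 1 0' 'bob 1 0 1' 'alan 0 0 1' 'alan 0 1 1' > file1
printf '%s\n' 'bob a a b' 'alan a c a' > file2
paste -d '\n' -- - - file2 < file1
```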
I have a data file ($file1) which contains two lines of data per individual. I need to intersperse a third line of data from another data file ($file2). So my input looks like: >cat $file1 bob 1 1 0 bob 1 0 1 alan 0 0 1 alan 0 1 1>cat $file2 bob a a b alan a c aSo the desired result would be: >cat $file3 bob 1 1 0 bob 1 0 1 bob a a b alan 0 0 1 alan 0 1 1 alan a c aIf I just needed to intersperse every other line I would have used paste like so: >paste '-d\n' $file1 $file2What would be the best tool to use to achieve this? I am using zsh.
Intersperse lines from two files
Unix represents newlines with the character LF (line feed = \n = ^J = 10 decimal = 012 octal = 0x0a hexadecimal). Windows represents newlines with the two-character sequence CR, LF (CR = carriage return = \r = ^M = 13 decimal = 015 octal = 0x0d hexadecimal). When a Windows text file is processed by a Unix utility, each line thus ends with a spurious CR character. Cygwin utilities are straight ports of Linux tools; they don't handle Windows text files specially, so the CR characters end up as line content. In your pasted file, there's a CR before each comma. You can convert the files to Unix format first:

dos2unix *.csv
paste -d , test1.csv test2.csv > paste.csv

Or you can just remove the CR characters. Here it works to remove them from inside the line; some other text manipulations would require removing them before processing.

paste -d , test1.csv test2.csv | tr -d '\r' > paste.csv
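A self-contained sketch of the second fix, fabricating two small CRLF files with printf:

```shell
# Simulate Windows-style CRLF files, then strip the carriage returns
# after pasting. Without tr, each CR would sit just before the comma.
cd "$(mktemp -d)"
printf '1\r\n2\r\n' > test1.csv
printf '6\r\n7\r\n' > test2.csv
paste -d , test1.csv test2.csv | tr -d '\r' > paste.csv
cat paste.csv
```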
I have two csv files: test1.csv 1 2 3 4test2.csv 6 7 8 9I want to horizontally merge these two files. To do this I use paste -d , test1.csv test2.csv > paste.csv If I open this file in notepad it looks correct i.e. paste.csv 1,6 2,7 3,8 4,9However if I load paste.csv in Excel it looks like What I'm I missing? Thanks in advance! PS This might not matter, but I'm on windows and using Cygwin. UPDATE: When I opened paste.csv in notepad I notice something bizarre. There appears to be an invisible character between the number and comma. For example, if I put my cursor between 1 and , and the hit backspace, nothing happens. When I hit backspace again, the 1 is deleted as expected. If I delete all these invisible characters, and then load paste.csv in excel, it is correct!
Trouble with horizontal merge of csv files under Cygwin
I found that screen has a slowpaste function! https://gist.github.com/jandahl/8436cd6a99d56efd9ff4

Install screen, then make a .screenrc file if you don't have one:

startup_message off
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]'
defslowpaste 20 ## It is the value to the left that is the central one for your happiness

If you already have a .screenrc, then just add the last line: defslowpaste 20

Start screen with the appropriate serial port (Raspberry Pi, first gen, defaults to 115200 baud):

screen /dev/ttyUSB3 115200

Have fun!
I'm working on a raspberry pi, and trying to paste some text files into a command-line text editor nano... but the text ends up corrupted on the remote end (partial/incomplete text). I can only guess the paste function of my PC (xubuntu 16.04) is pushing the data too fast (the serial baud is 115200). Can I slow down the paste function somehow?
how can I slow down pasting text into a serial terminal?
With sed:

$ sed 's/\(.*\)\t\(.*\)\t/\1:\2-/' file
chr1:53736473-54175786
chr1:56861276-56876438
chr1:57512145-57512200

Or with printf:

$ printf "%s:%s-%s\n" $(< file)
chr1:53736473-54175786
chr1:56861276-56876438
chr1:57512145-57512200
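A one-line check of the sed variant on a single sample row (this relies on GNU sed, which understands \t in the pattern):

```shell
# The pattern captures the text around the two tabs and rejoins it with
# ":" and "-": chr1<TAB>53736473<TAB>54175786 -> chr1:53736473-54175786
printf 'chr1\t53736473\t54175786\n' | sed 's/\(.*\)\t\(.*\)\t/\1:\2-/'
```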
I have a tab delimited file like this: chr1 53736473 54175786 chr1 56861276 56876438 chr1 57512145 57512200I want to concatenate the three fields result like this: chr1:53736473-54175786 chr1:56861276-56876438 chr1:57512145-57512200I tried with paste -d ':-' file, which apparently didn't work. Could anyone help? Ideally could be with simple unix command, I know it is rather easy with higher language.
Concatenate different fields with different separators
With any awk in any shell on every UNIX box, for any number of input files, all you need is:

$ paste FileA FileB | awk '{o=$1; for (i=2; i<NF; i+=3) o=o"\t"$i; print o}'
A: 18.49 21.29
C: 14.49 38.71
B: 18.89 36.13
I have two or more files FileA A: 18.49 RPKM C: 14.49 RPKM B: 18.89 RPKMFileB A: 21.29 RPKM C: 38.71 RPKM B: 36.13 RPKMI want to paste these two files and print first-column only once and second column from each file Desired output (tab delimited) A: 18.49 21.29 C: 14.49 38.71 B: 18.89 36.13I used the paste command paste FileA FileB | awk '{ { print $1} {ORS="\t"} for (i=2; i<=NF; i+=3) { print $i } {print "\n"} }'I get this output A: 18.49 21.29 C: 14.49 38.71 B: 18.89 36.13 Can you please suggest how to fix this issue. thank you !!
Format output file from paste command
Why you can't use /dev/stderr as a pipeline

The problem isn't with paste, and neither is it with /dev/stdin. It's with /dev/stderr. All commands are created with one open input descriptor (0: standard input) and two outputs (1: standard output and 2: standard error). Those can typically be accessed with the names /dev/stdin, /dev/stdout and /dev/stderr respectively, but see How portable are /dev/stdin, /dev/stdout and /dev/stderr?. Many commands, including paste, will also interpret the filename - to mean STDIN.

When you run bb on its own, both STDOUT and STDERR are the console, where command output usually appears. The lines go through different descriptors (as shown by your annotate-output) but ultimately end up in the same place. When you add a | and a second command, making a pipeline...

bb | paste /dev/stdin /dev/stderr

the | tells the shell to connect the output of bb to the input of paste. paste first tries to read from /dev/stdin, which (via some symlinks) resolves to its own standard input descriptor (which the shell just connected up), so the line 1 comes through. But the shell/pipeline does nothing to STDERR. bb still sends that (e1, e2, etc.) to the console. Meanwhile, paste attempts to read from the same console, which hangs (until you type something). Your link Why can't I read /dev/stdout with a text editor? is still relevant here because those same restrictions apply to /dev/stderr.

How to make a second pipeline

You have a command that produces both standard output and standard error, and you want to paste those two lines next to each other. That means two concurrent pipes, one for each column. The shell pipeline ... | ... provides one of those, and you're going to need to create the second yourself, redirecting STDERR into it with 2>filename.

mkfifo RHS
bb 2>RHS | paste /dev/stdin RHS

If this is for use in a script, you may prefer to make that FIFO in a temporary directory, and remove it after use.
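A compact sketch of the FIFO approach, with a stand-in bb function producing two lines on each stream (the real command would replace it):

```shell
# Pair a command's stdout and stderr side by side: stderr goes into a
# named FIFO, stdout flows down the normal pipe, and paste reads both.
# The FIFO lives in a temporary directory, as suggested for scripts.
tmp=$(mktemp -d)
mkfifo "$tmp/RHS"
bb() { printf '%s\n' 1 2; printf '%s\n' e1 e2 >&2; }
bb 2>"$tmp/RHS" | paste /dev/stdin "$tmp/RHS"
rm -r "$tmp"
```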
Usually paste prints two named (or equivalent) files in adjacent columns like this: paste <(printf '%s\n' a b) <(seq 2)Output: a 1 b 2But when the two files are /dev/stdin and /dev/stderr, it doesn't seem to work the same way. Suppose we have blackbbox program which outputs two lines on standard output and two lines on standard error. For illustration purposes, this can be simulated with a function: bb() { seq 2 | tee >(sed 's/^/e/' > /dev/stderr) ; }Now run annotate-output, (in the devscripts package on Debian/Ubuntu/etc.), to show that it works: annotate-output bash -c 'bb() { seq 2 | tee >(sed 's/^/e/' > /dev/stderr) ; }; bb' 22:06:17 I: Started bash -c bb() { seq 2 | tee >(sed s/^/e/ > /dev/stderr) ; }; bb 22:06:17 O: 1 22:06:17 E: e1 22:06:17 O: 2 22:06:17 E: e2 22:06:17 I: Finished with exitcode 0So it works. Feed bb to paste: bb | paste /dev/stdin /dev/stderrOutput: 1 e1 e2 ^CIt hangs -- ^C means pressing Control-C to quit. Changing the | to a ; also doesn't work: bb ; paste /dev/stdin /dev/stderrOutput: 1 2 e1 e2 ^CAlso hangs -- ^C means pressing Control-C to quit. Desired output: 1 e1 2 e2Can it be done using paste? If not, why not?
Why can't `paste` print stdin next to stderr?
You could do this:

let tot=$(cat file1)+$(cat file2)
echo $tot
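As for the awk attempt in the question: {$0 = $1 + $2} assigns but never prints, and > file2 truncates file2 before paste even opens it. A safer sketch writes to a temporary file first:

```shell
# Sum the single numbers in file1 and file2, then replace file2 with
# the total via a temporary file so nothing is clobbered mid-read.
cd "$(mktemp -d)"
printf '3\n' > file1
printf '7\n' > file2
paste file1 file2 | awk '{print $1 + $2}' > file2.tmp && mv file2.tmp file2
cat file2
```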
I want to add numbers from two txt files. The number will change in file1, and file2 should update itself like this file2 = file1 + file2. Decimals not needed. Example: file1 3file2 7Output: file1 3file2 10I tried $ paste file1 file2 | awk '{$0 = $1 + $2}' > file2 but all it does is copy the number from file1 to file2.
How do I add numbers from two txt files and write it to the same file?
With any awk:

awk 'match($0,/0x[0-9a-zA-Z]*/) {print $0; print substr($0,RSTART,RLENGTH)}' file

With GNU awk:

gawk 'match($0,/0x[0-9a-zA-Z]*/,arr) {print $0; print arr[0]}' file

You might consider replacing 0x[0-9a-zA-Z]* with 0x[[:xdigit:]]+
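Run on one line of the sample input, the portable version behaves like this:

```shell
# match() sets RSTART/RLENGTH for the first regex hit, so substr()
# extracts exactly the matched hex token after printing the whole line.
printf 'low [ 0]: 0xffff0000 Interesting description A\n' |
  awk 'match($0,/0x[0-9a-zA-Z]*/) {print $0; print substr($0,RSTART,RLENGTH)}'
```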
Is it possible to output both the full line and the matched parts of some line? Suppose I have this input low [ 0]: 0xffff0000 Interesting description A hi [ 0]: 0xffff00a0 Interesting description B low [ 1]: 0x5000 Interesting description C hi [ 1]: 0x6000 Interesting description D ... hi [15]: 0x806000 ...And I would like to extract the hex value as an interesting part, and then the full line as well. I have used paste and 2 grep commands, but it feels really bulky and I would like to avoid process substitution (<()). This is what I got: paste -d'\n' <(grep '0x[0-9a-zA-Z]*' "$file") \ <(grep -o '0x[0-9a-zA-Z]*' "$file")What's a more, to-the-point way of doing this? I was thinking about awk, but not sure if it's possible to easily grab the matching part and print that (??? below): /0x[0-9a-zA-Z]*/ { print $0 ; print ??? }Example output: low [ 0]: 0xffff0000 Interesting description A 0xffff0000 hi [ 0]: 0xffff00a0 Interesting description B 0xffff00a0 low [ 1]: 0x5000 Interesting description C 0x5000 hi [ 1]: 0x6000 Interesting description D 0x6000 ... hi [15]: 0x806000 ... 0x806000
How to collect both full lines, and matching part of line?
There shouldn't be a limit. It's just that at least your first input file has DOS/Windows-style CRLF line endings, where the carriage return (CR) returns the cursor position to the start of the line before the separating TAB and the next value are printed. Note how the value from the second file starts at position 8, at the first tab stop. The actual output is something like this:

-0.023193359375<CR><TAB>-0.018707275391<NL>

You can verify it if you view the output with e.g. od -c; it should show \r for the CR there. Run the files or the output through tr -d '\r' to remove the CRs (or use dos2unix or any of the other various ways).
I have two files which contain only one column of numeric data each, and the same number of rows. When using paste, it does combine the rows from the two files into one row, but the text of the first file is truncated, while the text from the second file is intact: $ head -3 s1_.dat s2_.dat ==> s1_.dat <== -0.023193359375 -0.020416259766 0.014587402344==> s2_.dat <== -0.018707275391 -0.019805908203 0.011108398438$ paste s1_.dat s2_.dat | head -3 -0.02319-0.018707275391 -0.02041-0.019805908203 0.0145870.011108398438Are there column width limits in paste?
Why does paste command truncate one of the input files?
Sort files individually, and redirect the whole output to the resulting file:

for file in *.txt ; do
  sort -k1,1rn < "$file"
done > file.concatenated

(here it's important the output file doesn't have a .txt extension, as it's created first by the redirection). Or if you want to sort the files in place (rewriting them sorted over themselves):

set -- *.txt
ok=true
for file do
  sort -o "$file" -k1,1rn -- "$file" || ok=false
done
"$ok" && cat -- "$@" > file.concatenated

That two-stage approach allows us to detect problems in the sorting of files before creating the concatenated file. Your first loop didn't work because you were passing the full list of .txt files in each pass of the loop. sort -m is to merge already sorted files into a sorted output; it's the opposite of what you want. You want to sort files that are not already sorted and just concatenate the results without merging them into a sorted output.

Here, the files seem to be sorted in forward order. If you can rely on that being always the case, you should be able to just reverse them, which would be much more efficient than sorting them in reverse. To do that, GNU systems have a tac command, and several others tail -r (though beware that some implementations only take one file argument, so you may need to resort to a loop with those).

tac -- *.txt > file.concatenated

Also note that -k1,1rn is not the same as -rnk1,1 when it comes to resolving ties. When two lines compare equally, sort resorts to a lexical comparison of the whole line (for instance here, 1 a and 1 b compare equally with -k1,1n, but 1 a comes before 1 b lexically). With the -r option, that last-resort comparison is done in reverse. That doesn't apply when an r flag is added to one of the key specifications. GNU sort has -s to disable that last-resort comparison, which will cause it to preserve the original order of lines that compare equally.
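The first loop, run on the question's sample data, illustrates that the single output redirection spans the whole loop:

```shell
# Sort each .txt in reverse numeric order on field 1; the redirection
# after "done" collects all sorted blocks in file order, un-merged.
cd "$(mktemp -d)"
printf '%s\n' '0 33.1' '2 33.0' '10 21.1' '20 21.8' > a.txt
printf '%s\n' '0 30.1' '2 33.0' '10 28.1' '20 27.8' > b.txt
for file in *.txt; do
  sort -k1,1rn -- "$file"
done > file.concatenated
cat file.concatenated
```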
I want to sort several text files in reverse order and then merge/cat to one single text file. a.txt 0 33.1 2 33.0 10 21.1 20 21.8b.txt 0 30.1 2 33.0 10 28.1 20 27.8and so on *.txt files I want output like this 20 21.8 10 21.1 2 33.0 0 33.1 20 27.8 10 28.1 2 33.0 0 30.1I don't want like this 20 21.8 20 27.8 10 21.1 10 28.1 2 33.0 2 33.0 0 33.1 0 30.1I tried these code for file in *.txt ; do sort -nrk 1,1 *.txt > "$file" ; doneand also I tried sort -m *.txtBut the output from these codes is not I wanted. I am looking for solutions using sort merge paste cat or some other relevant options. Many thanks for any help.
sort multiple files and merge
With awk:

# create two test files
printf '%s\n' one two three four five six > target_file
printf '%s\n' 1:.196 5:.964 6:.172 > numbers

awk -F':' 'NR==FNR{ a[$1]=$2; next } FNR in a{ $0=$0 a[FNR] }1' numbers target_file

Output:

one.196
two
three
four
five.964
six.172

Explanation:

awk -F':' '      # use `:` as input field separator
NR==FNR {        # if this is the first file, then...
  a[$1]=$2       # save the second field in array `a` using the first field as index
  next           # stop processing, continue with the next line
}
FNR in a {       # test if the current line number is present in the array
  $0=$0 a[FNR]   # append array value to the current line
}
1                # print the current line
' numbers target_file
I have a list of numbers which i would like to add to the end of another file as the final column: 1:.196 5:.964 6:.172The numbers in front (1,5 and 6) indicate at which line the numbers have to be appended in the target file, so that the first line ends with .196, the fifth with .964 and so on. The usual paste file1 file2 does not take the line numbers into account and simply adds 1:.196 at the end of the first line and .964 at the end of the second instead of the fifth. Any ideas how to do it the right way? Expected would be something like this: Lorem Ipsum 1238 Dolor Sit 4559.196 Lorem Ipsum 4589 Sit elitr 1234 Lorem Ipsum 3215 Dolor Sit 5678 Lorem Ipsum 7825 Dolor Sit 9101 Lorem Ipsum 1865 Dolor Sit 1234.964
Append a column to a file based on line number
The answers in Turn off buffering in pipe provide several techniques which you can use. The principal idea is that commands which are not connected to an interactive terminal (for example, in a pipeline) use buffering. One command which can run another command with different buffering settings is the GNU coreutils stdbuf command, which you can apply to your case, unbuffering each command:

paste <(stdbuf -i0 -o0 -e0 sar -q 5 10 | stdbuf -i0 -o0 -e0 awk '{printf "%-8s %-2s %7s %7s %7s\n", $1,$2,$3,$4,$5}') <(stdbuf -i0 -o0 -e0 sar -r 5 10 | stdbuf -i0 -o0 -e0 awk '{printf "%9s %9s %8s\n", $3,$4,$5}')

In the above, stdbuf is used to unbuffer the input, output and standard error of each awk and sar command. As pointed out in the comments, in this case unbuffering output is all that is required, so this can be shortened to:

paste <(stdbuf -o0 sar -q 5 10 | stdbuf -o0 awk '{printf "%-8s %-2s %7s %7s %7s\n", $1,$2,$3,$4,$5}') <(stdbuf -o0 sar -r 5 10 | stdbuf -o0 awk '{printf "%9s %9s %8s\n", $3,$4,$5}')
I'm combining live sar samples with paste and trying to format the output with awk live. It works as expected to format the output, but it doesn't do the formatting live on each sample and instead waits till the full 50 seconds (5 sec samples, 10 of them) completes.Solution was (stdbuf -o0) to disable buffering on the output streamstdbuf -o0 paste <(sar -q 1 5) <(sar -r 1 5) | awk '{printf "%8s %2s %7s %7s %7s %8s %9s %8s\n", $1,$2,$3,$4,$5,$11,$12,$13}'11:53:21 AM runq-sz plist-sz ldavg-1 kbmemfree kbmemused %memused 11:53:22 AM 1 167 0.03 46504 449264 90.62 11:53:23 AM 1 167 0.03 46504 449264 90.62 11:53:24 AM 1 167 0.03 46504 449264 90.62 11:53:25 AM 1 167 0.03 46008 449760 90.72 11:53:26 AM 1 167 0.03 46624 449144 90.60 Average: 1 167 0.03 0.05 90.64 40876 172816First sar command sar -q 5 10 | awk '{printf "%-8s %-2s %7s %7s %7s\n", $1,$2,$3,$4,$5}'01:02:08 AM runq-sz plist-sz ldavg-1 01:02:13 AM 1 160 0.09 01:02:18 AM 1 160 0.08 01:02:23 AM 1 160 0.08Second sar command sar -r 5 10 | awk {'printf "%-8s %-2s %9s %9s %8s\n", $1,$2,$3,$4,$5}'01:19:27 AM kbmemfree kbmemused %memused 01:19:32 AM 113840 381928 77.04 01:19:37 AM 113800 381968 77.05 01:19:42 AM 113840 381928 77.04Using paste alone works to combine both sar reports in real-time paste <(sar -q 5 10) <(sar -r 5 10)01:21:09 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked 01:21:09 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 01:21:14 AM 2 159 0.00 0.03 0.05 0 01:21:14 AM 111416 384352 77.53 12512 142052 948776 191.37 195400 121036 56 01:21:19 AM 2 157 0.00 0.03 0.05 0 01:21:19 AM 111928 383840 77.42 12512 142056 947168 191.05 195060 121036 60 01:21:24 AM 1 156 0.00 0.03 0.05 0 01:21:24 AM 112244 383524 77.36 12528 142060 946784 190.97 194932 121052 72Trying to format with awk inside paste shows nothing till the sar samples complete Same things happens when doing awk formatting after the paste combine paste <(sar -q 5 10 | awk '{printf "%-8s %-2s %7s %7s 
%7s\n", $1,$2,$3,$4,$5}') <(sar -r 5 10 | awk {'printf "%9s %9s %8s\n", $3,$4,$5}')01:56:53 AM runq-sz plist-sz ldavg-1 kbmemfree kbmemused %memused 01:56:58 AM 1 157 0.00 99664 396104 79.90 01:57:03 AM 1 157 0.00 99664 396104 79.90 01:57:08 AM 1 157 0.00 99644 396124 79.90 01:57:13 AM 1 157 0.00 99612 396156 79.91 01:57:18 AM 1 157 0.00 99628 396140 79.90 01:57:23 AM 1 157 0.00 99656 396112 79.90 01:57:28 AM 1 157 0.00 99520 396248 79.93 01:57:33 AM 1 157 0.00 99656 396112 79.90 01:57:38 AM 2 157 0.00 99268 396500 79.98 01:57:43 AM 1 157 0.00 100152 395616 79.80 Average: 1 157 0.00 0.01 396122 79.90 17557If I use the full sar logs instead of samples it works obviously paste <(sar -q | awk '{printf "%-8s %-2s %7s %7s %7s\n", $1,$2,$3,$4,$5}') <(sar -r | awk {'printf "%9s %9s %8s\n", $3,$4,$5}')12:00:01 AM runq-sz plist-sz ldavg-1 kbmemfree kbmemused %memused 12:10:01 AM 3 156 0.00 71500 424268 85.58 12:20:01 AM 1 150 0.00 110836 384932 77.64 12:30:01 AM 1 150 0.00 108164 387604 78.18So, is there anyway to have the paste command update out to awk after each sample so I can watch the formatted output scroll by live?
Format sar samples live with paste and awk
The behaviour you're looking for is a bug that was fixed between bash-3.2 (the version found on macOS) and bash-4.0. From the CHANGES file:

rr. Brace expansion now allows process substitutions to pass through unchanged.

For a one-liner, you might try awk (note the script must be quoted as a whole):

awk -F '\t' 'FNR != NR {exit} {out=$5; for (i = 2; i < ARGC; i++) {getline < ARGV[i]; out = out "," $5}; print out}' test*/example.tsv

Explanation:

FNR != NR { exit }             # Exit after the first file is finished.
{ out=$5;                      # save the first file's fifth field
  for (i = 2; i < ARGC; i++) { # loop over the remaining arguments (filenames).
    getline < ARGV[i];         # Read in the next line from the i-th file.
    out = out "," $5           # save the fifth field of the line just read
  };
  print out                    # print saved columns.
}
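A sketch of the awk one-liner on three hypothetical directories, each holding a 5-column TSV whose fifth field is the directory name:

```shell
# The main loop walks the first file; for each of its lines, getline
# pulls the matching line from every remaining argument, and the fifth
# fields are joined with commas -- mimicking the paste/cut combination.
cd "$(mktemp -d)"
for d in test test1 test2; do
  mkdir "$d"
  printf 'a\tb\tc\td\t%s\n' "$d" > "$d/example.tsv"
done
awk -F '\t' 'FNR != NR {exit}
  {out=$5
   for (i = 2; i < ARGC; i++) {getline < ARGV[i]; out = out "," $5}
   print out}' test*/example.tsv
```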
Why does a brace expansion behave differently than wildcard in combination with paste? Example: Assume we have multiple folders, each containing the same-structured tsv and want to create a 'all.tsv' containing the 5th row of each of those. The two commands behave differently: paste -d, <(cut -d$'\t' -f5 {test,test1,test2}/example.tsv) > all.tsvvs paste -d, <(cut -d$'\t' -f5 test*/example.tsv) > all.tsvThe first creates a tsv with 3 columns as expected, the second one creates a single columned tsv with the values beneath each other. My problem is that list of folders is arbitrarily big, potentially quite long and not sequential. Is there a way to achieve the same behavior as brace expansion with wildcard without moving to a bash script and iteration over the folders? Using GNU bash
Paste & Brace Expansion vs Wildcard
GNU awk is used. I put this command in a bash script; it will be more convenient. Usage: ./join_files.sh or, for pretty printing: ./join_files.sh | column -t.

#!/bin/bash

gawk '
NR == 1 {
    PROCINFO["sorted_in"] = "@ind_num_asc";
    header = $1;
}

FNR == 1 {
    file = gensub(/.*\/([^.]*)\..*/, "\\1", "g", FILENAME);
    header = header OFS file;
}

FNR > 1 {
    arr[$1] = arr[$1] OFS $5;
}

END {
    print header;
    for(i in arr) {
        print i arr[i];
    }
}' results/*.genes.results

Output (I created three files with the same content for testing):

$ ./join_files.sh | column -t
gene_id          TB1      TB2      TB3
ENSG00000000003  1.00     1.00     1.00
ENSG00000000005  0.00     0.00     0.00
ENSG00000000419  1865.00  1865.00  1865.00
ENSG00000000457  1521.00  1521.00  1521.00
ENSG00000000460  1860.00  1860.00  1860.00
ENSG00000000938  6846.00  6846.00  6846.00
ENSG00000000971  0.00     0.00     0.00
ENSG00000001036  1358.00  1358.00  1358.00
ENSG00000001084  1178.00  1178.00  1178.00

Explanation - the same code with comments added. Also, look at man gawk.

gawk '
# NR - the total number of input records seen so far.
# If the total line number equals 1
NR == 1 {
    # If the "sorted_in" element exists in PROCINFO, then its value controls
    # the order in which array elements are traversed in (for in) loops;
    # otherwise the order is undefined.
    PROCINFO["sorted_in"] = "@ind_num_asc";

    # Each field in the input record may be referenced by its position: $1, $2, and so on.
    # $1 is the first field, or first column.
    # The first field in the first line is the word "gene_id";
    # assign it to the header variable.
    header = $1;
}

# FNR - the input record number in the current input file.
# NR is the total lines counter, FNR is the current file lines counter.
# FNR == 1 - if it is the first line of the current file.
FNR == 1 {
    # Remove all unneeded parts from the filename with the "gensub" function:
    # was     - results/TB1.genes.results
    # becomes - TB1
    file = gensub(/.*\/([^.]*)\..*/, "\\1", "g", FILENAME);

    # and add it to the header variable, concatenating it with the
    # previous content of the header, using OFS as delimiter.
    # OFS - the output field separator, a space by default.
    header = header OFS file;
}

# A small trick is used here.
# $1 - the first column value - "gene_id"
# $5 - the fifth column value - "expected_count"
FNR > 1 {
    # Create an array indexed by "gene_id": arr["ENSG00000000003"], arr["ENSG00000000419"], and so on,
    # and append the "expected_count" values to it, separated by OFS.
    # Each time $1 equals a specific "gene_id", the $5 value is
    # appended to this array item.
    # Example:
    # arr["ENSG00000000003"] = 1.00
    # arr["ENSG00000000003"] = 1.00 2.00
    # arr["ENSG00000000003"] = 1.00 2.00 3.00
    arr[$1] = arr[$1] OFS $5;
}

END {
    print header;
    for(i in arr) {
        print i arr[i];
    }
}' results/*.genes.results
I have many files like following in a directory "results" 58052 results/TB1.genes.results 198003 results/TB1.isoforms.results 58052 results/TB2.genes.results 198003 results/TB2.isoforms.results 58052 results/TB3.genes.results 198003 results/TB3.isoforms.results 58052 results/TB4.genes.results 198003 results/TB4.isoforms.resultsFor eg: TB1.genes.results file looks like following: gene_id transcript_id(s) length effective_length expected_count TPM FPKM ENSG00000000003 ENST00000373020,ENST00000494424,ENST00000496771,ENST00000612152,ENST00000614008 2206.00 1997.20 1.00 0.00 0.01 ENSG00000000005 ENST00000373031,ENST00000485971 940.50 731.73 0.00 0.00 0.00 ENSG00000000419 ENST00000371582,ENST00000371584,ENST00000371588,ENST00000413082,ENST00000466152,ENST00000494752 977.15 768.35 1865.00 14.27 37.82 ENSG00000000457 ENST00000367770,ENST00000367771,ENST00000367772,ENST00000423670,ENST00000470238 3779.11 3570.31 1521.00 2.50 6.64 ENSG00000000460 ENST00000286031,ENST00000359326,ENST00000413811,ENST00000459772,ENST00000466580,ENST00000472795,ENST00000481744,ENST00000496973,ENST00000498289 1936.74 1727.94 1860.00 6.33 16.77 ENSG00000000938 ENST00000374003,ENST00000374004,ENST00000374005,ENST00000399173,ENST00000457296,ENST00000468038,ENST00000475472 2020.10 1811.30 6846.00 22.22 58.90 ENSG00000000971 ENST00000359637,ENST00000367429,ENST00000466229,ENST00000470918,ENST00000496761,ENST00000630130 2587.83 2379.04 0.00 0.00 0.00 ENSG00000001036 ENST00000002165,ENST00000367585,ENST00000451668 1912.64 1703.85 1358.00 4.69 12.42 ENSG00000001084 ENST00000229416,ENST00000504353,ENST00000504525,ENST00000505197,ENST00000505294,ENST00000509541,ENST00000510837,ENST00000513939,ENST00000514004,ENST00000514373,ENST00000514933,ENST00000515580,ENST00000616923 2333.50 2124.73 1178.00 3.26 8.64Other files also has the same columns. To join all "genes.results" with "gene_id" and "expected_count" columns into one text file I gave the following command. 
paste results/*.genes.results | tail -n+2 | cut -f1,5,12,19,26 > final.genes.rsem.txt[-f1 (gene_id), 5 (expected_count column from TB1.genes.results), 12 (expected_count column from TB2.genes.results), 19 (expected_count column from TB3.genes.results), 26 (expected_count column from TB4.genes.results)]"final.genes.rsem.txt" has, selected gene_id and expected_count columns from every file. ENSG00000000003 1.00 0.00 3.00 2.00 ENSG00000000005 0.00 0.00 0.00 0.00 ENSG00000000419 1865.00 1951.00 5909.00 8163.00 ENSG00000000457 1521.00 1488.00 849.00 1400.00 ENSG00000000460 1860.00 1616.00 2577.00 2715.00 ENSG00000000938 6846.00 5298.00 1.00 2.00 ENSG00000000971 0.00 0.00 6159.00 7069.00 ENSG00000001036 1358.00 1186.00 6196.00 7009.00 ENSG00000001084 1178.00 1186.00 631.00 1293.00My question is - As I have only few samples I gave the column number in the command [like this in "cut" -f1,5,12,19,26]. What I should do if I have more than 100 samples. How can I join them with required columns?
How to join files with required columns in linux?
{ paste -d, /dev/null "${in}"/folders_names.txt | tr -d \\n | cut -c2-; \
  sed 's|.*|'"${in}"'/&/file.txt|' "${in}"/folders_names.txt \
  | tr \\n \\0 | xargs -0 paste -d,; } > all_files.csv

The first command

paste -d, /dev/null "${in}"/folders_names.txt | tr -d \\n | cut -c2-

prints the header; e.g. if your "${in}"/folders_names.txt is:

w
x
y
z

it prints w,x,y,z. The sed command processes the same file so that each line becomes a path, e.g. if in=a/b/c:

a/b/c/w/file.txt
a/b/c/x/file.txt
a/b/c/y/file.txt
a/b/c/z/file.txt

and the result is transformed into a null-separated input fed to paste via xargs -0, so the final output is e.g.:

w,x,y,z
5,4,5,7
8,2,1,5
6,1,1,1
1,3,5,9
3,1,8,9

If no line in your folders_names.txt contained blanks (i.e. sane file names) you could just run:

{ paste -d, /dev/null "${in}"/folders_names.txt | tr -d \\n | cut -c2-; \
  paste -d, $(sed 's|.*|'"${in}"'/&/file.txt|' "${in}"/folders_names.txt); } > all_files.csv

as the second command would expand to

paste -d, a/b/c/w/file.txt a/b/c/x/file.txt a/b/c/y/file.txt a/b/c/z/file.txt
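A runnable sketch of the simpler (blank-free file names) variant, building a hypothetical layout where $in/folders_names.txt lists w x y z and each $in/<name>/file.txt holds one column of numbers:

```shell
# First paste builds the comma-joined header from the folder names;
# the second pastes the data columns in the same order.
cd "$(mktemp -d)"; in=$PWD
printf '%s\n' w x y z > "$in/folders_names.txt"
i=1
for d in w x y z; do
  mkdir "$in/$d"
  printf '%s\n' "$i" "$((i + 1))" > "$in/$d/file.txt"
  i=$((i + 2))
done
{ paste -d, /dev/null "$in"/folders_names.txt | tr -d '\n' | cut -c2-
  paste -d, $(sed "s|.*|$in/&/file.txt|" "$in"/folders_names.txt)
} > all_files.csv
cat all_files.csv
```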
I have a number of text files that have the same name. Each file is saved in a different folder also each file contains one column of numbers as follow: FILE.TXT FILE.TXT FILE.TXT FILE.TXT ....5 4 5 7 8 2 1 5 6 1 1 1 1 3 5 9 3 1 8 9 . . . . . . . . . . . . I want to merge the files in one spreadsheet (CSV format) and I want the names of the columns to be the same as the names of the folders that contain that file. I tried for loop as follow: #!/bin/bash in=a/b/c for i in $(cat $in/folders_names.txt); do # i is the folder name that contain the file.txt paste ${in}/${i}/file.txt done > all_files.txt sed 's/ */,/g' all_files.txt >all_files.csv &This code is pasting all the columns ( from all the files ) in one column ( in the file all_files.txt). I don't know what I am doing wrong. Any suggestions?
Paste single-column files with same name using directory name as column name
Because the lines should be ordered by the timestamps at the start of each line, and those fixed-width timestamps sort correctly as plain text, this is the most basic use of sort:

sort file1 file2 > outputfile
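To see it on a cut-down version of the sample data:

```shell
# The HH:MM:SS,mmm prefixes are fixed-width, so plain lexical sort
# interleaves the merged lines chronologically.
cd "$(mktemp -d)"
printf '%s\n' '01:12:00,001 Some text' '01:14:00,003 Some text' > file1
printf '%s\n' '01:12:01,029 Some text' '01:13:21,123 Some text' > file2
sort file1 file2
```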
I have 2 files File 1: 01:12:00,001 Some text01:14:00,003 Some text02:12:01,394 Some textFile 2: 01:12:00,001 Some text01:12:01,029 Some text01:13:21,123 Some textI need output as follows: 01:12:00,001 Some text01:12:00,001 Some text01:12:01,029 Some text01:13:21,123 Some text01:14:00,003 Some text02:12:01,394 Some text How can I achieve this?
How to merge multiple files based on a timestamp
I tried

paste -d ' ' file1 file2 file3 > file4

and it worked fine. Tested on macOS.
These are 3 files: file1 file2 file3 1 2 3 1 1 1 3 3 3 2 1 3 1 1 1 3 3 3 0 0 0 1 1 1 3 3 3I want to join them together and have a final file like: 1 2 3 1 1 1 3 3 3 2 1 3 1 1 1 3 3 3 0 0 0 1 1 1 3 3 3But when I use : paste file1 file2 file3 > file4I see gap in output(file4): 1 2 3 1 1 1 3 3 3 2 1 3 1 1 1 3 3 3 0 0 0 1 1 1 3 3 3What should I do to not see these gaps?
how to paste several file together using a single space as a delimiter [duplicate]
Given

$ cat INPUT
Tom
Nathan
Jack
Polo

then

$ pr -s -T -2 < INPUT
Tom	Jack
Nathan	Polo

(paginate with single tab spacing between columns, no headers, two columns); or

$ paste -d ' ' - - < INPUT | rs -T
Tom Jack
Nathan Polo

(paste then transpose)
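The pr variant as a one-liner (assuming GNU coreutils pr; rs is not always installed):

```shell
# -T drops headers/pagination, -2 fills two columns down-then-across,
# and -s separates the columns with a single tab instead of padding.
printf '%s\n' Tom Nathan Jack Polo | pr -s -T -2
```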
I have a 4 line input file and i need to modify the file to combine alternate lines. I want to perform the operation in place. INPUT: Tom Nathan Jack PoloDesired Output: Tom Jack Nathan PoloOne way is to collect odd numbered lines and flip them and cut even numbered lines and combine both files to get the final output. But i am looking for a simpler solution.
How to combine alternate lines in a file?
Your file contains CR+LF line endings. (You can say cat -vet inputfile to figure that out; carriage returns would show up as ^M in the output.) The following demonstrates the effect of line endings on the output:

$ cat test.txt
The quick
brown fox
jumped over
the lazy
dog.
$ paste -s test.txt
The quick brown fox jumped over the lazy dog.
$ unix2dos test.txt
$ paste -s test.txt
 dog.lazy er
I have a testA.txt with contents as shown below [jiewmeng@JM textFiles]$ cat testA.txt The quick brown fox jumped over the lazy dog.paste normally works [jiewmeng@JM textFiles]$ paste testA.txt The quick brown fox jumped over the lazy dog.But what happened when I used serial? [jiewmeng@JM textFiles]$ paste -s testA.txt The quicdog.lazy er[jiewmeng@JM textFiles]$ paste -s -d- testA.txt -dog.lazy erI was expecting output similar to [jiewmeng@JM tmp]$ echo -en "The quick\nbrown fox\njumped over\nthe lazy\ndog" | paste -s - The quick brown fox jumped over the lazy dogOpening the file in a test editor seems to work fine, just like cat, or paste did
Weird output using paste with serial option
You're looking for the pr command:

pr -2t -s" " file
I am writing a script which returns 15 lines of titles and 15 lines of URLs. title 1 title 2 title 3 *snip* title 14 title 15 http://example.com/query?1 http://example.com/query?2 http://example.com/query?3 *snip* http://example.com/query?14 http://example.com/query?15I'd like to merge it in such a way that produces the following output: title 1 http://example.com/query?1 title 2 http://example.com/query?2 title 3 http://example.com/query?3 *snip* title 15 http://example.com/query?15Upon light inspection and the guidance of this answer, I found the command paste. However, paste does not allow for the performance of more complex behaviors like the ones described above. Is there another tool or combination of tools I can use in order to accomplish the aforementioned behavior? Do note that I'm looking to use all standard coreutils behavior, if at all possible.
merge x and x + n lines
I can think of two ways to approach this:implement your own 'paste' that skips the first three fields of all but the first file - for example awk -F\; ' FNR==NR { a[FNR]=$0; next; } { for (i=4;i<=NF;i++) a[FNR] = sprintf("%s;%s", a[FNR], $i); } END { for (n=1;n<=FNR;n++) print a[n]; }' file*.csvpaste the files together, then retain fields based on an indicator derived from the header row paste -d\; file*.csv | perl -MList::MoreUtils=indexes -F\; -alne ' @keep = indexes { $_ !~ /YEAR|MONTH|DAY/ } @F if $. == 1; print join ";", @F[0..2,@keep]'(if you don't have the List::MoreUtils module, you should be able to implement the same functionality using perl's grep).
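A quick sanity check of the first approach on two tiny files (contents invented for the demo):

```shell
printf 'Y;M;D;A1\n1901;01;01;101\n' > f1.csv
printf 'Y;M;D;B1\n1901;01;01;185\n' > f2.csv
# Keep the first file whole, append only fields 4..NF of the second
awk -F\; '
  FNR==NR { a[FNR]=$0; next; }
  { for (i=4;i<=NF;i++) a[FNR] = sprintf("%s;%s", a[FNR], $i); }
  END { for (n=1;n<=FNR;n++) print a[n]; }' f1.csv f2.csv
# -> Y;M;D;A1;B1
#    1901;01;01;101;185
```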
I have a bunch of input csv files (delimited with the semi-colon ";") having the following format YEAR;MONTH;DAY;RES1FILE1;RES2FILE1;RES3FILE1 1901;01;01;101;154;169 1901;01;02;146;174;136The number of columns for each file is variable, meaning that some files could have 6 columns and others 4. I would like to paste all the files into one big csv file (with ";" as the delimiter). My problem is that, in order to avoid redundancy, I would like to avoid pasting the first three columns each time, since for every file they are the same (YEAR;MONTH;DAY). Therefore the output should look like this: YEAR;MONTH;DAY;RES1FILE1;RES2FILE1;RES3FILE1;RES1FILE2;RES2FILE2 1901;01;01;101;154;169;185;165 1901;01;02;146;174;136;129;176I am currently using the following command: arr=( *_rcp8p5.csv ) paste "${arr[@]}" | cut -f-4,$(seq -s, 8 4 $((4*${#arr[@]}))) >out_rcp8p5.txtBut it is not working at all
Paste different csv files
Your input files have DOS \r\n line endings. Remove the carriage returns with the dos2unix command or with sed -i 's/\r$//'
I am attempting to merge three files using 'paste' and 'awk'. However, the columns are not adjusting to the longest string of characters. All files are formatted in the same manner as below. F gge0001x D 12-30-2006 T 14:15:20 S a69 B 15.8 M gge06001 P 30.1Below is my faulty code. $ paste <(awk '{print $1}' lineid) <(awk '{printf("%-13.10s\n", $1)}' gge0001x) <(awk '{printf("%-13.10s\n", $1)}' gge0001y) <(awk '{printf("%-13.10s\n", $1)}' gge0001z)This code results in misaligned columns as pictured below.Input File 1 F D T S B M P Q R U X A G H O C K W L Input File 2 gge0006x 12-30-2006 14:05:23 a69 15.4 gge06001 30.8 19.2 1006.2 1012.7 36.2 38.994 107.71 8.411 37.084 7.537 28.198 212.52 68.1Input File 3 gge0006y 12-30-2006 14:05:55 a69 15.3 gge06001 30.6 21.1 1006.6 1014.6 36.1 38.994 107.71 8.433 36.705 7.621 27.623 210.51 68 Input File 4 gge0006z 12-30-2006 14:06:28 a69 15.7 gge06001 30.3 23.5 1008 1014.1 36.6 38.994 107.71 8.434 36.508 7.546 27.574 208.08 67.6 Results for paste file1 file2 file3 file4 | column -t
Format Column Width with Printf
With bash 4 mapfile -t <list paste "${MAPFILE[@]}" | column -s $'\t' -tfor the paste {list}/PQR/A/sum version of the question mapfile -t <list paste "${MAPFILE[@]/%//PQR/A/sum}" | column -s $'\t' -t
To paste many files whose names are incremental numbers: paste {1..8}| column -s $'\t' -tWhat if your files weren't named with numbers, but with words? There can be up to ten files; what should I do?In addition, you have a list of files that contains all the files you want. So far, my approach is: mkdir paste j=0; while read i; do let j+=1; cp $i/ paste/$j; done<list; cd paste; paste {1..8}| column -s $'\t' -tI have no problem with this approach, I just want to ask if there is a shorter one.Actually my files have the same name, just in different locations, for instance 1MUI/PQR/A/sum, 2QHK/PQR/A/sum, 2RKF/PQR/A/sum. The paste command should be paste {list}/PQR/A/sum. The list file is: 1MUI 2QHK 2RKF ...
How to use paste command for many files whose names are not numbers? (paste columns from each file to one file)
A KISS solution based on what you have so far: paste -d, file1.csv file2.csv | awk -F, '{print "TEXT1-" $1 "-TEXT2-" $2 "-TEXT3"}'or paste -d, file1.csv file2.csv | awk -F, '{print "TEXT1", $1, "TEXT2", $2, "TEXT3"}' OFS=-(which may be more convenient if you want to make the TEXTs variable).
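For instance, verifying the first variant on a two-line sample (file contents invented for the test):

```shell
printf 'dog\ncats\n' > file1.csv
printf '001a\n002a\n' > file2.csv
paste -d, file1.csv file2.csv |
  awk -F, '{print "TEXT1-" $1 "-TEXT2-" $2 "-TEXT3"}'
# -> TEXT1-dog-TEXT2-001a-TEXT3
#    TEXT1-cats-TEXT2-002a-TEXT3
```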
I have two (or maybe more) files: file1.csv dog cats mousefile2.csv 001a 002a 003cIf I use paste file1.csv file2.csv the output is dog 001a cats 002a mouse 003cOf course I can use paste -d , file1.csv file2.csv dog,001a cats,002a mouse,003cBut I want this output TEXT1-dog-TEXT2-001a-TEXT3 TEXT1-cats-TEXT2-002a-TEXT3 TEXT1-mouse-TEXT2-003c-TEXT3Is there a way to put multiple .csv files together with extra text before, between and after each line?
Combine .csv-files with text between each line
Most likely, your original files have \r\n line endings. If so, the final file would have an extra \r between each line segment. Try using tr: paste -d "," *csv | tr -d "\r" > output.csv
I have a few csv-s, each containing 3 columns separated by ",". Example: header1,header2,header3 value1,value2,value3 value1,value2,value3 ...Using this tutorial, I thought if I execute paste -d "," *csv > output.csv I will end up with something like this: header1,header2,header3,header1,header2,header3,... value1,value2,value3,value1,value2,value3,... value1,value2,value3,value1,value2,value3,...but instead the output looks like this: header1,header2,header3, header1,header2,header3, header1,header2,header3, ... value1,value2,value3, value1,value2,value3, ...especially each line is 3 columns wide, instead of the number of csv files * 3 wide. What am I doing wrong?
paste command puts data from csv files vertically line by line instead of horizontally next to each other
GNU Parallel parallel echo "{1},{2}" :::: <(cut -d' ' -f1 file) :::: <(cut -d' ' -f2 file) | awk -F, '{ print $1,$2,$1"/"$2,$1/$2 }' OFS=, OFMT='%.2g'Output: 1,2,1/2,0.5 1,5,1/5,0.2 1,8,1/8,0.12 1,18,1/18,0.056 1,5,1/5,0.2 1,19,1/19,0.053 3,2,3/2,1.5 3,5,3/5,0.6 3,8,3/8,0.38 3,18,3/18,0.17 3,5,3/5,0.6 3,19,3/19,0.16 4,2,4/2,2 4,5,4/5,0.8 4,8,4/8,0.5 4,18,4/18,0.22 4,5,4/5,0.8 4,19,4/19,0.21 9,2,9/2,4.5 9,5,9/5,1.8 9,8,9/8,1.1 9,18,9/18,0.5 9,5,9/5,1.8 9,19,9/19,0.47 3,2,3/2,1.5 3,5,3/5,0.6 3,8,3/8,0.38 3,18,3/18,0.17 3,5,3/5,0.6 3,19,3/19,0.16 4,2,4/2,2 4,5,4/5,0.8 4,8,4/8,0.5 4,18,4/18,0.22 4,5,4/5,0.8 4,19,4/19,0.21
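The same cross join can also be done in a single awk pass without GNU Parallel (a sketch, shown here on a toy two-row input):

```shell
cd "$(mktemp -d)"                 # work in a scratch directory
printf '1 2\n3 5\n' > file       # toy two-column input
awk '{ a[NR]=$1; b[NR]=$2 }       # remember both columns
     END { for (i=1;i<=NR;i++)
             for (j=1;j<=NR;j++)
               printf "%s,%s,%s/%s,%.2g\n", a[i], b[j], a[i], b[j], a[i]/b[j] }' file
```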
I have two columns in one file: 1 2 3 5 4 8 9 18 3 5 4 19I want to divide each element of the first column by each element of the second column, and I want to print the quotient too. For example: 1,2,1/2, 1,5,1/5, 1,8,1/8, 1,18,1/18, 1,19,1/19, 3,5,3/5, 4,19,4/19, 3,2,3/2, 3,5,3/8, 3,19,3/19 and so on... How can I proceed?
How to divide each element of one column by each element of another column?
paste + awk solution: paste HI.* | awk '{ for(i=2; i<=NF; i+=2) printf "%s%s", $i, (i==NF? ORS : "\t") }' > result
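A quick check on two tiny files (names and contents invented):

```shell
cd "$(mktemp -d)"                 # scratch directory so HI.* only matches our files
printf 'x1 C\nx2 A\n' > HI.1
printf 'y1 C\ny2 T\n' > HI.2
# Keep every second field (the letter columns), tab-separated
paste HI.* | awk '{ for(i=2; i<=NF; i+=2) printf "%s%s", $i, (i==NF? ORS : "\t") }'
```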
I have nearly 400 files, each looks like this: head HI.1.Q091_13R_all_PA_code Ha8_00040788 C Ha4_00024045 C Ha4_00025366 C Ha16_00022130 C Ha16_00023451 C Ha8_00040789 C Ha4_00025367 C Ha4_00024046 A Ha16_00022131 C Ha16_00023452 CI want to copy and paste only the "second" column of each file and save it as a tab-delimited file head desired_output C C C A C C C C C C C A C A A A C A C C
how to copy and paste only a specific column in each file?
It was indeed the windows line-endings that caused this behavior. After running sed $'s/\r//' -i file1To replace them, paste worked as expected. Thanks to steeldriver for pointing me into the right direction. Another solution is using dos2unix file1
I am experiencing some strange behavior of the paste tool. For some reason it appears to not do it's job on two specific files, but I cannot reproduce that behavior with other files. The first file: $ cat file1 20.623 40.276 -1.999 -1031 127 141 154 20.362 40.375 -2.239 -941 130 141 159 20.36 40.376 -2.402 -1083 139 151 165 20.374 40.367 -2.405 -1122 131 147 163 20.372 40.366 -2.405 -1165 132 145 161 20.375 40.364 -2.404 -1036 133 149 165 20.358 40.371 -2.405 -1137 139 151 165 20.359 40.374 -2.404 -1086 139 151 165 20.354 40.375 -2.404 -1106 139 148 163 20.356 40.374 -2.404 -1059 139 151 165The second file: $ cat file2 -1 -1 2 -1 -1 -1 -1 2 2 2Now paste does what I expect it to do after the following call: $ paste file2 file1 -1 20.623 40.276 -1.999 -1031 127 141 154 -1 20.362 40.375 -2.239 -941 130 141 159 2 20.36 40.376 -2.402 -1083 139 151 165 -1 20.374 40.367 -2.405 -1122 131 147 163 -1 20.372 40.366 -2.405 -1165 132 145 161 -1 20.375 40.364 -2.404 -1036 133 149 165 -1 20.358 40.371 -2.405 -1137 139 151 165 2 20.359 40.374 -2.404 -1086 139 151 165 2 20.354 40.375 -2.404 -1106 139 148 163 2 20.356 40.374 -2.404 -1059 139 151 165However, when switching the arguments, the produced lines are created by somehow merging the lines instead of concatenating them: $ paste file1 file2 20.623 4-1276 -1.999 -1031 127 141 154 20.362 4-1375 -2.239 -941 130 141 159 20.36 402376 -2.402 -1083 139 151 165 20.374 4-1367 -2.405 -1122 131 147 163 20.372 4-1366 -2.405 -1165 132 145 161 20.375 4-1364 -2.404 -1036 133 149 165 20.358 4-1371 -2.405 -1137 139 151 165 20.359 42.374 -2.404 -1086 139 151 165 20.354 42.375 -2.404 -1106 139 148 163 20.356 42.374 -2.404 -1059 139 151 165Note that the second numbers are messed up. I find it even stranger that paste does what I would expect in the following: $ cat test1 5 5 5 5 6 6 6 6 $ cat test2 -2 -7 $ paste test2 test1 -2 5 5 5 5 -7 6 6 6 6 $ paste test1 test2 5 5 5 5 -2 6 6 6 6 -7The man page couldn't help me save my problems. 
Any explanation of this behavior, and help with the task I'm trying to achieve, would be appreciated.
Strange behavior of paste [closed]
The paste command is a good choice if we just need to merge lines of files. To prepend a header line with the filenames, use a combination of awk and paste: { for f in file*; do awk '{ for(i=1;i<=NF;i++) printf("%s\t",FILENAME); exit }' "$f"; done; echo ""; paste -d"\t" file*; } | column -t

The output (for 3 input files): file1 file1 file2 file2 file3 file3 a 1 a 10 a 0 b 2 b 20 b 0 c 3 c 40 c 0

Details: { command; command; ...} - used to combine the outputs of multiple commands; for f in file*; - loop over each file; printf("%s\t",FILENAME) - print the filename once for each column of the respective file; exit - exit immediately after processing the 1st line
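The same idea, checked on throwaway two-column files (pipe the result through column -t if you want the columns aligned):

```shell
cd "$(mktemp -d)"                 # scratch directory so file* only matches our files
printf 'a\t1\nb\t2\n' > file1
printf 'a\t10\nb\t20\n' > file2
# Header: each filename once per column, then the pasted data
{ for f in file*; do printf '%s\t%s\t' "$f" "$f"; done; echo ""; paste file*; }
```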
How can I add the output of each file incrementally into one single output? I want to do this instead of running the paste command on all files together, because I have 10k files and each file is 100 GB in size. file1 a 1 b 2 c 3file2 a 10 b 20 c 40file3 a 0 b 0 c 0Desired output file1 file1 file2 file2 file3 file3 a 1 a 10 a 0 b 2 b 20 b 0 c 3 c 40 c 0I know I can get something similar to the desired output using paste -d "\t" file{1..3}, but I want to perform the operation one file after another, not all together, and importantly I want to keep the file names.
how to add output as a new column with the file names
try this command nawk '{if ((getline a < "-") > 0) $0 = $0 "," a; print}' file1.csv < file2.csv > file3.csvThis command reads file1.csv and file2.csv line by line, saving the line from file1.csv in $0 (for nawk, $0 matches the whole line, $1 the first column, $2 the second...) and the line from file2.csv in the variable a. After that it prints $0 (the line from file1), then ",", then a (the line from file2) into file3.csv
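The same line-pairing trick, verified on a two-line sample (file names invented; this uses /dev/stdin in place of "-", since not every awk accepts "-" in a getline redirection):

```shell
printf 'a\nb\n' > left.csv
printf '1\n2\n' > right.csv
# Main input is left.csv; each line pulls the matching line from stdin (right.csv)
awk '{ if ((getline a < "/dev/stdin") > 0) $0 = $0 "," a; print }' left.csv < right.csv
# -> a,1
#    b,2
```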
I am using a paste command to concatenate two .csv files column wise. These both files are huge file and when I run the paste command as below where comma(,) is the delimiter: paste -d',' file1.csv file2.csv > file3.csvThe command fails giving output paste: line too longHowever, I searched the same over internet and in paste command's manual also. I found the below diagnostics. "line too long" Output lines are restricted to 511 characters.Is there any alternate way to obtain the result, then? I am using below version of bash: GNU bash, version 3.2.57(1)-release (sparc-sun-solaris2.10)
Why paste command doesn't work for Concatenating two files column wise when the characters are more than 511?
You can get it from /dev/vcs1 (for the first virtual console (tty1)). cat /dev/vcs1But chances are those lines are also in a log file. (check /var/log/messages, /var/log/kernel.log, /var/log/syslog for a start). You may also want to check the stdout/stderr of chrome which if you started it with your Windows manager, may be going to some file like ~/.xsession-errors.
Linux has virtual terminals; one can switch between them with the chvt 1 and chvt 7 commands. The former is in text mode, the latter in graphics mode. I want to copy all the text from the first terminal, with some utility, from graphics mode. fbgrab -c 1 ~/image.png saves an image, but in my case it is a transparent rectangle. There is no ebuild for fbdump, so I can't check it. And actually I want a paste (as with wgetpaste -s gists), not a picture. The OS is Sabayon Linux with the MATE DE on an AMD GPU. I need this because my chrome browser starts to execute some js and hangs the system (the mouse cursor freezes), then there is a message in the virtual terminal about a crash in a chrome plugin. I want to paste that message, but don't know how to do this.
How to copy text of virtual terminal from graphics mode?
You could use set and parameter expansion on each array element to print just the directory name: set -- */text.txt { printf ' %s' "${@%/*}" | cut -c2-; paste -- "$@"; } # this blank ^ is a literal tab
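A tiny reproduction with made-up directory names (using \t instead of a literal tab so the trick is visible):

```shell
cd "$(mktemp -d)"                 # scratch directory
mkdir d1 d2
printf '1\n2\n' > d1/text.txt
printf '3\n4\n' > d2/text.txt
set -- */text.txt
# Header: directory part of each path, tab-separated; cut drops the leading tab
{ printf '\t%s' "${@%/*}" | cut -c2-; paste -- "$@"; }
```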
In a parent folder I have multiple folders inside it. Within each folder I have text file "text.txt". The text files are similar in all the folders, each text file contain 100 line and one column of numbers. example cat /folder1/text1.txt1654 1684 535 35131 . .I want to merge all these text files as columns in one file using the command paste. In the parent folder I ran the command paste ./*/text*.text > all_text.txt # the content for all_text.txt is as follow:cat all_text.txt 1654 354531 .... 1684 224 535 2424 35131 24 . . .How can I add the folders names as a header for each pasted column to get the following output cat all_text.txt folder#1 folder #2 ..... 1654 354531 1684 224 535 2424 35131 24 . . .
Paste text files and add parent directory name as header for each column
In vim there are various ways to copy/paste. (In the enumeration below I'm just showing the three variants that first mark the text and then copy/paste it.)Select a range of whole lines: type V, move to the end of the range, type y. Then go to the target position and use p to put the text below the current line, or use P to put it above. Select a range of text with character accuracy: type v, move to the end of the area (which may be on another line), type y. Then go to the target position and use p to put the copied text after the current character, or P to put it before. (Existing line breaks are preserved.) Select a block of text: type Ctrl-V, move to the end of the rectangular block, type y. If this block is pasted inside existing text, that existing text will be shifted to the right to make room for the new text.Your copy/paste method seems to have been 3. Use one of the other methods, whichever fits your needs best.
I'm learning my way around the vim and got around to copy-paste. Or yank-paste. Now when I try to paste a yanked piece of text, it pushes previous content to the right and squeezes in on the left. I took 2 screenshots to show you what behaviour I mean. And before pasting:after pasting:I don't think that my .vimrc affects pasting behaviour: execute pathogen#infect() set number set tabstop=3 "tabs are 3 spaces big (smaller than default) set shiftwidth=3 "3 spaces are also used with auto indent set smartindent "You'll keep the indentation on the next line when you press enter."auto complete brackets inoremap { {}<Esc>i inoremap [ []<Esc>i inoremap ( ()<Esc>ilet g:solarized_termcolors=256 syntax on set t_Co=256 colorscheme solarized set background=light"remap CTRL-c in visual mode for copy to clipboard vnoremap <C-c> "+y"Let vim change the working directory automatically (so you can open files from the current path) set autochdir"Open new split panes to right and bottom, which feels more natural than Vim’s default: set splitbelow set splitright"map a key to maximize current screen map <F5> <C-W>_<C-W><Bar> map <F6> <C-W>=
Pasting with vim squeezes content in between previous content and margin
A job for paste: paste -d, f2.txt f1.txt-d, sets the delimiter to , (instead of tab)With awk: awk 'BEGIN {FS=OFS=","} NR==FNR {a[NR]=$0; next} {print a[FNR], $0}' f2.txt f1.txt BEGIN {FS=OFS=","} sets the input and output field separators to , NR==FNR {a[NR]=$0; next}: for the first file (f2.txt), we save each record in an associative array (a) keyed by the record number {print a[FNR], $0}: for the second file, we print each record with the value of the corresponding record-numbered key from a prepended.

Example: % cat f1.txt Heading1,Heading2 value1,value2% cat f2.txt Row1 Row2% paste -d, f2.txt f1.txt Row1,Heading1,Heading2 Row2,value1,value2% awk 'BEGIN {FS=OFS=","} NR==FNR {a[NR]=$0; next} {print a[FNR], $0}' f2.txt f1.txt Row1,Heading1,Heading2 Row2,value1,value2
I have a file that looks like this: Heading1,Heading2 value1,value2And another one that looks like this: Row1 Row2How can I combine the two to become: Row1,Heading1,Heading2 Row2,value1,value2Effectively appending a column in the place of the first column?
Append first column to file
Using :set paste prevents vim from re-tabbing my code and fixes the problem. Also, :set nopaste turns it off. I also put set pastetoggle=<F2> in my .vimrc so I can toggle it with the F2 key.
I can copy characters in other apps such as browsers with Ctrl+C. I can then press i to enter insert mode in vim and press Shift+Ctrl+V to paste the text in. The problem is that each line gets indented a bit more, so I end up with:but what I want (and end up manually editing to achieve) is:
vi / vim - extra indents when pasting text? [duplicate]
One way: paste name.file lastname.file id.file | awk -F '\t' '{printf "The ID of the %s %s is %d\n", $1,$2,$3}'Using awk to get the formatting needed.
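Checked on a two-row sample (contents invented):

```shell
cd "$(mktemp -d)"                 # scratch directory
printf '1\n2\n' > id.file
printf 'Josh\nKate\n' > name.file
printf 'Smith\nJones\n' > lastname.file
paste name.file lastname.file id.file |
  awk -F '\t' '{printf "The ID of the %s %s is %d\n", $1,$2,$3}'
# -> The ID of the Josh Smith is 1
#    The ID of the Kate Jones is 2
```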
I have 3 files, each containing a column of information id.file 1 2 3name.file Josh Kate Chrislastname.file Smith Jones BlackAnd I would like to combine them in a way that gives me something like this: The ID of the Josh Smith is 1 The ID of the Kate Jones is 2 The ID of the Chris Black is 3So far I have tried to combine them using paste paste -d ',' id.file name.file lastname.file which works well, but I want to add words at the beginning and between the values as well.
Combining multiple columns and inserting information in the middle
A tainted kernel is one that is in an unsupported state because it cannot be guaranteed to function correctly. Most kernel developers will ignore bug reports involving tainted kernels, and community members may ask that you correct the tainting condition before they can proceed with diagnosing problems related to the kernel. In addition, some debugging functionality and API calls may be disabled when the kernel is tainted. The taint state is indicated by a series of flags which represent the various reasons a kernel cannot be trusted to work properly. The most common reason for the kernel to become tainted is loading a proprietary graphics driver from NVIDIA or AMD, in which case it is generally safe to ignore the condition. However, some scenarios that cause the kernel to become tainted may be indicative of more serious problems such as failing hardware. It is a good idea to examine system logs and the specific taint flags set to determine the underlying cause of the issue. This feature is intended to identify conditions which may make it difficult to properly troubleshoot a kernel problem. For example, a proprietary driver can cause problems that cannot be debugged reliably because its source code is not available and its effects cannot be determined. Likewise, if a serious kernel or hardware error had previously occurred, the integrity of the kernel space may have been compromised, meaning that any subsequent debug messages generated by the kernel may not be reliable. Note that correcting the tainting condition alone does not remove the taint state because doing so does not change the fact that the kernel can no longer be relied on to work correctly or produce accurate debugging information. The system must be restarted to clear the taint flags. More information is available in the Linux kernel documentation, including what each taint flag means and how to troubleshoot a tainted kernel prior to reporting bugs. 
A partial list of conditions that can result in the kernel being tainted follows, each with their own flags. Note that some Linux vendors, such as SUSE, add additional taint flags to indicate conditions such as loading a module that is supported by a third party rather than directly by the vendor.Loading a proprietary (or non-GPL-compatible) kernel module. As noted above, this is the most common reason for the kernel to become tainted. The use of staging drivers, which are part of the kernel source code but are experimental and not fully tested. The use of out-of-tree modules that are not included with the Linux kernel source code. Forcibly loading or unloading modules. This can happen if one is trying to use a module that is not built for the current version of the kernel. (The Linux kernel module ABI is not stable across versions, or even differently-configured builds of the same version.) Running a kernel on certain hardware configurations that are specifically not supported, such as an SMP (multiprocessor) kernel on early AMD Athlon processors not supporting SMP operation. Overriding the ACPI DSDT in the kernel. This is sometimes needed to correct for firmware power-management bugs; see this Arch Linux wiki article for details. Certain critical error conditions, such as machine check exceptions and kernel oopses. Certain serious bugs in the BIOS, UEFI, or other system firmware which the kernel must work around.
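The current taint state is exposed as a bitmask in /proc/sys/kernel/tainted. A small sketch that decodes which bits are set (the per-bit meanings are listed in the kernel's tainted-kernels documentation; the helper name decode_taint is made up):

```shell
# decode_taint MASK -> prints the numbers of the taint bits that are set
decode_taint() {
  t=$1; i=0
  while [ "$i" -le 17 ]; do
    [ $(( (t >> i) & 1 )) -eq 1 ] && echo "taint bit $i is set"
    i=$((i + 1))
  done
}
# On a live system: decode_taint "$(cat /proc/sys/kernel/tainted)"
decode_taint 4097    # example mask: bit 0 (proprietary module) + bit 12 (out-of-tree module)
```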
Under certain conditions, the Linux kernel may become tainted. For example, loading a proprietary video driver into the kernel taints the kernel. This condition may be visible in system logs, kernel error messages (oops and panics), and through tools such as lsmod, and remains until the system is rebooted. What does this mean? Does it affect my ability to use the system, and how might it affect my support options?
What is a tainted Linux kernel?
As @Chris Stryczynski says, sudo cat /sys/kernel/debug/dri/N/amdgpu_gpu_recover is the correct way to recover the amdgpu kernel module, or you can start your system with the amdgpu.gpu_recovery=1 kernel parameter to automatically reset it on a crash. But these options are of limited use, because the display server (Xorg or Wayland) has to reinitialize its graphics stack, and desktop environments are not capable of doing that. (Not yet implemented.) Using the gpu_recovery kernel parameter, you can save your work even if it's not visible, and then reboot.
My video card crashes from time to time. It's quite annoying but I live with it -- usually I just restart the graphics with sudo systemctl restart lightdm.service, or if needed reboot the whole system. In this particular instance the systemctl call hangs, and I don't want to reboot since I have a long-running job on the machine. The crash is logged in dmesg as [944520.212254] Call Trace: [944520.212256] [<ffffffff818384d5>] schedule+0x35/0x80 [944520.212257] [<ffffffff8183b625>] schedule_timeout+0x1b5/0x270 [944520.212280] [<ffffffffc0235244>] ? dce_v6_0_program_watermarks+0x514/0x720 [amdgpu] [944520.212282] [<ffffffffc0196d2c>] kcl_fence_default_wait+0x1cc/0x260 [amdkcl] [944520.212287] [<ffffffff815b4f50>] ? fence_free+0x20/0x20Clearly the amdgpu module crashed. I would like to restart it, so I tried sudo modprobe -r amdgpu modprobe: FATAL: Module amdgpu is in use.And when I try to find out who is using amdgpu I get lsmod | grep amdgpu amdgpu 2129920 7 amdttm 102400 1 amdgpu amdkcl 32768 1 amdgpu i2c_algo_bit 16384 1 amdgpu drm_kms_helper 155648 1 amdgpu drm 364544 10 drm_kms_helper,amdgpu,amdkcl,amdttmBasically there is 7 "things" using the module and I have no idea how to find them and remove the amdgpu module. Question: Is there any reasonable way to reload the module, without rebooting the system? Or is there a better way to get my video back?
How to restart a failed amdgpu kernel module
You should use Ubuntu 14 LTS instead of 16 LTS. This information is from https://github.com/fresco-fl2000/fl2000:On which kernel versions does this driver work? This driver is tested on Ubuntu 14 LTS as well as some Android platforms with kernel version 3.10.x. This driver source might not compile on newer kernels (e.g. 4.0 or above) because of the fast-moving API changes in the mainstream kernel. You might need to adapt it for your own use.
I need to connect additional monitors to my computer, and I got Fresco Logic FL2000DX USB display adapters. These adapters work perfectly on Windows, but I need to use them on my development machine based on Ubuntu 16.04. I found this on GitHub: https://github.com/fresco-fl2000/fl2000 and tried to install it, but the installation fails.
How to properly install USB display driver for Fresco Logic FL2000DX on Ubuntu?
In short: no. To go further, a driver is a piece of software that interacts with the kernel of the operating system. When you're working in the kernel world, interoperability doesn't exist; neither does POSIX. Everything is totally OS-specific: the architecture, the sub-systems and the way they have been built and designed, the standard library offered by the kernel to driver writers: there's nothing in common between Linux and Windows. The only ways you can get your oscilloscope working under Linux are: by using a Windows virtual machine and forwarding the USB device to it (possible with VirtualBox or QEMU); or by reverse engineering it while using it with a Windows workstation: analyse the USB exchanges, try to guess the protocol used and the commands passed to achieve each operation... it's a very long and hard job.
I have a PC Oscilloscope Instrustar ISDS205X which I used on Windows 10. Now that I have switched to Linux, I am unable to find the respective drivers for it. I have tried installing it on PlayOnLinux but the software doesn't install and so do its drivers. Is there any method to convert such Windows drivers to run on Linux? (My CPU is i5-4570 and Distro is Debian 10 KDE Plasma)
Installing Proprietary Windows Drivers on Linux
Disclaimer - please read before you try installing anything

Today, I ran into an old laptop with an Nvidia GeForce GT 520M, which is not supported by the latest driver anymore; version 390 works fine though. Therefore, I must strongly recommend running a search on the Nvidia drivers page before you try to install any driver version!

Generic way - the recommended way

If you'd like to have the recommended packages installed too, then you could run this (the version was last updated on 2024-Jun-14): sudo apt-get install --install-recommends nvidia-driver-550

I may not update the version anymore, so I will tell you instead how to find out (manually) that there is a new version. As there are many ways, the most comfortable for me is (as a normal user or root) typing into your terminal: apt-cache policy nvidia-driver-5 and double-tapping Tab; an example output follows: $ apt-cache policy nvidia-driver-5 nvidia-driver-510 nvidia-driver-530-open nvidia-driver-510-server nvidia-driver-535 nvidia-driver-515 nvidia-driver-535-open nvidia-driver-515-open nvidia-driver-535-server nvidia-driver-515-server nvidia-driver-535-server-open nvidia-driver-520 nvidia-driver-545 nvidia-driver-520-open nvidia-driver-545-open nvidia-driver-525 nvidia-driver-550 nvidia-driver-525-open nvidia-driver-550-open nvidia-driver-525-server nvidia-driver-550-server nvidia-driver-530 nvidia-driver-550-server-open

Linux Mint - Driver Manager - please AVOID

It may be possible to even use Mint's GUI Driver Manager for this. Generally, I like the command-line way much more; actually, I never use this GUI, because it does not tell you what is happening: you would just blindly look at the progress bar.
Therefore I strongly recommend not using this tool, and doing the job via the terminal as shown above.

Ubuntu way - NOT RECOMMENDED (!!!)

Thanks to the Ubuntu base, one can also take advantage of ubuntu-drivers, which takes care of everything, but I do not recommend it because one has no control over what happens, and things can break as a side effect, so I note the following only for completeness: sudo ubuntu-drivers autoinstall

To only list the drivers applicable to your system, you can run: sudo ubuntu-drivers list

which will list all drivers available to install on your Ubuntu-based system.

2023 Updates & Notes

How to find out whether a specific set of versions is available: apt-cache policy 'nvidia-driver-5*'

Note the quotes! You need to quote this string.

I myself ran into a black-screen issue when running the install command above. Remember: DO NOT PANIC in this case. Wait for 10-30 minutes, depending on how long the installation usually takes on your machine. The install time depends largely on how powerful your computer is overall. If you usually hear fan noise like me, wait a couple of minutes after the fan stops. If you wait long enough, the Nvidia driver will be installed correctly. In my case, I had to force a shutdown after the wait by pressing the power button for several seconds. It booted correctly, which you can verify with the nvidia-smi command: +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.67 Driver Version: 550.67 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce GTX 1060 ...
Off | 00000000:01:00.0 On | N/A | | N/A 51C P0 23W / 60W | 256MiB / 6144MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------++-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 1323 G /usr/lib/xorg/Xorg 100MiB | | 0 N/A N/A 2908 G cinnamon 46MiB | | 0 N/A N/A 4555 G ...ithMdns --variations-seed-version=1 104MiB | +-----------------------------------------------------------------------------------------+Or in GUI in Mint:
I have Linux Mint 20.0 (Ulyana) Cinnamon, which is Ubuntu 20.04 (Focal) based. Also tested and valid for Linux Mint 21.1 (Vera) Cinnamon, which is Ubuntu 22.04 (Jammy) based.

GPU: NVIDIA GeForce GTX 1060, Max-Q Design, 6 GB GDDR5X VRAM, which has the basic specification as follows:

Objective: To install the latest available drivers without using any PPA (Personal Package Archive).

Status: If I run Mint's integrated Driver Manager, I only see old versions available; an old example from the original question:
How to install the latest Nvidia drivers on Linux Mint
The xxx.modeset=0 disables kernel mode setting for the hardware. In other words, this option tells Linux not to try activating and using the incompatible hardware, which is likely the source of your problems. When this is used, your computer will still be functional, but without the benefits of hardware acceleration provided by the graphics card. This will only affect you when you use graphics-intensive programs such as complex, high-end games. Otherwise it's just fine.
I've installed Linux Mint 18 on my machine, but it has a Radeon HD Graphics card, which is not supported by AMD on systems based on Ubuntu 16.04 (according to some research I did). So, after the install the system was booting into a black screen and wouldn't go anywhere. I did some research and found that I could edit the grub boot commands and add radeon.modeset=0. That worked, but I don't know what that does. Is it ok to run permanently with this parameters? Does this lower my overall machine performance? I don't really care about graphics, I just use my machine to ordinary stuff. Thanks, guys.
Linux Mint with radeon.modeset=0
The Device section with VirtualHeads is being ignored because you do not have an Intel card (your xorg.log indicates you have NVidia). Unfortunately the nvidia driver doesn't support virtual screens (the modesetting driver that's recommended for Intel cards nowadays doesn't support it either, btw), and it's not possible to use the dummy driver without breaking xrandr in the process (you'd need a static "xinerama" configuration in xorg.conf, and that's not something you want to do in 2020). Fortunately it's possible to (ab)use the DisplayLink evdi kernel module to add virtual outputs to any Xorg driver that has the Source Output xrandr provider. The process is as follows:

1. Install evdi, either via apt install evdi-dkms or, if you get build errors because your kernel is too new, using this make target from DisplayLink's git repo.

2. Load the kernel module:

modprobe evdi initial_device_count=2

You may then add

options evdi initial_device_count=2

to /etc/modprobe.d/local-evdi.conf to persist this across reboots.

3. Restart X; you should now see two additional Sink Output providers in xrandr --listproviders.

4. Enable the new outputs:

xrandr --setprovideroutputsource 1 0
xrandr --setprovideroutputsource 2 0

You'll need to do this whenever you restart X, so put it in an autostart script or similar.

5. Add the desired resolution to the new output:

xrandr --addmode DVI-I-1-1 1920x1080

6. Activate the new output:

xrandr --output DVI-I-1-1 --mode 1920x1080 --right-of HDMI-0

Now there's a second dummy screen to the right of your primary one and you can start a VNC server there. Hope it works for you! :-)
I've been trying to set up a virtual display with Xorg, but there's just no virtual display in xrandr. This seems to be totally ignored:

Section "Device"
    Identifier "Device1"
    Driver     "intel"
    Option     "VirtualHeads" "1"
EndSection

Specs:

OS: Debian Testing (Bullseye)
Nvidia proprietary driver version: 440.82
CPU: Intel(R) Core(TM) i5-6400 CPU @ 2.70GHz

lspci
xrandr --verbose
xorg.log
xorg.conf

I've also tried adding this to the xorg.conf with no success (I've tried Device1 as well):

Section "Screen"
    Identifier "VirtualScreen0"
    Device     "Device0"
    SubSection "Display"
        Virtual 1600 900
    EndSubSection
EndSection

Why: I want to use my laptop as a second display using VNC. I've spent half a day trying to figure this out, but with no success. I feel like I've tried everything. Does anyone have any clue how to get this working? Thank you very much.
Unable to add a VIRTUAL display to Xorg
Greg Kroah-Hartman has written on this topic here: https://www.kernel.org/doc/html/v4.10/process/stable-api-nonsense.html

Besides some technical details regarding compiling C code, he draws out a couple of basic software engineering issues that drive the decision.

The Linux kernel is always a work in progress. This happens for many reasons:

- New requirements come along. People want their software to do more; that's why most of us upgrade, we want the latest and greatest features. These can require rework of the existing software.
- Bugs are found which need fixing. Sometimes the bugs are in the design itself and cannot be fixed without significant rework.
- New ideas and idioms in the software world come along, and people find much easier / more elegant / more efficient ways to do things.

This is true of most software, and any software that is not maintained will die a slow and painful death. What you are asking is: why doesn't that old unmaintained code still work? Why aren't old interfaces maintained?

Ensuring backward compatibility would require that old (often "broken" and insecure) interfaces be maintained. Of course it's theoretically possible to do this, except it carries significant cost. Greg Kroah-Hartman writes:

If Linux had to ensure that it will preserve a stable source interface, a new interface would have been created, and the older, broken one would have had to be maintained over time, leading to extra work for the [developers]. Since all Linux [developers] do their work on their own time, asking programmers to do extra work for no gain, for free, is not a possibility.

Even though Linux is open source, there is still only limited developer time to maintain it. So manpower can still be discussed in terms of "cost". The developers have to choose how they spend their time:

1. Spend a lot of time maintaining old / broken / slow / insecure interfaces. This can sometimes be double to triple the time it took to write the interface in the first instance.
2. Throw away the old interfaces and expect other software maintainers to [do their job and] maintain their own software.

On balance, binning interfaces is really cost-effective (for the kernel developers). If you want to know why developers don't spend months and years of their life saving others from paying $10 for a new wifi adaptor... that's the reason. Remember that's time/cost effective for the kernel developers, not necessarily cost-effective for you or the manufacturers.
Why isn't the Linux module API backward compatible? I'm frustrated at having to find updated drivers after every Linux kernel update. I have a wireless adapter that needs a proprietary driver, but the manufacturer discontinued the device about 7 years ago. As the code is very old and was written for Linux 2.6.0.0, it doesn't compile with the latest Linux kernels. I have used many Linux distributions, but the same problem exists everywhere. Although there is an open-source driver distributed with the Linux kernel, it doesn't work. Some people are trying to modify the old proprietary code to make it compatible with the latest Linux kernels, but when a new Linux kernel is released, it takes months to make the code compatible with it. Within that time, another new version is released. For this reason, I can't upgrade to a new Linux kernel; sometimes I can't even upgrade my distribution.
Why isn't the Linux module API backward compatible?
This should solve your problem:

sudo apt-get install linux-generic-hwe-16.04-edge xserver-xorg-input-libinput-hwe-16.04
wget https://mirrors.kernel.org/ubuntu/pool/main/l/linux-firmware/linux-firmware_1.161.1_all.deb
sudo dpkg -i linux-firmware_1.161.1_all.deb

Do this, and reboot. Source? Here, people facing the same issue with the Intel 8265 Bluetooth on Ubuntu 16.04 LTS and other notebook models like the Lenovo Y520 and Dell Precision 5520m. These commands mean that you need the HWE version of libinput + kernel, and at least version 1.161.1 of the linux-firmware package. I got the exact Bluetooth model number by browsing the SCCM Package for Windows of the Lenovo T470s. As reported by other users, this solution works with the ThinkPad variants T470 and T470s.
I'm running Ubuntu 16.04 on a Lenovo ThinkPad T470s and, quite simply, bluetooth appears to not exist, even though it is clearly in the computer according to all specifications available. It appears that I'm missing a kernel driver. Sounds easy enough: except I cannot find any information on where to find this driver or even identify which one it is. I've searched online for the specification and all I find is "integrated bluetooth" or something to that effect. The most specific I've found so far is "Intel Unknown" from an Ubuntu page. I have been unable to use this information to find any kind of Linux bluetooth driver, whether from Intel or anywhere else. Neither lspci nor lsusb show anything useful, but here is the output anyway:

[root@tutu ~]# lspci
00:00.0 Host bridge: Intel Corporation Device 5904 (rev 02)
00:02.0 VGA compatible controller: Intel Corporation Device 5916 (rev 02)
00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-LP Thermal subsystem (rev 21)
00:16.0 Communication controller: Intel Corporation Sunrise Point-LP CSME HECI (rev 21)
00:16.3 Serial controller: Intel Corporation Device 9d3d (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 9d10 (rev f1)
00:1c.2 PCI bridge: Intel Corporation Device 9d12 (rev f1)
00:1d.0 PCI bridge: Intel Corporation Device 9d18 (rev f1)
00:1f.0 ISA bridge: Intel Corporation Device 9d4e (rev 21)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-LP PMC (rev 21)
00:1f.3 Audio device: Intel Corporation Device 9d71 (rev 21)
00:1f.4 SMBus: Intel Corporation Sunrise Point-LP SMBus (rev 21)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM (rev 21)
3a:00.0 Network controller: Intel Corporation Device 24fd (rev 78)
3c:00.0 Non-Volatile memory controller: Toshiba America Info Systems Device 0115 (rev 01)
[root@tutu ~]# lsusb
Bus 002 Device 002: ID 0bda:0316 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 04ca:7066 Lite-On Technology Corp.
Bus 001 Device 004: ID 8087:0a2b Intel Corp.
Bus 001 Device 003: ID 0458:0185 KYE Systems Corp. (Mouse Systems)
Bus 001 Device 002: ID 1395:002d Sennheiser Communications
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

So, quite frankly, I'm stuck, and I haven't been able to find anything useful online. Any tips on how to identify my bluetooth chip and/or find Linux (Ubuntu 16.04) drivers for it?
Linux Bluetooth driver for Lenovo ThinkPad T470s
Here is the link for your binary blobs. Quote from the link:Binary firmware blobs from the Debian non-free archive are installed when no good Free Software alternative exists.
Or is the kernel 100% open source?
Does Tails OS have binary blobs in its kernel?
It seems there is a way to prevent the system from trying to set the device's configuration, and it even works from user space. I stumbled upon the commit that added this functionality to the kernel, and luckily it also includes some sample code. User-space programs can declare ownership of a particular port of a USB hub via the device file system, resulting in usb_device_is_owned() returning true. The trick seems to be: unsigned int port = 2; // Just as an example // Send request 24 of type 'U' (USB), which returns an unsigned int unsigned int ioctl_id = _IOR('U', 24, unsigned int); // fd is a file descriptor to the hub's file in the devfs ioctl(fd, ioctl_id, &port); Information on some of the ioctl requests of the USB subsystem is described in the kernel documentation. The full list can be found in the kernel source. The #defines are here. Interestingly, the system still sends Set Configuration Requests for configurations 0 (reset configuration).
I'm trying to write a Linux driver for an existing USB device, based on captured USB communication. The device has multiple configurations. Unfortunately, it seems like the device doesn't follow the USB specification: Only the first Set Configuration Request works. The device locks and eventually crashes on issuing a second Set Configuration Request. USB reset or resetting the configuration (setting it to 0) also doesn't help anything. Now, it seems the Linux USB Core for whatever reason decides to set the device's configuration to the wrong value (it just picks the first one), without me having any chance to step in before it does so. I already tried it from a kernel module and from a user space libusb driver. From reading through the kernel source code, it seems like the function that selects the configuration is usb_choose_configuration() in the generic driver at /drivers/usb/core/generic.c. I can see that the function could be skipped if usb_device_is_owned() returned true, but I have no idea how I could influence the result of that function. I'm hoping I won't have to recompile the whole kernel just to add a USB driver. Thus, here are my questions:How could I prevent the system from setting a configuration before handing control to my driver? It seems in recent kernel versions, usbcore is a builtin module that cannot be replaced. Is there any other way I could override the usb_choose_configuration function in the generic driver (which seems to be part of usbcore)? How could I make the device owned, so that usb_device_is_owned() returns true already when the device is attached?
Prevent Linux kernel from setting USB device configuration
The problem is caused by the removal of the do_posix_clock_monotonic_gettime #define, which existed before kernel 4.0. It used to be defined as:

#define do_posix_clock_monotonic_gettime(ts) ktime_get_ts(ts)

You could include that define in one of the driver's headers. Also, note that ktime_get_ts is declared in linux/timekeeping.h in 4.7.2. That said, I would not expect the driver to function properly under 4.7.2, as it seems to have only been tested under 3.x.x. Still, give it a try. I hope that helps.
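For instance, the driver could carry a small compatibility shim in one of its own headers instead of touching the kernel tree. This is a sketch of kernel-side code, not from the original answer; the version guard is an assumption so the same source still builds on pre-4.0 kernels:

```
/* compat.h (hypothetical) -- restore the macro removed from <linux/time.h> */
#include <linux/version.h>

#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 0, 0)
#include <linux/timekeeping.h>   /* ktime_get_ts() lives here in 4.7.x */
#ifndef do_posix_clock_monotonic_gettime
#define do_posix_clock_monotonic_gettime(ts) ktime_get_ts(ts)
#endif
#endif
```

Including this header before the affected code lets the driver sources compile unchanged on both old and new kernels.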
I'm trying to compile drivers. do_posix_clock_monotonic_gettime was used in the source code and I'm made sure that the file had #include <linux/time.h>. I was getting implicit declaration when I tried to compile it. When I opened time.h file on my disk, it didn't have do_posix_clock_monotonic_gettime declared. Can someone help me with this? Here is the content of time.h #ifndef _LINUX_TIME_H #define _LINUX_TIME_H# include <linux/cache.h> # include <linux/seqlock.h> # include <linux/math64.h> # include <linux/time64.h>extern struct timezone sys_tz;#define TIME_T_MAX (time_t)((1UL << ((sizeof(time_t) << 3) - 1)) - 1)static inline int timespec_equal(const struct timespec *a, const struct timespec *b) { return (a->tv_sec == b->tv_sec) && (a->tv_nsec == b->tv_nsec); }/* * lhs < rhs: return <0 * lhs == rhs: return 0 * lhs > rhs: return >0 */ static inline int timespec_compare(const struct timespec *lhs, const struct timespec *rhs) { if (lhs->tv_sec < rhs->tv_sec) return -1; if (lhs->tv_sec > rhs->tv_sec) return 1; return lhs->tv_nsec - rhs->tv_nsec; }static inline int timeval_compare(const struct timeval *lhs, const struct timeval *rhs) { if (lhs->tv_sec < rhs->tv_sec) return -1; if (lhs->tv_sec > rhs->tv_sec) return 1; return lhs->tv_usec - rhs->tv_usec; }extern time64_t mktime64(const unsigned int year, const unsigned int mon, const unsigned int day, const unsigned int hour, const unsigned int min, const unsigned int sec);/** * Deprecated. Use mktime64(). */ static inline unsigned long mktime(const unsigned int year, const unsigned int mon, const unsigned int day, const unsigned int hour, const unsigned int min, const unsigned int sec) { return mktime64(year, mon, day, hour, min, sec); }extern void set_normalized_timespec(struct timespec *ts, time_t sec, s64 nsec);/* * timespec_add_safe assumes both values are positive and checks * for overflow. It will return TIME_T_MAX if the reutrn would be * smaller then either of the arguments. 
*/ extern struct timespec timespec_add_safe(const struct timespec lhs, const struct timespec rhs);static inline struct timespec timespec_add(struct timespec lhs, struct timespec rhs) { struct timespec ts_delta; set_normalized_timespec(&ts_delta, lhs.tv_sec + rhs.tv_sec, lhs.tv_nsec + rhs.tv_nsec); return ts_delta; }/* * sub = lhs - rhs, in normalized form */ static inline struct timespec timespec_sub(struct timespec lhs, struct timespec rhs) { struct timespec ts_delta; set_normalized_timespec(&ts_delta, lhs.tv_sec - rhs.tv_sec, lhs.tv_nsec - rhs.tv_nsec); return ts_delta; }/* * Returns true if the timespec is norm, false if denorm: */ static inline bool timespec_valid(const struct timespec *ts) { /* Dates before 1970 are bogus */ if (ts->tv_sec < 0) return false; /* Can't have more nanoseconds then a second */ if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC) return false; return true; }static inline bool timespec_valid_strict(const struct timespec *ts) { if (!timespec_valid(ts)) return false; /* Disallow values that could overflow ktime_t */ if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX) return false; return true; }static inline bool timeval_valid(const struct timeval *tv) { /* Dates before 1970 are bogus */ if (tv->tv_sec < 0) return false; /* Can't have more microseconds then a second */ if (tv->tv_usec < 0 || tv->tv_usec >= USEC_PER_SEC) return false; return true; }extern struct timespec timespec_trunc(struct timespec t, unsigned gran);/* * Validates if a timespec/timeval used to inject a time offset is valid. * Offsets can be postive or negative. The value of the timeval/timespec * is the sum of its fields, but *NOTE*: the field tv_usec/tv_nsec must * always be non-negative. 
*/ static inline bool timeval_inject_offset_valid(const struct timeval *tv) { /* We don't check the tv_sec as it can be positive or negative */ /* Can't have more microseconds then a second */ if (tv->tv_usec < 0 || tv->tv_usec >= USEC_PER_SEC) return false; return true; }static inline bool timespec_inject_offset_valid(const struct timespec *ts) { /* We don't check the tv_sec as it can be positive or negative */ /* Can't have more nanoseconds then a second */ if (ts->tv_nsec < 0 || ts->tv_nsec >= NSEC_PER_SEC) return false; return true; }#define CURRENT_TIME (current_kernel_time()) #define CURRENT_TIME_SEC ((struct timespec) { get_seconds(), 0 })/* Some architectures do not supply their own clocksource. * This is mainly the case in architectures that get their * inter-tick times by reading the counter on their interval * timer. Since these timers wrap every tick, they're not really * useful as clocksources. Wrapping them to act like one is possible * but not very efficient. So we provide a callout these arches * can implement for use with the jiffies clocksource to provide * finer then tick granular time. */ #ifdef CONFIG_ARCH_USES_GETTIMEOFFSET extern u32 (*arch_gettimeoffset)(void); #endifstruct itimerval; extern int do_setitimer(int which, struct itimerval *value, struct itimerval *ovalue); extern int do_getitimer(int which, struct itimerval *value);extern unsigned int alarm_setitimer(unsigned int seconds);extern long do_utimes(int dfd, const char __user *filename, struct timespec *times, int flags);struct tms; extern void do_sys_times(struct tms *);/* * Similar to the struct tm in userspace <time.h>, but it needs to be here so * that the kernel source is self contained. 
*/ struct tm { /* * the number of seconds after the minute, normally in the range * 0 to 59, but can be up to 60 to allow for leap seconds */ int tm_sec; /* the number of minutes after the hour, in the range 0 to 59*/ int tm_min; /* the number of hours past midnight, in the range 0 to 23 */ int tm_hour; /* the day of the month, in the range 1 to 31 */ int tm_mday; /* the number of months since January, in the range 0 to 11 */ int tm_mon; /* the number of years since 1900 */ long tm_year; /* the number of days since Sunday, in the range 0 to 6 */ int tm_wday; /* the number of days since January 1, in the range 0 to 365 */ int tm_yday; };void time_to_tm(time_t totalsecs, int offset, struct tm *result);/** * timespec_to_ns - Convert timespec to nanoseconds * @ts: pointer to the timespec variable to be converted * * Returns the scalar nanosecond representation of the timespec * parameter. */ static inline s64 timespec_to_ns(const struct timespec *ts) { return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec; }/** * timeval_to_ns - Convert timeval to nanoseconds * @ts: pointer to the timeval variable to be converted * * Returns the scalar nanosecond representation of the timeval * parameter. */ static inline s64 timeval_to_ns(const struct timeval *tv) { return ((s64) tv->tv_sec * NSEC_PER_SEC) + tv->tv_usec * NSEC_PER_USEC; }/** * ns_to_timespec - Convert nanoseconds to timespec * @nsec: the nanoseconds value to be converted * * Returns the timespec representation of the nsec parameter. */ extern struct timespec ns_to_timespec(const s64 nsec);/** * ns_to_timeval - Convert nanoseconds to timeval * @nsec: the nanoseconds value to be converted * * Returns the timeval representation of the nsec parameter. 
*/ extern struct timeval ns_to_timeval(const s64 nsec);/** * timespec_add_ns - Adds nanoseconds to a timespec * @a: pointer to timespec to be incremented * @ns: unsigned nanoseconds value to be added * * This must always be inlined because its used from the x86-64 vdso, * which cannot call other kernel functions. */ static __always_inline void timespec_add_ns(struct timespec *a, u64 ns) { a->tv_sec += __iter_div_u64_rem(a->tv_nsec + ns, NSEC_PER_SEC, &ns); a->tv_nsec = ns; }#endifUsing kernel 4.7.2-1 on Arch. Thanks in advance. Edit: I used configure and make. But didn't work. It was giving me an error regarding time and date. Exactly this is what I got error: macro "__DATE__" might prevent reproducible builds [-Werror=date-time]. So, I supressed the warning by adding extra cflag Wno-error=date-time. Now I get this error: implicit declaration of function ‘do_posix_clock_monotonic_gettime’ [-Werror=implicit-function-declaration] do_posix_clock_monotonic_gettime(&tstamp);The code that was causing error is static void snd_timer_notify1(struct snd_timer_instance *ti, int event) { struct snd_timer *timer; unsigned long flags; unsigned long resolution = 0; struct snd_timer_instance *ts; struct timespec tstamp; if (timer_tstamp_monotonic) do_posix_clock_monotonic_gettime(&tstamp); else getnstimeofday(&tstamp); if (snd_BUG_ON(event < SNDRV_TIMER_EVENT_START || event > SNDRV_TIMER_EVENT_PAUSE)) return; if (event == SNDRV_TIMER_EVENT_START || event == SNDRV_TIMER_EVENT_CONTINUE) resolution = snd_timer_resolution(ti); if (ti->ccallback) ti->ccallback(ti, event, &tstamp, resolution); if (ti->flags & SNDRV_TIMER_IFLG_SLAVE) return; timer = ti->timer; if (timer == NULL) return; if (timer->hw.flags & SNDRV_TIMER_HW_SLAVE) return; spin_lock_irqsave(&timer->lock, flags); list_for_each_entry(ts, &ti->slave_active_head, active_list) if (ts->ccallback) ts->ccallback(ti, event + 100, &tstamp, resolution); spin_unlock_irqrestore(&timer->lock, flags); }There is one more function that 
uses do_posix_clock_monotime_gettime. It's code is: static void snd_timer_user_tinterrupt(struct snd_timer_instance *timeri, unsigned long resolution, unsigned long ticks) { struct snd_timer_user *tu = timeri->callback_data; struct snd_timer_tread *r, r1; struct timespec tstamp; int prev, append = 0; memset(&tstamp, 0, sizeof(tstamp)); spin_lock(&tu->qlock); if ((tu->filter & ((1 << SNDRV_TIMER_EVENT_RESOLUTION) | (1 << SNDRV_TIMER_EVENT_TICK))) == 0) { spin_unlock(&tu->qlock); return; } if (tu->last_resolution != resolution || ticks > 0) { if (timer_tstamp_monotonic) do_posix_clock_monotonic_gettime(&tstamp); else getnstimeofday(&tstamp); } if ((tu->filter & (1 << SNDRV_TIMER_EVENT_RESOLUTION)) && tu->last_resolution != resolution) { r1.event = SNDRV_TIMER_EVENT_RESOLUTION; r1.tstamp = tstamp; r1.val = resolution; snd_timer_user_append_to_tqueue(tu, &r1); tu->last_resolution = resolution; append++; } if ((tu->filter & (1 << SNDRV_TIMER_EVENT_TICK)) == 0) goto __wake; if (ticks == 0) goto __wake; if (tu->qused > 0) { prev = tu->qtail == 0 ? tu->queue_size - 1 : tu->qtail - 1; r = &tu->tqueue[prev]; if (r->event == SNDRV_TIMER_EVENT_TICK) { r->tstamp = tstamp; r->val += ticks; append++; goto __wake; } } r1.event = SNDRV_TIMER_EVENT_TICK; r1.tstamp = tstamp; r1.val = ticks; snd_timer_user_append_to_tqueue(tu, &r1); append++; __wake: spin_unlock(&tu->qlock); if (append == 0) return; kill_fasync(&tu->fasync, SIGIO, POLL_IN); wake_up(&tu->qchange_sleep); }
do_posix_clock_monotonic_gettime is missing in linux/time.h
After trying different things, this is what worked for me. I'd appreciate any suggestion or explanation if anything seems useless.

1. Download the proprietary driver you want to use from the nvidia website; in my case: NVIDIA-Linux-x86_64-375.39.run
2. Go to your non-graphic mode (ctrl-alt-f1).
3. Kill your graphic process (sudo service mdm stop). If the screen turns black, you can use a remote connection (ssh) from another computer for the next steps, or do step 1 again.
4. Edit your grub file /etc/default/grub and change the GRUB_CMDLINE_LINUX_DEFAULT line to this one:

GRUB_CMDLINE_LINUX_DEFAULT="nouveau.blacklist=1 quiet splash"

5. Purge every driver you may have, nouveau or nvidia (sudo apt-get purge xserver-xorg-video-nouveau libdrm-nouveau1a nvidia*).
6. Update your initramfs: sudo update-initramfs -u -k all. This step is really important, but I don't fully understand it, so your comments are welcome.
7. Reboot.
8. Go back into non-graphic mode (ctrl-alt-f1).
9. Kill your graphic process (sudo service mdm stop). If the screen turns black, you can use a remote connection (ssh) for the next steps or do step 1 again.
10. Run your downloaded proprietary driver file as root (sudo ./NVIDIA-Linux-x86_64-375.39.run) and click "yes" or "accept" to whatever it asks.
Reboot and enjoy.

The lspci -vnnn output should now look like this:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF104 [GeForce GTX 460] [10de:0e22] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: Gigabyte Technology Co., Ltd GF104 [GeForce GTX 460] [1458:34fc]
        Flags: bus master, fast devsel, latency 0, IRQ 126
        Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
        Memory at d0000000 (64-bit, prefetchable) [size=128M]
        Memory at d8000000 (64-bit, prefetchable) [size=64M]
        I/O ports at e000 [size=128]
        [virtual] Expansion ROM at de000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [b4] Vendor Specific Information: Len=14 <?>
        Capabilities: [100] Virtual Channel
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Kernel driver in use: nvidia
        Kernel modules: nvidiafb, nouveau, nvidia_375_drm, nvidia_drm, nvidia_375, nvidia

You can see that the kernel driver in use now references nvidia instead of nouveau.
I'm using Linux Mint. I recently updated my workstation, and from that moment my drivers went nuts. Everything was working fine before, with the nvidia-361 drivers, but when I finished my updates and rebooted the PC, it was running in "software rendering mode". I finally got a correct desktop back, but now I'm quite sure the card isn't functioning properly, because I can't launch even a simple 3D game (like blazeRush for example, which was running fine before). This is the result of lspci -vnnn:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF104 [GeForce GTX 460] [10de:0e22] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: Gigabyte Technology Co., Ltd GF104 [GeForce GTX 460] [1458:34fc]
        Flags: bus master, fast devsel, latency 0, IRQ 124
        Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
        Memory at d0000000 (64-bit, prefetchable) [size=128M]
        Memory at d8000000 (64-bit, prefetchable) [size=64M]
        I/O ports at e000 [size=128]
        Expansion ROM at de000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [b4] Vendor Specific Information: Len=14 <?>
        Capabilities: [100] Virtual Channel
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Kernel driver in use: nouveau
        Kernel modules: nvidiafb, nouveau, nvidia_375_drm, nvidia_375

As you can see, the kernel driver in use is "nouveau", but I would like to use nvidia_375 instead. I already tried to purge with apt like this:

sudo apt purge *nvidia* xserver-xorg*nouveau* bbswitch*

but when I do that, after a reboot, "nouveau" is still here... I can try to install the proprietary drivers again, but I will be back in the same situation as before the purge. I'm running out of options.
How to switch nvidia driver from "nouveau" to nvidia proprietary
I eventually just wrote a script that can be used to install and uninstall the driver, as well as set up the xorg.conf as my system requires:

#!/bin/bash
if [[ ! $(whoami) = "root" ]]; then
    echo -e "\033[1;31mPlease run this as root\033[0m"
    exit 1
fi

if [ "$1" = "enable" ]; then
    echo -e "\033[22;34mInstalling fglrx... ('/usr/share/fglrx64_p_i_c.x86_64')\033[1m\033[0m"
    sleep 3
    sudo rpm -ivh /usr/share/fglrx-amd-RPM/fglrx64_p_i_c-14.301.1001-1.x86_64.rpm
    if [ -f "/etc/X11/xorg.conf" ]; then
        echo "Backing up '/etc/X11/xorg.conf'"
        mv "/etc/X11/xorg.conf" "/etc/X11/xorg.conf.bak.$(date)"
    fi
    echo "Preparing /etc/X11/xorg.conf"
    echo -e 'Section "ServerLayout"\n    Identifier "aticonfig Layout"\n    Screen 0 "aticonfig-Screen[0]-0" 0 0\nEndSection\n\nSection "Module"\nEndSection\n\nSection "Monitor"\n    Identifier "aticonfig-Monitor[0]-0"\n    Option "VendorName" "ATI Proprietary Driver"\n    Option "ModelName" "Generic Autodetecting Monitor"\n    Option "DPMS" "true"\nEndSection\n\nSection "Device"\n    Identifier "aticonfig-Device[0]-0"\n    Driver "fglrx"\n    BusID "PCI:1:0:0"\nEndSection\n\nSection "Screen"\n    Identifier "aticonfig-Screen[0]-0"\n    Device "aticonfig-Device[0]-0"\n    Monitor "aticonfig-Monitor[0]-0"\n    DefaultDepth 24\n    SubSection "Display"\n        Viewport 0 0\n        Depth 24\n    EndSubSection\nEndSection\n' > "/etc/X11/xorg.conf"
elif [ "$1" = "disable" ]; then
    echo -e "\033[22;34mUninstalling fglrx... ('fglrx64_p_i_c.x86_64')\033[1m\033[0m"
    sleep 3
    sudo rpm -ev fglrx64_p_i_c.x86_64
else
    lsmod | grep fglrx
    echo -e "\033[22;34mThe options for the script are 'enable' 'disable'\033[1m\033[0m"
fi
exit

I could perhaps have had a go at taking apart the rpm scripts to see what they do during install/uninstall, but I think the above solution is simpler.
I have a Fedora 19 install on a laptop, which has Intel integrated graphics & discrete AMD graphics. I have been using the radeon driver, which works for most stuff - though I have recently tried the fglrx driver and found it to be quite a bit faster and to have better power management, though some OpenGL-based programs won't run correctly. So how can I stop the fglrx driver from loading upon boot, so it falls back onto the radeon driver? I have tried doing this by editing the /etc/modprobe.d/blacklist-fglrx.conf file

# Advanced Micro Devices, Inc.
# radeon conflicts with AMD Linux Graphics Driver
blacklist radeon

by commenting the blacklist radeon line and adding blacklist fglrx_pci below it. This just resulted in the GUI login screen not loading, so I had to switch to a TTY and edit it back to what it was.

lspci -k with fglrx installed (the only difference with it not installed is Subsystem: Lenovo Radeon HD 6370M/7370M is not shown, and radeon is used as the AMD driver* - when just blacklisted I think the only change is radeon is used):

00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        Subsystem: Lenovo Device 3975
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        Kernel driver in use: pcieport
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        Subsystem: Lenovo Device 397a
        Kernel driver in use: i915
00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: mei
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: ehci-pci
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev
05)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        Kernel driver in use: pcieport
00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
        Kernel driver in use: pcieport
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: ehci-pci
00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: lpc_ich
00:1f.2 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller (rev 05)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: ata_piix
00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        Subsystem: Lenovo Device 3975
00:1f.5 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller (rev 05)
        Subsystem: Lenovo Device 3975
        Kernel driver in use: ata_piix
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc.
[AMD/ATI] Robson CE [Radeon HD 6370M/7370M] Subsystem: Lenovo Radeon HD 6370M/7370M Kernel driver in use: fglrx_pci 07:00.0 Ethernet controller: Qualcomm Atheros AR8152 v2.0 Fast Ethernet (rev c1) Subsystem: Lenovo Device 3979 Kernel driver in use: atl1c 08:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01) Subsystem: Broadcom Corporation Device 051b Kernel driver in use: bcma-pci-bridgeHere is the RPM made by the installer (the installer offers install directly or build packages for openSUSE or RedHat, I used the latest RedHat 64a option), and here are the RPM install/uninstall scripts extracted from it.I know about have tried using modprobe to remove the module when the system has booted - this does not work, resulting in modprobe: FATAL: Module fglrx is in use. Blacklisting the driver (and removing /etc/X11/xorg.conf - one needs to be created after installing fglrx (it looks like this), and fedora does not need one anyway ) works in that the radeon driver is used instead - the problem here is that then quite a few applications don't work, and Gnome Shell & GDM show this:However, uninstalling the driver and rebooting always works.For instance, applying this blacklist in /etc/modprobe.d/blacklist-fglrx.conf: # Advanced Micro Devices, Inc. # radeon conflicts with AMD Linux Graphics Driver #blacklist radeon blacklist fglrx blacklist amd_iommu_v2 blacklist fglrx_pciand removing xorg.conf results in the above GDM error. I can then (and have) switched to a TTY, and used systemctl to switch from GDM to LightDM and log into Xfce (which seems to work normally). Apps such as cairo-dock fail with segmentation faults. 
glxinfo gives: name of display: :0.0 X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 153 (GLX) Minor opcode of failed request: 19 (X_GLXQueryServerString) Serial number of failed request: 12 Current serial number in output stream: 12systemctl show this as GDM's status (before I switched to LightDM + Xfce), even though it crashed: gdm.service - GNOME Display Manager Loaded: loaded (/usr/lib/systemd/system/gdm.service; enabled) Active: active (running) since Mon 2014-11-10 17:15:27 GMT; 1min 34s ago Main PID: 471 (gdm) CGroup: name=systemd:/system/gdm.service ├─ 471 /usr/sbin/gdm ├─ 597 /usr/libexec/gdm-simple-slave --display-id /org/gnome/DisplayManager/Displays/_0 ├─ 921 /usr/bin/Xorg :0 -background none -verbose -auth /run/gdm/auth-for-gdm-l88Ufh/database -seat seat0 -nolisten tcp vt1 └─1102 gdm-session-worker [pam/gdm-launch-environment]and lspci -k is as follows: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) Subsystem: Lenovo Device 3975 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) Kernel driver in use: pcieport 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) Subsystem: Lenovo Device 397a Kernel driver in use: i915 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) Subsystem: Lenovo Device 3975 Kernel driver in use: mei 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) Subsystem: Lenovo Device 3975 Kernel driver in use: ehci-pci 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) Subsystem: Lenovo Device 3975 Kernel driver in use: snd_hda_intel 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 
Series Chipset Family PCI Express Root Port 1 (rev b5) Kernel driver in use: pcieport 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5) Kernel driver in use: pcieport 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) Subsystem: Lenovo Device 3975 Kernel driver in use: ehci-pci 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) Subsystem: Lenovo Device 3975 Kernel driver in use: lpc_ich 00:1f.2 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller (rev 05) Subsystem: Lenovo Device 3975 Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) Subsystem: Lenovo Device 3975 00:1f.5 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller (rev 05) Subsystem: Lenovo Device 3975 Kernel driver in use: ata_piix 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Robson CE [Radeon HD 6370M/7370M] Subsystem: Lenovo Radeon HD 6370M/7370M Kernel driver in use: radeon 07:00.0 Ethernet controller: Qualcomm Atheros AR8152 v2.0 Fast Ethernet (rev c1) Subsystem: Lenovo Device 3979 Kernel driver in use: atl1c 08:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01) Subsystem: Broadcom Corporation Device 051b Kernel driver in use: bcma-pci-bridgeHere also are the logs from /var/log/gdm/:0.log, /var/log/Xorg.0.log & /var/log/Xorg.0.log.old - I have checked the timestamps of each log, and I think the GDM & the old Xorg logs are the correct logs - the later Xorg log is the one from the current Xfce session, and should also be relevant. 
I think the issue is either that there is still some configuration somewhere telling it to use the fglrx driver, or that there is a patched version of libGL (or something similar) which fglrx installed and which itself needs fglrx (in which case this may be unsolvable...).
Stop fglrx from loading on boot/unload fglrx module without uninstalling it
The general approach is to ship an object file containing the proprietary code, and a “shim”, provided as source code, which is rebuilt for the appropriate kernels when necessary. The interface code handles all the module interface for the kernel, including symbol imports with version strings etc. For example, NVIDIA drivers contain an nv-kernel.o file which is provided only as a binary, and ends up wrapped in the kernel module.
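To make this concrete, here is a minimal sketch of the kind of Kbuild fragment such a package might ship. The file names (shim.c, core-blob.o_binary) and the module name are invented for illustration; real vendors use their own layout (NVIDIA, for instance, ships nv-kernel.o as the opaque part):

```makefile
# Hypothetical Kbuild fragment for a proprietary out-of-tree module.
# The shim is compiled here against the running kernel's headers; the
# pre-built proprietary object is just copied in and linked with it.
obj-m := vendor.o
vendor-y := shim.o core-blob.o

# Custom rule: the "source" for core-blob.o is a pre-built binary blob.
$(obj)/core-blob.o: $(src)/core-blob.o_binary
	cp $< $@
```

Building against the installed headers is then typically wrapped in a call like `make -C /lib/modules/$(uname -r)/build M=$PWD modules`, which is why such installers require the kernel headers to be present on the target machine.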
A question has been bugging my mind recently. Since virtually all proprietary modules are out of tree (and therefore not compiled against any particular kernel version), I'm wondering how exactly they are compiled and loaded. Many major organizations offer a single .tar.gz, .deb or whatever for download, and it just works across many kernel versions. Are downloaded modules built on the target machine? Do they distribute the source code? Does the source code know which kernel version it is building against? Or do the drivers force-load themselves against the kernel, a la finit_module or init_module with flags that ignore the version magic? Or is something else going on behind the scenes? Thanks in advance.
How do proprietary modules work for the majority of kernel versions?
Intel does not offer a proprietary driver; the packages on 01.org exist only to update the open-source drivers for select distributions (Fedora and Ubuntu). If you want a newer driver, you just need to update your kernel (for the driver) and Mesa (for the userspace that goes with it).
UPDATED: I have Linux Mint 18.1 running on an HP laptop. VGA compatible controller: Intel Corporation Atom Processor Z36xxx/Z37xxx Series Graphics & Display... I can install the drivers from https://01.org but the update tool fails 'make' (though ./configure runs clean) with this error: make[3]: Leaving directory '/home/Documents/video_driver/xf86-video-intel-2.99.917/src/sna' Making all in uxa make[3]: Entering directory '/home/Documents/video_driver/xf86-video-intel-2.99.917/src/uxa' CC intel_display.lo CC intel_driver.lo intel_driver.c: In function 'redisplay_dirty': intel_driver.c:645:2: error: too many arguments to function 'PixmapSyncDirtyHelper' PixmapSyncDirtyHelper(dirty, &pixregion); ^ In file included from /usr/include/xorg/gc.h:54:0, from /usr/include/xorg/dix.h:51, from /usr/include/xorg/privates.h:151, from /usr/include/xorg/cursor.h:53, from /usr/include/xorg/scrnintstr.h:53, from /usr/include/xorg/xf86str.h:39, from /usr/include/xorg/xf86.h:44, from intel_driver.c:49: /usr/include/xorg/pixmap.h:131:1: note: declared here PixmapSyncDirtyHelper(PixmapDirtyUpdatePtr dirty); ^ Makefile:524: recipe for target 'intel_driver.lo' failed make[3]: *** [intel_driver.lo] Error 1 make[3]: Leaving directory '/home/Documents/video_driver/xf86-video-intel-2.99.917/src/uxa' Makefile:598: recipe for target 'all-recursive' failed make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory '/home/Documents/video_driver/xf86-video-intel-2.99.917/src' Makefile:468: recipe for target 'all-recursive' failed make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory '/home/Documents/video_driver/xf86-video-intel-2.99.917' Makefile:399: recipe for target 'all' failed make: *** [all] Error 2. I did find this for ATI (not Intel): https://lists.x.org/archives/xorg-driver-ati/2015-July/027750.html It failed. Though I knew it was going to fail before trying, since none of the target files existed. I couldn't find an Intel-specific one.
None of the driver software leaves a trace of driver files on my system either (i.e. a .ko driver file) with which to perform a manual install (I know how to use modprobe). Where can I get the .ko files so that I can do a manual install and skip all the install scripts? Or, as choice #2, an install script that works.
How do you install Intel proprietary drivers on Mint?
After reading it on the NVIDIA site: you don't need to run modprobe nvidia-375, but you should add nomodeset in GRUB when the display is not right. I did some searches for you.
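For reference, on Debian/Ubuntu-style systems (Mint included) adding nomodeset usually means editing the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub and re-running update-grub. A minimal sketch of the edit — the existing value "quiet splash" is only an example:

```shell
# Append nomodeset inside the closing quote of the default cmdline.
# In practice you would run the sed against /etc/default/grub (as root)
# and then run: sudo update-grub
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"'
printf '%s\n' "$line" | sed 's/"$/ nomodeset"/'
# → GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
```

For a one-off test you can instead press e at the GRUB menu and append nomodeset to the linux line for that boot only.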
I've tried to install and configure nvidia drivers on Linux Mint 18 but still have errors. Can anyone help me? See some errors: wellington@wellington-mint18 ~ $ cat /var/log/Xorg.0.log | egrep -i "nvidia|nouveau" [ 18.063] (==) Matched nvidia as autoconfigured driver 0 [ 18.063] (==) Matched nouveau as autoconfigured driver 1 [ 18.063] (II) LoadModule: "nvidia" [ 18.064] (WW) Warning, couldn't open module nvidia [ 18.064] (II) UnloadModule: "nvidia" [ 18.064] (II) Unloading nvidia [ 18.064] (EE) Failed to load module "nvidia" (module does not exist, 0)and wellington@wellington-mint18 ~ $ sudo modprobe nvidia_current; modprobe: FATAL: Module nvidia_current not found in directory /lib/modules/4.4.0-21-genericSee scenario: wellington@wellington-mint18 ~ $ uname -a Linux wellington-mint18 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux wellington@wellington-mint18 ~ $ cat /etc/*release DISTRIB_ID=LinuxMint DISTRIB_RELEASE=18 DISTRIB_CODENAME=sarah DISTRIB_DESCRIPTION="Linux Mint 18 Sarah"I tried too: https://gist.github.com/wellington1993/da8b51cae81a05156746bbc8e8304ec6
Linux Mint 18 + Nvidia-375: modprobe: FATAL: Module nvidia_current not found in directory /lib/modules/4.4.0-21-generic
Issue resolved: for some reason, a meta-package of a previously installed version remained installed: nvidia-driver-440. Reproduced on 3 laptops already; the offending package stays the same and is blocking Nvidia from updating. After updating the OS as a whole, I rebooted and issued: apt-get purge nvidia-driver-440 with result: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: nvidia-driver-440* 0 upgraded, 0 newly installed, 1 to remove and 1 not upgraded. After this operation, 19.5 kB disk space will be freed. Do you want to continue? [Y/n] (Reading database ... 438621 files and directories currently installed.) Removing nvidia-driver-440 (450.80.02-0ubuntu0.20.04.2) ... Now the dry run above seems to be OK. The installation as root with: apt-get --install-recommends install nvidia-driver-455 went smoothly and my system booted up without any problem now. Visual proof (Mint's Driver Manager):
I have a Linux Mint 20 Cinnamon Nvidia-based (GTX 1060) system and would like to install the latest Nvidia drivers now when I noticed they're out. However, I can't seem to be able to install it with: apt-get --dry-run install nvidia-driver-455which throws up the following: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:The following packages have unmet dependencies: nvidia-driver-455 : Depends: libnvidia-extra-455 (= 455.38-0ubuntu0.20.04.1) but it is not going to be installed Depends: nvidia-compute-utils-455 (= 455.38-0ubuntu0.20.04.1) but it is not going to be installed Depends: libnvidia-encode-455 (= 455.38-0ubuntu0.20.04.1) but it is not going to be installed Depends: nvidia-utils-455 (= 455.38-0ubuntu0.20.04.1) but it is not going to be installed Depends: xserver-xorg-video-nvidia-455 (= 455.38-0ubuntu0.20.04.1) but it is not going to be installed Depends: libnvidia-ifr1-455 (= 455.38-0ubuntu0.20.04.1) but it is not going to be installed Recommends: libnvidia-compute-455:i386 (= 455.38-0ubuntu0.20.04.1) Recommends: libnvidia-decode-455:i386 (= 455.38-0ubuntu0.20.04.1) Recommends: libnvidia-encode-455:i386 (= 455.38-0ubuntu0.20.04.1) Recommends: libnvidia-ifr1-455:i386 (= 455.38-0ubuntu0.20.04.1) Recommends: libnvidia-fbc1-455:i386 (= 455.38-0ubuntu0.20.04.1) E: Unable to correct problems, you have held broken packages.I don't have any held packages, nor do I have any residual configurations.Currently, I am running version 450 without any problems on kernel 5.4.0-54-generic. So, what should I do now? How to investigate this?
Can't install nvidia-driver-455 (upgrade from 450 version)
My network driver is rtl8192cu and I've just figured out there are some issues with this driver, not only for me but for many other users. After digging deeply for hours, I found the following to be helpful. Ensure you have the necessary prerequisites installed: sudo apt-get update sudo apt-get install git linux-headers-generic build-essential dkms (In my case I didn't need to install the linux headers.) Clone this repository: git clone https://github.com/pvaret/rtl8192cu-fixes.git Set it up as a DKMS module: sudo dkms add ./rtl8192cu-fixes Build and install it: sudo dkms install 8192cu/1.11 Refresh the module list: sudo depmod -a Ensure the native (and broken) kernel driver is blacklisted: sudo cp ./rtl8192cu-fixes/blacklist-native-rtl8192.conf /etc/modprobe.d/ Source page
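The blacklist file copied in the last step simply tells modprobe not to load the in-kernel Realtek drivers, so the DKMS-built 8192cu module wins at boot. Its contents are along these lines — the exact module names below are an illustration, so check the file shipped in the repository for the authoritative list:

```
# /etc/modprobe.d/blacklist-native-rtl8192.conf (illustrative contents)
blacklist rtl8192cu
blacklist rtl8192c_common
blacklist rtlwifi
```

After copying it, a reboot (or rmmod of the blacklisted modules followed by modprobe 8192cu) makes the replacement driver take effect.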
I'm using rtl8192cu on various distros and getting slow internet speeds on them, but on bodhi linux, since rtl8xxxu is available and loaded, I'm getting higher speeds. I'm curious to know if it is possible to download the rtl8xxxu drivers and use them on other distros as well? lsmod | grep rtl on bodhi linux returns: rtl8xxxu 122880 0 mac80211 778240 1 rtl8xxxuAnd I guess that's the reason I have higher speed here.
How to download and install rtl8xxxu driver?
I believe the 'Host Bridge' that lspci is referring to is the PCI host bridge that connects the CPU to the PCI bus. I have a 3rd generation Core i5 and my host bridge description says: 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09)I think what it means is that the host bridge is designed for use with the Xeon E3-1200, but it also happens to be compatible with the i3/i5, which is presumably why it is being used on the motherboard. So, I don't think you have the 'wrong' PCI controller. It is a compatible PCI controller that just happens to be labelled with a description that refers to a different CPU. Also, I would think that description information from lspci is most likely coming directly from the controller on the motherboard itself (i.e. a built-in chip), rather than from a driver. You are not going to be able to change that, as it is part of the motherboard. Also, it is unlikely you would see any noticeable performance benefit from trying to optimize the driver for the PCI bus. Are you having any problems that suggest the PCI bus is not working correctly?
A few months ago I installed Debian 10 on my laptop, I have already managed to use it regularly for my daily activities, so I am starting to customize my settings. And started by validating the drivers that are installed for each component of my laptop. I have a Dell Inspiron 15-3567 laptop According to the details of the specifications manual, the laptop has a 7th generation Intel Core I3 processor. Validate it through the command grep 'vendor_id' /proc/cpuinfo ; grep 'model name' /proc/cpuinfo ; grep 'cpu MHz' /proc/cpuinfo obtaining the following information: vendor_id : GenuineIntel vendor_id : GenuineIntel vendor_id : GenuineIntel vendor_id : GenuineIntel model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz model name : Intel(R) Core(TM) i3-7020U CPU @ 2.30GHz cpu MHz : 600.002 cpu MHz : 600.045 cpu MHz : 600.082 cpu MHz : 600.004Then use the lspci command to see the PCI controller that the kernel had associated with the processor, finding the following: diego@computer:~$ lspci -v 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers (rev 03) Subsystem: Dell Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers Flags: bus master, fast devsel, latency 0 Capabilities: <access denied> Kernel driver in use: skl_uncore00:02.0 VGA compatible controller: Intel Corporation Device 5921 (rev 06) (prog-if 00 [VGA controller]) Subsystem: Dell Device 078b Flags: bus master, fast devsel, latency 0, IRQ 127 Memory at d0000000 (64-bit, non-prefetchable) [size=16M] Memory at c0000000 (64-bit, prefetchable) [size=256M] I/O ports at f000 [size=64] [virtual] Expansion ROM at 000c0000 [disabled] [size=128K] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i91500:04.0 Signal processing controller: Intel Corporation Skylake Processor Thermal Subsystem (rev 03) Subsystem: Dell Xeon E3-1200 
v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem Flags: fast devsel, IRQ 16 Memory at d1320000 (64-bit, non-prefetchable) [size=32K] Capabilities: <access denied> Kernel driver in use: proc_thermal Kernel modules: processor_thermal_device00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21) (prog-if 30 [XHCI]) Subsystem: Dell Sunrise Point-LP USB 3.0 xHCI Controller Flags: bus master, medium devsel, latency 0, IRQ 124 Memory at d1310000 (64-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: xhci_hcd Kernel modules: xhci_pciThe first detail that I observe is that the processor is recognized as an "Intel Corporation Xeon E3-1200 v6 / 7th Gen Core Processor Host Bridge" which does not agree with what was obtained from the command grep 'model name' /proc/cpuinfo My questions are about what would be the procedures for:How to find a controller associated with the type of processor my laptop really has (7th generation core i3). How to compare it with the driver that is currently installed If the driver I find is better, how should I change the driver?So far I have found tutorials where they tell me how to know the drivers that are installed but not one where they tell me how I could change or optimize them to make the laptop more efficient. Thanks for the answers.
How to update the driver used by the Debian Linux kernel for my processor?
Reconfigure X: Xorg -configure It will generate a new file, xorg.conf.new; copy it to /etc/X11/xorg.conf: cp xorg.conf.new /etc/X11/xorg.conf You can create and set up /etc/X11/xorg.conf.d/20-nvidia.conf manually without using nvidia-xconfig; e.g.: # /etc/X11/xorg.conf.d/20-nvidia.conf Section "Device" Identifier "My GPU" Driver "nvidia" EndSection Or use the following command to create it: mkdir /etc/X11/xorg.conf.d echo -e 'Section "Device"\n\tIdentifier "My GPU"\n\tDriver "nvidia"\nEndSection' > /etc/X11/xorg.conf.d/20-nvidia.conf
On every reboot my Linux Mint deletes /etc/init.d/xorg.conf so it doesn't boot the graphical interface, so I have to go to CTRL + ALT + F2 every time and copy a backup I have in my home directory using sudo cp then run /etc/init.d/mdm start. It basically doesn't find the file. Another solution is running sudo nvidia-xconfig every time (but it deletes my presets) or reinstalling the NVIDIA driver. Why is my distro deleting the file on every reboot? Any clue? OS: Linux Mint 18 Cinnamon; CPU: AMD FX-4300; GPU: NVIDIA GTX 960; RAM: 16 GB I'm running NVIDIA 367.44 proprietary drivers.
Linux Mint 18 deletes /etc/init.d/xorg.conf on every reboot and fails to start x-server
The main repository is missing in your sources. You need to add at least deb http://deb.debian.org/debian buster mainto /etc/apt/sources.list (see Example sources.list) and reload the package index with sudo apt updatein the terminal or choose "Reload Package Information" (Ctrl + R) in synaptic.
Hello fellow Debian Users, I recently purchased a Dell Optiplex 790, Installed Debian 10.9 and put an ATI Saphire card in it. Needing drivers to fix screen discoloration upon display size change, I added the FGLRX proprietary driver. Now I needed to setup some applications like OBS studio and I can't seem to find them in Synaptic. I tried downloading the deb files manually to install but the dependencies conflict with core packages. I want to get synaptic back to having full main Debian repository packages. How do I do that? output of Aptitude sources: # deb cdrom:[Debian GNU/Linux 10.9.0 _Buster_ - Official amd64 xfce-CD Binary-1 20210327-10:42]/ buster main # deb cdrom:[Debian GNU/Linux 10.9.0 _Buster_ - Official amd64 xfce-CD Binary-1 20210327-10:42]/ buster main deb http://deb.debian.org/debian/ buster-backports contrib main deb-src http://deb.debian.org/debian/ buster-backports contrib main deb http://security.debian.org/debian-security/ buster/updates contrib main deb-src http://security.debian.org/debian-security/ buster/updates main contrib # buster-updates, previously known as 'volatile' deb http://deb.debian.org/debian/ buster-updates contrib main deb-src http://deb.debian.org/debian/ buster-updates main contrib # This system was installed using small removable media # (e.g. netinst, live or single CD). The matching "deb cdrom" # entries were disabled at the end of the installation process. # For information about how to configure apt package sources, # see the sources.list(5) manual.
some APT packages are not showing up in Debian Buster after FGLRX install. how do I fix this?
#!/bin/bash# script: list-nvidia.sh # author: Craig Sanders <[emailprotected]> # license: Public Domain (this script is too trivial to be anything else)# options: # default/none list the packages, one per line # -v verbose (dpkg -l) list the packages # -h hold the packages with apt-mark # -u unhold the packages with apt-mark# build an array of currently-installed nvidia packages. PKGS=( $(dpkg -l '*nvidia*' '*cuda*' '*vdpau*' 2>/dev/null | awk '/^[hi][^n]/ && ! /mesa/ {print $2}') )case "$1" in "-v") dpkg -l "${PKGS[@]}" ;; "-h") apt-mark hold "${PKGS[@]}" ;; "-u") apt-mark unhold "${PKGS[@]}" ;; *) printf "%s\n" "${PKGS[@]}" ;; esacThis script can list installed nvidia-related packages one per line, or in verbose dpkg -l format. It can also use apt-mark to hold and unhold the nvidia packages - I use these options immediately before and immediately after apt-get dist-upgrade to ensure that the nvidia driver only gets upgraded when I want it to (i.e. when I'm ready to reboot my system, or kill and restart X). The plain listing (with printf) is useful if I want to do other things with the list, like use it in a command substitution -- e.g. apt purge $(list-nvidia.sh). Debian package names will never have spaces or newlines etc in them, so there's no need to be paranoid about quoting.
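To see what the awk filter in the script actually keeps, here is the same expression run against a few canned dpkg -l lines (the package names and versions are made up for the demo):

```shell
# /^[hi][^n]/ keeps status 'ii'/'hi' etc. (installed or held) while
# rejecting 'in'/'hn' (selected but not installed); ! /mesa/ then drops
# the free Mesa packages that match the '*vdpau*'-style glob.
printf '%s\n' \
  'ii  nvidia-driver-455   455.38  amd64  NVIDIA driver metapackage' \
  'un  nvidia-settings     <none>         (no description available)' \
  'ii  libgl1-mesa-dri     20.0.8  amd64  free implementation -- DRI' \
| awk '/^[hi][^n]/ && ! /mesa/ {print $2}'
# → nvidia-driver-455
```

Only the installed, non-Mesa package survives, which is exactly what you want to hold across a dist-upgrade.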
So, Mint 19 surprised me in the update manager with Nvidia's 465 driver, and I attempted to install it; but all it did (it's hard to tell if it even downloaded anything) was inform me that it "Could not apply changes! Fix broken packages first."There is, concerningly, no note on which package is broken, and I don't have anything that Synaptic is aware of being broken. I haven't restarted my system yet, as I have no idea if my current drivers are still viable. Attempting to do it by apt tells me that a number of packages have been "kept back"; though interestingly, they all end in 455. $ sudo apt upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages have been kept back: libnvidia-cfg1-455 libnvidia-common-455 libnvidia-compute-455 libnvidia-compute-455:i386 libnvidia-decode-455 libnvidia-decode-455:i386 libnvidia-encode-455 libnvidia-encode-455:i386 libnvidia-extra-455 libnvidia-fbc1-455 libnvidia-fbc1-455:i386 libnvidia-gl-455 libnvidia-ifr1-455 libnvidia-ifr1-455:i386 nvidia-compute-utils-455 nvidia-kernel-common-455 nvidia-kernel-source-455 xserver-xorg-video-nvidia-455 0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.I can only assume that something has gone terribly awry with my package organization, but I really don't know what it is or how to fix it. I'm hoping someone can shed a little light on this for me. I haven't ever installed a driver on this machine in a non-apt way, and it's a rarity for me to actually have a broken package to begin with. Attempting to manually install the above packages gives me this. 
$ sudo apt install libnvidia-cfg1-455 libnvidia-common-455 libnvidia-compute-455 libnvidia-compute-455:i386 libnvidia-decode-455 libnvidia-decode-455:i386 libnvidia-encode-455 libnvidia-encode-455:i386 libnvidia-extra-455 libnvidia-fbc1-455 libnvidia-fbc1-455:i386 libnvidia-gl-455 libnvidia-ifr1-455 libnvidia-ifr1-455:i386 nvidia-compute-utils-455 nvidia-kernel-common-455 nvidia-kernel-source-455 xserver-xorg-video-nvidia-455 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:The following packages have unmet dependencies: nvidia-kernel-common-455 : Depends: nvidia-kernel-common-465 but it is not going to be installed nvidia-kernel-source-455 : Depends: nvidia-kernel-source-465 but it is not going to be installed E: Unable to correct problems, you have held broken packages.This is especially weird to me, as it implies that 455 depends on 465, which shouldn't have existed yet to begin with.
How do I handle this "broken packages" problem with Nvidia 465 on Mint 19?
More and more Linux distributions are adding the necessary facilities for full support of Secure Boot. That can include configuring Secure Boot with a custom certificate, and signing third-party modules using that certificate when they are installed using the distribution's standard procedure. If the provider of the third-party modules provides pre-signed modules, it might also be possible to add the public certificate of that module provider to the kernel's whitelist. If all the necessary facilities for handling Secure Boot are not yet in place in a particular Linux distribution, then disabling Secure Boot may be necessary to allow either the use of third-party kernel modules, or the use of that Linux distribution altogether. KDE Neon is apparently an Ubuntu-based Linux distribution, and so it has the same Secure Boot facilities as the corresponding version of Ubuntu Linux. Personally I don't use Ubuntu, but I understand they've spent a lot of effort to integrate Secure Boot into the standard installation process and to make it as user-friendly as possible. Note that Secure Boot with third-party drivers may require a more complex configuration than Secure Boot alone, but it seems the installer might be programmed to cover both cases. Bottom line: assuming that everything works as it should, it is certainly possible that the installer can make the proper preparations to support Secure Boot with third-party drivers. But if you encounter problems, be aware that some of them may be caused by Secure Boot-related UEFI firmware bugs or limitations of Dell's particular implementation of Secure Boot. In that case, you may have to look for the option to disable Secure Boot in the system's firmware settings (commonly called "BIOS settings", but Secure Boot requires UEFI, and UEFI is not a traditional BIOS). If you want to understand what is happening in more detail, this webpage might be helpful, although it is not about Ubuntu-style distributions specifically.
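On Ubuntu-style distributions the usual mechanism behind "signing third-party modules" is a Machine Owner Key (MOK): generate a key pair once, enroll the public half with mokutil, and sign each out-of-tree module with the sign-file helper shipped alongside the kernel headers. A sketch — the key file names (MOK.priv, MOK.der) and the example kernel version are assumptions, and the real commands need root:

```shell
# 1) Generate a signing key once (DER-encoded certificate):
#      openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
#        -keyout MOK.priv -outform DER -out MOK.der -subj '/CN=my module key/'
# 2) Enroll it (asks for a one-time password, confirmed at next boot):
#      mokutil --import MOK.der
# 3) Sign a module with the helper next to the kernel headers:
KVER='5.4.0-54-generic'     # example; normally $(uname -r)
SIGNFILE="/usr/src/linux-headers-$KVER/scripts/sign-file"
echo "$SIGNFILE"
#      "$SIGNFILE" sha256 MOK.priv MOK.der some-module.ko
```

Distribution installers that offer the "configure Secure Boot" checkbox are essentially automating steps 1 and 2 for you, then signing DKMS-built modules with that key.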
I try to install KDE Neon 18.04 on my Dell Laptop (from USB drive). The installer asks me if I want to use third party software/drivers. Yes, I want to use them (when available). Directly below that option I am allowed to configure Secure Boot. When ticking that checkbox I need to set a password for Secure Boot. Directly above, it says that I need Secure Boot for Third Party drivers. If this is correct I suppose I need to configure Secure Boot and set a password. However I always have been under the opposite impression. I thought that using Secure Boot may prevent (unsigned) third party drivers from being installed. So, where is my mistake? Should I configure Secure Boot and a corresponding password or not?
Neon installation - Secure Boot vs third party drivers
I'll finish this as an answer. Mint should be able to detect the change and fix any problems. So: Shutdown Replace card Boot That being said, as a safety precaution, since things do go wrong sometimes, download the Nvidia drivers from their website before making the switch. If something goes wrong you will boot into runlevel 3 (command line); just issue: chmod +x <driver-file-name> Then run it with: ./NVIDIA-SOME-FILE-NAME and follow the prompts. At the end it will ask if you want it to configure your xorg configuration; say yes to that, then reboot when finished.
I'm using Linux Mint. I already know how to do it in Windows but I'd like to know how to upgrade. I'm going to be changing from a GT 610 to a GTX 950. Is there anything I need to change like drivers?
What is the proper procedure for upgrading a nvidia card?
First of all, this solution is for debian, I don't know if it works for crunchbang, but I think it should. It's a pretty easy thing. All you have to know is the date when the new version of drivers (or xorg) was added to the repository. You can check it here: http://packages.qa.debian.org/f/fglrx-driver.html If you're updating your system regularly, you probably know what version caused the problems. If you don't know, you have to check the aptitude log, it's usually in /var/log/ directory. When you know the date, you just have to find a snapshot of the repository you have -- go to http://snapshot.debian.org/ and add a repository from that date. Let's say it's February 10th -- http://snapshot.debian.org/archive/debian/?year=2014&month=2 . There's 4 of them: 2014-02-10 04:35:37 2014-02-10 10:06:43 2014-02-10 16:14:41 2014-02-10 22:04:02Let's say you want this one: http://snapshot.debian.org/archive/debian/20140210T220402Z/ You have to edit /etc/apt/sources.list and add the following: deb http://snapshot.debian.org/archive/debian/20140210T220402Z/ testing main contrib non-free deb-src http://snapshot.debian.org/archive/debian/20140210T220402Z/ testing main contrib non-freeIf you tried update your package list, you would get the error: E: Release file for http://snapshot.debian.org/archive/debian/20140210T220402Z/dists/testing/InRelease is expired (invalid since 5d 16h 37min 17s). 
Updates for this repository will not be applied.

So, you have to run aptitude update in this way:

aptitude -o 'Acquire::Check-Valid-Until=false' update

Now, you have to add a rule to the /etc/apt/preferences file:

Package: *
Pin: origin snapshot.debian.org
Pin-Priority: 1001

And check it:

# apt-cache policy fglrx-driver
fglrx-driver:
  Installed: (none)
  Candidate: 1:13.12-4
  Version table:
     1:14.1~beta1.3-1 0
        500 http://ftp.pl.debian.org/debian/ testing/non-free amd64 Packages
        500 http://ftp.pl.debian.org/debian/ sid/non-free amd64 Packages
     1:13.12-4 0
       1001 http://snapshot.debian.org/archive/debian/20140210T220402Z/ testing/non-free amd64 Packages

If you see the snapshot repository with the pin 1001, you can downgrade the system to the date of the snapshot:

root:~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
  dvdauthor libaacplus2 libav-tools libavfilter3 libboost-system1.53.0 libfreenect0.2 libgcrypt20 liblept4 libprocps3 libprotobuf8 libtsk10 libxcb-icccm4 libxcb-image0 libxcb-xf86dri0 libxshmfence1 python-nbxmpp python3-pyqt4 python3-sip vcdimager
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED: cups-core-drivers cups-filters-core-drivers devede mencoder The following NEW packages will be installed: libfreenect0.1 libgpod4-nogtk libimobiledevice4 liblastfm1 liblept3 libloudmouth1-0 libofa0 libplist1 libprocps0 libprotobuf7 libqjson0 libtsk3-3 libusbmuxd2 python-qt4 python-sip usbmuxd xserver-xorg-video-qxl The following packages will be upgraded: libprocps3 The following packages will be DOWNGRADED: amarok amarok-common amarok-utils apt apt-utils autoconf binfmt-support bmon bootlogd busybox convertall cpp-4.8 cryptmount cups cups-client cups-common cups-daemon cups-filters cups-ppdc cups-server-common debootstrap deluge deluge-common deluge-gtk deluge-web deluged dictionaries-common dmsetup dos2unix file fonts-freefont-ttf g++-4.8 gajim gcc-4.8 gcc-4.8-base geoip-database gnupg gnuplot-x11 gnutls-bin gpac gpac-modules-base gparted gpgv gpm graphviz initscripts intel-microcode iproute iproute2 iputils-ping iso-codes keepass2 krb5-locales liba52-0.7.4 libapache2-mod-php5 libapt-inst1.5 libapt-pkg4.12 libarchive13 libasan0 libatomic1 libavcodec54 libavdevice53 libavformat54 libavresample1 libavutil52 libbluray1 libcaca0 libcairo-gobject2 libcairo2 libcdt4 libcgraph5 libcups2 libcupscgi1 libcupsfilters1 libcupsimage2 libcupsmime1 libcupsppdc1 libcwidget3 libdbd-mysql-perl libdbi-perl libdbus-glib-1-2 libdevmapper-event1.02.1 libdevmapper1.02.1 libepub0 libfontembed1 libgcc-4.8-dev libgcc1 libgfortran3 libglib2.0-0 libgnutls-openssl27 libgnutls26 libgnutls28 libgomp1 libgpac2 libgphoto2-6 libgphoto2-l10n libgphoto2-port10 libgraph4 libgraphite2-3 libgssapi-krb5-2 libgstreamer-plugins-base1.0-0 libgstreamer1.0-0 libgudev-1.0-0 libgvc5 libgvpr1 libharfbuzz-icu0 libharfbuzz0b libio-socket-ssl-perl libisl10 libitm1 libk5crypto3 libkrb5-3 libkrb5support0 libltdl7 libmagic1 libmhash2 libmono-accessibility4.0-cil libmono-cairo4.0-cil libmono-corlib4.0-cil libmono-corlib4.5-cil libmono-data-tds4.0-cil libmono-i18n-cjk4.0-cil 
libmono-i18n-mideast4.0-cil libmono-i18n-other4.0-cil libmono-i18n-rare4.0-cil libmono-i18n-west4.0-cil libmono-i18n4.0-all libmono-i18n4.0-cil libmono-posix4.0-cil libmono-security4.0-cil libmono-sqlite4.0-cil libmono-system-configuration4.0-cil libmono-system-core4.0-cil libmono-system-data4.0-cil libmono-system-drawing4.0-cil libmono-system-enterpriseservices4.0-cil libmono-system-runtime-serialization-formatters-soap4.0-cil libmono-system-security4.0-cil libmono-system-transactions4.0-cil libmono-system-web-applicationservices4.0-cil libmono-system-web-services4.0-cil libmono-system-web4.0-cil libmono-system-windows-forms4.0-cil libmono-system-xml4.0-cil libmono-system4.0-cil libmono-web4.0-cil libmono-webbrowser4.0-cil libmozjs24d libmp3lame0 libmysqlclient18 libnewt0.52 libnspr4 libpam-modules libpam-modules-bin libpam-runtime libpam0g libpathplan4 libpcap0.8 libpci3 libpcsclite1 libpipeline1 libportaudio2 libpython3-stdlib libpython3.3 libpython3.3-minimal libpython3.3-stdlib libquadmath0 libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libstdc++-4.8-dev libstdc++6 libstfl0 libswscale2 libsystemd-daemon0 libsystemd-journal0 libsystemd-login0 libtesseract3 libtool libtorrent-rasterbar7 libtsan0 libudev1 libwildmidi-config libwildmidi1 libxdot4 libzip2 linuxlogo live-build lvm2 lynx lynx-cur manpages minitube mono-4.0-gac mono-gac mono-runtime mpd mumble mysql-client mysql-client-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5 nmap nvidia-detect openssh-client openssh-server pciutils php-pear php5 php5-cgi php5-cli php5-common php5-curl php5-dev php5-gd php5-imap php5-intl php5-json php5-mcrypt php5-mysql php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc php5-xsl procps psi-plus psi-plus-common psi-plus-l10n psi-plus-plugins psi-plus-skins psi-plus-sounds psmisc python-gtkspell python-imaging python-keyring python-libtorrent python-lxml python-markupsafe python-pil python-simplejson python3 python3-keyring 
python3-minimal python3.3 python3.3-minimal qbittorrent qbittorrent-nox qnapi qpdfview qpdfview-djvu-plugin qpdfview-ps-plugin qtchooser reiser4progs sed shared-mime-info sleuthkit smartmontools sqlite3 ssh sudo sysv-rc sysvinit sysvinit-core sysvinit-utils tar tesseract-ocr texlive texlive-base texlive-fonts-recommended texlive-generic-recommended texlive-lang-polish texlive-latex-base texlive-latex-recommended texlive-pictures tmux udev udevil virtualbox virtualbox-dkms virtualbox-guest-additions-iso virtualbox-qt whiptail x11-common xarchiver xbase-clients xclip xorg xserver-common xserver-xephyr xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-vmmouse xserver-xorg-video-all xserver-xorg-video-ati xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-modesetting xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-trident xserver-xorg-video-vesa xserver-xorg-video-vmware xtables-addons-common xtables-addons-dkms xulrunner-24.0 zenmap
1 upgraded, 17 newly installed, 326 downgraded, 4 to remove and 0 not upgraded.
Need to get 273 MB/296 MB of archives.
After this operation, 12.0 MB of additional disk space will be used.
Do you want to continue? [Y/n]

You can play with the package list in the /etc/apt/preferences file. Instead of Package: *, you can set something like Package: *fglrx*; you have to figure this out, but you have to pay attention to Xorg: its packages should also be downgraded.
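For illustration, a narrower pin along those lines could look like this in /etc/apt/preferences. The glob patterns are only an example to adapt (apt accepts several whitespace-separated patterns in one Package field); the point is to cover both the driver and the Xorg packages instead of pinning everything:

```
Package: *fglrx* xserver-xorg*
Pin: origin snapshot.debian.org
Pin-Priority: 1001
```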
I tried to install AMD graphics without reading. My CrunchBang crashed and now every time I try to login, my keyboard inputs don't go into the login screen. I can reset X with Ctrl+PrtScn+K and it resets but same issue appears. How can I fix that and any other driver issues that may have evolved from my ignorance?
How Can I Reset Graphics Drivers and Restore Defaults?
Hardware was not installed correctly. I reinstalled the card, and it now seems to work. Sorry for the false alarm.
I'm having trouble getting my bluetooth hardware to work. I have an ASUS PCE AX3000 wifi card and the wifi works ok (maybe some problems with it glitching out occasionally, I think it's a power management issue, but I can live with it). I am using Linux Mint 20.1 Kernel 5.4.0-70-generic. The main problem is the bluetooth doesn't seem to work at all. service bluetooth status returns ● bluetooth.service - Bluetooth service Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2021-05-19 07:51:37 EDT; 11h ago Docs: man:bluetoothd(8) Main PID: 53546 (bluetoothd) Status: "Running" Tasks: 1 (limit: 38332) Memory: 1.1M CGroup: /system.slice/bluetooth.service └─53546 /usr/lib/bluetooth/bluetoothdMay 19 07:51:37 desktop-home systemd[1]: Starting Bluetooth service... May 19 07:51:37 desktop-home bluetoothd[53546]: Bluetooth daemon 5.53 May 19 07:51:37 desktop-home systemd[1]: Started Bluetooth service. May 19 07:51:37 desktop-home bluetoothd[53546]: Starting SDP server May 19 07:51:37 desktop-home bluetoothd[53546]: Bluetooth management interface 1.14 initializedThe relevant part of lspci -v seems to be: Subsystem: Intel Corporation Wi-Fi 6 AX200 Flags: bus master, fast devsel, latency 0, IRQ 35 Memory at f6400000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: iwlwifi Kernel modules: iwlwifiWhen I look at bluetoothctl show I get No default controller available. and 1: phy0: Wireless LAN Soft blocked: no Hard blocked: nobluetoothctl: 5.53I would appreciate guidance on next steps.
trouble getting bluetooth to work
At the moment I use a partial solution that can only be used with Linux devices on the network, not as a general printing solution that also covers mobile devices. But to have it documented I will share it. Maybe there are some pointers or answers from the community so we get to the final solution.

I assume the Printer is successfully connected and you can print to it from the Print Server:

printserver ~$ lp -E -d myprinter /usr/share/cups/data/testprint

I use Printer Sharing to have direct access to the Queue at printserver. For this I have to configure it as the default Server instead of localhost with its local print queue.

┏━━━━━━━━━━━━━┓                ┏━━━━━━━━━━━━━┓
┃  localhost  ┃                ┃ printserver ┃
┃             ┃ IPPEverywhere  ┃   ┌───────┐ ┃  legacy PPD driver  ┏━━━━━━━━━━━┓
┃             ┃════════════════╋═══│ Queue │═╋═════════════════════┫ myprinter ┃
┃             ┃                ┃   └───────┘ ┃                     ┗━━━━━━━━━━━┛
┗━━━━━━━━━━━━━┛                ┗━━━━━━━━━━━━━┛

Using Debian Buster on the Print Server with CUPS installed, you have to enable sharing:

printserver ~$ sudo cupsctl -E --share-printers
printserver ~$ sudo cupsctl -E   # check settings

Also with Debian, on a device with CUPS installed, set the default Print Server in /etc/cups/client.conf:

client ~$ sudo bash -c 'echo "ServerName printserver" >> /etc/cups/client.conf'

That's all we need to do to get access to the Printer. Check its status and options with:

client ~$ lpstat -E -t
client ~$ lpoptions -E
client ~$ lpoptions -E -l

With the last command you will find special options of the Printer that are not generic lp options, for example BRMonoColor of my color Printer. This example will print two copies of a two-sided document in grey instead of the default color:

client ~$ lp -E -d myprinter -n 2 -o fit-to-page -o collate=true -o sides=two-sided-long-edge -o BRMonoColor=Mono ./two-pages.pdf

There may be a problem if you have, for example, a USB Printer connected locally to the client. You will not see its queue on localhost.
In this case you can always specify the Print Server with the environment variable CUPS_SERVER:

client ~$ CUPS_SERVER=localhost lpstat -E -t
client ~$ CUPS_SERVER=localhost lp -E ...

References:

Debian - Printing
github - IPP Sample Implementations
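If you address the local instance often, a tiny shell wrapper keeps that convenient. This is a hypothetical helper (lplocal is a made-up name, not a CUPS command), just packaging the CUPS_SERVER trick shown above:

```shell
# Talk to the local CUPS instance even though /etc/cups/client.conf
# points at "printserver" by default.
lplocal() {
    CUPS_SERVER=localhost lp "$@"
}

# Example use (would submit to a locally attached queue named "usbprinter",
# which is an assumed name):
# lplocal -E -d usbprinter ./document.pdf
```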
On my network I want to use driverless printing with IPPEverywhere using the Linux CUPS printing system. I have some network printers which support driverless printing with IPP, but only in a very buggy way. One does not print some PDF files, another doesn't print more than one copy, and so on. But they all print very well using their native PPD printer drivers. So I want to present a print server on my network that serves the network printers with their own printer drivers but appears on the network as a (virtual?) fully powered IPP device for each network printer. That means, in general, that the print server is "translating" the driverless IPP print commands from the network clients into the printers' legacy print commands, so I have only IPPEverywhere print queues on the network.

By default CUPS creates a local print queue that serves the printer either driverless using IPPEverywhere or with the legacy driver of the printer using its PPD file.

┏━━━━━━━━━━━━━┓
┃  localhost  ┃
┃   ┌───────┐ ┃                ┏━━━━━━━━━┓
┃   │ Queue │═╋════════════════┫ Printer ┃
┃   └───────┘ ┃                ┗━━━━━━━━━┛
┗━━━━━━━━━━━━━┛

The idea now is to have a printserver that behaves like a driverless printer on the network:

┏━━━━━━━━━━━━━┓                ┏━━━━━━━━━━━━━┓
┃  localhost  ┃                ┃ printserver ┃
┃   ┌───────┐ ┃ IPPEverywhere  ┃   ┌───────┐ ┃  legacy PPD driver  ┏━━━━━━━━━┓
┃   │ Queue │═╋════════════════┫   │ Queue │═╋═════════════════════┫ Printer ┃
┃   └───────┘ ┃                ┃   └───────┘ ┃                     ┗━━━━━━━━━┛
┗━━━━━━━━━━━━━┛                ┗━━━━━━━━━━━━━┛

Connecting the printer to the printserver with its legacy driver is no problem. This is the old method (but it will become deprecated and be removed in upstream CUPS versions). But how can I find the printserver on the network so I can connect to it, for example with my Android smartphone, and print using IPPEverywhere?
CUPS driverless print server as proxy for printers with legacy PPD printer drivers
I have solved the issue by upgrading to an Nvidia GT 710. It's not a good card, but it is very cheap and is supported by the latest Nvidia drivers. As I do not intend to use my old PC for graphics-related tasks, I am satisfied with my upgrade.
TL;DR: Is there a way to get the nvidia-304 driver to work with newer Linux kernels (on any distro) with some patches? Do you have any suggestions about choosing a good distro?

I have recently fixed an old desktop computer a friend gave me. It has a very old Nvidia GeForce 6100 that needs the nvidia-304 legacy driver. As I am used to Linux distros (I have used Ubuntu, Debian, OpenSUSE and CentOS for years on my main computers and on my servers) I wanted to install one of them. I have tried several distros, but it seems that Nvidia has discontinued the drivers I need: new Linux kernels don't seem to be supported, and Nouveau doesn't support my GPU. Debian 9 has the non-free Nvidia driver in its official repo, but it doesn't work very well (with GNOME all I can see after login is a black screen, while with KDE it seems to work but it can't play videos in any browser and it lags randomly). Ubuntu 18.04 and OpenSUSE Leap 15 don't even have that driver in their repos, and the official .run file from Nvidia doesn't work on their kernels. Manjaro fails to start. I haven't tried Arch Linux yet, but I think it won't work (because of the kernel version).

I am going to use this computer for programming, so I don't need a lot of performance as far as graphics is concerned. Has anyone managed to get that driver (nvidia-304) to work with not-too-old distros (newer Linux kernels)?
Old Nvidia GPU and Linux
I fixed the issue by following the steps in a guide on how to resize encrypted partitions. This answer is just to close this thread.
I recently installed Linux Lite. I went to change from the nouveau driver to the Nvidia legacy updates driver and restarted my PC. I remember being prompted to keep the default Xfce or something else, and I think I clicked OK, but I can't remember. Then my whole Xfce setup went from the normal one that comes when you install Linux Lite to the plain, bad-looking default Xfce. How can I change this back? This is what it looks like now:

My system info: Here is the hardinfo_report.html I generated: http://pastebin.com/ECSYzqH0
After switching to the nvidia video card driver, the whole xfce setup messed up
Typically, when you install a Linux distribution, firmware gets installed into /lib/firmware. When firmware is needed for a device, the Linux kernel looks in that directory for the right firmware file and loads it into the device. It depends on the distribution which firmware files are installed (by default), but often these are separated into different packages for different hardware vendors. For example, Debian has various packages that contain firmware files, most of which (e.g. firmware-iwlwifi, firmware-realtek, firmware-amd-graphics) come from the upstream linux-firmware repository. For a list of the non-free firmware packages that come from this repo, see the firmware-nonfree source package. The free firmware files from that repo are all packaged in a single firmware-linux-free package (built from the firmware-free source package). Often, Linux installers will ask if you want to install non-free firmware, or they might even autodetect which to install based on your hardware. For example, if you use the Debian non-free installer and a device driver requests firmware that is not available, debian-installer will display a dialog offering to load the missing firmware. If this option is selected, debian-installer will scan available devices for either loose firmware files or packages containing firmware. If found, the firmware will be copied to the correct location (/lib/firmware) and the driver module will be reloaded. Some other Linux distributions (like Ubuntu) also contain nonfree binary blobs in drivers packaged with the kernel.
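As a small illustration of this on-demand model, a driver module's metadata lists the firmware files it may request. This sketch uses iwlwifi purely as an example module and degrades gracefully where modinfo or the module isn't available:

```shell
# Ask the iwlwifi module which firmware blobs it can request.
if modinfo -F firmware iwlwifi >/dev/null 2>&1; then
    modinfo -F firmware iwlwifi | head -3
else
    echo "iwlwifi module metadata not available on this machine"
fi
```

Firmware that merely sits in /lib/firmware costs disk space but is only loaded into a device when that device's driver asks for it.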
I would like to know whether all the firmware is copied to my PC if I install a non-free Linux distribution. What I mean is does the firmware of all the devices like WiFi adapters, keyboards etc. get copied to my PC even if my PC does not use those devices? Will this unnecessary firmware bloat/slow down the kernel?
Does all the non-free firmware get copied to my system if I install a non-free Linux distribution?
You're missing (at least) rpmbuild tool: ./packages/RedHat/ati-packager.sh: line 221: rpmbuild: command not foundThis (according to CentOS wiki) should be in rpm-build package which can be installed by running yum install rpm-build.
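Before re-running the installer, a quick pre-flight check for build tools can save a retry. The tool list beyond rpmbuild is an assumption (installers like this commonly shell out to gcc and make as well):

```shell
# Report which build tools are on PATH.
for tool in rpmbuild gcc make; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```

On CentOS, anything reported MISSING can usually be installed with yum (e.g. yum install rpm-build gcc make).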
So, I am running CentOS 7 on my laptop and all seems almost OK. Now I'm trying to install the video card driver.

# lspci -nn
...
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Chelsea LP [Radeon HD 7730M] [1002:682f]
...

I have found this thread from U&L and have downloaded the zip file. But when I try to execute the installer with:

[root@giedi-prime fglrx-15.302]# sh check.sh
Detected configuration:
Architecture: x86_64 (64-bit)
X Server: XServer 1.17.2
[root@giedi-prime fglrx-15.302]# ./amd-driver-installer-15.302-x86.x86_64.run

and try to build a specific package for my distro (the closest option is RHEL7!), it fails and there's a log message output at /usr/share/ati/fglrx-installer.log that reads:

Check if system has the tools required for Packages Generation.
Package build failed!
Package build utility output:
./packages/RedHat/ati-packager.sh: line 221: rpmbuild: command not found
[Error] Generate Package - error generating package : RedHat/RHEL7

What am I missing here?
how can I build Radeon proprietary drivers for CentOS 7?
Refer here: https://developer.nvidia.com/embedded/linux-tegra

And you can find all the required materials about Jetson here:

https://developer.nvidia.com/embedded/downloads
http://developer.nvidia.com/embedded/dlc/l4t-Jetson-TK1-Driver-Package-R21-5
http://developer.nvidia.com/embedded/dlc/l4t-Jetson-TK1-Sample-Root-Filesystem-R21-5

Although I did not test remote desktop as you did, my Jetson-TK1 has R21 (rev 5.0).

ubuntu@tegra-ubuntu:~$ head -n 1 /etc/nv_tegra_release
# R21 (release), REVISION: 5.0, GCID: 7273100, BOARD: ardbeg, EABI: hard, DATE: Wed Jun 8 04:19:09 UTC 2016
Question: I would like to re-install the nvidia driver on a 32-bit ARM computer running Linux for Tegra (L4T); however, I don't know what the correct version of this driver is, where to get it, or how to install it properly.

Background: I am trying to connect to an NVidia Jetson TK1 (32-bit ARM computer) using remote desktop. When I run startx in a terminal, it freezes at Loading extension GLX. I ran sha1sum -c /etc/nv_tegra_release and at first libglx.so checksum FAILED (maybe apt-get upgrade broke it or something), but I replaced it with the original and now it says OK.

/usr/lib/xorg/modules/extensions/libglx.so: OK

Unfortunately, that did NOT solve the problem. The consensus (mainly drawn from people with the same issue but different hardware) is that next I should purge and re-install the appropriate nvidia-xyz driver matching my graphics card.

Board:
Tegra K1 SOC
NVIDIA Kepler GPU with 192 CUDA Cores
NVIDIA 4-Plus-1™ Quad-Core ARM® Cortex™-A15 CPU

Linux Distribution:
head -n 1 /etc/nv_tegra_release
# R19 (release), REVISION: 2.0, GCID: 3896695, BOARD: ardbeg, EABI: hard, DATE: Fri Apr 18 23:10:46 UTC 2014
How can I determine which driver to re-install on my Nvidia Jetson TK1?
I hate to tell you this, but I have bad news:

1. Start at the Linux Foundation Open Printing Project Database.
2. Click Printer Listing.
3. On the Printer Listing page, choose Manufacturer: HP, and Model: deskjet 1050 j410a.
4. Click Show This Printer, and arrive at this results page.

The results page states:

Color inkjet printer, this is a Paperweight

and later:

Miscellaneous
Printer supports PJL.
Printer supports direct text printing with the 'us-ascii' charset.

PJL, or Printer Job Language, is the low-level language written to interface with the print queue. Direct printing with us-ascii gives you the option to print using the lpd command. In short, there is no PPD file for this model.

Now for some good news: typing the model directly into Google yields the HP Linux Imaging and Printing: Print, Scan and Fax Drivers for Linux support matrix for your model. If you look down at the last table, titled Other Information, you'll see Driver Plugin: None, and See Note 8. Driver Plugin is code for PPD, and Note 8 explains there isn't one (which we just discovered following the steps above). As such, I believe you can press Enter to leave the PPD item blank. If asked later, you can use the model's dat name. This will at least allow the printer to function via a CUPS queue, but everything printed will come out in black and white using the ASCII character set.

In answering this for you, I must say I've lost respect for HP, as most of their devices are well supported in Linux. If printing in ASCII isn't acceptable for you, use the database link above to find a supported model in the HP family, or go with a different brand. I can't vouch for Epson through experience, but from what I've read, their models are well supported also.
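If you do end up with such a plain queue, direct text printing just means submitting raw ASCII bytes to it. A sketch, where the queue name deskjet1050 is a made-up example and the live lp line is commented out:

```shell
# Create a plain us-ascii text job.
printf 'hello printer\n' > job.txt
# lp -d deskjet1050 -o raw job.txt   # would submit the raw job to the queue
wc -c < job.txt                      # the job is just plain bytes
rm job.txt
```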
I'm on Manjaro and I have a Deskjet 1050 printer/scanner. I installed hplip from the AUR to get the scanner to work. It asks me for a PPD file for the Deskjet 1050; what's a PPD file and how do I get it? It's this one, HP Deskjet 1050 All-in-One Printer - J410a, the first one in the table.
ppd file for deskjet 1050
The kernel image and headers packages come from the same source package, so they are available simultaneously on the mirror network (barring failures on a specific mirror). If you follow the amd64 link on the linux-headers-6.1.0-21-amd64 package page, you’ll find a package download link which works; that’s the package which apt will download. Examining the package pool shows that all the amd64 packages for 6.1.90-1 were uploaded at the same time, 2024-05-03 21:54. The package file list is unfortunately not particularly reliable for packages which aren’t in the main archive — the latest Debian 12 kernel package was published in the security archive. Given the many different scenarios around kernel image and headers package, it isn’t possible to introduce dependencies between them such that one could guarantee that an image package is only installed if its matching headers package is also installed. In any case that still wouldn’t ensure smooth updates for NVIDIA users — what matters there is whether the NVIDIA module is successfully built, and that can fail with matching kernel packages.
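As a sketch of the image/headers pairing the answer describes, the headers package name can be derived mechanically from the kernel release string. The string is hard-coded here; on a live system it would come from uname -r:

```shell
release="6.1.0-21-amd64"             # normally: release=$(uname -r)
headers="linux-headers-$release"
echo "$headers"
# On a real Debian system you could then check installation status with:
# dpkg-query -W -f='${Status}\n' "$headers"
```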
I'm using Debian 12 Bookworm, and currently, when I run uname -a, it shows:

Linux pctxd 6.1.0-20-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64 GNU/Linux

The package linux-image-6.1.0-21-amd64 and related packages are ready to install. However, the corresponding linux-headers-6.1.0-21-amd64 package is not available. Without these headers, the Nvidia drivers can't be compiled, rendering the graphical user interface non-functional—something I learned the hard way during the last upgrade to 6.1.85-1.

Running aptitude show yields:

Package: linux-image-6.1.0-21-amd64
Version: 6.1.90-1
New: yes
State: not installed
Priority: optional
Section: kernel
Maintainer: Debian Kernel Team <[emailprotected]>
Architecture: amd64
Uncompressed Size: 408 M
Depends: kmod, linux-base (>= 4.3~), initramfs-tools (>= 0.120+deb8u2) | linux-initramfs-tool
Recommends: firmware-linux-free, apparmor
Suggests: linux-doc-6.1, debian-kernel-handbook, grub-pc | grub-efi-amd64 | extlinux
Conflicts: linux-image-6.1.0-21-amd64-unsigned
Breaks: fwupdate (< 12-7), initramfs-tools (< 0.120+deb8u2), wireless-regdb (< 2019.06.03-1~)
Replaces: linux-image-6.1.0-21-amd64-unsigned
Provides: $kernel (= 6.1.90-1)

Just now, the web page Package: linux-headers-6.1.0-21-amd64 seems to describe the missing package, but clicking the “list of files” button results in an error page with the information “No such package in this suite on this architecture.”

Currently, there is another security update (regarding libglib2.0) waiting, so the time lag between the kernel security update and the linux-headers files necessary for my graphical UI is an increasing risk.

For future updates: Is there a way to automatically defer the kernel update until the linux-headers package is available, but process the security updates of other packages?
How to defer kernel updates until the corresponding "linux-headers" package is available?
It is really easy to solve.

Solution with explanation:

cd /var/cache/apt/archives/
ls -al                             # list the cached packages

sudo rm -rf initramfs-tools*       # remove the problem files

sudo apt update                    # update the software cache
sudo apt --fix-broken install      # look for other problems and solve them
sudo apt install initramfs-tools*  # reinstall the problem packages

The correct script would be:

#!/bin/bash
sudo apt-get clean
sudo dpkg --configure -a
sudo apt-get autoclean
sudo apt-get update
echo
echo
echo "Packages that will be updated:"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
sudo apt list --upgradable -a
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo
echo
sudo apt-get upgrade -y
sudo apt-get full-upgrade -y
sudo apt-get autoremove -y
sudo apt-get clean

Use apt-get or aptitude -f instead of apt-fast:

apt-get      # stable
apt-fast     # unstable but fast
aptitude -f  # best and safe, fixes all the problems

--purge without further specification can sometimes delete important files.

If this didn't work, then post the output of each command I told you to run, especially apt --fix-broken install.
NOTE:

I have above-average Linux experience, but I'm not an elite.
I have an Nvidia GTX 1650 laptop card.
I use the Linux XanMod CacULE kernel.
I use the latest Mesa and Nvidia drivers from http://ppa.launchpad.net/kisak/kisak-mesa/ubuntu and http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu
I installed Elementary OS two months ago and have not run into any issues since then. systemctl --failed and journalctl -p 3 -b were fine, as I check them regularly. All updates have been smooth since install.
I use a simple apt-fast script to update my system. It is located in /usr/bin/. Here it is:

#!/usr/bin/fish
sudo apt-fast clean
sudo dpkg --configure -a
sudo apt-fast autoclean
sudo apt-fast update
echo
echo
echo "Packages that will be updated:"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
sudo apt list --upgradable -a
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo
echo
sudo apt-fast upgrade -y
sudo apt-fast full-upgrade -y
sudo apt-fast autoremove --purge --auto-remove -y
sudo apt-fast clean
sudo flatpak update

This was what happened during the update: https://pastebin.com/xh4EZXT5

I found out something went wrong, so I ran

sudo dpkg --configure -a

The error persists. I googled the error and tried to reinstall initramfs-tools. I removed it using:

sudo apt remove initramfs-tools
sudo apt autoremove

Then I ran sudo apt install initramfs-tools, but I still get the error. Now I am getting this:

~ ❯❯❯ sudo apt-get install -f initramfs-tools

Reading package lists... Done
Building dependency tree
Reading state information... Done
initramfs-tools is already the newest version (0.136ubuntu6.6).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Setting up initramfs-tools (0.136ubuntu6.6) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.136ubuntu6.6) ...
update-initramfs: Generating /boot/initrd.img-5.14.15-xanmod1-cacule
I: The initramfs will attempt to resume from /dev/dm-2
I: (/dev/mapper/data-swap)
I: Set the RESUME variable to override this.
Error 24 : Write error : cannot write compressed block
E: mkinitramfs failure lz4 -9 -l 24
update-initramfs: failed for /boot/initrd.img-5.14.15-xanmod1-cacule with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)

~ ❯❯❯ sudo dpkg --configure -a
Setting up initramfs-tools (0.136ubuntu6.6) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.136ubuntu6.6) ...
update-initramfs: Generating /boot/initrd.img-5.14.15-xanmod1-cacule
I: The initramfs will attempt to resume from /dev/dm-2
I: (/dev/mapper/data-swap)
I: Set the RESUME variable to override this.
Error 24 : Write error : cannot write compressed block
E: mkinitramfs failure lz4 -9 -l 24
update-initramfs: failed for /boot/initrd.img-5.14.15-xanmod1-cacule with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 initramfs-tools

~ ❯❯❯ sudo journalctl -p 3 -b
-- Logs begin at Fri 2021-10-22 13:47:38 IST, end at Sat 2021-10-30 16:55:55 IST. --
Oct 30 11:15:51 Strix kernel: x86/cpu: SGX disabled by BIOS.
Oct 30 11:15:51 Strix kernel: ACPI BIOS Error (bug): Failure creating named object [\_GPE._E4A], AE_ALREADY_EXISTS (20210604/dswload2-326)
Oct 30 11:15:51 Strix kernel: ACPI Error: AE_ALREADY_EXISTS, During name lookup/catalog (20210604/psobject-220)
Oct 30 11:15:51 Strix kernel:
Oct 30 11:15:55 Strix lightdm[2302]: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Oct 30 11:15:55 Strix lightdm[2302]: PAM adding faulty module: pam_kwallet.so
Oct 30 11:15:55 Strix lightdm[2302]: PAM unable to dlopen(pam_kwallet5.so): /lib/security/pam_kwallet5.so: cannot open shared object file: No such file or directory
Oct 30 11:15:55 Strix lightdm[2302]: PAM adding faulty module: pam_kwallet5.so
Oct 30 11:15:56 Strix systemd[2306]: Failed to start Portal service (GTK+/GNOME implementation).
Oct 30 11:15:56 Strix lightdm[2496]: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Oct 30 11:15:56 Strix lightdm[2496]: PAM adding faulty module: pam_kwallet.so
Oct 30 11:15:56 Strix lightdm[2496]: PAM unable to dlopen(pam_kwallet5.so): /lib/security/pam_kwallet5.so: cannot open shared object file: No such file or directory
Oct 30 11:15:56 Strix lightdm[2496]: PAM adding faulty module: pam_kwallet5.so
Oct 30 11:15:56 Strix lightdm[2521]: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Oct 30 11:15:56 Strix lightdm[2521]: PAM adding faulty module: pam_kwallet.so
Oct 30 11:15:56 Strix lightdm[2521]: PAM unable to dlopen(pam_kwallet5.so): /lib/security/pam_kwallet5.so: cannot open shared object file: No such file or directory
Oct 30 11:15:56 Strix lightdm[2521]: PAM adding faulty module: pam_kwallet5.so
Oct 30 11:16:06 Strix lightdm[2521]: gkr-pam: unable to locate daemon control file
Oct 30 16:17:56 Strix systemd[1]: Failed to start Ubuntu Advantage APT and MOTD Messages.
Oct 30 16:17:56 Strix kernel: [drm:drm_new_set_master [drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership
Oct 30 16:17:56 Strix kernel: [drm:drm_new_set_master [drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership
Oct 30 16:18:00 Strix lightdm[36108]: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Oct 30 16:18:00 Strix lightdm[36108]: PAM adding faulty module: pam_kwallet.so
Oct 30 16:18:00 Strix lightdm[36108]: PAM unable to dlopen(pam_kwallet5.so): /lib/security/pam_kwallet5.so: cannot open shared object file: No such file or directory
Oct 30 16:18:00 Strix lightdm[36108]: PAM adding faulty module: pam_kwallet5.so
Oct 30 16:18:00 Strix bluetoothd[851]: RFCOMM server failed for Headset Voice gateway: rfcomm_bind: Address already in use (98)
Oct 30 16:18:00 Strix bluetoothd[851]: RFCOMM server failed for :1.120/Profile/HSPHSProfile/00001108-0000-1000-8000-00805f9b34fb: rfcomm_bind: Address already in use (98)
Oct 30 16:18:01 Strix systemd[36112]: Failed to start Portal service (GTK+/GNOME implementation).
Oct 30 16:18:01 Strix lightdm[36277]: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Oct 30 16:18:01 Strix lightdm[36277]: PAM adding faulty module: pam_kwallet.so
Oct 30 16:18:01 Strix lightdm[36277]: PAM unable to dlopen(pam_kwallet5.so): /lib/security/pam_kwallet5.so: cannot open shared object file: No such file or directory
Oct 30 16:18:01 Strix lightdm[36277]: PAM adding faulty module: pam_kwallet5.so
Oct 30 16:18:01 Strix lightdm[36318]: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Oct 30 16:18:01 Strix lightdm[36318]: PAM adding faulty module: pam_kwallet.so
Oct 30 16:18:01 Strix lightdm[36318]: PAM unable to dlopen(pam_kwallet5.so): /lib/security/pam_kwallet5.so: cannot open shared object file: No such file or directory
Oct 30 16:18:01 Strix lightdm[36318]: PAM adding faulty module: pam_kwallet5.so
Oct 30 16:18:09 Strix lightdm[36318]: gkr-pam: unable to locate daemon control file
Oct 30 16:18:10 Strix kernel: [drm:drm_new_set_master [drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership
Oct 30 16:18:10 Strix kernel: [drm:drm_new_set_master [drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership
Oct 30 16:18:10 Strix kernel: [drm:drm_new_set_master [drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership
Oct 30 16:18:10 Strix kernel: [drm:drm_new_set_master [drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership
Oct 30 16:54:37 Strix systemd[1]: Failed to start Ubuntu Advantage APT and MOTD Messages.
Oct 30 16:54:52 Strix systemd[1]: Failed to start Ubuntu Advantage APT and MOTD Messages.
Oct 30 16:55:00 Strix systemd[1]: Failed to start Ubuntu Advantage APT and MOTD Messages.

~ ❯❯❯ systemctl --failed
  UNIT                 LOAD   ACTIVE SUB    DESCRIPTION
● ua-messaging.service loaded failed failed Ubuntu Advantage APT and MOTD Messages

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.

~ ❯❯❯ sudo systemctl restart ua-messaging.service
Job for ua-messaging.service failed because the control process exited with error code.
See "systemctl status ua-messaging.service" and "journalctl -xe" for details.

~ ❯❯❯ sudo systemctl start ua-messaging.service
Job for ua-messaging.service failed because the control process exited with error code.
See "systemctl status ua-messaging.service" and "journalctl -xe" for details.

My preferred resolution would be to remove the post-installation script that is causing the problem; I would rather not remove the PPAs for Mesa and Nvidia. My computer is still able to reboot, but I am not able to use the APT package manager. I need to reinstall initramfs-tools, which I removed while following a solution to a similar issue.
Regular update broke initramfs on Elementary OS 6
> So what I'm specifically looking to know is that if I install Nvidia drivers, this means that the kernel will auto-load these when booting from a system using an Nvidia GPU, correct?

Correct.

> This will still mean that I can boot from an IntelHD or AMD system flawlessly, and just as from a clean Linux install, correct?

Not necessarily. At least in the past, NVIDIA drivers used to overwrite parts of the X.org and Mesa stacks and in the process made them unusable for AMD and Intel. Something might have changed recently, but I cannot vouch for that.
I have built a portable Linux installation on USB. In my pursuit of making it work on all x86_64 systems, I have successfully got it to work in both BIOS and UEFI environments, and also added Mac-specific wifi drivers and got them to work. I want to clear the roadblock of GPUs.

So what I'm specifically looking to know is that if I install Nvidia drivers, this means that the kernel will auto-load these when booting from a system using an Nvidia GPU, correct? This will still mean that I can boot from an IntelHD or AMD system flawlessly, and just as from a clean Linux install, correct?

Needless to say, this question may be obvious and very stupid, but I'm a noob and need guidance. Thanks in advance!
Question Regarding Nvidia GPU drivers and how Linux auto-loads them
How about this Perl script I just whipped up?

#!/usr/bin/perl

use strict;
use warnings;
use Time::HiRes qw/time sleep/;

sub launch {
    return if fork;
    exec @_;
    die "Couldn't exec";
}

$SIG{CHLD} = 'IGNORE';

my $interval = shift;
my $start = time();
while (1) {
    launch(@ARGV);
    $start += $interval;
    sleep $start - time();
}

Use: perl timer.pl 1 date '+%Y-%m-%d %H:%M:%S'

It has been running for 45 minutes without a single skip, and I suspect it will continue to do so unless a) system load becomes so high that fork() takes more than a second or b) a leap second is inserted.

It cannot guarantee, however, that the command runs at exact second intervals, as there is some overhead, but I doubt it is much worse than an interrupt-based solution. I ran it for about an hour with date +%N (nanoseconds, a GNU extension) and ran some statistics on it. The most lag it had was 1155 microseconds. Average (arithmetic mean) 216 µs, median 219 µs, standard deviation 42 µs. It ran faster than 270 µs 95% of the time. I don't think you can beat it except with a C program.
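The same schedule-against-an-absolute-time idea can be sketched in plain shell. This is my own sketch, not part of the original answer, and it only has whole-second resolution (so it is coarser than the Perl version), but it shows why the loop does not drift: the next deadline is computed from a fixed reference, not from "now plus interval".

```shell
#!/bin/bash
# Drift-free periodic runner (whole-second interval).
# Usage: ./timer.sh 1 date '+%Y-%m-%d %H:%M:%S'
interval=$1; shift

next=$(( $(date +%s) + interval ))   # first absolute deadline
while true; do
  "$@" &                             # launch without waiting for completion
  now=$(date +%s)
  # sleep only until the deadline; never sleep a negative amount
  sleep $(( next > now ? next - now : 0 ))
  next=$(( next + interval ))        # advance the deadline, not "now"
done
```

Because `next` only ever advances by `interval`, a slow iteration is compensated by a shorter sleep on the following one instead of accumulating lag.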
Question

I'd like to be able to run a UNIX command precisely every second over a long time period. I need a solution which does not lag behind after a certain time because of the time the command itself needs for execution. sleep, watch, and a certain Python script all failed me in this regard.

On microcontrollers such as the Arduino (http://Arduino.cc) I'd do that through hardware clock interrupts. I'd like to know whether there is a similarly time-precise shell script solution. All the solutions I found on StackExchange.com resulted in a noticeable time lag when run over hours. See details below.

Practical purpose / application

I want to test whether my network connection is continuously up by sending timestamps via nc (netcat) every 1 second.

Sender:

precise-timestamp-generator | tee netcat-sender.txt | nc $receiver $port

Receiver:

nc -l -p $port > netcat-receiver.txt

After completion, compare the two logs:

diff netcat-sender.txt netcat-receiver.txt

The diffs would be the untransmitted timestamps. From this I would know at what time my LAN / WAN / ISP makes troubles.

Solution SLEEP

while [ true ]; do date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done | tee timelog-sleep.txt

Gets a certain offset over time, as the command within the loop also takes a little time.

Precision

cat timelog-sleep.txt

2012-07-16 00:45:16
[...]
2012-07-16 10:20:36

Seconds elapsed: 34520

wc -l timelog-sleep.txt

Lines in file: 34243

Precision summarized:

34520-34243 = 277 timing problems
34520/34243 = 1.008 = 0.8 % off

Solution REPEAT PYTHON

Found at: Repeat a Unix command every x seconds forever

repeat.py 1 "date '+%Y-%m-%d %H:%M:%S'" >> timelog-repeat-py.txt

Supposed to avoid the time offset, but fails to do so.

Precision

cat timelog-repeat-py.txt

2012-07-16 13:42:44
[...]
2012-07-16 16:45:24

Seconds elapsed: 10960

wc -l timelog-repeat-py.txt

Lines in file: 10859

Precision summarized:

10960-10859 = 101 timing problems
10960/10859 = 1.009 = 0.9 % off

Solution WATCH

watch -n 1 "date '+%Y-%m-%d %H:%M:%S' >> ~/Desktop/timelog-watch.txt"

Precision

cat timelog-watch.txt

2012-07-16 11:04:08
[...]
2012-07-16 13:25:47

Seconds elapsed: 8499

wc -l timelog-watch.txt

Lines in file: 8366

Precision summarized:

8499-8366 = 133 timing problems.
8499/8366 = 1.016 = 1.6 % off.
Run unix command precisely at very short intervals WITHOUT accumulating time lag over time
As Julie said, you can use df to display free space, passing it either the mount point or the device name: df --human-readable /home df --human-readable /dev/sda1You'll get something like this: Filesystem Size Used Avail Use% Mounted on /dev/sda1 833G 84G 749G 10% /homeTo run it continuously, use watch. Default update interval is 2 seconds, but you can tweak that with --interval: watch --interval=60 df --human-readable /dev/sda1
Is there a command line tool which shows in real time how much space remains on my external hard drive?
Real time cmd tool to show HDD space remaining
Some embedded systems (a) need to meet difficult real-time requirements, and yet (b) have very limited hardware (which makes it even more difficult to meet those requirements). If you can't change the hardware, then there are several situations where you are forced to rule out Linux and use something else instead:Perhaps the CPU doesn't even have a MMU, which makes it impossible to run Linux (except uClinux, and as far as I know uClinux is not real-time). Perhaps the CPU is relatively slow, and the worst-case interrupt latency in Linux fails to meet some hard requirement, and some other RTOS tuned for extremely low worst-case interrupt latency can meet the requirement. Perhaps the system has very little RAM. A few years ago, a minimal Linux setup required around 2 MB of RAM; a minimal eCos setup (with a compatibility layer letting it run some applications originally designed to run on Linux) required around 20 kB of RAM. Perhaps there is no port of Linux to your hardware, and there isn't enough time to port Linux before you need to launch (pun!) your system. Many of the simpler RTOSes take much less time to port to new hardware than Linux.
When developing a solution that requires a real-time operating system, what advantages would an operating system such as QNX or VxWorks have over Linux? Or, to put it another way: since these operating systems are designed specifically for real-time, embedded use - as opposed to Linux, which is a more general system that can be tailored to real-time use - when would you need to use one of these operating systems instead of Linux?
Advantages of using a RTOS such as QNX or VxWorks instead of Linux?
When you do

NOW=`date '+%F_%H:%M:%S'`

or, using more modern syntax,

NOW=$( date '+%F_%H:%M:%S' )

the variable NOW will be set to the output of the date command at the time when that line is executed. If you do this in ~/.bashrc, then $NOW will be a timestamp that tells you when you started the current interactive shell session.

You could also set the variable's value with

printf -v NOW '%(%F_%H:%M:%S)T' -1

if you're using bash release 4.2 or later. This prints the timestamp directly into the variable without calling date.

In the script that you are showing, the variable NOW is being set when the script is run (this is what you want). When the assignment

filename="/home/pi/gets/$NOW.jpg"

is carried out, the shell will expand the variable in the string. It does this even though it is in double quotes. Single quotes stop the shell from expanding embedded variables (this is not what you want in this case).

Note that you don't seem to actually use the filename variable in the call to raspistill, though, so I'm not certain why you set its value, unless you just want it output by echo at the end.

In the rest of the code, you should double-quote the $NOW variable expansion (and $filename). If you don't, and later change how you define NOW so that it includes spaces or wildcards (filename globbing patterns), the commands that use $NOW may fail to parse their command lines properly. Compare, e.g.,

string="hello * there"
printf 'the string is "%s"\n' $string

with

string="hello * there"
printf 'the string is "%s"\n' "$string"

Related things:

About backticks in command substitutions: Have backticks (i.e. `cmd`) in *sh shells been deprecated?
About quoting variable expansions: Security implications of forgetting to quote a variable in bash/POSIX shells and Why does my shell script choke on whitespace or other special characters?
I am trying to create a Raspberry Pi spy-cam bug. I am trying to make it so the new files created by the various processes are named using

NOW=`date '+%F_%H:%M:%S'`

which works fine, but it requires an echo to update the time. $NOW is also in the /home/pi/.bashrc file. Same issue: it does not update without . ~/.bashrc.

I found the following on this forum and it works:

#!/bin/bash

NOW=`date '+%F_%H:%M:%S'`

filename="/home/pi/gets/$NOW.jpg"

raspistill -n -v -t 500 -o $NOW.jpg

echo $filename

I don't get how it works, because it's set before the -o output of raspistill and is in quotes. Thank you all in advance!
Current time/date as a variable in BASH and stopping a program with a script
"Real time" means processes that must finish by their deadlines, or Bad Things (TM) happen. A real-time kernel is one in which the latencies introduced by the kernel are strictly bounded (subject to possibly misbehaving hardware which just doesn't answer on time), and in which almost any activity can be interrupted to let higher-priority tasks run.

In the case of Linux, the vanilla kernel isn't set up for real time (it has a cost in performance, and the realtime patches floating around depend on some hacks that the core developers consider gross). Besides, running a real-time kernel on a machine that just can't keep up (most personal machines) makes no sense.

That said, the vanilla kernel does handle real-time priorities, which gives those tasks higher priority than normal ones, and such tasks will generally run until they voluntarily yield the CPU. This gives better response to those tasks, but means that other tasks get held off.
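To illustrate the last paragraph (this example is mine, not part of the original answer): on a vanilla kernel you can put a process under a real-time policy with chrt from util-linux, then inspect the result. Setting a real-time policy normally requires root (or an rtprio rlimit).

```shell
# As root: start a task under SCHED_FIFO at real-time priority 50
chrt -f 50 sleep 60 &
pid=$!

# CLS shows FF (SCHED_FIFO) and RTPRIO shows 50 for the task above;
# ordinary processes show CLS=TS (SCHED_OTHER) and RTPRIO=-
ps -o pid,cls,rtprio,comm -p "$pid"

# chrt can report the same information
chrt -p "$pid"
```

A SCHED_FIFO task at priority 99 will preempt everything except the kernel's own higher-priority work, which is exactly why the migration/watchdog threads in the question run at 99.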
If I do the following command on my standard Linux Mint installation:

comp ~ $ ps -eo rtprio,nice,cmd
RTPRIO  NI CMD
...
    99   - [migration/0]
    99   - [watchdog/0]
    99   - [migration/1]
     -   0 [ksoftirqd/1]
    99   - [watchdog/1]

I get some of the processes with a realtime priority of 99. What is the meaning of rtprio in a non-real-time Linux? Does this mean that if I just run a program with rtprio 99 it runs in real time? Where do real-time OSes fall in this story?
Real time priorities in non real time OS
You could probably hack this together using inotify, and more specifically incron, to get notifications of file system events and trigger a backup. Meanwhile, in order to find a more specific solution, you might try to better define your problem.

- If your problem is backup, it might be good to use a tool that is made to create snapshots of file systems, either through rsnapshot, or a snapshotting file system like xfs, or using any file system with lvm.
- If your problem is synchronizing, perhaps you should look into distributed and/or network file systems.

Edit: In light of your update, I think you are making this way too complicated. Just make a folder in your Dropbox for scripts. Then in your bashrc files do something like this:

export PATH=$PATH:~/Dropbox/bin
source ~/Dropbox/bashrc

Whatever scripts you have can be run right from the Dropbox folder in your home directory, and any aliases and such you want synced can go in a file inside Dropbox that gets sourced by your shell. If other people besides you need access to the scripts, you could symlink them from your Dropbox to somewhere like /usr/local/bin.
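If you would rather script the inotify approach than configure incron, a minimal loop with inotifywait (from the inotify-tools package; the directory names below are placeholders) can copy each file into Dropbox the moment it is saved:

```shell
#!/bin/bash
# Mirror every saved file from a scripts directory into Dropbox.
# inotifywait -m streams one event per line; close_write fires when a
# writer finishes, so we never copy half-written files.
watch_dir=~/bin            # hypothetical source directory
dest=~/Dropbox/bin         # hypothetical destination

mkdir -p "$dest"
inotifywait -m -e close_write --format '%w%f' "$watch_dir" |
while IFS= read -r file; do
  cp -- "$file" "$dest"/
done
```

This is effectively what incron would do for you declaratively, and what lsyncd does with rsync at larger scale.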
Are there any Linux/Unix console applications similar to Yadis that would allow me to:

- be set up from the console
- back up multiple directories
- back up / sync in real time after the files (text files) are changed

Update 1: I write shell scripts, Ruby scripts, aliases etc. to make my work easier. I want to have a backup of these files. The solution I am looking for would copy these files, after any change is made to them, to a subdirectory of my Dropbox directory, and that's it. The backup is done and available from anywhere, always fresh and ready, and I don't have to think about it. I know I can run cron a few times a day, but I thought there must be a solution for what I am looking for available on Linux. I am not very experienced with Linux, so I asked here.
real time backup if file changed?
The answer to this question can be found in signal(7) man page, in Real-time Signals sectionReal-time Signals Linux supports real-time signals as originally defined in the POSIX.1b real-time extensions (and now included in POSIX.1-2001). The range of supported real-time signals is defined by the macros SIGRTMIN and SIGRTMAX. POSIX.1-2001 requires that an implementation support at least POSIX_RTSIG_MAX(8) real-time signals. The Linux kernel supports a range of 32 different real-time signals, numbered 33 to 64. However, the glibc POSIX threads implementation internally uses two (for NPTL) or three (for LinuxThreads) real-time signals (see pthreads(7)), and adjusts the value of SIGRTMIN suitably (to 34 or 35).
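You can observe this adjustment from the shell (my own illustration, not part of the man page quote): bash resolves the RTMIN name at runtime, and since real-time signals carry no predefined meaning, a script is free to claim one for its own use.

```shell
# Ask bash what number SIGRTMIN maps to; on glibc/NPTL systems this
# typically prints 34 (glibc reserved 32 and 33 for its threading internals)
kill -l RTMIN

# Real-time signals have no fixed semantics, so a script can define its own:
trap 'echo got RTMIN+1' RTMIN+1
kill -s RTMIN+1 $$      # signal ourselves; the trap handler runs
```

Portable code should therefore always write SIGRTMIN+n rather than hard-coding 34, since the offset depends on the C library.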
Examining the output of the kill -l command

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

one can notice that the integer value of SIGRTMIN is 34, and not 32:

...
31) SIGSYS      34) SIGRTMIN
...

Why?

$ uname -r
4.19.0-8-amd64

$ ls -l /lib/x86_64-linux-gnu/libc-2.28.so
-rwxr-xr-x 1 root root 1.8M May  1  2019 /lib/x86_64-linux-gnu/libc-2.28.so*
Why is the integer value of SIGRTMIN (first real-time signal) 34 and not 32? [duplicate]
tail -c +1 -f /Path/to/syscontection.pcap | tcpdump -l -r -
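For reference (my own elaboration of the one-liner): tail -c +1 replays the file from its first byte before following, so tcpdump gets the pcap header it needs from stdin, and -l line-buffers tcpdump's output so the result can be piped onward. A variant with a capture filter, using only standard tcpdump options:

```shell
# Follow a growing capture file, decoding only DNS traffic as it arrives;
# -n skips name resolution so output appears without lookup delays
tail -c +1 -f /Path/to/syscontection.pcap | tcpdump -l -n -r - 'port 53'
```

Note this only works while the writer keeps appending complete packet records to the file.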
To display the content of a pcap file, we use:

tcpdump -r /Path/to/syscontection.pcap

However, this command does not follow the pcap file in real time, the way tail -f follows a plain-text file.

Is there an option with tcpdump which acts like -f of tail?
OR
Is there an option with tail that can read a pcap file?
OR
Something else?
"tail -f" using "tcpdump -r"
Well, first of all, the kernel chooses the best one automatically. It is usually TSC if it's available, because it's kept by the CPU and is very fast (RDTSC and reading EDX:EAX). But that wasn't always the case: in the early days, when SMP systems were mostly built from several discrete CPUs, it was very important that the CPUs were as "equal" as possible (a perfect match of model, speed and stepping), but even then it sometimes occurred that one was slightly faster than the other, so the TSC counters between them were "unstable"; that was the reason for allowing the clocksource to be changed (or the TSC to be disabled with the "notsc" kernel parameter).

And even with these restrictions the TSC is still the best source, but the kernel has to take great care to only rely on one CPU in multicore systems, or actively try to keep them synchronized, and it also has to take into account things like suspend/resume (which resets the counter) and CPU frequency scaling (which affects the TSC in some CPU models). Some people in those early days of SMP even built systems with CPUs of different speeds (kind of like the new big.LITTLE architecture on ARM), which created big problems in the timekeeping area.

As for a way to check the resolutions, you have clock_getres() and you have an example here. And a couple of extra links: the official kernel doc (there are other interesting files in this dir) and TSC resynchronization in Chromebooks, with some benchmarks of different clocksources.

In short, there shouldn't be any userspace-visible changes when changing the clocksource, only a slower gettimeofday().
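As a quick illustration of the last point (my own sketch, not from the original answer): the sysfs files from the question let you inspect the kernel's choice, and a crude timing loop makes the "slower gettimeofday()" visible without any special tooling.

```shell
# Which clocksource is in use, and which ones the kernel considers usable
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

# Crude userspace-visible check: time a burst of clock reads.
# Repeat after switching to acpi_pm and the loop should run measurably
# slower than with tsc, since each date call reads the clock.
time for i in $(seq 1 1000); do date +%s%N >/dev/null; done
```

For a precise per-clock resolution, the clock_getres() call mentioned above is the right tool; the shell loop only shows the relative cost.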
The output of cat /sys/devices/system/clocksource/clocksource0/available_clocksource lists the available hardware clocks. I have changed the clocks, without any visible difference. sudo /bin/sh -c 'echo acpi_pm > current_clocksource' What are the practical implications of changing the hardware clock? Is there a way to check the resolutions (or some other visible change) of the available clocks?
What does the change of the clocksource influence?
A filesystem overlay would probably work better for this. Using something like aufs you can create a 'virtual' directory out of several combined directories. You can configure whether writes should propagate back to the original directories, or use a copy-on-write method to leave the originals alone, or disallow writes altogether. However to answer your original question, you can use something like lsyncd to trigger an rsync as soon as a file is modified. lsyncd uses rsync, which is capable of syncing directory contents without keeping the directory name. Generally the way to do this is to sync sourcea/. to /some/destdir/. The important part is the dot at the end of the source path. This is a general trick that can be used for things other than rsync, like cp.
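To make the "source/." trick concrete, here is a small demonstration of my own using cp (which, as noted above, honors the same convention as rsync), with throwaway temp directories:

```shell
# Copy the *contents* of src into dest without creating a src/ wrapper dir
src=$(mktemp -d); dest=$(mktemp -d)
mkdir "$src/sub"
touch "$src/file1" "$src/sub/file2"

cp -a "$src"/. "$dest"/    # note the trailing /. on the source

ls "$dest"                 # file1 and sub appear directly in dest
```

With rsync the equivalent is `rsync -a "$src"/. "$dest"/.`, which is the form lsyncd would effectively run on each change event.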
How can I set up an instant file sync for two local directories? The catch is I need it in real time (15, 10 or even 5 seconds is too slow), and I don't want the target root directory created. For example, something similar to...

cp -fr sourcea/* /some/destdir
cp -fr sourceb/* /some/destdir
cp -fr sourcec/* /some/destdir

Catching deletions would be nice too, though it's not a deal breaker. I do not want the "sourcex" directories created within destdir (hence the *). I've tried using rsync, which apparently wasn't real-time or even close to it, as well as several generic sync systems. However, most will want a 1-to-1 sync, rather than contents only. Any suggestions? I've been fighting this for a while now, so any help would be greatly appreciated!
Real Time Local File Sync
A list of processes with non-zero CPU %:

ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm --sort=+pcpu | awk 'NR>1 && $8!=0.0'

To count them:

ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm --sort=+pcpu | awk 'NR>1 && $8!=0.0' | wc -l

To see this continuously updated, put the listing in a file called processes.sh:

#!/bin/bash
ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm --sort=+pcpu | awk 'NR>1 && $8!=0.0'

and make it executable with chmod +x processes.sh. Now run it with watch for live updating:

watch ./processes.sh
How can I properly identify real-time processes currently occupying the CPU run queue, and count them, using ps? I know there is a bunch of fields like prio, rtprio, pri and nice, but I do not know the correct one to use. It seems I need to use something like ps -eo rtprio,prio,cpu,cmd --sort=+rtprio to get the full list, but that does not seem right to me since a lot of processes show a - sign in the RTPRIO column.

For example, I have a 48-core system running Oracle Linux and try to identify the following:

1. What processes occupy the run queue? What is their count?
2. How do I identify processes that run in real-time mode or with increased priority?
How to sort ps output to find processes realtime priority and identify processess currently occupied running queue
I don't have an answer but you might find one amongst the tools, examples and resources written or listed by Brendan Gregg on the perf command and Linux kernel ftrace and debugfs. On my Raspberry Pi these tools were in package perf-tools-unstable. The perf command was actually in /usr/bin/perf_3.16.

Of interest may be this discussion and context-switch benchmark by Benoit Sigoure, and the lat_ctx test from the fairly old lmbench suite. They may need some work to run on the Pi; for example with tsuna/contextswitch I edited timectxswws.c get_iterations() to while (iterations * ws_pages * 4096UL < 4294967295UL) {, and removed -march=native -mno-avx from the Makefile.

Using perf record for 10 seconds on the Pi over ssh, whilst simultaneously doing while sleep .1;do echo hi;done in another ssh:

sudo timeout -10 perf_3.16 record -e context-switches -a
sudo perf_3.16 script -f time,pid,comm | less

gives output like this

sleep 29341 2703976.560357:
swapper 0 2703976.562160:
kworker/u8:2 29163 2703976.564901:
swapper 0 2703976.565737:
echo 29342 2703976.565768:
migration/3 19 2703976.567549:
sleep 29343 2703976.570212:
kworker/0:0 28906 2703976.588613:
rcu_preempt 7 2703976.609261:
sleep 29343 2703976.670674:
bash 29066 2703976.671654:
echo 29344 2703976.675065:
sshd 29065 2703976.675454:
swapper 0 2703976.677757:

presumably showing when a context-switch event happened, and for which process.
I'm trying to get the nearest to real-time processing I can with a Raspbian distribution on a Raspberry Pi for manipulating its GPIO pins. I want to get a "feel" for the kind of performance I can expect.

I was going to do this by writing a simple C program that toggles a pin repeatedly as fast as possible, and monitoring it with a logic analyser. But perhaps there's another way: by writing the above program but simply logging context switches, to see exactly when that thread/process has control, over a sample period of say a couple of seconds.

This previous question answers how to see how many context switches are made in some period of time for a given process, but is there a way of logging the precise timing of switches, and perhaps for every process, not just one? Obviously this would create overhead, but could still be useful. Obviously the data should be stored in RAM to minimise overhead.

Note to self: possible solutions:

- Command to list in real time all the actions of a process
- Hacky: make the program repeatedly get and store the current time (and save it to a file once the log reaches a certain limit). Or, a slight improvement to avoid enormous logs: use an algorithm that eliminates consecutive times if they are close enough together that it can be deduced they were not pre-empted by some other process.
Record time of every process or thread context switch
The performance improvement is not visible to everyone, just to certain users for whom RT kernels really matter: DSP, audio/video processing, and so on. So that config option is not universally beneficial, and hence it is disabled.
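To check how your own kernel was built (the config file locations below are the usual distro conventions, not guaranteed everywhere):

```shell
# Distro kernels usually install their build configuration under /boot
grep 'CONFIG_NO_HZ' "/boot/config-$(uname -r)"

# Some kernels instead expose the running config via /proc (CONFIG_IKCONFIG)
zgrep 'CONFIG_NO_HZ' /proc/config.gz 2>/dev/null
```

On modern kernels you will typically see the finer-grained CONFIG_NO_HZ_IDLE / CONFIG_NO_HZ_FULL options rather than the plain CONFIG_NO_HZ of older trees.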
In the Linux kernel, CONFIG_NO_HZ is not set. But an initial reading suggests that setting that option would be nice from a performance point of view. However, reading some posts like this one made me think again. Why is CONFIG_NO_HZ not set by default, and why is there no performance improvement when it is enabled?
Why CONFIG_NO_HZ is not set by default
I found two ways of doing it, which may not be optimal, but they get the job done:

1.

#!/bin/bash
ps -u | grep '[0-9]' | awk '{print $2}' | while read line
do
    chrt -p "$line" 2>/dev/null
done

2.

ps -u | grep '[0-9]' | awk '{system("chrt -p " $2)}' 2>/dev/null
I want to make a shell script that finds all active processes and prints their scheduling policy to the user. I want the result to be like this:

pid 3042's current scheduling policy: SCHED_OTHER
pid 3042's current scheduling priority: 0
pid 3043's current scheduling policy: SCHED_OTHER
pid 3043's current scheduling priority: 0
pid 3044's current scheduling policy: SCHED_OTHER
pid 3044's current scheduling priority: 0

I have managed to do this, but only for a single process, with the use of the ps and chrt commands.
How to find scheduling policy and active processes' priority?
Well, you could do it with some command line tools. cdrecord (wodim on Debian) can burn audio CDs on the fly, but it needs *.inf files that specify track sizes etc. You can generate an inf file upfront from a dummy CD that has (say) one large audio track (74 minutes), using cdda2wav (icedax on Debian).

In the live setting you record from an audio device of your choice with arecord, in one xterm, to a temporary file x. Use as the argument of --duration the track size in seconds. In another xterm you can start, after a few seconds (to allow some buffering), cdrecord, which reads the audio through a pipeline from x and uses the prepared inf file. You have to make sure that you specify speed=1 for writing.

Of course, you have to test this setup a bit (the first few times with cdrecord -dummy ...!) and look up the right options. But the manpage of cdrecord already contains an on-the-fly example as a starting point:

> To copy an audio CD from a pipe (without intermediate files), first run
>
>     icedax dev=1,0 -vall cddb=0 -info-only
>
> and then run
>
>     icedax dev=1,0 -no-infofile -B -Oraw - | \
>     wodim dev=2,0 -v -dao -audio -useinfo -text *.inf

But after you have everything figured out, you can create a script that automates all these steps.
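A rough sketch of the live setup described above, filled in with my own placeholder device addresses, durations and file names; the FIFO/inf pairing is an assumption about how you would wire the two xterms together, so rehearse with -dummy before touching real media:

```shell
# A named pipe stands in for the temporary file x
mkfifo /tmp/live.cdda

# xterm 1: capture 74 min (4440 s) of CD-format audio into the pipe;
# -f cdr is arecord's 44.1 kHz 16-bit stereo CD format
arecord -f cdr -t raw --duration=4440 /tmp/live.cdda &

# xterm 2, a few seconds later: burn at 1x, taking the track size from
# the previously prepared .inf file next to the track data
cdrecord dev=2,0 speed=1 -v -dao -audio -useinfo -text /tmp/live.cdda
```

The few seconds of head start give the pipe enough buffered audio that the burner's stream never underruns at speed=1.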
There's a question over at audio.SE at the moment and I thought it may attract some answers here. I asked the user and he's happy to have it posted here to see if anyone has some ideas. Here it is verbatim:I'm working with a client who needs to minimize the time between when recording is done and when the finalized audio CD ejects from the drive. All of the computer recording software I'm aware of will generate a file which can then be burned to an Audio CD. I know there is external hardware I could buy that would burn an analog stream directly to disc, but I'm wondering if any software exists that can achieve this with a computer's internal optical drive. Ideally for Mac, though if it only exists for Windows or Linux I would be interested to hear about it. (Other alternatives have been explored; I would much rather not deal with optical media at all, but this is a highly specialized situation.)As to the source: "It would likely be an external USB audio interface." If you need more info let me know.
Is it possible to record audio directly to a computer's optical drive?
I wonder if there is any PREEMPTION configuration to enable, or specific way to run an application to get more precise real-time?See the following: http://wiki.linuxcnc.org/cgi-bin/wiki.pl?Latency-Test which links to this: https://forum.linuxcnc.org/18-computer/25927-reducing-latency-on-multicore-pc-s-success?limitstart=0 (from the above page:)Linux related changes:edit /etc/default/grub and add the kernel options: "isolcpus=1 acpi_irq_nobalance noirqbalance" (call update-grub afterwards) make sure that the software "irqbalance" is NOT installed, remove if it is there (Ubuntu software center -> installed software -> search for irqbalance -> remove) add the upstart script "irq-affinity.conf" to /etc/init (see attachments, it will move the irq-handling to the first core) add the sh scripts set-irq-affinity and watchirqs to /usr/local/sbin (first allows to set the affinity mask manually, second opens a console window that shows live how the irqs are scheduled to the different cores -> all numeric irqs but 0 should be handled by cpu0)You might have to adapt the script for Système D, though ... A quick thing to test: You could also up the priority with nice -n <x>, where <x> is the desired priority.Also, how could I check that the process is running real-time?See this question: Real time processes scheduling in Linux
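For the second question, here is a quick shell check of my own (the process name linuxcnc is a guess at what you would inspect):

```shell
# Grab the PID of the latency-sensitive process
pid=$(pidof linuxcnc | awk '{print $1}')

# chrt reports the scheduling policy, e.g. SCHED_FIFO vs SCHED_OTHER
chrt -p "$pid"

# ps shows the same: CLS of FF or RR means a real-time policy, TS means normal
ps -o pid,cls,rtprio,ni,comm -p "$pid"
```

If CLS comes back as TS with RTPRIO of -, the process is not being scheduled in real time, whatever kernel it runs on.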
Trying to get LinuxCNC working on my Debian Jessie, I did the following:

1. Installed the 4.9 RT kernel from the Jessie backports using apt-get/aptitude.
2. Restarted my computer and checked that uname -a contains PREEMPT RT.
3. Installed LinuxCNC by adding its repository and using apt-get.

After that, starting the LinuxCNC wizard (by normal clicks through the menus), there is a jitter test. My current results are absolutely bad, around 140us. I wonder if there is any PREEMPTION configuration to enable, or a specific way to run an application to get more precise real-time behaviour? Also, how could I check that the process is running in real-time?

Note: My computer is an Intel E6600
How to run a real-time application in Linux?
date makes no effort to synchronize with anything at all, and merely makes some system call (that on linux a strace date ... may or may not show) to look up the time since the epoch as known by the system.

The system itself may synchronize with the BIOS clock, or if a virtual machine may obtain the current time from the parent it runs under, or may use NTP (or other software that does more or less the same thing, e.g. the older rdate, or instead the Precision Time Protocol (PTP)) to synchronize the time with other computers, or may use a hardware GPS or radio device to set the time from. None of these involve date, unless the admin is listening to, say, BBC Radio and manually setting the system time with date at the top of the hour (do they still do the doot doot doot dee thing at the top of the hour?).

Accuracy of the system clock depends on the NTP configuration (or whatever other software is used), whether NTP (or similar) is broken, the BIOS, whether the BIOS is broken (I've seen a system boot four years into the future, and that host had a broken NTP setup for other reasons, good fun, this being a NFS server with Makefile...), whether the hardware clock is misbehaving, and various other details.

If you're using NTP, the peers command to ntpq or the drift file might be worth looking at. For the most accurate time (apart from putting an atomic clock on each host...) you might want to read up on PTP in "Toward Higher Precision" (and also learn that Time is an Illusion).
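A tiny illustration of the point that date only formats the system clock and involves no network at all (GNU date assumed, for the -d @N syntax):

```shell
# %s prints seconds since the epoch straight from the system clock;
# -d @N turns a count of seconds back into a calendar date, so the
# epoch itself formats as 1970-01-01 00:00:00:
date -u -d @0 '+%F %T'

# Round-trip the current time through the epoch count:
s=$(date +%s)
date -u -d "@$s"
```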
I did not see any similar questions on this site. The manpage, while helpful in describing how to use date, did not have much background info. Even the info page (as prescribed in the manpage: info '(coreutils) date invocation'), had little more than how it operates based on the TZ variable. I'm wondering how the date command line utility works. Specifically, date +%s which returns seconds since the epoch, e.g. 1467743297 (seconds since 1970-01-01 00:00:00 UTC). For example, I'm thinking it doesn't require an internet connection. Then again, does it occasionally try to re-sync with a particular source? If so, how is that source specified? If not, then how much confidence on a given machine does one have in values reported by date? Section 21.1.5 of the info page implies date only deals with the system/software clock, and then any sync with the hwclock is dependent upon the OS. So... maybe the question is more appropriately "which systems have more accurate date +%s reporting?" I suppose this would also imply the accuracy of date is limited to the accuracy of the machine's hardware clock, especially as influenced by the system/software clock? (e.g. the sw clock could interfere, skewing accuracy, even with a highly-accurate hw clock).
how does the gnu coreutils `date` work? [closed]
I don't think so. The patch provides real-time scheduling, which is very important for some environments (planes, nuclear reactors, etc.) but overkill for a regular desktop. The current kernels, however, seem to be "real-time" and "preemptive" enough for regular desktop users[1]. It may be useful if you work with high-quality audio recording and playback, where even a small delay may dramatically reduce the quality.

[1] Technically both are 0/1 features, but I guess it is clear what I mean ;)
Does the PREEMPT_RT patch (real-time kernel) have any benefit for regular desktop users?
Does the Linux PreemptRT patch benefit desktop users?
I'm not an expert on socat, but a quick look at its name (SOcket CAT) suggests that it works by opening two sockets and shuttling data between them in user space. As slm suggests, why not configure the forwarding via iptables? iptables is a user-space application which configures netfilter, and the netfilter code is embedded in the kernel. That may result in better performance, since a forwarded packet does not need to be passed from kernel space to user space and back.

Resources

https://www.systutorials.com/816/port-forwarding-using-iptables/
https://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables
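For reference, a hedged sketch of what the equivalent netfilter forwarding might look like. The address 123.456.789.12 is the question's placeholder (not a valid IPv4 address); these rules are configuration that must be run as root and adapted to your interfaces:

```
# Allow the kernel to forward packets at all
sysctl -w net.ipv4.ip_forward=1

# Rewrite the destination of incoming TCP traffic on port 8080
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
         -j DNAT --to-destination 123.456.789.12:80

# Masquerade so return traffic flows back through this host
iptables -t nat -A POSTROUTING -j MASQUERADE
```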
I am using the socat command to port forward a connection from a real-time live stream:

TCP4-LISTEN:8080 TCP4:123.456.789.12:80

The problem is that it adds delay and lowers the fps, while the live stream without port forwarding works perfectly, with no delay and high fps. What might be causing this? Is there a way to fix this by configuring socat, or should I use another method?
Port Forwarding without delay and high fps in a real time live stream using socat
rmmod 8139too doesn't work because either:

1. 8139 support is built into the kernel, and the driver can't be unloaded because it's not a module. On many systems, there's a /boot/config-2.6.38.8 file (or similar). You can grep it for something like '8139TOO'. If you see something like CONFIG_8139TOO=m, then the 8139too driver is compiled as a module. If it's CONFIG_8139TOO=y, then the driver is built into the kernel. If it says something along the lines of # CONFIG_8139TOO is not set, then the driver has not been compiled at all.

2. Your ethernet card doesn't use the RTL8139 chip, so its driver isn't loaded. You must find your intended ethernet port's driver and unload that one instead. If you have lshw, say sudo lshw | less and look for eth0: the driver module will be listed. If you have systool, try sudo systool -c net -A uevent eth0 and look for the DRIVER= part. The right hand side should show the driver loaded to handle the device. dmesg | grep eth0 may also work, but it's not 100% reliable, especially if your system has been on for a while (if there's a /var/log/dmesg, you may want to grep eth0 /var/log/dmesg too).
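Not from the answer above, but one more standard way to find the driver bound to an interface, using only /sys (no lshw or systool needed):

```shell
# For every network interface, print the kernel driver bound to it.
# Purely virtual interfaces (e.g. lo) have no device/driver link and
# are skipped.
for iface in /sys/class/net/*; do
    if [ -e "$iface/device/driver" ]; then
        printf '%s: %s\n' "${iface##*/}" \
            "$(basename "$(readlink "$iface/device/driver")")"
    fi
done
```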
From here: http://www.xenomai.org/index.php/RTnet:Installation_%26_Testing#Debugging_RTnet

The Linux driver for the real-time network device was built into the kernel and blocks the hardware.

When I execute rmmod 8139too it says the module does not exist in /proc/modules. Kernel is 2.6.38.8 (64 bit). What other information should I provide for the question?

linux-y3pi:~ # uname -a
Linux linux-y3pi 2.6.38.8-12-desktop #2 SMP PREEMPT Fri Jun 1 17:27:16 IST 2012 x86_64 x86_64 x86_64 GNU/Linux

linux-y3pi:~ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:24:8C:D9:D6:2E
          inet addr:192.168.16.86  Bcast:192.168.16.255  Mask:255.255.255.0
          inet6 addr: fe80::224:8cff:fed9:d62e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:414 errors:0 dropped:0 overruns:0 frame:0
          TX packets:261 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:118971 (116.1 Kb)  TX bytes:35156 (34.3 Kb)
          Interrupt:17 Base address:0x4000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:68 errors:0 dropped:0 overruns:0 frame:0
          TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4720 (4.6 Kb)  TX bytes:4720 (4.6 Kb)

linux-y3pi:~ # ethtool -i eth0
driver: r8169
version: 2.3LK-NAPI
firmware-version:
bus-info: 0000:01:00.0

linux-y3pi:~ # rmmod r8169

linux-y3pi:~ # ethtool eth0
Settings for eth0:
Cannot get device settings: No such device
Cannot get wake-on-lan settings: No such device
Cannot get message level: No such device
Cannot get link status: No such device
No data available

linux-y3pi:~ # lsmod | grep 8169
linux-y3pi:~ # lsmod | grep 8139

.config from /usr/src/linux-2.6.38.8:

CONFIG_R8169=m
CONFIG_R8169_VLAN=y
CONFIG_8139CP=m
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
CONFIG_8139TOO_8129=y
# CONFIG_8139_OLD_RX_RESET is not set
How to know whether the Linux driver for the real-time network device was built into the kernel?
This sounds very much like the approach taken by RTLinux, which still seems to be available but not commercially supported. That being said, there's a community unto itself about real-time Linux concepts, and the CONFIG_PREEMPT_RT patch would seem to enable the functionality you're looking for. As with all kernel hacking, do so at your own risk. There's a HOWTO available to help you get started.
With the real-time executive approach, a small real-time kernel coexists with the Linux kernel. This real-time core uses a simple real-time executive that runs the non-real-time Linux kernel as its lowest priority task and routes interrupts to the Linux kernel through a virtual interrupt layer. All interrupts are initially handled by the core and are passed to standard Linux only when there are no real-time tasks to run. Real-time applications are loaded in kernel space and receive interrupts immediately, giving near hardware speeds for interrupt processing. I wonder how to test this in ordinary desktop Linux, e.g. Ubuntu? If it's even possible?
Real-time executive approach: can it be run in desktop Linux?
If on Linux, something like this should do what you are looking for:

inotifywait -m -e close_write --format %w%f -r /watch/dir |
while IFS= read -r file
do
    cat < "$file"
done
Unlike the answer to this question (Can a bash script be hooked to a file?) I want to be able to see content of files that haven't been created yet as or after they are created. I don't know when they will be created or what they will be named. That solution is for a specific file and actually mentions in the question title creating a "hook" to a specific file. My question is different because I don't want to hook anything, and what I do want is not specific to a particular file. My question's title specifies "..as they are created" which should be a clue that the files I am interested in do not exist yet. I have an application that users use to submit information from a website. My code creates output files when the user is finished. I want to be able to see the content of these files as they are created, similar to the way tail -f works, but I don't know ahead of time what the filenames will be. Is there a way to cat files as they are created or would I have to somehow create an endless loop that uses find with the -newermt flag Something like this is the best I can come up with so far: #!/bin/bash # news.shwhile true do d=$(date +"%T Today") sleep 10 find . -newermt "$d" -exec head {} + doneFor clarification, I don't necessarily need to tail the files. Once they are created and closed, they will not be re-opened. Existing files will never change and get a new modification time, and so I am not interested in them.
Is there a way to cat files as they are created? [duplicate]
The answer seems to have been to put the following in my init.d script, just before the start-stop-daemon calls in do_start:

ulimit -r ##    # where ## is a sufficiently high number; 99 works

The way I was able to determine this was by making a system call to ulimit -a from inside my code:

bash -c "ulimit -a"

The bash part is necessary, because ulimit -a is a shell builtin; ulimit -a under /bin/sh returns different information, unrelated to real-time priority.

For some reason, I found that my real-time priority was limited to 0 (no real-time priority) when my service was started at boot. When I run it with service or by calling the init.d script, it inherits my permissions, which allow real-time priority. But when the system calls it through the Upstart/SystemV backwards-compatibility system, it doesn't get that elevated privilege. I suppose this might relate to posts I have seen saying that Upstart doesn't read /etc/security/limits.conf, which is where you would set system-wide real-time priority permissions for non-privileged users.

If anyone can verify or explain why this solution works, I would love to hear it.
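For comparison, the limits.conf route (which, as noted, Upstart-launched jobs may ignore) would be a configuration entry along these lines; the username and the value 99 are illustrative:

```
# /etc/security/limits.conf -- applied via PAM to login sessions
a_user_on_my_system   -   rtprio   99
```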
I have a piece of C++ code that runs just fine when I run it from a Linux terminal, but which throws an EPERM error when run from a SystemV (init.d) script on bootup. The error comes from a pthread_create with the following bit of attributes assigned to the thread attempting to be created:

pthread_t reading_thread;

pthread_attr_t read_attr;
struct sched_param read_param;
pthread_attr_init(&read_attr);
pthread_attr_setschedpolicy(&read_attr, SCHED_FIFO);
pthread_attr_setinheritsched(&read_attr, PTHREAD_EXPLICIT_SCHED);
read_param.sched_priority = 30;
pthread_attr_setschedparam(&read_attr, &read_param);

k = pthread_create(&reading_thread, &read_attr, Reading_Thread_Function,
                   (void*) &variable_to_pass_to_Reading_Thread_Function); // Will return EPERM

This code works just fine when run from my terminal. It also runs just fine in the init.d script when I call "/etc/init.d/myinitdscript start". It also runs fine as "sudo service myinitdscript start". The init.d script contains the following:

#! /bin/sh
### BEGIN INIT INFO
# Provides:          myinitdscript
# Required-Start:    $local_fs $remote_fs $syslog $network
# Required-Stop:     $local_fs $remote_fs $syslog $network
# Default-Start:     3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts my daemon
# Description:       Verbose explanation of starting my daemon
### END INIT INFO

PATH=/sbin:/usr/sbin:/bin:/usr/bin
LOG=/home/someusershome/initd.log
NAME=myinitdscript
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
[ -x "$DAEMON" ] || (echo "$DAEMON not found. Exiting $SCRIPTNAME." >> $LOG 2>&1 && exit 0)
USERTORUNAS=a_user_on_my_system
SOURCE_SCRIPT=/home/$USERTORUNAS/source_script
DAEMON_ARGS="some_args_for_script"

. /lib/init/vars.sh
. /lib/lsb/init-functions

# Source this script for environmental variables
[ -f $SOURCE_SCRIPT ] && . $SOURCE_SCRIPT

# This is called when called with 'start', I am skipping that for succinctness
do_start() {
    start-stop-daemon --start --make-pidfile --pidfile $PIDFILE --test --background --chuid $USERTORUNAS --startas /bin/bash -- -c "exec $DAEMON -- $DAEMON_ARGS >> $LOG 2>&1 " || return 1
    start-stop-daemon --start --make-pidfile --pidfile $PIDFILE --background --chuid $USERTORUNAS --startas /bin/bash -- -c "exec $DAEMON -- $DAEMON_ARGS >> $LOG 2>&1" || return 2
}

If I activate this init.d script using:

update-rc.d myinitdscript defaults 99

it will error on boot with an EPERM error thrown at the pthread_create call (k = 1, aka EPERM). I can run this using sudo service myinitdscript start, and it will run just fine. I can also call /etc/init.d/myinitdscript start, and it will run just fine. It is only when I let the system run this script on boot that it fails.

I find that if I add the option "-P fifo:99" to my start-stop-daemon calls, I don't get the EPERM error and the code runs okay, except at too high a priority, so I won't call this a fix. The only part of the code that needs to run real-time is that pthread created from within the code. So I suppose this has to do with my permissions to create a real-time thread with priority 30 from within a normally scheduled thread.

Why does my script need special scheduling policies/priorities when run from boot versus when I manually start the init.d script or run it through service?

EDIT: Running on Ubuntu 12.04.

EDIT2: I tried adding a call to "ulimit -r" inside my code, which the start-stop-daemon call starts, and I get unlimited, so as far as I can see there shouldn't be any permissions issue with SCHED_FIFO:30 there.

EDIT3: It turns out I am running Upstart, and Upstart has an init script called rc-sysinit.conf which starts all the SystemV-style scripts. So perhaps Upstart is screwing up my permissions.
realtime pthread created from non-realtime thread with init.d
To start/set a process as real-time, you should use chrt.

Usage to start a new process:

chrt [options] priority command [arguments...]

Usage to set a running process:

chrt [options] -p priority PID

Example:

sudo chrt -r 70 <your command>

or

<your command> &
sudo chrt -r -p 70 $!
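As a small addition, chrt can also print the valid priority range for each policy on your system, which helps when choosing a number such as 70 above:

```shell
# Needs no privileges; lists min/max static priority per policy
chrt -m
```

On a typical Linux box SCHED_FIFO and SCHED_RR report 1/99, while SCHED_OTHER is fixed at 0/0.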
I am looking for a way to start a real-time process, or to set an already-running process as a real-time process.
How to start a realtime process?
Please enable EXPERT mode after launching make nconfig/menuconfig. Then you'll be able to select the Fully Preemptible Kernel (RT) option.
I am trying to compile the 5.4 kernel with the latest stable PREEMPT_RT patch (5.4.28-rt19) but for some reason can't select the Fully Preemptible Kernel (RT) option inside make nconfig/menuconfig. I've compiled the 4.19 rt patch before, and it was as simple as copying the current config (/boot/config-4.18-xxx) to the new .config, and the option would show. Now I only see:

No Forced Preemption (Server)
Voluntary Kernel Preemption (Desktop)
Preemptible Kernel (Low-Latency Desktop)

And if I press F4 to "ShowAll", I do see the option:

XXX Fully Preemptible Kernel (Real-Time)

But cannot select it. I've tried manually setting it in .config with various PREEMPT options like:

CONFIG_PREEMPT=y
CONFIG_PREEMPT_RT_BASE=y
CONFIG_PREEMPT_RT_FULL=y

But it never shows. I just went ahead and compiled it with CONFIG_PREEMPT_RT_FULL=y (which is overwritten when saving from make nconfig), but it seems it's still not the fully preemptible kernel that is installed. With 4.19, uname -a would show something like:

Linux 4.19.106-rt45 #2 SMP PREEMPT RT <date>

but now it will just say:

Linux 5.4.28-rt19 #2 <date>

Anyone know what I'm missing here?

OS: CentOS 8.1.1911
Kernel: 4.18.0-147.8.1 -> 5.4.28-rt19
Trouble selecting "Fully Preemptible Kernel (Real-Time)" when configuring/compiling from source
That's what the faketime command is designed for. For instance:

$ time faketime -f '+0 x10' sh -c 'date +%T; sleep 10; date +%T'
13:29:02
13:29:12
faketime -f '+0 x10' sh -c 'date +%T; sleep 10; date +%T'  0.00s user 0.00s system 0% cpu 1.009 total

That started a shell with the clock going 10 times as fast as normal (the sleep 10 slept for 1 second). The clock can be slowed down by using a fractional multiplier (like 0.5 or 0,5, depending on your locale) for every faked second to last 2 real seconds. See for instance:

faketime -f '+0 x0.5' watch date

That works by injecting code into executables with LD_PRELOAD, so it won't work for statically linked applications, setuid/setgid executables, applications that execute commands in sanitized environments or in user namespaces, or ones which obtain time information or sleep without using the libc system call wrappers.
I would like to change the perception of real time for one process. Making the process believe that time is passing at 50% or 150% of the speed my system/kernel/hardware-clock thinks. I'd like to find a generic solution that can be used with any program, without having to patch the program's source code. Is there any tool to achieve what I need?
Change perception of real time for one process
The manpage is correct. It should not be hard to find confirmation of this.

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/sched/deadline.h#n5?h=v4.10

/*
 * SCHED_DEADLINE tasks has negative priorities, reflecting
 * the fact that any of them has higher prio than RT and
 * NORMAL/BATCH tasks.
 */

#define MAX_DL_PRIO 0

This was the approach chosen by the maintainer of the Linux scheduler. I have quoted LWN's explanation below, although you should aspire to read the whole of any LWN article which becomes relevant to your interests. As these are finite in length, I can't guarantee they resolve every specific confusion you might have.

https://lwn.net/Articles/356576/

Peter Zijlstra, though, thinks that deadline scheduling should run at the highest priority; otherwise it cannot ensure that the deadlines will be met.

I'm a bit confused since I thought rtprio 99 would be the highest priority in the system (watchdogs, migration, posixcputimer...)

LWN links to Peter's initial review, which mentions this.

The only two tasks that need to be the absolute highest priority are kstopmachine and migrate, so those would need to be lifted above EDF, the rest is not important :-)

I don't know exactly what migrate would be in this context, but the LWN article does call out SMP realtime as being a challenge. stopmachine is on the list that says "so don't do that for RT!", for this reason. Peter makes this explicit later on. Watchdogs surely operate on greater timescales than realtime processes, and deadline scheduling will leave time for them to run later (see below).

Ironically, I'm struggling to find information on the behaviour of timers in the real-time kernel. There's an RT wiki which mentions this for priorities, but not deadlines... note the page was last edited in 2008 and specifies a test machine with a PIII 400 MHz CPU. It's also interesting that Peter didn't mention timers in the initial review.
It does look like RT processes have been encouraged to use clock_nanosleep() if possible. (Clearly this would have little or no utility for the CPUTIME clocks, which might be what you're referring to).

The deadline scheduler has the potential to guarantee that deadlines will be met, provided processes do not exceed their specified Worst Case Execution Time. The priority scheduler does not have this feature. The maintainers favoured this guarantee, rather than making it conditional on whether a SCHED_FIFO process is present. Deadline scheduling without the guarantee would be a rather different beast... whether or not it would have some utility remaining, I don't really know.

Deadline scheduled processes have a maximum bandwidth, which is enforced by the Linux scheduler. In principle, it should be possible to look at the total bandwidth of deadline scheduled processes, as well as the largest execution period, and determine the worst-case effect on any priority scheduled processes. The reverse would not be true, because there is no enforcement of WCET for the POSIX scheduling class SCHED_FIFO.

A deadline system does away with static priorities. Instead, each running task provides a set of three scheduling parameters:

- A deadline - when the work must be completed.
- An execution period - how often the work must be performed.
- The worst-case execution time (WCET) - the maximum amount of CPU time which will be required to get the work done.

The "bandwidth" requirement of a process - what percentage of a CPU it needs - is easily calculated, so the scheduler knows at the outset whether the system is oversubscribed or not. The scheduler can (and should) refuse to accept tasks which would require more bandwidth than the system has available. By refusing excess work, the scheduler will always be able to provide the requisite CPU time to every process within the specified deadline. That kind of promise makes realtime developers happy.
As stated in the man pages: A SCHED_FIFO thread runs until either it is blocked by an I/O request, it is preempted by a higher priority thread, or it calls sched_yield(2).From the same source: SCHED_DEADLINE threads are the highest priority (user controllable) threads in the system; if any SCHED_DEADLINE thread is runnable, it will preempt any thread scheduled under one of the other policies.Does that mean even threads with rtprio 99 will be preempted by a SCHED_DEADLINE thread? It's kinda directly stated there, but I'm a bit confused since I thought rtprio 99 would be the highest priority in the system (watchdogs, migration, posixcputimer...). I'd be interested to know this for both the standard kernel and the rt_patched one. Thanks everyone.
Can SCHED_FIFO be preempted by SCHED_DEADLINE?